torch.compile failed due to modified backward pass #11

@sablin39

Description

RuntimeError: This backward function was compiled with non-empty donated buffers which requires create_graph=False and retain_graph=False. Please keep backward(create_graph=False, retain_graph=False) across all backward() function calls, or set torch._functorch.config.donated_buffer=False to disable donated buffer.

When a model is wrapped with @torch.compile, this error is triggered when using Phantom Gradient, when calculating sradius with power_method, etc.

Though it can be worked around by setting TORCH_COMPILE_DISABLE=1, this is still quite annoying.

When calculating sradius, another warning also shows up:

/home/lynn/miniforge3/envs/flow/lib/python3.12/site-packages/torch/autograd/graph.py:841: UserWarning: Attempting to run cuBLAS, but there was no current CUDA context! Attempting to set the primary context... (Triggered internally at /pytorch/aten/src/ATen/cuda/CublasHandlePool.cpp:270.)
  return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
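This warning usually means a cuBLAS call ran on the autograd engine's worker thread before a primary CUDA context existed on the device. A minimal sketch of a possible mitigation (not confirmed to fix this particular case) is to initialize the context eagerly in the main thread before the first backward():

```python
import torch

# Eagerly create the primary CUDA context so cuBLAS, invoked later
# from the autograd worker thread, finds an existing context instead
# of having to set one up lazily (which emits the UserWarning above).
if torch.cuda.is_available():
    torch.cuda.init()
```

Since the context is created once per process, running this at import time or at the top of the training script should be enough.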
