
get_fsdp_auto_wrap_policy() — PyTorch Function Reference

Architecture documentation for the get_fsdp_auto_wrap_policy() method in benchmarks/dynamo/common.py from the PyTorch codebase.


Dependency Diagram

graph TD
  get_fsdp_auto_wrap_policy["get_fsdp_auto_wrap_policy()"]
  deepcopy_and_maybe_parallelize["deepcopy_and_maybe_parallelize()"]
  deepcopy_and_maybe_parallelize -->|calls| get_fsdp_auto_wrap_policy
  style get_fsdp_auto_wrap_policy fill:#6366f1,stroke:#818cf8,color:#fff


Source Code

benchmarks/dynamo/common.py lines 2096–2119

    def get_fsdp_auto_wrap_policy(self, model_name: str):
        from diffusers.models.transformer_2d import Transformer2DModel
        from torchbenchmark.models.nanogpt.model import Block
        from transformers.models.llama.modeling_llama import LlamaDecoderLayer

        from torch.distributed.fsdp.wrap import (
            ModuleWrapPolicy,
            size_based_auto_wrap_policy,
        )

        # handcrafted wrap policy
        MODEL_FSDP_WRAP = {
            "stable_diffusion_unet": (Transformer2DModel,),
            "llama_v2_7b_16h": (LlamaDecoderLayer,),
            "nanogpt": (Block,),
        }

        if model_name not in MODEL_FSDP_WRAP:
            # default to using wrap policy based on module size
            return functools.partial(
                size_based_auto_wrap_policy, recurse=True, min_num_params=int(1e5)
            )

        return ModuleWrapPolicy(MODEL_FSDP_WRAP[model_name])
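The dispatch pattern above — a handcrafted per-model lookup table with a size-based fallback — can be sketched in plain Python without importing torch. In this sketch, `size_based_auto_wrap_policy` is a hypothetical stand-in mimicking the callable signature of the real FSDP helper, and the string entries in `MODEL_FSDP_WRAP` stand in for the real layer classes; neither is the actual FSDP API.

```python
import functools

# Stand-in for torch.distributed.fsdp.wrap.size_based_auto_wrap_policy:
# wrap a module once its unwrapped parameter count reaches the threshold.
def size_based_auto_wrap_policy(module, recurse, nonwrapped_numel, min_num_params):
    return recurse or nonwrapped_numel >= min_num_params

# Handcrafted wrap table; strings stand in for the real layer classes
# (Transformer2DModel, LlamaDecoderLayer, Block).
MODEL_FSDP_WRAP = {
    "nanogpt": ("Block",),
}

def get_fsdp_auto_wrap_policy(model_name):
    if model_name not in MODEL_FSDP_WRAP:
        # Unknown model: fall back to the size-based policy with a
        # 100k-parameter threshold, matching the snippet above.
        return functools.partial(
            size_based_auto_wrap_policy, min_num_params=int(1e5)
        )
    # Known model: return the handcrafted per-class policy (wrapped in
    # ModuleWrapPolicy in the real code).
    return MODEL_FSDP_WRAP[model_name]
```

For a model not in the table, the returned partial behaves like a predicate: it answers "wrap this module?" based on its parameter count, so FSDP shards any module holding at least 100,000 parameters.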


Frequently Asked Questions

What does get_fsdp_auto_wrap_policy() do?
It returns an FSDP auto-wrap policy for the given benchmark model. Models with a handcrafted entry in MODEL_FSDP_WRAP (stable_diffusion_unet, llama_v2_7b_16h, nanogpt) get a ModuleWrapPolicy targeting that model's transformer block class; all other models fall back to size_based_auto_wrap_policy, which wraps any module with at least 100,000 parameters.
What calls get_fsdp_auto_wrap_policy()?
get_fsdp_auto_wrap_policy() is called by one function: deepcopy_and_maybe_parallelize.
