deepcopy_and_maybe_parallelize() — PyTorch Function Reference
Architecture documentation for the deepcopy_and_maybe_parallelize() function in benchmarks/dynamo/common.py from the PyTorch codebase.
Entity Profile
Dependency Diagram
graph TD
    6a0a2015_4bf4_1e7a_daa6_dbf2c23883c7["deepcopy_and_maybe_parallelize()"]
    79f63331_206c_51dc_bfd1_4fd84b939754["check_accuracy()"]
    79f63331_206c_51dc_bfd1_4fd84b939754 -->|calls| 6a0a2015_4bf4_1e7a_daa6_dbf2c23883c7
    c52cc8f1_b576_9d50_98d9_34f721215c0e["run_performance_test_non_alternate()"]
    c52cc8f1_b576_9d50_98d9_34f721215c0e -->|calls| 6a0a2015_4bf4_1e7a_daa6_dbf2c23883c7
    d162fe35_2cc5_7738_ed94_76ad697846ef["run_performance_test()"]
    d162fe35_2cc5_7738_ed94_76ad697846ef -->|calls| 6a0a2015_4bf4_1e7a_daa6_dbf2c23883c7
    4b936914_ae3d_7efa_4e57_180bdf6f9020["deepcopy_model()"]
    6a0a2015_4bf4_1e7a_daa6_dbf2c23883c7 -->|calls| 4b936914_ae3d_7efa_4e57_180bdf6f9020
    3a736ec9_08b0_1fc3_7baa_eb145af83a59["get_fsdp_auto_wrap_policy()"]
    6a0a2015_4bf4_1e7a_daa6_dbf2c23883c7 -->|calls| 3a736ec9_08b0_1fc3_7baa_eb145af83a59
    style 6a0a2015_4bf4_1e7a_daa6_dbf2c23883c7 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
benchmarks/dynamo/common.py lines 2121–2166
def deepcopy_and_maybe_parallelize(self, model):
    model = self.deepcopy_model(model)
    if self.args.ddp:
        if not torch.distributed.is_available():
            raise AssertionError(
                "Can't use DDP without a distributed enabled build"
            )
        from torch.nn.parallel import DistributedDataParallel as DDP

        model = DDP(model, find_unused_parameters=True)
    elif self.args.fsdp:
        if not torch.distributed.is_available():
            raise AssertionError(
                "Can't use FSDP without a distributed enabled build"
            )
        from torch.distributed.fsdp import (
            FullyShardedDataParallel as FSDP,
            MixedPrecision,
        )

        if self.args.float16:
            dtype = torch.float16
        elif self.args.bfloat16:
            dtype = torch.bfloat16
        else:
            dtype = torch.float32

        mp_policy = MixedPrecision(
            param_dtype=dtype,
            # Gradient communication precision.
            reduce_dtype=dtype,
            # Buffer precision.
            buffer_dtype=dtype,
        )

        model = FSDP(
            model,
            use_orig_params=True,
            device_id=torch.cuda.current_device()
            if self.args.devices[-1] == "cuda"
            else None,
            mixed_precision=mp_policy,
            limit_all_gathers=True,
            auto_wrap_policy=self.get_fsdp_auto_wrap_policy(self.args.only),
        )
    return model
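The FSDP branch selects one dtype and applies it uniformly to parameters, gradient reduction, and buffers in the MixedPrecision policy, with float16 taking precedence over bfloat16 and float32 as the fallback. That precedence can be sketched in isolation; select_mixed_precision_dtype is a hypothetical helper (not part of the benchmark harness), and strings stand in for torch dtypes so the sketch runs without torch installed:

```python
from types import SimpleNamespace

def select_mixed_precision_dtype(args):
    # Hypothetical helper mirroring the dtype branch in
    # deepcopy_and_maybe_parallelize(): float16 wins if both flags
    # are set, bfloat16 is next, and full precision is the default.
    # Strings stand in for torch.float16 / torch.bfloat16 / torch.float32.
    if args.float16:
        return "float16"
    elif args.bfloat16:
        return "bfloat16"
    return "float32"

# The chosen dtype would then be passed as param_dtype, reduce_dtype,
# and buffer_dtype of a single MixedPrecision policy.
for f16, bf16, expected in [
    (True, True, "float16"),   # float16 flag takes precedence
    (False, True, "bfloat16"),
    (False, False, "float32"),
]:
    args = SimpleNamespace(float16=f16, bfloat16=bf16)
    assert select_mixed_precision_dtype(args) == expected
```

Using one dtype for all three MixedPrecision fields keeps compute, gradient all-reduce, and buffers in the same precision, which is the simplest policy for benchmarking; FSDP also accepts different dtypes per field when finer control is needed.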
Frequently Asked Questions
What does deepcopy_and_maybe_parallelize() do?
deepcopy_and_maybe_parallelize() deep-copies the given model and, depending on the benchmark arguments, wraps the copy in DistributedDataParallel (--ddp) or FullyShardedDataParallel with a mixed-precision policy (--fsdp); if neither flag is set, it returns the plain copy. Both distributed branches assert that the build has torch.distributed available before wrapping.
What does deepcopy_and_maybe_parallelize() call?
deepcopy_and_maybe_parallelize() calls two functions: deepcopy_model() and get_fsdp_auto_wrap_policy().
What calls deepcopy_and_maybe_parallelize()?
deepcopy_and_maybe_parallelize() is called by three functions: check_accuracy(), run_performance_test(), and run_performance_test_non_alternate().