
init_optimizer() — pytorch Function Reference

Architecture documentation for the init_optimizer() method defined in benchmarks/dynamo/common.py in the pytorch codebase.

Entity Profile

Dependency Diagram

graph TD
  6c83aab9_f1ee_6751_91aa_682a715a5746["init_optimizer()"]
  79f63331_206c_51dc_bfd1_4fd84b939754["check_accuracy()"]
  79f63331_206c_51dc_bfd1_4fd84b939754 -->|calls| 6c83aab9_f1ee_6751_91aa_682a715a5746
  0f41377a_e71d_fd3c_1974_a0ef9ec1158e["check_tolerance()"]
  0f41377a_e71d_fd3c_1974_a0ef9ec1158e -->|calls| 6c83aab9_f1ee_6751_91aa_682a715a5746
  c52cc8f1_b576_9d50_98d9_34f721215c0e["run_performance_test_non_alternate()"]
  c52cc8f1_b576_9d50_98d9_34f721215c0e -->|calls| 6c83aab9_f1ee_6751_91aa_682a715a5746
  d162fe35_2cc5_7738_ed94_76ad697846ef["run_performance_test()"]
  d162fe35_2cc5_7738_ed94_76ad697846ef -->|calls| 6c83aab9_f1ee_6751_91aa_682a715a5746
  style 6c83aab9_f1ee_6751_91aa_682a715a5746 fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

benchmarks/dynamo/common.py lines 1831–1846

    def init_optimizer(self, name, device, params):
        if device == "cuda" and self.args.training and name not in CI_SKIP_OPTIMIZER:
            if (name in CI_USE_SGD and self.args.ci) or name in BENCHMARK_USE_SGD:
                self.optimizer = torch.optim.SGD(params, lr=0.01, foreach=True)
                # Disable multi_tensor_sgd for benchmarking, there isn't a large performance benefit (~1%) to compiling
                # this optimizer because it is a single foreach add, and increases compile time.
                # After autotuning and fake tensor caching lands, we can enable, because the compile time impact will be lower.
                # Fake Tensor caching: https://github.com/pytorch/pytorch/pull/113873
                # Autotuning: https://github.com/pytorch/pytorch/issues/117447
                self.optimizer.step = torch._dynamo.disable(self.optimizer.step)
            else:
                self.optimizer = torch.optim.Adam(
                    params, lr=0.01, capturable=True, foreach=True
                )
        else:
            self.optimizer = None
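The branching above can be isolated into a small, torch-free sketch. This is a hypothetical simplification for illustration only: the set contents and the `choose_optimizer` helper are invented here, and it returns a label rather than constructing a real `torch.optim` instance.

```python
# Hypothetical sketch of init_optimizer()'s selection logic. The set
# contents below are made up for illustration; the real sets live in
# benchmarks/dynamo/common.py.
CI_SKIP_OPTIMIZER = {"skipped_model"}
CI_USE_SGD = {"ci_sgd_model"}
BENCHMARK_USE_SGD = {"bench_sgd_model"}

def choose_optimizer(name, device, training, ci):
    """Return which optimizer init_optimizer() would pick, or None."""
    if device == "cuda" and training and name not in CI_SKIP_OPTIMIZER:
        if (name in CI_USE_SGD and ci) or name in BENCHMARK_USE_SGD:
            # Real code: foreach SGD, with step() wrapped in
            # torch._dynamo.disable to avoid compiling a single foreach add.
            return "SGD"
        # Real code: Adam(params, lr=0.01, capturable=True, foreach=True)
        return "Adam"
    # Inference runs, non-CUDA devices, and skipped models get no optimizer.
    return None
```

For example, `choose_optimizer("bench_sgd_model", "cuda", True, False)` yields `"SGD"`, while any model on `"cpu"` yields `None`, mirroring the `self.optimizer = None` fallback.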

Frequently Asked Questions

What does init_optimizer() do?
init_optimizer() configures the optimizer used by the dynamo benchmark suite. When training on CUDA and the model is not in CI_SKIP_OPTIMIZER, it selects foreach SGD (with the step wrapped in torch._dynamo.disable) for models listed in CI_USE_SGD (under CI) or BENCHMARK_USE_SGD, and a capturable foreach Adam otherwise; in all other cases it sets self.optimizer to None.
What calls init_optimizer()?
init_optimizer() is called by four functions: check_accuracy, check_tolerance, run_performance_test, and run_performance_test_non_alternate.
