get_peak_memory() — PyTorch Function Reference
Architecture documentation for the get_peak_memory() function in benchmarks/dynamo/common.py from the PyTorch codebase.
Dependency Diagram
graph TD
    7a733239_de08_527b_74a9_4a187c4bb634["get_peak_memory()"]
    c52cc8f1_b576_9d50_98d9_34f721215c0e["run_performance_test_non_alternate()"]
    c52cc8f1_b576_9d50_98d9_34f721215c0e -->|calls| 7a733239_de08_527b_74a9_4a187c4bb634
    d162fe35_2cc5_7738_ed94_76ad697846ef["run_performance_test()"]
    d162fe35_2cc5_7738_ed94_76ad697846ef -->|calls| 7a733239_de08_527b_74a9_4a187c4bb634
    style 7a733239_de08_527b_74a9_4a187c4bb634 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
benchmarks/dynamo/common.py lines 1667–1668
def get_peak_memory():
return torch.cuda.max_memory_allocated() / 10**9
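The helper simply converts the peak allocated byte count reported by torch.cuda.max_memory_allocated() into gigabytes (decimal, 10**9 bytes). A minimal sketch of the measure-around-a-workload pattern it supports is below; torch.cuda is replaced with a stub here so the sketch runs without a GPU, and the stub's method names mirror the real torch.cuda API (reset_peak_memory_stats, max_memory_allocated) while allocate/free are purely illustrative.

```python
class _CudaStub:
    """Stand-in for torch.cuda peak-memory tracking (illustrative only)."""

    def __init__(self):
        self._current = 0
        self._peak = 0

    def reset_peak_memory_stats(self):
        # Real API: torch.cuda.reset_peak_memory_stats()
        self._peak = self._current

    def allocate(self, n_bytes):
        # Hypothetical helper standing in for tensor allocations
        self._current += n_bytes
        self._peak = max(self._peak, self._current)

    def free(self, n_bytes):
        self._current -= n_bytes

    def max_memory_allocated(self):
        # Real API: torch.cuda.max_memory_allocated() -> peak bytes
        return self._peak


cuda = _CudaStub()


def get_peak_memory():
    # Mirrors benchmarks/dynamo/common.py: peak allocated bytes -> GB
    return cuda.max_memory_allocated() / 10**9


cuda.reset_peak_memory_stats()
cuda.allocate(3 * 10**9)   # workload transiently holds 3 GB
cuda.free(10**9)           # then releases 1 GB; the peak is unchanged
print(get_peak_memory())   # 3.0
```

Because max_memory_allocated() tracks a high-water mark, callers such as run_performance_test() can reset the counter before the timed region and read the peak afterward, regardless of when allocations were freed.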
Frequently Asked Questions
What does get_peak_memory() do?
get_peak_memory() is a two-line helper in benchmarks/dynamo/common.py that returns the peak CUDA memory allocated, in gigabytes, by dividing torch.cuda.max_memory_allocated() (which reports bytes) by 10**9.
What calls get_peak_memory()?
get_peak_memory() is called by two functions: run_performance_test() and run_performance_test_non_alternate().