latency_experiment() — PyTorch Function Reference
Architecture documentation for the latency_experiment() function in benchmarks/dynamo/common.py from the PyTorch codebase.
Dependency Diagram
graph TD
    f5d4c5a3_21f5_4ed1_f582_7d73c454a4d7["latency_experiment()"]
    d0c96460_b5ec_95d1_765a_084b5860c03d["randomize_input()"]
    f5d4c5a3_21f5_4ed1_f582_7d73c454a4d7 -->|calls| d0c96460_b5ec_95d1_765a_084b5860c03d
    cea445e5_003e_b07a_0de9_43b0801fe53a["maybe_mark_step()"]
    f5d4c5a3_21f5_4ed1_f582_7d73c454a4d7 -->|calls| cea445e5_003e_b07a_0de9_43b0801fe53a
    9c8df7bf_0e05_9bbb_5e2f_6c88f28b52d4["timed()"]
    f5d4c5a3_21f5_4ed1_f582_7d73c454a4d7 -->|calls| 9c8df7bf_0e05_9bbb_5e2f_6c88f28b52d4
    style f5d4c5a3_21f5_4ed1_f582_7d73c454a4d7 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
benchmarks/dynamo/common.py lines 895–949
def latency_experiment(args, model_iter_fn, model, example_inputs, mark, **kwargs):
    """
    Measure latency on a specific backend.
    """

    timings = np.zeros((args.repeat,), np.float64)
    # if we randomize the input, we should also check the result is correct
    should_randomize_input = args.randomize_input

    import contextlib

    from torch._inductor.utils import maybe_profile

    @contextlib.contextmanager
    def maybe_mark_profile(*args, **kwargs):
        prof: torch.profiler.profile = kwargs.pop("p", None)
        mark = kwargs.pop("mark", None)
        if prof:
            with torch.profiler.record_function(mark):
                yield
        else:
            yield

    times = args.iterations_per_run

    with maybe_profile(args.export_profiler_trace, **args.profile_details) as p:
        for rep in trange(args.repeat, desc="running benchmark"):
            inputs = (
                randomize_input(copy.deepcopy(example_inputs))
                if should_randomize_input
                else example_inputs
            )
            # need call mark_step to perform the computation
            # on randomize_input. Otherwise the first call using the
            # inputs will incur high penalty then the next one.
            maybe_mark_step(args)

            with maybe_mark_profile(p=p, mark=mark):
                timings[rep], actual_output = timed(
                    model,
                    model_iter_fn,
                    inputs,
                    return_result=True,
                    times=times,
                    collect_outputs=args.collect_outputs,
                )

    if args.export_profiler_trace:
        name = args.profiler_trace_name + "_" + model.name
        if hasattr(args, "rank"):
            name += f"_rank_{args.rank}"
        name += ".json"
        name = os.path.join(torch._dynamo.config.base_dir, name)
        p.export_chrome_trace(name)
    return timings
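The inner maybe_mark_profile helper shows a reusable conditional-context pattern: enter torch.profiler.record_function only when a profiler is active, so the timed region pays no wrapping cost otherwise. Here is a torch-free sketch of the same idea; maybe_wrap and Recorder are illustrative stand-ins, not the actual common.py helpers:

```python
import contextlib

@contextlib.contextmanager
def maybe_wrap(ctx=None):
    # Enter ctx only when one is provided (e.g. a profiler's
    # record_function); otherwise yield directly with no overhead.
    if ctx is not None:
        with ctx:
            yield
    else:
        yield

class Recorder:
    # Stand-in for torch.profiler.record_function in this sketch.
    def __init__(self):
        self.events = []
    def __enter__(self):
        self.events.append("enter")
        return self
    def __exit__(self, *exc):
        self.events.append("exit")
        return False

rec = Recorder()
with maybe_wrap(rec):
    pass  # profiling enabled: region is marked
with maybe_wrap(None):
    pass  # profiling disabled: plain pass-through
print(rec.events)  # → ['enter', 'exit']
```

This keeps a single timing loop serving both the profiled and unprofiled configurations, rather than duplicating the loop body behind an if/else.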
Frequently Asked Questions
What does latency_experiment() do?
latency_experiment() measures per-iteration latency of a model on a given backend. It runs model_iter_fn against the model args.repeat times, optionally randomizing the inputs on each repetition, records each run's wall-clock time via timed(), and returns the samples as a NumPy array. When args.export_profiler_trace is set, it also exports a Chrome profiler trace.
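At its core, the function fills a preallocated array with one latency sample per repetition and returns it for downstream aggregation. A minimal stdlib-only sketch of that measurement loop, assuming simplified stand-ins (timed and latency_sketch here are not the actual helpers):

```python
import statistics
import time

def timed(fn, times=1):
    # Simplified stand-in for the timed() helper: run fn `times` times
    # and return (total wall-clock seconds, last result).
    start = time.perf_counter()
    result = None
    for _ in range(times):
        result = fn()
    return time.perf_counter() - start, result

def latency_sketch(fn, repeat=5, times=1):
    # Hypothetical simplification of latency_experiment(): one latency
    # sample per repetition, no input randomization or profiling.
    return [timed(fn, times)[0] for _ in range(repeat)]

timings = latency_sketch(lambda: sum(range(1000)), repeat=5)
print(len(timings), statistics.median(timings) >= 0)  # → 5 True
```

The real function stores samples in a NumPy float64 array sized args.repeat up front, which avoids list growth inside the measured loop.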
What does latency_experiment() call?
latency_experiment() calls 3 function(s): maybe_mark_step, randomize_input, timed.
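Note that latency_experiment() deep-copies example_inputs before passing them to randomize_input(), so each repetition perturbs a fresh copy and the originals stay reusable. A stdlib-only sketch of that copy-then-perturb pattern (randomize_input_sketch and the dict layout are illustrative, not PyTorch's API):

```python
import copy
import random

def randomize_input_sketch(inputs):
    # Hypothetical stand-in for randomize_input(): mutate a deep copy
    # so the caller's example_inputs survive unchanged across runs.
    fresh = copy.deepcopy(inputs)
    for sample in fresh:
        sample["x"] += random.random()
    return fresh

example_inputs = [{"x": 1.0}, {"x": 2.0}]
randomized = randomize_input_sketch(example_inputs)
print(example_inputs)  # → [{'x': 1.0}, {'x': 2.0}]  (originals untouched)
```

Without the deep copy, in-place perturbation would compound across repetitions and skew later samples.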