get_model_and_inputs() — PyTorch Function Reference
Architecture documentation for the get_model_and_inputs() function in benchmarks/dynamo/huggingface_llm_models.py from the pytorch codebase.
Entity Profile
Dependency Diagram
```mermaid
graph TD
    9c419eaa_0abe_49a5_b127_071313449a6b["get_model_and_inputs()"]
    6c4a7daf_e704_6d7a_bcaa_98900e3a377b["get_model_and_inputs()"]
    6c4a7daf_e704_6d7a_bcaa_98900e3a377b -->|calls| 9c419eaa_0abe_49a5_b127_071313449a6b
    9c419eaa_0abe_49a5_b127_071313449a6b -->|calls| 6c4a7daf_e704_6d7a_bcaa_98900e3a377b
    style 9c419eaa_0abe_49a5_b127_071313449a6b fill:#6366f1,stroke:#818cf8,color:#fff
```
Relationship Graph
Source Code
benchmarks/dynamo/huggingface_llm_models.py lines 41–63
```python
def get_model_and_inputs(model_name, device):
    processor = WhisperProcessor.from_pretrained(model_name)
    model = WhisperForConditionalGeneration.from_pretrained(model_name).to(device)
    model.config.forced_decoder_ids = None
    model.generation_config.do_sample = False
    model.generation_config.temperature = 0.0

    num_samples = int(WhisperBenchmark.DURATION * WhisperBenchmark.SAMPLE_RATE)
    audio = torch.randn(num_samples) * 0.1

    inputs = dict(
        processor(
            audio, sampling_rate=WhisperBenchmark.SAMPLE_RATE, return_tensors="pt"
        )
    )
    inputs["input_features"] = inputs["input_features"].to(device)
    decoder_start_token = model.config.decoder_start_token_id
    inputs["decoder_input_ids"] = torch.tensor(
        [[decoder_start_token]], device=device
    )
    return model, inputs
```
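The synthetic input is sized as duration × sample rate, then filled with low-amplitude noise. A minimal dependency-free sketch of that sizing, assuming DURATION = 30.0 seconds and SAMPLE_RATE = 16000 Hz (typical Whisper values; the actual constants live on WhisperBenchmark in the benchmark file):

```python
import random

DURATION = 30.0      # seconds (assumed value; the real constant is WhisperBenchmark.DURATION)
SAMPLE_RATE = 16000  # Hz (assumed value; Whisper models expect 16 kHz audio)

num_samples = int(DURATION * SAMPLE_RATE)

# Stand-in for `torch.randn(num_samples) * 0.1`: low-amplitude Gaussian noise,
# built with the stdlib here so the sketch needs no torch install.
audio = [random.gauss(0.0, 1.0) * 0.1 for _ in range(num_samples)]

print(num_samples, len(audio))  # 480000 480000
```

Scaling by 0.1 keeps the fake waveform in a realistic amplitude range; the content of the audio does not matter for benchmarking, only its shape.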
Frequently Asked Questions
What does get_model_and_inputs() do?
get_model_and_inputs() is a function in the pytorch codebase. It loads a Whisper processor and WhisperForConditionalGeneration model from Hugging Face, configures greedy (non-sampling) decoding, synthesizes random audio of a fixed duration, and returns the model together with a dict of device-placed inputs (input_features and decoder_input_ids) ready for benchmarking.
What does get_model_and_inputs() call?
get_model_and_inputs() calls one function: a same-named get_model_and_inputs() elsewhere in the codebase (see the dependency diagram above).
What calls get_model_and_inputs()?
get_model_and_inputs() is called by one function: a same-named get_model_and_inputs() elsewhere in the codebase (see the dependency diagram above).
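One plausible reading of the self-referential edge in the call graph is a module-level wrapper that shares its name with per-benchmark static methods and dispatches to them. A hedged sketch of that pattern follows; the class name, registry, and return values here are illustrative only, not taken from the real file:

```python
# Hypothetical illustration of how a "get_model_and_inputs calls
# get_model_and_inputs" edge can arise: a module-level dispatcher and a
# class-level implementation share the same name.
class WhisperBenchmark:
    @staticmethod
    def get_model_and_inputs(model_name, device):
        # The real method builds a Whisper model and synthetic inputs;
        # placeholders stand in for them here.
        return f"model:{model_name}", {"device": device}


BENCHMARK_CLASSES = {"whisper": WhisperBenchmark}  # illustrative registry


def get_model_and_inputs(model_name, device):
    # Module-level wrapper dispatching to the class-level implementation.
    return BENCHMARK_CLASSES["whisper"].get_model_and_inputs(model_name, device)


model, inputs = get_model_and_inputs("openai/whisper-small.en", "cpu")
print(model, inputs)  # model:openai/whisper-small.en {'device': 'cpu'}
```

A static call-graph tool sees both definitions under one symbol name, which would produce exactly the circular "calls / called by" pair reported above.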