tensor_is_on_xla() — PyTorch Function Reference
Architecture documentation for the tensor_is_on_xla() function in benchmarks/dynamo/common.py from the PyTorch codebase.
Entity Profile
Dependency Diagram
graph TD
    82d098c4_9a2a_eaf4_0bba_655d342fc39e["tensor_is_on_xla()"]
    9c8df7bf_0e05_9bbb_5e2f_6c88f28b52d4["timed()"]
    9c8df7bf_0e05_9bbb_5e2f_6c88f28b52d4 -->|calls| 82d098c4_9a2a_eaf4_0bba_655d342fc39e
    style 82d098c4_9a2a_eaf4_0bba_655d342fc39e fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
benchmarks/dynamo/common.py lines 652–660
def tensor_is_on_xla(tensors):
    def visit(x: torch.Tensor):
        nonlocal result
        if x.device.type == "xla":
            result = True

    result = False
    tree_map_only(torch.Tensor, visit, tensors)
    return result
Frequently Asked Questions
What does tensor_is_on_xla() do?
tensor_is_on_xla() is a helper in PyTorch's Dynamo benchmark suite (benchmarks/dynamo/common.py). It walks an arbitrarily nested structure (lists, tuples, dicts, etc.) with tree_map_only, visiting every torch.Tensor leaf, and returns True if any tensor resides on an XLA device, otherwise False.
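The traversal pattern can be sketched without torch or an XLA backend. The following is a minimal, self-contained analogue: FakeTensor and tree_map_only_sketch are illustrative stand-ins (not part of PyTorch's API) that mimic how tree_map_only applies a visitor to every tensor leaf while a nonlocal flag records whether any leaf matched.

```python
# Hedged sketch: a stand-alone analogue of tensor_is_on_xla().
# FakeTensor and tree_map_only_sketch are hypothetical stand-ins that
# mimic torch.Tensor.device.type and torch.utils._pytree.tree_map_only.
from dataclasses import dataclass


@dataclass
class FakeTensor:
    device_type: str  # stand-in for torch.Tensor.device.type


def tree_map_only_sketch(ty, fn, tree):
    # Recursively walk lists/tuples/dicts, applying fn only to leaves
    # of type ty, mirroring tree_map_only's traversal behavior.
    if isinstance(tree, ty):
        fn(tree)
    elif isinstance(tree, (list, tuple)):
        for item in tree:
            tree_map_only_sketch(ty, fn, item)
    elif isinstance(tree, dict):
        for item in tree.values():
            tree_map_only_sketch(ty, fn, item)


def tensor_is_on_xla(tensors):
    def visit(x):
        nonlocal result
        if x.device_type == "xla":
            result = True  # flag flips once any leaf is on XLA

    result = False
    tree_map_only_sketch(FakeTensor, visit, tensors)
    return result


print(tensor_is_on_xla([FakeTensor("cpu"), {"a": FakeTensor("xla")}]))  # True
print(tensor_is_on_xla((FakeTensor("cpu"), FakeTensor("cuda"))))        # False
```

Note the design choice this illustrates: rather than short-circuiting, the visitor always traverses the whole tree and records the answer in a closed-over flag, which keeps the visitor compatible with a generic map-style traversal that has no early-exit mechanism.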
What calls tensor_is_on_xla()?
tensor_is_on_xla() is called by a single function: timed().