QLinearDynamicFp16 Class — PyTorch Architecture
Architecture documentation for the QLinearDynamicFp16 class in qlinear_dynamic.cpp from the PyTorch codebase.
Entity Profile
Source Code
aten/src/ATen/native/quantized/cpu/qlinear_dynamic.cpp lines 850–882
template <bool ReluFused>
class QLinearDynamicFp16 final {
 public:
#ifdef USE_FBGEMM
  static at::Tensor run(
      at::Tensor input,
      const c10::intrusive_ptr<LinearPackedParamsBase>& packed_weight) {
    // We make a strong guarantee that models using these operators will have
    // the same numerics across different machines. Therefore, we do not
    // provide a fallback path and rather fail loudly if we cannot run FBGEMM.
    TORCH_CHECK(
        fbgemm::fbgemmSupportedCPU(), "Your CPU doesn't support FBGEMM.");

    auto output = packed_weight->apply_dynamic(std::move(input));

    // Call the relu operator here until fp16 linear dynamic in FBGEMM
    // supports it natively.
    if (ReluFused) {
      output.relu_();
    }
    return output;
  }
#else // USE_FBGEMM
  static at::Tensor run(
      at::Tensor /* input */,
      const c10::intrusive_ptr<LinearPackedParamsBase>& /* packed_weight */) {
    // We make a strong guarantee that models using these operators will have
    // the same numerics across different machines. Therefore, we do not
    // provide a fallback path and rather fail loudly if we cannot run FBGEMM.
    TORCH_CHECK(
        false, "This PyTorch installation was not built with FBGEMM operators");
  }
#endif // USE_FBGEMM
};
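On the Python side, this operator backs dynamic fp16 quantization of nn.Linear modules: quantize_dynamic with dtype=torch.float16 swaps each Linear for a dynamically quantized module whose forward ultimately reaches QLinearDynamicFp16::run (and, per the comment above, raises rather than falling back if FBGEMM is unavailable). A minimal usage sketch, assuming a PyTorch build with FBGEMM support; the model and tensor sizes here are illustrative:

```python
import torch
import torch.nn as nn

# A small float model; dynamic fp16 quantization packs the Linear
# weights as fp16 and dispatches to the FBGEMM-backed kernel at inference.
model = nn.Sequential(nn.Linear(16, 8), nn.ReLU())

# dtype=torch.float16 selects the fp16 dynamic path.
qmodel = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.float16)

x = torch.randn(4, 16)
out = qmodel(x)
print(out.shape)  # torch.Size([4, 8])
```

Because the weights are reduced to fp16 while activations stay fp32, this trades a small amount of precision for roughly halved weight storage, with no calibration step required.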