QLinearUnpackedDynamicFp16 Class — PyTorch Architecture
Architecture documentation for the QLinearUnpackedDynamicFp16 class in qlinear_dynamic.cpp from the PyTorch codebase. The class is the FBGEMM-backed implementation of a dynamic fp16 linear operator that takes an unpacked weight: `run` prepacks the weight and applies the dynamic linear, `meta` computes only the output shape for tracing/compilation, and builds without FBGEMM fail loudly instead of falling back, so numerics stay identical across machines.
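In essence, the operator stores its weight in fp16 and computes against fp32 activations with fp32 accumulation. The NumPy sketch below is an illustrative model of those numerics only, not the FBGEMM kernel; the function name `dynamic_linear_fp16` and the use of NumPy are assumptions for illustration.

```python
import numpy as np

def dynamic_linear_fp16(x, weight, bias=None):
    """Illustrative model (not the FBGEMM kernel): the weight is stored in
    fp16, so it is rounded to half precision; the matmul accumulates in
    fp32, and activations stay fp32 throughout."""
    w_fp16 = weight.astype(np.float16).astype(np.float32)  # fp16 rounding of the weight
    out = x.astype(np.float32) @ w_fp16.T                  # (..., in) @ (out, in)^T -> (..., out)
    if bias is not None:
        out += bias.astype(np.float32)
    return out

x = np.array([[1.0, 2.0]], dtype=np.float32)
w = np.array([[0.5, 0.25], [1.0, 2.0]], dtype=np.float32)  # 2 out_channels, 2 in_features
print(dynamic_linear_fp16(x, w))  # values exactly representable in fp16 -> exact result [[1.0, 5.0]]
```

Note that only the weight loses precision to fp16 rounding; inputs and outputs remain fp32, which is what distinguishes this dynamic scheme from a fully half-precision linear.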
Entity Profile
Source Code
aten/src/ATen/native/quantized/cpu/qlinear_dynamic.cpp lines 884–947
class QLinearUnpackedDynamicFp16 final {
 public:
#ifdef USE_FBGEMM
  static at::Tensor run(
      at::Tensor input,
      const at::Tensor& weight,
      const std::optional<at::Tensor>& bias) {
    // We make a strong guarantee that models using these operators will have
    // the same numerics across different machines. Therefore, we do not provide
    // a fallback path and rather fail loudly if we cannot run FBGEMM.
    TORCH_CHECK(
        fbgemm::fbgemmSupportedCPU(), "Your CPU doesn't support FBGEMM.");
    TORCH_CHECK(
        weight.dim() == 2,
        "The dimension of weight tensor should be equal to 2");

    auto packed_weight = PackedLinearWeightFp16::prepack(weight, bias);
    auto output = packed_weight->apply_dynamic(std::move(input));
    return output;
  }

  static at::Tensor meta(
      at::Tensor input,
      const at::Tensor& weight,
      const std::optional<at::Tensor>& bias) {
    // We make a strong guarantee that models using these operators will have
    // the same numerics across different machines. Therefore, we do not provide
    // a fallback path and rather fail loudly if we cannot run FBGEMM.
    TORCH_CHECK(
        fbgemm::fbgemmSupportedCPU(), "Your CPU doesn't support FBGEMM.");
    TORCH_CHECK(
        weight.dim() == 2,
        "The dimension of weight tensor should be equal to 2");

    auto out_channel = weight.sym_sizes().vec()[0];
    auto out_sizes = input.sym_sizes().vec();
    out_sizes[out_sizes.size() - 1] = out_channel;
    return at::empty_symint(out_sizes, input.options());
  }
#else // USE_FBGEMM
  static at::Tensor run(
      at::Tensor /* input */,
      const at::Tensor& weight,
      const std::optional<at::Tensor>& bias) {
    // We make a strong guarantee that models using these operators will have
    // the same numerics across different machines. Therefore, we do not provide
    // a fallback path and rather fail loudly if we cannot run FBGEMM.
    TORCH_CHECK(
        false, "This PyTorch installation was not built with FBGEMM operators");
  }

  static at::Tensor meta(
      at::Tensor /* input */,
      const at::Tensor& weight,
      const std::optional<at::Tensor>& bias) {
    TORCH_CHECK(
        false, "This PyTorch installation was not built with FBGEMM operators");
  }
#endif // USE_FBGEMM
};
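The `meta` overload never touches data: it keeps the input's leading dimensions and replaces the last one with the weight's row count (out_channels), returning an empty tensor of that shape. A minimal Python sketch of that shape logic (the helper name is invented for illustration):

```python
def linear_fp16_output_shape(input_shape, weight_shape):
    """Shape logic of QLinearUnpackedDynamicFp16::meta: keep the input's
    leading dims and replace the last one with out_channels, i.e.
    weight_shape[0] for a 2-D (out_channels, in_features) weight."""
    assert len(weight_shape) == 2, \
        "The dimension of weight tensor should be equal to 2"
    out_sizes = list(input_shape)
    out_sizes[-1] = weight_shape[0]  # mirrors: out_sizes[out_sizes.size() - 1] = out_channel
    return out_sizes

print(linear_fp16_output_shape([3, 5, 64], [128, 64]))  # [3, 5, 128]
```

Because only symbolic sizes are read (`sym_sizes` / `empty_symint` in the C++), this path works with dynamic shapes under torch.compile-style tracing without running the FBGEMM kernel.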