XNNPackConv2dOpContext Class — pytorch Architecture
Architecture documentation for the XNNPackConv2dOpContext class in OpContext.h from the PyTorch codebase.
Entity Profile
Source Code
aten/src/ATen/native/xnnpack/OpContext.h lines 147–193
class XNNPackConv2dOpContext final : public Conv2dOpContext {
 private:
  ContextConv2D op_context_;
  // xnnpack convs use indirection buffer.
  // These buffers need setup at runtime and/or when input
  // dims change. If we are running the same model on multiple
  // threads, this can lead to contention where indirection buffer
  // is being accessed and updated at the same time from two different
  // threads.
  std::mutex xnnp_mutex_;

 public:
  XNNPackConv2dOpContext(
      Tensor&& weight,
      std::optional<Tensor>&& bias,
      std::vector<int64_t>&& padding,
      std::vector<int64_t>&& stride,
      std::vector<int64_t>&& dilation,
      uint64_t groups,
      const std::optional<Scalar>& min,
      const std::optional<Scalar>& max,
      ContextConv2D&& op_context)
      : op_context_(std::move(op_context)) {
    orig_weight_ = std::move(weight);
    orig_bias_ = std::move(bias);
    padding_ = std::move(padding);
    stride_ = std::move(stride);
    dilation_ = std::move(dilation);
    groups_ = groups;
    output_min_ = min;
    output_max_ = max;
    orig_weight_and_bias_freed_ = false;
  }

  Tensor run(const Tensor& input) override;
  void free_orig_weight_and_bias() override;

  static c10::intrusive_ptr<Conv2dOpContext> create_context(
      Tensor&& weight,
      std::optional<Tensor>&& bias,
      std::vector<int64_t>&& padding,
      std::vector<int64_t>&& stride,
      std::vector<int64_t>&& dilation,
      int64_t groups,
      const std::optional<Scalar>& output_min,
      const std::optional<Scalar>& output_max);
};