
multi_margin_loss_cpu_kernel Function Template — pytorch Architecture

Architecture documentation for the multi_margin_loss_cpu_kernel function template (a free function, not a class) in LossMultiMargin.cpp from the pytorch codebase.

Entity Profile

Source Code

aten/src/ATen/native/LossMultiMargin.cpp lines 59–97

template <typename scalar_t>
inline void multi_margin_loss_cpu_kernel(
    Tensor& output,
    const scalar_t* input_data,
    const int64_t* target_data,
    const int p,
    scalar_t margin,
    const scalar_t* weight_data,
    const int64_t nframe,
    const int64_t dim,
    const int64_t reduction) {
  using accscalar_t = at::acc_type<scalar_t, false>;

  // dim() != 0 check is for 1d input which produces a scalar output (that
  // cannot be handled by TensorAccessor)
  if (reduction == Reduction::None && output.dim() > 0) {
    auto output_acc = output.accessor<scalar_t, 1>();
    for (const auto t : c10::irange(nframe)) {
      const auto idx = target_index_checked(target_data, t, dim);
      auto sum = multi_margin_inner_sum_cpu(
          input_data, weight_data, p, margin, dim, idx);
      output_acc[t] = sum;
      input_data += dim;
    }
  } else {
    accscalar_t sum = 0;
    auto output_acc = output.data_ptr<scalar_t>();
    for (const auto t : c10::irange(nframe)) {
      const auto idx = target_index_checked(target_data, t, dim);
      sum += multi_margin_inner_sum_cpu(
          input_data, weight_data, p, margin, dim, idx);
      input_data += dim;
    }
    if (reduction == Reduction::Mean) {
      sum /= nframe;
    }
    output_acc[0] = sum;
  }
}
