
mkl_result_copy_ Function — pytorch Architecture

Architecture documentation for the mkl_result_copy_ function template in SparseBlasImpl.cpp from the pytorch codebase. It copies the CSR data exported from an MKL sparse matrix handle back into a PyTorch sparse CSR tensor.

Entity Profile

Source Code

aten/src/ATen/native/mkl/SparseBlasImpl.cpp lines 110–154

template <typename scalar_t>
void mkl_result_copy_(const Tensor& input, sparse_matrix_t mkl_desc) {
  sparse_index_base_t indexing = SPARSE_INDEX_BASE_ZERO;
  MKL_INT rows = 0, cols = 0;
  MKL_INT *rows_start = nullptr, *rows_end = nullptr, *columns = nullptr;
  scalar_t* values = nullptr;
  at::mkl::sparse::export_csr(
      mkl_desc,
      &indexing,
      &rows,
      &cols,
      &rows_start,
      &rows_end,
      &columns,
      &values);

  // Resize input using nnz information from MKL
  MKL_INT nnz = rows_end[rows - 1];
  col_indices_and_values_resize_(input, nnz);

  auto crow_indices = input.crow_indices();
  auto col_indices = input.col_indices();
  auto input_values = input.values();

  // NB: When nnz is zero it is possible that input_values.data_ptr<scalar_t> is
  // a nullptr, if input was created via empty. As such we need to check that
  // nnz is not zero to avoid passing nullptr to std::memcpy. We will apply
  // the same precautions to crow_indices.data_ptr<MKL_INT>.
  //
  // Otherwise ASAN will complain.

  if (nnz > 0) {
    // MKL Sparse Inspector-Executor doesn't have a way to provide external
    // buffers, so we have to copy the memory allocated by MKL
    std::memcpy(
        input_values.mutable_data_ptr<scalar_t>(), values, nnz * sizeof(scalar_t));
    std::memcpy(
        col_indices.mutable_data_ptr<MKL_INT>(), columns, nnz * sizeof(MKL_INT));
  }
  if (rows > 0) {
    std::memcpy(
        crow_indices.mutable_data_ptr<MKL_INT>(), rows_start, rows * sizeof(MKL_INT));
  }
  crow_indices.mutable_data_ptr<MKL_INT>()[rows] = nnz;
}
