
checkSetStorage — PyTorch Architecture

Architecture documentation for the checkSetStorage function template in Resize.h from the PyTorch codebase.

Entity Profile

Source Code

aten/src/ATen/native/Resize.h lines 126–177

template <typename T>
inline void checkSetStorage(Tensor& result, Storage storage, T storage_offset,
                                   ArrayRef<T> size, ArrayRef<T> stride, bool check_offset_in_bounds = true) {
  // FIXME: stride should be optional
  if (stride.data()) {
    TORCH_CHECK(size.size() == stride.size(), "unequal size length (", size.size(),
                                              ") and stride length (", stride.size(), ")");
  }

#ifdef DEBUG
  TORCH_CHECK(size.size() <= INT_MAX, "size length (", size.size(), ") greater than INT_MAX");
#endif

  // storageOffset
  TORCH_CHECK(
    TORCH_GUARD_OR_TRUE(sym_ge(storage_offset, 0)), "Tensor: invalid storage offset ", storage_offset);

  // set_storage_{device} (except set_storage_meta__symint)
  // will (unsafely) set the storage offset and then call resize_impl, which
  // handles resizing the storage. However, resize_impl will only resize the
  // storage if the sizes/strides changed. For the case that the sizes/strides
  // remain unchanged, the storage offset is not properly validated, so we do
  // that here.
  if (check_offset_in_bounds) {
    auto result_tensor_impl = result.unsafeGetTensorImpl();
    bool size_unchanged = result_tensor_impl->generic_sizes<T>() == size;
    bool stride_unchanged = stride.data()
        ? result_tensor_impl->generic_strides<T>() == stride
        : true;
    if (size_unchanged && stride_unchanged) {
      checkInBoundsForStorage(
          size, stride, storage_offset, result.dtype(), storage);
    }
  }

  // storage: note this can't be replaced with result.set_(storage) as the semantics of that
  // function is to set the tensor size to be equal to the size of the storage.
  if (!result.storage().is_alias_of(storage)) {
    // Caffe2 might have tensors whose storages are null, but we
    // don't allow it in PyTorch.
    TORCH_INTERNAL_ASSERT(storage);
    TORCH_INTERNAL_ASSERT(result.storage());

    // We used to allow this, but this breaks device caching.
    // Let's put an actual error message for this one.
    TORCH_CHECK(result.storage().device() == storage.device(),
                "Attempted to set the storage of a tensor on device \"", result.storage().device(),
                "\" to a storage on different device \"", storage.device(),
                "\".  This is no longer allowed; the devices must match.");
    result.unsafeGetTensorImpl()->set_storage_keep_dtype(std::move(storage));
  }
}
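The validation steps above can be sketched in plain Python. This is a hypothetical illustration, not PyTorch code: the names `check_set_storage`, `Storage`, and `contiguous_strides` are assumptions, and the bounds check only approximates what `checkInBoundsForStorage` does (compute the highest byte the view can touch and compare it against the storage size).

```python
# Hypothetical sketch of checkSetStorage's validation order, for illustration
# only. None of these names are real PyTorch APIs.
from dataclasses import dataclass


@dataclass
class Storage:
    device: str   # e.g. "cpu", "cuda:0"
    nbytes: int   # total bytes backing the storage


def contiguous_strides(size):
    # Row-major strides, as PyTorch computes for contiguous tensors.
    strides = [1] * len(size)
    for i in range(len(size) - 2, -1, -1):
        strides[i] = strides[i + 1] * size[i + 1]
    return strides


def check_set_storage(size, stride, storage_offset, itemsize, storage,
                      result_device="cpu"):
    # 1. If strides are given, their length must match the size length.
    if stride is not None and len(size) != len(stride):
        raise ValueError(f"unequal size length ({len(size)}) "
                         f"and stride length ({len(stride)})")
    # 2. The storage offset must be non-negative.
    if storage_offset < 0:
        raise ValueError(f"invalid storage offset {storage_offset}")
    # 3. The view described by (size, stride, offset) must fit in the
    #    storage: find the largest element index the view can reach.
    if stride is None:
        stride = contiguous_strides(size)
    numel = 1
    for s in size:
        numel *= s
    if numel > 0:
        max_index = storage_offset + sum((s - 1) * st
                                         for s, st in zip(size, stride))
        required = (max_index + 1) * itemsize
        if required > storage.nbytes:
            raise ValueError(f"storage of {storage.nbytes} bytes too small "
                             f"for view needing {required} bytes")
    # 4. The tensor's device must match the new storage's device.
    if result_device != storage.device:
        raise ValueError(f"device mismatch: tensor on {result_device!r}, "
                         f"storage on {storage.device!r}")
```

For example, a 2×3 float32 view with strides (3, 1) and offset 0 needs (0 + 1·3 + 2·1 + 1) · 4 = 24 bytes, so it fits exactly in a 24-byte storage, while an offset of -1 or a CUDA storage would be rejected up front, mirroring the TORCH_CHECK failures in the C++ code.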
