
ComputePipelineCache Class — PyTorch Architecture

Architecture documentation for the ComputePipelineCache class, defined in Pipeline.h in the PyTorch codebase. The class is a thread-safe, in-memory cache of Vulkan compute pipelines: entries are keyed by a ComputePipeline::Descriptor, hashed by a custom Hasher, and access to the underlying map is serialized by a mutex so that multiple threads can populate the cache concurrently.

Entity Profile

Source Code

aten/src/ATen/native/vulkan/api/Pipeline.h lines 147–193

class ComputePipelineCache final {
 public:
  explicit ComputePipelineCache(VkDevice device);

  ComputePipelineCache(const ComputePipelineCache&) = delete;
  ComputePipelineCache& operator=(const ComputePipelineCache&) = delete;

  ComputePipelineCache(ComputePipelineCache&&) noexcept;
  ComputePipelineCache& operator=(ComputePipelineCache&&) = delete;

  ~ComputePipelineCache();

  using Key = ComputePipeline::Descriptor;
  using Value = ComputePipeline;

  struct Hasher {
    inline size_t operator()(
        const ComputePipeline::Descriptor& descriptor) const {
      size_t seed = 0;
      seed = utils::hash_combine(
          seed, std::hash<VkPipelineLayout>()(descriptor.pipeline_layout));
      seed = utils::hash_combine(
          seed, std::hash<VkShaderModule>()(descriptor.shader_module));
      seed = utils::hash_combine(
          seed, std::hash<uint32_t>()(descriptor.local_work_group.data[0u]));
      seed = utils::hash_combine(
          seed, std::hash<uint32_t>()(descriptor.local_work_group.data[1u]));
      seed = utils::hash_combine(
          seed, std::hash<uint32_t>()(descriptor.local_work_group.data[2u]));

      return seed;
    }
  };

 private:
  // Multiple threads could potentially be adding entries into the cache, so use
  // a mutex to manage access
  std::mutex cache_mutex_;

  VkDevice device_;
  VkPipelineCache pipeline_cache_;
  std::unordered_map<Key, Value, Hasher> cache_;

 public:
  VkPipeline retrieve(const Key&);
  void purge();
};
