torch.utils.cpp_extension

Created: 2024 May 9 17:27
Creator: Seonglae Cho
Edited: 2024 May 9 17:30
Refs: CUDA

Write and Use Custom CUDA Extensions for Critical Operations
llm.c (karpathy), updated 2024 Jul 4 12:51

Start by profiling your model to identify the specific operations that are bottlenecks and could benefit from a custom CUDA implementation.
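As a minimal sketch, assuming a hypothetical model and input batch already on the GPU, torch.profiler can rank operations by CUDA time to surface candidates:

import torch
from torch.profiler import profile, ProfilerActivity

# Hypothetical stand-ins; replace with your own model and data.
model = torch.nn.Linear(1024, 1024).cuda()
batch = torch.randn(64, 1024, device="cuda")

with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    model(batch)

# Operations with the largest total CUDA time are candidates for a custom extension.
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))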
__global__ void my_kernel(float *x, float *y, int size) {
    int index = blockIdx.x * blockDim.x + threadIdx.x;
    if (index < size) {
        y[index] = 2 * x[index];  // Example operation: double each element
    }
}
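The kernel alone is not callable from Python; torch.utils.cpp_extension.load compiles the source and exposes whatever functions it binds through pybind11. A minimal launcher sketch, assuming it sits in the same my_kernel.cu below the kernel (the launcher name and block size are illustrative):

#include <torch/extension.h>

// Launches my_kernel over a 1-D grid and returns the result as a new tensor.
torch::Tensor my_kernel_launcher(torch::Tensor x) {
    auto y = torch::empty_like(x);
    int size = x.numel();
    int threads = 256;
    int blocks = (size + threads - 1) / threads;
    my_kernel<<<blocks, threads>>>(x.data_ptr<float>(), y.data_ptr<float>(), size);
    return y;
}

// TORCH_EXTENSION_NAME is defined by cpp_extension at build time.
PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
    m.def("my_kernel", &my_kernel_launcher, "Doubles every element of x on the GPU");
}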
Use the torch.utils.cpp_extension module to create a bridge between your CUDA kernels and your PyTorch code.
from torch.utils.cpp_extension import load
import torch

# JIT-compiles my_kernel.cu on first use and caches the build.
custom_op = load(
    name="custom_op",
    sources=["my_kernel.cu"],
    extra_cuda_cflags=["--expt-relaxed-constexpr"],
)

tensor = torch.randn(1024, device="cuda")
result = custom_op.my_kernel(tensor)
 
Once compiled and loaded, the custom operation can be used directly in your PyTorch models like any other function.
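For example, a sketch of wrapping the loaded op in a module (hypothetical class name, and assuming custom_op is in scope from the snippet above; no backward is defined here, so this path is not differentiable through the custom kernel):

import torch

class DoubleLayer(torch.nn.Module):
    # Assumes `custom_op` was loaded as shown above.
    def forward(self, x):
        return custom_op.my_kernel(x)

layer = DoubleLayer()
x = torch.randn(1024, device="cuda")
out = layer(x)
# Sanity check against the equivalent native PyTorch operation.
assert torch.allclose(out, 2 * x)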