karpathy / llm.c
LLM training in simple, raw C/CUDA
Alex Krizhevsky's original code from Google Code
[ARCHIVED] Cooperative primitives for CUDA C++. See https://github.com/NVIDIA/cccl
CUDA Kernel Benchmarking Library
Training materials associated with NVIDIA's CUDA Training Series (www.olcf.ornl.gov/cuda-training-series/)
Causal depthwise conv1d in CUDA, with a PyTorch interface (see the conv sketch after this list)
NCCL Tests
FlashInfer: Kernel Library for LLM Serving
cuGraph - RAPIDS Graph Analytics Library
RAFT contains fundamental, widely used algorithms and primitives for machine learning and information retrieval. The algorithms are CUDA-accelerated and serve as building blocks for writing high-performance applications more easily.
🎉 CUDA notes / hand-rolled CUDA kernels for large models / C++ notes, updated sporadically: flash_attn, sgemm, sgemv, warp reduce, block reduce, dot product, elementwise, softmax, layernorm, rmsnorm, hist, etc. (see the warp-reduce sketch after this list)
How to optimize various algorithms in CUDA.
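
The causal depthwise conv1d entry above describes a pattern common in state-space and convolutional LLM layers: each channel is convolved independently, and the output at position t may only read inputs at positions <= t. Below is a minimal sketch of that idea, not the listed repo's implementation; the kernel name, memory layout, and launch shape are assumptions made for illustration.

#include <cuda_runtime.h>
#include <cstdio>

// Hypothetical kernel, for illustration only.
// x, y: (batch, channels, seqlen) contiguous; w: (channels, kernel_size).
__global__ void causal_dw_conv1d(const float* x, const float* w, float* y,
                                 int C, int L, int K) {
    int b = blockIdx.y, c = blockIdx.x;   // one block per (batch, channel)
    for (int t = threadIdx.x; t < L; t += blockDim.x) {
        float acc = 0.0f;
        // Causal: output at t only reads inputs at positions <= t.
        for (int k = 0; k < K; ++k) {
            int src = t - (K - 1) + k;    // last tap aligns with position t
            if (src >= 0) acc += w[c * K + k] * x[(b * C + c) * L + src];
        }
        y[(b * C + c) * L + t] = acc;
    }
}

int main() {
    const int B = 2, C = 4, L = 16, K = 4;
    float *x, *w, *y;
    cudaMallocManaged(&x, B * C * L * sizeof(float));
    cudaMallocManaged(&w, C * K * sizeof(float));
    cudaMallocManaged(&y, B * C * L * sizeof(float));
    for (int i = 0; i < B * C * L; ++i) x[i] = 1.0f;
    for (int i = 0; i < C * K; ++i) w[i] = 0.25f;
    causal_dw_conv1d<<<dim3(C, B), 128>>>(x, w, y, C, L, K);
    cudaDeviceSynchronize();
    // Ramps 0.25, 0.5, 0.75, 1.0 as the causal window fills.
    printf("y[0..3] = %f %f %f %f\n", y[0], y[1], y[2], y[3]);
    return 0;
}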
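
Several entries above, the CUDA notes repo in particular, cover warp reduce and block reduce primitives. The following is a minimal sketch of the standard shuffle-based warp reduction; the function names and driver are illustrative and not taken from any listed repo.

#include <cuda_runtime.h>
#include <cstdio>

// Sum a value across the 32 lanes of a warp using shuffle intrinsics;
// after the loop, lane 0 holds the warp-wide sum.
__device__ float warp_reduce_sum(float val) {
    for (int offset = 16; offset > 0; offset >>= 1)
        val += __shfl_down_sync(0xffffffff, val, offset);
    return val;
}

__global__ void sum_kernel(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    float v = (i < n) ? in[i] : 0.0f;
    v = warp_reduce_sum(v);
    if ((threadIdx.x & 31) == 0)  // one atomicAdd per warp, not per thread
        atomicAdd(out, v);
}

int main() {
    const int n = 1 << 20;
    float *in, *out;
    cudaMallocManaged(&in, n * sizeof(float));
    cudaMallocManaged(&out, sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = 1.0f;
    *out = 0.0f;
    sum_kernel<<<(n + 255) / 256, 256>>>(in, out, n);
    cudaDeviceSynchronize();
    printf("sum = %f (expected %d)\n", *out, n);
    return 0;
}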