4-bit and 8-bit quantization functions for PyTorch
BNB is useful for fine-tuning
Quantization
https://huggingface.co/docs/transformers/main/en/quantization?bnb=4-bit#bitsandbytes
Making LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA
https://huggingface.co/blog/4bit-transformers-bitsandbytes
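A minimal sketch of how 4-bit loading and QLoRA-style fine-tuning are typically wired up with transformers, bitsandbytes, and peft; the model id, LoRA hyperparameters, and target module names below are illustrative assumptions, not taken from the linked pages.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# NF4 4-bit quantization with double quantization, as used in QLoRA
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # weights stay 4-bit, matmuls run in bf16
)

model_id = "facebook/opt-350m"  # placeholder model id; any causal LM on the Hub works
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # dispatch layers across available devices
)

# Optional QLoRA-style setup: freeze the 4-bit base, train small LoRA adapters
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # module names depend on the architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)  # only the LoRA weights receive gradients
```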

Seonglae Cho