Texonom
/
Science
/Mathematics/Math Field/Statistics/Statistical Model/Model Generalization/Model Training/Fine Tuning/PEFT/
PEQA

PEQA

Creator
Seonglae Cho
Created
2023 Jul 6 19:12
Editor
Seonglae Cho
Edited
2023 Jul 6 19:15
Refs

Parameter Efficient Quantization-aware Adaptation

Enables fine-tuning that occupies far less memory than LoRA
The resulting model is in 3/4-bit weight-only uniform quantized form
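The core idea can be sketched in a few lines. This is a minimal, illustrative PyTorch sketch (class and function names are hypothetical, not from the paper's code): the weights are uniformly quantized to low-bit integers once and frozen, and only the per-channel quantization scales remain trainable, which is why the memory footprint is much smaller than LoRA's adapter matrices.

```python
import torch

def quantize_weight(w: torch.Tensor, bits: int = 4):
    # Symmetric per-output-channel uniform quantization (illustrative)
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().amax(dim=1, keepdim=True) / qmax      # shape (out, 1)
    q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax)
    return q, scale

class PEQALinear(torch.nn.Module):
    """Hypothetical PEQA-style linear layer: frozen integer weights,
    trainable per-channel scales."""
    def __init__(self, weight: torch.Tensor, bits: int = 4):
        super().__init__()
        q, scale = quantize_weight(weight, bits)
        self.register_buffer("q", q)             # frozen integer weights
        self.scale = torch.nn.Parameter(scale)   # the only trainable tensor

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Dequantize on the fly: W_hat = q * scale
        return x @ (self.q * self.scale).t()

layer = PEQALinear(torch.randn(8, 16), bits=4)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 8 trainable scales vs. 128 frozen integer weights
```

After fine-tuning, the integer weights and updated scales together already form a weight-only uniformly quantized model, so no separate post-training quantization step is needed.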
Memory-Efficient Fine-Tuning of Compressed Large Language Models...
Parameter-efficient fine-tuning (PEFT) methods have emerged to mitigate the prohibitive cost of full fine-tuning large language models (LLMs). Nonetheless, the enormous size of LLMs impedes...
https://arxiv.org/abs/2305.14152
Copyright Seonglae Cho