PEFT LoraConfig parameters
lora_alpha - scaling factor for the LoRA update (effective scale is lora_alpha / r)
lora_dropout - dropout probability applied to the LoRA layers
r - rank of the low-rank update matrices
bias - which bias parameters to train ("none", "all", or "lora_only")
target_modules - all-linear (apply LoRA to every linear layer of the model)
task_type - CAUSAL_LM (causal language modeling)
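A minimal sketch of how these fields map onto a `peft.LoraConfig` call. The concrete values below are illustrative assumptions, not tuned recommendations; the dict mirrors the keyword arguments `LoraConfig` accepts.

```python
# Sketch: the fields above as keyword arguments for peft.LoraConfig.
# Values are illustrative assumptions, not tuned recommendations.
lora_kwargs = {
    "r": 16,                         # rank of the low-rank update matrices
    "lora_alpha": 32,                # scaling factor; effective scale = lora_alpha / r
    "lora_dropout": 0.05,            # dropout applied to the LoRA layers
    "bias": "none",                  # train no bias params ("none" | "all" | "lora_only")
    "target_modules": "all-linear",  # apply LoRA to every linear layer
    "task_type": "CAUSAL_LM",        # causal language modeling task
}

# With peft installed, this would become:
# from peft import LoraConfig
# config = LoraConfig(**lora_kwargs)

# The LoRA update is scaled by lora_alpha / r before being added to the
# frozen base weights, so alpha and r should be tuned together.
scale = lora_kwargs["lora_alpha"] / lora_kwargs["r"]
print(scale)
```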
How to Fine-Tune LLMs in 2024 with Hugging Face
In this blog post you will learn how to fine-tune LLMs using Hugging Face TRL, Transformers, and Datasets in 2024. We will fine-tune an LLM on a text-to-SQL dataset.
https://www.philschmid.de/fine-tune-llms-in-2024-with-trl
[Paper Review] LoRA: Low-Rank Adaptation of Large Language Models
LoRA paper review (in Korean)
https://kimjy99.github.io/논문리뷰/lora/

Seonglae Cho