What is Fine-Tuning?
Additional training of a pre-trained LLM on domain-specific data to specialize its behavior.
Definition
Fine-tuning is the process of continuing to train a pre-trained language model on a smaller, domain-specific dataset to adapt its behavior for a particular task or domain. Fine-tuned models are better at specific formats, styles, or knowledge domains. However, fine-tuning is expensive, requires training data, and the resulting knowledge can become stale — making RAG often preferable for knowledge that changes over time.
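The core mechanic — starting from existing (pre-trained) weights and continuing gradient updates on a small domain-specific dataset — can be sketched with a toy model in plain NumPy. The linear model and synthetic data here are illustrative assumptions standing in for an LLM and its domain corpus:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pre-trained" weights: a tiny linear model standing in for an LLM.
w_pretrained = rng.normal(size=3)

# Small domain-specific dataset (illustrative): inputs X, targets y
# generated from the domain's "true" mapping.
X = rng.normal(size=(32, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

def mse(w):
    """Mean squared error of the model on the domain data."""
    return float(np.mean((X @ w - y) ** 2))

# Fine-tuning = continued training: start from the pre-trained weights,
# then run a few gradient-descent steps on the domain data only.
w = w_pretrained.copy()
lr = 0.05
for _ in range(200):
    grad = 2 * X.T @ (X @ w - y) / len(X)  # gradient of the MSE loss
    w -= lr * grad

loss_before = mse(w_pretrained)  # pre-trained model on domain data
loss_after = mse(w)              # fine-tuned model on domain data
```

After the loop, `loss_after` is far below `loss_before`: the model has specialized to the domain. The same loop, scaled up, is what real fine-tuning does to an LLM's weights — which is also why it is expensive and why the baked-in knowledge goes stale until you retrain.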
Example
A legal AI company fine-tunes GPT-4 on thousands of legal briefs so the model consistently outputs properly formatted legal language. A support team with changing product documentation would use RAG instead of fine-tuning, since RAG allows the knowledge base to update without retraining.
Fine-Tuning vs. Prompt Engineering: What's the Difference?
Prompt engineering is fast and free — you change the input, not the model. Fine-tuning is slow and expensive — you change the model's weights through additional training.
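The two approaches also produce different artifacts. Prompt engineering yields an input string you edit freely at inference time; fine-tuning yields a training dataset that is consumed by a training job. A sketch of both, using the chat-formatted JSONL shape OpenAI's fine-tuning API expects (the legal-drafting content is an illustrative assumption):

```python
import json

# Prompt engineering: adapt behavior by changing the INPUT at inference time.
# No training, no new model — just a different string.
prompt = (
    "You are a contract lawyer. Rewrite the clause below "
    "in formal legal language:\n\n{clause}"
)

# Fine-tuning: adapt behavior by preparing TRAINING DATA and updating weights.
# One chat-formatted training example; a real dataset would hold thousands,
# one JSON object per line of a .jsonl file.
example = {
    "messages": [
        {"role": "system", "content": "You are a contract lawyer."},
        {"role": "user", "content": "Rewrite: 'we can cancel anytime'."},
        {"role": "assistant", "content": (
            "Either party may terminate this Agreement at any time "
            "upon written notice."
        )},
    ]
}
line = json.dumps(example)  # one line of the training JSONL file
```

Changing the prompt takes seconds and costs nothing beyond inference; changing the dataset means assembling examples like the one above and paying for a training run before the new behavior exists.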