Fine-tuning
Fine-tuning is the additional training of an already trained language model on your own data. Unlike RAG, it changes the model itself.
In detail
Fine-tuning fits when you need:
- a very specific style (brand voice, legal German)
- to adapt a smaller model to a specialized task (to save cost)
- a consistent format for structured outputs
Important: fine-tuning does not add new knowledge; that is what RAG is for. We recommend fine-tuning only when RAG plus prompt engineering are not enough.
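If you do decide to fine-tune, the workflow is usually: build a small dataset of example conversations, upload it, and start a training job. Below is a minimal sketch assuming the OpenAI Python SDK (openai >= 1.0); the model name, file path, and example data are placeholders, and other providers or open-weight models follow the same dataset-plus-training-job pattern.

```python
import json
from openai import OpenAI

# A few training examples in the chat JSONL format the fine-tuning API expects.
# In practice you would use at least a few dozen, ideally hundreds, all written
# in the style or output format you want the model to learn.
examples = [
    {
        "messages": [
            {"role": "system", "content": "Answer in the brand voice: short, friendly, no jargon."},
            {"role": "user", "content": "Can I return my order?"},
            {"role": "assistant", "content": "Of course! Send it back within 30 days and we'll refund you."},
        ]
    },
    # ... more examples ...
]

# Write the examples to a JSONL file, one example per line.
with open("train.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex, ensure_ascii=False) + "\n")

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload the training file, then start the fine-tuning job.
training_file = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",  # assumed fine-tunable base model; check current availability
)
print(job.id, job.status)
```

Once the job finishes, the provider returns the name of the fine-tuned model, which you then use in place of the base model in your normal chat requests.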
Related terms
- Context window: The context window is the maximum amount of text (measured in tokens) a language model can process at once, typically 128k to 1M tokens with current models.
- LLM: A Large Language Model is a neural network trained on billions of texts that understands, generates, and transforms language. GPT-4, Claude, and Mistral are examples.
- AI agent: An AI agent is a program built on a language model that completes tasks on its own: it understands a request, plans steps, calls tools, and responds with a result instead of just text.