
In the context of Large Language Models (LLMs), fine-tuning is a technique that further trains a pre-trained model so it can solve specific tasks or extend its knowledge to a particular domain. The main advantage of this approach is that it requires a smaller dataset, which is faster to build, and the training process is generally more cost-effective than pre-training from scratch. Fine-tuning is also one of the approaches used to mitigate the phenomenon of hallucinations.
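
As a minimal sketch of what this looks like in practice, the example below fine-tunes a small pre-trained causal language model on a domain-specific text file using the Hugging Face Transformers `Trainer`. The model name, file path, and hyperparameters are illustrative assumptions, not a prescribed recipe.

```python
# Minimal supervised fine-tuning sketch with Hugging Face Transformers.
# "gpt2" and "domain_corpus.txt" are assumed placeholders; hyperparameters
# are illustrative, not tuned recommendations.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)
from datasets import load_dataset

model_name = "gpt2"  # assumed small pre-trained model for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Assumed: a small domain-specific corpus stored as a plain-text file.
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

# Collator builds the causal language-modeling labels from the inputs.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="finetuned-model",
    num_train_epochs=3,             # illustrative values
    per_device_train_batch_size=4,
    learning_rate=5e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
)

trainer.train()
trainer.save_model("finetuned-model")
```

Because the starting point is an already pre-trained model, a relatively small, carefully curated corpus like the one assumed above is often enough to adapt its behavior to the target domain.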