A Comparative Study of Fine-Tuning, Retrieval-Augmented Generation and Hybrid Approaches for Large Language Models

Aditya Dinesh K, Sanjay D K, Dr. Rakesh Kumar

Published: October 30, 2025

Large Language Models (LLMs) are typically adapted to domain-specific tasks either by fine-tuning or by Retrieval-Augmented Generation (RAG), but empirical comparisons of these approaches within a unified framework remain limited. This study evaluates three adaptation strategies, fine-tuning alone, RAG alone, and a hybrid of the two, using the same base model, data, and evaluation configuration. Performance is measured by cosine-similarity scoring against reference answers generated by a higher-capacity evaluator model, supplemented by a qualitative error analysis and a discussion of practical implementation issues. The findings indicate that fine-tuning substantially improves domain alignment, RAG strengthens factual grounding, and the hybrid approach is the most stable and accurate.
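The cosine-similarity scoring described above can be sketched as follows. This is a minimal illustration, not the paper's actual evaluation pipeline: the toy vectors stand in for embeddings that would, in practice, come from a sentence-encoder applied to the candidate answer and the evaluator model's reference answer.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors: a.b / (|a| |b|)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings for illustration only; real scores would use
# encoder outputs for the reference and candidate answers.
reference = np.array([0.20, 0.70, 0.10])   # evaluator model's reference answer
candidate = np.array([0.25, 0.65, 0.12])   # adapted LLM's answer

score = cosine_similarity(reference, candidate)  # close to 1.0 for similar answers
```

A higher score indicates the adapted model's answer is semantically closer to the reference, which is what allows the three adaptation strategies to be ranked on a common scale.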

Keywords: Large Language Models, Fine-Tuning, RAG, QLoRA, Machine Learning