Fine-Tuning vs. Prompting: Evaluating the Knowledge Graph Construction with LLMs - Université de Technologie de Belfort-Montbeliard
Conference Paper, 2024


Abstract

This paper explores Text-to-Knowledge Graph (T2KG) construction, assessing Zero-Shot Prompting (ZSP), Few-Shot Prompting (FSP), and Fine-Tuning (FT) methods with Large Language Models (LLMs). Through comprehensive experimentation with Llama2, Mistral, and Starling, we highlight the strengths of FT, emphasize the role of dataset size, and introduce nuanced evaluation metrics. Promising perspectives include synonym-aware metric refinement and data augmentation with LLMs. The study contributes valuable insights to KG construction methodologies, setting the stage for further advancements.
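To make the contrast between the two prompting regimes concrete, here is a minimal sketch of how Zero-Shot and Few-Shot prompts for triple extraction might be assembled. The prompt wording, example texts, and function names are illustrative assumptions, not the paper's actual prompts.

```python
# Hypothetical sketch: building ZSP vs. FSP prompts for T2KG extraction.
# The instruction text and demonstrations below are invented for illustration.

def zero_shot_prompt(text: str) -> str:
    """Ask the LLM for (subject, relation, object) triples with no examples."""
    return (
        "Extract knowledge graph triples as (subject, relation, object) "
        f"from the following text:\nText: {text}\nTriples:"
    )

def few_shot_prompt(text: str, examples: list[tuple[str, str]]) -> str:
    """Prepend worked text-to-triples demonstrations before the target text."""
    demos = "\n".join(f"Text: {t}\nTriples: {tr}" for t, tr in examples)
    return (
        "Extract knowledge graph triples as (subject, relation, object).\n"
        f"{demos}\nText: {text}\nTriples:"
    )

# Usage: the same target sentence under both regimes.
examples = [
    ("Marie Curie won the Nobel Prize.", "(Marie Curie, won, Nobel Prize)"),
]
target = "Paris is the capital of France."
print(zero_shot_prompt(target))
print(few_shot_prompt(target, examples))
```

Fine-tuning, by contrast, would bake such text-to-triples pairs into the model's weights via supervised training rather than placing them in the prompt at inference time.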

Dates and versions

hal-04862235 , version 1 (08-01-2025)

Identifiers

  • HAL Id : hal-04862235 , version 1

Cite

Hussam Ghanem, Christophe Cruz. Fine-Tuning vs. Prompting: Evaluating the Knowledge Graph Construction with LLMs. 3rd International Workshop on Knowledge Graph Generation from Text (Text2KG), co-located with the Extended Semantic Web Conference (ESWC 2024), May 2024, Hersonissos, Greece. pp. 7. ⟨hal-04862235⟩