Fine-Tuning LLMs Or Zero/Few-Shot Prompting for Knowledge Graph Construction?
Abstract
This paper explores Text-to-Knowledge-Graph (T2KG) construction, assessing Zero-Shot Prompting (ZSP), Few-Shot Prompting (FSP), and Fine-Tuning (FT) methods with Large Language Models (LLMs). Through comprehensive experiments with Llama2, Mistral, and Starling, we highlight the strengths of FT, emphasize the role of dataset size, and introduce nuanced evaluation metrics. Promising directions include synonym-aware metric refinement and data augmentation with LLMs. The study contributes valuable insights to KG construction methodologies, setting the stage for further advancements.
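For concreteness, below is a minimal sketch of what ZSP/FSP-based T2KG extraction looks like in practice. The `complete` function, the prompt templates, and the example triple are illustrative placeholders (any of the evaluated models, e.g. Llama2, Mistral, or Starling, could back the call); they are not the paper's actual prompts or pipeline.

```python
import json
import re

# Hypothetical stand-in for an LLM completion call (e.g. a locally hosted
# Llama2, Mistral, or Starling endpoint); wire this to your actual client.
def complete(prompt: str) -> str:
    raise NotImplementedError("replace with a call to your LLM of choice")

# Zero-shot prompt: task instruction only, no worked examples (ZSP).
ZERO_SHOT_TEMPLATE = (
    "Extract knowledge-graph triples from the text below.\n"
    "Return a JSON list of [subject, relation, object] triples.\n\n"
    "Text: {text}\nTriples:"
)

# A single illustrative demonstration, prepended for few-shot prompting (FSP).
FEW_SHOT_EXAMPLE = (
    'Text: Marie Curie was born in Warsaw.\n'
    'Triples: [["Marie Curie", "born_in", "Warsaw"]]\n\n'
)

def extract_triples(text: str, few_shot: bool = False) -> list[list[str]]:
    """Prompt the model and parse its output into a list of triples."""
    prompt = ZERO_SHOT_TEMPLATE.format(text=text)
    if few_shot:
        prompt = FEW_SHOT_EXAMPLE + prompt  # FSP: demonstration + instruction
    raw = complete(prompt)
    # Take the first JSON array in the response; models often add chatter.
    match = re.search(r"\[.*\]", raw, re.DOTALL)
    return json.loads(match.group(0)) if match else []
```

Fine-tuning, by contrast, would bake the instruction-to-triples mapping into the model weights via supervised training pairs, removing the need for in-context demonstrations at inference time.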