Chain-of-Refinement: Enhancing Zero-shot/few-shot Knowledge Graph Creation with Chain of Refinement Prompting
|Amirhossein Layegh Kheirabadi <email@example.com>
|Kungliga Tekniska högskolan
|2023-12-01 – 2024-06-01
Knowledge graph construction is pivotal for organizing and structuring information from diverse sources, but conventional methods often hinge on large labeled datasets that are costly and time-consuming to acquire.
Building upon the success of our previous project (Berzelius-2023-147), where our paper on few-shot relation extraction was accepted as a full paper at SAC2024, this new project focuses on advancing zero-shot and few-shot knowledge graph construction.
This research introduces a novel approach, "Chain-of-Refinement," to improve zero-shot knowledge graph creation with large language models. Inspired by chain-of-thought prompting, our method applies iterative refinement: a series of prompts, each building on the knowledge acquired in the previous step. These refinement steps help reduce hallucinations in large language models (LLMs). By incorporating prominent LLMs such as Zephyr, Llama 2, and GPT-3.5, we deepen the models' understanding of the input, enabling more comprehensive and nuanced knowledge graph construction. This approach seeks to push the boundaries of knowledge graph construction in zero-shot and few-shot settings.
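The iterative refinement loop described above can be sketched as follows. This is a minimal illustration, not the project's actual implementation: `call_llm` is a hypothetical stand-in for whatever model backend is used (e.g. Zephyr, Llama 2, or GPT-3.5), and the prompt wording is an assumption.

```python
def call_llm(prompt: str) -> str:
    """Stub for an LLM backend; a real implementation would query a model API."""
    return f"[model output for: {prompt[:40]}...]"

def chain_of_refinement(text: str, num_steps: int = 3) -> str:
    """Extract a knowledge graph from `text`, then iteratively refine it."""
    # Initial pass: ask the model for candidate triples.
    graph = call_llm(
        f"Extract (subject, relation, object) triples from:\n{text}"
    )
    for _ in range(num_steps):
        # Each refinement prompt builds on the previous step's output,
        # asking the model to drop unsupported triples and fix errors,
        # which is intended to reduce hallucinated facts.
        graph = call_llm(
            "Given the source text and the triples below, remove any triple "
            "not supported by the text and correct inaccurate ones.\n"
            f"Text:\n{text}\nTriples:\n{graph}"
        )
    return graph
```

Each iteration feeds the model's own previous answer back alongside the source text, so later steps can verify and correct earlier extractions rather than generating from scratch.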