EconLogicQA: A Question-Answering Benchmark for Evaluating
Large Language Models in Economic Sequential Reasoning
EconLogicQA: A Question-Answering Benchmark for Evaluating Large Language Models in Economic Sequential Reasoning

Abstract

In this paper, we introduce EconLogicQA, a rigorous benchmark designed to assess the sequential reasoning capabilities of large language models (LLMs) within the intricate realms of economics, business, and supply chain management. Diverging from traditional benchmarks that predict subsequent events individually, EconLogicQA poses a more challenging task: it requires models to discern and sequence multiple interconnected events, capturing the complexity of economic logic. EconLogicQA comprises an array of multi-event scenarios derived from economic articles, which necessitate an insightful understanding of both temporal and logical event relationships. Through comprehensive evaluations, we show that EconLogicQA effectively gauges an LLM's proficiency in navigating the sequential complexities inherent in economic contexts. We provide a detailed description of the EconLogicQA dataset and report the outcomes of evaluating the benchmark across various leading-edge LLMs, thereby offering a thorough perspective on their sequential reasoning potential in economic contexts.

1 Introduction

Logical reasoning is a pivotal skill in many professional and academic domains, enabling individuals to make informed decisions by understanding relationships between sequential events or pieces of information. In practice, the reasoning capabilities of large language models (LLMs) are frequently utilized across various applications, yet their effectiveness in logical reasoning remains underexplored. Despite its importance, there is an evident gap in the literature regarding the capability of LLMs to perform logical reasoning at a high level. This paper addresses this gap by introducing EconLogicQA, a new benchmark designed to rigorously assess the logical reasoning capabilities of LLMs specifically within the contexts of economics, business, and supply chain management.

EconLogicQA distinguishes itself from existing benchmarks by challenging LLMs not only to identify but also to logically sequence multiple interconnected events from realistic economic scenarios. This approach aims to reflect the intricate decision-making processes required in these fields, going beyond mere fact recall or simple event prediction. By focusing on the sequencing of events based on logical rather than simply chronological order, EconLogicQA probes the LLMs' ability to engage with and understand the underlying mechanics of economic phenomena.

The benchmark utilizes a curated dataset derived from a wide range of business news articles, guiding GPT-4 to generate multiple-choice questions that demand an intelligent understanding of logical connections. A rigorous human review process ensures the accuracy and appropriateness of the content, refining the dataset to enhance its practical value. Through comprehensive testing across various state-of-the-art LLMs, this paper not only demonstrates EconLogicQA's effectiveness in evaluating logical reasoning but also provides insights into potential improvements and applications of LLMs in complex reasoning tasks.

Our contributions are as follows:

1. We propose a novel benchmark, EconLogicQA, which rigorously assesses LLMs' logical reasoning capabilities within economics, business, and supply chain management.
2. We utilize GPT-4 to generate questions and answers from business articles, ensuring high-quality, well-crafted multiple-choice questions through meticulous human review.

3. We conduct a comprehensive evaluation of both open and proprietary LLMs to assess their performance on this benchmark.

2 Related Work

Sequential Reasoning Benchmarks. In the realm of assessing complex reasoning abilities, Jin et al. (2023) introduce the CLadder dataset, which explores the capacity of large language models (LLMs) for causal reasoning and differentiates itself by focusing on formal, rules-based causal inference instead of the typical evaluation of commonsense causality in Natural Language Processing (NLP). Wang et al. (2023) present STEPS, a rigorous benchmark designed to assess models' understanding of action sequence order in sequential tasks such as cooking and manufacturing, which highlights the challenges current LLMs face in performing order reasoning without specific tuning. In adjacent domains, Guha et al. (2024) launch LegalBench, a notable benchmark that evaluates LLMs in legal reasoning, developed collaboratively with legal experts to cover various facets of practical and theoretical legal analysis. Yang et al. (2024) establish AQA-Bench, an interactive benchmark that evaluates LLMs' sequential reasoning abilities across various algorithmic tasks, including Depth-First Search (DFS), Breadth-First Search (BFS), and binary search, by requiring models to dynamically interact with the task environment, thereby uncovering notable performance disparities among different LLMs. Valmeekam et al. (2024) create PlanBench, an extensible benchmark focused on evaluating LLMs' planning and reasoning capabilities, particularly in action and change, where diverse scenarios from the automated planning community are used to discern between genuine planning abilities and mere retrieval from pre-trained knowledge.

Economic Benchmarks. In the finance domain, Shah et al. (2022) launch the Financial Language Understanding Evaluation (FLUE) benchmark alongside the Financial LANGuage (FLANG) model, offering a comprehensive suite of evaluations focused on economic and financial domains and significantly outperforming existing models on various NLP tasks. Hendrycks et al. (2020) compile the Massive Multitask Language Understanding (MMLU) benchmark of 57 diverse tasks, including economics, designed to evaluate the multitask accuracy of language models, revealing that even the largest models still struggle with expert-level performance and have inconsistent accuracy across subjects. Lu et al. (2023) propose the BBT-CFLEB benchmark, supporting advanced understanding and generation tasks in the financial domain and fostering significant research and development in this specialized area. Zhang et al. (2023) present FinEval, a specialized benchmark for assessing financial knowledge in LLMs, demonstrating significant potential through GPT-4's high performance across diverse prompt types. Van Patten (2023) introduces EconQA, a novel dataset for assessing LLMs on multiple-choice economics questions, and finds that Chain-of-Thought reasoning improves performance, particularly on mathematical queries, while prompt variations have a moderate effect on accuracy.
3 EconLogicQA

In this section, we detail the dataset generation and human review processes for creating the EconLogicQA benchmark and provide illustrative examples from it.

3.1 Dataset Generation

To streamline the question-generation process and reduce the subjectivity, labor-intensiveness, and randomness of manual creation, we utilize GPT-4 to automatically generate questions by extracting key points from news articles. We specifically select economics-related articles from the 2011 to 2022 news dataset available on Kaggle [1], which is under the CC0 Public Domain license. This cleaned dataset provides a comprehensive range of economic news articles, and we further narrow our focus to those categorized under business to align with our research scope in economics.

In the data generation process, instructional prompts are developed to guide GPT-4 in creating multiple-choice questions that challenge models to logically sequence events within the framework of business-related scenarios. Each question starts with a brief scenario description and involves four events that must be ordered based on their logical or chronological sequence rather than their appearance in the source article. The selected events pertain to typical business or economic situations, necessitating a deep understanding of business practices and economic principles for accurate sequencing. The prompts specify that the generated content should be original, concise, and crafted without referencing the original news articles or including unnecessary details. Each question is designed to be completed independently, making it suitable for evaluation. The formatted output includes a scenario description followed by four choices labeled A, B, C, and D, concluding with the correct sequence and a brief explanation to ensure that the reasoning behind the sequence is clear and deducible solely from the information presented in the question and choices. This structure is intended to enhance comprehension and application of business concepts. See Appendix A Figure 1 for an example of a GPT-4 response to the prompt.

[1] https://www.kaggle.com/datasets/hadasu92/cnn-articles-after-basic-cleaning/
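The generation step above can be pictured as a single chat-completion call per article. The sketch below is illustrative only: the prompt wording is a paraphrase (the paper's actual instructional prompt is shown in its Appendix A), and the `openai` client usage and `generate_question` helper are our assumptions rather than the authors' pipeline.

```python
# Minimal sketch of the question-generation step described in Section 3.1.
# Assumptions: the openai Python client and a paraphrased instruction prompt.
from openai import OpenAI

client = OpenAI()

INSTRUCTIONS = (
    "From the business news article below, write one original multiple-choice "
    "question: a brief scenario, four events labeled A-D that must be ordered "
    "logically or chronologically, the correct sequence, and a short explanation. "
    "Do not reference the article or include unnecessary details."
)

def generate_question(article_text: str) -> str:
    """Ask GPT-4 to turn one article into a sequencing question."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": INSTRUCTIONS},
            {"role": "user", "content": article_text},
        ],
    )
    return response.choices[0].message.content

# Each generated item would then be passed to the human review stage (Section 3.2).
```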
3.2 Review Process

To maintain the integrity and quality of the dataset, human verification is incorporated into the workflow. This manual review is essential, as some generated responses exhibit errors in the correct sequencing of events. Each question undergoes meticulous examination, and adjustments are made to ensure accuracy and clarity in the logical sequence provided. Furthermore, the dataset undergoes a rigorous review to identify and exclude sensitive news articles that could be inappropriate. In total, 204 questions are removed from the initial pool of 854 questions. The criteria for removal include scenarios that yield multiple valid sequences and instances where a logical sequence cannot be clearly established. This comprehensive vetting process significantly enhances the evaluation quality. The final dataset consists of 650 questions, divided into training, validation, and test sets containing 390, 130, and 130 questions, respectively.

3.3 Dataset Examples

To provide a clear depiction of EconLogicQA's contents, we present two examples from the dataset in Appendix B Table 2. The first example details a sequence of decisions by Costco to manage its chicken supply chain effectively, while the second outlines steps taken by the Federal Reserve to navigate fiscal challenges. These examples illustrate the dataset's primary objective: to evaluate the capability of large language models to sequence economic events logically, not just chronologically. Each question is meticulously designed to challenge models to demonstrate their understanding of complex economic interactions and to apply logical reasoning within real-world business contexts.

4 Experiments

This section outlines experiments with the EconLogicQA dataset, assessing the sequential reasoning of multiple open and proprietary large language models (LLMs) in economic scenarios.

4.1 Experiment Setup

We run experiments on various LLMs using the EconLogicQA dataset to assess their sequential reasoning capabilities within the intricate realms of economics, business, and supply chain management. We select the current mainstream open and proprietary LLMs for our study, including Llama-2 (Touvron et al., 2023a,b), Llama 3, Gemma, Mistral, Yi, Zephyr, GPT-3.5, and GPT-4. Each model is evaluated in both 1-shot and 5-shot settings. We do not include the 0-shot setting because its results are unsatisfactory due to the task's complexity; we therefore recommend a few-shot approach for this sorting problem. Accuracy is the primary metric, offering a direct measure of each model's understanding of the concepts within the EconLogicQA dataset.

All experiments are conducted on NVIDIA A100 GPUs. Each open LLM used in this paper is sourced from the Hugging Face Transformers library. The Language Model Evaluation Harness is used to test open LLMs on the sequential reasoning evaluation task. A YAML configuration file sets key parameters in our scenario, such as terminating outputs, extracting answers, and evaluating results. Each LLM is configured to stop generating once it begins to pose new questions after answering the question in the prompt, with the temperature set to 0.0 and no sampling. We then extract the answer from the generated response using a regular expression. Finally, we verify the answer through exact matching and use accuracy as the evaluation metric.
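To make the evaluation protocol concrete, the sketch below shows one plausible implementation of the regex-based answer extraction and exact-match accuracy described above. The assumed answer format, the regular expression, and the helper names are illustrative; the actual setup is driven by the Language Model Evaluation Harness YAML configuration rather than this code.

```python
import re
from typing import List

# Assumed format: the model states the order as four choice letters, e.g. "B, A, D, C".
LETTER = re.compile(r"\b[A-D]\b")

def extract_sequence(response: str) -> str:
    """Pull the first four choice letters out of a model response."""
    letters = LETTER.findall(response)
    return "".join(letters[:4])  # e.g. "BADC"

def exact_match_accuracy(responses: List[str], gold: List[str]) -> float:
    """Exact matching: a prediction counts only if the full sequence is correct."""
    correct = sum(
        extract_sequence(r) == g.replace(",", "").replace(" ", "")
        for r, g in zip(responses, gold)
    )
    return correct / len(gold)

print(exact_match_accuracy(["The order is B, A, D, C."], ["B, A, D, C"]))  # 1.0
```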
4.2 Experiment Results

The results of our experiments, shown in Table 1, indicate diverse performance among the models, with significant disparities in their ability to handle the sequencing of economic events. GPT-4-Turbo exhibits the highest accuracy, achieving 56.92% in the 1-shot scenario and 56.15% in the 5-shot scenario, making it the best-performing model in our tests. GPT-4 follows closely, demonstrating the second-highest performance with 55.38% in the 1-shot and 53.85% in the 5-shot settings. Remarkably, the 1-shot scenario generally results in better performance than the 5-shot scenario for these two models, which could be attributed to the models' ability to leverage their pre-trained knowledge effectively without the potential confusion introduced by the additional context in the 5-shot scenario.

Open LLMs show varied performance, as outlined in Table 1. Notably, Llama-3-8B-Instruct demonstrates significant improvements over Llama-3-8B when fine-tuned with instructions, achieving 34.62% accuracy in the 1-shot setting and 37.69% in the 5-shot setting, which highlights the substantial impact of instruction tuning on question-answering performance. Similarly, Mistral-7B-Instruct-v0.2 exhibits promising results, with accuracies of 31.54% in the 1-shot setting and 32.31% in the 5-shot setting, underscoring its adaptability to complex reasoning tasks, though it still lags behind GPT-4's overall performance.

Model                      1-Shot    5-Shot
Llama-2-7B                 0.77%     1.54%
Llama-2-7B-Chat            9.23%     10.00%
Llama-2-13B                9.23%     1.54%
Llama-2-13B-Chat           14.62%    8.46%
Llama-3-8B                 23.85%    23.85%
Llama-3-8B-Instruct        34.62%    37.69%
Gemma-2B-IT                7.69%     7.69%
Gemma-1.1-2B-IT            8.46%     6.92%
Gemma-7B-IT                2.31%     4.62%
Gemma-1.1-7B-IT            0.77%     3.85%
Mistral-7B-v0.1            26.15%    30.00%
Mistral-7B-v0.2            26.15%    32.31%
Mistral-7B-Instruct-v0.1   15.38%    20.77%
Mistral-7B-Instruct-v0.2   31.54%    32.31%
Yi-6B                      3.85%     29.23%
Yi-6B-Chat                 20.77%    30.77%
Zephyr-7B-Alpha            23.08%    23.08%
Zephyr-7B-Beta             17.69%    14.62%
GPT-3.5-Turbo              37.69%    38.46%
GPT-4                      55.38%    53.85%
GPT-4-Turbo                56.92%    56.15%
Table 1: Comparison of the accuracy of multiple large language models on the EconLogicQA dataset under 1-shot and 5-shot learning scenarios.

These experiments collectively demonstrate the varying degrees of proficiency in applying LLMs to economic sequential reasoning, reflecting the current landscape of LLM capabilities in this domain. There is still a clear gap in the ability of current LLMs, especially open LLMs, to accurately handle many economic scenarios and correctly sequence events. This limitation points to significant challenges that persist in the field, emphasizing the need for targeted improvements and innovations in future research. Addressing these shortcomings could lead to more robust models that are better equipped to navigate the complexity of economic reasoning.

5 Conclusion

This study introduces EconLogicQA, a novel benchmark specifically designed to assess the logical reasoning capabilities of large language models (LLMs) in the domains of economics, business, and supply chain management. The benchmark challenges LLMs with complex, realistic economic scenarios. Using GPT-4, high-quality, well-crafted multiple-choice questions are generated from business articles and refined through meticulous human review. A comprehensive evaluation of both open and proprietary LLMs is conducted, providing deep insights into their capabilities and limitations within this specialized context.

In the future, various enhancements can be made to improve the performance of LLMs in economic reasoning. Prompt engineering could be refined to better guide models through complex economic scenarios, enhancing their accuracy in understanding and processing complex logical relationships. Additionally, parameter-efficient fine-tuning (PEFT) on the EconLogicQA training dataset offers a promising approach to customize models efficiently and optimize their responses. Moreover, there is a significant opportunity to develop specialized LLMs designed to address the unique challenges of economics, business, and supply chain management.

Limitations

Scope of Data. The effectiveness of the EconLogicQA benchmark is currently validated using a specific dataset of economic news articles. This reliance on a single data source limits the generalizability of our findings to other datasets in the domain, which may have distinct characteristics and diverse compositions that could influence the performance of LLMs.

Temporal Coverage. The dataset spans articles from 2011 to 2022, potentially missing recent economic developments and trends.
This temporal limitation could affect the benchmark's relevance and the models' performance in current economic contexts.

Ethical Considerations

Our research emphasizes transparency in methodology, reporting, and data utilization. We adhered to the principles of responsible AI research throughout the study. The data employed in this research is sourced from public domains, ensuring that no private user data was involved. We incorporated a stringent human review process to maintain dataset accuracy and integrity, excluding any sensitive or inappropriate content. These measures reflect our commitment to ethical standards and research integrity.
[
"Yi: Open Foundation Models by 01.AI",
"AQA-Bench: An Interactive Benchmark for Evaluating LLMs' Sequential Reasoning Ability",
"CLadder: Assessing Causal Reasoning in Language Models",
"STEPS: A Benchmark for Order Reasoning in Sequential Tasks",
"GPT-4 Technical Report",
"LLaMA: Open and Efficient Foundation Language Models",
"BBT-Fin: Comprehensive Construction of Chinese Financial Domain Pre-trained Language Model, Corpus and Benchmark",
"When FLUE Meets FLANG: Benchmarks and Large Pretrained Language Model for Financial Domain",
"PlanBench: An Extensible Benchmark for Evaluating Large Language Models on Planning and Reasoning about Change",
"Training language models to follow instructions with human feedback",
"Measuring Massive Multitask Language Understanding",
"HuggingFace's Transformers: State-of-the-art Natural Language Processing",
"Cosmos QA: Machine Reading Comprehension with Contextual Commonsense Reasoning",
"CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge"
]
Major Entity Identification:
A Generalizable Alternative to Coreference Resolution
Major Entity Identification: A Generalizable Alternative to Coreference Resolution

Abstract

The limited generalization of coreference resolution (CR) models has been a major bottleneck in the task's broad application. Prior work has identified annotation differences, especially for mention detection, as one of the main reasons for the generalization gap and proposed using additional annotated target-domain data. Rather than relying on this additional annotation, we propose an alternative formulation of the CR task, Major Entity Identification (MEI), where we: (a) assume the target entities to be specified in the input, and (b) limit the task to only the frequent entities. Through extensive experiments, we demonstrate that MEI models generalize well across domains on multiple datasets, both with supervised models and with LLM-based few-shot prompting. Additionally, the MEI task fits the classification framework, which enables the use of classification-based metrics that are more robust than the current CR metrics. Finally, MEI is also of practical use, as it allows a user to search for all mentions of a particular entity or a group of entities of interest.

1 Introduction

Coreference resolution (CR) is the task of finding text spans that refer to the same entity. CR is a fundamental language understanding task relevant to various downstream NLP applications, such as question answering, building knowledge graphs, and summarization. Despite the importance of CR and the progress made by neural coreference models, domain generalization remains an issue even with the best-performing supervised models.

The lack of domain generalization in CR models can largely be attributed to differences in the annotation guidelines of popular CR benchmarks, specifically guidelines about what constitutes a mention. For example, OntoNotes does not annotate singletons, confounding mention identity with being referential. Thus, models trained on OntoNotes generalize poorly. The importance of mention detection for CR generalization is further highlighted by Gandhi et al. (2023), who show that solely annotating mentions is sufficient and more efficient for adapting pre-trained coreference models to new domains (in comparison to annotating coreference chains). Similarly, GPT-4 struggles with zero-/few-shot mention prediction, but given ground-truth mentions, its CR performance is competitive with the best supervised models.

[Figure 1: CR vs. MEI. For the input document d ("There lived a poor tailor named Mustapha, who had a son called Aladdin. Aladdin was disobedient to his father and mother and spent all his time idling with his friends."), the CR task aims to detect and cluster all mentions into different entities, shown in various colors. MEI takes the major entities E = {Mustapha, Aladdin} as additional input and aims to detect and classify the mentions that refer only to these entities.]

Given these observations, we hypothesize that current CR models, including large language models, generalize well at mention clustering but struggle to generalize on mention detection due to idiosyncrasies of different domains/benchmarks.
We put forth an alternative formulation of the CR task where the entities of interest are provided as additional input. Assuming entities to be part of the input offloads the required domain adaptation from training to inference. Specifically, we propose the task of Major Entity Identification (MEI), where we assume the major entities of the narrative, defined as the most frequently occurring entities, to be provided as input along with the text (see Fig. 1). We focus on major entities for the following reasons: (a) specifying the major entities of a narrative is intuitively easier, and (b) a handful of major entities often dominate any discourse. Table 1 shows that in LitBank roughly 6% of entities (490 of 7927) contribute 60% of the mentions (16985 of 29103).

Statistics            LitBank CR   LitBank MEI   FantasyCoref CR   FantasyCoref MEI
# of Mentions         29103        16985         56968             35938
# of Non-singletons   23340        16985         56968             35938
Mean ant. dist.       55.31        36.95         57.58             30.24
# of Clusters         7927         490           5829              942
Avg. cluster size     3.67         34.66         9.77              38.15
Table 1: Comparing CR and MEI. MEI has fewer but larger clusters, and a smaller mean antecedent distance (Mean ant. dist.). Our formulation's frequency-based criterion for deciding major entities means that singleton mentions are typically not a part of MEI.

To test the generalizability of MEI, we adapt two literary CR benchmarks, namely LitBank and FantasyCoref, and a state-of-the-art coreference model to MEI. While there is a big gap in CR performance between in- and out-of-domain models, we show that this performance gap is much smaller for MEI (Section 5.1). To test this hypothesis further, we evaluate large language models (LLMs) for MEI in a few-shot learning setup. On CR, LLMs are shown to struggle with mention detection and perform worse than supervised models. Contrary to this, on MEI, top LLMs (e.g. GPT-4) are only slightly behind supervised models (Section 5.2). These experiments in the supervised setting and the few-shot setting demonstrate that the MEI task is more generalizable than CR.

Additionally, we argue that MEI is easier to evaluate than CR. The MEI task can be viewed as a classification task in which any text span either refers to one of the input entities or to the null class (minor entities and other non-mention spans). The classification formulation of MEI allows for the use of classification-based metrics that are more robust than the current CR metrics. Furthermore, MEI, by its definition, disregards insignificant and smaller clusters known to inflate the CR metrics. As an aside, formulating MEI as a classification task allows for trivial parallelization across candidate spans (Appendix A.1).

Finally, MEI's explicit mapping of mentions to predefined entities improves its usability over CR in downstream applications that focus on mentions of specific entities. MEI effectively replaces the tailored heuristics employed to extract the CR cluster(s) referring to entities of choice in such applications (entity understanding, sentiment and social dynamics analysis).

2 Task Formulation

Notation. For a document $d$, let $E = \{e_j\}_{j=1}^{L}$ be the set of $L$ major entities that we wish to identify. We define $\mathcal{M}_{\mathrm{all}}$ as the set of all mentions that could refer to any entity and, subsequently, $\mathcal{M}_j \subseteq \mathcal{M}_{\mathrm{all}}$ as the set of mentions that refer to a major entity $e_j$. Furthermore, we denote $\mathcal{M} = \bigcup_j \mathcal{M}_j$ as the set of mentions that refer to one of the major entities, while mentions that do not correspond to any major entity are designated as $\mathcal{M}_{\mathrm{other}} = \mathcal{M}_{\mathrm{all}} \setminus \mathcal{M}$.

Task formulation. In MEI, the input consists of the document $d$ and designative phrases $P = \{p(e_j)\}_{j=1}^{L}$, where $p(e_j)$ succinctly represents the entity $e_j$. For example, in Fig. 1, the phrases "Aladdin" and "Mustapha" uniquely represent Aladdin and his father, who appear in "Aladdin And The Wonderful Lamp". Note that in CR, the designative phrases $P$ are not part of the input. In contrast to CR's clustering foundations, MEI starts with a prior for each entity (the designative phrase) and can be formulated as open-set classification, where every mention is either classified as one of the major entities or ignored. Formally, MEI aims to assign each mention $m \in \mathcal{M}_j$ to $e_j$ and each mention $m \in \mathcal{M}_{\mathrm{other}}$ to $\varnothing$, a null entity.
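Viewed as data, an MEI instance pairs a document with the designative phrases and asks for a mapping from mentions to major entities (or to the null entity). The sketch below is a minimal illustration of that interface using the Aladdin example from Figure 1; the type names and the use of mention strings (rather than character spans) are our simplifications, not part of the task definition.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class MEIInstance:
    document: str
    designative_phrases: List[str]  # P = [p(e_1), ..., p(e_L)], one per major entity

# Expected output: every candidate mention maps to the index of a major entity,
# or to None for the null entity (minor entities and non-mention spans).
MEIOutput = Dict[str, Optional[int]]

example = MEIInstance(
    document=("There lived a poor tailor named Mustapha, who had a son called "
              "Aladdin. Aladdin was disobedient to his father and mother."),
    designative_phrases=["Mustapha", "Aladdin"],
)
gold: MEIOutput = {  # abridged
    "a poor tailor": 0, "Mustapha": 0,
    "a son": 1, "Aladdin": 1,
}
```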
3 Supervised MEI Models

We propose MEIRa, Major Entity Identification via Ranking, which draws inspiration from the entity-ranking formulation and maintains an explicit representation for entities. The MEIRa models consist of three steps: encoding the document, proposing candidate mentions, and an identification (id) module that tags mentions with major entities or the null entity.

Document encoding is performed using a Longformer-Large encoder $\phi$ that we finetune for the task. Mentions (or spans) are encoded as $\mathbf{m}_i = \phi(m_i, d)$ by concatenating the first, last, and an attention-weighted average of the token representations within the mention span. In MEI, an additional input is the set of designative phrases $P$ for the major entities. Since each phrase is derived from the document itself, we also obtain its encoding using the backbone: $\mathbf{e}_j = \phi(p(e_j), d)$.

Mention detection. Similar to prior efforts, we use a mention proposal network that predicts high-scoring candidate mentions. This step finds all mentions $\mathcal{M}_{\mathrm{all}}$, and not just the ones corresponding to the major entities $\mathcal{M}$: training a model to detect only mentions of major entities would confuse it, leading to poor performance.

Identification module. As illustrated in Fig. 2, we initialize a working memory $\mathcal{E}_W = [\mathbf{e}_j]_{j=1}^{L}$ as a list of the $L$ major entities based on their designative phrase representations. Given a mention $m_i$, the id module computes the most likely entity as:

$[s_i^*, e_i^*] = \max_{j=1 \ldots L} f([\mathbf{m}_i, \mathbf{e}_j, \chi(m_i, e_j)]), \quad (1)$

where $f(\cdot)$ is an MLP that predicts the score of tagging mention $m_i$ with the entity $e_j$, and $\chi(m_i, e_j)$ encodes metadata. The output $s_i^*$ corresponds to the highest score and $e_i^*$ is the top-scoring entity. Based on the score, $m_i$ is assigned to:

$y(m_i) = \begin{cases} e_i^* & \text{if } s_i^* > \tau, \\ \varnothing & \text{otherwise}, \end{cases} \quad (2)$

where $\tau$ is a threshold (set to 0 in practice). The metadata $\chi(m_i, e_j)$ contains a distance (position) embedding representing the log distance between the mention $m_i$ and the last tagged instance of the entity $e_j$. If no mention is yet associated with the entity, we use a special learnable embedding.
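The scoring rule in Eq. (1)-(2) can be sketched in a few lines of PyTorch: each mention embedding is concatenated with every entity embedding and the metadata features, scored by the MLP $f$, and assigned to the best entity only if the score clears the threshold $\tau$. The dimensions, MLP depth, and metadata encoding below are assumptions for illustration, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class IdentificationModule(nn.Module):
    """Scores one mention against L major-entity representations (Eq. 1-2)."""
    def __init__(self, hidden_dim: int, meta_dim: int, tau: float = 0.0):
        super().__init__()
        self.tau = tau
        self.f = nn.Sequential(  # the MLP f(.)
            nn.Linear(2 * hidden_dim + meta_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, mention: torch.Tensor, entities: torch.Tensor,
                meta: torch.Tensor):
        # mention: (hidden,), entities: (L, hidden), meta: (L, meta_dim)
        L = entities.size(0)
        pair = torch.cat([mention.expand(L, -1), entities, meta], dim=-1)
        scores = self.f(pair).squeeze(-1)      # (L,) score per major entity
        s_star, e_star = scores.max(dim=0)     # Eq. (1)
        # Eq. (2): return the best entity index, or None for the null entity.
        return int(e_star) if s_star.item() > self.tau else None
```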
Updates to the working memory. We investigate two approaches:

(i) MEIRa-Static: As the name suggests, the working memory $\mathcal{E}_W$ of entity representations remains constant ($\mathcal{E}_W(0)$) and is not updated with new mention associations. This makes the approach highly parallelizable.

(ii) MEIRa-Hybrid: Similar to traditional CR, this variation maintains a dynamic working memory $\mathcal{E}_W$, which is updated with every new mention-id association. Specifically, assuming $m_i$ is assigned to $e_j^*$, the working memory is updated using a weighted-mean operator $g$ as $\mathbf{e}_j \leftarrow g(\mathbf{e}_j, \mathbf{m}_i)$, similar to Toshniwal et al. (2020). To prevent error accumulation, we evaluate the mentions against both $\mathcal{E}_W$ and the initial entity representations ($\mathcal{E}_W(0)$), then compute the average score. This hybrid approach reaps benefits from both the initial clean designative phrases and the dynamic updates.

[Figure 2: Identification module of MEIRa. A mention encoding $\mathbf{m}_i$ is concatenated with each entity's embedding in $\mathcal{E}_W$ and the metadata $\chi(m_i, e_j)$. The network $f$ scores the likelihood of assigning $m_i$ to each major entity. If the highest score $s_i^*$ is above the threshold $\tau$, $m_i$ is associated with the highest-scoring major entity $e_i^*$ or discarded. In MEIRa-S, the entity memory $\mathcal{E}_W$ remains static. For MEIRa-H (blue path), the assigned entity's working memory is updated, and both the static (top half) and updated working memory (bottom half) are utilized to compute a final score.]

Following Toshniwal et al. (2020), the mention detection and identification modules are trained end-to-end using separate cross-entropy loss functions.

4 Few-shot MEI with LLMs

We propose a prompting strategy to leverage LLMs for MEI, addressing their challenges in CR.

Mention detection challenges. CR or MEI can be addressed using separate few-shot prompting strategies for mention detection and mention clustering/identification. However, Le and Ritter (2023) found that this strategy faces significant challenges with mention detection, performing worse than a deterministic mention detector. Thus, they assume access to an oracle mention detector and focus their study on evaluating the linking capabilities of LLMs. An alternative is to use an external supervised mention detector instead of the oracle. However, this requires annotated training data and may not align with a true few-shot LLM prompting paradigm. Additionally, supervised mention detectors often fail to generalize across CR datasets due to annotation variability.

MEI with LLMs. We demonstrate that transitioning from CR to MEI addresses this gap in mention detection and propose an end-to-end, few-shot prompting approach for MEI. Inspired by Dobrovolskii (2021), we develop a prompting strategy that first performs MEI at the word level (rather than the span level), followed by a prompt to retrieve the span corresponding to each word. In addition to the document $d$ and the set of phrases $P$, we also provide entity identifiers (e.g. #1, #2) to the LLM. We will use the following example:

Document: That lady in the BMW is Alice's mom.
Major Entities: 1. Alice; 2. Alice's mother.

Prompt 1. Word-level MEI. Mention detection with LLMs is challenging due to the frequent occurrence of nested mentions. We overcome this by prompting the LLM to tag each word. Specifically, through few-shot examples, we ask the LLM to detect and tag the syntactic heads [1] (e.g., lady, Alice, mom) of mentions that refer to the major entities. Other words are left untagged (implicitly assigned to $\varnothing$, the null entity). To create the few-shot examples, a contiguous set of words annotated with the same entity is considered a span, and its syntactic head is extracted using spaCy. The ideal output for the example above is: "That lady#2 in the BMW is Alice#1's mom#2." Note that, even though the span "BMW" might be a valid mention, it is not annotated, as it does not refer to one of the major entities. The exact prompt used for this step is provided in the Appendix, Table 9.

[1] A syntactic head of a phrase is a word (lady) that is central to the characteristics of the phrase (The lady in the BMW).
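After the word-level MEI prompt, the tagged heads have to be pulled out of the generated text before the Head2Span step. The sketch below is a simple regex-based parse of outputs like "That lady#2 in the BMW is Alice#1's mom#2."; the function name and the exact tag-format handling are illustrative assumptions rather than the authors' parsing code.

```python
import re
from typing import List, Tuple

# Matches a word immediately followed by '#' and an entity identifier, e.g. "lady#2".
TAG = re.compile(r"([A-Za-z]+)#(\d+)")

def parse_word_level_mei(tagged_text: str) -> List[Tuple[str, int]]:
    """Return (head word, entity id) pairs from a word-level MEI response."""
    return [(word, int(eid)) for word, eid in TAG.findall(tagged_text)]

output = "That lady#2 in the BMW is Alice#1's mom#2."
print(parse_word_level_mei(output))
# [('lady', 2), ('Alice', 1), ('mom', 2)]
```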
Prompt 2. Head2Span retrieval. The entity-tagged heads are passed to the Head2Span (H2S) module, along with the document, to retrieve the corresponding spans. The prompt consists of the document pre-annotated with the positions of the heads, where each candidate head word is followed by a "#" and is instructed to be replaced by the complete span (including any existing determiners and adjectives). For the input:

That lady# in the BMW is Alice#'s mom#.

the expected ideal output is

That lady (That lady in the BMW) in the BMW is Alice (Alice's)'s mom (Alice's mom).

Table 10 in the appendix shows the H2S prompt.

Preserving structure. We pose MEI as a structured generation task, prompting LLMs to reproduce documents and generate MEI tags at specific locations. Proprietary models like GPT-4 generally reproduce documents faithfully, but for the rare failures we use the Needleman-Wunsch algorithm to align documents and extract tags. In the case of open-source models, we employ regular-expression-based constrained decoding with the outlines library [2].

[2] https://outlines-dev.github.io/outlines/

5 Experiments

Datasets. We evaluate on three literary datasets chosen for their longer length and identifiable major entities, particularly key narrative elements such as characters or plot devices. Table 1 compares statistical aspects of MEI and CR, revealing that MEI features fewer clusters (entities) but larger cluster sizes (more mentions per cluster).

(i) LitBank annotates coreference in 100 literary texts, each averaging around 2000 words. Following prior work, we utilize the initial cross-validation split, dividing the documents into training, validation, and test sets with an 80:10:10 ratio.

(ii) FantasyCoref provides OntoNotes-style [3] coreference annotations for 211 documents from Grimm's Fairy Tales, with an average length of approximately 1700 words. The dataset includes 171 training, 20 validation, and 20 test documents.

(iii) Additional Fantasy Text (AFT) provides annotations for long narratives: (a) Aladdin (6976 words), (b) Ali Baba and the Forty Thieves (6911 words), and (c) Alice in Wonderland (13471 words).

[3] The exact guidelines are documented here.

Metrics. In contrast to CR, MEI facilitates the use of simple classification metrics. We define standard precision and recall for each major entity, considered as an individual class of its own. For a dataset $D = \{d_1, \ldots, d_{|D|}\}$, the evaluation metrics are defined as follows:

$\text{Macro-F1} = \frac{\sum_{d \in D} \sum_{e_j \in E_d} \text{F1}(e_j)}{\sum_{d \in D} |E_d|}, \quad (3)$

$\text{Micro-F1} = \frac{1}{|D|} \sum_{d \in D} \frac{\sum_{e_j \in E_d} \text{F1}(e_j) \cdot |\mathcal{M}_j|}{\sum_{e_j \in E_d} |\mathcal{M}_j|}. \quad (4)$

Macro-F1 is the average F1-score of entities across the dataset, while Micro-F1 is the frequency-weighted F1-score of entities within a document, averaged across the dataset.
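The two metrics in Eq. (3) and (4) reduce to simple averages over per-entity F1 scores. The sketch below assumes each document is represented by its per-entity F1 values and gold mention counts $|\mathcal{M}_j|$; the variable names are ours and the toy numbers are only for illustration.

```python
from typing import Dict, List, Tuple

# Each document: {entity_name: (f1_score, num_gold_mentions)}
Doc = Dict[str, Tuple[float, int]]

def macro_f1(docs: List[Doc]) -> float:
    """Eq. (3): average F1 over all (document, entity) pairs in the dataset."""
    scores = [f1 for doc in docs for f1, _ in doc.values()]
    return sum(scores) / len(scores)

def micro_f1(docs: List[Doc]) -> float:
    """Eq. (4): per-document, frequency-weighted F1, averaged over documents."""
    per_doc = []
    for doc in docs:
        total_mentions = sum(n for _, n in doc.values())
        per_doc.append(sum(f1 * n for f1, n in doc.values()) / total_mentions)
    return sum(per_doc) / len(per_doc)

docs = [{"Aladdin": (0.9, 40), "Mustapha": (0.7, 10)}]
print(macro_f1(docs), micro_f1(docs))  # 0.8 0.86
```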
Major entity selection. We select as major entities the top-$k$ entities ranked by frequency of occurrence. We use $k=5$ for LitBank and FantasyCoref after visualizing the frequency plots of their training sets. For the longer documents in AFT, we select up to 9 entities to ensure coverage of all key entities in the story. We also enforce that every entity $e_j \in E$ has a mention count $|\mathcal{M}_j| \geq 5$. We derive the representative span for each selected $e_j$ from the set of mentions $\mathcal{M}_j$ by selecting the most commonly occurring name or nominal mention.

Implementation details. Supervised models: Model hyperparameters are derived from Toshniwal et al. (2021). To ensure consistent performance across different numbers of target entities, we randomly select a subset of major entities at each training iteration (for more details, see Appendix A.2). All supervised models were trained five times with different random seeds, and we present aggregated results as the mean and standard deviation.

LLMs: We follow a few-shot prompting mechanism across all setups and experiments. Prompts that perform referential tasks consist of 3 examples of 6 sentences each. These 3 examples contain a mixture of narrative styles (narration, dialogue), types of entities (major, non-major), categories of mentions (names, nominals, pronouns), and plurality. Additionally, before producing the MEI output, we ask the LLM to briefly describe each major entity; we find that this additional step improves performance. For the H2S prompt, we provide 9 sentences as examples, balancing the number of pre- and post-modifiers to the head. All examples were selected from LitBank's train set and kept constant throughout the experiments. We set the temperature to 0 for all models to ensure consistent and reproducible outputs.

5.1 Experiments: Supervised Models

Baselines. We train the fast-coref model for CR and perform the following three inference-time adaptations for MEI:

Coref-ID: fast-coref uses active lists of entity representations, resolving coreference by associating mentions with existing clusters or generating new ones. During inference, we disable the cluster-creation step and pre-fill the entity list with the encoded vector representations of the major entities. Hence, all detected mentions either get mapped to one of the major entities or are discarded.

Coref-Cosine Map (Coref-CM): Since coreference clusters obtained from fast-coref lack explicit entity association, we employ the Kuhn-Munkres (KM) algorithm to find the optimal matching cluster for each major entity. The cost matrix uses the cosine similarity between the encoded representations of the major entities and the predicted cluster embeddings, both derived from fast-coref.

Coref-Fuzzy Map (Coref-FM): This method uses the KM algorithm to derive optimal mappings by constructing a cost matrix from accumulated fuzzy string-matching scores between the designative phrases and the predicted clusters' mention strings.

Supervised results. In this experiment, we train MEIRa and the baseline models on the joint training set of LitBank and FantasyCoref. Subsequently, we assess their performance on the individual test sets, with results summarized in Table 2.

Model     FantasyCoref Macro-F1  FantasyCoref Micro-F1  LitBank Macro-F1  LitBank Micro-F1
Coref-ID  72.5 ±2.2              78.8 ±2.7              79.7 ±2.7         80.6 ±3.7
Coref-CM  77.7 ±1.8              82.4 ±2.2              74.1 ±2.5         76.0 ±3.0
Coref-FM  77.9 ±1.7              83.2 ±2.2              77.4 ±2.3         80.6 ±4.7
MEIRa-S   80.7 ±0.6              84.9 ±0.5              80.8 ±0.8         81.8 ±1.0
MEIRa-H   80.3 ±1.4              84.3 ±2.0              82.3 ±1.2         83.2 ±2.5
Table 2: Results for models trained jointly on FantasyCoref and LitBank.

Model     FantasyCoref Macro-F1  FantasyCoref Micro-F1  LitBank Macro-F1  LitBank Micro-F1
Coref-ID  63.4 ±1.8              69.5 ±3.6              58.0 ±2.4         57.7 ±1.0
Coref-CM  72.8 ±0.3              76.5 ±0.5              61.0 ±5.9         61.2 ±5.2
Coref-FM  71.2 ±1.5              75.2 ±1.3              66.1 ±2.1         67.1 ±3.9
MEIRa-S   75.7 ±1.5              78.5 ±1.2              74.6 ±1.1         74.7 ±1.6
MEIRa-H   74.7 ±1.0              78.5 ±0.8              77.2 ±1.9         78.6 ±2.7
Table 3: Results for models trained on OntoNotes.

Overall, MEIRa models consistently outperform the baselines on both metrics while also exhibiting better stability with a lower variance.
The considerable variance observed in the performance of baseline methods across all experiments underscores the non- trivial nature of identifying clusters corresponding 5\nAFT Model Macro-F1 Micro-F1 Coref-ID 68.1 ±5.9 78.7±6.1 Coref-CM 71.1 ±2.8 82.4±4.2 Coref-FM 71.1 ±4.7 83.2±4.7 MEIRa-S 81.6 ±1.4 88.8±1.3 MEIRa-H 82.8±1.1 89.5±1.0 Table 4: Results on the AFT dataset. to major entities within the output clusters provided by the CR algorithms. MEIRa-H and MEIRa-S exhibit competitive parity on FantasyCoref (chil- dren stories), while MEIRa-H edges out on LitBank dataset, showcasing its adaptability in elaborate sentence constructions. Generalization across datasets. To evaluate the generalization capabilities of MEIRa and baseline models, we train them on the OntoNotes dataset and then test their performance on LitBank and Fantasy- Coref. The results are presented in Table 3. When compared with Table 2, we observe a significant per- formance drop across the baseline models ( e.g. for Coref-ID, the average Micro-F1 scores drop from 80.6 to 57.7 on LitBank). The performance gap for the baseline models is more pronounced on LitBank than on FantasyCoref because LitBank’s annotation strategies differ more significantly from those of OntoNotes. The observations aligns with previous work , that showcase poor generalization of models trained for CR. In con- trast, MEIRa models recover most of the underlying performance on both the datasets (MEIRa-H drops a little from 83.2 to 78.6 on LitBank Micro-F1), demonstrating MEI as a more adaptable task, bring- ing robustness over varying annotation strategies. Long documents. Table 4 presents results on the AFT dataset of the models trained using a com- bined training set of LitBank and FantasyCoref. MEIRa models significantly outperform the base- line models, with MEIRa-H gaining 11.7% in Macro-F1 over the best baseline. The results demon- strate the efficacy of MEIRa models on resolving key entities in longer narratives. Computational performance. MEIRa-S supports parallel batched processing since it does not update the working memory after associating mentions, i.e. the mentions need not be processed sequen- tially from left to right. Hence, post-mention de- tection (common to all models), MEIRa-S is about 25×faster than fast-coref when assessed across LitBank, FantasyCoref and AFT datasets on an FantasyCoref LitBank Model Macro-F1 Micro-F1 Macro-F1 Micro-F1 MEIRa-H 88.5 91.0 86.1 85.4 GPT-4 90.7 92.0 88.8 91.6 GPT-3.5 65.6 70.4 74.3 75.8 Code Llama-34B 63.4 70.8 68.3 72.7 Llama3-8B 50.5 57.8 46.3 52.1 Mistral-7B 62.1 71.1 61.2 70.9 Table 5: Few-shot LLM prompting results assuming the availability of ground-truth mentions. NVIDIA RTX 4090 (see Fig. 3 in the appendix). Additionally, with the model’s small memory foot- print during inference, the entire process can also be parallelized across chunks of documents making it extremely efficient. Hence, we pose MEIRa-S as a faster while competitive alternative to MEIRa- H (that requires dynamic updates and has similar computational performance as fast-coref ). 5.2 Experiments: Few-shot prompting Models. We experiment with GPT-44, GPT-3.55, Code Llama-34B (Rozière et al., 2024), Mistral-7B , and Llama3- 8B.6Following Le and Ritter (2023), we use the instruction-tuned versions for open-source models. These models were chosen for their ability to handle the extended context required for our benchmarks. 
5.2.1 Linking Performance w/ Gold Mentions We first evaluate all the models assuming the avail- ability of an oracle mention detector. The experi- mental configuration is aligned with that of Le and Ritter (2023), albeit with the distinction that we as- sess them for the MEI task rather than for CR. The prompt used in our setup is provided in Table 11 of Appendix. For comparison, we also perform inference on golden mentions with MEIRa-H. The results in Table 5 show that GPT-4 sur- passes the supervised MEIRa-H model in this setup. Among LLMs, GPT-4 is easily the best-performing model. Code Llama-34B performs the best among open-source models, closely followed by Mistral- 7B. While Code Llama-34B is tailored for the code domain, surprisingly, it outperforms strong LLMs suited for natural language. This result corroborates a similar finding by Le and Ritter (2023) for CR and related evidence regarding code pretraining aiding entity tracking . We find 4Specifically, gpt-4-1106-preview 5Specifically, gpt-3.5-turbo-1106 6https://ai.meta.com/blog/meta-llama-3/ 6\nFantasyCoref LitBank Model Macro-F1 Micro-F1 Macro-F1 Micro-F1 MEIRa-H 80.3 84.3 82.3 83.2 GPT-4 w/ Ext det 80.1 82.2 78.6 83.9 GPT-4 with varying prompting strategies Single prompt 63.0 66.2 64.4 72.8 Two-stage prompt 70.5 74.9 76.5 81.3 Word-level MEI + spaCy H2S GPT-4 77.4 79.4 82.5 85.5 GPT-3.5 50.1 54.4 60.1 63.1 Code Llama-34B 19.4 23.4 9.4 16.2 Llama3-8B 29.2 32.8 24.5 27.1 Mistral-7B 28.0 30.9 14.9 15.3 Table 6: Results on LLMs with different mention detec- tion and linking strategies. that Code Llama-34B performs close to GPT-3.5 for FantasyCoref, though a sizable gap remains for LitBank, potentially due to its linguistic complexity. 5.2.2 MEI Task Performance with LLMs In this section, we present the results for the end- to-end MEI task using LLMs. We compare all the approaches from Section 4 and relevant baselines with the results summarized in Table 6. To limit the combinations of LLMs and approaches for our experiments, we first compare all the approaches in tandem with GPT-4 and then present results for the best-performing approach with other LLMs. The first straightforward approach of using a Sin- gle Prompt to retrieve all the mentions of major entities in a single pass results in a significant perfor- mance drop compared to MEIRa-H (prompt in Ta- ble 12 of Appendix). The reason is that while GPT-4 outperforms MEIRa-H at mention linking, its men- tion detection performance, especially with nested mentions, is much worse compared to MEIRa-H.7 To further underscore the importance of mention detection, we also compare against the baseline GPT-4 w/ Ext det , which utilizes an external pre- trained mention detector followed by prompt-based linking (prompt in Table 11 of Appendix). We train the mention detector on the PreCo dataset , which achieves a 93.8% recall and 53.1% precision on the combined FantasyCoref and LitBank validation sets. We observe that GPT-4 w/ Ext det is almost at par with the fully supervised MEIRa-H, again highlighting the strong mention linking capabilities of GPT-4. Next, we present the results of our proposed 7The failure to detect nested mentions is despite best efforts to provide illustrative examples in the few-shot prompt. 
Le and Ritter (2023) report similar findings with earlier GPT versions.Error Type MEIRa-H GPT-4 Missing Major 162 793 Major-Major 210 154 Major-Other 243 0 Other-Major 200 516 Extra-Major 461 896 Total 1276 2359 Table 7: Breakdown of errors by MEIRa-H and GPT-4 on the combined LitBank and FantasyCoref test set. Two-stage prompt , motivated by the Single prompt method’s failure with nested mentions. The first prompt asks GPT-4 to perform word-level MEI, by limiting the task to syntactic heads only. The second prompt then performs the task of mapping the identified syntactic heads to full mention spans. The results strongly validate our proposed approach with a relative improvement of more than 7% over theSingle prompt method across all metrics and datasets. We also explore replacing the second step, i.e., head-to-span (H2S) retrieval, with an external tool. Specifically, we invert spaCy ’s span-to-head mapping to obtain a head-to-span retriever.8 GPT-4 significantly improves in this setup, out- performing even the supervised model on LitBank. Given the strong performance of GPT-4 + spaCy H2S, we evaluate the open-source LLMs in only this setting. We observe a wide gap between GPT-4 and the open-source models. Llama3-8B surpasses other open-source models on both datasets, whereas the larger Code Llama-34B underperforms on the end-to-end task. This contrasts with the findings of the idealized golden mention setting, which as- sesses purely the model’s linking capabilities. The discrepancy between these results highlights the importance of evaluating in the realistic end-to-end setup. 5.3 Error Analysis We classify MEI errors into five categories: (1)Missing Major: Not detecting a mention m∈ M. (2) Major-Major: Assigning a mention m∈ Mjto any other major entity E \ej. (3) Major- Other: Assigning a mention m∈ M to∅. (4)Other-Major: Assigning a mention m∈ M other to any major entity in E. (5) Extra-Major: Detect- ing extra mentions m̸∈ M alland assigning to any major entity in E. 8For the test set gold mentions of the two datasets, there were only two cases where spans had the same head. We handled these two cases manually. 7\nGolden MentionsPresently [a small boy] 0came walking along the path – [an urchin of nine or ten] 0. . . . . . [Winterbourne] 1had immediately perceived that [he] 1might have the honor of claiming [him] 2as a fellow countryman. “Take care [you] 2don’t hurt [your] 2teeth," [he] 1 said, paternally . . . . . . [My] 2mother counted them last night, and one came out right afterwards. She said she’d slap [me] 2if any more came out. [I] 2can’t help it. It’s this old Europe . . . . . . If [you] 2eat three lumps of sugar, [your] 2mother will certainly slap [you] 2," [he] 1said. “She’s got to give [me] 2some candy, then," rejoined [[his] 1young interlocutor] 2. GPT-4 OutputPresently [a small boy] 0came walking along the path – [an urchin of nine or ten] 0. . . . . . [Winterbourne] 1had immediately perceived that [he] 1might have the honor of claiming [him] 2as a fellow countryman. “Take care you don’t hurt your teeth," [he] 1said, paternally . . . . . . [My] 2mother counted them last night, and one came out right afterwards. [She] 2said[she] 2’d slap [me] 2if any more came out. [I] 2can’t help it. [It]2’s this old Europe . . . . . . If you eat three lumps of sugar, [your] 2mother will certainly slap [you] 2," [he] 1said. “ [She] 2’s got to give [me] 2some candy, then," rejoined [his] 2young interlocutor. 
MEIRa-H OutputPresently asmall boy came walking along the path – [anurchin ofnine orten] . . . . . . [Winterbourne] 1had immediately perceived that [he] 1might have the honor of claiming [him] 2as a fellow countryman. “Take care [you] 2don’t hurt [your] 2teeth," [he] 1 said, paternally . . . . . . [My] 2mother counted them last night, and one came out right afterwards. She said she’d slap [me] 2if any more came out. [I] 2can’t help it. It’s this old Europe . . . . . . If [you] 2eat three lumps of sugar, [your] 2mother will certainly slap [you] 2," [he] 1said. “She’s got to give [me] 2some candy, then," rejoined [[his] 1young interlocutor] 2. Table 8: Qualitative Analysis showcasing different errors made by GPT-4 and MEIRa-H. Errors are color-coded as follows: Miss ingMajor,Others-Major,Extra-Major, Major-Major, and Major-Other. Results combined over the LitBank and Fanta- syCoref test sets are presented in Table 7. Missing Major and Extra-Major contribute most of the er- rors for GPT-4, highlighting the scope for improve- ment in mention detection and span retrieval. Men- tion detection also remains a challenge in MEIRa- H, the model making most of the mistakes in the Extra-Major category. GPT-4 distinguishes major entities more clearly than MEIRa-H but tends to over-associate other mentions with major entities, resulting in higher Other-Major and Extra-Major errors. Note that GPT-4 has zero errors in the Major- Other category due to the prompt design, which only allows annotating major entities. Examples of these errors are visualized in Table 8. 6 Related Work Neural models for CR have become the de facto choice in supervised settings . Efforts to enhance model efficiency include reducing candidate mentions to word-level spans and using single dense representations for entity clusters . Generalization in CR remains a lingering prob- lem . Current solutions include fea- ture addition , joint training , and active learn- ing . Rather than relying on additional train- ing data, we argue for an alternative formulation where the burden of domain adaptation is offloaded from training to inference. Evaluation of LLMs for CR has largely been conducted in limited settings, such as the sentence- level Winograd Schema Challenges (WSC) , clinical pronoun resolution and instance-level Q&A . Le and Ritter (2023) conducted the first document-level evaluation of LLMs for CR but assumed an oracle-mention detector. In contrast, we conduct end-to-end evaluations. Character Identification deals with specific char- acters from transcripts of TV shows and trains a model tailored to these constrained inputs . Baruah and Narayanan (2023) introduced a dataset annotated with referent mentions of specific characters of interest. We differ from these works by adopting a generalized task formulation indepen- dent of annotation strategies and entity selection. 7 Conclusion CR models are limited in their generalization capa- bilities owing to annotation differences and general challenges of domain adaptation. We propose MEI as an alternative to CR, where the entities rele- vant to the input text are provided as input along with the text. Our experiments demonstrate that MEI is more suited for generalization than CR. Additionally, MEI can be viewed as a classifica- tion task that (a) enables the use of more robust classification-based metrics and (b) a trivially par- allelizable model across document spans, which gives a 25x speedup over a comparable corefer- ence model, making MEI more suitable for longer narratives. 
Unlike CR, the formulation of MEI allows few-shot prompted LLMs to compete effectively with trained models. Our novel two-stage prompting and robust baseline methods empower top-performing LLMs like GPT-4 to achieve this. Our analysis indicates that this task holds promise for effectively evaluating the long-context referential capabilities of LLMs in an end-to-end manner.

8 Limitations

Major Entity Identification (MEI) is proposed as a generalizable alternative to the coreference resolution (CR) task, not as a replacement for CR. MEI limits itself to major entities and only caters to applications that are interested in a particular pre-defined set of entities. Our experiments follow certain thresholds that might not be universally applicable, and results and performance might vary slightly with this decision (refer to Appendix A.2). Our current few-shot prompting evaluations are limited to a few models that accommodate a large context window. Optimizing prompts and architecture to allow for piece-wise aggregation of outputs across chunks of documents is left for future work.
[
"Code Pretraining Improves Entity Tracking Abilities of Language Models",
"GUMsley: Evaluating Entity Salience in Summarization for 12 English Genres",
"Code Llama: Open Foundation Models for Code",
"Are Large Language Models Robust Coreference Resolvers?",
"Coreference Resolution through a seq2seq Transition-Based System",
"Annotating Mentions Alone Enables Efficient Domain Adaptation for Coreference Resolution",
"LingMess: Linguistically Informed Multi Expert Scorers for Coreference Resolution",
"Large language models are few-shot clinical information extractors",
"On Generalization in Coreference Resolution",
"Word-Level Coreference Resolution",
"OntoGUM: Evaluating Contextualized SOTA Coreference Resolution on 12 More Genres",
"Moving on from OntoNotes: Coreference Resolution Model Transfer",
"Incremental Few-shot Text Classification with Multi-round New Classes: Formulation, Dataset and System",
"Conundrums in Entity Coreference Resolution: Making Sense of the State of the Art",
"Learning to Ignore: Long Document Coreference with Bounded Memory Neural Networks",
"Language Models are Few-Shot Learners",
"Longformer: The Long-Document Transformer",
"An Annotated Dataset of Coreference in English Literature",
"An Entity-Driven Framework for Abstractive Summarization",
"Rewarding Coreference Resolvers for Being Consistent with World Knowledge",
"SpanBERT: Improving Pre-training by Representing and Predicting Spans",
"Coreference Resolution with Entity Equalization",
"Text Generation from Knowledge Graphs with Graph Transformers",
"PreCo: A Large-scale Dataset in Preschool Vocabulary for Coreference Resolution",
"Neural Models for Reasoning over Multiple Mentions Using Coreference",
"Emotion Detection on TV Show Transcripts with Sequence-based Convolutional Neural Networks",
"End-to-end Neural Coreference Resolution",
"Lexical Features in Coreference Resolution: To be Used With Caution",
"Character Identification on Multiparty Conversation: Identifying Mentions of Characters in TV Shows",
"Which Coreference Evaluation Metric Do You Trust? A Proposal for a Link-based Entity Aware Metric",
"Domain Adaptation with Active Learning for Coreference Resolution",
"Error-Driven Analysis of Challenges in Coreference Resolution",
"Towards Robust Linguistic Analysis using OntoNotes",
"BLANC: Implementing the Rand index for coreference evaluation",
"Algorithms for the Assignment and Transportation Problems",
"Learning and Evaluating Character Representations in Novels",
"FantasyCoref: Coreference Resolution on Fantasy Literature Through Omniscient Writer’s Point of View",
"Overview of TAC-KBP2015 Tri-lingual Entity Discovery and Linking"
] |
Instance-Level Dynamic LoRAs Composition for Cross-Task Generalization
|
Instance-Level Dynamic LoRAs Composition for Cross-Task Generalization
Abstract
Large language models perform well on tasks that have undergone instruction fine-tuning, but their performance on completely unseen tasks is often less than ideal. To overcome the challenge of cross-task generalization, task-level LoRA combination has been proposed, which does not require training a model for new tasks. Instead, it learns the LoRA combination weights from a small number of samples to form the task model. However, task-level LoRA combination only utilizes a few task modules due to its reliance on the weight enumeration method, and it also overlooks the specificity of individual instances. Therefore, we propose an instance-level LoRA composition for cross-task generalization, which selects appropriate multiple task LoRAs for each input instance and dynamically determines the composition weights. Our experiments on publicly available datasets show that our method outperforms the typical method, LoraHub, in 16 out of 27 tasks. We release the source code at https://github.com/noname822/iLoraComp.git
1 Introduction
Currently, large language models (LLMs) demonstrate remarkable zero-shot learning capabilities on tasks that have undergone instruction tuning (Chung et al., 2022; Achiam et al., 2023; Touvron et al., 2023; AI@Meta, 2024). However, numerous studies have revealed that when encountering novel tasks outside their training distribution, these models often fail to exhibit satisfactory performance. Exploring strategies to enhance the cross-task generalization abilities of these massive language models, enabling them to adapt swiftly and accurately to diverse new tasks, has emerged as a pressing challenge that demands attention.
Figure 1: Previous task-level composition constructs a shared task model for all instances. The proposed instance-level composition constructs a unique task module for each instance.
Addressing the challenge of cross-task generalization has traditionally involved fine-tuning models for each task and in-context learning. However, these conventional approaches come with inherent limitations. Fine-tuning for every new task can be resource-intensive, demanding extensive data, storage, and computing power, which compromises flexibility. Although methods such as LoRA, falling under the delta tuning approach, aim to adapt to specific tasks or domains by introducing smaller parameter updates while minimizing computation and storage costs, thus mitigating storage issues and enhancing flexibility, they still require backpropagation for precise output tuning, rendering them less cost-effective across multiple tasks. In-context learning, on the other hand, necessitates more input than zero-shot prompting to fully leverage the model's capabilities, indirectly increasing the computational resources needed for inference. To address the shortcomings of these methods and achieve efficiency and sustainability in multi-task, few-shot, and high-volume scenarios, innovative approaches such as LoraHub have emerged. LoraHub rapidly adapts to unseen tasks by intelligently combining pre-trained low-rank adapters from other relevant tasks.
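To make the idea of combining pre-trained low-rank adapters concrete, the sketch below merges several LoRA factor pairs into a single weight delta under fixed scalar weights. It is only an illustration of the composition step: the dictionary layout, the omission of per-adapter scaling factors, and the function names are our assumptions, not LoraHub's or this paper's actual implementation.

```python
import torch

def compose_lora_delta(lora_modules, weights):
    """Merge several LoRA adapters into one delta per target matrix.

    `lora_modules` is a list of dicts mapping a layer name to its low-rank
    factors (A, B), where that adapter's update to the layer is B @ A.
    `weights` holds one scalar per adapter (the composition weights).
    Returns a dict of weighted-sum deltas to add onto the base model.
    """
    merged = {}
    for w, module in zip(weights, lora_modules):
        for name, (A, B) in module.items():
            delta = w * (B @ A)               # scale this adapter's update
            merged[name] = merged.get(name, 0) + delta
    return merged

def apply_to_base(base_state_dict, merged_delta):
    """Add the composed delta onto the base weights (out-of-place)."""
    patched = dict(base_state_dict)
    for name, delta in merged_delta.items():
        patched[name] = base_state_dict[name] + delta
    return patched
```

In LoraHub, these scalar weights are not hand-set but fitted from only a handful of examples of the target task.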
This method enhances model performance across di- 1\nverse tasks without increasing input requirements, striking a balance between performance and energy consumption. However, LoraHub also has room for improve- ment in terms of its effectiveness. Firstly, when selecting Lora modules from a trained Lora library for task adaptation composition, LoraHub’s current strategy is to randomly select modules from the library. This random selection may result in the inclusion of tasks that are either overly similar or completely unrelated, leading to significant perfor- mance variations under different random seeds for the same task, thus exhibiting poor stability. Sec- ondly, when training on instances, LoraHub does not consider the subtle nuances between individual instances, preventing the full utilization of the lim- ited instance data to capture the potential specificity of inputs, which in turn limits LoraHub’s perfor- mance. To address these two issues, we propose the following solutions: •To address the issue with the Lora module se- lection strategy, we adopt a selection method based on task similarity. By calculating the se- mantic similarity between the target task and the training sets of the available Lora mod- ules, we prioritize the combination of Lora modules that are most closely related to the current task, thereby enhancing the stability and effectiveness of the task-level adaptation. •To fully account for the unique characteris- tics of each input instance, we propose tai- loring a dedicated Lora module combination for each instance. By calculating the seman- tic similarity between the input instance and the training instances used to create the avail- able Lora modules, we select the most fitting instance-specific Lora combination as the pro- cessing strategy for that input. This approach effectively leverages the subtle nuances across different input instances. By employing the aforementioned improvements, our method has achieved a significant enhancement in inference stability. Additionally, compared to the original LoraHub, our approach has demonstrated a noticeable performance advantage. In our experi- ments, a total of 27 tasks were tested, and in these, our proposed method outperformed LoraHub on 16 of them. 2 Related work Instance-Based Generation for LLMs refers to a method that leverages dataset analysis to extract valuable instance, thereby enhancing the perfor- mance of a task. The introduction of large lan- guage models has since inspired numerous works, including Wiki-Chat , which have sought to augment language model capabil- ities through retrieval-based knowledge enhance- ment. This trend originated with RAG , which incorporates knowledge as prompts for in-context learning in LLM. Additionally, there are works that do not retrieve text as prompts, but instead retrieve delta-tuning modules, using these modules to generate prompts for answering questions, such as Knowledge Card . In this paper, we retrieval delta-tuning mod- ule by calculating the semantic similarity between instance and question using the method of DPR . Module Composition represents an endeavor to integrate diverse models, Consequently, tasks that retrieve model modules for composition have nat- urally emerged, such as MAC, SLM , Arrow, LoraRetriever , and Lora- Flow . While most methods adopt a simplistic processing approach for mod- els. 
These approaches strive to leverage retrieval methods by employing retrieval scores as weights during composition, thereby obviating the need for manual parameter tuning and facilitating immediate usage. Concurrently, methods such as MOELoRA exist that directly assign weights through backpropagation. LoraHub occupies an intermediary position, using gradient-free optimization. In comparison to previous work, our approach places a stronger emphasis on utilizing instances to obtain model modules that are more relevant to the given question.
3 Method
In this section, we first provide an overview of the process, followed by an explanation of how to identify appropriate task LoRA modules based on LoRA training data. Finally, we offer a detailed account of how to integrate the selected LoRA combinations with the input data.
3.1 Overview
We first train the upstream tasks T on the large model M_θ, using each training set T_i ∈ T to obtain a LoRA module L_i, and collect these modules into the LoRA library L. Next, we specify the hyperparameter N as the number of LoRA modules to be composed. Each new task T′ ∉ T has its instance set I′. For each instance e_j ∈ I′, we find the closest N LoRA modules from L, denoted as L_{e_j} = {L_1, ..., L_N}, and optimize a weight combination ŵ_{e_j} = {w_1, ..., w_N} using a gradient-free method ng. For a new question Q belonging to the new task T′, we select the most suitable weight combination ŵ_{e_j} based on the semantic similarity between Q and e_j, and then build a new LoRA module L̂_j. Finally, we combine these to form the model M_φ = LoRA(M_θ, L̂) and use it for reasoning on Q.
3.2 LoRA Module Retrieval
To select the most suitable LoRA modules from L for composition, we identify the corresponding training set T_i = {(x_1, y_1), ..., (x_n, y_n)} for each L_i ∈ L. We then derive the task embedding vector emb_{T_i} = (1/n) Σ_{k=1}^{n} M_s(x_k + y_k) using the sentence vectorization model M_s. Similarly, for the instance e_j = (x_{e_j}, y_{e_j}), we obtain its embedding vector emb_{e_j} = M_s(x_{e_j} + y_{e_j}). Following the approach of Mussmann and Ermon (2016) and Karpukhin et al. (2020b) in using cosine similarity as a measure of task similarity, we can identify the top N tasks most similar to e_j. The cosine similarity is computed as follows:
similarity(e_j, T_i) = (emb_{e_j} · emb_{T_i}) / (‖emb_{e_j}‖ ‖emb_{T_i}‖)   (1)
where emb_{T_i} represents the embedding vector of the i-th task, and ‖·‖ denotes the Euclidean norm of a vector. By calculating the cosine similarity between each task T_i and the instance e_j, we select the top N tasks with the highest similarity as the candidate set of similar tasks for e_j, denoted as L_{e_j}, and then collect all L_{e_j} into a set called S_L.
3.3 Instruct-Based Module Composition and Inference
To fine-tune the model M_θ to the state that best aligns with the instance e_j = (x_j, y_j), we employ the gradient-free optimization method ng to refine the weights. We first perform a broad adjustment of the initial weights w_init using all the instances for T_i, denoted as I_i = {e_1, ..., e_n}. Then, we conduct a targeted adjustment using the instance-level LoRA set L_{e_j} corresponding to the specific instance e_j. The optimization process is encapsulated in the following formula:
ŵ_{e_j} = ng(I_i, L_{e_j}, w_init)   (2)
Having aggregated the adjusted weights ŵ_{e_j} for all e_j into the set S_ŵ, we identify the e_j that shares the most affinity with the input x. This is accomplished by calculating the cosine similarity between the instance's input embedding vector emb_{e_j}^x = M_s(x_j) for e_j and the embedding vector emb_x = M_s(x) for the input x.
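As a rough illustration of the retrieval step just described, the snippet below builds task embeddings by mean-pooling sentence embeddings of the (input, output) pairs and ranks tasks by the cosine similarity of Equation (1). The concrete encoder checkpoint, the space-joined concatenation of input and output, and the default N are illustrative choices, not the exact setup of the paper.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Stand-in sentence encoder for M_s; the paper uses a MiniLM-based model.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

def task_embedding(train_pairs):
    """Mean-pool sentence embeddings of concatenated (input, output) pairs."""
    texts = [x + " " + y for x, y in train_pairs]
    return encoder.encode(texts).mean(axis=0)

def top_n_tasks(instance_pair, task_embeddings, n=20):
    """Indices of the N tasks whose embeddings are most cosine-similar
    to the embedding of a single few-shot instance (Eq. 1)."""
    x, y = instance_pair
    e = encoder.encode([x + " " + y])[0]
    sims = []
    for emb in task_embeddings:
        sims.append(float(e @ emb / (np.linalg.norm(e) * np.linalg.norm(emb))))
    order = np.argsort(sims)[::-1]
    return order[:n].tolist()
```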
This analysis allows us to select the most suitable LoRA library fromSL, denoted as Lsuit, and its corresponding weights from Sˆw, denoted as ˆwsuit. Utilizing these components, we construct the optimal LoRA mod- uleˆL= ˆwsuitLsuit. As a result, we obtain the model Mϕ=LoRA (Mθ,ˆL)that is specifically tailored to the given input. This model is then em- ployed for inference, with the output expressed as y=Mϕ(x). 4 Experimental Setup LLM. We utilized the Flan-T5-Large model as our foundational large language model Mθfor experimentation pur- poses. Concurrently, we employed the compact all_datasets_v4_MiniLM-L6 model as our Ms, which was trained on a dataset comprising one bil- lion sentence pairs, excluding the BBH and flanv2 datasets that we utilized. This compact model effec- tively supported our sentence vectorization efforts. Dataset and Evaluation. We utilize the flanv2 dataset , which incorporates data from four mixed sources, as the training set for upstream tasks. It encompasses 264 distinct datasets, out of which we selected 97 for our pur- poses. We then employed the Lora modules trained on these datasets by Huang et al. (2024) as our repository of Lora models for potential selection. The Big-Bench Hard benchmark , with 27 tasks, offers a valid test for Mθas it was not trained on these datasets. We sampled 5 instances per task, used 20 LoRA modules for adaptation, and initiated with 40 steps of global optimization, followed by EM-based evaluation on the remaining data. 3\nBaseline Setup. To ensure our method’s credibility, we used our LoRA library to test LoraHub refined parameters for 40 steps as a baseline, averaging three runs for the final score (LoraHub avg). We compared scores using zero- shot, full fine-tuning (FFT), and in-context learning (ICL). For LoRA module selection, we conducted ablation experiments using the average embedding vector of five instances per task (BatchComp). In FFT, we maintained consistency by training with the same random seeds and 5 instances. We trained the model over 40 epochs with a learning rate of 3e-5 and batch size of 5. 5 Result And Discussion Method average average-3 FFT∗39.8 44.3 0-shot 24.4 27.4 ICL 30.9 34.8 LoraHub avg 34.0 38.1 BatchComp 34.7 39.0 Ours 35.6 40.0 Table 1: Experimental results on 27 tasks of BBH, the "average-3" has excluded three tasks with an accuracy of less than 10%, (*) represents the upper limit. Method FFT ICL 0-shot LoraHub BatchComp 7/18 18/3 16/8 13/12 Ours 11/16 19/2 18/7 16/8 Table 2: A/B vs. the baseline, "A" represents the num- ber of tasks where our proposed method performed bet- ter than the baseline method, while "B" represents the number of tasks where our proposed method performed worse than the baseline method. 5.1 Result The primary results are presented in Table 1 and Table 2, with detailed task scores in Appendix A. Our method significantly outperforms the zero-shot approach on 19 out of 27 tasks and the in-context learning (ICL) method on 18 tasks in terms of aver- age performance. Compared to ICL, our approach is more computationally efficient, requiring fewer tokens. Our modifications to LoraHub are also notably successful, with our method outperform- ing LoraHub’s random selection approach on 16 tasks. Crucially, our instance-level method exhibits a 0.9% performance enhancement over our task- level method in the ablation study, underscoring the efficacy of capturing input nuances through instance-specific adaptation. 
However, our method still cannot compete with full fine-tuning (FFT), which holds a significant performance advantage over other methods on certain highly structured tasks, such as "date understanding" and "dyck language". The results suggest that only FFT enables the model to adequately learn the underlying structure and patterns required for these more complex and specialized tasks.
5.2 Discussion
Ablation study. Our instance-level approach significantly outperforms the task-level BatchComp, which directly selects LoRA modules without pairing questions to instances. BatchComp's 0.7% improvement over random LoraHub selection pales in comparison to our approach's doubling of performance on the "disambiguation qa" task, likely due to our method's superior ability to highlight the importance of key instances for task success.
Retrieval method        average
BM25                    25.6
DPR L2 Distance         34.3
DPR Cosine Similarity   35.6
Table 3: Results of different retrieval strategies
Retrieval strategy. Our approach is closely tied to retrieval performance. If accurate retrieval is not achieved, so that suitable instances are not properly aligned with their questions and matched with the appropriate LoRA modules, overall effectiveness is reduced, as the BM25 row in Table 3 demonstrates. The results obtained with DPR's L2 distance and cosine similarity confirm the efficacy of DPR for instance-level fusion.
6 Conclusion
Our work introduces two key enhancements to the LoraHub framework. The first is the incorporation of a method that indexes models trained on datasets using their semantic centroids, which improves LoraHub's precision at the task level. The second is the introduction of instance-level adaptation, which leverages the distinctive features of individual instances to raise the performance ceiling of the LoraHub approach. These complementary strategies work in synergy to bolster the model's cross-task generalization capabilities.
7 Limitations
Increased Computational Cost. Our method incurs a higher computational cost than LoraHub, mainly because we train weights for each individual instance during the LoRA group weight training phase. This means that our approach requires computational resources proportional to the number of instances, multiplied by the cost of LoraHub's training.
Application Scenario Limitation. Our method is not universally cost-effective. In scenarios where a task involves a limited number of questions, employing our method may not be the most economical choice. For tasks without any instances, zero-shot learning would be a more appropriate and efficient approach.
Additional Preliminary Preparations Required. When utilizing LoRA for composition, our method not only requires identifying the appropriate LoRA modules within the library but also necessitates access to the data used during the training of those LoRA modules. Consequently, our approach incurs greater initial preparation costs compared to methods that do not rely on such specific training data.
Requirement for Higher-Quality Instances. Instance-level methods, such as ours, are more sensitive to the quality of the instances used. Lower-quality instances, including those that are flawed or not closely related to the task, can potentially lead to misleading answers for associated questions. This underscores the importance of careful instance selection and curation to ensure the method's effectiveness.
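As a closing illustration of the gradient-free weight fitting that both LoraHub and the proposed method rely on (Equation 2), the toy sketch below runs a simple random-perturbation search over composition weights. The actual optimizer, its hyperparameters, and the loss definition differ in the real systems; everything here is an illustrative stand-in.

```python
import numpy as np

def gradient_free_weight_search(loss_fn, num_adapters, steps=40, seed=0):
    """Toy stand-in for the gradient-free optimizer ng in Eq. (2).

    `loss_fn(weights)` should evaluate the model composed with `weights`
    on the few-shot instances and return a scalar loss. A random-perturbation
    hill climb is used purely for illustration; it is not the optimizer used
    by LoraHub or by this paper.
    """
    rng = np.random.default_rng(seed)
    best_w = np.full(num_adapters, 1.0 / num_adapters)   # uniform init weights
    best_loss = loss_fn(best_w)
    for _ in range(steps):                               # e.g., 40 steps as in Sec. 4
        candidate = best_w + rng.normal(scale=0.1, size=num_adapters)
        cand_loss = loss_fn(candidate)
        if cand_loss < best_loss:                        # keep only improving moves
            best_w, best_loss = candidate, cand_loss
    return best_w
```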
|
[
"Towards Modular LLMs by Building and Reusing a Library of LoRAs",
"Scalable Language Model with Generalized Continual Learning",
"Online Adaptation of Language Models with a Memory of Amortized Contexts",
"LoRA-Flow: Dynamic LoRA Fusion for Large Language Models in Generative Tasks",
"LoraRetriever: Input-Aware LoRA Retrieval and Composition for Mixed Tasks in the Wild",
"In-context Learning with Retrieved Demonstrations for Language Models: A Survey",
"Fine-Tuning or Retrieval? Comparing Knowledge Injection in LLMs",
"LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition",
"WikiChat: Stopping the Hallucination of Large Language Model Chatbots by Few-Shot Grounding on Wikipedia",
"Knowledge Card: Filling LLMs' Knowledge Gaps with Plug-in Specialized Language Models",
"GPT-4 Technical Report",
"The Flan Collection: Designing Data and Methods for Effective Instruction Tuning",
"A Survey on In-context Learning",
"Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them",
"Black-Box Tuning for Language-Model-as-a-Service",
"Learning To Retrieve Prompts for In-Context Learning",
"LoRA: Low-Rank Adaptation of Large Language Models",
"Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks",
"Dense Passage Retrieval for Open-Domain Question Answering",
"MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers",
"Learning and Inference via Maximum Inner Product Search",
"Maximum inner-product search using cone trees",
"MOELoRA: An MOE-based Parameter Efficient Fine-Tuning Method for Multi-task Medical Applications",
"Okapi at TREC-3"
] |
Leveraging pre-trained language models for linguistic analysis: A case of argument structure constructions
|
Leveraging pre-trained language models for linguistic analysis: A case of argument structure constructions Abstract This study evaluates the effectiveness of pre- trained language models in identifying argu- ment structure constructions, important for modeling both first and second language learn- ing. We examine three methodologies: (1) supervised training with RoBERTa using a gold-standard ASC treebank, including by-tag accuracy evaluation for sentences from both native and non-native English speakers, (2) prompt-guided annotation with GPT-4, and (3) generating training data through prompts with GPT-4, followed by RoBERTa training. Our findings indicate that RoBERTa trained on gold-standard data shows the best performance. While data generated through GPT-4 enhances training, it does not exceed the benchmarks set by gold-standard data. 1 Introduction Argument structure constructions (ASCs) are lexi- cogrammatical patterns at the clausal level. They consist of an argument structure and a main verb, with each argument contributing to the clause’s meaning . The characteristics of ASC use, such as frequency and/or the strength of association between a verb and its argument struc- ture, have been actively explored in previous stud- ies on first language (L1) and second language (L2) learning and assessment . To effectively model human language learn- ing/development using ASC features, ASCs must be reliably identified in target texts. Recent studies have shifted from manual to automatic ASC analyses . However, enhancing automatic analysis presents a challenge due to the unreliable extrac- tion of ASCs. This issue typically occurs when analyses are constructed from the bottom up, us- Figure 1: Distinguishing semantic roles in similar de- pendency structures of two different types of ASCs, visualized by DisplaCy ing individual syntactic and semantic elements to form the target constructions. For example, while syntactic analyses would represent the clauses (1) she ran [to the mountains] and (2) she ran [in the mountains] identically based on the form (i.e., subject-verb-prepositional phrase structures), they imply different meanings and represent distinct ASC types. In case (1), the prepositional phrase [to the mountains ] is an argument that completes the meaning by specifying the goal of the move- ment. In contrast, in case (2), the phrase [ in the mountains ] modifies the location of the event, as illustrated in Figure 1. One potential reason for this mismatch is that human language often employs a pre-built form-meaning schema at the clausal level , which can be challenging to capture from a bottom-up perspec- tive. A top-down approach, directly assigning ASC types based on their clausal contexts, is therefore likely more effective than a bottom-up approach. Recent advancements in pre-trained language models (PLMs) may offer a promising solution to these challenges, given their effectiveness in stor- ing sentence-level contextual knowledge, as well as part-of-speech and syntactic knowledge within their word embeddings (Miaschi and Dell’Orletta, 1\n2020; Hewitt and Manning, 2019). The follow-up empirical question is whether these models can reli- ably capture specific types of ASCs, both with and without top-down annotations provided by trained human annotators focusing on the linguistic charac- teristics of clausal forms. 
To address this, the cur- rent study explores the use of PLMs for identifying ASCs, evaluating three methodologies: (1) super- vised training with a encoder model (RoBERTa) using a gold-standard ASC treebank, (2) prompt- guided annotation of unlabeled data with a decoder mode (GPT-4), and (3) prompt-guided generation of training data with GPT4, followed by training with RoBERTa. 2 Backgrounds 2.1 Language learning and ASC use The usage-based constructionist approach sug- gests that language learning/development is driven by learners forming form-meaning pairings (also known as constructions), through statistical induc- tion from varied linguistic inputs. In modeling lan- guage learning/development, a key aspect of this approach involves ASCs, which are clausal level constructions that convey the core concepts of a sentence. They are also instrumental in communi- cations as they encapsulate conceptual archetypes, such as motion or causative events (Bencini and Goldberg, 2000; Goldberg, 1995, 2003; O’Connor and Kay, 2003; Rappaport Hovav and Levin, 1998). Building on theories that emphasize the signif- icance of ASCs, empirical studies in language learning have indicated that the frequency of ASCs (and of verbs) and the strength of association between ASCs and their corresponding verbs are key factors in the de- velopmental trajectory of their use. To be spe- cific, language learners make form-meaning map- pings between frequent linguistic forms (e.g., give- me-the-toy ) and their corresponding meanings early in their learning process. As learners encounter more related but varied inputs (e.g., hand -me-the-toy ,bring -me-the-toy ), they de- velop schematic representations of these forms likeVERB -me-the-toy , or more abstractly, VERB - Recipient-Object )1. In short, as they develop, learn- 1Research has shown that learners tend to initially over- generalize schematic slots . For example, after learning how to use a basic transitive ASC form (e.g., she opened the door ), a learner might mistakenly extend this construction to intransi-ers adopt a broader range of less frequent ASCs, utilize a wider range of verbs within specific ASCs, and form stronger associations between verbs and ASCs, thus reducing their use of atypical verbs in these constructions. The use of ASCs has proven to be a useful indi- cator of language proficiency, applicable to NLP applications such as automatic scoring and model- ing human language development. Kyle and Cross- ley (2017), for example, found that more profi- cient L2 writers tended to use less frequent but more strongly associated verb-ASC combinations. Additionally, they found that ASC-based indices were better predictors of holistic writing scores than classic indices of syntactic complexity (e.g., mean length of T-unit [minimally terminable unit] and mean length of clause), which focused on the structural elements of sentences without account- ing for the functional relationships conveyed by ASCs. Relatedly, scholars have also found that the use of particular ASC types indicate L2 proficiency. For example, Hwang and Kim (2023) found that more proficient L2 writers tended to use a wider range of ASC types overall, and also tended to use a higher proportion of passive and caused-motion ASCs. 2.2 Identification of ASCs To accurately and reliably identify ASCs, initial studies relied on time-intensive manual analyses . 
However, recognizing the need for efficiency, researchers have increasingly been investigating the feasibility of automated ASC analysis for some time now, as illustrated below. 2.2.1 Use of dependency representations The advent and popularization of syntactic de- pendency representation in treebanks and parsers provided a helpful starting point for automated ASC analysis. For example, O’Donnell and Ellis (2010) used a dependency parsed ver- sion of the BNC to ex- plore the feasibility of extracting ASCs using de- pendency tags. While this approach allowed for some target constructions to be accurately ex- tracted (e.g., VERB-preposition-noun construction: [talked about it ], Römer et al., 2014) overall ac- tive verbs, resulting in ungrammatical sentences (e.g., she sits the chair ). However, they gradually fine-tune their linguistic system through additional input and use. 2\ncuracy was insufficient for practical use. The in- troduction of NLP systems that utilize neural net- works substantially increased dependency parsing accuracy, leading to renewed efforts in automated ASC annotation . However, an important issue with the use of dependency representations to identify ASCs is that extant dependency representations do not include the semantic information necessary to disambiguate some ASC types (e.g., between in- transitive simple and intransitive motion construc- tions, as illustrated in Figure 1; also see Kyle and Sung, 2023). To improve accuracy, an alternative approach that considers the semantics of the clause is necessary. 2.2.2 Use of semantic role labels Another promising approach involves using databases annotated with semantic role labels, such as PropBank or Universal Propositions (UP) treebank . Given that each ASC includes ‘argument roles’, which often correspond to traditional semantic roles (e.g., agent, patient, theme, goal, etc.), lever- aging those semantic role annotations to extract ASCs appears promising. However, there are two major obstacles at present. First, the accuracy of current automated semantic role labeling systems is still not sufficient for this task2. Second, it is sometimes not straight- forward to map the output of semantic role labeling systems to ASCs. Typically, these systems use ab- stract semantic role labels (e.g., ARG0, ARG1) that pose challenges in directly mapping to theoretical ASC categorizations for some complex ASCs3. To address this issue, one potential solution in- volves automatically extracting semantic roles from a clause and mapping the set of roles to correspond- ing ASC types based on domain knowledge. Subse- quently, these mapped ASCs can be trained using a sequential learning model. For example, Kyle and Sung (2023) utilized a combination of UP 2To our best understanding, the publicly-available seman- tic role labeling achieved an F1 of 0.86 on argument tagging, and 0.95 on predicate tagging . Note that these scores are for large-grained argu- ment tags, which do not offer the precision required for ASC identification. 3Particularly, ARG2 and ARG3 cover a number of seman- tic categories. According to Jurafsky and Martin (Chapter 24.4), ARG2 includes benefactive, instrumental, attributive, or end-state roles, while ARG3 encompasses start-point, bene- factive, instrumental, or attributive roles.treebank , VerbNet , and FrameNet to semi-automatically annotate ASCs for a subset of the English Web Treebank with ASC data (i.e., silver-annotated ASC treebank; ASC treebank V1). 
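To give a concrete sense of the role-set-to-ASC mapping strategy just described, the hypothetical snippet below hard-codes a few prototypical mappings (mirroring the frames later listed in Table 1) and falls back to an UNKNOWN label otherwise. Real pipelines require far richer rules, verb-level exceptions, and disambiguation than this lookup suggests.

```python
# Hypothetical sketch of mapping a clause's set of semantic roles to an ASC tag.
ROLESET_TO_ASC = {
    frozenset(["agent"]): "INTRAN_S",
    frozenset(["agent", "theme"]): "TRAN_S",
    frozenset(["agent", "recipient", "theme"]): "DITRAN",
    frozenset(["theme", "goal"]): "INTRAN_MOT",
    frozenset(["agent", "theme", "destination"]): "CAUS_MOT",
    frozenset(["theme", "attribute"]): "ATTR",
}

def map_clause_to_asc(semantic_roles):
    """Map the set of argument roles found in a clause to an ASC label."""
    return ROLESET_TO_ASC.get(frozenset(semantic_roles), "UNKNOWN")

print(map_clause_to_asc(["agent", "recipient", "theme"]))  # -> DITRAN
```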
They then trained a transformer model using RoBERTa embeddings with the semi- automatically annotated ASC labels and compared the model with three probabilistic models (based on verb lemmas, syntactic frames utilizing depen- dency parsing, and a combination of verb lemmas and syntactic frames). The results showed that the transformer model, trained on silver-annotated sentences with semantic role labels, achieved the highest classification accuracy (F1 = .918 on the silver-annotated test set), outperforming the other models. Despite this success, there is room to im- prove the model’s accuracy by leveraging gold- standard annotations beyond this semi-automatic annotation system . 2.2.3 Use of the gold standard ASC treebanks In response to the limitations observed in previous methodologies, a promising approach is to build gold-standard ASC annotations first and then train and/or evaluate PLMs for ASC annotation appli- cations. In order to do so, first, a treebank must be manually annotated with ASC information fol- lowing systematic annotation guidelines. Then, the treebank can be used for multiple approaches: (1) train set for a supervised learning, especially designed for sequential named entity recognition (NER) tasks4, (2) input example for few-shot learn- ing in unsupervised learning tasks, and (3) test set to test the accuracy of the models. Recently, Sung and Kyle (2024) released a gold- standard annotated treebank of ASCs (ASC tree- bank V2), which includes sentences from the En- glish Web Treebank (EWT), as well as sentences written by L2 users from ESL-WR , and spoken by L2 users from ESL-SP (10,204 sentences; 22,069 ASC to- kens). This treebank can be leveraged for more robust training and precise evaluation of developed models aimed at identifying ASCs. 4Methodologically, Kyle and Sung (2023) adopted this approach for the silver-annotated ASC treebank. 3\n3 Related work 3.1 Automated linguistic annotation with encoder models Recent advancements have underscored the po- tential of PLMs in automated linguistic annota- tion, as encoder models (e.g., BERT ; RoBERTa ) have demon- strated impressive gains in supervised learning tasks. Based on the Transformer architecture , PLMs have been exten- sively pre-trained on large text corpora and adeptly store morpho-syntactic and sentence-level contex- tual knowledge within their word embeddings (Mi- aschi and Dell’Orletta, 2020; Hewitt and Manning, 2019). One fundamental application, often consid- ered first in linguistic annotation, is dependency tagging and parsing. The performance of these models, specified for English, typically achieves an F1 score above 0.90 . Beyond syntactic analysis, Shi and Lin (2019) demonstrated that a BERT-LSTM based model could attain F1 scores of 0.90 on in-domain test sets and 0.84 on out-domain test sets in semantic role labeling. This was accomplished through argument identification and classification, without the need for auxiliary syntactic features like part-of-speech tags or dependency trees. The RoBERTa-based model showed a promising result for a discourse-level linguistic annotation. For example, recently Eguchi and Kyle (2023) ap- plied a RoBERTa-based ensemble model to iden- tify and categorize rhetorical stance features in academic English writing. By employing a dis- course analytic framework and manually annotat- ing 4,688 sentences across eight rhetorical stance categories, they trained an ensemble model com- bining RoBERTa and LSTM. 
This model achieved a macro-averaged F1 score of 0.72 in span iden- tification of stance-taking expressions, surpassing pre-adjudication human annotator reliability. 3.2 Automated linguistic annotation with decoder models To effectively employ encoder models for fine- grained linguistic analyses, it is important to collect and precisely annotate a certain amount of training data for the linguistic features of interest. However, data annotation is often a costly process. This cost encompasses the labor involved in researchers re- cruiting, training, and managing human annotators, as well as the time spent by annotators in labeling raw data. In this context, recent studies have ex- plored ways to effectively use decoder models (e.g., GPT) for data annotation with unsupervised learn- ing . They have demonstrated impressive zero-shot or few-shot learning abilities, which allow them to perform tasks with minimal or no task-specific training data . For example, Ding et al. (2022) conducted com- prehensive analyses on the feasibility of leverag- ing GPT-3 for data annotation in different NLP tasks including an NER task. They developed three distinct GPT-3-based data annotation approaches: (1) prompt-guided unlabeled data annotation, (2) prompt-guided training data generation, and (3) dictionary-assisted training data generation. Sub- sequent experiments on both sequence- and token- level NLP tasks were used to evaluate their perfor- mance. The findings indicated that directly anno- tating unlabeled data was effective for tasks with a small labelling task, while generation-based meth- ods proved more suitable for tasks with a larger la- belling task. Similarly, Yu et al. (2023) investigates the application of GPT models to automate com- plex pragmatic-discourse features of apology in zero and few-shot settings. By comparing the per- formance of GPT-3.5, GPT-4, and human annota- tions in annotating apology components, the study demonstrated that GPT-4’s accuracy approached that of human annotators. On the contrary, the recent study by Ettinger et al. (2023) found limited success using GPT-3, Chat- GPT, and GPT-4 models for semantic annotations (i.e., abstract meaning representation ). The experiments included zero and few-shot experiments, as well as an experiment fo- cusing on PLMs’ ability to handle metalinguistic queries (e.g., identifying primary sentence events and predicates). A comprehensive evaluation of parse acceptability demonstrated that, even with few-shot examples, the models almost never suc- ceeded in producing completely accurate parses. The findings indicate that while these models cap- ture some semantic elements, significant challenges persist in achieving precise semantic analyses. 4 Methodology 4.1 Datasets In this study, we utilize two treebanks, namely the silver and gold versions of the ASC treebank. The first silver version includes 26,437 ASC tokens that were semi-automatically 4\nannotated (CC-BY 4.0)5. The second gold version includes 22,069 manually annotated ASC tokens (CC-BY 4.0)6. The sen- tences in this treebank were sampled from the En- glish Web Treebank (ETW) , L2-Written (ESL-WR) , and L2-spoken (ESL-SP) treebanks, which are all part of the Universal Dependencies project . Given the relatively small representation of L2 written and spoken data, training, development, and test sets were resampled with a 34/33/33 distribution. The EWT sentences retained their original sections and were roughly distributed at 80/10/10. 
Table 1 illustrates the nine ASC tags along with the most prototypical semantic roles that were mapped in two treebanks , accompanied by examples from the annotated dataset. Appendix A shows ASC type frequencies in each dataset. 4.2 Experiment setup The purpose of this study is to explore how to leverage PLMs, specifically RoBERTa (an encoder model) and GPT-4 (a decoder model), for ASC an- notations which could assist in modeling and mea- suring human language development. To achieve this goal, we designed three different approaches to utilize PLMs to evaluate and compare their per- formance (Figure 2). Figure 2: Experiment overview 4.2.1 Experiment 1 The objective of the first experiment is to investi- gate supervised learning using gold-standard data applied with RoBERTa embeddings . To accomplish this, we train a transformer- based machine learning, employing the open- access Python library, SpaCy (version 3.7.4; Hon- nibal et al., 2020) for a multi-class NER task. 5https://github.com/LCR-ADS-Lab/ASC-Treebank 6https://osf.io/v75qu/SpaCy’s method includes a transition-based parser, a neural network-based state prediction model, for structured prediction in NER tasks. Additionally, we employed the en_core_web_trf pipeline, which fine-tunes a RoBERTa model. To evaluate the performance, we constructed three comparative models: (1) a model using silver- standard data, (2) a model trained with gold L1 data, and (3) a model trained with both gold L1 and L2 data. Considering the necessity for accurate per- formance on L2 data to capture non-native English linguistic structures, we conducted detailed testing on each L1, L2 written, and L2 spoken data. For specifics on the hyperparameter settings, refer to Appendix B. 4.2.2 Experiment 2 The goal of the second experiment is to explore prompt-guided annotation of unlabeled data. To this end, GPT-4 was employed to generate labels for a subset of the test set from the gold-standard treebank. Due to the high processing costs and time, we streamlined the task by filtering the tag set – reducing the number of tags from nine to seven by removing the ATTR and PASSIVE tags. Moreover, we utilize a random balanced extraction method to select sentences for annotation, ultimately resulting in a total of 282 sentences. To evaluate performance, we provided GPT-4 with three distinct prompts for label generation on the test set: (1) zero-shot, (2) 3-shot, and (3) 10- shot. In cases of few-shot learning, examples were randomly selected from the gold-standard ASC treebank. We compared these results with baseline scores from a model trained under a supervised learning. This comparative model, as described in Experiment 1, incorporated adaptations such as early stopping7. Figure 3 shows an example of a zero-shot learning, and for details on the examples that we used for the experiment, refer to [ details anonymized for review ]. 4.2.3 Experiment 3 The objective of the third experiment is to explore the use of prompt-guided generation of training data for training RoBERTa. In this experiment, we utilized GPT-4 to create a labeled dataset, which was subsequently used to train with RoBERTa. For data generation, GPT-4 was used to pro- duce a balanced set of sentences with ASC tags, starting with 3-shot and 10-shot settings, as the 7The model was trained for only 400 iterations. 
5\nASC (Annotated tag) Semantic frame Example Attributive (ATTR) theme -V-attribute It theme is now visible attribute on the street Caused-motion (CAUS_MOT) agent -V-theme -destination I agent put it theme [on the calendars] destination Ditransitive (DITRAN) agent -V-recipient -theme I agent gave him recipient [the address] theme Intransitive motion (INTRAN_MOT) theme -V-goal I theme won’t go [out the door] goal Intransitive resultative (INTRAN_RES) patient -V-result Money patient may become tight result Intransitive simple (INTRAN_S) agent -V Iagent am working from the office Passive (PASSIVE) theme -aux-V passive They theme were recommended passive by him Transitive resultative (TRAN_RES) agent -V-result -result I agent don’t want [my leg] result hurt result Transitive simple (TRAN_S) agent -V-theme I agent should buy [a new one] theme Table 1: ASCs representation Figure 3: Example of prompting GPT-4 to generate ASC labels in a zero-shot setting model struggled to generate data without any ini- tial examples. We divided the experiment into two parts: the first involved training the model solely using data generated by GPT-4; the second com- bined these generated sentences with a similarly balanced selection from the gold-standard dataset to augment the training set. This approach allowed the integration of artificially generated and gold data into two additional experimental groups: one trained with 3-shot (i.e., sentences generated from 3-shot setting) plus gold data, and another with 10-shot plus gold data. The data were converted to IOB format to train RoBERTa. We then com- pared the performance of these models to baseline scores from a model trained on fewer gold data sentences8. This comparison additionally aimed to 8This adjustment was made because the GPT-4 generated sentences typically had fewer ASC types, necessitating a re-evaluate the effectiveness of augmenting training sets with machine-generated data versus additional human-annotated data. We ensured consistency in hyperparameters and the number of training epochs to facilitate comparability9. Figure 4 shows an ex- ample of a few-shot learning. Figure 4: Example of prompting GPT-4 to generate ASC labels in a few-shot setting 5 Results 5.1 Experiment 1 We investigated the performance of supervised learning using gold-standard data applied with RoBERTa embeddings. The results, detailed in Table 2, highlight the highest performance of the model trained using gold-standard data that in- cludes both L1 and L2 annotations (Gold L1+L2 train model). It demonstrated the highest averaged F1 scores across all tested datasets: EWT (F1 = duction in the gold training data for a fair comparison. 9We used the same hyperparameter settings as the first experiment and also did the early stopping of stop at 400 iterations of the training data. 
6\nSilver train model Gold L1 train model Gold L1 + L2 train model ASC L1 L2Writ L2Spok L1 L2Writ L2Spok L1 L2Writ L2Spok ATTR 0.982 0.955 0.971 0.972 0.954 0.986 0.968 0.971 0.988 CAUS_MOT 0.794 0.764 0.690 0.818 0.833 0.710 0.857 0.867 0.710 DITRAN 0.757 0.862 1.000 0.919 0.914 0.842 0.865 0.881 0.947 INTRAN_MOT 0.763 0.755 0.774 0.800 0.770 0.789 0.772 0.807 0.843 INTRAN_RES 0.667 0.741 0.000 0.750 0.788 0.800 0.625 0.813 0.833 INTRAN_S 0.806 0.770 0.853 0.779 0.806 0.817 0.808 0.803 0.865 PASSIVE 0.932 0.865 0.875 0.920 0.775 0.938 0.940 0.865 0.909 TRAN_RES 0.853 0.714 0.588 0.884 0.800 0.625 0.881 0.792 0.625 TRAN_S 0.922 0.904 0.933 0.931 0.929 0.927 0.936 0.943 0.948 macroAv 0.902 0.885 0.907 0.908 0.900 0.905 0.912 0.915 0.928 Table 2: F1-scores across ASC types, models, and registers, with the highest scores per tag in each dataset shaded (Experiment 1) 0.912), L2 Written (F1 = 0.915), and L2 Spoken (F1 = 0.928). It also outperformed the other models in individual tag accuracy, securing the highest F1 scores for seven out of nine annotation types in both the L2 Written and Spoken datasets. Meanwhile, the model trained on the gold-standard L1 dataset (excluding L2) achieved top F1 scores for four out of nine tags in the L1 written dataset, underscoring the importance of leveraging gold-standard data for developing effective model, especially in com- parison to models trained on the silver-standard data. 5.2 Experiment 2 We explored prompt-guided annotation of unla- beled data using GPT-4. The results demonstrate that performance varied with the number of ex- amples provided (Table 3). The zero-shot learn- ing yielded the lowest F1 score at 0.377, while the 10-shot configuration showed an improvement, achieving the highest average F1 score of 0.60210. This indicates that more extensive example-driven guidance considerably enhances the model’s ef- fectiveness in automated ASC tagging tasks with GPT-4. However, the overall F1 scores were lower than the model trained solely on gold annotations (i.e., baseline), and neither of the F1 scores for any ASC type exceeded those of the baseline model. 5.3 Experiment 3 We explore the use of prompt-guided generation of training data for training RoBERTa. The ex- periment was designed to first train the RoBERTa model using only the data generated by GPT-4 and 10We additionally tested zero-shot learning by explicitly providing syntactic or semantic information about each con- struction to the model, but observed no improvement. Refer to Appendix C for detailed results and the prompts used.ASC tag (#) zero-shot 3-shot 10-shot baseline CAUS_MOT (55) 0.121 0.446 0.483 0.907 DITRAN (46) 0.612 0.673 0.667 0.945 INTRAN_MOT (54) 0.562 0.674 0.684 0.825 INTRAN_RES (41) 0.130 0.525 0.730 0.822 INTRAN_S (105) 0.327 0.421 0.552 0.817 TRAN_RES (46) 0.213 0.306 0.485 0.863 TRAN_S (307) 0.676 0.700 0.742 0.922 macroA V (654) 0.377 0.535 0.602 0.888 Cost ($) 3.82 3.71 29.56 Time (mins) 29 24 24 Table 3: F1-scores for ASC tagging using GPT-4 (Ex- periment 2) then compare its performance with a model trained using gold standard data, as detailed in Table 4. The results reveal two key findings: First, increas- ing the number of examples, from 3-shot to 10-shot, enhanced model performance. The F1-scores gen- erally improved with the number of examples pro- vided, with the 10-shot configuration substantially outperforming the 3-shot across most categories. 
This highlights the role of example-driven guid- ance in enhancing the quality of machine-generated training data; Second, despite the performance gains observed with an increased number of exam- ples, models trained solely with gold data (gold1) consistently outperform those trained with the GPT- 4 generated data (both 3-shot and 10-shot), partic- ularly in more complex ASCs (e.g., CAUS_MOT, TRAN_RES). This highlights that while machine- generated data can positively contribute to the train- ing process for some ASCs (e.g., TRAN_S, IN- TRAN_MOT), it still falls short of the quality and effectiveness of human-annotated data. The second part of the experiment aimed to deter- mine if augmenting the gold-standard training set with GPT-4-generated data could enhance the per- formance of the supervised learning model. As il- 7\nCategory 3-shot 10-shot gold1 CAUS_MOT (55) 0.333 0.422 0.838 DITRAN (46) 0.367 0.632 0.867 INTRAN_MOT (54) 0.405 0.667 0.651 INTRAN_RES (41) 0.571 0.620 0.667 INTRAN_S (105) 0.303 0.485 0.742 TRAN_RES (46) 0.102 0.188 0.824 TRAN_S (307) 0.347 0.718 0.860 macroA V (654) 0.340 0.607 0.816 # of sentences 927 814 469 Cost ($) 3.31 6.59 Time (mins) 18 20 Table 4: Comparison of F1-scores for ASC tagging using different training sets, trained with RoBERTa (Ex- periment 3) lustrated in Table 5, introducing machine-generated data (both 3-shot and 10-shot) into the gold data set does not consistently improve performance across all ASC tags11. The macro average F1-score in- dicates that models trained with a combination of gold and machine-generated data (0.795 for 3- shot+gold and 0.809 for 10-shot+gold) generally perform less effectively than those trained solely with gold-standard data (0.816). Furthermore, the results demonstrate that the most significant improvement in performance was observed when gold data was augmented with ad- ditional gold data (gold1+gold2), achieving the highest macro average F1-score of 0.877. This underscores that while machine-generated data can enhance training effectiveness for some ASC types (e.g., TRAN_RES, INTRAN_S), incorporat- ing more human-annotated gold data substantially boosts model accuracy. Upon closer examination of the machine-generated training data, it became evident that despite the prompts directing GPT-4 to generate sentences closely resembling the human- produced examples in the 10-shot set, the model struggled to capture the nuances present in sen- tences from human sources, such as the web corpus or L2 datasets (See Appendix D). In other words, GPT-4-generated sentences tend to be shorter and less complex, typically lacking multiple clauses, unlike the more elaborate sentences crafted by hu- mans. This limitation likely impacted the quality of the training data and, consequently, the effective- ness of the training outcomes. 
11There are some cases where it slightly enhances the model’s effectiveness, as seen in the TRAN_RES and IN- TRAN_S tags.ASC tag (#) gold1 gold1 gold1 gold1 +3-shot +10-shot +gold2 CAUS_MOT (55) 0.838 0.731 0.782 0.914 DITRAN (46) 0.867 0.756 0.824 0.920 INTRAN_MOT (54) 0.651 0.644 0.727 0.814 INTRAN_RES (41) 0.667 0.615 0.695 0.831 INTRAN_S (105) 0.742 0.760 0.751 0.816 TRAN_RES (46) 0.824 0.857 0.782 0.886 TRAN_S (307) 0.860 0.851 0.863 0.900 macroA V (654) 0.816 0.795 0.809 0.877 # of trained sentences 469 1396 1283 938 Table 5: Comparison of F1-scores for ASC tagging using different training sets – combined with the gold- standard data, trained with RoBERTa (Experiment 3) 6 Conclusions This study highlights the potential of integrat- ing PLMs into linguistic analysis frameworks, particularly for examining the characteristics of ASCs in the context of modeling L1 and L2 learn- ing/development. RoBERTa, when trained on gold- standard datasets, demonstrated superior perfor- mance, underscoring the importance of compre- hensive, high-quality annotated data. Additionally, the use of GPT-4 for prompt-guided annotation and data generation offered some insights into the effec- tiveness of synthetic data in model training. While these methods did not surpass the F1 scores of the baseline model trained solely on gold-standard an- notations, they proved effective in identifying and processing certain types of ASCs. Future directions: This study serves as a promis- ing foundation for automated annotation systems in both L1 and L2 language contexts. However, it did not directly assess the effectiveness of ASC anno- tation in automatic writing evaluation or feedback systems, which represent critical avenues for future research and applications of NLP in education. Limitations The accuracy of ASC annotation was assessed across three linguistic domains—L1 written, L2 written, and L2 spoken—but only a single register within each domain was examined in Experiment 1. Experiments 2 and 3 did not comprehensively ex- plore model performance across different domains. Consequently, the applicability of these models in other registers, such as L2 written narratives or L2 argumentative speeches, remains uncertain, partic- ularly with the RoBERTa model. Furthermore, the GPT-4 model should have also included investi- gations into two additional ASC types (PASSIVE, 8\nATTRIBUTE) and comparisons across different linguistic domains. Additionally, due to the limited scope of the L2 datasets, certain ASC types, such as transitive and intransitive resultative constructions, were underrepresented in the test sets. Therefore, the annotation accuracy for these specific ASCs should be interpreted with caution. Supplementary Materials All prompt, data, code, and models are available in [details anonymized for review ] All contributions in this proceeding are licensed under the Creative Commons Attribution-Non-Commercial 4.0 Inter- national License (CC-BY 4.0).
|
[
"Span Identification of Epistemic Stance-Taking in Academic Written English",
"Assessing the potential of LLM-assisted annotation for corpus-based pragmatics and discourse analysis",
"Is GPT-3 a Good Data Annotator?",
"Contextual and Non-Contextual Word Embeddings: an in-depth Linguistic Investigation",
"Language Models are Few-Shot Learners",
"Universal Dependencies v2: An Evergrowing Multilingual Treebank Collection",
"RoBERTa: A Robustly Optimized BERT Pretraining Approach",
"A Structural Probe for Finding Syntax in Word Representations",
"Simple BERT Models for Relation Extraction and Semantic Role Labeling",
"AllenNLP: A Deep Semantic Natural Language Processing Platform",
"Assessing syntactic sophistication in L2 writing: A usage-based approach",
"Attention is All you Need",
"Universal Dependencies for Learner English",
"Generating High Quality Proposition Banks for Multilingual Semantic Role Labeling",
"Linking learner corpus and experimental data in studying second language learners’ knowledge of verb-argument constructions",
"Abstract Meaning Representation for Sembanking",
"Towards an Inventory of English Verb Argument Constructions",
"Construction Learning as a Function of Frequency, Frequency Distribution, and Function.",
"The Stanford Typed Dependencies Representation",
"The BNC Parsed with RASP4UIMA",
"The Proposition Bank: An Annotated Corpus of Semantic Roles",
"Background to Framenet",
"Constructions: a new theoretical approach to language",
"The contribution of argument structure constructions to sentence meaning",
"Pathbreaking verbs in syntactic development and the question of prototypical transitivity",
"Constructions: A Construction Grammar Approach to Argument Structure",
"Regularity and Idiomaticity in Grammatical Constructions: The Case of Let Alone",
"Lexical Entries for Verbs",
"Annotation Scheme for English Argument Structure Constructions Treebank",
"A Dependency Treebank of Spoken Second Language English",
"BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"Language Models are Unsupervised Multitask Learners",
"Measuring Syntactic Development in L2 Writing: Fine Grained Indices of Syntactic Complexity and Usage-Based Indices of Syntactic Sophistication",
"A Gold Standard Dependency Corpus for English",
"A Fast and Accurate Dependency Parser using Neural Networks",
"Verbnet: a broad-coverage, comprehensive verb lexicon",
"Constructing a language: A usage-based theory of language acquisition",
"Do foreign language learners also have constructions",
"Learning argument structure generalizations",
"Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition",
"Young children's earliest transitive and intransitive constructions"
] |
M3D: MultiModal MultiDocument Fine-Grained Inconsistency Detection
|
M3D: MultiModal MultiDocument Fine-Grained Inconsistency Detection Abstract Fact-checking claims is a highly laborious task that involves understanding how each factual assertion within the claim relates to a set of trusted source materials. Existing approaches make sample-level predictions but fail to iden- tify the specific aspects of the claim that are troublesome and the specific evidence relied upon. In this paper, we introduce a method and new benchmark for this challenging task. Our method predicts the fine-grained logical rela- tionship of each aspect of the claim from a set of multimodal documents, which include text, image(s), video(s), and audio(s). We also in- troduce a new benchmark ( M3DC) of claims requiring multimodal multidocument reason- ing, which we construct using a novel claim synthesis technique. Experiments show that our approach significantly outperforms state- of-the-art baselines on this challenging task on two benchmarks while providing finer-grained predictions, explanations, and evidence. 1 Introduction Misinformation poses serious societal risks by per- petuating narratives that incite fear, sow discord, and affect public health and safety . Despite signif- icant efforts towards developing automated fact- checking techniques , existing methods face several limitations. First, real-world claims may include assertions that require consulting mul- tiple documents and modalities to verify or refute the claim. Existing approaches either assume a sin- gle document setting or perform retrieval across documents to obtain relevant evidence, which is then treated as a single document , poten- tially losing important surrounding context. Sec- ondly, some methods only predict when claims con- flict with relevant knowledge but ignore ambiguous cases where no supporting or refuting information is available . Lastly, most of the existing methods fail to provide the fine-grained analysis needed for users to under- stand what is inconsistent in a claim or to make revisions to be more factual . Simply flagging an entire claim as false without pinpointing the specific inaccurate parts provides limited utility. In contrast, we propose an approach for predict- ing the logical relationship of each piece of a claim with respect to a set of multimodal sources. We perform a semantic dissection of claims into seman- tic pieces and leverage a hierarchical transformer that operates across multimedia documents to make fine-grained predictions. Our model ingests the claim along with associated multimedia, preserv- ing the context. It then fuses the cross-document representations into a graph initialized with the claim’s Abstract Meaning Representation (AMR) . Entailment relations are then predicted for each node (e.g., entities, actions) and tuple (e.g., relations) within the graph. Because no prior work has explored making fine- grained claim predictions from a set of multimodal documents, we also introduce a new dataset of claims that contains fine-grained labels for this task called M3DC (MultiModal Multi-Document Claims). We build our dataset on top of the NewsStories dataset, which in- cludes sets of news articles, images, and videos across multiple topics. We retrieve textual, visual, and audio data from each set to build a robust mul- timodal multidocument knowledge graph for each set of related documents. 
Next, we develop a claim synthesis method in order to generate claims that re- quire multisource knowledge to verify, which uses a fine-grained claim manipulator model to generate claims manipulated at the sub-claim level. Our major contributions are as follows: •We introduce the novel task of performing 1\nfine-grained entailment of a textual claim with a set of multimodal documents. •We introduce a new hierarchical transformer model designed for the task of fine-grained claim analysis over multiple sources. •We propose a novel data synthesis technique for generating fine-grained labeled claims re- quiring multimodal multisource knowledge to verify using a graph traversal and fine-grained claim manipulator model. •We contribute a large benchmark of fine- grained labeled claims created using our tech- nique. We also contribute a small number of claims densely annotated by experts. •We conduct qualitative and quantitative exper- iments to evaluate the performance of our pro- posed method on our new benchmark dataset, as well as an existing benchmark dataset. 2 Related Works 2.1 Fact-checking Datasets Fact-checking claims is an established task with many text-only benchmarks such as LIAR , SNLI , MNLI , and ANLI . Later datasets like FEVER and SciFact incor- porate evidence from multiple text-only sources. Recent work has explored fine-grained, text-only misinformation detection across multiple sources, such as , but ignores that crucial evidence can also come from other modalities. To address this, datasets like MM-Claims and Fakeddit have incorporated multimodal fact- checking data. However, these tend to assess au- thenticity at a coarse level, with claims derived from short sentences or paragraphs. In contrast, our proposed dataset features extremely fine-grained la- bels that capture the logical relationships between each aspect of a claim and a set of multimedia documents. This requires a much more nuanced entailment analysis and a deeper understanding of the evidence required to verify or refute specific claim components. Furthermore, our data synthe- sis process explicitly creates claims from multiple modalities across various documents with lengthy sentences. Our approach explicitly models these intricate multimodal relationships and aims to fa- cilitate the development of more interpretable and trustworthy fact-checking systems. 2.2 Fake News Detection Methods Many methods for detecting misinformation at the document level have been proposed . These approaches either directly rely on neural models or rely on struc- tural cues indicative of disinformation . Hu et al. (2021) compares docu- ments to an external knowledge graph (KG) for verification, but it does not make fine-grained pre- dictions or leverage multimodal data. Predicting claim entailment from evidence has been studied in NLP and computer vision , using visual instead of textual evidence. Recent multimodal misinformation detection ap- proaches make entailment predictions from mul- timodal documents. MOCHEG predicts entailment at the sample level. Fung et al. (2021) extracts a multimodal KG for fake news detection, focusing on internal inconsistencies in a single document. Wu et al. (2022) propose a GNN-based model for fine-grained predictions across text-only documents, using IE-generated KGs which may miss fine details. Unlike prior work, our proposed method performs fine-grained entailment of complex claims against a set of mul- timodal documents. Wu et al. 
(2022) can only output binary inconsistency results using limited IE-generated knowledge graphs. Thomas et al. (2022) considers shorter, caption-like claims veri- fiable from a single image. In contrast, our claims are much more complex, requiring reasoning over a multimodal, multi-document premise. Unlike Thomas et al. (2022), we further contribute a new benchmark dataset for this task. 3 Approach In this section, we first describe our methodology for constructing our dataset, M3DC. We then pro- vide details of our model architecture which oper- ates across sets of documents to make fine-grained claim predictions. 3.1 Multimodal MultiDocument Dataset In this section, we introduce our data synthesis ap- proach for constructing a dataset with claims con- taining fine-grained labels that require multimodal and multi-source knowledge to verify. Our dataset 2\nFigure 1: Constructing a KG from a multimedia news cluster. AMR trees from different documents and modalities are linked to form a cross-document, cross-media KG. Co-reference links are shown in red. builds upon NewsStories , a col- lection of news clusters with articles and videos. We begin by crawling the data and removing news that is no longer publicly accessible or has been taken down. For each news cluster, we construct a knowledge graph (KG) combining textual and non-textual data based on AMR trees generated from news documents. This cross-document, cross-media representation allows us to synthesize claims by linking information from the graph. We then introduce a claim manipulator model that generates claims with varying degrees of truthfulness by traversing the AMR-based KG and introducing controlled perturbations. To obtain fine-grained labels, we employ a model that assigns entailment labels (e.g., entailment, contradiction, neutral) to individual AMR nodes and tuples with its associated knowledge. Using this approach, we synthesize a dataset of about 400K claims across over 10,000 topics, requiring multimodal and multi- document knowledge for verification. The overall process is shown in Figure 2. 3.1.1 Knowledge Graph Construction For each news cluster, we extract knowledge into a set of AMR trees using Structured-BART with sentences coming from the news document, visual captions generated from our grounding module and audio summaries from Qwen-Audio. Then, we connect nodes from AMR trees using co-reference reso- lution from CDLM and F- coref in order to link within- document and cross-document entities or events. The overall process is illustrated in Figure 1. For visual data from images and videos, we uti- lize GLIP and CoFormer to perform co-reference resolution across modalities. Initially, CoFormer is used to extract event frames from each image and subsam- pled video frames, along with grounded bound- ing boxes. Event frames are defined by PropBank and contain event verbs, along with noun arguments and roles. Meanwhile, GLIP is used to ground textual data from news articles with visual content from images or video frames. After grounding textual and visual content using both models, we measure the Inter- section over Union (IoU) between groundings from both models to filter out discrepancies. Then we utilize GPT-Neo to generate captions for each image and video frame from the event frame extracted by CoFormer and textual data grounded from GLIP, using an in-context learning approach with a pre-defined template. 
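The IoU-based agreement check between the GLIP and CoFormer groundings described above can be sketched as follows; the box format, the dictionary keyed by mention, and the 0.5 threshold are illustrative assumptions rather than values reported here.

```python
# Sketch of filtering cross-modal groundings: keep a (mention, region) pair only
# when the CoFormer box and the GLIP box for the same mention overlap enough.
# Box format (x1, y1, x2, y2) and the 0.5 threshold are assumptions for illustration.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def filter_groundings(coformer_boxes, glip_boxes, threshold=0.5):
    """Keep mentions whose two groundings agree (IoU at or above the threshold)."""
    kept = {}
    for mention, box_c in coformer_boxes.items():
        box_g = glip_boxes.get(mention)
        if box_g is not None and iou(box_c, box_g) >= threshold:
            kept[mention] = box_c
    return kept
```

Only mentions that pass this agreement check would be kept when linking visual content into the knowledge graph.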
For audio data, we prompt Qwen-Audio to generate summaries that describe the audio content and background noise from the video. 3.1.2 Claim Generation To generate claims that require multimodal, multi- document evidence from the constructed KGs, we developed a Depth-First Search (DFS) based graph traversal method that selects Knowledge Elements (KEs) from multiple sources from the constructed KG. For a given KG and starting node (i.e. an AMR 3\npredicate node), the traversal algorithm traverses surrounding nodes until another predicate node is reached. We encourage the algorithm to follow co- reference edges to incorporate knowledge across documents and modalities. The traversal algorithm outputs KEs (AMR triples) rooted at a predicate, which is then used to generate a complete claim sentence containing the information from the tra- versed nodes and edges through AMRBART . Given that these generated claims are directly generated from the KG, all resulting claims are inherently entailed by this approach. This ap- proach ensures that the resulting claims rephrase evidence from different articles and modalities, re- quiring the model to reason across sources to per- form fine-grained verification. 3.1.3 Claim Manipulation Since the claims generated directly from the KGs are inherently entailed, we introduce a claim ma- nipulator model to generate diverse claims with varying degrees of truthfulness (entailed, neutral, or contradicted) relative to the evidence in the KG. The claim manipulator takes as input the claim, relevant evidence from the KG (which may be mul- timodal), and a desired logical label (entailed, neu- tral, or contradicted). The goal is to manipulate an entailed claim so that the claim’s logical relation matches the input. To train the manipulator, we employ reinforcement learning, where a model is optimized to maximize the scores provided by a reward model that offers evaluative feedback. Denoting the original claim as c, de- rived from the KG, and the modified claim asˆcproduced by the manipulator M, with yrepresenting the logical label from Y={”entailed ”,”neutral ”,”contradicted ”}, the goal of the claim manipulator is to generate a claim similar to the original claim cwith the target logical label ˆygiven premise (evidence) p. We leverage Llama-2-13B to manipulate claims to correspond with the desig- nated logical label ˆybased on the given premise p. The premise consists of the top 10 most relevant evidence (expressed in text, i.e., using sentences from news articles and captions for image and video) related to cfrom Sentence-BERT, the manipulator is fine-tuned using reinforcement learning to produce a claim ˆc based on c. In this process, candˆcare intended to be syntactically similar to each other. The claim manipulator can be formulated as ˆc=Mθ(p, c,ˆy) To steer the manipulator towards generating claims that align with the target logical label ˆy and similar to the original claim csyntactically, a reward model based on DeBERTAv3 is trained to function as a critic using MNLI , Fever-NLI , and ANLI . The reward model is trained for fine-grained entailment clas- sification using the multi-instance and structural constraints from FGVE , al- lowing the model to do fine-grained predictions without ground-truth fine-grained labels. Critically, we enforce our target label constraint at both the fine-grained and sample levels within the graph. 
This approach ensures that the claim manipulator not only focuses on producing claims in a coarse- grained manner but also pays attention to fine- grained details. Specifically, the reward model’s score is defined as the likelihood of the target label considering both the manipulated claim and the top 10 sentences most relevant to the original claim from the KG (serving as evidence): r(c,ˆc,ˆy) =P(ˆy|p,ˆc)− P|Y| yi̸=ˆyP(yi|p,ˆc) +ROUGE (c,ˆc) (1) where c,ˆc,ˆy, and prepresent the original claim, the modified claim, the desired logical label for the claim, and the premise, respectively. The termP(ˆy|p,ˆc)is obtained from the trained fine- grained entailment classifier. The goal of this re- ward function is to ensure that the modified claim ˆcnot only matches the intended truthfulness label ˆybut also retains as much similarity to the original claim cas possible as quantified by the ROUGE score. We fine-tuned the claim manipulator with Prox- imal Policy Optimization (PPO) as our policy gradient method for reinforce- ment learning. PPO adds an additional term to the reward function, which imposes a penalty deter- mined by the Kullback-Leibler (KL) divergence between the trained RL policy manipulator, πPPO ϕ, and the initial supervised manipulator πSFT: rtotal=r(ˆh, h,ˆy)−ηKL(πPPO ϕ(ˆyt|p,ˆh), πSFT(ˆyt|p,ˆh)), (2) where ηrepresents the KL reward coefficient, which determines the magnitude of the KL penalty; we set it to 0.2 for our model. This coefficient func- tions as an entropy boost, enhancing exploration throughout the policy domain and urging the model 4\npx Figure 2: Claim generation pipeline. We create a knowledge graph from a set of media about an event. Our traversal algorithm selects the part of the KG highlighted in yellow to generate a (true) claim. To do so, we use the selected elements to translate the selected knowledge into a sentence. We then feed relevant evidence and the generated claim into our claim manipulator model. In this example, we ask our claim manipulator to generate a contradicted claim. The claim manipulator performs fine-grained manipulations, inserting both unverified (i.e. 74 individuals) and contradictory (i.e. 5 people injured) assertions. Because we know how the claim was manipulated at the knowledge-element level, we can use this as supervision to train our verification model. to engage in a diverse set of actions rather than the one currently considered the best. In addition, it inhibits the policy from rapidly committing to a singular strategy, and this encourages outputs from the RL fine-tuned model to not deviate too far from the original model. After constructing the dataset with the claim manipulator, we employ Mixtral- 8x7B using in-context learning to predict the logical label of the claims generated by the claim manipulator as a quality check; we discard those that do not align with the target labels. Finally, as a final quality check on our generated dataset, we assess the checkworthiness of claims using ClaimBuster to filter opinions or unimportant claims from our dataset. More details are covered in Appendix A.1. 3.2 Model Architecture In this section, we present our model for predicting fine-grained entailment relations for claims given a set of trusted multimodal source materials. Figure 3 shows our model’s architecture. 3.2.1 Multimodal Encoder By design, our claims require reasoning across modalities and documents to make fine-grained predictions. 
We thus integrate all modalities into our model, preserving the original context in which the claim appeared. For textual content, we employ LongT5 to encode the claims and sentences from documents and captions. For han- dling non-textual context (i.e. images, video, and audio), we utilize ImageBind , a set of cross-modal embedding models for em- bedding text, audio (represented as spectrograms), visual content, and other modalities in a common space. In addition to explicitly capturing how the information relates across documents and modal- ities, our model also ingests an embedding of the KG corresponding to each cluster. To learn our KG embedding, we instantiate our KG using a Graph Convolutional Network (GCN) and train it via a masked sequence prediction task. We randomly obscure nodes and edges within the KG and train a classifier to predict the masked pieces. After train- ing, we extract KG embeddings for each cluster and feed them to our model. To bridge the various representation spaces, we add an additional linear layer for each modality’s encoder. The embeddings from different modalities, in- cluding textual content (encoded by LongT5), non- textual context (encoded by ImageBind), and the knowledge graph (encoded by the GNN), are con- catenated to form a comprehensive multimodal representation of the claim and its associated evi- dence. This concatenated embedding is then fed into LongT5 for pretraining us- ing the Gap Sentence Generation (GSG) objective from Pegasus . GSG is a self- supervised learning task that aims to generate miss- ing sentences within a given context. We identify the top 3 sentences inside the news documents that 5\npx Figure 3: The model architecture. Each cluster, potentially containing multiple news articles, will have its content from various multimedia sources independently encoded and then merged to form a unified representation. This joint representation will serve as the initial state for every node within the GNN. Subsequently, labels at both the sample level and the fine-grained level can be derived by aggregating features from the nodes and edges of the GNN. are most relevant to the claim cusing ROUGE-F1, randomly choose one sentence and its adjacent sen- tence, and then mask them both. LongT5 is trained to generate the masked sentences based on the sur- rounding context and the multimodal embeddings. 3.2.2 Graph Convolutional Network Our task requires predicting fine-grained entail- ment relationships between a claim and a set of multimedia source materials. To ensure each fine- grained element within the claim’s AMR captures the context of the AMR structure in which it ap- pears, we employ a two-layer GCN to learn contextual features of each node and tuple within the claim’s AMR graph. Our GCN model is initialized with features from our multimodal encoder and features from the claims’s AMR. Specifically, we encode the AMR represen- tation of claims and embeddings from multimedia news content via the GCN as follows: for each node iwithin the AMR graph, we define the fea- ture aggregation mechanism by the equation: h(l+1) i=X j∈N(i)∪{i}1 cijh(l) j (3) where h(l+1) i is the feature vector of node iat the subsequent layer l+ 1. The set N(i)includes the neighbors of node i, andcijis a normalization factor for the edge that connects nodes iandj. For edge features, we extend our model to incor- porate edge features alongside node features. 
This is achieved by incorporating edge attributes into the aggregation function, allowing the model to con- sider the characteristics of the connections between nodes. For an edge eijconnecting nodes iandj, the edge features can be integrated as follows: e(l+1) ij=h W(l) eh(l) i||W(l) eh(l) ji (4) where e(l+1) ij represents the feature vector of edge eijat layer l+ 1, with W(l) eandb(l) ebeing the weight matrix and bias vector specific to edge fea- tures at layer l. This approach ensures that the model captures not only the node-level but also the edge-level semantic and structural information inherent in AMR graphs. For graph-level (sample-level) classification, we aggregate the features of the entire graph with av- erage pooling. Finally, multiple MLP classifiers are applied to make predictions for nodes, edges, and the graph on the sample-level and fine-grained tasks. We train our model using cross-entropy loss with labels from the trained fine-grained entailment classifier in section 3.1.3. 4 Experiments 4.1 Multimodal MultiDocument Dataset We compare our new dataset with others in Table 1. Our dataset contains fine-grained labels across 180,000 entailed claims, 121,224 neutral claims, and 113,181 contradicted claims. While existing datasets are topic-specific, our claims are highly de- tailed and topically diverse. In our supplementary, we include examples of claims from our dataset compared to other datasets. 6\nDatasets #Samples Data source Topic(s) Multi-modality Multi-document Claim verification Fine-grained Labels Zlatkova et al. (2019) 1,233 Snopes, Reuters Multi ✔ ✗ ✔ ✗ Cheema et al. (2022b) 3,400 X COVID-19, Climate change, Technology ✔ ✔ ✔ ✗ Nielsen and McConville (2022) 12,914 X Multi ✔ ✔ ✔ ✗ Yao et al. (2023b) 15,601 Politifact, Snopes Multi ✔ ✗ ✔ ✗ Nakov et al. (2021) 18,014 X COVID-19, Politics ✗ ✔ ✗ ✗ Ours 414,405 Multi-News Multi ✔ ✔ ✔ ✔ Table 1: Comparison between different datasets in terms of multi-modality, multi documents, claim verification, and fine-grained labels. Ours is the largest one that supports fine-grained labels with multimodal document claim verification. No dataset provides fine-grained labels. Model Synthetic Labels Human Labels Sample-level Fine-grained Sample-level Fine-grained E N C All E N C All E N C All E N C All FGVE 0.27 0.2 0.28 0.25 0.23 0.1 0.09 0.14 0.32 0.14 0.36 0.27 0.30 0.05 0.04 0.13 MOCHEG 0.32 0.14 0.36 0.27 0.28 0.13 0.32 0.24 0.37 0.18 0.41 0.32 0.35 0.14 0.39 0.29 LLaV A-1.5 0.57 0.0 0.33 0.30 0.73 0.0 0.14 0.29 0.67 0.0 0.43 0.37 0.88 0.0 0.13 0.33 MiniGPT-v2 0.50 0.0 0.43 0.31 0.56 0.0 0.24 0.27 0.62 0.0 0.62 0.41 0.54 0.0 0.09 0.21 Ours 0.72 0.26 0.48 0.49 0.65 0.23 0.41 0.43 0.72 0.21 0.59 0.51 0.68 0.1 0.39 0.39 Table 2: Results on our M3DC benchmark. We report class-wise F1 scores (E: entailed, N: neutral, C: contradicted) and the overall F1 score (All). 4.2 Testing Datasets and Baselines We evaluate entailment performance on our M3DC benchmark and MOCHEG . We report F1 scores for each of the three classes and the macro-averaged overall F1. We evaluate both sample and fine-grained level pre- diction tasks. For M3DC, we compare using two types of labels: human annotated labels and syn- thetic labels generated from the fine-grained entail- ment classifier (Section 3.1.3). For human labels in M3DC and fine-grained labels in MOCHEG, we randomly selected 30 samples for human annota- tion using six expert annotators. 
Performing human annotation on samples is an extremely laborious and time-consuming task, which requires annota- tors to read multiple documents, look at multiple images, and watch video(s), which are often quite long. Annotators then must densely annotate every KE within the claim. We include additional details on our human study in our supplementary. Our baselines consist of a set of large vision-language models (LVLM) that have demonstrated strong performance on various multimodal tasks. Task- specific baselines include FGVE and MOCHEG , while LVLM baselines include LLaV A-1.5 and MiniGPT-v2 . 4.3 Quantitative Evaluation Table 2 shows our model outperforming baselines on the M3DC dataset, with similar results on syn- thetic and human-labeled data. This is critical, as it shows that the performance of our models on our human-annotated data tracks closely with the per- Model Sample-level Fine-grained E N C All E N C All FGVE 0.37 0.16 0.37 0.3 0.31 0.1 0.2 0.20 MOCHEG† 0.57 0.23 0.40 0.39 0.52 0.21 0.36 0.37 LLaV A-1.5 0.67 0.0 0.93 0.53 0.44 0.0 0.25 0.23 MiniGPT-v2 0.67 0.0 0.93 0.53 0.71 0.0 0.25 0.32 Ours 0.69 0.25 0.48 0.47 0.63 0.18 0.36 0.39 Table 3: Results on MOCHEG dataset . All labels are human labels in this benchmark. We report class-wise F1 scores (E: entailed, N: neutral, C: contradicted) and the overall F1 score (All). †: Note that MOCHEG is also trained on this dataset, while our method is applied zero-shot . Model Sample-level Fine-grained E N C All E N C All Ours w/ Text 0.69 0.25 0.43 0.46 0.61 0.15 0.34 0.37 Ours w/ Text + Image 0.71 0.26 0.42 0.46 0.63 0.18 0.36 0.39 Ours w/ Text + Image + Video 0.72 0.26 0.48 0.49 0.65 0.23 0.41 0.43 Ours w/ Text + Image + Video + Audio 0.70 0.24 0.47 0.47 0.63 0.21 0.41 0.42 Ours All w/o Text 0.42 0.02 0.29 0.24 0.37 0.01 0.23 0.20 Table 4: Ablation on M3DC showing the impact of removing different modalities on our method. formance obtained on our large synthetic dataset, suggesting our synthetic dataset is a good evalua- tion benchmark for this task. On the MOCHEG dataset (Table 3), our model outperforms in fine- grained predictions, despite being trained on a di- verse news dataset, M3DC, rather than MOCHEG. While LLaV A and MiniGPT-v2 outperform in over- all F1, they fail to correctly identify neutral claims, which our model handles better. The MOCHEG dataset’s lack of video and audio and different styles of text (Snopes vs News) contributes to its lower performance at the sample level. 4.4 Ablations To demonstrate our model’s capability in handling multimodal inputs, we conducted ablation studies 7\npx Figure 4: Qualitative results comparing our method’s fine-grained predictions with those obtained from other baselines. We include additional results in our supplementary materials. with varying combinations of modalities, as out- lined in Table 4. Considering that a substantial por- tion of the information in KGs is derived from the textual content of news articles, it was anticipated that the text modality would play a pivotal role in the model’s inference process. Our results, how- ever, indicate that including additional modalities, such as visual and audio, did not significantly en- hance the model’s performance. This observation suggests that the dominance of text-based claims in our dataset may lead the model to prioritize textual features, which are typically sufficient for classify- ing claims derived from textual information. 
4.5 Qualitative Results
We show qualitative results comparing our method with competitive baselines in Figure 4. Predictions on nodes and tuples are illustrated by color (green = entailed, yellow = neutral, red = contradicted): node colors indicate node predictions, while edge colors represent tuple predictions. We perform fine-grained claim verification for the claim "The crash and fire in Chuhuiv were caused by engine failure and not by pilot errors." In actuality, the crash was partially caused by pilot errors, so this portion of the claim is shown in red (as being contradicted by certain media sources). We observe that our method identifies the correct portion of the claim as contradicted by the evidence, while the baselines tend to make more random predictions throughout the graph. In general, our GNN-based approach ensures greater semantic consistency across predictions than approaches that label each part of the claim separately.
5 Conclusion
In this paper, we address the challenge of predicting the logical consistency of claims with multimodal sources. Our method analyzes claims within a multimodal, multidocument context that includes text, visual content, and audio, and reasons in a fine-grained manner over complex information across media and modalities. We further introduce a dataset, M3DC, created through a unique synthesis technique that produces claims requiring cross-document, cross-media reasoning for verification. This benchmark will enable the evaluation of multimodal fact-checking models and spur further research in this space. Our contributions aim to mitigate the impact of misinformation and enhance the reliability of automated fact-checking systems, thus supporting informed decision-making and fostering a factually accurate public dialogue.
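As a concrete illustration of the aggregation described in Section 3.2.2 (Eqs. 3–4), the following minimal sketch implements one message-passing layer over the claim's AMR graph; PyTorch, the tensor layout, and the degree-based normalization standing in for c_ij are assumptions of this sketch, not the released implementation.

```python
import torch
import torch.nn as nn

class AMRGCNLayer(nn.Module):
    """One message-passing layer over the claim's AMR graph (a sketch)."""
    def __init__(self, dim: int):
        super().__init__()
        self.w_edge = nn.Linear(dim, dim)  # W_e in Eq. (4); the bias plays the role of b_e

    def forward(self, h: torch.Tensor, edge_index: torch.Tensor):
        # h: [num_nodes, dim]; edge_index: [2, num_edges] as (source, target) pairs
        src, dst = edge_index
        # Eq. (3): combine node i's own feature with its neighbors',
        # here normalized by a simple degree count standing in for c_ij.
        agg = h.clone()                        # self term (j = i)
        agg.index_add_(0, dst, h[src])         # neighbor sum
        deg = torch.bincount(dst, minlength=h.size(0)).float() + 1.0
        h_next = agg / deg.unsqueeze(-1)
        # Eq. (4): edge feature = concatenation of the projected endpoint features.
        e_next = torch.cat([self.w_edge(h[src]), self.w_edge(h[dst])], dim=-1)
        return h_next, e_next

# Two such layers are stacked; node features are average-pooled for the
# sample-level label, and per-node / per-edge MLP heads give the fine-grained labels.
```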
|
[
"LEMMA: Towards LVLM-Enhanced Multimodal Misinformation Detection with External Knowledge Augmentation",
"Video-LLaVA: Learning United Visual Representation by Alignment Before Projection",
"Qwen-Audio: Advancing Universal Audio Understanding via Unified Large-Scale Audio-Language Models",
"Llama 2: Open Foundation and Fine-Tuned Chat Models",
"ImageBind One Embedding Space to Bind Them All",
"GPT-4 Technical Report",
"EVA: Exploring the Limits of Masked Visual Representation Learning at Scale",
"F-coref: Fast, Accurate and Easy to Use Coreference Resolution",
"End-to-End Multimodal Fact-Checking and Explanation Generation: A Challenging Dataset and Models",
"Graph Pre-training for AMR Parsing and Generation",
"MuMiN: A Large-Scale Multilingual Multimodal Fact-Checked Misinformation Social Network Dataset",
"Open-Domain, Content-based, Multi-modal Fact-checking of Out-of-Context Images via Online Resources",
"DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing",
"Structure-aware Fine-tuning of Sequence-to-sequence Transformers for Transition-based AMR Parsing",
"Overview of the CLEF-2021 CheckThat! Lab on Detecting Check-Worthy Claims, Previously Fact-Checked Claims, and Fake News",
"Fake news detection: A hybrid CNN-RNN based deep learning approach",
"Learning Transferable Visual Models From Natural Language Supervision",
"Detecting Cross-Modal Inconsistency to Defend against Neural Fake News",
"A Joint Neural Model for Information Extraction with Global Features",
"Online misinformation about climate change",
"Detecting fake news stories via multimodal analysis",
"Fakeddit: A New Multimodal Benchmark Dataset for Fine-grained Fake News Detection",
"A Benchmark Dataset of Check-worthy Factual Claims",
"Vaccine Safety: Myths and Misinformation",
"PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization",
"Adversarial NLI: A New Benchmark for Natural Language Understanding",
"Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
"Fact-Checking Meets Fauxtography: Verifying Claims About Images",
"Learning Hierarchical Discourse-level Structure for Fake News Detection",
"FEVER: a Large-scale Dataset for Fact Extraction and VERification",
"Proximal Policy Optimization Algorithms",
"A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference",
"Semi-Supervised Classification with Graph Convolutional Networks",
"Abstract Meaning Representation for Sembanking",
"Cross-document Misinformation Detection based on Event Graph Reasoning",
"InfoSurgeon: Cross-Media Fine-grained Information Consistency Checking for Fake News Detection"
] |
Exploring the Best Practices of Query Expansion with Large Language Models
|
Exploring the Best Practices of Query Expansion with Large Language Models Abstract Large Language Models (LLMs) are founda- tional in language technologies, particularly in information retrieval (IR). Previous stud- ies have utilized LLMs for query expansion, achieving notable improvements in IR. In this paper, we thoroughly explore the best prac- tice of leveraging LLMs for query expan- sion. To this end, we introduce a training-free, straightforward yet effective framework called Multi-Text Generation Integration ( MUGI). It leverages LLMs to generate multiple pseudo- references, integrating them with queries to en- hance both sparse and dense retrievers. Our empirical findings reveal that: (1) Increasing the number of samples from LLMs benefits IR systems; (2) A balance between the query and pseudo-documents, and an effective integration strategy, is critical for high performance; (3) Contextual information from LLMs is essential, even boost a 23M model to outperform a 7B baseline model; (4) Pseudo relevance feedback can further calibrate queries for improved per- formance; and (5) Query expansion is widely applicable and versatile, consistently enhancing models ranging from 23M to 7B parameters. 1 Introduction Information retrieval (IR) is crucial for extracting relevant documents from large databases, serving as a key component in search engines, dialogue sys- tems , question- answering platforms , recommendation sys- tems , and Retrieval Augmented Generation (RAG) . Query expansion, a key technique for enhancing information retrieval (IR) efficacy , traditionally employs Pseudo-Relevance Feedback (PRF) from initial retrieval results. However, its effectiveness is constrained by the quality of these results. Recently, Large Language Models (LLMs), such as ChatGPT, have demonstrated ex- ceptional capabilities in language understanding, knowledge storage, and reasoning . Motivated by these advancements, some studies have explored leverag- ing LLMs for zero-shot query expansion . While these methods have shown empirical effectiveness, they also present certain limitations. LameR generates potential answers by utilizing LLMs to rewrite BM25 can- didates for expansion. However, its performance is highly dependent on the quality of the initial retrieval. Both HyDE and query2doc leverage the knowl- edge stored in LLMs. While HyDE demonstrates effective performance with contriver, it performs poorly with lexical-based retrievers . Conversely, query2doc is effective with both sparse and dense retrieval methods, but strong rankers may not benefit as much as weaker ones . Moreover, the in- tegration and balance between pseudo references and queries are under-explored in these studies. To address these limitations, we explore best practices for utilizing query expansion with LLMs for information retrieval. In this paper, we delve into several specific research questions: RQ1 : Are multiple pseudo-references more beneficial than a single one? RQ2 : Is there a universal query ex- pansion method that effectively serves both lexical- based and neural-based retrievers, applicable to both weak and strong models without prior con- straints? RQ3 : How can the query and pseudo- references be balanced for lexical-based retrievers? RQ4 : What is the most effective method for inte- grating multiple pseudo-references with a query in dense retrievers? 1\nWe introduce a framework named Multi-Text Generation Integration ( MUGI) to address these key questions. 
MUGIemploys a zero-shot ap- proach to generate multiple pseudo-references from LLMs, integrating them with queries to enhance IR efficiency. Our empirical experiments demon- strate that: (1) Increasing the number of samples from LLMs benefits IR systems. (2) MUGIdemon- strates versatility and effectiveness across both lex- ical and dense retrievers and models of various sizes. Remarkably, it enables a 23M-parameter dense retriever to outperform a larger 7B baseline. (3)MUGIproposes an adaptive reweighting strat- egy that considers the lengths of both the pseudo- references and the query, critically improving the performance of lexical retrievers. (4) MUGIinves- tigates different integration strategies and proposes contextualized pooling, which has been overlooked in previous methods. Additionally, drawing inspi- ration from the Rocchio algorithm (Schütze et al., 2008), MUGIimplements a calibration module that leverages pseudo relevance feedback to further en- hance IR performance. Notably, using ChatGPT4, MUGIsignificantly enhances BM25 performance, with an 18% improvement on the TREC DL dataset and 7.5% on BEIR, and boosts dense retrievers by over 7% on TREC DL and 4% on BEIR. 2 Related Work Information Retrieval focuses on the efficient and effective retrieval of information in response to user queries. Best Matching 25 (BM25) advances beyond earlier proba- bilistic models by incorporating document length normalization and non-linear term frequency scal- ing, thereby enhancing the alignment of queries with documents. Dense retrievers such as DPR employ deep neu- ral networks to identify semantic relationships be- tween queries and documents by measuring the cosine similarity of their text embeddings. Existing efficient IR systems typically use a re- trieval & rerank pipeline : Initially, a retrieval mecha- nism, such as BM25 or a bi-encoder, identifies a broad set of potentially relevant documents. Subse- quently, a stronger ranker, usually a cross-encoder, meticulously scores the relevance of these docu- ments, enhancing the precision of the final results. LLMs for IR The use of LLMs in IR falls into two primary categories : fine- tuning LLMs as retrieval models and employing them for zero-shot IR. This paper concentrates on zero-shot IR, where typical approaches involve leveraging the reasoning capabilities of LLMs for direct document ranking or relevance assessment . While effective, these methods are limited by LLMs’ input length constraints, making them better suited for the rerank phase. Another line of research focuses on using LLMs to synthesize additional high-quality train- ing datasets to improve existing models . Other works, such as HyDE , query2doc , and LameR , explore query expansion. They leverage LLMs to create pseudo-references or potential answers, enhancing queries for better retrieval outcomes. MuGI is a query expansion framework that lever- ages LLMs to enhance queries. Unlike previous works, which are limited by inherent constraints, MuGI offers broader applicability and versatility as it seamlessly integrates with both lexical and dense retrievers. By utilizing and intergrating a wealth of contextualized information from multiple references, MuGI surpasses existing techniques in both in-domain and out-of-distribution evaluations by more effectively capturing essential keywords and enriching the background context. 3 Method We begin by discussing IR preliminaries and intro- ducing our MuGI framework, which is designed to address the questions outlined earlier. 
3.1 Preliminaries
Non-parametric Lexical-based Methods. BM25 is a fundamental non-parametric lexical method that calculates document relevance using term frequency (TF) and inverse document frequency (IDF) as:
$\text{BM25}(q, D) = \sum_{i=1}^{n} \text{IDF}(q_i)\,\frac{\text{TF}(q_i, D)\,(k_1 + 1)}{\text{TF}(q_i, D) + k_1\left(1 - b + b\,\frac{|D|}{\text{avgdl}}\right)}$ (1)
where $q_i$ are query terms, $\text{TF}(q_i, D)$ is term frequency, $\text{IDF}(q_i)$ is inverse document frequency, $|D|$ is document length, avgdl is the average document length, and $k_1$ and $b$ are tunable parameters.
Neural Dense Retrieval Methods. Dense retrieval leverages deep learning to identify semantic similarities between queries and documents by encoding them into high-dimensional embeddings, with relevance typically measured by cosine similarity:
$\text{Sim}(q, D) = \frac{f(q)^{\top} f(D)}{\lVert f(q) \rVert\,\lVert f(D) \rVert}$ (2)
where $f(\cdot)$ maps text to the embedding space $\mathbb{R}^d$. BM25 is fast and generalizes well, making it suited to sparse retrieval, while dense retrieval excels at capturing semantic connections but is slower and generalizes less well because of its dependence on neural networks.
3.2 Multi-Text Generation Integration
Recognizing that both lexical-based and dense retrieval methods depend on a certain degree of information overlap between the query and the document, we introduce the Multi-Text Generation Integration (MUGI) method. This approach aims to augment the query's information content by leveraging multiple samplings from LLMs. MUGI enriches queries with additional background information and broadens the keyword vocabulary to encompass out-of-domain terms, thereby bridging the semantic gap between queries and documents for both lexical-based and dense retrievers. Figure 2 provides an illustrative overview of MUGI.
Upon receiving a query q, MUGI first applies a zero-shot prompt (see Figure 1) to generate a set of pseudo-references, denoted as $R = \{r_1, r_2, r_3, \dots, r_n\}$, which are then integrated with the query for subsequent IR operations. We explore different integration methods for BM25 and dense retrievers.
[Figure 1: Zero-shot prompting for relevant passage generation. Prompt: "You are PassageGenGPT, an AI capable of generating concise, informative, and clear pseudo passages on specific topics. Generate one passage that is relevant to the following query: '{query}'. The passage should be concise, informative, and clear." It emphasizes generating contextually relevant content to enhance background knowledge density for multiple-text integration.]
3.2.1 MUGI for BM25
This component evaluates relevance by analyzing lexical overlap between the query and references. Given that documents are much longer than queries and that BM25 is sensitive to word frequency, a careful balance is needed to ensure the appropriate influence of each element of the text. The variation in the lengths of queries and passages makes the constant query repetition used in previous studies, which typically handle a single pseudo-reference, ineffective, particularly when dealing with multiple references.
To address this issue, we implement an adaptive reweighting strategy that adjusts according to the length of the pseudo-references. This adjustment is governed by a factor $\beta$, as illustrated by the following equation:
$\lambda = \frac{\operatorname{len}(r_1) + \operatorname{len}(r_2) + \dots + \operatorname{len}(r_n)}{\operatorname{len}(q) \cdot \beta}$ (3)
Since BM25 does not account for word order, we enhance the query by repeating the query $\lambda$ times and concatenating it with all pseudo-references:
$q_{\text{sparse}} = \operatorname{concat}(q * \lambda,\; r_1, r_2, r_3, \dots, r_n)$ (4)
This enhanced query is then processed by BM25 to produce the ranking results $I_{\text{bm25}}$.
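The sparse-side expansion of Eqs. (3)–(4) can be sketched as follows; the whitespace word count used for len(·), the default value of β, and the example inputs are illustrative assumptions rather than the exact MuGI configuration.

```python
# Sketch of MuGI's adaptive query reweighting for BM25 (Eqs. 3-4).
# len() is taken as a whitespace word count and beta is a tunable constant;
# both choices here are assumptions for illustration.
def build_sparse_query(query: str, pseudo_refs: list, beta: float = 3.0) -> str:
    def length(text: str) -> int:
        return len(text.split())

    total_ref_len = sum(length(r) for r in pseudo_refs)
    # Eq. (3): repeat the query lambda times so it is not drowned out by the
    # (much longer) pseudo-references.
    lam = max(1, round(total_ref_len / (length(query) * beta)))
    # Eq. (4): BM25 ignores word order, so simple concatenation suffices.
    return " ".join([query] * lam + pseudo_refs)

# The returned string is fed to any BM25 searcher in place of the raw query.
expanded = build_sparse_query(
    "what causes inflation",
    ["Inflation is a sustained rise in the general price level ...",
     "Central banks raise interest rates to curb inflation ..."],
)
```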
3.2.2 MUGI for Dense Retriever
MUGI also enhances dense retrievers, specifically bi-encoders. In this section, we discuss how to integrate pseudo-references with queries and how to refine the query using pseudo positive/negative reference feedback.
Integration. We present two approaches to integrate queries with pseudo-references to obtain a contextualized query embedding.
I. Concatenation has been commonly used in prior studies, where the query is simply concatenated with all references, as in BM25:
$q_{\text{cat}} = \operatorname{concat}(q, r_1, r_2, \dots, r_n)$ (5)
This enhanced query is then processed by the dense retriever $f$ to produce embeddings, i.e., $e_{\text{cat}} = f(q_{\text{cat}})$. However, as the number and length of references increase, the typical input-length limitation of 512 tokens can hinder the integration process. Consequently, only one to two passages can be incorporated into $q_{\text{cat}}$.
II. Feature Pooling addresses the model's input-length limitations, particularly when multiple references are involved. A straightforward method is to average the embeddings of the individually encoded references.
[Figure 2 (MuGI pipeline): Stage 1 — BM25 retrieval of the top-100 documents; Stage 2 — rerank.]
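The two integration strategies above can be sketched as follows; the generic encode function, the crude character-budget truncation, and the equal-weight mean pooling are illustrative assumptions rather than the exact MuGI setup.

```python
# Sketch of the two ways to fuse a query with LLM-generated pseudo-references
# for a bi-encoder. `encode` stands for any sentence encoder that returns one
# vector per text (e.g., a Sentence-BERT-style model); it is an assumption here.
import numpy as np

def integrate_by_concatenation(encode, query, refs, max_tokens=512):
    # Eq. (5): one long input; in practice the encoder's input window limits
    # how many references actually fit.
    text = " ".join([query] + refs)
    return encode(text[: max_tokens * 4])  # crude character budget as a stand-in

def integrate_by_pooling(encode, query, refs):
    # Feature pooling: encode each text separately, then average, which
    # sidesteps the input-length limit when many references are used.
    vecs = np.stack([encode(t) for t in [query] + refs])
    pooled = vecs.mean(axis=0)
    return pooled / np.linalg.norm(pooled)  # cosine-ready unit vector
```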
|
[
"Can Query Expansion Improve Generalization of Strong Cross-Encoder Rankers?",
"Fine-Tuning LLaMA for Multi-Stage Text Retrieval",
"When do Generative Query and Document Expansions Fail? A Comprehensive Study Across Methods, Retrievers, and Datasets",
"Large Language Models for Information Retrieval: A Survey",
"Towards General Text Embeddings with Multi-stage Contrastive Learning",
"Recommender Systems in the Era of Large Language Models (LLMs)",
"Judging LLM-as-a-judge with MT-Bench and Chatbot Arena",
"Recommendation as Instruction Following: A Large Language Model Empowered Recommendation Approach",
"Large Language Models are Strong Zero-Shot Retriever",
"Is ChatGPT Good at Search? Investigating Large Language Models as Re-Ranking Agent",
"LLaMA: Open and Efficient Foundation Language Models",
"InPars-v2: Large Language Models as Efficient Dataset Generators for Information Retrieval",
"Precise Zero-Shot Dense Retrieval without Relevance Labels",
"Enhancing Multi-modal Multi-hop Question Answering via Structured Knowledge and Unified Retrieval-Generation",
"FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness",
"Improving Passage Retrieval with Zero-Shot Question Generation",
"Unsupervised Dense Information Retrieval with Contrastive Learning",
"Pseudo Relevance Feedback with Deep Language Models and Dense Retrievers: Successes and Pitfalls",
"Learning Implicit User Profile for Personalized Retrieval-Based Chatbot",
"Pyserini: A Python Toolkit for Reproducible Information Retrieval Research with Sparse and Dense Representations",
"RetGen: A Joint Framework for Retrieval and Grounded Text Generation Modeling",
"BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
"Semantic Models for the First-Stage Retrieval: A Comprehensive Review",
"Overview of the TREC 2020 Deep Learning Track",
"RocketQA: An Optimized Training Approach to Dense Passage Retrieval for Open-Domain Question Answering",
"Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering",
"Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval",
"Language Models are Few-Shot Learners",
"Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks",
"Dense Passage Retrieval for Open-Domain Question Answering",
"Overview of the TREC 2019 deep learning track",
"Document Ranking with a Pretrained Sequence-to-Sequence Model",
"Multi-hop Selector Network for Multi-turn Response Selection in Retrieval-based Chatbots",
"Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
"Passage Re-ranking with BERT",
"Learning deep structured semantic models for web search using clickthrough data",
"Relevance-Based Language Models",
"Relevance weighting of search terms",
"C-Pack: Packaged Resources To Advance General Chinese Embedding",
"UMass at TREC 2004: Novelty and HARD",
"Okapi at TREC-3",
"The SMART Retrieval System—Experiments in Automatic Document Processing"
] |
Explicit Memory Learning with Expectation Maximization
|
Explicit Memory Learning with Expectation Maximization Abstract Large Language Models (LLMs) have revo- lutionized the landscape of natural language processing, demonstrating remarkable abilities across various complex tasks. However, their stateless nature limits the capability to retain information across interactions, hindering per- formance in scenarios requiring historical con- text recall. To mitigate this, current approaches primarily use explicit memory to allow LLMs to store useful information, which is accessible, readable, and interpretable. Nevertheless, ex- plicit memory lacks the reliable learning mech- anisms of implicit memory, which can be op- timized end-to-end. To harness the benefits of both, we introduce EM2, a novel frame- work enhancing explicit memory updates via the Expectation-Maximization (EM) algorithm. EM2treats memory as a latent variable, ensur- ing continual learning and improvement dur- ing updates. Experimental results on stream- ing inference tasks demonstrate that EM2out- performs existing methods without memory or with static external memory. Our in-depth analysis highlights that EM2significantly en- hances performance across various backbones and memory strategies, providing a robust solu- tion for advancing LLM memory management and enabling explicit memory to learn and im- prove similarly to implicit memory. 1 Introduction The advent of Large Language Models (LLMs) has shifted the landscape of machine learning, unveil- ing unprecedented capabilities for handling com- plex tasks across diverse domains (Ouyang et al., 2022; Achiam et al., 2023; Anthropic, 2024; Reid et al., 2024; Touvron et al., 2023; Zhao et al., 2023; Naveed et al., 2023, inter alia ). Despite these ad- vancements, a fundamental limitation of LLMs is their statelessness : they do not retain informa- tion across invocations . This restricts their ability to process and utilize previous inter- Explicit Memory Implicit Memory 1.A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2.A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 3.A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. ParametersFigure 1: Comparison between Explicit and Implicit Memory. Explicit memory is represented through text, storing information directly accessible and readable. Im- plicit memory is stored in the form of parameters, which underlie the model’s learned behaviors and are not di- rectly interpretable. Deep blue indicates the memory currently being activated. actions in a manner akin to human cognitive pro- cesses , thereby limiting their util- ity in scenarios that require retention and recall of historical context . Recent studies have attempted to address this challenge by incorporating external memory mech- anisms , which can be categorized into explicit and implicit forms . As illustrated in Fig- ure 1, explicit memory stores information in a tex- tual format that is directly accessible and read- able, such as rules, knowledge, and skills . Implicit memory, on the other hand, is parametric, facil- itating learning and updates . While parametric storage allows 1\nfor end-to-end learning, it often faces issues with training stability , specifica- tion , and interpretabil- ity . With the increasing ability of LLMs to directly understand text , explicit memory is be- coming the dominant method for memory storage in LLMs . 
Updating is a critical feature of memory . Current methods of updating explicit memory include manual revi- sions and self-reflection . Ge et al. (2023) con- ceptualize LLMs as operating systems and have developed memory update mechanisms inspired by OS design. Wang et al. (2024a) employ LLMs to summarize past experiences for enhanced external memory autonomously. It is worth noticing that, LLMs may miss or make mistakes when internalizing knowledge , and there is no guarantee that newly constructed memory is supe- rior to its predecessors. In contrast, implicit mem- ory, which is updated through gradients , ensures learning during the memory update. Current meth- ods for updating explicit memory do not guaran- tee learning and enhancement during the memory update process, marking a fundamental drawback. The primary reason is the non-differentiability of textual memory, which means that memory updates lack a clear direction. To address this, we propose EM2, which treats memory as a latent variable and update it using the Expectation-Maximization (EM) algorithm . EM2extracts relevant past experi- ences to guide current predictions and ensures that the memory is continuously optimized, enabling the model to learn and improve effectively over time. Experimental results on streaming inference tasks show that compared to models without exter- nal/fixed memory, our dynamic memory updating approach significantly enhances performance. Our main contributions are as follows: •We identify that current methods of updating explicit memory lack direction and do not en- sure that updated memory is superior to previ- ous versions. •We introduce EM2, which updates explicit memory using the EM algorithm to ensure continuous learning and enhancement during the memory update process. •Experimental results demonstrate that EM ² significantly improves model performance. 2 Related Work 2.1 Memory Mechanism of LLMs Memory is fundamental to the development of intelligence . Memory mecha- nisms in LLMs primarily involve retrieval , updating , and utiliza- tion processes. Retrieval aims to fetch relevant and accurate memories from a vast store, directly influencing the outcome’s qual- ity . Updates include incremental, inductive, and com- pressive approaches. Incremental updates simply add newly acquired memories without processing them . Induc- tive updates utilize the LLM’s capability to amal- gamate and summarize memories, thereby narrow- ing the retrieval scope . Compressive updates enhance the efficiency of memory use by condensing texts into vectors . The utilization of memory relies on the LLM’s contextual understanding and learning capabilities, optimizing model behavior through the injection of text or parameters . For LLMs, memory can be classified as explicit or implicit . Explicit memory, also known as declarative memory, refers to forms of memory that can be articulated . It can be stored and retrieved in textual form , offering readability and interpretability . Explicit memory does not depend on a specific model and can be uti- lized by various models post-generation . Additionally, humans can participate in modifying and refining explicit memory, making it widely applied in LLM memory modules . Implicit memory, on the other hand, refers to forms of memory that can- not be articulated. This type of memory is stored in parameters and updated through training . 
Although explicit memory can also be up- dated through model-driven summarization and induction , it lacks the clear update targets characteristic of implicit memory, 2\nwhich ensures that the updated state is superior to its previous state. 3 Model Inference. The inference methods for LLMs predominantly encompass zero-shot, few-shot, and chain-of- thought . Zero-shot often requires model fine-tuning to equip LLMs with the capability to generate task-specific outputs di- rectly . Brown et al. (2020) observe that providing models with ex- ample prompts can significantly enhance their un- derstanding of specific tasks. Currently, In-Context Learning has emerged as a fundamental paradigm for addressing tasks using LLMs , effectively leveraging minimal input to guide model responses . Wei et al. (2022c) note that guiding models to generate intermediary reasoning steps will boost their performance for reasoning. This enhanced ca- pability typically emerges only in models of certain scales, a phenomenon often referred to as “emer- gent abilities” . Furthermore, Li et al. (2024) and Wang et al. (2024b) find that prompts serve a dual function: they not only acti- vate the model’s internal memory but also inject effective external knowledge and guidance. Addi- tionally, updating and infusing memory in prompts offers benefits such as interpretability and flexi- bility , further enhancing the utility of LLMs in complex inference scenarios . 4 Preliminary and Task Definition 4.1 Explicit Memory Learning Memory in AI are designed to mimic the hu- man ability to remember past experiences and uti- lize this accumulated knowledge to aid in future tasks . In our model, explicit memory learning is implemented via a memory module Mthat stores strategies τlearned over time, which is formally represented as: Mt={τ1, τ2, . . . , τ K}, (1) where Mtrepresents the state of the memory mod- ule at time t,Kis the memory size, and each τi is a tactic derived from past experiences. The up- dating of this memory is governed by a learning function L, which adjusts the memory based on new experiences (X, Y): Mt+1=L(Mt,(Xt, Yt)). (2) Here, (Xt, Yt)represents the input-output pair at timet, and the function Ldetermines how the mem- ory should be updated, possibly by adding new strategies, modifying existing ones, or removing outdated strategies based on their relevance and effectiveness in the new context. 4.2 Expectation Maximization Algorithm The Expectation Maximization (EM) algorithm is a powerful statistical tool used for parameter esti- mation in models with latent variables. It operates in two main steps: the Expectation (E) step and the Maximization (M) step. During the E step, the algorithm estimates the latent variables based on the current estimate of the parameters: Q(θ|θ(t)) =EZ∼p(Z|X,θ(t))[logp(X, Z|θ)],(3) where θ(t)denotes the parameters at iteration t, Xis the observed data, Zare the latent variables, andp(Z|X, θ(t))is the probability of the latent variables given the observed data and current pa- rameters. The M step then updates the parameters to max- imize the expected log-likelihood found in the E step: θ(t+1)= arg max θQ(θ|θ(t)). (4) This iterative process continues until convergence, making it suitable for complex models where direct likelihood maximization is infeasible . The EM algorithm is particularly ef- fective in scenarios where the model parameters include both observed and unobserved (latent) com- ponents. 
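As a concrete illustration of the E-step (Eq. 3) and M-step (Eq. 4), the following toy example fits a two-component one-dimensional Gaussian mixture with EM; the example problem and all numbers are illustrative and unrelated to the memory setting studied here.

```python
# Toy EM for a 1-D mixture of two unit-variance Gaussians, illustrating the
# alternation behind Eqs. (3)-(4): the E-step infers latent responsibilities
# given current parameters, the M-step re-estimates parameters given them.
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(3.0, 1.0, 200)])

mu = np.array([-1.0, 1.0])   # initial means (theta^(0))
pi = np.array([0.5, 0.5])    # initial mixing weights

for _ in range(50):
    # E-step: posterior over the latent component Z for each data point.
    log_p = -0.5 * (x[:, None] - mu[None, :]) ** 2 + np.log(pi)[None, :]
    resp = np.exp(log_p - log_p.max(axis=1, keepdims=True))
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: maximize the expected complete-data log-likelihood.
    nk = resp.sum(axis=0)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    pi = nk / len(x)

print(mu, pi)  # means approach (-2, 3); weights approach (0.5, 0.5)
```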
By alternating between estimating the hidden components given the parameters and then optimizing the parameters given the hidden compo- nents, EM facilitates a more accurate estimation of model parameters. 4.3 Task Definition Given a stream of data D = {(X1, Y1),(X2, Y2), . . . , (Xn, Yn)}, where Xtrepresents the observed data at time tandYt denotes the corresponding true label, the objective is to construct effective memory Mtthat provides accurate predictions ˆYt. Our primary goal is to minimize the discrep- ancy between the predicted labels ˆYtand the actual labels Yt. This is achieved by enhancing the predic- tive accuracy of the model under the guidance of the evolving memory Mt. The effectiveness of Mt 3\nis crucial as it directly influences the model’s ability to adapt to new data and make accurate predictions. Therefore, the challenge lies in designing a learn- ing function Lthat not only updates the memory efficiently but also ensures that these updates result in the accurate anticipation of future samples based on past and present data insights. 5 Methodology 5.1 Memory based Inference At time t, the model receives an input Xt. In a zero- shot scenario, without any guidance from memory, the model ξgenerates the predicted label ˆYtin an autoregressive manner as follows: Pξ(ˆYt|Xt) =|ˆYt|Y i=1Pξ(ˆyi|Xt,ˆy<i) (5) To leverage past experiences stored in the mem- ory, we enhance model’s capability by introducing a memory-based guidance. Given the current in- putXt, we extract the most relevant information from the current memory state Mt. This extraction process results in a memory subset mt, defined as the set of elements in Mtthat are most relevant toXt. The relevance can be quantified based on similarity measures, heuristic rules, or learned rele- vance functions. The resulting mtcan be formally represented as: mt= select( Mt, Xt) (6) where select is a function that retrieves the most relevant memory elements based on Xt. Withmtas an additional context, the model then generates ˆYtusing both mtandXtto guide the prediction: Pξ(ˆYt|mt, Xt) =|ˆYt|Y i=1Pξ(ˆyi|mt, Xt,ˆy<i)(7) This memory-augmented inference mechanism allows the model to effectively utilize historical data, enhancing its predictive accuracy and adapt- ability in dynamic environments. 5.2 Memory Module Construction The Memory Module Mis constructed by accu- mulating pairs (Xi,ˆYi)over time. Initially, the memory of the model is empty, representing a state of minimal prior knowledge. As the model pro- cesses data and generates predictions, it selectively updates this memory based on the quality and cer- tainty of the information. To quantify the certainty of each predicted out- put and determine its eligibility for memory in- clusion, we define an uncertainty threshold ϵ. A prediction ˆYiis considered high-quality if its nor- malized entropy, which measures the average un- certainty across all predicted components, is below this threshold. The entropy H(ˆYi)for each predic- tion is calculated as follows: H(ˆYi) =−1 |ˆYi||ˆYi|X j=1logPξ(ˆyj|Xi,ˆy<j)≤ϵ (8) When the above condition is satisfied, indicating that the generated prediction ˆYiis of sufficiently high certainty and quality, it is integrated into the memory using the learning function L, as discussed in Section 4.1. 5.3 Memory Update through Learning Function We employ the EM algorithm to design the learning function L. 
5.3 Memory Update through Learning Function

We employ the EM algorithm to design the learning function $L$. As depicted in Figure 2 under ② and ③, if the generated $\hat{Y}_t$ satisfies the condition in Eq. 8, it is fed along with the current memory state $M_t$ into the learning function $L$. The update equation is:

$$M_{t+1} = L(M_t, (X_t, \hat{Y}_t)) \tag{9}$$

We treat the strategies $\tau$ as the latent variables $Z$ and $M_t$ as the parameter $\theta$ in Eq. 3, transforming the learning process into an EM learning framework.

Figure 2: Overview of EM² for memory-guided prediction in streaming data. At each timestep $t$, the model receives an input $X_t$. ① uses the memory $M_t$ to select relevant demonstrations that guide the generation of the prediction $\hat{Y}_t$. ② and ③ depict the integration of the newly generated $\hat{Y}_t$ and the current memory $M_t$ into the memory updating process, ensuring that the memory evolves with the latest data insights and contributes to future predictions.

5.3.1 Construction of Representative Validation Set

To evaluate the updates efficiently, we construct a representative validation set $V$ from the portion of the dataset $D$ not yet included in the memory $M_t$. We select cluster centers from $D \setminus M_t$ to form $V$, reducing redundancy and improving the efficiency of memory updates. The selection can be represented by:

$$V_t = \operatorname{centers}\big(\{(X_1, \hat{Y}_1), \ldots, (X_t, \hat{Y}_t)\} \setminus M_t\big) \tag{10}$$

5.3.2 E-step: Inference Procedure

Let $V_t = \{(X_v, Y_v)\}$. Based on Equation 3, the prediction for $Y_v$ given $X_v$ and the memory $M$ is calculated as:

$$P(Y_v \mid X_v; M) = \sum_{\tau} P(Y_v, \tau \mid X_v; M) = \sum_{\tau} P(Y_v \mid X_v, \tau)\, P(\tau \mid X_v; M) = \mathbb{E}_{\tau \sim P(\tau \mid X_v; M)}\big[P(Y_v \mid X_v, \tau)\big] \tag{11}$$

5.3.3 M-step: Learning Procedure

The memory is updated based on the maximization step defined as:

$$M_{t+1} = \arg\max_{m \subset M_t \cup \Gamma(X_t, \hat{Y}_t)} \sum_{i=1}^{|V_t|} P(Y_i \mid X_i; m), \tag{12}$$

where $\Gamma$ represents a function extracting knowledge from $(X_t, \hat{Y}_t)$ to generate $\tau_t$, which can be formally represented as:

$$\tau_t = \Gamma(X_t, \hat{Y}_t) \tag{13}$$

This step ensures that the updated memory $M_{t+1}$ performs better on $V_t$ than the previous state $M_t$, effectively capturing the strategies that are beneficial for future predictions.
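The following sketch shows one way the E-step (Eq. 11) and M-step (Eq. 12) could be realized in code. It is an illustrative interpretation, not the authors' implementation: `score_memory` stands in for an estimate of $\sum_i P(Y_i \mid X_i; m)$ (for example, accumulated token log-probabilities when the model is prompted with the candidate memory), and the restricted candidate set (keep $M_t$, add $\tau_t$, or swap $\tau_t$ for an existing item) is just one tractable way to search over subsets of $M_t \cup \{\tau_t\}$.

```python
def em_memory_update(memory, new_strategy, validation_set, score_memory, max_size=8):
    """One EM-style memory update (Eqs. 9, 12, 13).

    memory:         list of current strategies tau_1..tau_K (M_t)
    new_strategy:   tau_t = Gamma(X_t, Y_hat_t), extracted from the newly admitted pair
    validation_set: representative (X_v, Y_v) pairs chosen as cluster centers (Eq. 10)
    score_memory:   callable(candidate, validation_set) -> float estimating
                    sum_i P(Y_i | X_i; m), i.e. the E-step quantity of Eq. 11 evaluated
                    by running the model with the candidate memory as demonstrations
    """
    candidates = [list(memory)]                           # option 0: leave M_t unchanged
    if len(memory) < max_size:
        candidates.append(list(memory) + [new_strategy])  # option 1: add tau_t
    for i in range(len(memory)):                          # option 2: swap tau_t in for an existing item
        swapped = list(memory)
        swapped[i] = new_strategy
        candidates.append(swapped)
    # M-step (Eq. 12): keep whichever candidate memory performs best on V_t,
    # so M_{t+1} is never worse than M_t on the representative validation set.
    return max(candidates, key=lambda m: score_memory(m, validation_set))
```

Because the unchanged memory $M_t$ is always among the candidates, the selected $M_{t+1}$ can only match or improve the validation score, which is exactly the guarantee this section aims for.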
6 Experiment

6.1 Evaluation Datasets

To assess the efficacy of our approach, we evaluate it across three distinct types of tasks: math word problems, commonsense question answering (QA), and symbolic analysis. We utilize the following datasets for these evaluations:

• Math Word Problems: GSM8K, MultiArith, SingleEq, AddSub, SVAMP, AQuA, and MATH.
• Commonsense QA: StrategyQA, CommonsenseQA (CSQA; Talmor et al., 2019), BoolQ, and the AI2 Reasoning Challenge (ARC-c; Clark et al., 2018).
• Symbolic Understanding: Date Understanding, Penguins in a Table, Colored Objects, and Object Counting, sourced from Big-Bench.

For a more detailed description of the datasets, please refer to Appendix A.

6.2 Experiment Settings

Implementation Details. The inference process of the model not only demonstrates its understanding and analysis of a problem but often encapsulates latent knowledge. Therefore, we store the model's reasoning process along with the problem as the model memory. In the main experiments, memories are vectorized using text-embedding-3-large, and relevancy is calculated using cosine distance as specified in Eq. 6. To ensure fair comparisons, we limit the selection to a maximum of 8 examples. These vectors are also employed to determine the clustering centers as outlined in Eq. 10. For more details and ablation studies, see Appendix B and C.
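As a concrete reading of the clustering-center selection in Eq. 10, the sketch below picks representative validation examples with k-means over the same embedding vectors used for retrieval. The clustering algorithm and the size of the representative set are not specified in this section (further details are deferred to the appendices), so scikit-learn k-means and `k = 16` are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def representative_validation_set(candidates, embeddings, k=16, seed=0):
    """Eq. 10: choose cluster-center examples from the pool not yet in memory.

    candidates: list of (X_i, Y_hat_i) pairs outside the current memory M_t
    embeddings: np.ndarray of shape (len(candidates), d), e.g. text-embedding-3-large vectors
    k:          number of representatives to keep (assumed value)
    """
    k = min(k, len(candidates))
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(embeddings)
    picked = []
    for center in km.cluster_centers_:
        # Centroids are not actual examples, so keep the candidate closest to each centroid.
        idx = int(np.argmin(np.linalg.norm(embeddings - center, axis=1)))
        picked.append(candidates[idx])
    return picked
```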
Baselines. To validate the efficacy of our approach, we compare it against three baseline methods representing different levels of memory integration: models without memory, with fixed memory, and with retrieval-based memory.

• No Memory: Zero-shot CoT (ZS-CoT; Kojima et al., 2022) utilizes the prompt "Let's think step by step" to activate the model's internal reasoning capabilities without relying on external memory aids.
• Fixed Memory: Chain-of-Thought (CoT; Wei et al., 2022b) employs fixed prompts to guide the model through a reasoning process. ComplexCoT extends this by using complex prompts that guide the model to generate more detailed reasoning processes.
• Retrieval Memory: Memory-of-Thought (MoT; Li and Qiu, 2023) incorporates a two-stage memory retrieval process, which includes coarse-grained semantic retrieval followed by fine-grained model filtering to select relevant memories. AutoCoT selects examples based on relevance and diversity metrics tailored to the query.

In contrast to the main experiment, where memory updates are conducted using test samples, MoT and AutoCoT require pre-inference on training data. To ensure a fair comparison, we align our settings with these methods in Section 6.4.

Backbones. In the main experiment, we employ the 8B LLaMA-3 model. For the analysis, we extend our investigation to additional LLMs, including LLaMA-3-70B, Mistral-7B, Mixtral, and Qwen-2.

6.3 Main Results

Word Math Problems. Table 1 presents the results on math word problems. Compared to methods with no memory or fixed memory, our memory learning approach exhibits significant advantages. Notably, on the GSM8K dataset, EM² outperforms ZS-CoT by 5.83% and CoT by 3.02%. This improvement is attributed to the dynamic memory updating mechanism of EM². We utilize two initialization methods: ZS-CoT, where the initial memory is empty, and CoT, which provides eight high-quality demonstrations at initialization. While the CoT initialization ensures better initial performance, the efficacy of both approaches converges as the memory accumulates. For instance, on the SingleEq dataset, the results from both initialization methods are identical. Furthermore, we analyze the multiple-inference scenario and observe that EM² retains a clear advantage. Moreover, as more memories are integrated, the performance gap between the two initialization methods narrows.

| Method | GSM8K | MultiArith | SingleEq | AddSub | SVAMP | AQuA | MATH | Average |
|---|---|---|---|---|---|---|---|---|
| Single Inference | | | | | | | | |
| ZS-CoT | 76.80 | 94.83 | 89.96 | 84.30 | 81.45 | 40.55 | 29.02 | 77.98 |
| CoT | 79.61 | 96.50 | 92.32 | 85.31 | 82.76 | 42.32 | - | 79.80 |
| ComplexCoT | 78.01 | 96.67 | 91.92 | 84.81 | 81.48 | 42.51 | 29.50 | 79.23 |
| EM² | 82.63 | 97.77 | 92.71 | 86.32 | 83.91 | 45.27 | 30.12 | 81.43 |
| EM²* | 83.09 | 97.83 | 92.71 | 87.59 | 84.19 | 46.45 | 30.22 | 81.98 |
| Multiple Inference | | | | | | | | |
| ZS-CoT | 84.98 | 97.50 | 92.71 | 88.61 | 87.18 | 47.24 | 32.22 | 83.03 |
| CoT | 85.59 | 98.00 | 94.29 | 91.13 | 91.76 | 51.57 | - | 85.39 |
| ComplexCoT | 85.29 | 98.16 | 93.70 | 89.87 | 89.62 | 50.78 | 32.46 | 84.57 |
| EM² | 86.35 | 98.83 | 95.86 | 93.41 | 92.51 | 53.14 | 33.82 | 86.68 |
| EM²* | 86.43 | 98.83 | 95.66 | 94.43 | 92.55 | 53.93 | 33.96 | 86.97 |

Table 1: Results on math word problems (accuracy in %); the best result in each column is achieved by an EM² variant. Average represents the average performance across all datasets, excluding MATH. EM² denotes initialization using ZS-CoT, while EM²* indicates initialization with CoT demonstrations. To ensure a fair comparison, the LLaMA-3-8B model is used as the backbone across all methods.

Commonsense QA and Symbolic. The experimental results for commonsense QA and symbolic understanding tasks are shown in Figure 3. We observe that EM² effectively enhances model performance on both types of tasks. Notably, EM² demonstrates a more pronounced advantage on challenging tasks, such as those involving complex, non-factoid information in the BoolQ dataset and tasks requiring implicit multi-step reasoning in the StrategyQA dataset. This improvement can be attributed to EM²'s memory updating and retrieval mechanisms, which ensure the selection of high-quality and relevant demonstrations.

Figure 3: Performance comparison on (a) commonsense question answering (CSQA, StrategyQA, BoolQ, ARC) and (b) symbolic understanding (Date, Penguin, Colored Objects, Object Counting) tasks. The charts illustrate that EM² demonstrates a distinct advantage over both no-memory and fixed-memory mechanisms.

6.4 Analysis and Discussion

Performance on Various Models. The performance of EM² across a range of models is analyzed in Figure 4, focusing on two representative datasets: GSM8K and CSQA. We observe that EM² consistently delivers significant performance enhancements across different models. Notably, models with greater computational capabilities benefit more substantially from the EM² approach. For instance, despite having a similar number of parameters, Qwen-7B exhibits a greater improvement than Mistral-7B. Moreover, EM² proves to be versatile, not only enhancing the performance of dense models but also boosting the efficacy of Mixture-of-Experts (MoE) models like Mixtral. This adaptability underscores EM²'s effectiveness in leveraging complex memory dynamics across different architectural frameworks.

Figure 4: Performance comparison of ZS-CoT, CoT, ComplexCoT, and EM² across various LLMs (LLaMA-3-8B, Mistral-7B, Qwen-7B, LLaMA-3-70B, and Mixtral) on (a) GSM8K and (b) CSQA.

Analysis of Memory Updating Mechanism. The impact of different memory updating strategies on accuracy is analyzed in Figure 5.
We experimented with replacing the learning function in Section 5.3 with two simpler updating strategies: random selection and First-In-First-Out (FIFO). Results on the MATH dataset indicate that these changes significantly reduce performance. The primary reason for this decline lies in the inherent limitations of the Random and FIFO strategies, which rely on randomness and sample order, respectively, and therefore cannot guarantee the effectiveness of memory updates. This analysis highlights the efficacy of the EM² approach, which employs the EM algorithm to ensure gradual and effective optimization of the memory.

Figure 5: Performance of different memory updating mechanisms (EM algorithm, Random, FIFO) on the MATH dataset, broken down by subject (algebra, probability, geometry, intermediate algebra, number theory, prealgebra, precalculus).

Comparison of Memory Retrieval Methods. In Figure 6, we compare EM² with two memory retrieval methods. Both MoT and AutoCoT require pre-inference on the training dataset to gather examples for retrieval. To ensure a fair comparison, we incorporate training samples into EM², first performing memory updates and constructing a representative validation set on the training dataset before introducing the test set for accuracy calculations. Results on the MATH dataset demonstrate that EM² achieves superior performance compared to traditional memory retrieval methods. Despite having a narrower search scope than the broader retrieval range of MoT and AutoCoT, EM²'s updating strategy ensures the retention of high-quality memories. Moreover, continuous updates maintain alignment between the memory distribution and the test distribution, thereby resulting in enhanced performance.

Figure 6: Performance comparison of retrieval-based memory methods (EM², MoT, AutoCoT) on the MATH dataset.

Memory Sharing. The memory constructed by EM² is model-agnostic, enabling the transfer and sharing of memories between models. In Figure 7, we explore the effects of exchanging memories between LLaMA-3-8B and LLaMA-3-70B. Each model first performs inference on the training dataset, after which their memories are swapped. As shown in Figure 7a, there is a gradual improvement in the performance of the 8B model as the proportion of memory from the 70B model increases. This indicates that smaller models can benefit from high-quality memories sourced from larger models. Conversely, Figure 7b reveals that the performance of the 70B model remains unaffected by the memory from the 8B model, as lower-quality memories do not enter our memory module.

Figure 7: Impact of memory swapping on model performance: (a) the 8B model accesses the 70B model's memory; (b) the 70B model accesses the 8B model's memory. The horizontal axis represents the proportion of memory injected. The horizontal lines indicate the baseline accuracies for models with fixed memory (CoT) and EM² initialized with ZS-CoT.

7 Conclusion

In this paper, we analyze the advantages of explicit memory over implicit memory and highlight a critical limitation of the former: its inability to ensure the effectiveness of updates as reliably as implicit memory. To address this, we introduce EM², which treats memory as a latent variable and iteratively updates it using the EM algorithm, thereby ensuring that updated memories are superior to their predecessors. Experiments show that EM² offers significant advantages over models without memory and those with fixed memory. Importantly, the performance of EM² scales with the model's capabilities, suggesting that more powerful models can leverage EM² to achieve even greater benefits. Additionally, EM² is model-agnostic, which allows for the transfer and sharing of memory across different models. Analyses reveal that weaker LLMs can significantly benefit from high-quality memories derived from larger counterparts.
Limitations

Generalization to a Broader Range of Tasks. While we have analyzed EM² across three distinct types of tasks, there is potential to extend this approach to a wider array of generative tasks (Gozalo-Brizuela and Garrido-Merchán, 2023), such as code generation, machine translation, and various agent-based tasks. Additionally, the form of memory could be diversified to include structured data, triplets, users' historical information, and more. Our current scope has not yet explored these domains, and we see the exploration of EM²'s potential in more diverse tasks as an avenue for future work.

Application to Commercial Models. EM² requires access to internal model information, such as perplexity, to assess the effectiveness of new memories. However, for commercial models that only provide text outputs, such as OpenAI's GPT models or Anthropic's Claude models, applying EM² remains challenging despite their powerful capabilities.

Incorporating Human Supervision. As mentioned in Section 6.4, higher-quality memories can significantly enhance model performance. This paper primarily focuses on memories constructed autonomously by the model. An intriguing question is whether human-supervised memory enhancement and correction could further improve performance. Additionally, how to effectively incorporate human supervision, such as step-by-step guidance, remains an open question for future research.

Ethics Statement

Data Privacy. Our approach constructs memory from the model's own outputs and does not require the collection or acquisition of personal data. The prompts and data used in our experiments do not involve any personal or privacy-sensitive information, ensuring compliance with privacy standards.

Environmental Protection. The construction of large language models and the generation of data and memory are likely to become more prevalent, consuming significant computational resources and potentially increasing carbon emissions. We advocate for sustainable AI development, emphasizing the reduction of carbon footprints and the promotion of green AI initiatives to mitigate environmental impacts.

Adherence to Ethical Guidelines. We adhere to ethical guidelines and ensure that our data usage complies with the corresponding dataset licenses. Detailed statistics about the datasets and their respective licenses are listed in Table 2.
|
[
"The Llama 3 Herd of Models",
"A Survey on Large Language Models for Code Generation",
"A Survey on the Memory Mechanism of Large Language Model based Agents",
"Memory Sharing for Large Language Model based Agents",
"Efficient Prompting Methods for Large Language Models: A Survey",
"Understanding LLMs: A Comprehensive Overview from Training to Inference",
"Empowering Working Memory for Large Language Model Agents",
"Retrieval-Augmented Generation for Large Language Models: A Survey",
"A Comprehensive Survey of Machine Translation Approaches",
"LLM as OS, Agents as Apps: Envisioning AIOS, Agents and the AIOS-Agent Ecosystem",
"MemGPT: Towards LLMs as Operating Systems",
"Corex: Pushing the Boundaries of Complex Reasoning through Multi-Model Collaboration",
"A Survey on Large Language Model based Autonomous Agents",
"MetaGPT: Meta Programming for Multi-Agent Collaborative Framework",
"Boosting Language Models Reasoning with Chain-of-Knowledge Prompting",
"A survey of Generative AI Applications",
"Let's Verify Step by Step",
"Do Large Language Models Know What They Don't Know?",
"Adapting Language Models to Compress Contexts",
"Chain-of-Knowledge: Grounding Large Language Models via Dynamic Knowledge Adapting over Heterogeneous Sources",
"Editing Large Language Models: Problems, Methods, and Opportunities",
"MoT: Memory-of-Thought Enables ChatGPT to Self-Improve",
"Improving Cross-Task Generalization with Step-by-Step Instructions",
"Learning to Compress Prompts with Gist Tokens",
"A Survey of Large Language Models",
"Reflexion: language agents with verbal reinforcement learning",
"GPT-4 Technical Report",
"A Survey on In-context Learning",
"Rethinking with Retrieval: Faithful Large Language Model Inference",
"Large Language Models with Controllable Working Memory",
"Large Language Models Can Self-Improve",
"Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them",
"Complexity-Based Prompting for Multi-Step Reasoning",
"Large Language Models are Zero-Shot Reasoners",
"Self-Consistency Improves Chain of Thought Reasoning in Language Models",
"Training language models to follow instructions with human feedback",
"Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?",
"Training Verifiers to Solve Math Word Problems",
"EncT5: A Framework for Fine-tuning T5 as Non-autoregressive Models",
"CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation",
"A Survey of Human-in-the-loop for Machine Learning",
"Are NLP Models really able to Solve Simple Math Word Problems?",
"Measuring Mathematical Problem Solving With the MATH Dataset",
"Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies",
"A Survey on Neural Network Interpretability",
"Language Models are Few-Shot Learners",
"Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer",
"Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
"BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions",
"Robust and Scalable Differentiable Neural Computer for Question Answering",
"Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge",
"Program Induction by Rationale Generation: Learning to Solve and Explain Algebraic Word Problems",
"Hybrid computing using a neural network with dynamic external memory",
"Solving General Arithmetic Word Problems",
"MAWPS: A Math Word Problem Repository",
"End-To-End Memory Networks",
"Memory Networks",
"Learning to Solve Arithmetic Word Problems with Verb Categorization",
"Common molecular mechanisms in explicit and implicit memory",
"Reasoning=working memory¿attention",
"Maximum likelihood from incomplete data via the EM - algorithm plus discussions on the paper",
"Hallucination Detection for Generative Large Language Models by Bayesian Sequential Estimation",
"Memory-assisted prompt editing to improve GPT-3 after deployment",
"CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge",
"The Development of Implicit and Explicit Memory",
"The Development of Intelligence",
"Cognitive neuroscience of human memory.",
"The Claude 3 Model Family: Opus, Sonnet, Haiku"
] |
Seeing Through VisualBERT: A Causal Adventure on Memetic Landscapes
| "Seeing Through VisualBERT: A Causal Adventure on Memetic Landscapes Abstract Detecting offensive me(...TRUNCATED)
| ["SEMANTIFY: Unveiling Memes with Robust Interpretability beyond Input Attribution","Are self-explan(...TRUNCATED)
|
mPLUG-DocOwl 1.5: Unified Structure Learning
for OCR-free Document Understanding
| "mPLUG-DocOwl 1.5: Unified Structure Learning for OCR-free Document Understanding Abstract Structure(...TRUNCATED)
| ["MM-LLMs: Recent Advances in MultiModal Large Language Models","ChartAssisstant: A Universal Chart (...TRUNCATED)
|
FoodieQA: A Multimodal Dataset for Fine-Grained
Understanding of Chinese Food Culture
| "FoodieQA: A Multimodal Dataset for Fine-Grained Understanding of Chinese Food Culture Abstract Food(...TRUNCATED)
| ["Benchmarking Vision Language Models for Cultural Understanding","MANTIS: Interleaved Multi-Image I(...TRUNCATED)
|