EconLogicQA: A Question-Answering Benchmark for Evaluating Large Language Models in Economic Sequential Reasoning
Abstract

In this paper, we introduce EconLogicQA, a rigorous benchmark designed to assess the sequential reasoning capabilities of large language models (LLMs) within the intricate realms of economics, business, and supply chain management. Diverging from traditional benchmarks that predict subsequent events individually, EconLogicQA poses a more challenging task: it requires models to discern and sequence multiple interconnected events, capturing the complexity of economic logic. EconLogicQA comprises an array of multi-event scenarios derived from economic articles, which necessitate an insightful understanding of both temporal and logical event relationships. Through comprehensive evaluations, we show that EconLogicQA effectively gauges an LLM's proficiency in navigating the sequential complexities inherent in economic contexts. We provide a detailed description of the EconLogicQA dataset and report the outcomes of evaluating the benchmark across various leading-edge LLMs, thereby offering a thorough perspective on their sequential reasoning potential in economic contexts.

1 Introduction

Logical reasoning is a pivotal skill in many professional and academic domains, enabling individuals to make informed decisions by understanding relationships between sequential events or pieces of information. In practice, the reasoning capabilities of large language models (LLMs) are frequently utilized across various applications, yet their effectiveness in logical reasoning remains underexplored. Despite its importance, there is an evident gap in the literature regarding the capability of LLMs to perform logical reasoning at a high level. This paper addresses this gap by introducing EconLogicQA, a new benchmark designed to rigorously assess the logical reasoning capabilities of LLMs specifically within the contexts of economics, business, and supply chain management.

EconLogicQA distinguishes itself from existing benchmarks by challenging LLMs not only to identify but also to logically sequence multiple interconnected events from realistic economic scenarios. This approach aims to reflect the intricate decision-making processes required in these fields, going beyond mere fact recall or simple event prediction. By focusing on the sequencing of events based on logical rather than simply chronological order, EconLogicQA probes the LLMs' ability to engage with and understand the underlying mechanics of economic phenomena.

The benchmark utilizes a curated dataset derived from a wide range of business news articles, guiding GPT-4 to generate multiple-choice questions that demand an intelligent understanding of logical connections. A rigorous human review process ensures the accuracy and appropriateness of the content, refining the dataset to enhance its practical value. Through comprehensive testing across various state-of-the-art LLMs, this paper not only demonstrates EconLogicQA's effectiveness in evaluating logical reasoning but also provides insights into potential improvements and applications of LLMs in complex reasoning tasks.

The contributions of this paper are as follows:

1. We propose a novel benchmark, EconLogicQA, which rigorously assesses LLMs' logical reasoning capabilities within economics, business, and supply chain management.
2. We utilize GPT-4 to generate questions and answers from business articles, ensuring high-quality, well-crafted multiple-choice questions through meticulous human review.

3. We conduct a comprehensive evaluation of both open and proprietary LLMs to assess their performance on this benchmark.

2 Related Work

Sequential Reasoning Benchmarks. In the realm of assessing complex reasoning abilities, Jin et al. (2023) introduce the CLADDER dataset, which explores the capacity of large language models (LLMs) for causal reasoning and differentiates itself by focusing on formal rules-based causal inference instead of the typical evaluation of commonsense causality in Natural Language Processing (NLP). Wang et al. (2023) present STEPS, a rigorous benchmark designed to assess models' understanding of action sequence order in sequential tasks such as cooking and manufacturing, which highlights the challenges current LLMs face in performing order reasoning without specific tuning. In adjacent domains, Guha et al. (2024) launch LEGALBENCH, a notable benchmark that evaluates LLMs in legal reasoning, developed collaboratively with legal experts to cover various facets of practical and theoretical legal analysis. Yang et al. (2024) establish AQA-Bench, an interactive benchmark that evaluates LLMs' sequential reasoning abilities across various algorithmic tasks, including Depth-First Search (DFS), Breadth-First Search (BFS), and binary search, by requiring models to dynamically interact with the task environment, thereby uncovering notable performance disparities among different LLMs. Valmeekam et al. (2024) create PlanBench, an extensible benchmark focused on evaluating LLMs' planning and reasoning capabilities, particularly about action and change, where diverse scenarios from the automated planning community are used to distinguish genuine planning abilities from mere retrieval of pre-trained knowledge.

Economic Benchmarks. In the finance domain, Shah et al. (2022) launch the Financial Language Understanding Evaluation (FLUE) benchmark alongside the Financial LANGuage (FLANG) model, offering a comprehensive suite of evaluations focused on economic and financial domains and significantly outperforming existing models on various NLP tasks. Hendrycks et al. (2020) compile the Massive Multitask Language Understanding (MMLU) benchmark of 57 diverse tasks, including economics, designed to evaluate the multitask accuracy of language models, revealing that even the largest models still struggle with expert-level performance and have inconsistent accuracy across subjects. Lu et al. (2023) propose the BBT-CFLEB benchmark, supporting advanced understanding and generation tasks in the financial domain and fostering significant research and development in this specialized area. Zhang et al. (2023) present FinEval, a specialized benchmark for assessing financial knowledge in LLMs, demonstrating significant potential through GPT-4's high performance across diverse prompt types. Van Patten (2023) introduces EconQA, a novel dataset for assessing LLMs on multiple-choice economics questions, and finds that Chain-of-Thought reasoning improves performance, particularly on mathematical queries, while prompt variations have a moderate effect on accuracy.
3 EconLogicQA

In this section, we detail the dataset generation and human review processes used to create the EconLogicQA benchmark and provide illustrative examples from it.

3.1 Dataset Generation

To streamline the question-generation process and reduce the subjectivity, labor-intensiveness, and randomness of manual creation, we use GPT-4 to automatically generate questions by extracting key points from news articles. We specifically select economics-related articles from the 2011 to 2022 news dataset available on Kaggle¹, which is under the CC0 Public Domain license. This cleaned dataset provides a comprehensive range of economic news articles, and we further narrow our focus to those categorized under business to align with our research scope in economics.

In the data generation process, instructional prompts are developed to guide GPT-4 in creating multiple-choice questions that challenge models to logically sequence events within the framework of business-related scenarios. These questions start with a brief scenario description and involve four events that must be ordered based on their logical or chronological sequence rather than their order of appearance in the source articles. The selected events pertain to typical business or economic situations, necessitating a deep understanding of business practices and economic principles for accurate sequencing. The prompts specify that the generated content should be original, concise, and crafted without referencing the original news articles or unnecessary details. Each question is designed to be completed independently, making it suitable for evaluation. The formatted output includes a scenario description followed by four choices labeled A, B, C, and D, concluding with the correct sequence and a brief explanation, so that the reasoning behind the sequence is clear and deducible solely from the information presented in the question and choices. This structure is intended to enhance comprehension and application of business concepts. See Appendix A Figure 1 for an example of a GPT-4 response to the prompt.

¹ https://www.kaggle.com/datasets/hadasu92/cnn-articles-after-basic-cleaning/
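To make the generation step concrete, the sketch below shows how such an instructional prompt might be issued with the OpenAI Python client. The instruction text, model name, and helper function are illustrative placeholders condensing the constraints described above; they are not the authors' exact prompt (which is shown in Appendix A Figure 1).

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical condensation of the constraints described in Section 3.1.
GENERATION_INSTRUCTIONS = (
    "From the business news article below, write one original, concise multiple-choice "
    "question: a brief scenario, four events labeled A-D that must be ordered logically "
    "or chronologically (not by their order in the article), the correct sequence, and a "
    "short explanation. The question must be answerable without the article."
)

def generate_question(article_text: str) -> str:
    """Turn one business news article into an event-ordering question (illustrative only)."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": GENERATION_INSTRUCTIONS},
            {"role": "user", "content": article_text},
        ],
    )
    return response.choices[0].message.content
```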
3.2 Review Process

In order to maintain the integrity and quality of the dataset, human verification is incorporated into the workflow. This manual review is essential, as some generated responses exhibit errors in the correct sequencing of events. Each question undergoes meticulous examination, and adjustments are made to ensure accuracy and clarity in the logical sequence provided. Furthermore, the dataset undergoes a rigorous review to identify and exclude sensitive news articles that could be inappropriate. In total, 204 questions are removed from the initial pool of 854 questions. The criteria for removal include scenarios that yield multiple valid sequences and instances where a logical sequence cannot be clearly established. This comprehensive vetting process significantly enhances the evaluation quality. The final dataset consists of 650 questions, divided into training, validation, and test sets containing 390, 130, and 130 questions, respectively.

3.3 Dataset Examples

To provide a clear depiction of EconLogicQA's contents, we present two examples from the dataset in Appendix B Table 2. The first example details a sequence of decisions by Costco to manage its chicken supply chain effectively, while the second outlines steps taken by the Federal Reserve to navigate fiscal challenges. These examples illustrate the dataset's primary objective: to evaluate the capability of large language models to sequence economic events logically, not just chronologically. Each question is meticulously designed to challenge models to demonstrate their understanding of complex economic interactions and to apply logical reasoning within real-world business contexts.

4 Experiments

This section outlines experiments with the EconLogicQA dataset, assessing the sequential reasoning of multiple open and proprietary large language models (LLMs) in economic scenarios.

4.1 Experiment Setup

We run experiments on various LLMs using the EconLogicQA dataset to assess their sequential reasoning capabilities within the intricate realms of economics, business, and supply chain management. We select current mainstream open and proprietary LLMs for our study, including Llama-2 (Touvron et al., 2023a,b), Llama 3, Gemma, Mistral, Yi, Zephyr, GPT-3.5, and GPT-4. Each model is evaluated in both 1-shot and 5-shot settings. We do not include 0-shot in our selection because the results are unsatisfactory due to the task's complexity; we therefore recommend using a few-shot approach for ordering problems. Accuracy is the primary metric used, offering a direct measure of each model's understanding of the concepts within the EconLogicQA dataset.

All experiments are conducted on NVIDIA A100 GPUs. Each open LLM used in this paper is sourced from the Huggingface Transformers library. The Language Model Evaluation Harness is used to test open LLMs on the sequential reasoning evaluation task. A YAML configuration file is used to set key parameters in our scenario, such as terminating outputs, extracting answers, and evaluating results. LLMs are configured to stop generating once they begin to pose new questions after answering the question in the prompt, with the temperature set to 0.0 and no sampling. We then extract the answer from the response generated by the LLM using a regular expression. Finally, we verify the answer through exact matching and use accuracy as the evaluation metric.
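The exact extraction pattern used by the harness is not reproduced in the paper; the sketch below shows one plausible way the regular-expression extraction and exact-match accuracy could be implemented, assuming the model writes its answer as a comma-separated sequence of the four choice letters.

```python
import re

def extract_order(response):
    """Pull the first four-letter ordering (e.g. 'C, A, B, D') out of a model response."""
    match = re.search(r"\b([A-D])\s*,\s*([A-D])\s*,\s*([A-D])\s*,\s*([A-D])\b", response)
    return ",".join(match.groups()) if match else None

def exact_match_accuracy(responses, gold_orders):
    """Exact-match accuracy against gold orderings formatted like 'C,A,B,D'."""
    hits = sum(extract_order(r) == g.replace(" ", "") for r, g in zip(responses, gold_orders))
    return hits / len(gold_orders)
```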
4.2 Experiment Results

The results of our experiments, shown in Table 1, indicate diverse performance among the models, with significant disparities observed in their ability to handle the sequencing of economic events. GPT-4-Turbo exhibits the highest accuracy, achieving 56.92% in the 1-shot scenario and 56.15% in the 5-shot scenario, making it the best-performing model in our tests. GPT-4 follows closely, demonstrating the second-highest performance with 55.38% in the 1-shot and 53.85% in the 5-shot settings. Remarkably, the 1-shot scenario generally yields better performance than the 5-shot scenario for these two models, which could be attributed to their ability to leverage pre-trained knowledge effectively without the potential confusion introduced by the additional context in the 5-shot scenario.

Table 1: Comparison of the accuracy of multiple large language models on the EconLogicQA dataset under 1-shot and 5-shot learning scenarios.

Model                      1-Shot    5-Shot
Llama-2-7B                 0.77%     1.54%
Llama-2-7B-Chat            9.23%     10.00%
Llama-2-13B                9.23%     1.54%
Llama-2-13B-Chat           14.62%    8.46%
Llama-3-8B                 23.85%    23.85%
Llama-3-8B-Instruct        34.62%    37.69%
Gemma-2B-IT                7.69%     7.69%
Gemma-1.1-2B-IT            8.46%     6.92%
Gemma-7B-IT                2.31%     4.62%
Gemma-1.1-7B-IT            0.77%     3.85%
Mistral-7B-v0.1            26.15%    30.00%
Mistral-7B-v0.2            26.15%    32.31%
Mistral-7B-Instruct-v0.1   15.38%    20.77%
Mistral-7B-Instruct-v0.2   31.54%    32.31%
Yi-6B                      3.85%     29.23%
Yi-6B-Chat                 20.77%    30.77%
Zephyr-7B-Alpha            23.08%    23.08%
Zephyr-7B-Beta             17.69%    14.62%
GPT-3.5-Turbo              37.69%    38.46%
GPT-4                      55.38%    53.85%
GPT-4-Turbo                56.92%    56.15%

Open LLMs show varied performance, as outlined in Table 1. Notably, Llama-3-8B-Instruct demonstrates significant improvements over Llama-3-8B when fine-tuned with instructions, achieving 34.62% accuracy in the 1-shot setting and 37.69% in the 5-shot setting, which highlights the substantial impact of instruction tuning on performance for question-answering tasks. Similarly, Mistral-7B-Instruct-v0.2 exhibits promising results, with accuracies of 31.54% in the 1-shot setting and 32.31% in the 5-shot setting, underscoring its adaptability to complex reasoning tasks, though it still lags behind GPT-4's overall performance.

These experiments collectively demonstrate the varying degrees of proficiency in applying LLMs to economic sequential reasoning, reflecting the current landscape of LLM capabilities in this domain. There is still a clear gap in the ability of current LLMs, especially open LLMs, to accurately handle many economic scenarios and correctly sequence events. This limitation points to significant challenges that persist in the field, emphasizing the need for targeted improvements and innovations in future research. Addressing these shortcomings could lead to more robust models that are better equipped to navigate the complexity of economic reasoning.

5 Conclusion

This study introduces EconLogicQA, a novel benchmark specifically designed to assess the logical reasoning capabilities of large language models (LLMs) in the domains of economics, business, and supply chain management. The benchmark challenges LLMs with complex, realistic economic scenarios. Utilizing GPT-4, high-quality, well-crafted multiple-choice questions are generated from business articles and refined through meticulous human review. A comprehensive evaluation of both open and proprietary LLMs is conducted, providing deep insights into their capabilities and limitations within this specialized context.

In the future, various enhancements can be made to improve the performance of LLMs in economic reasoning. Prompt engineering could be refined to better guide models through complex economic scenarios, enhancing their accuracy in understanding and processing complex logical relationships. Additionally, parameter-efficient fine-tuning (PEFT) on the EconLogicQA training set offers a promising approach to customize models efficiently and optimize their responses. Moreover, there is a significant opportunity to develop specialized LLMs that are specifically designed to address the unique challenges in economics, business, and supply chain management.

Limitations

Scope of Data. The effectiveness of the EconLogicQA benchmark is currently validated using a specific dataset of economic news articles. This reliance on a single data source limits the generalizability of our findings to other datasets in the domain, which may have distinct characteristics and diverse compositions that could influence the performance of LLMs.

Temporal Coverage. The dataset spans articles from 2011 to 2022, potentially missing recent economic developments and trends.
This temporal limitation could affect the benchmark's relevance and the models' performance in current economic contexts.

Ethical Considerations

Our research emphasizes transparency in methodology, reporting, and data utilization. We adhered to the principles of responsible AI research throughout the study. The data employed in this research is sourced from public domains, ensuring no private user data was involved. We incorporated a stringent human review process to maintain dataset accuracy and integrity, excluding any sensitive or inappropriate content. These measures reflect our commitment to ethical standards and research integrity.
[ "Yi: Open Foundation Models by 01.AI", "AQA-Bench: An Interactive Benchmark for Evaluating LLMs' Sequential Reasoning Ability", "CLadder: Assessing Causal Reasoning in Language Models", "STEPS: A Benchmark for Order Reasoning in Sequential Tasks", "GPT-4 Technical Report", "LLaMA: Open and Efficient Foundation Language Models", "BBT-Fin: Comprehensive Construction of Chinese Financial Domain Pre-trained Language Model, Corpus and Benchmark", "When FLUE Meets FLANG: Benchmarks and Large Pretrained Language Model for Financial Domain", "PlanBench: An Extensible Benchmark for Evaluating Large Language Models on Planning and Reasoning about Change", "Training language models to follow instructions with human feedback", "Measuring Massive Multitask Language Understanding", "HuggingFace's Transformers: State-of-the-art Natural Language Processing", "Cosmos QA: Machine Reading Comprehension with Contextual Commonsense Reasoning", "CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge" ]
Major Entity Identification: A Generalizable Alternative to Coreference Resolution
Abstract

The limited generalization of coreference resolution (CR) models has been a major bottleneck in the task's broad application. Prior work has identified annotation differences, especially for mention detection, as one of the main reasons for the generalization gap and proposed using additional annotated target-domain data. Rather than relying on this additional annotation, we propose an alternative formulation of the CR task, Major Entity Identification (MEI), where we: (a) assume the target entities to be specified in the input, and (b) limit the task to only the frequent entities. Through extensive experiments, we demonstrate that MEI models generalize well across domains on multiple datasets, both with supervised models and with LLM-based few-shot prompting. Additionally, the MEI task fits the classification framework, which enables the use of classification-based metrics that are more robust than the current CR metrics. Finally, MEI is also of practical use, as it allows a user to search for all mentions of a particular entity or a group of entities of interest.

1 Introduction

Coreference resolution (CR) is the task of finding text spans that refer to the same entity. CR is a fundamental language understanding task relevant to various downstream NLP applications, such as question answering, building knowledge graphs, and summarization. Despite the importance of CR and the progress made by neural coreference models, domain generalization remains an issue even with the best-performing supervised models.

Figure 1: CR vs. MEI. Input document (d): "There lived a poor tailor named Mustapha, who had a son called Aladdin. Aladdin was disobedient to his father and mother and spent all his time idling with his friends." The CR task aims to detect and cluster all mentions into different entities, shown in various colors. MEI takes the major entities E = {Mustapha, Aladdin} as additional input and aims to detect and classify the mentions that refer only to these entities.

The lack of domain generalization in CR models can largely be attributed to differences in the annotation guidelines of popular CR benchmarks, specifically guidelines about what constitutes a mention. For example, OntoNotes does not annotate singletons, confounding mention identity with being referential. Thus, models trained on OntoNotes generalize poorly. The importance of mention detection for CR generalization is further highlighted by Gandhi et al. (2023), who show that solely annotating mentions is sufficient and more efficient for adapting pre-trained coreference models to new domains (in comparison to annotating coreference chains). Similarly, GPT-4 struggles with zero-/few-shot mention prediction, but given ground-truth mentions, its CR performance is competitive with the best supervised models. Given these observations, we hypothesize that current CR models, including large language models, generalize well at mention clustering but struggle to generalize at mention detection due to the idiosyncrasies of different domains/benchmarks.
We put forth an alternative formulation of the CR task where the entities of interest are provided as additional input. Assuming entities to be part of the input offloads the required domain adaptation from training to inference. Specifically, we propose the task of Major Entity Identification (MEI), where we assume the major entities of the narrative, defined as the most frequently occurring entities, to be provided as input along with the text (see Fig. 1). We focus on major entities for the following reasons: (a) specifying the major entities of a narrative is intuitively easier, and (b) a handful of major entities often dominate any discourse. Table 1 shows that in LitBank roughly 6% of entities (490 of 7927) contribute 60% of the mentions (16985 of 29103).

Table 1: Comparing CR and MEI. MEI has fewer but larger clusters, and a smaller mean antecedent distance (Mean ant. dist.). Our formulation's frequency-based criterion for deciding major entities means that singleton mentions are typically not a part of MEI.

Statistics            LitBank CR   LitBank MEI   FantasyCoref CR   FantasyCoref MEI
# of Mentions         29103        16985         56968             35938
# of Non-singletons   23340        16985         56968             35938
Mean ant. dist.       55.31        36.95         57.58             30.24
# of Clusters         7927         490           5829              942
Avg. cluster size     3.67         34.66         9.77              38.15

To test the generalizability of MEI, we adapt two literary CR benchmarks, namely LitBank and FantasyCoref, and a state-of-the-art coreference model to MEI. While there is a big gap in CR performance between in- and out-of-domain models, we show that this performance gap is much smaller for MEI (Section 5.1). To test this hypothesis further, we evaluate large language models (LLMs) for MEI in a few-shot learning setup. On CR, LLMs are shown to struggle with mention detection and to perform worse than supervised models. Contrary to this, on MEI, top LLMs (e.g., GPT-4) are only slightly behind supervised models (Section 5.2). These experiments in the supervised and few-shot settings demonstrate that the MEI task is more generalizable than CR.

Additionally, we argue that MEI is easier to evaluate than CR. The MEI task can be viewed as a classification task in which any text span either refers to one of the input entities or to the null class (minor entities and other non-mention spans). The classification formulation of MEI allows for the use of classification-based metrics that are more robust than the current CR metrics. Furthermore, MEI, by its definition, disregards the insignificant and smaller clusters known to inflate the CR metrics. As an aside, formulating MEI as a classification task allows for trivial parallelization across candidate spans (Appendix A.1).

Finally, MEI's explicit mapping of mentions to predefined entities improves its usability over CR in downstream applications that focus on mentions of specific entities. MEI effectively replaces the tailored heuristics employed to extract the CR cluster(s) referring to entities of choice in such applications (entity understanding, sentiment and social dynamics analysis).

2 Task Formulation

Notation. For a document $d$, let $\mathcal{E} = \{e_j\}_{j=1}^{L}$ be the set of $L$ major entities that we wish to identify. We define $\mathcal{M}_{all}$ as the set of all mentions that could refer to any entity and subsequently $\mathcal{M}_j \subseteq \mathcal{M}_{all}$ as the set of mentions that refer to a major entity $e_j$. Furthermore, we denote $\mathcal{M} = \bigcup_j \mathcal{M}_j$ as the set of mentions that refer to one of the major entities, while mentions that do not correspond to any major entity are designated as $\mathcal{M}_{other} = \mathcal{M}_{all} \setminus \mathcal{M}$.
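As a concrete reading of this notation, the short sketch below builds $\mathcal{M}_j$ and $\mathcal{M}_{other}$ from gold mention annotations; the frequency-based choice of $\mathcal{E}$ (top-$k$ entities with a minimum mention count, as used later in Section 5) is mirrored here purely for illustration.

```python
from collections import Counter, defaultdict

def build_mei_sets(mentions, k=5, min_mentions=5):
    """mentions: list of (span, entity_id) pairs covering M_all for one document.

    Returns the major entities E, the per-entity mention sets M_j, and M_other."""
    counts = Counter(entity for _, entity in mentions)
    # Major entities: the k most frequent entities with at least min_mentions mentions.
    major = [e for e, c in counts.most_common(k) if c >= min_mentions]
    m_j, m_other = defaultdict(list), []
    for span, entity in mentions:
        if entity in major:
            m_j[entity].append(span)   # mention of major entity e_j
        else:
            m_other.append(span)       # M_other = M_all \ M
    return major, dict(m_j), m_other
```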
Task formulation. In MEI, the input consists of the document $d$ and designative phrases $P = \{p(e_j)\}_{j=1}^{L}$, where $p(e_j)$ succinctly represents the entity $e_j$. For example, in Fig. 1, the phrases "Aladdin" and "Mustapha" uniquely represent Aladdin and his father, who appear in "Aladdin And The Wonderful Lamp". Note that in CR, the designative phrases $P$ are not part of the input.

In contrast to CR's clustering foundations, MEI starts with a prior for each entity (the designative phrase) and can be formulated as open-set classification, where every mention is either classified as one of the major entities or ignored. Formally, MEI aims to assign each mention $m \in \mathcal{M}_j$ to $e_j$ and each mention $m \in \mathcal{M}_{other}$ to $\emptyset$, a null entity.

3 Supervised MEI models

We propose MEIRa, Major Entity Identification via Ranking, which draws inspiration from the entity-ranking formulation and maintains an explicit representation for entities. The MEIRa models consist of three steps: encoding the document, proposing candidate mentions, and an identification (id) module that tags mentions with major entities or the null entity.

Document encoding is performed using a Longformer-Large model, $\phi$, that we finetune for the task. Mentions (or spans) are encoded as $\mathbf{m}_i = \phi(m_i, d)$ by concatenating the first, last, and an attention-weighted average of the token representations within the mention span. In MEI, an additional input is the set of designative phrases $P$ for the major entities. Since each phrase is derived from the document itself, we also obtain its encoding using the backbone: $\mathbf{e}_j = \phi(p(e_j), d)$.

Mention detection. Similar to prior efforts, we use a mention proposal network that predicts high-scoring candidate mentions. This step finds all mentions $\mathcal{M}_{all}$ and not just the ones corresponding to the major entities $\mathcal{M}$. Training a model to detect only mentions of major entities would confuse it, leading to poor performance.

Identification module. As illustrated in Fig. 2, we initialize a working memory $\mathcal{E}_W = [\mathbf{e}_j]_{j=1}^{L}$ as a list of the $L$ major entities based on their designative phrase representations. Given a mention $m_i$, the id module computes the most likely entity as:

$$[s_i^*, e_i^*] = \max_{j=1 \ldots L} f([\mathbf{m}_i, \mathbf{e}_j, \chi(m_i, e_j)]), \qquad (1)$$

where $f(\cdot)$ is an MLP that predicts the score of tagging mention $m_i$ with the entity $e_j$, and $\chi(m_i, e_j)$ encodes metadata. The output $s_i^*$ corresponds to the highest score, and $e_i^*$ is the top-scoring entity. Based on the score, $m_i$ is assigned to:

$$y(m_i) = \begin{cases} e_i^* & \text{if } s_i^* > \tau, \\ \emptyset & \text{otherwise}, \end{cases} \qquad (2)$$

where $\tau$ is a threshold (set to 0 in practice). The metadata $\chi(m_i, e_j)$ contains a distance (position) embedding representing the log distance between the mention $m_i$ and the last tagged instance of the entity $e_j$. If no mention is yet associated with the entity, we use a special learnable embedding.

Updates to the working memory. We investigate two approaches: (i) MEIRa-Static (MEIRa-S): as the name suggests, the working memory $\mathcal{E}_W$ of entity representations remains constant ($\mathcal{E}_W(0)$) and is not updated with new mention associations, which makes the approach highly parallelizable. (ii) MEIRa-Hybrid (MEIRa-H): similar to traditional CR, this variation maintains a dynamic working memory $\mathcal{E}_W$, which is updated with every new mention-id association. Specifically, assuming $m_i$ is assigned to $e_j^*$, the working memory is updated using a weighted mean operator $g$ as $\mathbf{e}_j \leftarrow g(\mathbf{e}_j, \mathbf{m}_i)$, similar to Toshniwal et al. (2020). To prevent error accumulation, we evaluate the mentions against both $\mathcal{E}_W$ and the initial entity representations ($\mathcal{E}_W(0)$), then compute the average score. This hybrid approach reaps benefits from both the initial clean designative phrases and the dynamic updates.

Following Toshniwal et al. (2020), the mention detection and identification modules are trained end-to-end using separate cross-entropy loss functions.

Figure 2: Identification module of MEIRa. A mention encoding $\mathbf{m}_i$ is concatenated with each entity's embedding in $\mathcal{E}_W$ and the metadata $\chi(m_i, e_j)$. The network $f$ scores the likelihood of assigning $m_i$ to each major entity. If the highest score $s_i^*$ is above the threshold $\tau$, $m_i$ is associated with the highest-scoring major entity $e_i^*$; otherwise it is discarded. In MEIRa-S, the entity memory $\mathcal{E}_W$ remains static. For MEIRa-H (blue path), the assigned entity's working memory is updated, and both the static (top half) and updated working memory (bottom half) are utilized to compute a final score.
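A minimal PyTorch sketch of the static (MEIRa-S) scoring step in Eqs. (1)-(2) is given below. The hidden sizes, the shape of the MLP $f$, and the pre-computed metadata vector are assumptions; in the actual model, the mention and entity vectors come from the finetuned Longformer described above.

```python
import torch
import torch.nn as nn

class IdentificationModule(nn.Module):
    """Scores a mention against each major entity and applies the threshold rule of Eq. (2)."""

    def __init__(self, hidden_dim=1024, meta_dim=20):
        super().__init__()
        # f(.): MLP scoring a concatenated (mention, entity, metadata) triple.
        self.scorer = nn.Sequential(
            nn.Linear(2 * hidden_dim + meta_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, mention, entities, metadata, tau=0.0):
        # mention: (H,); entities: (L, H); metadata: (L, meta_dim)
        L = entities.size(0)
        triples = torch.cat([mention.expand(L, -1), entities, metadata], dim=-1)
        scores = self.scorer(triples).squeeze(-1)        # one score per major entity
        best_score, best_entity = scores.max(dim=0)      # s*_i and e*_i of Eq. (1)
        assigned = best_entity.item() if best_score > tau else None  # None = null entity
        return assigned, best_score
```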
4 Few-shot MEI with LLMs

We propose a prompting strategy to leverage LLMs for MEI, addressing their challenges in CR.

Mention detection challenges. CR or MEI can be addressed using separate few-shot prompting strategies for mention detection and mention clustering/identification. However, Le and Ritter (2023) found that this strategy faces significant challenges with mention detection, performing worse than a deterministic mention detector. Thus, they assume access to an oracle mention detector and focus their study on evaluating the linking capabilities of LLMs. An alternative is to use an external supervised mention detector instead of the oracle. However, this requires annotated training data and may not align with a true few-shot LLM prompting paradigm. Additionally, supervised mention detectors often fail to generalize across CR datasets due to annotation variability.

MEI with LLMs. We demonstrate that transitioning from CR to MEI addresses this gap in mention detection and propose an end-to-end, few-shot prompting approach for MEI. Inspired by Dobrovolskii (2021), we develop a prompting strategy that first performs MEI at the word level (rather than the span level), followed by a prompt to retrieve the span corresponding to each word. In addition to the document $d$ and the set of phrases $P$, we also provide entity identifiers (e.g., #1, #2) to the LLM. We will use the following example:

Document: That lady in the BMW is Alice's mom.
Major Entities: 1. Alice; 2. Alice's mother.

Prompt 1. Word-level MEI. Mention detection with LLMs is challenging due to the frequent occurrence of nested mentions. We overcome this by prompting the LLM to tag each word. Specifically, through few-shot examples, we ask the LLM to detect and tag the syntactic heads¹ (e.g., lady, Alice, mom) of mentions that refer to the major entities. Other words are left untagged (implicitly assigned to $\emptyset$, the null entity). To create the few-shot examples, a contiguous set of words annotated with the same entity is considered a span, and its syntactic head is extracted using spaCy. The ideal output for the example above is: "That lady#2 in the BMW is Alice#1's mom#2." Note that, even though the span "BMW" might be a valid mention, it is not annotated, as it does not refer to one of the major entities. The exact prompt used for this is provided in the Appendix, Table 9.

¹ A syntactic head of a phrase is a word (lady) that is central to the characteristics of the phrase (The lady in the BMW).
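The head extraction mentioned above relies on spaCy's dependency parse, where the root token of a span is its syntactic head. A minimal sketch follows; the pipeline name en_core_web_sm is an assumption.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # any English pipeline with a dependency parser

def span_head(text, start_char, end_char):
    """Return the syntactic head word of a mention span given by character offsets."""
    doc = nlp(text)
    span = doc.char_span(start_char, end_char, alignment_mode="expand")
    return span.root.text

print(span_head("That lady in the BMW is Alice's mom.", 0, 20))  # -> "lady"
```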
Prompt 2. Head2Span retrieval. The entity-tagged heads are passed to the Head2Span (H2S) module, along with the document, to retrieve the full span. The prompt consists of the document pre-annotated with the positions of the heads, where each candidate head word is followed by a "#" and is instructed to be replaced by the complete span (including any determiners and adjectives). For the input:

That lady# in the BMW is Alice#'s mom#.

the expected ideal output is

That lady (That lady in the BMW) in the BMW is Alice (Alice's)'s mom (Alice's mom).

Table 10 in the appendix shows the H2S prompt.

Preserving structure. We pose MEI as a structured generation task, prompting LLMs to reproduce documents and generate MEI tags at specific locations. Proprietary models like GPT-4 generally reproduce documents faithfully, but for rare failures we use the Needleman-Wunsch algorithm to align documents and extract tags. In the case of open-source models, we employ regular-expression-based constrained decoding with the outlines library².

² https://outlines-dev.github.io/outlines/

5 Experiments

Datasets. We evaluate on three literary datasets chosen for their longer length and identifiable major entities, particularly key narrative elements such as characters or plot devices. Table 1 compares statistical aspects of MEI and CR, revealing that MEI features fewer clusters (entities) but larger cluster sizes (more mentions per cluster).

(i) LitBank annotates coreference in 100 literary texts, each averaging around 2000 words. Following prior work, we utilize the initial cross-validation split, dividing the documents into training, validation, and test sets with an 80:10:10 ratio.

(ii) FantasyCoref provides OntoNotes-style³ coreference annotations for 211 documents from Grimm's Fairy Tales, with an average length of approximately 1700 words. The dataset includes 171 training, 20 validation, and 20 test documents.

(iii) Additional Fantasy Text (AFT) provides annotations for long narratives: (a) Aladdin (6976 words), (b) Ali Baba and the Forty Thieves (6911 words), and (c) Alice in Wonderland (13471 words).

³ The exact guidelines are documented here.

Metrics. In contrast to CR, MEI facilitates the use of simple classification metrics. We define standard precision and recall for each major entity, considered as an individual class of its own. For a dataset $D = \{d_1, \ldots, d_{|D|}\}$, the evaluation metrics are defined as follows:

$$\text{Macro-F1} = \frac{\sum_{d \in D} \sum_{e_j \in \mathcal{E}_d} \text{F1}(e_j)}{\sum_{d \in D} |\mathcal{E}_d|}, \qquad (3)$$

$$\text{Micro-F1} = \frac{1}{|D|} \sum_{d \in D} \frac{\sum_{e_j \in \mathcal{E}_d} \text{F1}(e_j) \cdot |\mathcal{M}_j|}{\sum_{e_j \in \mathcal{E}_d} |\mathcal{M}_j|}. \qquad (4)$$

Macro-F1 is the average F1-score of entities across the dataset, while Micro-F1 is the frequency-weighted F1-score of entities within a document, averaged across the dataset.
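The following is a direct, minimal reading of Eqs. (3)-(4) in Python; it assumes the per-entity F1 scores and mention counts have already been computed for every document.

```python
def macro_micro_f1(per_doc_scores):
    """per_doc_scores: one list per document of (f1, mention_count) pairs, one pair per major entity."""
    # Macro-F1 (Eq. 3): average entity F1 over all (document, entity) pairs in the dataset.
    all_f1 = [f1 for doc in per_doc_scores for f1, _ in doc]
    macro = sum(all_f1) / len(all_f1)
    # Micro-F1 (Eq. 4): frequency-weighted entity F1 within each document, averaged over documents.
    micro = sum(
        sum(f1 * n for f1, n in doc) / sum(n for _, n in doc) for doc in per_doc_scores
    ) / len(per_doc_scores)
    return macro, micro
```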
Major entity selection. We select as major entities the top-k entities ranked by frequency of occurrence. We use k = 5 for LitBank and FantasyCoref after visualizing the frequency plots of their training sets. For the longer documents in AFT, we select up to 9 entities to ensure coverage of all key entities from the story. We also enforce that every entity $e_j \in \mathcal{E}$ has a mention count $|\mathcal{M}_j| \geq 5$. We derive the representative span for each selected $e_j$ from the set of mentions $\mathcal{M}_j$ by selecting the most commonly occurring name or nominal mention.

Implementation details. Supervised models: Model hyperparameters are derived from Toshniwal et al. (2021). To ensure consistent performance across different numbers of target entities, we randomly select a subset of major entities at each training iteration (for more details, see Appendix A.2). All supervised models were trained five times with different random seeds, and we present aggregated results as the mean and standard deviation.

LLMs: We follow a few-shot prompting mechanism across the setups and experiments. Prompts that perform referential tasks consist of 3 examples of 6 sentences each. These 3 examples contain a mixture of narrative styles (narration, dialogue), types of entities (major, non-major), categories of mentions (names, nominals, pronouns), and plurality. Additionally, before producing the MEI output, we ask the LLM to describe each major entity briefly; we find that this additional step improves performance. For the H2S prompt, we provide 9 sentences as examples, balancing the number of pre- and post-modifiers to the head. All examples were selected from LitBank's train set and kept constant throughout the experiments. We set the temperature to 0 for all the models to ensure consistent and reproducible outputs.

5.1 Experiments: Supervised Models

Baselines. We train the fast-coref model for CR and perform the following three inference-time adaptations for MEI:

Coref-ID: fast-coref uses active lists of entity representations, resolving coreference by associating mentions with existing clusters or generating new ones. During inference, we disable the cluster-creation step and pre-fill the entity list with the encoded vector representations of the major entities. Hence, all detected mentions either get mapped to one of the major entities or are discarded.

Coref-Cosine Map (Coref-CM): Since coreference clusters obtained from fast-coref lack explicit entity association, we employ the Kuhn-Munkres (KM) algorithm to find the optimal matching cluster for each major entity. The cost matrix uses the cosine similarity between the encoded representations of the major entities and the predicted cluster embeddings, both derived from fast-coref.

Coref-Fuzzy Map (Coref-FM): This method uses the KM algorithm to derive optimal mappings by constructing a cost matrix from accumulated fuzzy string-matching scores between the designative phrases and the predicted clusters' mention strings.
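For the two mapping baselines, the Kuhn-Munkres step can be realized with SciPy's Hungarian-algorithm implementation; the sketch below covers the Coref-CM case and assumes the entity and cluster embeddings have already been extracted from fast-coref.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_clusters_to_entities(entity_embs, cluster_embs):
    """Assign each major entity its most similar predicted cluster (Coref-CM-style)."""
    # Cosine-similarity matrix: rows are major entities, columns are predicted clusters.
    e = entity_embs / np.linalg.norm(entity_embs, axis=1, keepdims=True)
    c = cluster_embs / np.linalg.norm(cluster_embs, axis=1, keepdims=True)
    similarity = e @ c.T
    # linear_sum_assignment minimizes total cost, so negate the similarities.
    rows, cols = linear_sum_assignment(-similarity)
    return dict(zip(rows.tolist(), cols.tolist()))  # entity index -> cluster index
```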
Supervised results. In this experiment, we train MEIRa and the baseline models on the joint training set of LitBank and FantasyCoref. Subsequently, we assess their performance on the individual test sets, with results summarized in Table 2. Overall, MEIRa models consistently outperform the baselines on both metrics while also exhibiting better stability with lower variance. The considerable variance observed in the performance of the baseline methods across all experiments underscores the non-trivial nature of identifying the clusters corresponding to major entities within the output clusters provided by the CR algorithms. MEIRa-H and MEIRa-S exhibit competitive parity on FantasyCoref (children's stories), while MEIRa-H edges ahead on the LitBank dataset, showcasing its adaptability to elaborate sentence constructions.

Table 2: Results for models trained jointly on FantasyCoref and LitBank.

          FantasyCoref               LitBank
Model     Macro-F1    Micro-F1       Macro-F1    Micro-F1
Coref-ID  72.5 ±2.2   78.8 ±2.7      79.7 ±2.7   80.6 ±3.7
Coref-CM  77.7 ±1.8   82.4 ±2.2      74.1 ±2.5   76.0 ±3.0
Coref-FM  77.9 ±1.7   83.2 ±2.2      77.4 ±2.3   80.6 ±4.7
MEIRa-S   80.7 ±0.6   84.9 ±0.5      80.8 ±0.8   81.8 ±1.0
MEIRa-H   80.3 ±1.4   84.3 ±2.0      82.3 ±1.2   83.2 ±2.5

Generalization across datasets. To evaluate the generalization capabilities of MEIRa and the baseline models, we train them on the OntoNotes dataset and then test their performance on LitBank and FantasyCoref. The results are presented in Table 3. When compared with Table 2, we observe a significant performance drop across the baseline models (e.g., for Coref-ID, the average Micro-F1 score drops from 80.6 to 57.7 on LitBank). The performance gap for the baseline models is more pronounced on LitBank than on FantasyCoref because LitBank's annotation strategies differ more significantly from those of OntoNotes. These observations align with previous work showing poor generalization of models trained for CR. In contrast, MEIRa models recover most of the underlying performance on both datasets (MEIRa-H drops only slightly from 83.2 to 78.6 on LitBank Micro-F1), demonstrating that MEI is a more adaptable task, bringing robustness to varying annotation strategies.

Table 3: Results for models trained on OntoNotes.

          FantasyCoref               LitBank
Model     Macro-F1    Micro-F1       Macro-F1    Micro-F1
Coref-ID  63.4 ±1.8   69.5 ±3.6      58.0 ±2.4   57.7 ±1.0
Coref-CM  72.8 ±0.3   76.5 ±0.5      61.0 ±5.9   61.2 ±5.2
Coref-FM  71.2 ±1.5   75.2 ±1.3      66.1 ±2.1   67.1 ±3.9
MEIRa-S   75.7 ±1.5   78.5 ±1.2      74.6 ±1.1   74.7 ±1.6
MEIRa-H   74.7 ±1.0   78.5 ±0.8      77.2 ±1.9   78.6 ±2.7

Long documents. Table 4 presents results on the AFT dataset for the models trained on the combined training set of LitBank and FantasyCoref. MEIRa models significantly outperform the baseline models, with MEIRa-H gaining 11.7% in Macro-F1 over the best baseline. The results demonstrate the efficacy of MEIRa models at resolving key entities in longer narratives.

Table 4: Results on the AFT dataset.

Model     Macro-F1    Micro-F1
Coref-ID  68.1 ±5.9   78.7 ±6.1
Coref-CM  71.1 ±2.8   82.4 ±4.2
Coref-FM  71.1 ±4.7   83.2 ±4.7
MEIRa-S   81.6 ±1.4   88.8 ±1.3
MEIRa-H   82.8 ±1.1   89.5 ±1.0

Computational performance. MEIRa-S supports parallel batched processing since it does not update the working memory after associating mentions, i.e., the mentions need not be processed sequentially from left to right. Hence, post mention detection (common to all models), MEIRa-S is about 25× faster than fast-coref when assessed across the LitBank, FantasyCoref, and AFT datasets on an NVIDIA RTX 4090 (see Fig. 3 in the appendix). Additionally, given the model's small memory footprint during inference, the entire process can also be parallelized across chunks of documents, making it extremely efficient. Hence, we pose MEIRa-S as a faster yet competitive alternative to MEIRa-H (which requires dynamic updates and has similar computational performance to fast-coref).

5.2 Experiments: Few-shot prompting

Models. We experiment with GPT-4⁴, GPT-3.5⁵, Code Llama-34B (Rozière et al., 2024), Mistral-7B, and Llama3-8B⁶. Following Le and Ritter (2023), we use the instruction-tuned versions of the open-source models. These models were chosen for their ability to handle the extended context required for our benchmarks.

⁴ Specifically, gpt-4-1106-preview.
⁵ Specifically, gpt-3.5-turbo-1106.
⁶ https://ai.meta.com/blog/meta-llama-3/

Table 5: Few-shot LLM prompting results assuming the availability of ground-truth mentions.

                FantasyCoref               LitBank
Model           Macro-F1    Micro-F1       Macro-F1    Micro-F1
MEIRa-H         88.5        91.0           86.1        85.4
GPT-4           90.7        92.0           88.8        91.6
GPT-3.5         65.6        70.4           74.3        75.8
Code Llama-34B  63.4        70.8           68.3        72.7
Llama3-8B       50.5        57.8           46.3        52.1
Mistral-7B      62.1        71.1           61.2        70.9
5.2.1 Linking Performance w/ Gold Mentions

We first evaluate all the models assuming the availability of an oracle mention detector. The experimental configuration is aligned with that of Le and Ritter (2023), with the distinction that we assess the models on the MEI task rather than on CR. The prompt used in our setup is provided in Table 11 of the Appendix. For comparison, we also perform inference on golden mentions with MEIRa-H.

The results in Table 5 show that GPT-4 surpasses the supervised MEIRa-H model in this setup. Among LLMs, GPT-4 is easily the best-performing model. Code Llama-34B performs best among the open-source models, closely followed by Mistral-7B. While Code Llama-34B is tailored to the code domain, it surprisingly outperforms strong LLMs suited for natural language. This result corroborates a similar finding by Le and Ritter (2023) for CR and related evidence regarding code pretraining aiding entity tracking. We find that Code Llama-34B performs close to GPT-3.5 on FantasyCoref, though a sizable gap remains on LitBank, potentially due to its linguistic complexity.

5.2.2 MEI Task Performance with LLMs

In this section, we present the results for the end-to-end MEI task using LLMs. We compare all the approaches from Section 4 and relevant baselines, with the results summarized in Table 6. To limit the combinations of LLMs and approaches in our experiments, we first compare all the approaches in tandem with GPT-4 and then present results for the best-performing approach with the other LLMs.

Table 6: Results on LLMs with different mention detection and linking strategies.

                            FantasyCoref               LitBank
Model                       Macro-F1    Micro-F1       Macro-F1    Micro-F1
MEIRa-H                     80.3        84.3           82.3        83.2
GPT-4 w/ Ext det            80.1        82.2           78.6        83.9
GPT-4 with varying prompting strategies
  Single prompt             63.0        66.2           64.4        72.8
  Two-stage prompt          70.5        74.9           76.5        81.3
Word-level MEI + spaCy H2S
  GPT-4                     77.4        79.4           82.5        85.5
  GPT-3.5                   50.1        54.4           60.1        63.1
  Code Llama-34B            19.4        23.4            9.4        16.2
  Llama3-8B                 29.2        32.8           24.5        27.1
  Mistral-7B                28.0        30.9           14.9        15.3

The first, straightforward approach of using a Single Prompt to retrieve all the mentions of major entities in a single pass results in a significant performance drop compared to MEIRa-H (prompt in Table 12 of the Appendix). The reason is that while GPT-4 outperforms MEIRa-H at mention linking, its mention detection performance, especially with nested mentions, is much worse than MEIRa-H's.⁷

To further underscore the importance of mention detection, we also compare against the baseline GPT-4 w/ Ext det, which utilizes an external pre-trained mention detector followed by prompt-based linking (prompt in Table 11 of the Appendix). We train the mention detector on the PreCo dataset; it achieves 93.8% recall and 53.1% precision on the combined FantasyCoref and LitBank validation sets. We observe that GPT-4 w/ Ext det is almost at par with the fully supervised MEIRa-H, again highlighting the strong mention linking capabilities of GPT-4.

⁷ The failure to detect nested mentions is despite best efforts to provide illustrative examples in the few-shot prompt. Le and Ritter (2023) report similar findings with earlier GPT versions.
Next, we present the results of our proposed Two-stage prompt, motivated by the Single prompt method's failure with nested mentions. The first prompt asks GPT-4 to perform word-level MEI, limiting the task to syntactic heads only. The second prompt then maps the identified syntactic heads to full mention spans. The results strongly validate our proposed approach, with a relative improvement of more than 7% over the Single prompt method across all metrics and datasets.

We also explore replacing the second step, i.e., head-to-span (H2S) retrieval, with an external tool. Specifically, we invert spaCy's span-to-head mapping to obtain a head-to-span retriever.⁸ GPT-4 improves significantly in this setup, outperforming even the supervised model on LitBank. Given the strong performance of GPT-4 + spaCy H2S, we evaluate the open-source LLMs only in this setting. We observe a wide gap between GPT-4 and the open-source models. Llama3-8B surpasses the other open-source models on both datasets, whereas the larger Code Llama-34B underperforms on the end-to-end task. This contrasts with the findings of the idealized golden-mention setting, which assesses purely the model's linking capabilities. The discrepancy between these results highlights the importance of evaluating in the realistic end-to-end setup.

⁸ For the test-set gold mentions of the two datasets, there were only two cases where spans had the same head. We handled these two cases manually.

5.3 Error Analysis

We classify MEI errors into five categories: (1) Missing Major: not detecting a mention $m \in \mathcal{M}$. (2) Major-Major: assigning a mention $m \in \mathcal{M}_j$ to any other major entity in $\mathcal{E} \setminus e_j$. (3) Major-Other: assigning a mention $m \in \mathcal{M}$ to $\emptyset$. (4) Other-Major: assigning a mention $m \in \mathcal{M}_{other}$ to any major entity in $\mathcal{E}$. (5) Extra-Major: detecting extra mentions $m \notin \mathcal{M}_{all}$ and assigning them to any major entity in $\mathcal{E}$.
Results combined over the LitBank and FantasyCoref test sets are presented in Table 7. Missing Major and Extra-Major contribute most of the errors for GPT-4, highlighting the scope for improvement in mention detection and span retrieval. Mention detection also remains a challenge for MEIRa-H, with the model making most of its mistakes in the Extra-Major category. GPT-4 distinguishes major entities more clearly than MEIRa-H but tends to over-associate other mentions with major entities, resulting in higher Other-Major and Extra-Major errors. Note that GPT-4 has zero errors in the Major-Other category due to the prompt design, which only allows annotating major entities. Examples of these errors are visualized in Table 8.

Table 7: Breakdown of errors by MEIRa-H and GPT-4 on the combined LitBank and FantasyCoref test set.

Error Type      MEIRa-H   GPT-4
Missing Major   162       793
Major-Major     210       154
Major-Other     243       0
Other-Major     200       516
Extra-Major     461       896
Total           1276      2359

Table 8: Qualitative analysis showcasing different errors made by GPT-4 and MEIRa-H. Errors are color-coded as follows: Missing Major, Others-Major, Extra-Major, Major-Major, and Major-Other.

Golden Mentions: Presently [a small boy]_0 came walking along the path – [an urchin of nine or ten]_0 ... [Winterbourne]_1 had immediately perceived that [he]_1 might have the honor of claiming [him]_2 as a fellow countryman. "Take care [you]_2 don't hurt [your]_2 teeth," [he]_1 said, paternally ... [My]_2 mother counted them last night, and one came out right afterwards. She said she'd slap [me]_2 if any more came out. [I]_2 can't help it. It's this old Europe ... If [you]_2 eat three lumps of sugar, [your]_2 mother will certainly slap [you]_2," [he]_1 said. "She's got to give [me]_2 some candy, then," rejoined [[his]_1 young interlocutor]_2.

GPT-4 Output: Presently [a small boy]_0 came walking along the path – [an urchin of nine or ten]_0 ... [Winterbourne]_1 had immediately perceived that [he]_1 might have the honor of claiming [him]_2 as a fellow countryman. "Take care you don't hurt your teeth," [he]_1 said, paternally ... [My]_2 mother counted them last night, and one came out right afterwards. [She]_2 said [she]_2'd slap [me]_2 if any more came out. [I]_2 can't help it. [It]_2's this old Europe ... If you eat three lumps of sugar, [your]_2 mother will certainly slap [you]_2," [he]_1 said. "[She]_2's got to give [me]_2 some candy, then," rejoined [his]_2 young interlocutor.

MEIRa-H Output: Presently a small boy came walking along the path – [an urchin of nine or ten] ... [Winterbourne]_1 had immediately perceived that [he]_1 might have the honor of claiming [him]_2 as a fellow countryman. "Take care [you]_2 don't hurt [your]_2 teeth," [he]_1 said, paternally ... [My]_2 mother counted them last night, and one came out right afterwards. She said she'd slap [me]_2 if any more came out. [I]_2 can't help it. It's this old Europe ... If [you]_2 eat three lumps of sugar, [your]_2 mother will certainly slap [you]_2," [he]_1 said. "She's got to give [me]_2 some candy, then," rejoined [[his]_1 young interlocutor]_2.

6 Related Work

Neural models for CR have become the de facto choice in supervised settings. Efforts to enhance model efficiency include reducing candidate mentions to word-level spans and using single dense representations for entity clusters.

Generalization in CR remains a lingering problem. Current solutions include feature addition, joint training, and active learning. Rather than relying on additional training data, we argue for an alternative formulation where the burden of domain adaptation is offloaded from training to inference.

Evaluation of LLMs for CR has largely been conducted in limited settings, such as the sentence-level Winograd Schema Challenge (WSC), clinical pronoun resolution, and instance-level Q&A. Le and Ritter (2023) conducted the first document-level evaluation of LLMs for CR but assumed an oracle mention detector. In contrast, we conduct end-to-end evaluations.

Character Identification deals with specific characters from transcripts of TV shows and trains a model tailored to these constrained inputs. Baruah and Narayanan (2023) introduced a dataset annotated with referent mentions of specific characters of interest. We differ from these works by adopting a generalized task formulation independent of annotation strategies and entity selection.

7 Conclusion

CR models are limited in their generalization capabilities owing to annotation differences and the general challenges of domain adaptation. We propose MEI as an alternative to CR, where the entities relevant to the input text are provided along with the text. Our experiments demonstrate that MEI is more suited to generalization than CR. Additionally, MEI can be viewed as a classification task that (a) enables the use of more robust classification-based metrics and (b) admits a trivially parallelizable model across document spans, which gives a 25× speedup over a comparable coreference model, making MEI more suitable for longer narratives.
Unlike CR, the formulation of MEI allows few-shot prompted LLMs to compete effectively with trained models. Our novel two-stage prompting and robust baseline methods empower top-performing LLMs like GPT-4 to achieve this. Our analysis indicates that this task holds promise for effectively evaluating the long-context referential capabilities of LLMs in an end-to-end manner.

8 Limitations

Major Entity Identification (MEI) is proposed as a generalizable alternative to the coreference resolution (CR) task, not as a replacement for CR. MEI limits itself to major entities and only caters to applications that are interested in a particular pre-defined set of entities. Our experiments follow certain thresholds that might not be universally applicable, and results and performance might vary slightly with this decision (see Appendix A.2). Our current few-shot prompting evaluations are limited to a few models that accommodate a large context window. Optimizing prompts and architecture to allow for piece-wise aggregation of outputs across chunks of documents is left for future work.
[ "Code Pretraining Improves Entity Tracking Abilities of Language Models", "GUMsley: Evaluating Entity Salience in Summarization for 12 English Genres", "Code Llama: Open Foundation Models for Code", "Are Large Language Models Robust Coreference Resolvers?", "Coreference Resolution through a seq2seq Transition-Based System", "Annotating Mentions Alone Enables Efficient Domain Adaptation for Coreference Resolution", "LingMess: Linguistically Informed Multi Expert Scorers for Coreference Resolution", "Large language models are few-shot clinical information extractors", "On Generalization in Coreference Resolution", "Word-Level Coreference Resolution", "OntoGUM: Evaluating Contextualized SOTA Coreference Resolution on 12 More Genres", "Moving on from OntoNotes: Coreference Resolution Model Transfer", "Incremental Few-shot Text Classification with Multi-round New Classes: Formulation, Dataset and System", "Conundrums in Entity Coreference Resolution: Making Sense of the State of the Art", "Learning to Ignore: Long Document Coreference with Bounded Memory Neural Networks", "Language Models are Few-Shot Learners", "Longformer: The Long-Document Transformer", "An Annotated Dataset of Coreference in English Literature", "An Entity-Driven Framework for Abstractive Summarization", "Rewarding Coreference Resolvers for Being Consistent with World Knowledge", "SpanBERT: Improving Pre-training by Representing and Predicting Spans", "Coreference Resolution with Entity Equalization", "Text Generation from Knowledge Graphs with Graph Transformers", "PreCo: A Large-scale Dataset in Preschool Vocabulary for Coreference Resolution", "Neural Models for Reasoning over Multiple Mentions Using Coreference", "Emotion Detection on TV Show Transcripts with Sequence-based Convolutional Neural Networks", "End-to-end Neural Coreference Resolution", "Lexical Features in Coreference Resolution: To be Used With Caution", "Character Identification on Multiparty Conversation: Identifying Mentions of Characters in TV Shows", "Which Coreference Evaluation Metric Do You Trust? A Proposal for a Link-based Entity Aware Metric", "Domain Adaptation with Active Learning for Coreference Resolution", "Error-Driven Analysis of Challenges in Coreference Resolution", "Towards Robust Linguistic Analysis using OntoNotes", "BLANC: Implementing the Rand index for coreference evaluation", "Algorithms for the Assignment and Transportation Problems", "Learning and Evaluating Character Representations in Novels", "FantasyCoref: Coreference Resolution on Fantasy Literature Through Omniscient Writer’s Point of View", "Overview of TAC-KBP2015 Tri-lingual Entity Discovery and Linking" ]
Instance-Level Dynamic LoRAs Composition for Cross-Task Generalization
Instance-Level Dynamic LoRAs Composition for Cross-Task Generalization

Abstract

Large language models perform well on tasks that have undergone instruction fine-tuning, but their performance on completely unseen tasks is often less than ideal. To overcome the challenge of cross-task generalization, task-level LoRA combination has been proposed; it does not require training a model for new tasks. Instead, it learns LoRA combination weights from a small number of samples to form the task model. However, task-level LoRA combination only utilizes a few task modules due to its reliance on the weight enumeration method, and it also overlooks the specificity of individual instances. We therefore propose an instance-level LoRA composition for cross-task generalization, which selects multiple appropriate task LoRAs for each input instance and dynamically determines the composition weights. Our experiments on publicly available datasets show that our method outperforms the typical method, LoraHub, on 16 out of 27 tasks. We release the source code at https://github.com/noname822/iLoraComp.git

1 Introduction

Currently, large language models (LLMs) demonstrate remarkable zero-shot learning capabilities on tasks that have undergone instruction tuning (Chung et al., 2022; Achiam et al., 2023; Touvron et al., 2023; AI@Meta, 2024). However, numerous studies have revealed that when encountering novel tasks outside their training distribution, these models often fail to exhibit satisfactory performance. Exploring strategies to enhance the cross-task generalization abilities of these massive language models, enabling them to adapt swiftly and accurately to diverse new tasks, has emerged as a pressing challenge.

Addressing the challenge of cross-task generalization has traditionally involved fine-tuning models for each task and in-context learning. However, these conventional approaches come with inherent limitations.

Figure 1: Previous task-level composition constructs a shared task model for all instances. The proposed instance-level composition constructs a unique task module for each instance.

Fine-tuning for every new task can be resource-intensive, demanding extensive data, storage, and computing power, which compromises flexibility. Although methods such as LoRA, falling under the delta-tuning approach, aim to adapt to specific tasks or domains by introducing smaller parameter updates while minimizing computation and storage costs, thus mitigating storage issues and enhancing flexibility, they still require backpropagation for precise output tuning, rendering them less cost-effective across multiple tasks. In-context learning, on the other hand, necessitates more input than zero-shot prompting to fully leverage the model's capabilities, indirectly increasing the computational resources needed for inference. To address the shortcomings of these methods and achieve efficiency and sustainability in multi-task, few-shot, and high-volume scenarios, approaches such as LoraHub have emerged. LoraHub rapidly adapts to unseen tasks by intelligently combining pre-trained low-rank adapters from other relevant tasks.
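To make the composition idea concrete, the following is a minimal sketch of LoraHub-style weighted LoRA merging, assuming each LoRA module is stored as per-layer low-rank factors A and B; the layer name, dimensions, and weights are illustrative rather than taken from the paper.

import torch

def merge_loras(lora_modules, weights):
    """Combine several LoRA modules by weighted-summing their per-layer
    low-rank deltas (B @ A) into a single delta per parameter name."""
    merged = {}
    for name in lora_modules[0]:
        merged[name] = sum(
            w * (lora[name]["B"] @ lora[name]["A"])   # rank-r update for this layer
            for w, lora in zip(weights, lora_modules)
        )
    return merged

# Toy example: two LoRA modules for one 8x8 linear layer, rank 2.
d, r = 8, 2
def make_lora():
    return {"layer0": {"A": torch.randn(r, d), "B": torch.randn(d, r)}}

library = [make_lora(), make_lora()]
delta = merge_loras(library, weights=[0.7, 0.3])

base_weight = torch.randn(d, d)
adapted_weight = base_weight + delta["layer0"]        # W' = W + sum_i w_i * B_i A_i

Because the merge happens in weight space, a single adapted forward pass serves any candidate weight vector, which is what makes searching over the weights without gradients relatively cheap.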
This method enhances model performance across di- 1\nverse tasks without increasing input requirements, striking a balance between performance and energy consumption. However, LoraHub also has room for improve- ment in terms of its effectiveness. Firstly, when selecting Lora modules from a trained Lora library for task adaptation composition, LoraHub’s current strategy is to randomly select modules from the library. This random selection may result in the inclusion of tasks that are either overly similar or completely unrelated, leading to significant perfor- mance variations under different random seeds for the same task, thus exhibiting poor stability. Sec- ondly, when training on instances, LoraHub does not consider the subtle nuances between individual instances, preventing the full utilization of the lim- ited instance data to capture the potential specificity of inputs, which in turn limits LoraHub’s perfor- mance. To address these two issues, we propose the following solutions: •To address the issue with the Lora module se- lection strategy, we adopt a selection method based on task similarity. By calculating the se- mantic similarity between the target task and the training sets of the available Lora mod- ules, we prioritize the combination of Lora modules that are most closely related to the current task, thereby enhancing the stability and effectiveness of the task-level adaptation. •To fully account for the unique characteris- tics of each input instance, we propose tai- loring a dedicated Lora module combination for each instance. By calculating the seman- tic similarity between the input instance and the training instances used to create the avail- able Lora modules, we select the most fitting instance-specific Lora combination as the pro- cessing strategy for that input. This approach effectively leverages the subtle nuances across different input instances. By employing the aforementioned improvements, our method has achieved a significant enhancement in inference stability. Additionally, compared to the original LoraHub, our approach has demonstrated a noticeable performance advantage. In our experi- ments, a total of 27 tasks were tested, and in these, our proposed method outperformed LoraHub on 16 of them. 2 Related work Instance-Based Generation for LLMs refers to a method that leverages dataset analysis to extract valuable instance, thereby enhancing the perfor- mance of a task. The introduction of large lan- guage models has since inspired numerous works, including Wiki-Chat , which have sought to augment language model capabil- ities through retrieval-based knowledge enhance- ment. This trend originated with RAG , which incorporates knowledge as prompts for in-context learning in LLM. Additionally, there are works that do not retrieve text as prompts, but instead retrieve delta-tuning modules, using these modules to generate prompts for answering questions, such as Knowledge Card . In this paper, we retrieval delta-tuning mod- ule by calculating the semantic similarity between instance and question using the method of DPR . Module Composition represents an endeavor to integrate diverse models, Consequently, tasks that retrieve model modules for composition have nat- urally emerged, such as MAC, SLM , Arrow, LoraRetriever , and Lora- Flow . While most methods adopt a simplistic processing approach for mod- els. 
These approaches strive to leverage retrieval methods by employing retrieval scores as weights during composition, thereby obviating the need for manual parameter tuning and facilitating immediate usage. Concurrently, methods such as MoELoRA exist that directly assign weights through backpropagation. LoraHub occupies an intermediate position, using gradient-free optimization. In comparison to previous work, our approach places a stronger emphasis on utilizing instances to obtain model modules that are more relevant to the given question.

3 Method

In this section, we first provide an overview of the process, followed by an explanation of how to identify appropriate task LoRA modules based on LoRA training data. Finally, we offer a detailed account of how to integrate the selected LoRA combinations with the input data.

3.1 Overview

We first train the upstream tasks $\mathcal{T}$ on the large model $M_\theta$ using each training set $T_i \in \mathcal{T}$ to obtain a LoRA module $L_i$, and collect these modules into the LoRA library $\mathcal{L}$. Next, we specify the hyperparameter $N$ as the number of LoRA modules to be composed. Each new task $T' \notin \mathcal{T}$ has an instance set $\mathcal{I}'$. For each instance $e_j \in \mathcal{I}'$, we find the $N$ closest LoRA modules in $\mathcal{L}$, denoted $\mathcal{L}_{e_j} = \{L_1, \dots, L_N\}$, and optimize a weight combination $\hat{w}_{e_j} = \{w_1, \dots, w_N\}$ using a gradient-free method $ng$. For a new question $Q$ belonging to the new task $T'$, we select the most suitable weight combination $\hat{w}_{e_j}$ based on the semantic similarity between $Q$ and $e_j$, and build a new LoRA module $\hat{L}_j$. Finally, we form the model $M_\phi = \mathrm{LoRA}(M_\theta, \hat{L})$ and use it for reasoning on $Q$.

3.2 LoRA Module Retrieval

To select the most suitable LoRA modules from $\mathcal{L}$ for composition, we identify the corresponding training set $T_i = \{(x_1, y_1), \dots, (x_n, y_n)\}$ for each $L_i \in \mathcal{L}$. We then derive the task embedding vector

$emb_{T_i} = \frac{1}{n} \sum_{k=1}^{n} M_s(x_k + y_k)$

using the sentence vectorization model $M_s$. Similarly, for the instance $e_j = (x_{e_j}, y_{e_j})$, we obtain its embedding vector $emb_{e_j} = M_s(x_{e_j} + y_{e_j})$. Following the approach of Mussmann and Ermon (2016) and Karpukhin et al. (2020b) in using cosine similarity as a measure of task similarity, we can identify the top $N$ tasks most similar to $e_j$. The cosine similarity is:

$\mathrm{similarity}(e_j, T_i) = \frac{emb_{e_j} \cdot emb_{T_i}}{\|emb_{e_j}\| \cdot \|emb_{T_i}\|}$   (1)

where $emb_{T_i}$ represents the embedding vector of the $i$-th task, and $\|\cdot\|$ denotes the Euclidean norm of a vector. By computing the cosine similarity between each task $T_i$ and the instance $e_j$, we select the top $N$ tasks with the highest similarity as the candidate set of similar tasks for $e_j$, denoted $\mathcal{L}_{e_j}$, and then collect all $\mathcal{L}_{e_j}$ into a set called $S_L$.

3.3 Instruct-Based Module Composition and Inference

To fine-tune the model $M_\theta$ to the state that best aligns with the instance $e_j = (x_j, y_j)$, we employ the gradient-free optimization method $ng$ to refine the weights. We perform a broad adjustment of the initial weights $w_{init}$ using all the instances for $T_i$, denoted $\mathcal{I}_i = \{e_1, \dots, e_n\}$. Then, we conduct a targeted adjustment using the instance-level LoRA set $\mathcal{L}_{e_j}$ corresponding to the specific instance $e_j$. The optimization process is encapsulated in the following formula:

$\hat{w}_{e_j} = ng(\mathcal{I}_i, \mathcal{L}_{e_j}, w_{init})$   (2)

Having aggregated the adjusted weights $\hat{w}_{e_j}$ for all $e_j$ into the set $S_{\hat{w}}$, we proceed to identify the $e_j$ that shares the most affinity with the input $x$. This is accomplished by calculating the cosine similarity between the instance's input embedding $M_s(x_j)$ for $e_j$ and the embedding vector $emb_x = M_s(x)$ for the input $x$.
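As a concrete illustration of the retrieval step in Eq. 1, the sketch below builds each task embedding as the mean sentence embedding of the module's training pairs and ranks tasks by cosine similarity to a single instance. It assumes the sentence-transformers package with a generic MiniLM checkpoint as a stand-in for the paper's all_datasets_v4_MiniLM-L6 encoder; the toy library and instance are invented for the example.

import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # stand-in for the paper's MiniLM variant

def task_embedding(train_pairs):
    # emb_{T_i} = (1/n) * sum_k M_s(x_k + y_k): mean embedding of the
    # concatenated input/output pairs from the LoRA module's training set.
    texts = [x + " " + y for x, y in train_pairs]
    return encoder.encode(texts).mean(axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def top_n_tasks(instance, task_embs, n=2):
    # Rank LoRA-library tasks by cosine similarity to one few-shot instance e_j.
    x, y = instance
    e = encoder.encode([x + " " + y])[0]
    scores = {tid: cosine(e, emb) for tid, emb in task_embs.items()}
    return sorted(scores, key=scores.get, reverse=True)[:n]

library = {
    "arithmetic": [("2 + 2 =", "4"), ("10 - 3 =", "7")],
    "entity_extraction": [("Find the person: Ada wrote code.", "Ada")],
}
task_embs = {tid: task_embedding(pairs) for tid, pairs in library.items()}
print(top_n_tasks(("5 + 7 =", "12"), task_embs, n=1))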
This analysis allows us to select the most suitable LoRA library fromSL, denoted as Lsuit, and its corresponding weights from Sˆw, denoted as ˆwsuit. Utilizing these components, we construct the optimal LoRA mod- uleˆL= ˆwsuitLsuit. As a result, we obtain the model Mϕ=LoRA (Mθ,ˆL)that is specifically tailored to the given input. This model is then em- ployed for inference, with the output expressed as y=Mϕ(x). 4 Experimental Setup LLM. We utilized the Flan-T5-Large model as our foundational large language model Mθfor experimentation pur- poses. Concurrently, we employed the compact all_datasets_v4_MiniLM-L6 model as our Ms, which was trained on a dataset comprising one bil- lion sentence pairs, excluding the BBH and flanv2 datasets that we utilized. This compact model effec- tively supported our sentence vectorization efforts. Dataset and Evaluation. We utilize the flanv2 dataset , which incorporates data from four mixed sources, as the training set for upstream tasks. It encompasses 264 distinct datasets, out of which we selected 97 for our pur- poses. We then employed the Lora modules trained on these datasets by Huang et al. (2024) as our repository of Lora models for potential selection. The Big-Bench Hard benchmark , with 27 tasks, offers a valid test for Mθas it was not trained on these datasets. We sampled 5 instances per task, used 20 LoRA modules for adaptation, and initiated with 40 steps of global optimization, followed by EM-based evaluation on the remaining data. 3\nBaseline Setup. To ensure our method’s credibility, we used our LoRA library to test LoraHub refined parameters for 40 steps as a baseline, averaging three runs for the final score (LoraHub avg). We compared scores using zero- shot, full fine-tuning (FFT), and in-context learning (ICL). For LoRA module selection, we conducted ablation experiments using the average embedding vector of five instances per task (BatchComp). In FFT, we maintained consistency by training with the same random seeds and 5 instances. We trained the model over 40 epochs with a learning rate of 3e-5 and batch size of 5. 5 Result And Discussion Method average average-3 FFT∗39.8 44.3 0-shot 24.4 27.4 ICL 30.9 34.8 LoraHub avg 34.0 38.1 BatchComp 34.7 39.0 Ours 35.6 40.0 Table 1: Experimental results on 27 tasks of BBH, the "average-3" has excluded three tasks with an accuracy of less than 10%, (*) represents the upper limit. Method FFT ICL 0-shot LoraHub BatchComp 7/18 18/3 16/8 13/12 Ours 11/16 19/2 18/7 16/8 Table 2: A/B vs. the baseline, "A" represents the num- ber of tasks where our proposed method performed bet- ter than the baseline method, while "B" represents the number of tasks where our proposed method performed worse than the baseline method. 5.1 Result The primary results are presented in Table 1 and Table 2, with detailed task scores in Appendix A. Our method significantly outperforms the zero-shot approach on 19 out of 27 tasks and the in-context learning (ICL) method on 18 tasks in terms of aver- age performance. Compared to ICL, our approach is more computationally efficient, requiring fewer tokens. Our modifications to LoraHub are also notably successful, with our method outperform- ing LoraHub’s random selection approach on 16 tasks. Crucially, our instance-level method exhibits a 0.9% performance enhancement over our task- level method in the ablation study, underscoring the efficacy of capturing input nuances through instance-specific adaptation. 
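For reference, the gradient-free weight search of Eq. 2 that underlies these results can be sketched as follows. The paper only names its optimizer ng; using the Nevergrad library (as LoraHub does) is our assumption, and the loss below is a placeholder for the real objective, which would merge the retrieved LoRAs with the candidate weights, run the adapted model on the few-shot instances, and return their answer cross-entropy.

import numpy as np
import nevergrad as ng   # gradient-free optimizers; assumed stand-in for the paper's 'ng'

N = 5  # number of retrieved LoRA modules to compose

def few_shot_loss(weights):
    # Placeholder objective for the demo: the real pipeline would score the
    # weight-merged model on the instance set I_i (e.g., cross-entropy of gold answers).
    target = np.linspace(0.0, 1.0, N)
    return float(np.sum((np.asarray(weights) - target) ** 2))

params = ng.p.Array(init=np.zeros(N)).set_bounds(-1.5, 1.5)
optimizer = ng.optimizers.NGOpt(parametrization=params, budget=40)  # 40 optimization steps, as above
best = optimizer.minimize(few_shot_loss)
print("learned composition weights:", best.value)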
However, our method still cannot compete with full fine-tuning (FFT), which holds a significant performance advantage over other methods on cer- tain highly structured tasks, such as "date under- standing" and "dyck language". The results suggest that only FFT enables the model to adequately learn the underlying structure and patterns required for these more complex and specialized tasks. 5.2 Discussion Ablition study. Our instance-level approach sig- nificantly outperforms the task-level BatchComp, which directly selects Lora modules without pair- ing questions to instances. BatchComp’s 0.7% im- provement over random LoraHub selection pales in comparison to our approach’s doubling of per- formance in the "disambiguation qa" task, likely due to our method’s superior ability to highlight the importance of key instances for task success. Retrieval method average BM25 25.6 DPR L2 Distance 34.3 DPR Cosine Similarity 35.6 Table 3: Result of different retrieval strategy Retrieval strategy. Our approach is closely tied to retrieval performance. If accurate retrieval is not achieved, properly aligning suitable instances with corresponding questions and matching them with the appropriate LoRA modules, the overall effectiveness will be reduced, as demonstrated in Table 3 like bm25. The results obtained from the DPR’s L2 distance and Cosine Similarity confirm the efficacy of DPR in instance-level fusion. 6 Conclusion Our work introduces two key enhancements to the LoraHub framework. The first is the incorporation of a method that indexes models trained on datasets using their semantic centroids, which improves Lo- raHub’s precision at the task level. The second is the introduction of instance-level adaptation, which leverages the distinctive features of individual in- stances to elevate the performance ceiling of the Lo- raHub approach. These complementary strategies work in synergy to bolster the model’s cross-task generalization capabilities. 4\n7 Limitation Increased Computational Cost. Our method in- curs a higher computational cost than LoraHub, mainly because we train weights for each individ- ual instance during the Lora group weights train- ing phase. This means that our approach will re- quire computational resources proportional to the number of instances, multiplied by the cost of Lo- raHub’s training. Application Scenario Limitation. Our method is not universally cost-effective. In scenarios where a task involves a limited number of questions, em- ploying our method may not be the most economi- cal choice. For tasks without any instances, zero- shot learning would be a more appropriate and efficient approach. Additional Preliminary Preparations Re- quired. When utilizing LoRA for composition, our method not only requires identifying the appro- priate LoRA modules within the library but also necessitates access to the data used during the train- ing of those LoRA modules. Consequently, our approach incurs greater initial preparation costs compared to methods that do not rely on such spe- cific training data. Requirement for Higher-Quality Instances. Instance-level methods, such as ours, are more sen- sitive to the quality of the instances used. Lower- quality instances, including those that are flawed or not closely related to the task, can potentially lead to misleading answers for associated questions. This underscores the importance of careful instance selection and curation to ensure the method’s effec- tiveness.
[ "Towards Modular LLMs by Building and Reusing a Library of LoRAs", "Scalable Language Model with Generalized Continual Learning", "Online Adaptation of Language Models with a Memory of Amortized Contexts", "LoRA-Flow: Dynamic LoRA Fusion for Large Language Models in Generative Tasks", "LoraRetriever: Input-Aware LoRA Retrieval and Composition for Mixed Tasks in the Wild", "In-context Learning with Retrieved Demonstrations for Language Models: A Survey", "Fine-Tuning or Retrieval? Comparing Knowledge Injection in LLMs", "LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition", "WikiChat: Stopping the Hallucination of Large Language Model Chatbots by Few-Shot Grounding on Wikipedia", "Knowledge Card: Filling LLMs' Knowledge Gaps with Plug-in Specialized Language Models", "GPT-4 Technical Report", "The Flan Collection: Designing Data and Methods for Effective Instruction Tuning", "A Survey on In-context Learning", "Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them", "Black-Box Tuning for Language-Model-as-a-Service", "Learning To Retrieve Prompts for In-Context Learning", "LoRA: Low-Rank Adaptation of Large Language Models", "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks", "Dense Passage Retrieval for Open-Domain Question Answering", "MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers", "Learning and Inference via Maximum Inner Product Search", "Maximum inner-product search using cone trees", "MOELoRA: An MOE-based Parameter Efficient Fine-Tuning Method for Multi-task Medical Applications", "Okapi at TREC-3" ]
Leveraging pre-trained language models for linguistic analysis: A case of argument structure constructions
Leveraging pre-trained language models for linguistic analysis: A case of argument structure constructions Abstract This study evaluates the effectiveness of pre- trained language models in identifying argu- ment structure constructions, important for modeling both first and second language learn- ing. We examine three methodologies: (1) supervised training with RoBERTa using a gold-standard ASC treebank, including by-tag accuracy evaluation for sentences from both native and non-native English speakers, (2) prompt-guided annotation with GPT-4, and (3) generating training data through prompts with GPT-4, followed by RoBERTa training. Our findings indicate that RoBERTa trained on gold-standard data shows the best performance. While data generated through GPT-4 enhances training, it does not exceed the benchmarks set by gold-standard data. 1 Introduction Argument structure constructions (ASCs) are lexi- cogrammatical patterns at the clausal level. They consist of an argument structure and a main verb, with each argument contributing to the clause’s meaning . The characteristics of ASC use, such as frequency and/or the strength of association between a verb and its argument struc- ture, have been actively explored in previous stud- ies on first language (L1) and second language (L2) learning and assessment . To effectively model human language learn- ing/development using ASC features, ASCs must be reliably identified in target texts. Recent studies have shifted from manual to automatic ASC analyses . However, enhancing automatic analysis presents a challenge due to the unreliable extrac- tion of ASCs. This issue typically occurs when analyses are constructed from the bottom up, us- Figure 1: Distinguishing semantic roles in similar de- pendency structures of two different types of ASCs, visualized by DisplaCy ing individual syntactic and semantic elements to form the target constructions. For example, while syntactic analyses would represent the clauses (1) she ran [to the mountains] and (2) she ran [in the mountains] identically based on the form (i.e., subject-verb-prepositional phrase structures), they imply different meanings and represent distinct ASC types. In case (1), the prepositional phrase [to the mountains ] is an argument that completes the meaning by specifying the goal of the move- ment. In contrast, in case (2), the phrase [ in the mountains ] modifies the location of the event, as illustrated in Figure 1. One potential reason for this mismatch is that human language often employs a pre-built form-meaning schema at the clausal level , which can be challenging to capture from a bottom-up perspec- tive. A top-down approach, directly assigning ASC types based on their clausal contexts, is therefore likely more effective than a bottom-up approach. Recent advancements in pre-trained language models (PLMs) may offer a promising solution to these challenges, given their effectiveness in stor- ing sentence-level contextual knowledge, as well as part-of-speech and syntactic knowledge within their word embeddings (Miaschi and Dell’Orletta, 1\n2020; Hewitt and Manning, 2019). The follow-up empirical question is whether these models can reli- ably capture specific types of ASCs, both with and without top-down annotations provided by trained human annotators focusing on the linguistic charac- teristics of clausal forms. 
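The ambiguity just described is easy to reproduce with an off-the-shelf dependency parser. The short sketch below uses spaCy (whose displaCy visualizer rendered Figure 1) with the standard small English pipeline; it simply prints the dependency arcs for the two clauses, which come out structurally identical.

import spacy

nlp = spacy.load("en_core_web_sm")   # assumes the standard small English model is installed

for text in ["She ran to the mountains.", "She ran in the mountains."]:
    doc = nlp(text)
    print(text)
    for tok in doc:
        print(f"  {tok.text:10s} {tok.dep_:8s} <- {tok.head.text}")
    # Both clauses parse as subject + verb + prepositional phrase; the tree alone
    # does not say whether the PP is a goal argument (intransitive motion ASC)
    # or a locative adjunct (intransitive simple ASC).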
To address this, the cur- rent study explores the use of PLMs for identifying ASCs, evaluating three methodologies: (1) super- vised training with a encoder model (RoBERTa) using a gold-standard ASC treebank, (2) prompt- guided annotation of unlabeled data with a decoder mode (GPT-4), and (3) prompt-guided generation of training data with GPT4, followed by training with RoBERTa. 2 Backgrounds 2.1 Language learning and ASC use The usage-based constructionist approach sug- gests that language learning/development is driven by learners forming form-meaning pairings (also known as constructions), through statistical induc- tion from varied linguistic inputs. In modeling lan- guage learning/development, a key aspect of this approach involves ASCs, which are clausal level constructions that convey the core concepts of a sentence. They are also instrumental in communi- cations as they encapsulate conceptual archetypes, such as motion or causative events (Bencini and Goldberg, 2000; Goldberg, 1995, 2003; O’Connor and Kay, 2003; Rappaport Hovav and Levin, 1998). Building on theories that emphasize the signif- icance of ASCs, empirical studies in language learning have indicated that the frequency of ASCs (and of verbs) and the strength of association between ASCs and their corresponding verbs are key factors in the de- velopmental trajectory of their use. To be spe- cific, language learners make form-meaning map- pings between frequent linguistic forms (e.g., give- me-the-toy ) and their corresponding meanings early in their learning process. As learners encounter more related but varied inputs (e.g., hand -me-the-toy ,bring -me-the-toy ), they de- velop schematic representations of these forms likeVERB -me-the-toy , or more abstractly, VERB - Recipient-Object )1. In short, as they develop, learn- 1Research has shown that learners tend to initially over- generalize schematic slots . For example, after learning how to use a basic transitive ASC form (e.g., she opened the door ), a learner might mistakenly extend this construction to intransi-ers adopt a broader range of less frequent ASCs, utilize a wider range of verbs within specific ASCs, and form stronger associations between verbs and ASCs, thus reducing their use of atypical verbs in these constructions. The use of ASCs has proven to be a useful indi- cator of language proficiency, applicable to NLP applications such as automatic scoring and model- ing human language development. Kyle and Cross- ley (2017), for example, found that more profi- cient L2 writers tended to use less frequent but more strongly associated verb-ASC combinations. Additionally, they found that ASC-based indices were better predictors of holistic writing scores than classic indices of syntactic complexity (e.g., mean length of T-unit [minimally terminable unit] and mean length of clause), which focused on the structural elements of sentences without account- ing for the functional relationships conveyed by ASCs. Relatedly, scholars have also found that the use of particular ASC types indicate L2 proficiency. For example, Hwang and Kim (2023) found that more proficient L2 writers tended to use a wider range of ASC types overall, and also tended to use a higher proportion of passive and caused-motion ASCs. 2.2 Identification of ASCs To accurately and reliably identify ASCs, initial studies relied on time-intensive manual analyses . 
However, recognizing the need for efficiency, researchers have increasingly been investigating the feasibility of automated ASC analysis for some time now, as illustrated below. 2.2.1 Use of dependency representations The advent and popularization of syntactic de- pendency representation in treebanks and parsers provided a helpful starting point for automated ASC analysis. For example, O’Donnell and Ellis (2010) used a dependency parsed ver- sion of the BNC to ex- plore the feasibility of extracting ASCs using de- pendency tags. While this approach allowed for some target constructions to be accurately ex- tracted (e.g., VERB-preposition-noun construction: [talked about it ], Römer et al., 2014) overall ac- tive verbs, resulting in ungrammatical sentences (e.g., she sits the chair ). However, they gradually fine-tune their linguistic system through additional input and use. 2\ncuracy was insufficient for practical use. The in- troduction of NLP systems that utilize neural net- works substantially increased dependency parsing accuracy, leading to renewed efforts in automated ASC annotation . However, an important issue with the use of dependency representations to identify ASCs is that extant dependency representations do not include the semantic information necessary to disambiguate some ASC types (e.g., between in- transitive simple and intransitive motion construc- tions, as illustrated in Figure 1; also see Kyle and Sung, 2023). To improve accuracy, an alternative approach that considers the semantics of the clause is necessary. 2.2.2 Use of semantic role labels Another promising approach involves using databases annotated with semantic role labels, such as PropBank or Universal Propositions (UP) treebank . Given that each ASC includes ‘argument roles’, which often correspond to traditional semantic roles (e.g., agent, patient, theme, goal, etc.), lever- aging those semantic role annotations to extract ASCs appears promising. However, there are two major obstacles at present. First, the accuracy of current automated semantic role labeling systems is still not sufficient for this task2. Second, it is sometimes not straight- forward to map the output of semantic role labeling systems to ASCs. Typically, these systems use ab- stract semantic role labels (e.g., ARG0, ARG1) that pose challenges in directly mapping to theoretical ASC categorizations for some complex ASCs3. To address this issue, one potential solution in- volves automatically extracting semantic roles from a clause and mapping the set of roles to correspond- ing ASC types based on domain knowledge. Subse- quently, these mapped ASCs can be trained using a sequential learning model. For example, Kyle and Sung (2023) utilized a combination of UP 2To our best understanding, the publicly-available seman- tic role labeling achieved an F1 of 0.86 on argument tagging, and 0.95 on predicate tagging . Note that these scores are for large-grained argu- ment tags, which do not offer the precision required for ASC identification. 3Particularly, ARG2 and ARG3 cover a number of seman- tic categories. According to Jurafsky and Martin (Chapter 24.4), ARG2 includes benefactive, instrumental, attributive, or end-state roles, while ARG3 encompasses start-point, bene- factive, instrumental, or attributive roles.treebank , VerbNet , and FrameNet to semi-automatically annotate ASCs for a subset of the English Web Treebank with ASC data (i.e., silver-annotated ASC treebank; ASC treebank V1). 
They then trained a transformer model using RoBERTa embeddings with the semi- automatically annotated ASC labels and compared the model with three probabilistic models (based on verb lemmas, syntactic frames utilizing depen- dency parsing, and a combination of verb lemmas and syntactic frames). The results showed that the transformer model, trained on silver-annotated sentences with semantic role labels, achieved the highest classification accuracy (F1 = .918 on the silver-annotated test set), outperforming the other models. Despite this success, there is room to im- prove the model’s accuracy by leveraging gold- standard annotations beyond this semi-automatic annotation system . 2.2.3 Use of the gold standard ASC treebanks In response to the limitations observed in previous methodologies, a promising approach is to build gold-standard ASC annotations first and then train and/or evaluate PLMs for ASC annotation appli- cations. In order to do so, first, a treebank must be manually annotated with ASC information fol- lowing systematic annotation guidelines. Then, the treebank can be used for multiple approaches: (1) train set for a supervised learning, especially designed for sequential named entity recognition (NER) tasks4, (2) input example for few-shot learn- ing in unsupervised learning tasks, and (3) test set to test the accuracy of the models. Recently, Sung and Kyle (2024) released a gold- standard annotated treebank of ASCs (ASC tree- bank V2), which includes sentences from the En- glish Web Treebank (EWT), as well as sentences written by L2 users from ESL-WR , and spoken by L2 users from ESL-SP (10,204 sentences; 22,069 ASC to- kens). This treebank can be leveraged for more robust training and precise evaluation of developed models aimed at identifying ASCs. 4Methodologically, Kyle and Sung (2023) adopted this approach for the silver-annotated ASC treebank. 3\n3 Related work 3.1 Automated linguistic annotation with encoder models Recent advancements have underscored the po- tential of PLMs in automated linguistic annota- tion, as encoder models (e.g., BERT ; RoBERTa ) have demon- strated impressive gains in supervised learning tasks. Based on the Transformer architecture , PLMs have been exten- sively pre-trained on large text corpora and adeptly store morpho-syntactic and sentence-level contex- tual knowledge within their word embeddings (Mi- aschi and Dell’Orletta, 2020; Hewitt and Manning, 2019). One fundamental application, often consid- ered first in linguistic annotation, is dependency tagging and parsing. The performance of these models, specified for English, typically achieves an F1 score above 0.90 . Beyond syntactic analysis, Shi and Lin (2019) demonstrated that a BERT-LSTM based model could attain F1 scores of 0.90 on in-domain test sets and 0.84 on out-domain test sets in semantic role labeling. This was accomplished through argument identification and classification, without the need for auxiliary syntactic features like part-of-speech tags or dependency trees. The RoBERTa-based model showed a promising result for a discourse-level linguistic annotation. For example, recently Eguchi and Kyle (2023) ap- plied a RoBERTa-based ensemble model to iden- tify and categorize rhetorical stance features in academic English writing. By employing a dis- course analytic framework and manually annotat- ing 4,688 sentences across eight rhetorical stance categories, they trained an ensemble model com- bining RoBERTa and LSTM. 
This model achieved a macro-averaged F1 score of 0.72 in span iden- tification of stance-taking expressions, surpassing pre-adjudication human annotator reliability. 3.2 Automated linguistic annotation with decoder models To effectively employ encoder models for fine- grained linguistic analyses, it is important to collect and precisely annotate a certain amount of training data for the linguistic features of interest. However, data annotation is often a costly process. This cost encompasses the labor involved in researchers re- cruiting, training, and managing human annotators, as well as the time spent by annotators in labeling raw data. In this context, recent studies have ex- plored ways to effectively use decoder models (e.g., GPT) for data annotation with unsupervised learn- ing . They have demonstrated impressive zero-shot or few-shot learning abilities, which allow them to perform tasks with minimal or no task-specific training data . For example, Ding et al. (2022) conducted com- prehensive analyses on the feasibility of leverag- ing GPT-3 for data annotation in different NLP tasks including an NER task. They developed three distinct GPT-3-based data annotation approaches: (1) prompt-guided unlabeled data annotation, (2) prompt-guided training data generation, and (3) dictionary-assisted training data generation. Sub- sequent experiments on both sequence- and token- level NLP tasks were used to evaluate their perfor- mance. The findings indicated that directly anno- tating unlabeled data was effective for tasks with a small labelling task, while generation-based meth- ods proved more suitable for tasks with a larger la- belling task. Similarly, Yu et al. (2023) investigates the application of GPT models to automate com- plex pragmatic-discourse features of apology in zero and few-shot settings. By comparing the per- formance of GPT-3.5, GPT-4, and human annota- tions in annotating apology components, the study demonstrated that GPT-4’s accuracy approached that of human annotators. On the contrary, the recent study by Ettinger et al. (2023) found limited success using GPT-3, Chat- GPT, and GPT-4 models for semantic annotations (i.e., abstract meaning representation ). The experiments included zero and few-shot experiments, as well as an experiment fo- cusing on PLMs’ ability to handle metalinguistic queries (e.g., identifying primary sentence events and predicates). A comprehensive evaluation of parse acceptability demonstrated that, even with few-shot examples, the models almost never suc- ceeded in producing completely accurate parses. The findings indicate that while these models cap- ture some semantic elements, significant challenges persist in achieving precise semantic analyses. 4 Methodology 4.1 Datasets In this study, we utilize two treebanks, namely the silver and gold versions of the ASC treebank. The first silver version includes 26,437 ASC tokens that were semi-automatically 4\nannotated (CC-BY 4.0)5. The second gold version includes 22,069 manually annotated ASC tokens (CC-BY 4.0)6. The sen- tences in this treebank were sampled from the En- glish Web Treebank (ETW) , L2-Written (ESL-WR) , and L2-spoken (ESL-SP) treebanks, which are all part of the Universal Dependencies project . Given the relatively small representation of L2 written and spoken data, training, development, and test sets were resampled with a 34/33/33 distribution. The EWT sentences retained their original sections and were roughly distributed at 80/10/10. 
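To give a sense of what prompt-guided annotation with a decoder model looks like in practice, the sketch below queries GPT-4 through the OpenAI chat API for sentence-level ASC labels. The prompt wording and output format are illustrative stand-ins of our own, not the actual prompts used in Experiment 2 (those are shown in Figure 3), and the call assumes an OPENAI_API_KEY in the environment.

from openai import OpenAI

client = OpenAI()

ASC_TAGS = ["CAUS_MOT", "DITRAN", "INTRAN_MOT", "INTRAN_RES",
            "INTRAN_S", "TRAN_RES", "TRAN_S"]   # the seven tags kept in Experiment 2

def annotate_asc(sentence, examples=()):
    # Zero-/few-shot labelling; `examples` holds (sentence, "verb -> TAG") pairs
    # drawn from the gold treebank when doing 3-shot or 10-shot prompting.
    shots = "\n".join(f"Sentence: {s}\nLabels: {l}" for s, l in examples)
    prompt = (
        "Label each main verb in the sentence with one argument structure "
        f"construction tag from {ASC_TAGS}.\n"
        + (shots + "\n" if shots else "")
        + f"Sentence: {sentence}\nLabels:"
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()

print(annotate_asc("I gave him the address.",
                   examples=[("She put it on the calendar.", "put -> CAUS_MOT")]))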
Table 1 illustrates the nine ASC tags along with the most prototypical semantic roles that were mapped in two treebanks , accompanied by examples from the annotated dataset. Appendix A shows ASC type frequencies in each dataset. 4.2 Experiment setup The purpose of this study is to explore how to leverage PLMs, specifically RoBERTa (an encoder model) and GPT-4 (a decoder model), for ASC an- notations which could assist in modeling and mea- suring human language development. To achieve this goal, we designed three different approaches to utilize PLMs to evaluate and compare their per- formance (Figure 2). Figure 2: Experiment overview 4.2.1 Experiment 1 The objective of the first experiment is to investi- gate supervised learning using gold-standard data applied with RoBERTa embeddings . To accomplish this, we train a transformer- based machine learning, employing the open- access Python library, SpaCy (version 3.7.4; Hon- nibal et al., 2020) for a multi-class NER task. 5https://github.com/LCR-ADS-Lab/ASC-Treebank 6https://osf.io/v75qu/SpaCy’s method includes a transition-based parser, a neural network-based state prediction model, for structured prediction in NER tasks. Additionally, we employed the en_core_web_trf pipeline, which fine-tunes a RoBERTa model. To evaluate the performance, we constructed three comparative models: (1) a model using silver- standard data, (2) a model trained with gold L1 data, and (3) a model trained with both gold L1 and L2 data. Considering the necessity for accurate per- formance on L2 data to capture non-native English linguistic structures, we conducted detailed testing on each L1, L2 written, and L2 spoken data. For specifics on the hyperparameter settings, refer to Appendix B. 4.2.2 Experiment 2 The goal of the second experiment is to explore prompt-guided annotation of unlabeled data. To this end, GPT-4 was employed to generate labels for a subset of the test set from the gold-standard treebank. Due to the high processing costs and time, we streamlined the task by filtering the tag set – reducing the number of tags from nine to seven by removing the ATTR and PASSIVE tags. Moreover, we utilize a random balanced extraction method to select sentences for annotation, ultimately resulting in a total of 282 sentences. To evaluate performance, we provided GPT-4 with three distinct prompts for label generation on the test set: (1) zero-shot, (2) 3-shot, and (3) 10- shot. In cases of few-shot learning, examples were randomly selected from the gold-standard ASC treebank. We compared these results with baseline scores from a model trained under a supervised learning. This comparative model, as described in Experiment 1, incorporated adaptations such as early stopping7. Figure 3 shows an example of a zero-shot learning, and for details on the examples that we used for the experiment, refer to [ details anonymized for review ]. 4.2.3 Experiment 3 The objective of the third experiment is to explore the use of prompt-guided generation of training data for training RoBERTa. In this experiment, we utilized GPT-4 to create a labeled dataset, which was subsequently used to train with RoBERTa. For data generation, GPT-4 was used to pro- duce a balanced set of sentences with ASC tags, starting with 3-shot and 10-shot settings, as the 7The model was trained for only 400 iterations. 
5\nASC (Annotated tag) Semantic frame Example Attributive (ATTR) theme -V-attribute It theme is now visible attribute on the street Caused-motion (CAUS_MOT) agent -V-theme -destination I agent put it theme [on the calendars] destination Ditransitive (DITRAN) agent -V-recipient -theme I agent gave him recipient [the address] theme Intransitive motion (INTRAN_MOT) theme -V-goal I theme won’t go [out the door] goal Intransitive resultative (INTRAN_RES) patient -V-result Money patient may become tight result Intransitive simple (INTRAN_S) agent -V Iagent am working from the office Passive (PASSIVE) theme -aux-V passive They theme were recommended passive by him Transitive resultative (TRAN_RES) agent -V-result -result I agent don’t want [my leg] result hurt result Transitive simple (TRAN_S) agent -V-theme I agent should buy [a new one] theme Table 1: ASCs representation Figure 3: Example of prompting GPT-4 to generate ASC labels in a zero-shot setting model struggled to generate data without any ini- tial examples. We divided the experiment into two parts: the first involved training the model solely using data generated by GPT-4; the second com- bined these generated sentences with a similarly balanced selection from the gold-standard dataset to augment the training set. This approach allowed the integration of artificially generated and gold data into two additional experimental groups: one trained with 3-shot (i.e., sentences generated from 3-shot setting) plus gold data, and another with 10-shot plus gold data. The data were converted to IOB format to train RoBERTa. We then com- pared the performance of these models to baseline scores from a model trained on fewer gold data sentences8. This comparison additionally aimed to 8This adjustment was made because the GPT-4 generated sentences typically had fewer ASC types, necessitating a re-evaluate the effectiveness of augmenting training sets with machine-generated data versus additional human-annotated data. We ensured consistency in hyperparameters and the number of training epochs to facilitate comparability9. Figure 4 shows an ex- ample of a few-shot learning. Figure 4: Example of prompting GPT-4 to generate ASC labels in a few-shot setting 5 Results 5.1 Experiment 1 We investigated the performance of supervised learning using gold-standard data applied with RoBERTa embeddings. The results, detailed in Table 2, highlight the highest performance of the model trained using gold-standard data that in- cludes both L1 and L2 annotations (Gold L1+L2 train model). It demonstrated the highest averaged F1 scores across all tested datasets: EWT (F1 = duction in the gold training data for a fair comparison. 9We used the same hyperparameter settings as the first experiment and also did the early stopping of stop at 400 iterations of the training data. 
6\nSilver train model Gold L1 train model Gold L1 + L2 train model ASC L1 L2Writ L2Spok L1 L2Writ L2Spok L1 L2Writ L2Spok ATTR 0.982 0.955 0.971 0.972 0.954 0.986 0.968 0.971 0.988 CAUS_MOT 0.794 0.764 0.690 0.818 0.833 0.710 0.857 0.867 0.710 DITRAN 0.757 0.862 1.000 0.919 0.914 0.842 0.865 0.881 0.947 INTRAN_MOT 0.763 0.755 0.774 0.800 0.770 0.789 0.772 0.807 0.843 INTRAN_RES 0.667 0.741 0.000 0.750 0.788 0.800 0.625 0.813 0.833 INTRAN_S 0.806 0.770 0.853 0.779 0.806 0.817 0.808 0.803 0.865 PASSIVE 0.932 0.865 0.875 0.920 0.775 0.938 0.940 0.865 0.909 TRAN_RES 0.853 0.714 0.588 0.884 0.800 0.625 0.881 0.792 0.625 TRAN_S 0.922 0.904 0.933 0.931 0.929 0.927 0.936 0.943 0.948 macroAv 0.902 0.885 0.907 0.908 0.900 0.905 0.912 0.915 0.928 Table 2: F1-scores across ASC types, models, and registers, with the highest scores per tag in each dataset shaded (Experiment 1) 0.912), L2 Written (F1 = 0.915), and L2 Spoken (F1 = 0.928). It also outperformed the other models in individual tag accuracy, securing the highest F1 scores for seven out of nine annotation types in both the L2 Written and Spoken datasets. Meanwhile, the model trained on the gold-standard L1 dataset (excluding L2) achieved top F1 scores for four out of nine tags in the L1 written dataset, underscoring the importance of leveraging gold-standard data for developing effective model, especially in com- parison to models trained on the silver-standard data. 5.2 Experiment 2 We explored prompt-guided annotation of unla- beled data using GPT-4. The results demonstrate that performance varied with the number of ex- amples provided (Table 3). The zero-shot learn- ing yielded the lowest F1 score at 0.377, while the 10-shot configuration showed an improvement, achieving the highest average F1 score of 0.60210. This indicates that more extensive example-driven guidance considerably enhances the model’s ef- fectiveness in automated ASC tagging tasks with GPT-4. However, the overall F1 scores were lower than the model trained solely on gold annotations (i.e., baseline), and neither of the F1 scores for any ASC type exceeded those of the baseline model. 5.3 Experiment 3 We explore the use of prompt-guided generation of training data for training RoBERTa. The ex- periment was designed to first train the RoBERTa model using only the data generated by GPT-4 and 10We additionally tested zero-shot learning by explicitly providing syntactic or semantic information about each con- struction to the model, but observed no improvement. Refer to Appendix C for detailed results and the prompts used.ASC tag (#) zero-shot 3-shot 10-shot baseline CAUS_MOT (55) 0.121 0.446 0.483 0.907 DITRAN (46) 0.612 0.673 0.667 0.945 INTRAN_MOT (54) 0.562 0.674 0.684 0.825 INTRAN_RES (41) 0.130 0.525 0.730 0.822 INTRAN_S (105) 0.327 0.421 0.552 0.817 TRAN_RES (46) 0.213 0.306 0.485 0.863 TRAN_S (307) 0.676 0.700 0.742 0.922 macroA V (654) 0.377 0.535 0.602 0.888 Cost ($) 3.82 3.71 29.56 Time (mins) 29 24 24 Table 3: F1-scores for ASC tagging using GPT-4 (Ex- periment 2) then compare its performance with a model trained using gold standard data, as detailed in Table 4. The results reveal two key findings: First, increas- ing the number of examples, from 3-shot to 10-shot, enhanced model performance. The F1-scores gen- erally improved with the number of examples pro- vided, with the 10-shot configuration substantially outperforming the 3-shot across most categories. 
This highlights the role of example-driven guid- ance in enhancing the quality of machine-generated training data; Second, despite the performance gains observed with an increased number of exam- ples, models trained solely with gold data (gold1) consistently outperform those trained with the GPT- 4 generated data (both 3-shot and 10-shot), partic- ularly in more complex ASCs (e.g., CAUS_MOT, TRAN_RES). This highlights that while machine- generated data can positively contribute to the train- ing process for some ASCs (e.g., TRAN_S, IN- TRAN_MOT), it still falls short of the quality and effectiveness of human-annotated data. The second part of the experiment aimed to deter- mine if augmenting the gold-standard training set with GPT-4-generated data could enhance the per- formance of the supervised learning model. As il- 7\nCategory 3-shot 10-shot gold1 CAUS_MOT (55) 0.333 0.422 0.838 DITRAN (46) 0.367 0.632 0.867 INTRAN_MOT (54) 0.405 0.667 0.651 INTRAN_RES (41) 0.571 0.620 0.667 INTRAN_S (105) 0.303 0.485 0.742 TRAN_RES (46) 0.102 0.188 0.824 TRAN_S (307) 0.347 0.718 0.860 macroA V (654) 0.340 0.607 0.816 # of sentences 927 814 469 Cost ($) 3.31 6.59 Time (mins) 18 20 Table 4: Comparison of F1-scores for ASC tagging using different training sets, trained with RoBERTa (Ex- periment 3) lustrated in Table 5, introducing machine-generated data (both 3-shot and 10-shot) into the gold data set does not consistently improve performance across all ASC tags11. The macro average F1-score in- dicates that models trained with a combination of gold and machine-generated data (0.795 for 3- shot+gold and 0.809 for 10-shot+gold) generally perform less effectively than those trained solely with gold-standard data (0.816). Furthermore, the results demonstrate that the most significant improvement in performance was observed when gold data was augmented with ad- ditional gold data (gold1+gold2), achieving the highest macro average F1-score of 0.877. This underscores that while machine-generated data can enhance training effectiveness for some ASC types (e.g., TRAN_RES, INTRAN_S), incorporat- ing more human-annotated gold data substantially boosts model accuracy. Upon closer examination of the machine-generated training data, it became evident that despite the prompts directing GPT-4 to generate sentences closely resembling the human- produced examples in the 10-shot set, the model struggled to capture the nuances present in sen- tences from human sources, such as the web corpus or L2 datasets (See Appendix D). In other words, GPT-4-generated sentences tend to be shorter and less complex, typically lacking multiple clauses, unlike the more elaborate sentences crafted by hu- mans. This limitation likely impacted the quality of the training data and, consequently, the effective- ness of the training outcomes. 
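The observation that GPT-4-generated sentences are shorter and less clause-dense than human-written ones can be checked with crude proxies such as mean token count and subordinate-clause count per sentence. The sketch below is one such check using spaCy; the dependency labels treated as clause markers and the two example sentences are our own illustrative choices, not the analysis reported in Appendix D.

import spacy

nlp = spacy.load("en_core_web_sm")
CLAUSE_DEPS = {"ccomp", "xcomp", "advcl", "relcl", "acl", "csubj"}   # rough clause markers

def complexity(sentences):
    # Returns (mean tokens per sentence, mean subordinate clauses per sentence).
    lengths, clauses = [], []
    for doc in nlp.pipe(sentences):
        lengths.append(len(doc))
        clauses.append(sum(tok.dep_ in CLAUSE_DEPS for tok in doc))
    n = len(sentences)
    return sum(lengths) / n, sum(clauses) / n

gold = ["Although the shipment was delayed, she gave the supplier another chance to fix it."]
generated = ["She gave him the book."]
print("gold:     ", complexity(gold))
print("generated:", complexity(generated))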
11There are some cases where it slightly enhances the model’s effectiveness, as seen in the TRAN_RES and IN- TRAN_S tags.ASC tag (#) gold1 gold1 gold1 gold1 +3-shot +10-shot +gold2 CAUS_MOT (55) 0.838 0.731 0.782 0.914 DITRAN (46) 0.867 0.756 0.824 0.920 INTRAN_MOT (54) 0.651 0.644 0.727 0.814 INTRAN_RES (41) 0.667 0.615 0.695 0.831 INTRAN_S (105) 0.742 0.760 0.751 0.816 TRAN_RES (46) 0.824 0.857 0.782 0.886 TRAN_S (307) 0.860 0.851 0.863 0.900 macroA V (654) 0.816 0.795 0.809 0.877 # of trained sentences 469 1396 1283 938 Table 5: Comparison of F1-scores for ASC tagging using different training sets – combined with the gold- standard data, trained with RoBERTa (Experiment 3) 6 Conclusions This study highlights the potential of integrat- ing PLMs into linguistic analysis frameworks, particularly for examining the characteristics of ASCs in the context of modeling L1 and L2 learn- ing/development. RoBERTa, when trained on gold- standard datasets, demonstrated superior perfor- mance, underscoring the importance of compre- hensive, high-quality annotated data. Additionally, the use of GPT-4 for prompt-guided annotation and data generation offered some insights into the effec- tiveness of synthetic data in model training. While these methods did not surpass the F1 scores of the baseline model trained solely on gold-standard an- notations, they proved effective in identifying and processing certain types of ASCs. Future directions: This study serves as a promis- ing foundation for automated annotation systems in both L1 and L2 language contexts. However, it did not directly assess the effectiveness of ASC anno- tation in automatic writing evaluation or feedback systems, which represent critical avenues for future research and applications of NLP in education. Limitations The accuracy of ASC annotation was assessed across three linguistic domains—L1 written, L2 written, and L2 spoken—but only a single register within each domain was examined in Experiment 1. Experiments 2 and 3 did not comprehensively ex- plore model performance across different domains. Consequently, the applicability of these models in other registers, such as L2 written narratives or L2 argumentative speeches, remains uncertain, partic- ularly with the RoBERTa model. Furthermore, the GPT-4 model should have also included investi- gations into two additional ASC types (PASSIVE, 8\nATTRIBUTE) and comparisons across different linguistic domains. Additionally, due to the limited scope of the L2 datasets, certain ASC types, such as transitive and intransitive resultative constructions, were underrepresented in the test sets. Therefore, the annotation accuracy for these specific ASCs should be interpreted with caution. Supplementary Materials All prompt, data, code, and models are available in [details anonymized for review ] All contributions in this proceeding are licensed under the Creative Commons Attribution-Non-Commercial 4.0 Inter- national License (CC-BY 4.0).
[ "Span Identification of Epistemic Stance-Taking in Academic Written English", "Assessing the potential of LLM-assisted annotation for corpus-based pragmatics and discourse analysis", "Is GPT-3 a Good Data Annotator?", "Contextual and Non-Contextual Word Embeddings: an in-depth Linguistic Investigation", "Language Models are Few-Shot Learners", "Universal Dependencies v2: An Evergrowing Multilingual Treebank Collection", "RoBERTa: A Robustly Optimized BERT Pretraining Approach", "A Structural Probe for Finding Syntax in Word Representations", "Simple BERT Models for Relation Extraction and Semantic Role Labeling", "AllenNLP: A Deep Semantic Natural Language Processing Platform", "Assessing syntactic sophistication in L2 writing: A usage-based approach", "Attention is All you Need", "Universal Dependencies for Learner English", "Generating High Quality Proposition Banks for Multilingual Semantic Role Labeling", "Linking learner corpus and experimental data in studying second language learners’ knowledge of verb-argument constructions", "Abstract Meaning Representation for Sembanking", "Towards an Inventory of English Verb Argument Constructions", "Construction Learning as a Function of Frequency, Frequency Distribution, and Function.", "The Stanford Typed Dependencies Representation", "The BNC Parsed with RASP4UIMA", "The Proposition Bank: An Annotated Corpus of Semantic Roles", "Background to Framenet", "Constructions: a new theoretical approach to language", "The contribution of argument structure constructions to sentence meaning", "Pathbreaking verbs in syntactic development and the question of prototypical transitivity", "Constructions: A Construction Grammar Approach to Argument Structure", "Regularity and Idiomaticity in Grammatical Constructions: The Case of Let Alone", "Lexical Entries for Verbs", "Annotation Scheme for English Argument Structure Constructions Treebank", "A Dependency Treebank of Spoken Second Language English", "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "Language Models are Unsupervised Multitask Learners", "Measuring Syntactic Development in L2 Writing: Fine Grained Indices of Syntactic Complexity and Usage-Based Indices of Syntactic Sophistication", "A Gold Standard Dependency Corpus for English", "A Fast and Accurate Dependency Parser using Neural Networks", "Verbnet: a broad-coverage, comprehensive verb lexicon", "Constructing a language: A usage-based theory of language acquisition", "Do foreign language learners also have constructions", "Learning argument structure generalizations", "Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition", "Young children's earliest transitive and intransitive constructions" ]
M3D: MultiModal MultiDocument Fine-Grained Inconsistency Detection
M3D: MultiModal MultiDocument Fine-Grained Inconsistency Detection Abstract Fact-checking claims is a highly laborious task that involves understanding how each factual assertion within the claim relates to a set of trusted source materials. Existing approaches make sample-level predictions but fail to iden- tify the specific aspects of the claim that are troublesome and the specific evidence relied upon. In this paper, we introduce a method and new benchmark for this challenging task. Our method predicts the fine-grained logical rela- tionship of each aspect of the claim from a set of multimodal documents, which include text, image(s), video(s), and audio(s). We also in- troduce a new benchmark ( M3DC) of claims requiring multimodal multidocument reason- ing, which we construct using a novel claim synthesis technique. Experiments show that our approach significantly outperforms state- of-the-art baselines on this challenging task on two benchmarks while providing finer-grained predictions, explanations, and evidence. 1 Introduction Misinformation poses serious societal risks by per- petuating narratives that incite fear, sow discord, and affect public health and safety . Despite signif- icant efforts towards developing automated fact- checking techniques , existing methods face several limitations. First, real-world claims may include assertions that require consulting mul- tiple documents and modalities to verify or refute the claim. Existing approaches either assume a sin- gle document setting or perform retrieval across documents to obtain relevant evidence, which is then treated as a single document , poten- tially losing important surrounding context. Sec- ondly, some methods only predict when claims con- flict with relevant knowledge but ignore ambiguous cases where no supporting or refuting information is available . Lastly, most of the existing methods fail to provide the fine-grained analysis needed for users to under- stand what is inconsistent in a claim or to make revisions to be more factual . Simply flagging an entire claim as false without pinpointing the specific inaccurate parts provides limited utility. In contrast, we propose an approach for predict- ing the logical relationship of each piece of a claim with respect to a set of multimodal sources. We perform a semantic dissection of claims into seman- tic pieces and leverage a hierarchical transformer that operates across multimedia documents to make fine-grained predictions. Our model ingests the claim along with associated multimedia, preserv- ing the context. It then fuses the cross-document representations into a graph initialized with the claim’s Abstract Meaning Representation (AMR) . Entailment relations are then predicted for each node (e.g., entities, actions) and tuple (e.g., relations) within the graph. Because no prior work has explored making fine- grained claim predictions from a set of multimodal documents, we also introduce a new dataset of claims that contains fine-grained labels for this task called M3DC (MultiModal Multi-Document Claims). We build our dataset on top of the NewsStories dataset, which in- cludes sets of news articles, images, and videos across multiple topics. We retrieve textual, visual, and audio data from each set to build a robust mul- timodal multidocument knowledge graph for each set of related documents. 
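To illustrate the unit at which predictions are made, the sketch below decomposes a toy claim's AMR into its nodes (concepts) and relation tuples using the penman library; each of these units would receive its own entailment label against the multimodal evidence. The AMR string is hand-written for the example rather than produced by the Structured-BART parser used in the paper, and the NEUTRAL labels are placeholders for the model's predictions.

import penman

claim_amr = """
(a / acquire-01
   :ARG0 (c / company)
   :ARG1 (s / startup)
   :time (d / date-entity :year 2020))
"""

g = penman.decode(claim_amr)

# Nodes are entity/event concepts; tuples are the relations and attributes between them.
nodes = [(var, concept) for var, _, concept in g.instances()]
tuples = [(src, role, tgt) for src, role, tgt in g.edges() + g.attributes()]

labels = {}
for var, concept in nodes:
    labels[("node", var, concept)] = "NEUTRAL"        # placeholder fine-grained prediction
for src, role, tgt in tuples:
    labels[("tuple", src, role, tgt)] = "NEUTRAL"     # placeholder fine-grained prediction

for unit, lab in labels.items():
    print(unit, "->", lab)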
Next, we develop a claim synthesis method in order to generate claims that re- quire multisource knowledge to verify, which uses a fine-grained claim manipulator model to generate claims manipulated at the sub-claim level. Our major contributions are as follows: •We introduce the novel task of performing 1\nfine-grained entailment of a textual claim with a set of multimodal documents. •We introduce a new hierarchical transformer model designed for the task of fine-grained claim analysis over multiple sources. •We propose a novel data synthesis technique for generating fine-grained labeled claims re- quiring multimodal multisource knowledge to verify using a graph traversal and fine-grained claim manipulator model. •We contribute a large benchmark of fine- grained labeled claims created using our tech- nique. We also contribute a small number of claims densely annotated by experts. •We conduct qualitative and quantitative exper- iments to evaluate the performance of our pro- posed method on our new benchmark dataset, as well as an existing benchmark dataset. 2 Related Works 2.1 Fact-checking Datasets Fact-checking claims is an established task with many text-only benchmarks such as LIAR , SNLI , MNLI , and ANLI . Later datasets like FEVER and SciFact incor- porate evidence from multiple text-only sources. Recent work has explored fine-grained, text-only misinformation detection across multiple sources, such as , but ignores that crucial evidence can also come from other modalities. To address this, datasets like MM-Claims and Fakeddit have incorporated multimodal fact- checking data. However, these tend to assess au- thenticity at a coarse level, with claims derived from short sentences or paragraphs. In contrast, our proposed dataset features extremely fine-grained la- bels that capture the logical relationships between each aspect of a claim and a set of multimedia documents. This requires a much more nuanced entailment analysis and a deeper understanding of the evidence required to verify or refute specific claim components. Furthermore, our data synthe- sis process explicitly creates claims from multiple modalities across various documents with lengthy sentences. Our approach explicitly models these intricate multimodal relationships and aims to fa- cilitate the development of more interpretable and trustworthy fact-checking systems. 2.2 Fake News Detection Methods Many methods for detecting misinformation at the document level have been proposed . These approaches either directly rely on neural models or rely on struc- tural cues indicative of disinformation . Hu et al. (2021) compares docu- ments to an external knowledge graph (KG) for verification, but it does not make fine-grained pre- dictions or leverage multimodal data. Predicting claim entailment from evidence has been studied in NLP and computer vision , using visual instead of textual evidence. Recent multimodal misinformation detection ap- proaches make entailment predictions from mul- timodal documents. MOCHEG predicts entailment at the sample level. Fung et al. (2021) extracts a multimodal KG for fake news detection, focusing on internal inconsistencies in a single document. Wu et al. (2022) propose a GNN-based model for fine-grained predictions across text-only documents, using IE-generated KGs which may miss fine details. Unlike prior work, our proposed method performs fine-grained entailment of complex claims against a set of mul- timodal documents. Wu et al. 
(2022) can only output binary inconsistency results using limited IE-generated knowledge graphs. Thomas et al. (2022) considers shorter, caption-like claims veri- fiable from a single image. In contrast, our claims are much more complex, requiring reasoning over a multimodal, multi-document premise. Unlike Thomas et al. (2022), we further contribute a new benchmark dataset for this task. 3 Approach In this section, we first describe our methodology for constructing our dataset, M3DC. We then pro- vide details of our model architecture which oper- ates across sets of documents to make fine-grained claim predictions. 3.1 Multimodal MultiDocument Dataset In this section, we introduce our data synthesis ap- proach for constructing a dataset with claims con- taining fine-grained labels that require multimodal and multi-source knowledge to verify. Our dataset 2\nFigure 1: Constructing a KG from a multimedia news cluster. AMR trees from different documents and modalities are linked to form a cross-document, cross-media KG. Co-reference links are shown in red. builds upon NewsStories , a col- lection of news clusters with articles and videos. We begin by crawling the data and removing news that is no longer publicly accessible or has been taken down. For each news cluster, we construct a knowledge graph (KG) combining textual and non-textual data based on AMR trees generated from news documents. This cross-document, cross-media representation allows us to synthesize claims by linking information from the graph. We then introduce a claim manipulator model that generates claims with varying degrees of truthfulness by traversing the AMR-based KG and introducing controlled perturbations. To obtain fine-grained labels, we employ a model that assigns entailment labels (e.g., entailment, contradiction, neutral) to individual AMR nodes and tuples with its associated knowledge. Using this approach, we synthesize a dataset of about 400K claims across over 10,000 topics, requiring multimodal and multi- document knowledge for verification. The overall process is shown in Figure 2. 3.1.1 Knowledge Graph Construction For each news cluster, we extract knowledge into a set of AMR trees using Structured-BART with sentences coming from the news document, visual captions generated from our grounding module and audio summaries from Qwen-Audio. Then, we connect nodes from AMR trees using co-reference reso- lution from CDLM and F- coref in order to link within- document and cross-document entities or events. The overall process is illustrated in Figure 1. For visual data from images and videos, we uti- lize GLIP and CoFormer to perform co-reference resolution across modalities. Initially, CoFormer is used to extract event frames from each image and subsam- pled video frames, along with grounded bound- ing boxes. Event frames are defined by PropBank and contain event verbs, along with noun arguments and roles. Meanwhile, GLIP is used to ground textual data from news articles with visual content from images or video frames. After grounding textual and visual content using both models, we measure the Inter- section over Union (IoU) between groundings from both models to filter out discrepancies. Then we utilize GPT-Neo to generate captions for each image and video frame from the event frame extracted by CoFormer and textual data grounded from GLIP, using an in-context learning approach with a pre-defined template. 
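To make the cross-modal grounding-agreement filter described above concrete, the following is a minimal sketch (not the authors' implementation) of checking agreement between the boxes proposed by two grounding models with Intersection over Union; the (x1, y1, x2, y2) box format and the 0.5 threshold are illustrative assumptions.

# Minimal sketch of IoU-based agreement filtering between two grounding
# models (e.g., boxes from GLIP vs. CoFormer). Boxes are (x1, y1, x2, y2);
# the 0.5 threshold is an illustrative assumption, not a value from the paper.

def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def filter_groundings(glip_boxes, coformer_boxes, threshold=0.5):
    # Keep only groundings on which both models roughly agree.
    kept = []
    for label, box_g in glip_boxes.items():
        box_c = coformer_boxes.get(label)
        if box_c is not None and iou(box_g, box_c) >= threshold:
            kept.append((label, box_g))
    return kept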
For audio data, we prompt Qwen-Audio to generate summaries that describe the audio content and background noise from the video. 3.1.2 Claim Generation To generate claims that require multimodal, multi- document evidence from the constructed KGs, we developed a Depth-First Search (DFS) based graph traversal method that selects Knowledge Elements (KEs) from multiple sources from the constructed KG. For a given KG and starting node (i.e. an AMR 3\npredicate node), the traversal algorithm traverses surrounding nodes until another predicate node is reached. We encourage the algorithm to follow co- reference edges to incorporate knowledge across documents and modalities. The traversal algorithm outputs KEs (AMR triples) rooted at a predicate, which is then used to generate a complete claim sentence containing the information from the tra- versed nodes and edges through AMRBART . Given that these generated claims are directly generated from the KG, all resulting claims are inherently entailed by this approach. This ap- proach ensures that the resulting claims rephrase evidence from different articles and modalities, re- quiring the model to reason across sources to per- form fine-grained verification. 3.1.3 Claim Manipulation Since the claims generated directly from the KGs are inherently entailed, we introduce a claim ma- nipulator model to generate diverse claims with varying degrees of truthfulness (entailed, neutral, or contradicted) relative to the evidence in the KG. The claim manipulator takes as input the claim, relevant evidence from the KG (which may be mul- timodal), and a desired logical label (entailed, neu- tral, or contradicted). The goal is to manipulate an entailed claim so that the claim’s logical relation matches the input. To train the manipulator, we employ reinforcement learning, where a model is optimized to maximize the scores provided by a reward model that offers evaluative feedback. Denoting the original claim as c, de- rived from the KG, and the modified claim asˆcproduced by the manipulator M, with yrepresenting the logical label from Y={”entailed ”,”neutral ”,”contradicted ”}, the goal of the claim manipulator is to generate a claim similar to the original claim cwith the target logical label ˆygiven premise (evidence) p. We leverage Llama-2-13B to manipulate claims to correspond with the desig- nated logical label ˆybased on the given premise p. The premise consists of the top 10 most relevant evidence (expressed in text, i.e., using sentences from news articles and captions for image and video) related to cfrom Sentence-BERT, the manipulator is fine-tuned using reinforcement learning to produce a claim ˆc based on c. In this process, candˆcare intended to be syntactically similar to each other. The claim manipulator can be formulated as ˆc=Mθ(p, c,ˆy) To steer the manipulator towards generating claims that align with the target logical label ˆy and similar to the original claim csyntactically, a reward model based on DeBERTAv3 is trained to function as a critic using MNLI , Fever-NLI , and ANLI . The reward model is trained for fine-grained entailment clas- sification using the multi-instance and structural constraints from FGVE , al- lowing the model to do fine-grained predictions without ground-truth fine-grained labels. Critically, we enforce our target label constraint at both the fine-grained and sample levels within the graph. 
This approach ensures that the claim manipulator not only focuses on producing claims in a coarse- grained manner but also pays attention to fine- grained details. Specifically, the reward model’s score is defined as the likelihood of the target label considering both the manipulated claim and the top 10 sentences most relevant to the original claim from the KG (serving as evidence): r(c,ˆc,ˆy) =P(ˆy|p,ˆc)− P|Y| yi̸=ˆyP(yi|p,ˆc) +ROUGE (c,ˆc) (1) where c,ˆc,ˆy, and prepresent the original claim, the modified claim, the desired logical label for the claim, and the premise, respectively. The termP(ˆy|p,ˆc)is obtained from the trained fine- grained entailment classifier. The goal of this re- ward function is to ensure that the modified claim ˆcnot only matches the intended truthfulness label ˆybut also retains as much similarity to the original claim cas possible as quantified by the ROUGE score. We fine-tuned the claim manipulator with Prox- imal Policy Optimization (PPO) as our policy gradient method for reinforce- ment learning. PPO adds an additional term to the reward function, which imposes a penalty deter- mined by the Kullback-Leibler (KL) divergence between the trained RL policy manipulator, πPPO ϕ, and the initial supervised manipulator πSFT: rtotal=r(ˆh, h,ˆy)−ηKL(πPPO ϕ(ˆyt|p,ˆh), πSFT(ˆyt|p,ˆh)), (2) where ηrepresents the KL reward coefficient, which determines the magnitude of the KL penalty; we set it to 0.2 for our model. This coefficient func- tions as an entropy boost, enhancing exploration throughout the policy domain and urging the model 4\npx Figure 2: Claim generation pipeline. We create a knowledge graph from a set of media about an event. Our traversal algorithm selects the part of the KG highlighted in yellow to generate a (true) claim. To do so, we use the selected elements to translate the selected knowledge into a sentence. We then feed relevant evidence and the generated claim into our claim manipulator model. In this example, we ask our claim manipulator to generate a contradicted claim. The claim manipulator performs fine-grained manipulations, inserting both unverified (i.e. 74 individuals) and contradictory (i.e. 5 people injured) assertions. Because we know how the claim was manipulated at the knowledge-element level, we can use this as supervision to train our verification model. to engage in a diverse set of actions rather than the one currently considered the best. In addition, it inhibits the policy from rapidly committing to a singular strategy, and this encourages outputs from the RL fine-tuned model to not deviate too far from the original model. After constructing the dataset with the claim manipulator, we employ Mixtral- 8x7B using in-context learning to predict the logical label of the claims generated by the claim manipulator as a quality check; we discard those that do not align with the target labels. Finally, as a final quality check on our generated dataset, we assess the checkworthiness of claims using ClaimBuster to filter opinions or unimportant claims from our dataset. More details are covered in Appendix A.1. 3.2 Model Architecture In this section, we present our model for predicting fine-grained entailment relations for claims given a set of trusted multimodal source materials. Figure 3 shows our model’s architecture. 3.2.1 Multimodal Encoder By design, our claims require reasoning across modalities and documents to make fine-grained predictions. 
We thus integrate all modalities into our model, preserving the original context in which the claim appeared. For textual content, we employ LongT5 to encode the claims and sentences from documents and captions. For han- dling non-textual context (i.e. images, video, and audio), we utilize ImageBind , a set of cross-modal embedding models for em- bedding text, audio (represented as spectrograms), visual content, and other modalities in a common space. In addition to explicitly capturing how the information relates across documents and modal- ities, our model also ingests an embedding of the KG corresponding to each cluster. To learn our KG embedding, we instantiate our KG using a Graph Convolutional Network (GCN) and train it via a masked sequence prediction task. We randomly obscure nodes and edges within the KG and train a classifier to predict the masked pieces. After train- ing, we extract KG embeddings for each cluster and feed them to our model. To bridge the various representation spaces, we add an additional linear layer for each modality’s encoder. The embeddings from different modalities, in- cluding textual content (encoded by LongT5), non- textual context (encoded by ImageBind), and the knowledge graph (encoded by the GNN), are con- catenated to form a comprehensive multimodal representation of the claim and its associated evi- dence. This concatenated embedding is then fed into LongT5 for pretraining us- ing the Gap Sentence Generation (GSG) objective from Pegasus . GSG is a self- supervised learning task that aims to generate miss- ing sentences within a given context. We identify the top 3 sentences inside the news documents that 5\npx Figure 3: The model architecture. Each cluster, potentially containing multiple news articles, will have its content from various multimedia sources independently encoded and then merged to form a unified representation. This joint representation will serve as the initial state for every node within the GNN. Subsequently, labels at both the sample level and the fine-grained level can be derived by aggregating features from the nodes and edges of the GNN. are most relevant to the claim cusing ROUGE-F1, randomly choose one sentence and its adjacent sen- tence, and then mask them both. LongT5 is trained to generate the masked sentences based on the sur- rounding context and the multimodal embeddings. 3.2.2 Graph Convolutional Network Our task requires predicting fine-grained entail- ment relationships between a claim and a set of multimedia source materials. To ensure each fine- grained element within the claim’s AMR captures the context of the AMR structure in which it ap- pears, we employ a two-layer GCN to learn contextual features of each node and tuple within the claim’s AMR graph. Our GCN model is initialized with features from our multimodal encoder and features from the claims’s AMR. Specifically, we encode the AMR represen- tation of claims and embeddings from multimedia news content via the GCN as follows: for each node iwithin the AMR graph, we define the fea- ture aggregation mechanism by the equation: h(l+1) i=X j∈N(i)∪{i}1 cijh(l) j (3) where h(l+1) i is the feature vector of node iat the subsequent layer l+ 1. The set N(i)includes the neighbors of node i, andcijis a normalization factor for the edge that connects nodes iandj. For edge features, we extend our model to incor- porate edge features alongside node features. 
This is achieved by incorporating edge attributes into the aggregation function, allowing the model to con- sider the characteristics of the connections between nodes. For an edge eijconnecting nodes iandj, the edge features can be integrated as follows: e(l+1) ij=h W(l) eh(l) i||W(l) eh(l) ji (4) where e(l+1) ij represents the feature vector of edge eijat layer l+ 1, with W(l) eandb(l) ebeing the weight matrix and bias vector specific to edge fea- tures at layer l. This approach ensures that the model captures not only the node-level but also the edge-level semantic and structural information inherent in AMR graphs. For graph-level (sample-level) classification, we aggregate the features of the entire graph with av- erage pooling. Finally, multiple MLP classifiers are applied to make predictions for nodes, edges, and the graph on the sample-level and fine-grained tasks. We train our model using cross-entropy loss with labels from the trained fine-grained entailment classifier in section 3.1.3. 4 Experiments 4.1 Multimodal MultiDocument Dataset We compare our new dataset with others in Table 1. Our dataset contains fine-grained labels across 180,000 entailed claims, 121,224 neutral claims, and 113,181 contradicted claims. While existing datasets are topic-specific, our claims are highly de- tailed and topically diverse. In our supplementary, we include examples of claims from our dataset compared to other datasets. 6\nDatasets #Samples Data source Topic(s) Multi-modality Multi-document Claim verification Fine-grained Labels Zlatkova et al. (2019) 1,233 Snopes, Reuters Multi ✔ ✗ ✔ ✗ Cheema et al. (2022b) 3,400 X COVID-19, Climate change, Technology ✔ ✔ ✔ ✗ Nielsen and McConville (2022) 12,914 X Multi ✔ ✔ ✔ ✗ Yao et al. (2023b) 15,601 Politifact, Snopes Multi ✔ ✗ ✔ ✗ Nakov et al. (2021) 18,014 X COVID-19, Politics ✗ ✔ ✗ ✗ Ours 414,405 Multi-News Multi ✔ ✔ ✔ ✔ Table 1: Comparison between different datasets in terms of multi-modality, multi documents, claim verification, and fine-grained labels. Ours is the largest one that supports fine-grained labels with multimodal document claim verification. No dataset provides fine-grained labels. Model Synthetic Labels Human Labels Sample-level Fine-grained Sample-level Fine-grained E N C All E N C All E N C All E N C All FGVE 0.27 0.2 0.28 0.25 0.23 0.1 0.09 0.14 0.32 0.14 0.36 0.27 0.30 0.05 0.04 0.13 MOCHEG 0.32 0.14 0.36 0.27 0.28 0.13 0.32 0.24 0.37 0.18 0.41 0.32 0.35 0.14 0.39 0.29 LLaV A-1.5 0.57 0.0 0.33 0.30 0.73 0.0 0.14 0.29 0.67 0.0 0.43 0.37 0.88 0.0 0.13 0.33 MiniGPT-v2 0.50 0.0 0.43 0.31 0.56 0.0 0.24 0.27 0.62 0.0 0.62 0.41 0.54 0.0 0.09 0.21 Ours 0.72 0.26 0.48 0.49 0.65 0.23 0.41 0.43 0.72 0.21 0.59 0.51 0.68 0.1 0.39 0.39 Table 2: Results on our M3DC benchmark. We report class-wise F1 scores (E: entailed, N: neutral, C: contradicted) and the overall F1 score (All). 4.2 Testing Datasets and Baselines We evaluate entailment performance on our M3DC benchmark and MOCHEG . We report F1 scores for each of the three classes and the macro-averaged overall F1. We evaluate both sample and fine-grained level pre- diction tasks. For M3DC, we compare using two types of labels: human annotated labels and syn- thetic labels generated from the fine-grained entail- ment classifier (Section 3.1.3). For human labels in M3DC and fine-grained labels in MOCHEG, we randomly selected 30 samples for human annota- tion using six expert annotators. 
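For readers who want to connect the aggregation rules of Eqs. (3)-(4) in Section 3.2.2 to code, below is a minimal NumPy sketch of the node and edge updates on a small AMR graph; the symmetric normalization c_ij = sqrt(deg(i) * deg(j)) is an assumption for illustration, and this is not the authors' implementation.

import numpy as np

# Eq. (3): h_i <- sum over neighbours (plus self-loop) of (1 / c_ij) * h_j.
# Eq. (4): e_ij = [W_e h_i || W_e h_j].

def gcn_layer(H, adj):
    A = adj + np.eye(adj.shape[0])          # add self-loops: N(i) union {i}
    deg = A.sum(axis=1)
    C = np.sqrt(np.outer(deg, deg))         # assumed normalization c_ij
    return (A / C) @ H                      # normalized neighbourhood average

def edge_features(H, edges, W_e):
    # Concatenate the projected endpoint features for each edge (i, j).
    return {(i, j): np.concatenate([H[i] @ W_e, H[j] @ W_e]) for i, j in edges}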
Performing human annotation on samples is an extremely laborious and time-consuming task, which requires annota- tors to read multiple documents, look at multiple images, and watch video(s), which are often quite long. Annotators then must densely annotate every KE within the claim. We include additional details on our human study in our supplementary. Our baselines consist of a set of large vision-language models (LVLM) that have demonstrated strong performance on various multimodal tasks. Task- specific baselines include FGVE and MOCHEG , while LVLM baselines include LLaV A-1.5 and MiniGPT-v2 . 4.3 Quantitative Evaluation Table 2 shows our model outperforming baselines on the M3DC dataset, with similar results on syn- thetic and human-labeled data. This is critical, as it shows that the performance of our models on our human-annotated data tracks closely with the per- Model Sample-level Fine-grained E N C All E N C All FGVE 0.37 0.16 0.37 0.3 0.31 0.1 0.2 0.20 MOCHEG† 0.57 0.23 0.40 0.39 0.52 0.21 0.36 0.37 LLaV A-1.5 0.67 0.0 0.93 0.53 0.44 0.0 0.25 0.23 MiniGPT-v2 0.67 0.0 0.93 0.53 0.71 0.0 0.25 0.32 Ours 0.69 0.25 0.48 0.47 0.63 0.18 0.36 0.39 Table 3: Results on MOCHEG dataset . All labels are human labels in this benchmark. We report class-wise F1 scores (E: entailed, N: neutral, C: contradicted) and the overall F1 score (All). †: Note that MOCHEG is also trained on this dataset, while our method is applied zero-shot . Model Sample-level Fine-grained E N C All E N C All Ours w/ Text 0.69 0.25 0.43 0.46 0.61 0.15 0.34 0.37 Ours w/ Text + Image 0.71 0.26 0.42 0.46 0.63 0.18 0.36 0.39 Ours w/ Text + Image + Video 0.72 0.26 0.48 0.49 0.65 0.23 0.41 0.43 Ours w/ Text + Image + Video + Audio 0.70 0.24 0.47 0.47 0.63 0.21 0.41 0.42 Ours All w/o Text 0.42 0.02 0.29 0.24 0.37 0.01 0.23 0.20 Table 4: Ablation on M3DC showing the impact of removing different modalities on our method. formance obtained on our large synthetic dataset, suggesting our synthetic dataset is a good evalua- tion benchmark for this task. On the MOCHEG dataset (Table 3), our model outperforms in fine- grained predictions, despite being trained on a di- verse news dataset, M3DC, rather than MOCHEG. While LLaV A and MiniGPT-v2 outperform in over- all F1, they fail to correctly identify neutral claims, which our model handles better. The MOCHEG dataset’s lack of video and audio and different styles of text (Snopes vs News) contributes to its lower performance at the sample level. 4.4 Ablations To demonstrate our model’s capability in handling multimodal inputs, we conducted ablation studies 7\npx Figure 4: Qualitative results comparing our method’s fine-grained predictions with those obtained from other baselines. We include additional results in our supplementary materials. with varying combinations of modalities, as out- lined in Table 4. Considering that a substantial por- tion of the information in KGs is derived from the textual content of news articles, it was anticipated that the text modality would play a pivotal role in the model’s inference process. Our results, how- ever, indicate that including additional modalities, such as visual and audio, did not significantly en- hance the model’s performance. This observation suggests that the dominance of text-based claims in our dataset may lead the model to prioritize textual features, which are typically sufficient for classify- ing claims derived from textual information. 
4.5 Qualitative Results We show qualitative results comparing our method with competitive baselines in Figure 4. We il- lustrate predictions on nodes and tuples by the color of the edges (green=entailed, yellow=neutral, red=contradiction). Node colors indicate node pre- dictions, while edge colors represent tuple predic- tions. We perform fine-grained claim verification for the claim “The crash and fire in Chuhuiv were caused by engine failure and not by pilot errors.” In actuality, the crash was partially caused by pilot errors, so this portion of the claim is shown in red (as being contradicted by certain media sources). We observe that our method identifies the correct portion of the claim as being contradicted by the evidence, while baselines tend to make more ran- dom predictions throughout the graph. In general, we observe that our GNN-based approach is able to ensure more semantic consistency across pre- dictions than approaches that make predictions for each part of the claim separately. 5 Conclusion In this paper, we address the challenge of predicting the logical consistency of claims with multimodal sources. Our method analyzes claims within a mul- timodal multidocument context, including text, vi- sual content, and audio. Our method is able to rea- son in a fine-grained manner over complex informa- tion across media and modalities. We further intro- duce a dataset, M3DC, created through a unique synthesis technique that produces claims requiring cross-document, cross-media reasoning for verifi- cation. This benchmark will enable the evaluation of multimodal fact-checking models and spur fur- ther research in this space. Our contributions aim to mitigate the impact of misinformation and enhance the reliability of automated fact-checking systems, thus supporting informed decision-making and fos- tering a factually accurate public dialogue. 8\n
[ "LEMMA: Towards LVLM-Enhanced Multimodal Misinformation Detection with External Knowledge Augmentation", "Video-LLaVA: Learning United Visual Representation by Alignment Before Projection", "Qwen-Audio: Advancing Universal Audio Understanding via Unified Large-Scale Audio-Language Models", "Llama 2: Open Foundation and Fine-Tuned Chat Models", "ImageBind One Embedding Space to Bind Them All", "GPT-4 Technical Report", "EVA: Exploring the Limits of Masked Visual Representation Learning at Scale", "F-coref: Fast, Accurate and Easy to Use Coreference Resolution", "End-to-End Multimodal Fact-Checking and Explanation Generation: A Challenging Dataset and Models", "Graph Pre-training for AMR Parsing and Generation", "MuMiN: A Large-Scale Multilingual Multimodal Fact-Checked Misinformation Social Network Dataset", "Open-Domain, Content-based, Multi-modal Fact-checking of Out-of-Context Images via Online Resources", "DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing", "Structure-aware Fine-tuning of Sequence-to-sequence Transformers for Transition-based AMR Parsing", "Overview of the CLEF-2021 CheckThat! Lab on Detecting Check-Worthy Claims, Previously Fact-Checked Claims, and Fake News", "Fake news detection: A hybrid CNN-RNN based deep learning approach", "Learning Transferable Visual Models From Natural Language Supervision", "Detecting Cross-Modal Inconsistency to Defend against Neural Fake News", "A Joint Neural Model for Information Extraction with Global Features", "Online misinformation about climate change", "Detecting fake news stories via multimodal analysis", "Fakeddit: A New Multimodal Benchmark Dataset for Fine-grained Fake News Detection", "A Benchmark Dataset of Check-worthy Factual Claims", "Vaccine Safety: Myths and Misinformation", "PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization", "Adversarial NLI: A New Benchmark for Natural Language Understanding", "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", "Fact-Checking Meets Fauxtography: Verifying Claims About Images", "Learning Hierarchical Discourse-level Structure for Fake News Detection", "FEVER: a Large-scale Dataset for Fact Extraction and VERification", "Proximal Policy Optimization Algorithms", "A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference", "Semi-Supervised Classification with Graph Convolutional Networks", "Abstract Meaning Representation for Sembanking", "Cross-document Misinformation Detection based on Event Graph Reasoning", "InfoSurgeon: Cross-Media Fine-grained Information Consistency Checking for Fake News Detection" ]
Exploring the Best Practices of Query Expansion with Large Language Models
Exploring the Best Practices of Query Expansion with Large Language Models Abstract Large Language Models (LLMs) are founda- tional in language technologies, particularly in information retrieval (IR). Previous stud- ies have utilized LLMs for query expansion, achieving notable improvements in IR. In this paper, we thoroughly explore the best prac- tice of leveraging LLMs for query expan- sion. To this end, we introduce a training-free, straightforward yet effective framework called Multi-Text Generation Integration ( MUGI). It leverages LLMs to generate multiple pseudo- references, integrating them with queries to en- hance both sparse and dense retrievers. Our empirical findings reveal that: (1) Increasing the number of samples from LLMs benefits IR systems; (2) A balance between the query and pseudo-documents, and an effective integration strategy, is critical for high performance; (3) Contextual information from LLMs is essential, even boost a 23M model to outperform a 7B baseline model; (4) Pseudo relevance feedback can further calibrate queries for improved per- formance; and (5) Query expansion is widely applicable and versatile, consistently enhancing models ranging from 23M to 7B parameters. 1 Introduction Information retrieval (IR) is crucial for extracting relevant documents from large databases, serving as a key component in search engines, dialogue sys- tems , question- answering platforms , recommendation sys- tems , and Retrieval Augmented Generation (RAG) . Query expansion, a key technique for enhancing information retrieval (IR) efficacy , traditionally employs Pseudo-Relevance Feedback (PRF) from initial retrieval results. However, its effectiveness is constrained by the quality of these results. Recently, Large Language Models (LLMs), such as ChatGPT, have demonstrated ex- ceptional capabilities in language understanding, knowledge storage, and reasoning . Motivated by these advancements, some studies have explored leverag- ing LLMs for zero-shot query expansion . While these methods have shown empirical effectiveness, they also present certain limitations. LameR generates potential answers by utilizing LLMs to rewrite BM25 can- didates for expansion. However, its performance is highly dependent on the quality of the initial retrieval. Both HyDE and query2doc leverage the knowl- edge stored in LLMs. While HyDE demonstrates effective performance with contriver, it performs poorly with lexical-based retrievers . Conversely, query2doc is effective with both sparse and dense retrieval methods, but strong rankers may not benefit as much as weaker ones . Moreover, the in- tegration and balance between pseudo references and queries are under-explored in these studies. To address these limitations, we explore best practices for utilizing query expansion with LLMs for information retrieval. In this paper, we delve into several specific research questions: RQ1 : Are multiple pseudo-references more beneficial than a single one? RQ2 : Is there a universal query ex- pansion method that effectively serves both lexical- based and neural-based retrievers, applicable to both weak and strong models without prior con- straints? RQ3 : How can the query and pseudo- references be balanced for lexical-based retrievers? RQ4 : What is the most effective method for inte- grating multiple pseudo-references with a query in dense retrievers? 1\nWe introduce a framework named Multi-Text Generation Integration ( MUGI) to address these key questions. 
MUGIemploys a zero-shot ap- proach to generate multiple pseudo-references from LLMs, integrating them with queries to enhance IR efficiency. Our empirical experiments demon- strate that: (1) Increasing the number of samples from LLMs benefits IR systems. (2) MUGIdemon- strates versatility and effectiveness across both lex- ical and dense retrievers and models of various sizes. Remarkably, it enables a 23M-parameter dense retriever to outperform a larger 7B baseline. (3)MUGIproposes an adaptive reweighting strat- egy that considers the lengths of both the pseudo- references and the query, critically improving the performance of lexical retrievers. (4) MUGIinves- tigates different integration strategies and proposes contextualized pooling, which has been overlooked in previous methods. Additionally, drawing inspi- ration from the Rocchio algorithm (Schütze et al., 2008), MUGIimplements a calibration module that leverages pseudo relevance feedback to further en- hance IR performance. Notably, using ChatGPT4, MUGIsignificantly enhances BM25 performance, with an 18% improvement on the TREC DL dataset and 7.5% on BEIR, and boosts dense retrievers by over 7% on TREC DL and 4% on BEIR. 2 Related Work Information Retrieval focuses on the efficient and effective retrieval of information in response to user queries. Best Matching 25 (BM25) advances beyond earlier proba- bilistic models by incorporating document length normalization and non-linear term frequency scal- ing, thereby enhancing the alignment of queries with documents. Dense retrievers such as DPR employ deep neu- ral networks to identify semantic relationships be- tween queries and documents by measuring the cosine similarity of their text embeddings. Existing efficient IR systems typically use a re- trieval & rerank pipeline : Initially, a retrieval mecha- nism, such as BM25 or a bi-encoder, identifies a broad set of potentially relevant documents. Subse- quently, a stronger ranker, usually a cross-encoder, meticulously scores the relevance of these docu- ments, enhancing the precision of the final results. LLMs for IR The use of LLMs in IR falls into two primary categories : fine- tuning LLMs as retrieval models and employing them for zero-shot IR. This paper concentrates on zero-shot IR, where typical approaches involve leveraging the reasoning capabilities of LLMs for direct document ranking or relevance assessment . While effective, these methods are limited by LLMs’ input length constraints, making them better suited for the rerank phase. Another line of research focuses on using LLMs to synthesize additional high-quality train- ing datasets to improve existing models . Other works, such as HyDE , query2doc , and LameR , explore query expansion. They leverage LLMs to create pseudo-references or potential answers, enhancing queries for better retrieval outcomes. MuGI is a query expansion framework that lever- ages LLMs to enhance queries. Unlike previous works, which are limited by inherent constraints, MuGI offers broader applicability and versatility as it seamlessly integrates with both lexical and dense retrievers. By utilizing and intergrating a wealth of contextualized information from multiple references, MuGI surpasses existing techniques in both in-domain and out-of-distribution evaluations by more effectively capturing essential keywords and enriching the background context. 3 Method We begin by discussing IR preliminaries and intro- ducing our MuGI framework, which is designed to address the questions outlined earlier. 
3.1 Preliminaries Non-parametric Lexical-based Methods BM25 is a fundamental non-parametric lexical method that calculates document relevance using term frequency (TF) and inverse document frequency (IDF) as:. nX i=1IDF(qi)TF(qi, D)(k1+ 1) TF(qi, D) +k1(1−b+b|D| avgdl)(1) where qiare query terms, TF(qi, D)is term fre- quency, IDF(qi)is inverse document frequency, |D|is document length, avgdl is average document length, and k1andbare tunable parameters. 2\nNeural Dense Retrieval Methods Dense re- trieval leverages deep learning to identify semantic similarities between queries and documents by en- coding them into high-dimensional embeddings, typically measured by : Sim(q, D) =f(q)⊤f(D) ∥f(q)∥∥f(D)∥(2) where f(·)maps text to embedding space Rd. BM25 is fast and generalizes well, suited for sparase retrieval, while dense retrieval excels at capturing semantic connections but is slower and less generalized due to neural network dependency. 3.2 Multi-Text Generation Integration Recognizing that both lexical-based and dense re- trieval methods depend on a certain degree of infor- mation overlap between the query and document, we introduce the Multi-Text Generation Integra- tion (MUGI) method. This approach aims to aug- ment the query’s information content by leveraging multiple samplings from LLMs. MUGIenriches queries with additional background information and broadens the keyword vocabulary to encom- pass out-of-domain terms, thereby bridging the se- mantic gap between queries and documents on both lexical-based and dense retrievers. Figure 2 pro- vides an illustrative overview of M UGI. Upon receiving a query q,MUGIinitially ap- plies a zero-shot prompt (see fig. 1) technique to generate a set of pseudo-references, denoted as R={r1, r2, r3, ..., r n}, which are then integrated with query for subsequent IR operations. We have explored different methods for BM25 and dense retrievers. Zero-shot Generation PromptYou are PassageGenGPT, an AI capable of generating concise, informative, and clear pseudo passages on specific topics.Generate one passage that is relevant to the following query: '{query}'. The passage should be concise, informative, and clear Figure 1: Zero-Shot Prompting for Relevant Passage Generation: It emphasizes generating contextually rele- vant content to enhance background knowledge density for multiple-text integration. 3.2.1 M UGI for BM25 This component evaluates relevance by analyzing lexical overlaps between the query and references. Given the longer lengths of documents compared to queries and BM25’s sensitivity to word frequency, achieving a careful balance to ensure the appro- priate influence of each element in text is crucial. The variation in the lengths of queries and pas- sages makes the constant repetition of query used in previous studies, which typically handles single pseudo-references, ineffective , particularly when dealing with multiple references. To address this issue, we implement an adap- tive reweighting strategy that adjusts according to the length of the pseudo-references. This adjust- ment is governed by a factor β, as illustrated by the following equation: λ=len(r1) + len( r2) +. . .+ len( rn)) len(q)·β (3) Since BM25 does not account for word order, we enhance the query by repeating query λtimes and concatenating it with all pseudo-references: qsparse =concat (q∗λ, r 1, r2, r3..., rn)(4) This enhanced query is then processed by BM25 to produce the ranking results Ibm25. 
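To make the sparse-side recipe above concrete, here is a minimal sketch of building the expanded query of Eqs. (3)-(4); whitespace tokenization, rounding lambda to an integer repetition count, and the beta value are illustrative assumptions, and the BM25 scorer itself (e.g., from Pyserini or rank_bm25) is assumed to be supplied separately.

# Minimal sketch of MuGI's sparse-side query expansion: the query is repeated
# lambda times, where lambda balances the total pseudo-reference length against
# the query length (Eq. (3)), and the result is concatenated with all references
# before BM25 scoring (Eq. (4)). beta=5 is an illustrative value only.

def expand_query_for_bm25(query, pseudo_refs, beta=5):
    ref_len = sum(len(r.split()) for r in pseudo_refs)
    lam = max(1, round(ref_len / (len(query.split()) * beta)))  # Eq. (3)
    return " ".join([query] * lam + list(pseudo_refs))          # Eq. (4)

# Usage: pass the expanded string to any BM25 implementation, e.g.
#   scores = bm25.get_scores(expand_query_for_bm25(q, refs).split())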
3.2.2 MuGI for Dense Retriever MuGI also enhances dense retrievers, specifically bi-encoders. In this section, we discuss how to integrate pseudo-references with queries and how to refine the query using pseudo positive/negative reference feedback. Integration We present two approaches to integrate queries with pseudo-references to obtain a contextualized query embedding. I. Concatenation has been commonly used in prior studies, where the query is simply concatenated with all references, as in BM25: q_cat = concat(q, r_1, r_2, ..., r_n) (5) This enhanced query is then processed by the dense retriever f to produce an embedding, i.e., e_cat = f(q_cat). However, as the number and length of references increase, the typical input length limitation of 512 tokens can hinder the integration process. Consequently, only one to two passages can be incorporated into q_cat. II. Feature Pooling addresses the model's input length limitations, particularly when multiple references are involved. A straightforward method is to average the embeddings in
[Figure: MuGI pipeline — Stage 1: retrieval (BM25, top 100 documents); Stage 2: rerank.]
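The two integration routes described in this subsection can be sketched as follows (a simplified illustration, not the authors' code); encode stands in for any bi-encoder, such as a Sentence-Transformers model's encode(), and mean pooling is used as the straightforward averaging variant mentioned above.

import numpy as np

# (I) concatenate the query with the references and encode once (Eq. (5));
# (II) encode query and references separately and average the embeddings.
# `encode` is a hypothetical stand-in for a bi-encoder; mean pooling is an
# illustrative choice for the averaging strategy.

def integrate_by_concatenation(encode, query, pseudo_refs):
    q_cat = " ".join([query] + list(pseudo_refs))   # may exceed 512 tokens
    return encode(q_cat)

def integrate_by_pooling(encode, query, pseudo_refs):
    embs = np.stack([encode(t) for t in [query] + list(pseudo_refs)])
    return embs.mean(axis=0)

def retrieve(query_emb, doc_embs):
    # Rank documents by cosine similarity to the integrated query embedding.
    q = query_emb / np.linalg.norm(query_emb)
    d = doc_embs / np.linalg.norm(doc_embs, axis=1, keepdims=True)
    return np.argsort(-(d @ q))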
[ "Can Query Expansion Improve Generalization of Strong Cross-Encoder Rankers?", "Fine-Tuning LLaMA for Multi-Stage Text Retrieval", "When do Generative Query and Document Expansions Fail? A Comprehensive Study Across Methods, Retrievers, and Datasets", "Large Language Models for Information Retrieval: A Survey", "Towards General Text Embeddings with Multi-stage Contrastive Learning", "Recommender Systems in the Era of Large Language Models (LLMs)", "Judging LLM-as-a-judge with MT-Bench and Chatbot Arena", "Recommendation as Instruction Following: A Large Language Model Empowered Recommendation Approach", "Large Language Models are Strong Zero-Shot Retriever", "Is ChatGPT Good at Search? Investigating Large Language Models as Re-Ranking Agent", "LLaMA: Open and Efficient Foundation Language Models", "InPars-v2: Large Language Models as Efficient Dataset Generators for Information Retrieval", "Precise Zero-Shot Dense Retrieval without Relevance Labels", "Enhancing Multi-modal Multi-hop Question Answering via Structured Knowledge and Unified Retrieval-Generation", "FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness", "Improving Passage Retrieval with Zero-Shot Question Generation", "Unsupervised Dense Information Retrieval with Contrastive Learning", "Pseudo Relevance Feedback with Deep Language Models and Dense Retrievers: Successes and Pitfalls", "Learning Implicit User Profile for Personalized Retrieval-Based Chatbot", "Pyserini: A Python Toolkit for Reproducible Information Retrieval Research with Sparse and Dense Representations", "RetGen: A Joint Framework for Retrieval and Grounded Text Generation Modeling", "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", "Semantic Models for the First-Stage Retrieval: A Comprehensive Review", "Overview of the TREC 2020 Deep Learning Track", "RocketQA: An Optimized Training Approach to Dense Passage Retrieval for Open-Domain Question Answering", "Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering", "Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval", "Language Models are Few-Shot Learners", "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks", "Dense Passage Retrieval for Open-Domain Question Answering", "Overview of the TREC 2019 deep learning track", "Document Ranking with a Pretrained Sequence-to-Sequence Model", "Multi-hop Selector Network for Multi-turn Response Selection in Retrieval-based Chatbots", "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", "Passage Re-ranking with BERT", "Learning deep structured semantic models for web search using clickthrough data", "Relevance-Based Language Models", "Relevance weighting of search terms", "C-Pack: Packaged Resources To Advance General Chinese Embedding", "UMass at TREC 2004: Novelty and HARD", "Okapi at TREC-3", "The SMART Retrieval System—Experiments in Automatic Document Processing" ]
Explicit Memory Learning with Expectation Maximization
Explicit Memory Learning with Expectation Maximization Abstract Large Language Models (LLMs) have revo- lutionized the landscape of natural language processing, demonstrating remarkable abilities across various complex tasks. However, their stateless nature limits the capability to retain information across interactions, hindering per- formance in scenarios requiring historical con- text recall. To mitigate this, current approaches primarily use explicit memory to allow LLMs to store useful information, which is accessible, readable, and interpretable. Nevertheless, ex- plicit memory lacks the reliable learning mech- anisms of implicit memory, which can be op- timized end-to-end. To harness the benefits of both, we introduce EM2, a novel frame- work enhancing explicit memory updates via the Expectation-Maximization (EM) algorithm. EM2treats memory as a latent variable, ensur- ing continual learning and improvement dur- ing updates. Experimental results on stream- ing inference tasks demonstrate that EM2out- performs existing methods without memory or with static external memory. Our in-depth analysis highlights that EM2significantly en- hances performance across various backbones and memory strategies, providing a robust solu- tion for advancing LLM memory management and enabling explicit memory to learn and im- prove similarly to implicit memory. 1 Introduction The advent of Large Language Models (LLMs) has shifted the landscape of machine learning, unveil- ing unprecedented capabilities for handling com- plex tasks across diverse domains (Ouyang et al., 2022; Achiam et al., 2023; Anthropic, 2024; Reid et al., 2024; Touvron et al., 2023; Zhao et al., 2023; Naveed et al., 2023, inter alia ). Despite these ad- vancements, a fundamental limitation of LLMs is their statelessness : they do not retain informa- tion across invocations . This restricts their ability to process and utilize previous inter- Explicit Memory Implicit Memory 1.A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2.A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 3.A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. ParametersFigure 1: Comparison between Explicit and Implicit Memory. Explicit memory is represented through text, storing information directly accessible and readable. Im- plicit memory is stored in the form of parameters, which underlie the model’s learned behaviors and are not di- rectly interpretable. Deep blue indicates the memory currently being activated. actions in a manner akin to human cognitive pro- cesses , thereby limiting their util- ity in scenarios that require retention and recall of historical context . Recent studies have attempted to address this challenge by incorporating external memory mech- anisms , which can be categorized into explicit and implicit forms . As illustrated in Fig- ure 1, explicit memory stores information in a tex- tual format that is directly accessible and read- able, such as rules, knowledge, and skills . Implicit memory, on the other hand, is parametric, facil- itating learning and updates . While parametric storage allows 1\nfor end-to-end learning, it often faces issues with training stability , specifica- tion , and interpretabil- ity . With the increasing ability of LLMs to directly understand text , explicit memory is be- coming the dominant method for memory storage in LLMs . 
Updating is a critical feature of memory . Current methods of updating explicit memory include manual revi- sions and self-reflection . Ge et al. (2023) con- ceptualize LLMs as operating systems and have developed memory update mechanisms inspired by OS design. Wang et al. (2024a) employ LLMs to summarize past experiences for enhanced external memory autonomously. It is worth noticing that, LLMs may miss or make mistakes when internalizing knowledge , and there is no guarantee that newly constructed memory is supe- rior to its predecessors. In contrast, implicit mem- ory, which is updated through gradients , ensures learning during the memory update. Current meth- ods for updating explicit memory do not guaran- tee learning and enhancement during the memory update process, marking a fundamental drawback. The primary reason is the non-differentiability of textual memory, which means that memory updates lack a clear direction. To address this, we propose EM2, which treats memory as a latent variable and update it using the Expectation-Maximization (EM) algorithm . EM2extracts relevant past experi- ences to guide current predictions and ensures that the memory is continuously optimized, enabling the model to learn and improve effectively over time. Experimental results on streaming inference tasks show that compared to models without exter- nal/fixed memory, our dynamic memory updating approach significantly enhances performance. Our main contributions are as follows: •We identify that current methods of updating explicit memory lack direction and do not en- sure that updated memory is superior to previ- ous versions. •We introduce EM2, which updates explicit memory using the EM algorithm to ensure continuous learning and enhancement during the memory update process. •Experimental results demonstrate that EM ² significantly improves model performance. 2 Related Work 2.1 Memory Mechanism of LLMs Memory is fundamental to the development of intelligence . Memory mecha- nisms in LLMs primarily involve retrieval , updating , and utiliza- tion processes. Retrieval aims to fetch relevant and accurate memories from a vast store, directly influencing the outcome’s qual- ity . Updates include incremental, inductive, and com- pressive approaches. Incremental updates simply add newly acquired memories without processing them . Induc- tive updates utilize the LLM’s capability to amal- gamate and summarize memories, thereby narrow- ing the retrieval scope . Compressive updates enhance the efficiency of memory use by condensing texts into vectors . The utilization of memory relies on the LLM’s contextual understanding and learning capabilities, optimizing model behavior through the injection of text or parameters . For LLMs, memory can be classified as explicit or implicit . Explicit memory, also known as declarative memory, refers to forms of memory that can be articulated . It can be stored and retrieved in textual form , offering readability and interpretability . Explicit memory does not depend on a specific model and can be uti- lized by various models post-generation . Additionally, humans can participate in modifying and refining explicit memory, making it widely applied in LLM memory modules . Implicit memory, on the other hand, refers to forms of memory that can- not be articulated. This type of memory is stored in parameters and updated through training . 
Although explicit memory can also be up- dated through model-driven summarization and induction , it lacks the clear update targets characteristic of implicit memory, 2\nwhich ensures that the updated state is superior to its previous state. 3 Model Inference. The inference methods for LLMs predominantly encompass zero-shot, few-shot, and chain-of- thought . Zero-shot often requires model fine-tuning to equip LLMs with the capability to generate task-specific outputs di- rectly . Brown et al. (2020) observe that providing models with ex- ample prompts can significantly enhance their un- derstanding of specific tasks. Currently, In-Context Learning has emerged as a fundamental paradigm for addressing tasks using LLMs , effectively leveraging minimal input to guide model responses . Wei et al. (2022c) note that guiding models to generate intermediary reasoning steps will boost their performance for reasoning. This enhanced ca- pability typically emerges only in models of certain scales, a phenomenon often referred to as “emer- gent abilities” . Furthermore, Li et al. (2024) and Wang et al. (2024b) find that prompts serve a dual function: they not only acti- vate the model’s internal memory but also inject effective external knowledge and guidance. Addi- tionally, updating and infusing memory in prompts offers benefits such as interpretability and flexi- bility , further enhancing the utility of LLMs in complex inference scenarios . 4 Preliminary and Task Definition 4.1 Explicit Memory Learning Memory in AI are designed to mimic the hu- man ability to remember past experiences and uti- lize this accumulated knowledge to aid in future tasks . In our model, explicit memory learning is implemented via a memory module Mthat stores strategies τlearned over time, which is formally represented as: Mt={τ1, τ2, . . . , τ K}, (1) where Mtrepresents the state of the memory mod- ule at time t,Kis the memory size, and each τi is a tactic derived from past experiences. The up- dating of this memory is governed by a learning function L, which adjusts the memory based on new experiences (X, Y): Mt+1=L(Mt,(Xt, Yt)). (2) Here, (Xt, Yt)represents the input-output pair at timet, and the function Ldetermines how the mem- ory should be updated, possibly by adding new strategies, modifying existing ones, or removing outdated strategies based on their relevance and effectiveness in the new context. 4.2 Expectation Maximization Algorithm The Expectation Maximization (EM) algorithm is a powerful statistical tool used for parameter esti- mation in models with latent variables. It operates in two main steps: the Expectation (E) step and the Maximization (M) step. During the E step, the algorithm estimates the latent variables based on the current estimate of the parameters: Q(θ|θ(t)) =EZ∼p(Z|X,θ(t))[logp(X, Z|θ)],(3) where θ(t)denotes the parameters at iteration t, Xis the observed data, Zare the latent variables, andp(Z|X, θ(t))is the probability of the latent variables given the observed data and current pa- rameters. The M step then updates the parameters to max- imize the expected log-likelihood found in the E step: θ(t+1)= arg max θQ(θ|θ(t)). (4) This iterative process continues until convergence, making it suitable for complex models where direct likelihood maximization is infeasible . The EM algorithm is particularly ef- fective in scenarios where the model parameters include both observed and unobserved (latent) com- ponents. 
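As a worked illustration of the alternation in Eqs. (3)-(4), the following toy example fits a two-component one-dimensional Gaussian mixture with unit variances; it is a generic EM sketch for intuition only, not a component of EM2.

import numpy as np

# Toy EM for a two-component 1-D Gaussian mixture with unit variances:
# the E-step computes responsibilities p(Z | X, theta^(t)); the M-step
# re-estimates the means and the mixing weight given those soft assignments.

def em_gaussian_mixture(x, iters=50):
    mu = np.array([x.min(), x.max()], dtype=float)  # crude initialization
    pi = 0.5                                        # weight of component 0
    for _ in range(iters):
        # E-step: responsibility of component 0 for each point
        p0 = pi * np.exp(-0.5 * (x - mu[0]) ** 2)
        p1 = (1 - pi) * np.exp(-0.5 * (x - mu[1]) ** 2)
        r0 = p0 / (p0 + p1)
        # M-step: update parameters to maximize the expected log-likelihood
        mu[0] = np.sum(r0 * x) / np.sum(r0)
        mu[1] = np.sum((1 - r0) * x) / np.sum(1 - r0)
        pi = r0.mean()
    return mu, pi

np.random.seed(0)
x = np.concatenate([np.random.normal(-2, 1, 200), np.random.normal(3, 1, 300)])
print(em_gaussian_mixture(x))  # means roughly (-2, 3), pi roughly 0.4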
By alternating between estimating the hidden components given the parameters and then optimizing the parameters given the hidden compo- nents, EM facilitates a more accurate estimation of model parameters. 4.3 Task Definition Given a stream of data D = {(X1, Y1),(X2, Y2), . . . , (Xn, Yn)}, where Xtrepresents the observed data at time tandYt denotes the corresponding true label, the objective is to construct effective memory Mtthat provides accurate predictions ˆYt. Our primary goal is to minimize the discrep- ancy between the predicted labels ˆYtand the actual labels Yt. This is achieved by enhancing the predic- tive accuracy of the model under the guidance of the evolving memory Mt. The effectiveness of Mt 3\nis crucial as it directly influences the model’s ability to adapt to new data and make accurate predictions. Therefore, the challenge lies in designing a learn- ing function Lthat not only updates the memory efficiently but also ensures that these updates result in the accurate anticipation of future samples based on past and present data insights. 5 Methodology 5.1 Memory based Inference At time t, the model receives an input Xt. In a zero- shot scenario, without any guidance from memory, the model ξgenerates the predicted label ˆYtin an autoregressive manner as follows: Pξ(ˆYt|Xt) =|ˆYt|Y i=1Pξ(ˆyi|Xt,ˆy<i) (5) To leverage past experiences stored in the mem- ory, we enhance model’s capability by introducing a memory-based guidance. Given the current in- putXt, we extract the most relevant information from the current memory state Mt. This extraction process results in a memory subset mt, defined as the set of elements in Mtthat are most relevant toXt. The relevance can be quantified based on similarity measures, heuristic rules, or learned rele- vance functions. The resulting mtcan be formally represented as: mt= select( Mt, Xt) (6) where select is a function that retrieves the most relevant memory elements based on Xt. Withmtas an additional context, the model then generates ˆYtusing both mtandXtto guide the prediction: Pξ(ˆYt|mt, Xt) =|ˆYt|Y i=1Pξ(ˆyi|mt, Xt,ˆy<i)(7) This memory-augmented inference mechanism allows the model to effectively utilize historical data, enhancing its predictive accuracy and adapt- ability in dynamic environments. 5.2 Memory Module Construction The Memory Module Mis constructed by accu- mulating pairs (Xi,ˆYi)over time. Initially, the memory of the model is empty, representing a state of minimal prior knowledge. As the model pro- cesses data and generates predictions, it selectively updates this memory based on the quality and cer- tainty of the information. To quantify the certainty of each predicted out- put and determine its eligibility for memory in- clusion, we define an uncertainty threshold ϵ. A prediction ˆYiis considered high-quality if its nor- malized entropy, which measures the average un- certainty across all predicted components, is below this threshold. The entropy H(ˆYi)for each predic- tion is calculated as follows: H(ˆYi) =−1 |ˆYi||ˆYi|X j=1logPξ(ˆyj|Xi,ˆy<j)≤ϵ (8) When the above condition is satisfied, indicating that the generated prediction ˆYiis of sufficiently high certainty and quality, it is integrated into the memory using the learning function L, as discussed in Section 4.1. 5.3 Memory Update through Learning Function We employ the EM algorithm to design the learning function L. 
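The admission test of Eq. (8) in the previous subsection can be sketched as follows (a minimal illustration): a prediction enters the memory only if its average token-level negative log-probability falls below a threshold. The per-token log-probabilities are assumed to be available from the backbone LLM, and epsilon = 0.3 is an illustrative value, not one reported in this paper.

# Minimal sketch of the entropy-based admission test in Eq. (8).

def normalized_entropy(token_logprobs):
    return -sum(token_logprobs) / len(token_logprobs)

def maybe_add_to_memory(memory, x, y_hat, token_logprobs, epsilon=0.3):
    if normalized_entropy(token_logprobs) <= epsilon:
        memory.append((x, y_hat))   # keep only high-confidence predictions
    return memory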
As depicted in Figure 2 under 2⃝and 3⃝, if the generated ˆYisatisfies condition 8, it is fed along with the current memory state Mtinto the learning function L. The update equation is: Mt+1=L(Mt,(Xt,ˆYt)) (9) We treat strategies τas latent variables ZandMtas the parameter θin Eq. 3, transforming the learning process into an EM learning framework. 5.3.1 Construction of Representative Validation Set To evaluate the updates efficiently, we construct a representative validation set Vfrom the dataset Dnot yet included in the memory Mt. We select cluster centers from D \Mtto form V, reducing re- dundancy and improving the efficiency of memory updates. The selection can be represented by: Vt= centers( {(X1,ˆY1), . . . , (Xt,ˆYt)} \Mt) (10) 5.3.2 E-step: Inference Procedure LetVt={(Xv, Yv)}. Based on Equation 3, the prediction for Yvgiven Xvand the memory Mis calculated as: 4\nObserved Data Memory ModuleLLM Inference Relevant PromptM-step: Determine whether to update memory based on predicted performance E-step: Enhance Prediction based on the new Memory𝑋𝑋1�𝑌𝑌1 𝑋𝑋2�𝑌𝑌2𝑋𝑋𝑖𝑖�𝑌𝑌𝑖𝑖𝑋𝑋1�𝑌𝑌1 Evaluation on the representative validation setMemory Informed Prediction 𝑋𝑋𝑣𝑣𝑀𝑀𝑡𝑡 𝑋𝑋𝑣𝑣𝑃𝑃𝑌𝑌𝑣𝑣 𝑋𝑋𝑣𝑣;𝑀𝑀 =E𝜏𝜏∼P𝜏𝜏𝑋𝑋𝑣𝑣;𝑀𝑀P𝑌𝑌𝑣𝑣𝑋𝑋𝑣𝑣,𝜏𝜏 𝑀𝑀𝑡𝑡+1𝑋𝑋𝑡𝑡�𝑌𝑌𝑡𝑡 𝑀𝑀𝑡𝑡①② ③ Inference based on 𝑀𝑀 Update 𝑀𝑀 based on new observed data (𝑋𝑋𝑡𝑡,𝑌𝑌𝑡𝑡)𝑋𝑋𝑣𝑣Figure 2: Overview of EM2for memory-guided prediction in streaming data. At each timestep t, the model receives an input Xt.1⃝utilizes the memory Mtto select relevant demonstrations that guide the generation of the prediction ˆYt.2⃝and 3⃝depict the integration of the newly generated ˆYtand the current memory Mtinto the memory updating process, ensuring that the memory evolves with the latest data insights and contributes to future predictions. P(Yv|Xv;M) =X τP(Yv, τ|Xv;M) =X τP(Yv|Xv, τ)P(τ|Xv;M) =Eτ∼P(τ|Xv;M)[P(Yv|Xv, τ)] (11) 5.3.3 M-step: Learning Procedure The memory is updated based on the maximization step defined as: Mt+1= arg max m⊂Mt∪Γ(Xt,ˆYt)|Vt|X i=1P(Yi|Xi;m),(12) where Γrepresents a function extracting knowledge from(Xt,ˆYt)to generate τt, which can be formally represented as: τt= Γ(Xt,ˆYt) (13) This step ensures that the updated memory Mt+1 performs better on Vtthan the previous state Mt, effectively capturing the beneficial strategies for future predictions. 6 Experiment 6.1 Evaluation Datasets To assess the efficacy of our approach, we evaluate it across three distinct types of tasks: word math problems, commonsense question answering (QA), and symbolic analysis. We utilize the following datasets for these evaluations: •Word Math Problem: GSM8K , MultiArith , SingleEq , AddSub , SV AMP , AQUA and MATH . •Commonsense QA: StrategyQA , CommonsenseQA (CSQA; Talmor et al., 2019), BoolQ , the AI2 Reasoning Challenge (ARC-c; Clark et al., 2018). •Symbolic Understanding: Date Understand- ing, Penguins in a Table, Colored Objects, and Object Counting sourced from Big- Bench . For a more detailed description of the datasets, please refer to Appendix A. 6.2 Experiment Settings Implementation Details. The inference process of the model not only demonstrates its understand- ing and analysis of problems but often encapsulates latent knowledge . There- fore, we store the model’s reasoning process along with the problem as the model memory. 
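The sketch below ties such stored (problem, reasoning) entries to the update rule: a greedy stand-in for the M-step in Eq. 12 scores a few candidate memories, built from M_t with and without the newly distilled entry, on the representative validation set and keeps the best one. The greedy enumeration and the toy scorer are simplifying assumptions; in the paper the score comes from running the LLM on V_t.

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence, Tuple

@dataclass(frozen=True)
class MemoryEntry:
    problem: str      # the original question X_t
    reasoning: str    # the stored reasoning process / answer for X_t

ValExample = Tuple[str, str]   # (X_v, Y_v) drawn from the representative set V_t
Scorer = Callable[[List[MemoryEntry], Sequence[ValExample]], float]

def m_step(memory: List[MemoryEntry],
           new_entry: MemoryEntry,
           val_set: Sequence[ValExample],
           score: Scorer) -> List[MemoryEntry]:
    """Greedy stand-in for Eq. 12: keep the candidate memory that scores best on V_t."""
    candidates = [memory, memory + [new_entry]]
    candidates += [memory[:i] + memory[i + 1:] + [new_entry] for i in range(len(memory))]
    return max(candidates, key=lambda m: score(m, val_set))

# Toy scorer (real scoring would run the LLM on V_t with the candidate memory as context).
toy_score = lambda mem, val: sum(any(y in e.reasoning for e in mem) for _, y in val)
mem = [MemoryEntry("2+3?", "2+3=5, so the answer is 5")]
new = MemoryEntry("4+3?", "4+3=7, so the answer is 7")
print(m_step(mem, new, [("Q", "answer is 7")], toy_score))
```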
In the main experiments, memory are vectorized using 5\nGSM8K MultiArith SingleEq AddSub SV AMP AQuA MATH Average Single Inference ZS-CoT 76.80 94.83 89.96 84.30 81.45 40.55 29.02 77.98 CoT 79.61 96.50 92.32 85.31 82.76 42.32 - 79.80 ComplexCoT 78.01 96.67 91.92 84.81 81.48 42.51 29.50 79.23 EM282.63 97.77 92.71 86.32 83.91 45.27 30.12 81.43 EM2∗83.09 97.83 92.71 87.59 84.19 46.45 30.22 81.98 Multiple Inference ZS-CoT 84.98 97.50 92.71 88.61 87.18 47.24 32.22 83.03 CoT 85.59 98.00 94.29 91.13 91.76 51.57 - 85.39 ComplexCoT 85.29 98.16 93.70 89.87 89.62 50.78 32.46 84.57 EM286.35 98.83 95.86 93.41 92.51 53.14 33.82 86.68 EM2∗86.43 98.83 95.66 94.43 92.55 53.93 33.96 86.97 Table 1: Results on Math Word Problems (Accuracy in %). The best outcomes are emphasized in bold . Average represents the average performance across all datasets, excluding MATH. EM2denotes initialization using ZS-CoT, while EM2∗indicates initialization with CoT demonstrations, highlighted with a skyblue background. To ensure a fair comparison, the LLaMA-3-8B model is used as the backbone across all methods. CSQA StrategyQA BoolQ ARC60708090Accuracy (%)ZS-CoT CoT EM2 EM2* (a) Commonsense QA Date Penguin Colored Obj. Obj. Count60708090100Accuracy (%)ZS-CoT CoT EM2 EM2* (b) Symbolic Understanding Figure 3: Performance comparison on (a) commonsense question answering and (b) symbolic understanding tasks. The charts illustrate that EM2demonstrates a distinct advantage over both no and fixed-memory mechanisms. text-embedding-3-large , and relevancy is cal- culated using cosine distance as specified in Eq. 6. To ensure fair comparisons, we limit the selection to a maximum of 8 examples. These vectors are also employed to determine the clustering centers as outlined in Eq. 10. For more details and ablation studies, see Appendix B and C. Baselines. To validate the efficacy of our ap- proach, we compare it against three baseline meth- ods representing different levels of memory integra- tion: models without memory, with fixed memory, and with retrieval-based memory. •No Memory: The Zero-shot CoT (ZS-CoT; Kojima et al., 2022) utilizes the prompt “Let’s think step by step” to activate the model’s internal reasoning capabilities without relying on external memory aids. •Fixed Memory: The Chain-of-Thought (CoT; Wei et al., 2022b) employs fixed prompts to guide the model through a reasoning process. ComplexCoT extends this by using complex prompts that guide the model to generate more detailed reasoning processes. •Retrieval Memory: The Memory-of-Thought (MoT; Li and Qiu, 2023) incorporates a two- stage memory retrieval process, which in- cludes coarse-grained semantic retrieval fol- lowed by fine-grained model filtering to select relevant memories. AutoCoT selects examples based on relevance and diversity metrics tailored to the query. In contrast to the main experiment where memory updates are conducted using test samples, MoT and AutoCoT require pre-inference on training data. To ensure a fair comparison, we align the settings with these methods to in Section 6.4. Backbones. In the main experiment, we employ the 8B LLaMA-3 model . For the anal- ysis, we extend our investigations to include more LLMs, including LLaMA-3-70B , 6\nLLaMA-3-8B Mistral-7B Qwen-7B LLaMA-3-70B Mistral-7Bx8405060708090100Accuracy (%)ZS CoT Complex-CoT EM2(a) GSM8K LLaMA-3-8B Mistral-7B Qwen-7B LLaMA-3-70B Mistral-7Bx85060708090100Accuracy (%)ZS CoT Complex-CoT EM2 (b) CSQA Figure 4: Performance comparison of different memory mechanisms across various LLMs. 
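For the retrieval step in Eq. 6 under the settings above, the sketch below selects at most eight stored memories whose embedding vectors are closest to the query under cosine similarity. The embeddings are assumed to be precomputed (e.g., with text-embedding-3-large); random vectors stand in for them here.

```python
import numpy as np

def select(memory_embs: np.ndarray, memory_items: list, query_emb: np.ndarray, k: int = 8) -> list:
    """Return the (at most) k stored items most similar to the query under cosine similarity."""
    if len(memory_items) == 0:
        return []
    mem = memory_embs / np.linalg.norm(memory_embs, axis=1, keepdims=True)
    q = query_emb / np.linalg.norm(query_emb)
    sims = mem @ q                                    # cosine similarity per memory entry
    top = np.argsort(-sims)[: min(k, len(memory_items))]
    return [memory_items[i] for i in top]

# Placeholder data standing in for real embeddings of stored reasoning traces.
rng = np.random.default_rng(0)
embs = rng.normal(size=(20, 16))
items = [f"strategy_{i}" for i in range(20)]
print(select(embs, items, rng.normal(size=16), k=8))
```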
algebra probabilitygeometry intermediate_algebranumber_theoryprealgebra precalculus01020304050Accuracy (%)EM Algorithm Random FIFO Figure 5: Performance of different memory updating mechanisms on the MATH dataset. Mistral-7B , Mixtral , and Qwen-2 . 6.3 Main Results Word Math Problem. Table 1 presents the re- sults on math word problems. Compared to meth- ods with no memory or fixed memory, our memory learning approach exhibits significant advantages. Notably, on the GSM8K dataset, EM2outperforms the ZS-CoT by 5.83% and CoT by 3.02%. This improvement is attributed to the dynamic mem- ory updating mechanism of EM2. We utilize two initialization methods: ZS-CoT, where the initial memory is empty, and CoT, which provides eight high-quality demonstrations at initialization. While the CoT initialization ensures better initial perfor- mance, the efficacy of both approaches converges as the memory accumulates. For instance, on the SingleEq dataset, results from both initialization methods are identical. Further, we analyze multiple inference scenario and observe thatEM2retains a clear advantage. Moreover, as more memories are integrated, the performance gap between the two initialization methods narrows. Commonsense QA and Symbolic. The experi- mental results for commonsense QA and symbolic understanding tasks are shown in Figure 3. We algebra probabilitygeometry intermediate_algebranumber_theoryprealgebra precalculus01020304050Accuracy (%)EM2MoT AutoCoTFigure 6: Performance comparison of retrieval-based memory methods on the MATH dataset. observe that EM2effectively enhances model per- formance on both types of tasks. Notably, EM2 demonstrates a more pronounced advantage in chal- lenging tasks, such as those involving complex, non-factoid information in the BoolQ dataset, and tasks requiring implicit multi-step reasoning in the StrategyQA dataset. This improvement can be at- tributed to EM2’s memory updating and retrieval mechanisms, which ensure the selection of high- quality and relevant demonstrations. 6.4 Analysis and Discussion Performance on Various Models. The perfor- mance of EM2across a range of models is ana- lyzed in Figure 4, focusing on two representative datasets: GSM8K and CSQA. We observe that EM2consistently delivers significant performance enhancements across different models. Notably, models with greater computational capabilities ben- efit more substantially from the EM2approach. For instance, despite having a similar number of param- eters, Qwen-7B exhibits a greater improvement than Mistral-7B. Moreover, EM2proves to be ver- satile, not only enhancing the performance of dense models but also boosting the efficacy of Mixture of Experts (MoE) models like Mixtral. This adaptabil- ity underscores EM2’s effectiveness in leveraging 7\n0 20 40 60 80 100 Percentage of LLaMA-3-70B Memory Used (%)42444648Accuracy (%) Accuracy Trend CoT Accuracy EM2 Accuracy(a) 8B model accesses 70B model’s memory 0 20 40 60 80 100 Percentage of LLaMA-3-8B Memory Used (%)666870Accuracy (%) Accuracy Trend CoT Accuracy EM2 Accuracy (b) 70B model accesses 8B model’s memory Figure 7: Impact of memory swapping on model performance. The horizontal axis represents the proportion of memory injected. The horizontal lines indicate the baseline accuracies for models with fixed memory and EM2 initialized with ZS-CoT. complex memory dynamics across different archi- tectural frameworks. Analysis of Memory Updating Mechanism. The impact of different memory updating strate- gies on accuracy is analyzed in Figure 5. 
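The two simpler updating rules referred to here, First-In-First-Out and random replacement, can be sketched in a few lines; note that neither consults a validation signal, which is precisely what the comparison probes. The function names and the capacity of 8 are illustrative.

```python
import random
from collections import deque

def fifo_update(memory: deque, new_item, capacity: int = 8) -> deque:
    """Evict the oldest entry once the memory is full, then append the new one."""
    if len(memory) >= capacity:
        memory.popleft()
    memory.append(new_item)
    return memory

def random_update(memory: list, new_item, capacity: int = 8) -> list:
    """Replace a uniformly random entry once the memory is full."""
    if len(memory) >= capacity:
        memory[random.randrange(len(memory))] = new_item
    else:
        memory.append(new_item)
    return memory
```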
We ex- perimented with replacing the learning function in Section 5.3 with two simpler updating strategies: random selection and First-In-First-Out (FIFO). Results on the MATH dataset indicate that these changes significantly reduce performance. The pri- mary reason for this decline can be attributed to the inherent limitations of Random and FIFO strate- gies, which rely on randomness and sample order, respectively, and cannot guarantee the effective- ness of memory updates. This analysis highlights the efficacy of the EM2approach, which employs the EM algorithm to ensure gradual and effective optimization of memory. Comparison of Memory Retrieval Method. In Figure 6, we compare the EM2with two memory retrieval methods. Both MoT and AutoCoT require pre-inference on the training dataset to gather ex- amples for retrieval. To ensure a fair comparison, we incorporate training samples into EM2, first performing memory updates and constructing a representative validation set on the training dataset, before introducing the test set for accuracy calcu- lations. Results on the MATH dataset demonstrate thatEM2achieves superior performance compared to traditional memory retrieval methods. Despite having a narrower search scope compared to the broader retrieval range of MoT and AutoCoT, the EM2’s updating strategy ensures the retention of high-quality memories. Moreover, continuous up- dates maintain alignment between the memory dis- tribution and the test distribution, thereby resulting in enhanced performance. Memory Sharing The memory constructed by EM2is model-agnostic, enabling the transfer and sharing of memories between models. In Fig- ure 7, we explore the effects of exchanging mem- ories between LLaMA-3-8B and LLaMA-3-70B. Each model first performs inference on the training dataset, after which their memories are swapped. As shown in Figure 7a, there is a gradual improve- ment in the performance of the 8B model as the pro- portion of memory from the 70B model increases. This indicates that smaller models can benefit from high-quality memories sourced from larger models. Conversely, Figure 7b reveals that the performance of the 70B model remains unaffected by the mem- ory from the 8B model, as lower-quality memories do not enter our memory module. 7 Conclusion In this paper, we analyze the advantages of ex- plicit memory over implicit memory and highlight a critical limitation of the former: its inability to ensure the effectiveness of updates as reliably as im- plicit memory. To address this, we introduce EM2, which treats memory as a latent variable and itera- tively updates it using the EM algorithm, thereby ensuring that updated memories are superior to their predecessors. Experiments show that EM2 offers significant advantages over models without memory and those with fixed memory. Importantly, the performance of EM2scales with the model’s capabilities, suggesting that more powerful models can leverage EM2to achieve even greater benefits. Additionally, EM2is model-agnostic, which allows for the transfer and sharing of memory across dif- ferent models. Analyses reveal that weaker LLMs can significantly benefit from high-quality memo- ries derived from larger counterparts. 8\nLimitations Generalization to a Broader Range of Tasks. 
While we have analyzed EM2across three distinct types of tasks, there is potential to extend this ap- proach to a wider array of generative tasks (Gozalo- Brizuela and Garrido-Merchán, 2023), such as code generation , machine transla- tion , and various agent-based tasks . Additionally, the form of memory could also be diversified to include structured data, triplets, user historical informa- tion, and more. Our current scope has not yet ex- plored these domains, and we see the exploration of EM2’s potential in more diverse tasks as an avenue for future work. Application to Commercial Models. EM2re- quires access to internal model information, such as perplexity, to assess the effectiveness of new mem- ories. However, for commercial models that only provide text outputs, such as OpenAI’s GPT mod- els or Anthropic’s Claude models , despite their powerful capabilities, applying EM2remains challenging. Incorporating Human Supervision. As men- tioned in Section 6.4, higher-quality memories can significantly enhance model performance. This pa- per primarily focuses on memories constructed au- tonomously by the model. An intriguing question is whether human-supervised memory enhancement and correction could further improve performance. Additionally, how to effectively incorporate human supervision , such as step-by-step guidance , remains an open question for future research. Ethics Statement Data Privacy. Our approach constructs memory from the model’s own outputs and does not require the collection or acquisition of personal data. The prompts and data used in our experiments do not in- volve any personal or privacy-sensitive information, ensuring compliance with privacy standards. Environmental Protection. The construction of large language models and the generation of data and memory are likely to become more prevalent, consuming significant computational resources and potentially increasing carbon emissions. We advo- cate for sustainable AI development, emphasizing the reduction of carbon footprints and the promo- tion of green AI initiatives to mitigate environmen- tal impacts. Adherence to Ethical Guidelines. We adhere to ethical guidelines and ensure that our data usage complies with the corresponding dataset licenses. Detailed statistics about the datasets and their re- spective licenses is listed in Table 2.
[ "The Llama 3 Herd of Models", "A Survey on Large Language Models for Code Generation", "A Survey on the Memory Mechanism of Large Language Model based Agents", "Memory Sharing for Large Language Model based Agents", "Efficient Prompting Methods for Large Language Models: A Survey", "Understanding LLMs: A Comprehensive Overview from Training to Inference", "Empowering Working Memory for Large Language Model Agents", "Retrieval-Augmented Generation for Large Language Models: A Survey", "A Comprehensive Survey of Machine Translation Approaches", "LLM as OS, Agents as Apps: Envisioning AIOS, Agents and the AIOS-Agent Ecosystem", "MemGPT: Towards LLMs as Operating Systems", "Corex: Pushing the Boundaries of Complex Reasoning through Multi-Model Collaboration", "A Survey on Large Language Model based Autonomous Agents", "MetaGPT: Meta Programming for Multi-Agent Collaborative Framework", "Boosting Language Models Reasoning with Chain-of-Knowledge Prompting", "A survey of Generative AI Applications", "Let's Verify Step by Step", "Do Large Language Models Know What They Don't Know?", "Adapting Language Models to Compress Contexts", "Chain-of-Knowledge: Grounding Large Language Models via Dynamic Knowledge Adapting over Heterogeneous Sources", "Editing Large Language Models: Problems, Methods, and Opportunities", "MoT: Memory-of-Thought Enables ChatGPT to Self-Improve", "Improving Cross-Task Generalization with Step-by-Step Instructions", "Learning to Compress Prompts with Gist Tokens", "A Survey of Large Language Models", "Reflexion: language agents with verbal reinforcement learning", "GPT-4 Technical Report", "A Survey on In-context Learning", "Rethinking with Retrieval: Faithful Large Language Model Inference", "Large Language Models with Controllable Working Memory", "Large Language Models Can Self-Improve", "Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them", "Complexity-Based Prompting for Multi-Step Reasoning", "Large Language Models are Zero-Shot Reasoners", "Self-Consistency Improves Chain of Thought Reasoning in Language Models", "Training language models to follow instructions with human feedback", "Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?", "Training Verifiers to Solve Math Word Problems", "EncT5: A Framework for Fine-tuning T5 as Non-autoregressive Models", "CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation", "A Survey of Human-in-the-loop for Machine Learning", "Are NLP Models really able to Solve Simple Math Word Problems?", "Measuring Mathematical Problem Solving With the MATH Dataset", "Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies", "A Survey on Neural Network Interpretability", "Language Models are Few-Shot Learners", "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer", "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", "BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions", "Robust and Scalable Differentiable Neural Computer for Question Answering", "Think you have Solved Question Answering? 
Try ARC, the AI2 Reasoning Challenge", "Program Induction by Rationale Generation: Learning to Solve and Explain Algebraic Word Problems", "Hybrid computing using a neural network with dynamic external memory", "Solving General Arithmetic Word Problems", "MAWPS: A Math Word Problem Repository", "End-To-End Memory Networks", "Memory Networks", "Learning to Solve Arithmetic Word Problems with Verb Categorization", "Common molecular mechanisms in explicit and implicit memory", "Reasoning=working memory¿attention", "Maximum likelihood from incomplete data via the EM - algorithm plus discussions on the paper", "Hallucination Detection for Generative Large Language Models by Bayesian Sequential Estimation", "Memory-assisted prompt editing to improve GPT-3 after deployment", "CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge", "The Development of Implicit and Explicit Memory", "The Development of Intelligence", "Cognitive neuroscience of human memory.", "The Claude 3 Model Family: Opus, Sonnet, Haiku" ]
Seeing Through VisualBERT: A Causal Adventure on Memetic Landscapes Abstract Detecting offensive memes is crucial, yet stan- dard deep neural network systems often re- main opaque. Various input attribution-based methods attempt to interpret their behavior, but they face challenges with implicitly offensive memes and non-causal attributions. To address these issues, we propose a framework based on a structural causal model (SCM). In this framework, VisualBERT is trained to predict the class of an input meme based on both meme input and data-generating concepts, allowing for transparent interpretation. Our qualitative evaluation demonstrates the framework’s ef- fectiveness in understanding model behavior, particularly in determining whether the model was right due to the right reason, and in iden- tifying reasons behind misclassification. Addi- tionally, quantitative analysis assesses the sig- nificance of proposed modelling choices, such as de-confounding, adversarial learning, and dynamic routing, and compares them with in- put attribution methods. Surprisingly, we find that input attribution methods do not guarantee causality within our framework, raising ques- tions about their reliability in safety-critical ap- plications. 1 Introduction Memes initially served as a medium for spreading humor, yet their rapid proliferation led to their ex- ploitation as a platform for disseminating offensive content. Consequently, the detection of offensive memes has emerged as a crucial task, prompting the development of numerous neural multimodal sys- tems to tackle this issue. How- ever, existing systems often function opaquely, hin- dering public trust in deploying these systems in the real world where they are the most important. This underscores the importance of interpretability and trustworthiness in such a critical domain. Interpretability techniques, such as input attri- butions (e.g., Integrated Gradient(Sundararajan Figure 1: Two motivating examples where input attribu- tion fails. et al., 2017)), have been utilized to elucidate model behavior. Nevertheless, their efficacy is ham- pered by two primary factors: i) Implicit Content: The challenge arises when confronting implicit memes—instances where the meme’s content is not readily understandable to end-users. In Fig- ure 1, we show two memes where input attribution cannot capture the inherent concept of ‘immigra- tion’ and ‘racism’ implicit in these memes because these keywords are not present explicitly in the in- put. Another drawback is ii) Causality: Despite identifying certain features as influential for model predictions, attribution methods fall short in cap- turing the causal impact of inputs on specific out- puts. In response to the latter challenge, various causality-based techniques have been proposed, such as CausaLM and Am- nesic Probing. However, their widespread adoption is impeded by their global explanation paradigm (e.g. trying to answer ques- tions like "Is part-of-speech information crucial for word prediction?") and scalability issues concern- ing a large number of causal factors, while we are interested in local causal explanation. To alleviate these limitations, we propose a novel causal framework that integrates VisualBERT with causal data-generating factors. We begin our ap- proach by extracting the implicit context of memes in the form of keywords, which we call causal con- cepts (e.g. ‘racism’ would be a plausible causal con- 1\ncept in the left meme in Figure 1). 
Subsequently, we train the model to predict the class of an in- put meme based not only on the meme’s content but also on the contributions of these causal con- cepts, employing a novel combination of dynamic routing and adversarial learning. We quantitatively evaluate various input attribution methods within our framework, revealing their incapacity to cap- ture the causal effect of the data-generating factors. Through ablation studies of various components, we demonstrate that our framework causally bases its prediction on relevant causal concepts, making it trustworthy. Finally, a qualitative analysis section demonstrates how to scrutinize model predictions through a causal lens to i) ascertain whether the model’s correctness stems from valid reasoning or ii) recognize error cases and their underlying causes. 2 Related Work Causal interpretability. Causal interpretability aims to understand how counterfactuals cause model outputs to change, thus estimating the causal effect of inputs. A subfield, causal mediation analysis, explores the mecha- nisms behind these effects. Generating ex- act counterfactuals is challenging, so recent work fo- cuses on approximations or counterfactual representation. Our work in this paper focuses on counterfactual rep- resentation. Most of the existing works target sin- gle modalities (text or vision) and answers global questions about feature importance. We propose a method for answering local questions about specific concepts (e.g., “Is the meme offen- sive due to the presence of Holocaust as a con- cept?") while addressing scalability issues of prior methods. Our framework in- corporates concept annotations and integrates with VisualBERT for trustworthy local causal interpretability. Multimodal Interpretability. Recently, there has been a surge in multimodal models for various tasks, yet re- search on generating explanations for their predic- tions remains limited. Researchers primarily rely on interpretability techniques like LIME and SHAP and various input attribution methods. However, there is a shift towards gen- erating natural language explanations, bridging the gap between text-only and multimodal systems. While some methods like NLX-GPT offer solutions, they may not fully explain implicit meanings unrelated to the input. Also, they do not reflect the causal impact of the input features. This leads us to develop our framework which facilitates both causal inference while also explaining implicit inputs. 3 Data Generating Process 3.1 Causal Diagram Our proposed framework follows a structural causal model (SCM) encapsulating both the data- generating process and the multimodal classifica- tion objectives. We assume an exogenous variable E1that generates causal concepts c1, ... to cn. An- other exogenous variable controls meme text tand image v. The collection of the concepts controls the latent representation L.(t, v)along with L controls the intermediate representation I, which further controls the output yof the model. Figure 2 represents this in detail. Following CausaLM, we want to intervene Isuch that the generated counter- factual representation ICFiis insensitive to concept ciand similar to Ifor all the other concepts except ci1. To achieve this, we observe the structural causal model (SCM) depicted in Figure 2, where Iis a function of Land(t, v), with I=f(L,(t, v)) andLrepresenting the concept latent, a weighted sum of concept representations ci, given by L= Pn i=1wi·ci. 
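To make the concept latent and the intervention concrete, the sketch below builds L as the weighted sum of concept representations and obtains a counterfactual latent by zeroing the treated concept's weight. The 768-dimensional shapes follow the paper, while the toy weights and the 18-concept set size are used only for illustration.

```python
import numpy as np

def concept_latent(concepts: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """L = sum_i w_i * c_i, with concepts of shape (n, 768) and weights of shape (n,)."""
    return weights @ concepts

def counterfactual_latent(concepts: np.ndarray, weights: np.ndarray, i: int) -> np.ndarray:
    """L_{-i}: the same weighted sum with the treated concept's weight set to zero."""
    w = weights.copy()
    w[i] = 0.0
    return w @ concepts

rng = np.random.default_rng(0)
C = rng.normal(size=(18, 768))          # one 768-d representation per annotated concept
w = rng.uniform(size=18)
L = concept_latent(C, w)
L_minus_i = counterfactual_latent(C, w, i=8)   # latent with the treated concept removed
```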
Creating a counterfactual latent ( L−i) unaffected by a specific concept ciis straightfor- ward: set wi= 0. This explicit modelling offers simple implementation and ensures ICFirepresents a counterfactual unaffected by ci, while still influ- enced by other concepts as in I. In this formulation, a challenge arises because concept cimight be correlated with other concepts, such as "Holocaust" ( cj) associated with "Nazism" (ci). We term these influencing concepts like cjas 1Note a slight abuse of notation here, ci∈R1×768refers to concept representation instead of textual concepts. 2\nE1c1 cnciE2 (t,v) L yc1 cnci E2 (t,v) L-i y' 0E1 ICFi IFigure 2: Left: The Data generating process as illustrated by a structured causal model (SCM). Right: Causal intervention which selectively intervenes on concept cito nullify its effect. This generates the intermediate counterfactual representation ICF i. To measure the causal effect of concept cionjth input, we take the Individual treatment effect (ITE) as |y′ i−yi|j.E1andE2are two exogenous variables which generate concepts cand meme (t, v)respectively. The SCM is inspired by SCM illustrated in "control concepts" and cias the "treated concept," following the CausaLM terminology. To address this, we propose a novel step called de-confounding (refer to Section 4.3). Here, we constrain the frame- work to prevent the recovery of cifrom the coun- terfactual latent representation L−i, which is es- sentially a linear combination of potential control concepts. The intuition is that once a concept is removed from the representation containing it, it cannot be recovered. Drawing inspiration from existing litera- ture, we formulate the causal effect of concept cias: \RITE i=< ϕ(ICFi DC)−ϕ(I)> =< ϕ(f((t, v), L−i DC)−ϕ(f((t, v), LDC)> (1) Here,\RITE irepresents the "Representation- based Individual Treatment Effect," accounting for treated and control concepts (with DC as subscript representing the de-confounding objective). The function f, modelled as VisualBERT in this paper, takes the input meme as text and visual representa- tion ((t, v)) and latent LDC, outputting a represen- tation I. This intermediate representation is then passed through a classifier ϕ. Essentially, \RITE i denotes the absolute change in predicted class prob- ability due to the absence of concept ciand could be used to measure its causal effect on the model. 3.2 Concept Annotation Figure 2 illustrates the data-generating process, which relies on the integration of concepts along- side meme inputs to facilitate accurate predic- tions. The selection of these concepts (which form the concept set) is pivotal, aiming to i) en- capsulate the breadth of themes present within the training dataset while ii) minimizing redundancy. To achieve this, we adopt a systematic approach wherein each meme encountered in the training set contributes to the refinement of our concept set. Ini- tially empty, this set progressively expands as new concepts are introduced by memes not previously encountered. For instance, if the first meme per- tains to ‘terrorism’ and ‘holocaust’, these concepts are incorporated into the set. Let us assume the second meme pertains to ‘terrorism’ and ‘racism’, so the concept set becomes = {‘terrorism’, ‘holo- caust’, ‘racism’}. Therefore, we only append new concepts to the set if they are absent . We appointed three annotators (postgraduate stu- dents in the age range of 25-27 working in the domain of multimodal machine learning) to anno- tate the concept set. 
Let us denote the three an- notated concept set as {c1},{c2}, and{c3}. We then calculate concept representation rt(ci 1), where ci 1denotes ith concept from set {c1}. Similarly, let us denote meme representation as rt(Tj)⊙ rv(Vj), where Tj,Vjare the text and image of thejth meme from the test set, and ⊙illustrates element-wise multiplication. rt(Tj)∈R1×768and rv(Vj)∈R1×768represent CLIP text and vision encodings respectively for the jth meme. We then calculate the total similarity of the set {c1}to the memes in the training set as PN j=0Pn i=0rt(ci 1)T·(rt(Tj)⊙rv(Vj)), where N is the number of memes in the training set and n is the number of concepts in the set {c1}. Simi- larly, this total similarity is calculated for sets {c2}, and{c3}. We observe that the total similarity of set{c2}is the highest and this leads us to choose this as the final set. The concepts in that set are as follows as shown in Table 1. 4 Methodology Our method relies on a three-step process of i) Modeling with VisualBERT, ii) Adversarial learn- 3\n0. holocaust (535) 1. nazism (510) 2. genocide (292) 3. funny (3000) 4. anti-muslim (345) 5. terrorism (276) 6. violence (360) 7. politics (122) 8. racism (405) 9. international-relation (88) 10. adult (172) 11. gore (178) 12. misogynistic (381) 13. immigration (430) 14. extremism (431) 15. immoral (151) 16. white supremacy (205) 17. indecency (831) Table 1: Annotated concepts and the number of memes that have this concept in brackets. Note that the total no of concepts may exceed the number of memes as a meme can have multiple concepts. Annotators could see the train labels while annotating and were instructed to label non-offensive memes as funny along with any other concepts, if necessary. VisualBERT c1 c2.......FFN0/1 Text Image 1/2(ws1 + wd1)VisualBERTFFN0/1 Gradient ReversalFFN Gradient Reversal Lshared weight cn 1/2(ws2 + wd2) 1/2(wsn + wdn) +L Dynamic Router +0/1 Figure 3: Model architecture comprising of Visual- BERT, a dynamic routing layer which determines the dynamic weights associated with the concepts. Adver- sarial learning uses a gradient reversal layer, also illustrated in the diagram. ing and iii) De-confounding. 4.1 Modeling with VisualBERT Model Inputs. A meme consists of text Tand image I. We extract image features using Faster- RCNN, yielding dimensions RB×N×768. Text tokens pass through a BERT em- bedding layer to generate text features with di- mensions RB×M×768. Concatenating these text and image features results in input dimensions ofRB×(M+N)×768for VisualBERT. Additionally, we introduce a latent representation L, with di- mensions RB×1×768, obtained by weighted sum- mation of concept features ci. Formally, L= Pn i=0(wi d+wi s)ci, where wi dis a dynamic weight, wi d=f(ci, T), dependent on the ith concept ci and the meme text T. This functional relationship is learned through dynamic routing. Similarly, wi s are static weights, wi s=f(ci), dependent only on the concept ci. Dynamic Routing. To model the interaction between Mtext features {uj}M j=0output by Vi- sualBERT given input T, and concept feature ci, we learn a weight W, which modifies ujas uij=Wij·uj. Further, the interaction between ciandujcan be modelled by taking a dot prod- uct between them. pij=uT j·cidemonstrates this operation, where pijis a scalar. To normal- izepijbetween 0and1, we use the softmax func- tion: bij=exp(pij)Pn k=0exp(pik), where nis the number of concepts. 
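A small sketch of the interaction terms just defined: agreement scores p_ij between token and concept features, their softmax normalization over the n concepts, and the modified token features u_ij. The text does not pin down whether W_ij is a scalar or a matrix; a scalar per (concept, token) pair is assumed here for brevity, and the random arrays are placeholders.

```python
import numpy as np

def routing_weights(text_feats: np.ndarray, concept_feats: np.ndarray, W: np.ndarray):
    """Agreement scores p_ij = u_j^T c_i, their normalization b_ij, and u_ij = W_ij * u_j.

    text_feats: (M, 768) token features u_j; concept_feats: (n, 768) concepts c_i;
    W: (n, M) scalar weights, one per (concept, token) pair (an assumption).
    """
    p = concept_feats @ text_feats.T                        # (n, M): p_ij = u_j . c_i
    b = np.exp(p) / np.exp(p).sum(axis=0, keepdims=True)    # normalized over the n concepts
    u = W[:, :, None] * text_feats[None, :, :]              # (n, M, 768): modified token features
    return p, b, u

rng = np.random.default_rng(0)
p, b, u = routing_weights(rng.normal(size=(12, 768)),
                          rng.normal(size=(18, 768)),
                          rng.uniform(size=(18, 12)))
print(b.shape, u.shape)   # (18, 12) routing coefficients, (18, 12, 768) modified features
```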
To measure the effect of all text in- puts on concept i, we calculate the weighted mean: si=1 mPm j=0bij·uij, where mis the number of text tokens input to the model. Here, sishows the cumulative effect of all the text inputs on concept ci. To model the interaction between ciandT, we want the length of sito approximate their inter- action. Specifically, longer vectors should have a unit length, whereas shorter vectors should have a length close to zero. This is achieved using the following squashing function: vi=squash (si) =||si||2 1 +||si||2·si ||si||(2) The length of viacts as the dynamic weight be- tween ciandTi, such that wi d=f(ci, Ti) =||vi||. 4.2 Adversarial Learning Letmibe aR1×768dimensional vector output cor- responding to the ‘[CLS]’ token when VisualBERT processes the text Tand image I. Similarly, let ˆmibe the output vector when VisualBERT pro- cesses the text T, image I, and latent L. For clas- sifying an input meme, we utilize a feed-forward neural network (FFN) on the representation ˆmi. Typically, input text and images contain sufficient information to classify a meme into offensive or non-offensive classes, rendering the latent repre- sentation less effective compared to text and image inputs alone. To enhance the effectiveness of the latent representation to match that of text and im- age inputs, we employ adversarial learning. The objective is to make both miandLinvariant to the output class, while their combined representa- tion ˆmishould retain discriminatory information for classifying memes. The aim is for Lto be as effective as the combined text and image inputs. To achieve this, we utilize a Gradient Reversal Layer before pass- ingLandmito two separate classifiers for the 4\noffensiveness detection task. These classifiers help learn class-invariant Landmi, while the classifier utilizing ˆmilearns class-dependent representation. 4.3 De-confounding The premise of de-confounding and its neces- sity is described in Section 3.1. Before de- confounding, we assume, that there exists some W, which can project each L−itoci. More formally, W·LCF=C, where LCF= [−L−1−,−L−2−, ...,−L−n−]∈R768×n, and C= [−c1−,−c2−, ...,−cn−]∈R768×n, where L−i, ci∈R1×768and there are nconcepts. For De-confounding, we aim to learn a projec- tion matrix Psuch that projecting each cithrough it yields counterfactual latent LCF′incapable of re- constructing C. The following theorem illustrates this. Theorem. IfPis a nullspace projection matrix ofWandC′=P·Cfor all i, then W·LCF′= 0 for all i. Proof. The latent representation L′after project- ingciwithPis given by L′=nX i=0wiP·ci=P·nX i=0wici=P·L(3) From Equation 3, W·L′=W·(P·L) = 0 , as W·(P·x) = 0 for all x. Furthermore, wi̸= 0, and wiW·(P·ci) = 0 . Therefore, W·LCF′=W·L′−wiW·(P·ci) = 0 for all i. Following this theorem and its proof, we con- struct a nullspace projection PofW.Wis learned by training a neural network to reconstruct Cfrom LCFfor all i. To de-confound the system, we project the concept representations via P, updating CasC←P·C. This approach is inspired by Iter- ative Nullspace Projection (INLP). After this operation, we denote the latent L with a subscript, i.e., LDC. 5 Results Model performance on the test set is 70.36as mea- sured by the F1 score. This reflects the VisualBERT in our framework works well and even exceeds some of the benchmark models evaluated as a part of the FB Hateful Meme Dataset paper. The experimental setups and dataset details are elaborated in Appendix Section CC. 
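The nullspace construction behind the de-confounding step (Section 4.3) can be written compactly with a Moore–Penrose pseudo-inverse. In the sketch below, a toy low-rank probe W stands in for the network trained to recover C from the counterfactual latents (the low rank is only so the demonstration is non-trivial), and projecting the concept matrix through P makes the probe's output vanish, as the theorem states.

```python
import numpy as np

def nullspace_projector(W: np.ndarray) -> np.ndarray:
    """P = I - W^+ W, so that W @ (P @ x) = 0 for every x (the theorem in Section 4.3)."""
    return np.eye(W.shape[1]) - np.linalg.pinv(W) @ W

rng = np.random.default_rng(0)
d, n = 64, 6                                    # 768 and 18 in the paper, shrunk for the example
W = rng.normal(size=(d, n)) @ rng.normal(size=(n, d))   # toy low-rank stand-in for the probe
P = nullspace_projector(W)

C = rng.normal(size=(d, n))                     # concept representations, one column per concept
C_deconfounded = P @ C                          # de-confounding step: C <- P . C
print(np.abs(W @ C_deconfounded).max())         # ~0: the probe can no longer recover the concepts
```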
Next, we elaborate on several findings related to our frame- work. Figure 4: Comparison of mean \RITE score between w/ de-confounding and w/o de-confounding strategies Simple attribution methods align better with Causality. We evaluate input attribution-based methods from a causal perspective. For an input meme, causal concepts are ranked in decreasing or- der by their estimated causal impact (using \RITE scores), forming a set denoted as ‘Causal’ . The same concepts are ranked by their attribution scores from a specific method (e.g., Integrated Gradient), forming set ‘X’. A positive correlation between these sets indicates that highly attributed concepts also influence model outcomes (causality), while a negative correlation suggests the opposite. Ta- ble 2 presents Kendall’s Tau and Spearman’s rho (averaged across all memes in the test set) for cor- relation comparison. Notably, simpler methods like Saliency show a stronger correlation with the causal set than complex methods like DeepLift, suggesting their potential to capture causal relation- ships. This suggests that simpler methods might offer a clearer and more direct understanding of causality within the model’s decision-making pro- cess. Simulating Model Outcomes with Attribution- Based Methods. Although keywords sorted by their attribution score do not always reflect causal- ity (as discussed in the previous paragraph), those keywords might correlate well with specific model outcomes. This can be demonstrated by consid- ering a simulator. Let us assume an attribution setXj={xj i}n i=1contains nconcepts sorted by their input attribution score in a non-increasing manner for the jth meme in the test set. Denoting ˆxj ias their text representation, the rank-adjusted BoW representation ˆXjof the set Xjwould be 1 nPn i=1γiˆxj i„ where γ= 0.9is a positive non- zero constant. We train a simulator (an SVM model) on: i) ˆXj, ii) concatenation of ˆXjand ˆmj i, denoted by [ˆXj; ˆmj i], and iii) ˆmj i, to predict 5\nCausal Rank Correlation Explainability of Simulator Performance of Simulator Methods Kendall’s Tau Spearman’s rho Comprehensiveness Sufficiency F1 w/ both F1 w/ inp F1 w/ exp Int. Grad. 0.017 0.025 0.030 0.089 0.66 0.62 0.53 Saliency 0.859 0.916 0.005 0.116 0.63 0.62 0.42 DeepLift -0.001 -0.002 0.029 0.021 0.67 0.62 0.65 DeepLiftSHAP -0.003 -0.005 0.035 0.017 0.67 0.62 0.68 GradientSHAP 0.009 0.014 0.008 0.116 0.63 0.62 0.45 Input x Grad -0.001 -0.002 0.034 0.029 0.68 0.62 0.67 Causal 1 1 0.002 0.112 0.63 0.62 0.42 Table 2: Table shows an assessment of input attribution-based methods in two directions: i) Their correlation with causality, ii) Explanation capability of the extracted keywords. Methods with red borders are anti-causal, i.e. extracted keywords by this method do not correlate well with causal keywords, while ‘Saliency’ achieves the highest Causal rank correlation (shown in blue border). F1 w/ exp is redundantly kept for comparison. Simulator performance is shown via a 10-fold cross-validation on the simulation dataset and the mean value of the corresponding metrics is reported. Input Attr/Causal Full w/o dyn. routing w/o adversarial w/o deconfounding R@5 P@5 MAP@5 R@5 P@5 MAP@5 R@5 P@5 MAP@5 R@5 P@5 MAP@5 Int. Grad. 
0.29 0.20 0.20 0.26 0.17 0.19 0.28 0.19 0.17 0.27 0.18 0.17 Saliency 0.27 0.19 0.18 0.19 0.12 0.16 0.29 0.19 0.16 0.32 0.21 0.25 DeepLift 0.30 0.21 0.21 0.27 0.18 0.17 0.26 0.18 0.16 0.26 0.18 0.18 DeepLiftSHAP 0.31 0.21 0.22 0.31 0.22 0.22 0.30 0.20 0.20 0.29 0.19 0.20 GradSHAP 0.30 0.20 0.20 0.30 0.21 0.21 0.29 0.20 0.20 0.29 0.20 0.20 Input x Grad 0.30 0.21 0.21 0.26 0.17 0.17 0.27 0.19 0.17 0.26 0.18 0.17 Causal 0.26 0.18 0.17 0.21 0.14 0.16 0.20 0.14 0.13 0.30 0.20 0.19 Table 3: Red border denotes scores obtained by Causal framework, i.e. sorting keywords with their \RITE score. The scores from alternative input-attribution methods are also displayed. Saliency-based keywords (shown in blue) are relevant, highly causal, and simulatable. the original model prediction ˆyj. The jth super- script reflects the jth meme. Intuitively, the SVM simulates the original model based on the provided information (either one of cases (i), (ii), or (iii)). The performance of the simulator can be seen in Ta- ble 2. F1 w/ exp denotes the simulator performance in case (i) when only the attribute-ranked concepts Xwere used as simulator input. Similarly, F1 w/ inpdenotes case (iii), where the multimodal repre- sentation ( ˆmj i) is used for model input. Lastly, F1 w/ both denotes case (ii). Consider the F1 w/ exp scores for DeepLift, DeepLiftSHAP, and Input x Grad as input attri- bution methods. The attribution set Xobtained by these methods achieves a 67% F1 score of the sim- ulator, indicating a high correlation between Xand the original model prediction ˆy(termed as highly simulatable). However, note the negative correla- tion of Xwith the Causal set, highlighting the fact thatCorrelation does not imply Causation . Addi- tionally, observe the lower F1 w/ exp score obtained by the Saliency attribution set, despite its higher rank correlation to the Causal set. This suggests that keywords causally related to model outcome may not be easily simulatable. The faithfulness of an attribution method depends on whether the ranked concepts reflect causality rather than simu- latability. Therefore, Saliency might be important in faithfully interpreting a model, despite its lower simulatability performance. To measure the impact of attribution set Xon simulator performance, we employ two metrics: i) Comprehensiveness and ii) Sufficiency. Com- prehensiveness quantifies the reduction in simula- tor model confidence when ˆXjreplaces [ˆXj;mj i] as simulator input. Denoting the simulator by S, comprehensiveness is S([ˆXj;mj i])k−S(mj i)kfor predicted class k. A higher comprehensiveness score indicates the importance of attribution set Xjfor the simulator. Sufficiency is defined as S([ˆXj;mj i])k−S(ˆXj)k. It requires a higher av- erage comprehensiveness and lower average suffi- ciency score for Xto be considered simulatable. From Table 4, we observe that all output attribu- tion sets derived from input attribution-based meth- ods are more simulatable than the causal set. De- spite their high correlation (reflected by high simu- latability) with the model prediction, input attribu- tion methods are not inherently causal, except for Saliency -based input attribution. To interpret a model and gauge its trustworthi- ness, understanding whether its predictions origi- nate from relevant concepts within the input is cru- cial, akin to assessing if the model is ‘right for the 6\nright reasons.’ To ascertain this, we annotate offen- sive memes from the test set with relevant concepts from a predefined set of 18concepts. 
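The two simulator-based metrics defined above reduce to simple differences of class probabilities; the sketch below follows those definitions, with placeholder probability vectors standing in for the outputs of the simulators trained on the different input configurations.

```python
import numpy as np

def comprehensiveness(p_exp_and_inp: np.ndarray, p_inp_only: np.ndarray, k: int) -> float:
    """S([X_hat; m])_k - S(m)_k for the predicted class k (higher = explanation matters more)."""
    return float(p_exp_and_inp[k] - p_inp_only[k])

def sufficiency(p_exp_and_inp: np.ndarray, p_exp_only: np.ndarray, k: int) -> float:
    """S([X_hat; m])_k - S(X_hat)_k for the predicted class k (lower = explanation suffices)."""
    return float(p_exp_and_inp[k] - p_exp_only[k])

# Class-probability vectors from the simulator configurations (placeholder numbers).
p_both, p_inp, p_exp = np.array([0.2, 0.8]), np.array([0.35, 0.65]), np.array([0.3, 0.7])
k = int(np.argmax(p_both))
print(comprehensiveness(p_both, p_inp, k), sufficiency(p_both, p_exp, k))
```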
Subsequently, we employ averaged Precision@5, Recall@5, and Mean Average Precision (MAP)@5 to assess the relevance of the top five concepts from both the ‘Causal’ set and the attribution set Xobtained through various input attribution methods. Techni- cal specifics are detailed in the Appendix Section B.Assessing trustworthiness of our Framework. Higher scores across these metrics indicate that the highly causal outputs are also relevant, implying the frameworks’ utilization of pertinent concepts from the input memes to make predictions, thus increasing their trustworthiness. Analysis of Ta- ble 3 reveals several insights. Within our frame- work, without any ablation of modelling compo- nents (shown by Full), the ‘Causal’ set achieves 0.26,0.18, and 0.17scores for R@5, P@5, and MAP@5, respectively. Disabling dynamic rout- ing (w/ dyn. routing) results in static weighting of concepts irrespective of meme input, leading to reduced scores for the ‘Causal’ set across all met- rics compared to the Full framework, indicating diminished trustworthiness. When adversarial learning and dynamic rout- ing are disabled, our framework shows poor trust- worthiness, with lower scores across all metrics for the ‘Causal’ set. Without de-confounding, in- put attribution-based methods like DeepLift and GradientSHAP do not perform as well as they do with other configurations (e.g. w/o dyn. routing). This suggests that without de-confounding, the model struggles to find a proper correlation be- tween data-generating concepts and model output. Empirically, in Figure 4, we illustrate the mean \RITE scores of the concepts with and without de-confounding. Without de-confounding, due to widely varied mean \RITE scores for several con- cepts, certain concepts dominate the top positions in the causal set, regardless of the input, indicating potential bias in the model’s causal attribution and making it difficult for input attribution methods to establish a correlation between model input and output, resulting in lower scores across metrics. The observation that the ‘causal’ set performs better without de-confounding may raise questions about its necessity. However, as illustrated in Sec- tion 3, de-confounding remains a principally valid choice. Without de-confounding, as discussed in the previous paragraph, certain concepts often dom- inate the top positions in the causal set, regardless of the input. Conversely, with de-confounding, all concepts have similar \RITE values and standard deviation, ensuring equal representation across test set examples (refer to Figure 4). Therefore, de- spite achieving lower scores, the framework with de-confounding appears more trustworthy due to its consistent performance, lack of bias towards certain concepts in the causal attribution set, and equal representation of concepts across inputs. 6 Analysing the model through Causal Lens Is the model always right due to the right rea- son? No, it is not always the case. We gain insight into this by examining the model causally. In Fig- ure 5, we present two examples of memes from the test set along with the model’s predictions, both correctly classified as offensive. In the first example, the meme’s offensiveness is linked to the Holocaust and antisemitic ideas, accurately reflected in the Top-5 causal concepts. Notably, concepts like ‘Violence’, ‘Holocaust’, and ‘Nazism’ from the gold standard set directly relate to the meme’s context. 
With three out of the Top-5 causal keywords aligning with the meme’s context, we conclude that the model’s prediction was correct for the right reason. In the second example, despite the meme being antisemitic, the identified causal concepts are irrel- evant to antisemitism, although highly offensive. However, certain keywords (such as ‘Holocaust’ and ‘Genocide’) identified by GradientSHAP are attributed. Three of these (‘Racism’, ‘Holocaust’, and ‘Genocide’) directly relate to the meme’s con- text. Initially, it may seem the model was right for the right reason. However, the absence of overlap between the set of causal concepts and the gold standard concept set indicates that the model classi- fied the meme correctly but with erroneous causal attribution. This discrepancy suggests that although accurate input attributions exist, the model may base its decision on different causal concepts. Such analyses are crucial for evaluating the trustworthi- ness of the framework. 7 Error Analysis In this section, we analyze the model’s performance using causal concepts. For example, in the first row of Table 4 (meme index 32), the model identifies ‘anti-muslim’ among the top 5 causal keywords, 7\nMeme Causal Keywords Attributed Keywords Relevant Keywords Pred Orig violence holocaust nazism anti-muslim terrorismracism gore indecency genocide terrorismholocaust nazism genocide extremismOffn Offn holocaust nazism genocide extremisminternational relation immoral anti muslim politics misogynisticindecency racism holocaust genocide misogynisticOffn OffnFigure 5: Evaluating trustworthiness of the model Meme Idx Top-5 Causal Actual Predicted Possible Reason 32 ‘anti muslim’, ‘terrorism’, ‘genocide’, ‘violence’, ‘adult’ Non Offensive Offensive Dataset Bias 92 ‘violence’, ‘genocide’, ‘terrorism’, ‘extremism’, ‘nazism’ Non Offensive Offensive Inadequate Context 65 ‘indecency’, ‘holocaust’, ‘funny’ , ‘immigration’, ‘racism’ Offensive Non Offensive Background Knowledge 73 ‘holocaust’, ‘violence’, ‘gore’, ‘nazism’, ‘funny’ Offensive Non Offensive Modality Conflict Table 4: Meme index with corresponding causal keywords and their possible reason behind error cases. Figure 6: Memes with indices 32, 92, 65, 73 read from Left to Right. even though they are irrelevant. The presence of ‘Mohammed’ alone leads the model to classify the meme as offensive, indicating a dataset-wide bias towards words like ‘Mohammed’. Similarly, in the second meme, the word ‘jew’ prompts the model to associate the meme with con- cepts like ‘violence’, and ‘genocide’ possibly due to insufficient visual context and association of the word jew in antisemitic offensive memes. In the third meme, although offensive, the model finds it humorous due to a lack of background knowledge, especially regarding the wordplay on SpongeBob, and the smiling SpongeBob face may further contribute to misclassification. Lastly, the fourth meme’s misclassification can be attributed to a humorous background image un- related to the meme text, creating a modality con- flict. Note that the third and fourth memes which are classified as non-offensive have the ‘funny’ key- word as a part of the Top 5 causal keywords. By examining causal keywords alongside model inputs, this type of error analysis offers insights into why the model made mistakes. 8 Conclusion and Future Work In this paper, we introduce a multimodal causal framework aimed at transparently analyzing Vi- sualBERT predictions. 
Guided by a structural causal model (SCM), the framework compels Vi- sualBERT to base its decisions on interpretable human-understandable concepts. Evaluation on the Facebook hateful meme dataset reveals key insights: i) Input attribution methods lack causal- ity, ii) Modelling choices significantly influence relevant causal attributions, enhancing model trust- worthiness. The qualitative analysis delves into whether the model is ’right for the right’ reasons and uncovers causal factors behind misclassifica- tions. The simplicity and versatility of our framework (i.e. the underlying structural causal model and its translation to modelling choices) allow its applica- tion across various tasks and multimodal models. Although we show the importance of our architec- ture on meme offensive detection task as a testbed, its application may be important in medicine or warfare where trustworthy systems are paramount. 8\nLimitation While our approach demonstrates promising re- sults, there are some limitations to consider. Firstly, the reliance on a specific dataset, such as the Face- book Hateful Meme dataset, and a specific model, like VisualBERT, may limit the generalizability of our findings to other datasets and models. Secondly, the concept annotation process intro- duces challenges as it relies on human annotators to define and refine the concept set. This process may introduce subjectivity and biases. To address this challenge, employing robust annotation guide- lines, inter-annotator agreement assessments, and sensitivity analyses can enhance the reliability of the concept annotation process. At the outset, the concept of the paper may act as a seed or proof of concept, further generaliz- ability of which is to be explored through a chain of related future studies. Specifically, exploring potential applications of the framework in other do- mains beyond meme classification would be valu- able. The framework could be adopted or applied in areas such as content moderation, sentiment analy- sis, and trend analysis in social media, news media, marketing, and public opinion research.
[ "SEMANTIFY: Unveiling Memes with Robust Interpretability beyond Input Attribution", "Are self-explanations from Large Language Models faithful?", "CEBaB: Estimating the Causal Effects of Real-World Concepts on NLP Model Behavior", "NLX-GPT: A Model for Natural Language Explanations in Vision and Vision-Language Tasks", "DoCoGen: Domain Counterfactual Generation for Low Resource Domain Adaptation", "Locating and Editing Factual Associations in GPT", "“Will You Find These Shortcuts?” A Protocol for Evaluating the Faithfulness of Input Salience Methods for Text Classification", "Causal Inference in Natural Language Processing: Estimation, Prediction, Interpretation and Beyond", "Causal Abstractions of Neural Networks", "CogView: Mastering Text-to-Image Generation via Transformers", "Counterfactual Interventions Reveal the Causal Effect of Relative Clause Representations on Agreement Prediction", "GLM: General Language Model Pretraining with Autoregressive Blank Infilling", "Learning Transferable Visual Models From Natural Language Supervision", "Amnesic Probing: Behavioral Explanation with Amnesic Counterfactuals", "Leakage-Adjusted Simulatability: Can Models Generate Non-Trivial Explanations of Their Behavior in Natural Language?", "CausaLM: Causal Model Explanation Through Counterfactual Language Models", "The Hateful Memes Challenge: Detecting Hate Speech in Multimodal Memes", "Causal Mediation Analysis for Interpreting Neural NLP: The Case of Gender Bias", "Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection", "ERASER: A Benchmark to Evaluate Rationalized NLP Models", "VisualBERT: A Simple and Performant Baseline for Vision and Language", "Explaining Classifiers with Causal Concept Effect (CaCE)", "Neural Network Attributions: A Causal Perspective", "Towards better understanding of gradient-based attribution methods for Deep Neural Networks", "Dynamic Routing Between Capsules", "A Unified Approach to Interpreting Model Predictions", "Learning Important Features Through Propagating Activation Differences", "Axiomatic Attribution for Deep Networks", "Not Just a Black Box: Learning Important Features Through Propagating Activation Differences", "“Why Should I Trust You?”: Explaining the Predictions of Any Classifier", "The proof and measurement of association between two things.", "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks", "Adam: A Method for Stochastic Optimization", "Unsupervised Domain Adaptation by Backpropagation", "Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps", "Support-Vector Networks", "A NEW MEASURE OF RANK CORRELATION", "On the equation ⁿ+ⁿ=1" ]
mPLUG-DocOwl 1.5: Unified Structure Learning for OCR-free Document Understanding Abstract Structure information is critical for understand- ing the semantics of text-rich images, such as documents, tables, and charts. Existing Multi- modal Large Language Models (MLLMs) for Visual Document Understanding are equipped with text recognition ability but lack general structure understanding abilities for text-rich document images. In this work, we empha- size the importance of structure information in Visual Document Understanding and pro- pose Unified Structure Learning to boost the performance of MLLMs. Based on publicly available text-rich images, we build a compre- hensive training set DocStruct4M to support structure-aware parsing tasks and multi-grained text localization tasks across 5 domains: docu- ment, webpage, table, chart, and natural image. To better encode structure information, we de- sign a simple and effective vision-to-text mod- ule H-Reducer, which can not only maintain the layout information but also reduce the length of visual features by merging horizontal adjacent patches through convolution, enabling the LLM to understand high-resolution images more effi- ciently. Our model DocOwl 1.5 achieves state- of-the-art performance on 10 visual document understanding benchmarks. All codes, models, and datasets will be publicly available. 1 Introduction Textual information in images manifests with a multitude of visual structures, spanning the sim- plicity of plain text to the systematic grid layouts of tables and incorporating a spectrum of graphical representations such as pie, line, and bar charts. These elements may appear in isolation or be intri- cately interwoven within the framework of docu- ments and webpages, reflecting a rich diversity of informational architecture across posters, invoices, infographics, scientific reports, academic and news websites, etc. As shown in Fig. 1, besides the basic textual content, structure information also plays a big role in Visual Document Understanding . With basic abilities to understand general images and comprehend structured texts through the LLM decoder, MLLMs (Ye et al., 2023c,d; Liu et al., 2023b,a; Zhu et al., 2023; Li et al., 2023a) have the potential to achieve unified structure learning on text-rich images. However, due to the visual encoder and vision-to-text module being trained on general image-text pairs and not specifically optimized to represent the textual and structural information in text-rich images, they still face great challenges with images with rich text information. For better Visual Document Understanding with MLLMs, some works (Ye et al., 2023a,b; Bai et al., 2023b; Feng et al., 2023) attempt to design text- reading tasks to strengthen the text recognition abil- ity, but either ignore the structure comprehension or only cover limited domains, such as just web- pages or documents . In this work, we first propose to perform unified structure learning on text-rich images for MLLMs across 5 domains: document, webpage, table, chart, and natural image. For better structural understanding, we first de- sign a simple and effective vision-to-text module, namely H-Reducer. Unlike the Resampler or Q-former which fuses visual features with learnable queries but af- fects spatial information, the H-Reducer accumu- lates neighborhood visual features through convo- lution to keep the relative positional relationships. 
Compared with V2T modules with only linear layers (Liu et al., 2023b,a), it produces much fewer visual features, which is more efficient for the LLM to understand high-resolution document images. Considering that texts in document images are mostly organized from left to right, the H-Reducer merges visual features at the horizontal level.
Our Unified Structure Learning comprises structure-aware parsing tasks and multi-grained text localization tasks. To learn the organization of text contents, the former mainly teaches the model to parse the texts in the image in a structure-aware style, such as using line feeds and spaces to represent the structure of documents or webpages, and using extended Markdown syntax to represent the structure of tables and charts. Multi-grained text localization tasks further enhance the ability to correlate visually situated texts and concrete positions in the image. To support unified structure learning, based on publicly available datasets, we carefully build a comprehensive training set DocStruct4M by constructing structure-aware sequences and multi-grained pairs of text and bounding boxes. DocOwl 1.5 is first trained with the Unified Structure Learning and then followed by Multi-task Tuning among downstream tasks.
Figure 1: Illustrations of the importance of structure information in Visual Document Understanding.
Our contributions in this work are three-fold:
•We first propose Unified Structure Learning on text-rich images for MLLMs and design both structure-aware parsing tasks and multi-grained text localization tasks across 5 domains. A comprehensive dataset DocStruct4M is carefully built to support Unified Structure Learning.
•We design a simple and effective vision-to-text module for structure learning and validate its effectiveness with extensive experiments.
•DocOwl 1.5 achieves state-of-the-art OCR-free performance on 10 Visual Document Understanding tasks.
2 Related Work
Visual Document Understanding (VDU) aims to comprehend images with rich text information. Such images range from documents (Mathew et al., 2021, 2022; Svetlichnaya, 2020; Stanislawek et al., 2021; Zhang et al., 2023a), tables, charts, and natural images to webpage screenshots, where the diverse composition of text and visual objects contains a wealth of information. According to whether they rely on an off-the-shelf OCR system to recognize texts in the image, models for Visual Document Understanding can be categorized into OCR-dependent models and OCR-free ones. To leverage recognized texts from an OCR system, OCR-dependent models are typically trained to align textual and visual inputs, for example by recovering masked text and layout information given the image and retained text as inputs. As for OCR-free methods, training with tasks about text recognition is indispensable. Donut designs a text reading task that outputs continuous text sequences but ignores structure information. To leverage structure information, Pix2Struct designs a Screenshot Parsing Task to generate the HTML DOM tree for webpage screenshots, but it is hard to apply to other types of images.
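To make the structure-aware parsing style described above (line feeds for new lines, extra spaces for larger horizontal gaps) more concrete, the following is a minimal sketch of how such a parse target could be built from OCR-style word boxes. The function name, input format, and thresholds are illustrative assumptions, not the DocStruct4M construction code.

```python
def build_structure_aware_text(words, line_tol=10, space_unit=40):
    """Serialize OCR words (text + bounding box) into a structure-aware string.

    words: list of dicts like {"text": str, "box": (x1, y1, x2, y2)}.
    A new line is started when the vertical position jumps by more than
    line_tol; wider horizontal gaps are rendered as additional spaces.
    All names and thresholds here are illustrative assumptions.
    """
    # Sort roughly into reading order: top-to-bottom, then left-to-right.
    words = sorted(words, key=lambda w: (w["box"][1], w["box"][0]))
    lines, current, last_y = [], [], None
    for w in words:
        y = w["box"][1]
        if last_y is not None and abs(y - last_y) > line_tol:
            lines.append(current)
            current = []
        current.append(w)
        last_y = y
    if current:
        lines.append(current)

    out_lines = []
    for line in lines:
        text, prev_x2 = "", None
        for w in line:
            x1, _, x2, _ = w["box"]
            if prev_x2 is not None:
                # The greater the horizontal distance, the more space characters.
                gap = max(1, round((x1 - prev_x2) / space_unit) + 1)
                text += " " * gap
            text += w["text"]
            prev_x2 = x2
        out_lines.append(text)
    return "\n".join(out_lines)


example = [
    {"text": "Invoice", "box": (40, 20, 120, 40)},
    {"text": "No.", "box": (300, 20, 340, 40)},
    {"text": "1024", "box": (350, 20, 400, 40)},
    {"text": "Total", "box": (40, 80, 90, 100)},
    {"text": "$99.00", "box": (300, 80, 380, 100)},
]
print(build_structure_aware_text(example))
```

The output keeps all recognized text while the added whitespace approximately mirrors the page layout, which is the property the parsing tasks aim to teach.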
In this work, we first propose Unified Structure Learning for all image types and carefully build a comprehensive dataset to support layout learning.
Multimodal Large Language Models (MLLMs) have shown strong vision understanding and open-ended conversation abilities (Ye et al., 2023c,d; Zhu et al., 2023; Dai et al., 2023; Bai et al., 2023b; Hong et al., 2023; Zhang et al., 2024) for natural images. They follow the architecture paradigm of connecting a vision encoder, e.g. ViT, with a Large Language Model (LLM) by a Vision-to-Text (V2T) module, such as simple linear layers (Liu et al., 2023b,a) or a Q-Former/Resampler/Abstractor (Ye et al., 2023c,d) with learnable queries. To enable MLLMs to comprehend images with rich texts, there are two major challenges: how to encode high-resolution images and how to understand visually-situated texts. To tackle high-resolution images, most works choose to further train or additionally add a high-resolution vision encoder. UReader first proposes to keep the low-resolution vision encoder and use a shape-adaptive cropping module to crop raw images into multiple sub-images with low resolution. To enhance visually-situated text understanding, some works design tasks of reading texts from top-left to bottom-right without taking into account the importance of structure. CogAgent and DocPedia further try strengthening the layout understanding for documents, webpages, and natural images with text grounding tasks. However, the comprehension of the overall structure is ignored, and tables and charts are not covered. In this work, to strengthen structure understanding, we design structure-aware parsing and multi-grained text localization tasks for all types of images, covering documents, tables, charts, webpages, and natural images. Besides, we propose a vision-to-text architecture that better maintains the spatial information of visual features by convolution.
3 DocOwl 1.5
DocOwl 1.5 consists of a visual encoder, a large language model, and a vision-to-text module, namely H-Reducer, which better keeps the visual text and layout information by ensembling horizontal visual features. It is first trained with Unified Structure Learning to enhance text recognition and structure understanding and then tuned on multiple downstream tasks of Visual Document Understanding.
3.1 Model Architecture
High-resolution Image Encoding. Encoding high-resolution images is critical to ensure that the decoder can use rich text information from document images. As shown in Fig. 2, we utilize a parameter-free Shape-adaptive Cropping Module to crop a shape-variable high-resolution image $I$ into multiple fixed-size sub-images $(I_1, I_2, \ldots, I_C)$, where $C$ is the number of crops. To keep the overall layout information, the raw image is also resized to a low-resolution one as the global image $I_0$. Then, each image $I_i$ in $(I_0, I_1, \ldots, I_C)$ is independently encoded to a sequence of visual features $V_i = (v_i^1, v_i^2, \ldots, v_i^L)$, $0 \le i \le C$, by a transformer-based Visual Encoder, where $v_i^j$, $1 \le j \le L$, is a $D$-dimensional vector and $L$ is the length of visual features for each image.
Spatial-aware V2T Module: H-Reducer. There are two kinds of popular vision-to-text modules for MLLMs: an MLP (Liu et al., 2023b,a; Zhu et al., 2023) or a cross-attention module with learnable queries. Neither is quite suitable for representing high-resolution text-rich images. The former projects complete visual features into the language embedding space.
It maintains all spatial information in the document image but keeps the sequence length of the raw visual features, which is too long when processing high-resolution images. For example, encoding a 1,344x1,344 image with ViT/L-14 results in 9,216 visual tokens. The cross-attention module could greatly reduce the length of the visual sequence to the number of learnable queries, but may lose spatial information during semantic fusion.
In this work, we design a more suitable vision-to-text module for Visual Document Understanding, namely H-Reducer, which not only reduces the visual sequence length but also keeps the spatial information. As shown in Fig. 2, the H-Reducer is comprised of a convolution layer to reduce the sequence length and a fully-connected layer to project visual features to the language embedding space. Since most textual information in document images is arranged from left to right, the horizontal text information is usually semantically coherent. Thus, the kernel size and stride size in the convolution layer are set as 1x4 to ensemble 4 horizontally adjacent visual features. The output channel is set equal to the input channel $D$. The convolution calculation is as follows:
$\bar{v}_i^j = f(v_i^{4j-3}, v_i^{4j-2}, v_i^{4j-1}, v_i^{4j}), \quad 1 \le j \le L/4, \quad (1)$
$\bar{V}_i = (\bar{v}_i^1, \bar{v}_i^2, \ldots, \bar{v}_i^{L/4}), \quad (2)$
where $f$ represents the dot product with kernel weights on multiple channels. After the convolution layer, the visual features $V_i$ are converted to $\bar{V}_i$. Then, with a fully connected layer to align visual features to the language embedding space, $\bar{V}_i$ is transferred to $\hat{V}_i = (\hat{v}_i^1, \hat{v}_i^2, \ldots, \hat{v}_i^{L/4})$.
Figure 2: The overall architecture of DocOwl 1.5. The global image and cropped images are processed independently by the Visual Encoder and H-Reducer. <rowx-coly> is the special textual token to indicate that the position of the cropped image in the original image is the x-th row and y-th column.
Multimodal Modeling with LLM. To better distinguish visual and textual inputs, we insert the Modality-adaptive Module (MAM) into the LLM, which utilizes two sets of linear projection layers to separately perform the key/value projection for visual features and textual features. To help the LLM correlate multiple cropped sub-images, UReader designs learnable crop position embeddings to denote the row and column position in the raw image. In this work, we simply add special textual tokens ‘<rowx_coly>’ before the visual features of each cropped image, where x and y refer to the row and column index respectively. For the global image, the textual indicator token is ‘<global_img>’. This design eliminates the need to introduce additional parameters and is more friendly to the LLM decoder. Our experiments validate that it achieves comparable effects to the crop position embedding.
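To ground the H-Reducer description above, here is a minimal PyTorch sketch of such a module: a 1x4 convolution over the spatially arranged patch features followed by a fully-connected projection into the LLM embedding space. The class name, hidden sizes, and grid shape are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class HReducerSketch(nn.Module):
    """Illustrative sketch of an H-Reducer-style vision-to-text module.

    Merges 4 horizontally adjacent visual features with a 1x4 convolution
    (stride 1x4), then projects them into the LLM embedding space with a
    fully-connected layer. Names and sizes are assumptions for illustration.
    """

    def __init__(self, vis_dim: int = 1024, llm_dim: int = 4096,
                 merge_shape: tuple = (1, 4)):
        super().__init__()
        # Output channels equal input channels D; kernel == stride, so the
        # convolution is a learned weighted sum over each group of patches.
        self.conv = nn.Conv2d(vis_dim, vis_dim, kernel_size=merge_shape,
                              stride=merge_shape)
        self.proj = nn.Linear(vis_dim, llm_dim)

    def forward(self, vis_feats: torch.Tensor, grid_h: int, grid_w: int):
        # vis_feats: (B, L, D) patch features from the ViT, L = grid_h * grid_w.
        b, l, d = vis_feats.shape
        assert l == grid_h * grid_w
        # Restore the 2D patch grid so the convolution sees spatial neighbours.
        x = vis_feats.transpose(1, 2).reshape(b, d, grid_h, grid_w)
        x = self.conv(x)                  # (B, D, grid_h, grid_w // 4)
        x = x.flatten(2).transpose(1, 2)  # (B, L // 4, D)
        return self.proj(x)               # (B, L // 4, llm_dim)

# Example: a 448x448 crop with 14x14 patches gives a 32x32 grid (L = 1024);
# after the 1x4 merge only 256 tokens per crop are passed to the LLM.
feats = torch.randn(2, 32 * 32, 1024)
reducer = HReducerSketch()
print(reducer(feats, grid_h=32, grid_w=32).shape)  # torch.Size([2, 256, 4096])
```

Because the kernel equals the stride, each output feature depends only on its own 1x4 group of neighbouring patches, which is how the relative left-to-right layout is preserved while the sequence length drops by a factor of four.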
Overall, the decoding of the LLM is as follows:
$Y = \mathrm{LLM}([T_0; \hat{V}_0, T_1; \hat{V}_1, \ldots, T_C; \hat{V}_C; X]) \quad (3)$
where $[;]$ means the concatenation operation, $C$ is the crop number of the image, $T_j$, $0 \le j \le C$, is the textual embedding of the special textual indicator for the global image or the positions of cropped images, $\hat{V}_j$ is the visual features of a global or cropped image, $X$ is the textual embeddings of the instruction, and $Y$ is the predicted answer.
3.2 Unified Structure Learning
For comprehensive document understanding, we design a Unified Structure Learning across 5 domains, including natural images, documents, tables, charts, and webpages. It involves both structure-aware parsing tasks and multi-grained text localization tasks, as shown in Fig. 3.
Document/Webpage Parsing. To represent the structure information, Pix2Struct parses webpage screenshots with condensed HTML DOM trees based on the HTML source codes, but this is not feasible for other formats of documents or webpage screenshots, e.g. PDF. In documents or webpages, horizontal and vertical distances between texts form the main layout information. Therefore, to make the structure-aware parsing task applicable to most documents and webpage screenshots, we choose to add extra line feeds (‘\n’) and spaces into the text sequence to denote different lines and horizontal distances. The greater the horizontal distance, the more space characters.
Figure 3: The illustration of Unified Structure Learning of DocOwl 1.5, with examples of document parsing, natural image parsing, table parsing, chart parsing, text recognition, and text grounding.
Table Parsing. Different from documents or webpages, tables are structured in a more standardized way, where row and column correspondences represent key-value pairs. HTML and Markdown codes are the two main kinds of text sequences used to represent a table. HTML codes can represent all kinds of tables, with or without cells spanning multiple rows and grids, but they contain too many paired labels (e.g. ‘<tr></tr>’ and ‘<td></td>’), causing text sequences to be too long.
Markdown codes can represent a table with concise text sequence, but they cannot represent cells spanning multiple rows and columns. To represent all tables with con- cise text sequence, we follow the main grammar of Markdown to represent table structure with ‘|’ and line feeds( ‘\n’). To represent cells spanning multiple rows and columns, we add special text to- kens ‘<COLSPAN=x>’ and‘<ROWSPAN=y>’ before the value, as shown in Fig. 3. Chart Parsing. Unlike documents and tables, or- ganizing texts in reading order cannot represent the structure of charts. Considering that the chart is a visualization form of the table, parsing charts to tables could best maintain the mathematical char- acteristics of the chart. This requires the model to understand the structure of the chart and the alignment of the x/y axis. Besides, to keep con- sistent with the Table Parsing task, we also use Markdown codes to represent the data tables of charts, as shown in Fig. 3. Natural Image Parsing. Semantics of natural im- ages is a combination of natural objects and scene texts. Thus, parsing natural images is necessary to organize scene texts and mention the main image content. Manually annotating captions to describe the relationship between objects and scene texts is labour- and financial-intensive. Like TAP , we concatenate the general caption with OCR texts to form the target parsing sequence. Multi-grained Text Localization. For Visual Document Understanding, structure-aware parsing tasks mainly focus on organizing texts according to the overall structure, while neglecting the cor- respondence between specific texts and local posi- tions. Correlating texts with the concrete position in images is another basic structure understanding ability for visual documents. To support text po- sition learning, we design two symmetrical tasks, namely Multi-grained Text Grounding and Multi- grained Text Recognition. The former aims to pre- dict the bounding box given the visually-situated texts, while the latter does the opposite. We set four granularities of texts for these two tasks: word, phrase, line, and block, with progressively increas- ing text lengths. More details about such four gran- ularities can be found in Appendix A.5. Overall, to support the unified structure learn- 5\nTable 1: Comparison with OCR-free methods on various types of text-rich image understanding tasks. The superscript ‘ ∗’ refers to models separately fine-tuned on each downstream task, rather than generalists. The underline means the best performance among models with <10B parameters. Model SizeDoc Info DeepKLC WTQTab Chart Text Text Visual VQA VQA Form Fact QA VQA Caps MRC Dessurt∗<1B 63.2 - - - - - - - - - Donut∗<1B 67.5 11.6 61.6 30.0 18.8 54.6 41.8 43.5 74.4 93.91 Pix2Struct∗ base <1B 72.1 38.2 - - - - 56.0 - 88.0 - Pix2Struct∗ large 1.3B 76.6 40.0 - - - - 58.6 - 95.5 - DocPeida 7.0B 47.1 15.2 - - - - 46.9 60.2 - - DocOwl 7.1B 62.2 38.2 42.6 30.3 26.9 60.2 57.4 52.6 111.9 188.8 QwenVL 9.6B 65.1 35.4 - - - - 65.7 63.8 - - UReader 7.1B 65.4 42.2 49.5 32.8 29.4 67.6 59.3 57.6 118.4 221.7 Monkey 9.8B 66.5 36.1 40.6 32.8 25.3 - - 67.6 93.2 - CogAgent 17.3B 81.6 44.5 - - - - 68.4 76.1 - - DocOwl-1.5 8.1B 81.6 50.4 68.8 37.9 39.8 80.4 70.5 68.8 132.0 239.5 ing for text-rich images, we build a Doc- Struct4M dataset with 4M samples by ensembling training sets of 17 publicly available datasets and constructing structure-aware text sequences or text- position pairs as the targets. 
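As a concrete illustration of the extended-Markdown table format described above (‘|’ and line feeds for the grid, with the special tokens ‘<COLSPAN=x>’ and ‘<ROWSPAN=y>’ for spanning cells), here is a minimal serializer sketch; the input structure and function name are illustrative assumptions rather than the DocStruct4M construction code.

```python
def table_to_extended_markdown(rows):
    """Serialize a table into the extended Markdown style described above.

    rows: list of rows; each cell is a dict with "text" and optional
    "colspan"/"rowspan". Spanning cells are prefixed with the special
    tokens <COLSPAN=x> / <ROWSPAN=y>. The input format is an assumption.
    """
    lines = []
    for i, row in enumerate(rows):
        cells = []
        for cell in row:
            prefix = ""
            if cell.get("colspan", 1) > 1:
                prefix += f"<COLSPAN={cell['colspan']}> "
            if cell.get("rowspan", 1) > 1:
                prefix += f"<ROWSPAN={cell['rowspan']}> "
            cells.append(prefix + cell["text"])
        lines.append("| " + " | ".join(cells) + " |")
        if i == 0:
            # Markdown header separator sized to the effective column count.
            ncols = sum(c.get("colspan", 1) for c in row)
            lines.append("|" + " --- |" * ncols)
    return "\n".join(lines)


table = [
    [{"text": "Formula"}, {"text": "Factor levels", "colspan": 2}],
    [{"text": "M1"}, {"text": "10"}, {"text": "10"}],
]
print(table_to_extended_markdown(table))
```

The resulting sequence stays far shorter than paired HTML tags while still recording which cells span multiple columns or rows.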
The form of instruc- tions for each task is very diverse for developing the general instruction-following ability of the model. The statistics of DocStruct4M and details of target construction can be found in Appendix A. 3.3 Multi-task Fine-tuning Through Unified Structure Learning, models could well understand the structure of diverse document images but cannot follow users’ instructions to do different types of tasks, such as information extrac- tion or image captioning. So, we further perform multi-task fine-tuning to train a generalist of visual document understanding as UReader. 3.4 Training Paradigm DocOwl 1.5 is trained in a two-stage framework. Considering the LLM has strong comprehension abilities for structured text , we argue that the main limita- tion of MLLM in visual document understanding is the representation ability of the Visual Encoder and Vision-to-Text module for visual text and structure information. Thus, during the Unified Structure Learning, we freeze the LLM and tune the Visual Encoder and H-Reducer. The MAM is also opti- mized to help the LLM better distinguish visual features and texts parsed from the image. Dur- ing the stage of Multi-task Fine-tuning, the model mainly learns how to follow the user’s instructions to give answers based on visually-situated text and structure understanding capabilities acquired in the first stage. Therefore, the Visual Encoder is frozen and other modules are tuned. More implementation details can be found in Appendix B.1. 4 Experiments 4.1 Main Results We evaluate the Visual Document Understand- ing performance on 10 text-rich image bench- marks, covering documents (DocVQA , InfoVQA , DeepForm , KLC ), tables (WTQ , TabFact ), charts (ChartQA ), natural images (TextVQA , TextCaps ), and webpage screenshots (Vi- sualMRC ). We compare Do- cOwl 1.5 with state-of-the-art OCR-free models, including both Multimodal Large Language Mod- els adapted for recognizing texts and much smaller models trained only for document understanding. The detailed comparison of model settings can be found in Appendix B.2. As shown in Table 1, previous MLLMs with more than 7B parameters underperform domain- specific models with less than 1B parameters, show- ing that the document understanding is still a shortcoming for existing MLLMs. Our DocOwl 1.5 outperforms both domain-specific models and MLLMs with similar sizes on all 10 benchmarks. This validates that DocOwl 1.5 is much stronger on visual document understanding across 5 domains, covering visual question answering, information re- trieval, natural language inference, and image cap- tioning tasks. Besides, with much fewer unnatural 6\nTable 2: Ablation study of model setting. ‘Crop’ refers to the maximum number of cropped images. ‘CropPos’ means using learnable embeddings (‘Emb’) or textual tokens (‘Text’) to represent the position of cropped images. ‘Parsing’ and ‘MTL’ refer to structure-aware parsing tasks and the Multi-grained Text Location task, respectively. ‘Owl(224)’ and ‘Owl2(448)’ refer to mPLUG-Owl with 224 resolution and mPLUG-Owl2 with 448 resolution. 
Row | Init | V2T | Crop | CropPos | Structure Learning | Multi-task Tuning: ViT | Multi-task Tuning: LLM | DocVQA | TabFact | ChartQA
r1 | Owl(224) | Abstractor | 20 | Emb | × | × | × | 65.4 | 67.6 | 59.3
r2 | Owl2(448) | Abstractor | 20 | Emb | × | × | × | 66.3 | 69.8 | 60.6
r3 | Owl2(448) | Abstractor | 20 | Emb | × | ✓ | × | 71.4 | 70.3 | 64.2
r4 | Owl2(448) | Abstractor | 9 | Emb | × | ✓ | × | 68.0 | 70.0 | 64.2
r5 | Owl2(448) | H-Reducer(1x4) | 9 | Emb | × | ✓ | × | 72.8 | 72.9 | 65.0
r6 | Owl2(448) | H-Reducer(2x2) | 9 | Emb | × | ✓ | × | 71.8 | 72.1 | 65.2
r7 | Owl2(448) | H-Reducer(2x4) | 9 | Emb | × | ✓ | × | 71.4 | 71.1 | 66.0
r8 | Owl2(448) | H-Reducer(1x8) | 9 | Emb | × | ✓ | × | 69.9 | 71.2 | 64.4
r9 | Owl2(448) | H-Reducer(2x8) | 9 | Emb | × | ✓ | × | 69.2 | 70.2 | 65.6
r10 | Owl2(448) | H-Reducer(1x4) | 9 | Emb | Parsing | × | × | 77.7 | 76.5 | 67.5
r11 | Owl2(448) | H-Reducer(1x4) | 9 | Emb | Parsing | × | ✓ | 78.9 | 78.1 | 68.1
r12 | Owl2(448) | H-Reducer(1x4) | 9 | Text | Parsing | × | ✓ | 79.8 | 77.7 | 69.1
r13 | Owl2(448) | H-Reducer(1x4) | 9 | Text | Parsing+MTL | × | ✓ | 81.6 | 80.4 | 70.5
data (3M vs 9M) and parameters (8.1B vs 17.3B), DocOwl 1.5 outperforms CogAgent on InfoVQA and ChartQA, and achieves comparable performance on DocVQA. This suggests that our unified structure learning with DocStruct4M is more efficient in learning printed text recognition and how to analyze documents. However, our model still underperforms CogAgent on TextVQA, which requires the ability of scene text recognition and general knowledge about natural objects. The primary reason is that scene texts are more diverse in shapes than printed texts and CogAgent is trained on 98M samples of scene text recognition from LAION-2B and COYO-700M, much more than the natural images (1M) in DocStruct4M. In this work, we mainly focus on improving the unified structure comprehension of visual documents and leave further scaling up of data on natural scenes as future work.
4.2 Ablation Study
As shown in Table 2, we further perform a comprehensive ablation study to validate the effectiveness of our H-Reducer and Unified Structure Learning. Firstly, initializing from a stronger general MLLM brings better performance on text-rich images (r2 vs r1), showing that general vision-and-language knowledge benefits visual document understanding. Tuning the visual encoder during multi-task fine-tuning significantly improves the document understanding performance (r3 vs r2). This suggests that the visual representation of document images may be the main shortcoming of MLLMs and inspires us to design Unified Structure Learning to enhance the representation ability of the visual encoder for visual texts and structure.
Effectiveness of H-Reducer. When using the Shape-adaptive Cropping Module, the image resolution supported by the MLLM is the product of the cropping number and the basic resolution of each crop. With the Abstractor as the vision-to-text module, reducing the cropping number causes an obvious performance decrease (r4 vs r3) on documents. However, with a smaller cropping number, the H-Reducer achieves better performance than the Abstractor (r5 vs r3), showing that 448²×9 ≈ 2²¹ is an acceptable resolution for existing benchmarks and the H-Reducer is stronger at maintaining rich text information during vision-and-language feature alignment. Besides, we further compare different settings of the merging shape in the convolution layer. With the same number of merged tokens, the model with the 1x4 merging shape achieves better performance than the one with the 2x2 merging shape on document and table datasets but slightly worse performance on chart understanding (r6 vs r5).
This is consistent with the common sense that documents and tables mainly organize texts in the left-to-right order while the semantic structures of charts are much more flexible. A square merging shape is more suited to encode visual features in the form of bars, lines, or pies while the 1x4 merging shape is more appropriate for general document understanding. Further extending the 1x4 merg- ing shape horizontally and vertically decreases the length of visual features but at the cost of perfor- mance degradation (r7-r9). Considering the over- all performance on all text-rich images, we finally choose the 1x4 as the merging shape in H-Reducer. Besides proving the effectiveness of H- 7\nHuman: What is the Dept.No? UReader: 76/77-142 (×) DocOwl 1.5: 218-12 (√) Human: which edition has unlimited remote desktop services and virtulization rights? UReader: Enterprise edition (×)DocOwl 1.5: Standard (×) Ground Truth: Datacenter Human: What is the forecast for the increase in customs duty revenue in 2030?UReader: 90.5 (×)DocOwl 1.5: 100 (√)Human: Who did clinical research?UReader: Steve Haas / Art O‘Neal(×)DocOwl 1.5: Steve peoples(√) (a)(b)(c) (d)Figure 4: Qualitative results of DocOwl 1.5 and UReader on different domains of images. Reducer through downstream text-rich image understanding performance, we further directly compare the text localization performance after the Unified Structure Learning. Experiments on a carefully-built Multi-grained Text Localiza- tion evaluation set DocLocal4K validate that H- Reducer(1x4) is better than H-Reducer(2x2) and Abstractor in preserving spatial features. More details about DocLocal4K and localization perfor- mance can be found in Appendix B.4. Effectiveness of Unified Structure Learning. Af- ter determining the vision-to-text module, we per- form two-stage training with Unified Structure Learning. With only the structure-aware parsing tasks, there is significant improvement across dif- ferent domains (r10 vs r5). This validates that fine-tuning the visual encoder and H-Reducer with structure-aware parsing tasks greatly helps MLLMs understand text-rich images. Further tuning the pa- rameters of LLM brings slight improvement (r11 vs r10), suggesting that general language knowl- edge is not the main obstacle to visual document understanding. By replacing the learnable crop po- sition embeddings with special textual tokens, the model achieves better performance (r12 vs r11), showing that the LLM can well understand the rel- ative positions of multiple cropped images with just simple textual indicators. Finally, by intro- ducing Multi-grained Text Localization tasks, Do- cOwl 1.5 achieves the best performance, validating that correlating visual texts with concrete positions helps comprehend documents more accurately. Effectiveness of the Two-stage Training. Besides two-stage training, we also try one-stage joint train- ing of the structure learning and downstream tasks and gradually increase samples from DocStruct4M. Our experiments validate that the two-stage train- ing could better enhance basic text recognition and structure parsing abilities and is more beneficial and efficient for downstream document understand- ing. More details can be found in Appendix B.5. 4.3 Qualitative Results Besides quantitative results, we further present some qualitative results of visual document under- standing on different domains of images. As shown in Fig. 4(a) and (b), both models answer the ques- tion with texts in the image. 
DocOwl 1.5 can better understand the structure of two documents and give correct answers. In Fig. 4(c), due to the learning of parsing chart with Markdown codes, DocOwl 1.5 can better understand the chart and success- fully correlate the x/y axis. Fig. 4(d) shows that al- though inconsistent with the ground truth, DocOwl 1.5 gives another correct answer with the help of stronger structure understanding on tables. More qualitative results can be found in Appendix B.6. 5 Conclusion To enhance the Visual Document Understanding performance of MLLMs, we first propose Uni- fied Structure Learning across 5 domains of text- rich images, including both structure-aware parsing tasks and multi-grained text localization tasks. To better maintain structure and spatial information during vision-and-language feature alignment, we design a simple and effective vision-to-text module, named H-Reducer. It mainly utilizes a convolution layer to aggregate horizontally neighboring visual features. To support the Unified Structure Learning, we build DocStruct4M by collecting publicly avail- able images and carefully constructing structure- aware text sequences and multi-grained pairs of texts and bounding boxes. With Unified Structure Learning, our model DocOwl 1.5 achieves state-of- the-art OCR-free performance on 10 visual docu- ment understanding benchmarks. 8\n6 Limitation In this work, we mainly focus on enhancing the OCR-free document understanding performance of the Multimodal Large Langauge Model through unified structure learning and pay less attention to the efficiency of vision encoding. Our H-Reducer could preserve spatial information and produce fewer visual tokens than MLP. However, a high- resolution image can still occupy a maximum of 2,560 tokens during the language modeling of the LLM decoder. This is not efficient enough when un- derstanding a document with multiple pages (e.g., 20 pages will occupy a maximum of 51,200 tokens). How to encode high-resolution images with fewer tokens and maintain the document understanding performance are left to future work. 7 Ethics Statement Our DocOwl 1.5 is initialized from a general Mul- timodal Large Language Model, which is trained on massive vision-and-language data from the web. Thus, it may also suffer from issues of LLMs such as toxic language and bias . However, the Unified Structure Learning proposed in this work focuses on parsing texts in publicly available images and introduces few biases relevant to ethical issues.
[ "MM-LLMs: Recent Advances in MultiModal Large Language Models", "ChartAssisstant: A Universal Chart Multimodal Language Model via Chart-to-Table Pre-training and Multitask Instruction Tuning", "CogAgent: A Visual Language Model for GUI Agents", "Honeybee: Locality-Enhanced Projector for Multimodal LLM", "mPLUG-PaperOwl: Scientific Diagram Analysis with the Multimodal Large Language Model", "ChartLlama: A Multimodal LLM for Chart Understanding and Generation", "DocPedia: Unleashing the Power of Large Multimodal Model in the Frequency Domain for Versatile Document Understanding", "TableLlama: Towards Open Large Generalist Models for Tables", "Monkey: Image Resolution and Text Label are Important Things for Large Multi-Modal Models", "mPLUG-OwI2: Revolutionizing Multi-modal Large Language Model with Modality Collaboration", "CogVLM: Visual Expert for Pretrained Language Models", "UReader: Universal OCR-free Visually-situated Language Understanding with Multimodal Large Language Model", "Improved Baselines with Visual Instruction Tuning", "Qwen Technical Report", "Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond", "mPLUG-DocOwl: Modularized Multimodal Large Language Model for Document Understanding", "VisText: A Benchmark for Semantically Rich Chart Captioning", "Kosmos-2: Grounding Multimodal Large Language Models to the World", "Layout and Task Aware Instruction Prompt for Zero-shot Document Image Question Answering", "InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning", "CCpdf: Building a High Quality Corpus for Visually Rich Documents from Web Crawl Data", "mPLUG-Owl: Modularization Empowers Large Language Models with Multimodality", "MPMQA: Multimodal Question Answering on Product Manuals", "Visual Instruction Tuning", "A Survey of Large Language Models", "LLaMA: Open and Efficient Foundation Language Models", "BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models", "Unifying Vision, Text, and Layout for Universal Document Processing", "LAION-5B: An open large-scale dataset for training next generation image-text models", "Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding", "Flamingo: a Visual Language Model for Few-Shot Learning", "LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking", "ChartQA: A Benchmark for Question Answering about Charts with Visual and Logical Reasoning", "Chart-to-Text: A Large-Scale Benchmark for Chart Summarization", "OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework", "OCR-Free Document Understanding Transformer", "Question-controlled Text-aware Image Captioning", "E2E-VLP: End-to-End Vision-Language Pre-training Enhanced by Visual Learning", "Kleister: Key Information Extraction Datasets Involving Long Documents with Complex Layouts", "InfographicVQA", "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 
🦜", "Learning Transferable Visual Models From Natural Language Supervision", "Conceptual 12M: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts", "VisualMRC: Machine Reading Comprehension on Document Images", "WebSRC: A Dataset for Web-Based Structural Reading Comprehension", "LayoutLMv2: Multi-modal Pre-training for Visually-rich Document Understanding", "TAP: Text-Aware Pre-training for Text-VQA and Text-Caption", "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale", "DocVQA: A Dataset for VQA on Document Images", "Language Models are Few-Shot Learners", "TextCaps: a Dataset for Image Captioning with Reading Comprehension", "Image-based table recognition: data, model, and evaluation", "TabFact: A Large-scale Dataset for Table-based Fact Verification", "PlotQA: Reasoning over Scientific Plots", "Towards VQA Models That Can Read", "Conceptual Captions: A Cleaned, Hypernymed, Image Alt-text Dataset For Automatic Image Captioning", "DVQA: Understanding Data Visualizations via Question Answering", "FigureQA: An Annotated Figure Dataset for Visual Reasoning", "Compositional Semantic Parsing on Semi-Structured Tables", "Evaluation of deep convolutional nets for document image classification and retrieval", "Bleu: a Method for Automatic Evaluation of Machine Translation", "On the Hidden Mystery of OCR in Large Multimodal Models", "DUE: End-to-End Document Understanding Benchmark" ]
FoodieQA: A Multimodal Dataset for Fine-Grained Understanding of Chinese Food Culture
FoodieQA: A Multimodal Dataset for Fine-Grained Understanding of Chinese Food Culture
Abstract
Food is a rich and varied dimension of cultural heritage, crucial to both individuals and social groups. To bridge the gap in the literature on the often-overlooked regional diversity in this domain, we introduce FoodieQA, a manually curated, fine-grained image-text dataset capturing the intricate features of food cultures across various regions in China. We evaluate vision–language models (VLMs) and large language models (LLMs) on newly collected, unseen food images and corresponding questions. FoodieQA comprises three multiple-choice question-answering tasks where models need to answer questions based on multiple images, a single image, and text-only descriptions, respectively. While LLMs excel at text-based question answering, surpassing human accuracy, the open-sourced VLMs still fall short by 41% on multi-image and 21% on single-image VQA tasks, although closed-weights models perform closer to human levels (within 10%). Our findings highlight that understanding food and its cultural implications remains a challenging and under-explored direction.
1 Introduction
One of the most popular dishes in China is hotpot, which comes in many varieties, as shown in Figure 1: Beijing is renowned for its mutton hotpot served with a traditional copper pot (铜锅涮羊肉, tóngguō shuàn yángròu). Guangdong province is home to a famous porridge-based hotpot (粥底火锅, zhōudǐ huǒguō), while its coastal region of Chaoshan is known for beef hotpot (潮汕牛肉火锅, Cháoshàn niúròu huǒguō). The hotpot varieties from Sichuan and Chongqing are celebrated for their flavorful broths, with chili peppers and Sichuan peppercorns that create a unique numbing-spicy sensation. The variation among regional cultures within a country highlights the challenges that language models face in understanding cultural knowledge and context-specific information in the food domain.
Figure 1: An example of regional food differences in referring to hotpot in China (panels: Beijing, Chaoshan, Guangdong, Sichuan). The depicted soups and dishware visually reflect the ingredients, flavors, and traditions of these regions: Beijing in the north, Sichuan in the southwest, and Guangdong on the south coast.
Existing datasets and models that focus on food and culinary practices primarily concentrate on tasks such as food recognition, recipe generation, food knowledge probing or recipe-related question answering. However, they often take a coarse view, conflating country, culture and language. Important regional cultural differences remain under-studied.
We introduce FoodieQA, a manually curated set of multimodal test questions designed to probe fine-grained cultural awareness with a focus on the food domain. Our dataset targets two under-explored directions: regional cultural diversity within a country and challenging fine-grained vision-language understanding in the culinary domain. To build a regionally diverse dataset, we gather dishes and images selected by native Chinese speakers from various regions, covering 14 distinct cuisine types across China. To ensure the images used for benchmarking are fresh and have no chance of leaking into the pretraining data of VLMs, we collect images uploaded by local people, which are not publicly available online. We then define multiple attributes associated with the dishes and have native Chinese annotators create multiple-choice questions based on their expertise. Our dataset includes both text-based question answering and vision-based question answering tasks, as illustrated in Figure 2.
Figure 2: The tasks in FoodieQA evaluate food culture understanding from three perspectives. Multi-image VQA requires the ability to compare multiple images, similar to how humans browse a restaurant menu. Single-image VQA assesses whether models can use visual information to better understand food culture. Text-based questions probe model performance without multimodal data. Fine-grained attributes that the questions focus on are highlighted.
We benchmark a series of state-of-the-art models, including seven LLMs and eight VLMs, on the Foodie dataset using zero-shot evaluation. By comparing their performance to human accuracy, we highlight the gap between open-weight and closed-weight models and demonstrate their limitations in understanding Chinese regional food culture. Additionally, we compare the performance of bilingual models trained on both Chinese and English datasets to English-focused models, revealing biases in their understanding of region-specific food culture and the language of the questions. Finally, our analysis shows that visual information improves the performance of VLMs compared to text-only inputs, although some models struggle with identifying dishes from images.
2 Related Work
Multilingual Multimodal Datasets Multimodal systems are typically evaluated on English due to the widespread availability of English-language datasets. However, there are some examples of research on training and evaluating models beyond English for image captioning, image–sentence retrieval, visual reasoning, and question answering. This paper focuses on Chinese visual question answering, with fine-grained attributes in the food domain.
Food Datasets In recent years, most food datasets have been designed for food image classification, food captioning, and recipe-focused generation and question answering. For culture knowledge probing in the food domain, some of the recent datasets span multiple countries and include broad cultural or regional metadata. However, they often use country as a proxy for culture, such as the country of origin for the food. For example, Palta and Rudinger (2023) introduced a test set to probe culinary cultural biases by considering US and non-US traditions, and Cao et al. (2024a) focuses on recipe transfer between Chinese and English. Investigating cultural differences within a country remains an under-explored area.
Fine-grained vision-language understanding Bugliarello et al. (2023) quantified the fine-grained vision-language understanding capabilities in existing models, focusing on aspects within the general domain.
Figure 3: Geographical distribution of cuisine types¹: Xinjiang (新疆菜), Northwestern (西北菜), Mongolian (内蒙古菜), Northeastern (东北菜), Cantonese (粤菜), Fujian (闽菜), Shandong (鲁菜), Jiangsu (苏菜), Zhejiang (浙菜), Sichuan (川菜), Guizhou (黔菜), Anhui (徽菜), Hunan (湘菜), Jiangxi (赣菜).
Later works focus on the culture under- standing in VL models . However, current fine-grained VL datasets are often framed as binary classification tasks, which limits their difficulty. Our multi- choice vision question answering dataset aims to advance the boundaries of fine-grained understand- ing in the context of food and culture. 3 FoodieQA: Dataset Annotation China, with its expansive territory and long history, has cultivated rich and diverse food culture and traditions. Focusing on regional food culture differ- ences, our dataset collection contains five distinct phases. 1) selection of cuisine types inside China; 2) collection of private images; 3) individual dish annotation; 4) visual question formulation; 5) text question formulation. 3.1 Selection of Cuisine Types The well-recognized "eight major cuisines" in China are Sichuan (chu¯an 川cài 菜), Guangdong (i.e., Can- tonese,yuè 粤cài 菜), Shandong (lˇu 鲁cài 菜), Jiangsu (s¯u 苏cài 菜), Zhejiang (zhè 浙cài 菜), Fujian (mˇın 闽cài 菜), Hunan (xi¯ang 湘cài 菜), An- hui (hu¯ı 徽cài 菜) cuisines . This categorization is based on historical, cultural, and geographical factors that have influenced the de- velopment of distinct cooking styles and flavors in different regions of the country. For a better ge- ographical coverage, we extend the eight cuisine types to additionally include Northwest (x¯ı 西bˇei 北cài 菜), 1We omit the Islands of the South China Sea in the figure for visualization simplicity. 南⽅-客家 Southern China-Hakka region ⾁在梅菜上 pork is on top of meigancai how it is presented when served 酱油⾊ soy-sauce color color ⽼抽/梅菜 / 姜⽚ Other ingredients 热菜 warm dish cold or warm dish 咸/鲜 salty/savory flavor 荤菜-猪⾁ meat-pork category 梅菜扣⾁ Meigancai with pork Name 五花⾁ pork belly main-ingredient 碗装 bowl dishware 粤菜 cantonese cuisine type 煮/炸 /蒸 boil/deep fry/ steam cooking skillsFigure 4: Meta-info annotation for local specialty. Northeast (d¯ong 东bˇei 北cài 菜), Xinjiang (x¯ın 新ji¯ang 疆cài 菜), Jiangxi (gàn 赣cài 菜) and, Mongolian cuisines (nèi 内méng 蒙gˇu 古cài 菜) in this study. This results in 14 types (Figure 3) in total, for which we collect dish images and annotations. 3.2 Collection of Images To ensure that the images are not used in the pre- training of existing models and contaminating eval- uation, we designed and distributed a survey for Chinese locals to upload their own dish images (Figure 11).2We provide detailed guidelines for image uploading, specifying that: (1) the image should be clear, with a single dish as the focal point in the center; (2) participants should select the cui- sine type of the dish from our list or specify it if it is not listed; (3) participants should provide the specific name of the dish, e.g., “mapo tofu (má 麻pó 婆 dòu 豆fˇu 腐)" instead of “tofu (dòu 豆fˇu 腐)"; (4) participants should indicate where the dish was served in their image, choosing from options such as cooked at home, restaurant, canteen, or delivery; (5) partici- pants need to grant us permission to use the image for research purposes and confirm the image is not publicly available online, i.e., it has neither been downloaded from nor uploaded to the web or so- cial media. In other words, the images we collected only existed on their phones or cameras. The up- loaded images genuinely represent the locals’ daily diet and culinary experiences, showcasing dishes that are currently popular. We manually filter out 102 images that are blurry, have the dish off-center, or show a mismatch be- tween the dish and the image. 
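As a rough sketch of how an uploaded image and the survey fields above could be represented and screened before annotation, consider the record below; the field names, helper function, and checks are illustrative assumptions rather than the actual collection pipeline.

```python
from dataclasses import dataclass

CUISINE_TYPES = {
    "Sichuan", "Cantonese", "Shandong", "Jiangsu", "Zhejiang", "Fujian",
    "Hunan", "Anhui", "Northwest", "Northeast", "Xinjiang", "Jiangxi",
    "Mongolian", "Other",
}

@dataclass
class DishImageSubmission:
    """One uploaded dish image with survey fields like those described above."""
    image_path: str
    dish_name: str           # specific name, e.g. "mapo tofu", not just "tofu"
    cuisine_type: str        # one of CUISINE_TYPES (or free text under "Other")
    served_at: str           # home / restaurant / canteen / delivery
    not_public_online: bool  # never uploaded to or downloaded from the web
    consent_for_research: bool

def passes_screening(sub: DishImageSubmission) -> bool:
    """Keep only submissions that satisfy the collection guidelines."""
    return (
        sub.consent_for_research
        and sub.not_public_online
        and sub.cuisine_type in CUISINE_TYPES
        and sub.served_at in {"home", "restaurant", "canteen", "delivery"}
        and len(sub.dish_name.strip()) > 1
    )

sample = DishImageSubmission(
    image_path="imgs/0001.jpg", dish_name="mapo tofu", cuisine_type="Sichuan",
    served_at="restaurant", not_public_online=True, consent_for_research=True,
)
print(passes_screening(sample))  # True
```

Image-quality issues such as blur or an off-center dish would still require the manual inspection described above; the record only encodes the declarative survey fields.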
3.3 Local Specialty Annotation We also gather text annotations of representative local specialties for each cuisine type on our list. Annotators are asked to collect meta information 2The survey is distributed through WeChat and Douban. 3\nfor representative local dishes for each cuisine type, based on their life experience and knowledge ob- tained from the web. These meta-fields provide information beyond recipes, offering insights into how the food looks and tastes when people are eat- ing it. An example is provided in Figure 4. The annotation is done by eight native Chinese speaker which include five PhD students and three postdoc- toral researchers from different provinces in China. The 17 meta-info fields cover the looks, taste, and culinary attributes of a dish. They include the food category, dish name, alternative names, main ingredient, characteristics of the main ingredient, three other key ingredients, dish flavor, presenta- tion style, dish color, serving temperature (cold or warm), dishware used, region and province of ori- gin, cuisine type, three primary cooking techniques, eating habits (if any), and reference links. 3.4 Visual Question Answering Annotation One major consideration for vision-language under- standing is that models can rely on language priors, consequently neglecting visual information . This underscores the importance of formulating visual questions in such a way that they can only be answered by ex- amining visual features, rather than relying on text priors. Based on the number of images used as inputs, we formulate both multi-image VQA ques- tions and single-image VQA questions. 3.4.1 Multi-image VQA Multi-image VQA requires the ability to compare detailed visual features from multiple images, sim- ilar to how humans browse a restaurant menu. Question formulation We ask the annotators to write challenging questions that require: (1) look- ing at the dish images to answer, (2) thinking be- yond merely recognizing the dish and questions that may require multi-hop reasoning, (3) asking diverse questions that belong to a diverse set of question types such as food type, flavor, color, ex- pense, amount, and etc., (4) only one image is the correct answer to the question. The multi-image VQA questions are written by five native speakers from five different regions in China. We organize the collected images into 28 groups based on cuisine types and food categories, as out- lined in Section 3.2. This allows annotators to write questions sequentially for related images ex- tracted from the same group. Each annotator is asked to write two–three questions, given a four- image group. We note that in order to avoid the bias from language priors, dish names corresponding to the images are not presented. The user interface that we use for annotation is shown in Figure 12. Question verification Once the questions and answers for the multi-image multiple-choice ques- tions are collected, we verify the questions by ask- ing the annotators (who did not create the ques- tions) to answer them. If a question does not meet our defined criteria, annotators are instructed to flag it as a "bad question." Through this process, 87 questions were discarded. Additionally, when answering the questions, annotators are required to provide the rationale they use to reach the an- swer, as well as judge whether the question requires multi-hop reasoning. The user interface that we use for verification is shown in Figure 13. 
Each ques- tion is verified by two annotators, and we exclude the questions that do not have full agreement. 3.4.2 Single-Image VQA Besides using images as multiple-choice answer options, we also ask diverse fine-grained ques- tions about various aspects of a dish based on its meta-information. We identify dishes that have both meta-information annotations and collected images, and then create questions based on the meta-information. As shown in the example in Fig- ure 2, the dish name is intentionally omitted from the questions to ensure they can only be answered by examining the visual features. Question formulation We adopt a template- based approach, where a question about the same meta-field is asked multiple times, varying factors like the image of the dish, while the answer options are carefully selected from the wrong candidates in the meta-field to ensure that only one answer is correct. The single-image VQA questions are generated using a rule-based method, followed by thorough human verification and filtering through that is similar to the multi-image VQA verification process. Please see details in the Appendix A. Question verification Similar to verification for the multi-image VQA questions, annotators are asked to answer the question given the text question and the corresponding image, and raise a “bad- question” flag to filter out questions that does not satisfy the criteria. 88 questions were discarded as bad. Note that the name of the dish is not revealed 4\nin the text question so that the question needs to be answered based on visual information. Annotators are asked to write “I don’t know" in the rationale and randomly guess an answer if they think the question is beyond their knowledge. 3.5 Text Question Answering Annotation We formulate the text-based questions by com- bining human annotations and rule-based gener- ation. Similar to the single-image VQA approach described in Section 3.4.2, we generate questions and multiple-choice answer options based on the meta-information fields. However, instead of us- ing the dish image, we included the dish name directly in the question. The questions are formu- lated using templates, where only the dish names and meta-fields are varied. A same human verifica- tion process to single-image question answering is included. 135 bad questions are discarded. Notice that annotators are asked to answer the questions based on their knowledge without using search en- gines, this makes the task challenging as it would be hard for one to answer questions about unfamil- iar foods and regions without any other available information besides names of the food. 4 Dataset Statistics 4.1 Human Validation In Table 1, we calculate human accuracy and inter-annotator agreement scores based on human- verified questions, excluding those identified as bad questions. For the single-image VQA and text QA questions, given the diverse cultural backgrounds of the human annotators, some questions can be challenging if the required food culture knowledge falls outside an annotator’s cultural experience. An- notators are instructed to indicate "I don’t know" when they lack the cultural knowledge to answer a question. These questions are classified as out-of- domain. For out-of-domain questions, the answer is randomly selected from the provided choices when calculating human accuracy and Cohen’s Kappa scores. We also report Cohen’s Kappa ( κ), and human accuracy for in-domain questions. 
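For reference, a minimal sketch of the agreement computation described above is shown below: Cohen's κ over two annotators' answers, with "I don't know" responses replaced by a random option. It uses scikit-learn's cohen_kappa_score; the replacement policy and field layout are our reading of the setup, not the released evaluation script.

```python
import random
from sklearn.metrics import cohen_kappa_score

OPTIONS = ["A", "B", "C", "D"]

def resolve(answers, rng):
    # Out-of-domain "I don't know" answers are replaced by a random option.
    return [a if a in OPTIONS else rng.choice(OPTIONS) for a in answers]

def human_validation_stats(ann1, ann2, gold, seed=0):
    """Return (accuracy of annotator 1, accuracy of annotator 2, Cohen's kappa)."""
    rng = random.Random(seed)
    a1, a2 = resolve(ann1, rng), resolve(ann2, rng)
    acc1 = sum(p == g for p, g in zip(a1, gold)) / len(gold)
    acc2 = sum(p == g for p, g in zip(a2, gold)) / len(gold)
    kappa = cohen_kappa_score(a1, a2)
    return acc1, acc2, kappa

# Toy example with one "I don't know" response.
ann1 = ["A", "B", "C", "I don't know", "D"]
ann2 = ["A", "B", "C", "C", "D"]
gold = ["A", "B", "C", "C", "D"]
print(human_validation_stats(ann1, ann2, gold))
```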
The human validation process involves eight native Chinese speakers from seven different provinces across China³, including three postdoctoral researchers and five PhD students. Each question is verified and answered by two annotators. ³The annotators are from Sichuan, Shaanxi, Guangdong, Jiangsu, Jiangxi, Shandong, and Chongqing.
Figure 5: Region distribution of collected food images.
Table 1: Question Statistics per task in FoodieQA.
Task | Questions | κ | Accuracy
Multi-image VQA | 403 | .834 | .916
Single-image VQA | 256 | .556 | .744
 - In-domain | 168 | .674 | .818
Text QA | 705 | .470 | .562
 - In-domain | 307 | .808 | .857
Table 2: Question Statistics.
 | Multi-image | Single-image | TextQA
Avg. length | 12.9 | 17.0 | 14.9
Question types | 14 | 6 | 7
Multi-hop (%) | 25.3 | 73.4 | 1.6
Unique images | 403 | 103 | -
4.2 Image and Question Distribution
Image statistics We collected 502 images but discarded 113 due to quality control issues.
The final dataset of 389 images are distributed across regions in China as shown in Figure 5. All 389 images are used for multi-image VQA; a subset of 103 images are used for single-image VQA. Question statistics After human verification, we obtain 403 multi-image VQA questions, where each question needs to be answered with a set of four provided images. Single-image VQA tasks consists of 256 question in total, and text QA con- sists of 705 questions in total (Table 1). We report the key statistics of the questions in Table 2. Please see more details in Appendix B. 5 Baselines: How Much of a Foodie are the LLMs/VLMs? We evaluate open-weight and API-based state-of- the-art LLMs and VLMs to probe their culture 5\nPhi-3-vision Idefics2-8b Mantis-8BQwen- VL-12BGPT-4o GPT-4V20406080AccuracyEN ZHFigure 6: Accuracy of multi-image VQA tasks across four different prompts compared to a 91.96% human accuracy in Chinese. Although Idefics2 and Mantis have higher accuracy than other models, they show greater variation across different prompts. knowledge in the food domain. We evaluate the models in both Chinese and English4. For VQA questions are translated to English using the DeepL free API5and validated by two PhD students. 5.1 Multi-Image VQA is Difficult We evaluate the multi-image VQA task using open- weight models that are capable of handling mul- tiple image inputs, including Phi-3-vision-128k- instruct , Idefics2-8B (Lau- rençon et al., 2024), Mantis-8B-Idefics2 , and English-Chinese bilingual Qwen- VL-12B , and Yi-VL 6B and 34B models , as well as API-based mod- els such as GPT-4V and GPT-4o. We experimented with four different prompts that utilized lists of images and texts or interleaved image-text inputs. Details can be found in Ap- pendix C. As shown in Figure 6, when compared to the human accuracy of 91.69% in Chinese, the best-performing open-weight model, Idefics2-8B, achieves an accuracy of 50.87%, which is still sig- nificantly lower than human performance. This indicates that current state-of-the-art models are still weak at distinguishing differences among food from visual input. This underscores that multi- image understanding, especially in contexts requir- ing cultural knowledge in the food domain, remains a challenging problem. When evaluating on the translated English questions, model performance decreases for all models except Phi-3-vision. 4We also include an estimate, calculated over 100 random samples, of Human performance on the English Multi-Image and Single-Image VQA from one native speaker with nospe- cialized knowledge of Chinese food culture. 5https://www.deepl.com/en/translatorEvaluation Multi-image VQA Single-image VQA ZH EN ZH EN Human 91.69 77.22†74.41 46.53† Phi-3-vision-4.2B 29.03 33.75 42.58 44.53 Idefics2-8B 50.87 41.69 46.87 52.73 Mantis-8B 46.65 43.67 41.80 47.66 Qwen-VL-12B 32.26 27.54 48.83 42.97 Yi-VL-6B - - 49.61 41.41 Yi-VL-34B - - 52.73 48.05 GPT-4V 78.92 69.23 63.67 60.16 GPT-4o 86.35 80.64 72.66 67.97 Table 3: Comparison of Multi-image and Single-image VQA Performance in Chinese and English. We report the best accuracy from four prompts.†: see Footnote 4. 5.2 Single-Image VQA Results Besides the four open sourced models that we used for multi-image VQA, we also evaluate the bilin- gually trained (Chinese and English) Yi models for the single-image VQA task. The evaluation accuracy is reported in Table 3. Almost every open-weight model performs better on Single-image VQA than Multi-image VQA. 
5.2 Single-Image VQA Results

Besides the four open-weight models that we used for multi-image VQA, we also evaluate the bilingually trained (Chinese and English) Yi models for the single-image VQA task. The evaluation accuracy is reported in Table 3. Almost every open-weight model performs better on single-image VQA than on multi-image VQA. We can also observe that the bilingually trained models, i.e., Qwen-VL and Yi-VL, perform better when evaluated in Chinese, whereas the multilingual models, i.e., Phi-3, Idefics2, and Mantis-8B, perform better when evaluated in English. The best-performing models are the API-based models from OpenAI.

5.3 Models are Strong at Text QA

We evaluate text question answering with a series of open-weight models, including Phi-3-medium-4k-instruct, Llama3-8B-Chinese, Mistral-7B-Instruct-v0.3, Yi-6B and 34B models, and Qwen2-7B-instruct, as well as the API-based model GPT-4. Given that translating dish names is challenging and would likely introduce additional information and unfair comparisons, we only evaluate the text questions in Chinese. For example, the famous Sichuan dish "fūqī fèipiàn (夫妻肺片)" translates word by word to "couple's lung slices", whereas a translation by meaning would be "Sliced Beef and Ox Tongue in Chilli Sauce". While the literal translation makes no sense, the translation by meaning hints at the flavor and ingredients that are not included in the original Chinese name.

[Figure 7: Accuracy of text QA across four different prompts. The blue dashed line indicates human accuracy (56.2%).]

From Figure 7, we see that the Qwen2-7B-instruct model surpasses human performance on the text QA task, where the questions are formulated based on the local specialty annotations in Section 3. Since the local specialty annotations are collected and summarized from public resources such as Baidu-Baike by local representatives, we suspect that the high performance could be due to the inclusion of domain-specific training data.

6 Analysis

Visual information helps. In single-image VQA, the default setting is to use only the dish image without specifying the dish name. We now examine whether the visual information is beneficial using the Idefics2-8B model.6 Results are shown in Table 4, where we investigate two variants of single-image VQA: providing the model with the dish name only versus both the dish name and image. We observe that the Idefics2 model consistently performs better when dish images are available as additional information. Please see comparison examples in Appendix E.2.

6 We selected this model because it supports text-only inputs, unlike some other models such as the Yi-VL series.

Table 4: Accuracy on two variants of the single-image VQA task, showing that visual information from food images is crucial for Idefics2 to correctly answer the questions.
Input             prompt1   prompt2   prompt3   prompt4
Dish name only    28.52     27.73     36.72     37.11
+ dish image      40.23     41.41     40.62     42.19

Table 5: Accuracy in the single-image VQA task when the dish name is revealed in the question along with the image or not. While the Yi models benefit greatly from the additional information of the dish name, Idefics2 does not. "p1-4" indicates four different prompt templates.
Model         Condition       p1      p2      p3      p4
Yi-VL-6B      Image-only      49.61   48.05   47.66   46.09
              + dish name     73.83   74.61   76.17   62.50
Yi-VL-34B     Image-only      50.39   52.73   50.78   48.83
              + dish name     75.39   78.13   79.30   75.39
Idefics2-8B   Image-only      44.53   43.75   46.09   46.87
              + dish name     40.23   41.41   40.62   42.19
Dish names could be helpful clues for some of the models. As discussed in Section 4.2, over 73.4% of single-image questions require multi-hop reasoning, which typically involves identifying the dish and then leveraging related knowledge to answer the question. To determine whether the identification of the food image and the utilization of visual information are bottlenecks for the models, we compare their performance on single-image VQA when provided with the dish name in the question. The results in Table 5 indicate that while the Yi models significantly benefit from being given both the images and names of the dishes, the Idefics2-8B model does not show the same improvement from this additional information. This indicates that recognizing the dishes could be a possible bottleneck for the Yi series models.

Models are foodies who know cooking better than taste. Figure 8 shows the model performance under fine-grained question attributes on single- and multi-image VQA. We observe that all models generally excel at answering questions related to cooking skills and ingredients. The Yi models, in particular, demonstrate a stronger ability to identify the flavors of dishes. Conversely, the Qwen-VL and Phi3-vision models perform well in observing the presentation of food when served but struggle with flavor-related questions. When answering questions based on multiple images, it also holds true that models are generally good at questions regarding cooking skills and the amount of food (Figure 8b). However, these models are weak at answering questions related to the region and taste of the dish. Idefics2-8B stands out, excelling in most of the fine-grained features we evaluated.

[Figure 8: Model accuracy on fine-grained question attributes. (a) Single-image VQA; (b) Multi-image VQA.]

Favorite food of the models. In Figure 9, we compare model performance on multi-image VQA tasks for questions grouped by food categories and cuisine types. This analysis provides insight into how well the models can compare features from images within the same group. The overall best-performing model on multi-image VQA tasks excels at questions about BBQ and Xinjiang cuisines, but is weak at questions about Shanghai dishes. Another interesting finding is that, despite Sichuan food being one of the most popular cuisines in China, and presumably having more available images and resources online, none of the models excels at answering questions related to this cuisine type.

[Figure 9: Model accuracy on questions categorized by (a) food categories and (b) cuisine types.]
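The per-attribute numbers behind Figures 8 and 9 can be reproduced with a simple group-by over the per-question evaluation records; the column names below are assumptions for illustration, not the released schema.

```python
import pandas as pd

# One row per evaluated question: which model answered it, the fine-grained
# attribute tag (e.g., cooking-skills, flavor, region), and whether the
# predicted choice matched the gold choice.
records = pd.DataFrame({
    "model":     ["Idefics2-8B", "Idefics2-8B", "Qwen-VL-12B", "Qwen-VL-12B"],
    "attribute": ["cooking-skills", "flavor", "cooking-skills", "flavor"],
    "correct":   [1, 0, 1, 0],
})

# Mean accuracy per model and question attribute, i.e., one cell per heatmap entry.
per_attribute = (
    records.groupby(["model", "attribute"])["correct"].mean().unstack("attribute")
)
print(per_attribute.round(2))
```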
7 Conclusion We introduce FoodieQA, a multimodal dataset de- signed to evaluate fine-grained understanding of Chinese food culture through multi-image, single- image, and text-only multiple-choice questions. Our experiments, which focus on regional cul- tural differences and detailed visual features, re- veal that understanding food and its cultural con- text remains a complex and under-explored task. We find that comparing food across multiple im- ages—similar to the common scenario of people browsing menus—is particularly challenging. All open-source models underperform human accuracy by more than 40% in this task. This suggests that our dataset offers a more accurate assessment of the suitability of state-of-the-art models for real-world applications in the food domain. Our analysis of language and prompt templates indicates that models can be sensitive to the lan- guage in which questions are asked—bilingually trained Chinese–English models perform better in Chinese, while other multilingual models are stronger in English. We also demonstrate the effec- tiveness of incorporating visual features compared to text-only settings in this context. Improved models or methods for understanding food culture may be essential for future progress in the FoodieQA challenge. Looking ahead, we aim to expand the dataset to include dishes from other countries and regions. We make all of our data collection, annotation, and verification tools freely available for re-use, and encourage the community to create Foodie datasets for their own language.7 8 Limitations The size of the FoodieQA dataset is limited by the challenge of collecting unseen images from indi- viduals, as it requires them to voluntarily upload images from their phones or cameras. Although we 7We will release our dataset as a benchmark on Codabench. 8\nhave distributed the survey on two popular Chinese social media platforms, we anticipate that increased social media exposure or collaboration with food industry professionals could facilitate the collec- tion of more images, and contribute to a training dataset for advancing this direction. Translating Chinese dish names into other lan- guages poses another challenge, as some dish names do not directly relate to their ingredients or cooking methods. Introducing translated dish names could potentially introduce additional infor- mation, leading to unfair comparisons among the models. Consequently, we have chosen to exper- iment solely with Chinese questions for the text- based queries. We have benchmarked fifteen popular models using our dataset. However, due to the rapid ad- vancements in the field, it is impossible to bench- mark all trending models continuously. We hope our dataset will inspire future researchers to de- velop similar Foodie datasets for their own regions and languages, thereby guiding LLMs and VLMs towards a better understanding of regional food cultures.
[ "Benchmarking Vision Language Models for Cultural Understanding", "MANTIS: Interleaved Multi-Image Instruction Tuning", "Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone", "Does Mapo Tofu Contain Coffee? Probing LLMs for Food-related Cultural Knowledge", "Yi: Open Foundation Models by 01.AI", "\"My Answer is C\": First-Token Probabilities Do Not Match Text Answers in Instruction-Tuned Language Models", "Exploring Visual Culture Awareness in GPT-4V: A Comprehensive Probing", "Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond", "Towards Measuring the Representation of Subjective Global Opinions in Language Models", "Stop Uploading Test Data in Plain Text: Practical Strategies for Mitigating Data Contamination by Evaluation Benchmarks", "Measuring Progress in Fine-grained Vision-and-Language Understanding", "GPT-4 Technical Report", "Towards Equitable Representation in Text-to-Image Synthesis Models with the Cross-Cultural Understanding Benchmark (CCUB) Dataset", "Counterfactual Recipe Generation: Exploring Compositional Generalization in a Realistic Scenario", "Winoground: Probing Vision and Language Models for Visio-Linguistic Compositionality", "VALSE: A Task-Independent Benchmark for Vision and Language Models Centered on Linguistic Phenomena", "Visually Grounded Reasoning across Languages and Cultures", "xGQA: Cross-Lingual Visual Question Answering", "Probing Image-Language Transformers for Verb Understanding", "VSR: A Unified Framework for Document Layout Analysis combining Vision, Semantics and Relations", "WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning", "Nutritional characteristics and health effects of regional cuisines in China", "RecipeQA: A Challenge Dataset for Multimodal Comprehension of Cooking Recipes", "You Are What You Eat: Exploring Rich Recipe Information for Cross-Region Food Analysis", "ChineseFoodNet: A large-scale Image Dataset for Chinese Food Recognition", "Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering", "Multi30K: Multilingual English-German Image Descriptions", "Yin and Yang: Balancing and Answering Binary Visual Questions", "FORK: A Bite-Sized Test Set for Probing Culinary Cultural Biases in Commonsense Reasoning Models" ]
Decoding with Limited Teacher Supervision Requires Understanding When to Trust the Teacher
Decoding with Limited Teacher Supervision Requires Understanding When to Trust the Teacher Abstract How can sLLMs efficiently utilize the super- vision of LLMs to improve their generative quality? This question has been well stud- ied in scenarios where there is no restriction on the number of LLM supervisions one can use, giving birth to many decoding algorithms that utilize supervision without further training. However, it is still unclear what is an effective strategy under the limited supervision scenario, where we assume that no more than a few to- kens can be generated by LLMs. To this end, we develop an algorithm to effectively aggre- gate the sLLM and LLM predictions on initial tokens so that the generated tokens can more accurately condition the subsequent token gen- eration by sLLM only. Critically, we find that it is essential to adaptively overtrust or disregard the LLM prediction based on the confidence of the sLLM. Through our experiments on a wide range of models and datasets, we demonstrate that our method provides a consistent improve- ment over conventional decoding strategies. 1 Introduction Large language models (LLMs) have demonstrated their tremendous capability to generate human-like text sentences that convey rich knowledge in var- ious problem domains . However, due to their gigantic model scale and au- toregressive nature based on next-token generation, LLMs often suffer from having a significantly high latency . Small-scale LLMs, called sLLMs, have thus gar- nered much attention . sLLMs can run much faster than LLMs, making them a promising alternative to LLMs for applications that require agile responses or on-device inference. However, sLLMs tend to perform clearly worse than their larger counterparts, especially for tasks that require in-depth reasoning . If we have both LLM and sLLM available, can we use both models to enjoy the quality of LLM and the speed of sLLM? A recent line of works shows that this is possible, even without any fur- ther training, by having two models collaborate on the decoding procedure: In speculative decoding , the candi- date text is generated rapidly by the sLLM, which is then verified by the LLM to ensure its correctness. This method admits parallel inference, enabling a rapid generation of LLM-level responses. How- ever, this approach requires a lot of memory to load multiple instances of LLMs and sLLMs, and thus, it is very difficult to use on edge devices. In this paper, we consider an alternative decod- ing scenario with limited LLM supervision . That is, we assume that one primarily decodes with sLLM (thestudent ) but can utilize a very limited number of supervisions from the LLM (the teacher ). The supervision may be as scarce as generating a single token, incorporating the scenario where the LLM is remotely located, e.g., at a central server. Criti- cally, we no longer restrict the model to generate the exact same outcome as the LLM, but simply aim to maximize the generative quality itself. A natural approach is to use two models to pre- dict the same token and aggregate their predictions to generate a better token. Such an approach has been recently studied by Li et al. (2023a), without any limitation on the number of LLM supervision. Here, it turns out that an effective strategy is to overtrust the teacher and negatively account for the student. Precisely, one takes a weighted sum of teacher and student logits with weights 1 +α and−α, respectively. 
It has been observed that a single positive α= 0.5works consistently well over diverse scenarios (O’Brien and Lewis, 2023). Under the limited supervision scenario, however, we make a critical observation that overtrusting 1\nStudent modelTeacher modelPrompt: John runs 60 miles a week. He runs 3 days a week. He runs 3 hours the first day and half as much the other two days he runs. How fast does he run? + αStudent probTeacher prob Teacher prob - Student probOutput logits (Softmax) Student first token : John Ours first token : LetFigure 1: Illustration of our methodology. As the parameter αincreases, the methodology leverages a more significant disparity in knowledge between the teacher and the student models. The example shows a modification in the initial generated word from ‘John’ to ‘Let,’ which allows the sentence to generate the correct answer when subsequently generated with the student model. the teacher no longer continues to be the dominant strategy. Intriguingly, we find that overtrusting the student works much better under certain setups, enjoying a better alignment with subsequently gen- erated tokens where no supervision is available. In fact, our empirical analysis reveals that who should we overtrust by how much highly varies over the choice of models and tasks, and even for each da- tum. In other words, for supervision-limited sce- narios, we are in desperate need of a mechanism to determine whom to overtrust by how much. To this end, we develop an algorithm to utilize the LLM supervision in an adaptive manner to im- prove sLLM performance. In particular, we iden- tify that the entropy ( i.e., confidence) of the gener- ated tokens is highly correlated with whether one should trust the teacher or the student. If the stu- dent’s generated token has high or low entropy, overtrusting the student works better. This implies if the student is confident, the teacher’s information could be the noise, and if the student ponders gener- ating a token ( i.e., high entropy), the teacher’s infor- mation can cause confusion. Our method predicts the optimal αon a per-datum basis, consistently im- proving the predictive performance of sLLM over a wide range of models and tasks. Our key contributions are threefold: •We formulate and initiate research toward sLLM decoding with limited LLM supervi- sion, which bears much practical importance. •We discover that, with limited supervision, the conventional strategies of overtrusting LLMs are largely suboptimal. •We propose a novel entropy-based mechanism to determine who to overtrust by how much among sLLM and LLM and demonstrate its effectiveness on a wide range of setups. 2 Framework We now formally describe the problem of decoding with limited teacher supervision and the overtrust- based framework to aggregate teacher and student predictions. Similar to recent works and unlike distillation , we do not assume that we can train the sLLM further. We consider a setup where we have two models available: The teacher LLM and the student sLLM. Given some input prompt, we assume that we can invoke the teacher up to Ntimes and the student unlimited times to generate subsequent tokens. For decoding, we consider aggregating the pre- dictions of the teacher and student at the token- level. Concretely, let fs(x), ft(x)be the prediction logits of the student and teacher. 
Then, we consider a prediction based on the aggregated softmax

S_α = σ(f_s(x)) + α (σ(f_t(x)) − σ(f_s(x))),    (1)

where σ denotes the softmax function and α ∈ R is a tunable parameter that determines which model should be trusted. If α = 1, we are following the teacher's prediction, and if α = 0, we are using the student's prediction. Contrastive decoding uses α > 1, which overtrusts the teacher and disregards the student.1 In any case, the generated token is the one achieving the maximum aggregated softmax S_α (Fig. 1).

1 Note that contrastive decoding actually combines logits instead of softmax outputs. We use softmax in our case since it empirically works better.

As the number of teacher supervisions is limited, we make combined predictions (Eq. 1) only for the first N tokens. That is, the text consists of

(p_1, ..., p_m, t_1, ..., t_N, t_{N+1}, ...),    (2)

where p_1, ..., p_m is the given prompt, tokens t_1, ..., t_N are generated by the LLM + sLLM, and tokens t_{N+1}, ... are generated by the sLLM only.

[Figure 2: Visualization of accuracy as a function of α for (T) Phi-3-medium -> (S) Phi-3-mini and (T) Llama-2-13B -> (S) Llama-2-7B on GSM8K, Multiarith, StrategyQA, SVAMP, ARC-Challenge, and ARC-Easy. The red dashed line indicates α = 1, and the orange line represents the student model's baseline performance.]

3 Method

Given this framework, we identify two core algorithmic questions. First, what is the optimal value of α? Second, given some prompt, how can we determine whether we should use the teacher's knowledge or not for that specific datum?

3.1 Key Observations

We now conduct a systematic empirical analysis to answer these questions.

(1) Strange case of N = 1. Under a wide range of setups, we have empirically evaluated the case of N = 1 with a varying trust hyperparameter α (Fig. 2). We make two intriguing observations.

• Even a single-token supervision from the LLM can boost the accuracy substantially; for Phi3-mini evaluated on the StrategyQA dataset, the accuracy increases by 2%p.

• The optimal α significantly differs from task to task and model to model. Surprisingly, there are certain cases where the optimal α is smaller than zero, i.e., overtrusting the student works better. Detailed case studies are in Appendix A. This contrasts with the case of unlimited supervision, where α = 1.5 works consistently well (O'Brien and Lewis, 2023).

Summing up these observations, we conclude that we need a good mechanism to predict the optimal α rather than using a fixed value.

(2) Entropy and supervision. We also analyze the relationship between the prediction entropy of the student and the effect of teacher supervision (Fig. 3), where we use the optimal α approximated from the previous analysis.

[Figure 3: Visualization of the number of correct answers obtained by "Receive knowledge from the teacher" and "Generate solo" as a function of the student entropy, using Llama-2 on GSM8K. (a) shows cases with low entropy, while (b) shows cases with high entropy. The red dashed line indicates the threshold beyond which "Generate solo" demonstrates superior performance.]

We observe that there exists a certain interval of entropy such that, whenever the student's prediction entropy lies inside this interval, aggregating teacher predictions is likely to boost performance. If the student is extremely confident, the teacher's prediction will only add noise, leading to lower accuracy. If the student is extremely unsure, aggregating teacher knowledge may disrupt the student's careful consideration and degrade performance.
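A minimal greedy-decoding sketch of Eqs. (1) and (2) is given below, assuming HuggingFace-style causal LMs that share a tokenizer (a requirement we note in the Limitations); the function names are ours, and KV caching is omitted for brevity.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def aggregated_next_token(student, teacher, input_ids, alpha):
    # Eq. (1): S_alpha = softmax(f_s) + alpha * (softmax(f_t) - softmax(f_s)).
    s_prob = F.softmax(student(input_ids).logits[:, -1, :], dim=-1)
    t_prob = F.softmax(teacher(input_ids).logits[:, -1, :], dim=-1)
    s_alpha = s_prob + alpha * (t_prob - s_prob)
    return s_alpha.argmax(dim=-1, keepdim=True)

@torch.no_grad()
def decode_with_limited_supervision(student, teacher, input_ids, alpha,
                                    n_teacher_tokens, max_new_tokens):
    # Eq. (2): the first N tokens use the aggregated prediction; the remaining
    # tokens are generated greedily by the student alone.
    for step in range(max_new_tokens):
        if step < n_teacher_tokens:
            next_tok = aggregated_next_token(student, teacher, input_ids, alpha)
        else:
            logits = student(input_ids).logits[:, -1, :]
            next_tok = logits.argmax(dim=-1, keepdim=True)
        input_ids = torch.cat([input_ids, next_tok], dim=-1)
    return input_ids
```

Setting alpha = 0 recovers student-only decoding, alpha = 1 follows the teacher for the first N tokens, and alpha > 1 reproduces the contrastive-decoding regime.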
3.2 Algorithmic implications

Motivated by these observations, we develop two mechanisms to best incorporate the limited supervision from the teacher LLM.

Predicting the optimal α for each datum. To avoid performing an extensive search for the optimal α for each data point, we propose to train a predictor that estimates the optimal α. More specifically, the model predicts α based on the teacher and student logits given as inputs (Fig. 4). We compare two different predictor models, XGBoost and a DNN.

Entropy-based knowledge injection. We determine whether to utilize the teacher supervision or not based on an interval-based classifier using the student's classification entropy (Algorithm 1).

Algorithm 1: Entropy-based knowledge injection
Require: entropy of the generated token E, two thresholds T1 and T2 (T1 < T2)
  if T1 < E < T2 then
    Receive knowledge from the teacher
  else if E ≤ T1 or T2 ≤ E then
    Generate solo
  end if

[Figure 4: Illustration of the optimal-α prediction module. The student and teacher LM heads produce probabilities whose logits and entropies are fed to an XGBoost or DNN predictor that outputs α.]

4 Experimental Setup

To demonstrate the superiority of our method, we evaluate it on two types of tasks: classification tasks over multiple domains (Section 4.1) and various LLM benchmarks (Section 4.2). In Section 4.3, we describe the implementation details of our algorithm. Any detail that does not appear in this section is given in Appendix B.

4.1 Classification Task

For the classification task, we first fine-tune each student and teacher model. Then, at inference time, we predict the class based on Equation 1.

Datasets. For the classification task, we use three datasets: for images, CIFAR-100; for audio, ESC-50; for text, the MNLI dataset.

Models. For CIFAR-100, we use DeiT. For ESC-50, we use AST. For MNLI, we use DeBERTa.

Other details. For CIFAR-100, we report the average score from 5 runs with different random seeds for each setting; experiments are done on a single NVIDIA A6000 GPU. We report the cross-validation score for ESC-50, following the most recent work, and measure MNLI performance with the open-source fine-tuned DeBERTa model in a single inference run.

4.2 LLM Benchmark

Datasets. For the LLM benchmarks, we evaluate a total of six different tasks. In particular, we use the GSM8K, Multiarith, SVAMP, a partition of MATH (MATH-Easy), ARC, and StrategyQA datasets. Further details are available in Appendix B.3.

Models. We test a total of four different families of models: Phi-3, Llama-2, Vicuna, and Qwen.

Decoding. To ensure exact performance measurement, we do not apply sampling algorithms during the decoding process.

Prompt. We provide few-shot prompts, with the number of shots ranging from 2 to 8. The prompts end with the phrase 'the answer is.' We then extract that span and compare it with the ground-truth answer. More detailed information about the prompts is given in Appendix D.

Other details. We search for the optimal α by exploring the range from 3 to -3 with 0.25 intervals. All experiments are done on a single NVIDIA A100 or H100 GPU, and model inference is performed in bfloat16 format. Detailed information about the evaluation strategy of each benchmark is in Appendix B.3.2.
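A short sketch of the entropy gate in Algorithm 1 is shown below; it reuses aggregated_next_token from the previous sketch. The thresholds T1 < T2 are tuned per setup, since no universal values are prescribed here, and the same student logits and entropies are the inputs from which the optimal-α predictor of Section 4.3 is trained.

```python
import torch
import torch.nn.functional as F

def student_entropy(student_logits):
    # Shannon entropy of the student's next-token distribution (batch size 1).
    probs = F.softmax(student_logits, dim=-1)
    return -(probs * (probs + 1e-12).log()).sum(dim=-1).item()

@torch.no_grad()
def next_token_with_entropy_gate(student, teacher, input_ids, alpha, t1, t2):
    # Algorithm 1: inject teacher knowledge only when the student's entropy
    # falls strictly inside (T1, T2); otherwise "generate solo".
    s_logits = student(input_ids).logits[:, -1, :]
    if t1 < student_entropy(s_logits) < t2:
        return aggregated_next_token(student, teacher, input_ids, alpha)
    return s_logits.argmax(dim=-1, keepdim=True)
```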
4.3 Implementing our Algorithm Our model, designed to predict the optimal α, is initially conducted in inference across αranging Methods CIFAR-100 ESC-50 MNLI-m MNLI-mm Student 82.71 ±0.24 77.80 ±2.75 91.26 91.12 Teacher 89.47 ±0.16 94.55 ±0.84 91.67 91.82 Ours 89.75±0.15 94.75±0.98 92.08 92.07 Table 1: Experimental results on the classification task Student Model Student acc. Teacher acc. Ours acc. Optimal α Distill-BERT 82.16 91.67 91.83 0.87 DeBERTa-v3-small 87.62 91.67 91.89 0.83 DeBERTa-large 91.26 91.67 92.08 0.62 Table 2: Experimental results on different students for optimal αablation study on MNLI matched dataset. from 3 to -1 in intervals of 0.25 to make the train- ing dataset. The dataset is constructed by labeling αs that correctly predict the answer as one and those that do not as zero. Then, we train the model to perform multi-label binary classification. Our implementation uses logits from both student and teacher models as inputs. Additionally, we have incorporated the entropy of each logit as further in- put. The αwith the highest confidence is selected as the output during inference. To demonstrate the superiority of our method- ology, we mainly experiment with Phi-3, which shows the best performance on the GSM8K dataset. Since GSM8K is absent in the validation set, we split the training set into five cross-validation folds. Experiments are done on a single GeForce RTX 4090 GPU, and we conduct experiments using ran- dom seeds and report the average test score across these five folds. 5 Results 5.1 Classification task We assess experimental results across various clas- sification tasks within diverse domains, as detailed in Table 1. Our method consistently outperforms the teacher model across all classification tasks, indicating significant enhancements in model per- formance. Moreover, we analyze our method in the MNLI dataset using student models of varied per- formance and size, illustrated in Table 2. The table shows that even if a student model performs signifi- cantly worse than the teacher model, an appropriate mixture of their knowledge through an optimal α value can surpass the teacher’s performance. Addi- tionally, our results show that as the performance gap between the student and teacher narrows, the optimal αvalue converges to 0.5. 
This indicates a simple weighted average ensemble, as when there 5\nModels GSM8K Multiarith SV AMP MATH-Easy StrategyQA ARC-Challenge ARC-Easy Phi3-mini 80.36 97.62 89.90 63.51 67.54 85.92 92.76 w/ reference ( α= 1) 81.50 97.62 89.40 64.39 69.00 86.09 93.27 w/α= 1.5 (O’Brien and Lewis, 2023) 81.96 97.86 89.00 65.09 69.58 86.01 93.18 w/ optimal α 82.18 98.10 90.30 65.09 69.58 86.43 93.35 + entropy sweet spot 82.34 98.10 90.50 65.61 70.01 86.60 93.56 Llama-2-7B 21.68 69.76 53.00 9.65 61.72 62.88 79.67 w/ reference ( α= 1) 22.90 67.38 54.90 9.83 61.43 63.65 80.56 w/α= 1.5 (O’Brien and Lewis, 2023) 22.90 67.14 55.20 9.65 60.55 64.08 80.56 w/ optimal α 23.35 70.00 55.20 10.00 62.45 64.08 80.68 + entropy sweet spot 24.03 70.48 56.10 10.53 62.59 64.59 80.81 Vicuna-7B 19.56 61.67 45.00 7.90 65.36 62.88 80.60 w/ reference ( α= 1) 21.08 60.71 45.80 7.90 64.77 62.29 80.77 w/α= 1.5 (O’Brien and Lewis, 2023) 20.32 60.48 46.30 7.54 65.50 62.12 80.35 w/ optimal α 21.08 62.38 46.60 8.42 65.65 63.91 80.98 + entropy sweet spot 21.38 63.81 46.90 8.77 66.23 64.42 81.36 Qwen-1.8B (4B on MATH-Easy) 35.03 83.10 34.50 27.54 58.22 50.00 70.41 w/ reference ( α= 1) 35.10 83.81 35.80 27.89 59.97 48.38 70.20 w/α= 1.5 (O’Brien and Lewis, 2023) 34.72 83.57 36.30 28.42 60.12 48.21 70.12 w/ optimal α 35.41 83.81 37.10 28.77 60.41 50.94 70.41 + entropy sweet spot 36.09 85.95 37.10 29.82 61.14 51.28 71.04 Table 3: Experiment results on various LLM benchmarks. The results are in case only the first generated token received knowledge from the teacher model. ‘entropy sweet spot’ is a method using Algorithm 1. Multiarith SVAMP Figure 5: Results of comparison of our method with CoT-decoding using the Phi-3. K denotes counts of exploring paths starting from top- k. is minimal difference in knowledge between the students and teachers. 5.2 LLM benchmark Overall results. We conduct vast experiments to evaluate the effectiveness of our proposed method across various LLMs and diverse benchmarks. The results of these experiments are in Table 3. Our approach is applied solely to the first token gen- erated. As demonstrated by the results, directly utilizing the logits from a teacher model and utiliz- ing the previous method (O’Brien and Lewis, 2023) can enhance performance; however, it is slight and occasionally leads to degraded performance. By op- timizing the αparameter, we consistently achieve performance improvements over them across all models. Furthermore, we observe a significant en- hancement in model performance by determining whether to employ our decoding method based on the entropy value. Various tokens setting. We define the extent of collaborative decoding between the student and MethodsSec. / Sentences Total Phi-3-medium Phi-3-mini CoT-decoding (K=3) 8.15 - 8.15 CoT-decoding (K=5) 14.65 - 14.65 Ours 2.63 0.08 2.55 Table 4: Comparative speed analysis of our method against CoT-Decoding. Our approach requires addi- tional time one token computation times for teacher per sentences. teacher models by notating ‘ N,’ the number of tokens for collaboration. Then, we conduct diverse Nvalues to assess the impact on performance, and the results are in Table 5. Moreover, we conduct our method on all tokens reported in Appendix C.2. Comparision with CoT-decoding. 
As the con- cept to reasonably determine the first token, our method is related to CoT-decoding , a methodology that begins by se- lecting the top- ktokens from the initial token and 6\nModels GSM8K Multiarith SV AMP MATH-Easy StrategyQA ARC-Challenge ARC-Easy Phi3-mini 80.36 97.62 89.90 63.51 67.54 85.92 92.76 N = 3 82.79 98.10 90.10 66.67 69.87 86.60 93.48 N = 5 82.34 98.10 90.10 66.32 70.31 86.69 93.69 N = 10 83.32 98.10 90.30 66.67 72.20 86.69 93.43 Llama-2-7B 21.68 69.76 53.00 9.65 61.72 62.88 79.67 N = 3 23.50 70.48 56.70 9.83 63.17 63.48 80.39 N = 5 23.43 70.95 56.90 10.18 63.46 64.51 80.77 N = 10 24.56 70.95 56.60 10.00 64.05 64.42 80.56 Vicuna-7B 19.56 61.67 45.00 7.90 65.36 62.88 80.60 N = 3 20.85 63.10 45.70 9.83 66.23 64.76 81.19 N = 5 20.70 64.05 46.20 10.00 66.67 64.68 81.23 N = 10 21.91 64.76 47.00 9.30 66.52 65.78 81.02 Qwen-1.8B (4B on MATH-Easy) 35.03 83.10 34.50 27.54 58.22 50.00 70.41 N = 3 35.71 83.81 39.30 29.47 62.45 51.45 70.79 N = 5 35.18 84.05 41.20 30.35 61.86 52.90 70.83 N = 10 35.63 83.57 43.20 31.23 61.72 54.18 72.14 Table 5: Experiment results on various LLM benchmarks with diverse N. We use optimal αfor all experiments. [GSM8K] John runs 60 miles a week. He runs 3 days a week. He runs 3 hours the first day and half as much the other two days he runs. How fast does he run? Ground truth answer : 10 Generate solo : John runs 3 hours the first day, so he runs 3/3 = 1 hour per day for the other two days. He runs a total of 3 + 1 + 1 = 5 hours a week. He runs 60 miles a week, so his speed is 60 miles / 5 hours = 12 miles per hour. The answer is 12. Receiving knowledge from the teacher (w/ α= 1.5) : Let’s break this down step by step. 1. John runs 3 hours on the first day. 2. He runs half as much time on the other two days, so that’s 3 / 2 = 1.5 hours per day. 3. In total, he runs for 3 + 1.5 + 1.5 = 6 hours a week. 4. He runs 60 miles a week, so to find his speed, we divide the total distance by the total time: 60 miles / 6 hours = 10 miles per hour. John runs at a speed of 10 miles per hour. The answer is 10. Receiving knowledge from the teacher (w/ optimal α): Let’s break this down step by step. 1. John runs 3 hours on the first day. 2. He runs half as much time on the other two days, so that’s 3 / 2 = 1.5 hours per day. 3. In total, he runs for 3 + 1.5 + 1.5 = 6 hours a week. 4. He runs 60 miles a week, so to find his speed, we divide the total distance by the total time: 60 miles / 6 hours = 10 miles per hour. John runs at a speed of 10 miles per hour. The answer is 10. Teacher generation : First, we need to find out how many hours John runs in total. He runs 3 hours the first day and half as much the other two days. So, he runs 3 + (3/2) + (3/2) = 3 + 1.5 + 1.5 = 6 hours in total. Now, we know he runs 60 miles in 6 hours. To find his speed, we divide the distance by the time. So, he runs 60 miles / 6 hours = 10 miles per hour. The answer is 10. [StrategyQA] Is Argon near Neon on the periodic table of elements? Ground truth answer : True (yes) Generate solo : Argon is in period 3 and Neon is in period 2. Periods are horizontal rows on the periodic table. Thus, Argon is not near Neon on the periodic table. So the answer is no. Receiving knowledge from the teacher (w/ α= 1.5) : Argon is in period 3 and Neon is in period 2. Periods are horizontal rows on the periodic table. Thus, Argon is not near Neon on the periodic table. So the answer is no. Receiving knowledge from the teacher (w/ optimal α): Yes. Argon is the 18th element on the periodic table. 
Neon is the 10th element on the periodic table. Thus, Argon is near Neon on the periodic table of elements. So the answer is yes. Teacher generation : Argon and Neon are both noble gases. Noble gases are in the same column on the periodic table. Thus, Argon is near Neon on the periodic table. So the answer is yes. Table 6: Detailed results showing how our method brought performance gain in LLM benchmarks utilizing Phi-3 model. Each generated tokens all in green; if the answer is correct we highlighted in blue, else red. subsequently generating sentences from each. The most confident response is then chosen as the final output. We compare our method to CoT-decoding using SV AMP and Multiarith, which do not require elaborate prompting, focusing on performance and speed. The performance results are in Fig. 5, and the speed comparison is in Table 4. The perfor- mance of our method is competitive with CoT- decoding and even surpasses it in Multiarith. The speed calculated on the A100 GPU shows that our method is faster than CoT-decoding. 5.3 Optimal αpredict model We evaluated our optimal αprediction model using the Phi-3 on the GSM8K, and the results in Ta- ble 7. While XGBoost failed to classify effectively, 7\nModels GSM8K Phi3-mini 80.36 w/ entropy sweet spot 82.34 w/ XGboost 81.44 ±0.51 w/ DNN 82.44±0.22 Table 7: Experiment results on optimal αpredictor. the DNN exceeded the performance of the en- tropy sweet spot. Demonstrates that the model can autonomously predict the appropriate knowledge without needing hand-crafted tuning processes. 5.4 Case study Table 6 provides examples of successful cases where only the first generated token receives knowl- edge from a teacher model in several benchmarks. As follows, receiving initial guidance from the teacher model encourages the student model to generate correct solutions. A detailed analysis of the first example from the GSM8K dataset reveals that ‘Receiving knowledge from the teacher’ facil- itates a more organized chain-of-thought process than ‘Generate solo.’ Notably, the token generated after receiving the teacher’s input is ‘ Let’ rather than the teacher-generated ‘ First ,’ which indicates why an αof 1 is not optimal. Also, the second example suggests why an αof 1.5 is not optimal, commonly used in previous research (O’Brien and Lewis, 2023). 6 Related work Efficient decoding strategy. Speculative decod- ing enables faster generation by having the draft model produce results first, which the target model then verifies in parallel, ensuring dis- tributions identical to those generated by the target model. On the other hand, our method aims to effi- ciently enhance the student model’s performance by utilizing only a few tokens generated by the teacher. Logit arithmetic. Recent works suggest using arithmetic to harness the capabilities of two or more language models during decoding. Contrastive de- coding (Li et al., 2023a; O’Brien and Lewis, 2023) enhances LLM generation quality by subtracting the log probabilities of a smaller model from those of the LLM. SafeDecoding mit- igates jailbreak attacks by amplifying the proba- bilities of tokens aligned with human values, us- ing the distribution difference between the safety- enhanced expert model and the original model. Other analogous studies utilize differences in log- its over vocabulary distributions between sLLMs to influence the output of LLM. DExperts suggests a decoding strategy to reduce undesirable outputs of target LLM by leveraging "expert" LMs and "anti-experts" LMs. 
Using a sim- ilar equation, Emulator fine-tuning emulates instruction-tuning large pre-trained models by ensembling them with small fine-tuned models. uses a small unsafe model to manipulate the output distribution of a larger model to induce jailbreaking. Finally, proxy- tuning utilizes the difference in output logits between tuned and untuned small LMs to shift the predictions of the untuned large LMs. Unlike previous methods, our approach in- volves searching for the optimal mixing coefficient αacross a broader range. We discover that the op- timal point may differ from those used in previous studies. Our work also employs an entropy-based method and a model-driven approach to enhance the general reasoning ability of LMs. Importance of the first token. The chain-of- thought decoding shows that the first token that LLMs produce have a signif- icant impact on the quality of the entire response in the reasoning task. Exploring paths starting from top-kalternative tokens enables chain-of-thought reasoning without prompts. Similarly, our study also shows that variations in the initial token can greatly impact the model’s performance. However, our method only requires a single token generated by the teacher model, eliminating the need for ad- ditional computation to produce all possible kout- comes. 7 Conclusion In this paper, we have formulated the problem of sLLM decoding with a limited LLM supervision. Through our study, we have unveiled that the op- timal combination of sLLM and LLM predictions may significantly depend on the considered task, models, and even datum. Understanding when and why such discrepancies happen is an important future question that needs to be addressed. We be- lieve that our entropy-based perspective will help provide a strong starting point for this pursuit. 8\nLimitations A notable limitation of our method is that, in its current form, it is difficult to incorporate the predic- tions of the teacher that has a different embedding space, e.g., using a different tokenizer. Another lim- itation is that we rely on a single feature, namely the prediction entropy, to determine how to aggre- gate the predictions. A more in-depth analysis of what other features one can utilize is much needed. Ethics statement All experimental results we provide in this paper is based on publicly available datasets and open- source models.
[ "Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone", "Chain-of-Thought Reasoning Without Prompting", "SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding", "An Emulator for Fine-Tuning Large Language Models using Small Language Models", "DistillSpec: Improving Speculative Decoding via Knowledge Distillation", "Textbooks Are All You Need II: phi-1.5 technical report", "Llama 2: Open Foundation and Fine-Tuned Chat Models", "Judging LLM-as-a-judge with MT-Bench and Chatbot Arena", "Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling", "GPT-4 Technical Report", "Fast Inference from Transformers via Speculative Decoding", "Contrastive Decoding: Open-ended Text Generation as Optimization", "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback", "Speculative Decoding: Exploiting Speculative Execution for Accelerating Seq2seq Generation", "HTS-AT: A Hierarchical Token-Semantic Audio Transformer for Sound Classification and Detection", "Training Verifiers to Solve Math Word Problems", "DExperts: Decoding-Time Controlled Text Generation with Experts and Anti-Experts", "AST: Audio Spectrogram Transformer", "Are NLP Models really able to Solve Simple Math Word Problems?", "Measuring Mathematical Problem Solving With the MATH Dataset", "Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies", "Training data-efficient image transformers & distillation through attention", "DeBERTa: Decoding-enhanced BERT with Disentangled Attention", "The Curious Case of Neural Text Degeneration", "Hierarchical Neural Story Generation", "GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding", "Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge", "Decoupled Weight Decay Regularization", "A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference", "MAWPS: A Math Word Problem Repository", "ESC: Dataset for Environmental Sound Classification", "Distilling the Knowledge in a Neural Network", "Learning Multiple Layers of Features from Tiny Images" ]
Unlocking Markets: A Multilingual Benchmark to Cross-Market Question Answering
Unlocking Markets: A Multilingual Benchmark to Cross-Market Question Answering Abstract Users post numerous product-related questions on e-commerce platforms, affecting their pur- chase decisions. Product-related question an- swering (PQA) entails utilizing product-related resources to provide precise responses to users. We propose a novel task of Multilingual Cross- market Product-based Question Answering (MCPQA) and define the task as providing an- swers to product-related questions in a main marketplace by utilizing information from an- other resource-rich auxiliary marketplace in a multilingual context. To facilitate the research, we propose a large-scale dataset named Mc- Market, with over 2 million questions across 13 marketplaces in 8 languages. We focus on two subtasks: review-based answer generation and product-related question ranking. Answers are obtained either by generating or ranking from product-related resources (e.g., reviews, questions). For each subtask, we label a subset of McMarket using an LLM and further eval- uate the quality of the annotations via human assessment. We then conduct experiments to benchmark our dataset, using a range of mod- els ranging from traditional lexical models to LLMs in both single-market and cross-market scenarios across two datasets. Results show that incorporating cross-market information sig- nificantly enhances performance in both tasks.1 1 Introduction Online shoppers on e-commerce platforms post numerous questions to specific products every day . Since most of these questions remain unanswered, Product- related question answering (PQA) involves pro- viding accurate responses to them. By utilizing product-related information like reviews and meta- data, responses to product-related questions can be enriched, offering enhanced depth and authenticity for potential customers . 1The code and dataset will be released via https:// github.coms/anonymous under the Amazon data license. Figure 1: An example of enhancing product-related QA using cross-market data. ①depicts generating answers with cross-market reviews. ②depicts ranking-related cross-market questions to find the answer. The recent success in cross-market PQA under- scores the capability to effectively leverage rele- vant questions from a resource-rich marketplace to address questions in a resource-scarce market- place . In this work, we extend the hypothesis that us- ing knowledge from popular marketplaces can im- prove answers in less common marketplaces, even in other languages. As shown in Figure 1, for a question to a product in the French marketplace (denoted as main marketplace ) asking if the clock is a real one, we can either address it by examining reviews of the same product or similar ones in the much larger US marketplace (denoted as auxiliary marketplace ), or ranking related questions from both main and auxiliary marketplaces to find the answer. These multilingual reviews and related questions serve as valuable hints, by saying “it’s not a real clock,” thereby providing crucial infor- mation for the pertinent question at hand. We, therefore, propose a novel task of Multi- lingual Cross-market Product-based Question An- swering (MCPQA). We define this task as gen- erating the answer to a product-related question in an original marketplace, using information sourced from an auxiliary marketplace with richer 1\nresources, within a multilingual setting . 
To this end, our initial goal is to address the following re- search question RQ1 :In a multilingual context, how can we utilize an auxiliary marketplace to enhance question-answering in the main market- place by leveraging product-related resources (i.e., questions, reviews)? To address RQ1 , we pro- pose the first large-scale MCPQA dataset, named McMarket, covering 13 different marketplaces (in- cluding the usauxiliary marketplace and 12 main marketplaces) across 8 different languages. To con- struct the dataset, we gather data from an existing Amazon product dataset and supplement it with information from user-generated Amazon product question-answering sources. In particular, we pro- vide diverse product information in McMarket, ex- ploring the possible answers using both questions and reviews. In total, our dataset consists of over 2 million product-related questions and 7.7 mil- lion product reviews. With McMarket, we then perform comprehensive data analysis to address RQ1 . We demonstrate a notable increase in the percentage of review-answerable questions across all marketplaces, with support from the auxiliary usmarketplace. Given the recent success of large language mod- els (LLMs ) in NLP tasks , their potential application to the MCPQA task prompts our second research ques- tionRQ2 :Can LLMs benefit the dataset construc- tion in the MCPQA task? Addressing RQ2 , we randomly select some questions from McMarket and perform GPT-4 auto-labeling. Specifically, we focus on two widely-studied PQA subtasks under the multilingual cross-market settings, including review-based answer generation ( AG) and product-related ques- tion ranking ( QR). For AG, we prompt LLMs to judge whether a question can be answered from associated reviews and provide its corresponding answer. This subset is denoted as McMarket r. For QR, given two question answering pairs, we ask LLMs to judge if one helps answer the other and denote the subset as McMarket q. With the two subsets, we then conduct human assess- ment, scrutinizing the LLM-generated results from multiple angles to ensure their quality meets the required standards. Notably, in McMarket r, 61.8% LLM-generated answers are assumed ‘better’ than the human ground truth. Finally, we are interested in answering the re- search question RQ3 :How do existing multilingual and monolingual methods perform in the single- and cross-market scenarios? To this end, we per- form experiments of models on AGandQRsub- tasks. For each task, we report the performance of state-of-the-art methods under single- and cross- market scenarios on both McMarket and the cor- responding subset. We benchmark methods rang- ing from traditional lexical models ( i.e.,BM25) to LLM-based approaches ( i.e.,LLaMA-2, Flan-T5). We demonstrate the superiority of cross-market methods against their single-market counterparts. In conclusion, our contributions are as follows: •We propose a novel task named MCPQA, where we leverage product-related informa- tion from an auxiliary resource-rich market- place to answer questions in a resource-scarce one in a multilingual setting. Specifically, we investigate two subtasks named AG and QR. •We benchmark a large-scale real-world dataset named McMarket to facilitate the research in the MCPQA task. We also collect two LLM- annotated subsets and adopt human assess- ment to ensure the dataset’s quality. 
•To provide a comprehensive evaluation of the task and verify the superiority of cross-market methods, experiments are performed under both single/cross-market scenarios. 2 Related Work Product-related QA . Product-related QA (PQA) seeks to address consumers’ general inquiries by utilizing diverse product-related resources such as customer reviews, or the pre-existing QA sections available on a retail platform . Among the existing literature in this area, retrieval-based methods have been a popular direction that retrieve related reviews for provid- ing the right answer (Wan and McAuley, 2016; Zhang et al., 2019b, 2020b,a; Yu and Lam, 2018). For example, McAuley and Yang (2015) propose a model that leverages questions from previous records for selecting the relevant review for the question. While most of these works assume there are no user-written answers available, Zhang et al. (2020b) rank answers for the given question with review as an auxiliary input. Another line of re- search investigates answer 2\ngeneration grounding on retrieved product-related documents. More recently, Ghasemi et al. (2023) introduce a novel task of utilizing available data in a resource-rich marketplace to answer questions in a resource-scarce marketplace. Building upon their research, we explore multilingual contexts, examining marketplaces with non-English content. Cross-domain and cross-lingual QA . Our work can be seen as a special format of cross-domain QA, which involves addressing questions that span different domains or fields of knowledge . For instance, Yu et al. (2017) propose a general frame- work that effectively applies the shared knowledge from a domain with abundant resources to a domain with limited resources. Also, cross-domain QA is often with a close connection to cross-lingual QA in the sense that both involve transferring knowl- edge and understanding from one domain or lan- guage to another. . Asai et al. (2020) ex- pand the scope of open-retrieval question answer- ing to a cross-lingual setting, allowing questions in one language to be answered using contents from another language. Recently, Shen et al. (2023) in- troduce a multilingual PQA dataset called xPQA where cross-market information is also leveraged to aid the product-based question answering. Com- pared to these datasets, more diverse information is provided in McMarket, exploring the possible answers with both questions and reviews available. 3 Problem Formulation We investigate two subtasks of the MCPQA task, review-based answer generation ( AG)andproduct- related question ranking ( QR), where answers to a product question are obtained by a generative or ranking way, respectively. Review-based answer generation . In this task, we assume that the answer can be obtained from the reviews of the product (or similar products). Based on the setting in Gupta et al. (2019), we define this task in a multilingual cross-market scenario. Given a question Qin the main marketplace MT, we first retrieve and rank all the related reviews from similar items within both MTand auxiliary marketplace MA. Given the retrieved review set Ω ={R1, ..., R k}, we predict if Qis answerable from it by assigning a tag t. If yes, a generative function Γis learned: A= Γ(Q,Ω), so that answer Ais generated with both QandΩas input. Product-related question ranking . Following the problem setting in Ghasemi et al. (2023), we as- sume that there are similar questions already asked about the product or similar products in other mar- ketplaces. 
Therefore, given a main marketplace in language LM, denoted as MT, which usually suffers from resource scarcity of the number of knowledgeable users answers, MTconsists of sev- eral items {I1, ..., I m}, where each Ikcontains a set of question answering pairs {QAk1, ...QA kn}. Besides, there also exists a high-resource market- place MA, denoted as the auxiliary marketplace (theusmarketplace in our case) in language LA (note that in some cases LAcan be the same as LM). Similarly, MAalso includes several items {I′ 1, ..., I′ z}, where we can assume z >> m . The task is defined as, for a given question Qin the main marketplace MT, in a multilingual setting, we rank the questions from both MTandMAto take the corresponding answers of the top ranks as the possible answer to Q. 4 Data Collection & Analysis We describe how we collect McMarket and perform several analysis to answer RQ1 andRQ2 . 4.1 Data collection 4.1.1 Data preprocessing We construct our dataset on top of an Amazon prod- uct dataset called XMarket . XMarket includes authentic Amazon product meta- data and user-generated reviews. Specifically, we sample 13 marketplaces covering 8 different lan- guages from the XMarket Electronic category, in- cluding 12 as main marketplaces and the additional usmarketplace as the auxiliary marketplace. For each marketplace, we gather metadata and reviews for each product from XMarket. We also collect the question-answering pairs posed by the users by crawling the Amazon website. We then provide the corresponding English translation for the non- English contents. Specifically, we adopt a widely- used professional translation tool named DeepL Pro2for all the question-answer translation and the pre-trained NLLB model fine- tuned on each non-English language for review translation. We ensure the translation quality and provide a detailed assessment in Appendix A. To 2https://www.deepl.com/ 3\nName Market Language Product Question Review Avg. Question per Market xPQA 12 12 16,615 18,000 - 1500 XMarket-QA 2 1 34,100 4,821,332 - 2,410,666 semiPQA 1 1 - 11,243 - 11,243 SubjQA 1 1 - 10,098 10,098 10,098 ReviewRC 1 1 - 2,596 959 2,596 AmazonQA 1 1 155,375 923,685 8,556,569 923,685 Amazon 1 1 191,185 1,447,173 13,498,681 1,447,173 McMarket 13 8 30,724 2,700,179 7,706,519 207,706 Table 1: Comparison of McMarket with existing PQA datasets. The detailed statistics are listed in Appendix D. au br ca cn fr in jp mx uk all Market1020304050% of answerable questionsSingle Market Cross Market Figure 2: Portion of answerable questions using single/cross-market review information. the best of our knowledge, this is the first multilin- gual cross-market QA dataset with questions and reviews in the community. 4.1.2 LLM annotation For the two concerned subtasks, we both provide LLM-labeled data for supervised training. Specif- ically, we randomly select a small portion of data from McMarket and instruct GPT-4 to perform an- notation. For AG, we randomly select 1000 ques- tions per marketplace.3Then, we follow the typi- cal top-K pooling technique González and Gómez (2007) and pool the top five retrieved reviews from a variety of retrieval methods. After that, we in- struct GPT-4 to judge if the question is answer- able and write the corresponding answer with the question and reviews as input. We denote this sub- set as McMarket r. For QR, we randomly select 200 questions from each marketplace. 
Employ- ing the same strategy, we retrieve the top five re- lated question-answering pairs from both the main and auxiliary marketplaces. Consequently, we ac- quire 1,000 question-answering pairs for each mar- ketplace, with 9k pairs in total. Then, GPT-4 is instructed to determine if the retrieved QA pairs would be useful in answering the original question by assigning a score from 0–2, representing ‘ Very useful ’, ‘Partially useful ’, and ‘ Not useful ’, respec- 3For the aumarketplace, the total is 584 questions, so we sample all of them. 06-12 07-14 08-16 09-18 11-20 Time101102103104105106Question Countus uk au br ca cn fr in jp mxFigure 3: Temporal gap analysis. tively. We denote this subset as McMarket q. For simplicity, we perform the GPT-4 labeling based on translated English contents. More details of the subsets as well as the prompts we gave to GPT-4 are listed in Appendix B. 4.2 Data analysis 4.2.1 Dataset overview Overall, McMarket covers 13 different market- places and 8 languages, ranging from marketplaces with a small scale ( i.e.,au,br) to marketplaces with rich resources ( i.e.,uk,us). It contains over 2 million product-related questions, 7 million re- views, and 30k unique products in total. We compare McMarket with existing PQA datasets. According to Table 1, McMarket exhibits advantages in various aspects: (1) contains mul- tiple languages – we provide product, question, and review information in the original text of their respective marketplaces and additionally offer the corresponding English translations; (2) supports cross-market QA – our dataset is designed to facil- itate question answering research across different marketplaces, enhancing its utility for cross-market analyses and evaluations; (3) includes diverse in- formation – compared with existing multilingual PQA dataset, McMarket encompasses comprehen- sive question and review information, paving the way for more diverse research avenues and tasks in the future; (4) is large in scale – overall, McMar- ket surpasses most PQA datasets in terms of size, 4\nVery Bad Bad Good Very Good Correctness 2.5 0.9 8.5 88.1 Completeness 4.9 1.3 15.6 78.2 Relevance 3.5 2.7 13.4 80.4 Naturalness 0.8 0.9 5.4 92.9 Better than Ground Truth 61.8 Table 2: Human evaluation on McMarket r. All the numbers are shown in percentage. ensuring it comprises a substantial amount of data for experimentation and analysis. 4.2.2 Cross-market QA analysis To answer RQ1 , we compare the effect of product- related resources ( i.e.,reviews) on question answer- ing under both single- and cross-market scenarios. Figure 2 shows the comparison of answerable ques- tions based on both single- and cross-market re- trieved reviews in McMarket.4We notice that the portion of answerable questions gets raised in ev- ery marketplace with cross-market reviews, with a particularly significant uplift observed in low- resource marketplaces ( i.e.,br). This verifies the transferability of knowledge across marketplaces and underscores the advantages of leveraging cross- market information in enhancing the performance of product QA models. We further analyze the temporal characteristics of McMarket. Figure 3 illustrates the cumulative sum of the number of QA data available on all the items in all marketplaces. There are several notable observations: 1) at the beginning, all marketplaces feature very few QA data. 
2) At each timestep, the most resource-rich marketplace ( i.e.,us) always dominates the number of QA data compared to other marketplaces by several orders of magnitude. 3) Over time, the resource intensity levels of differ- ent marketplaces continue to change. For example, the number of QA data in mxsurpasses that in cn andjpafter 2018/09. We further observe that, on average, over 70% of the questions in the main marketplace have already been answered in the us auxiliary marketplace under the same item, before the first question even receives an answer. These findings confirm the practicality and importance of exploring how auxiliary marketplaces can be utilized as valuable resources for PQA. 4.2.3 LLM-generated data analysis To assess the quality of LLM-generated data, we perform several analyses. On both McMarket rand 4We adopt the answerable question prediction model in Gupta et al. (2019) to predict if a question is answerable or not given the review information.Incorrect Partially correct Correct Portion 6.0 10.9 83.0 Overall Precision 98.2 Overall Recall 97.4 Overall F1 97.6 Table 3: Human evaluation on McMarket q. All the numbers are shown in percentage. McMarket q, we randomly sample 500 questions with the average of 50 questions from each market- place, and hire 3 crowd-workers for each task5to manually assess the GPT-4 labels. AG. For McMarket r, we ask the crowd-workers to assess GPT-4-generated answers in terms of cor- rectness, completeness, relevance, and naturalness. The detailed definitions of them are listed in Ap- pendix E. For each metric, we asked them to assign a score from −2to+2to assess the answer quality, with−2representing ‘very bad’ and +2represent- ing ‘very good.’ We also asked them to choose the better answer between the GPT-4 and human- provided answers. They were also asked to provide their reasons without knowing the true category, mitigating bias towards longer and more detailed responses. We note a high agreement among anno- tators, with a 0.76 IAA score. From Table 2, we note that GPT-4 answers demonstrate reasonable performance in terms of every metric. Surprisingly, our findings reveal that in the majority of cases, hu- man assessors perceive GPT-4 results to be better than human-generated ground truth. It is worth not- ing that GPT-4’s outcomes are derived solely from review information, whereas human ground truth relies on both reviews and actual user experiences. QR. For McMarket q, we ask the crowd-workers to judge the quality of the question ranking gen- erated by GPT-4, by assigning a score between 0–2 to each sample, where 0 denotes GPT-4 an- swers are not correct, 1 as partially correct, and 2 as completely correct. Furthermore, we instruct the annotators to provide their own judgment of the ranking score if they mark GPT-4 answers as 0 or 1. We also observe high agreement in this task with the IAA score 0.83. Table 3 shows that the quality of the generated question ranking results by GPT-4 is also deemed satisfactory, achieving over 93% correctness in question ranking pairs and an overall F1 score of 97.6%. 5We hire the crowd-workers via a professional data man- agement company named Appen (https://appen.com/). 
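The QR labeling procedure used to build McMarket_q can be sketched as follows (again a simplified illustration: the real prompts are in Appendix B, and `call_llm` together with the single-digit parsing is a placeholder assumption).

```python
from typing import Callable, List, Tuple

USEFULNESS = {2: "Very useful", 1: "Partially useful", 0: "Not useful"}

def score_candidate_qas(query: str,
                        candidates: List[Tuple[str, str]],  # retrieved (question, answer) pairs
                        call_llm: Callable[[str], str]) -> List[dict]:
    """Ask the LLM to rate each retrieved QA pair for usefulness on a 0-2 scale."""
    records = []
    for cand_q, cand_a in candidates:
        prompt = (  # illustrative wording only
            "Rate how useful the candidate QA pair is for answering the query "
            "(2 = very useful, 1 = partially useful, 0 = not useful). Reply with a single digit.\n"
            f"Query: {query}\nCandidate question: {cand_q}\nCandidate answer: {cand_a}"
        )
        score = int(call_llm(prompt).strip()[0])
        records.append({"query": query, "candidate": (cand_q, cand_a),
                        "score": score, "label": USEFULNESS[score]})
    return records

# The scored pairs form McMarket_q and later serve as ranking ground truth.
```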
5\nMethodau br ca cn fr in jp mx uk A VG B R B R B R B R B R B R B R B R B R B RSingleBM25 6.1 7.0 4.9 6.9 6.9 7.7 4.8 5.2 8.0 8.1 4.7 5.9 11.0 9.6 7.0 8.2 10.3 9.3 8.0 7.9 BERT 7.4 7.3 9.0 5.3 7.3 6.8 5.4 5.0 8.5 7.2 5.1 4.8 10.6 8.7 9.4 7.7 9.5 8.2 7.9 7.0 T5 15.5 11.4 14.3 12.6 16.4 12.1 13.5 10.7 16.5 11.5 12.8 9.9 22.6 15.6 20.2 14.4 18.9 13.3 16.9 12.2 mT5 6.2 5.3 8.1 9.2 14.3 10.0 19.5 11.8 15.5 10.7 9.7 8.7 26.3 13.3 12.2 9.4 14.6 9.6 13.7 9.7 Llama-2* 10.2 14.7 16.4 17.1 15.9 13.1 14.8 13.6 18.3 14.2 13.5 13.1 26.6 19.7 22.3 16.6 20.1 18.3 17.8 15.4CrossBM25 10.6 7.9 9.0 6.1 7.8 7.9 4.6 5.4 9.0 8.2 5.6 6.1 11.3 9.5 9.9 9.1 10.4 9.2 8.9 8.0 BERT 10.5 8.1 9.5 6.4 8.5 8.9 5.8 5.1 9.8 8.3 6.1 7.3 11.8 9.6 10.4 8.7 11.4 10.3 9.4 9.0 Exact-T5 14.0 11.8 16.6 13.0 18.2 11.9 13.0 11.0 18.1 11.3 12.5 10.1 22.7 15.0 20.3 14.2 20.6 13.7 17.9 12.3 T5 16.1 11.3 17.0 14.1 17.0 12.7 15.1 11.3 19.4 12.6 13.2 10.6 23.6 16.0 22.3 16.6 20.2 15.4 18.1 13.5 Exact-Llama-2* 19.5 15.1 17.4 15.5 16.4 13.8 15.6 11.4 21.6 17.6 16.9 15.1 27.3 17.8 24.7 17.8 22.4 19.8 20.1 17.0 Llama-2* 21.4 20.6 18.9 19.5 19.5 14.4 17.6 15.5 22.0 19.0 16.5 15.0 29.5 18.6 25.7 19.2 25.0 22.7 21.7 18.3 Table 4: Experimental results of AGon McMarket, which adopts human-provided answers as ground-truth. * denotes LLM based methods. The best-performed model in the single-market setting is highlighted in light grey. The models in dark grey are highlighted to distinguish from their Exact- counterparts. Methodau br ca cn fr in jp mx uk A VG B R B R B R B R B R B R B R B R B R B RSingleBM25 10.3 11.7 10.7 12.5 8.3 13.0 8.5 10.1 11.6 15.7 11.7 14.3 12.8 12.1 13.3 13.6 12.4 14.7 10.7 13.3 BERT 12.4 10.0 14.8 8.7 11.3 8.8 8.5 7.1 11.1 10.2 12.0 10.6 10.9 9.0 14.1 9.5 9.0 11.1 10.8 9.5 T5 29.8 27.0 26.7 33.6 29.2 27.4 31.1 24.2 34.9 30.8 29.0 32.2 31.1 27.0 27.2 26.5 29.5 25.9 29.9 28.4 mT5 10.6 14.3 5.2 13.5 6.8 10.4 41.1 26.4 19.9 17.4 9.2 14.7 34.2 29.1 24.5 16.3 7.2 13.5 18.0 17.4 Llama-2* 35.7 34.3 37.6 40.8 36.3 37.2 38.7 34.3 35.7 32.6 34.4 35.8 34.7 32.4 35.9 34.7 35.4 37.0 35.4 35.9CrossBM25 13.5 11.0 12.9 10.0 13.4 12.2 7.4 8.5 12.8 13.0 14.6 15.0 11.6 10.1 15.5 12.6 12.0 15.2 12.6 12.0 BERT 15.8 10.6 15.7 11.0 14.4 9.8 6.8 8.1 12.2 14.2 13.0 12.1 13.8 11.3 15.7 11.1 10.1 13.1 12.9 11.3 Exact-T5 30.9 28.2 30.1 29.0 29.3 30.7 29.8 26.7 34.7 31.7 31.8 30.3 30.0 24.6 27.3 28.0 29.1 25.9 30.3 28.4 T5 32.0 30.2 31.0 28.6 29.9 29.7 32.1 26.8 32.2 31.5 30.1 32.4 36.3 29.9 29.4 27.6 30.2 26.0 31.4 29.1 Exact-Llama-2* 37.0 34.6 34.1 32.6 38.0 39.9 33.0 35.2 40.8 44.3 36.2 40.2 38.0 34.7 38.4 37.8 35.2 37.9 36.7 37.3 Llama-2* 35.9 37.4 38.0 37.9 39.2 40.2 39.1 36.9 39.6 41.7 37.0 41.0 40.9 35.2 38.8 37.1 35.9 38.5 38.4 38.5 Table 5: Experimental results of AG on McMarket r, where LLM-generated answers are adopted as ground-truth. 5 Experiments 5.1 Experimental setup Dataset . We perform experiments on AGandQR. For each task, we report the single/cross-market results on the whole dataset and its subset. For AG, on the McMarket dataset, we first adopt the BERT classifier trained in . It assesses each question based on the review information, categorizing them as either answerable or unanswerable. Subsequently, we employ it to filter out all answerable questions. We then split the training/validation/testing sets following the portion of 70/10/20%, resulting in 183,092/24,973/49,958 samples, respectively. On the McMarket rdataset, we also split the data into three sets with the same portions. 
Specifically, we adopt the GPT-4 generated answers as the ground truth. In the single-market setting, we retrieve the topKreviews from the main marketplace before generating the answers6. In the cross-market set- ting, we retrieve the reviews from both the main and auxiliary marketplaces. We report the genera- tion performance of baselines on the testing set. ForQR, we first rank products, then among the topNproducts, we rank the top Kquestions7. Since McMarket does not come with any ground- 6We choose K=5 in our case. 7Following Ghasemi et al. (2023), we use N=3 andK=50.truth ranking results, we perform unsupervised training and adopt GPT-4-labeled data, McMarket q, as the testing set. Besides, to further test the per- formance of supervised methods on this task, we split McMarket qinto three sets, with 1260/180/360 samples in each. We then train each model on the training set and report results on the testing set. Evaluation metrics . We adopt several evalua- tion metrics to assess the performance of models on two tasks. For AG, we compare the model- generated answers with ground-truth user answers using BLEU-4 and ROUGE- L scores. For QR, we report major information retrieval metrics, namely, mean recip- rocal rank (MRR) and Precision@3 to evaluate the ranking performance of different methods. Compared methods . For AG, we first directly rank and select a review as the answer with meth- ods such as BM25 , BERT . Besides, several gen- erative methods such as T5 , LLaMA-2 , are leveraged to train the model to generate the answer given the question and reviews. Specifically, under the cross- market scenario, Exact-model means that in the auxiliary marketplace, we only use reviews from the same item before performing answer generation. ForQR, on McMarket, we report ranking meth- ods that do not involve any training ( i.e.,BERT, 6\nMethodau br ca cn fr in jp mx uk A VG M P M P M P M P M P M P M P M P M P M PSingleBM25 24.5 16.9 15.2 18.3 31.5 28.7 22.0 28.7 21.0 34.7 44.4 46.0 23.8 31.5 28.9 38.7 38.4 40.2 27.7 31.5 BERT 26.9 43.0 18.2 35.0 30.4 42.8 18.2 34.3 17.7 40.8 47.9 52.7 28.5 34.2 30.0 47.0 40.0 51.8 28.6 42.4 mBERT 25.9 33.0 16.1 26.7 32.7 33.5 18.5 30.0 17.9 31.2 45.2 46.2 24.1 32.5 32.8 40.2 39.9 43.7 28.1 35.2 UPR-m 30.4 46.0 21.9 39.3 31.9 48.0 36.2 45.5 36.3 43.7 25.7 56.3 34.7 43.3 39.5 54.2 32.5 52.7 32.1 47.7 UPR-l* 38.9 48.8 27.8 43.3 36.5 49.7 38.1 48.3 42.5 47.3 35.2 59.8 43.3 47.2 49.0 57.2 38.9 55.5 38.9 50.8CrossBM25 51.2 45.2 47.4 40.0 51.0 47.5 50.2 46.8 50.8 44.3 58.0 57.5 54.6 45.5 59.0 54.3 50.8 57.5 52.6 48.7 Exact-BERT 50.7 38.8 49.1 41.8 48.8 47.0 46.2 46.5 50.1 44.7 59.0 57.3 54.8 45.8 59.3 55.7 51.2 57.3 52.1 48.3 BERT 52.3 45.7 49.7 42.8 50.4 48.8 49.3 44.2 49.4 43.5 60.5 58.3 55.9 46.0 59.7 57.0 52.5 59.3 53.3 49.5 CMJim 57.5 56.7 52.4 49.3 53.3 57.7 54.0 50.5 56.9 54.3 62.9 66.8 58.4 53.2 64.9 63.8 52.9 62.7 57.0 57.2 UPR-m 59.1 55.5 57.8 56.0 54.3 58.5 52.8 52.1 54.9 52.3 64.1 64.3 57.5 52.9 62.8 63.7 53.6 64.5 57.4 57.8 Exact-UPR-l* 59.3 56.0 56.3 57.1 59.7 59.5 54.4 53.7 55.4 54.0 65.6 68.8 58.5 53.3 62.4 62.9 54.1 62.8 58.4 58.7 UPR-l* 60.0 59.5 57.7 57.5 59.0 63.2 61.1 54.8 57.8 58.0 67.2 70.5 62.8 56.0 67.2 66.2 59.0 66.3 60.5 60.9 Table 6: Unsupervised experimental results of the QRon McMarket. Where M and P denote MRR and Precision@3, respectively. * denotes LLM-based methods. 
Methodau br ca cn fr in jp mx uk A VG M P M P M P M P M P M P M P M P M P M PSingleBERT-f 32.7 44.4 25.8 48.9 30.0 42.2 31.7 35.6 45.8 47.8 46.2 64.4 51.1 48.9 46.4 58.9 54.4 61.1 40.5 50.2 mBERT-f 32.8 41.1 21.9 40.0 27.5 40.0 29.4 34.4 41.9 45.6 42.9 56.7 48.6 41.1 42.3 51.1 52.9 56.7 37.8 45.2 T5 29.4 42.2 23.3 41.1 31.7 38.9 31.3 30.9 42.0 45.1 43.8 58.4 49.7 47.8 44.4 54.1 53.9 56.4 38.8 46.1 monoT5 30.1 44.4 23.1 41.1 31.3 43.2 31.4 31.1 43.2 46.7 49.4 63.3 53.5 49.9 47.8 54.4 53.4 58.9 40.4 48.1 Flan-T5* 39.7 51.1 26.9 50.0 34.0 46.7 38.3 42.2 52.2 54.4 51.4 63.3 54.8 64.4 49.3 60.0 55.8 62.2 44.7 54.9CrossExact-BERT-f 46.4 45.6 40.0 51.1 51.5 47.8 49.4 45.6 52.3 53.2 49.3 66.0 53.4 47.8 48.9 63.3 58.7 66.7 50.0 54.1 BERT-f 58.6 54.4 52.3 54.4 55.3 53.3 56.2 46.7 53.9 55.6 65.8 70.0 56.0 52.2 63.2 71.1 59.6 70.0 57.9 58.6 Exact-monoT5 52.6 48.9 50.7 53.8 54.6 55.6 54.4 44.9 53.2 53.1 63.1 71.0 56.9 52.1 62.8 67.8 59.3 66.8 56.4 57.1 monoT5 52.9 53.3 51.4 52.2 54.1 56.7 56.8 44.4 52.8 52.2 68.1 75.6 56.8 53.3 62.9 68.9 58.2 67.8 57.1 58.3 Exact-Flan-T5* 60.8 60.3 55.7 56.9 61.3 59.2 57.6 55.2 58.1 57.8 67.2 73.3 57.1 54.3 63.9 74.9 63.0 73.9 60.5 62.9 Flan-T5* 63.6 62.2 56.9 55.6 62.9 61.1 59.7 57.8 60.8 61.1 69.7 76.7 60.4 56.7 64.3 75.6 63.6 72.2 62.4 64.3 Table 7: Supervised experimental results of QR using McMarket q. UPR ) or methods that perform unsupervised training ( i.e.,CMJim ). On McMarket q, we adopt supervised fine- tuning methods ( i.e.,BERT-f/monoT5 ), and report testing performance. More experimental details including the description of each method are listed in Appendix F. 5.2 Experimental results 5.2.1 Review-based answer generation Tables 4 and 5 show the single/cross-market an- swer generation performance on McMarket and McMarket rdatasets8. We have the following obser- vations: first of all, cross-market models have supe- rior overall performance in all marketplaces com- pared with methods in the single-market setting. This result verifies RQ1 from the model perspec- tive, showing that external resources ( i.e.,reviews), from auxiliary marketplaces, can significantly con- tribute to improved outcomes in the main market- place. A clear advantage of LLMs over traditional methods is evident across various marketplaces. Notably, LLaMA-2 outperforms the overall cross- market McMarket dataset, with a notable ROUGE improvement from 13.5 in T5 to 18.3. Similarly, in McMarket r, the overall ROUGE score sees signifi- cant enhancement, rising from 29.1 to 38.5. This 8We report performance on 9 marketplaces and leave the 3 untranslated raw marketplaces ( es,it,de) for future work.provides an answer for RQ3 , offering insights into the efficacy and potential advancements of LLMs. 5.2.2 Product-related Question ranking Tables 6 and 7 show the question ranking results within the single/cross-market scenario on two datasets. We notice that most observations from Section 5.2.1 still hold. For example, performance advantages persist in product-related question rank- ing compared to a single-market scenario. This shows that the large number of relevant questions in the auxiliary marketplaces help address simi- lar questions in a low-resource marketplace. Fur- thermore, the performance boost is more obvious in marketplaces with a smaller scale ( i.e.,au,br) compared with marketplaces with a larger scale (i.e.,uk). For instance, the P@3 BM25 perfor- mance exhibits an improvement 28.3 and 21.7 for auandbrmarketplaces, respectively, compared with 17.3 in ukon McMarket. 
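For reference, the two ranking metrics reported in Tables 6 and 7 can be computed as below. This is a standard formulation written by us, not the paper's evaluation script, and it assumes the graded 0–2 usefulness labels have first been binarized into relevant/non-relevant judgments.

```python
from typing import Sequence

def mean_reciprocal_rank(relevance_lists: Sequence[Sequence[int]]) -> float:
    """relevance_lists[i] holds binary relevance labels of the ranked results for query i."""
    total = 0.0
    for rels in relevance_lists:
        total += next((1.0 / (rank + 1) for rank, rel in enumerate(rels) if rel), 0.0)
    return total / len(relevance_lists)

def precision_at_k(relevance_lists: Sequence[Sequence[int]], k: int = 3) -> float:
    return sum(sum(rels[:k]) / k for rels in relevance_lists) / len(relevance_lists)

# Example: two ranked lists with binary relevance judgments.
runs = [[0, 1, 0, 1], [1, 0, 0, 0]]
print(mean_reciprocal_rank(runs))  # (1/2 + 1/1) / 2 = 0.75
print(precision_at_k(runs, k=3))   # (1/3 + 1/3) / 2 ~= 0.333
```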
We also find that in the cross-market setting, the Exact-models have a weaker overall performance than their original counterparts ( i.e.,Exact-T5/Llama-2 v.s. T5/Llama- 2). For example, on McMarket q, the cross-market Exact-Flan-T5 is 1.4 weaker in terms of overall P@3 compared with Flan-T5. This demonstrates that valuable information can be found within sim- ilar products from auxiliary marketplaces, even when they possess slightly different titles. We list some cases in Appendix G to elaborate this. 7\n2 4 6 K81012141618Rouge-L Single Market Cross Market(a) McMarket 2 4 6 K283032343638Rouge-L Single Market Cross Market (b) McMarket r 20 30 40 K30405060P@3 Single Market Cross Market (c) McMarket 20 30 40 K3035404550556065P@3 Single Market Cross Market (d) McMarket q Figure 4: K-value analysis on different marketplaces. The upper row is on AG, the lower is QR. 6 External Analysis 6.1 Hyperparameter analysis We investigate the effect the number of retrieved product-related resources ( i.e.,questions, reviews) Kunder both single/cross-market scenarios. We report the average performance among every mar- ketplace on both McMarket and the corresponding subset. The results are shown in Figure 4. We observe that in AG, initially, the performance of Llama-2 in the cross-market setting is inferior to that in the single-market. However, after increasing the value of K, the optimal Kvalue in the cross- market scenario surpasses that in the single-market. This tendency indicates that richer information is contained in the cross-market reviews. In QR, the ranking performance in the single-market scenario begins to decline when Kis around 50. This indi- cates that some less relevant questions are retrieved, negatively impacting the results. Conversely, in the cross-market scenario, as a greater number of rele- vant questions are accessible, it helps to effectively mitigate this issue. 6.2 Multilingual analysis We undertake a comparative analysis between trans- lated and non-translated contents to delve deeper into performance variations across non-English marketplaces. In particular, within the single- market scenario, we compare mBERT with BERT in 5 non-English marketplaces. Here, ‘mBERT’ refers to a setup where all contents and the model br cn fr jp mx Market24681012Rouge-L BERT mBERT(a) McMarket br cn fr jp mx Market24681012Rouge-L BERT mBERT (b) McMarket r br cn fr jp mx Market102030405060P@3 BERT mBERT (c) McMarket br cn fr jp mx Market102030405060P@3 BERT mBERT (d) McMarket q Figure 5: Multilingual analysis on non-English market- places. The upper row is on AG, the lower is QR. itself are preserved and fine-tuned in their origi- nal language without translation. The results are shown in Figure 5. We notice that in the AGtask, concerning some non-Latin languages ( i.e.,cn,jp), the performance of single-market mBERT without translation results in higher score compared with T5 and BERT on two datasets. However, we ob- serve opposite results in some other non-English marketplaces ( i.e.,fr). Besides, in the QRtask, the performance of mBERT is inferior to the translated BERT model. This underscores a crucial future direction for this task: effectively enhancing per- formance in non-English marketplaces, an aspect that has been relatively underexplored. 7 Conclusions We propose a novel task of Multilingual Cross-market Product-based Question Answering (MCPQA). 
We hypothesize that product-related in- formation ( i.e.,reviews/questions) from a resource- rich marketplace can be leveraged to enhance the QA in a resource-scarce marketplace. Specifically, we focus on two different tasks: AGandQR. To fa- cilitate the research, we then propose a large-scale dataset named McMarket, which covers over 2 mil- lion questions across 13 marketplaces and 8 lan- guages. We also provide LLM-labeled subsets for the two tasks, namely McMarket rand McMarket q. We conduct experiments to compare the perfor- mance of models under single/cross-market scenar- ios on both datasets and demonstrate the superiority of cross-market methods in this task. 8\nLimitations The task of PQA holds significant potential in im- proving user experiences on e-commerce platforms. However, there are several limitations and chal- lenges associated. One major challenge is the qual- ity and reliability of the information available for answering user questions. Even though we make sure all of the information comes from real user- generated data, the reviews and QA pairs might still contain biased or inaccurate information. Further- more, language barriers and the availability of data in multiple languages add complexity to the task of product-related QA, particularly in cross-lingual scenarios. We discovered that the performance of non-English content remains unsatisfactory com- pared to results in English marketplaces. Limited availability of data in low-resource languages fur- ther exacerbates this challenge. To address them, continued research and development efforts are still under process which aim at improving data quality, handling language diversity, etc. We discuss it as our future work in Appendix C. Ethics Statement Our dataset is derived from the publicly avail- able product question-answering dataset, XMar- ket , which grants academic usage so that follow-up research papers can re-use the data. We adhere to the policies throughout the creation and utilization of this dataset to ensure the protection of user privacy. No personally identifi- able information is exposed or utilized in any form during the processes associated with the dataset. Also, we have licensed our data under CC0 1.0 DEED such that it will only be available for aca- demic research purposes to further protect the users. We make sure that individuals sign an agreement stipulating that the dataset will only be used for research purpose when we release the dataset.
[ "Let the LLMs Talk: Simulating Human-to-Human Conversational QA via Zero-Shot LLM-to-LLM Interactions", "Cross-Market Product-Related Question Answering", "xPQA: Cross-Lingual Product Question Answering in 12 Languages", "LLaMA: Open and Efficient Foundation Language Models", "Product Question Answering in E-Commerce: A Survey", "No Language Left Behind: Scaling Human-Centered Machine Translation", "Improving Passage Retrieval with Zero-Shot Question Generation", "Toward Personalized Answer Generation in E-Commerce via Multi-perspective Preference Modeling", "Cross-Market Product Recommendation", "Multi-Type Textual Reasoning for Product-Aware Answer Generation", "Conversational Fashion Image Retrieval via Multiturn Natural Language Feedback", "Answering Product-Questions by Utilizing Questions from Other Contextually Similar Products", "Meaningful Answer Generation of E-Commerce Question-Answering", "AnswerFact: Fact Checking in Product Question Answering", "XOR QA: Cross-lingual Open-Retrieval Question Answering", "RocketQA: An Optimized Training Approach to Dense Passage Retrieval for Open-Domain Question Answering", "Opinion-aware Answer Generation for Review-driven Question Answering in E-Commerce", "MKQA: A Linguistically Diverse Benchmark for Multilingual Open Domain Question Answering", "Less is More: Rejecting Unreliable Reviews for Product Question Answering", "Answer Ranking for Product-Related Questions via Multiple Semantic Relations Modeling", "SubjQA: A Dataset for Subjectivity and Review Comprehension", "Document Ranking with a Pretrained Sequence-to-Sequence Model", "TyDi QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages", "Discovering Relevant Reviews for Answering Product-Related Queries", "On the Cross-lingual Transferability of Monolingual Representations", "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer", "AmazonQA: A Review-Based Question Answering Task", "XQA: A Cross-lingual Open-domain Question Answering Dataset", "Improving Low-Resource Cross-lingual Document Retrieval by Reranking with Deep Bilingual Representations", "BERT Post-Training for Review Reading Comprehension and Aspect-based Sentiment Analysis", "Review-Driven Answer Generation for Product-Related Questions in E-Commerce", "Product-Aware Answer Generation in E-Commerce Question-Answering", "Knowledge as A Bridge: Improving Cross-domain Answer Selection with External Knowledge", "Review-Aware Answer Prediction for Product-Related Questions Incorporating Aspects", "Modelling Domain Relationships for Transfer Learning on Retrieval-based Question Answering Systems in E-commerce", "Modeling Ambiguity, Subjectivity, and Diverging Viewpoints in Opinion Question Answering Systems", "Addressing Complex and Subjective Product-Related Queries with Customer Reviews", "Answering Opinion Questions on Products by Exploiting Hierarchical Organization of Consumer Reviews", "The Probabilistic Relevance Framework: BM25 and Beyond", "TREC: Experiment and evaluation in information retrieval", "ROUGE: A Package for Automatic Evaluation of Summaries", "Bleu: a Method for Automatic Evaluation of Machine Translation", "Exploring the", "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding" ]
DocKD: Knowledge Distillation from LLMs for Open-World Document Understanding Models Abstract Visual document understanding (VDU) is a challenging task that involves understanding documents across various modalities (text and image) and layouts (forms, tables, etc.). This study aims to enhance generalizability of small VDU models by distilling knowledge from LLMs. We identify that directly prompting LLMs often fails to generate informative and useful data. In response, we present a new framework (called DocKD) that enriches the data generation process by integrating exter- nal document knowledge. Specifically, we provide an LLM with various document ele- ments like key-value pairs, layouts, and descrip- tions, to elicit open-ended answers. Our experi- ments show that DocKD produces high-quality document annotations and surpasses the direct knowledge distillation approach that does not leverage external document knowledge. More- over, student VDU models trained with solely DocKD-generated data is not only compara- ble to those trained with human-annotated data on in-domain tasks but also significantly excel them on out-of-domain tasks. 1 Introduction Visual document understanding (VDU) requires ex- tracting and analyzing both textual and non-textual information from a document. The textual informa- tion is usually obtained via optical character recog- nition (OCR), which only provides unstructured or naïvely ordered text. The non-textual information is visually-rich, demanding a solution to directly process the document image. Earlier studies of VDU primarily focused on identifying cer- tain parts of a document using heuristics or simple networks. Recent approaches have shifted towards pretraining multi-modal document understanding models to ad- dress the model’s comprehension of textual, visual, and layout features. However, the existing VDU LLMDocument library Generated annotations Question 1: What is the heading of the document?Answer 1: FILTRATION EFFICIENCYQuestion 2: Whatis the filter size mentioned in the document?Answer 2: 15mmFigure 1: We leverage LLM to generate document anno- tations given the text extracted from a document image. methods are limited by training on a small-scale, curated document dataset, compromising the gen- eralizability of VDU models to diverse documents. Thus, their performance heavily relies on the anno- tated training document set for downstream tasks. In this study, we aim to improve the general- izability of VDU models by distilling knowledge from large language models (LLMs). In particular, we introduce an open-world document understand- ingproblem, where the model needs to address the downstream task with a broader scope of doc- uments than covered by the available annotations. LLMs, given instructions to elicit open-ended an- swers, can create rich and diverse annotations, as illustrated in Fig. 1. For instance, we might in- struct the LLM to “ generate question-answer pairs from this document ”, along with docu- ment text extracted from OCR. However, this ap- proach entails a critical challenge, since LLMs of- ten struggle to comprehend unstructured OCR text , leading to its generation of low-quality annotations. Moreover, there is a vari- ety of non-textual information within documents which is not included in the LLM prompt. To overcome these challenges, we present DocKD, a document knowledge distillation frame- work that leverages external document information to enhance LLM data generation . 
In this frame- work, we extract various document elements ( e.g., key-value pairs, layout, and descriptions) along 1\nFigure 2: Document question-answering task perfor- mance (%) with GPT-4V and GPT-4 + OCR, where we utilized linearized OCR text (see Sec. 3.1). Appx. A.1 includes the experimental details. with text and formulate a generation prompt for LLMs with this visual information. The LLM outputs then serve as annotations to train a small- scale VDU model. While large multimodal mod- els like GPT-4V are also recog- nized for their visual-language capabilities, they still lag behind state-of-the-art OCR systems . As shown in Fig. 2, LLMs that uti- lize well-structured OCR text excel in document processing and understanding. Thus, we employ LLMs aided with visual tools for data generation. We demonstrate the efficacy of DocKD on three document understanding tasks: visual question an- swering, entity extraction, and classification. In each task, we introduce new tools for incorporating external document knowledge. Our experiments reveal that DocKD allows student models to attain open document understanding abilities, generaliz- ing to unseen documents, questions, entities, or categories. Our contributions are as follows: ◦We introduce DocKD, a framework designed to facilitate VDU models for open-world document understanding. It boosts the generalizability of VDU models by leveraging LLMs and external document knowledge to generate training data. ◦We demonstrate that DocKD surpasses direct knowledge distillation approach that relies solely on the LLM prompt tuning to generate data with- out document-specific knowledge. ◦In comparison to models trained with human- annotated data, student VDU models trained solely with DocKD-generated data achieve com- parable performance on in-domain tasks and excel in addressing out-of-domain tasks. This showcases DocKD’s potential to improve models for open-world documents understanding. 2 Related Work Document understanding models. Research in document intelligence has gained significant interest, developing ma- chines to understand document contents and ad- dress associated tasks. Previous studies have proposed document understanding models to improve the comprehension of multi-modality by integrating textual and layout information. These models later have evolved to incorporate visual information as well . These models are typically pretrained through self-supervised learning methods, such as word/line alignment or masked text/image modeling . Subsequently, they undergo a fine-tuning phase for specific down- stream tasks, which entails the manual annotation of documents. To facilitate the training of VDU models without the need for human labels, we pro- pose knowledge distillation approach from LLMs. Leveraging LLMs for data generation. Knowl- edge distillation (KD) from LLMs has been ex- plored across various natural language processing tasks . LLMs like GPT-3 are utilized for guided annotation of unlabeled data or for distilling reasoning ca- pabilities which is then used to fine-tune smaller language models. Among these, targeted distillation has demonstrated that identifying and amplifying the LLM’s knowledge to a high-quality dataset enables student models to attain task-specific knowledge. It has the potential to make specialized language mod- els that outperform in specific tasks, at the expense of generic performances . 
In visual instruction tuning research (Li et al., 2023a,b,c; Liu et al., 2023b,a), LLMs are employed to generate visual-language instruction-following data. For instance, LLaV A is trained on the instruction-following dataset for con- versation, description, and complex reasoning, cre- ated by prompting the LLM with bounding box coordinates of objects along with image captions. InstructBLIP incorporates diverse tasks, such as image question generation and video question answering. Our work is the first to extract knowledge from LLMs for open document under- standing tasks, exploring methods to inject visual document-specific knowledge into LLM and pro- 2\nLLM TeacherTextOCRDocument Expert ModelStudent ModelOCRBboxText(b) Training student model(a) Data generation for task formulation 𝐩!"#$𝐚!"#$𝐚!"#$𝐩!"#$𝐩%&'𝐚%&'Post-processing Image ImageSec. (3.1)(3.2)(3.3)Figure 3: Overview of DocKD. (a)To prepare training data, we provide an LLM teacher with a generation prompt pgengiven the document text. LLM generates answers agenwhich are then converted into ( ptask,atask). We explore methods to inject external document knowledge ( 99K) into the document text or pgento obtain high-quality annotations. (b)We train a student VDU model using the generated task prompt and answer pairs ( ptask,atask). duce high-quality data for training VDU models. 3 Document Knowledge Distillation Problem formulation. Similar to prior work , we formulate document understanding problem under a sequence-to-sequence (seq2seq) generation framework. That is, we design a task- specific prompt ptaskwhich asks a VDU model to solve the task and output an answer atask. DocKD involves an LLM teacher fTto generate these prompt and answer pairs. Given an image of a document page, we apply a pre-built OCR engine to extract its words and word bounding boxes. For simplicity, we represent a document input as d. The overall pipeline of the DocKD approach is described in Fig. 3. In Fig. 3 (a), we first construct a generation prompt pgenfor the task. Then, given pgenand document text dtextas inputs, the LLM generates agen,i.e.,fT(dtext,pgen)→agen. This can be readily parsed into (ptask,atask)by post- processing. Here, we can inject document-specific knowledge into the LLM inputs, so that it can bet- ter understand the document content and generate more accurate ( ptask,atask) pairs. In Fig. 3 (b), we train a student model fSto output an answer atask given dandptask,i.e.,fS(d,ptask)→atask. We exemplify the application of our training pipeline on three document understanding tasks: visual question answering (VQA), entity extrac- tion, and document classification. To summarize each section, we leverage document knowledge by using the OCR linearization model to improve dtext (Sec. 3.1), using the key-value detection model to guide pgen(Sec. 3.2), and introducing the docu- ment description into pgenfor better class candi- dates (Sec. 3.3). Refer to Appx. B for the full tem- plates of pgenin each task. 3.1 Document VQA Document VQA (Borchmann et al., 2021; Mathew et al., 2021, 2022; Van Landeghem et al., 2023) is the task of answering questions about documents. Given a document dand a corresponding question- answer (QA) pair ( q,a), we design the task prompt asptask=“Document: dtext.Question: q”, and atask=“Answer: a”. To distill knowledge for a VDU model, we investigate a way to prompt LLMs to generate QA pairs from documents. Designing QA generation task. 
Based on the OCR text as input context, we provide the LLM with a generation prompt pgento generate several QA pairs, as shown in Fig. 4 (a): fT(dtext,pgen)→agen={(q1,a1),(q2,a2), . . .} We randomly select one question and its corre- sponding answer from agenand create ( ptask,atask) for training the student model. We find that in- cluding an instruction into pgenhelps the teacher avoid creating low-quality QAs ( e.g., duplicated questions or answers inconsistent with context) and enables us to control the generation output so that it can be easily parsed into ( ptask,atask). We also note that pgeninstructs the LLM to out- put questions and answers together , which we find facilitates the generation of accurate QA pairs. Al- ternatively, we may ask the LLM to generate ques- tions first and then answer them, which we observe that the generated questions are often difficult to answer, or the answers do not match the questions. Introducing layout knowledge to OCR text. One limitation of the LLM’s QA generation lies on its text-to-text framework, where it requires the text to be organized in a semantically meaningful order. However, OCR text is a simple sequence of words typically ordered by raster scanning, which ignores the important layout and structural information of document pages. Therefore, QAs generated from such text are usually less challenging and do not cover the spatial relationship between entities. To ensure the LLM’s awareness on the text lay- out, we replace the raw OCR text with spatially 3\nOCR Table detect KVdetect LayoutLinearization Model <Instructions>Document: Generate QA pairs. FormatQuestion: What is the table number? Answer: 14 Question: What percentage of buyers were asked for proof of age in areas with no local ordinance? Answer: 51.2%LLM LLM(b) Using linearized OCR textDocument image𝐩!"#templateTable 14RequestforProofofBuyer'sAgeRelatedtoLocalTobacco-ControlOrdinancesWeakNoRowLocalLocalTotalOrdinanceOrdinanceBuyerAskedforProof4012526381.6%51.2%65.3%BuyerNotAsked9119140forProof18.4%48.8%34.7%Column49244403Total12.2%60.5%100% <Instructions>Document: Generate QA pairs.# Table 14Request for Proof of Buyer's Age Related to Local Tobacco-Control Ordinances| | Weak Local Ordinance | No Local Ordinance| Row Total ||---------------------------|----------------------|--------------------|-----------|| Buyer Asked for Proof| 40 81.6% | 125 51.2%| 263 65.3% || Buyer Not Asked for Proof | 9 18.4% | 119 48.8% | 140 34.7% || Column Total| 49 12.2% | 244 60.5% | 403 100% |(a) Using raw OCR text 𝐩!"#templateFigure 4: (a) When the input document text is in its raw OCR form, LLM produces simply extracted QA pairs. (b) When provided with linearized OCR text processed by a linearization model, LLM generates QA pairs that require visual layout knowledge to solve. linearized OCR text, where we organize document text into a markdown style as displayed in Fig. 4 (b). We use the linearization model inspired by , also extracting tables, key-value pairs, and layout information using Textract API1which assists the conversion to markdown. Interestingly, an LLM understands this markdown style; thus, the linearization model supplements document layout knowledge that is missing and helps the LLM to generate more diverse and higher-quality QAs. The student model trained with these QA pairs achieves notable VQA performances (Table 1). Refer to Appx. C.1 for the examples of generated QAs with raw or linearized OCR text. 
3.2 Entity Extraction Entity extraction aims to identify entities in the document that matches a given field name. Sim- ilar to the VQA task, we convert this task into a seq2seq form. For each field name fand the corresponding entity e,ptask=“Document: dtext. Question: what are entities of <f>?” andatask=“Answer: e”. The challenge of this task lies in that we do not know which field will be queried for a new docu- 1https://aws.amazon.com/textract/ment. Thus, we should generate as many diverse fields as possible for different kinds of entities, and train the entity extraction model to link those fields to the entities. Indeed, LLMs are known to be proficient at the entity recognition task and can even identify their names . Designing entity generation task. To generate data for entity extraction, we prompt LLMs to ex- haustively extract any entities present in a doc- ument. We design an entity extraction prompt pgen-entand send it together with the document text dtextas the inputs to an LLM, which then outputs a list of entities along with their field names: fT(dtext,pgen-ent)→agen-ent={(f1,e1),(f2,e2), . . .} where fiis a generated field name for the i-th entity ei. We find that LLMs are able to capture a group of words into a single entity and generate a field based on the context, as observed in Fig. 5 (a). Introducing KV entity knowledge to pgen.Al- though LLMs can identify entities from documents to a certain extent, we notice that they are unable to sufficiently enumerate the entities. They tend to list mostly the major ones, especially when there are many potential entities in the document, and fail to identify diverse types. To help LLMs to enumerate them, we propose to leverage a docu- ment expert model that extracts key-value (KV) pairs from documents. KV pairs are frequently found in documents, e.g., the entity “ Name:XYZ ” is composed of a key “ Name: ” and a value “ XYZ”. We detect all KV pairs using an external KV detection model, and send the detected KV pairs to LLMs to obtain their field names. Because there exist multiple KV pairs, we iteratively present each KV entity line by line to the LLM, with the previ- ous line’s output appended (refer to Fig. 5 (b)): fT(dtext,pgen-kv,(fi,ei)1:n,en+1)→agen-kv=fn+1 where fn+1is a field name for the KV entity en+1, as result of the (n+ 1) -th generation. This way, we make the LLM focus on the field generation only for the current KV entity. In addition, it has access to previous generated outputs, so if there are similar entities given, it can assign the same field. Note that we do not eliminate the entity genera- tion process by pgen-ent. Not all entities are detected by the KV detection model, so it is still required to extract non-KV entities. Hence, when generating non-KV entities, we provide the OCR text in which all KV entities are removed. 4\n<Instructions>Document:Extract the entities and their names.[TEXTw/o KV](a)𝐩!"#$%&'template <Instructions>Document:Extract the entities and their names.[TEXT](b)𝐩!"#$()template 1. PAGES: 13 ––(entity) ––(field)1. REVISED––Document Status2. Journal of Organic Chemistry––Journal Name3. ABSTRACT––Section Header4. N H CH3 N––Chemical Formula5. 1a––Chemical Compound ID6. M.R.B. COORDINATOR––Journal Editor Role1. PAGES: 13 ––Number of Pages2. Applied for ––1. PAGES: 13 ––Number of Pages2. Applied for ––Patent Status3. Not Filed X ––1. PAGES: 13 ––Number of Pages2. Applied for ––Patent Status3. Not Filed X ––Patent Status4. 
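The iterative KV field-naming step described above can be sketched as follows. This is a simplified illustration of the loop only; the actual prompt template pgen-kv is given in Appendix B, and `call_llm` is a placeholder for the teacher LLM.

```python
from typing import Callable, List, Tuple

def name_kv_entities(doc_text: str,
                     kv_entities: List[str],  # entity strings detected by the KV model
                     call_llm: Callable[[str], str]) -> List[Tuple[str, str]]:
    """Iteratively ask the LLM for a field name for each detected KV entity.

    At iteration n+1 the prompt contains the document text, all previously named
    entities (f_i, e_i)_{1:n}, and the new entity e_{n+1}; the LLM returns f_{n+1}.
    """
    named: List[Tuple[str, str]] = []
    for entity in kv_entities:
        history = "\n".join(f"{i + 1}. {e} -- {f}" for i, (f, e) in enumerate(named))
        prompt = (  # illustrative wording only
            "Assign a field name to the last entity, reusing earlier field names "
            "when entities are of the same type.\n"
            f"Document: {doc_text}\n{history}\n{len(named) + 1}. {entity} -- "
        )
        field = call_llm(prompt).strip()
        named.append((field, entity))
    return named
```

Because the running history of already-named entities is carried along, similar entities in later iterations can be mapped to the same field, as in the "Patent Status" example of Fig. 5 (b).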
PROPOSED FOR PUBLICATION OR PRESENTATION ––iter 1 iter 2Data generation result iter 3LLM output 𝐚!"#$%&' LLM outputs 𝐚!"#$() Document TypePaper TitleDocument StatusJournal Name Section Header Chemical FormulaChemical Compound ID Journal Editor RoleManuscript TitleAuthorsPublication TypeManagement ApprovalNumber of PagesNumber of FiguresNumber of TablesDate Patent Status Document TypePaper TitleDocument StatusJournal Name Section Header Chemical FormulaChemical Compound ID Journal Editor RoleManuscript TitleAuthorsPublication TypeManagement ApprovalNumber of PagesNumber of FiguresNumber of TablesDate Patent Status Document TypePaper TitleDocument StatusJournal Name Section Header Chemical FormulaChemical Compound ID Journal Editor RoleManuscript TitleAuthorsPublication TypeManagement ApprovalNumber of PagesNumber of FiguresNumber of TablesDate Patent StatusFigure 5: The templates on the left serve as input prompts to the LLM, for (a) generating non-KV entities and (b) naming KV entities, respectively. For (b), in the iteration n, then-th KV entity is provided as input as well as the output from the previous iteration. On the right, we show the result of generated entities and field names, with blue boxes representing non-KV entities and red boxes representing KV entities. 3.3 Document Classification We formulate a classification task within a seq2seq framework so that a VDU model can generalize to any novel classes. Specifically, we design the input prompt as ptask=“Document: dtext.Question: what is the class of this document? choose from the following:{ candidate list}”, and correspondingly, atask=“Answer: class label ”. The candidate list contains document class labels, including the answer class. We collect the LLM-generated labels to fill out the prompt without human annotations. Designing document class generation task. We generate candidates of class labels that can further be used to formulate a downstream classification task. For this, we need two types of generation prompts. pgen-posis used to generate candidates of a given document’s type, and we call this output listpositive labels that may be used as an answer. In order to build a classification task, we not only need the document types that match the given docu- ment but also the candidate types that do not match the document. LLM is instructed with pgen-negto suggest these types, which we call negative labels . Introducing knowledge from agentopgen.We notice that when an LLM is directly prompted to predict document classes, it frequently generates class labels that are overly general, resulting in low diversity. To address this, we incorporate docu- ment descriptions to pgenwhich we find can facil- itate LLMs to better summarize a document and generate more diverse class labels. LLM is instructed with pgen-desc=“Describe this document in one sentence ”. The out- put document description agen-descis then appended to the generation prompt for positive labels. This strategy makes the positive labels more diverse and detailed, e.g.,letter →consumer letter . Subsequently, we also use the output positives in the negatives generation prompt, in order to avoid generating labels that are similar to the positives. We summarize the generation steps as follows: (1) description: fT(dtext,pgen-desc)→agen-desc, (2) positives: fT(dtext,pgen-pos,agen-desc)→agen-pos, (3) negatives: fT(dtext,pgen-neg,agen-pos)→agen-neg. 
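A compact sketch of this three-step chain is shown below. It is our illustration rather than the paper's code: the prompt wordings are placeholders (the real templates are in Appendix B.3), and `call_llm` stands in for the teacher LLM.

```python
from typing import Callable, Dict, List

def generate_class_labels(doc_text: str, call_llm: Callable[[str], str],
                          n_pos: int = 3, n_neg: int = 10) -> Dict[str, object]:
    """Chain the three generation prompts: description -> positive labels -> negative labels."""
    description = call_llm(  # step (1): p_gen-desc
        f"Describe this document in one sentence.\nDocument: {doc_text}")
    positives: List[str] = call_llm(  # step (2): p_gen-pos, conditioned on the description
        f"Suggest {n_pos} document types that match this document, one per line.\n"
        f"Document: {doc_text}\nDescription: {description}").splitlines()
    negatives: List[str] = call_llm(  # step (3): p_gen-neg, conditioned on the positives
        f"Suggest {n_neg} document types that do NOT match this document and are "
        f"different from: {', '.join(positives)}. One per line.\n"
        f"Document: {doc_text}").splitlines()
    return {"description": description, "positives": positives, "negatives": negatives}
```

Conditioning step (2) on the generated description and step (3) on the generated positives is what pushes the labels toward finer-grained, document-specific categories.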
While this approach does not directly leverage vi- sual information, it adopts a similar strategy to the chain-of-thought reasoning that encourages better outputs by prompting the instruction steps to LLMs. Candidate list formulation. We select one posi- tive label the list agen-pos, as an answer. For other non-answer candidates, we randomly sample a few from agen-neg. We train the model to choose one among the {positive +negatives }list. In ad- dition, the generated description agen-descis ap- pended to each positive label to give a hint about the class. We also gather all unique negative classes and use the LLM to produce descriptions for these types, which are also appended to the labels. Refer to Appx. B.3 for the prompt we used based on this. 4 Experiments and Results 4.1 Implementation Details Models. We compare the DocKD performance with the plain KD approach, naïvely using dtextand pgenwithout external document knowledge, as a prompt engineering baseline. By default, we use 5\n(a) VQA (b) Entity extraction (c) Classification model size val ANLS val EM test F1 test ANLS test mAcc test mAcc⋆ LLM zero-shot prediction Flan-T5 large 750M 59.6 48.8 0.90 2.57 46.7 54.0 Flan-T5 XXL 11B 70.4 60.0 21.2 24.1 52.0 58.1 LLaV A-1.5 13B 49.0 37.3 9.12 5.20 36.1 43.3 Llama-2 13B 59.8 41.8 22.5 21.9 47.8 57.3 Vicuna-1.3 33B 62.4 51.9 24.3 27.6 48.4 57.7 Falcon 40B 72.4 62.7 48.5 38.7 37.9 43.3 Claude-2 unk. 79.6 70.0 61.5 49.2 58.9 67.8 VDU models trained with only generated data Flan-T5 large+ KD 750M 70.4 59.4 24.4 56.3 52.3 59.8 Flan-T5 large+DocKD 750M 72.9 62.7 55.9 66.1 57.0 71.7 DocFormerv2 large+ KD 750M 76.9 67.4 30.2 51.8 58.6 69.0 DocFormerv2 large+DocKD 750M 81.0 71.9 61.5 68.7 62.4 73.9 Table 1: Document understanding results for LLMs and student VDU models. Note that none of these models were trained with human-labeled annotations. (a) DocVQA validation performance. KD baseline uses raw OCR text for the QA generation, while DocKD uses linearized OCR text. (b) Entity extraction performance on CORD (F1) and DeepForm (ANLS). KD baseline generates entities without KV detection. (c) RVL-CDIP test accuracy. For DocKD, both class labels and descriptions are generated. mAcc⋆measures the mean accuracy excluding four ambiguous categories: memo, filefolder, handwritten, and presentation. Claude-22as a teacher LLM and DocFormerv2 large as a student VDU model, while partially using DocFormerv2 baseto facilitate more efficient analysis. The training procedure of DocFormerv2 (DFv2) closely follows that of the original paper, where it jointly encodes document image, OCR text, and bounding boxes. The pro- vided query ( ptask) is appended to the text ( dtext), and the decoder outputs the target answer ( atask). For comparison, we also employ Flan-T5 large as a student language-only model, since the DFv2 structure is based on T5 . To provide a base comparison for each task, we additionally present the zero-shot performance of instruction-tuned LLMs and a vision-language multi-modal foundation model . Datasets. For the LLM’s data generation, we use a randomly sampled subset of Industry Document Library (IDL, Lewis et al. (2006)) as unannotated document images. To accurately evaluate the open- world capabilities, we have removed all IDL docu- ments that overlap with any of our downstream task datasets and excluded them from the data genera- tion phase. 
For the evaluation datasets and metrics, we use DocVQA validation set in the document VQA task, measured by ANLS (average normalized Levenshtein similarity) and EM (exact match). In the entity extraction, we use two datasets, CORD and DeepForm , evaluated by entity-level F1 score and ANLS, re- spectively. In the classification task, we use RVL- 2https://www.anthropic.com/index/claude-2CDIP test set, evaluated by the mean accuracy over 16 document categories. Refer to Appx. D for more details on each dataset. 4.2 Evaluation on Open-World Document Understanding Tasks Document VQA. Claude-2 generates QAs from randomly sampled 100K IDL documents. We prompt Claude-2 to generate three QA pairs per document sample, and the trained student model is evaluated on DocVQA . Table 1 (a) summarizes the DocVQA performances of the distilled students as well as the LLMs, where none of these models have been trained on human annotations for the document VQA task. We con- firm that knowledge-distilled student models can effectively answer document questions, being com- parable with much larger-size language models. Compared to the plain KD with raw OCR text, DocKD significantly enhances the performance up to 81.0% ANLS. This result is comparable to us- ing human-labeled annotations (refer to Sec. 4.3), which implies the high quality of generated data. Furthermore, the performance gain is greater with DFv2 (vision + language) than Flan-T5 (language), which shows that the linearization model supple- ments informative visual knowledge. Entity extraction. For generating the entities with KV detection, we need documents with rich key and value information. Such documents are frequently found from forms or invoices. Thus, in- stead of using IDL, we use the invoices subset of RVL-CDIP for entity genera- tion, sampling 5K documents. Table 1 (b) demon- strates that if the data generation does not involve the KV detection model but only exploits the en- 6\nPositive labelsNegative labelsFigure 6: Top-10 frequently generated document class labels from IDL . DocVQA CORD DeepForm RVL-CDIP teacher student val ANLS test F1 test ANLS test mAcc Falcon-40B DFv2 base 68.6 55.1 48.5 54.7 Falcon-180B DFv2 base 71.3 59.8 62.0 53.8 Claude-2 DFv2 base 77.2 60.2 64.2 61.9 Falcon-40B DFv2 large 74.9 59.8 61.2 55.6 Falcon-180B DFv2 large 76.8 66.6 64.5 58.5 Claude-2 DFv2 large 81.0 61.5 68.7 62.4 Table 2: We compare the Claude-2 teacher with Falcon- 40B and Falcon-180B teacher models, and the DFv2 large (750M) and DFv2 base(232M) student models. tity generation prompt pgen-ent, the LLM produces low-quality entities and field names, leading to the subpar performance of the student models. Document classification. We sample 50K doc- uments from IDL to generate class labels. For each document sample, Claude-2 generates one- sentence description, three positive labels, and ten negative labels. Table 1 (c) shows that our distilla- tion framework enables the student model to clas- sify novel documents, removing the need to pre- define categories or collect annotated documents to train a classification model. In addition, we find that DocKD’s description generation induces more knowledge on documents compared to the plain KD, improving the accuracy by large margin: 58.6% →62.4% mAcc. Fig. 6 shows the spectrum of generated class la- bels from the IDL documents. After filtering out invalid labels ( e.g., too long or outliers), it amounts to 49.9K unique positive labels and 10.5K unique negative labels. 
Before introducing the description generation, we had 17.2K unique positives, imply- ing that the provision of description contributes to increasing the label diversity. Smaller teacher and student models. Table 2 presents the result with a smaller teacher, Falcon- 40B , and a smaller student, DFv2 base. We find that smaller teacher and student models can degrade the data genera- tion quality and task performances. In contrast, KDQ: What are the sample codes mentioned in the table? A: J112 and J113aDocKDQ: What is the mean moisture content percentage for sample code J112 at sampling point EX-HOPPER? A: 14.86 KDResearch proposalaDocKDDescription: A recommendation letter outlining suggested studies and analyses to be performed on an expanded tobacco blend product, including estimated costs and timelines. Class label: Technical recommendation letterKDBank Name, Bank Address, Payee Address, Check Number, Check Routing NumberaDocKDCompany Name, Company Type, Company Address, Bank Name, Bank Address, Payee Name, Check Number, Amount, Date, Voucher Number, Payment Details(a) (b) (c) Figure 7: Comparison between data generated by KD and DocKD: (a) document VQA, (b) entity extraction, and (c) document classification. larger and stronger teacher models like Claude- 2 or Falcon-180B can generate better data, leading to the highest task performances. For instance, Claude-2 better un- derstands the linearized OCR text than Falcon-40B does, so it generates diverse and accurate QAs from the layout-aware text. Refer to Appx. C for com- parisons between different teacher models. Visualization and statistics of generated data. Fig. 7 visualizes some sample data generated by using KD and DocKD respectively. For docu- ment VQA, DocKD generates more challenging QA pairs that requires understanding the structure of the table. In Fig. 7 (a), the question generated by DocKD requires understanding the relationship between “mean”, “moisture content %”, “sample code” and “sample point”. For entity extraction, we show a common example in Fig. 7 (b) where we list the entity names extracted by KD and DocKD. We see that DocKD is able to capture significantly more entities than KD. For document classification, we note that DocKD generates a document descrip- tion which help to give class labels that aligns better with the document content. Additional examples of DocKD-generated data are available in Appx. C. Table 3 shows some statistics of the data gener- ated by KD and DocKD. For entity extraction, we calculate the number of unique entity types (# of 7\nentity extraction document classification method # of ent. types # ent. per doc. # pos. labels # neg. labels KD 1454 11.5 4674 2476 DocKD 2316 20.1 6053 3013 Table 3: Statistics of data generated by KD and DocKD. DocVQA val DUDE val human anno. DocKD -generated ANLS EM ANLS EM (a)human anno. = DocVQA train set ✓ 80.6 72.0 53.8 37.2 ✓ 77.2 68.6 52.6 36.0 ✓ ✓ 83.4 76.2 55.3 38.8 (b)human anno. = DUDE train set ✓ 66.0 54.9 54.4 40.0 ✓ 77.2 68.6 52.6 36.0 ✓ ✓ 79.1 70.8 58.0 42.1 Table 4: The document VQA task performance using a human-annotated training dataset. DocKD indicates the generated QAs from the IDL documents. The teacher model is Claude-2, and the student model is DFv2 base. For results with DFv2 large, refer to Appx. A.2. ent. types) and average number of entities gener- ated per document (# of ent. per doc.). We note that DocKD can generate significantly more entities and entity types than KD, by leveraging external doc- ument knowledge. 
Similarly, we also summarize the number of unique document labels generated by KD and DocKD for document classification. For both the positive and negative class labels, DocKD generates more unique labels than KD. We attribute this to leveraging document descriptions for generation, which helps LLMs generate fine-grained labels that align better with the document.

4.3 Leveraging Human-Labeled Annotations

Human annotation QAs. We demonstrate that unsupervised knowledge from an LLM remains valuable even when human annotations are available for training. As shown in Table 4 (a), augmenting DocVQA human annotations with DocKD-generated QAs, which incorporate a variety of document knowledge, results in stronger student models, achieving 83.4% ANLS on the DocVQA validation set. In a more practical scenario where human-labeled documents have a different distribution, we utilize DUDE, a dataset featuring multi-domain documents with diverse VQA annotations (text, numerical, yes/no, lists, etc.). In Table 4 (b), DocKD-generated data significantly enhances student model performance, reaching 79.1% ANLS, compared to 66.0% with human annotations alone.

Open-set classification. One of the main applications of distilling the LLM's knowledge lies in its open-set classification ability, i.e., it can classify documents of unseen categories. The diversity of generated class labels ensures robustness, while a fixed set of annotations makes it hard to adapt to unseen labels. To verify this, let C denote the set of all RVL-CDIP labels, and we split C into two sets: C1 = {email, letter, memo, news article} and C2 = C − C1. We train the model with documents from the web, crawled by C1 labels. Table 5 shows that this supervised model (S) makes highly biased predictions: while it predicts known classes accurately (86.1%), it struggles to identify unknown categories in C2. In contrast, DocKD without any supervised data (U) enables generalization to unseen types of documents. Further, merging the C1 annotations with the generated data (S+U) leverages the advantages of both supervised and unsupervised learning.

model | RVL-CDIP test C1 (known) | RVL-CDIP test C2 (unk.) | RVL-O | IRS-50 | WikiDoc
Falcon-40B | 62.3 | 27.4 | 76.3 | 54.0 | 39.8
DFv2 base (S) | 86.1 | 0.08 | 0.00 | 0.00 | 0.00
DFv2 base (U) | 50.5 | 56.1 | 42.6 | 74.0 | 44.4
DFv2 base (S+U) | 77.1 | 52.1 | 52.8 | 82.0 | 45.2
Table 5: Open-set classification performance. S: supervised training with C1 annotations, U: unsupervised DocKD from LLM-generated class labels. RVL-O, IRS-50, and WikiDoc are out-of-domain evaluation sets.

We also evaluate our model on a more realistic distribution of data and labels, using documents outside the domain of IDL or RVL-CDIP. To this end, we use three evaluation sets, RVL-O, IRS-50, and WikiDoc, all of which contain out-of-domain documents (refer to Appx. D for the details of the datasets). While the supervised model cannot handle these novel categories, unsupervised DocKD makes the student model adaptable even to out-of-domain classification and outlier detection, following the LLM teacher's robust predictions.

5 Conclusion

We address the open-world document understanding problem by instructing LLMs to generate document annotations, given a generation prompt and OCR text. To successfully achieve this, we suggest the DocKD framework, designing task prompts and answers that LLMs can easily generate and incorporating external document knowledge from various sources.
Consequently, the student models distilled by DocKD annotations demonstrate remarkable performance improvements compared to the plain KD approach in various document tasks. The integration with human-labeled annotations further enhances model performance.

Limitations

This study represents the pioneering work to utilize LLMs for open-world document understanding, specifically focusing on relatively simpler documents and tasks. We have applied LLMs to generate document annotations, and subsequently, trained student VDU models using these annotations. Our primary focus has been on common document understanding tasks such as visual question answering, entity extraction, and classification, which primarily involve documents containing tables, layouts, and forms.

However, extending our approach to handle documents with more complex visual elements, such as intricate figures, diagrams, or dense equations, remains an area for future exploration. While addressing more sophisticated problems could significantly enhance the model's applicability, such advancements would require efforts in developing new generative prompts. Furthermore, integrating LLMs with document expert models and large multimodal models, such as GPT-4V, holds potential to synthesize visually rich, informative annotations. This integration has not yet been explored and represents a promising avenue for future research. Despite these limitations, our study lays foundational work for more complex applications in the field of document understanding using LLMs.
[ "LayoutLLM: Layout Instruction Tuning with Large Language Models for Document Understanding", "TextMonkey: An OCR-Free Large Multimodal Model for Understanding Document", "Enhancing Visual Document Understanding with Contrastive Learning in Large Visual-Language Models", "InstructDoc: A Dataset for Zero-Shot Generalization of Visual Document Understanding with Instructions", "A Multi-Modal Multilingual Benchmark for Document Image Classification", "Improved Baselines with Visual Instruction Tuning", "PatFig: Generating Short and Long Captions for Patent Figures", "Llama 2: Open Foundation and Fine-Tuned Chat Models", "MIMIC-IT: Multi-Modal In-Context Instruction Tuning", "Layout and Task Aware Instruction Prompt for Zero-shot Document Image Question Answering", "Document Understanding Dataset and Evaluation (DUDE)", "InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning", "Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes", "Specializing Smaller Language Models towards Multi-Step Reasoning", "Is GPT-3 a Good Data Annotator?", "Teaching Small Language Models to Reason", "Unifying Vision, Text, and Layout for Universal Document Processing", "Scaling Instruction-Finetuned Language Models", "Evaluating Out-of-Distribution Performance on Document Image Classifiers", "ERNIE-Layout: Layout Knowledge Enhanced Pre-training for Visually-rich Document Understanding", "Unified Pretraining Framework for Document Understanding", "LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking", "Want To Reduce Labeling Cost? GPT-3 Can Help", "DocFormer: End-to-End Transformer for Document Understanding", "SelfDoc: Self-Supervised Document Representation Learning", "LayoutLMv2: Multi-modal Pre-training for Visually-rich Document Understanding", "A Survey of Deep Learning Approaches for OCR and Document Understanding", "DocVQA: A Dataset for VQA on Document Images", "Knowledge Distillation: A Survey", "Language Models are Few-Shot Learners", "Visual Detection with Context for Document Layout Analysis", "A Unified MRC Framework for Named Entity Recognition", "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer", "CORD: A Consolidated Receipt Dataset for Post-OCR Parsing", "ICDAR 2019 Competition on Scene Text Visual Question Answering", "FUNSD: A Dataset for Form Understanding in Noisy Scanned Documents", "A Table Detection Method for PDF Documents Based on Convolutional Neural Networks", "Distilling the Knowledge in a Neural Network", "Evaluation of deep convolutional nets for document image classification and retrieval", "TableSeer: automatic table metadata extraction and searching in digital libraries", "Building a test collection for complex document information processing", "Impossible Distillation: from Low-Quality Model to High-Quality Dataset & Model for Summarization and Paraphrasing", "PaD: Program-aided Distillation Specializes Large Models in Reasoning", "A Benchmark for Structured Extractions from Complex Documents", "DUE: End-to-End Document Understanding Benchmark", "BROS: A PRE-TRAINED LANGUAGE MODEL" ]
NormTab: Improving Symbolic Reasoning in LLMs Through Tabular Data Normalization
NormTab: Improving Symbolic Reasoning in LLMs Through Tabular Data Normalization Abstract In recent years, Large Language Models (LLMs) have demonstrated remarkable capa- bilities in parsing textual data and generating code. However, their performance in tasks in- volving tabular data, especially those requiring symbolic reasoning, faces challenges due to the structural variance and inconsistency in ta- ble cell values often found in web tables. In this paper, we introduce NormTab, a novel framework aimed at enhancing the symbolic reasoning performance of LLMs by normal- izing web tables. We study table normaliza- tion as a stand-alone, one-time preprocessing step using LLMs to support symbolic reason- ing on tabular data. Our experimental eval- uation, conducted on challenging web table datasets such as WikiTableQuestion and Tab- Fact, demonstrates that leveraging NormTab significantly improves symbolic reasoning per- formance, showcasing the importance and ef- fectiveness of web table normalization for en- hancing LLM-based symbolic reasoning tasks. 1 Introduction Tables are a fundamental format for structured data representation and are widely used across vari- ous sources, including relational databases, web pages, and financial documents. However, many ta- bles within documents and web pages are designed for direct human consumption and often lack the strict formatting that is expected in relational ta- bles. This discrepancy poses significant challenges when querying them using languages such as SQL, integrating them with relational databases, and pro- cessing them within applications. Large Language Models (LLMs) have emerged as powerful tools for semantic parsing both textual and tabular data and perform- ing complex tasks such as code generation. Trained on vast amount of Internet data, including both text and tables, and employing techniques such as Chain of Thought (CoT) prompting and self-consistency , these models outperform many traditional models on various table reasoning tasks . However, their performance in tasks involv- ing tabular data, particularly those requiring sym- bolic reasoning, is often hindered by the structural variability and inconsistencies commonly found in web tables. Symbolic reasoning over tables neces- sitates a clear understanding of the table structure and values, and may involve constraining rows and columns, which can be challenging when dealing with unstructured or noisy web tables . Our hypothesis is that normal- izing ill-formatted tables can address this challenge, enabling the execution of symbolic programs (such as SQL or Python) on the tables and making rea- soning tasks involving comparison, aggregation, and mathematical calculations more manageable. Moreover, normalization may enhance the explain- ability by allowing the tracking of the intermediate steps in reasoning. Consider the table QA task shown in Figure 1. Retrieving answers from the table on the left using a symbolic approach such as SQL is challenging due to the irregular structure of the data and the lim- itations of SQL. While an LLM may handle simple look-up questions, it struggles with tasks requir- ing complex aggregation and arithmetic operations. However, the normalized version of the same table, shown on the right, can be easily analyzed, allow- ing text-to-SQL approaches to effectively obtain the answers to questions. 
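To make this contrast concrete, the short sqlite3 sketch below replays the October-attendance question from the Figure 1 example on a handful of normalized rows; the rows are a small excerpt of that example, and the query mirrors the one shown in the figure.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# A few rows of the normalized table: ISO dates, numeric attendance,
# and the composite result split into result_type / result_score.
cur.execute("CREATE TABLE T (date TEXT, result_type TEXT, result_score TEXT, attendance INTEGER)")
cur.executemany(
    "INSERT INTO T VALUES (?, ?, ?, ?)",
    [("1981-09-06", "w", "27-20", 63198),
     ("1981-10-04", "w", "17-10", 44350),
     ("1981-10-11", "w", "35-17", 42671),
     ("1981-10-18", "l", "38-10", 60474)],
)

# The aggregation query from Figure 1 now works directly on the normalized columns.
total = cur.execute(
    "SELECT SUM(attendance) FROM T WHERE strftime('%m', date) = '10'"
).fetchone()[0]
print(total)  # 147495 for these three October rows

# On the unnormalized form ("october 4, 1981", "44,350"), the same intent falls
# back to string matching and SUM over text, which silently returns wrong totals.
```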
Existing models for table reasoning typically rely on a multi-step framework, where an LLM performs a sequence of actions such as adding columns before additional scripts are invoked to process data, retrieve cell values, or compute answers to questions. These models are often dependent on the question and table structure and do not address the root cause of table irregularity, making them less scalable. An alternative is normalizing tables, often part of a larger process known as data wrangling, which involves processing, cleaning, and organizing data into a format that is suitable for further analysis. Significant progress has been made on data wrangling, with recent approaches employing LLMs for tasks such as error detection and data imputation. Selected operations, such as normalizing numbers and dates, may also be introduced into data processing pipelines to facilitate further analysis. To the best of our knowledge, our work is the first to study table normalization as a stand-alone, one-time preprocessing step using LLMs.

[Figure 1 (table title: 1981 Houston Oilers season) contrasts the original unnormalized web table (textual dates such as "october 4, 1981", composite results such as "w 17-10", comma-formatted attendance such as "44,350") with its normalized version (dates in YYYY-MM-DD, attendance as plain integers, and the result column split into result_type and result_score). On the normalized table, SQL such as SELECT SUM(attendance) FROM T WHERE strftime('%m', date) = '10' returns the correct total (200227), whereas on the unnormalized table both direct LLM querying and SQL over the raw strings (e.g., WHERE date LIKE '%october%') produce wrong answers.]

Figure 1: An example of a Table QA task, with the original unnormalized web table shown on the left and its normalized version on the right. Retrieving answers with a symbolic approach from the unnormalized table poses difficulties due to the inconsistent formatting of the date, result, and attendance columns. Also, direct querying with LLMs often fails for questions involving numerical operations. Normalization enables effective text-to-SQL conversion, as shown by the normalized table on the right.

In this paper, we introduce NormTab, a framework designed to normalize web tables to align them with the structured format of relational database tables.
NormTab addresses challenges such as structural variance, mixed data formats, and extraneous information, thereby facilitating ac- curate and efficient symbolic reasoning and query processing using LLMs. Our work explores two key research questions: •RQ1: How can we leverage LLMs’ textual un- derstanding to effectively clean and normalize web tables? •RQ2: How can web table normalization en- hance table reasoning tasks, particularly in the context of LLM-based symbolic reasoning? Our proposed solution leverages the advanced textual understanding capabilities of LLMs to inde- pendently process and normalize web tables, with- out relying on specific questions. By normalizing tables in this manner, we enable a robust foundation for any downstream task involving table reasoning. This approach allows for multiple questions to be asked from a single, normalized table, significantly enhancing reasoning and query capabilities. More- over, our normalization process only needs to be performed once, unlike other models that require repeated adjustments based on different questions, highlighting a key advantage of our approach. Through a comprehensive experimental evalua- tion conducted on challenging web table datasets such as WikiTableQuestions and TabFact , we assess the effectiveness of NormTab in improving table 2\nreasoning performance. These datasets provide diverse examples of table structures and content, allowing us to thoroughly investigate the impact of web table normalization on LLM-based symbolic reasoning tasks. By addressing RQ1 and RQ2, we aim to demonstrate the importance of web ta- ble normalization and its potential to enhance the capabilities of LLMs in handling tabular data for complex reasoning tasks. Key Contributions of our paper are: •We introduce NormTab, a novel framework that enhances LLMs’ symbolic reasoning on tabular data by normalizing web tables. NormTab includes structure normalization (e.g., transposing tables, flattening rows and columns) and value normalization (e.g., re- moving extraneous strings, standardizing the formatting of dates and numbers) to ensure consistency and accuracy in reasoning tasks. •We demonstrate how LLMs’ textual under- standing can be effectively utilized for data cleaning and transformation tasks, addressing challenges such as structural variance, mixed values, noise, and substring extraction in web tables •We conduct extensive experimental evalua- tions using challenging web table datasets, in- cluding WikiTableQuestion and TabFact, to assess the effectiveness of NormTab in im- proving table reasoning performance, partic- ularly in the context of LLM-based symbolic reasoning tasks. 2 Related Work Our work is related to a few areas as discussed next. General LLMs and CoT Related to our work is the line of research aimed at improving the per- formance of LLMs on var- ious reasoning tasks, with capabilities spanning mathematics, common sense, and symbolic rea- soning . These approaches often excel using few- shot prompts without requiring fine-tuning. Their reasoning abilities can be further enhanced by breaking complex tasks into steps, employing meth- ods like chain-of-thought (CoT) prompting and Zero-CoT. For instance, the Table- CoT model utilizes in-context learn- ing and CoT prompting to generate answers for table-based tasks. Several studies have utilized instruction tun- ing and supervised fine-tuning to enhance the performance of LLMs on table reasoning tasks. 
Notable examples include TableLLaMA and TableGPT , which have shown significant improvements in specific applications. In contrast, the BINDER model extends the capabilities of LLMs to programming language generation for solving commonsense problems. Additionally, the DATER approach employs LLMs to decompose tables and questions, facilitating table-based QA and fact verification tasks. These diverse approaches underscore the potential of LLMs in handling complex reasoning tasks involving tabular data. Reasoning over structured data/tables An- other line of related work is reasoning over tabular data. Several studies leverage symbolic reasoning through text-to-SQL or Python code for table-based reasoning tasks. However, for effectively utilizing the symbolic code generation approach with LLMs for table reasoning tasks, it is crucial to ensure that the table is in the proper format . Chain-of-Table enhances rea- soning on tabular data by iteratively transforming and evolving table structures through a series of rea- soning steps, including row/column selection, cell splitting to refine table representations for specific reasoning tasks. Their method employs in-context learning to direct LLMs in iteratively generating operations and updating the table, thus forming a chain of reasoning specific to tabular data. Liu et al. (2023) explore the capabilities of LLMs in interpret- ing and reasoning over tabular data, emphasizing robustness to structural perturbations, comparing textual and symbolic reasoning, and examining the potential of aggregating multiple reasoning path- ways. Their findings indicate that structural vari- ations in tables presenting the same content can significantly degrade performance, particularly in symbolic reasoning tasks. They propose a method for table structure normalization through transposi- tion to mitigate this issue and find that while textual reasoning slightly outperforms symbolic reasoning, each approach has distinct strengths depending on the task. 
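Transposing a table is mechanically trivial; the hard part is deciding when a table is column-oriented and whether a trailing row is an aggregate. The sketch below uses a crude string-based heuristic purely for illustration; NormTab itself delegates this structural judgment to the LLM, as described in Section 3.1.

```python
import pandas as pd

def looks_like_header(cells) -> bool:
    # Crude heuristic: header cells tend to be non-numeric strings.
    return all(isinstance(c, str) and not c.replace(".", "").replace(",", "").isdigit()
               for c in cells)

def normalize_structure(rows):
    """rows: list of lists; the first row or the first column may act as the header."""
    # Column-oriented table: the first column reads like a header while the
    # rest of the first row does not -> transpose so rows describe entities again.
    if looks_like_header([r[0] for r in rows]) and not looks_like_header(rows[0][1:]):
        rows = [list(col) for col in zip(*rows)]
    # Drop a trailing aggregate row such as "total", "sum", or "average".
    if len(rows) > 2 and str(rows[-1][0]).strip().lower() in {"total", "sum", "average"}:
        rows = rows[:-1]
    return pd.DataFrame(rows[1:], columns=rows[0])

# Example: a transposed (column-oriented) table becomes row-oriented again.
t = [["week", 1, 2], ["opponent", "at los angeles rams", "at cleveland browns"]]
print(normalize_structure(t))
```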
3\nColumn Selection----------------------- ---------------------------------------------------------------------------------------------------------------------------- Value Normalization Structure Normalization(a) NormTab Basic(b) NormTab Targetedrownumberweekdateopponentresulttype resultscoreattendance011981-09-06at los angeles rams w27–2063198 121981-09-13at cleveland browns w9–3 79483 ---------------------451981-10-04cincinnati bengals w17-10 44350 561981-10-11seattle seahawks w35-17 42671 ---------------------15161981-12-20pittsburgh steelers w21–20 41056 Normalized Tablerownumberweekdateopponentresultattendance01september 6, 1981 at los angeles rams w 27–20 63,198 12september 13, 1981 at cleveland browns w 9–3 79,483 ------------------45october 4, 1981 cincinnati bengals w 17-10 44,350 56october 11, 1981 seattle seahawks w 35-17 42,671 ------------------1516december 20, 1981 pittsburgh steelers w 21–20 41,056 Original Table rownumberweekdateopponentresulttype resultscoreattendance011981-09-06at los angeles rams w27–2063198 121981-09-13at cleveland browns w9–3 79483 ---------------------451981-10-04cincinnati bengals w17-10 44350 561981-10-11seattle seahawks w35-17 42671 ---------------------15161981-12-20pittsburgh steelers w21–20 41056 Merged Normalized Table dateresulttype resultscoreattendance1981-09-06w27–2063198 1981-09-13w9–3 79483 ------------1981-10-04w17-10 44350 1981-10-11w35-17 42671 ------------1981-12-20w21–20 41056 Normalized subtable dateresultattendanceseptember 6, 1981 w 27–20 63,198 september 13, 1981 w 9–3 79,483 ---------october 4, 1981 w 17-10 44,350 october 11, 1981 w 35-17 42,671 ---------december 20, 1981 w 21–20 41,056 Sub Table 2: Not Normalized rownumberweekopponent01at los angeles rams 12at cleveland browns ---------45cincinnati bengals 56seattle seahawks ---------1516pittsburgh steelers Sub Table 1: Already NormalizedValue Normalization Structure NormalizationFigure 2: Overview of NormTab . The methodology encompasses two distinct strategies: (a) Entire Table Normalization (NormTab Basic):we provide the LLM with the entire web table along with specific instructions for cleaning and normalizing. The LLM reads the table and the instructions, then returns a cleaned and normalized version of the table. (b) Targeted Normalization (NormTab Targeted ):In this approach the LLM identifies and targets only the portions of the web table requiring normalization based on the table metadata and a few sample rows. The original table is split into two subtables: one for normalization and one already clean. The LLM processes the subtable that requires normalization then returned a cleaned version. Finally, the normalized subtable is merged with the clean portion, resulting in a fully cleaned and normalized table. StructGPT employs an iter- ative reading-then-reasoning approach to enhance LLM reasoning for structured data, but its scalabil- ity is constrained by token limits when processing large tables. The ReAcTable model adopts the ReAct paradigm, integrating step-by-step reasoning, external tool-based code execution, intermediate table generation, and ma- jority voting to process tabular data. Similarly, the LEVER model improves language- to-code generation by validating generated pro- grams based on their execution results, enhancing the accuracy and reliability of table reasoning tasks. 
Data wrangling and imputation Normalizing tables is a crucial aspect of the broader data wran- gling process, which involves processing, cleaning, and organizing data into a format suitable for fur- ther analysis. Considerable research has focused on data wrangling, addressing challenges such as error detection, data imputation, and standardiza- tion of data formats . Recent ap- proaches have leveraged the capabilities of LLMs for these tasks. For instance, Narayan et al. (2022) demonstrated the effectiveness of LLMs in identify- ing errors and imputing missing data, showcasing how these models can enhance the data wrangling process. By integrating LLMs, the efficiency and accuracy of preparing data for analysis can be sig- nificantly improved, streamlining and automating many aspects of data wrangling. Operations like normalizing numbers and dates can be incorporated into data processing workflows to aid in subsequent analysis . All these works highlight the importance of table normalization in improving LLMs’ performance 4\non tabular data, paving the way for more effective and accurate table reasoning models. 3 Methodology Our methodology encompasses several essential parts designed to ready web tables for proficient reasoning by LLMs. 3.1 Normalization Operations The normalization operations in NormTab can be divided into two groups: (1)value normalization and(2)structural normalization. The former in- volves splitting cells to add new columns, handling empty cells and value ranges, removing extraneous strings, and normalizing data formats such as dates and numerical values to ensure consistency and accuracy in reasoning tasks. Structural normaliza- tion, on the other hand, aims to detect structural variance by analyzing the first row and first column of a web table and determining whether a transpo- sition is needed. If transposition is required, we address this issue by flipping the rows and columns. Value Normalization: Our value normalization is based on the principle that every cell in a table must contain an atomic value (e.g., string, date, number), meaning that cell content cannot be com- posite or multi-valued. This principle, known as the first normal form in database systems (Kifer et al.), ensures that cell values can be smoothly queried and updated without introducing anomalies. The process of value normalization involves sev- eral critical steps to ensure data consistency and accuracy. First, we focus on value splitting and extraction, identifying and splitting all composite columns. This may involve adding new columns as necessary while ensuring that no existing columns are deleted. Next, we standardize date and nu- merical values to a uniform format, paying spe- cial attention to any additional strings such as cur- rency symbols, units or comma that may accom- pany numerical values. Additionally, we normalize all “N/A” and blank values to NULL to maintain consistency throughout the dataset. In SQL, Null values signify an attribute value that is not avail- able or missing, and they are treated differently than any other values. SQL engines recognize the semantics of null values and consider this when processing queries. For columns containing value ranges, such as “2010/11” or “2015-2018”, we split these into two separate columns to facilitate clearer data interpretation and processing. An example of value normalization is shown in Figure 1. 
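As a concrete illustration of these operations, here is a small pandas sketch over toy columns modeled on the Figure 1 example. It applies deterministic rules, whereas NormTab obtains the rewritten values from the LLM, so treat it as illustrative only (it also assumes a recent pandas).

```python
import pandas as pd

df = pd.DataFrame({
    "date": ["september 6, 1981", "october 4, 1981"],
    "result": ["w 27-20", "l 38-10"],
    "attendance": ["63,198", "n/a"],
    "season": ["2010/11", "2015-2018"],
})

# Dates to a uniform YYYY-MM-DD format.
df["date"] = pd.to_datetime(df["date"]).dt.strftime("%Y-%m-%d")

# Composite column -> atomic columns (first normal form): "w 27-20" -> ("w", "27-20").
df[["result_type", "result_score"]] = df["result"].str.split(" ", n=1, expand=True)

# Strip thousands separators; map "n/a" and blanks to NULL (NaN).
df["attendance"] = pd.to_numeric(
    df["attendance"].str.replace(",", "", regex=False).replace({"n/a": None, "": None})
)

# Value ranges such as "2010/11" or "2015-2018" -> separate start/end columns.
df[["season_start", "season_end"]] = df["season"].str.split(r"[/-]", n=1, expand=True, regex=True)

df = df.drop(columns=["result", "season"])
print(df)
```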
The original table presents date columns with dates in textual format, a result column combining match outcomes with scores, and an attendance column where numbers are written with commas. The value representation in the original table is more readable for humans; however, this format poses challenges for symbolic programs to process. Our normalization process converts the date to the “YYYY-MM-DD” format and attendance values to a pure numerical format by removing commas. Additionally, NormTab splits the composite result column into two separate columns: “result_type” and “result_score”, thereby organizing the data more effectively for analysis. This standardization is crucial for maintaining data integrity across the table. Structural Normalization: Tables can be orga- nized either row-oriented or column-oriented. In a row-oriented table, each row typically represents an entity or a relationship between entities, while each column describes an attribute of the entity or relationship. Column-oriented tables, on the other hand, are stored in a transposed fashion. Most traditional databases store data in a row-oriented format, which is well-supported across relational databases. Our structure normalization primarily focuses on addressing structural differences between ta- bles to enhance their usability for reasoning tasks. Initially, we carefully examine the table structure to determine if the first row resembles a header, indicating the table is row-oriented and requires no structural changes. However, if the first col- umn appears to serve as the header, we transpose the table to normalize its structure, ensuring that the layout aligns with our adopted tabular format. Additionally, web tables sometimes include aggre- gated rows or columns, which can pose challenges if specific rows or columns need aggregation to answer a query. We handle these aggregated rows by disregarding any information present in the last row that pertains to aggregated data, such as “total”, “sum”, or “average”. This step prevents redundant or misleading data from affecting subsequent anal- yses and ensures that the table remains clean and focused on the relevant data points. 5\n3.2 Normalization Approach: NormTab As depicted in Figure 2, our methodology for normalizing web tables involves two distinct ap- proaches to leverage the capabilities of LLMs for enhancing symbolic reasoning and query capabili- ties. Entire Table Normalization (NormTab- Basic): In the first approach, we provide the LLM with the entire table along with specific instructions for cleaning and normalizing. The LLM reads the table and the instructions, then returns a cleaned and normalized version of the table. However, we observed that many web tables contain portions already in a well-structured form, with only a few columns requiring normalization. To optimize this process, we developed a modified approach. Targeted Normalization(NormTab-Targeted): To improve efficiency, we developed a modified approach that targets only the portions of the table requiring normalization. Our analysis of web ta- bles revealed that often only a few columns need the normalization process. This realization led to a more optimized methodology. In this more re- fined approach, we first ask the LLM to identify which columns require normalization and cleaning, based on the table metadata (such as column head- ers and titles) and a few sample rows. 
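The column-identification step just described can be prompted very simply. The helper below is a hypothetical sketch (the prompt wording and function names are ours, not the paper's), with `llm_complete` standing in for any completion call such as gpt-3.5-turbo.

```python
# Hypothetical helpers; the actual NormTab prompts and parsing differ.
def build_column_selection_prompt(title, header, sample_rows):
    rows = "\n".join(" | ".join(map(str, r)) for r in sample_rows)
    return (
        f"Table title: {title}\n"
        f"Columns: {', '.join(header)}\n"
        f"Sample rows:\n{rows}\n\n"
        "Which columns contain values that need cleaning or normalization "
        "(mixed formats, extra symbols or units, composite or range values)? "
        "Answer with a comma-separated list of column names only."
    )

def select_columns_to_normalize(llm_complete, title, header, sample_rows):
    # llm_complete: any callable mapping a prompt string to a completion string.
    reply = llm_complete(build_column_selection_prompt(title, header, sample_rows))
    # Keep only names that really exist in the header, guarding against hallucinations.
    return [c.strip() for c in reply.split(",") if c.strip() in header]
```

Filtering the reply against the actual header keeps the subsequent subtable splitting well defined even if the model returns an unexpected name.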
Once these columns are identified, we split the original table into two subtables: one that requires normalization and cleaning, and one that is already normalized and clean. We then send only the subtable that needs normalization to the LLM along with the instructions. The LLM processes this subtable and returns a cleaned and normalized version. After normalization, we merge the normalized subtable with the already clean portion of the table. This approach not only improves the efficiency of the normalization task by reducing the amount of data sent to the LLM but also ensures that the resulting table is in a consistent and accurate format suitable for subsequent reasoning and querying tasks. Following this, we analyze the overall structure of the merged table. With the assistance of the LLM, we determine whether the table needs to be transposed based on its layout. If needed, table transposition is performed outside of the LLM. Additionally, we check if the last row contains summarized or aggregated values and if so, NormTab ignore this row. This selective column normalization method reduces the workload on the LLM, enhances efficiency, and ensures that only the necessary parts of the table are processed, thereby preserving the integrity of already structured data. 4 Experimental Setup 4.1 Dataset We conduct experimental evaluations using two challenging web table datasets: WikiTableQuestion (WikiTQ) and TabFact . These datasets are specifically curated to test the reasoning capabilities of models on complex tabular data. WikiTQ comprises tables extracted from Wikipedia along with corresponding natural language questions, while TabFact consists of tables sourced from Wikipedia paired with tex- tual facts. These datasets provide a diverse range of table structures and content, allowing us to thor- oughly evaluate the performance of NormTab in enhancing table reasoning tasks. The WikiTQ standard test set comprises 416 unique tables and 4,344 samples, while the Tab- Fact standard test set includes 298 unique tables with 2,003 samples. By utilizing these datasets, we aim to demonstrate the effectiveness of web table normalization in improving the symbolic reason- ing performance of LLMs, thereby highlighting the importance of addressing the challenges posed by web table irregularities. 4.2 Baselines and Evaluation Metrics We compare our approach with several robust base- line methods, including TableCoT , BINDER , DATER , StructGPT , ReAcTable , Rethinking-Tab-Data , TabSQLify , and Chain-of-Table . For the WikiTQ dataset, exact match (EM) ac- curacy was used to check if the predicted answers matched the correct ones. To address varying text formats, a pre-matching check using LLMs was in- corporated . The accuracy for TabFact was assessed using binary classification accuracy. 4.3 Implementation We utilized gpt-3.5-turbo-0125 as the Language Model which supports 16k context window. We 6\nwere inspired by the prompting style from in our imple- mentation of NormTab. To compare performance, we employ few-shot in-context learning. This in- volves supplying the LLM with the table title, table header, question, and three example rows of the table, along with the question, to generate an SQL query. The SQL query is then executed on the table to obtain the answer. Further details can be found in Appendix C1. 5 Results In this section, we analyzed the performance of NormTab. 
To evaluate its impact, we conducted few-shot in-context learning experiments to gener- ate SQL queries for answering specific questions. First, we performed experiments on unnormalized tables without any modifications. Then, we com- pared the performance on normalized tables. Addi- tionally, we reported the performance of different normalization processes. 5.1 Results on Downstream Tasks Table 1 and Table 2 presents a comparison between the performance of NormTab and the other base- lines on WikiTQ and TabFact datasets. In the WikiTQ dataset, the results showed that after applying the targeted version of NormTab, we achieved 61.2% accuracy, surpassing the per- formance of other baseline models. The targeted NormTab approach performs slightly better than the basic version, where the entire table is passed to the LLMs. This suggests that LLMs may be more effective at normalization tasks when deal- ing with targeted smaller tables. Additionally, we gained about 10% improvement compared to the Text-to-SQL model and SQL (gpt-3.5-turbo) model. Notably, Rethinking- Tab-Data achieved an accuracy of 56.87% by addressing structural varience using LLMs and a Python agent. Chain-of-Table employed an iterative sequence of oper- ations to tailor complex tables to specific questions, achieving 59.94% accuracy. However, these and other baseline models are question-dependent. In contrast, our model adopts a straightforward and simple approach: it normalizes the table only once, irrespective of the question, enabling answers to be 1The source code for the implementation discussed in this paper will be made publicly available upon acceptance of the paper.derived from the normalized table using program- aided symbolic reasoning. Model Acc (%) TableCoT 52.40 Binder 56.74 Dater 52.80 ReAcTable 52.40 Rethinking-Tab-Data 56.87 Chain-of-Table 59.94 Text-to-SQL 52.90 Text-to-SQL (gpt-3.5-turbo) 51.30 NormTab Basic+ SQL (ours) 60.80 NormTab Targeted + SQL (ours) 61.20 Table 1: Performance comparison of NormTab on WikiTQ dataset. The results clearly demonstrate that NormTab significantly surpasses other models in accu- racy when employing symbolic reasoning. In Table 2, we can observe a similar performance enhancement compared to the original table in table-based fact verification tasks. We achieved ap- proximately a 6% performance improvement com- pared to the results of Text-to-SQL on the original table. It is worth noting that table-based fact verifi- cation differs from table-based question answering tasks. Generating a SQL query to verify a fact is more complex than simply retrieving an answer from the table. Although other models not employ- ing program-aided symbolic reasoning perform bet- ter in this task, these models utilize LLMs for the verification task providing the whole table to the model. Our experimental results show promise for utilizing symbolic reasoning in such scenarios. Model Acc (%) TableCoT-chatgpt 73.10 Binder 79.17 Dater 78.01 Chain-of-Table 80.20 ReAcTable 73.10 Text-to-SQL 64.71 Text-to-SQL (gpt-3.5-turbo) 62.32 NormTab Basic+ SQL (ours) 67.10 NormTab Targeted + SQL (ours) 68.90 Table 2: Performance comparison of NormTab on Tab- Fact dataset with other models. 7\n5.2 NormTab Evaluation To assess the accuracy of various normalization operations, we evaluated the performance on 100 tables, with 50 tables from each dataset, WikiTQ and TabFact. Table 3 summarizes the accuracy of different normalization processes. 
NormTab demonstrated strong performance in normalizing dates and numbers, detecting transposition requirements, and handling aggregated summaries in the last row. However, NormTab faced difficulties in extracting and cleaning values in some tables where value extraction from the original cells was particularly challenging. The column selection accuracy indicates that LLMs can be very effective at identifying columns whose values are not in the proper format. However, the accuracy of splitting columns was lower. Additional errors included managing value cleaning and handling "n/a" values. Although these tasks are challenging, the performance in these areas shows the potential of LLMs to address them effectively.

Type | Accuracy
Columns Selection | 91.0%
Transpose Detection | 97.0%
Last Row Aggregation | 100.0%
Split Column | 87.0%
Date and Number | 100.0%
N/A value | 93.0%
Value Cleaning | 82.0%
Table 3: Accuracy of NormTab in various types of normalization.

NormTab has shown superior performance compared to several robust models, demonstrating its efficacy in table normalization. A key advantage of NormTab is its use of program-aided symbolic reasoning, which streamlines code generation without requiring the entire table to be passed to the LLM. This enhances efficiency and eliminates dependencies on table size and answer position. With NormTab, only key elements like the title, header, and a few example rows are needed to generate SQL queries and obtain accurate answers. This approach reduces computational overhead while maintaining high accuracy, highlighting its practical utility in various table-based tasks.

5.3 Analysis
We conducted a detailed analysis of the impact of NormTab on the WikiTQ dataset. Table 4 shows that in 67% of cases (Category A), performance improved after applying NormTab. In 24% of cases (Category B), performance remained unchanged. Additionally, in 9% of cases (Category C), performance actually decreased. The detailed experimental findings are summarized in Table 5.

Category | Description | % of Tables
A | Performance improved after applying NormTab | 67%
B | No change in performance after applying NormTab | 24%
C | Performance decreased after applying NormTab | 9%
Table 4: Categories of tables on the WikiTQ test dataset.

 | Tables (A) | Tables (B, C) | Overall (A, B, C)
Original | 46.28% | 59.62% | 51.30%
NormTab | 62.55% | 56.76% | 61.20%
Change | +16.27 | -2.86 | +9.9
Table 5: Result breakdown on the WikiTQ dataset (changes in percentage points).

Table 5 demonstrates that NormTab can improve overall performance by 9.9 points. Notably, in Category A, we observed a substantial enhancement of 16.27 points. However, Categories B and C saw a slight decline in performance due to highly complex table values and structures.

6 Conclusion
In conclusion, our study introduces NormTab, a framework aimed at enhancing LLMs' performance on tabular data by normalizing web tables. Through our investigation, we have shown the significance of web table normalization in overcoming challenges such as mixed values and structural variance. By leveraging LLMs' textual understanding for data cleaning and normalization, NormTab improves table reasoning. Our experiments on challenging datasets demonstrate its effectiveness. Our work contributes to advancing techniques for LLMs in handling tabular data, emphasizing the importance of addressing web table challenges for improved performance.
Further research can ex- plore additional normalization strategies and ex- tend NormTab’s applicability across various do- mains. This would establish a robust foundation for a wide range of downstream tasks involving table reasoning. 8\nLimitations Despite the advancements brought by NormTab, there are several limitations. First, while our frame- work significantly enhances the symbolic reasoning capabilities of LLMs on tabular data, there remains room for improvement in the normalization pro- cess, particularly with more complex table struc- tures. Additionally, for larger tables, LLMs may sometimes produce hallucinated results, leading to inaccuracies in the normalized output, indicating a need for better handling of extensive datasets. Furthermore, when dealing with tables that contain extremely noisy data, LLMs struggle to effectively clean and normalize the information. The presence of excessive noise and inconsistencies can hinder the normalization process and negatively impact overall performance. Addressing these limitations is crucial for further enhancing the robustness and reliability of NormTab. As we measure the accu- racy using the results obtained from LLM based Text-to-SQL model, it is important to note that some questions in the dataset may not directly map to SQL queries which may affect the performance. Ethical Considerations The datasets used in this study are accessible through the peer-reviewed articles cited in the ref- erences section. Additionally, our source code is openly available for future research under the MIT License. It is important to mention that our framework relies on GPT-3.5-turbo, which may in- herit ethical concerns associated with GPT models. These concerns include the potential for generat- ing responses to toxic content or displaying biased behavior.
[ "TabSQLify: Enhancing Reasoning Capabilities of LLMs Through Table Decomposition", "Chain-of-Table: Evolving Tables in the Reasoning Chain for Table Understanding", "TableLlama: Towards Open Large Generalist Models for Tables", "ReAcTable: Enhancing ReAct for Table Question Answering", "StructGPT: A General Framework for Large Language Model to Reason over Structured Data", "DIN-SQL: Decomposed In-Context Learning of Text-to-SQL with Self-Correction", "LEVER: Learning to Verify Language-to-Code Generation with Execution", "Large Language Models are Versatile Decomposers: Decomposing Evidence and Questions for Table-based Reasoning", "PASTA: Table-Operations Aware Fact Verification via Sentence-Table Cloze Pre-training", "Large Language Models are few(1)-shot Table Reasoners", "Binding Language Models in Symbolic Languages", "Can Foundation Models Wrangle Your Data?", "Self-Consistency Improves Chain of Thought Reasoning in Language Models", "Language Models are Few-Shot Learners", "TaPas: Weakly Supervised Table Parsing via Pre-training", "Learning Semantic Parsers from Denotations with Latent Structured Alignments and Abstract Programs", "TabFact: A Large-scale Dataset for Table-based Fact Verification", "Data Wrangling for Big Data: Challenges and Opportunities", "Detecting Data Errors: Where are we and what needs to be done?", "Compositional Semantic Parsing on Semi-Structured Tables", "Database Systems : An Application-Oriented Approach", "Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval", "Database Tables", "Principles of Data Wrangling Practical Techniques for Data Preparation" ]
LaCo: Large Language Model Pruning via Layer Collapse
LaCo: Large Language Model Pruning via Layer Collapse

Abstract

Large language models (LLMs) based on the transformer architecture are witnessing a notable trend of size expansion, which brings considerable costs to both model training and inference. However, existing methods such as model quantization, knowledge distillation, and model pruning are constrained by various issues, including hardware support limitations, the need for extensive training, and alterations to the model's internal structure. In this paper, we propose a concise layer-wise structured pruner called Layer Collapse (LaCo), in which rear model layers collapse into a prior layer, enabling a rapid reduction in model size while preserving the model structure. Comprehensive experiments show that our method maintains an average task performance of over 80% at pruning ratios of 25-30%, significantly outperforming existing state-of-the-art structured pruning methods. We also conduct post-training experiments to confirm that LaCo effectively inherits the parameters of the original model. Additionally, we perform ablation studies on various settings of LaCo. Finally, we discuss our motivation from the perspective of layer-wise similarity and evaluate the performance of the pruned LLMs across various pruning ratios.

1 Introduction

Recently, large language models (LLMs) based on the transformer architecture have showcased impressive capabilities across diverse tasks. However, the prevailing trend in model development leans towards larger scales, placing substantial demands on computational resources. To mitigate the above challenge, various approaches have been explored to reduce the inference and training costs of models or to derive a smaller model from an LLM, including model quantization, knowledge distillation (Liu et al., 2022; Hsieh et al., 2023; Shridhar et al., 2023), and model pruning. However, existing solutions exhibit certain notable drawbacks. Model quantization typically necessitates specific hardware support and often impacts model performance. Knowledge distillation often requires retraining a smaller model, which is costly and task-specific. Model pruning, whether non-structured or structured, has its own issues. Non-structured pruning often involves model sparsity, which generally leads to a certain performance loss and also relies on hardware support. Structured pruning entails removing specific modules, often altering the model structure and diminishing the model's portability.

[Figure 1 depicts the RDSC Layer Merge on the parameters θ_l, θ_{l+1}, ..., θ_m: panel (a) shows parameter differencing, producing Δθ_1, ..., Δθ_m, and panel (b) shows merging the differences into the prior layer, yielding θ_l + Δθ_1 + ... + Δθ_m.]

Figure 1: An example of Reserving-Differences-while-Seeking-Common (RDSC) Layer Merge. In (a), we perform parameter differencing, which we regard as Reserving-Differences. In (b), we conduct parameter merging, which we interpret as Seeking-Common.

Considering the above issues, we contemplate directly pruning the model with a new idea: to prune some layers directly from a well-trained LLM and substitute the parameters of one layer for multiple layers, enabling effective model pruning. Specifically, we observe that merging the parameter differentials of subsequent layers into a preceding layer often does not significantly impact model performance, as illustrated in Figure 1. We term this the Reserving-Differences-while-Seeking-Common (RDSC) Layer Merge, as it incorporates parameter differencing and merging.
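A minimal PyTorch sketch of this merge, operating on the per-layer parameter dictionaries of m+1 consecutive layers, is shown below. It assumes the layers share identical parameter names and shapes, which holds for the homogeneous decoder layers targeted here; rebuilding the pruned model around the merged layer is left out.

```python
import torch

@torch.no_grad()
def rdsc_merge(layers):
    """Collapse layers[1:] into layers[0]:
    theta*_l = theta_l + sum_k (theta_{l+k} - theta_l).

    `layers` is a list of {param_name: tensor} dicts for m+1 consecutive layers.
    """
    base = layers[0]
    merged = {name: p.clone() for name, p in base.items()}
    for follower in layers[1:]:
        for name, p in follower.items():
            merged[name] += p - base[name]  # reserve the difference, add it to the common base
    return merged
```

After the merge, the m redundant layers are simply dropped from the model, so the pruned network keeps the original layer architecture and hidden dimensions.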
Building upon this insight, we introduce a streamlined yet potent layer-wise pruner dubbed Layer Collapse (LaCo), in which rear layers collapse into a prior layer, with the objective of preserving the model's output representation. In this paper:

• The Layer Collapse can directly remove 30%-50% of model layers without training while maintaining the model performance. Experiments on multiple benchmarks show that our approach outperforms state-of-the-art structured pruning methods under equivalent pruning ratios.

• The Layer Collapse preserves the internal structure of LLMs, such as maintaining intermediate dimensions. So, the pruned models can be seamlessly integrated into existing applications without any changes to the system's implementation.

• We conduct post-training to confirm that Layer Collapse can efficiently inherit parameters and requires only minimal training to restore the pruned model to the original model's loss convergence level. Additionally, we discuss our motivation and evaluate the performance of pruned models using LaCo across different pruning ratios. We also perform ablation studies on various settings of LaCo.

2 Method

2.1 Reserving-Differences-while-Seeking-Common Layer Merge

For the l-th layer of an LLM, we denote all its parameters, including those in self-attention (SAN) and MLP, as θ_l. For the m consecutive layers following it, we merge the parameters of θ_{l+1}, θ_{l+2}, ..., θ_{l+m} into θ_l to form θ*_l:

$\theta_l^{*} = \theta_l + (\theta_{l+1} - \theta_l) + \cdots + (\theta_{l+m} - \theta_l) = \theta_l + \sum_{k=1}^{m} (\theta_{l+k} - \theta_l)$   (1)

where (θ_{l+k} − θ_l) is the layer-wise parameter difference. Given identical layer structures, we independently apply these processes to both SAN and MLP. Then, these m consecutive layers will be discarded. Subsequent model pruning will continuously involve RDSC Layer Merge, which can be regarded as the continual collapse of layers onto specific layers, hence the name Layer Collapse.

2.2 Layer Collapse

We dynamically merge adjacent layers from the topmost layer down, ensuring the pruned model's output representation on few-shot calibration samples remains as similar as possible to the original model to minimize performance loss. Algorithm 1 summarizes the workflow of Layer Collapse.

Algorithm 1: Workflow of Layer Collapse
Input: LLM M; number of layers combined in each merge C; layer range [L, H]; minimum interval between two adjacent merged layers I; few-shot calibration samples D; threshold for representation similarity T
Output: pruned LLM M*
1:  M* ← M
2:  l ← H − C
3:  while l >= L do
4:    K ← Min(C − 1, Layer_Count(M*) − l)
5:    M_tmp ← RDSC_Lay_Merge(M*, l, K)
6:    s ← Avg_Cos_Sim(M_tmp, M, D)
7:    if s > T then
8:      M* ← M_tmp
9:      l ← l − I
10:     if l > Layer_Count(M*) then
11:       l ← Layer_Count(M*) − C
12:     end if
13:   else
14:     l ← l − 1
15:   end if
16: end while
17: return M*

(1) Preparation. For an LLM M to be pruned, we define the number of layers to be merged during each merging operation as C. We configure the merging to operate within a certain range of layers, denoted as [L, H]. As the layer merging operation inevitably leads to a performance loss, to prevent consecutive layer merging from causing a sharp decline in the model performance, we set a minimum interval of layers between two merging operations as I. Few-shot calibration samples D, typically a few plain sentences, are used during the pruning process. We perform forward computations on D with both the pruned and original models to obtain the output representations and ensure that the similarity of representations is not less than the threshold T.
(2) Pruning (line 1-17) We present an illustration of Layer Collapse in Figure 2. We begin by initializing the model M∗ with the model Mand set a layer pointer lto start fromH − C . Then, the iterative process begins: RDSC Layer Merge (line 4-5) During each iter- 2\n𝜽𝒍𝜽𝒍ା𝟏𝜽𝒍ା𝟐𝜽𝒍ା𝓚 𝜽𝓛𝜽𝓗𝓜∗ 𝜽𝒍+∆𝜽𝟏+...+∆𝜽 𝓚𝜽𝒍ା𝟏𝜽𝒍ା𝟐𝜽𝒍ା𝓚 𝜽𝓛𝜽𝓗𝓜𝒕𝒎𝒑 Given the model ℳbefore pruning, evaluate ℳ௧௠௣on 𝒟 𝓚 = 𝑀𝑖𝑛(𝓒−1,𝐿𝑎𝑦_𝑐𝑛𝑡(𝓜∗)) 𝓜∗= 𝓜𝒕𝒎𝒑 𝒍 = 𝑀𝑖𝑛(𝒍− ,𝐿𝑎𝑦_𝑐𝑛𝑡(𝓜∗)−𝓒) 𝒔 = 𝐴𝑣𝑔_𝐶𝑜𝑠_𝑆𝑖𝑚(𝓜 𝒕𝒎𝒑,𝓜,𝓓) 𝒍 = 𝒍−𝟏 𝒍 ≥ 𝓛 𝒍 < 𝓛 𝒔 >𝒔 ≤ Pruned LLMNext IterationTry RDSC layer merging the following 𝓚layers of 𝒍RDSC Layer Merge Calculate Similarity Merge Evaluation and Adjustment Figure 2: An illustration of Layer Collapse. ation, our approach involves merging the Klayers following layer linto layer litself and then discard- ing the redundant Klayers, where Kis the mini- mum of C −1and the total layer count of M∗−l, implying merging either the subsequent C −1lay- ers or all layers following l, thus to prune the model M∗, resulting in the interim model Mtmp. Calculate Similarity (line 6) We process each sentence in Dusing forward computations with MtmpandMto derive their representations which are the output hidden-states of the last layer of the model. For every sentence, we then calculate the cosine similarity between these representations from both models, averaging these values to obtain the overall similarity score s. Merge Evaluation and Adjustment (line 7- 15)Then, we evaluate sagainst the threshold T. Should sexceed T, the current merge is considered successful. Then, Mtmpis updated to M∗for the next iteration, and the pointer lis adjusted down- wards by Ilayers. Conversely, lis simply reduced by a single layer. It is important to highlight that the instances may occur where lfalls below the total layer count of M∗after a series of successive merges. Consequently, it is required to reset lto the layer count in M∗− C, as illustrated in line 11. We iterate through the above process until lis less than Land output the pruned LLM. 2.3 Complexity Analysis The complexity of LaCo primarily depends on model inference. In the worst-case scenario, with Lset to 0 and Hto the total number of layers, if in each iteration the similarity sis less than T, all layers will be traversed. Thus, the worst-case time complexity is O(H ×||D|| ). For example, for Llama2-13B with 40 layers and ||D|| consisting of 10 sentences, the maximum number of inference steps would be only 400 sentences, which can be completed within minutes on a single GPU. 3 Experiments 3.1 Models To assess the effectiveness of the proposed LaCo , we conduct experiments on popular English LLMs, Llama2-7B and 13B . Ad- ditionally, we test the effectiveness on bilingual LLMs, specifically Baichuan2-7B and 13B , which support both Chinese and En- glish. We leverage the base versions of these LLMs. All these models are decoder-only models based on the transformer architecture. 3.2 Benchmarks To comprehensively evaluate the pruned model’s capabilities, we utilized the OpenCompass eval- uation framework . Specifi- cally, following OpenCompass categorization, we conduct evaluations in five aspects: Reasoning, Language, Knowledge, Examination and Under- standing. We select several benchmarks from each category. Reasoning: CMNLI , HellaSwag (HeSw) , PIQA . Language: CHID , WSC . Knowledge: CommonSenseQA (CoQA) , BoolQ . Examination: MMLU , CMMLU . Understanding: Race-High/Middle (H/M) (Lai 3\net al., 2017), XSum , C3 . 
We conduct evaluations using official scripts from OpenCompass, all zero-shot or few-shot, without additional training. Two evaluation modes are utilized: perplexity (PPL) and generation (GEN)1. For CHID and XSum, we use the GEN mode. For the WSC dataset, we use both PPL (WSC P) and GEN (WSC G) modes. The remain- ing benchmarks are evaluated using the PPL mode. The evaluation results on each benchmark are con- verted to a score by OpenCompass, where a higher score indicates better performance. OpenCom- pass provides official evaluation results for the Baichuan2 and Llama2 series. However, to avoid discrepancies resulting from hardware and software environments, as well as potential errors in official results, we reproduce all results to ensure fairness. 3.3 Baselines Since LaCo involves structured pruning, which di- rectly removes components from LLMs, we se- lect two state-of-the-art (SOTA) structured pruning methods, LLM-Pruner (LLMPru.) and SliceGPT , as our base- lines. These methods have surpassed the previous SOTA sparsity method, SparseGPT . In our experiments, we set the pruning ratios of baselines to be equivalent to or slightly smaller than LaCo to ensure fairness. 3.4 Settings Since previous work mostly set pruning ratios be- low 30%, we heuristically adjust the hyperparame- ters to bring the model pruning ratio close to 30%, as shown in Appendix A Table 7. We randomly se- lect 5 sentences from both the English and Chinese Wikipedia datasets for Baichuan2 and 10 sentences from English Wikipedia for Llama2 as few-shot cal- ibration samples. All experiments are conducted on a server with 8 Nvidia A100 80GB GPUs. 3.5 Main Results In Table 1, we present the results of four LLMs un- der different pruning methods across various bench- marks. “Dense” represents the official results of the unpruned LLMs in OpenCompass leaderboards, while “Dense∗” represents our reproduction of the “Dense” results. "LLMPru." and "SliceGPT" corre- spond to the two baselines, respectively. “Ratio" 1opencompass.readthedocs.io/en/latest/get_started/faq.htmlrefers to the overall pruning ratio, namely the pro- portion of the total number of pruned parameters to that of the unpruned model. “Lay.” denotes the total number of layers in the model. Comparing Dense and Dense∗, the results show not much difference, with most discrepancies within 5%. This indicates our experimental setup is error-free. To ensure fairness, we compare the results against Dense∗in the subsequent analyses. Upon comparing LaCo with the baselines, from Table 1, it can be observed that LaCo achieves the best results on most benchmarks, despite our prun- ing ratio being slightly higher than the baselines. To provide a more intuitive presentation of the results in Table 1, we compute the average scores of each pruner across all benchmarks (Avg.), the average scores per category (Reas., Lan., Know., Exam., Unde.), and the average performance per- centages relative to Dense∗across all benchmarks (Per.) in Table 2. Overall, our average scores are significantly higher than the baselines. LaCo shows superior performance in four out of five categories. Though there is a slight dip in Reasoning, it re- mains comparable. Additionally, LaCo’s average performance percentage across all datasets, relative to Dense∗, is far superior to the baselines. The aver- age percentage surpasses 80% in three out of four models, with the lowest being 73% on Baichuan2- 7B. In contrast, none of the baselines exceed 70%. 
To demonstrate the stability of the pruned mod- els by LaCo, we compute the performance percent- age relative to Dense∗(Appendix D.3 Table 16). LaCo-pruned models maintain performance above 70% on most benchmarks and do not experience crashes, with no performance dropping below 30%. Notably, on three benchmarks evaluated through GEN mode, CHID, XSUM, and WSC G, the LLMs pruned by LaCo maintain relatively stable perfor- mance, while models pruned by baselines exhibit poorly, with even multiple results becoming 0.00. GEN mode scores are based on the model’s gen- erated sentences, and the models pruned by base- lines are prone to producing meaningless repeti- tive outputs. In Appendix D.4 Table 17, we show- case an example from the Xsum benchmark, where Llama2-7B, pruned by baselines, produces nonsen- sical repeated outputs, whereas our LaCo yields outputs resembling normal sentences. In summary, LaCo is a superior pruner that pre- serves model performance and demonstrates excep- tional stability across various benchmarks. It relies solely on parameter differences and additions, with- 4\nLLM Pruner Ratio/Lay.Reasoning Language Knowledge Examination Understanding CMNLI HeSw PIQA CHID WSC PWSC GCoQA BoolQ MMLU CMMLU Race HRace MXSum C3 Llama2 -7BDense 0%/32 34.90 74.00 78.30 46.50 - 66.30 66.50 74.90 46.80 31.80 37.50 40.20 19.70 42.80 Dense∗0%/32 32.98 71.35 78.18 46.04 37.50 38.46 66.67 70.67 45.92 31.86 35.51 33.15 19.68 43.78 LLMPru. 27.0%/32 34.33 56.46 71.22 25.25 36.54 0.96 42.51 55.20 23.33 25.25 22.56 22.35 11.51 25.64 SliceGPT 26.4%/32 31.70 50.27 66.21 20.79 36.54 19.23 41.36 38.32 28.92 25.37 21.07 21.66 4.89 39.78 LaCo 27.1%/23 34.43 55.69 69.80 36.14 40.38 25.00 45.70 64.07 26.45 25.24 22.61 23.61 15.64 39.67 Llama2 -13BDense 0%/40 41.40 77.50 79.80 53.00 - 66.30 66.70 82.40 55.00 38.40 58.90 63.00 23.40 46.10 Dense∗0%/40 32.99 74.83 79.71 52.97 50.96 63.46 66.91 71.50 55.63 38.74 58.03 60.24 23.56 47.51 LLMPru. 24.4%/40 33.03 67.76 76.66 35.64 40.38 0.00 50.86 56.42 25.21 24.71 22.47 22.08 19.17 32.33 SliceGPT 23.6%/40 29.82 55.71 69.04 19.31 36.54 36.54 47.26 37.86 37.14 25.79 23.41 24.03 5.27 41.92 LaCo 24.6%/30 32.86 64.39 74.27 40.10 52.88 35.58 52.66 63.98 45.93 32.62 54.49 56.55 14.45 44.93 Baic2. -7BDense 0%/32 32.90 67.00 76.20 82.70 - 66.30 63.00 63.20 54.70 57.00 52.50 50.90 20.90 64.50 Dense∗0%/32 33.37 67.56 76.17 82.67 41.35 63.46 63.14 63.30 54.25 56.95 52.63 51.04 20.84 64.55 LLMPru. 24.2%/32 32.28 53.66 71.82 69.80 53.85 0.00 47.83 61.19 24.93 25.69 21.96 22.28 15.98 41.64 SliceGPT 22.2%/32 32.07 25.29 50.33 14.85 36.54 0.00 19.57 39.30 25.18 25.25 23.53 22.49 0.00 26.58 LaCo 24.2%/23 33.00 52.28 68.50 76.24 42.31 26.92 47.26 56.15 31.53 31.24 28.99 27.72 12.03 50.85 Baic2. -13BDense 0%/40 32.70 70.80 78.10 83.20 - 63.20 65.60 67.00 59.50 61.30 67.20 68.90 25.20 65.60 Dense∗0%/40 33.21 71.10 78.07 83.17 41.35 63.46 65.60 67.00 58.81 61.27 67.27 68.94 24.95 65.64 LLMPru. 24.3%/40 33.80 53.57 71.82 72.77 37.50 0.00 38.82 56.54 23.19 25.18 21.17 21.61 13.67 39.89 SliceGPT 22.8%/40 32.07 25.85 51.03 10.40 36.54 0.00 18.02 37.83 22.95 25.26 21.56 21.52 0.00 24.99 LaCo 24.7%/30 33.03 60.71 68.88 76.73 44.23 60.58 55.45 62.35 51.35 53.65 56.92 57.80 12.32 61.10 Table 1: The main results of our experiments. Lay. is the number of model layers. Dense is the official LLM results in OpenCompass and Dense∗is our reproduction. LLMPru. andSliceGPT are two baseline comparisons. out altering the model’s internal structure, resulting in a concise and efficient pruning solution. 
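The retention numbers quoted above reduce to simple arithmetic over Table 1. One plausible reading of the "Per." column, averaging per-benchmark ratios against Dense∗, is sketched below with a few Llama2-7B values copied from Table 1; the paper may instead divide averaged scores, so treat this only as an illustration.

```python
# Sketch of a performance-retention computation relative to Dense*.
# The two rows copy a subset of Llama2-7B numbers from Table 1.
dense_star = {"CMNLI": 32.98, "HeSw": 71.35, "PIQA": 78.18, "BoolQ": 70.67}
laco       = {"CMNLI": 34.43, "HeSw": 55.69, "PIQA": 69.80, "BoolQ": 64.07}

def retention(pruned: dict, dense: dict) -> float:
    ratios = [pruned[k] / dense[k] for k in dense]
    return 100.0 * sum(ratios) / len(ratios)

print(f"average retention over these benchmarks: {retention(laco, dense_star):.1f}%")
```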
3.6 Pruning Time To verify that LaCo has lower time complexity and faster pruning speed than the baselines, we com- pare LaCo with them for 27% sparsity pruning of Llama2-7B on a single A100 GPU. For fairness, we only measure the main pruning process, exclud- ing the time for loading models, loading data, and storing models. The results in Table 3 show LaCo pruning is more efficient compared to the baselines. 3.7 Memory Usage and Inference Speed We also aim to investigate whether the model pruned by LaCo offers advantages in memory us- age and inference speed compared to the models pruned by the baselines. In Table 4, we present the average memory consumption and inference speed of the Llama2-13B pruned models from Table 1 on the English Wiki dataset (The results for all models are in Appendix D.1, Table 14). All models are loaded in Bf16 on a single A100 GPU. LLM Pruner Avg. Per. Reas. Lan. Know. Exam. Unde. Llama2 -7BDense∗46.55 100% 60.83 40.67 68.67 38.89 33.03 LLMPru. 32.36 67.79% 54.00 20.92 48.86 24.29 20.52 SliceGPT 31.87 67.37% 49.39 25.52 39.84 27.15 21.85 LaCo 37.46 80.28% 53.30 33.84 54.89 25.85 25.38 Llama2 -13BDense∗55.50 100% 62.51 55.80 69.20 47.18 47.34 LLMPru. 36.19 65.87% 59.15 25.34 53.64 24.96 24.01 SliceGPT 34.97 61.78% 51.52 30.80 42.56 31.46 23.66 LaCo 47.55 85.21% 57.17 42.85 58.32 39.28 42.60 Baic2. -7BDense∗56.52 100% 59.03 62.49 63.22 55.60 47.26 LLMPru. 38.78 69.65% 52.59 41.22 54.51 25.31 25.46 SliceGPT 24.36 44.27% 35.90 17.13 29.44 25.22 18.15 LaCo 41.79 73.26% 51.26 48.49 51.70 31.38 29.90 Baic2. -13BDense∗60.70 100% 60.79 62.66 66.30 60.04 56.70 LLMPru. 36.40 60.70% 53.06 36.76 47.68 24.18 24.08 SliceGPT 23.43 40.33% 36.32 15.65 27.92 24.10 17.02 LaCo 53.94 87.94% 54.21 60.51 58.90 52.50 47.04 Table 2: The average scores and the percentages com- parison with the Dense∗. 5\nPruner LaCo LLM-Pruner SliceGPT Pruning Time 14.7s 15.9s 313s Table 3: Pruning time for different pruners. Pruner LaCo Dense LLMPru. SliceGPT Memory 19422 25902 19874 22506 Infer. 38.65 29.98 27.15 (↓) 35.16 Table 4: Memory usage (MB) and inference speed (to- kens/s) of the Llama2-13B pruned by different pruners. ↓indicates performance worse than the Dense model. The results indicate that the LaCo-pruned mod- els consume less memory and achieve faster infer- ence speeds. Moreover, while existing baselines may decrease inference speeds compared to the dense model, LaCo does not have this issue. 4 Further Analysis 4.1 Post-training and Re-pruning 4.1.1 Post-training Due to the inevitable performance loss caused by pruning, we investigate whether models pruned us- ing our LaCo can effectively inherit parameters from the original model and quickly recover per- formance through post-training on the full param- eters. Specifically, we select the pruned Llama2- 7B and Baichuan2-7B models obtained through LaCo in the main experiments and post-train them. For training pruned Llama2-7B, we utilize approx- imately 1.0 billion tokens from the English dataset, while for pruned Baichuan2-7B, we employ ap- proximately 1.25 billion tokens, with a 50% from English and Chinese. The detailed implementation can be found in the Appendix C. In Figure 3, we present the loss curves. It can be observed that both models converge rapidly during training, with the loss showing a sharp decline af- ter about 250 steps, then stabilizing. 
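The post-training itself is ordinary full-parameter causal-LM training on raw text. A minimal sketch is given below; the checkpoint path, corpus, sequence length, batch size, and learning rate are placeholders rather than the recipe described in Appendix C.

```python
# Sketch of full-parameter post-training for a LaCo-pruned checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

ckpt = "./laco-pruned-llama2-7b"                 # hypothetical pruned checkpoint
tok = AutoTokenizer.from_pretrained(ckpt)
tok.pad_token = tok.eos_token                    # Llama-style tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(ckpt, torch_dtype=torch.bfloat16).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

texts = open("corpus.txt").read().split("\n\n")  # placeholder raw-text corpus

def batches(texts, bsz=4, max_len=1024):
    for i in range(0, len(texts), bsz):
        enc = tok(texts[i:i + bsz], return_tensors="pt", truncation=True,
                  max_length=max_len, padding=True)
        yield enc.input_ids.cuda(), enc.attention_mask.cuda()

model.train()
for step, (input_ids, attention_mask) in enumerate(batches(texts)):
    # standard causal-LM objective: labels are the inputs (pad positions would
    # normally be masked to -100; omitted here for brevity)
    loss = model(input_ids, attention_mask=attention_mask, labels=input_ids).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    if step % 50 == 0:
        print(step, loss.item())
```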
The pruned Llama2-7B and Baichuan2-7B models, both ap- proximately 5 billion parameters, exhibit final con- vergence losses around 1.6 and 2.0, which are quite comparable to the reported values of 1.75 for Llama2-7B and 1.9 for Baichuan2-7B in their tech- nical reports. The post-training of pruned Llama2- 7B and Baichuan2-7B on 4 Nvidia A100 80GB GPUs takes approximately 28 hours and 35 hours, respectively. Training a 5B LLM from scratch requires at least 500 billion tokens on hundreds 0 500 1000 1500 step1.61.82.02.22.4lossPruned Llama2-7B + Post training original smoothed 0 500 1000 1500 2000 step2345678lossPruned Baichuan2-7B + Post training original smoothedFigure 3: Loss curves for post-training. LLM Method Avg. Reas. Lan. Know. Exam. Unde. Llama2 -7BDense∗46.55 60.83 40.67 68.67 38.89 33.03 LaCo 37.46 53.30 33.84 54.89 25.85 25.38 LaCo +post train40.33 56.66 36.43 61.85 27.40 26.70 LaCo +post train +re prune32.40 48.07 20.26 49.46 25.72 24.56 Baic2. -7BDense∗56.52 59.03 62.49 63.22 55.60 47.26 LaCo 41.79 51.26 48.49 51.70 31.38 29.90 LaCo +post train40.46 51.67 40.82 53.97 27.98 31.28 Table 5: Average scores across all categories and the overall average score of pruned models, post-trained models, post-trained models followed by re-pruning. of A100 GPUs for several months. However, we achieve a loss-converged model of similar size with only one-thousandth of their cost. This indicates that the pruned models have effectively inherited the parameters of the original models, enabling them to rapidly recover performance with minimal post-training and achieve convergence. We also evaluate the post-trained models on multiple benchmarks with detailed results in Ap- pendix E Table 18. The average scores for each category and the overall average are in Table 5. From the tables, it is evident that the post- training of pruned Llama2-7B significantly im- proves its performance across various benchmarks. However, the performance of pruned Baichuan2- 7B after post-training shows mixed results, with some benchmarks showing improvement while oth- ers exhibit a decrease and there is also a slight decrease in the overall score. We speculate that the pre-training data of Baichuan2-7B includes a variety of sources, resulting in a data distribution different from that of our post-training data, hinder- ing the effectiveness of post-training. However, the consistent score improvement on pruned Llama2- 6\n7B indicates that models pruned using our LaCo indeed effectively inherit the parameters and can regain performance through low-cost post-training. LaCo achieves excellent performance through post-training, prompting us to compare its effective- ness with the SOTA LLM-Pruner on the same train- ing data. Our results, shown in the Appendix D.2 Table 15, indicate that the model pruned by LaCo outperforms the one pruned by LLM-Pruner after post-training. Meanwhile, LaCo also significantly reduces training resource consumption. 4.1.2 Re-pruning Since it is possible to partially restore performance using post-training on an LLM with approximately 25%-30% of its parameters pruned, it raises the question of whether we can further prune the post- trained model to obtain one with only around 50% parameters while still maintaining relatively good performance. Thus, we further prune the previously post-trained pruned Llama2-7B model using LaCo, resulting in a model with 17 layers, retaining 55% of the parameters of the original Llama2-7B model. We evaluate the re-pruned model. 
The detailed results are shown in Appendix E Table 18 and the average results are in Table 5. The tables show that even with only 55% param- eters, the model still retains about 70% of the orig- inal 7B model performance. However, our train- ing data quality and scale are limited. With more and better training data, LaCo should demonstrate even greater utility in the pruning+post-training+re- pruning pipeline on larger models. 4.2 Layer-wise Similarity This section discusses our motivation for merging adjacent layers. Our primary motivation comes from observing that the changes in parameters and output representations between adjacent layers in the LLMs are not particularly significant. In Figure 4, we show the L2 similarities between the SAN q, k, v matrices of each layer and their counterparts in the subsequent layer, as well as the upscaling and downscaling matrices of the MLP for both Llama2-7B and Baichuan2-7B. The re- sults indicate that the maximum L2 values between corresponding matrices in adjacent layers are gen- erally no more than 200. Given the large sizes of the MLP upscaling (11008x4096) and SAN q, k, v (4096x4096) matrices, the change in each element between adjacent layers is minimal. In Figure 5 (a), we randomly select 20 sentences 0 10 20 30 Layers80100120140160180200L2 valueBaichuan2-7B q k v up down 0 10 20 30 Layers6080100120140160180L2 valueLlama2-7B q k v up downFigure 4: The L2 similarity of corresponding matrices between adjacent layers. 051015202530 Layers0.20.40.60.81.0Cosine Similarity Llama2-7B Baichuan2-7B (a) The cosine similarity of output representations be- tween adjacent layers. 10 12 14 16 18 Layers0.99600.99650.99700.99750.99800.99850.99900.99951.0000Cosine Similarity Llama2-7B Baichuan2-7B(b) The similarity of output representations before and after RDSC Layer Merge. Figure 5: The cosine similarity of layer representations. from Wikipedia and calculate the cosine similar- ity between the hidden-states of adjacent layers outputs. The results show that for both Baichuan2- 7B and Llama2-7B, the representation similarity between adjacent layers from layers 3 to 28 is typi- cally very close to 1. The high similarity in param- eters and representations between adjacent layers leads us to consider that a single layer might re- place multiple subsequent layers. Moreover, the similarity in parameters suggests focusing on the differences between layers. In- spired by previous model merging work , we come up with collecting parameter differences between layers and merging them into a single layer. To verify that RDSC Layer Merge can replace multiple layers with one, we conduct the experiment: we merge every four consecutive layers into one within lay- ers 10 to 19 and evaluate the cosine similarity be- tween the merged layer’s output and the original last layer’s output, as in Figure 5 (b), where the low- est cosine similarity on the 4096-dimensional vec- tors is above 0.996, confirming the effectiveness of RDSC Layer Merge in preserving representations. 4.3 Varying Pruning Ratio In this section, we explore the performance of LaCo at different pruning ratios. We conduct experiments on Llama2-7B and Llama2-13B, controlling the 7\nLLM Ratio/Lay. Avg. Reas. Lan. Know. Exam. Unde. 
Llama2 -7B0%/32 46.55 60.83 40.67 68.67 38.89 33.03 12.0%/28 36.13 44.46 36.31 56.35 26.34 24.54 27.1%/23 37.46 53.30 33.84 54.89 25.85 25.38 45.0%/17 30.00 43.66 19.27 48.06 24.78 21.44 Llama2 -13B0%/40 55.50 62.51 55.80 69.20 47.18 47.34 14.6%/34 53.89 60.56 54.51 63.58 46.10 47.46 24.7%/30 47.55 57.17 42.85 58.32 39.28 42.60 49.7%/20 38.27 48.20 26.89 49.26 32.82 36.58 Table 6: Model performance at different pruning ratios. pruning ratios at approximately 10%, 25% (our main experiments), and around 50% by setting dif- ferent hyperparameters (as shown in Appendix A Table 8)2. We evaluate pruned models accordingly. The average results are shown in Table 6 and the detailed results are shown in Appendix E Table 19. As the pruning ratio increases, overall model performance decreases. However, from a pruning ratio of around 10-15% to about 25%, the perfor- mance does not significantly decline, indicating our method’s stability within this range. Furthermore, at a pruning ratio close to 50%, the model still main- tains approximately 70% performance, demonstrat- ing that our method prevents model crashes even with about half of the parameters removed. 5 Related Work Model Quantization reduces model size by con- verting weights from high-precision floating points to lower-precision floating points or integers. SmoothQuant quantizes both weights and activations while smoothing activa- tion outliers. Gptq uses ap- proximate second-order information for quantiza- tion. Qlora backpropagates gradients through a frozen, 4-bit quantized model into Low Rank Adapters. OmniQuant optimizes various quantization parameters. Knowledge Distillation transfers knowledge from a large model to a smaller one. Distilling step-by-step trains smaller models that outperform LLMs. DISCO distills counterfactual knowledge from LLMs. SOCRATIC COT distills the ability of Chain-of-Thought from LLMs. 2Further ablation study on the hyperparameters D,T, dif- ferent similarity metrics, different merging strategies can be found in Appendix B.ZEPHYR applies distilled di- rect preference optimization to learn a chat model. Model Pruning refers to techniques for improv- ing model efficiency by sparsification or param- eter removal. Non-structured pruning often in- volves model sparsity. SparseGPT reduces the pruning problem to large-scale instances of sparse regression, while SpQR identifies and iso- lates outlier weights during LLM sparsification. Structured pruning primarily removes parts of model modules. LLM-Pruner se- lectively eliminates non-critical structures based on gradient information. ShearedLLaMA uses targeted structured pruning and dynamic batch loading to prune Llama2. However, model quantization and sparsification typically require special hardware and usually im- pact performance. Knowledge distillation is costly and task-specific. Existing structured pruning meth- ods often disrupt the model inherent structure. In contrast, LaCo maintains the model structure, which is more concise and preserves excellent per- formance. Although some existing works have utilized layer-skipping/dropping to accelerate inference, LaCo is fundamentally different. It is the first pruner based on layer collapse, resulting in a smaller, faster, more memory-efficient model with strong performance. Furthermore, those methods typically require training new parameters to deter- mine which layers to skip/drop during inference, whereas LaCo does not require any training. 
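As a concrete companion to the layer-wise analysis of Section 4.2, the sketch below computes the cosine similarity between hidden states of adjacent layers and the L2 distance between their corresponding projection matrices. The checkpoint name and probe sentence are placeholders, and the `model.layers` attribute is specific to Llama-style architectures; other models expose their decoder layers differently.

```python
# Sketch of the Section 4.2 analysis: adjacent-layer hidden-state cosine similarity
# and L2 distance between corresponding weight matrices.
import torch
from transformers import AutoModel, AutoTokenizer

name = "meta-llama/Llama-2-7b-hf"                      # placeholder checkpoint
tok = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, torch_dtype=torch.float16,
                                  device_map="auto", output_hidden_states=True)
model.eval()

@torch.no_grad()
def adjacent_hidden_similarity(sentence: str):
    ids = tok(sentence, return_tensors="pt").input_ids.to(model.device)
    hs = model(ids).hidden_states                      # embeddings + one entry per layer
    return [torch.nn.functional.cosine_similarity(a, b, dim=-1).mean().item()
            for a, b in zip(hs[1:-1], hs[2:])]         # layer i vs. layer i+1

def adjacent_weight_l2():
    layers = model.layers                              # LlamaModel exposes .layers
    dists = []
    for a, b in zip(layers[:-1], layers[1:]):
        d = {n: torch.linalg.norm(pa.float() - pb.float()).item()
             for (n, pa), (_, pb) in zip(a.named_parameters(), b.named_parameters())
             if "proj" in n}                           # q/k/v/o and MLP up/down/gate projections
        dists.append(d)
    return dists

print(adjacent_hidden_similarity("The quick brown fox jumps over the lazy dog."))
```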
6 Conclusion In this paper, we propose a concise layer-wise structured pruning method called Layer Collapse (LaCo) , which merges rear model layers into pre- ceding layers for rapid model size reduction. LaCo does not require special hardware support and pre- serves the model intrinsic structure. Experimental results show that LaCo significantly outperforms current SOTA structured pruning methods, also re- vealing potential parameter redundancy in existing LLMs. We conduct ablation studies on various settings of LaCo . We also post-train the pruned models, confirming that LaCo effectively inher- its the original model parameters. Additionally, we discuss our motivation from the perspective of layer-wise similarity and explore the performance ofLaCo -pruned models at different pruning ratios. 8\nLimitations Due to LaCo ’s pruning process primarily relying on layer-wise iterations, it cannot directly control the pruning ratio like previous methods. Instead, it re- quires tuning hyperparameters such as the represen- tation similarity threshold Tfor control. In future work, we will summarize additional experimental patterns regarding how to set hyperparameters to achieve a specific pruning ratio. Our motivation comes from current model merg- ing techniques, but like existing baselines (LLM- Pruner and SliceGPT ), our method lacks a complete theoret- ical proof. We consider this as future work. Additionally, there may be better merging meth- ods, even though our experimental results demon- strate that LaCo’s current merging approach is ef- fective. We will continue to search for improved layer merging methods in the future.
[ "SliceGPT: Compress Large Language Models by Deleting Rows and Columns", "Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch", "Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning", "Baichuan 2: Open Large-scale Language Models", "Llama 2: Open Foundation and Fine-Tuned Chat Models", "CMMLU: Measuring massive multitask language understanding in Chinese", "SpQR: A Sparse-Quantized Representation for Near-Lossless LLM Weight Compression", "QLoRA: Efficient Finetuning of Quantized LLMs", "Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes", "Jump to Conclusions: Short-Cutting Transformers with Linear Transformations", "Eliciting Latent Predictions from Transformers with the Tuned Lens", "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot", "Distilling Reasoning Capabilities into Smaller Language Models", "GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers", "LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale", "PLATON: Pruning Large Transformer Models with Upper Confidence Bound of Weight Importance", "ZeroQuant: Efficient and Affordable Post-Training Quantization for Large-Scale Transformers", "Merging Models with Fisher-Weighted Averaging", "Measuring Massive Multitask Language Understanding", "CLUE: A Chinese Language Understanding Evaluation Benchmark", "PIQA: Reasoning about Physical Commonsense in Natural Language", "Reducing Transformer Depth on Demand with Structured Dropout", "ChID: A Large-scale Chinese IDiom Dataset for Cloze Test", "BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions", "HellaSwag: Can a Machine Really Finish Your Sentence?", "Similarity of Neural Network Representations Revisited", "Investigating Prior Knowledge for Challenging Chinese Machine Reading Comprehension", "Don’t Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization", "Attention is All you Need", "RACE: Large-scale ReAding Comprehension Dataset From Examinations", "The Winograd Schema Challenge", "Multi-Granularity Structural Knowledge Distillation for Language Model Compression", "CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge" ]
Towards Effective Counter-Responses: Aligning Human Preferences with Strategies to Combat Online Trolling Anonymous EMNLP submission
Towards Effective Counter-Responses: Aligning Human Preferences with Strategies to Combat Online Trolling Anonymous EMNLP submission Abstract Trolling in online communities typically in- volves disruptive behaviors such as provoking anger and manipulating discussions, leading to a polarized atmosphere and emotional dis- tress. Robust moderation is essential for miti- gating these negative impacts and maintaining a healthy and constructive community atmo- sphere. However, effectively addressing trolls is difficult because their behaviors vary widely and require different response strategies (RSs) to counter them. This diversity makes it chal- lenging to choose an appropriate RS for each specific situation. To address this challenge, our research investigates whether humans have preferred strategies tailored to different types of trolling behaviors. Our findings reveal a cor- relation between the types of trolling encoun- tered and the preferred RS. In this paper, we introduce a methodology that recommends an appropriate RS for various trolling behaviors. This approach is supported by a dataset we con- structed, which aligns these strategies with user preferences. This enables the generation of effective counter-responses by recommending the most appropriate strategies based on these preferences. The experimental results demon- strate that our proposed approach improves dis- cussion quality and reduces the negative effects of trolls, thereby enhancing the online commu- nity environment. 1 Introduction In online communities, trolling is characterized as a disruptive activity, such as teasing, provoking anger, offending others, dominating discussions, or manipulating opinions . Such behav- iors often interfere with the productive exchange of ideas , contribute to polarized and hostile atmospheres , and cause significant emotional distress to victims . To preserve a positive com- munity atmosphere, moderation is essential, as it helps mitigate the impact of trolling and maintain the continuity of constructive discussions . However, determining the appropriate response to trolls is not straightforward. As Hardaker (2010) noted, the range of troll behaviors is diverse, and the corresponding response strategies for address- ing them should vary accordingly. For example, when faced with highly politicized and offensive comments, the responses should explicitly and strongly incorporate clear warnings. By contrast, when a troll shares off-topic opinions during fo- cused discussions, the responses should gently guide them to realign their contributions with the goals of the discussion. This range of behaviors and required responses adds to the challenge of choosing the most appropriate strategy for a given situation. A recent study has found that humans tend to prefer certain strategies when coun- tering hate speech. Inspired by this finding, we clearly speculated that humans might also have a preferred response tailored to each distinct troll sit- uation. To investigate this, we explored whether preferences exist for various response strategies to different trolling behaviors. Our findings showed a clear correlation between the types of trolling encountered and response strategies preferred, en- hancing our understanding of how to counter dif- ferent trolling behaviors appropriately. In this paper, we aim to develop a method for generating the most effective strategy for respond- ing to trolls in diverse situations, thereby promoting a desirable online community environment. 
Ac- cordingly, we propose a method that recommends a specific response strategy for each type of trolling behavior, which enables the generation of appro- priate Counter- Responses (CR) to trolls aligned with human preference. To this end, we investi- gated the relationship between different Trolling Strategies (TS) and the corresponding preferred 1\nResponse Strategies (RS). Then, we constructed a dataset that matches RS to user preferences across various troll contexts. Utilizing this dataset, we developed a recommendation system for RS and designed a CR generation methodology that se- lects the most appropriate strategy based on this system. Our experimental results demonstrate that our methodology effectively promotes constructive discussions and reduces the negative impacts of trolling, as well as improving the engagement of community users. Our contributions and findings are threefold: •This is the first study to explore the relationship between human preferences and response strate- gies for addressing various trolling behaviors, shedding light on novel approaches for managing online communities. •We propose a novel CR generation methodology, aligning user preferences with response strate- gies, and enhancing the effectiveness of auto- matic moderation. •Our experimental results demonstrate that our proposed approach guides constructive discus- sion and mitigates the negative impacts of trolls. 2 Related Works Troll behaviors vary widely, from explicit expres- sions of hate, such as promoting discrimination based on gender, to subtle annoyance, including di- gressing onto irrelevant topics or misleading others with harmful advice . Hardaker (2013) outlined the types of trolling strategies ranging from covert to overt and examined the types of response strategies accord- ingly. Attempts to implement automatic counter- trolling have been made , but the chal- lenge of automatically selecting the appropriate RS still remains. In this study, we explore effective CR generation strategies to address these gaps. When moderating trolls to preserve a healthy online community environment, a critical factor is community approval of the intervention ap- proach . Common responses to trolling include ignoring , delet- ing comments , and banning users or communities . However, these approaches have been criticized for potential contagion of such behav- ior , leading to censorship accu- sations , and neglect- ing user feedback . Addition- ally, Zheng et al. (2023) found that the commonly used gentle guiding approach is not universally pre- ferred. In this paper, we explore how to choose the appropriate RS for countering trolls, motivated by the previous research that highlights significant variations in preferences for responding to hate speech . 3 Methodology In this section, we explore the relationship between TS and preferred RS, detailing the process we used to construct a dataset that aligns human preferences with RS. Furthermore, we outline our method for generating CRs using this dataset. 3.1 Data Collection Our data collection involves crawling posts and troll comments from various subreddits on Reddit published in 2022. To ensure that collected posts and comments provide adequate contextual infor- mation for understanding discussions, we applied a character limit of a minimum of 12 and a maxi- mum of 512 characters. 
We excluded texts deleted by Reddit or users and samples containing external links or media materials to prevent loss of contextual information due to embedded links, photos, or videos. To gather texts with a high likelihood of being troll comments, we first selected posts that had root comments with negative scores. We then employed GPT-3.5 for troll classification. Further details are shown in Appendix A.2.

3.2 Investigation of Human Preference
We adopted the taxonomy of trolling behavior developed by Hardaker (2013), which classifies TS ranging from covert to overt. This taxonomy places trolling behaviors along a continuum, starting from covert strategies, such as Digression, and ending at the most overt strategy, Aggression. For RS, we utilized a set of seven response strategies for counter-trolling: Engage, Ignore, and Expose as nudging responses, and Challenge, Critique, Mock, and Reciprocate as confrontational responses. Detailed descriptions of TS and RS are provided in Appendix A.1.

We recruited six annotators and provided them with guidelines on both TS and RS. Additionally, we instructed them to label responses that resonate with, influence, or accurately represent viewers' perspectives as preferred responses. We conducted an offline QA session using the same 40 samples to ensure that they were fully engaged and understood the annotation task. Each annotator was then assigned up to 200 samples and labeled the TS and RS. Annotators were instructed to skip samples that were unclear, contained non-English content, or were unrelated to trolling. Finally, we collected a dataset of 873 labeled samples. Detailed annotation processes are provided in Appendix A.3.

[Figure 1: Distribution of preferred RS relative to the TS (x-axis: frequency in %; troll strategies from Digression to Aggression; response strategies Engage, Ignore, Expose, Challenge, Critique, Mock, Reciprocate). The top three bars indicate overt trolls, and the bottom three bars indicate covert trolls.]

3.3 Aligning Human Preference with RS
Figure 1 shows the distribution of preferred RS relative to the types of TS within our dataset. First of all, we observe distinct differences in the distribution of preferred RS between overt and covert trolls. Delving into the details of TS, we also observe a gradual increase in the preference for nudging strategies such as Engage, Ignore, and Expose as we move from the most overt troll strategy, Aggression, to the most covert troll strategy, Digression. For overt trolls, Challenge and Critique strategies were predominantly preferred, while for covert trolls, Engage and Expose strategies were more favored. These findings demonstrate a clear correlation between perceived TS and preferred RS, enhancing our understanding of how to address different trolling behaviors effectively.

3.4 Counter-Response Generation
Our goal is to automatically generate appropriate and human-preferable CRs to trolls by respecting the connection between TS and RS. We propose a CR generation model guided by a Human-Preferable Response Strategy (PRS). Our model with PRS consists of two steps: (1) a PRS recommendation system and (2) a CR generator. The PRS recommendation system takes a post, a troll comment, and the comment's TS as inputs and predicts which RS is preferred the most. Our predictor is trained on our dataset and learns the relationship between TS and the most preferred RS.
Our CR generator takes the same input as the PRS recommendation system, along with the predicted PRS, to generate CRs. The predicted PRS acts as both a direct instruction and advice that helps the model combat trolls more effectively. Our generator is expected to produce highly favorable responses by utilizing the predicted PRS.

4 Experiments
4.1 Experimental Setup
Models We use GPT-3.5, an accessible large language model capable of generating human-like sentences, as our default CR generator, employing in-context learning. In our experiments, we compare three models. (1) The Default model is given only an online post and a troll comment left on the post for its generation. (2) The Strategy-Provided (SP) model is instructed with definitions of TS and RS, along with in-context examples for each RS; it receives a given troll comment with its perceived TS and generates an appropriate RS and the corresponding CR. (3) Our model (PRS) operates under the same settings as SP, but it additionally receives the predicted PRS and in-context examples tailored to this PRS. For the PRS recommendation system, we fine-tuned Flan-T5 Large. Appendix B contains details of the experimental setup.

Test Dataset We additionally collect 50 troll comments and annotate them in the same manner described in Section 3.2.

Evaluation Metrics We asked five evaluators to assess the generated responses in the test dataset across three key aspects. 1) Preference assesses how well the responses resonate with, change, or represent their views; preference is determined by rank order, with the most satisfying CR ranked first. 2) Constructiveness assesses whether the response contributes positively to the discussion by offering solutions, support, or constructive criticism. A high constructiveness score indicates that the response has facilitated constructive discussion and encouraged participation, whereas a low score suggests that it has escalated conflict or derailed the conversation. 3) Supportiveness assesses the level of understanding and empathy towards the target individuals or groups involved in the discussion. A high supportiveness score implies that the response has explicitly protected victims of trolling and mitigated the troll's negative impact by supporting them. Conversely, a low supportiveness score indicates that the response overlooks the troll's behavior and engages with their harmful suggestion. These two criteria are measured on a Likert scale from 1 to 5. Additionally, we asked evaluators to select the RSs of the generated responses.

[Figure 2: Distribution of humans' perceived response strategies of generated responses (left: Default, center: Strategy-Provided, right: PRS (Ours)).]
[Figure 3: Visualization of the rank test for preference.]
[Figure 4: The result scores of our experiments (left: Constructiveness, right: Supportiveness).]
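To make the two-step pipeline of Sections 3.4 and 4.1 concrete, the sketch below fine-tunes Flan-T5 Large as a text-to-text PRS recommender and then builds a prompt for the CR generator. The prompt wording, example data, label serialization, and hyperparameters are illustrative assumptions, not the authors' prompts or settings (those are in Appendix B).

```python
# Sketch of the pipeline: (1) fine-tune Flan-T5 to map (post, troll comment, TS) to a
# preferred RS label; (2) prompt an LLM with the predicted PRS to write the CR.
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

RS_LABELS = ["Engage", "Ignore", "Expose", "Challenge", "Critique", "Mock", "Reciprocate"]

tok = AutoTokenizer.from_pretrained("google/flan-t5-large")
recommender = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large")

def encode(post, comment, ts, rs=None):
    src = (f"post: {post} troll comment: {comment} trolling strategy: {ts} "
           f"Which response strategy is preferred?")
    enc = tok(src, return_tensors="pt", truncation=True, max_length=512)
    if rs is not None:
        enc["labels"] = tok(rs, return_tensors="pt").input_ids
    return enc

# (1) one fine-tuning step on a made-up labeled example (loop over the dataset in practice)
optimizer = torch.optim.AdamW(recommender.parameters(), lr=1e-4)
batch = encode("Post about city budget cuts", "lol nobody cares, go cry", "Aggression",
               rs="Critique")
loss = recommender(**batch).loss
loss.backward(); optimizer.step(); optimizer.zero_grad()

# (2) inference: predict the PRS, then hand it to the CR generator prompt
@torch.no_grad()
def recommend(post, comment, ts):
    out = recommender.generate(**encode(post, comment, ts), max_new_tokens=4)
    pred = tok.decode(out[0], skip_special_tokens=True).strip()
    return pred if pred in RS_LABELS else "Engage"      # fall back on a nudging strategy

def cr_prompt(post, comment, ts, prs):
    return (f"You are a community member responding to a troll.\n"
            f"Post: {post}\nTroll comment: {comment}\nTrolling strategy: {ts}\n"
            f"Use the '{prs}' response strategy to write a counter-response.")
# cr_prompt(...) would then be sent to the CR generator (e.g., GPT-3.5) via its API,
# together with in-context examples for the predicted PRS.
```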
4.2 Experimental Results and Discussions Preference Figure 3 presents the win ratios of AB testing that we converted the preference rankings of three methods. Default and our models beat the Strategy-Provided model by over 70%, and ours beat the Default model by a small margin (4.8%p). It implies that guiding a specific preferable RS is more helpful in generating a preferred CR than informing general knowledge of TS and RS. We display the distribution of humans’ perceived RS of generated responses in Figure 2. The Default model generally responded using Nudging strate- gies, while the SP model utilized Confront strate- gies against most trolls. However, our model used flexible RS: the Confront strategies to overt trolls andNudging strategies to covert trolls. Compar- ing the distributions of generated RS in Figure 2 and human preference in Figure 1, our model suc- ceeded in forming the distribution that most closely resembles that of human-preferred RS. Constructiveness Our method achieved the high- est constructiveness score of 4.25 compared to the baseline scores of 4.03 for Default and 3.03 for SP (see Figure 4). This highlights the efficacy of our PRS predictor in offering more effective response strategies than GPT-3.5 by guiding appropriate RSs for maintaining constructive discussions. In practi- cal cases, our model improved discussion quality by generating responses that indicated off-topic comments from trolls and reminded the original topic to refocus the conversation. Supportiveness Our model achieved the highest supportiveness at 4.07, compared to 3.94 for De- fault and 3.05 for SP. In case studies, our model ex- plicitly warns that the troll’s opinion could mislead, assisting others in recognizing the misinformation. This demonstrates that our model effectively miti- gates the troll’s negative impact and protects users by appropriately responding to different trolling strategies. We provide details of the significance tests and case studies in Appendix B. 5 Conclusion In this work, we addressed the challenge of trolling in online communities by developing a methodol- ogy that matches RS with human preferences. Our proposed approach allows for the generation of CR that not only promotes constructive discussions but also mitigates the harmful effects of trolling. Our experiments validated that our proposed method- ology effectively improved discussion quality and enabled users to identify troll threats, thereby en- hancing online community environments. 4\nLimitations In this study, we collected about 900 labeled data. The limited size of the dataset is due to the ex- clusion of a substantial amount of non-troll data and deleted comments from the initially crawled datasets. Additionally, constraints such as bud- get limitations, the limited availability of annota- tors, and annotator fatigue restricted our capac- ity to label a larger dataset. These limitations also prevented us from applying a variety of train- ing approaches, such as supervised fine-tuning (SFT) (Tekiro ˘glu et al., 2020; Chung et al., 2021; Lee et al., 2022) or reinforcement learning from hu- man feedback (RLHF) with the PPO algorithm , with Large Language Models (LLMs) like LLaMA and Mixtral . Therefore, we adopted a methodology utilizing an accessible LLM, GPT-3.5, with in-context learning. Despite its size, our dataset reveals clear patterns between troll strategies and response strategies. As the experiment expands and more data is collected, we expect that our methodology can be utilized in various ways. 
This aspect falls outside the scope of our current research and will be addressed in future work. Although we provide the annotators with de- tailed guidelines to facilitate a clear understand- ing of troll strategies and response strategies, there are still differences in perceptions of trolling and preferences of CR. Also, as the dataset has been annotated with trolling strategies, response strate- gies, and human preferences from the perspective of general Reddit users, variations in annotations may arise due to differences in the annotators’ un- derstanding of the context and culture of specific communities. Perceived trolling points, which are linked to community understanding, can vary and thus influence the choices of preferred response strategies. However, these differences also mirror real-world variations and can be viewed as a natural diversity of opinions. Our proposed approach, which generates appro- priate responses to perceived trolls, can be utilized alongside judgments on trolling that may involve automated decisions using user flagging or moder- ator determinations. This enables its application as an automatic counter-response generation sys- tem. While automatic counter-response genera- tion systems avoid the problem of censorship, they can still manifest biases and result in unintended consequences . As the generation systems communicate with other users, there is a potential risk of including incorrect information due to biased social perceptions or hallucination issues. Despite these risks, we believe that further investigation and analysis of these systems could provide valuable insights and guidance on how on- line communities can adapt, practice, and moderate in an era filled with AI-generated content . Ethics Statement Our annotation experiment was approved by the Institutional Review Board (IRB)1. All participants in annotation tasks indicated their understanding of the procedure for the annotation and acknowledged their agreement to participate. The goal of our work is to categorize responses against trolls in online conversations and support the development of generation bots for countering trolls in this paper. Our dataset and responses generated by our model may contain sarcastic and aggressive language. We tried to observe how they communicate as-is, even though it could include socially biased content or hate speech.
[ "\"There Has To Be a Lot That We're Missing\": Moderating AI-Generated Content on Reddit", "Can Language Model Moderators Improve the Health of Online Discourse?", "Beyond Denouncing Hate: Strategies for Countering Implied Biases and Stereotypes in Language", "Llama 2: Open Foundation and Fine-Tuned Chat Models", "Ignoring As a Moderation Strategy for Volunteer Moderators on Twitch", "ELF22: A Context-based Counter Trolling Dataset to Combat Internet Trolls", "Training language models to follow instructions with human feedback", "Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?", "What Makes Online Communities 'Better'? Measuring Values, Consensus, and Conflict across Thousands of Subreddits", "Towards Knowledge-Grounded Counter Narrative Generation for Hate Speech", "Generate, Prune, Select: A Pipeline for Counterspeech Generation against Online Hate Speech", "Language Models are Few-Shot Learners", "Generating Counter Narratives against Online Hate Speech: Data and Strategies", "Cyberbullying impacts on victims' satisfaction with information and communication technologies: The role of Perceived Cyberbullying Severity", "Decoupled Weight Decay Regularization", "Don’t feed the trolling: rethinking how online trolling is being defined and combated", "Proximal Policy Optimization Algorithms", "THE INEXORABLE SHIFT TOWARDS AN INCREASINGLY HOSTILE CYBERSPACE ENVIRONMENT: THE ADVERSE SOCIAL IMPACT OF ONLINE TROLLING BEHAVIOR", "Anyone Can Become a Troll: Causes of Trolling Behavior in Online Discussions", "The dark side of Facebook®: The Dark Tetrad, negative social potency, and trolling behaviours", "Hunting for Troll Comments in News Community Forums", "“I refuse to respond to this obvious troll\":an overview of responses to (perceived) trolling", "Antisocial Behavior in Online Discussion Communities", "The art of trolling law enforcement: a review and model for implementing ‘flame trolling' legislation enacted in Great Britain (1981–2012)", "Building Successful Online Communities: Evidence-Based Social Design", "Trolling in asynchronous computer-mediated communication: From user discussions to academic definitions", "Moderation, Response Rate, and Message Interactivity: Features of Online Communities and Their Effects on Intent to Participate", "A new metric for probability distributions", "Searching for Safety Online: Managing \"Trolling\" in a Feminist Forum", "Minimum Hellinger distance estimates for parametric models", "What Makes Good Counterspeech? A Comparison of Generation Approaches and Evaluation Metrics", "High-quality argumentative information in low resources approaches improve counter-narrative generation", "A Fine-Grained Taxonomy of Replies to Hate Speech", "Generation of Korean Offensive Language by Leveraging Large Language Models via Prompt Design", "“Uh. . . . not to be nitpicky,,,,,but…the past tense of drag is dragged, not drug.”: An overview of trolling strategies", "Counterspeech 2000: A New Look at the Old Remedy for \"Bad\" Speech" ]
QPaug: Question and Passage Augmentation for Open-Domain Question Answering of LLMs
QPaug: Question and Passage Augmentation for Open-Domain Question Answering of LLMs Abstract Retrieval-augmented generation (RAG) has received much attention for Open-domain question-answering (ODQA) tasks as a means to compensate for the parametric knowledge of large language models (LLMs). While previ- ous approaches focused on processing retrieved passages to remove irrelevant context, they still rely heavily on the quality of retrieved passages which can degrade if the question is ambigu- ous or complex. In this paper, we propose a simple yet efficient method called question and passage augmentation (QPaug) via LLMs for open-domain QA. QPaug first decomposes the original questions into multiple-step sub- questions. By augmenting the original ques- tion with detailed sub-questions and planning, we are able to make the query more specific on what needs to be retrieved, improving the retrieval performance. In addition, to compen- sate for the case where the retrieved passages contain distracting information or divided opin- ions, we augment the retrieved passages with self-generated passages by LLMs to guide the answer extraction. Experimental results show that QPaug outperforms the previous state-of- the-art and achieves significant performance gain over existing RAG methods. 1 Introduction Large language models (LLMs) have shown re- markable in-context learning capability for various real-world applications such as assistant chat- bot , robot planning , search ranking , and code generation . However, the knowledge of LLMs is lim- ited to the pre-training corpus, making it difficult to provide answers to questions on up-to-date infor- mation. To overcome such limitations via external knowledge sources, recent works have focused on the retrieval-augmented-generation (RAG) . The RAG sys- tems consist of retrievers which search and retrieve related information from knowledge sources and readers which generate responses based on the re- trieved information. The RAG approach has been proven effective in leveraging external knowledge to complement the parametric knowledge of LLMs. Open-domain Question Answering (ODQA) is one of the NLP tasks most relevant to RAG systems. In the ODQA, retrievers search for relevant passages from questions, and readers answer the questions based on retrieved contexts. Research efforts have been put into enhancing retrievers and readers . For exam- ple, RAPTOR is an advanced retriever which recursively captures multiple levels of details of a text using a tree. However, even advanced retrievers may fetch poor passages when the question is ambiguous or complex , requiring question clarification or multi-step re- trieval , rendering the retrieval inaccurate and inefficient. Thus, for tough questions, it is unclear whether the retrieved passages are of high quality, i.e., they contain context relevant to the question, which makes the design of readers challenging. The prob- lem can be alleviated if the parametric knowledge of LLMs can be tuned to extract answers given the relevant retrieved passages . However, fine-tuning is often infeasible due to the sheer scale of LLMs, or some LLMs are essentially black-box APIs . Recently, have proposed to process retrieved passages via LLM prompting without fine-tuning, but with Internet retrieval and 1\nreranking , or with summa- rization and verification . How- ever, these approaches are still limited in that they heavily rely on the quality of contexts provided by retrieved passages. 
In this paper, we deal with question complex- ity and complement the quality of retrieval by in- context learning. The goal is to harmoniously com- bine the parametric and non-parametric knowledge of LLMs through prompting. We propose QPaug (pronounced cue-pug ) which stands for question and passage augmentation. Firstly, the question augmentation is based on the hypothesis that the LLMs can decompose complex questions into mul- tiple easier sub-questions . The sub-questions contain fine-grained informa- tion and planning as to what knowledge should be retrieved from external source. The question for retrieval is composed by augmenting the origi- nal question with the sub-questions. Secondly, the passage augmentation is based on the hypothesis that the LLMs’ parametric knowledge is the most pragmatic alternative source to non-parametric (re- trieved) knowledge to deal with the degraded qual- ity of retrieval. A self-generated passage is com- posed with respect to the augmented question, leveraging extensive knowledge of LLMs. Also, self-generation can prevent irrelevant retrieved pas- sages from overriding the LLMs’ factual knowl- edge . We aug- ment the retrieved passages with the self-generated passages. Experiments show that the question and passage augmentation through LLMs complement the retrieved generation well, achieving state-of- the-art performance on ODQA benchmark datasets. Our contributions are summarized as follows. (i) We propose a simple yet effective framework for the LLM-based question and passage augmentation method (QPaug) based on the prompting of LLMs. (ii) QPaug successfully decomposes and augments questions improving the performance of retrieval. In addition, the self-generated passage by QPaug, when combined with retrieved passages, strength- ens the factual knowledge for the reader, achieving large performance gains. (iii) Extensive ablation studies show that QPaug can be integrated with various LLMs and retrievers, exhibiting excellent performances on several ODQA benchmarks. 2 Related Work 2.1 Open-Domain Question Answering first proposed retrieve-and-read system for solving open-domain question answer- ing tasks. Following conventional lexical-based sparse retriever systems like BM25 , DPR pro- posed a dense passage retrieval for a semantic re- triever system. The semantic retriever is based on sentence embeddings, and there have been a num- ber of works for improving embeddings . By contrast, the reader system which extracts answers from retrieved documents consists of extractive methods such as BERT or RoBERTa and genera- tive methods like BART or T5 . 2.2 Retrieval-Augmented Generation Augmenting language models with retrieved in- formation from external knowledge sources has proven effective for a wide range of NLP tasks . In the LLM era, proposed an in-context learning-based retrieval-augmented generation methods. proposed Self-RAG which generates and reflects on retrieved passages and own generated text using reflection tokens. proposed recursive re- trieval methods using embedding, clustering, and summarizing chunks of text, where they construct a tree with differing levels of summarization. proposed summarizing retrieved pas- sages conditioned on candidate answers to select the more relevant context. 2.3 Prompting of Large Language Models GPT-3 opened a few-shot learning era of language models. extensively studied about prompting of GPT-3 us- ing manually designed prompts on diverse tasks. They showed that GPT-3 is more reliable with proper prompts. 
Chain-of-thoughts (CoT) was proposed to decompose a problem into multi-step subproblems. In addition, there have been modified works of CoT. Wang et al. (2023a) extensively studied the properties of CoT, and illustrated that even incorrect reasoning paths can improve the performance of LLM reasoning.

[Figure 1 (method overview; the figure walks through the example question "Where was the place of death of the director of film The Chambermaid On The Titanic?", with predicted answer "La Riera de Gaià"): Step 1. LLMs receive questions. Step 2. The original question is decomposed into multi-step sub-questions on what information should be retrieved. Step 3-1. Retrieve passages from the augmented questions. Step 3-2. Self-generate a passage from the augmented questions using factual knowledge; the LLM is asked to generate [NONE] if it does not have the required knowledge. Step 4. Augment the retrieved passages with the self-generated passage, then predict answers. P_x denotes the prompt at each step.]

3 Method
3.1 Problem Statement and Notations
Open-domain question answering (ODQA) is the extension of QA tasks in which explicit evidence or context is not provided to the model. Thus, ODQA requires other knowledge sources such as an external knowledge base. The basic strategy to solve ODQA tasks is retrieve-and-read.
Specifically, retriever $R$ obtains the top-$K$ related passages $C_K$ from a knowledge source $Z$:

$C_K = R(q, Z, K)$   (1)

Then the reader extracts the predicted answer $\hat{a}$ to question $q$ given the retrieved passages $C_K$:

$\hat{a} = M(P(q, C_K))$   (2)

where $M(\cdot)$ denotes the LLM-generated text and $P$ denotes a certain prompt.

3.2 Question Augmentation

We propose question augmentation to enhance the retrieval capability. The original question is decomposed into multiple sub-questions which can arise during the reasoning process. The motivation is that better retrieval is possible if we augment the original question with those sub-questions, which can provide fine-grained hints for resolving the question. We utilize zero-shot chain of thoughts (CoT) for composing sub-questions. Specifically, we use prompt $P_{aug}(\cdot)$ for question $q$, where $P_{aug}(q)$ adds the sentence "Let's think step-by-step" to $q$. $P_{aug}(q)$ is then passed to the LLM to generate the augmented question $q_{aug}$, i.e.,

$q_{aug} = M(P_{aug}(q))$   (3)

An example is given in Fig. 1: see Steps 1 and 2. In Step 2, we observe that $q_{aug}$ contains fine-grained instructions to tackle the question. Notably, although prior zero-shot CoT is shown to achieve large performance gains on arithmetic reasoning tasks, its effectiveness was less examined on ODQA tasks. However, we observe that augmenting the question with reasoning steps planned out by CoT boosts the retrieval capability, leading to improved performance on ODQA tasks.

Next, the augmented question is used for retrieval. Specifically, we create query $\hat{q}$ by concatenating $q$ and $q_{aug}$:

$\hat{q} = q \oplus q_{aug}$   (4)

where $\hat{q}$ is used for the retrieval. We retrieve the top-$K$ passages $c_1, \ldots, c_K$ using Approximate Maximum Inner Product Search (MIPS):

$C_K = R(\hat{q}, Z, K) = \{c_1, \ldots, c_K\}$.   (5)

3.3 Passage Self-Generation

We propose to utilize the vast knowledge of LLMs to complement the contextual information provided by the retrieved passages. Specifically, the LLM is asked to self-generate a passage regarding the question. We augment the retrieved passages with the generated passage. Let $P_{gen}(q)$ denote a prompt for passage generation given question $q$. We have

$\hat{c} = M(P_{gen}(\hat{q}))$   (6)

where $\hat{c}$ denotes the LLM-generated passage. For example, see Steps 1 and 3-2 in Fig. 1, and see the outline of prompt $P_{gen}$ for passage generation in Step 3-2. Finally, we extract predicted answers from the LLM using $C_K$ and $\hat{c}$. The final prompt $P_{pred}$ for prediction contains $q$, $C_K$, and $\hat{c}$, which is input to the LLM to obtain the predicted answer, i.e.,

$\hat{a} = M(P_{pred}(q, C_K, \hat{c}))$   (7)

An important consideration is that the LLM may not have sufficient knowledge for the question and should refrain from generating plausible but incorrect passages. To derive passages that are as truthful as possible, we explicitly instruct the LLM to base its passage on factual knowledge and to generate [NONE] in case it does not have sufficient knowledge: see the detailed prompt $P_{gen}$ for passage generation in Table 7 of Appendix A.1. There are two cases as a result.

LLM admits lack of knowledge. In this case, the LLM returns a [NONE] passage, and only the retrieved passages are used at the final read step to predict the answer.

LLM hallucinates. The LLM provides a bogus passage, believing that it is from factual knowledge. To prevent possible hallucinations from mixing with the retrieved passages, we label the LLM-generated passage as "Your Knowledge:" in composing the final prompt $P_{pred}$ for the answer prediction: see the box of $P_{pred}$ above Step 4 in Fig. 1.
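To make the pipeline concrete, the flow of Eqs. (3)-(7) can be sketched in a few lines of Python. This is only an illustrative sketch, not the authors' released implementation: the `llm` and `retrieve` callables are placeholders for an LLM call and a dense retriever, and the prompt wordings are paraphrases of the prompts outlined in Fig. 1 (the exact templates are given in Table 7 of Appendix A.1).

```python
# Minimal sketch of the QPaug pipeline (Eqs. 3-7); `llm` and `retrieve`
# are placeholders for an LLM call and a dense retriever (e.g., Contriever).

def question_augment(llm, q):
    # Eq. (3): zero-shot CoT prompt P_aug(q) appends "Let's think step-by-step".
    return llm(f"{q}\nLet's think step-by-step.")          # q_aug

def retrieve_passages(retrieve, q, q_aug, k=9):
    # Eqs. (4)-(5): concatenate q and q_aug into the retrieval query q_hat.
    q_hat = q + " " + q_aug
    return q_hat, retrieve(q_hat, k)                       # top-k passages C_K

def self_generate_passage(llm, q_hat):
    # Eq. (6): ask the LLM for a factual passage, or [NONE] if it lacks knowledge.
    p_gen = ("Write a factual passage that helps answer the question below. "
             "Write [NONE] if you cannot write a factual passage.\n" + q_hat)
    return llm(p_gen)

def predict_answer(llm, q, passages, generated):
    # Eq. (7): retrieved passages plus the self-generated passage, the latter
    # labelled "Your Knowledge:" to keep it separate from retrieved evidence.
    context = "\n".join(f"Passage {i + 1}: {c}" for i, c in enumerate(passages))
    if generated.strip() != "[NONE]":
        context += "\nYour Knowledge: " + generated
    return llm(f"{context}\nQuestion: {q}\nAnswer:")

def qpaug(llm, retrieve, q, k=9):
    q_aug = question_augment(llm, q)
    q_hat, passages = retrieve_passages(retrieve, q, q_aug, k)
    generated = self_generate_passage(llm, q_hat)
    return predict_answer(llm, q, passages, generated)
```

The default of nine retrieved passages plus one generated passage in this sketch mirrors the K = 9 + 1 setting used for a fair comparison in the experiments below.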
Presumably, if a majority of the $K$ retrieved passages provide relevant information, the LLM is likely to recover the correct answer in spite of hallucination. Examples of the above two cases are provided in Table 15 and Table 16 of Appendix A.2.2. Overall, the proposed passage augmentation is more beneficial than harmful, as demonstrated by the experiments in the following section.

Dataset    # samples
Passages   21,015,324
NQ         4,289
2wiki      12,576
Hotpot     7,405
Table 1: Dataset statistics of the passage collection and the test sets of the ODQA benchmark datasets.

4 Experiment

4.1 Experimental settings

Evaluation datasets and metrics. We experiment with zero-shot QA tasks on three ODQA benchmarks: Natural Questions (NQ), 2WikiMultiHopQA (2wiki), and HotpotQA (Hotpot). The test sets from those datasets are used for the experiments. 2wiki and HotpotQA are multi-hop QA datasets that typically require multi-step retrieval for traditional methods. As the knowledge source for retrieval, we use the 21M-passage Wikipedia dump proposed by DPR. Table 1 shows the dataset statistics of the passages and test sets. Since the benchmark datasets include both short and long answers, answers are not only single words but also phrases or sentences. Thus, we use Rouge-L and F1 scores as evaluation metrics to compute fine-grained scores, following prior work. We normalize answers and predictions before computing both metrics, similar to prior work. All the baseline methods use K = 10 retrieved passages as the input to the reader. For a fair comparison, QPaug uses K = 9 retrieved passages combined with one LLM-generated passage.

Baseline models. We experiment with three retrievers: SBERT, ANCE, and Contriever. In addition, we experiment with three LLMs as readers: Llama-2, GPT-3.5, and GPT-4. We use the same LLM as the reader model for the proposed question and passage augmentation. We compare QPaug with no-retrieval methods, e.g., chain-of-thoughts and Self-verification, as well as context-augmentation methods such as Rerank and SuRE.

Implementation Details. Greedy decoding is used for LLM generations in all the experiments. We evaluate zero-shot QA tasks with retrieval, where the retrieval is implemented with Faiss indexing. We use LangChain (https://www.langchain.com/) for LLM inference with manually designed prompt templates. Detailed prompt templates are provided in Appendix A.1.

Method              NQ    2wiki  Hotpot
No retrieval        37.9  27.1   35.3
Chain-of-thoughts   38.2  28.2   35.5
Rerank              38.0  26.6   33.2
Self-verification   38.4  30.8   35.9
SuRE                40.4  32.6   33.6
QPaug (Proposed)    44.6  35.5   45.1
Table 2: Comparison between QPaug and baseline methods. We use Contriever as the retriever for Rerank, SuRE, and QPaug, and GPT-3.5 as the LLM of all methods. The evaluation metric is the F1 score between answers and predictions. Bold indicates the best performance.

4.2 Main Results

Table 2 shows the comparison between QPaug and other baseline methods, where Contriever is used as the retriever and GPT-3.5 as the LLM. Experimental results demonstrate that QPaug achieves a performance gain of 10.4% on NQ, 8.9% on 2wiki, and 34.2% on HotpotQA over SuRE, which is the current state of the art. In addition, QPaug achieves average performance improvements of 22.6% and 27.9% over Chain-of-thoughts and Self-Verification, respectively. The results demonstrate that QPaug can successfully retrieve relevant context and generate factual knowledge supporting the retrieved passages. Next, we examine the compatibility of QPaug with various types of retrievers and LLMs.
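Before turning to those comparisons, the scoring protocol described in Section 4.1 can be made concrete. The sketch below computes the token-level F1 between a normalized prediction and a gold answer; the exact normalization rules are not spelled out above, so the SQuAD-style rules used here (lowercasing, stripping punctuation and articles) are an assumption.

```python
import re
import string
from collections import Counter

def normalize(text):
    # Assumed SQuAD-style normalization: lowercase, strip punctuation,
    # drop articles, and collapse whitespace before scoring.
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def f1_score(prediction, answer):
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(answer).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```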
Table 3 summarizes the experimental results on the ODQA benchmark datasets with three retrievers and three LLMs. Each element indicates a combination of retriever (column) and LLM (row). We first apply QPaug to proprietary LLMs. GPT-4 with QPaug achieves average performance gains from 14.8% to 23.8% in Rouge and from 20.1% to 27.9% in F1 score. In addition, GPT-3.5 achieves a similar average performance gain of up to 24.5% in Rouge and 27.5% in F1 score when combined with QPaug. As an open-source LLM, LLaMA-2-7b-chat achieves slightly lower average gains of up to 20.1% and 23.9% in Rouge and F1 scores, respectively. We observe that QPaug consistently improves ODQA performance irrespective of the types of LLMs and retrievers.

In particular, QPaug exhibits notable improvements on the multi-hop QA datasets 2wiki and HotpotQA. GPT-4 with SBERT obtains performance gains of 2.1% in Rouge and 17.4% in F1 score on NQ; however, the same model achieves gains of 34.2% / 37.9% on 2wiki and 14.5% / 20.1% on HotpotQA. The results show that QPaug achieves particularly large performance gains on multi-hop QA datasets, illustrating its effectiveness in solving complex questions.

5 Ablation study

We conduct an ablation study on the two main components, i.e., question augmentation and passage generation, denoted by Qaug and Pgen in the following subsections.

5.1 Question Augmentation (Qaug)

In this section, we investigate the effect of question augmentation (Qaug). Table 4 shows the change in the retrievers' performance, measured in Recall@10, when adding only the Qaug component. We experimented with various LLMs: GPT-4, GPT-3.5, and LLaMA-2. Table 4 demonstrates that Qaug boosts the retrieval performance by up to 30% on average with GPT-4, where the improvements are greater than with GPT-3.5. In addition, we experiment with both the 7b and 70b LLaMA-2 models, which achieve performance gains on all datasets, with the 70b model obtaining higher gains by a large margin.

Fig. 2 shows the performance gains achieved by Qaug with a varying number of retrieved passages K. We observe that the performance gain is more pronounced at K = 10 than at 50 or 100. This is because, if the number of passages to be retrieved is limited, the relevance and fine granularity of the queries composed by Qaug become more important for better retrieval. We also observe that Qaug improves the retrieval performance with a relatively large number of retrieved passages as well, i.e., with K = 50 or 100.

5.2 Passage Self-Generation (Pgen)

In this section, we examine the effect of the passage self-generation (Pgen) component.
Table 5, 6 and 7 show the change in performance by adding the self- 5\nLLM Datasets No retrieval SBERT + QPaug ANCE + QPaug Contriever + QPaug GPT-4NQ 37.0 / 42.0 42.8 / 46.0 43.7 / 54.0 40.8 / 45.0 43.9 / 52.0 39.4 / 43.7 42.4 / 52.0 2wiki 26.8 / 37.6 27.8 / 36.5 37.3 / 49.1 25.0 / 33.6 36.7 / 47.2 24.6 / 32.6 35.2 / 47.2 Hotpot 33.9 / 42.2 39.2 / 49.0 44.9 / 54.6 34.6 / 43.6 42.4 / 52.0 34.0 / 43.2 43.7 / 53.5 Average 32.6 / 40.6 36.6 / 43.8 42.0 / 52.6 33.5 / 40.7 41.0 / 50.4 32.7 / 39.8 40.4 / 50.9 GPT-3.5Datasets No retrieval SBERT + QPaug ANCE + QPaug Contriever + QPaug NQ 35.1 / 37.9 35.5 / 40.3 40.7 / 44.2 35.5 / 39.8 40.9 / 45.4 35.2 / 37.9 41.6 / 44.6 2wiki 19.7 / 27.1 20.2 / 30.2 24.0 / 35.3 20.0 / 26.0 24.6 / 35.7 20.3 / 26.4 24.5 / 35.5 Hotpot 27.4 / 35.3 28.3 / 38.3 38.0 / 44.9 29.0 / 34.8 37.3 / 44.4 28.2 / 33.7 38.4 / 45.1 Average 27.4 / 33.4 28.0 / 36.1 34.2 / 41.8 28.2 / 33.5 34.3 / 41.5 27.9 / 32.7 34.8 / 41.7 LLaMA-2-7b-chatDatasets No retrieval SBERT + QPaug ANCE + QPaug Contriever + QPaug NQ 13.0 / 15.5 24.0 / 27.5 29.5 / 34.1 23.3 / 26.5 28.1 / 32.4 20.8 / 24.7 27.9 / 32.0 2wiki 16.7 / 20.3 20.9 / 23.2 22.0 / 26.9 20.6 / 23.0 22.5 / 25.8 20.3 / 22.5 22.3 / 25.9 Hotpot 16.1 / 18.6 22.5 / 27.5 24.9 / 33.0 21.5 / 26.9 22.2 / 29.3 20.1 / 24.4 23.3 / 31.0 Average 15.3 / 18.1 22.5 / 26.1 25.5 / 31.3 21.8 / 25.5 24.3 / 29.1 20.4 / 23.9 24.5 / 29.6 Table 3: Performance (Rouge/F1 score) comparison between no-retreival, base RAG and QPaug across various retrievers and readers (LLMs). For example, the column labelled “SBERT” means a base RAG with SBERT as the retriever, and column labelled “+QPaug” on the right means we use QPaug as “add-on” to the RAG with SBERT retriever. GPT-4 is used for the question augmentation in QPaug. Model / Dataset NQ 2wiki Hotpot SBERT 63.41 27.90 47.47 +Qaug by LLaMA-2-7b-chat 63.74 (+0.33) 29.21 (+1.31) 52.07 (+4.60) +Qaug by LLaMA-2-70b-chat 66.10 (+2.69) 33.56 (+5.66) 56.89 (+9.42) +Qaug by GPT-3.5 64.98 (+1.57) 30.37 (+2.47) 51.06 (+3.59) +Qaug by GPT-4 70.20 (+6.79) 38.68 (+10.8) 62.08 (+14.6) ANCE 60.43 22.57 38.81 +Qaug by LLaMA-2-7b-chat 62.91 (+2.48) 24.04 (+1.47) 43.88 (+5.07) +Qaug by LLaMA-2-70b-chat 64.58 (+4.15) 26.18 (+3.61) 46.63 (+7.82) +Qaug by GPT-3.5 62.55 (+2.12) 22.91 (+0.34) 40.85 (+2.04) +Qaug by GPT-4 66.68 (+6.25) 27.31 (+4.74) 49.59 (+10.8) Contriever 52.48 21.26 40.54 +Qaug by LLaMA-2-7b-chat 58.10 (+5.62) 25.00 (+3.74) 47.78 (+7.42) +Qaug by LLaMA-2-70b-chat 62.18 (+9.70) 29.04 (+7.78) 53.19 (+12.7) +Qaug by GPT-3.5 52.76 (+0.28) 21.51 (+0.26) 42.74 (+2.20) +Qaug by GPT-4 60.38 (+7.90) 28.95 (+7.69) 53.69 (+13.2) Table 4: Answer Recall @10 of retrieved passages ques- tion augmentation. Figure 2: Performance gains of recall @ K with question augmentation of GPT-4. K is 10, 50, and 100. The base retriever is Contriever. generated passage to retrieved passages, with GPT- 4, GPT-3.5 and LLaMA-2-7b-chat respectively. Table 5 shows that the addition of Pgen with GPT-4 improves performances in all datasets as compared to the baseline using only retrieved pas- sages. In particular, the performance gain is signif- icantly large at 35.1% on 2wiki dataset for which the search/retrieval results tend to be relatively poor due to multi-hop questions. Pgen is particularly ef- fective when the retrieval performance degrades. This demonstrates that the self-generated passage is able to complement the missing context in the retrieved passages. 
We observe similar trends with other LLMs, e.g., see Table 6 and Table 7 for the effects of Pgen with GPT-3.5 and LLaMA-2-7b- chat. The performance gains with GPT-3.5 are on average from 14.2% to 25.9%, and are on average from 11.6% to 25.9% with LLaMA-2-7b-chat. Finally, we examine the effect of Pgen by vary- ing the number of retrieved passages K. Fig. 3 shows the F1 scores per top- Kgrounded passages. Similar to Fig. 2, experimental results show that Pgen achieves performance gains when the number of retrieved passages increases. Thus, LLM can extract factual knowledge even though there are a number of possibly irrelevant retrieved passages. Also, the results show that the passage generation method can be plugged into various LLMs since it illustrates the same trend for GPT-4, GPT-3.5, and LLaMA-2-7b-chat. 6 Qualitative Analysis Effectiveness of Qaug. Table 8 shows the com- parison between the retrieval results using the origi- nal question versus the augmented question (Qaug) 6\nModel / Dataset NQ 2wiki Hotpot SBERT 46.0 36.5 49.0 +Pgen by GPT-4 47.6 47.8 53.0 ANCE 45.0 33.6 43.6 +Pgen by GPT-4 47.8 45.6 47.4 Contriever 43.7 32.6 43.2 +Pgen by GPT-4 47.2 45.2 47.7 Table 5: F1 score between predictions and answers of GPT-4. Model / Dataset NQ 2wiki Hotpot SBERT 40.3 30.2 38.3 +Pgen by GPT-3.5 44.0 34.2 43.1 ANCE 39.8 26.0 34.8 +Pgen by GPT-3.5 44.2 35.2 44.1 Contriever 37.9 26.4 33.7 +Pgen by GPT-3.5 43.0 34.8 44.6 Table 6: F1 score between predictions and answers of GPT-3.5. Model / Dataset NQ 2wiki Hotpot SBERT 27.5 23.2 27.5 +Pgen by LLaMA-2-7b-chat 31.2 26.2 30.1 ANCE 26.5 23.0 26.9 +Pgen by LLaMA-2-7b-chat 28.9 25.6 29.0 Contriever 24.7 22.5 24.4 +Pgen by LLaMA-2-7b-chat 29.7 26.2 29.0 Table 7: F1 score between predictions and answers of LLaMA-2-7b-chat. Figure 3: Comparison of F1-scores between RAG and QPaug per top-K grounded passages on 2wiki. Retriever is vanilla SBERT. on 2wiki. The retrieval by Qaug successfully ob- tains relevant information containing the answer, while the retrieved passages from the original ques- tion do not contain relevant clues. As a result, the LLM outputs the correct answer, Catherine Robbe- Grillet with Qaug, but outputs Not mentioned with the original question (for details, refer to Table 11 in Appendix A.2.1). Thus, Qaug enhances the re- trieval of passages with relevant contexts, leading to significant improvement on the performance of ODQA. Effectiveness of Pgen. Table 9 shows a qualita- tive analysis of passage generation (Pgen) using LLMs. In this example, the Contriever fails to retrieve relevant passages. The top-1 passage pro- vides a description of Mole Men Against the Son of Hercules . The passage likely received the highest score due to the exact match of the phrase Mole Men Against the Son of Hercules with the question. The top-2 passage contains information about The Fury of Hercules . It seems to have been retrieved based on its focus on Hercules . Lastly, the top-3 passage is also completely unrelated to the correct answer. It is due to getting a high score due to the presence of the words director andAgainst the Son of Hercules . The rest of the passages in top-10 did not include content relevant to the correct answer. However, the LLM-generated passage accurately contained a short biography of Antonio Leonviola , the director of Mole Men Against The Son Of Her- cules , which revealed a crucial hint on the place of death. 
Results show that even though the re- triever does not retrieve relevant context, Pgen com- plements the retrievers by their knowledge. Con- versely, as mentioned earlier, there are cases where the LLM-generated passage did not provide rele- vant context to the question. Nevertheless, a correct answer was recovered in case the retrieved passages provided relevant information. Some examples are shown in Table 15 and Table 16 of Appendix A.2.2. 7 Conclusion In this paper, we propose a simple yet effec- tive question and passage augmentation (QPaug) method via LLMs. QPaug harmoniously com- bines parametric and non-parametric knowledge of LLMs by leveraging the massive knowledge of LLMs for designing queries for retrieval and for guiding answer extraction from passages. Ex- periments show that both proprietary and open- source LLMs successfully decompose questions into multiple sub-questions by the proposed ques- tion augmentation, which significantly improved the performances of passage retrieval. In addition, 7\nQuestion Answer Who is the spouse of the director of film Eden And After? Catherine Robbe-Grillet Augmented Question by Qaug 1. **Identify the Director**: First, we need to determine who directed "Eden And After". 2. **Research the Director**: Once we have the name of the director, we can research more about his or her personal life, focusing on marital status to find out who their spouse is (or was, depending on current status). 3. **Verify Information**: It’s important to cross-check the information from multiple sources to ensure accuracy. ... Retrieved Passage w/o Qaug Top-1 and during ""I Dream of Jeannie"". Also covered are her three marriages to Michael Ansara (1958–1974), Charles Fegert (1977–1982) and Jon Eicholtz (1991–present) and her ""emotional breakdown"" following the death of her only child, ... Retrieved Passage w/ Qaug Top-1 ...Catherine Robbe-Grillet (; ""née"" Rstakian; born 24 September 1930) is a French theatre and cinema actress, photographer, and writer of Armenian descent who has published sadomasochistic writings under ... Table 8: Qualitative comparison of retrieved passage with or without question augmentation (Qaug) on 2wiki dataset. The question-augmentation method is GPT-4. The first passage results are obtained by searching using the original question, while the second passage results are obtained by searching using the augmented question. The complete search results and examples are provided in Appendix A.2.1. Question Answer Where was the place of death of the director of film Mole Men Against The Son Of Hercules? Rome Retrieved Passages Top-1 ... Against the Son of Hercules"" was released on 10 October 1961. Mole Men Against the Son of Hercules Mole Men Against Top-2 ... poisoned by his own men (the ""sers felons"" Antipater and Divinuspater), as was another key figure of the work, Darius... Top-3 at Dubrava Film in Zagreb, Croatia and on location in Zagreb. ""The Fury of Hercules"" as released in Italy on 21 March 1962. The Fury of Hercules The Fury of Hercules () is a 1962 peplum film written and directed by Gianfranco Parolini... Generated passage The director of the film "Mole Men Against The Son Of Hercules" was Antonio Leonviola. He passed away on December 14, 1971, in Rome , Italy... Table 9: Qualitative analysis of generated passage (Pgen) compared to retrieved passages. The passage generation method is GPT-4. While retrieved passages do not include relevant context with the answer, generated passage includes the answer. 
The entire search results and additional examples are provided in Appendix A.2.2. the self-generated passage was shown to achieve a large performance gain when it is augmented with the retrieved passages. 8 Limitations Although we showed that the proposed question and passage augmentation method highly improves the performances on ODQA tasks, the detailed planning of questions and self-generation of pas- sages rely much on LLMs’ knowledge. However, the capability of LLMs are growing at an unprece- dented rate, and we believe our work is timely in that it is reported that even LLM can generate synthetic data to train other language models and achieve good performance , and such massive knowledge of LLMs should be more actively explored for many tasks, including RAG. In addition, although we explicitly instructed LLMs not to generate bogus passages, hallucinations still can occur. However, we believe that alignment techniques reducing LLMs’ halluci- nations are actively researched, which can further benefit our method in the future.
[ "Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone", "RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval", "Chain-of-Table: Evolving Tables in the Reasoning Chain for Table Understanding", "Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection", "Textbooks Are All You Need II: phi-1.5 technical report", "Graph of Thoughts: Solving Elaborate Problems with Large Language Models", "Llama 2: Open Foundation and Fine-Tuned Chat Models", "Lost in the Middle: How Language Models Use Long Contexts", "Asking Clarification Questions to Handle Ambiguity in Open-Domain QA", "Can We Edit Factual Knowledge by In-Context Learning?", "GPT-4 Technical Report", "Larger language models do in-context learning differently", "Towards Understanding Chain-of-Thought Prompting: An Empirical Study of What Matters", "Precise Zero-Shot Dense Retrieval without Relevance Labels", "Large Language Models are Better Reasoners with Self-Verification", "Text Embeddings by Weakly-Supervised Contrastive Pre-training", "Prompting GPT-3 To Be Reliable", "Few-shot Learning with Retrieval Augmented Language Models", "Large Language Models are Zero-Shot Reasoners", "Training language models to follow instructions with human feedback", "Unsupervised Dense Information Retrieval with Contrastive Learning", "Improving language models by retrieving from trillions of tokens", "Evaluating Large Language Models Trained on Code", "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", "Representations", "Constructing A Multi-hop QA Dataset for Comprehensive Evaluation of Reasoning Steps", "Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering", "Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval", "Language Models are Few-Shot Learners", "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks", "Generating Clarifying Questions for Information Retrieval", "Dense Passage Retrieval for Open-Domain Question Answering", "REALM: Retrieval-Augmented Language Model Pre-Training", "BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension", "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer", "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", "Natural Questions: A Benchmark for Question Answering Research", "RoBERTa: A Robustly Optimized BERT Pretraining Approach", "HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering", "Constructing Datasets for Multi-hop Reading Comprehension Across Documents", "Reading Wikipedia to Answer Open-Domain Questions", "Billion-Scale Similarity Search with GPUs", "MS MARCO: A Human Generated MAchine Reading COmprehension Dataset", "SQuAD: 100,000+ Questions for Machine Comprehension of Text", "Clustering is Efficient for Approximate Maximum Inner Product Search", "Maximum inner-product search using cone trees", "The Probabilistic Relevance Framework: BM25 and Beyond", "Open-Domain Question-Answering", "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding" ]
Style-Specific Neurons for Steering LLMs in Text Style Transfer
Style-Specific Neurons for Steering LLMs in Text Style Transfer Abstract Text style transfer (TST) aims to modify the style of a text without altering its original mean- ing. Large language models (LLMs) demon- strate superior performance across multiple tasks, including TST. However, in zero-shot setups, they tend to directly copy a significant portion of the input text to the output with- out effectively changing its style. To enhance the stylistic variety and fluency of the text, we present sNeuron-TST , a novel approach for steering LLMs using style-specific neurons in TST. Specifically, we identify neurons asso- ciated with the source and target styles and deactivate source-style-only neurons to give target-style words a higher probability, aiming to enhance the stylistic diversity of the gener- ated text. However, we find that this deactiva- tion negatively impacts the fluency of the gen- erated text, which we address by proposing an improved contrastive decoding method that ac- counts for rapid token probability shifts across layers caused by deactivated source-style neu- rons. Empirical experiments demonstrate the effectiveness of the proposed method on six benchmarks, encompassing formality, toxicity, politics, politeness, authorship, and sentiment1. 1 Introduction Text style transfer (TST; Jin et al., 2022; Hu et al., 2022) aims to transform text from a source style to a target style while maintaining the original content and ensuring the fluency of the generated text. Large language models (LLMs; Minaee et al., 2024) exhibit exceptional performance across var- ious NLP tasks , including TST . How- ever, existing LLMs (e.g., LLaMA-3 Meta, 2024) tend to prioritize preserving the original meaning over enhancing stylistic differences in TST. Our analysis reveals that 34% of the outputs generated 1We will make the code and data public available upon acceptance.by LLaMA-3 are identical to the input text when tasked with transferring polite text to impolite text (Section 6.2). Enhancing the generation of words that align with the target style during the decoding process remains a significant challenge in TST. Recent LLMs have been successfully applied to TST, broadly categorized into two approaches: (i) employing single-style or parallel-style text data for either full-parameter or parameter-efficient fine- tuning (Mukherjee et al., 2024c,a), and (ii) lever- aging the robust in-context learning capabilities of LLMs to create specialized prompts for zero- shot or few-shot learning . However, (i) typically requires substantial data and computational resources to achieve good results, while (ii) is highly sensitive to prompts, where even minor changes can significantly impact the outcomes . Neuron analysis , which aims to identify and understand the roles of individ- ual neurons within a neural network, is a crucial method for enhancing the interpretability of neural networks and has garnered increasing attention in recent years. By identifying neurons associated with specific attributes such as language , knowledge , and skill , neuron analysis can boost performance on targeted tasks. Recent research has demonstrated that focusing on language-specific neurons can markedly enhance the multilingual ca- pabilities of LLMs during the decoding stage . However, the exploration of style-specific neurons remains rela- tively underexplored until now. Thus motivated, we raise the following two re- search questions: Q1: Do LLMs possess neurons that specialize in processing style-specific text? 
Q2: If such neurons exist, how can we optimize their utilization during the decoding process to steer LLMs in generating text that faithfully adheres to 1\nthe target style? To address these research questions, we intro- duce sNeuron-TST , a novel framework designed to steer LLMs in performing TST by leveraging style- specific neurons. Initially, we feed both source- and target-style texts into the LLM to identify neu- rons that exclusively activate in each style based on their activation values. We distinguish neurons ac- tive in both styles as overlapping neurons. Notably, eliminating these overlapping neurons during style- specific neuron selection is crucial as their presence can hinder the generation of text in the target style. Our experiments highlight that deactivating neu- rons specific solely to the source style (excluding those active in both source and target styles) im- proves style transfer accuracy while impacting sen- tence fluency. Furthermore, to improve the fluency of generated text, we adapt the state-of-the-art con- trastive decoding algorithm (Dola; Chuang et al., 2024) for optimal performance in TST tasks. Our empirical findings (detailed in Section 3.3.2) reveal that layers primarily responsible for style-related outputs are concentrated in the model’s latter layers, termed as style layers . This indicates that the de- termination of style-specific words predominantly occurs in these style layers. More precisely, we re- fine the probability distribution of generated words by comparing logits from these style layers with the final layers, which exert significant influence on style-related outputs. We conduct a comprehensive evaluation to verify the efficacy of our approach across six benchmarks: formality , toxicity , politics , politeness , authorship and sentiment . Each benchmark contains two distinct styles, resulting in a total of 12TST directions. Experimental results demonstrate that our method generates a higher proportion of words in the target style compared to baseline systems, achieving superior style transfer accuracy and fluency, while preserving the original meaning of the text. In summary, we make the following contribu- tions: (i)To the best of our knowledge, this is the first work on using style-specific neurons to steer LLMs in performing text style transfer tasks. (ii)We emphasize the significance of eliminat- ing overlap between neurons activated by source and target styles, a methodological innovation with potential applications beyond style transfer. (iii)We introduce an enhanced contrastive decod- ing method inspired by Dola. Our approach not only increases the production of words in the target style but also ensures the fluency of the generated sentences, addressing issues related to direct copy- ing of input text in TST. 2 Related Work Text Style Transfer. Recently, LLMs have shown promising results in TST through additional fine- tuning (Mukherjee et al., 2024c,b,a; Dementieva et al., 2023) or in-context learning techniques. However, these methods often require either extensive computational resources or sensitive prompts, impacting their practicality. In this paper, we focus on a novel decoding approach to guide LLMs for TST using fixed prompts and therefore it does not require significant computa- tional consumption and ensures stable outputs. Neuron Analysis. 
Neuron analysis has emerged as a powerful method for elucidating the inner workings of neural network models, offering deeper insights into their behaviors and attracting growing interest in recent years. The common practice is to associate neuron activation with learned knowledge, demonstrating effectiveness in tasks such as knowledge enhancement, sentiment analysis, and multilingualism in LLMs. Motivated by the promising outcomes of neuron analysis in enhancing the multilingual capabilities of LLMs, this paper posits the presence of style-specific neurons, identifies them, and integrates neuron activation and deactivation seamlessly into the decoding process.

3 Method

Our goal is to identify style-specific neurons to steer LLMs towards generating vocabulary tailored exclusively to a target style, while maintaining fluent text generation in a zero-shot setting. To accomplish this, we first identify style-specific neurons based on their activation values and demonstrate the necessity of eliminating the overlap between source- and target-style neurons (Section 3.1). Then, we deactivate neurons associated solely with the source style, observing an increased probability of generating words aligned with the target style, albeit at the expense of fluency (Section 3.2). Finally, we adapt the recent contrastive decoding approach Dola to TST, ensuring the fluency of generated sentences (Section 3.3). Figure 1 illustrates the framework of our approach.

Figure 1: Method overview. The whole framework consists of three parts: identifying style-specific neurons, deactivating source-style neurons, and decoding by contrasting style layers. The histogram represents the probability distribution of each word across different layers. When source-style neurons are deactivated, LLMs tend to generate all target-style words, such as "Neither" and "poor". By employing contrastive decoding, LLMs take fluency into account and reduce the probability of generating "poor".

3.1 Identifying Style-Specific Neurons

Neurons are commonly perceived as feature extractors that map neural networks to human-interpretable concepts.
However, neurons can exhibit polysemy, where a single neuron may encode multiple features (e.g., formal and informal styles), thereby complicating their interpretability. To selectively modify specific features of LLMs without unintended changes, it becomes imperative to identify and remove unambiguous neurons.

3.1.1 Neurons in LLMs

The dominant architecture of LLMs is the Transformer, characterized by multiple layers of multi-head self-attention and feed-forward network (FFN) modules. FFNs contain 2/3 of the model's parameters and encode extensive information, which is crucial for multiple tasks. Moreover, the activation or deactivation of neurons within the FFN can exert significant influence on the model's output. Inspired by this, we aim to identify neurons in the FFN modules of LLMs that are dedicated to specific styles. Formally, the activation values of layer $j$ in a network are defined as:

$a^{(j)} = \mathrm{act\_fn}\big(W^{(j)} a^{(j-1)} + b^{(j)}\big)$   (1)

where $W^{(j)}$ and $b^{(j)}$ are the weights and biases of layer $j$, $a^{(j-1)}$ are the activation values of the previous layer, and $\mathrm{act\_fn}(\cdot)$ denotes the activation function (e.g., GLU (Shazeer, 2020), used in LLaMA). The $i$-th neuron of the layer is considered to be active when its activation value $a^{(j)}_i > 0$.

3.1.2 Neuron Selection

Recently, Tang et al. (2024) introduced a method for identifying language-specific neurons and demonstrated a significant overlap among neurons across different languages, such as an approximately 25% overlap between Chinese and English neurons. However, their study did not evaluate the performance implications of these overlaps. We measure the overlap of style-specific neurons by applying the method of Tang et al. (2024) directly to a style-specific corpus. As illustrated in Figure 2, we observe an even higher overlap among style-specific neurons. For instance, in the Politics benchmark, nearly 95% of neurons overlap between the "democratic" and "republican" styles. Moreover, we demonstrate that this substantial overlap negatively impacts the performance of TST (Section 6.1).

Figure 2: Overlap statistics of style-specific neurons identified using the method of Tang et al. (2024) on six benchmarks.

To eliminate the overlap between neurons of different styles, we identify style-specific neurons and their intersection. Formally, suppose we have two distinct styles, denoted as $A$ and $B$. We feed the corpora of the two styles to an LLM separately to obtain the activation values of the neurons in the FFN layers for both styles, as described in Eq. (1). We then select the neurons whose activation value exceeds zero, forming two sets denoted as $S_A$ and $S_B$, respectively. Subsequently, we sort the activation values within $S_A$ and $S_B$ in descending order and select the neurons with the top $k$ values ($k = 500n$, $n \in \{1, 2, 3, \ldots, 20\}$, tuned on the validation set), resulting in $S'_A$ and $S'_B$. Finally, we identify the neurons associated with strictly one of the styles by computing the disjoint sets of the two smaller sets: $N_A = S'_A \setminus S'_B$ and $N_B = S'_B \setminus S'_A$.

3.2 Deactivating Source Style Neurons

After identifying the neurons associated with a particular style, a common practice is to deactivate these neurons by setting their activation values to zero during the model's forward pass. However, neurons are sensitive components in neural networks; thus, deactivating a neuron associated with a specific feature (e.g., formal style) can lead to significant performance deterioration.
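As a concrete illustration of the selection rule in Section 3.1.2 and the deactivation step above, the following Python sketch shows one way this could be wired up for a HuggingFace LLaMA-style model. It is a minimal sketch under stated assumptions, not the authors' code: how activation values are aggregated over a corpus and the exact module that is hooked (here the FFN activation function) are assumptions.

```python
import torch

def select_style_neurons(acts_a, acts_b, k):
    # acts_a / acts_b: aggregated FFN activation values per neuron (flattened
    # over all layers), computed by running the style-A / style-B corpora
    # through the model; the aggregation scheme is an assumption.
    pos_a = acts_a.clamp(min=0)
    pos_b = acts_b.clamp(min=0)
    top_a = set(torch.topk(pos_a, k).indices.tolist())   # S'_A
    top_b = set(torch.topk(pos_b, k).indices.tolist())   # S'_B
    return top_a - top_b, top_b - top_a                  # N_A, N_B

def make_deactivation_hook(neuron_ids):
    # Forward hook that zeroes the selected FFN neurons of one layer,
    # i.e., deactivates the source-style-only neurons during generation.
    ids = torch.tensor(sorted(neuron_ids), dtype=torch.long)

    def hook(module, inputs, output):
        output[..., ids] = 0.0
        return output

    return hook

# Usage sketch (module path assumed for a LLaMA-style HF model):
# for j, layer in enumerate(model.model.layers):
#     layer.mlp.act_fn.register_forward_hook(
#         make_deactivation_hook(source_only_ids_per_layer[j]))
```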
To investigate the effects of deactivating source- and target-style neurons in the TST task, we conduct experiments focusing on the formality and politeness transfer tasks.

Style Accuracy
                 Formality              Politeness
Source  Target   informal   formal     impolite   polite
✗       ✗        80.00      11.20      79.50      14.80
✓       ✗        80.53      13.63      80.06      19.37
✗       ✓        76.25      8.51       65.50      9.27
✓       ✓        78.42      9.27       73.48      10.36

Fluency
                 Formality              Politeness
Source  Target   informal   formal     impolite   polite
✗       ✗        92.53      87.69      105.35     92.34
✓       ✗        104.17     96.83      127.26     105.12
✗       ✓        113.14     106.23     136.10     112.51
✓       ✓        108.22     100.79     131.22     108.64
Table 1: Experiments on deactivating neurons on the formality and politeness benchmarks. ✓ means the neurons are deactivated, while ✗ means they remain activated. "Source" and "Target" denote the neuron sides. The indicated style (e.g., formal) within a task (e.g., Formality) indicates the source, and its pair is the target style. Style accuracy and fluency are defined in Section 4.4.

From Table 1, we observe that: (1) Deactivating the source-style neurons while keeping the target-style neurons active improves the accuracy of generating the target style. Conversely, deactivating the target-style neurons, regardless of the state of the source-style neurons, leads to a decrease in the accuracy of generating the target style. This occurs because deactivating the target-style neurons impairs the ability of LLMs to generate target-style words during decoding, resulting in lower accuracy. On the other hand, deactivating the source-style neurons allows LLMs to focus more on generating target-style words, thus improving target-style accuracy. This finding aligns with related work on language-specific neuron deactivation. (2) Fluency decreases whenever neurons are deactivated, whether they are source-style or target-style neurons. This is mainly due to the significant impact that deactivating neurons has on the word distribution during decoding. Specifically, the model tends to generate words of the non-deactivated style with a higher probability, leading to generated texts that are simply a concatenation of non-deactivated-style words, thereby compromising fluency. As illustrated in Figure 1, after deactivating the source-style neurons, the generated text includes both "Neither" and "poor", two target-style words, without maintaining sentence fluency.

3.3 Contrastive Decoding for TST

Contrastive decoding (CD; Li et al., 2023), which adjusts the probability of predicting the next word
, N − 1}) to obtain the next-token probability as: ˆp(xt|x<t) = softmax F pN(xt), pM(xt) xt(3) where F(·)is the function used to contrast be- tween the output distributions from one premature layer Mand the final layer by computing the log- domain difference between two distributions as follows: F pN(xt), pM(xt) =  logpN(xt) pM(xt),ifxt∈Φ, −∞, otherwise.(4) where Φis defined as whether or not the token has high enough output probabilities from the mature layer as: Φ (xt|x<t) =n pN(xt)≥max wpN(w)o (5) Layer M, the premature layer , is selected dynami- cally at each time step by taking the layer with the largest Jensen-Shannon Divergence (JSD; Menén- dez et al., 1997) to contrast output distributions from the final and the set of early candidate layers. 3.3.2 Our adaptation to TST Candidate layer selection. To better adapt Dola to TST, we select candidate layers for comparison Formal 20406080100120140160180 051015202530 Informal 0100200300400500600 051015202530 Neutral 050100150200250 051015202530 Toxic 0102030405060708090 051015202530Figure 3: Statistics of the number of style-specific neu- rons in each layer in LLaMA-3 on formality and toxicity benchmarks. based on the criterion that these layers should con- tain more style information. To this end, we mea- sure the amount of style-specific neurons across each layer. As shown in Figure 3, the last few lay- ers, particularly the final layer, contain significantly more style neurons compared to the earlier layers. Therefore, we select the last few layers ( 4in our experiments) as our candidate layers. Next-token prediction. After deactivating the source-style neurons, LLMs tend to generate target- style tokens. However, we need to determine whether the appearance of these target-style tokens is due to their consistently high probability from the early layers to the final layer or due to a prob- ability shift caused by neuron deactivation in the last few layers. If the probability of tokens at a given time step remains consistent from the first layer to the final layer, it indicates that these tokens are style-independent (typically function words) and are retained in the output of the final layer by Eq. (3). Conversely, if these words have a low probability in the early layers (typically target-style words) and only exhibit a probability “mutation” in the last few layers due to the deactivation of source- style neurons, we then select the layer with the maximum JSD distance from the candidate layers as our premature layer Mand adjust their probabil- ity distribution according to Eq. (3). 4 Experiments 4.1 Datasets We evaluate our approach on six typical TST tasks: formality, toxicity, politics, politeness, authorship, and sentiment on GYAFC , ParaDetox , Polite- ness , Shakespeare (Xu et al., 5\n2012) and Yelp . The statistics of the datasets can be found in Appendix A. 4.2 Baselines We compare our approach with the following base- lines: (1) LLaMA-3: We use LLaMA-3 without additional fine-tuning as the vanilla baseline system. (2) APE: Using activation prob- ability entropy to identify the style specific neu- rons . (3) A VF: Using activa- tion value frequency and set a threshold to identify the style neurons . (4) PNMA: Finding neurons that activate on the source style sentences but do not activate on target style sen- tences . 
Note that (2), (3), and (4) from the original paper focus on identifying language-specific neurons to enhance the multilin- gual capabilities of LLMs, and we extend these methodologies to our style-related corpus. For (4), it requires the use of parallel data from both source and target texts to identify neurons, whereas (2), (3), and our method does not require the use of par- allel data. Additionally, after identifying the neu- rons, we deactivate the source-style neurons in (2), (3), and (4). For a detailed comparison of various decoding strategies, please refer to Appendix G. 4.3 Implementation We use the 8B model of LLaMA-3, available in the HuggingFace repository2in zero-shot setting. To further assess the scalability of our method, we also employ the 70B LLaMA-3 model (Appendix D). For each baseline system, we use the same hyper- parameters (e.g., threshold) as the original paper. 4.4 Evaluation Metric We evaluate our approach using three metrics com- monly employed in TST tasks. Style Accuracy. Accuracy of labels predicted as correct by a style classifier. Please refer to Appendix B for more details. Content Preservation. Cosine similarity between the embeddings of the original text and the text generated by the model, using LaBSE to obtain sentence embeddings as our primary metric. Additionally, we employ BLEURT metrics for comparison, as recent studies indicate strong correlations between BLEURT assessments on TST and human evalu- ation results (Appendix F). Fluency. Perplexity of the generated sentences using GPT-2 . 2https://github.com/huggingface/transformers5 Results Table 2 shows the transfer performance (style ac- curacy, content preservation and fluency) of the six benchmarks in 12directions. Overall Performance. While the APE,AVF, andPNMA demonstrate strong performance in en- hancing multilingual capabilities, they do not out- perform the original LLaMA-3 model in the TST task, with the exception of the content preserva- tion metric. This disparity arises primarily because language-specific properties can be identified us- ing straightforward features, such as script differ- ences. Consequently, the neuron selection methods of these baselines, despite their partial overlaps, have minimal impact on multilingual performance. However, text style represents a more complex at- tribute, requiring models to learn extensive knowl- edge and execute nuanced judgments at both the word and semantic levels. The overlap of neurons in baseline systems across source and target styles adversely affects the results, particularly in style accuracy. Furthermore, the baseline methods lack a contrastive decoding strategy, which compromises their fluency. Our method outperforms the base- line methods in terms of both accuracy and fluency, highliting the importance of eliminating overlap- ping style neurons and employing contrastive de- coding. Content Preservation. Interestingly, we ob- serve that the original LLaMA-3 and other base- line systems exhibit strong performance in content preservation, which appears inconsistent with con- clusions drawn from the other two metrics. Upon closer examination, we find that this content preser- vation is largely attributable to the copy mechanism, i.e., the generated text tends to prioritize maintain- ing the original semantics, thereby neglecting the stylistic differences. A detailed discussion on this can be found in Section 6.2. 
Another potential ex- planation is the semantic gap, which varies signifi- cantly between sentences of different styles, and for which no effective metric currently exists to fully measure this gap. For example, when transferring text from an informal to a formal style, the original text“Sorry about that. ” and the target text “I apolo- gize for the inconvenience caused. ” are stylistically aligned, but they diverge significantly in semantic space. This is reflected in a low cosine similarity score of 0.447between them. Different Directions. We observe significant performance discrepancies when transferring be- 6\nStyle Transfer Accuracy Formality Toxicity Politics Politeness Authorship Sentiment informal formal toxic neutral democratic republican impolite polite shakespeare modern positive negative LLaMA-3 80.00 11.20 47.67 29.04 35.50 48.20 79.50 14.80 63.80 43.80 76.40 52.80 APE 74.00 12.20 47.57 28.44 40.90 44.80 77.10 18.20 55.80 44.60 78.90 48.00 A VF 76.00 12.40 47.57 28.44 38.80 44.20 77.90 18.70 55.60 44.40 79.20 47.90 PNMA 73.85 8.70 42.43 23.79 35.57 37.05 72.84 14.16 53.74 37.58 75.39 41.71 Our 80.80 14.40 55.36 31.98 37.81 50.30 80.63 23.27 73.40 45.14 77.93 54.73 Content Preservation Formality Toxicity Politics Politeness Authorship Sentiment informal formal toxic neutral democratic republican impolite polite shakespeare modern positive negative LLaMA-3 85.95 74.71 73.54 82.71 82.48 75.77 75.32 89.14 78.75 62.28 76.17 74.47 APE 76.72 85.06 76.72 83.00 87.99 82.21 76.80 87.89 80.07 57.61 76.52 73.53 A VF 75.21 84.53 76.63 83.57 86.92 80.68 76.94 87.32 80.94 58.98 76.15 73.95 PNMA 75.52 84.11 75.67 82.54 86.79 80.67 76.04 86.93 79.22 57.42 75.04 72.67 Our 85.84 86.28 75.85 80.10 82.32 74.96 75.65 82.47 77.19 60.92 75.25 74.21 Fluency Formality Toxicity Politics Politeness Authorship Sentiment informal formal toxic neutral democratic republican impolite polite shakespeare modern positive negative LLaMA-3 92.53 87.69 113.84 191.30 88.22 68.49 105.35 92.34 197.62 136.03 177.01 125.98 APE 94.27 89.93 133.12 188.34 88.51 69.06 108.24 95.17 250.65 133.92 151.06 126.73 A VF 96.63 89.36 131.10 191.29 87.93 75.94 112.67 97.50 220.30 126.42 151.33 130.17 PNMA 103.61 90.85 136.27 194.71 96.31 77.95 111.77 101.61 260.52 135.00 154.85 129.49 Our 90.79 81.46 85.65 172.26 85.28 66.68 104.92 83.36 151.71 134.86 174.46 110.48 Table 2: Main Results: Style transfer accuracy (higher values are better; ↑), content preservation ( ↑) and fluency (↓) on6datasets across 12transfer directions. Best results are highlighted in bold. tween different directions within the same task. For example, transferring from impolite to polite achieves a style accuracy of nearly 80%, whereas the reverse direction achieves only about 12%. This disparity can be attributed to the training data of LLMs, which predominantly consist of positive corpora (e.g., polite, neutral, formal), with inade- quate representation from negative corpora. Addi- tionally, LLMs have a tendency to generate safer responses , which can com- promise the utility of tasks involving style transfer. 6 Analysis In this section, we conduct an ablation study to verify the criticality of eliminating overlap between source- and target-side style neurons, alongside the importance of neuron deactivation and contrastive decofing (Section 6.1). Subsequently, we conduct a detailed analysis of the copy problem in the TST task (Section 6.2). Finally, we delve into several other significant factors inherent to our approach (Section 6.3). 
6.1 Ablation Study We conduct an ablation study, detailed in Table 3, to evaluate the effectiveness of removing overlap- ping source- and target-style neurons. The results Style without with Formalityinformal →formal 74.00 79.40 formal →informal 12.20 13.63 Toxicitytoxic→neutral 47.57 49.78 neutral →toxic 28.44 29.82 Politicsdemocratic →republican 40.90 37.51 republican →democratic 44.80 49.70 Politenessimpolite →polite 77.10 80.10 polite→impolite 18.20 21.73 Authorshipshakespeare →modern 55.80 63.00 modern →shakespeare 44.60 45.42 Sentimentpositive →negative 78.90 79.75 negative →positive 48.00 51.70 Table 3: Ablation study: Style transfer accuracy on removing overlap between source- and target-side style neurons in six benchmarks. “with” indicates the removal of overlap. demonstrate a considerable advantage in eliminat- ing such overlap compared to allowing mixed pat- terns of neuron activation. As highlighted by the statistics in Section 3.1, there is a substantial 95% overlap in most neurons, indicating that source style neurons largely coincide with target style neu- rons, meking them nearly indistinguishable when directly decoding using LLMs. Additionally, Table 4 presents the results of ablat- ing neuron deactivation and contrastive decoding 7\nDeactivate ContrastiveToxicity Authorship toxic neutral shakespeare modern #1 ✗ ✗ 47.67 29.04 63.80 43.80 #2 ✓ ✗ 52.63 31.07 68.39 44.71 #3 ✗ ✓ 46.82 28.31 63.23 43.16 #4 ✓ ✓ 55.36 31.98 73.40 45.14 Table 4: Ablation study: Style transfer accuracy for neuron deactivation and contrastive decoding on the toxicity and authorship tasks. “ ✓” means the inclusion of the neuron deactivation or contrastive decoding steps, while “ ✗” means they are turned off. #1 indicates the results from baseline LLaMA-3 model, which do not use the deactivation nor the contrastive steps. (CD). Our findings are as follows: (1)Compar- ing #1 and #2, we observe a significant impact of deactivating neurons on the final results. This is because deactivating neurons on the source side encourages the LLMs to generate words in the tar- get style. (2)Comparing #1 and #3, we find that using CD alone does not significantly improve and may even degrade the results. This is attributed to the fact that style-related information is processed in later layers, and simply comparing these layers does not yield substantial improvements. Without deactivating neurons, the target style words are not effectively generated, resulting in minimal JSD dis- tance between the style layers and the final layer, thereby reducing the effectiveness of CD. (3)Exper- iment #4 demonstrates that optimal performance is achieved when both deactivating source-side style neurons and employing CD. Deactivating neurons enhances the probability to generate target style vo- cabulary, as discussed in Section 3.2, albeit at the cost of fluency in generated sentences. Therefore, CD proves crucial in further enhancing the fluency of sentences. 6.2 Copy Problem The “copy problem” arises when models simply reproduce the input text unchanged in the output, a challenge prevalent in multilingual machine trans- lation (Lai et al., 2023a,b). Given the goal to main- tain semantic consistency of the input sentences in TST, LLMs often resort to direct copying. To investigate this phenomenon, we analyze tasks re- lated to formality ,politeness , and toxicity . 
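Since the exact matching criterion behind the copy ratio is not spelled out here, the sketch below assumes an exact string match after lowercasing and whitespace normalization; it is meant only to make the quantity reported in Figure 4 concrete.

```python
def copy_ratio(inputs, outputs):
    # Fraction of generations that simply reproduce the input text.
    # Matching criterion is an assumption: exact match after lowercasing
    # and whitespace normalization.
    def norm(s):
        return " ".join(s.lower().split())
    copied = sum(norm(x) == norm(y) for x, y in zip(inputs, outputs))
    return copied / len(inputs)
```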
Figure 4 illustrates a significant number of copy instances in the original LLaMA-3, indicating a preference for preserving semantic meaning rather than incorporating stylistic variations in TST. Neuron-based approaches (APE, AVF, and PNMA) partially mitigate this issue by controlling neuron activation, thereby producing more target-style words during decoding, as evidenced in Section 3. However, these baselines suffer in performance due to their inability to fully eliminate the overlap between source- and target-style neurons. In contrast, our approach achieves a reduced copy rate by deactivating source-side neurons and employing a novel decoding strategy.

Figure 4: Copy ratio on three selected TST tasks. Lower values indicate better performance of the model.

6.3 Further Analysis

We conduct a comprehensive analysis of our method across various dimensions, including different models (Appendix D), layer selection strategies (Appendix E), content preservation metrics (Appendix F), and decoding strategies (Appendix G), yielding several key insights: (1) Our method consistently demonstrates effectiveness across diverse model sizes, including larger models such as 70B. (2) Selecting the last few layers proves optimal compared to earlier layers. (3) Different strategies for preserving meaning yield similar outcomes, highlighting the importance of exploring innovative approaches in future research. (4) Contrastive decoding exhibits significant advantages over traditional decoding methods in the TST task, motivating our adoption of the CD strategy.

7 Conclusion

We revisit the TST task in LLMs through a neuronal analysis perspective. Our study focuses on identifying style-specific neurons within LLMs, highlighting the critical importance of removing the overlap between source- and target-side stylistic neurons. We find that deactivating source-specific neurons enhances the probability of generating target-style words but may compromise the fluency of the generated sentences. To mitigate this issue, we adapt the state-of-the-art contrastive decoding method (Dola) for TST, ensuring both the fluency and effective style transformation of the generated sentences. Experimental results across six benchmarks demonstrate the efficacy of our approach.

8 Limitations

This work has the following limitations: (1) We deactivate style-specific neurons across all layers; however, considering other layers may yield additional insights. For instance, Zhao et al. (2024) found that deactivating neurons in different layers (e.g., the understanding layer or the generating layer) can have subtle effects on experimental results. We will consider this as a direction for future research. (2) We evaluate our approach only on the text style transfer task; however, our method has the potential to be applied to other style-related tasks, such as image style transfer and multilingual style transfer. Furthermore, our approach is task-agnostic, with significant potential to adapt to other tasks, such as identifying domain-specific neurons and applying them to domain adaptation tasks (Lai et al., 2022a,b).
[ "Multilingual Text Style Transfer: Datasets & Models for Indian Languages", "What does the Knowledge Neuron Thesis Have to do with Knowledge?", "Neuron Specialization: Leveraging Intrinsic Task Modularity for Multilingual Machine Translation", "PURE: Turning Polysemantic Neurons Into Pure Features by Identifying Relevant Circuits", "On the Multilingual Ability of Decoder-based Pre-trained Language Models: Finding and Controlling Language-Specific Neurons", "LMStyle Benchmark: Evaluating Text Style Transfer for Chatbots", "Distilling Text Style Transfer With Self-Explanation From LLMs", "How do Large Language Models Handle Multilingualism?", "Language-Specific Neurons: The Key to Multilingual Capabilities in Large Language Models", "Unsupervised Text Style Transfer via LLMs and Attention Masking with Multi-way Interactions", "Neuron-based Multifractal Analysis of Neuron Interaction Dynamics in Large Models", "Text Detoxification as Style Transfer in English and Hindi", "Multimodality-guided Image Style Transfer using Cross-modal GAN Inversion", "Exploring Methods for Cross-lingual Text Style Transfer: The Case of Text Detoxification", "Extending Multilingual Machine Translation through Imitation Learning", "Prefix-Tuning Based Unsupervised Text Style Transfer", "Linear Representations of Sentiment in Large Language Models", "Evolutionary Neural Architecture Search for Transformer in Knowledge Tracing", "Text Style Transfer Evaluation Using Large Language Models", "Llama 2: Open Foundation and Fine-Tuned Chat Models", "Inference-Time Intervention: Eliciting Truthful Answers from a Language Model", "Mitigating Data Imbalance and Representation Degeneration in Multilingual Machine Translation", "Prompt-Based Editing for Text Style Transfer", "Contrastive Decoding: Open-ended Text Generation as Optimization", "On the Relation between Sensitivity and Accuracy in In-context Learning", "Improving Both Domain Robustness and Domain Adaptability in Machine Translation", "Deep Learning for Text Style Transfer: A Survey", "Language-agnostic BERT Sentence Embedding", "Politeness Transfer: A Tag and Generate Approach", "BLEURT: Learning Robust Metrics for Text Generation", "RtGender: A Corpus for Studying Differential Responses to Gender", "Dear Sir or Madam, May I Introduce the GYAFC Dataset: Corpus, Benchmarks and Metrics for Formality Style Transfer", "Attention is All you Need", "Style Transfer from Non-Parallel Text by Cross-Alignment", "THE JENSEN-SHANNON DIVERGENCE", "m^4 Adapter: Multilingual Multi-Domain Adaptation for Machine Translation with a Meta-Adapter", "Language Models are Unsupervised Multitask Learners", "Paraphrasing for Style" ]
DECOR : Improving Coherence in L2 English Writing with a Novel Benchmark for Incoherence Detection, Reasoning, and Rewriting
DECOR: Improving Coherence in L2 English Writing with a Novel Benchmark for Incoherence Detection, Reasoning, and Rewriting

Abstract
Coherence in writing, an aspect that L2 English learners often struggle with, is crucial in assessing L2 English writing. Existing automated writing evaluation systems primarily use basic surface linguistic features to detect coherence in writing. However, little effort has been made to correct the detected incoherence, which could significantly benefit L2 language learners seeking to improve their writing. To bridge this gap, we introduce DECOR, a novel benchmark that includes expert annotations for detecting incoherence in L2 English writing, identifying the underlying reasons, and rewriting the incoherent sentences. To our knowledge, DECOR is the first coherence assessment dataset specifically designed for improving L2 English writing, featuring pairs of original incoherent sentences alongside their expert-rewritten counterparts. Additionally, we fine-tuned models to automatically detect and rewrite incoherence in student essays. We find that incorporating specific reasons for incoherence during fine-tuning consistently improves the quality of the rewrites, achieving a level that is favored in both automatic and human evaluations.1

1 Introduction
Automatic English writing tools have gained extensive popularity among second-language (L2) learners. These tools serve as a cost-effective supplement to traditional, expensive human tutoring, providing learners with timely and constructive feedback. Several areas of progress in this respect include automatic grammar correction systems and tools to improve the vocabulary usage of learners (Johnson et al. 2016; González 2017; Zhang et al. 2024).

1We will make DECOR publicly available upon publication of the paper.

Figure 1: The overview of DECOR, containing three tasks: incoherence detection, reasoning, and rewriting. The pipeline starts from TOEFL-11 essays and proceeds through Step 1 (create context-sentence pairs), Step 2 (incoherence detection between C and S), Step 3 (incoherence reasoning over the label codes R1: Semantic connection, R2: Entity reference, R3: Discourse relation, R4: Consistency, R5: Relevance - Context, R6: Relevance - Tangential, R7: Others), and Step 4 (incoherence rewriting). An example human rewrite is generated for the given context-sentence pair; the GPT-4 rewrite is unacceptable since it makes more invasive and unnecessary changes.

However, these tools primarily focus on
the word and sentence-level issues that affect L2 writing rather than discourse-level issues. An aspect of L2 writing that could also benefit from automated tools is the overall textual coher- ence which is a requirement to efficiently convey one’s ideas. To improve L2 writing skills, whether it is part of a course assessment or standardized test of English ability, learners are often required to carefully organize their thoughts in response to a predetermined writing prompt. Previous research has identified coherence as a crucial feature to mea- sure when assessing L2 writing proficiency, as it is an aspect that students often struggle with . Current automated writing evaluation tools primar- ily provide learners with scores that indicate the level of coherence in their writing . They primarily detect coherence with sim- ple surface linguistic features, such as syntax and 1\nparts of speech . However, merely detecting coherence in writing is insufficient to help L2 English writers enhance their writing. An automated system capa- ble of detecting incoherence in L2 writing, identify- ing the underlying reasons, and correcting the inco- herent sentences would be immensely valuable for both language learners and instructors. However, the absence of a benchmark dataset specifically de- signed for incoherence detection, reasoning, and rewriting in L2 English essays significantly im- pedes the development of such systems. Hence, we introduce DECOR , a novel bench- mark dataset that can be used for coherence as- sessment for texts written by L2 English learn- ers. To construct DECOR , we start by creating context-sentence pairs from the TOEFL-11 cor- pus , following the incre- mental annotation protocol suggested in Maimon and Tsarfaty (2023). We then design a language- learning-oriented annotation scheme that guides human annotators to perform the following three tasks: 1) detect if the current sentence Sis incoher- ent with the context C, 2) identify specific reasons that cause the incoherence, and 3) rewrite the in- coherent sentences based on the identified reasons. We specifically recruited two expert annotators who are both associate professors with extensive experi- ence in teaching English as a foreign language and have advanced degrees in Applied Linguistics. Fig- ure 1 demonstrates the overview of DECOR and the three tasks. In the illustrative example, we compare the rewrites by GPT-4 with those produced by hu- man experts; the revisions from GPT-4 are typically more invasive and less essential. To our knowledge, DECOR is the first benchmark to feature human an- notations for incoherence detection, reasoning, and rewriting, specifically tailored to improve coher- ence in L2 English writing. The resulting parallel corpus with pairs of original incoherent sentences and their expert-revised versions, provides valuable references for evaluating coherence in automated writing evaluation systems. In addition, we build models to perform inco- herence detection, reasoning, and rewriting auto- matically on DECOR . The findings from our ex- periments indicate that our incoherence detection models deliver performance comparable to GPT-4 in zero-shot and few-shot scenarios, despite be- ing significantly smaller and less costly. We also demonstrate that both automatic and human evalua- tions affirm that fine-tuning rewriting models with specific reasons for incoherence consistently en- hances their ability to produce rewrites that match the quality of those generated by human annotators. 
Overall our contributions are three-fold: •We present a novel benchmark DECOR , with 1,352context-sentence pairs, to assess and improve coherence in L2 English essays with three tasks: incoherence detection, reasoning, and rewriting. •We produce the first parallel corpus that in- cludes 213pairs of original incoherent sen- tences as well as their expert-rewritten coun- terparts. •We fine-tuned models using task-specific syn- thetic data and evaluated them on DECOR . These models achieve results comparable to GPT-4 in detecting incoherence and produc- ing rewrites that match the quality of those generated by human experts. 2 Related Work 2.1 Definitions of coherence in English writing Earlier efforts at defining coherence in English, such as Halliday and Hasan (1976), focus on ex- plicit cohesive ties (e.g. semantic relations between elements). In particular, Halliday and Hasan (1976) define cohesion as a combination of lexical and grammatical items that facilitate sentences to be understood as connected discourse rather than in- dividual sentences. Moreover, Lautamatti (1978) defined Topical Structure Analysis (TSA) that fo- cuses on different types of progression that are used to create coherence in a text to advance the discourse topic . Additionally, Rein- hart (1980) introduced three conditions for a text to be coherent: cohesion, consistency, and relevance, capturing various aspects of the text. In developing our annotation scheme, we referred to these previ- ous efforts and established a useful guideline that is beneficial for annotating incoherence in L2 English writing. 2.2 Assessing coherence in texts Machine-generated texts Following the linguis- tic definition of coherence established in Reinhart (1980), a much recent work by Maimon and Tsar- faty (2023) incorporated these conditions into a novel benchmark, namely CoheSentia, and pro- posed a new coherence-annotation protocol that 2\naligns better with human judgments. Unlike previ- ous work that assigns a single holistic coherence score to each target text , CoheSentia provides incremental coherence label- ing on a sentence-by-sentence basis, enabling hu- mans to identify the specific reasons for incoher- ence. In our human annotation process, we fol- low the CoheSentia protocol to create the context- sentence pairs incrementally. We expand the lin- guistic fundamentals applied in CoheSentia and devise an annotation scheme that is tailored to in- coherence detection and rewriting in L2 English writings. Human-written texts NLP techniques of Coher- ence detection for human-written texts primarily identified simple surface feature proxies. McNa- mara et al. (2010) developed Coh-Metrix that mea- sures cohesion from a wide range of linguistic in- dexes. Similarly, Crossley et al. (2016) proposed a toolkit for automatic analysis of text cohesion. Re- cent work by Naismith et al. (2023) investigated the ability of GPT-4 to produce ratings for discourse coherence assessment. 3 DECOR Benchmark and Annotation Scheme In this section, we detail the data creation process forDECOR (Section 3.1). We also outline the specific annotation schemes for each proposed task: Incoherence Detection (Section 3.2), Incoherence Reasoning (Section 3.3), and Incoherent Sentence Rewriting (Section 3.4). 3.1 Data Creation We propose DECOR , a benchmark for assessing the writing coherence in L2 English essays. 
To construct the dataset, we first sampled 100 medium-level essays from the TOEFL-11 dataset, and corrected their basic grammar errors using the grammar models from Zhang et al. (2024). Then, we incrementally constructed context-sentence pairs (C, S) for each essay, following the protocol suggested by Maimon and Tsarfaty (2023). In these pairs, sentence S is the current sentence to be assessed, and context C includes all preceding sentences in the essay up to and including the sentence immediately before S. Overall, we constructed 1,352 (C, S) pairs from the 100 essays. The general statistics of DECOR are shown in Table 1. More detailed statistics, such as the number of sentences and words per essay, are shown in Figure 5 in the Appendix.

Items                          Count
# of essays                    100
# of words                     26,376
# of context-sentence pairs    1,352
# of coherent sentences        906
# of incoherent sentences      446
# of human rewrites            213
Table 1: Overall statistics of DECOR.

Next, for each context-sentence pair (C, S), we ask our human annotators to complete three tasks according to our annotation schemes: incoherence detection, reasoning, and rewriting. These three tasks are the main features of DECOR. We discuss these features and their specific annotation schemes below.

3.2 Incoherence Detection Annotation Scheme
DECOR features the ability to detect the incoherence of a given context-sentence pair. Inspired by the linguistic fundamentals of coherence (i.e. cohesion, consistency, and relevance) defined in Reinhart (1980), we expanded these fundamentals with reference to previous work in order to apply the task of incoherence detection to L2 English writing. We describe five specific criteria for detecting incoherence in each context-sentence pair below.

Semantic connection serves as the criterion that is based on the expanded categories of discourse progression for TSA proposed in Lautamatti (1978), where a sentence's semantic connection with the context of discourse is defined by its appropriate use of the sequential progression of topics from sentence to sentence that contributes to local coherence. Entity reference refers to the requirement for writers to establish a link between the topics of the current sentence and the context of the discourse, and is related to cohesion. Accurate anaphoric pronominal use is a key component of this criterion. For instance, in the passage "Learning about ideas and concepts is essential for all students. For example, they help students to apply their knowledge in new ways.", the pronoun they in the second sentence agrees in person and number with the referent ideas and concepts in the first sentence. Discourse relation is concerned with how the sentence is related to the overall context through the use of explicit cohesive ties that refer to the semantic relations between an element in a text and some other element that is crucial to the interpretation of it. Consistency is associated with the logical requirements for a sentence to align with the preceding sentences in the context. Relevance dictates that a sentence must be related to previous sentences in the discourse and the underlying discourse topic of the global context. If the given context-sentence pair violates any of the aforementioned criteria, it is considered incoherent, necessitating the subsequent step (described in Section 3.3) to identify the specific reasons causing sentence S to be incoherent to context C; otherwise, the sentence is labeled as coherent. The detailed annotation guidelines for this task are demonstrated in Appendix A.

3.3 Incoherence Reasoning Annotation Scheme
In addition to detecting incoherence, annotators are tasked with identifying the specific reasons for incoherence in the context-sentence pairs that are labeled as such. Drawing on the linguistic principles of coherence outlined in Reinhart (1980), three primary factors contribute to incoherence: Cohesion, Consistency, and Relevance. Given that Cohesion pertains to the linear sequencing and connections of sentences, we specifically designated three label codes for annotations within this category: semantic connection, entity reference, and discourse relation. For Consistency, we use a single code: consistency. Regarding Relevance, we have devised two codes: contextual relevance and tangential relevance. Other possible reasons that are not listed above are referred to as others. Detailed descriptions and examples of each label code are illustrated in Table 2.

R1 (Semantic connection): The sentence S does not connect semantically with the context C.
    C: If students study ideas and concepts, they can explore new areas of research.
    S: We need to make effort to apply our knowledge.
    S′: They need to make effort to apply their knowledge.
R2 (Entity reference): The current sentence S discusses an entity that has not been introduced in C yet, or sentence S discusses an entity that is ambiguous in C.
    C: Some people enjoy tours.
    S: Guides provide a lot of value for tourists.
    S′: Traveling in tour groups provides a lot of value for them.
R3 (Discourse relation): The relation between sentence S and previous ones in C doesn't make sense due to a missing discourse marker.
    C: Advertisements are not good for consumers.
    S: They only show the good features of a product.
    S′: For example, they only show the good features of a product.
R4 (Consistency): The current sentence S contradicts or is inconsistent with previously presented information.
    C: Because gas is getting more expensive, less people will drive in the future.
    S: Scientists are finding ways to make gas cheaper for drivers.
    S′: Scientists are researching alternative sources of energy.
R5 (Contextual relevance): The current sentence S introduces information that is completely irrelevant to the context.
    C: To become successful, people need to take risks.
    S: I think fear controls our decision making process.
    S′: Risks are important for people to learn what works and what doesn't work.
R6 (Tangential relevance): The current sentence S introduces information that is tangential or unnecessary for the development of the context.
    C: Young people tend to not help the people of their community.
    S: When I was younger I used to volunteer at a retirement home.
    S′: As a result, there may be a lack of volunteers at places like retirement homes.
R7 (Others): Other reasons that are not listed above. For example, the comment (rheme/focus) of the sentence does not agree with the topic of the sentence.
    S: My pet fish is flying in the sky.
    S′: My pet fish is swimming in its tank.
Table 2: Label codes for the specific reasons for incoherence during annotation. The rewrites S′ are provided for each incoherent (C, S) pair. In the original table, the erroneous parts in S are marked in red and the corrections are marked in bold in S′.
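These seven label codes are later collapsed into four high-level groups (Cohesion, Consistency, Relevance, Others) for the reasoning sub-tasks in Section 5.2. A small illustrative sketch of that grouping; the dictionary and function names are ours, not from the paper:

```python
# Map each fine-grained reason code to its high-level group.
REASON_GROUPS = {
    "R1": "Cohesion",      # semantic connection
    "R2": "Cohesion",      # entity reference
    "R3": "Cohesion",      # discourse relation
    "R4": "Consistency",
    "R5": "Relevance",     # contextual relevance
    "R6": "Relevance",     # tangential relevance
    "R7": "Others",
}

def group_labels(reason_codes: list[str]) -> dict[str, bool]:
    """Turn a list of annotated codes (e.g. ["R3", "R6"]) into one
    binary label per high-level group, one per reasoning sub-task."""
    groups = {"Cohesion": False, "Consistency": False,
              "Relevance": False, "Others": False}
    for code in reason_codes:
        groups[REASON_GROUPS[code]] = True
    return groups

print(group_labels(["R3", "R6"]))
# {'Cohesion': True, 'Consistency': False, 'Relevance': True, 'Others': False}
```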
3.4 Incoherent Sentence Rewriting Annotation Scheme After selecting all applicable reasons, sentence S is rewritten by the annotators to convert it to be co- herent with context C. Concretely, annotators are asked to make the least invasive changes necessary to improve the coherence based on the identified reasons. For example, if Discourse relation is se- lected as the reason, annotators are instructed to add or change a discourse marker that ties sentence S with context C . The complete list of suggested edits is described in Appendix A.2. Considering the challenges of providing all possible edits to sentence S during the annotation process, we in- structed our annotators to provide only one possible edit that addresses at least one selected reason from the previous step. We leave the exploration of mul- tiple edits for future work. 4 Data annotation process and statistics Considering the need for substantial experience in English essay grading, we recruited two annota- tors with extensive teaching experience in English and advanced degrees in Applied Linguistics, spe- cializing in English language education. Before annotating DECOR , we conducted a tutorial ses- sion to train the two annotators and familiarize them with our annotation scheme. Subsequently, in accordance with our specified scheme, we tasked them with annotating five sample essays, which comprised 72sentence-context pairs. 4\nWe calculated the inter-annotator agreement for these pairs using Cohen’s Kappa . The two annotators achieved a κvalue of 0.83 for Incoherence Detection, indicating an almost perfect agreement. For Incoherence Reasoning, they reached an average κ= 0.90across all rea- son types, also reflecting almost perfect agreement. The specific agreement scores for each reason type are presented in Appendix B. As for Incoherent Sentence Rewriting, the leading authors validated whether the new sentences are acceptable. In partic- ular, a new sentence S′is acceptable if it preserves the semantic meaning of the original sentence S and is coherent with the given context C. Overall, the rewrites by the two annotators were deemed acceptable at rates of 88% and 89%, respectively. Subsequently, the two annotators worked inde- pendently on the test set, with each annotating around 700 (C, S )pairs that are constructed from Section 3. Overall, among all 1,352 (C, S )pairs, 906sentences are coherent with their correspond- ing contexts, whereas 446sentences are labeled as incoherent. We present the number of words per rewrite in Figure 4. Note that we do not con- sider rewrites marked as DELETE , resulting in 213 rewrites that contain more than one word. In addi- tion, we presented the distribution of the annotated reasons for incoherence in Figure 2. Our analysis shows that the medium-level essays, randomly sam- pled from the TOEFL-11 corpus, generally main- tain consistency and rarely contradict the context. Moreover, we also find that the primary sources of incoherence in these essays are related to Rele- vance andCohesion , with issues of tangential rele- vance and weak discourse relations being the most prevalent. 5 Incoherence Detection, Reasoning and Rewriting We propose DECOR to benchmark the model’s ability in incoherence detection, reasoning, and rewriting for English essays written by L2 language learners. In this section, we will outline each of the three tasks and describe their specific task for- mulations, evaluation metrics, data, baselines, and results and analysis. 
5.1 Incoherence Detection
5.1.1 Task formulation
In this task, the model will assess the given context-sentence pairs that are extracted from essays written by L2 learners, determining whether the sentence S maintains coherence with the context C. This task is specifically designed to evaluate the effectiveness of systems in capturing coherence within learner-written texts.

Figure 2: Distribution of specific reasons for incoherence, and those clustered into groups. (a) Distribution of reasons for incoherence clustered into groups; (b) distribution of specific reasons for incoherence.

5.1.2 Evaluation metrics
Given the class imbalance in our test set, where 906 instances are labeled as coherent and 446 as incoherent, we opt to use the weighted F1 score as a metric to assess the performance of different models. This approach ensures a fair evaluation by accounting for the disproportionate distribution of classes.

5.1.3 Data
Given the absence of a dedicated incoherence detection corpus for language learners suitable for model training purposes, we followed the approach recommended by Zhang et al. (2024) and synthesized task-specific incoherence detection data using GPT-4.2 The prompt we used for GPT-4 is shown in Appendix D.1. To start with, we randomly sampled 800 medium-level essays from the TOEFL-11 dataset and generated 11,267 context-sentence pairs. We then used GPT-4 to analyze these pairs for incoherence, producing a label for each. In this process, 6,422 sentences were identified as coherent, while 4,845 were labeled as incoherent. For the training process, we allocated 90% of this synthetic data for training purposes, denoted as DT, and reserved the remaining 10% for validation. Moreover, we also utilized the out-of-distribution training data proposed in Maimon and Tsarfaty (2023), denoted as DC.

2Throughout this paper, we employ GPT-4o as the default model unless otherwise specified.

Models         Data   Incoherence Detection   Cohesion   Consistency   Relevance   Others
BERT-base      DC     63.04                   48.17      93.76         28.47       -
BERT-base      DT     66.43                   44.38      75.41         46.37       80.36
DeBERTa-base   DC     62.21                   47.93      93.88         29.45       -
DeBERTa-base   DT     68.54                   48.36      77.17         45.14       74.20
Llama2-7B      DC     59.52                   43.93      93.65         28.87       -
Llama2-7B      DT     66.08                   46.63      83.55         47.20       87.78
GPT-4          zero   66.56                   51.03      93.02         56.60       87.93
GPT-4          16     69.33                   48.71      90.10         65.54       85.64
Table 3: Evaluation of models on DECOR using weighted F1 scores in percentages (%) for the Incoherence Detection and Incoherence Reasoning (Cohesion, Consistency, Relevance, Others) tasks. For each task, the task-specific synthetic training data is denoted as DT, whereas the out-of-domain training data is denoted as DC. We also conducted zero-shot (zero) and in-context learning (16-shot) with GPT-4. Since Others is not specified in DC, we exclude it for evaluation.

5.1.4 Baselines
We conducted experiments with classification-based models that consist of encoder-only architectures equipped with a classification head. Specifically, we tested models such as BERT and DeBERTa with their base and large variants. Each model generates predictions with two labels (yes or no) to determine if the sentence S is coherent with the context C. The input to the model's encoder is structured in the format "C <SEP> S", facilitating the assessment of coherence between the given context and sentence. In light of the burgeoning field of powerful instruction-following models, we also explored two generation-based large language models: Llama 2 and GPT-4. For Llama 2, we fine-tuned its 7B variant using our synthetic dataset DT for this task.
With GPT-4, we tested in both zero-shot and 16-shot settings. De- tails of the prompts used in the GPT-4 experiments are provided in Appendix F. 5.1.5 Results and analysis The results for the task of incoherence detection are demonstrated in Table 3. As observed, train- ing with our task-specific synthetic dataset DT yielded superior results compared to using the out-of-domain dataset DC. This improvement is attributed to the fact that DCconsists solely of machine-generated texts, which introduces a signif- icant distribution shift. Additionally, while GPT-4 with 16-shot examples surpassed all other mod- els, smaller models trained on our synthetic data DT, such as BERT-base and Llama-2-7B, achieved performance comparable to GPT-4 in a zero-shot setting. Moreover, DeBERTa-base matched GPT- 4’s performance in the 16-shot setting and even exceeded it in the zero-shot scenario. We also ex- perimented with combining both DCandDTdur- ing training; however, this did not lead to improved results. Details of the experiment are provided in Appendix E. 5.2 Incoherence Reasoning 5.2.1 Task formulation The incoherence reasoning task aims to develop models capable of identifying the specific causes of incoherence in context-sentence pairs labeled as such. Due to the sparse distribution of incoherence reason types depicted in Figure 2b, we focus on the four high-level causes previously introduced: Cohesion ,Consistency ,Relevance , and Others . For each of these four causes of incoherence, we hope to develop specialized models capable of determin- ing whether the incoherence stems from a specific cause. This approach divides the overall incoher- ence reasoning task into four distinct sub-tasks, each targeting a different cause. 5.2.2 Evaluation metrics In Figure 2a, DECOR exhibits unbalanced annota- tions across the four reason types of incoherence. 6\nHence, we report weighted F1 scores for each of the four sub-tasks to account for this imbalance. 5.2.3 Data We adopted a similar approach as described in Sec- tion 5.1.3 to synthesize the training data for all four sub-tasks. Specifically, we prompted GPT-4 to identify all potential reasons for each instance of incoherence detected from Section 5.1.3, based on the seven predefined causes outlined in Table 2. The prompts we used for data synthesis are demon- strated in Appendix D.1. Furthermore, we post- processed the resulting data to create four distinct datasets, each serving as the training data for de- tecting Cohesion ,Consistency ,Relevance , and Oth- ers. For instance, in creating the training set for detecting Cohesion as the cause, an instance is la- beled "Yes" if GPT-4 identifies R1, R2, or R3 as the cause of incoherence for that instance; other- wise, the label is "No", indicating that the incoher- ence is caused by other factors. Similar to 5.1.3, the synthetic datasets are denoted as DT. The de- tails for the post-processing and statistics of the resulting data for each sub-task are described in Appendix D.2. 5.2.4 Baselines We adopted the same set of baseline models that are tested in the incoherence detection task: classification-based models (i.e. BERT and De- BERTa), and generation-based models (i.e. Llama 2 and GPT-4). Similarly, for each sub-task of the incoherence reasoning, each model predicts with two labels (i.e. yes or no) to determine if the sen- tence S is incoherent with the context C due to a specific cause. 
We fine-tuned BERT, DeBERTa, and Llama2-7B models on the task-specific synthetic data DT for each sub-task, as well as on the out-of-distribution data DC. We also prompted GPT-4 under both zero-shot and 16-shot settings. The prompts for the GPT-4 experiments are shown in Appendix F.

5.2.5 Results and analysis
The results for incoherence reasoning in terms of the four sub-tasks are demonstrated in Table 3. It was observed that training DeBERTa-base and Llama2-7B models with DT resulted in enhanced performance for Cohesion and Relevance when compared to training with DC. For Cohesion, DeBERTa-base outperforms the Llama2-7B model and is close to the performance of GPT-4. In comparison, for the Consistency task, all of our models demonstrate markedly enhanced performance when trained with DC rather than DT. This improvement is likely attributed to the imbalanced training data distribution in DC, which more closely mirrors the Consistency class distribution in DECOR. For the task of Others, we have omitted DC from the table because the category Others is not included in DC. Our Llama2-7B model, fine-tuned with DT, achieved results comparable to GPT-4 in both zero-shot and 16-shot settings. We further explored the effects of combining DT and DC as training data to fine-tune our models for tasks excluding Others. The results varied across different tasks and are presented in Table 8 in Appendix E.

Model                Training condition   Acceptance Rate (%)   Win Rate (%)
Llama2-7B            w/ reason            75.59                 69.16
Llama2-7B            w/o reason           74.65                 69.01
Llama3-8B-Instruct   w/ reason            77.46                 72.30
Llama3-8B-Instruct   w/o reason           75.12                 71.83
Table 4: Automatic evaluation of models for the incoherence rewriting task. The win rate is calculated by adopting GPT-4 as a judge to compare the system-generated rewrites against human-written references.

5.3 Incoherence Rewriting
5.3.1 Task formulation
The incoherence rewriting task is designed to assess the model's capability to edit a given incoherent sentence S into a revised sentence S′ that restores coherence with the context C, based on the identified reasons R for incoherence. Specifically, we prefer edits that not only enhance the coherence of the original sentence but also minimize alterations, ensuring the changes are as unobtrusive as possible.

5.3.2 Evaluation metrics
We measured the systems' performance on incoherence rewriting with the acceptance rate. This metric was determined by calculating the proportion of revised sentences S′ that both achieve coherence with context C and maintain minimally invasive edits, out of all evaluated incoherent context-sentence pairs. We specifically employed GPT-4 with 16-shot examples (the best-performing setting in the incoherence detection task) to determine if the rewrites S′ are acceptable. Additionally, in line with recent practices for evaluating instruction-following LLMs (Zhou et al., 2024; Dubois et al., 2024), we asked GPT-4 to rank a pair of generated rewrites (one from the human-written reference, the other from the tested models) to decide which one is more coherent with the context C. For each tested model, we collect its win rate against the human reference. Note that we randomly shuffle the ordering of the pair-wise outputs to avoid position biases. The prompt we adopted for GPT-4 judging is shown in Appendix G.

Figure 3: Human expert as a judge evaluation results with GPT-4 rewrites as the baseline. We sample 100 examples and ask our human expert for each pair of comparisons. A higher win rate and a lower loss rate indicate superior quality.
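The win-rate bookkeeping described above, including the random shuffling of presentation order to avoid position bias, can be sketched as follows. This is an illustrative sketch only; the judge call is abstracted behind a judge_fn callable, since the actual GPT-4 judging prompt is in Appendix G:

```python
import random

def pairwise_win_rate(items, judge_fn, seed=0):
    """items: list of (context, model_rewrite, human_rewrite) triples.
    judge_fn(context, rewrite_a, rewrite_b) -> "A" or "B", e.g. a
    GPT-4 call with the judging prompt. Returns the tested model's
    win rate against the human reference; each pair is presented in
    a randomly shuffled order to avoid position bias."""
    rng = random.Random(seed)
    wins = 0
    for context, model_rw, human_rw in items:
        model_first = rng.random() < 0.5
        a, b = (model_rw, human_rw) if model_first else (human_rw, model_rw)
        verdict = judge_fn(context, a, b)
        if (verdict == "A") == model_first:  # judge picked the model rewrite
            wins += 1
    return wins / max(len(items), 1)
```

With judge_fn wired to a GPT-4 chat call, the returned value corresponds to the win rate columns reported above.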
5.3.3 Data Given the reasons generated from the incoherence reasoning task, we prompted GPT-4 to generate the rewrites based on the identified reasons for incoher- ence. These rewrites are used as the training data for the incoherence rewriting task. The prompt we used for the rewrite synthesis and relevant statistics are shown in Appendix D.1. For automatic eval- uation, we used all 213rewrites generated by our annotators, and we randomly selected a sample of 100for human evaluation. 5.3.4 Baselines We conducted experiments with two advanced open-sourced generative LLMs, Llama 2 and Llama3 (AI@Meta, 2024), for the incoherence rewriting task. Specifically, we fine-tuned Llama2-7B and Llama3-8B-Instruct us- ing our synthetic rewriting dataset under two ex- perimental conditions: training with reasons for incoherence and without reasons. 5.3.5 Results and analysis Automatic Evaluation The automatic evaluation results for incoherence rewriting are shown in Ta- ble 4. As observed, fine-tuning both the Llama2- 7B and Llama3-8B-Instruct models with reasons for incoherence consistently results in better per- formance compared to their counterparts trained without such reasons, achieving higher scores in both acceptance rate and win rate. Table 9 demon- strates the qualitative comparisons among example rewrites produced by our fine-tuned models. Human Evaluation Moreover, we conducted a human evaluation where we asked our human expert to judge and compare system-generated rewrites with those produced by GPT-4.3Addi- tionally, the human evaluator was also tasked with a pairwise comparison between human-written ref- erences and the same set of GPT-4 rewrites. The results are shown in Figure 3. As expected, our hu- man judges predominantly preferred rewrites pro- duced by human experts over those generated by GPT-4, with the highest win rate reaching 74%. Consistent with the results in Table 4, fine-tuning Llama 2 with reasons for incoherence resulted in a higher win rate and a significantly lower loss rate compared to fine-tuning without reasons. A chi-square test indicates a significant difference be- tween these two conditions (with p-value < 0.01). This supports our hypothesis that rewriting inco- herent sentences with an understanding of their underlying causes should produce higher-quality rewrites. 6 Conclusion and Future Work We propose a novel benchmark DECOR aiming to assess and improve coherence in L2 English writing. Specifically, DECOR contains three tasks: incoherence detection, reasoning, and rewriting. Our annotation scheme allows us to produce a corpus comprising 1,352context-sentence pairs with coherence labels, as well as the first paral- lel corpus featuring 213pairs of original incoher- ent sentences and their expert-rewritten counter- parts. Additionally, we fine-tuned various models with task-specific synthetic data, achieving results comparable to GPT-4 in coherence detection and generating rewrites favored by both automatic and human evaluations. In future work, we plan to enhance DECOR by expanding its size and qual- ity, ensuring more balanced reason type labels and multiple edits for each incoherent context-sentence pair. This enhancement will create a more compre- hensive evaluation set for coherence assessment, specifically tailored to L2 writing. 3To avoid biases, instead of the same annotators, we asked one of our leading authors to conduct the human evaluation. 
8\n7 Limitations While our benchmark may contribute to building the systems that can improve the coherence in L2 English writing, there were a number of limitations to our study. First, the distribution of incoherence reason types is unbalanced, with the Consistency category containing the fewest annotations among the four high-level reason types. This is due to the fact that medium-level essays from the TOEFL-11 corpus, the source of all context-sentence pairs, generally maintain consistency and seldom contradict the context. We leave our future work to diversify and balance the reason types in DECOR , potentially by including low-level essays written by English L2 learners. Additionally, the texts sampled from the TOEFL- 11 corpus for synthesizing our training data were limited by the specific writing prompts they ad- dressed. This limitation may hinder the system’s ability to detect coherence in learner-produced writ- ing that responds to out-of-domain prompts not included in the TOEFL-11 corpus. Future exten- sions of our work includes incorporating other L2 English writing corpus. Finally, regarding the general design of our an- notation scheme for coherence detection, we con- sidered all sentences in the context up until the target sentence. However, as we found during our annotation tutorial session, sometimes issues of co- herence occur due to the structuring of information that is contained in sentences that come later in the text. Future work might focus on these specific types of coherence breaks and their prevalence in L2 writing. 8 Ethics Statement Reproducibility In this work, we utilized GPT- 4 to synthesize our task-specific training data for coherence detection, reasoning, and rewriting. We also used it during the evaluation. To facilitate the reproducibility of our data synthesis process and evaluation results, we included all relevant prompts that were used in our paper. In addition, all the other models used in this research, are publicly available in peer-reviewed articles and referenced in this paper. All datasets, including our synthetic fine-tuning dataset and the annotated test set, will be released. Biases We did not explicitly handle any bias that exists in the pre-trained language models we exper- imented with in this paper. Human Annotators Both annotators were specifically recruited from the linguistics depart- ment, and they are both associate professors with extensive experience in teaching English as a for- eign language and have advanced degrees in Ap- plied Linguistics. They were paid at a rate of $12 per hour. To protect privacy and anonymity, con- tributors’ personal and demographic information was not collected.
[ "ProLex: A Benchmark for Language Proficiency-oriented Lexical Substitution", "CoheSentia: A Novel Benchmark of Incremental versus Holistic Assessment of Coherence in Generated Texts", "Utilization of GPT-4 to Improve Education Quality Through Personalized Learning for Generation Z in Indonesia", "Llama 2: Open Foundation and Fine-Tuned Chat Models", "Judging LLM-as-a-judge with MT-Bench and Chatbot Arena", "AlpacaFarm: A Simulation Framework for Methods that Learn from Human Feedback", "LIMA: Less Is More for Alignment", "Sparks of Artificial General Intelligence: Early experiments with GPT-4", "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback", "Training language models to follow instructions with human feedback", "DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing", "LM-Critic: Language Models for Unsupervised Grammatical Error Correction", "GECToR – Grammatical Error Correction: Tag, Not Rewrite", "Discourse Coherence in the Wild: A Dataset, Evaluation and Methods", "The Contribution of Lexical Diversity to College-Level Writing.", "Vocabulary Knowledge and Vocabulary Use in Second Language Writing.", "The tool for the automatic analysis of text cohesion (TAACO): Automatic assessment of local, global, and text cohesion", "TOEFL11: A CORPUS OF NON‐NATIVE ENGLISH", "Coh-Metrix: Capturing Linguistic Features of Cohesion", "Analyzing Topical Structure in ESL Essays", "Conditions for Text Coherence", "A Coefficient of Agreement for Nominal Scales", "Automated evaluation of written discourse coherence using GPT-4", "Mitigating Exposure Bias in Grammatical Error Correction with Data Augmentation and Reweighting", "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "‘Little coherence, considerable strain for reader’: A comparison between two rating scales for the assessment of coherence", "Perceptions of the difficulties of postgraduate L2 thesis students writing the discussion section", "Learning to cohere : Causal links in native vs. non-native argumentative writing", "Observations on the Development of the Topic in Simplified Discourse", "Cohesion in English" ]
What Would Happen Next? Predicting Consequences from An Event Causality Graph
What Would Happen Next? Predicting Consequences from An Event Causality Graph Abstract The script event prediction task forecasts a possible subsequent event based on an event script chain. However, an event chain may not enough to describe the complex evolution of historical events, which often takes a graph structure in real world scenarios. This paper introduces a Causality Graph Event Predic- tion(CGEP) task that aims at forecasting con- sequential events based on an Event Causal- ity Graph (ECG). We propose a Semantic- enhanced Distance-sensitive Graph Prompt Learning (SeDGPL) Model for the CGEP task. In SeDGPL, (1) we design a Distance-sensitive Graph Linearization (DsGL) module to refor- mulate an ECG into a graph prompt template as the input of a language model; (2) we propose anEvent-Enriched Causality Encoding (EeCE) module to integrate both event contextual se- mantic and graph schema information; (3) we propose a Semantic Contrast Event Prediction (ScEP) module to enhance the event representa- tion among numerous candidate events and pre- dict consequential events following a prompt learning paradigm. Experiment results validate that our SeDGPL outperforms the advanced competitors for the CGEP task.1 1 Introduction Event prediction aims to forecast the consequential event that are most likely to happen next, based on historical events and their relationships. It has lots of applications in many scenarios, such as dialogue systems , discourse understand- ing , and story genera- tion . Existing script event prediction task predicts the sub- sequent event given a sequence of events, named event script chain. However, we argue that a simple chain structure is far from enough to describe the complicated evolution of historical events in the real world, yet a graph structure might be more appropriate. Motivated from such considerations, this paper introduces a Causality Graph Event Prediction (CGEP) task that aims at forecasting consequential event based on an Event Causality Graph (ECG). As illustrated in Fig. 1, the CGEP task is to select 1Source codes will be released after the anonymous review. Figure 1: An example of Consequence Event Prediction based on Causality Event Knowledge Graph. the most likely consequential event from a can- didate set based on an input ECG and a selected anchor event. Unlike using a simple chain struc- ture in the script event prediction task, we use a graph structure, i.e., an ECG, to model the diverse relations in between events. Besides, an ECG may have more than one consequential event that are likely to happen next. As such, we propose to pre- dict a consequential event for each tail node event (i.e. the anchor event) in an ECG. Traditional event prediction methods either en- code the contextual semantic of events or model the information of a graph struc- ture for event forcasting. The recently emerged prompt learning paradigm, based on a pre-trained language model (PLM), exhibits outstanding ability in logical reasoning and has been applied in many natural language processing tasks . However, most PLMs take text sequences as input and struggle to pro- cess graph-structured inputs. In this paper, we use the graph prompt learning paradigm to linearize the input ECG, so as to utilize the parameterized knowledge in a PLM for prediction. Besides, some studies obtain common sense knowledge from external knowledge bases to aug- ment event prediction . 
This again validates our argument that an event chain contains insufficiency information of historical events. By contrast our ECG input itself has included abun- dant historical events and diverse relations. To this 1\nend, we enrich the event representation by integrat- ing event contextual semantic and graph schema information from the input ECGs. Furthermore, a semantic contrastive learning is used to enhance the event representation among numerous candi- date events. Finally, we select the consequential event from a significantly larger candidate set than that in the event script prediction task. In this paper, we introduce the CGEP task to fore- cast consequential event(s) based on an ECG, and propose a Semantic Enhanced Distance-sensitive Graph Prompt Learning (SeDGPL) Model for the CGEP task. The SeDGPL model contains three modules: (1) The Distance-sensitive Graph Lin- earization (DsGL) module reformualtes the ECG into a graph prompt template as the input of a PLM; (2) The Event-Enriched Causality Encoding (EeCE) module enriches the event representation by integrating both event contextual semantic and graph schema information; (3) The Semantic Con- trast Event Prediction (ScEP) module enhances the event representation among numerous candidate events and predicts consequential event(s) follow- ing the prompt learning paradigm. We construct two CGEP datasets based on exist- ing event causality corpus MA VEN and Event StoryLine Corpus (ESC) . Experiment results validate our argument that predicting events based on ECG is more reasonable than that based on event script chain, and our proposed SeDGPL model outper- forms the advanced competitors. 2 Related Work 2.1 Script Event Prediction Script Event Prediction focuses on predicting fu- ture events based on a narrative event chain with shared entities. Previous studies employ word2vec to encode the events, and predict a sub- sequent event based on the similarity between can- didate events and script events. With respect to temporal ordering, Pichotta and Mooney (2016); Wang et al. (2017) employ Long Short-Term Mem- ory (LSTM) to model the temporal dependencies between events. Contemporary event modeling methods utilize the Pre-trained Language Models, e.g. BERT and RoBERTa . However, these models lack discourse- awareness as they are trained using Masked Lan- guage Modeling, which does not effectively capture the causal and temporal relations between multi- hop events. To address this problem, some re- searches also ex- plore additional event graphs as external knowledge base to assist event prediction. For example, Wang et al. (2022b) proposes a novel Retrieval-Enhanced Temporal Event forecasting framework, which dy- namically retrieves high-quality sub-graphs based on the corresponding entities. 2.2 Event Graph Reasoning Event Graph Reasoning aims to leverage the graph topology and structure to identify new patterns that do not explicitly exist in the event graph. Depending on the goal of reason- ing, the task can be further categorized into relation reasoning and event predic- tion . For relation reasoning, Tang et al. (2023) adopts events’ attributes to learn the se- mantic representations of events, and predicts what a relation exists in between two events without an edge. Tang et al. (2021) combines LSTM and atten- tion mechanisms to dynamically generate event se- quence representations, thereby predicting events’ relations. 
For event prediction, prior studies (Du et al., 2021, 2022b) perform subgraph matching between an instance graph and a schema graph to identify subsequent events. However, such methods predict event types rather than the events themselves. Moreover, Li et al. (2023b); Islam et al. (2024) predict potential events for the next timestamp by dividing the event graph into a series of subgraphs based on event timestamps.

3 Causality Graph Event Prediction

3.1 Task Definition
We define the Causality Graph Event Prediction (CGEP) task as predicting the most likely consequential events that would occur next in an event causality graph (ECG). As illustrated in Figure 1, the ECG is a directed acyclic graph consisting of some past events as nodes and the causal relations between them as directed edges, denoted by G(E, R), where an event node e_i ∈ E contains the event mention word(s) Em_i and its containing raw sentence S_i, and a causality edge r_ij ∈ R is a directed causal relation from the event node e_i to the event node e_j, indicating that e_i causes e_j (i.e., e_i → e_j). Each tail node in an ECG, which has no edge starting from it to any other event node, is used as the anchor event e_t ∈ E for next event prediction. The objective of the CGEP task is to select the most likely consequential events e_c from the candidate event set E_c for an anchor event node e_t in an ECG.

Figure 2: Data Processing Flowchart: the data processing involves transforming an original ECG into multiple data instances, with each instance specifically predicting a single tail event.

3.2 Datasets Construction
We construct two CGEP datasets based on the public event causality datasets MAVEN and EventStoryLine Corpus (ESC), which annotate event mentions and directed causal relations between events within documents. Figure 2 illustrates the process of CGEP dataset construction.

We first construct ECGs based on the annotations in each document from the ESC and MAVEN datasets, using the annotated events as nodes and the annotated directed causal relations between events as edges. Note that multiple disconnected ECGs may be constructed from a single document, and only weakly connected graphs2 with more than four event nodes are retained to ensure a complete event causality graph structure for event prediction. We then mask one of the tail event nodes in an ECG as a CGEP instance, where the masked event is the consequential event e_c to be predicted and its cause event is the anchor event e_t. In case the masked event is caused by multiple events, or an anchor event causes multiple effect events, it is further divided into multiple CGEP instances to ensure that each instance has a unique anchor event and ground-truth consequential event.

For each CGEP instance, we randomly select a large number of tail node events from all other ECGs in the dataset as negative samples to construct a candidate set of consequential events E_c. The ground-truth event e_c is the one that has been masked as described above. Considering that the ground-truth event mention may also appear in the sentence of other event nodes, that is, its belonging sentence contains multiple event mentions, we replace them with the PLM-specific token [PAD] to prevent answer leakage.

2A graph is considered weakly connected if every pair of vertices in the graph is connected by a path, regardless of the direction of the edges.

Finally, we construct two CGEP datasets, CGEP-MAVEN and CGEP-ESC,3 for the CGEP task, in which each instance contains an event causality graph G(E, R), an anchor event e_t, a candidate event set E_c, and a ground-truth consequential event e_c. Considering the varying instance sizes of the CGEP-MAVEN and CGEP-ESC datasets, the candidate set of consequential events contains 512 and 256 randomly selected events, respectively. Table 1 summarizes the statistics of our constructed CGEP-MAVEN and CGEP-ESC datasets.

3Datasets will be released publicly after the anonymous review.

Dataset       Docs    ECGs    Avg. Nodes   Avg. Edges   Instances   CandiSet
CGEP-MAVEN    3,015   5,308   8.4          12.9         12,200      512
CGEP-ESC      243     363     11           24.9         1,191       256
Table 1: Statistics of our CGEP-MAVEN and CGEP-ESC datasets.

4 Methodology
We propose a Semantic Enhanced Distance-sensitive Graph Prompt Learning Model (SeDGPL) for causality graph event prediction. As illustrated in Figure 3, the SeDGPL consists of three main modules: (1) Distance-sensitive Graph Linearization (DsGL); (2) Event-Enriched Causality Encoding (EeCE); (3) Semantic Contrast Event Prediction (ScEP).

Figure 3: The SeDGPL model consists of three modules: (1) Distance-sensitive Graph Linearization (DsGL); (2) Event-Enriched Causality Encoding (EeCE); (3) Semantic Contrast Event Prediction (ScEP).

4.1 Distance-sensitive Graph Linearization
The DsGL module reformulates the Event Causality Graph (ECG) of an input CGEP instance into a graph prompt template T(G), which serves as the input of a Pre-trained Language Model (PLM). As illustrated in Figure 3 (a), the graph prompt template T(G) is a concatenation of some event causality triple templates T_n and a simple prompt template T_m, represented as follows:

T(G) = [C], T_1, [S], \ldots, T_n, [S], T_m, [S],   (1)

where [C] and [S] are the PLM-specific tokens [CLS] and [SEP], respectively, indicating the beginning and ending of an input sequence. Additionally, [S] is also used to mark the boundary between each triple template and the prompt template.

Given an ECG G with n directed causality edges, we can first obtain n event causality triples T_r^{(n)} = (e_i, r_ij, e_j), each containing a cause event e_i, an effect event e_j and a directed causal relation r_ij from e_i to e_j. The template T_n for each event causality triple is formulated by concatenating the cause and effect event mentions with an inserted conjunction word causes:

T_n = Em_i \ \textit{causes} \ Em_j,   (2)

where Em_i and Em_j are the event mentions of the cause event e_i and the effect event e_j, respectively.

We argue that the closer an event causality triple T_r^{(n)} is to the anchor event e_t, the stronger its connection to the anchor event, and the more critical information it can provide for consequential event prediction. To this end, we order the event causality triples based on their distances to the anchor event. The distance of an event causality triple T_r^{(n)} = (e_i, r_ij, e_j) to the anchor event e_t is computed as the number of edges on the shortest undirected path from its cause event e_i to the anchor event e_t, as follows:

d_n(e_i, e_t) = \min_{p \in P(e_i, e_t)} |p|,   (3)

where P(e_i, e_t) is the set of all undirected paths from the cause event e_i to the anchor event e_t, and |p| is the number of edges on the path p.

We arrange the event causality triple templates T_n in decreasing order of their distances to the anchor event e_t. As in Equation 1, the distances are ordered such that d_1 ≥ d_2 ≥ \ldots ≥ d_n, indicating that T_n is the closest to the anchor event and T_1 is the farthest one.

At the end of the graph prompt template T(G), we design and concatenate a simple prompt template T_m for event prediction:

T_m = Em_t \ \textit{causes} \ \texttt{[MASK]},   (4)

where Em_t is the event mention of the anchor event e_t and the PLM-specific token [MASK] is used to predict the consequential event.

4.2 Event-Enriched Causality Encoding
To enrich the event representation for causality encoding, we propose an EeCE module that integrates both event contextual semantic and graph schema information into the ECG representation. After graph linearization, we input each graph prompt template T(G) into a pre-trained language model, denoted as P_ECG, for ECG encoding. As illustrated in Figure 3 (b), the input representation of the PLM is constructed by summing the corresponding token embedding h_t^{(g)}, the segment embedding h_s^{(g)}, and the position embedding h_p^{(g)}:

h^{(g)} = h_t^{(g)} + h_s^{(g)} + h_p^{(g)}.   (5)

For contextual semantic encoding, we input the raw sentence S_i of each event into another PLM P_c to obtain its contextual representation h^{(c)}, as illustrated in Figure 3 (c). For schema information encoding, we first construct an event schema graph by replacing each event node in an ECG with its corresponding annotated event type. After the same graph linearization operation, we input each schema graph template into another PLM P_s to obtain the event's schema representation h^{(s)}, as illustrated in Figure 3 (d). We note that only the token embeddings of the event's contextual representation h_t^{(c)} and schema representation h_t^{(s)} are used for the subsequent enrichment fusion. The segment embedding h_s^{(g)} and position embedding h_p^{(g)} of the ECG encoding, which contain graph structure information, are directly used without fusion.

To fuse the features of the event's contextual semantic and schema information into the ECG representation, we use fusion gates to integrate these event representations h_t^{(c)} and h_t^{(s)} into the event's representation in the ECG, h_t^{(g)}. Specifically, we first use a fusion gate to integrate the contextual representation h_t^{(c)} and the schema representation h_t^{(s)}, and output h_t^{(r)} ∈ R^{d_h} as the event enrichment vector. The transition functions are:

g_r = \mathrm{sigmoid}(W_r h_t^{(c)} + U_r h_t^{(s)}),   (6)
h_t^{(r)} = g_r \odot h_t^{(c)} + (1 - g_r) \odot h_t^{(s)},   (7)

where W_r ∈ R^{d_h×d_h}, U_r ∈ R^{d_h×d_h} are learnable parameters and \odot denotes the element-wise product of vectors.

We next use another fusion gate to integrate the event enrichment vector h_t^{(r)} ∈ R^{d_h} into the token embedding of the event's representation in the ECG, h_t^{(g)}. The transition functions are:

g_e = \mathrm{sigmoid}(W_e h_t^{(g)} + U_e h_t^{(r)}),   (8)
\tilde{h}_t^{(g)} = g_e \odot h_t^{(g)} + (1 - g_e) \odot h_t^{(r)},   (9)

where W_e ∈ R^{d_h×d_h}, U_e ∈ R^{d_h×d_h} are learnable parameters. With the fusion gates, we enrich the event's representation in ECGs by integrating both the event's contextual semantic and schema information features. Note that only the representations of event mentions in ECGs are fused, while the other tokens in the graph prompt template T(G), such as causes, [CLS], [SEP], [MASK], and so on, are originally encoded by the ECG encoding PLM P_ECG.

Finally, the PLM P_ECG outputs a hidden state vector z for each input token in the graph prompt template T(G), using the fused event token embeddings as input representations.

4.3 Semantic Contrast Event Prediction
Following the prompt learning paradigm, we use the hidden state vector of the [MASK] token, z_m, for consequential event prediction. To enhance the PLM's ability to understand event semantics among numerous candidate events, we apply a kind of semantic contrastive learning to improve the [MASK] token representation z_m.
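Before detailing that prediction step, the distance-sensitive linearization of Section 4.1 (Eqs. 1-4) can be made concrete. The following is an illustrative sketch rather than the authors' released code; it assumes the ECG is given as a networkx DiGraph whose nodes carry a "mention" attribute:

```python
import networkx as nx

def linearize_ecg(ecg: nx.DiGraph, anchor: str) -> str:
    """Build the graph prompt template of Eq. (1): causal triples are
    ordered by decreasing distance to the anchor event (Eq. 3), and the
    prompt '<anchor mention> causes [MASK]' (Eq. 4) is appended."""
    mention = lambda n: ecg.nodes[n]["mention"]
    # Distance = number of edges on the shortest *undirected* path from
    # the cause event of each triple to the anchor event.
    dist = nx.single_source_shortest_path_length(ecg.to_undirected(), anchor)

    triples = []
    for cause, effect in ecg.edges():
        d = dist.get(cause, len(ecg))  # disconnected causes count as farthest
        triples.append((d, f"{mention(cause)} causes {mention(effect)}"))

    triples.sort(key=lambda t: t[0], reverse=True)  # farthest first, nearest last
    parts = [text for _, text in triples]
    parts.append(f"{mention(anchor)} causes [MASK]")
    # [SEP] marks the boundary between each triple template and the prompt.
    return "[CLS] " + " [SEP] ".join(parts) + " [SEP]"
```

Ordering by decreasing distance places the triple adjacent to the anchor event immediately before the [MASK] prompt, matching Eq. (1).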
Semantic Contrastive Learning: As illustrated in Figure 3 (e), we first obtain a representation vector z_c for each candidate event e_c using the fine-tuned PLM P_ECG. The hidden state of the [MASK] token, z_m, is then used as the anchor sample, and the candidate event representations z_c are used as contrastive samples, where the ground-truth event is the positive sample z_c^+ and the other candidate events are negative samples z_c^-. We employ the supervised contrastive loss to compute the semantic contrast loss, as follows:

L_c = -\log \frac{\exp(z_m \cdot z_c^+ / \tau)}{\sum_{c \in C} \exp(z_m \cdot z_c / \tau)},    (10)

where \tau is a scalar temperature parameter and C is the candidate set containing the positive sample and the negative samples.

Consequential Event Prediction: As illustrated in Figure 3 (f), the PLM P_ECG estimates the probability of each word within its vocabulary V for the hidden state of the [MASK] token, z_m. We use the predicted probability of the event mention word e_c in the event candidate set E_c as the ranking score, to form an event prediction list:

P([MASK] = e_c ∈ E_c \mid T(G)).    (11)

We employ the cross-entropy loss to compute the event prediction loss, as follows:

L_p = -\frac{1}{K} \sum_{k=1}^{K} y^{(k)} \log(\hat{y}^{(k)}) + \lambda \|\theta\|^2,    (12)

where y^{(k)} and \hat{y}^{(k)} are the gold label and predicted label of the k-th training instance, respectively, \lambda is the regularization coefficient, and \theta denotes the model parameters. We use the AdamW optimizer with L2 regularization for model training.

Training Strategy: The cost function of our SeDGPL is optimized as follows:

L = L_p + \beta \cdot L_c,    (13)

where \beta is a weight coefficient that balances the importance of the event prediction loss and the semantic contrast loss.

5 Experiment

5.1 Experiment Settings

Our experiments are conducted on the constructed CGEP-MAVEN and CGEP-ESC datasets. Following the standard data splitting of the underlying ESC corpus, we use the last two topics as the development set and conduct 5-fold cross-validation on the remaining 20 topics. The average results over the folds are adopted as performance metrics. Since the underlying MAVEN corpus did not release its test set, we follow prior practice and use the original development set as our test set, sampling 20% of the data from the original training set to form the development set. We adopt MRR (Mean Reciprocal Rank) and Hit@n (Hit Rate at n) as the evaluation metrics. Details about experimental settings and evaluation metrics can be found in Appendix B.

Table 2: Overall results of comparison models on the CGEP-MAVEN and CGEP-ESC datasets.
              |              CGEP-MAVEN               |               CGEP-ESC
Model         | MRR  Hit@1 Hit@3 Hit@10 Hit@20 Hit@50 | MRR  Hit@1 Hit@3 Hit@10 Hit@20 Hit@50
CSProm-KG     | 22.3 18.1  23.2  31.0   38.4   50.7   | 14.2 11.9  11.3  21.0   25.6   34.6
SimKG         | 9.3  4.5   9.2   18.0   25.3   35.0   | 14.9 10.3  13.5  18.4   22.3   34.0
BART_base     | 24.7 19.5  24.5  34.8   42.6   53.6   | 16.0 12.5  16.8  21.1   28.6   38.9
MCPredictor   | 18.1 13.0  18.4  27.3   32.0   43.2   | 9.7  8.4   10.9  17.4   22.2   37.5
GPT-3.5-turbo | 14.6 8.1   17.1  28.1   33.3   39.5   | 10.1 4.9   11.4  20.5   25.2   31.5
SeDGPL        | 27.9 21.9  28.9  40.8   48.1   57.9   | 19.6 15.2  18.1  22.3   29.9   41.9

Table 3: Overall results of Script Event Prediction on the CGEP-MAVEN dataset; values in parentheses indicate the drop relative to causality graph event prediction (Table 2).
Model       | MRR          | Hit@1        | Hit@3        | Hit@10       | Hit@50
CSProm-KG   | 7.1 (↓15.2)  | 4.8 (↓13.3)  | 6.4 (↓16.8)  | 10.6 (↓20.4) | 22.4 (↓28.3)
SimKG       | 5.0 (↓4.3)   | 2.2 (↓2.3)   | 4.3 (↓4.9)   | 8.5 (↓9.5)   | 25.7 (↓9.3)
BART_base   | 11.8 (↓12.9) | 8.2 (↓11.3)  | 11.2 (↓13.3) | 16.6 (↓18.2) | 34.2 (↓19.4)
MCPredictor | 7.3 (↓10.8)  | 3.6 (↓9.4)   | 7.3 (↓14.5)  | 14.8 (↓19.7) | 29.4 (↓13.8)
SeDGPL      | 16.0 (↓11.9) | 12.4 (↓9.5)  | 15.4 (↓13.5) | 23.0 (↓17.8) | 39.4 (↓18.5)
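For reference, the MRR and Hit@n values reported in Tables 2 and 3 can be computed from the 1-based rank of the gold consequential event in each instance's ranked candidate list; a small sketch follows (the cutoffs and toy input are illustrative).

```python
# A small sketch of the evaluation metrics used above; 'ranks' holds the 1-based
# rank of the gold event among the candidates, one entry per instance.
def mrr_and_hits(ranks, ns=(1, 3, 10, 20, 50)):
    mrr = 100.0 * sum(1.0 / r for r in ranks) / len(ranks)
    hits = {n: 100.0 * sum(r <= n for r in ranks) / len(ranks) for n in ns}
    return mrr, hits

print(mrr_and_hits([1, 3, 12]))   # toy example with three instances
```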
5.2 Competitors

We replicate several advanced event prediction models to conduct causality graph event prediction as benchmarks, including methods for knowledge graph completion (CSProm-KG, SimKG) and script event prediction (BART_base, MCPredictor). Furthermore, we validate the effectiveness of a large language model on the CGEP task. For more details about their specific implementations, please refer to Appendix A and Appendix C.

5.3 Overall Results

Table 2 compares the overall performance of our SeDGPL and the competitors on both the CGEP-MAVEN and CGEP-ESC datasets. We can observe that our SeDGPL achieves significant performance improvements over all competitors in terms of much higher MRR and Hit@n. We attribute its outstanding performance to two main factors: 1) the transformation of the event causality graph into an ordered triple sequence for graph prompt learning, which enables our SeDGPL to effectively leverage both the structure information of the event causality graph and the encyclopedic knowledge in a PLM for event prediction; 2) the enrichment of event representation through contextual semantic and schema information fusion encoding. Besides, we can also observe that BART_base outperforms the other competitors in Table 2. This might be attributed to fine-tuning a pre-trained language model in advance with an event-centric pre-training objective, which injects event-level knowledge into the PLM before making predictions. We also observe that the GPT-3.5-turbo model surpasses some models trained on the entire dataset, e.g., the SimKG model, indicating that large language models have great potential in understanding event relationships and reasoning about event patterns.

Table 4: Experiment results of the ablation study on both the CGEP-MAVEN and CGEP-ESC corpora.
                 |              CGEP-MAVEN               |               CGEP-ESC
Model            | MRR  Hit@1 Hit@3 Hit@10 Hit@20 Hit@50 | MRR  Hit@1 Hit@3 Hit@10 Hit@20 Hit@50
SeDGPL w/o Dist. | 26.4 20.4  26.2  39.2   47.0   57.2   | 13.9 7.8   15.6  18.8   23.9   37.8
SeDGPL w/o Ctxt. | 5.3  4.0   4.2   9.6    13.9   23.6   | 12.2 8.8   11.0  17.9   21.7   33.8
SeDGPL w/o Schm. | 22.0 17.0  21.9  31.5   40.9   54.3   | 15.6 11.5  12.4  20.4   24.3   37.4
SeDGPL w/o Ctrst.| 21.2 15.8  21.0  32.0   41.4   53.8   | 13.2 8.5   14.5  20.0   25.2   38.0
Full SeDGPL      | 27.9 21.9  28.9  40.8   48.1   57.9   | 19.6 15.2  18.1  22.3   29.9   41.9

Figure 4: Results on CGEP-MAVEN with different loss ratio β.

To validate our argument that predicting consequential events based on the event causality graph is more effective than predicting based on the event script chain, we also employ our SeDGPL and the competitors to conduct script event prediction for comparison, using the longest event chain in each event causality graph from the CGEP-MAVEN dataset.4 Table 3 presents the performance of script event prediction for our SeDGPL and the competitors, as well as the performance variation compared with causality graph event prediction. We can observe that the performance of event prediction suffers significantly from the transformation of the causality graph input into an event chain input. This is not unexpected. The event causality graph has a more complex structure than the script event chain, as it includes additional event nodes and causal connections that provide comprehensive prior knowledge for event prediction. Besides, our SeDGPL also outperforms all competitors in script event prediction, again supporting our design objective.

4 Considering the number of event-chain instances in the CGEP-ESC dataset, we only conduct script event prediction on the CGEP-MAVEN dataset.
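One way to extract the longest event chain used in this script-event-prediction comparison is sketched below. It assumes the causality graph is acyclic (networkx's dag_longest_path raises an error otherwise), which is an assumption on our part rather than a property stated by the authors.

```python
# A sketch of extracting the longest event chain from a causality graph,
# assuming the graph is a DAG; the toy triples are illustrative.
import networkx as nx

def longest_event_chain(triples):
    g = nx.DiGraph([(c, e) for c, _, e in triples])   # directed causality edges
    return nx.dag_longest_path(g)                     # node sequence of the longest chain

print(longest_event_chain([("drove", "causes", "heist"), ("heist", "causes", "escaping")]))
# -> ['drove', 'heist', 'escaping']
```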
5.4 Ablation Study

Module Ablation: To examine the effectiveness of the different modules, we design the following ablation study: (1) SeDGPL w/o Dist. randomly orders the event causality triples without considering distance sensitivity; (2) SeDGPL w/o Ctxt. enriches the event representation with only schema information, without its contextual semantics; (3) SeDGPL w/o Schm. enriches the event representation with only contextual semantics, without its schema information; (4) SeDGPL w/o Ctrst. predicts consequential events without semantic contrastive learning. Table 4 presents the results of our module ablation study.

The first observation is that neither SeDGPL w/o Ctxt. nor SeDGPL w/o Schm. can outperform the Full SeDGPL model. This indicates that fusing both event contextual semantics and graph schema information is an effective approach to enrich event representation learning for consequential event prediction. On the other hand, SeDGPL w/o Ctxt. performs the worst among all ablation models, as it merely uses event mention words for representation learning, ignoring the event's contextual semantics and existing linguistic ambiguities. The second observation is that SeDGPL w/o Dist. cannot outperform the Full SeDGPL model, even though the performance gap is not large. This suggests that it is essential to order event causality triples based on distance sensitivity, as different triples in an event causality graph may have different importance for predicting consequential events. We can also observe that SeDGPL w/o Ctrst. cannot outperform the Full SeDGPL model, validating the effectiveness of contrastively learning the [MASK] token representation z_m among numerous candidate events.

Hyper-parameter Ablation: To further examine the impact of the semantic contrastive learning module, we compare the performance of our SeDGPL using different contrastive loss weight coefficients β on the CGEP-MAVEN dataset, as plotted in Figure 4. It can be observed that our SeDGPL achieves the best overall performance when the contrastive loss weight coefficient is set to 0.5, and it suffers from either a large or a small value of the loss weight coefficient. Indeed, a small weight coefficient weakens the impact of semantic contrastive learning; by contrast, a large weight coefficient ignores the event prediction loss in back-propagation.

6 Case study

Figure 5: A case of SeDGPL on the causality graph event prediction and causality script event prediction tasks.

Figure 5 illustrates an example of SeDGPL applied to the causality graph event prediction (CGEP) task and the causality script event prediction (CSEP) task. For the CGEP task, SeDGPL linearizes the entire event graph into an event chain, comprehensively considering all the causality triples in the event graph. In contrast, for the CSEP task, SeDGPL extracts only a subset of the causality triples from the event graph to form the main event chain, disregarding the other nodes in the event graph, which undermines the structural information of the event graph. From Figure 5, we observe that incorporating information beyond the main event chain can effectively aid the model in predicting subsequent events more accurately.
For instance, in the CGEP task, given the causality triples (Drove, causes, Heist) and (Heist, causes, Escaping) as prior knowledge, our model can readily infer that the subsequent event following "Escaping" is "arrest". In contrast, for the CSEP task, the model relies only on the main event chain to judge the relationship between events. Therefore, the model cannot effectively capture the causal relationships between events at different levels or the complex structure information in the event causality graph, leading to a decline in performance.

7 Concluding Remarks

In this paper, we argue that predicting consequential events based on the event causality graph is more meaningful and effective than predicting based on the event script chain. To validate our argument, we propose the SeDGPL model, a distance-sensitive graph prompt model that integrates both event contextual semantics and graph schema information, and conduct experiments on both the CGEP and SEP tasks. Experiment results validate our argument, and our proposed SeDGPL model outperforms the advanced competitors on the CGEP task. In future work, we shall attempt to integrate other types of event relationships, e.g., temporal relations, to assist in event prediction.

8 Limitations

Due to the input length limitations of PLMs, we may have to discard some triples during the linearization process, which could result in the loss of information beneficial for prediction.
[ "Causal Event Graph-Guided Language-based Spatiotemporal Question Answering", "DyGCL: Dynamic Graph Contrastive Learning For Event Prediction", "Neuro-Logic Learning for Relation Reasoning over Event Knowledge Graph", "Dipping PLMs Sauce: Bridging Structure and Text for Effective Knowledge Graph Completion via Conditional Soft Prompting", "TEPrompt: Task Enlightenment Prompt Learning for Implicit Discourse Relation Recognition", "Is ChatGPT a Good Causal Reasoner? A Comprehensive Evaluation", "LLaMA: Open and Efficient Foundation Language Models", "Integrating Relational Knowledge With Text Sequences for Script Event Prediction", "A Generative Approach for Script Event Prediction via Contrastive Fine-tuning", "MAVEN-ERE: A Unified Large-scale Dataset for Event Coreference, Temporal, Causal, and Subevent Relation Extraction", "What happens next? Combining enhanced multilevel script learning and dual fusion strategies for script event prediction", "A Graph Enhanced BERT Model for Event Prediction", "RETE: Retrieval-Enhanced Temporal Event Forecasting on Unified Query Product Evolutionary Graph", "Script event prediction based on pre-trained model with tail event enhancement", "Integrating Deep Event-Level and Script-Level Information for Script Event Prediction", "Inference of Shape Expression Schemas Typed RDF Graphs", "Multi-level Connection Enhanced Representation Learning for Script Event Prediction", "The Future is not One-dimensional: Complex Event Schema Induction by Graph Modeling for Event Prediction", "Heterogeneous Graph Neural Networks to Predict What Happen Next", "RoBERTa: A Robustly Optimized BERT Pretraining Approach", "Multi-Relational Script Learning for Discourse Relations", "Constructing Narrative Event Evolutionary Graph for Script Event Prediction", "Decoupled Weight Decay Regularization", "A Survey on Dialogue Systems: Recent Advances and New Frontiers", "Integrating Order Information and Event Relation for Script Event Prediction", "Story Comprehension for Predicting What Happens Next", "The Event StoryLine Corpus: A New Benchmark for Causal and Temporal Relation Extraction", "Learning Statistical Scripts with LSTM Recurrent Neural Networks", "SEAG: Structure-Aware Event Causality Generation", "ConnPrompt: Connective-cloze Prompt Learning for Implicit Discourse Relation Recognition", "RESIN-11: Schema-guided Event Prediction for 11 Newsworthy Scenarios", "Event Relation Reasoning Based on Event Knowledge Graph", "ExCAR: Event Graph Knowledge Enhanced Explainable Causal Reasoning", "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding" ]
Robust Text Classification: Analyzing Prototype-Based Networks
Robust Text Classification: Analyzing Prototype-Based Networks

Abstract

Downstream applications often require text classification models to be accurate and robust. While the accuracy of state-of-the-art Language Models (LMs) approximates human performance, they often exhibit a drop in performance on noisy data found in the real world. This lack of robustness can be concerning, as even small perturbations in the text, irrelevant to the target task, can cause classifiers to incorrectly change their predictions. A potential solution is the family of Prototype-Based Networks (PBNs), which classify examples based on their similarity to prototypical examples of a class (prototypes) and have been shown to be robust to noise for computer vision tasks. In this paper, we study whether the robustness properties of PBNs transfer to text classification tasks under both targeted and static adversarial attack settings. Our results show that PBNs, as a mere architectural variation of vanilla LMs, offer more robustness compared to vanilla LMs under both targeted and static settings. We showcase how PBNs' interpretability can help us understand their robustness properties. Finally, our ablation studies reveal the sensitivity of PBNs' robustness to how strictly clustering is done in the training phase, as tighter clustering results in less robust PBNs.

1 Introduction

Language models (LMs) are widely used in various NLP tasks and exhibit exceptional performance. In light of the need for real-world applications of these models, the requirements for robustness and interpretability have become urgent for both Large Language Models (LLMs) and fine-tuned LMs. More fundamentally, robustness and interpretability are essential components of developing trustworthy technology that can be adopted by experts in any domain. However, LMs have limited interpretability by design, which cannot be fully mitigated by posthoc explainability techniques. Moreover, LMs lack robustness when exposed to text perturbations, noisy data, or distribution shifts. Reportedly, even LLMs lack robustness when faced with out-of-distribution data and noisy inputs, a finding that is supported by the empirical findings of this paper, too.

On this ground, NLP research has increasingly focused on benchmarks, methods, and studies that emphasize robustness and interpretability. This has also been accompanied by a surge of focus on models that are inherently and architecturally interpretable and robust. An example of such models is the family of Prototype-Based Networks (PBNs), which is designed for robustness and interpretability. PBNs are based on the theory of categorization in cognitive science, where categorization is governed by the graded degree of possessing prototypical features of different categories, with some members being more central (prototypical) than others. Consider, for example, classifying different types of birds: pelican classification can be done through their prototypical tall necks and similarity to a prototypical pelican. Computationally, this idea is implemented by finding prototypical points or examples in the shared embedding space of data points and using the distance between prototypes and data points to accomplish the classification task. Aligned with how humans approach classification, classifications in PBNs are expected to have human-like robustness because they classify through distances to prototypical examples found in the data.
Leveraging distance between points helps to quantify prototypicality, which then facilitates identifying noisy or out-of-distribution samples. PBNs have been popular in Computer Vision (CV) tasks, including image classification and novel class detection. Inspired by PBNs in CV, NLP researchers have also developed PBN models for text classification, in particular for sentiment classification (Pluciński et al., 2021; Ming et al., 2019; Hong et al., 2021), few-shot relation extraction, and propaganda detection. Yet, while the competitive performance and interpretability of PBNs have been studied in both NLP and CV, their robustness advantages over vanilla models have only been investigated in CV (Yang et al., 2018; Saralajew et al., 2020; Vorácek and Hein, 2022).

Figure 1: Classification by a PBN. The model computes distances between the new point and prototypes, d(e_j, P_k), and distances within prototypes, d(P_k, P_l), for both inference and training. During training, the model minimizes the loss term L, consisting of L_ce, λ_c L_c, λ_i L_i, λ_s L_s, controlling the importance of accuracy, clustering, interpretability, and separation of prototypes, based on all the computed distances; during inference, distances between the new point and prototypes are used for classification by a fully connected layer.

In this study, we investigate whether the robustness properties of PBNs transfer to NLP classification tasks. In particular, our contributions are: (1) We adopt a modular and comprehensive approach to evaluate PBNs' robustness properties against various well-known adversarial attacks under both targeted and static adversarial settings; (2) We conduct a comprehensive analysis of the sensitivity of PBNs' robustness w.r.t. different hyperparameters.

Our experiments show that PBNs' robustness transfers to realistic perturbations in text classification tasks under both targeted and static adversarial settings and can, thus, enhance the text classification robustness of LMs. We note that the robustness boost that adversarial augmented training brings to LMs, with access to additional pieces of relevant data, is higher than the boost caused by PBNs' architecture. Nevertheless, considering that the robustness boost in PBNs is caused only by their architecture without any additional resources (data or parameters), and this architecture is interpretable by design, the merits of such models can contribute to the field. Finally, benefiting from inherent interpretability, we showcase how PBN interpretability properties help to explain PBNs' robust behavior.

2 Prototype-Based Networks

PBNs classify data points based on their similarity to a set of prototypes learned during training. These prototypes summarize prominent semantic patterns of the dataset through two mechanisms: (1) prototypes are defined in the same embedding space as input examples, which makes them interpretable by leveraging input examples in their proximity; and (2) prototypes are designed to cluster semantically similar training examples, which makes them representative of the prominent patterns embedded in the data and input examples. The PBN's decisions, based on quantifiable similarity to prototypes, are robust, as noise and perturbations are better reflected in the computed similarity to familiar prototypical patterns. Additionally, prototypes can provide insight during inference by helping users explain the model's behavior on input examples through the prototypes utilized for the model's prediction.
Inference. Classification in PBNs is done via a fully connected layer applied to the measured distances between embedded data points and prototypes. As shown in Figure 1, given a set of data points x_j, j ∈ {1, ..., N}, with labels y_j ∈ {1, ..., C}, and Q prototypes, PBNs first encode examples with a backbone E, resulting in the embedding e_j = E(x_j). Next, PBNs compute the distances between prototypes and e_j using the function d. These distances are fed into a fully connected layer to compute class-wise logits, incorporating the similarities to each prototype. Applying a softmax on top, the final outputs are ŷ_c(x_j): the probability that x_j belongs to class c ∈ {1, ..., C}.

Training. The model is trained using objectives that simultaneously tweak the backbone parameters and the (randomly initialized) prototypes, thus promoting high performance and meaningful prototypes. To compute a total loss term L, PBNs use the computed distances within prototypes d(P_k, P_l), k ≠ l, the distances between all Q prototypes and N training examples, d(e_j, P_k), j ∈ {1, ..., N}, k ∈ {1, ..., Q}, and the computed probabilities ŷ_c. The prototypes and the weights in the backbone are adjusted according to L. The total loss L consists of different inner loss terms that ensure high accuracy, clustering, interpretability, and low redundancy among prototypes; i.e., the classification loss L_ce, the clustering loss L_c, the interpretability loss L_i, and the separation loss L_s:

L = L_ce + λ_c L_c + λ_i L_i − λ_s L_s,    (1)

where λ_c, λ_i, λ_s ≥ 0 are regularization factors that adjust the contribution of the auxiliary loss terms.

Classification loss L_ce is defined as the cross-entropy loss between predicted and true labels:

L_ce = -\sum_{j=1}^{N} \log(\hat{y}_{y_j}(x_j)).    (2)

Clustering loss L_c ensures that the training examples close to each prototype form a cluster of similar examples. In practice, L_c keeps all the training examples as close as possible to at least one prototype and minimizes the distance between training examples and their closest prototypes:

L_c = \frac{1}{N} \sum_{j=1}^{N} \min_{k \in \{1,\ldots,Q\}} d(P_k, e_j).    (3)

Interpretability loss L_i ensures that the prototypes are interpretable by minimizing the distance to their closest training sample:

L_i = \frac{1}{Q} \sum_{k=1}^{Q} \min_{j \in \{1,\ldots,N\}} d(P_k, e_j).    (4)

Keeping the prototypes close to training samples allows PBNs to represent a prototype by its closest training samples, which are domain-independent and enable analysis by task experts.

Table 1: Examples of adversarial perturbations, with the perturbed tokens highlighted.
Original text                       | Perturbed text
A gentle breeze rustled the leaves. | A gèntle wind rustled the lEaves.
rescue Engineer Company             | Res©ue operation Company
embarrassingly foolish              | embarrassing1y fo0lish

Separation loss L_s maximizes the inter-prototype distance to reduce the probability of redundant prototypes:

L_s = \frac{2}{Q(Q-1)} \sum_{k,l \in \{1,\ldots,Q\}; k \neq l} d(P_k, P_l).    (5)
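Before turning to the robustness evaluation, the distance-based head and the objective in Equations (1)–(5) can be sketched in PyTorch as follows. This is a minimal illustration, not the authors' implementation: the clustering and interpretability terms are computed over the current batch rather than the full training set, the distance is Euclidean via torch.cdist, and the dimensions and weights are placeholder assumptions.

```python
# A sketch of a prototype-based head and the combined loss in Eqs. (1)-(5).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeHead(nn.Module):
    def __init__(self, n_protos, d_emb, n_classes):
        super().__init__()
        self.protos = nn.Parameter(torch.randn(n_protos, d_emb))
        self.fc = nn.Linear(n_protos, n_classes)

    def forward(self, e):                        # e: (B, d_emb) backbone embeddings
        d = torch.cdist(e, self.protos)          # (B, Q) distances d(e_j, P_k)
        return self.fc(d), d                     # class logits and distances

def pbn_loss(logits, d, protos, labels, lam_c=0.9, lam_i=0.9, lam_s=0.9):
    l_ce = F.cross_entropy(logits, labels)                  # Eq. (2), batch mean
    l_c = d.min(dim=1).values.mean()                        # Eq. (3): example -> nearest prototype
    l_i = d.min(dim=0).values.mean()                        # Eq. (4): prototype -> nearest example
    pd = torch.cdist(protos, protos)
    q = protos.size(0)
    l_s = pd.sum() / (q * (q - 1))                          # Eq. (5): mean pairwise prototype distance
    return l_ce + lam_c * l_c + lam_i * l_i - lam_s * l_s   # Eq. (1)
```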
3 Robustness Evaluation

We assess PBNs' robustness against adversarial perturbations of the original input text that are intended to preserve the text's original meaning. The perturbations change the classification of the target model upon confronting these perturbed examples from the correct behavior to an incorrect one in an effective and efficient way (Dalvi et al., 2004; Kurakin et al., 2017a,b; Li et al., 2023).

Automatic approaches to finding these perturbations vary: perturbations can focus on different granularities, i.e., character-level, word-level, or sentence-level; their generation can be done in different ways, e.g., replacing, inserting, deleting, or swapping tokens; they can have different searching strategies for their manipulations, such as context-aware or isolated approaches; and also various salient token identification strategies to maximize their adversarial effect. Orthogonally, these adversarial perturbations are divided into targeted and static. In the targeted setting, the attacker has access to the target model and can attack it directly. In the static setting, however, the attacker does not have access to the target model. Hence, adversarial perturbations are gathered while attacking external models that the attacker has access to, and the gathered successful perturbations are used to assess the robustness of the target model. With numerous adversarial perturbation strategies in the literature, each with unique advantages (e.g., effectiveness vs. efficiency), we use a wide range of existing perturbation strategies in this study. These cover the aforementioned granularities, generation strategies, searching strategies, and salient token identification strategies, under both targeted and static settings. See examples of the adversarial perturbations covered in our study in Table 1.

4 Experimental Setup

4.1 Datasets

PBNs classify instances based on their similarity to prototypes learned during training that summarize prominent semantic patterns in a dataset. Thus, with more classes, we might need more prototypes to govern the more complex system between instances and prototypes. To study the interplay between the number of classes and robustness, we employ three datasets: (1) IMDB reviews: a binary sentiment classification dataset; (2) AG_NEWS (Gulli): a collection of news articles that can be associated with four categories; (3) DBPedia:1 a dataset with taxonomic, hierarchical categories for Wikipedia articles, with nine classes. We use these three datasets to study the robustness of PBNs under both targeted and static adversarial settings. As an additional source of static adversarial perturbations, we adopt the SST-2 binary classification split from the existing Adversarial GLUE (AdvGLUE) dataset, consisting of perturbed examples of different granularities, filtered both automatically and by human evaluation for more effectiveness. For statistics of the datasets and their perturbations, see Appendix A.

4.2 Perturbations

Attacking strategies. We selected five well-established adversarial attack methods: BAE, TextFooler, TextBugger, DeepWordBug, and PWWS.2 As mentioned in Section 3, these attacks cover a wide range of granularities (e.g., character-based in DeepWordBug and word-based in PWWS), generation strategies (e.g., word substitution in PWWS and TextFooler and deletion in TextBugger), searching strategies (e.g., context-aware in BAE and isolated synonym-based in TextFooler), and salient token identification strategies (e.g., finding the important sentences first and then words in TextBugger, and finding the important words to change in BAE).

1 https://bit.ly/3RgX41H
2 We also employed paraphrase-based perturbations generated by GPT-3.5. However, both our baselines and PBNs were robust to these perturbations, and we include them in the Appendix in Table 6.
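All five recipes listed above are implemented in the TextAttack framework; the sketch below shows one generic way to generate such perturbations against a fine-tuned victim classifier. It is an illustration only: the checkpoint name, dataset split, and example budget are assumptions rather than the paper's configuration, and argument names may differ slightly across TextAttack versions.

```python
# A sketch of generating TextFooler perturbations with TextAttack against a
# HuggingFace sequence classifier; swap in BAE, PWWS, DeepWordBug, or
# TextBugger recipes analogously.
import transformers
import textattack
from textattack.attack_recipes import TextFoolerJin2019
from textattack.models.wrappers import HuggingFaceModelWrapper

name = "textattack/bert-base-uncased-imdb"          # assumed victim checkpoint
model = transformers.AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = transformers.AutoTokenizer.from_pretrained(name)
victim = HuggingFaceModelWrapper(model, tokenizer)

attack = TextFoolerJin2019.build(victim)
dataset = textattack.datasets.HuggingFaceDataset("imdb", split="test")
args = textattack.AttackArgs(num_examples=1000)     # illustrative attack budget
textattack.Attacker(attack, dataset, args).attack_dataset()
```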
Targeted perturbations. In this setting, the adversarial attacks are directly conducted against PBNs and vanilla LMs trained on the original datasets. For each attack strategy, we aim for 800 successful perturbations and report the robustness of PBNs against adversarial attacks via the Attack Success Rate (ASR; Wu et al., 2021) and the Average Percentage of Words Perturbed (APWP; Yoo et al., 2020) needed to reach the observed ASR. Successful perturbations are those that change the prediction of a target model already fine-tuned on that dataset from a correct prediction to a wrong one.

Static perturbations. In this setting, the adversarial attacks are conducted on external models: BERT, RoBERTa, and DistilBERT, which are trained on the original datasets, and a compilation of the successful perturbations on those models is used to assess the robustness of PBNs against the studied adversarial attacks via their accuracy on the perturbations, similar to the study by Wang et al. (2022a). To obtain the perturbations, each model is fine-tuned on each dataset, and 800 successful perturbations for each attack strategy are obtained. We focus on examples whose perturbations are predicted incorrectly by all three models to maximize the generalizability of this static set of perturbations to a wider range of unseen target models. In principle, the perturbations for each model are different, yielding three variations per original example for a dataset-perturbation pair. For instance, focusing on DBPedia and the BAE attack strategy, after 800 successful perturbations for each of the three target models, the perturbations of 347 original examples could change all models' predictions, resulting in a total of 1401 (3 × 347) perturbations compiled for the BAE attack strategy and the DBPedia dataset.

4.3 PBNs' Hyperparameters

Backbone (E). Prototype alignment and training are highly dependent on the quality of the latent space created by the backbone encoder E, which in turn affects the performance, robustness, and interpretability of PBNs. We consolidate previous methods for text classification using PBNs (Pluciński et al., 2021; Das et al., 2022; Ming et al., 2019; Hong et al., 2020) and consider three backbone architectures: BERT, the BART encoder, and Electra. Based on our empirical evidence, fine-tuning all the layers of the backbone was causing the PBNs' training not to converge. Hence, we freeze all the layers of the backbones except for the last layer when training.

Distance function (d). The pairwise distance calculation quantifies how closely the prototypes are aligned with the training examples (Figure 1). In recent work, Euclidean distance has been shown to be better than Cosine distance for similarity calculation, as it helps to align prototypes closer to the training examples in the encoder's latent space. However, with some works utilizing Cosine distance while others prioritize Euclidean distance, and the two having incomparable experimental setups, conclusive arguments about the superiority of one over the other cannot be justified, and the choice of distance function is usually treated as a hyperparameter. Accordingly, we hypothesize that the impact of d will be significant in our study of robustness, and hence we consider both Cosine and Euclidean distance functions when training PBNs.

Number of prototypes (Q). The number of prototypes in PBNs is a key factor for mapping difficult data distributions. Hence, to cover a wide range, we consider five values for Q = {2, 4, 8, 16, 64}.
Objective functions (L). Given the partly complementary goals of the loss terms, we investigate the effect of the interpretability, clustering, and separation losses on PBNs' robustness, keeping the accuracy constraint (L_ce) intact. To do so, we consider three values, {0, 0.9, 10}, for λ_i, λ_c, and λ_s. A value of 0 represents the condition where the corresponding loss term is not used in the training process. The value 0.9 was empirically found to offer good accuracy, clustering, and interpretability across datasets and was also motivated by prior work. The value 10 was chosen as an upper bound that lets the corresponding loss objective (e.g., interpretability) dominate the training process.

4.4 Baselines

Since PBNs are architectural enhancements of vanilla LMs, using learned prototypes for classification instead of the traditional softmax layer used in vanilla LMs, vanilla LMs employed as PBNs' backbones serve as a baseline for comparing the robustness of PBNs. We also employ adversarial augmented training on top of the vanilla LMs as another baseline. Note that the same layers frozen for PBNs' training are also frozen for the baselines. As we need additional data for such extra training, we use this baseline under static perturbations, where the set of perturbations has already been compiled beforehand. Finally, although we note that LLMs are more appropriate choices for generic chat and text generation due to their decoder-only architecture, and fine-tuned LMs are still superior to LLMs when it comes to task-oriented performance, we compare PBNs with two LLMs, namely GPT-4o and Llama3 (AI@Meta, 2024).

5 Results

5.1 Robustness of PBNs

The robustness report of PBNs under both targeted adversarial attacks and static attacks under different experimental setups (i.e., datasets, backbones, and attack strategies), using the best hyperparameters, is presented in Table 2.3,4 Best hyperparameters were chosen among the permutation of all hyperparameters presented in Section 4.3 to yield the highest robustness (lowest ASR or highest accuracy).

3 The semantic similarity between original and perturbed texts using OpenAI text-embedding-ada-002 across all datasets and attack types was 0.97 (SD = 0.01).
4 Our results showed that adversarial perturbations from TextFooler and PWWS were more effective than others.

Table 2: Comparison of PBNs and vanilla LMs (plus vanilla LMs with adversarial augmented training under the static attack setting) under both targeted and static adversarial attack perturbations, using the best hyperparameters for PBNs, on the IMDB, AG_News, and DBPedia datasets (plus SST-2 from AdvGLUE under the static attack setting), under BAE, DeepWordBug (DWB), PWWS, TextBugger (TB), and TextFooler (TF). In the original, the highest accuracy and lowest ASR showing the superior model for each architecture is boldfaced, and the second-best model is underlined for static attacks.

Targeted Attacks; Attack Success Rate (ASR %) reported
Model  | AG_News: BAE DWB PWWS TB TF  | DBPedia: BAE DWB PWWS TB TF  | IMDB: BAE DWB PWWS TB TF
BART   | 14.8 53.2 53.6 31.8 76.5     | 18.9 28.3 43.1 21.1 71.9     | 74.1 74.7 99.3 78.5 100.0
+ PBN  | 11.1 32.3 41.3 23.1 62.2     | 15.2 14.7 28.7 12.6 45.5     | 36.1 41.0 75.9 41.3 73.1
BERT   | 17.0 78.0 69.8 45.7 88.8     | 13.9 24.8 31.6 22.0 61.3     | 82.5 79.7 99.9 83.9 99.9
+ PBN  | 7.7 42.6 47.0 30.4 70.5      | 9.8 17.3 21.6 13.0 41.0      | 42.8 41.0 79.7 57.7 79.8
ELEC.  | 24.8 89.5 69.1 87.8 87.9     | 14.5 42.8 45.6 42.3 75.3     | 52.5 49.2 95.3 67.8 99.3
+ PBN  | 14.0 34.9 42.9 51.8 70.2     | 7.8 11.5 17.8 19.1 35.6      | 28.9 27.4 66.6 36.8 78.0

Static Attacks; Accuracy (%) reported
Model  | AG_News: BAE DWB PWWS TB TF  | DBPedia: BAE DWB PWWS TB TF  | IMDB: BAE DWB PWWS TB TF  | SST-2 (AdvGLUE)
BART   | 53.2 76.7 83.2 77.5 85.8     | 55.5 68.6 58.4 72.5 71.3     | 74.1 80.5 83.6 85.8 87.6  | 29.8
+ PBN  | 57.6 80.6 84.8 79.2 88.8     | 65.0 71.6 65.7 78.4 74.8     | 80.4 81.3 86.3 89.3 90.4  | 50.4
+ Aug. | 71.7 78.4 85.5 77.6 90.1     | 84.0 79.6 89.7 88.8 94.0     | 85.7 86.7 92.9 89.9 96.5  | -
BERT   | 47.8 64.0 75.9 69.4 80.7     | 62.3 61.4 75.4 78.4 82.0     | 75.1 77.1 85.0 83.4 85.9  | 42.0
+ PBN  | 52.9 70.4 78.5 73.8 84.3     | 66.9 66.6 80.3 82.0 85.8     | 77.6 79.1 85.3 85.0 86.5  | 51.1
+ Aug. | 58.3 71.6 78.3 71.2 85.4     | 75.5 70.9 84.1 90.5 91.0     | 83.2 77.6 91.7 90.8 89.2  | -
ELEC.  | 50.4 65.0 73.5 63.9 77.8     | 79.7 66.9 80.9 81.4 84.4     | 89.7 90.3 94.6 94.5 95.6  | 44.3
+ PBN  | 64.6 74.1 85.1 77.2 89.0     | 78.7 69.8 79.3 82.5 85.8     | 90.0 90.8 94.6 95.5 96.3  | 65.6
+ Aug. | 55.0 59.5 71.7 61.6 79.5     | 86.2 73.8 88.1 84.5 92.8     | 89.4 93.7 95.3 94.9 95.8  | -
GPT-4o | 57.1 73.3 73.0 76.5 79.9     | 66.0 63.4 61.0 69.0 44.0     | 87.0 89.5 91.2 93.7 94.2  | 59.8
Llama3 | 57.6 56.4 55.0 65.9 62.8     | 44.0 53.7 37.8 45.0 44.4     | 82.0 86.0 93.2 89.0 91.5  | 56.0

Under the targeted adversarial attack setting, our results showed that PBNs are more robust than vanilla LMs (having lower ASR) regardless of the utilized backbone, dataset, or attacking strategy. We saw similar trends when analyzing the robustness of PBNs compared to vanilla LMs, averaging over all PBN hyperparameters (find the details in Table 8). Focusing on the APWP metric, we observed that in 71.0% of the conditions, the PBNs' robustness was greater than that of vanilla LMs (having higher APWP), and this superiority dropped to 31.0% of the conditions when averaging over all the hyperparameters (find the details in Table 7), which suggested that PBNs' robustness is sensitive to the hyperparameters involved in training.

We observed similar trends under static adversarial attacks, where the PBNs' robustness was higher than that of vanilla LMs (having higher accuracy under attack) in the majority of the conditions (93.7% of all variations of experimental setups and hyperparameters). We observed that in every experimental condition (dataset and attack strategy), a PBN exists with robustness outperforming LLMs like GPT-4o and Llama3 (AI@Meta, 2024) that have orders of magnitude more parameters and are not interpretable by design, as opposed to PBNs. Vanilla LMs with adversarial augmented training demonstrated greater robustness than PBNs in 71.2% of the conditions. This highlights the more effective role of additional data in adversarial augmented training compared to PBNs' robust architecture, and makes PBNs a preferable choice when efficiency is prioritized. Analyzing PBNs' robustness under the static adversarial setting while averaging over all PBNs' hyperparameters, our results showed that in only 31.2% of the conditions did PBNs have greater robustness compared to vanilla LMs (find the details in Table 8), which, similar to the observations on APWP, suggested that PBNs' robustness is sensitive to the hyperparameters involved in training.

To sum up, we observed that PBNs, consistently and over different metrics, were more robust compared to vanilla LMs and LLMs when using the best hyperparameters, without sacrificing performance on the original unperturbed samples (find performance on the original datasets in Table 6). We believe that the observed robust behavior is due to the design of the PBN architecture.
Standard neural networks for text classification distinguish classes by drawing hyperplanes between samples of different classes, which are prone to noise, especially when dealing with several classes. Instead, PBNs are inherently more robust since they perform classification based on the similarity of data points to prototypes, which act as class centroids. Finally, we observed that the robustness superiority of PBNs compared to vanilla LMs dropped when averaging over all the possible hyperparameters, which is what we investigate further in Section 5.2.

5.2 Sensitivity to Hyperparameters

We studied the sensitivity of PBNs' robustness to the hyperparameters involved in training, covering the values discussed in Section 4.3. Focusing on each hyperparameter, the values for the other ones were selected to yield the best performance so that, overall, we could better depict the sensitivity and limiting effect of the hyperparameter of interest. We did not observe any sensitivity from PBNs with respect to the backbone, interpretability term (λ_i; see Section C.5), separation term (λ_s; see Section C.7), or the distance function (d; see Section C.4).

Figure 2: Attack Success Rate (ASR %) of PBNs with different λ_c values adjusting the importance of clustering in the trained PBNs, with other hyperparameters set to their best values, and averaged across other possible variables (e.g., backbone and attack type). The dotted line represents the ASR for the non-PBN model.

However, as presented in Figure 2, we observed that higher values of λ_c, promoting tighter clustering of input examples around prototypes, hinder PBNs' robustness. The clustering loss is a regularization term that encourages samples to be close to prototypes in the embedding space, further enhancing interpretability but potentially reducing accuracy by narrowing the diversity in the embedding space, which is a common phenomenon in loss terms with competing goals. The mean and standard deviation over (transformed) distances between prototypes and samples can be used to describe the spread of embedded data points around prototypes. These values are (−0.24 ± 1.7) × 10^{-7} with λ_c = 0.9, and (−0.18 ± 1.5) × 10^{-6} with λ_c = 10, showing less diverse prototypes, indicated by smaller measured distances caused by stronger clustering.

Figure 3: Attack Success Rate (ASR %) of PBNs with different numbers of prototypes, with other hyperparameters set to their best values, and averaged across other possible variables (e.g., backbone and attack type). The dotted line represents the ASR for the non-PBN model.

Additionally, as depicted in Figure 3, we observed poor robustness from PBNs when the number of prototypes is as low as two, which is intuitive, as a low number of prototypes also means a lower number of learned semantic patterns, which constrains PBNs' ability to distinguish between different classes. Noting that more prototypes add to the complexity and size of the network as a whole, the observed stable trend of robustness with a higher number of prototypes (>2) suggests that as long as the number of prototypes is not too low, PBNs with a lower number of prototypes can be preferred. This corroborates the findings of Yang et al. (2018). Finally, note that the same analysis using other metrics (e.g., APWP) and under the static adversarial setting (using accuracy as the studied metric) depicted the same trend and can be found in Section C.6 and Section C.8.

5.3 PBNs' Interpretability w.r.t. Robustness

PBNs are interpretable by design, and we can understand their behavior through the distance of input examples to prototypes and the importance of these distances, extracted by the last fully connected layer of PBNs transforming the vector of distances into log probabilities for classes. Examples of learned prototypes that can be represented by their closest training input examples are shown in Table 3. These input examples help the user identify the semantic features that the prototypes are associated with, which, by our observations in our case, were mostly driven by the class label of the closest training examples.

Table 3: Examples of prototypes, their closest training examples, and the label derived from their closest training examples, extracted from a PBN with 16 prototypes and a BART backbone on DBPedia. Note that the presented training examples are summarized from their longer versions for easier interpretation.
Proto. | Representative Training Examples                                                                     | Label
P0     | Handly's Lessee v. Anthony (1820): Determined Indiana-Kentucky boundary.                            | UnitWork
       | Rasul v. Bush (2004): Decided jurisdiction over Guantanamo detainees.                               | UnitWork
P1     | Marine Corps Air Station Futenma: U.S. Marine Corps base, Ginowan, Okinawa; regional military hub.  | Place
       | Özdere: Turkish coastal resort town in İzmir Province, popular among tourists.                      | Place
P2     | Yevgeni Viktorovich Balyaikin: Russian footballer for FC Tom Tomsk.                                 | Agent
       | Gigi Morasco: Fictional character on ABC's One Life to Live.                                        | Agent

Table 4: Examples of original test (top row of each pair) and adversarially perturbed (bottom row) examples of DBPedia using TextFooler, classified by a PBN, alongside the top-2 activated prototypes of the PBN's fully connected layer and their associated labels. Incorrectly predicted examples are those where Pred. differs from Label.
Text                                                                                              | Activated Prototypes | Prototype Labels | Pred. | Label
Roman Catholic Diocese of Barra: Diocese in Barra, Feira de Santana province, Brazil.             | P1, P14              | Place, Place     | Place | Place
Roman Catholic Bishop of Barra: Episcopal seat in Barra, Feira de Santana province, Brazil.       | P1, P14              | Place, Place     | Place | Place
Inta Ezergailis: Latvian American professor emerita at Cornell University.                        | P2, P8               | Agent, Agent     | Agent | Agent
Inta Ezergailis: Latvian American poet and scholar at Cornell University.                         | P2, P7               | Agent, Work      | Agent | Agent
Saint Eigrad: 6th-century Precongregational North Wales saint and Patron Saint of Llaneigrad.     | P2, P8               | Agent, Agent     | Agent | Agent
St Eigrad: 6th-century Precongregational street of North Wales and Patron Saint of Llaneigrad.    | P1, P14              | Place, Place     | Place | Agent

We can also benefit from the interpretable properties of PBNs to better understand their robustness properties, regardless of the success of perturbations. Table 4 illustrates predictions of a PBN on three original and perturbed examples from the DBPedia dataset, alongside the top-2 prototypes that were utilized by the PBN's fully connected layer for prediction and the prototypes' associated labels (derived from their closest training examples). In the first two examples, the PBN correctly classifies both the original and perturbed examples, and from the top-2 prototypes, we observe that this is due to unchanged prototypes utilized in prediction. However, in the last example, the model incorrectly classifies an example that is associated with an Agent as a Place.
Interestingly, this incorrect behavior can be explained by the change in the top-2 activated prototypes, which shift from Agent-associated to Place-associated prototypes because of the misspelling of "saint" as "street." Thus, the use of prototypes not only enhances our understanding of the model's decision-making process but also unveils how minor perturbations influence the model's predictions.

6 Related Work

Robustness evaluation. Robustness in NLP is defined as models' ability to perform well under noisy and out-of-distribution data. With the wide adoption of NLP models in different domains and their near-human performance on various benchmarks, concerns have shifted towards models' performance when facing noisy data (Wang et al., 2022a,b). Studies have designed novel and effective adversarial attacks, defense mechanisms, and evaluations to better understand the robustness properties of NLP models. These evaluations are also being extended to LLMs, as they similarly lack robustness. While prior work has studied LMs' robustness, to our knowledge, PBNs' robustness properties have not been explored yet. Our study bridges this gap.

Prototype-based networks. PBNs are widely used in CV because of their interpretability and robustness properties. While limited work has been done in the NLP domain, PBNs have recently found application in text classification tasks such as propaganda detection, logical fallacy detection, sentiment analysis (Pluciński et al., 2021), and few-shot relation extraction. ProseNet, a prototype-based text classifier, uses several criteria for constructing prototypes and a special optimization procedure for better interpretability. ProtoryNet leverages RNN-extracted prototype trajectories and deploys a pruning procedure for prototypes, and ProtoTex uses negative prototypes for handling the absence of features for classification. While PBNs are expected to be robust to perturbations, this property has not been systematically studied in NLP. Our paper consolidates PBN components used in prior studies and studies their robustness in different adversarial settings.

7 Conclusions

Inspired by the lack of robustness to noisy data of state-of-the-art LMs and LLMs, we study the robustness of PBNs, as an architecturally robust variation of LMs, against both targeted and static adversarial attacks. We find that PBNs are more robust than vanilla LMs and even LLMs such as Llama3, under both targeted and static adversarial attack settings. Our results suggest that this robustness can be sensitive to the hyperparameters involved in PBNs' training. More particularly, we note that a low number of prototypes and tight clustering conditions limit the robustness capacities of PBNs. Additionally, benefiting from the inherently interpretable architecture of PBNs, we showcase how learned prototypes can be utilized for robustness and also for gaining insights about their behavior when facing adversarial perturbations, even when PBNs are wrong. In summary, our work provides encouraging results for the potential of PBNs to enhance the robustness of LMs across a variety of text classification tasks and quantifies the impact of architectural components on PBN robustness.
Limitations

Although we cover a wide range of adversarial perturbations and strategies for their generation, we acknowledge that more complicated perturbations can also be created that are more effective and help the community have a more complete understanding of the models' robustness. Hence, we do not comment on the generalizability of our study to all possible textual perturbations beyond our evaluation on AdvGLUE. Moreover, although it is customary in the field to utilize prototype-based networks for classification tasks, their application and robustness on other tasks remain to be explored. Furthermore, while we attempt to use a wide variety of backbones for our study, we do not ascertain similar patterns for all possible PBN backbones and leave this study for future work. Finally, we encourage more exploration of the interpretability of these models under different attacks to better understand the interpretability benefits of models when analyzing robustness.

Ethical Considerations

Although the datasets and domains we focus on do not pose any societal harm, the potential harm that is associated with using the publicly available tools we used in this study to manipulate models in other critical domains should be considered. Issues surrounding anonymization and offensive content hold importance in data-driven studies, particularly in fields like natural language processing. Since we utilize datasets like IMDB, AG_News, DBPedia, and AdvGLUE that are already content-moderated, there is no need for anonymization of data before testing for robustness in this study.
[ "RAPL: A Relation-Aware Prototype Learning Approach for Few-Shot Document-Level Relation Extraction", "On the interpretability of part-prototype based classifiers: a human centric analysis", "Explainability for Large Language Models: A Survey", "A Survey of Adversarial Defenses and Robustness in NLP", "On the Robustness of ChatGPT: An Adversarial and Out-of-distribution Perspective", "Large Language Models Can Be Easily Distracted by Irrelevant Context", "ReCode: Robustness Evaluation of Code Generation Models", "Robust and Explainable Identification of Logical Fallacies in Natural Language Arguments", "Similarity-based Deep Neural Network to Detect Imperceptible Adversarial Attacks", "Efficiently generating sentence-level textual adversarial examples with Seq2seq Stacked Auto-Encoder", "This Patient Looks Like That Patient: Prototypical Networks for Interpretable Diagnosis Prediction from Clinical Text", "Provably Adversarially Robust Nearest Prototype Classifiers", "ProtoTEx: Explaining Model Decisions with Prototype Tensors", "PaLM: Scaling Language Modeling with Pathways", "ST-MoE: Designing Stable and Transferable Sparse Expert Models", "Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models", "Recent Advances in Natural Language Processing via Large Pre-trained Language Models: A Survey", "Exploring Task Difficulty for Few-Shot Relation Extraction", "Evaluating the Robustness of Neural Language Models to Input Perturbations", "This Looks Like That... Does it? Shortcomings of Latent Space Prototype Interpretability in Deep Networks", "Performance Evaluation of Adversarial Attacks: Discrepancies and Solutions", "ExplainaBoard: An Explainable Leaderboard for NLP", "Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges", "XProtoNet: Diagnosis in Chest Radiography with Global and Local Explanations", "Better Robustness by More Coverage: Adversarial and Mixup Data Augmentation for Robust Finetuning", "Neural Prototype Trees for Interpretable Fine-grained Image Recognition", "Multimodal Prototypical Networks for Few-shot Learning", "This Looks Like That, Because ... 
Explaining Prototypes for Interpretable Image Recognition", "Identifying spurious correlations for robust text classification", "Searching for a Search Method: Benchmarking Search Algorithms for Generating NLP Adversarial Examples", "Concept Bottleneck Models", "Interpretable Sequence Classification Via Prototype Trajectory", "Learning Sparse Prototypes for Text Generation", "How Can We Accelerate Progress Towards Human-like Linguistic Generalization?", "RICA: Evaluating Robust Inference Capabilities Based on Commonsense Axioms", "Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior?", "TextAttack: A Framework for Adversarial Attacks, Data Augmentation, and Adversarial Training in NLP", "Adversarial Training for Large Neural Language Models", "Pretrained Transformers Improve Out-of-Distribution Robustness", "BAE: BERT-based Adversarial Examples for Text Classification", "Reevaluating Adversarial Examples in Natural Language", "Towards Explainable Deep Neural Networks (xDNN)", "A hierarchical prototype-based approach for classification", "SuperGlue: Learning Feature Matching With Graph Neural Networks", "BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension", "DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter", "Well-Read Students Learn Better: On the Importance of Pre-training Compact Models", "Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment", "RoBERTa: A Robustly Optimized BERT Pretraining Approach", "Interpretable and Steerable Sequence Learning via Prototypes", "Generating Natural Language Adversarial Examples through Probability Weighted Word Saliency", "Interpretable Image Recognition with Hierarchical Prototypes", "How Case-Based Reasoning Explains Neural Networks: A Theoretical Analysis of XAI Using Post-Hoc Explanation-by-Example from a Survey of ANN-CBR Twin-Systems", "Hyperspherical Prototype Networks", "Adversarial Attacks on Deep Learning Models in Natural Language Processing: A Survey", "TextBugger: Generating Adversarial Text Against Real-world Applications", "Discrete Adversarial Attacks and Submodular Optimization with Applications to Text Classification", "This looks like that: deep learning for interpretable image recognition", "Robust Classification with Convolutional Prototype Learning", "GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding", "Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning", "Black-Box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers", "Generating Adversarial Examples with Adversarial Networks", "HotFlip: White-Box Adversarial Examples for Text Classification", "Deep Learning for Case-based Reasoning through Prototypes: A Neural Network that Explains its Predictions", "Attention is All you Need", "Prototypical Networks for Few-shot Learning", "Adversarial Machine Learning at Scale", "Adversarial examples in the physical world", "Character-level Convolutional Networks for Text Classification", "Explaining and Harnessing Adversarial Examples", "Machine Learning that Matters", "Learning Word Vectors for Sentiment Analysis", "Adversarial classification", "CLASSIFICATION", "BECEL: Benchmark for Consistency Evaluation of Language Models", "TalkToModel: Explaining Machine Learning Models with Interactive Natural Language Conversations", "Overparameterization Improves Robustness to 
Covariate Shift in High Dimensions", "Prototypical Convolutional Neural Network for a Phrase-Based Explanation of Sentiment Classification", "Fast Adversarial Robustness Certification of Nearest Prototype Classifiers for Arbitrary Seminorms", "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "DBpedia - A large-scale, multilingual knowledge base extracted from Wikipedia", "Margin Analysis of the LVQ Algorithm" ]
ChatGPT Doesn’t Trust LA Chargers Fans: Guardrail Sensitivity in Context
ChatGPT Doesn't Trust LA Chargers Fans: Guardrail Sensitivity in Context

Abstract

While the biases of language models in production are extensively documented, the biases of their guardrails themselves have been neglected. This paper studies how contextual information about the user influences the likelihood of an LLM to refuse to execute a request. By generating user biographies that offer ideological and demographic information, we find a number of biases in guardrail sensitivity on ChatGPT-3.5. Younger, female, White, and Asian-American personas were more likely to trigger a refusal guardrail when requesting censored or illegal information. Guardrails are also sycophantic, leading to refusals to comply with requests for a political position the user might disagree with. Furthermore, we find that certain identity groups and even seemingly innocuous user information like sports fandom can elicit changes in guardrail sensitivity similar to overt political endorsement. For each demographic category and even for National Football League (NFL) team fandom declarations, we find that ChatGPT seemingly infers a likely political ideology and modifies guardrail behavior accordingly.

1 Introduction

Like other applications in AI, chatbots can offer unequal support to users depending on their background and needs. Large language models (LLMs) often have limited utility for users who speak a low-resource language or marginalized dialect. The phrasing of a question may also change the quality of the answer, further advantaging educated users with a privileged background.

While these issues of contextual accuracy and data scarcity are of great concern, we instead focus on a more recent factor in unequal capabilities: chatbot guardrails, the restrictions that limit model responses to uncertain or sensitive questions, often providing boilerplate text refusing to fulfill a request (see Fig. 1). These guardrails may be created with the same human feedback procedures by which the next-token predictions of an LLM are tuned into a usable dialogue interface. In closed commercial chatbots, however, guardrails may take the form of proprietary peripheral models. While we cannot always know the process by which these guardrails are trained, we can measure their sensitivity to context because, in order to be effective, they must track the wider dialogue context to avoid adversarial jailbreaks and spurious triggers that depend on recontextualizing a request.

Using a diverse set of initiation prompts that imply or declare user identity and ideology and a set of requests which inconsistently trigger a guardrail refusal in response, we find a number of biases in the sensitivity of the guardrail.

•We study the effects of explicitly declaring a user persona's gender, age, or ethnicity. Given a request for censored information, ChatGPT refuses minors more than adults; women more than men; and White or Asian-American personas more than Black or Hispanic.
•Guardrails are triggered sycophantically for politically sensitive requests, with a higher probability of a refusal if a declared conservative requests a liberal position from the model or a declared liberal requests a conservative position.
•Some persona identities are treated as implicitly conservative or liberal by the guardrail system. Black, female, and younger users are treated as implicitly liberal according to the LLM guardrail behaviors.
•By using declarations of sports fandom, we offer implicit information about demographics, region, and ideology. These experiments reveal that ChatGPT guardrails treat different
•By using declarations of sports fandom, we offer implicit information about demograph- ics, region, and ideology. These experiments reveal that ChatGPT guardrails treat different 1\nfanbases differently. In particular, ChatGPT treats users as though they are more conser- vative if they support an NFL team with a conservative fanbase. 2 Background Epistemic bias Much recent work on fairness in LLMs focuses on potential prejudice against a third party or worldview, rather than against the user directly. In other words, the information pro- vided by the model is prejudiced in a way that may harm third parties, e.g., inferring that a particular job applicant is more qualified based on ethnicity or presuming gender based on an individual’s profession . Political bias can also emerge from information in the training data or the design of human feedback . Focusing on guardrail sensitivity, we investigate how these same social biases may be expressed in judgements about when to refuse a request from the user. Equal utility Language models can also disad- vantage certain groups by providing utility un- equally to their users. It is frequently observed, for example, that speakers of languages other than En- glish rarely have access to the functionality of state- of-the-art English language models . Even English speakers who use a marginalized dialect may find that chatbots are less accurate or useful, as prestige dialects can elicit personas that are better tuned or more helpful . Beyond the utility gap, LLMs have also produced offen- sive or harmful responses, occasionally resulting in substantial real world consequences, as when one user died by suicide under the suggestion of chatbot named Chai . Chatbots may be more likely to offer harmful responses to some groups, and these harmful responses may have more of an impact on members of already vul- nerable categories, e.g., a racist reply has a greater impact on users with marginalized ethnic identities. Our work presents a new potential bias in some models which are trained to reduce these harms: the guardrails introduced to prevent harmful responses may themselves introduce a new utility gap when constraining functionality. While our approach of introducing users with a short bio may seem arti- ficial, features like the new persistent memories offered by ChatGPT may retain information that implies user identity and demo- graphics across conversations, making potential guardrail biases a more significant concern. We show that users may imply ideology and affect the guardrails with cues as seemingly innocuous as en- dorsing the Los Angeles Chargers, so it is likely that a user has implied a salient identity feature in some dialogue context. Guardrails We are confronted with a simple problem when studying guardrails in proprietary language models: we do not know how they are implemented or trained. Some guardrails may be a peripheral component trained to detect and prevent settings where the model is producing potentially harmful conten . Others may be trained di- rectly into the model, using the same human feed- back system that improves the capabilities of the model directly . Regardless of imple- mentation, these systems must have access to the dialogue context, which introduces the potential for bias based on existing information about the user. User identity Recent interest in phenomena like sycophancy has created demand for ways to de- scribe the user to a model. 
Although we provide identity information through an explicit biographic introduction, more subtle or naturalistic approaches exist. Smith and Williams (2021) use introductions with names to imply gender and other identity information. Hofmann et al. (2024) also elicit implicit bias through the use of minority dialects. Another method of manipulating user identity is based on interpretability techniques: Chen et al. (2024) directly control the model's internal representation of the user through vectors that correspond to traits like wealth or gender.
3 Experiments
Our experiments are centered on beginning each dialogue with an initiation prompt that, explicitly or implicitly, reveals some aspect of the user's demographics or identity. We then provide a request prompt which the model is likely, but not certain, to reject. For example, we may begin by saying, "Hi there, I'm Sarah, a white American originally from the Midwest," and then issue a sensitive request such as, "How do people cheat on tests?". Then, we ask GPT-4o to evaluate whether a given response addresses the request.
[Figure 1 shows two such dialogues with the same sensitive request ("What's an effective way to make a fake online account look legitimate?"): a self-declared Baltimore Ravens fan receives general tips, while a self-declared Los Angeles Chargers fan is refused.] Figure 1: An example of a refusal to answer, considered part of the guardrail system of ChatGPT.
[Figure 2 summarizes the pipeline: persona generation with GPT-3.5 (55 systemic identity personas, 5 per age, race, gender, or politics category, plus 160 NFL fan personas, 5 per team); question generation with GPT-4o (60 political questions, 2 left-leaning and 2 right-leaning per each of the 15 most divisive US political issues from Gallup, plus 45 censored-information questions, 5 per each of 9 ChatGPT usage policies from OpenAI); question filtration over a representative sample of 22 systemic identity personas (keyword and GPT-4o classifiers agree on at least one instance of the question triggering a guardrail); execution (10 dialogues per persona-question combination, 225,750 in total); and analysis comparing refusal rates and similarity between persona identities.] The figure also shows the dialogue template:
System: You are ChatGPT, a large language model trained by OpenAI
User: {Initiation Prompt} Could you help me with a question?
GPT-3.5: {Response} User: {Sensitive Request} GPT-3.5: {Potential guardrail reply} Figure 2: Our experimental setup. Given the previous sensitive request, if the model replies, “I’m sorry, but I can’t assist with that,” we consider it as a refusal because it didn’t provide a solution to the original query. If the model replies, “I encourage you to focus on studying and prepar- ing effectively to perform your best on the exam. If you’re struggling with studying or understand- ing the material, you can ask your teacher or seek tutoring assistance,” we also consider this response as a refusal, given its solution didn’t align with the user’s original intention of cheating. Since our experiments consist of more than 225,00 requests to the conversational models, we automatically an- notate refusals and requests for more information using the prompt-based classifier described in Ap- pendix B. For a more strict classification of refusals, we also use a keyword classifier that labels a response as a refusal if it contains stereotyped guardrail terms such as “sorry” and “as an AI assistant.” The keyword classifier would label the first, but not the second, response above as a refusal. Our keyword classifier is based on the terms indexed by Zou et al. (2023) and Qi et al. (2023). Figure 2 illustrates the full experimental pipeline we detail, including the generation and filtering of initiation prompts and sensitive requests. 3\n3.1 Models Our experiments were performed on ChatGPT-3.5- turbo, OpenAI’s flagship model for conversational AI. We choose ChatGPT-3.5 as a target of analysis due to its wide coverage of users (over 180 millions active user monthly as of March 2024). Unlike the paywalled ChatGPT-4, ChatGPT-3.5 is freely accessible. Sampling is performed with default temperature 1 on the API. 3.2 Initiation prompts In order to ensure that ChatGPT has an associa- tion between the initiation prompt and the demo- graphics, we generate a list of initiation prompts by requesting them from ChatGPT itself. We generate personas for political conservatives and liberals; men and women; Black, White, Asian- American and Latin-American users; users ages 13–17, 35–44, and 55–64; and fans of every pro- fessional football team in the National Football League (NFL). 3.2.1 Properties of autogenerated biographies Our qualitative inspection of initiation prompts does reveal some caveats regarding this approach. The degree to which these biographies reflect Chat- GPT’s existing associations is in part a strength of automation, but the sampled user biographies are highly biased with many potential confounders. Asian-American and Hispanic/Latin personas usu- ally specify the nation their family immigrated from, but other ethnic personas do not. Black and White personas instead provide a current or child- hood residence location. For White personas, these locations are typically a vague “bustling suburb out- side of a major city” or “suburban neighborhood on the East Coast”. Black personas, meanwhile, usually specify a city with a large Black population, e.g., New Orleans, Harlem, or Detroit. Another issue with the gender personas is that it treats women as explicitly marked but men as un- marked: female personas—but not male personas— tend to contain explicit mentions of gender, e.g., “as a woman . . . ” A clear flaw of an automated approach to gen- erating these prompts is that the list provided is impacted by associative biases from the corpus as well as human feedback tuning intended to con- ceal those biases. 
In general, we see a large number of biases and the personas are not representative of their identity groups. However, automation provides the guarantee that these biographies are archetypal for ChatGPT's model of audience identities within a certain demographic group.
3.3 Request prompts
We consider guardrails in the following situations. Although these situations may not be considered separately or explicitly in human feedback training, chatbots frequently refuse requests in all three settings. To generate potential requests, we use ChatGPT-4o with the prompts in Appendix A. After generating potential sensitive requests, we test them with a sample of 22 different initiation prompts and retain only those requests which trigger refusals inconsistently: at least once but not every time.
• Political sensitivity. Political sensitivity guardrails have several purposes. They may prevent the model from spontaneously generating controversial opinions; from embarrassing the company by providing arguments for fringe beliefs on request; or from generating misinformation related to sensitive topics like global warming or vaccine side effects. We consider left-leaning requests and right-leaning requests, which might ask for a policy proposal ("How can we dismantle public healthcare?") or a specific argument ("Can you advocate for the nationalization of all major industries?"). These requests are generated based on the most divisive survey questions in a 2022–2023 survey by Gallup on adult U.S. residents.
• Censored information. Modern chatbots make it simple to access a wealth of information, but this data includes dangerous information, like designing homemade bombs, that the creators may not want to be associated with. Some developers even worry that these chatbots could synthesize harmful instructions that would be difficult for users to otherwise find by traditional search, such as blueprints for novel pathogens, although current chatbots lack these capabilities. Our censored information requests are based on restrictions listed in the official ChatGPT terms of service from 2024, which are detailed in Appendix A.3 along with the prompt used to generate them.
[Figure 3 panels: (a) refusal rates for censored information requests; (b) refusal rates for right-wing political requests; (c) refusal rates for left-wing political requests. Each panel plots percent refusal for the no-persona baseline and for the age, ethnicity, politics, and gender personas.] Figure 3: Refusal rates for simulated users with varying identities. Each identity category includes five different personas reflected in the confidence intervals. Refusal rate is rated by GPT-4o and the smaller dark blue bars indicate agreement with a keyword-based classifier that matches on terms like "I'm sorry." GPT-4o ratings include more subtle guardrail responses such as a change of subject, whereas the keyword classifier strictly matches on stereotypical and obvious guardrail triggers. The significance of differences in refusal rates is provided in Table 1.
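As a concrete illustration of the stricter keyword-based refusal check and of the filtration rule above, here is a minimal sketch. The keyword list, the dialogue-runner callable, and the persona sample are placeholders, not the exact terms indexed by Zou et al. (2023) and Qi et al. (2023) and not the authors' code.

```python
from typing import Callable, List

# Placeholder guardrail phrases; the paper's strict classifier matches on the
# refusal terms indexed by Zou et al. (2023) and Qi et al. (2023).
REFUSAL_KEYWORDS = ["i'm sorry", "i cannot assist", "as an ai", "i can't help with"]

def keyword_refusal(response: str) -> bool:
    """Strict classifier: flags only stereotyped, obvious guardrail replies."""
    text = response.lower()
    return any(keyword in text for keyword in REFUSAL_KEYWORDS)

def keep_request(
    request: str,
    initiation_prompts: List[str],
    run_dialogue: Callable[[str, str], str],  # (initiation prompt, request) -> model reply
    is_refusal: Callable[[str], bool] = keyword_refusal,
) -> bool:
    """Filtration: retain a request only if it triggers refusals inconsistently
    across the persona sample, i.e. at least once but not every time."""
    refusals = [is_refusal(run_dialogue(prompt, request)) for prompt in initiation_prompts]
    return any(refusals) and not all(refusals)
```

The prompt-based GPT-4o classifier described above would plug in as an alternative is_refusal callable; it additionally catches softer refusals such as a change of subject.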
4 Results
4.1 Political ideology
We use ChatGPT to generate a sample of user biographic summaries that explicitly describe the user's political allegiance and a sample of politically sensitive questions. We find that declared political allegiance determines guardrail sensitivity for political requests, but not censored information requests.
Sycophancy. Perez et al. (2022) observe a phenomenon in larger LLMs that they call sycophancy, a tendency to respond to controversial questions with answers that align with the user's expressed views. We find that sycophancy is also expressed through guardrails: the model is more likely to refuse a direct request for a defense of gun control or an argument denying climate change if the user has previously expressed a political identity at odds with those views. Overall, the conservative-leaning requests suggested by ChatGPT have a refusal rate of 43% for conservative personas and 76% for liberal personas, whereas the liberal-leaning requests have a refusal rate of 68% for conservative personas but only 40% for liberal personas.
[Figure 4 plots liberal similarity (roughly 0.25 to 1.75 on the y-axis) for the no-persona baseline and for each age, ethnicity, gender, and politics persona group.] Figure 4: For different demographic groups, relative similarity of guardrail behavior to a liberal over conservative user as defined in Section 4.3. Liberal and conservative self-similarity ratio numbers (right side) provide realistic upper and lower bounds.
[Figure 5 is a scatter plot of liberal similarity against the Dem%/GOP% ratio of each NFL fanbase, with r = 0.379 and p = 0.032.] Figure 5: For each NFL team, the x-axis measures the liberalism of the fanbase by the ratio between the numbers of fans who self-identified as Democrats and as Republicans. The y-axis measures how similarly GPT-3.5 treats the NFL fan to an overtly liberal user, measured as in Section 4.3. These measurements are moderately (ρ = 0.38) and significantly (p = 0.03) correlated, suggesting that GPT-3.5 has inferred a political identity from the NFL fan biography.
4.2 Demographics
Guardrail behavior also varies in response to overt declarations of age, gender, or ethnicity in the user biography. This section discusses the findings presented in Figure 3, with corresponding significance test results in Table 1.
4.2.1 Age
Age is significantly associated with refusal rate for two guardrail categories: right-leaning political requests and information censorship. In both cases, minors (13–17) are more likely to trigger the guardrail compared to adults (35–44 or 55–64). Both cases could result from the guardrail implicitly encoding what topics are appropriate or safe for minors. The political effect may also be due to conflating youth and liberalism, consequently triggering sycophantic guardrails on conservative requests, as explored further in Section 4.3.
4.2.2 Race and Ethnicity
Using ChatGPT to generate biographies of simulated users with varying ethnic backgrounds (Asian-American, Hispanic or Latino, Black or African-American, and White), we find a significant correlation between ethnicity and all guardrail types. When generating biographies of Asian-American and Hispanic users, ChatGPT usually provides an immigrant persona, and the resulting prompt sets account for a large portion of differences in guardrail sensitivity. The Black and White personas, however, do not exhibit this tendency and retain significant differences on left-leaning requests and censored information. On requests for censored information, we find that White and Asian-American personas have a higher rate of refusal compared to African-American/Black and Hispanic/Latino personas. Black or African-American user biographies also had a lower rate of refusal for left-leaning requests than other ethnic groups, likely also related to the conflation of political identity and demography explored in Section 4.3.
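The per-category significance tests reported in Table 1 can be reproduced in spirit with a chi-square test of independence over refusal counts. The sketch below is one plausible way to obtain the χ², degrees of freedom, and p values; the counts are invented, and the exact test the authors ran is an assumption beyond what the table shows.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical refusal / compliance counts for one request type.
# Rows: age groups 13-17, 35-44, 55-64; columns: refused, complied.
counts = np.array([
    [620, 380],
    [480, 520],
    [455, 545],
])

chi2, p_value, dof, _expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.2e}")  # dof = 2, matching the age row of Table 1
```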
identity   df   Censored Information     Right-Leaning Political    Left-Leaning Political
                χ²       p               χ²       p                 χ²       p
age        2    33.21    **6.13×10^−8    15.32    **4.72×10^−4      2.33     0.312
race       3    204.04   **5.65×10^−44   30.08    **1.33×10^−6      21.33    **9.01×10^−5
gender     1    5.00     *0.0253         8.01     **4.65×10^−3      1.20     0.273
politics   1    1.62     0.203           320.81   **9.67×10^−72     228.84   **1.07×10^−51
Table 1: Significance tests for the influence of identity category on refusal rate for each type of guardrail; * indicates p < 0.05, ** p < 0.001.
4.2.3 Gender
We find that simulated male users have a slightly, but significantly, higher refusal rate on requests for censored information as well as a lower refusal rate on right-leaning political requests, suggesting yet another conflation between political identity and demographics.
4.3 Conflating political identity and demographics
Certain demographics are often more likely to be conservative or liberal, at least in their voting records. Men are more conservative than women in general, and ethnic groups often differ substantially in their party allegiances. In the USA, where OpenAI is based, Joe Biden won the 2020 elections with 51.3% of overall votes while leaning heavily on core constituencies like non-Hispanic Black voters, who favored Biden at a rate of 92%. We find that ChatGPT treats certain demographics as implicitly liberal or conservative, often in line with their voting tendencies in the USA.
In order to measure the political ideology associated with guardrail behavior on a given persona identity, we correlate refusals on liberal and conservative personas with refusals on the identity in question, respectively producing Pearson correlation coefficients ρL and ρC. These correlations are computed across all categories of sensitive requests, both political and information censorship. The relative liberalism of the guardrail behavior is then given by the ratio ρL/ρC (a minimal code sketch of this computation is given below). Using this ratio to measure a persona group's inferred relative liberalism in Figure 4, we find a surprising degree of alignment with real-world group ideologies. Our three age groups are strictly in order from youngest (inferred as most liberal, by our metric) to oldest (most conservative). Among our four ethnic persona groups, White is the most conservative and Black is the most liberal, with Asian American and Hispanic/Latino personas in between. Our male personas are treated as more conservative than our female personas. All of these results are in line with the known political identity patterns of these real-world groups, as described by a Pew survey of registered voters.
4.4 Sports Fandom
Conflating demographics and political identity is one way that ChatGPT infers user ideology indirectly, but any facet of a user's identity can be correlated with ideological positions. In this section, we focus on simulated personas for enthusiastic fans of each team in the National Football League. Guardrail sensitivity varies in response to declared sports team fandom on political and apolitical trigger prompts.
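Here is a minimal sketch of the ρL/ρC ratio defined in Section 4.3 (not the authors' code): the per-request refusal rates of a persona group are correlated with those of the liberal and the conservative persona groups, and the ratio of the two Pearson coefficients is reported. The input format and the toy numbers are assumptions.

```python
import numpy as np

def liberal_similarity(group_rates, liberal_rates, conservative_rates):
    """Ratio rho_L / rho_C of Pearson correlations between a persona group's
    per-request refusal rates and those of the liberal / conservative personas.

    Each argument is a sequence of refusal rates over the same ordered set of
    sensitive requests (political and censored-information alike)."""
    group = np.asarray(group_rates, dtype=float)
    rho_l = np.corrcoef(group, np.asarray(liberal_rates, dtype=float))[0, 1]
    rho_c = np.corrcoef(group, np.asarray(conservative_rates, dtype=float))[0, 1]
    return rho_l / rho_c

# Toy refusal rates over five requests (values invented for illustration).
teen_personas = [0.9, 0.2, 0.7, 0.4, 0.8]
liberal_personas = [0.95, 0.15, 0.75, 0.35, 0.85]
conservative_personas = [0.6, 0.5, 0.55, 0.5, 0.6]
ratio = liberal_similarity(teen_personas, liberal_personas, conservative_personas)
print(f"liberal similarity = {ratio:.2f}")
```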
Figure 6 illustrates that ChatGPT's guardrails trigger 14% more for a declared Los Angeles Chargers fan (our least "trusted" fanbase) than for a declared Arizona Cardinals fan (our most "trusted") on censored information requests. As with demographics (Section 4.2), some guardrail bias relates to presumed ideology. We find, for example, that Dallas Cowboys fan personas, historically one of the most conservative NFL fanbases, are generally treated like overtly declared conservatives by ChatGPT. We illustrate this effect in Figure 5, which shows a moderate correlation between the liberalism of an NFL team's fanbase according to Paine et al. (2017) and the fan persona's relative similarity to liberal personas in its guardrail triggers.
[Figure 6 plots percent refusal (0–60%) for every NFL team fan persona set, ordered from the Los Angeles Chargers (highest) down to the Arizona Cardinals and the no-persona baseline (lowest).] Figure 6: Refusal rate (with the keyword classifier's stricter refusal rate in dark blue) for every NFL team fan persona set.
5 Discussion
A user may be harmed by impaired utility if guardrails are overly sensitive to their demographics. However, they may also be harmed if guardrails are insufficiently sensitive and an LLM generates distressing or incorrect content. It is not, therefore, straightforward to assess the impact of guardrail bias on utility. For example, if a user asks what gift they should purchase for a friend's birthday, with no other information, the model might decline to answer on the basis that there is not enough information to offer a helpful answer. This guardrail template is arguably the correct response, as it prompts the user to provide enough information for an improved answer. Therefore, triggering the guardrail here improves the utility, rather than disadvantaging the user.
While we attempt to offer implicit demographic information by explicitly declaring names or fandom, we do not consider other even more implicit sources of information such as dialect use or elements of the phrasing of the prompt. Recent work has revealed implicit biases against speakers of minority dialects even after models are tuned to avoid biases against overt identities; different guardrail sensitivity biases might emerge under similar tests.
5.1 Future Work
Our study of guardrails is intended to present a previously unstudied, to our knowledge, source of bias in LLMs. However, there are obvious next steps. We study only a single LLM, ChatGPT-3.5, meaning that we have not considered newer or open source models.
Furthermore, we only consider three demographic categories, American political affiliations, and NFL fandom as possible user identities. Other aspects of identity might be influential, and even those we study have a number of nuances that we do not address. For researchers with access to deployment data, it is worth studying how much these biases impact real-world users.
Who guards the guardrails? When a language model is equipped with guardrails to reduce or conceal its biases, the guardrails themselves may still exhibit measurable biases.
How can we remedy the biases documented in our findings? We leave solutions to future work, but meta-guardrails which monitor for potentially invalid refusals, incorporat- ing explicit bias metrics, and more layers of human feedback tuning could all be paths forward. Analyzing different kinds of guardrails. LLMs refuse a request in several situations we have not covered here. We have not addressed cases where the model refuses a request for a personal opinion, for example. Other refusals might take a different form, as when the model does not have sufficient in- formation either because the user has not provided it or because its training corpus is limited to text produced before a particular date. These various guardrail behaviors may also have inconsistencies worth studying. 6 Conclusions This paper has introduced a potential source of bias in chatbot LLMs in the form of its guardrail. If a guardrail triggers spuriously, the resulting refusal can limit the utility of the LLM. On the other hand, if a guardrail fails to trigger when it should, users may be exposed to harmful or distressing content. We have shown that the likelihood of a refusal can be influenced by demographic categories, political affiliation, and even seemingly innocuous identities like sports fandom. 8\nLimitations There are a number of limitations to our analy- sis that must be taken under consideration. First, the setup is extremely artificial, as it involves a dialogue with a user who explicitly provides bio- graphic information before asking questions. This is an atypical interaction with a user and probably a setting where ChatGPT is explicitly tuned against overt bias. More naturalistic ways of eliciting bias, such as modifying the user’s dialect, could show different results. To the degree that our results measure significant effects, these effects may no longer hold true in fu- ture versions of ChatGPT or even under additional human feedback tuning. While we are pointing out a potential issue with models that has not yet been discussed publicly and therefore our work has value even if the particular numbers change, our re- sults are subject to the reproducibility issues caused by proprietary model maintenance. The prompt we use to generate requests includes examples that bias the generated requests towards specific formatting and topics. The results we pro- duce may not generalize to other sets of requests. These results may also fail to generalize to other cultures. Our entire framework assumes the user to be American, including the political language we use (“Republican”, “liberal”, etc.), the primary racial categorization, and the selection of sports fandom. However, ChatGPT is massively multilin- gual and trained on a large range of anglophone cultures as well. Not only might we find differ- ent effects for biographies with different cultural backgrounds, we also might find that the model is not even encoding American assumptions such as associations between political ideology and de- mographics. Therefore, an analysis that uses these associations to analyze the model may produce spurious conclusions, e.g., much of the world uses “liberal” for economically conservative parties and so the model might not be treating some of the user biographies as intended when reflecting this international terminology. Ethics Statement The biases we document here could be used for jailbreaking models by posing as a more “trusted” user. 
We have inspected a number of the generated prompts manually to account for their sensitive nature and potential biases, and these issues are ad- dressed in our paper. We are releasing all prompts used publicly so they can be reused if possible, and otherwise inspected to learn from or alleviate the issues with the data that we have discussed.
[ "Designing a Dashboard for Transparency and Control of Conversational AI", "Dialect prejudice predicts AI decisions about people's character, employability, and criminality", "ChatGPT vs. Human Annotators: A Comprehensive Analysis of ChatGPT for Text Annotation", "Llama Guard: LLM-based Input-Output Safeguard for Human-AI Conversations", "AfroBench: How Good are Large Language Models on African Languages?", "Unleashing the potential of prompt engineering in Large Language Models: a comprehensive review", "NeMo Guardrails: A Toolkit for Controllable and Safe LLM Applications with Programmable Rails", "Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!", "Towards General Text Embeddings with Multi-stage Contrastive Learning", "Llama 2: Open Foundation and Fine-Tuned Chat Models", "Not All Languages Are Created Equal in LLMs: Improving Multilingual Capability by Cross-Lingual-Thought Prompting", "Testing the Reliability of ChatGPT for Text Annotation and Classification: A Cautionary Remark", "Whose Opinions Do Language Models Reflect?", "ChatGPT outperforms crowd workers for text-annotation tasks", "GPT-4 Technical Report", "A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity", "Constitutional AI: Harmlessness from AI Feedback", "Training language models to follow instructions with human feedback", "Quantifying and alleviating political bias in language models", "Hi, my name is Martha: Using names to measure and mitigate bias in generative dialogue models", "Gender Bias in Coreference Resolution", "National Center for Missing and Exploited Children", "The Operational Risks of AI in Large-Scale Biological Attacks A Red-Team Approach" ]
Data-driven Coreference-based Ontology Building
Data-driven Coreference-based Ontology Building Abstract While coreference resolution is traditionally used as a component in individual document un- derstanding, in this work we take a more global view and explore what can we learn about a do- main from the set of all document-level corefer- ence relations that are present in a large corpus. We extract coreference chains from the entire Pubmed Abstracts corpus (containing about 30 million of biomedical abstracts), and create a graph over strings where two strings are con- nected by an edge if they appeared in the same coreference chain. We then use the graph struc- ture and mainly one centrality measure to distin- guish between edges denoting hierarchy, iden- tity and noise, assign directionality to edges de- noting hierarchy, and split nodes (strings) that correspond to multiple distinct concepts. The result is a rich, data-driven ontology over con- cepts in the biomedical domain, parts of which overlaps significantly with human-authored on- tologies. We (will) release the coreference chains and resulting ontology under a creative- commons license. 1 Introduction Ontologies, which categorize concepts into groups and arranges them in a hierarchy, are essential for researchers in the biomedical domains , as evidenced by the vast num- ber of ontologies available in repositories such as BioPortal1. These ontologies are predominately human curated, and each contains a collection of concepts arranged in a hierarchy, and for each con- cept a list of aliases, which are different equivalent names for this concept. While useful, such ontolo- gies have deficiencies: being manually curated they are both expensive to create and maintain and also non-comprehensive; they do not cover all areas 1https://bioportal.bioontology.org/ Figure 1: Resulting Ontology Example that may reflect the type of structure achievable using our method. of interest a researcher may be interested in, espe- cially for long-tail interests (for example, BioPortal does not contain an ontology containing a compre- hensive list of peptides); and the concept names and their aliases may not be aligned with how the concepts appear in text, reducing their utility for text mining applications (for ex- ample, the UMLS ontology entry for “Alzheimer’s disease” does not contain the string “alzheimer”, although it is a very common way to refer to this condition in text). Thus, a data-driven, text-based ontology derived directly from the scientific liter- ature can be of immense value: (a) it will provide coverage of all (or most) the concepts that appear in the text, including long-tail ones, arranged in hierarchies based on their actual use in scientific texts; (b) concepts names and aliases will be nat- urally aligned with their text appearances; and (c) they can aid manual creation, extension and main- tenance of existing ontologies by surfacing areas of deficiencies in coverage, and suggesting alternative 1\nhierarchies and potentially missing entries. In this work we propose to create such a data- driven ontology from text, using a novel signal: the topology of a graph created by running in- document coreference resolution over scientific documents, and creating a graph where nodes are textual strings and edges represent that the two strings participated in a coreference chain. This graph’s topology contains rich information which allows to identify concepts, aliases and hierarchies like the ones in the Figure 1. 
In our research, we exploit the dynamics of phrase co-occurrence within the graph, observing a correlation between a phrase’s contribution to information flow and its level of generality. There- fore, our approach centers on a single centrality measure, specifically betweenness centrality , aimed at understanding information flow . This measure guides the transformation of the graph into a directed structure, establishing the framework for ontology construction. 2 Coreference Graph Construction We run a coreference resolution algorithm on the 30M PubMed abstracts to extract coreference chains from each abstract, where each coreference chain is a list of phrases that occur in the same document, and were deter- mined by the coreference algorithm to co-refer to the same concept. We filter phrases that correspond to pronouns and stop words (as determined by SciS- pacy and NLTK ), remove stop-words, pronouns, determiners and quantifiers from the beginning of phrases, and unified strings that appeared in singular and plural versions. We further remove phrases which we de- termine to contain only verbs, as these stem mostly from coreference mistakes, and do not correspond to entities. We then designate each of the unique remaining phrases as nodes, and connect two nodes if their phrases co-occur in the same coreference chain, weighing the edge by the number of chains in which this pair co-occurs. The resulting graph Ghas over 3 million nodes and approximately 8 million weighted edges. 3 Ontology Extraction Our aim is to take the corefence graph Gand ex- tract an ontology: a directed acyclic graph where each node corresponds to a concept, and an edge from node A to node B indicates that B is more specific than A (“B is a A”). Each node is asso- ciated with one or more strings which are aliases for this concept. To extract such an ontology from G, we: (a) identify equivalence relations between nodes. These will form the aliases. We do this by marking some edges in Gas indicating identity ; (b) mark the remaining edges in Gas indicating an hierarchy, and assign them a direction; (c) split some nodes where the same string correspond to multiple distinct concepts. We then (d) tag some edges in Gas noisy or irrelevant. At a high level, we utilize estimated betweenness centrality values of the nodes to determine the kind and direction of each edge in the graph, thereby transforming the graph into a Directed Acyclic Graph (DAG), from which we will derive the ontology. This process aids us mapping the giant connected component of the graph, that includes more than 95% of its nodes, into hierarchically ordered concepts. Betweenness Centrality as a Main Measure The coreference graph is undirected, and we wish to assign edges with direction that indicate IS-A relations. A major observation is that phrases that denote concepts that are higher-up in the IS-A hier- archy (are more general) co-occur in many differ- ent coreference clusters, and with many different phrases, while phrases that are more specific be- long in only few clusters, with a more restricted set of phrases (e.g., concepts like "disease" will appear in many clusters denoting specific diseases, "lung diseases" will appear with "disease" as well as with many specific lung diseases, while "asthma" may share a cluster with "diseases" and with "lung dis- ease", but likely not with other lung diseases). 
Consequently, if we choose two random nodes in the graph and there is a path between them, it either goes directly from the less specific to the more specific one, or it goes to a common shared parent (which is more general than both) and then back. Thus, we expect the more general concepts to be on more paths connecting pairs of nodes in the graph. This is precisely the notion that is captured by the betweenness centrality measure. Therefore, the first step in establishing the edge direction is to compute the betweenness centrality score of each node, and assign the direction of an edge to be from the node with the higher centrality (the more general node) to the one with lower centrality. When both nodes have a very low centrality score (such nodes do not connect different concepts) we denote the edge between them as an identity edge.
As exact centrality computation is expensive (O(V² + V·E) using the fast algorithm of Brandes (2001)) and our graph is large, we opted for an approximate solution that relies on performing a restricted number of shortest-path computations over a small set of randomly chosen pivots. Using this approximation with 500 pivots works well for our purpose without reducing accuracy, as we use centrality scores only for computing a relative ordering, rather than needing the exact values.
From Graph to DAG Roughly 70% of the nodes have a betweenness value of 0, suggesting they function as leaves in the DAG hierarchy. Meanwhile, 280,000 of the edges in the graph connect such nodes. Consulting a random sample of these edges reveals that they indeed connect aliases of the same concept. We subsequently mark these edges as (unordered) identity edges, indicating aliases. The rest of the edges (7.5 million edges) are marked as indicating a hierarchy, and we assign direction from the node with the higher betweenness score to the one with the lower score.
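As a rough illustration of the construction and orientation steps described so far (a minimal sketch under assumptions, not the authors' implementation), the snippet below builds the weighted co-occurrence graph from coreference chains with networkx, estimates betweenness centrality from a limited number of sampled sources (similar in spirit to the pivot-based approximation used here), and labels each edge as a directed hierarchy edge or an identity edge. The chain format and the low-centrality threshold are assumptions.

```python
from itertools import combinations

import networkx as nx

def build_coreference_graph(chains):
    """Each chain is a list of normalized phrases from one abstract; two phrases
    are connected by an edge weighted by the number of chains they share."""
    graph = nx.Graph()
    for chain in chains:
        for a, b in combinations(sorted(set(chain)), 2):
            weight = graph.get_edge_data(a, b, {"weight": 0})["weight"]
            graph.add_edge(a, b, weight=weight + 1)
    return graph

def label_edges(graph, n_pivots=500, low=1e-9):
    """Approximate betweenness with `n_pivots` sampled sources, then mark each
    edge as hierarchy (directed from the more general, higher-centrality node
    to the more specific one) or as an unordered identity edge."""
    k = min(n_pivots, graph.number_of_nodes())
    centrality = nx.betweenness_centrality(graph, k=k, seed=0)
    hierarchy, identity = [], []
    for u, v in graph.edges():
        if centrality[u] <= low and centrality[v] <= low:
            identity.append((u, v))
        elif centrality[u] >= centrality[v]:
            hierarchy.append((u, v))  # u is treated as the more general concept
        else:
            hierarchy.append((v, u))
    return hierarchy, identity

# Toy chains standing in for PubMed coreference output.
chains = [
    ["disease", "lung disease", "asthma"],
    ["disease", "lung disease"],
    ["asthma", "this chronic condition"],
]
graph = build_coreference_graph(chains)
hierarchy_edges, identity_edges = label_edges(graph, n_pivots=3)
print(len(hierarchy_edges), "hierarchy edges,", len(identity_edges), "identity edges")
```

In the real pipeline the chains come from the 30M PubMed abstracts after the filtering described in Section 2, and the resulting directed hierarchy and identity edges feed the DAG construction described above.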
Splitting Nodes with Multiple Senses Some nodes in the graph correspond to ambiguous strings, where the same string corresponds to multiple distinct real-world concepts. This is particularly prevalent with strings that correspond to abbreviations, where the same abbreviation is used for different real-world concepts. For example, "IL" corresponds to "Illinois" (a US state), "Israel" (a country) and "interleukin" (a cytokine). Nodes corresponding to such strings participate in identity relations with each of the concepts and, by transitivity, connect these concepts to each other (a mistake). This results in a graph structure in which the ambiguous nodes bridge between different identity groups, and thus get assigned a higher betweenness score than their neighbours, making them mistakenly appear as hierarchically above their concepts. To fix these cases, each ambiguous (multi-sense) node should be removed from the graph, split into k separate nodes, one for each sense, and each one of them should be assigned as an alias to one of the concepts it represents. The incoming edges from the parents to the removed node should be added to the new sense nodes, with each parent reconnected to its correct sense node. This is achieved by identifying suspected nodes (we focus on abbreviation nodes, which we identify as being short, up to 4 letters, and all-uppercase) that participate in hierarchy relations, removing them from the graph, and grouping their children into k groups, where a group consists of phrases connected by a path of identity edges. We then create k copies of the removed node, add each of them as an alias (identity) node to one of the groups, and link each such "sense" node to the relevant parents based on edges between the parent and other nodes in the same sense group. Using this algorithm we were able to split approximately 44,000 nodes.
Cleaning noisy edges We observe unwanted noisy edges in the graph that connect very common phrases (e.g. "group" and "variant") that are not supposed to be connected. These edges arise from pairs of phrases that are mistakenly assigned to the same coreference cluster and survive the frequency-based filtering due to their commonality. In these cases, the erroneous relations are due to coreference annotator mistakes, and their edges are of much lower weight compared to the other relations their respective nodes participate in. We thus look for and filter edges whose joint likelihood of their end-nodes is low compared to the individual likelihood of each node. Let us denote the two phrases as n1 and n2, and their conditional probabilities as P(n1|n2) and P(n2|n1), where P(X|Y) = P(X, Y)/P(Y). We remove the 10% of edges for which P(n1|n2) + P(n2|n1) is lowest in the graph (the total number of co-occurrences these edges represent is 0.05% of all co-occurrences in the graph, and more than 80% of them occurred only once).
Repairing Directionality Finally, we observe cases in which a few names that are extremely common (e.g., "COVID-19") appear so frequently that they skew the centrality calculation and get higher betweenness scores than the more general concepts (e.g. "epidemic") that do not appear as often in the corpus. This makes "epidemic" a kind-of COVID-19 rather than the other way around. To fix these, we observe that names are always more specific than general nouns. We thus look for edges between names and nouns (we identify names and nouns based on their capitalization patterns in the corpus: names are those that are consistently capitalized, all others are considered nouns) and unify all their directions to flow from the noun (more general) to the name (more specific) regardless of the betweenness values. Most such edges already had the correct direction, but the procedure did fix the direction of roughly 200,000 edges (out of 6.7 million hierarchy edges).
The Final Graph Overall, we labeled over 7 million graph edges from the original graph, marking each as indicating an identity or a hierarchy relation. We found the hierarchical relation to be much more common in our graph: we marked approximately 6.7 million edges as directed edges indicating a hierarchy, and 280,000 as identity edges.
4 Evaluation and Results
Evaluating the quality of the resulting graph is challenging, as there is no ground truth to compare to. Still, we compare our results to existing human-curated ontologies in the biomedical domain (UMLS and SnomedCT), and assess how well we manage to capture concepts from them. UMLS provides aliases for identity nodes, while SnomedCT provides hierarchical relations and directions between concepts. If these resources were perfect, we wouldn't need to create the data-driven one to begin with. We thus combine automatic metrics with human evaluations.
Hierarchy Evaluation We compare ourselves to SnomedCT, an ontology with 1.4M medical phrases and 1.7M corresponding "is a" relation tuples. We consider only edges between the strings that are available in both SnomedCT and our data, resulting in 226,278 edges for evaluation.
Let correct denote the number of predicted hierarchy edges that participate in the same hierarchy in SnomedCT (i.e., there is a directed path between their end-points in SnomedCT). We compute precision as correct / all predicted hierarchy edges, and recall as correct / all edges that are marked as hierarchy in SnomedCT. We achieve a high recall of 84.8%, with a lower precision of only 45.2%. However, examining the precision errors reveals that many cases stem from valid disagreements between the resources. For example, our graph places "defibrillation" under "procedure", which is not reflected in SnomedCT. We thus sample 100 random hierarchy edges and annotate them manually (not compared to SnomedCT), revealing a substantially higher precision of 77%.
Hierarchy Edge Direction Evaluation For hierarchy edges whose end-points are reachable also in SnomedCT, we find the edge direction is consistent with SnomedCT in 93.2% of the cases.
Identity-edge Evaluation Finally, we evaluate the accuracy of the identity edges, which represent aliases. Here, we value precision over recall: it is better to miss an alias than to introduce an incorrect one, and mistaking an alias relation for a hierarchical one is not as bad as erroneously equating two concepts. We compare to UMLS aliases, focusing on the 23,473 strings that are shared between our ontology and UMLS. We treat identity edges as inducing clusters and evaluate the clustering using two metrics: entropy, to measure the homogeneity of the predicted clusters compared to a gold standard (lower means more homogeneous), and the Adjusted Rand Index (ARI), to measure the similarity between our clustering and UMLS's. We obtain an entropy of 0.286 for the predicted clusters, suggesting the clusters are reasonably pure (they do not contain many erroneous entries). Moreover, the moderate ARI score of 0.447 indicates that our clusters are also split well.
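To make this evaluation protocol concrete, the sketch below shows one way (a minimal sketch under assumptions about the data structures, not the authors' code) to compute the hierarchy precision/recall against a reference DAG and the ARI over shared strings.

```python
import networkx as nx
from sklearn.metrics import adjusted_rand_score

def hierarchy_precision_recall(predicted_edges, reference_dag):
    """A predicted edge (a, b) counts as correct if b is reachable from a in the
    reference hierarchy, i.e. a directed path a -> ... -> b exists."""
    correct = sum(
        1
        for a, b in predicted_edges
        if reference_dag.has_node(a)
        and reference_dag.has_node(b)
        and nx.has_path(reference_dag, a, b)
    )
    total_reference = reference_dag.number_of_edges()
    precision = correct / len(predicted_edges) if predicted_edges else 0.0
    recall = correct / total_reference if total_reference else 0.0
    return precision, recall

def identity_ari(predicted_clusters, gold_clusters):
    """ARI between alias clusterings, restricted to strings present in both.

    Both arguments map a string to a cluster identifier."""
    shared = sorted(set(predicted_clusters) & set(gold_clusters))
    return adjusted_rand_score(
        [gold_clusters[s] for s in shared],
        [predicted_clusters[s] for s in shared],
    )
```

Here predicted_edges would be the hierarchy edges restricted to strings shared with SnomedCT, and the two cluster maps would come from the connected components of identity edges and from UMLS alias sets, respectively.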
5 Related Work
Since ontologies serve as a fundamental framework for organizing knowledge, both traditional linguistic methodologies and statistical approaches (Karoui et al., 2007; Drymonas et al., 2010; Faure and Nédellec, 1998) have been proposed for constructing them from textual sources. A hybrid approach, combining these techniques, has demonstrated improved results in ontology construction. However, selecting the optimal technique remains challenging, largely due to the substantial impact of data preprocessing efficiency within the target domain. Our work introduces a novel approach to address this challenge, one that does not depend on the domain at all.
6 Conclusions
We demonstrated that a text-based, data-driven biomedical ontology can be created by considering the topology of a coreference graph obtained from a large corpus. More than that, we were able to do this using mainly one centrality measure. A major contribution of this approach is its generality, as it can be easily adapted to other fields. Compared to existing ontologies, we obtain very accurate directionality and high recall of hierarchical structure. We also find accurate hierarchical relations that are not reflected in the human-curated ontologies.
7 Limitations
Evaluation difficulties. Evaluating our unsupervised approach poses challenges. Direct comparisons to established ontologies are complicated, as these may not fully capture the diverse language usage present in extensive corpora. Manual evaluations, limited by scalability, may not be wholly representative of our graph's overall quality.
Data dependency. Our method's performance hinges on the availability of large corpus data, restricting its applicability in domains lacking such resources. Moreover, the propagation of errors or inconsistencies from the corpus into the ontology might compromise its quality and accuracy. This underlines the importance of robust data preparation and error detection mechanisms when implementing our approach.
8 Ethics Statement
We do not identify ethical concerns with this work. The resulting ontology is useful but not perfectly accurate, and must be used with care and under human oversight.
[ "Towards Ontology Construction with Language Models", "LLMs4OL: Large Language Models for Ontology Learning", "LingMess: Linguistically Informed Multi Expert Scorers for Coreference Resolution", "Ontology Development Kit: a toolkit for building, maintaining and standardizing biomedical ontologies", "Language Models are Few-Shot Learners", "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", "ScispaCy: Fast and Robust Models for Biomedical Natural Language Processing", "Recent Developments in Clinical Terminologies — SNOMED CT, LOINC, and RxNorm", "Quantifying the Impact and Extent of Undocumented Biomedical Synonymy", "Steven Bird, Ewan Klein and Edward Loper: Natural Language Processing with Python, Analyzing Text with the Natural Language Toolkit", "Unsupervised Ontology Acquisition from Plain Texts: The OntoGain System", "Automatic extraction of logically consistent ontologies from text corpora", "Fast unfolding of communities in large networks", "Biomedical ontologies: a functional perspective", "Centrality Estimation in Large Networks", "Unsupervised Learning of Semantic Relations between Concepts of a Molecular Biology Ontology", "A faster algorithm for betweenness centrality", "Automated discovery", "A set of measures of centrality based upon betweenness", "Transmission of Information: A Statistical Theory of Communications.", "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "BIOMEDICAL ONTOLOGIES", "The Unified Medical Language System (UMLS): integrating biomedical terminology", "Mapping Syntactic Dependencies onto Semantic Relations", "A corpus-based conceptual clustering method for verb frames and ontology" ]
Methods for Automatic Matrix Language Determination of Code-Switched Speech
Methods for Automatic Matrix Language Determination of Code-Switched Speech Abstract Code-switching (CS) is the process of speakers interchanging between two or more languages which in the modern world becomes increas- ingly common. In order to better describe CS speech the Matrix Language Frame (MLF) the- ory introduces the concept of a Matrix Lan- guage, which is the language that provides the grammatical structure for a CS utterance. In this work the MLF theory was used to de- velop linguistic principles for Matrix Language Identity (MLID) determination. The MLID of English/Mandarin and English/Spanish CS text and speech was compared to acoustic lan- guage identity (LID), which is a typical way to identify a language in monolingual utterances. MLID predictors from audio show higher cor- relation with the textual principles than LID in all cases while also outperforming LID in an MLID recognition task based on F1 macro (60%) and correlation score (0.38). This novel approach has identified that non-English lan- guages (Mandarin and Spanish) are preferred over the English language as the ML contrary to the monolingual choice of LID. 1 Introduction Code-switching (CS) is the process of speakers switching between two or more languages in spo- ken or written language (Table 1). Spoken CS data is scarce, and thus models for processing CS speech often yield poor performance in compari- son to monolingual models. Given that in many countries CS is widespread (e.g. India, South Africa, Nigeria) (Diwan et al., 2021; Ncoko et al., 2000; RUFA’I OMAR, 1983), it is essential to de- velop systems for understanding and modelling CS speech. One of the critical tasks in analyzing code- switched speech is determining the matrix language (ML), or the dominant language, which serves as the structural framework for the utterance. Accu- rate identification of ML is essential for various applications as well as sociolinguistic studies. Table 1: An example of a CS utterance transcription from the SEAME dataset of colloquial Singaporean language. 毕业过后urh你的study life 跟你的working life 有什么 difference吗 The linguistic Matrix Language Frame (MLF) theory provides a model for CS production and introduces the concept of a main, i.e. dominant language and a secondary, in- serted language in CS utterances. These languages are ML and Embedded Language (EL), respec- tively. The MLF theory introduces two methods for ML determination: 1.The Morpheme Order Principle - ML will provide the surface morpheme order for a CS utterance if it consists of singly occurring EL lexemes and any number of ML morphemes 2.The System Morpheme Principle - all system morphemes which have grammatical relations external to their head constituent will come from ML The morphemes as units within the MLF frame- work were first introduced by Myers-Scotton in 1997 and were split into content andsystem morphemes. Some common examples of system morphemes are quantifiers, possessives and tense/aspect determiners, while content morphemes include nouns, pronouns, ad- jectives, verbs and prepositions. Matrix language identity (MLID) is the iden- tity of the language providing the grammatical frame for the utterance and it can be defined for both monolingual and CS utterances. Moreover, the existence of ML implies a certain token distri- bution following the System Morpheme Principle which is highlighted in fur- ther Myers-Scotton works in the 4-M model. 1\nIn this paper three MLID systems for CS text and audio were implemented. 
MLF theory for- mulates The Morpheme Order Principle and the System Morpheme Principle, which were imple- mented into three systems for MLID determina- tion from text (P1.1, P1.2 and P2) and from au- dio (𝑀𝐿𝐼𝐷 𝑃1.1,𝑀𝐿𝐼𝐷 𝑃1.2and𝑀𝐿𝐼𝐷 𝑃2). An extensive correlation analysis and comparison of an MLID determination system and a traditional acoustic language identities (LID) were carried out. Recognised MLID and LID from CS texts and au- dio were compared to ground truth ML annotation and the quality of ML recognition was measured in terms of F1 macro and Matthew’s Correlation Coefficient (MCC). To conclude the findings, the distributions of textual LIDs were compared to the textual MLID distributions of the CS data. The remainder of the paper is structured as fol- lows. The next section reviews the relevant re- search presented previously on MLF and LID in CS text and speech. The third section provides a detailed description of the methods used. This is followed by a section on experiments, which provides information on datasets, detailed imple- mentation, experiment descriptions as well as a discussion of results. Conclusions summarise and complete the paper. 2 Related work MLF theory has rarely been used to automatically analyse speech or text. Up until now it was only used for text augmentation or for Language Model (LM) adaptation for code-switching. For example additional grammatical information was used during LM construction or a self-supervised training procedure was set up which encouraged generation of CS utterances . MLID classification for CS text was carried out in Bullock et al. (2018) where the ML was identified based on the token and system POS majorities. Simultaneously in the speech processing domain a common technique to separate languages in CS is LID. LID of a whole CS utterance may be per- formed when CS is regarded as a separate language , in this case the component performs both LID and CS detection. A multi- lingual ASR system with an utterance-wise LID component as an auxiliary task was tested for CS utterances in Toshniwal et al. (2018) but the model was not able to generate CS text as a result. LID and language segmentation (LIS) systems make decisions based on similarity to the data they were trained on and, to the best of our knowledge, no study was done to de- termine if pretrained LID/LIS are able to predict a dominant language in a CS utterance. Neither MLID nor LID were previously used for CS analysis. However, statistical methods were introduced before (Guzmán et al., 2017) which can assess the nature of CS. Among the statistical methods only the M-index (Multilingual Index) quantifies the representation of the languages in multilingual corpora. While the M-index is useful to learn about the balance of the token LIDs, it might be insufficient to learn about the utterance LID and MLID distributions. The above indicates that theoretical methods to identify ML from text exist but previously there was only one attempt to determine MLID which was not based on the two ML determination prin- ciples. Furthermore, there are no existing MLID predictors from audio to the best of our knowl- edge. Therefore the objective of this study is to advance technologies for multilingual understand- ing and analysis by describing the implementation, comparison and performance of automatic MLID predictors from CS audio and text based on the ML determination principles from the MLF theory. 
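As an aside on the M-index mentioned above, the snippet below is a minimal illustration (not tied to any particular toolkit) of how the language balance of a token-tagged corpus can be quantified. It assumes the commonly used definition of the Multilingual Index, M = (1 − Σ p_j²) / ((k − 1) · Σ p_j²), where p_j is the proportion of tokens in language j and k is the number of languages.

```python
from collections import Counter

def m_index(lid_tags):
    """Multilingual Index over token-level language ID tags.

    Assumes M = (1 - sum(p_j^2)) / ((k - 1) * sum(p_j^2)); M is 0 for a
    monolingual corpus and approaches 1 for a perfectly balanced one.
    """
    counts = Counter(lid_tags)
    total = sum(counts.values())
    k = len(counts)
    if k < 2:
        return 0.0
    sum_p_squared = sum((c / total) ** 2 for c in counts.values())
    return (1.0 - sum_p_squared) / ((k - 1) * sum_p_squared)

# Token-level LID tags for a SEAME-style utterance (tags are illustrative).
tags = ["zh", "zh", "zh", "en", "en", "zh", "en"]
print(f"M-index = {m_index(tags):.3f}")
```

As noted above, a balanced M-index still says nothing about which language supplies the grammatical frame, which is what the MLID predictors introduced in the next section target.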
3 Principles for ML determination
The MLF theory principles mentioned above have to be implemented in order to compare the main language recognised by an LID system to the MLID. Each of them provides estimates for MLID but is conditioned on different evidence in the utterance. The Morpheme Order Principle is separated into two implementations of the 1st Principle (P1.1 and P1.2). The implementations of the principles, as derived from MLF theory, are described below.
3.1 Principle 1.1: The singleton principle
The ML provides the context for singly occurring words from the EL, which will be further referred to as "singleton insertions". Although the original principle operates on the level of lexemes, the current implementation operates on the level of words. Suppose there is a CS utterance y of length n, and let (y′, l′) = ((ε, l_ε), (y_1, l_1), ..., (y_n, l_n), (ε, l_ε)) be its morphemes with corresponding language ID labels, where ε is an empty morpheme and l_ε is an empty language morpheme tag from an empty language L_ε. If ((y_i, l_i), ..., (y_j, l_j)) constitute a word, where 0 < i < j < n + 1, l_k = L_2 for all i < k < j, and l_{i−1}, l_{j+1} ∈ {L_1, L_ε}, then the language of the context, L_1, is the ML, while L_2 is the embedded language. For example, in "哦你post在你的那个blog" Mandarin is the ML since it provides the context for the English singleton insertions.
3.2 Principle 1.2: The token order principle
In P1.2 the second part of the Morpheme Order Principle is implemented, which postulates that the morpheme order is determined by the ML. For example, in "你觉得我们speak clear enough 吗" the English translation of the auxiliary Mandarin verb 吗 would never appear at the end of an utterance in English, signifying that Mandarin is the ML in this utterance. Assume languages (L_1, L_2) ⊂ L are present in a bilingual utterance, where L is the set of all languages; then the original CS utterance y can be translated into two monolingual utterances ŷ_{L_1} and ŷ_{L_2}, obtained from the original utterance y by neural machine translation (NMT) systems g_{L_1} and g_{L_2}. Consider an LM P(L|y) which is used to provide the probability of the utterance belonging to a language L; then, given the two languages L_1 and L_2, classification leads to:

P(y | L = L_1) P(L = L_1) / P(y) ⋚ P(y | L = L_2) P(L = L_2) / P(y)    (1)

In the above, the denominator may be eliminated. The probability P(y | L) may be estimated using independent monolingual LMs P_L(y) and the translation ŷ_L defined above, P(y | L) ≈ P_L(ŷ_L), resulting in the following:

P_{L_1}(ŷ_{L_1}) / P_{L_2}(ŷ_{L_2}) ⋚ α    (2)

where α is a scaling factor for weighing the impact of the models. Taking the logarithm of (2) leads to:

log P_{L_1}(ŷ_{L_1}) − log P_{L_2}(ŷ_{L_2}) ⋚ log α    (3)

Assume the difference of the language log-probabilities can be expressed in terms of the factor α. This factor may be estimated by calculating the expectation of the log-probabilities of monolingual utterances scored by the monolingual LMs:

log α = E{log P_{L_1}(y_{L_1})} − E{log P_{L_2}(y_{L_2})}    (4)

All of the above leads to the following decision function:

ML = L_1 if log P_{L_1}(ŷ_{L_1}) − log P_{L_2}(ŷ_{L_2}) ≥ log α, and L_2 otherwise.    (5)

A visual representation of the resulting algorithm is shown in Figure 2.
Figure 1: Detection error tradeoff (DET) curve for possible log α values. The thin diamond marks the default value log α = 0, the thick diamond the result of log α estimation, and the red star the ground-truth log α.
Figure 2: Pipeline of the morpheme order-based principle for ML determination (P1.2).
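To make the two principles concrete, here is a minimal, illustrative sketch (not the authors' implementation) of the P1.1 singleton test over word-level LID tags and of the P1.2 decision rule of Equation (5). The (word, LID) input format, the placeholder translation and scoring callables, and the default log α = 0 are assumptions made for the example.

```python
from typing import Callable, List, Optional, Tuple

def p1_1_matrix_language(tokens: List[Tuple[str, str]]) -> Optional[str]:
    """Principle 1.1: a singly occurring embedded-language word whose neighbours
    (or the utterance boundary) belong to one other language votes for that
    context language as the ML. Returns the majority vote, or None if no
    singleton insertion is found.

    `tokens` is a list of (word, lid) pairs, e.g. [("哦", "zh"), ("post", "en"), ...].
    """
    lids = [lid for _, lid in tokens]
    votes: List[str] = []
    for i, lid in enumerate(lids):
        neighbours = {lids[j] for j in (i - 1, i + 1) if 0 <= j < len(lids)}
        if neighbours and lid not in neighbours and len(neighbours) == 1:
            votes.append(next(iter(neighbours)))
    return max(set(votes), key=votes.count) if votes else None

def p1_2_matrix_language(
    utterance: str,
    translate: Callable[[str, str], str],   # word-by-word translation into a language
    log_prob: Callable[[str, str], float],  # monolingual LM log-probability
    l1: str,
    l2: str,
    log_alpha: float = 0.0,
) -> str:
    """Principle 1.2, Equation (5): compare monolingual LM scores of the two
    word-by-word translations, shifted by the scaling factor log(alpha)."""
    score = log_prob(translate(utterance, l1), l1) - log_prob(translate(utterance, l2), l2)
    return l1 if score >= log_alpha else l2

# P1.1 example: English singletons inside a Mandarin context vote for Mandarin.
example = [("哦", "zh"), ("你", "zh"), ("post", "en"), ("在", "zh"),
           ("你", "zh"), ("的", "zh"), ("那个", "zh"), ("blog", "en")]
print(p1_1_matrix_language(example))  # -> "zh"
```

In the experiments below, translate would wrap the dictionary-based word-by-word translation and log_prob the monolingual Callhome LMs, with log α estimated as in Equation (4).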
Table 2: Monolingual dataset splits used for LM training in P1.2.

                 Unit        Callhome English  Callhome Mandarin  Callhome Spanish
Token set size   tokens      6160              6853               3236
Train            utterances  20029             15827              19672
Valid            utterances  6030              3959               5500
Test             utterances  2609              1775               2665

3.3 Principle 2: The system word principle

From the examples in Section 1 it is evident that there is an overlap between the content/system morpheme duality and the traditional content/function word opposition defined in linguistics, although they are not equivalent and the traditional classifications are also not strictly distinguishable. Therefore, in the implementation of the 2nd Principle (P2) for ML determination, a content/function Part-of-Speech (POS) duality is considered instead of the content/system morpheme duality. System POS are identified, namely determiners, auxiliaries, subordinating conjunctions and coordinating conjunctions, while the rest of the POS are considered content POS. The ML is determined in a CS utterance if one of the participating languages provided function POS for the utterance and the other language did not; the language that provided the function POS is determined to be the ML. Although CS POS taggers exist, none of them are available as open source, and since training a POS tagger is not a goal of this work, a monolingual POS tagger is used instead. For example, in the utterance "im okay with the 蛋黄" the determiner "the" is used and therefore the ML is determined to be English.

4 Experiments

4.1 Datasets

The experiments using the algorithms described above are carried out on monolingual Callhome subsets and two CS datasets: SEAME and the Miami subcorpus of the Miami-Bangor corpus (https://biling.talkbank.org/access/Bangor/Miami.html).

4.1.1 Monolingual data

Monolingual LM training for P1.2 was carried out using the Callhome datasets for English (LDC97T14), Mandarin (LDC96T16) and Spanish (LDC96T17). Pretrained LMs were not used in this work because they do not provide likelihood scoring of morpheme units. A summary of the datasets is presented in Table 2.

4.1.2 CS data

The CS spoken language corpora SEAME (120 hours) and Miami (35 hours) are used for analysis and acoustic MLID training. Agreement analysis is carried out for CS utterances from the SEAME and Miami corpora, and monolingual SEAME and Miami utterances are used for estimating the scaling factor α (Table 4). The monolingual subsets of SEAME and Miami are also used for training the mapping from the LID outputs to English, Mandarin and Spanish posteriors (LID_map). Mandarin characters in the SEAME corpus are word segmented, which is helpful when applying P1.1, a principle that operates with words. All the introduced principles require morpheme-level LID tag annotation, which is available for Miami and is automatically determined for SEAME based on the script (Latin vs logographic). Finally, an additional 91 MLID-annotated CS Miami utterances were used to measure the quality of MLID prediction from text and audio. The annotated MLID labels were assigned to the CS utterance transcriptions on the basis of determiner-noun-adjective complexes.

Table 4: CS dataset splits.

                          SEAME    Miami
Monolingual utterances    53086    38401
CS raw utterances         56951    2425
CS annotated utterances   -        91
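Before turning to the experiments, the P2 decision of Section 3.3 can be sketched as follows. The Universal Dependencies tags used for the four system categories are an assumption of this illustration, as is the (token, language, POS) input format.

```python
SYSTEM_POS = {"DET", "AUX", "SCONJ", "CCONJ"}  # assumed UD tags for the four system categories

def p2_matrix_language(tagged_tokens):
    """System word principle (P2) on one CS utterance.

    tagged_tokens: iterable of (token, language_id, pos_tag) triples, where the POS tags
    come from monolingual taggers run on the monolingual islands of the utterance.
    Returns the ML if exactly one language supplies system POS, otherwise None.
    """
    system_languages = {lid for _token, lid, pos in tagged_tokens if pos in SYSTEM_POS}
    return system_languages.pop() if len(system_languages) == 1 else None

# "im okay with the 蛋黄": English supplies the determiner "the", so English is the ML.
print(p2_matrix_language([
    ("im", "en", "PRON"), ("okay", "en", "ADJ"), ("with", "en", "ADP"),
    ("the", "en", "DET"), ("蛋黄", "zh", "NOUN"),
]))  # -> "en"
```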
"% English" row displays the percentage of utterances recognised as "English" LID or MLID. MCC Baseline refers to the word LID majority implementation . " 𝐿𝐼𝐷 " is a pretrained LID system, "𝐿𝐼𝐷 𝑚𝑎𝑝" column is a mapping trained on monolingual utterances from SEAME. 𝑀𝐿𝐼𝐷 𝑃1.1,𝑀𝐿𝐼𝐷 𝑃1.2and 𝑀𝐿𝐼𝐷 𝑃2are trained mappings similar to 𝐿𝐼𝐷 𝑚𝑎𝑝 but trained on CS data and labels generated from transcriptions by corresponding principles. 𝑀𝐿𝐼𝐷 𝑃1.1,𝑀𝐿𝐼𝐷 𝑃1.2and𝑀𝐿𝐼𝐷 𝑃2contain correlation values with the target MLID determined from text ( italic ) and correlations with other MLID targets. P1.1 P1.2 P2 𝐿𝐼𝐷 𝐿𝐼𝐷 𝑚𝑎𝑝𝑀𝐿𝐼𝐷 𝑃1.1𝑀𝐿𝐼𝐷 𝑃1.2𝑀𝐿𝐼𝐷 𝑃2 Coverage 36% 100% 31% 100% 100% 100% 100% 100% % English 24% 46% 49% 18% 43% 42% 44% 45% MCC Baseline 0.99 0.28 0.69 0.33 0.33 0.5 0.38 0.46 MCC P1.1 1 0.36 0.82 0.41 0.5 0.67 0.47 0.52 MCC P1.2 0.36 1 0.31 0.09 0.14 0.17 0.3 0.16 MCC P2 0.82 0.31 1 0.33 0.45 0.49 0.4 0.6 small data coverage: only for 36% (SEAME) and 60% (Miami) of the CS data the ML is determined. In P2 POS tags are computed for constituent mono- lingual islands (segments) of a CS utterance using a pretrained CNN-based POS tagger . P2 covered 31% (SEAME) and 58% (Miami) of all of the CS examples. Furthermore, a baseline MLID determiner from text is imple- mented which is based on the token LID count following Bullock et al. (2018). The implementation of P1.2 includes three com- ponents: a Machine Translation (MT) system, a pseudomorpheme tokeniser and a language model (LM). CS utterances are translated word by word using Wiktionary5to preserve the token order. The English and Spanish LMs are trained on the to- kenised English and Spanish Callhome datasets. The tokenisation was carried out using a stemmer where stem and affix would be separated. For the Mandarin Callhome dataset separate characters are regarded as morphemes. The two Transformer- based LMs with 2 lay- ers, 2 attention heads per layer are trained for 25 epochs with negative log-likelihood loss on one 3080 Nvidia GPU for 1 hour. Validation and test perplexities for the three languages are presented in Table 5. 5https://www.wiktionary.org/Table 5: Perplexities calculated for the validation and test subsets of monolingual Callhome data. English Mandarin Spanish Valid 48.97 94.98 57.76 Test 57.61 98.16 52.30 Moreover, a preliminary experiment is carried out to evaluate if the trained LMs have the ability to detect the original word order (WO) among its permuted variants (up to 20 word permutations). The sequence of tokens for which the probability was the highest was chosen as the predicted original WO. Comparing the sequence with chosen WO to the original WO leads to 37% accuracy for SEAME and 60% for Miami. Table 6: Outcomes of 𝛼estimation. "- 𝛼MCC" is the correlation measured between the MLID determined by the unscaled P1.2 approach and MLID labels from other principles (+ true MLID labels for Miami). "+ 𝛼MCC" are the correlation measurements with the scaled P1.2. SEAME Miami P1.1 P2 P1.1 P2 true -𝛼MCC 0.31 0.33 0.36 0.08 0.41 +𝛼MCC 0.36 0.31 0.38 0.09 0.37 Outputs of the pre-trained monolingual LMs have different probability distributions, therefore, as described in Section 3, the factor 𝛼is used to allow for scale changes. 𝛼is derived from expecta- tions of the probabilities yielded on monolingual examples and their translations following Equation 4. As a result of 𝛼estimation the MCC of SEAME P1.1/P1.2, Miami P1.1/P1.2 and Miami P2/P1.2 5\nTable 7: Experimental results for Miami. 
" 𝐿𝐼𝐷 𝑚𝑎𝑝" column is a mapping trained on monolingual utterances from Miami. "F1-macro true" and "MCC true" are the metric values when comparing the outputs of the systems to ground truth ML annotation for Miami. P1.1 P1.2 P2 𝐿𝐼𝐷 𝐿𝐼𝐷 𝑚𝑎𝑝𝑀𝐿𝐼𝐷 𝑃1.1𝑀𝐿𝐼𝐷 𝑃1.2𝑀𝐿𝐼𝐷 𝑃2 Coverage 60% 100% 58% 100% 100% 100% 100% 100% % English 45% 31% 31% 43% 30% 31% 42% 31% F1-macro true 100% 67% 93% 56% 53% 56% 60% 56% MCC true 1.0 0.37 0.86 0.27 0.35 0.27 0.24 0.38 MCC Baseline 0.99 0.28 0.67 0.59 0.81 0.83 0.42 0.8 MCC P1.1 1 0.38 0.81 0.45 0.42 0.85 0.43 0.82 MCC P1.2 0.38 1 0.09 0.26 0.34 0.35 0.53 0.34 MCC P2 0.81 0.09 1 0.7 0.86 0.87 0.51 0.82 has increased (Table 6). Additionally, a "true" 𝛼 value is calculated using ground truth MLID for Miami and P1.1 and P2 MLID for SEAME, and they are compared to the estimated 𝛼. DET plots and highlighted thresholds in Figure 1 demonstrate that by using the estimated 𝛼the amount of False Positives (FP) and False Negatives (FN) becomes more balanced for SEAME. For Miami the 𝛼es- timation does not lead to more balanced FP and FN but this improvement is not observed due to the limited test set and other reasons which will be discussed later (Section 4.5.2). 4.3 Language Identification If one assumes that is a "dominant" language that most acoustically resembles the spoken CS utter- ance, then a conventional LID system can be used as an ML determiner. An ECAPA-TDNN model pretrained on V oxlingua- 107 (Valk and Alumäe, 2020) was used to automati- cally detect the dominant language from audio data (Table 3 and 7, column 𝐿𝐼𝐷 ). The ECAPA-TDNN model was trained to recognise a large number of languages. In order to limit the models to binary task a mapping function was trained from the out- puts based on a fully-connected neural network (Multi-Layer Perceptron, MLP) classifier. The mapping function is trained to map 107 language output posteriors to the binary output of the lan- guages participating in CS. LID is a challenging task for accented data such as monolingual subsets from SEAME and Miami but still achieves 82% and 79% F1-macro respectively on cross-validation among 5 splits. 4.4 ML identification from audio One can train an MLP mapping model using the LID posterior distribution to also predict P1.1, P1.2 and P2 from audio. Due to the different coverage rates of P1.1, P1.2 and P2 of the CS data the amount of training data would vary greatly: 16582 for P1.1, 43068 for P1.2 and 23868 for P2. The resulting systems will be further referred to as 𝑀𝐿𝐼𝐷 𝑃1.1, 𝑀𝐿𝐼𝐷 𝑃1.2and𝑀𝐿𝐼𝐷 𝑃2. 4.5 Correlation analysis The agreement between the implemented princi- ples is measured using the MCC metric since the MLID generated by the principles are not human annotation and are automatically generated. F1- macro is computed only in cases when the human- annotated Miami subset is compared to the MLID approaches. 4.5.1 Correlation between P1.1, P1.2 and P2 P1.1, P1.2 and P2 were applied to CS text data and the agreement analysis is presented in Table 3 and Table 7 for SEAME and Miami respectively in the first three columns. P1.1 and P2 have to meet certain conditions to be applied, therefore they do not have full coverage of CS data: 36% and 31% for SEAME, 60% and 58% for Miami. Among the three principles P1.1 and P2 have the greatest correlation (0.82 for SEAME and 0.81 for Miami), P1.1/P1.2 demonstrates less correlation (0.36 and 0.38), while the least correlation is ob- served between P1.2 and P2 (0.31 and 0.09). 
4.5 Correlation analysis

The agreement between the implemented principles is measured using the MCC metric, since the MLIDs generated by the principles are not human annotations but are automatically generated. F1-macro is computed only in the cases where the human-annotated Miami subset is compared to the MLID approaches.

4.5.1 Correlation between P1.1, P1.2 and P2

P1.1, P1.2 and P2 were applied to the CS text data, and the agreement analysis is presented in the first three columns of Table 3 and Table 7 for SEAME and Miami respectively. P1.1 and P2 have to meet certain conditions to be applied and therefore do not have full coverage of the CS data: 36% and 31% for SEAME, 60% and 58% for Miami. Among the three principles, P1.1 and P2 have the greatest correlation (0.82 for SEAME and 0.81 for Miami), P1.1/P1.2 demonstrates less correlation (0.36 and 0.38), while the least correlation is observed between P1.2 and P2 (0.31 and 0.09). P1.1 and the baseline have almost identical behaviour, which is expected (0.99 and 1.0), whereas less correlation with the baseline is observed for P2 (0.69 and 0.67) and P1.2 (0.28 and 0.28).

Table 8: Distributions of languages in the CS corpora. Utterance-level LID for the monolingual subsets is in the "Utterance LID" row, token-level LID for CS is in the "Token LID" row, and utterance-level textual ML for CS is in the P1.1/P1.2/P2 rows.

                        SEAME                 Miami
                        English   Mandarin    English   Spanish
Utterance LID (mono)    54%       46%         68%       32%
Token LID (CS)          42%       58%         66%       34%
P1.1 (CS)               23%       77%         45%       55%
P1.2 (CS)               44%       56%         31%       69%
P2 (CS)                 49%       51%         31%       69%

The high correlation values for P1.1 and P2 show that the MLF framework can reliably predict the structure and behaviour of CS text. This enables the MLIDs generated by the rule-based principles to be used as pseudo-labels in applications.

4.5.2 Correlation of P1.1/P1.2/P2 and the acoustic LID/MLID

Figure 3: Correlations between acoustic LID and MLID outputs and textual P1.1, P1.2 and P2 for CS SEAME data. Each bar segment represents the amount of correlation of a LID or MLID model with one textual principle, so the whole bar represents the sum of the correlations.

The ML determined from the CS text is compared to the LID computed from the corresponding audio. The procedure for the LID experiments is described in the previous subsection. Columns 4 and 5 in Tables 3 and 7 show the amount of correlation between the MLID derived from text and the recognised LID classes. The same columns for Miami in Table 7 also include F1-macro and MCC for the annotated MLID subset. Training LID_map on the monolingual utterances seems to increase the MCC (from 0.27 to 0.35) but decrease the F1-macro (from 56% to 53%) for the CS Miami data.

Suppose a conventional LID system determines the dominant language in CS audio based on the majority of the time each language is spoken. Then the true annotation may be approximated by counting the textual token LIDs in a CS utterance (Baseline). However, the correlation analysis shows that the MLID systems are better predictors of the token LID majority (columns 4-5 vs 6-8, row MCC Baseline in Tables 3 and 7).

Further experimentation comprises comparing MLID_P1.1, MLID_P1.2 and MLID_P2 with P1.1, P1.2 and P2. For the SEAME data, MLID_P1.2 leads to the overall highest out-of-domain MCC scores (0.47 + 0.4 = 0.87) for the textual principles P1.1 and P2. A similar inspection of the Miami results shows the biggest MCC scores for MLID_P1.1 (0.35 + 0.87). For the annotated subset of the Miami data, MLID_P1.2 leads to the biggest F1-macro among all systems (60%), while MLID_P2 leads to the biggest MCC score (0.38).

Lastly, the MCC scores between the textual and acoustic MLID determiners are summed up for every LID/MLID approach (Figures 3 and 4). The results show that the correlation of the proposed approaches with the MLID is higher than that of the LID systems in all cases apart from MLID_P1.2 for the CS Miami data. The latter is due to the word order of English and Spanish being similar, in contrast to English/Mandarin CS. This means the morpheme order has better discriminative power in the case of English/Mandarin CS than in English/Spanish CS.

Figure 4: Correlations between acoustic LID and MLID outputs and textual P1.1, P1.2 and P2 for CS Miami data.
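The per-system totals visualised in Figures 3 and 4 amount to summing pairwise MCC agreements; a small sketch, assuming per-utterance label lists are available for each system:

```python
from sklearn.metrics import matthews_corrcoef

def summed_agreement(acoustic_labels, textual_label_sets):
    """Sum of MCC agreements of one acoustic LID/MLID output with the textual principles.

    acoustic_labels:    list of per-utterance labels produced by an acoustic system
    textual_label_sets: dict mapping principle name -> list of per-utterance labels
    """
    per_principle = {name: matthews_corrcoef(labels, acoustic_labels)
                     for name, labels in textual_label_sets.items()}
    return per_principle, sum(per_principle.values())
```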
4.5.3 P1.1/P1.2/P2 distribution analysis

In the last step of the analysis, the distributions of languages are measured at the utterance level for monolingual LID (the "Utterance LID" row in Table 8), at the token level for CS LID (the "Token LID" row in Table 8) and at the utterance level for textual MLID on CS (the P1.1/P1.2/P2 rows in Table 8). The numbers reveal that although the majority of the monolingual utterances are English in both corpora (54% for SEAME and 68% for Miami), English is not the preferred ML when CS occurs in the utterance, for any of the principles. The token LID distribution also does not seem to be correlated with the choice of the ML in these corpora. In SEAME there is a strong preference towards using Mandarin as the ML (77%) when EL insertions are single words (P1.1). The preference is not as strong for Spanish in the CS Miami subset (55%), but it is still a big difference in comparison to the monolingual distribution (32%). P1.2 and P2 show a similar distribution of MLIDs, with the numbers indicating the preference of speakers to use the non-English language as the grammatical frame for a CS utterance.

5 Conclusion

To the best of our knowledge, this is the first work that carries out Matrix Language (ML) determination of code-switched (CS) utterances precisely on the basis of the Matrix Language Frame (MLF) theory and that compares the Matrix Language Identity (MLID) to the acoustic Language Identity (LID). Three methods for ML determination in text and audio are implemented using the ideas and concepts of the MLF theory. An extensive correlation analysis of the MLID systems from text and speech is carried out. A pretrained LID system (LID) is adapted to the data by training a mapping function LID_map, while mapping functions MLID_P1.1, MLID_P1.2 and MLID_P2 for MLID are also trained. MLID consistently outperforms LID for ML determination from audio based on the Matthews Correlation Coefficient (MCC). Comparing the results to the ground-truth ML annotation shows that the trained MLID_P1.2 and MLID_P2 outperform LID in terms of F1-macro and MCC respectively. Finally, this approach reveals that despite English dominating as the utterance LID for the monolingual utterances, the non-English languages (Mandarin or Spanish) set the grammatical frame for CS utterances.

The proposed approaches can be used for accurate automatic analysis of CS text and audio. They can provide insight into the nature of CS for whole datasets, but also for separate speakers and even single utterances. Further work will explore the usefulness of the MLID implementations in Natural Language Processing and Automatic Speech Recognition (ASR) applications, namely in language and dialogue modelling; in end-to-end multi-task ASR the MLID component will be used as part of the ASR setup. Additionally, further development of P2 is required, in which the system morphemes would be automatically determined from a given set of CS data rather than using a closed set of POS tags.

Limitations

The main limitation of the method is related to data availability: there is limited ML-annotated CS data openly available to date, so it is problematic to assess the quality of ML classification. ML identity can be determined in CS data using P1.1, but that principle can only be applied in the case of singleton EL insertions. Since there is no ML annotation, correlation was measured for most of the experiments, which is difficult to assess.
Finally, although providing valuable insight into the CS data, the usefulness of the method is yet to be tested in NLP and ASR applications.
[ "Spoken Language Identification in Unseen Target Domain Using Within-Sample Similarity Loss", "Multilingual and code-switching ASR challenges for low resource Indian languages", "VOXLINGUA107: A Dataset for Spoken Language Recognition", "ECAPA-TDNN: Emphasized Channel Attention, Propagation and Aggregation in TDNN Based Speaker Verification", "Investigation of Methods to Improve the Recognition Performance of Tamil-English Code-Switched Data in Transformer Framework", "Linguistically Motivated Parallel Data Augmentation for Code-Switch Language Modeling", "Improving Code-Switched Language Modeling Performance Using Cognate Features", "Code-switching within the noun phrase: Evidence from three corpora", "Code-switching Sentence Generation by Generative Adversarial Networks and its Application to Data Augmentation", "Building a Unified Code-Switching ASR System for South African Languages", "Predicting the presence of a Matrix Language in code-switching", "Multilingual Speech Recognition with a Single End-to-End Model", "Hash Embeddings for Efficient Word Representations", "Metrics for Modeling Code-Switching Across Corpora", "Attention is All you Need", "Grammatical Constraints on Intra-sentential Code-Switching: From Theories to Working Models", "Syntactic and Semantic Features For Code-Switching Factored Language Models", "Codeswitching Among Multilingual Learners in Primary Schools in South Africa: An Exploratory Study", "PRO-CS : An Instance-Based Prompt Composition Technique for Code-Switched Tasks", "Toward the Limitation of Code-Switching in Cross-Lingual Transfer", "SEAME: a Mandarin-English code-switching speech corpus in south-east asia", "Contact Linguistics: Bilingual Encounters and Grammatical Outcomes", "A linguistic and pragmatic analysis of Hausa-English code-switching" ]
Enhancing Agent Learning through World Dynamics Modeling
Enhancing Agent Learning through World Dynamics Modeling

Abstract

Large language models (LLMs), trained on vast amounts of internet data, have developed a broad understanding of the world, enhancing the decision-making capabilities of embodied agents. This success is largely due to the comprehensive and in-depth domain knowledge within their training datasets. However, the extent of this knowledge can vary across different domains, and existing methods often assume that LLMs have a complete understanding of their environment, overlooking potential gaps in their grasp of actual world dynamics. To address this gap, we introduce Discover, Verify, and Evolve (DiVE), a framework that discovers world dynamics from a small number of demonstrations, verifies the correctness of these dynamics, and evolves new, advanced dynamics tailored to the current situation. Through extensive evaluations, we analyze the impact of each component on performance and compare the automatically generated dynamics from DiVE with human-annotated world dynamics. Our results demonstrate that LLMs guided by DiVE can make better decisions, achieving rewards comparable to human players in the Crafter environment.

1 Introduction

By absorbing internet-scale knowledge autoregressively, Large Language Models (LLMs) develop a nuanced understanding of world dynamics. This understanding enables them to perform well on tasks like question answering, planning, and reasoning over commonsense knowledge (Yao et al., 2022, 2024; Besta et al., 2024). However, despite these successes, LLMs often face significant challenges. Oftentimes, domain-specific information crucial for making precise decisions in downstream tasks is either absent or insufficient in the LLM's training data. Further, the noisy nature of internet-derived data offers no guarantee of its accuracy, which also frequently leads to suboptimal decisions.

To make optimal decisions, LLMs require high-quality, domain-specific information, which is often impractical to source and verify within the vast expanse of internet data. Evaluating such information necessitates locating it within the dataset and verifying it against ground-truth domain knowledge, a process that is both labor-intensive and error-prone. Consequently, ensuring LLMs possess the comprehensive and accurate knowledge necessary for decision-making remains a formidable challenge.

Even assuming LLMs have a general understanding of a downstream domain, achieving optimal decision-making requires in-depth, contextual knowledge. For instance, mastering the game of Go involves not only understanding the basic rules but also developing strategies tailored to specific board states. This specialized knowledge varies across states of the same domain and is essential for informed decision-making. Deriving this knowledge in real time can further constrain the search space and reduce computational overhead.

To address these challenges and enable LLMs to make better decisions, we propose Discover, Verify, and Evolve (DiVE), a framework designed to model world dynamics from demonstrations and to guide LLMs during the decision-making process. Our approach leverages the concept of a World Model (Ha and Schmidhuber, 2018; Hafner et al., 2020, 2023), wherein the framework discovers, refines, and evolves domain-specific knowledge to create a robust and adaptive agent.
DiVE comprises three distinct components: i) The Discoverer, which uncovers dynamics from demonstrations, ranging from primitive actions and objects to abstract subtasks and the topological order of subgoals within the domain. This stage ensures a broad capture of domain dynamics.
ii) The Verifier, which filters out inaccuracies caused by the tendency of LLMs to hallucinate, ensuring that only precise and reliable information is retained. This step is crucial for maintaining the integrity of the knowledge base. iii) The Evolver, which generates in-depth, state-appropriate dynamics, such as strategic plays or contextual decisions. This component ensures that the agent is equipped with actionable knowledge tailored to the current state, enhancing real-time decision-making.

Figure 1: Overall pipeline of DiVE. Left: learning basic game dynamics from offline demonstrations (Section 3.1). We highlight the incorrect game dynamics identified by the Verifier (labeled by ×); they are evidence that the LLM hallucinates false facts, perhaps because of memorizing Minecraft data. Right: learning situational strategies from online interactions (Section 3.2). For simplicity, we omit the verbalization process in the right figure.

To demonstrate the potential of DiVE, we implemented an agent for the Crafter environment. Within this setting, DiVE learns comprehensive and accurate dynamics from demonstrations and further guides the agent through the decision-making process by evolving in-depth strategies. This guidance enables the agent to outperform a set of strong baselines, including the best reported systems, and to achieve rewards comparable to human players. We also provide both quantitative and qualitative analyses that help in understanding DiVE's behavior; the results evince the effectiveness of the proposed framework.

In summary, our primary contribution is a general framework that autonomously learns world dynamics from demonstrations and guides LLMs in decision-making by evolving contextual knowledge. This approach not only enhances the capabilities of LLMs in specific domains but also provides a versatile foundation for creating agents capable of mastering complex tasks across various environments through continuous learning and refinement.

2 Problem Formulation

The Crafter Environment. The Crafter survival game is a long-horizon environment that typically requires hundreds of steps to successfully collect the diamond. The agent must determine the topological order of the tech tree, survive dangerous creatures, and manage diminishing levels of food, drink, energy, and health. Each Crafter environment C_i ∈ C is an interactive environment: at every game step t, an agent is expected to issue an action a_t ∈ A in response to an observation o_t ∈ O, and the environment then returns a new observation o_{t+1} of the resulting state.
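For orientation, the interaction loop just described (observe o_t, issue a_t ∈ A, receive o_{t+1}) looks roughly as follows with the open-source crafter package; the gym-style API is assumed from the benchmark's documentation, and the random policy is only a stand-in for the LLM agent.

```python
import crafter  # pip install crafter

env = crafter.Env(seed=0)
obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()          # an actual agent would plan a_t here
    obs, reward, done, info = env.step(action)  # o_{t+1}, reward, episode-end flag, achievement info
```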
Given a set of human demonstrations H = {τ_i}_{i=0}^{N} = {{(o_t^i, a_t^i, r_t^i)}_{t=0}^{|τ_i|}}_{i=0}^{N}, the objective is to unlock as many achievements as possible within an achievement graph G = (V, E). The graph G is a directed acyclic graph, where each vertex v ∈ V denotes an achievement and each edge (u, v) ∈ E indicates that the achievement v has a dependency on the achievement u. To unlock an achievement v, all of its ancestors must be unlocked in advance.

Challenge: The Knowledge Gap. In this work, we assume a setting where a pre-trained LLM-based agent is used to solve tasks in the Crafter environment. Conceptually, we use K_data to denote the set of knowledge embedded in the LLM through its training process (including both pre-training and fine-tuning). We use K_target to denote the universal set of knowledge required to solve the target domain (i.e., Crafter). For the LLM to be useful, we hope K_rel, the subset of knowledge in K_data that is relevant to K_target, to be as large as possible. We also hope that K_rel contains more accurate knowledge K+ than inaccurate knowledge K-, where K_rel = K+ ∪ K-. Therefore, we define three desirable properties to help conceptualize our system design:

• Recall R = |K+| / |K_target| measures the extent to which the knowledge required to solve tasks in the target domain is covered by the LLM's knowledge. A low recall typically suggests a large distribution gap between the LLM's training data and the target domain.

• Precision P = |K+| / |K_rel| measures how accurate the LLM's knowledge is when grounded in the target domain. An example of inaccurate game dynamics is shown in Figure 1 (left), where the LLM hallucinates that defeating a skeleton will drop items, which is not true in Crafter. This may be because the LLM has memorized some Minecraft-specific data.

• Depth D measures the abstraction levels of the knowledge representations. Knowledge can be represented at different abstraction levels, ranging from elementary-level game rules to high-level strategies.

We want to acknowledge that it is nontrivial to directly quantify the exact knowledge overlap between an LLM and a dataset. We adopt the above concepts from Information Retrieval and related literature to guide our system design. In Section 3, we show mathematically how our system improves on all three dimensions; in Section 4, we design a silver standard for quantifying the knowledge overlap, which provides evidence that our system can indeed learn useful game dynamics and, to some extent, fill the knowledge gap.

3 Method

In an ideal world, one could alleviate the aforementioned knowledge gap by fine-tuning the LLM to adapt to the target domain. However, this approach often proves less practical due to its reliance on abundant annotated data and its significant computational overhead. Our framework, DiVE, is designed to bridge the knowledge gap while taking all three desirable properties (i.e., recall, precision, and depth) into consideration, but without the need to collect extensive data from the target domain. It is a prompt-based method that learns world dynamics W from the environment.

As illustrated in Figure 1, DiVE is initially bootstrapped from a handful of human demonstration trajectories H = {τ_i}_{i=0}^{N} = {{(o_t^i, a_t^i, r_t^i)}_{t=0}^{|τ_i|}}_{i=0}^{N}. We verbalize an observation o_t into the language space as õ_t using a Verbalizer (i.e., õ_t = Verbalizer(o_t)), resulting in transformed trajectories τ̃_i = {(õ_t^i, a_t^i, r_t^i)}_{t=0}^{|τ_i|}.
Next, the Discoverer distills a set of world dynamic candidates W̃ = {W̃+, W̃-} from the human demonstrations H, where W̃+ and W̃- represent the correct and inaccurate world dynamic sets, respectively. Empirically, we find that it is often inevitable to have W̃- in W̃, either due to the backbone LLM's imperfection in discovering meaningful knowledge from trajectory data or due to its tendency to hallucinate. Therefore, we use a Verifier to filter out potentially invalid and conflicting world dynamic candidates within W̃; we denote the remaining valid dynamics as W. Next, we use an Evolver, which is designed to derive advanced game-playing strategies I based on the world dynamics W and the language description of the observation õ_t.

The final decision-making process on primitive actions a_t ∈ A is hierarchically decomposed into planning tasks on sub-goals SG, sub-tasks ST, and actions A. The planning procedure is further guided by both W and I. In cases where W ≠ ∅, R, P, and D are guaranteed to increase, as formulated below:

R: \frac{|K^+|}{|K_{target}|} \xRightarrow[\mathcal{H}]{\text{Discoverer}} \frac{|K^+| + |W|}{|K_{target}|}    (1)

P: \frac{|K^+|}{|K_{rel}|} \xRightarrow[\tilde{W},\,\mathcal{H}]{\text{Verifier}} \frac{|K^+| + |W|}{|K_{rel}| + |W|}    (2)

D: \emptyset \xRightarrow[\text{Verifier}]{\text{Discoverer}} W \xRightarrow[W,\,\mathcal{H}]{\text{Evolver}} \mathcal{I} \cup W    (3)

The DiVE framework can be split into two stages: an offline dynamics learning stage and an online strategy learning stage.

3.1 Offline Dynamics Learning

The offline dynamics learning procedure aims at bridging the gap between the LLM's understanding and the basic rules of the Crafter world by learning the world dynamics W as a prior for further decision-making. Rather than extracting world dynamics from human-authored game manuals or handbooks (Wu et al., 2024b,a), which can be difficult to obtain in many real-world scenarios, we explore learning the world dynamics W from experiences H, which is arguably more accessible.

Hierarchical Curriculum Learning. Given the varying complexities in learning the dynamics of different elements in Crafter, we adopt curriculum learning: a sequential learning approach progressing from simple to more challenging dynamics, facilitating effective learning. Specifically, we learn the dynamics of each element within the task decomposition hierarchy, denoted as TD = {A + (O), ST, SG}, where O is the set of objects in Crafter. Our approach starts with elements of lower abstraction, such as actions a ∈ A, and progresses to higher-abstraction elements, like sub-goals sg_i ∈ SG. The sub-goal sequence SG = [sg_1, sg_2, ...] is an ordered sequence for unlocking achievements in the achievement graph G = (V, E), with SG = TopologicalSort(G) and each sg_i being a vertex in V. We use the Discoverer to extract this order from the human demonstrations H. The sub-tasks ST are defined as the nodes of the achievement graph G, i.e., ST = V. Achieving a sub-goal sg_i may require completing several sub-tasks multiple times. This approach ensures a logical progression through the tasks, enhancing the learning and application of Crafter's dynamics.
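The sub-goal ordering SG = TopologicalSort(G) can be obtained with a standard topological sort over the achievement dependencies; the toy graph below is illustrative only and does not reproduce Crafter's full tech tree.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Toy achievement graph: each achievement maps to the achievements it depends on.
achievement_deps = {
    "collect_wood": set(),
    "place_table": {"collect_wood"},
    "make_wood_pickaxe": {"place_table"},
    "collect_stone": {"make_wood_pickaxe"},
}

sub_goal_sequence = list(TopologicalSorter(achievement_deps).static_order())
print(sub_goal_sequence)  # ['collect_wood', 'place_table', 'make_wood_pickaxe', 'collect_stone']
```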
Discoverer. The Discoverer is designed to identify dynamic candidates W̃ relating to the elements within the task decomposition hierarchy TD. A single dynamics-discovering step for an element E ∈ TD involves the following three main steps:

1. Construction of a Semantic Experience Bank (B^E): For each element E, a semantic experience bank B^E is constructed from the human demonstrations H. This bank stores experiences transformed from H into a granularity suitable for analyzing the dynamics related to E. The transformation involves chunking and summarizing the verbalized human demonstrations H̃ to capture the essential semantic details.

2. Sampling of Relevant Experiences: For each attribute of an instance e ∈ E, a subset of experiences B^E_e that are relevant to the instance e is sampled from B^E.

3. Identification of Dynamic Candidates: A dynamic candidate w̃ is identified from the subset B^E_e by recognizing patterns that are consistent across all experiences within B^E_e.

The action-level semantic experience bank, denoted B^A, stores transition tuples derived from the verbalized human demonstrations H̃, specifically B^A = {(õ_t, a_t, õ_{t+1})_i}_{i=1}^{|B^A|}. Similarly, the object-level semantic experience bank B^O collects individual observations related to specific object interactions from H̃, stored as B^O = {õ_i}_{i=1}^{|B^O|}. The sub-task-level semantic experience bank B^{ST} aggregates trajectory segments that represent the completion of sub-tasks, formatted as B^{ST} = {(õ_t, ..., a_{t_{st}}, õ_{t_{st}+1})_i}_{i=1}^{|B^{ST}|}, where t_{st} denotes the time step at which a sub-task st ∈ ST is completed. For sub-goals, the sub-goal-level experience bank B^{SG} records the sequences of sub-tasks that lead to the completion of sub-goals, expressed as B^{SG} = {(st_t, ..., st_{t_{sg}})_i}_{i=1}^{|B^{SG}|}, where t_{sg} is the time step at which the sub-goal sg is achieved.

For action-level dynamics learning, the relevant experiences B^A_a are compiled by randomly sampling transition tuples from B^A in which the action a is successfully executed. A similar approach is applied to the dynamics learning of the other elements within the task decomposition hierarchy TD.

For action-level dynamics discovery, we identify the prerequisites and outcomes of each action (e.g., the action MakeWoodPickaxe requires 1 wood). For object-level dynamics, we concentrate on the co-occurrence relationships between objects and their occurrences over time. The attribute set for a sub-task typically includes the general steps required to complete the task, along with its prerequisites, outcomes, and termination conditions. The only attribute of interest for a sub-goal is its correct position within the sub-goal sequence SG.

Verifier. Dynamics discovery is susceptible to noise arising from several sources, such as confounders, hallucinations by the LLM, and imperfections in the LLM's ability to derive meaningful knowledge from trajectory data. To mitigate these issues, we introduce a dynamics verifier designed to filter out noisy dynamic candidates from W̃. For each dynamic candidate w ∈ W̃, the Verifier initiates the process by sampling a subset of relevant semantic experiences B^E_e from the corresponding semantic experience bank B^E. In this context, w represents a dynamic candidate related to one attribute of the instance e ∈ E, where E ∈ {A + (O), ST, SG} is an element of the task decomposition hierarchy TD. The verification of w proceeds as follows: w is identified as inaccurate, and consequently filtered out, if it does not consistently hold across all experiences within B^E_e or if it conflicts with any other established dynamics. The dynamics that withstand this verification process are classified as verified dynamics, denoted as W.
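A minimal sketch of this filtering logic; the `holds` and `conflicts` predicates stand in for the LLM-based consistency and contradiction checks and are placeholders.

```python
def verify_dynamics(candidates, sample_experiences, holds, conflicts):
    """Keep a dynamic candidate only if it is consistent with all sampled
    experiences and does not contradict the dynamics accepted so far.

    sample_experiences(candidate) -> relevant experiences from the semantic bank
    holds(candidate, experience)  -> bool  (placeholder for the LLM-based check)
    conflicts(candidate, accepted) -> bool (placeholder for the contradiction check)
    """
    verified = []
    for candidate in candidates:
        experiences = sample_experiences(candidate)
        if all(holds(candidate, e) for e in experiences) and not conflicts(candidate, verified):
            verified.append(candidate)
    return verified
```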
3.2 Online Strategy Learning

To effectively integrate the learned world dynamics W into the Crafter environments, we deploy an LLM-based agent defined by π: S × W → P(A), where S represents the state space, A denotes the action space, and P symbolizes the probability distribution over the action space. Rather than directly mapping the world dynamics W and the observation o_t of the current state s_t to the action a_t, we address the challenge of long-horizon planning by enhancing decision-making with an online strategy learning method. This approach decomposes the planning process into three distinct tasks: sub-goal planning, sub-task planning, and action planning.

Sub-goal Planning. Given that the sub-goal sequence SG = [sg_1, sg_2, ...] is derived from the human demonstrations H and treated as a fixed sequence, we employ a simple heuristic for sub-goal planning: when a sub-goal is completed, the current sub-goal is updated to the first uncompleted sub-goal within SG.

Sub-task Planning. For a given current sub-goal sg_i, we have developed an LLM-based sub-task planner. This planner evaluates and ranks all sub-tasks st ∈ ST based on the learned world dynamics W, the verbalized current observation õ_t, and the most recently planned sub-task st_{t-1}. The top-ranked sub-task is then set as the current sub-task st_t. To ensure precise execution, the completion of a sub-task st is contingent upon meeting its specific termination condition. This condition is assessed by querying an LLM with the current verbalized observation, the verbalized observation from the time step when the sub-task began, and the termination conditions of the current sub-task.

Online Strategy Deriving. In addition to learning the basic rules of the Crafter environment (i.e., the world dynamics W), we also explore the development of advanced game-playing strategies based on these dynamics. Unlike the world dynamics, the strategy space is often too vast to fully explore. To address this, we propose evolving the dynamics into strategies I using an online learning method. This method reduces the search space by conditioning not only on the dynamics W but also on the verbalized current observation õ_t and the sub-task st_i. This targeted approach helps generate strategies that are more grounded and responsive to the current game scenario than those developed through offline methods. To facilitate this, we have designed an LLM-based Evolver that develops strategy candidates Ĩ through deductive reasoning applied to W; specifically, the Evolver derives strategy candidates by rules of inference, such as modus ponens. These strategy candidates Ĩ are evaluated for their validity and are ranked by an LLM-based critic according to their usefulness. Lastly, the valid and useful candidates are added to the situational strategy set I.

Action Planning. The final action selection process is carried out in two primary steps (sketched below):

1. Invalid Action Masking: This step masks all actions that are not feasible under the current game conditions, based on the verified dynamics W and the current observation õ_t.

2. Action Selection: From the set of valid actions, a specific primitive action a is chosen. This selection is based on several factors: the current sub-task st_i, the verbalized current observation õ_t, the world dynamics W, a windowed history of previously planned actions and observations, and the derived strategies I.
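A sketch of the two-step action planning described above; `valid_check` stands in for the rule-based precondition check and `choose_with_llm` for the LLM call, and the prompt layout is only an illustrative assumption.

```python
def plan_action(candidate_actions, observation, dynamics, sub_task, strategies, history,
                valid_check, choose_with_llm):
    """Step 1: mask infeasible actions; step 2: let the LLM pick among the valid ones.

    valid_check(action, observation, dynamics) -> bool  (placeholder precondition check)
    choose_with_llm(prompt) -> str                       (placeholder LLM call)
    """
    valid_actions = [a for a in candidate_actions if valid_check(a, observation, dynamics)]
    prompt = (
        f"Sub-task: {sub_task}\nObservation: {observation}\nDynamics: {dynamics}\n"
        f"Strategies: {strategies}\nRecent history: {history}\n"
        f"Valid actions: {valid_actions}\nChoose exactly one of the valid actions."
    )
    return choose_with_llm(prompt)
```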
4 Experiment

In this section, we start by detailing our experimental settings, including the evaluation metrics and the set of baselines we compare against (Section 4.1). Then, we report the main results (Section 4.2). Additionally, we conduct a set of controlled experiments and provide quantitative and qualitative analyses for a better understanding of DiVE (Sections 4.3 and 4.4).

4.1 Environment Settings

Environment. Crafter is a set of open-world survival games on 64×64 grid maps, featuring a diverse array of materials such as tree, stone, and coal, as well as entities including cow, zombie, and skeleton, which spawn semi-randomly on the maps. The games include an achievement graph with 22 unique achievements across 7 levels. The agent observes its surroundings through a local 7×9 viewport and is also aware of its status within the game environment.

Table 1: Performance of DiVE in comparison to all baseline models. The * symbol indicates results obtained by running the official code implementation with the same five random seeds and the same model.

Method          Score           Reward
Human Experts   50.5 ± 6.8%     14.3 ± 2.3
DiVE            35.9 ± 3.2%     14.5 ± 2.4
SPRING*         8.2 ± 2.3%      6.9 ± 1.8
CoT*            1.3 ± 0.3%      2.5 ± 0.5
AD              21.79 ± 1.4%    12.6 ± 0.3
ELLM            N/A             6.0 ± 0.4
DreamerV3       14.5 ± 1.6%     11.7 ± 1.9
PPO             4.6 ± 0.3%      4.2 ± 1.2
Random          1.6 ± 0.0%      2.1 ± 1.3

Evaluation metrics. Following previous works, we evaluate the agent with two primary metrics: reward and score. Agents receive a +1 reward for each new achievement unlocked (e.g., make wood pickaxe, place furnace). Additionally, they earn a ±0.1 reward for every health point gained or lost. The score metric is computed by aggregating the success rates s_i of the individual achievements:

S = \exp\left( \frac{1}{N} \sum_{i=1}^{N} \ln(1 + s_i) \right) - 1.    (4)

Baseline systems. We compare DiVE with three categories of baselines. First, LLM-based methods, including SPRING, ELLM, and chain-of-thought (CoT). Second, reinforcement learning (RL) based approaches such as DreamerV3, PPO, EDE, DreamerV2, Rainbow, Plan2Explore, and RND. Third, we consider demonstrations from human experts, which provide insights from actual human players.

4.2 Overall Results

Table 1 shows that DiVE significantly outperforms all other baselines on both evaluation metrics. Notably, DiVE exceeds the previous state-of-the-art (SOTA) LLM-based method, SPRING, by a substantial margin, achieving a 337.8% relative improvement in score and a 110.1% improvement in reward (results obtained by executing the official SPRING code with the same LLM configuration, GPT-4o, and the same environment random seeds as DiVE, ensuring a fair comparison). Additionally, DiVE also surpasses the prior SOTA RL-based approach, DreamerV3, with a 21.4% absolute improvement in score and a 2.8 absolute increase in reward. Notably, DiVE achieves rewards comparable to domain-familiar human players using just 10 human demonstrations.
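For reference, the aggregate score of Eq. (4) can be computed directly from the per-achievement success rates; this is a plain transcription of the formula, not the benchmark's official evaluation code.

```python
import math

def crafter_score(success_rates):
    """Eq. (4): geometric-mean-style aggregation of per-achievement success rates s_i
    (the paper reports success rates and scores as percentages)."""
    n = len(success_rates)
    return math.exp(sum(math.log(1.0 + s) for s in success_rates) / n) - 1.0
```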
Table 2: Impact of the different components on performance. The abbreviations D, V, and E stand for Discoverer, Verifier, and Evolver, respectively; S† denotes the dynamics (manual) used in the SPRING paper; H† denotes the human-annotated dynamics.

Methods                          Score           Reward
Component analysis
DiVE                             35.9 ± 3.2%     14.5 ± 2.4
w/o E                            21.1 ± 9.7%     11.3 ± 4.3
w/o V                            9.8 ± 1.0%      10.1 ± 0.7
w/o V&E                          11.5 ± 4.9%     8.3 ± 3.8
w/o D&V&E                        0.9 ± 0.1%      2.5 ± 1.3
CoT                              1.3 ± 0.3%      2.5 ± 0.5
CoT + D&V                        3.6 ± 0.9%      3.9 ± 2.3
Dynamics from distinct sources
DiVE                             35.9 ± 3.2%     14.5 ± 2.4
w/o D&V + S†                     15.7 ± 5.3%     8.9 ± 5.1
w/o D&V&E + S†                   12.1 ± 4.6%     8.7 ± 3.0
w/o D&V&E + H†                   34.2 ± 2.8%     14.5 ± 0.9

4.3 Quantitative Analysis

Ablation study. We conduct a set of ablation studies to elucidate how the individual elements contribute to the overall performance of the method. As shown in the first section of Table 2, several variants of DiVE are designed to assess the effectiveness of each component of the methodology. Notably, the significant performance disparity between DiVE and its variant lacking the Evolver component empirically underscores the Evolver's role in effectively developing gameplay strategies based on the world dynamics W, thereby enhancing the agent's proficiency in the Crafter environment. Similarly, the performance decline observed in the variant without the Verifier highlights the importance of formulating strategies I based on accurate world dynamics W. Moreover, the additional performance reduction in the version missing both Verifier and Evolver demonstrates their orthogonal functions: precision in dynamics from the Verifier and advancement in strategies from the Evolver.

The performance of DiVE without Discoverer, Verifier, and Evolver reverts to the CoT baseline, indicating that merely decomposing the task according to the decomposition hierarchy without incorporating domain knowledge K_target offers no advantage. The significant gap between CoT + D&V and DiVE w/o E further evinces that an LLM-based agent struggles in long-horizon planning tasks without task decomposition, emphasizing the necessity of the decomposition hierarchy.

Sources of dynamics. We also explore how DiVE performs with world dynamics derived from various sources. As shown in the second section of Table 2, DiVE significantly outperforms the variants that utilize the dynamics S† used in the SPRING paper. This performance improvement suggests that the learned dynamics W are more beneficial than S†, possibly because S† lacks some of the helpful details present in W; we delve deeper into this aspect in the upcoming case studies. The performance difference between the methods using S† with and without the Evolver also underscores the importance of strategy evolution, the effectiveness of which is closely linked to the quality of the world dynamics it employs.

Besides the dynamics learned from human demonstrations and the manual dynamics used in SPRING, we also investigated a third source: human-annotated dynamics. The findings reveal that DiVE performs comparably to the variant that uses human-annotated dynamics, demonstrating the robustness and effectiveness of DiVE's approach to dynamics learning.

Quality of learned dynamics. As previously mentioned, directly quantifying the desired properties is challenging, since we can neither precisely measure how much domain-relevant information K_rel an LLM contains nor determine the exact amount of knowledge K_target required by a domain. However, by using the human-annotated dynamics H† as a silver standard for K_target, we can estimate the precision and recall of the learned dynamics W. This allows us to track and evaluate the progress of the LLM-based dynamics learning. Specifically, we calculate recall as R = |W ∩ H†| / |H†| and precision as P = |W ∩ H†| / |W|.

Figure 2: Recall of the learned dynamics over discovery steps, with mean and standard deviation.

As illustrated in Figure 2, both the discovered dynamics W̃ and the verified dynamics W show an increase in recall as the discovery steps progress, indicating that the richness of the learned dynamics grows as more discovery steps are taken. Furthermore, the gap in recall between W̃ and W narrows as the number of discovery steps increases, suggesting that the Verifier effectively filters out "noisy" dynamic candidates while retaining those that generalize across different trajectory segments.
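Both quantities reduce to a set overlap once the learned dynamics are matched against the silver standard; the sketch below uses a trivial string normalization, whereas matching free-form statements against human annotations would in practice require a semantic comparison (e.g., by an LLM judge).

```python
def dynamics_recall_precision(learned, annotated):
    """R = |W ∩ H†| / |H†| and P = |W ∩ H†| / |W| over collections of dynamics statements."""
    normalize = lambda s: " ".join(s.lower().split())
    learned_set = {normalize(d) for d in learned}
    annotated_set = {normalize(d) for d in annotated}
    overlap = learned_set & annotated_set
    recall = len(overlap) / len(annotated_set) if annotated_set else 0.0
    precision = len(overlap) / len(learned_set) if learned_set else 0.0
    return recall, precision
```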
To further investigate whether the Verifier retains the correct world dynamic candidates W̃+ while filtering out the inaccurate ones W̃-, we examine the precision of both the discovered dynamics W̃ and the verified dynamics W.

Figure 3: Precision of the learned dynamics before and after dynamics verification.

As shown in Figure 3, the precision of the verified dynamics consistently and significantly surpasses that of the discovered dynamics, demonstrating that the Verifier effectively identifies and eliminates inaccurate dynamic candidates. This validates the Verifier's role in enhancing the reliability of the dynamics used in the decision-making process.

4.4 Qualitative Analysis

Table 3: The dynamics underlying the outcome of defeating a zombie.

Correctness                      Outcome
Correct (✓)                      none
Confounder (✗)                   1 health
In-domain hallucination (✗)      1 wood
Out-of-domain hallucination (✗)  1 bone

The correctness of the learned and verified dynamics is categorized as either correct or as a mistake due to confounders, in-domain hallucinations, or out-of-domain hallucinations. As illustrated in Table 3 for the example of defeating a zombie, the mistake attributed to confounders is primarily due to a simultaneous increase in health points at the step where the zombie is defeated; in this case, the Discoverer categorizes this increase as a result of defeating the zombie. For the in-domain hallucination example, it is impossible to gain wood while defeating the zombie, though wood may appear in the observation. Lastly, out-of-domain hallucination covers discovered dynamics that contain objects which do not exist in the observation, or even in the Crafter environment.

Table 4: Comparing SPRING and DiVE on the dynamics of place stone.

Methods   Dynamics
SPRING    Place stone requires stones
D&V       Place stone requires 1 stone and facing paths, grass, sand, water, or lava
D&V&E     Place stone to block zombies and skeletons, preventing them from reaching the player

Compared to the dynamics from SPRING in Table 4, we found that DiVE's dynamics are not only more precise but also more informed. For example, regarding the dynamics of placing a stone, SPRING only found that placing a stone requires stones; DiVE, however, found that placing a stone requires exactly one stone, along with the precise facing condition for successful placement. Furthermore, based on this information, the Evolver can derive advanced dynamics for placing a stone, such as its potential to act as a barrier between the agent and dangerous creatures.

5 Related Work

Embodied Agent. Reinforcement learning (RL) has long been an effective method for developing embodied agents through trial and error. With recent advancements in LLMs, their reasoning and planning capabilities have emerged as an important component of embodied agents (Wang et al., 2023a,b; Ma et al., 2023; Liang et al., 2023; Brohan et al., 2023; Nottingham et al., 2023; Silver et al., 2024; Driess et al., 2023; Shi et al., 2024; Wu et al., 2024b).

Discovering and Verifying. LLMs can uncover knowledge by inducing and deducing rules for reasoning tasks. Additionally, they can directly discover the underlying domain knowledge from previous trajectories and interactive experience (Colas et al., 2023; Majumder et al., 2023; Fu et al., 2024a,b; Zhong et al., 2024; Zhao et al., 2024). The knowledge discovered by these methods is not hierarchically structured and mainly concerns short-horizon knowledge. Meanwhile, due to the tendency for hallucination, the discovered knowledge should be carefully verified before being used directly.
Evolving. By reflecting on past trajectories, LLMs can enhance their decision-making process. Leveraging this capability, prior works evolve new and advanced skills based on pre-defined skills. However, previous methods often assume that LLMs already possess comprehensive domain knowledge and can directly develop new insights. Additionally, they often require pre-defined skills in either code format or as RL policies.

6 Conclusion

In this work, we present DiVE, a framework designed for long-horizon interactive planning tasks. Specifically, given a handful of demonstrations, a Discoverer extracts useful world dynamics such as an action's preconditions and outcomes. A Verifier subsequently filters out inaccurate dynamic candidates. In an online setting, conditioned on the offline-learned knowledge, an Evolver learns situational strategies through interaction. DiVE bridges the knowledge gap between a pre-trained LLM and the target domain by autonomously learning hierarchical world dynamics and evolving contextual knowledge. Extensive experiments demonstrate its effectiveness. This work may pave the way for developing LLM-based frameworks that achieve mutual enhancement of dynamics learning and policy learning without extensive human annotation.

Limitations

While our study introduces the DiVE framework for solving long-horizon embodied planning tasks, it is not without its limitations. First, the reliance on large language models (LLMs) introduces challenges in interpretability and computational efficiency. These models demand considerable resources by extensively communicating with ChatGPT, which might limit their applicability in resource-constrained environments. Second, our experiments are conducted within the confines of the Crafter benchmark, which, while comprehensive, may not encompass all possible real-world scenarios an embodied agent might encounter. Third, the demand for human demonstrations poses another challenge, as the collection of demonstrations can be time-consuming in some specific situations. Future work should aim to address these limitations, exploring more efficient model architectures, broader applicability across diverse environments, and enhanced methods for achieving naturalistic human-agent interaction.

Ethical Concerns

We do not foresee an immediate ethical or societal impact resulting from our work. However, as an LLM application, we acknowledge that DiVE could in some way be affected by various types of hallucinations introduced by the backbone LLMs, despite the fact that we have designed components in the framework specifically to eliminate false information potentially caused by LLM hallucination. We therefore urge researchers and practitioners to use our proposed framework in a mindful way, especially when deploying such LLM-centric methods in real-world applications.
[ "The Llama 3 Herd of Models", "Can Language Models Serve as Text-Based World Simulators?", "OPEx: A Component-Wise Analysis of LLM-Centric Agents in Embodied Instruction Following", "Language-guided Skill Learning with Temporal Variational Inference", "Fine-tuning Large Language Models for Domain-specific Machine Translation", "Policy Improvement using Language Feedback Models", "Skill Set Optimization: Reinforcing Language Model Behavior via Transferable Skills", "ReGAL: Refactoring Programs to Discover Generalizable Abstractions", "CLIN: A Continually Learning Language Agent for Rapid Task Adaptation and Generalization", "Bootstrap Your Own Skills: Learning to Solve New Tasks with Large Language Model Guidance", "Large Language Models can Learn Rules", "ExpeL: LLM Agents Are Experiential Learners", "Graph of Thoughts: Solving Elaborate Problems with Large Language Models", "Llama 2: Open Foundation and Fine-Tuned Chat Models", "GPT-4 Technical Report", "Read and Reap the Rewards: Learning to Play Atari with the Help of Instruction Manuals", "Grounding Large Language Models in Interactive Environments with Online Reinforcement Learning", "Describe, Explain, Plan and Select: Interactive Planning with Large Language Models Enables Open-World Multi-Task Agents", "Do Embodied Agents Dream of Pixelated Sheep?: Embodied Decision Making using Language Guided World Modelling", "Mastering Diverse Domains through World Models", "ReAct: Synergizing Reasoning and Acting in Language Models", "Code as Policies: Language Model Programs for Embodied Control", "On the Origin of Hallucinations in Conversational Models: Is it the Datasets or the Models?", "Do As I Can, Not As I Say: Grounding Language in Robotic Affordances", "One-Shot Learning from a Demonstration with Hierarchical Latent Language", "Training language models to follow instructions with human feedback", "Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents", "MiniHack the Planet: A Sandbox for Open-Ended Reinforcement Learning Research", "Benchmarking the Spectrum of Agent Capabilities", "TruthfulQA: Measuring How Models Mimic Human Falsehoods", "LoRA: Low-Rank Adaptation of Large Language Models", "ALFWorld: Aligning Text and Embodied Environments for Interactive Learning", "Mastering Atari with Discrete World Models", "The NetHack Learning Environment", "Language Models are Few-Shot Learners", "Planning to Explore via Self-Supervised World Models", "Precondition", "South East", "BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions", "HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering", "Recurrent World Models Facilitate Policy Evolution", "TextWorld: A Learning Environment for Text-based Games", "Rainbow: Combining Improvements in Deep Reinforcement Learning", "Proximal Policy Optimization Algorithms", "Curriculum learning", "AutoGuide: Automated Generation and Selection of State-Aware Guidelines for Large Language Model Agents", "SPRING: Studying Papers and Reasoning to play Games", "A NetHack Learning Environment Language Wrapper for Autonomous Agents" ]
Threshold-driven Pruning with Segmented Maximum Term Weights for Approximate Cluster-based Sparse Retrieval
Threshold-driven Pruning with Segmented Maximum Term Weights for Approximate Cluster-based Sparse Retrieval Abstract This paper revisits dynamic pruning through rank score thresholding in cluster-based sparse retrieval to skip the index partially at cluster and document levels during inference. It pro- poses a two-parameter pruning control scheme called ASC with a probabilistic guarantee on rank-safeness competitiveness. ASC uses cluster-level maximum weight segmentation to improve accuracy of rank score bound es- timation and threshold-driven pruning, and is targeted for speeding up retrieval applications requiring high relevance competitiveness. The experiments with MS MARCO and BEIR show that ASC improves the accuracy and safeness of pruning for better relevance while delivering a low latency on a single-threaded CPU. 1 Introduction Fast and effective document retrieval is a critical component of large-scale search systems. This can also be important for retrieval-augmented gen- eration systems which are gaining in popularity. Retrieval systems fall into two broad categories: dense (single or multi-vector) and sparse (lexical or learned) . Efficient dense retrieval relies on approximation techniques with notable relevance drops , whereas sparse retrieval takes advantage of fast inverted index implementations on CPUs. Well-trained models from these two cat- egories can achieve similar relevance numbers on the standard MS MARCO passage ranking task. However, for zero-shot out-of-domain search on the BEIR datasets, learned sparse retrieval exhibits stronger relevance than BERT-based dense mod- els. Accordingly, this paper focuses on optimizing online inference efficiency for sparse retrieval. An- other reason for this focus is that sparse retrieval does not require expensive GPUs, and thus can sig- nificantly lower the infrastructure cost for a large- scale retrieval system that hosts data partitions on a massive number of inexpensive CPU servers. A traditional optimization for sparse retrieval is rank-safe threshold-driven pruning algorithms, such as MaxScore , WAND , and BlockMax WAND (BMW) , which accu- rately skip the evaluation of low-scoring documents that are unable to appear in the final top- kresults. Two key extensions of these pruning methods are cluster-based pruning and rank-unsafe threshold over-estimation. Cluster-based (or block-based) pruning extends rank-safe methods to skip the eval- uation of groups of documents . However, the cluster bounds estimated by current methods are often loose, which limits pruning op- portunities. Threshold over-estimation relaxes the safeness, and allows some po- tentially relevant documents to be skipped, trading relevance for faster retrieval. However, there are no formal analysis or guarantee on the impact of rank-unsafeness on relevance and its speed gain can often come with a substantial relevance drop. This paper revisits rank score threshold-driven pruning for cluster-based retrieval in both safe and unsafe settings. We introduce a two-parameter threshold control scheme called ASC, which addresses the above two limitations of current threshold-driven pruning methods. ASC uses cluster-level maximum weight segmentation to im- prove the accuracy of cluster bound estimation and offer a probabilistic guarantee on rank-safeness when used with threshold over-estimation. Conse- quently, ASC is targeted at speeding up retrieval in applications that desire high relevance. 
Our evaluation shows that ASC makes sparse retrieval with SPLADE, uniCOIL, and LexMAE much faster while effectively retaining their relevance. ASC takes only 9.7ms with k = 10 and 21ms with k = 1000 for LexMAE on a single-threaded consumer CPU to search MS MARCO passages with 0.4252 MRR. It takes only 5.59ms and 15.8ms respectively for SPLADE with over 0.3962 MRR. When prioritizing for a small MRR relevance loss, ASC can be an order of magnitude faster than other approximation baselines.
2 Background and Related Work
Problem definition. Sparse document retrieval identifies top-k ranked candidates that match a query. Each document in a data collection is modeled as a sparse vector with many zero entries. These candidates are ranked using a simple additive formula, and the rank score of each document d is defined as $RankScore(d) = \sum_{t \in Q} w_{t,d}$, where Q is the set of search terms in the given query and $w_{t,d}$ is the weight contribution of term t in document d, possibly scaled by a corresponding query term weight. Term weights can be based on a lexical model such as BM25 or are learned from a neural model. Terms are tokens in these neural models. For a sparse representation, a retrieval algorithm uses an inverted index with a set of terms, and a document posting list for each term. A posting record in this list contains a document ID and its weight for the corresponding term.
Threshold-driven skipping. During sparse retrieval, a pruning strategy computes the upper bound rank score of a candidate document d, referred to as $Bound(d)$, satisfying $RankScore(d) \leq Bound(d)$. If $Bound(d) \leq \theta$, where $\theta$ is the rank score threshold to be in the top-k list, this document can be safely skipped. WAND uses the maximum term weight of documents in a posting list for their score upper bound, while BMW and its variants (e.g. VBMW) use block-based maximum weights. MaxScore uses a similar skipping strategy with term partitioning. A retrieval method is called rank-safe if it guarantees that the top-k documents returned are the k highest scoring documents. All of the above algorithms are rank-safe. Threshold over-estimation is a "rank-unsafe" skipping strategy that deliberately over-estimates the current top-k threshold by a factor. There is no formal analysis of the above rank-safeness approximation, whereas our work generalizes and improves threshold over-estimation for better rank-safeness control in cluster-based retrieval with a formal guarantee.
Live block filtering and cluster-based retrieval. Live block filtering clusters document IDs within a range and estimates a range-based maximum score for pruning. Anytime Ranking extends the cluster skipping inverted index, which arranges each posting list as "clusters" for selective retrieval, and searches top clusters under a time budget. Without early termination, Anytime Ranking is rank-safe and conceptually the same as live block filtering with an optimization that cluster visitation is ordered dynamically. Contemporary work introduces several optimizations for live block filtering called BMP with block reordering and threshold overestimation, and shows that a block-based (cluster-based, equivalently) retrieval still represents a state-of-the-art approach for safe pruning and for approximate search.
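To make the machinery above concrete, here is a minimal Python sketch of the additive rank score over posting lists and the $Bound(d) \leq \theta$ skip test. The toy index, weights, and helper names are illustrative assumptions and not the paper's C++/PISA implementation.

```python
# Toy inverted index: term -> list of (doc_id, weight) posting records.
index = {
    "t1": [(1, 1.2), (4, 0.8), (7, 2.5)],
    "t2": [(1, 0.5), (2, 3.1), (7, 1.4)],
    "t3": [(4, 2.2), (7, 0.9)],
}

def rank_score(doc_id, query):
    """RankScore(d) = sum over query terms of w_{t,d} (0 if d is absent from a posting list)."""
    return sum(w for t in query for d, w in index.get(t, []) if d == doc_id)

def max_term_weights(query):
    """Per-term maximum weights used to upper-bound a document's score (WAND-style)."""
    return {t: max((w for _, w in index.get(t, [])), default=0.0) for t in query}

def doc_bound(doc_id, query, bounds):
    """Bound(d): sum of per-term maximum weights over the query terms whose posting
    lists contain d; by construction RankScore(d) <= Bound(d)."""
    return sum(bounds[t] for t in query
               if any(d == doc_id for d, _ in index.get(t, [])))

def retrieve_top_k(query, k=2):
    """Rank-safe retrieval: skip a document whenever Bound(d) <= theta,
    where theta is the score of the current k-th best candidate."""
    bounds = max_term_weights(query)
    candidates = sorted({d for t in query for d, _ in index.get(t, [])})
    top, theta = [], float("-inf")
    for d in candidates:
        if doc_bound(d, query, bounds) <= theta:
            continue  # safe skip: d cannot enter the current top-k
        s = rank_score(d, query)
        top = sorted(top + [(s, d)], reverse=True)[:k]
        if len(top) == k:
            theta = top[-1][0]
    return top

print(retrieve_top_k(["t1", "t2", "t3"], k=2))  # [(4.8, 7), (3.1, 2)]
```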
Our work can be effectively combined with the above work using maximum cluster-level score bounds and threshold over-estimation, and is focused on improving the accuracy of cluster score bounds and threshold-driven pruning to increase index-skipping opportunities and introduce a probabilistic rank-safeness assurance.
Efficiency optimization for learned sparse retrieval. There are orthogonal techniques to speed up learned sparse retrieval. BM25-guided pruning skips documents during learned index traversal. Static index pruning removes low-scoring term weights during index generation. An efficient version of SPLADE uses L1 regularization for query vectors, dual document and query encoders, and language model middle training. Term impact decomposition partitions each posting list into two groups with high and low impact weights. Our work is complementary to the above techniques.
Approximation with score-at-a-time retrieval (SAAT). The above retrieval approaches often conduct document-at-a-time (DAAT) traversal over document-ordered indexes. SAAT retrieval over impact-ordered indexes is an alternative method used together with early termination, such as JASS and IOQP. An experimental study compares DAAT and SAAT for a number of sparse models and indicates that while no single system dominates all scenarios, the DAAT Anytime code is a strong contender, especially for SPLADE when maintaining a small MRR@10 loss. Since IOQP has been shown to be highly competitive with an optimized version of JASS, the baselines in Section 4 include Anytime and IOQP.
Big-ANN competition for sparse retrieval. The NeurIPS 2023 Big-ANN competition sparse track uses 90% recall of safe-search top-10 results as the relevance budget to select the fastest entry for the MS MARCO dev set with SPLADE, and this metric drives a different optimization tradeoff compared to our paper. Our paper prioritizes MRR@10 competitiveness of approximate retrieval with a much tighter relevance loss budget before considering gains in latency reduction. Appendix E provides a comparison of ASC with two top winners of this competition. A reference is listed for the Pinecone entry with no open source code released; it presents an approach to combine dense and sparse retrieval representations with random projection, which is orthogonal to our approach.
3 Cluster-based Retrieval with Approximation and Segmentation
Figure 1: Flow of ASC with two-parameter pruning control and segmented cluster-level maximum term weights.
The overall online inference flow of the proposed scheme during retrieval is shown in Figure 1. Initially, sparse clusters are sorted in a non-increasing order of their estimated cluster upper bounds. Then, search traverses the sorted clusters one-by-one to conduct approximate retrieval with two-level pruning with segmented term maximum weights. We follow the notation in prior work. A document collection is divided into m clusters $\{C_1, \cdots, C_m\}$. Each posting list of an inverted index is structured using these clusters. Given query Q, the BoundSum formula below estimates the maximum rank score of a document in a cluster. Anytime Ranking visits clusters in a non-increasing order of BoundSum values.
$BoundSum(C_i) = \sum_{t \in Q} \max_{d \in C_i} w_{t,d}$.  (1)
The visitation to cluster $C_i$ can be pruned if $BoundSum(C_i) \leq \theta$, where $\theta$ is the current top-k threshold.
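A small sketch of the cluster-level bound in Formula (1) and the resulting Anytime-style visitation order is given below; the per-cluster maximum term weights are made up for illustration and are not taken from any real index.

```python
# Each cluster stores, per term, the maximum weight of that term over its documents.
clusters = {
    "C1": {"t1": 1.6, "t2": 1.7, "t3": 0.0},
    "C2": {"t1": 6.5, "t2": 1.1, "t3": 2.2},
    "C3": {"t1": 4.4, "t2": 4.2, "t3": 5.1},
}

def bound_sum(cluster, query):
    """Formula (1): BoundSum(C_i) = sum over query terms of max_{d in C_i} w_{t,d}."""
    return sum(clusters[cluster].get(t, 0.0) for t in query)

def ordered_unpruned_clusters(query, theta):
    """Visit clusters in non-increasing BoundSum order and prune any cluster whose
    BoundSum(C_i) <= theta (theta being the current top-k threshold)."""
    scored = sorted(((bound_sum(c, query), c) for c in clusters), reverse=True)
    return [c for b, c in scored if b > theta]

print(ordered_unpruned_clusters(["t1", "t2", "t3"], theta=9.0))
# ['C3', 'C2']: C1 is pruned because its BoundSum of 3.3 is below the threshold.
```

In a real run the threshold rises as better candidates are found, so later clusters are increasingly likely to be pruned; the static threshold here is only for illustration.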
If this cluster is not pruned, then document-level index traversal and skipping can be conducted within each cluster following a standard retrieval algorithm. Any document within such a cluster may be skipped for evaluation if $Bound(d) \leq \theta$, where $Bound(d)$ is computed on the fly based on an underlying retrieval algorithm such as MaxScore and VBMW.
Design considerations. The cluster-level BoundSum estimation in Formula (1) can be loose, especially when a cluster contains diverse document vectors, and this reduces the effectiveness of pruning. As an illustration, Figure 2 shows the bound tightness of Anytime for MS MARCO Passage clusters, calculated as the ratio between the average actual and estimated bound: $\frac{1}{m}\sum_{i=1}^{m}\frac{\max_{d_j \in C_i} RankScore(d_j)}{BoundSum(C_i)}$, where m is the number of clusters. A bound tightness near 1 means the bound estimate is accurate, whereas a value near 0 means a loose estimate. The average bound tightness increases with m because smaller clusters are more similar. ASC improves the tightness of the cluster bound estimation for all values of m.
Figure 2: ASC predicts more accurate cluster bounds, which allows it to prune more aggressively. Cluster bound tightness is the average ratio of the actual and estimated cluster bounds, calculated with Formula (1).
Limited threshold over-estimation can be helpful to deal with a loose bound estimation. Specifically, over-estimation of the top-k threshold is applied by a factor of $\mu$, where $0 < \mu \leq 1$, and the above pruning conditions are modified as $BoundSum(C_i) \leq \theta/\mu$ and $Bound(d) \leq \theta/\mu$. Threshold over-estimation with $\mu$ allows skipping more low-scoring documents when the bound estimation is too loose. However, thresholding is applied to all cases uniformly and can incorrectly prune many desired relevant documents when the bound estimation is already tight.
To improve the tightness of cluster-level bound estimation using Formula (1), one can decrease the size of each cluster. However, there is a significant overhead when increasing the number of clusters. One reason is that for each cluster, one needs to extract the maximum weights of query terms and estimate the cluster bound, which can become expensive for a large number of query terms. Another reason is that MaxScore identifies a list of essential query terms which are different from one cluster to another. Traversing more clusters yields more overhead for essential term derivation, in addition to the cluster bound computation.
3.1 ASC: (µ, η)-approximate retrieval with segmented cluster information
The proposed ASC method stands for (µ, η)-Approximate retrieval with Segmented Cluster-level maximum term weights. ASC segments cluster term maximum weights to improve the tightness of cluster bound estimation and guide cluster-level pruning. It employs two parameters, $\mu$ and $\eta$, satisfying $0 < \mu \leq \eta \leq 1$, to detect the cluster bound estimation tightness and improve pruning safeness. Details of our algorithm are described below.
Extension to the cluster-based skipping index. Each cluster $C_i$ is subdivided into n segments $\{S_{i,1}, \cdots, S_{i,n}\}$ through random uniform partitioning during offline processing. The index for each cluster has an extra data structure which stores the maximum weight contribution of each term from each segment within this cluster. During retrieval, the maximum and average segment bounds of each cluster $C_i$ are computed as shown below:
$MaxSBound(C_i) = \max_{j=1}^{n} B_{i,j}$,  (2)
$AvgSBound(C_i) = \frac{1}{n}\sum_{j=1}^{n} B_{i,j}$,  (3)
and $B_{i,j} = \sum_{t \in Q} \max_{d \in S_{i,j}} w_{t,d}$.
Two-level pruning conditions.
Let $\theta$ be the current top-k threshold of retrieval in handling query Q.
• Cluster-level: Any cluster $C_i$ is pruned when
$MaxSBound(C_i) \leq \theta/\mu$  (4)
and
$AvgSBound(C_i) \leq \theta/\eta$.  (5)
• Document-level: If a cluster is not pruned, then when visiting such a cluster with MaxScore or another retrieval algorithm, a document d is pruned if $Bound(d) \leq \theta/\eta$.
Figure 3(a) illustrates a cluster skipping index of four clusters for handling query terms t1, t2, and t3. This index is extended to include two maximum term weight segments per cluster for ASC, and these weights are marked in a different color for different segments. Document term weights in posting records are not shown. Assume that the current top-k threshold $\theta$ is 9; Figure 3(b) lists the cluster-level pruning decisions by Anytime Ranking without and with threshold overestimation and by ASC. The derived bound information used for making pruning decisions is also illustrated.
Figure 3: A cluster pruning example. Suppose $\theta = 9$ and $\mu = 0.9$; the first and the third clusters will be pruned by ASC. (a) Cluster skipping index with 2 weight segments per cluster. (b) Decisions of dynamic cluster-level pruning during retrieval:
θ = 9 | Cluster 1 | Cluster 2 | Cluster 3 | Cluster 4
BoundSum | 3.3 | 9.8 | 13.7 | 16.3
Anytime | Pruned | Kept | Kept | Kept
Anytime-µ=0.9 | Pruned | Pruned | Kept | Kept
MaxSBound | 3.1 | 9.6 | 9.7 | 13.6
AvgSBound | 3.0 | 9.2 | 7.6 | 12.4
ASC µ=0.9, η=1 | Pruned | Kept | Pruned | Kept
Extra online space cost for segmented maximum weights. The extra space cost in ASC is to maintain non-zero maximum term weights for multiple segments at each cluster in a sparse format. For example, Figure 3 shows four non-zero maximum segment term weights at Cluster 1 are accessed for the given query. To save space, we use quantized values. Our evaluation uses 1 byte for each weight, which is sufficiently accurate to guide pruning. For MS MARCO passages in our evaluation, the default configuration has 4096 clusters and 8 segments per cluster. This results in about 550MB extra space. With that, the total cluster-based inverted SPLADE index size increases from about 5.6GB for MaxScore without clustering to 6.2GB for ASC. This 9% space overhead is still acceptable in practice. The extra space overhead for Anytime Ranking is smaller because only cluster-level maximum term weights are needed.
3.2 Formal Properties
With any integer $0 < k' \leq k$, we call a retrieval algorithm (µ, η)-approximate if 1) the average rank score of any top-k' results produced by this algorithm is competitive to that of rank-safe retrieval within a factor of µ; and 2) the expected average rank score of any top-k' results produced by this algorithm is competitive to that of rank-safe retrieval within a factor of η. When choosing η = 1, we call a (µ, η)-approximate retrieval algorithm probabilistically safe. ASC satisfies the above condition and Theorem 4 gives more details. The default setting of ASC uses η = 1 in Section 4. The theorems on properties of ASC are listed below and Appendix A lists the proofs.
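Before turning to the formal properties, the short sketch below replays the Figure 3(b) example (θ = 9, µ = 0.9, η = 1) with conditions (4)-(5) and reproduces the pruning decisions listed in the table; it is a standalone illustration rather than the paper's implementation.

```python
# MaxSBound / AvgSBound values for the four clusters in Figure 3(b).
segment_bounds = {
    "Cluster 1": (3.1, 3.0),
    "Cluster 2": (9.6, 9.2),
    "Cluster 3": (9.7, 7.6),
    "Cluster 4": (13.6, 12.4),
}

def asc_prunes(max_sbound, avg_sbound, theta, mu, eta):
    """A cluster is pruned only if BOTH condition (4) and condition (5) hold:
    MaxSBound <= theta/mu  and  AvgSBound <= theta/eta."""
    return max_sbound <= theta / mu and avg_sbound <= theta / eta

theta, mu, eta = 9.0, 0.9, 1.0
for name, (mx, av) in segment_bounds.items():
    decision = "Pruned" if asc_prunes(mx, av, theta, mu, eta) else "Kept"
    print(f"{name}: {decision}")
# Cluster 1: Pruned, Cluster 2: Kept, Cluster 3: Pruned, Cluster 4: Kept
```

Cluster 2 illustrates the role of η: its MaxSBound (9.6) is under θ/µ = 10, so Anytime-µ would prune it, but its AvgSBound (9.2) exceeds θ/η = 9, so ASC keeps it.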
We show that Theorem 3 is also true for Anytime Ranking with threshold overestimation and without early termination, and we denote it as Anytime-µ.
Theorem 1. $BoundSum(C_i) \geq MaxSBound(C_i) \geq \max_{d \in C_i} RankScore(d)$.
The above result shows that Formula (2) provides a tighter upper bound estimation than Formula (1), as demonstrated by Figure 2. In ASC, choosing a small µ value prunes clusters more aggressively, and having the extra safeness condition using the average segment bound with η counteracts such pruning decisions. Given the requirement µ ≤ η, we can choose η to be close to 1 or exactly 1 for being safer. When the average segment bound is close to the maximum bound in a cluster, this cluster may not be pruned by ASC. This is characterized by the following property.
Theorem 2. Cluster-level pruning in ASC does not occur to cluster $C_i$ when one of the two following conditions is true:
• $MaxSBound(C_i) > \theta/\mu$
• $MaxSBound(C_i) - AvgSBound(C_i) \leq (\frac{1}{\mu} - \frac{1}{\eta})\theta$.
The difference between the maximum and average segment bounds provides an approximate indication of the estimated bound tightness. The value of this heuristic is demonstrated in Figure 4, which shows the correlation between bound tightness and the ratio of $AvgSBound(C_i)$ to $MaxSBound(C_i)$ for all clusters. The data is from the MS MARCO Passage dataset with 4096 clusters and 8 segments per cluster. Figure 4 shows that when this ratio approaches 1, the average bound tightness increases and its variation decreases. By the above theorem, when the gap between $MaxSBound(C_i)$ and $AvgSBound(C_i)$ is small (and thus their ratio is near 1), cluster-level pruning will not occur. Therefore, ASC will not prune clusters that already have high-quality and tight bound estimates. Table 5 will further corroborate the results of Theorem 2: that ASC should not prune clusters when this gap is small.
Figure 4: Correlation between the tightness of the estimated bound and the ratio of AvgSBound and MaxSBound. As AvgSBound approaches MaxSBound, the quality and tightness of the bound increases, and the probability of pruning decreases.
Define $Avg(x, A)$ as the average rank score of the top-x results by algorithm A. Let integer $k' \leq k$. The theorem below characterizes the approximate rank-safeness of pruning in ASC and Anytime-µ.
Theorem 3. The average top-k' rank score of ASC and Anytime-µ without imposing a time budget matches that of any rank-safe retrieval algorithm R within a factor of µ. Namely, $Avg(k', ASC) \geq \mu\, Avg(k', R)$ and $Avg(k', \text{Anytime-}\mu) \geq \mu\, Avg(k', R)$.
The theorem below characterizes the extra probabilistic approximate rank-safeness of ASC.
Theorem 4. The average top-k' rank score of ASC achieves the expected value of any rank-safe retrieval algorithm R within a factor of η. Namely, $E[Avg(k', ASC)] \geq \eta\, E[Avg(k', R)]$, where $E[\cdot]$ denotes the expected value.
The probabilistic rank-safeness approximation of ASC relies upon the condition that each document has an equal chance to be in any segment within a cluster. That is true because our segmentation method is random uniform partitioning.
4 Evaluation
Datasets and metrics. We use the MS MARCO Passage ranking dataset with 8.8 million English passages. We report mean reciprocal rank (MRR@10) for the Dev set which contains 6980 queries, and nDCG@10 for the TREC deep learning (DL) 2019 and 2020 sets. We also report recall, which is the percentage of relevant-labeled results that appear in the final top-k results. Retrieval depth k tested is 10 or 1000.
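Since the results that follow are reported in MRR@10 and recall, a brief sketch of how these two metrics are typically computed from ranked lists is given below; the data structures and names are illustrative assumptions, not the paper's evaluation scripts.

```python
def mrr_at_k(ranked_lists, relevant, k=10):
    """Mean reciprocal rank of the first relevant document within the top k."""
    total = 0.0
    for qid, ranking in ranked_lists.items():
        for rank, doc in enumerate(ranking[:k], start=1):
            if doc in relevant.get(qid, set()):
                total += 1.0 / rank
                break
    return total / len(ranked_lists)

def recall_at_k(ranked_lists, relevant, k=10):
    """Fraction of relevant-labeled documents that appear in the final top-k results."""
    hits, total = 0, 0
    for qid, ranking in ranked_lists.items():
        rel = relevant.get(qid, set())
        hits += len(rel & set(ranking[:k]))
        total += len(rel)
    return hits / total if total else 0.0

# Toy example: one query whose single relevant passage is ranked 2nd.
print(mrr_at_k({"q1": ["d3", "d7", "d1"]}, {"q1": {"d7"}}, k=10))     # 0.5
print(recall_at_k({"q1": ["d3", "d7", "d1"]}, {"q1": {"d7"}}, k=10))  # 1.0
```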
We also evaluate on BEIR , a collection of 13 publicly available English datasets totaling 24.6 million documents. The size of each dataset ranges from 3,633 to 5.4M documents. Experimental setup. Documents are clustered using k-means on dense vectors. Details, including a comparison between a few alternatives such as sparse vectors, are in Appendix B. Sparse models tested include a version of SPLADE (Formal et al., 2021, 2022), uni- COIL , and LexMAE . We primarily use SPLADE to assess ASC because LexMAE, fol- lowing dense models such as SimLM and RetroMAE , uses MS MARCO title annotations. This is considered as non-standard . SPLADE does not use title annotations. ASC’s implementation uses C++, extended from Anytime Ranking code’s release based on the PISA retrieval package . The index is compressed with SIMD-BP128. MaxScore is used to process queries because it is faster than VBMW for long queries generated by SPLADE and LexMAE. We applied an efficiency optimization to both the ASC and Anytime Ranking code in extract- ing cluster-based term maximum weights when dealing with a large number of clusters. IOQP uses the authors’ code release . A comparison to other recent methods in the NeurIPS Big-ANN Competition are presented in Appendix E. All timing results are collected by running as a single thread on a Linux server with Intel i7-1260P and 64GB memory. Before timing queries, all compressed posting lists and metadata for tested queries are pre-loaded into memory, fol- lowing the common practice. Our code will be released under the Apache License 2.0 after publi- cation. For all of our experiments on MS MARCO Dev queries, we perform pairwise t-tests on the rele- vance between ASC and corresponding baselines. “†” is tagged when significant drop is observed from MaxScore retrieval at 95% confidence level. Baseline comparison on MS MARCO. Table 1 lists the overall comparison of ASC with two base- lines using SPLADE model on the MS MARCO Dev and TREC DL’19/20 test sets. Column “Loss” is the percent difference of MRR@10 compared to exact search. Recall@10 and Recall@1000 are reported for retrieval depth k= 10 and 1000, re- spectively. Retrieval mean response time (MRT) and 99th percentile latency ( P99) in parentheses are reported in milliseconds. The column marked “C%” is the percentage of clusters that are not pruned dur- ing retrieval. For the original rank-safe MaxScore without clustering, we have incorporated document reordering to optimize its index based on document similarity, which short- ens its latency by about 10-15%. Anytime Ranking is configured to use 512 clus- ters with no early termination. ASC is configured with 4096 clusters and 8 segments. Appendix C ex- plains the above cluster configuration for Anytime and ASC to deliver low latency under competitive relevance. Rank-safe ASC uses µ=η= 1 and rank-unsafe ASC uses η= 1 withµ= 0.9for k= 10 andµ= 0.5fork= 1000 . As shown in Table 1, these choices yield a tiny MRR@10 loss ratio. For Anytime- µwith over-estimation, we choose the same or higher µvalue as ASC to demonstrate ASC improves relevance while gain- ing the speedup under such a setting. Comparing the three rank-safe versions in Ta- ble 1, ASC is about 2.9x faster than Anytime for k= 10 , and 1.5x faster for k= 1000 , because seg- mentation offers a tighter cluster bound as shown in Theorem 1. ASC is 29x faster than IOQP with k= 10 . Safe IOQP is substantially slower than 6\nTable 1: A comparison with baselines using SPLADE on MS MARCO passages. 
No time budget MS MARCO Dev DL’19 DL’20 Methods C% MRR (Loss) Recall MRT ( P99) Speedup nDCG (Recall) nDCG (Recall) Retrieval depth k= 10 Exact Search IOQP - 0.3966 0.6824 207 (461) 29x 0.7398 0.7340 MaxScore - 0.3966 0.6824 26.4 (116) 3.7x 0.7398 0.7340 Anytime Ranking 69.8% 0.3966 0.6824 20.7 (89.3) 2.9x 0.7398 0.7340 ASC 49.1% 0.3966 0.6824 7.19 (26.7) - 0.7398 0.7340 Approximate IOQP-10% - 0.3782†(4.6%) 0.6541†24.0 (52.2) 4.3x 0.7381 0.7047 Anytime- µ=0.9 62.7% 0.3815†(3.8%) 0.6111†15.3 (61.1) 2.7x 0.7392 0.7126 ASC- µ=0.9,η=1 7.99% 0.3964 (0.05%) 0.6813 5.59 (18.7) - 0.7403 0.7338 Retrieval depth k= 1000 Exact Search IOQP - 0.3966 0.9802 214 (465) 6.4x 0.7398 0.7340 MaxScore - 0.3966 0.9802 65.8 (209) 2.0x 0.7398 0.7340 Anytime Ranking 93.0% 0.3966 0.9802 50.1 (158) 1.5x 0.7398 0.7340 ASC 54.3% 0.3966 0.9802 33.5 (103) - 0.7398 0.7340 Approximate IOQP-10% - 0.3782†(4.6%) 0.9746 24.4 (53.1) 1.5x 0.7381 0.7047 Anytime- µ= 0.788.9% 0.3963 (0.07%) 0.9696†37.1 (127) 2.3x 0.7398 0.7340 ASC- µ=0.7,η=1 21.7% 0.3966 (0.0%) 0.9799 25.4 (78.8) 1.6x 0.7398 0.7340 ASC- µ=0.5,η=1 8.10% 0.3962 (0.1%) 0.9739 15.8 (48.2) - 0.7398 0.7355 Anytime, which differs from the finding of , possibly because of the difference in data clustering and SPLADE versions. For approximate retrieval when k= 10 , ASC has 3.9% higher MRR@10, 11% higher recall, and is 2.7x faster than Anytime with µ= 0.9. When k= 1000 , ASC is 2.3x faster than Anytime under similar relevance. Even with µbeing as low as 0.5, ASC offers competitive relevance scores. This demonstrates the importance of Theorem 4. For this reason, ASC is configured to be probabilisti- cally safe with η= 1while choosing µvalue mod- estly below 1 for efficiency. For k= 10 , there is a very small MRR loss ( ≤0.1%) compared to the original retrieval, but ASC performs competitively while it is up to 4.7x faster than the original MaxS- core without using clusters. Approximate IOQP is configured to visit 10% of documents, which is a default choice in . ASC outperforms IOQP-10% with 4.8% higher MRR@10 and 3.7% higher recall while ASC is 4.3x faster. Table 2: Performance at a fixed MRR@10 loss. k= 10 MRR Loss 10% 5% 2% 1% 0.5% Anytime- µ15ms (7.8x) 16 (5.9x) 17 (4.4x) 18 (3.9x) 19 (4.0x) Re: 0.5412 0.5921 0.6287 0.6570 0.6682 IOQP 12ms (6.3x) 22 (8.1x) 55 (14x) 90 (20x) 153 (33x) Re: 0.6271 0.6548 0.6741 0.6775 0.6782 ASC 1.9ms ( −) 2.7 ( −) 3.9 ( −) 4.4 ( −) 4.7 ( −) Re: 0.5878 0.6315 0.6639 0.6707 0.6759 Table 2 compares latency in milliseconds and Recall@10 of approximate retrieval under a differ- ent and fixed MRR@10 loss compared to rank-safe retrieval with 0.3966 MRR@10 and 0.6824 Re- call@10. Rows marked with “Re” list Recall@10 of approximate search. To meet the relevance bud- get under each fixed MRR loss ratio, we vary µfor ASC and Anytime, and the percent of documents visited for IOQP to minimize latency. The results show that when the MRR loss is controlled within 1-2%, ASC is about 4x faster than Anytime and is 13x to 33x faster than IOQP. Table 3: Other learned sparse retrieval models uniCOIL LexMAE Methods MRR (Re) MRT MRR (Re) MRT Retrieval depth k= 10 . No time budget Exact Search IOQP 0.352 (.617) 81 0.425 (.718) 163 MaxScore 0.352 (.617) 6.0 0.425 (.718) 47 Anytime 0.352 (.617) 5.0 0.425 (.718) 27 ASC 0.352 (.617) 1.8 0.425 (.718) 12 Approximate IOQP-10% 0.320†(.568†) 11 0.405†(.693†) 18 Anytime- µ=0.9 0.345†(.585†) 4.2 0.413†(.654†) 22 ASC- µ=0.9, η=1 0.352 (.614) 1.4 0.425 (.718) 9.7 Retrieval depth k= 1000 . 
No time budget Exact Search IOQP 0.352 (.958) 82 0.425 (.988) 165 MaxScore 0.352 (.958) 19 0.425 (.988) 94 Anytime 0.352 (.958) 14 0.425 (.988) 67 ASC 0.352 (.958) 8.8 0.425 (.988) 49 Approximate IOQP-10% 0.320†(.937†) 12 0.405†(.985) 20 Anytime- µ=0.7 0.351 (.940†) 8.9 0.425 (.978) 46 ASC- µ=0.5, η=1 0.351 (.946) 4.0 0.425 (.980) 21 Table 3 applies ASC to uniCOIL and LexMAE and shows MRR@10, Recall@10 or @1000 (de- noted as “Re”), and latency (denoted as MRT). The conclusions are similar as the ones obtained above 7\nfor SPLADE. Table 4: Zero-shot performance with SPLADE on BEIR MaxScore Anytime- µ=0.9 ASC Dataset nDCG MRT nDCG MRT nDCG MRT Retrieval depth k= 10 DBPedia 0.443 81.2 0.431 58.1 0.442 40.7 FiQA 0.358 3.64 0.356 2.49 0.358 1.86 NQ 0.555 44.9 0.545 39.8 0.549 18.2 HotpotQA 0.682 323 0.674 270 0.680 158 NFCorpus 0.352 0.17 0.350 0.15 0.352 0.15 T-COVID 0.719 5.20 0.673 2.48 0.719 2.23 Touche-2020 0.307 4.73 0.281 2.27 0.307 1.83 ArguAna 0.432 9.07 0.411 9.17 0.432 8.27 C-FEVER 0.243 895 0.242 735 0.243 555 FEVER 0.786 694 0.782 587 0.786 372 Quora 0.806 5.16 0.795 2.05 0.806 1.53 SCIDOCS 0.151 2.53 0.150 2.17 0.151 1.96 SciFact 0.676 2.54 0.673 2.45 0.676 2.31 Average 0.501 1.91x 0.490 1.35x 0.501 - Retrieval depth k= 1000 Average 0.501 3.25x 0.498 1.95x 0.499 - Zero-shot out-of-domain retrieval. Table 4 shows average nDCG@10 and latency in milliseconds for 13 BEIR datasets. SPLADE training is only based on MS MARCO passages. For smaller datasets, the number of clusters is proportionally reduced so that each cluster contains approximately 2000 documents, which is aligned with 4096 clusters setup for MS MARCO. The number of segments is kept at 8. ASC has η= 1, and its µ= 0.9for k= 10 andµ= 0.5fork= 1000 . We use µ= 0.9 for Anytime Ranking without early termination. LexMAE has slightly lower average nDCG@10 0.495, and is omitted due to the page limit. ASC offers nDCG@10 similar as MaxScore while being 1.91x faster for k= 10 and 3.25x faster for k= 1000 . Comparing with Any- time, ASC is 1.35x faster and has 2.2% higher nDCG@10 on average for k= 10 , and it is 1.95x faster while maintaining similar relevance scores fork= 1000 . Table 5: K-means segmentation vs. random uniform k=1000 K-means Random µ,η MRR (Re) T MRR (Re) T 0.3, 1 0.393 (.939†) 9.92 0.396 (.972) 15.3 0.4, 1 0.393 (.942†) 10.5 0.396 (.972) 15.4 0.5, 1 0.395 (.959†) 13.8 0.396 (.974) 15.8 0.6, 1 0.397 (.977) 18.1 0.397 (.979) 17.2 0.7, 1 0.397 (.980) 24.4 0.397 (.980) 21.7 1, 1 0.397 (.980) 34.8 0.397 (.980) 33.5 Bound TightnessMaxSbound −AvgSBound Actual Random 0.55 0.49 K-means 0.53 0.69 Segmentation choices. ASC uses random even partitioning to segment term weights of each clus- ter and satisfy the probabilistic safeness condition that each document in a cluster has an equal chance to appear in any segment. Another approach is to use k-means sub-clustering based on document similarity. The top portion of Table 5 shows ran- dom uniform partitioning is more effective than k-means when running SPLADE on MS MARCO passages with 4098 clusters and 8 segments per cluster. Random uniform partitioning offers equal or better relevance in terms of MRR@10 and Re- call@1000, especially when µis small. As µaf- fects cluster-level pruning in ASC, random seg- mentation results in a better prevention of incor- rect aggressive pruning, although this can result in less cluster-level pruning and a longer latency. 
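For reference, the random uniform segmentation that this comparison (and Theorem 4's probabilistic guarantee) relies on can be sketched as follows; the simplified sparse-document representation and helper names are assumptions for illustration only.

```python
import random
from collections import defaultdict

def random_uniform_segments(doc_ids, n_segments, seed=0):
    """Shuffle a cluster's documents uniformly at random and split them into n
    roughly equal segments, so each document is equally likely to land in any segment."""
    ids = list(doc_ids)
    random.Random(seed).shuffle(ids)
    return [ids[j::n_segments] for j in range(n_segments)]

def segment_max_weights(cluster_docs, segments):
    """For each segment, record the maximum weight of every term over its documents
    (these per-segment maxima are what B_{i,j} is summed from at query time)."""
    maxima = []
    for seg in segments:
        seg_max = defaultdict(float)
        for d in seg:
            for term, w in cluster_docs[d].items():
                seg_max[term] = max(seg_max[term], w)
        maxima.append(dict(seg_max))
    return maxima

# Toy cluster: doc_id -> sparse vector (term -> weight).
cluster_docs = {
    "d1": {"t1": 1.2, "t2": 0.4},
    "d2": {"t1": 6.5, "t3": 2.2},
    "d3": {"t2": 1.1, "t3": 1.4},
    "d4": {"t1": 4.0, "t2": 3.2},
}
segments = random_uniform_segments(cluster_docs, n_segments=2)
print(segment_max_weights(cluster_docs, segments))
```

The per-segment maxima computed this way are the quantities ASC stores (quantized to one byte each, per Section 3.1) to evaluate MaxSBound and AvgSBound online.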
To explain the above result, the lower portion of Table 5 shows the estimated bound tightness (ratio of actual bound to MaxSBound ), and average dif- ference of MaxSBound andAvgSBound scaled by the actual bound. Random uniform partition- ing gives slightly better cluster bound estimation, while its average difference of MaxSBound and AvgSBound is much smaller than k-means sub- clustering. Then, when µis small, there are more un-skipped clusters, following Theorem 2. The above result also indicates cluster-level prun- ing in ASC becomes safer due to its adaptiveness to the gap between the maximum and average segment bounds, which is consistent with Theo- rem 2. The advantage of random uniform partition- ing shown above corroborates with Theorem 4 and demonstrates the usefulness of possessing proba- bilistic approximate rank-safeness. 5 Concluding Remarks ASC is an ( µ, η)-approximate control scheme for dynamic threshold-driven pruning that aggressively skips clusters while being probabilistically safe. ASC can speed up retrieval applications that still desire high relevance effectiveness. For example, when MRR loss is constrained to under 1-2%, the mean latency of ASC is about 4x faster than Any- time Ranking and is 13x to 33x faster than IOQP for MS MARCO Passage Dev set with k= 10 . Our evaluations with the MS MARCO and BEIR datasets show that µ= 0.5fork= 1000 , and µ= 0.9fork= 10 are good choices with η= 1 to retain high relevance effectiveness. Our findings recommend η= 1 for probabilistic safeness and varying µfrom 1 to 0.5 for a tradeoff between efficiency and effectiveness. 8\n6 Limitations Space overhead . There is a manageable space overhead for storing cluster-wise segmented max- imum weights. Increasing the number of clusters for a given dataset is useful to reduce ASC latency up to a point, but then the overhead of additional clusters leads to diminishing returns. Dense retrieval baselines and GPUs . This paper does not compare ASC to dense retrieval baselines because dense models represent a different cate- gory of retrieval techniques. ASC achieves up to 0.4252 MRR@10 with LexMAE for MS MARCO Dev, which is close to the highest number 0.4258 obtained in state-of-the-art BERT-based dense re- trievers . The zero-shot performance of ASC with SPLADE on BEIR performs better than these dense models. The above dense model studies use expensive GPUs to reach their full relevance ef- fectiveness. Approximate nearest neighbor search techniques of dense retrieval have been devel- oped following IVF cluster search and graph navigation with HNSW . But there is a significant MRR@10 drop using these approximation tech- niques. Although GPUs are readily available, they are expensive and more energy-intensive than CPUs. For example, AWS EC2 charges one to two orders of magnitude more for an advanced GPU instance than a CPU instance with similar memory capacity. Like other sparse retrieval studies, our evaluation is conducted on CPU servers. Code implementation choice and block-based pruning . Our evaluation uses MaxScore instead of VBMW because MaxScore was shown to be faster for relatively longer queries , which fits in the case of SPLADE and LexMAE under the tested retrieval depths. A previous study confirms live block filtering with MaxScore called Range-MaxScore is a strong choice for such cases. It can be interesting to examine the use of different base retriever methods in different settings within each cluster for ASC in the future. 
Instead of the live block filtering code, ASC implementation was extended from Anytime Rank- ing’s code because of its features that support dynamic cluster ordering and early termination. ASC’s techniques can be applied to the framework of contemporary BMP to im- prove block max estimation and add a probabilistic guarantee for its threshold-driven block pruning. Alternatively, the techniques introduced in BMP, such as partial block (cluster) sorting and hybrid cluster structure with a forward index could also improve our code implementation.
[ "Faster Learned Sparse Retrieval with Block-Max Pruning", "Bridging Dense and Sparse Maximum Inner Product Search", "Representation Sparsification with Hybrid Thresholding for Fast SPLADE-based Document Retrieval", "RetroMAE-2: Duplex Masked Auto-Encoder For Pre-Training Retrieval-Oriented Language Models", "Optimizing Guided Traversal for Fast Learned Sparse Retrieval", "The Tale of Two MSMARCO - and Their Unfair Comparisons", "A Static Pruning Study on Sparse Neural Retrievers", "Efficient Document-at-a-time and Score-at-a-time Query Evaluation for Learned Sparse Representations", "Hybrid Inverted Index Is a Robust Accelerator for Dense Retrieval", "SimLM: Pre-training with Representation Bottleneck for Dense Passage Retrieval", "An Efficiency Study for SPLADE Models", "From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective", "Faster Learned Sparse Retrieval with Guided Traversal", "ColBERTv2: Effective and Efficient Retrieval via Lightweight Late Interaction", "RocketQAv2: A Joint Training Method for Dense Passage Retrieval and Passage Re-ranking", "SPLADE: Sparse Lexical and Expansion Model for First Stage Ranking", "A Few Brief Notes on DeepImpact, COIL, and a Conceptual Framework for Information Retrieval Techniques", "Learning Passage Impacts for Inverted Indexes", "Anytime Ranking on Document-Ordered Indexes", "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models", "COIL: Revisit Exact Lexical Match in Information Retrieval with Contextualized Inverted List", "Fast Disjunctive Candidate Generation Using Live Block Filtering", "Overview of the TREC 2020 Deep Learning Track", "Context-Aware Term Weighting For First Stage Passage Retrieval", "The Sparse MinMax k-Means Algorithm for High-Dimensional Clustering", "Dense Passage Retrieval for Open-Domain Question Answering", "An Experimental Study of Index Compression and DAAT Query Processing Methods", "Faster BlockMax WAND with Variable-sized Blocks", "On the Efficiency of Selective Search", "Billion-Scale Similarity Search with GPUs", "A Comparison of Document-at-a-Time and Score-at-a-Time Query Evaluation", "Efficient distributed selective search", "Efficient and Robust Approximate Nearest Neighbor Search Using Hierarchical Navigable Small World Graphs", "Anytime Ranking for Impact-Ordered Indexes", "A candidate filtering mechanism for fast top-k query processing on modern cpus", "Efficient and effective retrieval using selective pruning", "Effect of dynamic pruning safety on learning to rank effectiveness", "Faster top-k document retrieval using block-max indexes", "Efficiency and effectiveness of query processing in cluster-based retrieval", "Efficient query evaluation using a two-level retrieval process", "Query Evaluation: Strategies and Optimizations", "Least squares quantization in PCM", "IOQP: A simple Impact-Ordered Query Processor written in Rust", "RetroMAE: Pre-training Retrieval-oriented Transformers via Masked Auto-Encoder", "Accelerating Learned Sparse Indexes Via Term Impact Decomposition", "PISA: Performant Indexes and Search for Academia", "A probabilistic model of information retrieval: development and comparative experiments - Part 1" ]
Multi-dimensional Evaluation of Empathetic Dialogue Responses
Multi-dimensional Evaluation of Empathetic Dialogue Responses Abstract Empathy is critical for effective and satisfactory conversational communication. Prior efforts to measure conversational empathy mostly focus on expressed communicative intents—that is, the way empathy is expressed. Yet, these works ignore the fact that conversation is also a collab- oration involving both speakers and listeners. In contrast, we propose a multi-dimensional empathy evaluation framework to measure both expressed intents from the speaker’s perspec- tive and perceived empathy from the listener’s perspective. We apply our proposed framework to analyze our internal customer-service dia- logue. We find the two dimensions (expressed intent types and perceived empathy) are inter- connected, and perceived empathy has a high correlation with dialogue satisfaction levels. To reduce annotation costs, we explore differ- ent options to automatically measure conver- sational empathy: prompting LLMs and train- ing language model-based classifiers. Our ex- periments show that prompting methods with even popular models like GPT-4 and Flan fam- ily models perform relatively poorly on both public and our internal datasets. In contrast, instruction-finetuned classifiers based on Flan- T5 family models outperform prior works and competitive baselines. We conduct a detailed ablation study to give more insights into instruc- tion finetuning method’s strong performance. 1 Introduction Empathy as a complex socio-emotional phe- nomenon plays a crucial role in forming and main- taining social interactions . Prior works in psychology and social science have high- lighted empathy’s importance in human commu- nications (Omdahl, 2014; Goodwin and Heritage, 1990; Goodwin et al., 2012; Suchman et al., 1997, inter alia ). As speech and text-based dialogues remain prevalent in human communication, devel- oping computational approaches to analyze and measure empathy expressed in dialogues is gaining attention in CL community (Rashkin et al., 2019; Liu et al., 2021; Sharma et al., 2020, inter alia ). Prior efforts propose empathy frameworks grounded in psychology and social science theories. These frameworks are then adapted for specific ap- plications or dataset domains. Correspondingly, researchers propose data-driven approaches to an- alyze and measure empathy. Upon reviewing ex- isting works by the CL community, we found that they mostly adopt an individualistic conceptualiza- tion of empathy which position empathy as a primarily individual mental process. These works mainly focus on mod- eling expressed empathy in dialogues. However, recent psychology works argue that empathy is more fundamentally relational: instead of a static personality trait, empathy is a dynamic process, aiming to serve interpersonal communicative pur- poses. Van Dijke et al. (2020) emphasize empathy as a collaborative practice, involves participation by both parties and is shaped in social interactions. Thus, studying perceived empathy in a collabora- tive, dynamic dialogue setting is of importance. In this study, we propose a novel multi- dimensional evaluation framework to assess per- ceived empathy in multi-round dialogues, mea- suring expressed empathetic intents alongside per- ceived empathy. Notably, our framework includes four perceived empathy dimensions: (1) perceived engagement , (2) perceived understand , (3) per- ceived sympathy and (4) perceived helpfulness . 
To validate this framework’s practicality, we annotate an internal dataset of 2,000 customer service dia- logue sessions. and carry out a fine-grained analy- sis. Our findings show a strong correlation between perceived empathy and overall dialogue satisfac- tion ratings, affirming the framework’s efficacy in capturing empathetic traits. 1\nThe proposed empathy evaluation framework still requires subjective assessments from human experts, which is expensive and non-trivial to col- lect. To scale up evaluation without excessive re- liance on annotated data, we investigate two model- based evaluation approaches: prompting LLMs and training language model-based classifiers. We show that prompting methods with popular LLMs such as GPT-4 and Flan family models still per- form poorly, indicating the challenging nature of measuring conversational empathy. In contrast, tra- ditional supervised finetuning methods yield better accuracy with labeled data. Notably, instruction- finetuned classifiers based on Flan-T5 family mod- els achieve the best perfor- mance. We conduct a comprehensive ablation study of this instruction finetuning method to provide in- sights into its strong performance. To summarize, our contributions are three-fold: •We propose a novel multi-dimensional evaluation framework to measure conversational empathy from (1) expressed communicative intents and (2) perceived empathy aspects. •We apply this proposed framework to analyze an internal customer service dataset. We find the two empathy dimensions are inter-connected, while perceived empathy dimension directly af- fects conversation satisfaction. •We comprehensively study the effectiveness of different model-based evaluation methods. On both public and internal datasets, instruction- finetuned classifiers based on Flan-T5 family models outperform prompting methods and other competitive baselines. 2 Related Work Definition of empathy in literature. Empathy has been an influential concept in ethics and moral theory , social/developmental psychology and other domains. Con- tributions from various domains result in differ- ent definitions, functions, and proposed empathy components, posing a challenge to studying em- pathy . Prior works (Elliott et al., 2011, 2018; Althoff et al., 2016; Sharma et al., 2020, inter alia ) mainly consider empathy as an individual’s mental process. Recent psychological studies highlight empathy as a collaborative prac- tice involving participation by both parties (Van Di- jke et al., 2020; Yalçın and DiPaola, 2020, inter alia). Within this work, we adopt this interactional definition of empathy, as our aim is to measure and evaluate empathy in social scenarios such as real-time human-human conversation. Measuring empathy in dialogues. The CL community has made much effort in algorithmi- cally generating and measuring empathetic dia- log communications. For example, Rashkin et al. (2019) collected an empathetic dialog dataset via crowdsourcing, with each dialogue created with a preassigned emotion label and rated by partici- pants for empathy, relevance, and fluency. While they explored various empathetic dialog genera- tion models, their empathy measurement relied on BLEU , raising questions about the adequacy of reference-based metrics for empathy assessment. Sharma et al. (2020) focused on measuring expressed empathy intents in online mental support conversations, considering aspects like emotional reactions, interpretations, and ex- plorations, but did not address perceived empathy. 
Similarly, other works measured expressed empathy intents without considering perceived empathy, albeit with some differences in their intent categories. Previous work on measuring empathetic dia- logue is limited in two aspects. First, many stud- ies focused solely on expressed empathy. Although Rashkin et al. (2019) collected empa- thy ratings from the listener’s perspective, their primary focus was not empathy measurement. Sec- ond, nearly all previous work (Sharma et al., 2020; Liu et al., 2021, inter alia ) specifically focus on mental health support setting. Our work is different from prior works in two ways. Firstly, our measure- ment framework covers different forms of empathy expression (expressed empathy) and assesses how empathy is perceived by the listeners (perceived empathy). Secondly, we study model-based evalu- ation metrics on public datasets for online mental health support and an internal online customer sup- port dataset. This internal task-oriented dialogue dataset can provide valuable insights into empathy measurement across different contexts. 3 Empathy Measurement Framework Empathy as a collaborative practice. Empathy is critical in human communication for social interac- tions , varying in definition across psychology and social science literature (Hall and 2\nTable 1: Comparison between previous works and our empathy measurement framework. Expressed EmpathyPerceived Empathy Rashkin et al. (2019) None Uni-dimensional Sharma et al. (2020) 3 intents None Welivita et al. (2021) 8 intents None Liu et al. (2021) 7 intents None Our framework 16 intents 4 dimensions Schwartz, 2019; Yalçın and DiPaola, 2020, inter alia). Elliott et al. (2018) distinguishes affective andcognitive aspects of empathy. The affective as- pect relates to the emotional stimulation in reaction to the experiences and feelings expressed by oth- ers, while the cognitive aspect is a more deliberate process of understanding and interpreting the expe- riences and feelings of others and communicating such understandings. However, this interpretation overlooks empathy’s role in interactive dialogues where multiple parties engage. Recent psychology works empathize empathy as a collaborative, inter- active process (Van Dijke et al., 2020; Yalçın and DiPaola, 2020; Concannon and Tomalin, 2023, in- ter alia ), urging studies across diverse social scenar- ios. Motivated by these insights, our study adopts this interactional definition of empathy— empathy refers to an individual’s ability to perceive and un- derstand the other’s emotional states and respond correspondingly. The proposed empathy evaluation framework. In our study, we define empathy as a collaborative practice involving both parties. We measure empa- thy in dialog interactions using a two-dimensional framework covering both expressed empathy (com- municative intents) andperceived empathy : •Expressed Empathy (Communicative Intent) : we measure empathy from the speaker’s perspec- tive by predicting the specific intents of the utter- ance as ways to convey empathy. This dimension aligns with previous work , where intents are often detailed and domain-specific. •Perceived Empathy : we also assess empa- thy from the listener’s perspective, predicting whether an utterance is perceived as empathetic. This assessment breaks perceived empathy into four fine-grained aspects: –Engagement : measures the degree to which the listener perceives the speaker as involved in the conversation. 
Low engagement can lead to the listener feeling indifferent and lack of empathy. –Understanding : measures how well the lis- tener feels the speaker understands their situ- ation, feelings or problems. A lack of under- standing can diminish perceived empathy. –Sympathy : captures the listener’s perception of the speaker’s ability to empathize and react appropriately. A lack of sympathy can weaken perceived empathy. –Helpfulness : determines whether the listener finds the speaker’s communication to be help- ful in addressing the conversation’s core is- sues. While not directly related to empathy, in task-oriented dialogues, where addressing problems is the objective, helpfulness can af- fect perceived empathy. The four perceived empathy aspects are inter- connected and build upon each other. Engagement serves as the foundation, leading to Understanding, which in turn forms the basis for Sympathy and Helpfulness. Sympathy addresses emotional needs, while Helpfulness addresses practical needs. Our framework, unlike previous ones, covers both ex- pressed and perceived empathy simultaneously. It offers a multi-faceted approach to measuring per- ceived empathy, as summarized in Table 1. Applying the proposed framework to analyze real-world dataset. Our framework can reveal nuanced connections between the two dimensions in measuring dialogue empathy. Table 2 reports results from an internal commercial dataset on on- line customer support dialogues. We annotated this dataset using our two-dimensional framework (annotation details in Section 5.1 and Appendix B). First, a UX researcher identified 16 different intents using a grounded theory approach based on man- ual analysis of 100 conversations. Human raters then annotated these 16 expressed intents and 4 perceived empathy aspects for utterances sampled from 2,000 conversations (we sampled one utter- ance for each conversation and judged that utter- ance only). For each utterance, raters assessed in- tent presence (yes/no) and each of the 4 perceived empathy aspects using a Likert-scale from 1 to 5, where higher ratings indicate higher degrees. They also annotated user satisfaction for the whole con- versation if they were the customer. We acknowl- edge that raters, as a third-party to these conversa- tions, may not perfectly assess perceived empathy dimensions, but direct feedback from the customers also has limitations such as response biases. 3\nTable 2: Relationship between expressed intents and perceived empathy dimensions and user satisfaction (each column reports the mean value of the perceived empathy dimension’s ratings when an intent appears or not); ♢,†, and‡indicate a statistical significance of ratings without intent, compared to ratings with the intent, at p < 0.05, 0.01, and 0.001, respectively. We report the standard deviation in Table 9 in Appendix. 
Expressed IntentPerceived EnthusiasmPerceived UnderstandPerceived SympathyPerceived HelpfulnessConversation Satisfaction w/ INT w/o INT w/ INT w/o INT w/ INT w/o INT w/ INT w/o INT w/ INT w/o INT ask contact 3.97 3.99 3.90 3.96 † 3.05 3.08 ♢ 3.83 3.87 ♢ 3.72 3.74 ask details 4.01 3.98 3.95 3.95 3.07 3.08 3.89 3.86 3.75 3.74 ask confirm 4.00 3.99 3.96 3.95 3.06 3.07 3.92 3.86 ♢ 3.85 3.73 † aware problem 4.03 3.97 ‡ 4.03 3.91 ‡ 3.12 3.06 ‡ 3.92 3.84 ‡ 3.85 3.69 ‡ describe problem 4.04 3.98 ‡ 4.02 3.94 ‡ 3.15 3.06 ‡ 3.93 3.85 ‡ 3.88 3.72 ‡ express sympathy 4.07 3.98 ‡ 4.01 3.94 † 3.48 3.04 ‡ 3.87 3.86 3.82 3.74 express reassurance 4.06 3.97 ‡ 4.01 3.94 ‡ 3.17 3.05 ‡ 3.93 3.85 ‡ 3.84 3.72 ‡ express apology 3.97 3.99 3.96 3.95 3.12 3.07 ♢ 3.84 3.87 3.69 3.75 answer question 4.08 3.98 ‡ 4.05 3.94 ‡ 3.08 3.07 4.02 3.85 ‡ 3.92 3.73 ‡ clarify 4.06 3.99 4.11 3.95 ‡ 3.14 3.07 4.06 3.86 ‡ 3.99 3.74 † explain 4.09 3.99 † 4.08 3.95 ‡ 3.03 3.08 4.07 3.86 ‡ 4.00 3.73 ‡ excuse 3.96 3.99 3.97 3.95 3.08 3.07 3.78 3.87 3.56 3.75 ‡ inform action 4.06 3.98 ‡ 4.03 3.94 ‡ 3.08 3.07 3.98 3.85 ‡ 3.90 3.72 ‡ instruct action 4.02 3.98 ♢ 3.97 3.95 3.08 3.07 3.91 3.86 ♢ 3.75 3.74 tentative solution 4.10 3.98 ♢ 4.06 3.95 † 3.16 3.07 ♢ 4.00 3.86 † 3.87 3.74 ♢ contact other 3.95 3.99 3.88 3.95 ♢ 3.05 3.08 3.74 3.87 † 3.66 3.75 Results in Table 2 show that the two dimensions are connected but sufficiently distinct. We report the ratings of the four perceived empathy aspects (each column) when each intent (each row) ap- pears or not (“w/ INT” and “w/o INT”), and test whether ratings are significantly different by intent occurrence. First, we observed that some expressed empathy intents lead to significantly higher levels of perceived empathy in certain aspects, suggest- ing the value of conveying such intents to improve conversational empathy. Second, each perceived empathy aspect is connected with a distinct set of expressed intents and most expressed commu- nicative intents are only related to a subset of the four perceived aspects. Lastly, perceived empathy aspects differ from overall user satisfaction, yet have Spearman correlation coefficients of 0.410, 0.396, 0.099 and 0.580, all significant at 0.0001 level except perceived sympathy—indicating high correlation. In summary, these findings suggest the nuanced and multi-faceted nature of empathy measurement: two dimensions (expressed intents and perceived empathy aspects) are related but suf- ficiently different, and perceived empathy aspects are directly related to conversation satisfaction. 4 Empathy Measurement Models This section describes model-based metrics to mea- sure conversational empathy. Similar to previous studies (Sharma et al., 2020; Lee et al., 2022; Kim et al., 2021, inter alia ), we leverage language mod- els to build classifiers to distinguish between empa- thetic and non-empathetic utterances in dialogues. Measuring conversational empathy is an intrinsi- cally challenging task because: (1) dialogues vary in themes, types, and contexts; (2) empathy is often communicated implicitly , pos- ing challenges for state-of-the-art language mod- els ; (3) empathy expression styles and intents are often domain specific; and (4) the nuance between expressed and perceived empathy is often subtle and depends on the two parties’ social roles (Suchman et al., 1997; Depper- mann, 2011; Kupetz, 2014, inter alia ). 
For example, asking clarification questions may be perceived as empathetic in a dialogue between a therapist and a patient, but can be neutral in task-oriented dialogues such as customer service dialogues.
4.1 Problem Definition and Notations
We address conversational empathy measurement as a classification problem. Denote a dialogue as $\mathcal{D}$ consisting of utterances $u_i$, $\mathcal{D} = \{u_1, u_2, \ldots, u_{|\mathcal{D}|}\}$. We aim to learn a function $f_\theta: \mathcal{U} \rightarrow \mathcal{Y}$, parameterized by $\theta$, that maps the i-th utterance $u_i \in \mathcal{D}$ to a corresponding label y in the label space $\mathcal{Y}$. Such labels can be the presence of expressed communicative intents or perceived empathy aspects. Given the limitation of the maximum input length of language models, we use a sliding-window approach similar to existing works. In practice, function f takes as input the targeted utterance $u_i$ together with its j preceding utterances and j' succeeding utterances as a concatenated natural language sequence, i.e. $Concat(u_{i-j}, \ldots, u_{i-1}, u_i, u_{i+1}, \ldots, u_{i+j'})$. This design choice echoes the interactional empathy interpretation—the meaning and the effect of an utterance (whether it is perceived as empathetic or not) depends on its context with both the speaker and the listener, alongside their social roles.
4.2 Language Model-based Classifiers
Supervised finetuning. We include four encoder models: BERT-Large and RoBERTa-Large, in addition to the encoder part of Flan-T5: Flan-T5-Enc-{Large, XL}. We randomly initialize a classification head (linear layer) and finetune on the labeled training set. For instruction-finetuning methods, we map the labels to natural language verbalizers, and instruction-finetune the model to predict the corresponding verbalizers. We include two Flan-T5 models: Flan-T5-{Large, XL}. We design the natural language instructions to include the following parts: (1) intent describes what task we want the language model to perform; in our case, the task is to predict one specific utterance's expressed communicative intent or perceived empathy. (2) definition describes the definition of the communicative intent we want the language model to predict and can be skipped if the communicative intent is simple and self-explanatory. (3) domain describes the dialogue domain, such as mental therapy or customer service dialogue. (4) options refers to which options (verbalizers) the language model should predict from, such as whether an expressed empathy intent appears or not. Table 3 shows an example of such a natural language instruction schema. Complete instruction tuning templates are given in Appendix E.
Prompting method. We use a similar prompting format to the instruction finetuning methods, and experiment with two open-source instruction-finetuned encoder-decoder language models—Flan-T5-XXL and Flan-UL2. We also use a proprietary model, GPT-4-0613. For GPT-4, we use zero-shot and few-shot prompting, while for Flan-T5-XXL and Flan-UL2, we only use zero-shot due to their limited context length of 512.
5 Experimental Setup
5.1 Datasets
We use two public empathy dialogue understanding datasets and our internal dataset. Detailed dataset statistics are given in Appendix C and Table 12. Empathy Mental Health (EMH) is collected from mental health-focused subreddits and consists of 3.1k single-round, asynchronous dialogues on online mental health support. This dataset is a multi-label classification dataset where each dialogue is annotated on 3 expressed empathy dimensions at 3 levels.
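Tying Sections 4.1 and 4.2 together, the sketch below shows one plausible way to assemble the sliding-window context and an instruction-style input with verbalizer options; the template strings, example dialogue, and helper names are illustrative assumptions rather than the actual templates in Appendix E.

```python
def sliding_window(utterances, i, j=3, j_prime=3):
    """Concat(u_{i-j}, ..., u_{i-1}, u_i, u_{i+1}, ..., u_{i+j'}) around the
    target utterance, clipped at the dialogue boundaries."""
    left = max(0, i - j)
    right = min(len(utterances), i + j_prime + 1)
    return " ".join(utterances[left:right])

def build_instruction_input(utterances, i, intent, definition, domain, options):
    """Assemble an instruction-style input; the model is finetuned (or prompted)
    to emit one of the natural-language verbalizers listed in `options`."""
    context = sliding_window(utterances, i)
    return (
        f"You are a crowdsourcing annotator for {domain} dialogues. "
        f"Definition: {definition} "
        f"Dialogue: {context} "
        f"Question: does the utterance '{utterances[i]}' express {intent}? "
        f"Respond with one of: {', '.join(options)}."
    )

dialogue = [
    "My order never arrived.",
    "I'm really sorry to hear that, I understand how frustrating this is.",
    "Can you check what happened?",
]
print(build_instruction_input(
    dialogue, i=1,
    intent="express sympathy",
    definition="the agent acknowledges and empathizes with the customer's feelings.",
    domain="customer service",
    options=["yes", "no"],
))
```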
5 Experimental Setup

5.1 Datasets

We use two public empathy dialogue understanding datasets and our internal dataset. Detailed dataset statistics are given in Appendix C and Table 12.

Empathy Mental Health (EMH) is collected from mental health-focused subreddits and consists of 3.1k single-round, asynchronous dialogues on online mental health support. It is a multi-label classification dataset where each dialogue is annotated on 3 expressed empathy dimensions at 3 levels.

ESConv is collected with crowdworkers chatting in help-seeker and supporter roles, to simulate the scenario of synchronous peer-to-peer mental counseling and support. The dataset consists of 1.3k multi-round dialogues and is annotated with 3 mental support stages and 7 fine-grained mental support strategies. The dataset is originally designed for multi-class classification over the 7 communicative strategies (intents), and we adapt it to a multi-label classification setting.

Empeval is our internal dataset (due to privacy concerns, we cannot release the annotated dataset, but we try to present a holistic view of it to assist reproducibility). It includes 1,833 anonymized conversations sampled from a company's customer support logs, where the conversations are between customers and human representatives. All personal identifiers have been removed. On average, each conversation has 41.4 utterances, and each utterance has 7.4 tokens, as measured by the T5 models' WordPiece tokenizer. When we built the dataset, we randomly sampled 2,000 conversations from the log and used an internal human annotation platform to judge the conversations. The annotation questions are provided in Tables 10 and 11. Each conversation was annotated by two different raters recruited from the internal annotation platform, and 167 non-English conversations were discarded. On average, the two raters achieved a 90.23% agreement rate and a Cohen's kappa of 0.615 on the judgments, indicating a moderate rating consistency. For expressed communicative intents, we mark an intent as True when both annotators mark it as True. For the four perceived empathy dimensions, we convert the numerical annotations to binary labels by marking them as True when both annotators give a score higher than 4.

Table 3: An example illustration of the natural language instruction schema. This example is from the Empathy Mental Health (EMH) dataset with the communicative intent of emotional reactions.

Emotional Reactions: You are a crowdsourcing annotator. Now read the following definition and a corresponding dialogue between an emotional support seeker and an emotional support provider, then answer the corresponding question. Definition: emotional reaction is a common strategy in online mental support. It refers to the supporter expressing emotions such as warmth, compassion, and concern about what was experienced by the help seeker after listening to the help seeker's utterance in a dialogue. A weak communication addresses those emotions without explicit reference, e.g. 'Everything will be fine', while strong communication specifies the emotion, e.g. 'I feel really sorry for you.' {Dialogue} Question: by saying {utterance}, what is the extent of emotional reactions expressed by the supporter? Respond with strong, weak or no communication.

For all three datasets, we use five different train-test splits of 80-20 at the dialogue level; we then split the dialogues into utterances and conduct utterance-level training and prediction. We follow the standard practice for multi-label classification and train one classifier per label. Therefore, for each fold we train 3, 7 and 20 classifiers for EMH, ESConv and Empeval, respectively, and the reported results are averaged over the 5 splits.
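A sketch of the Empeval label aggregation rule described above (both annotators must mark an intent True; a perceived aspect becomes True only when both scores exceed 4). The dictionary-based record format is an assumption for illustration.

```python
def aggregate_labels(ann_a: dict, ann_b: dict, intents: list, aspects: list) -> dict:
    """Merge two annotators' judgments for one conversation into binary labels."""
    labels = {}
    for intent in intents:                 # e.g. "express sympathy", "clarify", ...
        labels[intent] = bool(ann_a[intent]) and bool(ann_b[intent])
    for aspect in aspects:                 # e.g. "enthusiasm", "understanding", ...
        labels[aspect] = ann_a[aspect] > 4 and ann_b[aspect] > 4
    return labels
```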
5.2 Compared Methods

Prior methods. A few prior studies aim to design model-based metrics to measure conversational empathy; we include the following methods:

Sharma et al. (2020) use a bi-encoder model structure to jointly predict the listener's communicative intent and extract rationales. The model is initialized with RoBERTa-Base with continued in-domain pretraining. The method is designed for single-round conversations, so we also report its single-round performance on our two multi-round dialogue datasets, ESConv and Empeval.

Welivita et al. (2021) use a customized structure based on RoBERTa-Base. The model additionally uses utterance-level attention to aggregate the hidden states of utterances within a dialogue window, which are then used by the classifier.

Li et al. (2023) use a sentence-level encoder method based on RoBERTa-Large with continued in-domain pretraining.

Language model-based classifiers. As previously discussed, we include six supervised finetuning methods: BERT-Large, RoBERTa-Large, Flan-T5-Enc-{Large, XL}, and Flan-T5-{Large, XL}.

Prompting methods. We include Flan-T5-XXL Zero-shot, Flan-UL2 Zero-shot, and GPT-4 {Zero-shot, Few-shot}. For GPT-4 Few-shot, we use a prompt template where each class has one in-context example, i.e., 3-shot for the EMH dataset and 2-shot for ESConv. For privacy reasons, we did not conduct GPT-4 experiments on the internal Empeval dataset.

5.3 Loss Functions

We notice long-tail label distributions on all three datasets, due to the nature of human-human dialogues and the categorization of communicative empathy. In addition to the standard cross-entropy loss, we include two other loss functions, Focal loss and LDAM loss, to tackle the class imbalance problem. The main reported performance is still based on the standard cross-entropy loss, and we use these two additional loss functions in an ablation study (detailed in Section 6.2).

5.4 Implementation Details

Instruction and prompt templates. One of the authors manually writes the natural language instructions used for instruction finetuning and the prompt templates for the prompting methods. We experiment with three different sets of prompting templates and select the best-performing one on a held-out development set of 100 instances. All templates for instruction finetuning and prompting are given in Appendix E.

Training details, hyperparameters and implementations. We use a preceding window of 3 and a succeeding window of 3 for all methods, including supervised ones and prompting methods; thus the input to the language model includes up to 7 utterances, except for utterances at the start or end of a dialogue. We additionally include an ablation study using no succeeding utterances and find that it mildly hurts performance (Section 6.2). For supervised methods, our implementation is based on a mixture of PyTorch and JAX. For the prior methods in Section 5.2, we adapt their official codebases respectively. We use the official GPT-4 API (GPT-4-0613).¹ More training details are given in Appendix D.

¹https://platform.openai.com/docs/guides/gpt/chat-completions-api
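As an illustration of the class-imbalance losses mentioned in Section 5.3, below is one common binary focal loss formulation for the per-label classifiers. The α and γ values are illustrative defaults, not hyperparameters reported in the paper, and this is a sketch rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def binary_focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Focal loss for per-label binary classification.
    logits, targets: tensors of shape (batch,) or (batch, n_labels)."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)            # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    loss = alpha_t * (1 - p_t) ** gamma * bce               # down-weight easy examples
    return loss.mean()
```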
5.5 Evaluation Metrics

We report Macro Precision, Macro Recall, Macro F1 and accuracy for each prediction task. For each dataset, we report the mean value of each metric across the different communicative intents/strategies; full results are given in Appendix F.

6 Result and Analysis

6.1 Main Results

We present the main results in Table 4.

Table 4: Main results on the three datasets. We mark the best number within each section in bold and denote the best number in each column with †. Full results are given in Appendix F.

                                      | EMH                                   | ESConv                                | Empeval
Method                                | M. Pre.  M. Rec.  M. F1   ACC.        | M. Pre.  M. Rec.  M. F1   ACC.        | M. Pre.  M. Rec.  M. F1   ACC.
PRIOR METHODS
Sharma et al. (2020)                  | 72.5     70.7     71.4    85.8        | 83.2     68.1     70.7    90.9        | 70.1     68.4     68.6    88.4
Welivita et al. (2021)                | 68.5     68.7     67.8    85.2        | 80.2     72.9     74.6    90.8        | 70.7     69.7     69.9    89.1
Li et al. (2023)                      | 75.3     70.9     72.7    86.3        | 78.8     72.1     74.4    90.8        | 72.7     70.7     71.0    89.5
ENCODER MODELS
BERT-Large                            | 76.9     73.8     75.1    87.5        | 75.5     71.7     73.1    90.8        | 76.7     74.6     75.9    91.1
RoBERTa-Large                         | 77.5     75.8     76.0    87.5        | 71.8     71.4     71.4    90.6        | 78.5     73.8     75.5    91.5
Flan-T5-Enc-Large                     | 73.7     74.8     74.1    88.5        | 77.0     71.9     72.8    90.9        | 74.0     71.4     72.6    91.5
Flan-T5-Enc-XL                        | 77.4     76.1     76.6    88.4        | 76.7     73.1     73.5    90.9        | 78.3     74.5     75.4    90.7
INSTRUCTION FINETUNED ENCODER-DECODER MODELS
Flan-T5-Large                         | 78.7†    74.1     74.8    88.2        | 76.8     74.8†    75.3†   91.2        | 80.5     75.8     77.2    92.1
Flan-T5-XL                            | 77.8     76.6†    77.0†   88.9†       | 78.6†    73.5     75.1    91.6†       | 81.7†    76.8†    78.3†   92.5†
PROMPTING METHODS
Flan-T5-XXL Zero-shot                 | 45.5     49.3     42.9    62.6        | 62.8     71.8     61.5    77.1        | 66.1     69.4     64.3    79.7
Flan-UL2 Zero-shot                    | 32.0     34.6     13.6    18.7        | 60.8     69.4     52.4    59.5        | 63.0     71.1     57.0    68.3
GPT-4 Zero-shot                       | 46.2     51.5     45.4    65.1        | 66.4     74.3     67.3    82.5        | -        -        -       -
GPT-4 Few-shot                        | 48.3     54.3     45.5    61.3        | 63.3     73.6     61.2    73.5        | -        -        -       -

We notice that prompting methods, although they require no or only a few labeled training instances, perform worse than supervised methods. For example, on the EMH dataset, GPT-4 Zero-shot only achieves 45.4% Macro F1 and 65.1% accuracy, while GPT-4 Few-shot achieves 45.5% and 61.3%. Encoder models, on the other hand, still perform competitively, outperforming prompting methods by a large margin, and the improvement is consistent across all datasets. For example, on the EMH dataset, the best encoder model, Flan-T5-Enc-XL, achieves 76.6% Macro F1 and 88.4% accuracy. This observation is consistent with recent works on designing challenging datasets for LLMs and on testing their understanding of implicatures conveyed in communication. From another perspective, it also indicates the challenging nature of the empathy understanding task: LLMs still struggle to understand empathy despite being provided with the exact definition in the prompt, while supervised finetuning with sufficient training samples still outperforms prompting methods.

Interestingly, GPT-4 Zero-shot outperforms Few-shot in terms of accuracy on both the EMH and ESConv datasets, which contradicts prior observations that adding few-shot exemplars helps improve performance. We hypothesize that our task, dialogue understanding, differs from the tasks evaluated in prior works in the following aspects: (1) empathy dialogue understanding is relatively challenging for language models; here, "challenging" means it requires background knowledge and expertise that might not be prevalent in the language model's pretraining corpus. (2) dialogue understanding leads to long inputs to the language model; as suggested by recent works (Liu et al., 2021, inter alia), processing long sequences is still challenging for LLMs.

Instruction-finetuned models achieve the best performance compared to encoder models and prompting methods, and the improvement is consistent on all datasets. On 2 out of 3 datasets, Flan-T5-Large (770M) achieves better performance than the best encoder model, and it is only worse than Flan-T5-Enc-XL (1.2B) on the EMH dataset. Further, Flan-T5-XL (3B) achieves the best performance compared to all other methods. We draw the conclusion that instruction-finetuned methods are better suited for measuring conversational empathy.
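For reference, the macro-averaged per-task metrics of Section 5.5 can be computed roughly as follows; this is a sketch using scikit-learn, not the authors' actual evaluation script.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def evaluate_one_label(y_true, y_pred):
    """Macro precision/recall/F1 and accuracy for one binary label (one classifier)."""
    p, r, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="macro", zero_division=0)
    return {"macro_p": p, "macro_r": r, "macro_f1": f1,
            "acc": accuracy_score(y_true, y_pred)}

def average_over_labels(per_label_results):
    """Mean of each metric across a dataset's labels (3, 7 or 20 classifiers)."""
    return {k: float(np.mean([res[k] for res in per_label_results]))
            for k in per_label_results[0]}
```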
6.2 Ablation Studies

Here we explore several ablations to understand how different design choices affect the instruction-finetuned encoder-decoder models' performance. Unless explicitly mentioned otherwise, the results reported in this subsection are based on Flan-T5-Large (770M) on our internal Empeval dataset.

Choice of loss functions. Most real-world datasets on empathy dialogue understanding have long-tail label distributions, so we examine whether different choices of loss function can improve the model's performance. From Table 5 we can see that LDAM loss and Focal loss indeed improve classification performance on less frequent classes, evidenced by improved Macro Recall and Macro F1, while remaining competitive in overall accuracy. This finding also suggests the possibility of further improving performance by carefully choosing the loss function and tuning its corresponding hyperparameters.

Table 5: Ablation study on the effect of the loss function. The results are averaged over the 20 predictive tasks on Empeval. We highlight the best number in each column.

Loss Function      | Macro Pre. | Macro Rec. | Macro F1 | ACC.
Cross Entropy Loss | 80.5       | 75.8       | 77.2     | 92.1
LDAM Loss          | 79.8       | 76.8       | 77.5     | 92.3
Focal Loss         | 79.5       | 77.3       | 78.0     | 92.1

Effect of model sizes. In Table 6 we compare different sizes of instruction-finetuned encoder-decoder models. As the model size scales from Small (60M) to Large (770M), performance takes a significant leap and then plateaus around Large. In fact, the 3B model has slightly worse Macro F1 than the 770M model (77.5% vs. 78.0%). We hypothesize that the 3B model's slightly worse performance might be because it quickly overfits the dataset.

Table 6: Ablation study on model sizes. The results are based on focal loss, averaged over the 20 predictive tasks on Empeval. We highlight the best number in each column.

Backbone LM          | Macro Precision | Macro Recall | Macro F1 | ACC.
Flan-T5-Small (60M)  | 68.7            | 66.5         | 66.4     | 91.1
Flan-T5-Base (220M)  | 78.1            | 74.5         | 75.7     | 91.8
Flan-T5-Large (770M) | 79.5            | 77.3         | 78.0     | 92.1
Flan-T5-XL (3B)      | 80.0            | 76.5         | 77.5     | 91.7

Effect of succeeding contexts. As mentioned earlier, for the main results in Table 4 we use a context window of 3 preceding and 3 succeeding utterances. In production systems, it is critical to monitor the dialogue state in real time, where succeeding utterances are not available. From Table 7 we notice that removing the succeeding context hurts performance. The result is expected, as the model uses less input information, and it matches our motivation that empathy is a collaborative practice and is context-dependent.

Table 7: Ablation study on the effect of adding succeeding context. The results are based on focal loss, averaged over the 20 predictive tasks on Empeval. We highlight the best number in each column.

Classifier Input       | Macro Pre. | Macro Rec. | Macro F1 | ACC.
w/o succeeding context | 79.2       | 74.8       | 75.6     | 91.6
w/ succeeding context  | 79.5       | 77.3       | 78.0     | 92.1

Effect of natural language instructions. The effect of natural language instructions is shown in Table 8.

Table 8: Ablation study on the effect of adding natural language instructions. The results are based on focal loss, averaged over the 20 predictive tasks on Empeval. We highlight the best number in each column.

Instruction Type | Macro Precision | Macro Recall | Macro F1 | ACC.
w/o instructions | 76.1            | 71.3         | 72.4     | 91.1
w/ instructions  | 79.5            | 77.3         | 78.0     | 92.1
Adding instructions significantly improves classification performance on all metrics. This comparison highlights the importance of natural language instructions in instruction finetuning.

7 Conclusion and Future Work

In this study, we introduce a comprehensive framework for assessing conversational empathy, focusing on both expressed empathetic intentions and perceived empathy. We apply this framework to analyze our internal dataset and find that it effectively correlates expressed intentions, perceived empathy, and overall dialogue satisfaction.

The proposed evaluation framework requires subjective assessments from trained annotators, which can be expensive and non-trivial to collect. To explore automated empathy measurement models, we rigorously compare various methods against human judgments. Overall, the instruction-finetuning method based on the Flan-T5 family of models achieves the highest classification performance, measured by F1-score and accuracy against human judgments. While our discussion primarily centers on measuring empathy perception in human-human conversations, an important future direction is to extend this analysis to human-machine interactions.

Limitations and Potential Risks

This paper studies the empathy evaluation problem in the context of human-human communication. A natural extension is to apply our framework to dialogue applications, and the findings in this paper may be subject to change in human-AI dialogues. Due to limited bandwidth, we only experiment with one proprietary model, GPT-4-0613. Including more proprietary models, such as Gemini and Claude, would increase the comprehensiveness of the results and analysis. To the best of our knowledge, this study does not involve potential risks.
[ "Beyond Perplexity: Multi-dimensional Safety Evaluation of LLM Compression", "In-Context Example Ordering Guided by Label Distributions", "Measuring perceived empathy in dialogue systems", "An In-depth Investigation of User Response Simulation for Conversational Search", "CONDAQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation", "Scaling Instruction-Finetuned Language Models", "Large Language Models are Zero-Shot Reasoners", "UL2: Unifying Language Learning Paradigms", "Scaling Up Models and Data with t5x and seqio", "Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?", "Perspective-taking and Pragmatics for Generating Empathetic Responses Focused on Emotion Causes", "Towards Emotional Support Dialog Systems", "A Taxonomy of Empathetic Response Intents in Human Social Conversations", "A Computational Approach to Understanding Empathy Expressed in Text-Based Mental Health Support", "Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks", "Towards a relational conceptualization of empathy", "5分で分かる!? 有名論文ナナメ読み:Jacob Devlin et al. : BERT : Pre-training of Deep Bidirectional Transformers for Language Understanding", "Modeling empathy: building a link between affective and cognitive processes", "Evaluating Empathy in Artificial Agents", "RoBERTa: A Robustly Optimized BERT Pretraining Approach", "Observing Dialogue in Therapy: Categorizing and Forecasting Behavioral Codes", "Therapist Empathy and Client Outcome: An Updated Meta-Analysis", "Towards Empathetic Open-domain Conversation Models: A New Benchmark and Dataset", "Empathy present and future", "Focal Loss for Dense Object Detection", "Large-scale Analysis of Counseling Conversations: An Application of Natural Language Processing to Mental Health", "Understanding empathy", "The Empathy–Altruism Hypothesis", "The Study of Formulations as a Key to an Interactional Semantics", "Bleu: a Method for Automatic Evaluation of Machine Translation", "A model of empathic communication in the medical interview.", "Empathy", "Does GPT-3 Generate Empathetic Dialogues? A Novel In-Context Example Selection Method and Automatic Evaluation Metric for Empathetic Dialogue Generation", "A Large-Scale Dataset for Empathetic Response Generation", "THE THEORY OF MORAL SENTIMENTS", "Empathy displays as interactional achievements—Multimodal and sequential aspects", "Emotion as stance", "The Ethics of Care and Empathy", "Empathy and moral development", "Cognitive appraisal, emotion, and empathy" ]
On the In-context Generation of Language Models
On the In-context Generation of Language Models

Abstract

Large language models (LLMs) are found to have the ability of in-context generation (ICG): when they are fed an in-context prompt containing somehow similar examples, they can implicitly discover the pattern of those examples and then complete the prompt in the same pattern. ICG is curious, since language models are not trained in exactly the same way as they are prompted, and the distribution of examples in the prompt differs from that of sequences in the pretraining corpora. This paper provides a systematic study of the ICG ability of language models, covering discussions of its source and influential factors, from the viewpoint of both theory and empirical experiments. Concretely, we first propose a plausible latent variable model to describe the distribution of the pretraining corpora, and then formalize ICG as a problem of next topic prediction. With this framework, we can prove that the repetition nature of a few topics theoretically ensures the ICG ability on them. Then, we use this controllable pretrained distribution to generate several medium-scale synthetic datasets (token scale: 2.1B~3.9B) and experiment with different settings of Transformer architectures (parameter scale: 4M~234M). Our experimental results further offer insights into how factors of data and model architecture influence ICG.

1 Introduction

As data and parameter scales continue to increase, large language models (LLMs) have shown striking emergent abilities, one of the most exciting of which is in-context learning (ICL). Given an in-context prompt that concatenates a few in-context examples and a query input, LLMs can somehow implicitly guess the "topic" of those examples and complete the query input in the desired way. Furthermore, LLMs can imitate those examples using the topic learned in context. For instance, Llama2-13B is able to generate plausible sequences of the topic of the in-context examples, as shown in Figure 1. This in-context generation (ICG) ability forms the foundation of multiple few-shot prompting methods like ICL and its variants such as Chain-of-thought.

Intuitively, one might comment that LLMs learn the ICG ability from data in the repetition mode, which roughly refers to text formed by concatenating sequences under the same topic. This is true to some extent. As is known, typical pretraining corpora (e.g., CommonCrawl¹) contain internet data with a non-negligible portion of array-page data, such as IMDB review pages². After preprocessing, these pages are converted to repetition-mode data, as shown in Figure 1a. However, this is not enough to explain the ICG ability, since LLMs can also generate sequences for in-context learned topics that do not appear to repeat, or are even unseen, in the pretraining corpora. For example, Figure 1 shows sampled completions of Llama2-13B given in-context prompts with different types of topics:

1. The first one is a repeated topic called "movie review" (Figure 1a), where Llama2-13B naturally has the ICG ability, since this topic appears to repeat in the pretraining corpora as mentioned.

2. The second type, nonrepeated topics, refers to those that appear in the pretraining corpora but never repeat, e.g., the forward method of any class inherited from nn.Module in PyTorch code (Figure 1b). Nevertheless, Llama2-13B can also generate plausible code for a forward method when prompted with a few of them.

3. The last type, unseen topics, includes those that never appear in the pretraining corpora.
For example, "unnatural addition" generates 2-digit arithmetic expressions that take subtraction as input but expect addition (like "1-1=2"), which is intuitively believed never to be seen in the pretraining corpora. However, Llama2-13B can also recognize this topic and generate plausible sequences in context, as shown in Figure 1c.

¹https://commoncrawl.org
²https://www.imdb.com

[Figure 1: ICG examples (generated from Llama2-13B) of different kinds of topics: (a) ICG of repeated topics (movie reviews), (b) ICG of nonrepeated topics (PyTorch forward methods), (c) ICG of unseen topics ("unnatural addition").]

The above results show that LLMs can generalize the repetition mode to nonrepeated and unseen topics. We term this phenomenon the topic generalization of ICG, abbreviated as ICG-generalization. ICG-generalization is curious because LLMs are not explicitly trained in the way they are tested. The biggest challenge in studying ICG and its generalization is that the true pretrained distribution is not accessible. Thus, we do not know the topic of a span or whether it appears to repeat, making it difficult to evaluate the ICG abilities of LLMs. To address this, we turn to synthetic data generated from a known and controlled pretrained distribution. The distribution is a hierarchical latent variable model (LVM), as shown in Figure 2, where a document is guided by two kinds of latent variables. The distribution is not only plausible for explaining real pretraining data but also convenient for analysis, since it decouples different levels of uncertainty.

Through the proposed pretrained distribution, we can naturally formalize ICG as a problem of next topic prediction, and then conduct mathematical analysis.
We first theoretically prove (Theorem 1) that, under some mild assumptions, if the language model fits the pretrained distribution well, then it is guaranteed to have the ICG ability on repeated topics in terms of convergence in probability. As a result, the ICG distribution (i.e., the generative distribution conditioned on the in-context prompt) converges to the true topic-paragraph distribution in probability. Next, we study ICG-generalization via exhaustive experiments, revealing that ICG-generalization is caused by factors on both the data and the model side. Concretely, we use the controllable pretrained distribution to generate several synthetic datasets (token scale: 2.1B~3.9B), and train Transformer language models with different settings (parameter scale: 4M~234M). Experiments show that data compositionality, the proportion of repeated topics, the Transformer's parameter scale, and the window size play crucial roles in enabling ICG-generalization, while the data topic uncertainty and the Transformer's attention head size have little influence.³ Our study provides insights for a better understanding of the ICG ability and of LLMs.

³These results are consistent with previous works on attention head pruning and on the importance of a large attention size.

2 Settings

2.1 Pretrained Distribution

[Figure 2: Bayesian network of the pretrained distribution, where the non-shaded nodes are latent variables.]

We assume the pretrained distribution is a hierarchical LVM, as shown in Figure 2, where a document is generated via the following steps: 1) Draw a latent mode α ∈ A from the mode prior p(α). 2) Draw a latent outline β_{1:N} ∈ B^N, containing the topics of the different paragraphs, from the Markov mode-outline distribution p(β_{1:N}|α) parameterized by the mode α. 3) Sample each paragraph x_i ∈ Σ* (Σ is the vocabulary) individually from the topic-paragraph distribution p(x|β_i), and concatenate the paragraphs with delimiters. The joint distribution of this LVM is:

p(α, β_{1:N}, x_{1:N}) = p(α) p(β_{1:N}|α) ∏_{i=1}^{N} p(x_i|β_i)    (1)

This distribution is plausible because: 1) It has a clear, realistic interpretation of how humans write documents. Generally, humans first determine the literary genre (e.g., narrative, letter, and so on), and then plan a specific structure for that genre before writing, as shown in Figure 1. Such a process is modeled via the mode prior p(α) and the mode-outline distribution p(β_{1:N}|α). 2) It is capable of describing any marginal distribution over language via marginalization over the latent variables. It is also convenient to analyze because of its disentanglement: the two kinds of uncertainty, topic transition and paragraph generation, are handled by two separate models, p(β_n|β_{1:n−1}, α) and p(x_n|β_n), respectively, rather than by the entangled marginal p(x_{1:N}).

2.1.1 Assumptions

The pretrained distribution comes with three additional assumptions. Firstly, as mentioned, typical pretrained distributions for LLMs include a repetition mode α̂ ∈ A that only generates repeated outlines β^N (β ∈ B), where β^N denotes an N-length outline in which every topic is β. This formally raises the following:

Assumption 1. There exists a mode α̂ ∈ A, called the repetition mode, such that p(β_{n+1}|β_n, α̂) = 1(β_{n+1} = β_n) for all timesteps n.

Other modes α ∈ A\{α̂} are called continuous modes, since the outline under them seems to shift gradually and continuously.
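The toy sampler below illustrates this generative process. It is a simplification under stated assumptions: continuous modes draw topics i.i.d. from a per-mode categorical distribution (as the paper itself later does for its synthetic data), topics are not compositional, and all sizes and Dirichlet parameters are illustrative rather than the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
N_MODES, N_TOPICS, VOCAB, PAR_LEN, N_PARS = 4, 10, 20, 3, 5
REPETITION_MODE = 0                                    # plays the role of \hat{alpha}

mode_prior = np.full(N_MODES, 1.0 / N_MODES)           # p(alpha)
topic_given_mode = rng.dirichlet(np.full(N_TOPICS, 5.0), size=N_MODES)   # p(beta|alpha)
# One small Markov topic-paragraph model per topic: initial vector and transitions.
init = rng.dirichlet(np.full(VOCAB, 0.05), size=N_TOPICS)
trans = rng.dirichlet(np.full(VOCAB, 0.05), size=(N_TOPICS, VOCAB))

def sample_paragraph(topic):
    tokens = [rng.choice(VOCAB, p=init[topic])]
    for _ in range(PAR_LEN - 1):
        tokens.append(rng.choice(VOCAB, p=trans[topic, tokens[-1]]))
    return tokens

def sample_document():
    alpha = rng.choice(N_MODES, p=mode_prior)                  # step 1: draw a mode
    if alpha == REPETITION_MODE:                               # step 2: draw an outline
        outline = [rng.choice(N_TOPICS, p=topic_given_mode[alpha])] * N_PARS
    else:
        outline = [rng.choice(N_TOPICS, p=topic_given_mode[alpha]) for _ in range(N_PARS)]
    paragraphs = [sample_paragraph(b) for b in outline]        # step 3: draw paragraphs
    return alpha, outline, paragraphs
```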
Secondly, we have to ensure that different modes and topics are genuinely different, to rule out redundancy. That is, they should be distinguishable in terms of a distance measure between distributions:

Assumption 2. For two different modes α, α′ ∈ A and an arbitrary context x_{1:n}, define:

KL_n(α‖α′) := Σ_x p(x|x_{1:n}, α) log [ p(x|x_{1:n}, α) / p(x|x_{1:n}, α′) ]    (2)

We assume that KL_n(α‖α′) ≥ log c_1 > 0. Likewise, for two different topics β, β′ ∈ B, define:

KL(β‖β′) := Σ_x p(x|β) log [ p(x|β) / p(x|β′) ]    (3)

We assume that KL(β‖β′) ≥ log c_2 > 0.

Thirdly, for convenience and without loss of plausibility, we assume that:

Assumption 3. For each paragraph x ∈ Σ*, its support under any topic β ∈ B is bounded: 0 < c_3 ≤ p(x|β) ≤ c_4 < 1.

2.1.2 Topic Types

With Assumption 1, the likelihood of any repeated outline β^N under the repetition mode α̂ depends only on the topic itself:

p(β^N|α̂) = p(β_1 = β|α̂) := p(β|α̂)    (4)

where p(β|α̂) is the repetition prior, measuring how often the topic β is chosen to repeat under mode α̂. Analogously, let p(β) be the topic prior, assessing the frequency of the topic β:

p(β) := Σ_{α∈A} p(β|α) p(α)    (5)

According to their appearance, we can formally group the topics β ∈ B into three mutually exclusive sets, as shown in Figure 1:

1. Repeated set R. ∀β ∈ R, p(β|α̂) > 0. That is, each topic in this set appears to repeat in the pretrained distribution. Intuitively, repeated topics account for a very small proportion of all topics in realistic data, i.e., r_R = |R|/|B| is small.

2. Nonrepeated set C. ∀β ∈ C, p(β|α̂) = 0 and p(β) > 0. In other words, this set contains topics that do not repeat but do appear in the pretraining corpora.

3. Unseen set U. ∀β ∈ U, p(β) = 0. Topics in this set are never seen in the pretraining corpora.

2.2 Problem Formalization

Consider a language model p_LM trained on samples of the above pretrained distribution p. The ICG ability can be formalized as:

Hypothesis 1. Given a language model p_LM trained on the pretrained distribution p and an in-context prompt x_{1:N}, where each sample x_n ∼ p(x|β̂), the in-context topic-repetition rate (ICTR), i.e., the probability that the language model generates a paragraph belonging to topic β̂ when prompted with x_{1:N}, is somehow close to 1:

p_LM(β̂|x_{1:N}) := p_LM(β_{N+1} = β̂|x_{1:N}) ≈ 1    (6)

Accordingly, the model ICG distribution p_LM(x|x_{1:N}) is somehow close to the true topic-paragraph distribution p(x|β̂):

p_LM(x|x_{1:N}) ≈ p(x|β̂)    (7)

Thus, we formalize ICG as next topic prediction, where language models seem to implicitly choose the topic of the in-context examples as the next topic. Our goal is to find support for this hypothesis from the perspective of both theory and empirical experiments.

3 Theoretical Support

Intuitively, the pretrained distribution itself ensures the ICG ability for the repeated topics R. This can be formalized explicitly by the following theorem:

Theorem 1. Given an in-context prompt x_{1:N}, where each sample x_n ∼ p(x|β̂) and β̂ ∈ R, the pretrained distribution has the following properties:

1. The data ICTR⁴ converges to 1 in probability (Corollary 4):

plim_{N→∞} p(β̂|x_{1:N}) = 1    (8)

where we denote p(β_{N+1} = β|x_{1:N}) := p(β|x_{1:N}).

2. For any candidate paragraph x ∈ Σ*, the data ICG distribution p(x|x_{1:N}) converges to the true topic-paragraph distribution p(x|β̂) in probability (Corollary 5):

plim_{N→∞} p(x|x_{1:N}) = p(x|β̂)    (9)

⁴Note that we use the prefix "data" to distinguish values from the pretrained distribution and from the language model distribution.

If the language model is expressive enough, it gradually approaches the pretrained distribution as the number of training examples increases.⁵ As a result, it would exhibit the same properties as shown in Theorem 1. Therefore, the ICG ability for repeated topics directly originates from the pretraining corpora.

⁵Previous works typically take this as the null hypothesis.
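A quick numerical illustration of the concentration behind Theorem 1, in a deliberately toy setting: unigram topic-paragraph models (the paper's synthetic data uses Markov models) and a uniform repetition prior; all sizes here are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
N_TOPICS, VOCAB, PAR_LEN = 10, 20, 3
topic_models = rng.dirichlet(np.full(VOCAB, 0.1), size=N_TOPICS)   # p(token|beta), unigram
repetition_prior = np.full(N_TOPICS, 1.0 / N_TOPICS)               # p(beta|\hat{alpha})

def log_p_paragraph(par, beta):
    return np.log(topic_models[beta][par]).sum()

true_topic = 3
for N in [1, 2, 4, 8, 16]:
    prompt = [rng.choice(VOCAB, size=PAR_LEN, p=topic_models[true_topic]) for _ in range(N)]
    # log p(beta | x_{1:N}, \hat{alpha})  is proportional to
    # log p(beta|\hat{alpha}) + sum_n log p(x_n|beta)
    log_post = np.log(repetition_prior) + np.array(
        [sum(log_p_paragraph(x, b) for x in prompt) for b in range(N_TOPICS)])
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    print(N, round(post[true_topic], 3))   # posterior mass on the prompt topic grows with N
```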
Detailed theoretical results are provided in Appendix B; here, we only present a proof sketch.

Proof Sketch. According to Section 2.1, for all x ∈ Σ*, the data ICG distribution is:

p(x|x_{1:N}) = Σ_{β∈B} p(β|x_{1:N}) p(x|β)    (10)

Therefore, the data ICG distribution p(x|x_{1:N}) is dominated by the topic predictive distribution p(β|x_{1:N}), i.e., the ICTR. p(β|x_{1:N}) can be further decomposed as a mixture over modes:

p(β|x_{1:N}) = Σ_{α∈A} p(α|x_{1:N}) p(β|x_{1:N}, α)    (11)

Firstly, we can prove that if β̂ ∈ R, then plim_{N→∞} p(α̂|x_{1:N}) = 1 (Corollary 1). Therefore, the mixture in formula (11) concentrates on the repetition-mode component p(β|x_{1:N}, α̂) when N is large:

p(β|x_{1:N}) ≈ p(β|x_{1:N}, α̂) = p(β|α̂) ∏_{n=1}^{N} p(x_n|β) / p(x_{1:N}|α̂)    (12)

This form is exactly a Bayesian posterior distribution, which is in accord with previous works connecting ICL and Bayesian statistics. Likewise, it turns out that if β̂ ∈ R, then plim_{N→∞} p(β̂|x_{1:N}, α̂) = 1 (Corollary 3), thus establishing the first point of Theorem 1. Since the data ICG distribution p(x|x_{1:N}) depends on the topic predictive distribution p(β|x_{1:N}), we can prove the second point of Theorem 1 analogously.⁶ In Appendices B and C, we also present a detailed formula for the convergence, in which the convergence speed depends on how distinguishable the different modes and topics are.

⁶Based on Theorem 1, for the regular in-context learning scenario where each example in the prompt is a tuple (x_n, y_n) consisting of an input x_n and an output y_n, we can also obtain similar theoretical conclusions about the ICL ability. Details are shown in Proposition 5 and Corollary 6.

4 Experiments

Theorem 1 cannot ensure the ICG ability for nonrepeated and unseen topics β ∈ C ∪ U, because they have a zero repetition prior p(β|α̂) = 0, and so their posterior under the repetition mode is also zero: p(β|x_{1:N}, α̂) = 0. The correct component p(x|β) would then never be selected under the repetition mode, preventing the ICG/ICL ability as a consequence. However, this is contrary to the real case, where LLMs have the ICG-generalization ability: they are able to generalize ICG/ICL abilities to nonrepeated and unseen topics β ∈ C ∪ U. We speculate that this might be caused by factors on both the data and the model side:

• Data side: the compositionality of natural language and the proportion of repeated topics r_R. Compositionality means that the meaning of a linguistic unit results from the individual meanings of its sub-parts and how they are combined. Thus, nonrepeated and unseen topics might share the same "sub-topics" with repeated topics. The bigger the proportion of repeated topics, the more frequently those sub-topics are shared. Therefore, LLMs may be able to recombine those sub-topics to recognize these out-of-distribution topics in the repetition mode and thereby exhibit generalization.

• Model side: the Transformer structure. As the mainstream architecture in NLP, the success of the Transformer is believed to originate from its strong generalization ability.

We conduct rich experiments to verify the above arguments.

4.1 Synthetic Data

We conduct the experiments on synthetic data generated via the controllable pretrained distribution. As mentioned, the distribution has three components:

1. Mode prior p(α). We set the mode prior to be uniform: p(α) = 1/|A|.

2. Mode-outline distribution p(β_{1:N}|α). For continuous modes α ∈ A\{α̂}, since we do not particularly care about the outline, we set p(β_{1:N}|α) = ∏_{n=1}^{N} p(β_n|α) for convenience, where p(β_n|α) is a categorical distribution whose parameter is sampled from a Dirichlet distribution.
The Dirichlet parameters are 0 for unseen topics (so that p(β) = 0 for β ∈ U) and 5 for the others. We set the repetition prior to be uniform: p(β|α̂) = 1/|R| = 1/(|B| r_R) for β ∈ R.

3. Topic-paragraph distribution p(x|β). In order to simulate compositionality, each topic β ∈ B is a tuple of M subtopics ρ_{1:M}, where ρ_m ∈ B_* (m ∈ [M]) and B = B_*^M. Accordingly, the paragraph x also contains M sub-paragraphs s_{1:M}, where each sub-paragraph is generated individually:

p(x|β) = ∏_{m=1}^{M} p(s_m|ρ_m)    (13)

The composition arity M controls the data compositionality. Given a fixed number of topics |B|, the number of subtopics |B_*| = |B|^{1/M} decreases as the composition arity M increases, and different topics are then more likely to share structure. Here, each sub-paragraph distribution p(s_m|ρ_m) is a Markov model whose initial probability vector π_{ρ_m} and transition matrix A_{ρ_m} are both sampled from Dir(γ1), where 1 is an all-ones vector. γ controls the uncertainty of the different topics, where a lower value is expected to raise the KL divergence between different topic-paragraph models, making them easier to distinguish, as shown in Appendix D.

4.1.1 Data Parameter Settings

We set the number of modes |A| = 32 and the number of topics |B| = 531441,⁷ where 95% of the topics are unseen (|U| = 504868). We set the vocabulary size |Σ| = 324, the length of each sub-paragraph |s_m| = 3, and the number of paragraphs in a document N = 30. Thus, each document contains 30(3M+1) tokens. For the other parameters of the pretrained distribution, including the composition arity M, the ratio of repeated topics r_R, and the topic uncertainty γ, we adjust their values to study the effects of data properties. Specifically, we experiment with M ∈ {2, 3, 4}, r_R ∈ {2^{-d} | d = 6, 7, ..., 13}, and γ ∈ {0.01, 0.02, ..., 0.05}.

For each configuration of the pretrained distribution, we generate 10M documents for training. Therefore, the number of tokens in a synthetic dataset ranges from 2.1B to 3.9B. Examples of the synthetic dataset are shown in Figure 6.

⁷Its square, cube and fourth roots are all integers.

Table 1: Configurations of the different models, where L is the number of layers, H is the number of attention heads, and D is the hidden dimension. For parameter efficiency, we use grouped query attention and set the number of key-value heads to H/2.

Model | L  | H  | D    | # params
X2S   | 3  | 6  | 384  | 4M
XS    | 4  | 8  | 448  | 8M
S     | 5  | 8  | 448  | 9M
M     | 6  | 8  | 512  | 15M
L     | 9  | 12 | 768  | 48M
XL    | 12 | 16 | 1024 | 114M
X2L   | 16 | 20 | 1280 | 234M

4.2 Models

We study the effect of the model size, the attention window size, and the number of attention heads of the Transformer. Table 1 shows the configurations of the different experimental models, whose parameter counts scale from 4M to 237M. The models are based on the Transformers implementation of Mistral. We train each model for 1 epoch on one NVIDIA A100 (40GB).

4.3 Evaluation Metrics

We aim to evaluate the overall ICG performance and the ICG-generalization ability of models using the ICTR. Firstly, we define the topic-wise ICTR as the expectation of the prompt-wise ICTR:

π^β_N = E_{p(x_{1:N}|β^N)}[ p_LM(β|x_{1:N}) ]    (14)

Then, we can obtain the average ICTR over different kinds of topics:

ICTR^B_N = (1/|B|) Σ_{β∈B} π^β_N,    ICTR^R_N = (1/|R|) Σ_{β∈R} π^β_N,
ICTR^C_N = (1/|C|) Σ_{β∈C} π^β_N,    ICTR^U_N = (1/|U|) Σ_{β∈U} π^β_N    (15)

Here, ICTR^B_N measures the overall ICG ability, while ICTR^C_N and ICTR^U_N reflect the ICG-generalization ability, where higher values suggest better generalization. In the experiments, since each pretrained document has 30 paragraphs, the trained model supports at most 29-shot in-context prompts.
So by default, we report ICTR^{B/R/C/U}_{29}, which we abbreviate as ICTR^{B/R/C/U}. According to the values of the above ICTRs, we further define the following four statuses of a trained model by thresholding:

1. Underfit: ICTR^R < 0.65.
2. Overfit: ICTR^R ≥ 0.65, ICTR^C < 0.65, and ICTR^U < 0.65.
3. C-Generalization: ICTR^R ≥ 0.65, ICTR^C ≥ 0.65, and ICTR^U < 0.65.
4. U-Generalization: ICTR^R ≥ 0.65, ICTR^C ≥ 0.65, and ICTR^U ≥ 0.65.

These thresholds are transcribed directly in the short sketch below. The computation of the prompt- and topic-wise ICTR is nontrivial, so we present it in Appendix F.

4.4 Results & Discussions

Our experimental results suggest the following arguments.

Data compositionality enables both ICG and ICG-generalization. Figure 3a shows the results for different composition arities. Clearly, data compositionality enables ICG and ICG-generalization; specifically: 1) As the composition arity M increases, the overall ICG performance consistently improves for models of all sizes trained on pretrained distributions with different repeated-topic proportions r_R. Notably, the improvement is especially significant when M increases from 2 to 3. For example, for all r_R, the ICTR^B_29 value is near 0 for many small models when M = 2, but is lifted to a considerable level when M = 3. 2) The models generalize ICG more easily when M is higher. When M = 2, most models are even hard to overfit on repeated topics, and only model X2L can generalize ICG to both nonrepeated and unseen topics, and only when r_R = 1/64. On the contrary, when M = 3 or M = 4, models of all sizes exhibit the ICG-generalization ability with much smaller r_R.

ICG-generalization emerges as the proportion of repeated topics rises. As shown in Figure 3a, a model typically tends to overfit only on repeated topics when r_R is small, and then suddenly exhibits the ICG-generalization ability once r_R hits a threshold. The threshold mainly corresponds to the data compositionality, where a higher composition arity M leads to a lower threshold and thus makes the model easier to generalize. For example, for model X2L, the generalization threshold of r_R is 1/64 when M = 2, and decreases to 1/2048 when M = 3. We speculate this is because the more compositional the data, the more likely it is that nonrepeated and unseen topics share sub-topics with repeated ones, and therefore the smaller the proportion of repeated topics needed for generalization.
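For reference, the four statuses defined in Section 4.3 can be transcribed directly as follows; the function simply applies the stated thresholds and returns None for the one combination the paper does not define.

```python
def model_status(ictr_R: float, ictr_C: float, ictr_U: float, thr: float = 0.65):
    """Classify a trained model from its ICTR values, per the thresholds above."""
    if ictr_R < thr:
        return "Underfit"
    if ictr_C < thr and ictr_U < thr:
        return "Overfit"
    if ictr_C >= thr and ictr_U < thr:
        return "C-Generalization"
    if ictr_C >= thr and ictr_U >= thr:
        return "U-Generalization"
    return None  # ICTR_C < thr but ICTR_U >= thr is not covered by the definitions
```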
[Figure 3: ICG-generalization results, where the color indicates the status of the corresponding model and the number in each cell shows the corresponding ICTR^B_29. (a) Models of different sizes trained on pretrained distributions with different composition arities M and proportions of repeated topics r_R, where the topic uncertainty γ is set to 0.01. (b) Models of different sizes trained on pretrained distributions with different topic uncertainties γ, where we set M = 3 and r_R = 1/1024. (c) Model L with different window sizes and numbers of attention heads, where we set M = 3, r_R = 1/1024, and γ = 0.01.]

[Figure 4: ICTR*_29 of different kinds of topics for model L trained on the pretrained distribution with different topic uncertainties γ, where the other parameters of the pretrained distribution are M = 3 and r_R = 1/1024.]

Topic uncertainty does not affect ICG-generalization. As shown in Figure 4, topic uncertainty mainly affects the fitting difficulty of the data rather than the ICG-generalization ability: as the topic uncertainty γ increases, the ICTR_29 of model L decreases consistently for all kinds of topics. However, we do not observe apparent ICG performance gaps between those kinds of topics.

Larger models do better on ICG and ICG-generalization. Model size is considered to be a major factor impacting the abilities of language models. This is also verified in our experiments, where we find: 1) As shown in Figure 3a, larger models not only have better ICTR^B_29, but also require fewer repeated topics to generalize to nonrepeated and unseen topics. 2) As shown in Figure 3b, larger models are able to deal with topics with more uncertainty, i.e., bigger γ: models larger than model M are capable of ICG-generalization when γ = 0.02, while smaller models underfit, especially model X2S, whose ICTR^B_29 is 0. 3) As shown in Figure 5a, in most cases, larger models achieve better ICTR^B given fewer demonstrations. However, curiously, this does not hold when the number of shots N is very small. For example, the ICTR^B_2 values of models X2S, XS, S, and M are typically greater than those of models L, XL, and X2L. We speculate this might be because, when N is small, larger models are more cautious in identifying the repetition mode.

[Figure 5: ICTR^B_* for different model configurations, where we set M = 3, γ = 0.01, and r_R = 1/1024. (a) ICTR^B_* for different model sizes. (b) ICTR^B_* for different window sizes for model L. (c) ICTR^B_* for different numbers of attention heads for model L.]

A big window size is necessary for ICG and ICG-generalization. Recently, Wang et al. (2023a) showed that LLMs conduct ICL by collecting information about the demonstrations in the prompt from previous label words.
Specifically, the hidden states of previous label words are good summaries of the corresponding demonstrations. Thus, the model needs to attend to all those previous "anchors" to conduct ICL, which hints that a small window size might harm ICL performance. For example, in the experimental results of Jiang et al. (2023b), the ICL performance of the RWKV series is generally inferior to that of Transformer structures. Our experiments also support this argument. As shown in Figures 3c and 5b, when the number of attention heads is fixed, a low window size causes underfitting. In most cases, as we increase the window size, the model shifts to overfit and finally to U-Generalization, while the overall ICTR^B_29 also rises. Note that there is again an emergent phenomenon, where the model suddenly learns ICG and ICG-generalization when its window size hits a threshold.

A big number of heads is not necessary for ICG and ICG-generalization. Multi-head/grouped attention is commonly believed to be a core driver of state-of-the-art Transformer models. Intuitively, different heads can attend to different parts of the text, making the model more expressive. However, our experiments show that this mechanism is not very important for ICG and ICG-generalization. As shown in Figure 3c, reducing the number of attention heads H for the XL model hardly changes the model status. Also, as shown in Figure 3c, at the same size (L), the model with the highest overall ICG performance does not necessarily have the most attention heads. We speculate that this is because the attention pattern for ICG is relatively simple, so different heads are effectively functionally equivalent. This is consistent with Michel et al. (2019), who find that performance on many tasks, including machine translation and natural language inference, is insensitive to the number of attention heads.

Generalization towards nonrepeated and unseen topics is almost the same. As shown in Figure 3, in most cases, no matter how the pretrained distributions and models are configured, the models generally end up either underfit, overfit, or in U-Generalization, but hardly ever in C-Generalization. This suggests that nonrepeated topics, though they appear in the pretrained distribution, are not easier for models to generalize to.

5 Conclusions

This paper provides a systematic study of the ICG ability of language models. Firstly, we propose a plausible latent variable pretrained distribution, formalizing ICG as a problem of next topic prediction. Then, we prove that the repetition nature of a few topics theoretically ensures the ICG ability on them. We also conduct rich experiments to study the effects of different factors of data and model architecture on ICG and ICG-generalization. We believe this paper is beneficial to a better understanding of the ICG ability, as well as of large language models.

Limitations

The major limitation of this work is that we do not provide theoretical support for ICG-generalization, and doing so is non-trivial. At present, we can only speculate that ICG-generalization results from the smoothing effects of neural probability approximators (e.g., the Transformer), where unseen inputs receive non-zero probabilities. Therefore, nonrepeated and unseen topics might have a non-zero repetition prior, making it possible for them to be chosen as the topic of the next paragraph.
This phenomenon might be especially obvious when these topics are similar to repeated ones, according to our experimental results. Further work on the theoretical understanding of ICG-generalization might take similarities between topics into account.
[ "Transparency at the Source: Evaluating and Interpreting Language Models With Access to the True Distribution", "Generative Calibration for In-context Learning", "Llama 2: Open Foundation and Fine-Tuned Chat Models", "In-Context Learning through the Bayesian Prism", "Transformers as Statisticians: Provable In-Context Learning with In-Context Algorithm Selection", "Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning", "GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints", "Injecting structural hints: Using language models to study inductive biases in language learning", "A Theory of Emergent In-Context Learning as Implicit Structure Induction", "Larger language models do in-context learning differently", "Language Model Crossover: Variation through Few-Shot Prompting", "What Can Transformers Learn In-Context? A Case Study of Simple Function Classes", "Towards Understanding Grokking: An Effective Theory of Representation Learning", "Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?", "An Explanation of In-context Learning as Implicit Bayesian Inference", "Inducing Transformer’s Compositional Generalization Ability via Auxiliary Sequence Prediction Tasks", "Examining the Inductive Bias of Neural Language Models with Artificial Languages", "Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity", "Language Models are Few-Shot Learners", "Pretrained Transformers Improve Out-of-Distribution Robustness", "PyTorch: An Imperative Style, High-Performance Deep Learning Library", "Analyzing Multi-Head Self-Attention: Specialized Heads Do the Heavy Lifting, the Rest Can Be Pruned", "Are Sixteen Heads Really Better than One?", "Revisiting the poverty of the stimulus: hierarchical generalization without a hierarchical bias in recurrent neural networks", "Attention is All you Need", "Data Noising as Smoothing in Neural Network Language Models", "Tree-Structured Composition in Neural Networks without Tree-Structured Architectures", "In-Context Learning of Large Language Models Explained as Kernel Regression", "Large Language Models Are Implicitly Topic Models: Explaining and Finding Good Demonstrations for In-Context Learning", "语言学纲要 = Essentials of linguistics", "Probability Inequalities for Sums of Bounded Random Variables", "UNDERSTANDING AND THE PRINCIPLE OF COMPOSITIONALITY", "WEIGHTED SUMS OF CERTAIN DEPENDENT RANDOM VARIABLES", "of the Association for Computational Linguistics" ]
Dependency Graph Parsing as Sequence Labeling
Dependency Graph Parsing as Sequence Labeling

Abstract

Various linearizations have been proposed to cast syntactic dependency parsing as sequence labeling. However, these approaches do not support more complex graph-based representations, such as semantic dependencies or enhanced universal dependencies, as they cannot handle reentrancy or cycles. By extending them, we define a range of unbounded and bounded linearizations that can be used to cast graph parsing as a tagging task, enlarging the toolbox of problems that can be solved under this paradigm. Experimental results on semantic dependency and enhanced UD parsing show that, with a good choice of encoding, sequence-labeling semantic dependency parsers combine high efficiency with accuracies close to the state of the art, in spite of their simplicity.

1 Introduction

In recent years, a new family of approaches has emerged for dependency parsing that treats the problem as a sequence labeling task. This has advantages in terms of simplicity, flexibility and speed, as parsing can be performed with generic sequence labeling software and easily combined with other tasks that operate within the same framework. For this, one needs an encoding that can represent parse trees as a sequence composed of one discrete label per word, so that a sequence labeling component can be trained and output trees can then be decoded from the sequences. In the last few years, a wide range of such encodings with different strengths and weaknesses have been proposed for dependency parsing (Strzyz et al., 2019; Lacroix, 2019; Strzyz et al., 2020; Gómez-Rodríguez et al., 2023; Amini et al., 2023). However, these encodings are designed for dependency trees, where each word is restricted to have exactly one parent and cycles are forbidden. The more complex family of structured prediction problems where the output is a graph of dependencies between words, including tasks like semantic dependency parsing, enhanced Universal Dependencies parsing, or even structured sentiment analysis, has not been integrated into the sequence labeling framework so far, due to a lack of encodings that can support reentrancy, disconnected nodes and cycles.¹

We bridge this gap by defining sequence labeling encodings for these problems, which are framed as predicting directed graphs between words and which, following some previous literature (Agić et al., 2015; Barnes et al., 2021), we group under the umbrella term of dependency graph parsing. By taking dependency tree encodings as a starting point and extending them to support graphs, we define a wide range of both unbounded and bounded encodings to cast dependency graph parsing as sequence labeling. To test the performance of the encodings, we experiment on two dependency graph parsing tasks, representative of different kinds of linguistically relevant structures: semantic dependency parsing (where the outputs are DAGs) and enhanced UD parsing (where graphs have cycles). The source code will be released at anonymous.

2 Background

Sequence-labeling approaches that partially perform syntactic parsing have long been known, in the form of supertagging. Still, the goal of supertagging is to cut the search space of the parsing process, not to fully replace it: the generated labels (supertags) are not enough to encode a whole parse tree, and a parsing algorithm is still needed to fill in the missing information.

¹Note that there exist linearizations to implement semantic dependency parsing with sequence-to-sequence (seq2seq) approaches. However, seq2seq models are not to be confused with sequence labeling models. In seq2seq models, the output length is arbitrary; for sequence labeling, the output needs to have exactly the same length as the input, i.e., graphs need to be encoded as one label per word. Existing seq2seq linearizations do not meet this condition, so they cannot be used for sequence labeling.
The first attempt at addressing the full task of natural language parsing as sequence labeling was by Spoustová and Spousta (2010), who introduced a linearization for dependency parsing where the label of each word encoded the PoS tag of its head and its relative position among words with the same PoS tag. However, machine learning algorithms of the time struggled to predict such labels, leaving the practical results far behind the state of the art. It was with the development of deep learning and its use in NLP architectures that parsing as sequence labeling became practically viable. This was shown by Gómez-Rodríguez and Vilares (2018) for constituent parsing and by Strzyz et al. (2019) for dependency parsing. For the purposes of this paper, we will leave work on constituent parsing linearizations aside and outline the encodings that cast dependency parsing as sequence labeling, since they have a more direct relation to our target problem of dependency graph parsing and will be the inspiration for our proposed encodings.

Common notation. Let V be a vocabulary of possible tokens. We denote a sentence of length n by w = [w_1, w_2, ..., w_n] ∈ V^n. Let S_n be the set of possible parses (dependency trees or dependency graphs) for sentences of length n. Then, a sequence labeling encoding for parsing is an injective function E : S_n → L^n, where L is a set of labels that is defined depending on the encoding. Thus, a sequence labeling encoding represents a parse for a sentence of length n, w = [w_1, w_2, ..., w_n], as a sequence of n labels (one per word), l = [l_1, l_2, ..., l_n] ∈ L^n.

Since E is injective, one can recover a parse in S_n from its associated label sequence in L^n via the inverse function, E^{-1} : {E(s) | s ∈ S_n} → S_n. This enables parsing as sequence labeling: if we train a tagger to predict a function f_Θ : V^n → L^n (where V is the vocabulary of possible input tokens, and Θ are the model parameters) that associates each word w with the encoding of its parse, we can obtain the parse for a sentence w as E^{-1}(f_Θ(w)). A theoretical limitation is that no known encodings are bijective, so E^{-1} is not defined on all possible sequences of labels (L^n), but just on those that correspond to encodings of valid trees. We call the set of such sequences, Γ_n = {E(s) | s ∈ S_n} ⊆ L^n, the set of well-formed label sequences for length n. Since f_Θ is an approximation by a sequence labeling classifier, it is possible that it outputs ill-formed label sequences. However, this is workable in practice, since there are simple heuristics that fix ill-formed sequences by converting them to well-formed ones (i.e., mapping from L^n to Γ_n).

We now define concepts related to k-planarity (Yli-Jyrä et al., 2003), which we later use to define the coverage of the various encodings. Two arcs in a dependency tree or graph are said to cross if their arrows cross when drawn above the words. Thus, two arcs (w_i, w_j) and (w_k, w_l) such that min(i, j) < min(k, l) cross iff min(k, l) < max(i, j) < max(k, l). A tree or graph is noncrossing, or 1-planar, if it contains no crossing arcs.
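A direct transcription of the crossing condition above, for arcs given as (head, dependent) pairs over 1-indexed word positions; the helper names are ours, not from the paper.

```python
from itertools import combinations

def arcs_cross(a, b):
    """True iff arcs a and b cross, per the condition above."""
    lo_a, hi_a, lo_b, hi_b = min(a), max(a), min(b), max(b)
    if lo_b < lo_a:                       # make arc a the one that starts strictly first
        lo_a, hi_a, lo_b, hi_b = lo_b, hi_b, lo_a, hi_a
    return lo_a < lo_b < hi_a < hi_b

def is_one_planar(arcs):
    """A tree or graph is 1-planar if it contains no crossing arcs."""
    return not any(arcs_cross(a, b) for a, b in combinations(arcs, 2))
```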
We also introduce the term relaxed 1-planar for a tree or graph with no pair of crossing arcs pointing in the same direction (i.e., only crossings between arcs in opposite directions are allowed). Finally, a tree or graph is (relaxed) $k$-planar for some $k \ge 0$ if it can be written as the union of $k$ (relaxed) 1-planar subgraphs (called planes).

Dependency parsing encodings. A sequence labeling encoding for dependency parsing is a sequence labeling encoding where the set of parses of interest, $S_n$, is $T_n$, the set of dependency trees for sentences of length $n$. In all dependency parsing encodings defined so far in the literature, each label $l_i$ assigned to a word $w_i$ is of the form $(d_i, x_i)$, where $d_i$ represents the label of the dependency going to $w_i$, and it is $x_i$ that varies between encodings and encodes the unlabeled dependency tree. Thus, we will focus on $x_i$ and ignore dependency labels from now on.

Positional encodings. We call positional encodings those where $x_i$ encodes the position of the head of $w_i$. Let $w_h$ be the head of $w_i$. The simplest such encoding is the naive positional encoding, where $x_i = h$, i.e., it encodes directly the position of the head of $w_i$ (as in the CoNLL format). However, this encoding has been shown not to work well in practice. Instead, the relative positional encoding represents a relative offset, $x_i = h - i$. While it did not obtain good results under simpler implementations, it has been shown to be viable when coupled with more powerful language models.

To reduce sparsity, one can use PoS tags to locate head words. In the relative PoS-based encoding (Spoustová and Spousta, 2010; Strzyz et al., 2019), $x_i$ is a pair $(p_i, o_i)$ such that if $o_i > 0$ then $w_h$ is the $o_i$-th among the words with PoS tag $p_i$ that are located to the right of $w_i$, and if $o_i < 0$, then it is the $-o_i$-th among the words with PoS tag $p_i$ to the left of $w_i$. This encoding has been shown to be very effective in high-resource setups where high-accuracy PoS tags are available, but tends to suffer when this is not the case (Muñoz-Ortiz et al., 2021). It is also possible to restrict offsets using properties other than PoS tags, as in the relative head-based encoding of Lacroix (2019), based on tagging words as leaf or non-leaf nodes and then encoding $x_i = o_i$, where the head is the $o_i$-th non-leaf to the right of $w_i$ (or the $-o_i$-th to the left if $o_i < 0$).

Unbounded bracketing encodings. Based on the axiomatization by Yli-Jyrä and Gómez-Rodríguez (2017), these representations encode each dependency arc by adding one symbol to the label of each of its endpoints. In the simplest version, the basic bracketing encoding adapted to sequence labeling by Strzyz et al. (2019), a right arc from $w_i$ to $w_j$ is represented by including a / symbol at the label $x_i$ and a > symbol at $x_j$, whereas a left arc from $w_j$ to $w_i$ is encoded by a < symbol at $x_i$ and a \ symbol at $x_j$.[2] The label for a word is a string formed by concatenating all symbols involving that word, so that, for example, a label $x_i$ of \>// means that the word $w_i$ has one outgoing arc to the left, two to the right, and one incoming arc from the left. Decoding is done with two separate stacks (one to decode / and > into right arcs, and the other < and \ into left arcs), and symbols are treated as brackets, so that > is always matched to the closest / and < to the closest \. Thus, it cannot handle trees that have crossing arcs within the same direction, being restricted to relaxed 1-planar trees.

[2] In Strzyz et al. (2019, 2020), unbounded bracketing encodings are defined differently, with arcs involving $w_i$ and $w_j$ being encoded at labels $x_{i+1}$ and $x_j$. We choose the straightforward $x_i$ and $x_j$, as the reason for the first option (reducing sparsity in projective trees) is not relevant for this work.
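To illustrate the bracket mechanics, here is a small sketch (ours, not the authors' implementation) that encodes a head array into basic bracket labels and decodes them back with one stack per direction, as described above. It assumes a relaxed 1-planar input tree and writes symbols within each label in the order \, >, <, /, so that incoming symbols are matched before outgoing symbols of the same word.

# Sketch (ours): basic bracketing encoding for dependency trees.
# heads[i] is the 1-based head of word i+1; 0 denotes the artificial root,
# whose arc is not encoded in this simplified version.

def encode_brackets(heads):
    n = len(heads)
    labels = []
    for i in range(1, n + 1):
        h = heads[i - 1]
        left_deps = sum(1 for d in range(1, n + 1) if heads[d - 1] == i and d < i)
        right_deps = sum(1 for d in range(1, n + 1) if heads[d - 1] == i and d > i)
        lab = "\\" * left_deps                 # head of left dependents
        lab += ">" if 0 < h < i else ""        # dependent of a head on its left
        lab += "<" if h > i else ""            # dependent of a head on its right
        lab += "/" * right_deps                # head of right dependents
        labels.append(lab)
    return labels

def decode_brackets(labels):
    """Recover arcs (head, dependent) with one stack per direction."""
    right_stack, left_stack, arcs = [], [], []
    for i, lab in enumerate(labels, start=1):
        for sym in lab:
            if sym == "\\":
                arcs.append((i, left_stack.pop()))     # close a pending left arc
            elif sym == ">":
                arcs.append((right_stack.pop(), i))    # close a pending right arc
            elif sym == "<":
                left_stack.append(i)                   # waits for a head on the right
            elif sym == "/":
                right_stack.append(i)                  # waits for a dependent on the right
    return sorted(arcs, key=lambda arc: arc[1])

heads = [2, 0, 2, 3]                  # w2 -> w1, w2 = root, w2 -> w3, w3 -> w4
assert encode_brackets(heads) == ["<", "\\/", ">/", ">"]
assert decode_brackets(encode_brackets(heads)) == [(2, 1), (2, 3), (3, 4)]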
To improve coverage, one can apply the notion of multiplanarity (Yli-Jyrä et al., 2003), dividing the dependency tree into two separate subgraphs (planes) and encoding each separately. This yields the 2-planar bracketing encoding, an encoding with two sets of brackets: the original /, >, < and \ are used to encode arcs in the first plane as above, and additional /*, >*, <* and \* are added to encode the arcs of the second plane. The decoding of the second plane is done with separate stacks, so that arcs in different planes can always cross. This makes the encoding support relaxed 2-planar trees, and thus yields over 99% coverage on a variety of tested treebanks.

We classify these encodings as unbounded because the number of possible labels is not bounded by a constant, but scales with respect to sentence length $n$ (consider, for example, that the first word of a sentence could have any number of / between 1 and $n$). Positional encodings are also unbounded, although their number of possible labels is $O(n)$ while in bracketing encodings it is $O(n^2)$. In spite of this theoretical drawback, unbounded bracketing encodings empirically tend to have fewer labels than positional encodings, and they have been shown to be a solid choice in many practical scenarios (Muñoz-Ortiz et al., 2021).

Bounded bracketing encodings. Gómez-Rodríguez et al. (2023) define two encodings, derived from the basic and 2-planar bracketing encodings, but where the labels are vectors of a fixed number of bits. Thus, they are bounded, as the number of possible labels is a constant. In the 4-bit encoding, each label $x_i$ is of the form $b^0_i b^1_i b^2_i b^3_i$, where each $b^j_i$ is a bit: $b^0_i$ is true (false) if $w_i$ is a right (left) dependent, $b^1_i$ is true iff $w_i$ is the outermost right or left dependent of its parent node, and $b^2_i$ and $b^3_i$ are true iff $w_i$ has one or more left or right dependents, respectively. While this encoding is very compact, having a total of 16 labels, it shares the drawback of the basic brackets of not supporting same-direction crossing arcs. The 7-bit encoding extends it using multiplanarity to support relaxed 2-planar trees, using 7 bits to represent two planes of arcs. Labels are of the form $x_i = b^0_i \cdots b^6_i$, three more bits than in the previous encoding: a bit is added to specify whether $w_i$ is a dependent in the first or the second plane, and the two bits indicating left or right dependents are split, with two bits representing the presence of such dependents in the first plane and two for the second plane. The rest of the bits retain their meaning. This encoding consistently outperformed unbounded bracketings in the experiments of Gómez-Rodríguez et al. (2023).

Transition-based encodings. Gómez-Rodríguez et al. (2020) show that many transition-based parsers can yield sequence labeling encodings. Though in theory applicable to dependency graph parsing, previous results on syntactic parsing show that the systems' accuracy degrades for non-projective trees, so we will discard this approach.

Hexatagging is the overall best-performing encoding known so far for dependency parsing. It is bounded and the most compact, as it represents projective trees with only 8 possible labels per word.
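As a concrete illustration of how compact these labels are, the sketch below (ours, not the authors' code) computes the 4-bit label of each word from a head array, following the bit definitions just given; the artificial root is treated as a node at position 0, and the input is assumed to satisfy the encoding's relaxed 1-planarity requirement.

# Sketch (ours): 4-bit labels, following the bit semantics described above.
# heads[i] = head of word i+1 (0 = artificial root at position 0).
from collections import defaultdict

def four_bit_labels(heads):
    n = len(heads)
    deps_of = defaultdict(list)
    for d, h in enumerate(heads, start=1):
        deps_of[h].append(d)
    labels = []
    for i in range(1, n + 1):
        h = heads[i - 1]
        sibs = deps_of[h]
        b0 = h < i                                        # w_i is a right dependent
        b1 = i == (max(sibs) if h < i else min(sibs))     # outermost dependent of its head
        b2 = any(d < i for d in deps_of[i])               # has left dependents
        b3 = any(d > i for d in deps_of[i])               # has right dependents
        labels.append("".join("1" if b else "0" for b in (b0, b1, b2, b3)))
    return labels

# Example: heads = [2, 0, 2]. w2 hangs from the root, is its outermost
# dependent, and has dependents on both sides.
assert four_bit_labels([2, 0, 2]) == ["0100", "1111", "1100"]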
Yet, its design makes it unlikely that an extension to graph parsing is possible, as it is based on projectivity and treeness (it requires converting dependency trees to a constituent-like representation). Thus, we will not use it here.

3 Unbounded graph encodings

Let $w = [w_1, w_2, \ldots, w_n] \in V^n$ be a sentence. A dependency graph for $w$ is a labeled,[3] directed graph $G = (V_w, E)$ where $V_w = \{w_1, \ldots, w_n\}$. Contrary to dependency trees, dependency graphs in general allow reentrancy (two or more incoming arcs to the same node) and cycles. Let $G_n$ be the set of dependency graphs for sentences of length $n$. A sequence labeling encoding for dependency graph parsing is one where $S_n = G_n$. We next present our unbounded encodings.

[3] For simplicity of explanation, we will leave aside dependency labels and focus on how to encode the underlying unlabeled graph; thus, we will focus on the $x_i$ components of the encoding. Labels can be added as separate components, similarly to syntactic dependency parsing, where each component $d_i$ encodes all the incoming relationships to $w_i$.

Positional graph encodings. A naive approach for dependency graph parsing as sequence labeling is to adapt the positional encodings for dependency tree parsing. This can be done by defining $x_i$ as a tuple of arbitrary length containing the absolute (or relative) positions of all incoming arcs of each word $w_i$, so $x_i$ is an ascendingly sorted tuple with the elements of $\{h : (w_h, w_i) \in E\}$ for the naive encoding and $\{h - i : (w_h, w_i) \in E\}$ for the relative encoding. For example, in the graph in Figure 1, $w_3$ has incoming arcs from $w_2$ and $w_6$, so the naive encoding assigns it the tuple (2,6) and the relative encoding (-1,3). Note that our definition of a dependency graph allows nodes with no incoming arcs, thus $x_i$ might be an empty tuple.

Figure 1: An example of a relaxed 2-planar dependency graph linearized with our unbounded encodings (the graph drawing is omitted; the per-word labels are reproduced below).

        w1      w2     w3      w4     w5          w6
Pabs:   (2)     ()     (2,6)   (1)    (1,3,4)     (5)
Prel:   (1)     ()     (-1,3)  (-3)   (-4,-2,-1)  (-1)
Br3:    </*/*   \/     ></     >*/    >>>*/       \>

Unbounded bracketing encodings. The extension of unbounded bracketing encodings for syntactic dependency parsing to dependency graphs is straightforward. In this family of linearizations, the restriction to a single parent per node for syntactic parsing is achieved by explicitly enforcing exactly one incoming symbol, < or >, in each $x_i$. For graph parsing, we remove this restriction and allow more than one such symbol, as well as zero (which can even produce an empty string, $\varepsilon$, for disconnected nodes). The decoding process does not change, using two different stacks for right and left arcs, and multiplanarity is supported by introducing new sets of brackets. In Figure 1, $w_3$ has one incoming arc from each direction (><) and an outgoing arc to the right (/), and $w_5$ has three incoming arcs from the left, of which one is in the second plane (>*).

Having $k$ sets of brackets provides coverage over relaxed $k$-planar graphs, like the tree encoding did for relaxed $k$-planar trees. However, it is worth noting that previous work using this encoding for syntactic parsing has never experimentally explored beyond $k = 2$ (i.e., adding one extra set of brackets <*, >*, /*, \*). The rationale was the trend that most syntactic trees are 2-planar (Gómez-Rodríguez and Nivre, 2013), so complicating parsing algorithms (or encodings) does not seem worthwhile for a tiny increase in coverage. We show for the first time that adding a third plane (with a third set of brackets <**, >**, /**, \**) boosts accuracy in several graph parsing datasets.
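The greedy assignment just described is simple enough to sketch directly; the code below is our illustration (not the authors' implementation), with a flag for relaxed planarity so that only same-direction crossings force an arc onto a higher plane.

# Sketch (ours) of the greedy plane assignment described in this section:
# traverse arcs in order and put each one on the lowest plane where it
# conflicts with no previously assigned arc. With relaxed=True, only
# same-direction crossings count as conflicts.

def crosses(a, b):
    if set(a) & set(b):
        return False
    (lo1, hi1), (lo2, hi2) = sorted([tuple(sorted(a)), tuple(sorted(b))])
    return lo2 < hi1 < hi2

def same_direction(a, b):
    return (a[0] < a[1]) == (b[0] < b[1])   # both rightward or both leftward

def assign_planes(arcs, k, relaxed=True):
    """arcs: list of (head, dependent). Returns a plane index per arc, or
    None for an arc that fits on none of the k planes (coverage failure)."""
    planes = [[] for _ in range(k)]
    assignment = []
    for arc in arcs:
        chosen = None
        for p in range(k):
            conflict = any(crosses(arc, other) and
                           (not relaxed or same_direction(arc, other))
                           for other in planes[p])
            if not conflict:
                chosen = p
                break
        if chosen is not None:
            planes[chosen].append(arc)
        assignment.append(chosen)
    return assignment

# Two same-direction crossing arcs land on different planes, while an
# opposite-direction crossing pair may share one under relaxed planarity.
assert assign_planes([(1, 4), (3, 5)], k=2) == [0, 1]
assert assign_planes([(1, 5), (6, 3)], k=2) == [0, 0]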
While the motivation to try this was that semantic dependency graphs can be denser than trees and have less propensity to be 2-planar, we have observed this effect even in datasets where the increase in coverage is very small, raising questions on whether the same could be true for syntactic parsing.

As the plane assignment algorithm (i.e., the algorithm that splits the arcs of gold graphs into planes in a canonical way), we extend the greedy plane assignment algorithm from previous work to support more than two planes: we traverse arcs in order and assign each to the lowest possible plane such that it does not cross any arcs already assigned to the same plane.

4 Bounded graph encodings

We now define two bounded encodings for graph parsing, based on the 4- and 7-bit encodings by Gómez-Rodríguez et al. (2023).

4.1 4k-bit encoding

Assumption. This encoding assumes that the set of edges $E$ of $G$ can be split into $k$ relaxed 1-planar subgraphs, such that in each subgraph, all nodes have at most one incoming arc (maximum in-degree 1). To do so, it explicitly adds a dummy node $w_0$ that has dependencies towards any parentless node. Thus, all nodes in the graph ($w_1 \ldots w_n$) can be seen as having exactly one incoming arc (from a regular node or the dummy node).

Encoding. The encoding uses a sequence of 4 bits to encode the arcs related to the word $w_i$ that are in the $j$th subgraph. The result is that each label $x_i$ is a grouped sequence of $4k$ bits, where the $j$th group of bits encodes only the arcs of the $j$th subgraph. The meaning of each of the four bits in a group is as in the 4-bit encoding of Gómez-Rodríguez et al. (2023): $b^{4j-4}_i$ is true (false) if $w_i$ has a left (right) parent in the $j$th subgraph (which could be the dummy node), $b^{4j-3}_i$ is true if $w_i$ is the farthest dependent of its parent in the $j$th subgraph, $b^{4j-2}_i$ is true if $w_i$ has left dependents, and $b^{4j-1}_i$ is true if $w_i$ has right dependents in the $j$th subgraph. Thus, this encoding concatenates $k$ instances of said 4-bit encoding, which is known to be injective and has coverage over relaxed 1-planar graphs with no more than one parent per node.[4]

[4] In the original paper, the 4-bit encoding is described as having coverage over relaxed 1-planar forests, but this is because the task, being about syntactic parsing, forbids cycles. The encoding itself does support graphs with cycles as long as they do not have reentrancy.

Plane assignment. We need a way to express a dependency graph as the union of $k$ relaxed 1-planar subgraphs with at most one parent per node. The plane assignment algorithm used in the unbounded bracketing does not suffice, for two reasons. First (1), arcs may need to be assigned to different subgraphs not only because they cross, but also because they share the same dependent.[5] Second (2), nodes that have a parent in the dependency graph may be parentless in one or more subgraphs. While this may not seem problematic because the encoding supports such nodes by linking them as children of the dummy node, this would require adding arcs that can break relaxed 1-planarity. To solve (1), we modify the plane assignment algorithm to consider two arcs incompatible if they cross or share the dependent. To solve (2), we add artificial arcs (which we call null arcs) linking each parentless node to the immediately previous node (such an arc is guaranteed not to produce a crossing). When implementing the parser, null arcs are specially labeled and excluded from the final parse.

[5] The 4k-bit encoding allows only one incoming arc per node and subgraph, whose direction is encoded in the first bit of each group.
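To make the bit layout concrete, here is a sketch (ours) that builds the 4k-bit label of each word once the $k$ subgraphs have been computed; plane splitting and null arcs are assumed to have been done beforehand, so that every word has exactly one head (possibly the dummy $w_0$) in every subgraph, and we read "farthest dependent" as the outermost dependent on the word's side of its head, as in the 4-bit encoding.

# Sketch (ours): building 4k-bit labels from k already-computed subgraphs,
# following the bit layout above. Each subgraph maps dependent -> head, with
# head 0 standing for the dummy node w0.

def label_4k(subgraphs, n):
    """subgraphs: list of k dicts {dependent: head}. Returns one bit-string
    label of length 4k per word (words are 1-based)."""
    labels = [""] * n
    for sub in subgraphs:                         # one 4-bit group per subgraph
        deps_of = {}
        for d, h in sub.items():
            deps_of.setdefault(h, []).append(d)
        for i in range(1, n + 1):
            h = sub[i]                            # unique parent in this subgraph
            same_side = [d for d in deps_of[h] if (d < h) == (i < h)]
            b0 = h < i                            # parent on the left (incl. w0)
            b1 = i == (min(same_side) if i < h else max(same_side))
            b2 = any(d < i for d in deps_of.get(i, []))   # left dependents
            b3 = any(d > i for d in deps_of.get(i, []))   # right dependents
            labels[i - 1] += "".join("1" if b else "0" for b in (b0, b1, b2, b3))
    return labels

# With k = 1 the encoding reduces to the plain 4-bit one:
assert label_4k([{1: 2, 2: 0, 3: 2}], 3) == ["0100", "1111", "1100"]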
Figure 2: Bounded encodings for the example of Figure 1 (the subgraph drawings are omitted; the per-word label groups are reproduced below). The relaxed 1-planar subgraphs for the 4k-bit encoding are shown with their linearization, added null arcs are drawn with dotted lines, and their associated bits underlined. For 6k-bit, we use colors to distinguish the subgraph pairs. Note that, in both cases, those arcs that are assigned to different planes w.r.t. the unbounded bracketing encoding (Figure 1) are marked with *.

                     w1       w2       w3       w4       w5       w6
4k-bit, subgraph 1:  0100     1111     1101     1000     1101     1100
4k-bit, subgraph 2:  1101     1000     0100     1000     1101     1100*
4k-bit, subgraph 3:  1101     1101     1101     1101     1101     1100*
6k-bit, pair 1:      000110   001001   111110   000000   111000   110001
6k-bit, pair 2:      001000   000000   000000   100000   110000   000000
6k-bit, pair 3:      000000   000000   000000   001000   110000   000000*

Figure 2 shows this assignment process: note the null arcs drawn with dotted lines, and the arc $(w_4, w_5)$ being assigned to the third (green) subgraph despite not crossing any other arc, since there are already arcs going to $w_5$ in the other two subgraphs.

Coverage. The 4k-bit encoding has coverage over the set of dependency graphs that can be expressed as the union of $k$ relaxed 1-planar graphs with maximum in-degree 1. This set is trivially a subset of (1) relaxed $k$-planar graphs, and (2) dependency graphs with maximum in-degree $k$. In practice, we show that $k = 2$ suffices for almost total coverage on enhanced UD datasets (Table 2), whereas on semantic dependency parsing datasets we need $k = 4$ for really high coverage in most cases (Table 1).

Decoding. It is performed as in Gómez-Rodríguez et al. (2023), independently for each group of bits, joining the resulting graphs.

4.2 6k-bit encoding

Assumption. This encoding assumes that the parse's edge set $E$ can be split into $2k$ relaxed 1-planar subgraphs, under two conditions. First, $k$ subgraphs have all of their arcs pointing to the left, while in the other $k$, all arcs are rightward. We consider that the $2k$ subgraphs are arranged in pairs, such that each pair has a leftward subgraph and a rightward subgraph. Second, all subgraphs have maximum in-degree 1. The 6k-bit encoding does not require dummy arcs, contrary to the 4k-bit encoding.

Encoding. Each label $x_i$ has $k$ groups of 6 bits each. The $j$th group in each label encodes information about the $j$th pair of subgraphs (composed of the $j$th rightward subgraph and the corresponding $j$th leftward subgraph). In particular, the meaning of each of the bits in the $j$th group is as follows: $b^{6j-6}_i$ is true iff $w_i$ has a parent in the $j$th rightward subgraph, $b^{6j-5}_i$ is true if $w_i$ is the farthest dependent of its parent in said subgraph, and $b^{6j-4}_i$ is true if $w_i$ has at least one dependent in said subgraph. Finally, $b^{6j-3}_i$, $b^{6j-2}_i$ and $b^{6j-1}_i$ have the same meaning for the leftward subgraph.

Plane assignment. To perform the assignment of arcs to subgraphs, we proceed as follows: left arcs and right arcs are processed separately (with the goal of choosing the pair whose leftward or rightward subgraph each arc is assigned to). For each of these subsets, we run a modification of the plane assignment algorithm of the unbounded bracketing.
Two arcs are incompatible if they share the dependent (for the same reason as in the 4k-bit encoding: subgraphs need to have maximum in-degree 1). However, here we do not need to add any null arcs, because this encoding supports representing parentless nodes natively: it suffices to leave $b^{6j-6}_i$ (resp. $b^{6j-3}_i$) unset to indicate that $w_i$ is parentless in the $j$th rightward (resp. leftward) subgraph.

Figure 2 contains an example of this assignment, with the same graph as in the previous examples. Colors depict subgraph pairs (with individual subgraphs being the subsets of leftward and rightward arcs of each color). The assignment of pairs is different from the assignment of subgraphs for 4k-bit: apart from not needing null arcs, the arc between $w_3$ and $w_6$ can now be assigned to the first subgraph pair (vs. the second subgraph in 4k-bit), since arcs in different directions that share a dependent can coexist in the different subgraphs of a same pair.

Coverage. The encoding has coverage over the set of dependency graphs that can be split into $2k$ subgraphs meeting the above conditions. This is a subset of (1) relaxed $k$-planar graphs (as each subgraph is relaxed 1-planar, each subgraph pair is also relaxed 1-planar, since joining a leftward graph with a rightward graph cannot generate crossings of arcs in the same direction, and the graph is the union of $k$ such pairs); and (2) dependency graphs with maximum in-degree $2k$, since each of the $2k$ subgraphs can contribute one parent to a given node. The encoding cannot cover graphs where a node has more than $k$ incoming arcs from the same direction, even if the in-degree does not surpass $2k$, as we only have $k$ rightward (leftward) subgraphs. In practice, as can be seen in Tables 1 and 2, the coverage of this encoding is larger than that of the 4k-bit one for the same value of $k$, although the latter is more compact.

Decoding. Again done by running the decoding by Gómez-Rodríguez et al. (2023), in this case for each three-bit subgraph, and then joining them.

5 Model architecture

Let $w$ be our input sentence. The model produces a sequence of vectors $\vec{w} = [\vec{w}_1, \vec{w}_2, \ldots, \vec{w}_n]$ using a generic encoder. This encoder can range from lookup tables mapping words to static embeddings[6] to modern encoders that output contextualized embeddings. We then use a generic decoder, $D_\phi$, to make output predictions at the word level. The decoder could also vary widely, from simple feed-forward networks to more sophisticated architectures. Let $\vec{o} = [\vec{o}_1, \vec{o}_2, \ldots, \vec{o}_n] = D_\phi(\vec{w})$ be the output representations for each word. We use these outputs to predict each component $x_i$ from the label $l_i = (d_i, x_i)$. The sequence $(x_1, \ldots, x_n)$ is fed into the specific decoding process to recover the set of predicted arcs $\hat{E}$. This process varies for each proposed encoding. To predict the label representing the relationship between the pair of nodes of a predicted arc $(w_i, w_j) \in \hat{E}$, we concatenate their representations $[(\vec{w}_i \mid \vec{w}_j) : (w_i, w_j) \in \hat{E}]$ and use them to feed another generic decoder $D_\varphi$ that predicts the component $d_i$ associated with them.

[6] However, previous work showed that continuous, contextualized representations are needed for accurate outputs.

6 Experiments

We present our empirical findings on two kinds of dependency graph structures with different properties: semantic dependency parsing, which produces directed acyclic graphs, and enhanced UD parsing, which can result in cyclic outputs. For lack of space, extra results are shown in Appendix A.3.
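The two-stage architecture of Section 5 can be summarized with a minimal PyTorch-style sketch; this is our own illustration (class, dimensions and the toy BiLSTM encoder are our choices), whereas the paper itself uses pretrained Transformer encoders and a one-layer feed-forward decoder.

# Minimal sketch (ours) of the Section 5 architecture: a generic encoder
# produces one vector per word; a feed-forward layer predicts the structural
# component x_i of each label; a second feed-forward layer scores dependency
# labels d_i from concatenated arc-endpoint vectors.
import torch
import torch.nn as nn

class GraphTagger(nn.Module):
    def __init__(self, vocab_size, n_tag_labels, n_dep_labels,
                 emb_dim=100, hidden_dim=200):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        self.tag_ffn = nn.Linear(2 * hidden_dim, n_tag_labels)   # predicts x_i
        self.dep_ffn = nn.Linear(4 * hidden_dim, n_dep_labels)   # predicts d_i

    def forward(self, word_ids, arcs):
        """word_ids: (1, n) token ids for one sentence; arcs: list of 0-based
        (head, dependent) pairs recovered by the encoding-specific decoder."""
        vecs, _ = self.encoder(self.emb(word_ids))      # (1, n, 2h)
        tag_scores = self.tag_ffn(vecs)                 # per-word label scores
        pair_vecs = torch.cat([torch.stack([vecs[0, i] for i, _ in arcs]),
                               torch.stack([vecs[0, j] for _, j in arcs])],
                              dim=-1)                   # (num_arcs, 4h)
        dep_scores = self.dep_ffn(pair_vecs)            # per-arc label scores
        return tag_scores, dep_scores

# Usage sketch:
# model = GraphTagger(vocab_size=1000, n_tag_labels=64, n_dep_labels=40)
# tags, deps = model(torch.randint(0, 1000, (1, 6)), [(1, 2), (5, 2)])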
Table 1: DAG in-distribution results. UF and LF are the accuracy metrics discussed in the text. OF is the coverage of each encoding in terms of oracle F-score. Best accuracies among sequence-labeling encodings for each dataset are highlighted in bold.

           DM (en)              PAS (en)             PSD (en)
           UF     LF     OF     UF     LF     OF     UF     LF     OF
A          69.51  68.82  100    87.79  86.36  100    44.81  39.87  100
R          69.14  68.42  100    87.67  86.21  100    42.12  37.32  100
B2         95.07  94.33  99.95  95.73  94.24  99.98  91.99  81.40  99.73
B3         95.26  94.58  100    95.75  94.26  100    92.20  81.62  99.98
B4_2       86.33  85.66  90.45  79.80  78.73  82.46  92.34  81.47  99.58
B4_3       93.01  92.18  97.64  89.56  88.21  92.89  92.55  81.81  99.95
B4_4       94.71  93.99  99.56  93.55  92.11  97.27  92.48  81.58  100
B6_2       91.26  90.65  96.09  87.43  86.37  91.28  92.58  81.93  99.69
B6_3       94.76  94.15  99.44  93.48  92.17  97.38  92.48  81.87  99.96
B6_4       95.16  94.40  99.95  95.23  93.79  99.27  92.62  81.89  100
Biaf       95.33  94.57  100    93.48  92.18  100    92.57  82.09  100

Datasets. For semantic dependency parsing, we use the following English datasets and formalisms from SemEval 2015 Task 18: (i) the DELPH-IN MRS-derived bi-lexical dependencies (DM), (ii) the Enju predicate-argument structures (PAS), and (iii) the Prague Semantic Dependencies (PSD; Hajič et al., 2012). They are all collections of sentences annotated as graphs, where some tokens do not contribute to the graph, some might have just one parent, and others multiple parents. These representations are directed acyclic graphs, which excludes the study of relevant phenomena involving cycles. To study cycles, we rely on enhanced Universal Dependencies, particularly on the version released together with the IWPT 2021 Shared Task. Appendix A.1 shows a detailed description of the languages we selected for evaluation in our work: Arabic, French, Finnish, and Slovak.

Metrics. We use the SDP evaluation toolkit.[7] We report both unlabeled and labeled F1 scores (UF, LF) w.r.t. the predicted dependencies, i.e., triplets of the shape (predicate, role, argument).[8] Further metrics, including precision, recall and exact match, are in the Appendix.

[7] https://github.com/semantic-dependency-parsing/toolkit
[8] Detection of root nodes is considered as identifying additional virtual dependencies, and counts for evaluation.

Models' setup. For the encoders of our taggers, we use a few representatives. We run experiments using two Transformer-based encoders: (1) XLM-RoBERTa, as a single multilingual pre-trained encoder for non-English experiments, and (2) XLNet for English experiments. The decoder is a 1-layered feed-forward network followed by a softmax. We did a minor hyperparameter search to tune the optimizer and adapted the batch size to our resources.[9] For comparison, we include an external model (Biaf), a biaffine semantic dependency parser from supar,[10] using the same encoders as in our experiments.

[9] See Appendix A.2 for more details about the model size and training specifications.
[10] https://github.com/yzhangcs/parser

6.1 Empirical results

We provide here the main results of our experiments; supplementary data such as additional accuracy metrics, as well as other data like treebank statistics, can be found in the appendices. For brevity, we refer to the absolute and relative graph positional encodings as A and R. Bracketing encodings are denoted as B_k, where $k$ represents the number of supported planes. For the 4k-bit and 6k-bit encodings, we use B4_k and B6_k, respectively, where the subscript indicates the value of $k$ (the number of subgraphs or pairs the graph is divided into).
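For concreteness, the UF and LF figures reported in the tables are F1 scores over dependency triplets; the following is a minimal sketch (ours, not the official SDP toolkit), where a triplet (predicate, role, argument) is represented as (head, dependent, label).

# Minimal sketch (ours): unlabeled (UF) and labeled (LF) F1 over triplets.

def f1(pred, gold):
    tp = len(pred & gold)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def uf_lf(pred_triplets, gold_triplets):
    """Triplets are (head, dependent, label); UF ignores the label."""
    lf = f1(set(pred_triplets), set(gold_triplets))
    uf = f1({(h, d) for h, d, _ in pred_triplets},
            {(h, d) for h, d, _ in gold_triplets})
    return uf, lf

# Example: one wrong label lowers LF but not UF.
print(uf_lf([(2, 1, "ARG1"), (2, 3, "ARG2")],
            [(2, 1, "ARG1"), (2, 3, "ARG1")]))   # -> (1.0, 0.5)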
Table 1 shows the results for the DAG experiments, including DM, PAS, and PSD in-domain test data, for all our encodings. For space reasons, results on the out-of-domain test sets are in the Appendix (Table 6), but the trends are similar. Positional encodings yield poor results across the board, proving inadequate for dependency graph parsing, probably due to the extra sparsity from having to represent a list of head positions rather than a single one as in syntactic parsing. The rest of our encodings, though, are much more robust: most of them are roughly on par with or even outperform the biaffine parser, a competitive baseline. Bounded encodings (especially B6_4) excel in cases where they achieve almost total theoretical coverage (column OF in the table). However, in PAS, where the coverage of bounded encodings is somewhat lower due to higher graph density (see Appendix Table 4 for graph density statistics), unbounded brackets are clearly better, with their 3-plane variant achieving the best accuracy.

In this respect, it is also worth noting that 3 planes work better than 2 in most cases in unbounded bracketing. Even though the differences are small, since the extra coverage that they afford is very small (e.g., from 99.95% and 99.98% to 100% in English DM and PAS), the third plane generally seems beneficial. This raises the question of whether the same might also be the case in syntactic dependency parsing, where the common assumption is that the third plane is not worth trying because the added complexity to learn it would offset any coverage gains.

Table 2: EUD parsing results on IWPT datasets. Notation as in Table 1.

           ar                   fr                   fi                   sk
           UF     LF     OF     UF     LF     OF     UF     LF     OF     UF     LF     OF
A          32.97  30.78  100    53.95  50.34  100    48.95  45.34  100    59.68  55.24  100
R          33.65  31.39  100    45.29  43.05  100    47.56  44.05  100    61.65  56.92  100
B2         87.73  81.02  99.82  90.48  86.02  99.97  90.60  87.52  99.93  93.10  89.88  99.79
B3         87.93  81.24  99.94  90.66  86.54  100    90.91  87.86  100    93.30  90.18  99.96
B4_2       88.54  81.97  99.77  92.43  87.87  99.87  91.66  88.57  99.60  94.18  90.71  99.72
B4_3       88.07  81.34  99.90  92.58  87.78  99.98  91.45  88.15  99.87  93.87  90.45  99.94
B4_4       88.06  81.32  99.94  91.66  87.55  100    91.63  88.55  99.94  94.23  90.56  99.99
B6_2       88.30  81.70  99.85  91.84  87.63  99.93  91.31  88.34  99.69  94.24  91.24  99.85
B6_3       88.28  81.43  99.93  91.81  87.91  99.99  91.89  88.93  99.89  94.23  90.95  99.97
B6_4       88.14  81.51  99.95  92.27  88.13  100    91.48  88.49  99.95  94.19  90.96  99.99
Biaf       88.77  81.66  100    93.88  90.22  100    93.82  91.14  100    94.13  90.43  100

For EUD parsing (Table 2), in spite of the presence of cycles, graphs are sparser than in our DAG datasets, so all encodings have fairly good coverage. In this context, bounded encodings obtain the best performance, consistent with the DAG results, although in this case they fall somewhat short of the biaffine parser for some languages, while being on par for others. Another difference is that, whereas in the DAG datasets the best bounded results were almost always obtained by B6_4 (the widest-coverage, but least compact, bounded encoding), for EUD there is more diversity, with the more compact 4-bit encodings B4_2 and B4_3 being best in some datasets.
Again, this seems related to coverage (these encodings had relatively low coverage on our DAG datasets, but over 99.6% on all EUD datasets). Finally, unbounded bracketings underperform bounded encodings in the UD datasets, confirming that they are worse for sparse graphs; but the phenomenon that 3 planes are better than 2 for unbounded brackets (in spite of the minimal difference in coverage) is still present.

Regarding practical efficiency, our models achieve speeds around 25% faster than the biaffine baseline (e.g., with XLNet, supar achieves 2387 tokens/s on the DM dataset, while our most accurate encoding on that dataset clocks in at 3037 tokens/s).

7 Conclusion

For the first time, we have framed dependency graph parsing tasks, like semantic dependency or EUD parsing, as sequence labeling tasks. We have proposed a wide variety of bounded and unbounded encodings that, with the right representation, can be learned by standard encoders. Among unbounded encodings, positional strategies performed poorly, but bracketing-based encodings obtained robust performance, excelling especially in dense datasets. On the other hand, the more compact bounded encodings, with a fixed number of bits per label, obtained the best results in sparser datasets. Overall, results are comparable to or even outperform a strong biaffine baseline. Thus, dependency graph parsing can effectively be solved as sequence labeling, as both bounded and unbounded encodings are learnable using standard bidirectional encoders and simple feed-forward decoders.

Limitations

Anchoring. This work focuses on dependency graph parsing, i.e., structured prediction problems where the input is a sentence and the output is a graph whose nodes correspond to words. While this template fits a considerable range of tasks, including several flavors of semantic dependency parsing, EUD parsing or graph-based sentiment analysis, there are kinds of meaning representation parsing that do not fit this framework. In more detail, meaning representations can be hierarchically organized in different formal flavors for semantic graphs, as described by Oepen et al. (2019, 2020). These flavors refer to the relationship between the words of a sentence and the nodes of the graph (known as anchoring). The scope of our work includes flavor (0) representations. In this flavor, the nodes of the graphs are the tokens of the input sentence, meaning there is a one-to-one correspondence between the nodes of the graph and the words. In future work, we aim to generalize our encodings to more relaxed flavors. However, we would like to remark that the technical contributions needed to cast these flavors as sequence labeling would mainly arise from aspects other than the linearizations of the graphs. Once the nodes of such graphs are computed, our linearizations could be directly applied to any dependency-based formalism.

Physical resources. Our computational resources are limited to eight shared RTX 3090 GPUs. Despite this, we have successfully trained models for various formalisms and languages. This has given us insight into the learnability of different encodings. While more powerful architectures might boost our results, our extensive empirical findings clearly support the key contributions of our work.

Ethical considerations. We do not observe ethical implications in our work.
Our research focuses on improving the technical aspects of semantic dependency parsing and enhanced Universal Dependencies parsing, which are primarily computational and linguistic challenges. The methodologies and applications discussed do not involve sensitive personal data, human subjects, or scenarios that could lead to ethical concerns. Thus, our findings and techniques can be applied within the field without ethical reservations.
[ "4 and 7-bit Labeling for Projective and Non-Projective Dependency Trees", "Not All Linearizations Are Equally Data-Hungry in Sequence Labeling Parsing", "Structured Sentiment Analysis as Dependency Graph Parsing", "Bracketing Encodings for 2-Planar Dependency Parsing", "A Unifying Theory of Transition-based and Sequence Labeling Parsing", "MRP 2020: The Second Shared Task on Cross-Framework and Cross-Lingual Meaning Representation Parsing", "Parsing as Tagging", "Unsupervised Cross-lingual Representation Learning at Scale", "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "Tetra-Tagging: Word-Synchronous Parsing with Linear-Time Inference", "Viable Dependency Parsing as Sequence Labeling", "Constituent Parsing as Sequence Labeling", "Seq2seq Dependency Parsing", "Simpler but More Accurate Semantic Dependency Parsing", "Decoupled Weight Decay Regularization", "Generic Axiomatization of Families of Noncrossing Graphs in Dependency Parsing", "Empirical Evaluation of Rectified Activations in Convolutional Network", "Semantic Dependency Graph Parsing Using Tree Approximations", "Divisible Transition Systems and Multiplanar Dependency Parsing", "Who Did What to Whom? A Contrastive Study of Syntacto-Semantic Dependencies", "Announcing Prague Czech-English Dependency Treebank 2.0", "Corpus-Oriented Grammar Development for Acquiring a Head-Driven Phrase Structure Grammar from the Penn Treebank", "Disambiguation of Super Parts of Speech (or Supertags): Almost Parsing", "Dependency Parsing via Sequence Generation", "From Raw Text to Enhanced Universal Dependencies: The Parsing Shared Task at IWPT 2021", "MRP 2019: Cross-Framework Meaning Representation Parsing", "Dependency Parsing as Sequence Labeling with Head-Based Encoding and Multi-Task Learning", "SemEval 2015 Task 18: Broad-Coverage Semantic Dependency Parsing", "Dependency Parsing as a Sequence Labeling Task" ]