---
dataset_info:
  features:
    - name: id
      dtype: string
    - name: question
      dtype: string
    - name: answer
      dtype: string
    - name: type
      dtype: string
    - name: evidences
      sequence:
        sequence: string
    - name: supporting_facts
      struct:
        - name: title
          sequence: string
        - name: sent_id
          sequence: int64
    - name: context
      struct:
        - name: title
          sequence: string
        - name: sentences
          sequence:
            sequence: string
  splits:
    - name: train
      num_bytes: 664062413
      num_examples: 167454
    - name: validation
      num_bytes: 54492966
      num_examples: 12576
    - name: test
      num_bytes: 51538723
      num_examples: 12576
  download_size: 388043174
  dataset_size: 770094102
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*

---

# 2WikiMultihopQA

This repository only repackages the original 2WikiMultihopQA data so that every example follows the field layout used by HotpotQA. The content of the underlying questions, answers and contexts is unaltered. All intellectual credit for creating 2WikiMultihopQA belongs to the authors of the paper Constructing a Multi‑hop QA Dataset for Comprehensive Evaluation of Reasoning Steps (COLING 2020) and the accompanying code/data in their GitHub project https://github.com/Alab-NII/2wikimultihop.

## Dataset Summary

- **Name:** 2WikiMultihopQA
- **What's different:** only the JSON schema. Each instance now has `id`, `question`, `answer`, `type`, `evidences`, `supporting_facts`, and `context` keys arranged exactly like HotpotQA, so that existing HotpotQA data pipelines work out-of-the-box.

The dataset still contains multi‑hop question‑answer pairs with supporting evidence chains drawn from Wikipedia. There are three splits:

| split | #examples |
|------------|-----------|
| train | 167,454 |
| validation | 12,576 |
| test | 12,576 |

## Dataset Structure

Each example in the data files is a record like the following:

```json
{
  "id": "13f5ad2c088c11ebbd6fac1f6bf848b6",
  "question": "Are director of film Move (1970 Film) and director of film Méditerranée (1963 Film) from the same country?",
  "answer": "no",
  "type": "bridge_comparison",
  "level": "unknown",
  "supporting_facts": {
    "title": ["Move (1970 film)", "Méditerranée (1963 film)", "Stuart Rosenberg", "Jean-Daniel Pollet"],
    "sent_id": [0, 0, 0, 0]
  },
  "context": {
    "title": ["Stuart Rosenberg", "Méditerranée (1963 film)", "Move (1970 film)", ...],
    "sentences": [
      ["Stuart Rosenberg (August 11, 1927 – March 15, 2007) was an American film and television director ..."],
      ["Méditerranée is a 1963 French experimental film directed by Jean-Daniel Pollet ..."],
      ...
    ]
  }
}
```

### Field definitions

| Field | Type | Description |
|-------|------|-------------|
| `id` | string | Unique identifier (original `_id`). |
| `question` | string | Natural-language question. |
| `answer` | string | Short answer span (may be `"yes"`/`"no"` for binary questions). |
| `type` | string | Original 2Wiki question type (e.g. `bridge_comparison`, `comparison`). |
| `evidences` | List[List[string]] | Structured data (subject, property, object) obtained from Wikidata. |
| `supporting_facts.title` | List[string] | Wikipedia page titles that contain evidence sentences. |
| `supporting_facts.sent_id` | List[int] | Zero-based sentence indices within each page that support the answer. |
| `context.title` | List[string] | Titles for every paragraph provided to the model. |
| `context.sentences` | List[List[string]] | Tokenised sentences for each corresponding title. |
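
Because `supporting_facts` and `context` are parallel structures, the gold evidence sentences of an example can be recovered by pairing each supporting title with its sentence index. A minimal sketch (it assumes every supporting title also appears in `context.title`, which the layout above implies):

```python
from datasets import load_dataset

ds = load_dataset("framolfese/2WikiMultihopQA", split="validation")
example = ds[0]

# Map each context title to its list of sentences.
paragraphs = dict(zip(example["context"]["title"], example["context"]["sentences"]))

# supporting_facts pairs a page title with a zero-based sentence index.
gold_sentences = [
    paragraphs[title][sent_id]
    for title, sent_id in zip(
        example["supporting_facts"]["title"],
        example["supporting_facts"]["sent_id"],
    )
]
print(gold_sentences)
```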

## Data Splits

The conversion keeps the same train/validation/test division as the original dataset. No documents or examples were removed or added.

## Source Data

No new text was crawled; every paragraph is already present in the original dataset.

## License

The original 2WikiMultihopQA is released under the Apache License 2.0, and this redistribution keeps the same license; the LICENSE file is copied verbatim from the upstream repository.

## How to Use

```python
from datasets import load_dataset

ds = load_dataset("framolfese/2WikiMultihopQA")
print(ds["train"][0]["question"])
```
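
Since `load_dataset` returns a regular `DatasetDict`, the `type` field documented above can be used to slice the data by reasoning pattern. A small sketch:

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("framolfese/2WikiMultihopQA")

# Distribution of question types in the validation split.
print(Counter(ds["validation"]["type"]))

# Keep only comparison-style questions.
comparison = ds["validation"].filter(lambda ex: ex["type"] == "comparison")
print(len(comparison))
```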