---
license: cc-by-nc-sa-4.0
task_categories:
- text-generation
language:
- en
- ja
- zh
- fr
- it
- nl
- sv
- de
- ru
- es
tags:
- art
size_categories:
- 1M<n<10M
---

# Dataset Card for Multilingual Explain Artworks: MultiExpArt

## Dataset Description

### Dataset Summary
> As the performance of Large-scale Vision Language Models (LVLMs) improves, they are increasingly capable of responding in multiple languages, and demand for explanations generated by LVLMs is expected to grow. However, the pre-training of the vision encoder and the integrated training of the LLM with the vision encoder are mainly conducted using English training data, leaving it uncertain whether LVLMs can fully realize their potential when generating explanations in languages other than English. In addition, multilingual QA benchmarks whose datasets are created through machine translation carry cultural differences and biases, which remain obstacles to their use as evaluation tasks. To address these challenges, this study created an extended dataset in multiple languages without relying on machine translation. This dataset, which takes into account nuances and country-specific phrases, was then used to evaluate the explanation generation abilities of LVLMs. Furthermore, this study examined whether Instruction-Tuning in resource-rich English improves performance in other languages. Our findings indicate that LVLMs perform worse in languages other than English, and that they struggle to effectively leverage the knowledge learned from English data.

### Languages
This dataset is available in 10 languages: English, Japanese, German, French, Italian, Swedish, Chinese, Spanish, Russian, and Dutch.

### English Example
```json
{
  "id": 15,
  "title": "The Waterseller of Seville",
  "en_title": "The Waterseller of Seville",
  "image_url": "https://upload.wikimedia.org/wikipedia/commons/thumb/2/2a/El_aguador_de_Sevilla%2C_por_Diego_Vel%C3%A1zquez.jpg/270px-El_aguador_de_Sevilla%2C_por_Diego_Vel%C3%A1zquez.jpg",
  "en_image_url": null,
  "reference": "Velázquez's respect for the poor is evidence in the idea that the simple, elemental nature of poverty is profound and effective in depicting higher subjects and morals such as biblical stories .This simply means that even the slightest gestures are ones that are painted as if they were sacred acts. In the context of Velázquez's development as an artist, The Waterseller exhibits the beginnings of the technique found in the artist's mature creations. His insight into the person of the seller is symptomatic of his insight into the subjects of his great portraits, and his precise rendition of the small details of reality demonstrate his understanding of human perception.",
  "entry": "section",
  "prompt": "In The Waterseller of Seville, how is the Theme discussed?Please output in English.",
  "section_entities": ["The Waterseller", "Seville"]
}
```
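
To illustrate how a record like the one above might be consumed, here is a minimal Python sketch. The repository id `shintaro-ozaki/MultiExpArt` and the `train` split name are assumptions for illustration and are not confirmed by this card; the field names (`prompt`, `reference`, `image_url`) come from the example record. Since the artwork is stored as a Wikimedia URL rather than as embedded pixels, the sketch also downloads the image before it could be paired with the prompt for an LVLM.

```python
# Minimal usage sketch, not an official loader.
# Assumptions: repo id "shintaro-ozaki/MultiExpArt" and split "train".
from io import BytesIO

import requests
from datasets import load_dataset
from PIL import Image

ds = load_dataset("shintaro-ozaki/MultiExpArt", split="train")  # hypothetical repo id

record = ds[0]
print(record["prompt"])     # question the LVLM should answer
print(record["reference"])  # gold explanation text

# "image_url" points at a Wikimedia thumbnail, so the artwork must be
# fetched over HTTP. Wikimedia asks clients to send a descriptive User-Agent.
resp = requests.get(
    record["image_url"],
    headers={"User-Agent": "MultiExpArt-demo/0.1"},
    timeout=30,
)
resp.raise_for_status()
image = Image.open(BytesIO(resp.content))
print(image.size)  # e.g. (270, 357) for the thumbnail above
```

The downloaded `image` together with `record["prompt"]` forms the input to a vision-language model, and `record["reference"]` serves as the gold explanation for evaluation.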

## Citation

```bibtex
@misc{ozaki2025crosslingualexplanationartworklargescale,
  title={Towards Cross-Lingual Explanation of Artwork in Large-scale Vision Language Models},
  author={Shintaro Ozaki and Kazuki Hayashi and Yusuke Sakai and Hidetaka Kamigaito and Katsuhiko Hayashi and Taro Watanabe},
  year={2025},
  eprint={2409.01584},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2409.01584},
}
```