| Column | Type | Value range / stats |
| --- | --- | --- |
| id | int64 | 953M to 3.35B |
| number | int64 | 2.72k to 7.75k |
| title | string | lengths 1 to 290 |
| state | string | 2 values |
| created_at | timestamp[s] | 2021-07-26 12:21:17 to 2025-08-23 00:18:43 |
| updated_at | timestamp[s] | 2021-07-26 13:27:59 to 2025-08-23 12:34:39 |
| closed_at | timestamp[s] | 2021-07-26 13:27:59 to 2025-08-20 16:35:55 |
| html_url | string | lengths 49 to 51 |
| pull_request | dict | n/a |
| user_login | string | lengths 3 to 26 |
| is_pull_request | bool | 2 classes |
| comments | list | lengths 0 to 30 |
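The records below list these twelve fields in the same order, one value per line: id, number, title, state, created_at, updated_at, closed_at, html_url, pull_request, user_login, is_pull_request, comments. As a quick orientation, here is a minimal sketch of how a dataset with this schema could be loaded and inspected with the `datasets` library; the repository id `user/github-issues` is a placeholder assumption, not a pointer to this exact dump.

```python
# Minimal sketch, not part of the dataset itself: load a dataset with the schema
# above and inspect one record. "user/github-issues" is a hypothetical repo id.
from datasets import load_dataset

ds = load_dataset("user/github-issues", split="train")

# Each record exposes the columns from the schema table.
example = ds[0]
print(example["number"], example["title"], example["state"])

# `pull_request` is a dict for pull requests and None for plain issues,
# which is what the boolean `is_pull_request` column summarizes.
issues_only = ds.filter(lambda row: not row["is_pull_request"])
print(len(issues_only), "records that are issues rather than pull requests")

# `comments` is a list of comment bodies (0 to 30 per record in this dump).
for comment in example["comments"]:
    print(comment[:80])
```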
997,127,487
2,919
Unwanted progress bars when accessing examples
closed
2021-09-15T14:05:10
2021-09-15T17:21:49
2021-09-15T17:18:23
https://github.com/huggingface/datasets/issues/2919
null
lhoestq
false
[ "doing a patch release now :)" ]
997,063,347
2,918
`Can not decode content-encoding: gzip` when loading `scitldr` dataset with streaming
closed
2021-09-15T13:06:07
2021-12-01T08:15:00
2021-12-01T08:15:00
https://github.com/huggingface/datasets/issues/2918
null
SBrandeis
false
[ "Hi @SBrandeis, thanks for reporting! ^^\r\n\r\nI think this is an issue with `fsspec`: https://github.com/intake/filesystem_spec/issues/389\r\n\r\nI will ask them if they are planning to fix it...", "Code to reproduce the bug: `ClientPayloadError: 400, message='Can not decode content-encoding: gzip'`\r\n```python\r\nIn [1]: import fsspec\r\n\r\nIn [2]: import json\r\n\r\nIn [3]: with fsspec.open('https://raw.githubusercontent.com/allenai/scitldr/master/SciTLDR-Data/SciTLDR-FullText/test.jsonl', encoding=\"utf-8\") as f:\r\n ...: for row in f:\r\n ...: data = json.loads(row)\r\n ...:\r\n---------------------------------------------------------------------------\r\nClientPayloadError Traceback (most recent call last)\r\n```", "Thanks for investigating @albertvillanova ! 🤗 " ]
997,041,658
2,917
windows download abnormal
closed
2021-09-15T12:45:35
2021-09-16T17:17:48
2021-09-16T17:17:48
https://github.com/huggingface/datasets/issues/2917
null
wei1826676931
false
[ "Hi ! Is there some kind of proxy that is configured in your browser that gives you access to internet ? If it's the case it could explain why it doesn't work in the code, since the proxy wouldn't be used", "It is indeed an agency problem, thank you very, very much", "Let me know if you have other questions :)\r\n\r\nClosing this issue now" ]
997,003,661
2,916
Add OpenAI's pass@k code evaluation metric
closed
2021-09-15T12:05:43
2021-11-12T14:19:51
2021-11-12T14:19:50
https://github.com/huggingface/datasets/pull/2916
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2916", "html_url": "https://github.com/huggingface/datasets/pull/2916", "diff_url": "https://github.com/huggingface/datasets/pull/2916.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2916.patch", "merged_at": "2021-11-12T14:19:50" }
lvwerra
true
[ "> The implementation makes heavy use of multiprocessing which this PR does not touch. Is this conflicting with multiprocessing natively integrated in datasets?\r\n\r\nIt should work normally, but feel free to test it.\r\nThere is some documentation about using metrics in a distributed setup that uses multiprocessing [here](https://huggingface.co/docs/datasets/loading.html?highlight=rank#distributed-setup)\r\nYou can test to spawn several processes where each process would load the metric. Then in each process you add some references/predictions to the metric. Finally you call compute() in each process and on process 0 it should return the result on all the references/predictions\r\n\r\nLet me know if you have questions or if I can help", "Is there a good way to debug the Windows tests? I suspect it is an issue with `multiprocessing`, but I can't see the error messages.", "Indeed it has an issue on windows.\r\nIn your example it's supposed to output\r\n```python\r\n{'pass@1': 0.5, 'pass@2': 1.0}\r\n```\r\nbut it gets\r\n```python\r\n{'pass@1': 0.0, 'pass@2': 0.0}\r\n```\r\n\r\nI'm not on my windows machine today so I can't take a look at it. I can dive into it early next week if you want", "> I'm not on my windows machine today so I can't take a look at it. I can dive into it early next week if you want\r\n\r\nThat would be great - unfortunately I have no access to a windows machine at the moment. I am quite sure it is an issue with in exectue.py because of multiprocessing.\r\n" ]
996,870,071
2,915
Fix fsspec AbstractFileSystem access
closed
2021-09-15T09:39:20
2021-09-15T11:35:24
2021-09-15T11:35:24
https://github.com/huggingface/datasets/pull/2915
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2915", "html_url": "https://github.com/huggingface/datasets/pull/2915", "diff_url": "https://github.com/huggingface/datasets/pull/2915.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2915.patch", "merged_at": "2021-09-15T11:35:24" }
pierre-godard
true
[]
996,770,168
2,914
Having a dependency defining fsspec entrypoint raises an AttributeError when importing datasets
closed
2021-09-15T07:54:06
2021-09-15T16:49:17
2021-09-15T16:49:16
https://github.com/huggingface/datasets/issues/2914
null
pierre-godard
false
[ "Closed by #2915." ]
996,436,368
2,913
timit_asr dataset only includes one text phrase
closed
2021-09-14T21:06:07
2021-09-15T08:05:19
2021-09-15T08:05:18
https://github.com/huggingface/datasets/issues/2913
null
margotwagner
false
[ "Hi @margotwagner, \r\nThis bug was fixed in #1995. Upgrading the datasets should work (min v1.8.0 ideally)", "Hi @margotwagner,\r\n\r\nYes, as @bhavitvyamalik has commented, this bug was fixed in `datasets` version 1.5.0. You need to update it, as your current version is 1.4.1:\r\n> Environment info\r\n> - `datasets` version: 1.4.1" ]
996,256,005
2,912
Update link to Blog in docs footer
closed
2021-09-14T17:23:14
2021-09-15T07:59:23
2021-09-15T07:59:23
https://github.com/huggingface/datasets/pull/2912
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2912", "html_url": "https://github.com/huggingface/datasets/pull/2912", "diff_url": "https://github.com/huggingface/datasets/pull/2912.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2912.patch", "merged_at": "2021-09-15T07:59:23" }
albertvillanova
true
[]
996,202,598
2,911
Fix exception chaining
closed
2021-09-14T16:19:29
2021-09-16T15:04:44
2021-09-16T15:04:44
https://github.com/huggingface/datasets/pull/2911
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2911", "html_url": "https://github.com/huggingface/datasets/pull/2911", "diff_url": "https://github.com/huggingface/datasets/pull/2911.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2911.patch", "merged_at": "2021-09-16T15:04:44" }
albertvillanova
true
[]
996,149,632
2,910
feat: 🎸 pass additional arguments to get private configs + info
closed
2021-09-14T15:24:19
2021-09-15T16:19:09
2021-09-15T16:19:06
https://github.com/huggingface/datasets/pull/2910
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2910", "html_url": "https://github.com/huggingface/datasets/pull/2910", "diff_url": "https://github.com/huggingface/datasets/pull/2910.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2910.patch", "merged_at": null }
severo
true
[ "Included in https://github.com/huggingface/datasets/pull/2906" ]
996,002,180
2,909
fix anli splits
closed
2021-09-14T13:10:35
2021-10-13T11:27:49
2021-10-13T11:27:49
https://github.com/huggingface/datasets/pull/2909
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2909", "html_url": "https://github.com/huggingface/datasets/pull/2909", "diff_url": "https://github.com/huggingface/datasets/pull/2909.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2909.patch", "merged_at": null }
zaidalyafeai
true
[]
995,970,612
2,908
Update Zenodo metadata with creator names and affiliation
closed
2021-09-14T12:39:37
2021-09-14T14:29:25
2021-09-14T14:29:25
https://github.com/huggingface/datasets/pull/2908
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2908", "html_url": "https://github.com/huggingface/datasets/pull/2908", "diff_url": "https://github.com/huggingface/datasets/pull/2908.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2908.patch", "merged_at": "2021-09-14T14:29:25" }
albertvillanova
true
[]
995,968,152
2,907
add story_cloze dataset
closed
2021-09-14T12:36:53
2021-10-08T21:41:42
2021-10-08T21:41:41
https://github.com/huggingface/datasets/pull/2907
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2907", "html_url": "https://github.com/huggingface/datasets/pull/2907", "diff_url": "https://github.com/huggingface/datasets/pull/2907.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2907.patch", "merged_at": null }
zaidalyafeai
true
[ "Will create a new one, this one seems to be missed up. " ]
995,962,905
2,906
feat: 🎸 add a function to get a dataset config's split names
closed
2021-09-14T12:31:22
2021-10-04T09:55:38
2021-10-04T09:55:37
https://github.com/huggingface/datasets/pull/2906
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2906", "html_url": "https://github.com/huggingface/datasets/pull/2906", "diff_url": "https://github.com/huggingface/datasets/pull/2906.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2906.patch", "merged_at": "2021-10-04T09:55:37" }
severo
true
[ "> Should I add a section in https://github.com/huggingface/datasets/blob/master/docs/source/load_hub.rst? (there is no section for get_dataset_infos)\r\n\r\nYes totally :) This tutorial should indeed mention this, given how fundamental it is" ]
995,843,964
2,905
Update BibTeX entry
closed
2021-09-14T10:16:17
2021-09-14T12:25:37
2021-09-14T12:25:37
https://github.com/huggingface/datasets/pull/2905
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2905", "html_url": "https://github.com/huggingface/datasets/pull/2905", "diff_url": "https://github.com/huggingface/datasets/pull/2905.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2905.patch", "merged_at": "2021-09-14T12:25:37" }
albertvillanova
true
[]
995,814,222
2,904
FORCE_REDOWNLOAD does not work
open
2021-09-14T09:45:26
2021-10-06T09:37:19
null
https://github.com/huggingface/datasets/issues/2904
null
anoopkatti
false
[ "Hi ! Thanks for reporting. The error seems to happen only if you use compressed files.\r\n\r\nThe second dataset is prepared in another dataset cache directory than the first - which is normal, since the source file is different. However, it doesn't uncompress the new data file because it finds the old uncompressed data in the extraction cache directory.\r\n\r\nIf we fix the extraction cache mechanism to uncompress a local file if it changed then it should fix the issue.\r\nCurrently the extraction cache mechanism only takes into account the path of the compressed file, which is an issue.", "Facing the same issue, is there any way to overtake this issue until it will be fixed? ", "You can clear your extraction cache in the meantime (by default at `~/.cache/huggingface/datasets/downloads/extracted`)" ]
995,715,191
2,903
Fix xpathopen to accept positional arguments
closed
2021-09-14T08:02:50
2021-09-14T08:51:21
2021-09-14T08:40:47
https://github.com/huggingface/datasets/pull/2903
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2903", "html_url": "https://github.com/huggingface/datasets/pull/2903", "diff_url": "https://github.com/huggingface/datasets/pull/2903.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2903.patch", "merged_at": "2021-09-14T08:40:47" }
albertvillanova
true
[ "thanks!" ]
995,254,216
2,902
Add WIT Dataset
closed
2021-09-13T19:38:49
2024-10-02T15:37:48
2022-06-01T17:28:40
https://github.com/huggingface/datasets/issues/2902
null
nateraw
false
[ "@hassiahk is working on it #2810 ", "WikiMedia is now hosting the pixel values directly which should make it a lot easier!\r\nThe files can be found here:\r\nhttps://techblog.wikimedia.org/2021/09/09/the-wikipedia-image-caption-matching-challenge-and-a-huge-release-of-image-data-for-research/\r\nhttps://analytics.wikimedia.org/published/datasets/one-off/caption_competition/training/image_pixels/", "> @hassiahk is working on it #2810\r\n\r\nThank you @bhavitvyamalik! Added this issue so we could track progress 😄 . Just linked the PR as well for visibility. ", "Hey folks, we are now hosting the merged pixel values + embeddings + metadata ourselves. I gave it a try - [nateraw/wit](https://huggingface.co/datasets/nateraw/wit)\r\n\r\n**⚠️ - Make sure you add `streaming=True` unless you're prepared to download 400GB of data!**\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset('nateraw/wit', streaming=True)\r\nexample = next(iter(ds))\r\n```\r\n\r\n```python\r\n>>> example = next(iter(ds['train']))\r\n>>> example.keys()\r\ndict_keys(['b64_bytes', 'original_width', 'image_url', 'wit_features', 'original_height', 'metadata_url', 'mime_type', 'caption_attribution_description', 'embedding'])\r\n>>> example['wit_features'].keys()\r\ndict_keys(['hierarchical_section_title', 'language', 'attribution_passes_lang_id', 'context_section_description', 'is_main_image', 'page_title', 'caption_title_and_reference_description', 'caption_alt_text_description', 'caption_reference_description', 'page_url', 'context_page_description', 'section_title', 'page_changed_recently'])\r\n```", "Hi! `datasets` now hosts two versions of the WIT dataset:\r\n* [`google/wit`](https://huggingface.co/datasets/google/wit): Google's version with the image URLs\r\n* [`wikimedia/wit_base`](https://huggingface.co/datasets/wikimedia/wit_base): Wikimedia's version with the images + ResNet embeddings, but with less data than Google's version", "Does this dataset work with `data_files` parameter. I tried and it doesn't. it attempts to download ~600 GB data even when i specify single file" ]
995,232,844
2,901
Incompatibility with pytest
closed
2021-09-13T19:12:17
2021-09-14T08:40:47
2021-09-14T08:40:47
https://github.com/huggingface/datasets/issues/2901
null
severo
false
[ "Sorry, my bad... When implementing `xpathopen`, I just considered the use case in the COUNTER dataset... I'm fixing it!" ]
994,922,580
2,900
Fix null sequence encoding
closed
2021-09-13T13:55:08
2021-09-13T14:17:43
2021-09-13T14:17:42
https://github.com/huggingface/datasets/pull/2900
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2900", "html_url": "https://github.com/huggingface/datasets/pull/2900", "diff_url": "https://github.com/huggingface/datasets/pull/2900.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2900.patch", "merged_at": "2021-09-13T14:17:42" }
lhoestq
true
[]
994,082,432
2,899
Dataset
closed
2021-09-12T07:38:53
2021-09-12T16:12:15
2021-09-12T16:12:15
https://github.com/huggingface/datasets/issues/2899
null
rcacho172
false
[]
994,032,814
2,898
Hug emoji
closed
2021-09-12T03:27:51
2021-09-12T16:13:13
2021-09-12T16:13:13
https://github.com/huggingface/datasets/issues/2898
null
Jackg-08
false
[]
993,798,386
2,897
Add OpenAI's HumanEval dataset
closed
2021-09-11T09:37:47
2021-09-16T15:02:11
2021-09-16T15:02:11
https://github.com/huggingface/datasets/pull/2897
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2897", "html_url": "https://github.com/huggingface/datasets/pull/2897", "diff_url": "https://github.com/huggingface/datasets/pull/2897.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2897.patch", "merged_at": "2021-09-16T15:02:11" }
lvwerra
true
[ "I just fixed the class name, and added `[More Information Needed]` in empty sections in case people want to complete the dataset card :)" ]
993,613,113
2,896
add multi-proc in `to_csv`
closed
2021-09-10T21:35:09
2021-10-28T05:47:33
2021-10-26T16:00:42
https://github.com/huggingface/datasets/pull/2896
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2896", "html_url": "https://github.com/huggingface/datasets/pull/2896", "diff_url": "https://github.com/huggingface/datasets/pull/2896.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2896.patch", "merged_at": "2021-10-26T16:00:41" }
bhavitvyamalik
true
[ "I think you can just add a test `test_dataset_to_csv_multiproc` in `tests/io/test_csv.py` and we'll be good", "Hi @lhoestq, \r\nI've added `test_dataset_to_csv` apart from `test_dataset_to_csv_multiproc` as no test was there to check generated CSV file when `num_proc=1`. Please let me know if anything is also required! " ]
993,462,274
2,895
Use pyarrow.Table.replace_schema_metadata instead of pyarrow.Table.cast
closed
2021-09-10T17:56:57
2021-09-21T22:50:01
2021-09-21T08:18:35
https://github.com/huggingface/datasets/pull/2895
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2895", "html_url": "https://github.com/huggingface/datasets/pull/2895", "diff_url": "https://github.com/huggingface/datasets/pull/2895.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2895.patch", "merged_at": "2021-09-21T08:18:35" }
arsarabi
true
[]
993,375,654
2,894
Fix COUNTER dataset
closed
2021-09-10T16:07:29
2021-09-10T16:27:45
2021-09-10T16:27:44
https://github.com/huggingface/datasets/pull/2894
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2894", "html_url": "https://github.com/huggingface/datasets/pull/2894", "diff_url": "https://github.com/huggingface/datasets/pull/2894.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2894.patch", "merged_at": "2021-09-10T16:27:44" }
albertvillanova
true
[]
993,342,781
2,893
add mbpp dataset
closed
2021-09-10T15:27:30
2021-09-16T09:35:42
2021-09-16T09:35:42
https://github.com/huggingface/datasets/pull/2893
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2893", "html_url": "https://github.com/huggingface/datasets/pull/2893", "diff_url": "https://github.com/huggingface/datasets/pull/2893.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2893.patch", "merged_at": "2021-09-16T09:35:42" }
lvwerra
true
[ "I think it's fine to have the original schema" ]
993,274,572
2,892
Error when encoding a dataset with None objects with a Sequence feature
closed
2021-09-10T14:11:43
2021-09-13T14:18:13
2021-09-13T14:17:42
https://github.com/huggingface/datasets/issues/2892
null
lhoestq
false
[ "This has been fixed by https://github.com/huggingface/datasets/pull/2900\r\nWe're doing a new release 1.12 today to make the fix available :)" ]
993,161,984
2,891
Allow dynamic first dimension for ArrayXD
closed
2021-09-10T11:52:52
2021-11-23T15:33:13
2021-10-29T09:37:17
https://github.com/huggingface/datasets/pull/2891
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2891", "html_url": "https://github.com/huggingface/datasets/pull/2891", "diff_url": "https://github.com/huggingface/datasets/pull/2891.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2891.patch", "merged_at": "2021-10-29T09:37:17" }
rpowalski
true
[ "@lhoestq, thanks for your review.\r\n\r\nI added test for `to_pylist`, I didn't do that for `to_numpy` because this method shouldn't be called for dynamic dimension ArrayXD - this method will try to make a single numpy array for the whole column which cannot be done for dynamic arrays.\r\n\r\nI dig into `to_pandas()` functionality and I found it quite difficult to implement. `PandasArrayExtensionArray` takes single np.array as an argument. It might be a bit of changes to make it work with the list of arrays. Do you mind if we exclude this work from this PR. I added an error message for the case if somebody tries to use dynamic arrays with `to_pandas`", "@lhoestq, I just fixed all the tests. Let me know if there is anything else to add.", "@lhoestq, any chance you had some time to check out this PR?\r\n", "Hi ! Sorry for the delay\r\n\r\nIt looks good to me ! I think the only thing missing is the support for passing a list of numpy arrays to `map` when the first dimension is dynamic.\r\n\r\nCurrently it raises an error:\r\n```python\r\nfrom datasets import *\r\nimport numpy as np\r\n\r\nfeatures= Features({\"a\": Array3D(shape=(None, 5, 2), dtype=\"int32\")})\r\nd = Dataset.from_dict({\"a\": [np.zeros((5,5,2)), np.zeros((2,5,2))]}, features=features)\r\nd = d.map(lambda a: {\"a\": np.concatenate([a]*2)}, input_columns=\"a\")\r\nprint(d[0])\r\n```\r\nraises\r\n```python\r\nTraceback (most recent call last):\r\n File \"playground/ttest.py\", line 6, in <module>\r\n d = d.map(lambda x: {\"a\": np.concatenate([x]*2)}, input_columns=\"a\")\r\n File \"/home/truent/hf/datasets/src/datasets/arrow_dataset.py\", line 1932, in map\r\n return self._map_single(\r\n File \"/home/truent/hf/datasets/src/datasets/arrow_dataset.py\", line 426, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"/home/truent/hf/datasets/src/datasets/fingerprint.py\", line 406, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \"/home/truent/hf/datasets/src/datasets/arrow_dataset.py\", line 2317, in _map_single\r\n writer.finalize()\r\n File \"/home/truent/hf/datasets/src/datasets/arrow_writer.py\", line 443, in finalize\r\n self.write_examples_on_file()\r\n File \"/home/truent/hf/datasets/src/datasets/arrow_writer.py\", line 312, in write_examples_on_file\r\n pa_array = pa.array(typed_sequence)\r\n File \"pyarrow/array.pxi\", line 222, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 110, in pyarrow.lib._handle_arrow_array_protocol\r\n File \"/home/truent/hf/datasets/src/datasets/arrow_writer.py\", line 108, in __arrow_array__\r\n storage = pa.array(self.data, type.storage_dtype)\r\n File \"pyarrow/array.pxi\", line 305, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 39, in pyarrow.lib._sequence_to_array\r\n File \"pyarrow/error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 84, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Can only convert 1-dimensional array values\r\n```\r\n\r\nI think the issue is that here we don't cover the case where self.data is a list of numpy arrays:\r\n\r\nhttps://github.com/huggingface/datasets/blob/55fd140a63b8f03a0e72985647e498f1fc799d3f/src/datasets/arrow_writer.py#L104-L109\r\n\r\nWe should remove the `isinstance(self.data[0], np.ndarray)` part and add these lines to cover this case:\r\n\r\nhttps://github.com/huggingface/datasets/blob/55fd140a63b8f03a0e72985647e498f1fc799d3f/src/datasets/arrow_writer.py#L112-L113", "@lhoestq, thanks, good catch!\r\nAre you 
able to run this check with fixed dimension ArrayXD?\r\nfor below example\r\n```\r\nimport numpy as np\r\nfrom datasets import *\r\n\r\nfeatures = Features({\"a\": Array3D(shape=(2, 5, 2), dtype=\"int32\")})\r\nd = Dataset.from_dict({\"a\": [np.zeros((2, 5, 2)), np.zeros((2, 5, 2))]}, features=features)\r\nd = d.map(lambda a: {\"a\": np.array(a) + 1}, input_columns=\"a\")\r\nprint(d[0])\r\n```\r\n\r\nI am getting:\r\n```\r\n File \"/home/ib/datasets/src/datasets/arrow_writer.py\", line 116, in __arrow_array__\r\n if trying_type and out[0].as_py() != self.data[0]:\r\nValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()\r\n```", "Nevertheless, I tried to fix that. Let me know if that works.", "@lhoestq, just resolved the conflicts. Let me know if there is anything left to do with this PR", "Hi, thanks a lot for your comments.\r\nAgree, happy to contribute to this topic in future PRs", "Hi @rpowalski, thanks for adding this feature! \r\n\r\nI wanted to check if you are still interested in documenting this, otherwise I'd be happy to help with it :)" ]
993,074,102
2,890
0x290B112ED1280537B24Ee6C268a004994a16e6CE
closed
2021-09-10T09:51:17
2021-09-10T11:45:29
2021-09-10T11:45:29
https://github.com/huggingface/datasets/issues/2890
null
rcacho172
false
[]
992,968,382
2,889
Coc
closed
2021-09-10T07:32:07
2021-09-10T11:45:54
2021-09-10T11:45:54
https://github.com/huggingface/datasets/issues/2889
null
Bwiggity
false
[]
992,676,535
2,888
v1.11.1 release date
closed
2021-09-09T21:53:15
2021-09-12T20:18:35
2021-09-12T16:15:39
https://github.com/huggingface/datasets/issues/2888
null
fcakyon
false
[ "Hi ! Probably 1.12 on monday :)\r\n", "@albertvillanova i think this issue is still valid and should not be closed till `>1.11.0` is published :)" ]
992,576,305
2,887
#2837 Use cache folder for lockfile
closed
2021-09-09T19:55:56
2021-10-05T17:58:22
2021-10-05T17:58:22
https://github.com/huggingface/datasets/pull/2887
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2887", "html_url": "https://github.com/huggingface/datasets/pull/2887", "diff_url": "https://github.com/huggingface/datasets/pull/2887.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2887.patch", "merged_at": "2021-10-05T17:58:22" }
Dref360
true
[ "The CI fail about the meteor metric is unrelated to this PR " ]
992,534,632
2,886
Hj
closed
2021-09-09T18:58:52
2021-09-10T11:46:29
2021-09-10T11:46:29
https://github.com/huggingface/datasets/issues/2886
null
Noorasri
false
[]
992,160,544
2,885
Adding an Elastic Search index to a Dataset
open
2021-09-09T12:21:39
2021-10-20T18:57:11
null
https://github.com/huggingface/datasets/issues/2885
null
MotzWanted
false
[ "Hi, is this bug deterministic in your poetry env ? I mean, does it always stop at 90% or is it random ?\r\n\r\nAlso, can you try using another version of Elasticsearch ? Maybe there's an issue with the one of you poetry env", "I face similar issue with oscar dataset on remote ealsticsearch instance. It was mainly due to timeout of batch indexing requests and I solve these by adding large request_timeout param in `search.py`\r\n\r\n```\r\n for ok, action in es.helpers.streaming_bulk(\r\n client=self.es_client,\r\n index=index_name,\r\n actions=passage_generator(),\r\n request_timeout=3600,\r\n )\r\n ```", "Hi @MotzWanted - are there any errors in the Elasticsearch cluster logs? Since it works in your local environment and the cluster versions are different between your poetry env and your local env, it is possible that it is some difference in the cluster - either settings or the cluster being under a different load etc that has this effect, so it would be useful to see if any errors are thrown in the cluster's logs when you try to ingest. \r\nWhich elasticsearch client method is the function `add_elasticsearch_index` from your code using under the hood? Is it `helpers.bulk` or is the indexing performed using something else? You can try adding a timeout to the indexing method to see if this helps. Also, you mention that it stops at around 90% - do you know if the timeout/hanging happens always when a particular document is being indexed or does it happen randomly at around 90% completeness but on different documents?" ]
992,135,698
2,884
Add IC, SI, ER tasks to SUPERB
closed
2021-09-09T11:56:03
2021-09-20T09:17:58
2021-09-20T09:00:49
https://github.com/huggingface/datasets/pull/2884
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2884", "html_url": "https://github.com/huggingface/datasets/pull/2884", "diff_url": "https://github.com/huggingface/datasets/pull/2884.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2884.patch", "merged_at": "2021-09-20T09:00:49" }
anton-l
true
[ "Sorry for the late PR, uploading 10+GB files to the hub through a VPN was an adventure :sweat_smile: ", "Thank you so much for adding these subsets @anton-l! \r\n\r\n> These datasets either require manual downloads or have broken/unstable links. You can get all necessary archives in this repo: https://huggingface.co/datasets/anton-l/superb_source_data_dumps/tree/main\r\nAre we allowed to make these datasets public or would that violate the terms of their use?", "@lewtun These ones all have non-permissive licences, so the mirrored data I linked is open only to the HF org for now. But we could try contacting the authors to ask if they'd like to host these with us. \nFor example VoxCeleb1 now has direct links (the ones in the script) that don't require form submission and passwords, but they ban IPs after each download for some reason :(", "> @lewtun These ones all have non-permissive licences, so the mirrored data I linked is open only to the HF org for now. But we could try contacting the authors to ask if they'd like to host these with us.\r\n> For example VoxCeleb1 now has direct links (the ones in the script) that don't require form submission and passwords, but they ban IPs after each download for some reason :(\r\n\r\nI think there would be a lot of value added if the authors would be willing to host their data on the HF Hub! As an end-user of `datasets`, I've found I'm more likely to explore a dataset if I'm able to quickly pull the subsets without needing a manual download. Perhaps we can tell them that the Hub offers several advantages like versioning and interactive exploration (with `datasets-viewer`)?" ]
991,969,875
2,883
Fix data URLs and metadata in DocRED dataset
closed
2021-09-09T08:55:34
2021-09-13T11:24:31
2021-09-13T11:24:31
https://github.com/huggingface/datasets/pull/2883
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2883", "html_url": "https://github.com/huggingface/datasets/pull/2883", "diff_url": "https://github.com/huggingface/datasets/pull/2883.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2883.patch", "merged_at": "2021-09-13T11:24:30" }
albertvillanova
true
[]
991,800,141
2,882
`load_dataset('docred')` results in a `NonMatchingChecksumError`
closed
2021-09-09T05:55:02
2021-09-13T11:24:30
2021-09-13T11:24:30
https://github.com/huggingface/datasets/issues/2882
null
tmpr
false
[ "Hi @tmpr, thanks for reporting.\r\n\r\nTwo weeks ago (23th Aug), the host of the source `docred` dataset updated one of the files (`dev.json`): you can see it [here](https://drive.google.com/drive/folders/1c5-0YwnoJx8NS6CV2f-NoTHR__BdkNqw).\r\n\r\nTherefore, the checksum needs to be updated.\r\n\r\nNormally, in the meantime, you could avoid the error by passing `ignore_verifications=True` to `load_dataset`. However, as the old link points to a non-existing file, the link must be updated too.\r\n\r\nI'm fixing all this.\r\n\r\n" ]
991,639,142
2,881
Add BIOSSES dataset
closed
2021-09-09T00:35:36
2021-09-13T14:20:40
2021-09-13T14:20:40
https://github.com/huggingface/datasets/pull/2881
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2881", "html_url": "https://github.com/huggingface/datasets/pull/2881", "diff_url": "https://github.com/huggingface/datasets/pull/2881.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2881.patch", "merged_at": "2021-09-13T14:20:40" }
bwang482
true
[]
990,877,940
2,880
Extend support for streaming datasets that use pathlib.Path stem/suffix
closed
2021-09-08T08:42:43
2021-09-09T13:13:29
2021-09-09T13:13:29
https://github.com/huggingface/datasets/pull/2880
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2880", "html_url": "https://github.com/huggingface/datasets/pull/2880", "diff_url": "https://github.com/huggingface/datasets/pull/2880.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2880.patch", "merged_at": "2021-09-09T13:13:29" }
albertvillanova
true
[]
990,257,404
2,879
In v1.4.1, all TIMIT train transcripts are "Would such an act of refusal be useful?"
closed
2021-09-07T18:53:45
2021-09-08T16:55:19
2021-09-08T09:12:28
https://github.com/huggingface/datasets/issues/2879
null
rcgale
false
[ "Hi @rcgale, thanks for reporting.\r\n\r\nPlease note that this bug was fixed on `datasets` version 1.5.0: https://github.com/huggingface/datasets/commit/a23c73e526e1c30263834164f16f1fdf76722c8c#diff-f12a7a42d4673bb6c2ca5a40c92c29eb4fe3475908c84fd4ce4fad5dc2514878\r\n\r\nIf you update `datasets` version, that should work.\r\n\r\nOn the other hand, would it be possible for @patrickvonplaten to update the [blog post](https://huggingface.co/blog/fine-tune-wav2vec2-english) with the correct version of `datasets`?", "I just proposed a change in the blog post.\r\n\r\nI had assumed there was a data format change that broke a previous version of the code, since presumably @patrickvonplaten tested the tutorial with the version they explicitly referenced. But that fix you linked suggests a problem in the code, which surprised me.\r\n\r\nI still wonder, though, is there a way for downloads to be invalidated server-side? If the client can announce its version during a download request, perhaps the server could reject known incompatibilities? It would save much valuable time if `datasets` raised an informative error on a known problem (\"Error: the requested data set requires `datasets>=1.5.0`.\"). This kind of API versioning is a prudent move anyhow, as there will surely come a time when you'll need to make a breaking change to data.", "Also, thank you for a quick and helpful reply!" ]
990,093,316
2,878
NotADirectoryError: [WinError 267] During load_from_disk
open
2021-09-07T15:15:05
2021-09-07T15:15:05
null
https://github.com/huggingface/datasets/issues/2878
null
Grassycup
false
[]
990,027,249
2,877
Don't keep the dummy data folder or dataset_infos.json when resolving data files
closed
2021-09-07T14:09:04
2021-09-29T09:05:38
2021-09-29T09:05:38
https://github.com/huggingface/datasets/issues/2877
null
lhoestq
false
[ "Hi @lhoestq I am new to huggingface datasets, I would like to work on this issue!\r\n", "Thanks for the help :) \r\n\r\nAs mentioned in the PR, excluding files named \"dummy_data.zip\" is actually more general than excluding the files inside a \"dummy\" folder. I just did the change in the PR, I think we can merge it now" ]
990,001,079
2,876
Extend support for streaming datasets that use pathlib.Path.glob
closed
2021-09-07T13:43:45
2021-09-10T09:50:49
2021-09-10T09:50:48
https://github.com/huggingface/datasets/pull/2876
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2876", "html_url": "https://github.com/huggingface/datasets/pull/2876", "diff_url": "https://github.com/huggingface/datasets/pull/2876.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2876.patch", "merged_at": "2021-09-10T09:50:48" }
albertvillanova
true
[ "I am thinking that ideally we should call `fs.glob()` instead...", "Thanks, @lhoestq: the idea of adding the mock filesystem is to avoid network calls and reduce testing time ;) \r\n\r\nI have added `rglob` as well and fixed some bugs." ]
989,919,398
2,875
Add Congolese Swahili speech datasets
open
2021-09-07T12:13:50
2021-09-07T12:13:50
null
https://github.com/huggingface/datasets/issues/2875
null
osanseviero
false
[]
989,685,328
2,874
Support streaming datasets that use pathlib
closed
2021-09-07T07:35:49
2021-09-07T18:25:22
2021-09-07T11:41:15
https://github.com/huggingface/datasets/pull/2874
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2874", "html_url": "https://github.com/huggingface/datasets/pull/2874", "diff_url": "https://github.com/huggingface/datasets/pull/2874.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2874.patch", "merged_at": "2021-09-07T11:41:15" }
albertvillanova
true
[ "I've tried https://github.com/huggingface/datasets/issues/2866 again, and I get the same error.\r\n\r\n```python\r\nimport datasets as ds\r\nds.load_dataset('counter', split=\"train\", streaming=False)\r\n```", "@severo Issue #2866 is not fully fixed yet: multiple patches need to be implemented for `pathlib`, as that dataset uses quite a lot of `pathlib` functions... 😅 ", "No worry and no stress, I just wanted to check for that case :) I'm very happy that you're working on issues I'm interested in!" ]
989,587,695
2,873
adding swedish_medical_ner
closed
2021-09-07T04:44:53
2021-09-17T20:47:37
2021-09-17T20:47:37
https://github.com/huggingface/datasets/pull/2873
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2873", "html_url": "https://github.com/huggingface/datasets/pull/2873", "diff_url": "https://github.com/huggingface/datasets/pull/2873.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2873.patch", "merged_at": null }
bwang482
true
[ "Hi, what's the current status of this request? It says Changes requested, but I can't see what changes?", "Hi, it looks like this PR includes changes to other files that `swedish_medical_ner`.\r\n\r\nFeel free to remove these changes, or simply create a new PR that only contains the addition of the dataset" ]
989,453,069
2,872
adding swedish_medical_ner
closed
2021-09-06T22:00:52
2021-09-07T04:36:32
2021-09-07T04:36:32
https://github.com/huggingface/datasets/pull/2872
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2872", "html_url": "https://github.com/huggingface/datasets/pull/2872", "diff_url": "https://github.com/huggingface/datasets/pull/2872.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2872.patch", "merged_at": null }
bwang482
true
[]
989,436,088
2,871
datasets.config.PYARROW_VERSION has no attribute 'major'
closed
2021-09-06T21:06:57
2021-09-08T08:51:52
2021-09-08T08:51:52
https://github.com/huggingface/datasets/issues/2871
null
bwang482
false
[ "I have changed line 288 to `if int(datasets.config.PYARROW_VERSION.split(\".\")[0]) < 3:` just to get around it.", "Hi @bwang482,\r\n\r\nI'm sorry but I'm not able to reproduce your bug.\r\n\r\nPlease note that in our current master branch, we made a commit (d03223d4d64b89e76b48b00602aba5aa2f817f1e) that simultaneously modified:\r\n- test_dataset_common.py: https://github.com/huggingface/datasets/commit/d03223d4d64b89e76b48b00602aba5aa2f817f1e#diff-a1bc225bd9a5bade373d1f140e24d09cbbdc97971c2f73bb627daaa803ada002L289 that introduces the usage of `datasets.config.PYARROW_VERSION.major`\r\n- but also changed config.py: https://github.com/huggingface/datasets/commit/d03223d4d64b89e76b48b00602aba5aa2f817f1e#diff-e021fcfc41811fb970fab889b8d245e68382bca8208e63eaafc9a396a336f8f2L40, so that `datasets.config.PYARROW_VERSION.major` exists\r\n", "Sorted. Thanks!", "Reopening this. Although the `test_dataset_common.py` script works fine now.\r\n\r\nHas this got something to do with my pull request not passing `ci/circleci: run_dataset_script_tests_pyarrow` tests?\r\n\r\nhttps://github.com/huggingface/datasets/pull/2873", "Hi @bwang482,\r\n\r\nIf you click on `Details` (on the right of your non passing CI test names: `ci/circleci: run_dataset_script_tests_pyarrow`), you can have more information about the non-passing tests.\r\n\r\nFor example, for [\"ci/circleci: run_dataset_script_tests_pyarrow_1\" details](https://circleci.com/gh/huggingface/datasets/46324?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link), you can see that the only non-passing test has to do with the dataset card (missing information in the `README.md` file): `test_changed_dataset_card`\r\n```\r\n=========================== short test summary info ============================\r\nFAILED tests/test_dataset_cards.py::test_changed_dataset_card[swedish_medical_ner]\r\n= 1 failed, 3214 passed, 2874 skipped, 2 xfailed, 1 xpassed, 15 warnings in 175.59s (0:02:55) =\r\n```\r\n\r\nTherefore, your PR non-passing test has nothing to do with this issue." ]
988,276,859
2,870
Fix three typos in two files for documentation
closed
2021-09-04T11:49:43
2021-09-06T08:21:21
2021-09-06T08:19:35
https://github.com/huggingface/datasets/pull/2870
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2870", "html_url": "https://github.com/huggingface/datasets/pull/2870", "diff_url": "https://github.com/huggingface/datasets/pull/2870.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2870.patch", "merged_at": "2021-09-06T08:19:35" }
leny-mi
true
[]
987,676,420
2,869
TypeError: 'NoneType' object is not callable
closed
2021-09-03T11:27:39
2025-02-19T09:57:34
2021-09-08T09:24:55
https://github.com/huggingface/datasets/issues/2869
null
Chenfei-Kang
false
[ "Hi, @Chenfei-Kang.\r\n\r\nI'm sorry, but I'm not able to reproduce your bug:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(\"glue\", 'cola')\r\nds\r\n```\r\n```\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 8551\r\n })\r\n validation: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 1043\r\n })\r\n test: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 1063\r\n })\r\n})\r\n```\r\n\r\nCould you please give more details and environment info (platform, PyArrow version)?", "> Hi, @Chenfei-Kang.\r\n> \r\n> I'm sorry, but I'm not able to reproduce your bug:\r\n> \r\n> ```python\r\n> from datasets import load_dataset\r\n> \r\n> ds = load_dataset(\"glue\", 'cola')\r\n> ds\r\n> ```\r\n> \r\n> ```\r\n> DatasetDict({\r\n> train: Dataset({\r\n> features: ['sentence', 'label', 'idx'],\r\n> num_rows: 8551\r\n> })\r\n> validation: Dataset({\r\n> features: ['sentence', 'label', 'idx'],\r\n> num_rows: 1043\r\n> })\r\n> test: Dataset({\r\n> features: ['sentence', 'label', 'idx'],\r\n> num_rows: 1063\r\n> })\r\n> })\r\n> ```\r\n> \r\n> Could you please give more details and environment info (platform, PyArrow version)?\r\n\r\nSorry to reply you so late.\r\nplatform: pycharm 2021 + anaconda with python 3.7\r\nPyArrow version: 5.0.0\r\nhuggingface-hub: 0.0.16\r\ndatasets: 1.9.0\r\n", "- For the platform, we need to know the operating system of your machine. Could you please run the command `datasets-cli env` and copy-and-paste its output below?\r\n- In relation with the error, you just gave us the error type and message (`TypeError: 'NoneType' object is not callable`). Could you please copy-paste the complete stack trace, so that we know exactly which part of the code threw the error?", "> * For the platform, we need to know the operating system of your machine. Could you please run the command `datasets-cli env` and copy-and-paste its output below?\r\n> * In relation with the error, you just gave us the error type and message (`TypeError: 'NoneType' object is not callable`). Could you please copy-paste the complete stack trace, so that we know exactly which part of the code threw the error?\r\n\r\n1. For the platform, here are the output:\r\n - datasets` version: 1.11.0\r\n - Platform: Windows-10-10.0.19041-SP0\r\n - Python version: 3.7.10\r\n - PyArrow version: 5.0.0\r\n2. For the code and error:\r\n ```python\r\n from datasets import load_dataset, load_metric\r\n dataset = load_dataset(\"glue\", \"cola\")\r\n ```\r\n ```python\r\n Traceback (most recent call last):\r\n ....\r\n ....\r\n File \"my_file.py\", line 2, in <module>\r\n dataset = load_dataset(\"glue\", \"cola\")\r\n File \"My environments\\lib\\site-packages\\datasets\\load.py\", line 830, in load_dataset\r\n **config_kwargs,\r\n File \"My environments\\lib\\site-packages\\datasets\\load.py\", line 710, in load_dataset_builder\r\n **config_kwargs,\r\n TypeError: 'NoneType' object is not callable\r\n ```\r\n Thank you!", "For that environment, I am sorry but I can't reproduce the bug: I can load the dataset without any problem.", "One naive question: do you have internet access from the machine where you execute the code?", "> For that environment, I am sorry but I can't reproduce the bug: I can load the dataset without any problem.\r\n\r\nBut I can download other task dataset such as `dataset = load_dataset('squad')`. I don't know what went wrong. Thank you so much!", "Hi,friends. I meet the same problem. 
Do you have a way to fix this? Thanks!\r\n", "I'm getting the same error. Have you solved the problem? Please tell me how to fix it", "same error, fix by Downgrade the datasets version to 2.16.0.", "Similar error when trying to download another dataset with 2.16.0, but 2.10.0 works", "Same issue in v3.1.0; resolved by downgrading to v2.21.0.", "same error, using the lower version 2.16.0 works", "I met the same error, how did you save your problem?", "> I met the same error, how did you save your problem?\r\n\r\ndown grade datasets version, try 2.16.0 or 2.10.0", "success solve the problem by downgrade to 2.10.0", "ERROR: Could not find a version that satisfies the requirement dataset==2.10.0 (from versions: 0.3, 0.3.3, 0.3.5, 0.3.6, 0.3.9, 0.3.10, 0.3.11, 0.3.12, 0.3.13, 0.3.14, 0.3.15, 0.4.0, 0.5.0, 0.5.1, 0.5.2, 0.5.4, 0.5.5, 0.5.6, 0.6.0, 0.6.1, 0.6.2, 0.6.3, 0.6.4, 0.7.0, 0.7.1, 0.8.0, 1.0.0, 1.0.1, 1.0.2, 1.0.3, 1.0.4, 1.0.5, 1.0.6, 1.0.7, 1.0.8, 1.1.0, 1.1.1, 1.1.2, 1.2.0, 1.2.1, 1.2.2, 1.2.3, 1.3.0, 1.3.1, 1.3.2, 1.4.1, 1.4.2, 1.4.3, 1.4.4, 1.4.5, 1.5.0, 1.5.1, 1.5.2, 1.6.0, 1.6.2)\nERROR: No matching distribution found for dataset==2.10.0\n" ]
987,139,146
2,868
Add Common Objects in 3D (CO3D)
open
2021-09-02T20:36:12
2024-01-17T12:03:59
null
https://github.com/huggingface/datasets/issues/2868
null
nateraw
false
[]
986,971,224
2,867
Add CaSiNo dataset
closed
2021-09-02T17:06:23
2021-09-16T15:12:54
2021-09-16T09:23:44
https://github.com/huggingface/datasets/pull/2867
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2867", "html_url": "https://github.com/huggingface/datasets/pull/2867", "diff_url": "https://github.com/huggingface/datasets/pull/2867.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2867.patch", "merged_at": "2021-09-16T09:23:44" }
kushalchawla
true
[ "Hi @lhoestq \r\n\r\nJust a request to look at the dataset. Please let me know if any changes are necessary before merging it into the repo. Thank you.", "Hey @lhoestq \r\n\r\nThanks for merging it. One question: I still cannot find the dataset on https://huggingface.co/datasets. Does it take some time or did I miss something?", "Hi ! It takes a few hours or a day for the list of datasets on the website to be updated ;)" ]
986,706,676
2,866
"counter" dataset raises an error in normal mode, but not in streaming mode
closed
2021-09-02T13:10:53
2021-10-14T09:24:09
2021-10-14T09:24:09
https://github.com/huggingface/datasets/issues/2866
null
severo
false
[ "Hi @severo, thanks for reporting.\r\n\r\nJust note that currently not all canonical datasets support streaming mode: this is one case!\r\n\r\nAll datasets that use `pathlib` joins (using `/`) instead of `os.path.join` (as in this dataset) do not support streaming mode yet.", "OK. Do you think it's possible to detect this, and raise an exception (maybe `NotImplementedError`, or a specific `StreamingError`)?", "We should definitely support datasets using `pathlib` in streaming mode...\r\n\r\nFor non-supported datasets in streaming mode, we have already a request of raising an error/warning: see #2654.", "Hi @severo, please note that \"counter\" dataset will be streamable (at least until it arrives at the missing file, error already in normal mode) once these PRs are merged:\r\n- #2874\r\n- #2876\r\n- #2880\r\n\r\nI have tested it. 😉 ", "Now (on master), we get:\r\n\r\n```\r\nimport datasets as ds\r\nds.load_dataset('counter', split=\"train\", streaming=False)\r\n```\r\n\r\n```\r\nUsing custom data configuration default\r\nDownloading and preparing dataset counter/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to /home/slesage/.cache/huggingface/datasets/counter/default/1.0.0/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9...\r\nTraceback (most recent call last):\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 726, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 1124, in _prepare_split\r\n for key, record in utils.tqdm(\r\n File \"/home/slesage/hf/datasets/.venv/lib/python3.8/site-packages/tqdm/std.py\", line 1185, in __iter__\r\n for obj in iterable:\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/counter/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9/counter.py\", line 161, in _generate_examples\r\n with derived_file.open(encoding=\"utf-8\") as f:\r\n File \"/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py\", line 1222, in open\r\n return io.open(self, mode, buffering, encoding, errors, newline,\r\n File \"/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py\", line 1078, in _opener\r\n return self._accessor.open(self, flags, mode)\r\nFileNotFoundError: [Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets/src/datasets/load.py\", line 1112, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 636, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 728, in _download_and_prepare\r\n raise OSError(\r\nOSError: Cannot find data file.\r\nOriginal error:\r\n[Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'\r\n```\r\n\r\nThe error is now the same with or without streaming. 
I close the issue, thanks @albertvillanova and @lhoestq!\r\n", "Note that we might want to open an issue to fix the \"counter\" dataset by itself, but I let it up to you.", "Fixed here: https://github.com/huggingface/datasets/pull/2894. Thanks @albertvillanova ", "On master, I get:\r\n\r\n```python\r\n>>> import datasets as ds\r\n>>> iterable_dataset = ds.load_dataset('counter', split=\"train\", streaming=True)\r\n>>> rows = list(iterable_dataset.take(100))\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets/src/datasets/iterable_dataset.py\", line 341, in __iter__\r\n for key, example in self._iter():\r\n File \"/home/slesage/hf/datasets/src/datasets/iterable_dataset.py\", line 338, in _iter\r\n yield from ex_iterable\r\n File \"/home/slesage/hf/datasets/src/datasets/iterable_dataset.py\", line 273, in __iter__\r\n yield from islice(self.ex_iterable, self.n)\r\n File \"/home/slesage/hf/datasets/src/datasets/iterable_dataset.py\", line 78, in __iter__\r\n for key, example in self.generate_examples_fn(**self.kwargs):\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/counter/b9e4378dbd3f5ce235d2302e48168c00196e67bbcd13cc7e1f6e69ef82c0cf2a/counter.py\", line 153, in _generate_examples\r\n files = sorted(base_path.glob(r\"[0-9][0-9][0-9][0-9].xml\"))\r\nTypeError: xpathglob() missing 1 required positional argument: 'pattern'\r\n```", "Associated to the above exception, if I create a test and run it with pytest, I get an awful traceback.\r\n\r\n- create a file `test_counter.py`\r\n\r\n```python\r\nimport pytest\r\nfrom datasets import load_dataset, IterableDataset\r\nfrom typing import Any, cast\r\n\r\n\r\ndef test_counter() -> Any:\r\n iterable_dataset = cast(IterableDataset, load_dataset(\"counter\", split=\"train\", streaming=True))\r\n with pytest.raises(TypeError):\r\n list(iterable_dataset.take(100))\r\n```\r\n\r\n- run the test with pytest\r\n\r\n```bash\r\n$ python -m pytest -x test_counter.py\r\n============================================================================================================================= test session starts ==============================================================================================================================\r\nplatform linux -- Python 3.9.6, pytest-6.2.5, py-1.10.0, pluggy-1.0.0\r\nrootdir: /home/slesage/hf/datasets-preview-backend, configfile: pyproject.toml\r\nplugins: anyio-3.3.2, cov-2.12.1\r\ncollected 1 item\r\n\r\ntests/test_counter.py . 
[100%]Traceback (most recent call last):\r\n File \"/home/slesage/.pyenv/versions/3.9.6/lib/python3.9/runpy.py\", line 197, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"/home/slesage/.pyenv/versions/3.9.6/lib/python3.9/runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/pytest/__main__.py\", line 5, in <module>\r\n raise SystemExit(pytest.console_main())\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/_pytest/config/__init__.py\", line 185, in console_main\r\n code = main()\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/_pytest/config/__init__.py\", line 162, in main\r\n ret: Union[ExitCode, int] = config.hook.pytest_cmdline_main(\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/pluggy/_hooks.py\", line 265, in __call__\r\n return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/pluggy/_manager.py\", line 80, in _hookexec\r\n return self._inner_hookexec(hook_name, methods, kwargs, firstresult)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/pluggy/_callers.py\", line 60, in _multicall\r\n return outcome.get_result()\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/pluggy/_result.py\", line 60, in get_result\r\n raise ex[1].with_traceback(ex[2])\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/pluggy/_callers.py\", line 39, in _multicall\r\n res = hook_impl.function(*args)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/_pytest/main.py\", line 316, in pytest_cmdline_main\r\n return wrap_session(config, _main)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/_pytest/main.py\", line 304, in wrap_session\r\n config.hook.pytest_sessionfinish(\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/pluggy/_hooks.py\", line 265, in __call__\r\n return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/pluggy/_manager.py\", line 80, in _hookexec\r\n return self._inner_hookexec(hook_name, methods, kwargs, firstresult)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/pluggy/_callers.py\", line 55, in _multicall\r\n gen.send(outcome)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/_pytest/terminal.py\", line 803, in pytest_sessionfinish\r\n outcome.get_result()\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/pluggy/_result.py\", line 60, in get_result\r\n raise ex[1].with_traceback(ex[2])\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/pluggy/_callers.py\", line 39, in _multicall\r\n res = hook_impl.function(*args)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/_pytest/cacheprovider.py\", line 428, in pytest_sessionfinish\r\n config.cache.set(\"cache/nodeids\", sorted(self.cached_nodeids))\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/_pytest/cacheprovider.py\", line 188, in set\r\n f = 
path.open(\"w\")\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/utils/streaming_download_manager.py\", line 199, in xpathopen\r\n return xopen(_as_posix(path), *args, **kwargs)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/utils/streaming_download_manager.py\", line 117, in _as_posix\r\n path_as_posix = path.as_posix()\r\nAttributeError: 'str' object has no attribute 'as_posix'\r\n```\r\n", "I opened a PR to fix these issues.\r\nAlso in your test you expect a TypeError but I don't know why. On my side it works fine without raising a TypeError", "I had the issue (TypeError raised) on my branch, but it's fixed now. Thanks" ]
986,460,698
2,865
Add MultiEURLEX dataset
closed
2021-09-02T09:42:24
2021-09-10T11:50:06
2021-09-10T11:50:06
https://github.com/huggingface/datasets/pull/2865
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2865", "html_url": "https://github.com/huggingface/datasets/pull/2865", "diff_url": "https://github.com/huggingface/datasets/pull/2865.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2865.patch", "merged_at": "2021-09-10T11:50:06" }
iliaschalkidis
true
[ "Hi @lhoestq, we have this new cool multilingual dataset coming at EMNLP 2021. It would be really nice if we could have it in Hugging Face asap. Thanks! ", "Hi @lhoestq, I adopted most of your suggestions:\r\n\r\n- Dummy data files reduced, including the 2 smallest documents per subset JSONL.\r\n- README was updated with the publication URL and instructions on how to download and use label descriptors. Excessive newlines were deleted.\r\n\r\nI would prefer to keep the label list in a pure format (original ids), to enable people to combine those with more information or possibly in the future explore the dataset, find inconsistencies and fix those to release a new version. ", "Thanks for the changes :)\r\n\r\nRegarding the labels:\r\n\r\nIf you use the ClassLabel feature type, the only change is that it will store the ids as integers instead of (currently) string.\r\nThe advantage is that if people want to know what id corresponds to which label name, they can use `classlabel.int2str`. It is also the format that helps automate model training for classification in `transformers`.\r\n\r\nLet me know if that sounds good to you or if you still want to stick with the labels as they are now.", "Hey @lhoestq, thanks for providing this information. This sounds great. I updated my code accordingly to use `ClassLabel`. Could you please provide a minimal example of how `classlabel.int2str` works in practice in my case, where labels are a sequence?\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('multi_eurlex', 'all_languages')\r\n# Read strs from the labels (list of integers) for the 1st sample of the training split\r\n```\r\n\r\nI would like to include this in the README file.\r\n\r\nCould you also provide some info on how I could define the supervized key to automate model training, as you said?\r\n\r\nThanks!", "Thanks for the update :)\r\n\r\nHere is an example of usage:\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('multi_eurlex', 'all_languages', split='train')\r\nclasslabel = dataset.features[\"labels\"].feature\r\nprint(dataset[0][\"labels\"])\r\n# [1, 20, 7, 3, 0]\r\nprint(classlabel.int2str(dataset[0][\"labels\"]))\r\n# ['100160', '100155', '100158', '100147', '100149']\r\n```\r\n\r\nThe ClassLabel is simply used to define the `id2label` dictionary of classification models, to make the ids match between the model and the dataset. There nothing more to do :p \r\n\r\nI think one last thing to do is just update the `dataset_infos.json` file and we'll be good !", "Everything is ready! 👍 \r\n" ]
986,159,438
2,864
Fix data URL in ToTTo dataset
closed
2021-09-02T05:25:08
2021-09-02T06:47:40
2021-09-02T06:47:40
https://github.com/huggingface/datasets/pull/2864
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2864", "html_url": "https://github.com/huggingface/datasets/pull/2864", "diff_url": "https://github.com/huggingface/datasets/pull/2864.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2864.patch", "merged_at": "2021-09-02T06:47:40" }
albertvillanova
true
[]
986,156,755
2,863
Update dataset URL
closed
2021-09-02T05:22:18
2021-09-02T08:10:50
2021-09-02T08:10:50
https://github.com/huggingface/datasets/pull/2863
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2863", "html_url": "https://github.com/huggingface/datasets/pull/2863", "diff_url": "https://github.com/huggingface/datasets/pull/2863.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2863.patch", "merged_at": null }
mrm8488
true
[ "Superseded by PR #2864.\r\n\r\n@mrm8488 next time you would like to work on an issue, you can first self-assign it to you (by writing `#self-assign` in a comment on the issue). That way, other people can see you are already working on it and there are not multiple people working on the same issue. 😉 " ]
985,081,871
2,861
fix: 🐛 be more specific when catching exceptions
closed
2021-09-01T12:18:12
2021-09-02T09:53:36
2021-09-02T09:52:03
https://github.com/huggingface/datasets/pull/2861
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2861", "html_url": "https://github.com/huggingface/datasets/pull/2861", "diff_url": "https://github.com/huggingface/datasets/pull/2861.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2861.patch", "merged_at": null }
severo
true
[ "To give more context: after our discussion, if I understood properly, you are trying to fix a call to `datasets` that takes 15 minutes: https://github.com/huggingface/datasets-preview-backend/issues/17 Is this right?\r\n\r\n", "Yes, that's it. And to do that I'm trying to use https://pypi.org/project/stopit/, which will raise a stopit.TimeoutException exception. But currently, if this exception is raised, it's caught and considered as a \"FileNotFoundError\" while it should not be caught. ", "And what about passing the `timeout` parameter instead?", "It might be a good idea, but I would have to add a timeout argument to several methods, I'm not sure we want that (I want to ensure all my queries in https://github.com/huggingface/datasets-preview-backend/tree/master/src/datasets_preview_backend/queries resolve in a given time, be it with an error in case of timeout, or with the successful response). The methods are `prepare_module`, `import_main_class`, *builder_cls.*`get_all_exported_dataset_infos`, `load_dataset_builder`, and `load_dataset`", "I understand, you are trying to find a fix for your use case. OK.\r\n\r\nJust note that it is also an issue for `datasets` users. Once #2859 fixed in `datasets`, you will no longer have this issue...", "Closing, since 1. my problem is more #2859, and I was asking for that change in order to make a hack work on my side, 2. if we want to change how exceptions are handled, we surely want to do it on all the codebase, not only in this particular case." ]
985,013,339
2,860
Cannot download TOTTO dataset
closed
2021-09-01T11:04:10
2021-09-02T06:47:40
2021-09-02T06:47:40
https://github.com/huggingface/datasets/issues/2860
null
mrm8488
false
[ "Hola @mrm8488, thanks for reporting.\r\n\r\nApparently, the data source host changed their URL one week ago: https://github.com/google-research-datasets/ToTTo/commit/cebeb430ec2a97747e704d16a9354f7d9073ff8f\r\n\r\nI'm fixing it." ]
984,324,500
2,859
Loading allenai/c4 in streaming mode does too many HEAD requests
closed
2021-08-31T21:11:04
2021-10-12T07:35:52
2021-10-11T11:05:51
https://github.com/huggingface/datasets/issues/2859
null
lhoestq
false
[ "https://github.com/huggingface/datasets/blob/6c766f9115d686182d76b1b937cb27e099c45d68/src/datasets/builder.py#L179-L186", "Thanks a lot!!!" ]
984,145,568
2,858
Fix s3fs version in CI
closed
2021-08-31T18:05:43
2021-09-06T13:33:35
2021-08-31T21:29:51
https://github.com/huggingface/datasets/pull/2858
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2858", "html_url": "https://github.com/huggingface/datasets/pull/2858", "diff_url": "https://github.com/huggingface/datasets/pull/2858.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2858.patch", "merged_at": "2021-08-31T21:29:51" }
lhoestq
true
[]
984,093,938
2,857
Update: Openwebtext - update size
closed
2021-08-31T17:11:03
2022-02-15T10:38:03
2021-09-07T09:44:32
https://github.com/huggingface/datasets/pull/2857
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2857", "html_url": "https://github.com/huggingface/datasets/pull/2857", "diff_url": "https://github.com/huggingface/datasets/pull/2857.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2857.patch", "merged_at": "2021-09-07T09:44:32" }
lhoestq
true
[ "merging since the CI error in unrelated to this PR and fixed on master" ]
983,876,734
2,856
fix: 🐛 remove URL's query string only if it's ?dl=1
closed
2021-08-31T13:40:07
2021-08-31T14:22:12
2021-08-31T14:22:12
https://github.com/huggingface/datasets/pull/2856
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2856", "html_url": "https://github.com/huggingface/datasets/pull/2856", "diff_url": "https://github.com/huggingface/datasets/pull/2856.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2856.patch", "merged_at": "2021-08-31T14:22:12" }
severo
true
[]
983,858,229
2,855
Fix windows CI CondaError
closed
2021-08-31T13:22:02
2021-08-31T13:35:34
2021-08-31T13:35:33
https://github.com/huggingface/datasets/pull/2855
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2855", "html_url": "https://github.com/huggingface/datasets/pull/2855", "diff_url": "https://github.com/huggingface/datasets/pull/2855.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2855.patch", "merged_at": "2021-08-31T13:35:33" }
lhoestq
true
[]
983,726,084
2,854
Fix caching when moving script
closed
2021-08-31T10:58:35
2021-08-31T13:13:36
2021-08-31T13:13:36
https://github.com/huggingface/datasets/pull/2854
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2854", "html_url": "https://github.com/huggingface/datasets/pull/2854", "diff_url": "https://github.com/huggingface/datasets/pull/2854.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2854.patch", "merged_at": "2021-08-31T13:13:36" }
lhoestq
true
[ "Merging since the CI failure is unrelated to this PR" ]
983,692,026
2,853
Add AMI dataset
closed
2021-08-31T10:19:01
2021-09-29T09:19:19
2021-09-29T09:19:19
https://github.com/huggingface/datasets/pull/2853
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2853", "html_url": "https://github.com/huggingface/datasets/pull/2853", "diff_url": "https://github.com/huggingface/datasets/pull/2853.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2853.patch", "merged_at": "2021-09-29T09:19:18" }
cahya-wirawan
true
[ "Hey @cahya-wirawan, \r\n\r\nI played around with the dataset a bit and it looks already very good to me! That's exactly how it should be constructed :-) I can help you a bit with defining the config, etc... on Monday!", "@lhoestq - I think the dataset is ready to be merged :-) \r\n\r\nAt the moment, I don't really see how the failing tests correspond to this PR:\r\n- https://app.circleci.com/pipelines/github/huggingface/datasets/7838/workflows/932a40a2-3e11-48be-84f0-c6434510058e/jobs/48318?invite=true#step-107-18\r\n- https://app.circleci.com/pipelines/github/huggingface/datasets/7838/workflows/932a40a2-3e11-48be-84f0-c6434510058e/jobs/48316?invite=true#step-102-136\r\n\r\ncould you maybe give it a look? :-)" ]
983,609,352
2,852
Fix: linnaeus - fix url
closed
2021-08-31T08:51:13
2021-08-31T13:12:10
2021-08-31T13:12:09
https://github.com/huggingface/datasets/pull/2852
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2852", "html_url": "https://github.com/huggingface/datasets/pull/2852", "diff_url": "https://github.com/huggingface/datasets/pull/2852.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2852.patch", "merged_at": "2021-08-31T13:12:09" }
lhoestq
true
[ "Merging since the CI error is unrelated this this PR" ]
982,789,593
2,851
Update `column_names` showed as `:func:` in exploring.st
closed
2021-08-30T13:21:46
2021-09-01T08:42:11
2021-08-31T14:45:46
https://github.com/huggingface/datasets/pull/2851
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2851", "html_url": "https://github.com/huggingface/datasets/pull/2851", "diff_url": "https://github.com/huggingface/datasets/pull/2851.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2851.patch", "merged_at": "2021-08-31T14:45:46" }
ClementRomac
true
[]
982,654,644
2,850
Wound segmentation datasets
open
2021-08-30T10:44:32
2021-12-08T12:02:00
null
https://github.com/huggingface/datasets/issues/2850
null
osanseviero
false
[]
982,631,420
2,849
Add Open Catalyst Project Dataset
open
2021-08-30T10:14:39
2021-08-30T10:14:39
null
https://github.com/huggingface/datasets/issues/2849
null
osanseviero
false
[]
981,953,908
2,848
Update README.md
closed
2021-08-28T23:58:26
2021-09-07T09:40:32
2021-09-07T09:40:32
https://github.com/huggingface/datasets/pull/2848
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2848", "html_url": "https://github.com/huggingface/datasets/pull/2848", "diff_url": "https://github.com/huggingface/datasets/pull/2848.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2848.patch", "merged_at": "2021-09-07T09:40:32" }
odellus
true
[ "Merging since the CI error is unrelated to this PR and fixed on master" ]
981,589,693
2,847
fix regex to accept negative timezone
closed
2021-08-27T20:54:05
2021-09-13T20:39:50
2021-09-07T09:34:23
https://github.com/huggingface/datasets/pull/2847
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2847", "html_url": "https://github.com/huggingface/datasets/pull/2847", "diff_url": "https://github.com/huggingface/datasets/pull/2847.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2847.patch", "merged_at": "2021-09-07T09:34:23" }
jadermcs
true
[]
981,587,590
2,846
Negative timezone
closed
2021-08-27T20:50:33
2021-09-10T11:51:07
2021-09-10T11:51:07
https://github.com/huggingface/datasets/issues/2846
null
jadermcs
false
[ "Fixed by #2847." ]
981,487,861
2,845
[feature request] adding easy to remember `datasets.cache_dataset()` + `datasets.is_dataset_cached()`
open
2021-08-27T18:21:51
2021-08-27T18:24:05
null
https://github.com/huggingface/datasets/issues/2845
null
stas00
false
[]
981,382,806
2,844
Fix: wikicorpus - fix keys
closed
2021-08-27T15:56:06
2021-09-06T14:07:28
2021-09-06T14:07:27
https://github.com/huggingface/datasets/pull/2844
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2844", "html_url": "https://github.com/huggingface/datasets/pull/2844", "diff_url": "https://github.com/huggingface/datasets/pull/2844.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2844.patch", "merged_at": "2021-09-06T14:07:27" }
lhoestq
true
[ "The CI error is unrelated to this PR\r\n\r\n... merging !" ]
981,317,775
2,843
Fix extraction protocol inference from urls with params
closed
2021-08-27T14:40:57
2021-08-30T17:11:49
2021-08-30T13:12:01
https://github.com/huggingface/datasets/pull/2843
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2843", "html_url": "https://github.com/huggingface/datasets/pull/2843", "diff_url": "https://github.com/huggingface/datasets/pull/2843.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2843.patch", "merged_at": "2021-08-30T13:12:01" }
lhoestq
true
[ "merging since the windows error is just a CircleCI issue", "It works, eg https://observablehq.com/@huggingface/datasets-preview-backend-client#{%22datasetId%22%3A%22discovery%22} and https://datasets-preview.huggingface.tech/rows?dataset=discovery&config=discovery&split=train", "Nice !" ]
980,725,899
2,842
always requiring the username in the dataset name when there is one
closed
2021-08-26T23:31:53
2021-10-22T09:43:35
2021-10-22T09:43:35
https://github.com/huggingface/datasets/issues/2842
null
stas00
false
[ "From what I can understand, you want the saved arrow file directory to have username as well instead of just dataset name if it was downloaded with the user prefix?", "I don't think the user cares of how this is done, but the 2nd command should fail, IMHO, as its dataset name is invalid:\r\n```\r\n# first run\r\npython -c \"from datasets import load_dataset; load_dataset('stas/openwebtext-10k')\"\r\n# now run immediately\r\npython -c \"from datasets import load_dataset; load_dataset('openwebtext-10k')\"\r\n# the second command should fail, but it doesn't fail now.\r\n```\r\n\r\nMoreover, if someone were to create `openwebtext-10k` w/o the prefix, they will now get the wrong dataset, if they previously downloaded `stas/openwebtext-10k`.\r\n\r\nAnd if there are 2 users with the same dataset name `foo/ds` and `bar/ds` - currently this won't work to get the correct dataset.\r\n\r\nSo really there 3 unrelated issues hiding in the current behavior.", "This has been fixed now, and we'll do a new release of the library today.\r\n\r\nNow the stas/openwebtext-10k dataset is cached at `.cache/huggingface/datasets/stas___openwebtext10k` and openwebtext-10k would be at `.cache/huggingface/datasets/openwebtext10k`. Since they are different, the cache won't fall back on loading the wrong one anymore.\r\n\r\nSame for the python script used to generate the dataset: stas/openwebtext-10k is cached at `.cache/huggingface/modules/datasets_modules/datasets/stas___openwebtext10k` and openwebtext-10k would be at `.cache/huggingface/modules/datasets_modules/datasets/openwebtext10k`", "Amazing! Thank you for adding this improvement, @lhoestq!", "(can be closed?)", "Yes indeed :) thanks" ]
980,497,321
2,841
Adding GLUECoS Hinglish and Spanglish code-switching benchmark
open
2021-08-26T17:47:39
2021-10-20T18:41:20
null
https://github.com/huggingface/datasets/issues/2841
null
yjernite
false
[ "Hi @yjernite I am interested in adding this dataset. \r\nIn the repo they have also added a code mixed MT task from English to Hinglish [here](https://github.com/microsoft/GLUECoS#code-mixed-machine-translation-task). I think this could be a good dataset addition in itself and then I can add the rest of the GLUECoS tasks as one dataset. What do you think?" ]
980,489,074
2,840
How can I compute BLEU-4 score using `load_metric`?
closed
2021-08-26T17:36:37
2021-08-27T08:13:24
2021-08-27T08:13:24
https://github.com/huggingface/datasets/issues/2840
null
Doragd
false
[]
980,271,715
2,839
OpenWebText: NonMatchingSplitsSizesError
closed
2021-08-26T13:50:26
2021-09-21T14:12:40
2021-09-21T14:09:43
https://github.com/huggingface/datasets/issues/2839
null
thomasw21
false
[ "Thanks for reporting, I'm updating the verifications metadata", "I just regenerated the verifications metadata and noticed that nothing changed: the data file is fine (the checksum didn't change), and the number of examples is still 8013769. Not sure how you managed to get 7982430 examples.\r\n\r\nCan you try to delete your cache ( by default at `~/.cache/huggingface/datasets`) and try again please ?\r\nAlso, on which platform are you (linux/macos/windows) ?", "I'll try without deleting the whole cache (we have large datasets already stored). I was under the impression that `download_mode=\"force_redownload\"` would bypass cache.\r\nSorry plateform should be linux (Redhat version 8.1)", "Hi @thomasw21 , are you still having this issue after clearing your cache ?", "Sorry I haven't had time to work on this. I'll close and re-open if I can't figure out why I'm having this issue. Thanks for taking a look !" ]
980,067,186
2,838
Add error_bad_chunk to the JSON loader
open
2021-08-26T10:07:32
2023-09-25T09:06:42
null
https://github.com/huggingface/datasets/pull/2838
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2838", "html_url": "https://github.com/huggingface/datasets/pull/2838", "diff_url": "https://github.com/huggingface/datasets/pull/2838.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2838.patch", "merged_at": null }
lhoestq
true
[ "Somebody reported the following error message which I think this is related to the goal of this PR:\r\n```Python\r\n03/24/2022 02:19:45 - INFO - __main__ - Step 5637: {'lr': 0.00018773333333333333, 'samples': 360768, 'batch_offset': 5637, 'completed_steps': 704, 'loss/train': 4.473083972930908, 'tokens/s': 6692.6176452714235}\r\n03/24/2022 02:19:46 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [1/20]\r\n03/24/2022 02:20:01 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [2/20]\r\n03/24/2022 02:20:09 - ERROR - datasets.packaged_modules.json.json - Failed to read file 'gzip://file-000000000007.json::https://huggingface.co/datasets/lvwerra/codeparrot-clean-train/resolve/1d740acb9d09cf7a3307553323e2c677a6535407/file-000000000007.json.gz' with error <class 'pyarrow.lib.ArrowInvalid'>: JSON parse error: Invalid value. in row 0\r\n03/24/2022 02:20:24 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [1/20]\r\n03/24/2022 02:20:37 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [2/20]\r\n03/24/2022 02:20:44 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [3/20]\r\n03/24/2022 02:20:49 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [4/20]\r\n03/24/2022 02:20:54 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [5/20]\r\n03/24/2022 02:20:59 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [6/20]\r\n03/24/2022 02:21:12 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [7/20]\r\n03/24/2022 02:21:20 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [8/20]\r\n03/24/2022 02:21:25 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [9/20]\r\n03/24/2022 02:21:30 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [10/20]\r\n03/24/2022 02:21:36 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [11/20]\r\n03/24/2022 02:21:41 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [12/20]\r\n03/24/2022 02:21:46 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [13/20]\r\n03/24/2022 02:21:51 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [14/20]\r\n03/24/2022 02:21:56 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [15/20]\r\n03/24/2022 02:22:01 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [16/20]\r\n03/24/2022 02:22:12 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 5sec [17/20]\r\n03/24/2022 02:22:21 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. 
Retrying in 5sec [18/20]\r\nTraceback (most recent call last):\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/packaged_modules/json/json.py\", line 119, in _generate_tables\r\n io.BytesIO(batch), read_options=paj.ReadOptions(block_size=block_size)\r\n File \"pyarrow/_json.pyx\", line 246, in pyarrow._json.read_json\r\n File \"pyarrow/error.pxi\", line 143, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 99, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: JSON parse error: Invalid value. in row 0\r\n```\r\nThis comes from the CodeParrot training script where streaming is used. When the connection fails it can happen that the JSON cannot be read anymore and then an error is thrown.\r\n\r\n", "Yea if streaming makes a JSON unreadable then `error_bad_chunk` would help by skipping all the bad JSON data", "Should we close this PR?", "I didn't continue this PR but I think it's valuable (though now I think it would be better to have multiple options: raise, warn or ignore errors). I'll continue it at one point" ]
979,298,297
2,837
prepare_module issue when loading from read-only fs
closed
2021-08-25T15:21:26
2021-10-05T17:58:22
2021-10-05T17:58:22
https://github.com/huggingface/datasets/issues/2837
null
Dref360
false
[ "Hello, I opened #2887 to fix this." ]
979,230,142
2,836
Optimize Dataset.filter to only compute the indices to keep
closed
2021-08-25T14:41:22
2021-09-14T14:51:53
2021-09-13T15:50:21
https://github.com/huggingface/datasets/pull/2836
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2836", "html_url": "https://github.com/huggingface/datasets/pull/2836", "diff_url": "https://github.com/huggingface/datasets/pull/2836.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2836.patch", "merged_at": "2021-09-13T15:50:21" }
lhoestq
true
[ "Maybe worth updating the docs here as well?", "Yup, will do !" ]
979,209,394
2,835
Update: timit_asr - make the dataset streamable
closed
2021-08-25T14:22:49
2021-09-07T13:15:47
2021-09-07T13:15:46
https://github.com/huggingface/datasets/pull/2835
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2835", "html_url": "https://github.com/huggingface/datasets/pull/2835", "diff_url": "https://github.com/huggingface/datasets/pull/2835.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2835.patch", "merged_at": "2021-09-07T13:15:46" }
lhoestq
true
[]
978,309,749
2,834
Fix IndexError by ignoring empty RecordBatch
closed
2021-08-24T17:06:13
2021-08-24T17:21:18
2021-08-24T17:21:18
https://github.com/huggingface/datasets/pull/2834
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2834", "html_url": "https://github.com/huggingface/datasets/pull/2834", "diff_url": "https://github.com/huggingface/datasets/pull/2834.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2834.patch", "merged_at": "2021-08-24T17:21:17" }
lhoestq
true
[]
978,296,140
2,833
IndexError when accessing first element of a Dataset if first RecordBatch is empty
closed
2021-08-24T16:49:20
2021-08-24T17:21:17
2021-08-24T17:21:17
https://github.com/huggingface/datasets/issues/2833
null
lhoestq
false
[]
978,012,800
2,832
Logging levels not taken into account
closed
2021-08-24T11:50:41
2023-07-12T17:19:30
2023-07-12T17:19:29
https://github.com/huggingface/datasets/issues/2832
null
LysandreJik
false
[ "I just take a look at all the outputs produced by `datasets` using the different log-levels.\r\nAs far as i can tell using `datasets==1.17.0` they overall issue seems to be fixed.\r\n\r\nHowever, I noticed that there is one tqdm based progress indicator appearing on STDERR that I can simply not suppress.\r\n```\r\nResolving data files: 100%|██████████| 652/652 [00:00<00:00, 1604.52it/s]\r\n```\r\n\r\nAccording to _get_origin_metadata_locally_or_by_urls it shold be supressable by using the `NOTSET` log-level\r\nhttps://github.com/huggingface/datasets/blob/1406a04c3e911cec2680d8bc513653e0cafcaaa4/src/datasets/data_files.py#L491-L501\r\nSadly when specifiing the log-level `NOTSET` it seems to has no effect.\r\n\r\nBut appart from it not having any effect I must admit that it seems unintuitive to me.\r\nI would suggest changing this such that it is only shown when the log-level is greater or equal to INFO.\r\n\r\nThis would conform better to INFO according to the [documentation](https://huggingface.co/docs/datasets/v1.0.0/package_reference/logging_methods.html#datasets.logging.set_verbosity_info).\r\n> This will display most of the logging information and tqdm bars.\r\n\r\nAny inputs on this?\r\nI will be happy to supply a PR if desired 👍 ", "Hi! This should disable the tqdm output:\r\n```python\r\nimport datasets\r\ndatasets.set_progress_bar_enabled(False)\r\n```\r\n\r\nOn a side note: I believe the issue with logging (not tqdm) is still relevant on master." ]
977,864,600
2,831
ArrowInvalid when mapping dataset with missing values
open
2021-08-24T08:50:42
2021-08-31T14:15:34
null
https://github.com/huggingface/datasets/issues/2831
null
uniquefine
false
[ "Hi ! It fails because of the feature type inference.\r\n\r\nBecause the first 1000 examples all have null values in the \"match\" field, then it infers that the type for this field is `null` type before writing the data on disk. But as soon as it tries to map an example with a non-null \"match\" field, then it fails.\r\n\r\nTo fix that you can either:\r\n- increase the writer_batch_size to >2000 (default is 1000) so that some non-null values will be in the first batch written to disk\r\n```python\r\ndatasets = datasets.map(lambda e: {'labels': e['match']}, remove_columns=['id'], writer_batch_size=2000)\r\n```\r\n- OR force the feature type with:\r\n```python\r\nfrom datasets import Features, Value\r\n\r\nfeatures = Features({\r\n 'conflict': Value('int64'),\r\n 'date': Value('string'),\r\n 'headline': Value('string'),\r\n 'match': Value('float64'),\r\n 'label': Value('float64')\r\n})\r\ndatasets = datasets.map(lambda e: {'labels': e['match']}, remove_columns=['id'], features=features)\r\n```" ]
977,563,947
2,830
Add imagefolder dataset
closed
2021-08-23T23:34:06
2022-03-01T16:29:44
2022-03-01T16:29:44
https://github.com/huggingface/datasets/pull/2830
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2830", "html_url": "https://github.com/huggingface/datasets/pull/2830", "diff_url": "https://github.com/huggingface/datasets/pull/2830.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2830.patch", "merged_at": "2022-03-01T16:29:44" }
nateraw
true
[ "@lhoestq @albertvillanova it would be super cool if we could get the Image Classification task to work with this. I'm not sure how to have the dataset find the unique label names _after_ the dataset has been loaded. Is that even possible? \r\n\r\nMy hacky community version [here](https://huggingface.co/datasets/nateraw/image-folder) does this, but it wouldn't pass the test suite here. Any thoughts?", "Hi ! Dataset builders that require some `data_files` like `csv` or `json` are handled differently that actual dataset scripts.\r\n\r\nIn particular:\r\n- they are placed directly in the `src` folder of the lib so that you can use it without internet connection (more exactly in `src/datasets/packaged_modules/<builder_name>.py`). So feel free to move the dataset python file there. You also need to register it in `src/datasets/packaked_modules.__init__.py`\r\n- they are handled a bit differently in our test suite (see the `PackagedDatasetTest` class in `test_dataset_common.py`). To be able to test the builder with your dummy data, you just need to modify `get_packaged_dataset_dummy_data_files` in `test_dataset_common.py` to return the right `data_files` for your builder. The dummy data can stay in `datasets/image_folder/dummy`\r\n\r\nLet me know if you have questions or if I can help !", "Hey @lhoestq , I actually already did both of those things. I'm trying to get the `image-classification` task to work now. \r\n\r\nFor example...When you run `ds = load_dataset('imagefolder', data_files='my_files')`, with a directory called `./my_files` that looks like this:\r\n\r\n```\r\nmy_files\r\n----| Cat\r\n--------| image1.jpg\r\n--------| ...\r\n----| Dog\r\n--------| image1.jpg\r\n--------| ...\r\n```\r\n\r\n...We should set the dataset's `labels` feature to `datasets.features.ClassLabel(names=['cat', 'dog'])` dynamically with class names we find by getting a list of directories in `my_files` (via `data_files`). Otherwise the `datasets.tasks.ImageClassification` task will break, as the `labels` feature is not a `ClassLabel`.\r\n\r\nI couldn't figure out how to access the `data_files` in the builder's `_info` function in a way that would pass in the test suite. ", "Nice ! Then maybe you can use `self.config.data_files` in `_info()` ?\r\nWhat error are you getting in the test suite ?\r\n\r\nAlso note that `data_files` was first developed to accept paths to actual files, not directories. In particular, it fetches the metadata of all the data_files to get a unique hash for the caching mechanism. So we may need to do a few changes first.", "I'm trying to make it work by getting the label names in the _info automatically.\r\nI'll let you know tomorrow how it goes :)\r\n\r\nAlso cc @mariosasko since we're going to use #3163 \r\n\r\nRight now I'm getting the label name per file by taking the first word (from regex `\\w+`) after the common prefix of all the files per split", "Data files resolution takes too much time on my side for a dataset of a few 10,000s of examples. I'll speed it up with some multihreading tomorrow, and maybe by removing the unnecessary checksum verification", "The code is a bit ugly for my taste. 
I'll try to simplify it tomorrow by avoiding the `os.path.commonprefix` computation and do something similar to @nateraw's ImageFolder instead, where only the second-to-last path component is considered a label (and see if I can update the class labels lazily in `_generate_examples`).\r\n\r\nAlso, as discussed offline with @lhoestq, I reverted the automatic directory globbing change in `data_files.py` and will investigate if we can use `data_dir` for that (e.g. `load_dataset(\"imagefolder\", data_dir=\"path/to/data\")` would be equal to `load_dataset(\"imagefolder\", data_files=[\"path/to/data/**/*\", \"path/to/data/*\"])`. The only problem with `data_dir` that it's equal to `dl_manager.manual_dir`, which would break scripts with `manul_download_instructions`, so maybe we can limit this behavior only to the packaged loaders? WDYT?", "An updated example of usage: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1zJkAQm0Dk372EHcBq6hYgHuunsK4rR0r?usp=sharing)", "> The code is a bit ugly for my taste. I'll try to simplify it tomorrow by avoiding the os.path.commonprefix computation and do something similar to @nateraw's ImageFolder instead, where only the second-to-last path component is considered a label (and see if I can update the class labels lazily in _generate_examples).\r\n\r\nSounds good ! It's fine if we just support the same format as pytorch ImageFolder.\r\n\r\nRegarding the `data_dir` parameter, what do you think is best ?\r\n\r\n1. `dl_manager.data_dir = data_dir`\r\n2. `dl_manager.data_files = resolve(os.path.join(data_dir, \"**\"))`\r\n\r\nor something else ?\r\n\r\n> The only problem with data_dir that it's equal to dl_manager.manual_dir, which would break scripts with manul_download_instructions, so maybe we can limit this behavior only to the packaged loaders? WDYT?\r\n\r\nWe can still have `dl_manager.manual_dir = data_dir` though", "The example colab is amazing !", "@lhoestq \r\n>Regarding the `data_dir` parameter, what do you think is best ?\r\n>\r\n>1. `dl_manager.data_dir = data_dir`\r\n>2. `dl_manager.data_files = resolve(os.path.join(data_dir, \"**\"))`\r\n\r\nThe second option. Basically, I would like `data_files` to be equal to:\r\n```python\r\ndef _split_generators(self, dl_manager):\r\n data_files = self.config.data_files\r\n if data_files is None: \r\n data_files = glob.glob(\"{self.config.data_dir}/**\", recursive=True)\r\n else:\r\n raise ValueError(f\"At least one data file must be specified, but got data_files={data_files}\")\r\n```\r\nin the scripts of packaged modules. It's probably better to do the resolution in `data_files.py` tho (to handle relative file paths on the Hub, for instance)", "> The second option. Basically, I would like data_files to be equal to:\r\n> ```python\r\n> def _split_generators(self, dl_manager):\r\n> data_files = self.config.data_files\r\n> if data_files is None: \r\n> data_files = glob.glob(\"{self.config.data_dir}/**\", recursive=True)\r\n> else:\r\n> raise ValueError(f\"At least one data file must be specified, but got data_files={data_files}\")\r\n> ```\r\n> in the scripts of packaged modules. It's probably better to do the resolution in data_files.py tho (to handle relative file paths on the Hub, for instance)\r\n\r\nsounds good !", "🙌", "Hey @mariosasko are we still actually able to load an image folder?\r\n\r\nFor example...\r\n\r\n```\r\n! 
wget https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip\r\n! unzip kagglecatsanddogs_3367a.zip\r\n```\r\n\r\nfollowed by\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n# Does not work\r\nds = load_dataset('imagefolder', data_files='/PetImages')\r\n\r\n# Also doesn't work\r\nds = load_dataset('imagefolder', data_dir='/PetImages')\r\n```\r\n\r\nAre we going forward with the assumption that the user always wants to download from URL and that they won't have a dataset locally already? This at least gets us part of the way, but is technically not an \"imagefolder\" as intended. \r\n\r\nEither way, was delighted to see the colab notebook work smoothly outside of the case I just described above. ❤️ thanks so much for the work here.", "> Hey @mariosasko are we still actually able to load an image folder?\r\n> \r\n> For example...\r\n> \r\n> ```\r\n> ! wget https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip\r\n> ! unzip kagglecatsanddogs_3367a.zip\r\n> ```\r\n> \r\n> followed by\r\n> \r\n> ```python\r\n> from datasets import load_dataset\r\n> \r\n> # Does not work\r\n> ds = load_dataset('imagefolder', data_files='/PetImages')\r\n\r\nI ran into this too when I was trying to out. At the moment you can still load from a local on disk directory using a glob pattern i.e. \r\n\r\n```python\r\nfrom datasets import load_dataset\r\nds = load_dataset(\"imagefolder\", data_files=\"PetImages/**/*\")\r\n```\r\n[Colab example](https://colab.research.google.com/drive/1IvAyYSAADHphzbtJMt02OXmnGwGtku3k?usp=sharing). I'm not sure if that is the intended behaviour or not. If it is, I think it would be good to document this because I also assumed the approach @nateraw used would work." ]
977,233,360
2,829
Optimize streaming from TAR archives
closed
2021-08-23T16:56:40
2022-09-21T14:29:46
2022-09-21T14:08:39
https://github.com/huggingface/datasets/issues/2829
null
lhoestq
false
[ "Closed by: \r\n- #3066" ]
977,181,517
2,828
Add code-mixed Kannada Hope speech dataset
closed
2021-08-23T15:55:09
2021-10-01T17:21:03
2021-10-01T17:21:03
https://github.com/huggingface/datasets/pull/2828
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2828", "html_url": "https://github.com/huggingface/datasets/pull/2828", "diff_url": "https://github.com/huggingface/datasets/pull/2828.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2828.patch", "merged_at": null }
adeepH
true
[]
976,976,552
2,827
add a text classification dataset
closed
2021-08-23T12:24:41
2021-08-23T15:51:18
2021-08-23T15:51:18
https://github.com/huggingface/datasets/pull/2827
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2827", "html_url": "https://github.com/huggingface/datasets/pull/2827", "diff_url": "https://github.com/huggingface/datasets/pull/2827.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2827.patch", "merged_at": null }
adeepH
true
[]
976,974,254
2,826
Add a Text Classification dataset: KanHope
closed
2021-08-23T12:21:58
2021-10-01T18:06:59
2021-10-01T18:06:59
https://github.com/huggingface/datasets/issues/2826
null
adeepH
false
[ "Hi ! In your script it looks like you're trying to load the dataset `bn_hate_speech,`, not KanHope.\r\n\r\nMoreover the error `KeyError: ' '` means that you have a feature of type ClassLabel, but for a certain example of the dataset, it looks like the label is empty (it's just a string with a space). Can you make sure that the data don't have missing labels, and that your dataset script parses the labels correctly ?" ]
976,584,926
2,825
The datasets.map function does not load cached dataset after moving python script
closed
2021-08-23T03:23:37
2024-07-29T11:25:50
2021-08-31T13:13:36
https://github.com/huggingface/datasets/issues/2825
null
hobbitlzy
false
[ "This also happened to me on COLAB.\r\nDetails:\r\nI ran the `run_mlm.py` in two different notebooks. \r\nIn the first notebook, I do tokenization since I can get 4 CPU cores without any GPUs, and save the cache into a folder which I copy to drive.\r\nIn the second notebook, I copy the cache folder from drive and re-run the run_mlm.py script (this time I uncomment the trainer code which happens after the tokenization)\r\n\r\nNote: I didn't change anything in the arguments, not even the preprocessing_num_workers\r\n ", "Thanks for reporting ! This is indeed a bug, I'm looking into it", "#2854 fixed the issue :)\r\n\r\nWe'll do a new release of `datasets` soon to make the fix available.\r\nIn the meantime, feel free to try it out by installing `datasets` from source\r\n\r\nIf you have other issues or any question, feel free to re-open the issue :)", "Hello there, \r\n\r\nAlthough I don't change any parameter in the map function, I've faced the same issue.\r\n\r\nThis is the code that I use to map and tokenise my dataset:\r\n\r\n```\r\ntrain_data = train_data.map(\r\n lambda samples: self.tokenizer(\r\n text=samples['source'], \r\n text_target=samples['target'], \r\n padding='max_length', truncation=True),\r\n batched=True, batch_size=128)\r\n```", "Hi @TheTahaaa,\r\n\r\nCould you please open a new Bug issue with all the information about your local environment? ", "> Hi @TheTahaaa,\r\n> \r\n> Could you please open a new Bug issue with all the information about your local environment?\r\n\r\nHey there,\r\n\r\nI managed to solve the problem! Initially, I created the dataset within a function and then tokenised it separately in another part of the code. This approach prevented the tokeniser from accessing the dataset from the cache. So, I moved the tokenisation process into the dataset creation function itself. Now, the .map() function tokenises the data quickly by loading it from the cache!" ]
976,394,721
2,824
Fix defaults in cache_dir docstring in load.py
closed
2021-08-22T14:48:37
2021-08-26T13:23:32
2021-08-26T11:55:16
https://github.com/huggingface/datasets/pull/2824
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2824", "html_url": "https://github.com/huggingface/datasets/pull/2824", "diff_url": "https://github.com/huggingface/datasets/pull/2824.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2824.patch", "merged_at": "2021-08-26T11:55:16" }
mariosasko
true
[]
976,135,355
2,823
HF_DATASETS_CACHE variable in Windows
closed
2021-08-21T13:17:44
2021-08-21T13:20:11
2021-08-21T13:20:11
https://github.com/huggingface/datasets/issues/2823
null
rp2839
false
[ "Agh - I'm a muppet. No quote marks are needed.\r\nset HF_DATASETS_CACHE = C:\\Datasets\r\nworks as intended." ]
975,744,463
2,822
Add url prefix convention for many compression formats
closed
2021-08-20T16:11:23
2021-08-23T15:59:16
2021-08-23T15:59:14
https://github.com/huggingface/datasets/pull/2822
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2822", "html_url": "https://github.com/huggingface/datasets/pull/2822", "diff_url": "https://github.com/huggingface/datasets/pull/2822.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2822.patch", "merged_at": "2021-08-23T15:59:14" }
lhoestq
true
[ "Thanks for the feedback :) I will also complete the documentation to explain this convention", "I just added some documentation about how streaming works with chained URLs.\r\n\r\nI will also add some docs about how to use chained URLs directly in `load_dataset` in #2662, since #2662 does change the documentation already and to avoid having to resolve conflicts.", "Merging this one now, next step is resolve the conflicts in #2662 and update the docs for URL chaining :)\r\n\r\nThere is also the glob feature of zip files that I need to add, to be able to do this for example:\r\n```python\r\nload_dataset(\"json\", data_files=\"zip://*::https://foo.bar/archive.zip\")\r\n```" ]
975,556,032
2,821
Cannot load linnaeus dataset
closed
2021-08-20T12:15:15
2021-08-31T13:13:02
2021-08-31T13:12:09
https://github.com/huggingface/datasets/issues/2821
null
NielsRogge
false
[ "Thanks for reporting ! #2852 fixed this error\r\n\r\nWe'll do a new release of `datasets` soon :)" ]
975,210,712
2,820
Downloading “reddit” dataset keeps timing out.
closed
2021-08-20T02:52:36
2021-09-08T14:52:02
2021-09-08T14:52:02
https://github.com/huggingface/datasets/issues/2820
null
smeyerhot
false
[ "```\r\nUsing custom data configuration default\r\nDownloading and preparing dataset reddit/default (download: 2.93 GiB, generated: 17.64 GiB, post-processed: Unknown size, total: 20.57 GiB) to /Volumes/My Passport for Mac/og-chat-data/reddit/default/1.0.0/98ba5abea674d3178f7588aa6518a5510dc0c6fa8176d9653a3546d5afcb3969...\r\nDownloading: 13%\r\n403M/3.14G [44:39<2:27:09, 310kB/s]\r\n---------------------------------------------------------------------------\r\ntimeout Traceback (most recent call last)\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/urllib3/response.py in _error_catcher(self)\r\n 437 try:\r\n--> 438 yield\r\n 439 \r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/urllib3/response.py in read(self, amt, decode_content, cache_content)\r\n 518 cache_content = False\r\n--> 519 data = self._fp.read(amt) if not fp_closed else b\"\"\r\n 520 if (\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/http/client.py in read(self, amt)\r\n 458 b = bytearray(amt)\r\n--> 459 n = self.readinto(b)\r\n 460 return memoryview(b)[:n].tobytes()\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/http/client.py in readinto(self, b)\r\n 502 # (for example, reading in 1k chunks)\r\n--> 503 n = self.fp.readinto(b)\r\n 504 if not n and b:\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/socket.py in readinto(self, b)\r\n 703 try:\r\n--> 704 return self._sock.recv_into(b)\r\n 705 except timeout:\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/ssl.py in recv_into(self, buffer, nbytes, flags)\r\n 1240 self.__class__)\r\n-> 1241 return self.read(nbytes, buffer)\r\n 1242 else:\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/ssl.py in read(self, len, buffer)\r\n 1098 if buffer is not None:\r\n-> 1099 return self._sslobj.read(len, buffer)\r\n 1100 else:\r\n\r\ntimeout: The read operation timed out\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nReadTimeoutError Traceback (most recent call last)\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/requests/models.py in generate()\r\n 757 try:\r\n--> 758 for chunk in self.raw.stream(chunk_size, decode_content=True):\r\n 759 yield chunk\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/urllib3/response.py in stream(self, amt, decode_content)\r\n 575 while not is_fp_closed(self._fp):\r\n--> 576 data = self.read(amt=amt, decode_content=decode_content)\r\n 577 \r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/urllib3/response.py in read(self, amt, decode_content, cache_content)\r\n 540 # Content-Length are caught.\r\n--> 541 raise IncompleteRead(self._fp_bytes_read, self.length_remaining)\r\n 542 \r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/contextlib.py in __exit__(self, type, value, traceback)\r\n 134 try:\r\n--> 135 self.gen.throw(type, value, traceback)\r\n 136 except StopIteration as exc:\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/urllib3/response.py in _error_catcher(self)\r\n 442 # there is yet no clean way to get at it from this context.\r\n--> 443 raise ReadTimeoutError(self._pool, None, \"Read timed out.\")\r\n 444 \r\n\r\nReadTimeoutError: HTTPSConnectionPool(host='zenodo.org', port=443): Read timed out.\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nConnectionError Traceback (most recent call last)\r\n/var/folders/3f/md0t9sgj6rz8xy01fskttqdc0000gn/T/ipykernel_89016/1133441872.py in 
<module>\r\n 1 from datasets import load_dataset\r\n 2 \r\n----> 3 dataset = load_dataset(\"reddit\", ignore_verifications=True, cache_dir=\"/Volumes/My Passport for Mac/og-chat-data\")\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs)\r\n 845 \r\n 846 # Download and prepare data\r\n--> 847 builder_instance.download_and_prepare(\r\n 848 download_config=download_config,\r\n 849 download_mode=download_mode,\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)\r\n 613 logger.warning(\"HF google storage unreachable. Downloading and preparing it from source\")\r\n 614 if not downloaded_from_gcs:\r\n--> 615 self._download_and_prepare(\r\n 616 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 617 )\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 669 split_dict = SplitDict(dataset_name=self.name)\r\n 670 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)\r\n--> 671 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n 672 \r\n 673 # Checksums verification\r\n\r\n~/.cache/huggingface/modules/datasets_modules/datasets/reddit/98ba5abea674d3178f7588aa6518a5510dc0c6fa8176d9653a3546d5afcb3969/reddit.py in _split_generators(self, dl_manager)\r\n 73 def _split_generators(self, dl_manager):\r\n 74 \"\"\"Returns SplitGenerators.\"\"\"\r\n---> 75 dl_path = dl_manager.download_and_extract(_URL)\r\n 76 return [\r\n 77 datasets.SplitGenerator(\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/utils/download_manager.py in download_and_extract(self, url_or_urls)\r\n 287 extracted_path(s): `str`, extracted paths of given URL(s).\r\n 288 \"\"\"\r\n--> 289 return self.extract(self.download(url_or_urls))\r\n 290 \r\n 291 def get_recorded_sizes_checksums(self):\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/utils/download_manager.py in download(self, url_or_urls)\r\n 195 \r\n 196 start_time = datetime.now()\r\n--> 197 downloaded_path_or_paths = map_nested(\r\n 198 download_func,\r\n 199 url_or_urls,\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types)\r\n 194 # Singleton\r\n 195 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):\r\n--> 196 return function(data_struct)\r\n 197 \r\n 198 disable_tqdm = bool(logger.getEffectiveLevel() > logging.INFO) or not utils.is_progress_bar_enabled()\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/utils/download_manager.py in _download(self, url_or_filename, download_config)\r\n 218 # append the relative path to the base_path\r\n 219 url_or_filename = url_or_path_join(self._base_path, url_or_filename)\r\n--> 220 return cached_path(url_or_filename, download_config=download_config)\r\n 221 \r\n 222 def iter_archive(self, 
path):\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)\r\n 286 if is_remote_url(url_or_filename):\r\n 287 # URL, so get it from the cache (downloading if necessary)\r\n--> 288 output_path = get_from_cache(\r\n 289 url_or_filename,\r\n 290 cache_dir=cache_dir,\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token)\r\n 643 ftp_get(url, temp_file)\r\n 644 else:\r\n--> 645 http_get(\r\n 646 url,\r\n 647 temp_file,\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/utils/file_utils.py in http_get(url, temp_file, proxies, resume_size, headers, cookies, timeout, max_retries)\r\n 451 disable=bool(logging.get_verbosity() == logging.NOTSET),\r\n 452 )\r\n--> 453 for chunk in response.iter_content(chunk_size=1024):\r\n 454 if chunk: # filter out keep-alive new chunks\r\n 455 progress.update(len(chunk))\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/requests/models.py in generate()\r\n 763 raise ContentDecodingError(e)\r\n 764 except ReadTimeoutError as e:\r\n--> 765 raise ConnectionError(e)\r\n 766 else:\r\n 767 # Standard file-like object.\r\n\r\nConnectionError: HTTPSConnectionPool(host='zenodo.org', port=443): Read timed out.\r\n```", "Hey @lhoestq should I try to fix this issue ?", "It also doesn't seem to be \"smart caching\" and I received an error about a file not being found...", "To be clear, the error I get when I try to \"re-instantiate\" the download after failure is: \r\n```\r\nOSError: Cannot find data file. \r\nOriginal error:\r\n[Errno 20] Not a directory: <HOME>/.cache/huggingface/datasets/downloads/1ec12301abba4daa60eb3a90e53529b5b173296b22dc3bef3186e205c75e594c/corpus-webis-tldr-17.json'\r\n```", "Here is a new error:\r\n```\r\nConnectionError: Couldn't reach https://zenodo.org/record/1043504/files/corpus-webis-tldr-17.zip?download=1\r\n```", "Hi ! Since https://github.com/huggingface/datasets/pull/2803 we've changed the time out from 10sec to 100sec.\r\nThis should prevent the `ReadTimeoutError`. Feel free to try it out by installing `datasets` from source\r\n```\r\npip install git+https://github.com/huggingface/datasets.git\r\n```\r\n\r\nWhen re-running your code you said you get a `OSError`, could you try deleting the file at the path returned by the error ? (the one after `[Errno 20] Not a directory:`). Ideally when a download fails you should be able to re-run it without error; there might be an issue here.\r\n\r\nFinally not sure what we can do about `ConnectionError`, this must be an issue from zenodo. If it happens you simply need to try again\r\n", "@lhoestq thanks for the update. The directory specified by the OSError ie. \r\n```\r\n1ec12301abba4daa60eb3a90e53529b5b173296b22dc3bef3186e205c75e594c/corpus-webis-tldr-17.json \r\n```\r\n was not actually in that directory so I can't delete it. ", "Oh, then could you try deleting the parent directory `1ec12301abba4daa60eb3a90e53529b5b173296b22dc3bef3186e205c75e594c` instead ?\r\nThis way the download manager will know that it has to uncompress the data again", "It seems to have worked. It only took like 20min! I think the extra timeout length did the trick! One thing is that it downloaded a total of 41gb instead of 20gb but at least it finished. 
", "Great ! The timeout change will be available in the next release of `datasets` :)" ]
974,683,155
2,819
Added XL-Sum dataset
closed
2021-08-19T13:47:45
2021-09-29T08:13:44
2021-09-23T17:49:05
https://github.com/huggingface/datasets/pull/2819
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2819", "html_url": "https://github.com/huggingface/datasets/pull/2819", "diff_url": "https://github.com/huggingface/datasets/pull/2819.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2819.patch", "merged_at": null }
abhik1505040
true
[ "Thanks for adding this one ! I just did some minor changes and set the timeout back to 100sec instead of 1000", "The CI failure is unrelated to this PR - let me take a look", "> Thanks for adding this one! I just did some minor changes and set the timeout back to 100sec instead of 1000\r\n\r\nThank you for updating the language tags. I tried timeout values up to 300 sec on my local machine, but some of the larger files still get timed out. Although this could have been a network issue on my end, have you verified that 100 sec works for all files?", "Well the main issue with google drive - even before the time out issues - is that it has a daily quota of downloads per file.\r\nTherefore if many people start downloading this dataset, it will be unavailable until the quota is reset the next day.\r\n\r\nSo ideally it would be nice if the data were hosted elsewhere than Google drive, to avoid the quota and time out issue.\r\nHF can probably help with hosting the data if needed", "> Well the main issue with google drive - even before the time out issues - is that it has a daily quota of downloads per file.\r\n> Therefore if many people start downloading this dataset, it will be unavailable until the quota is reset the next day.\r\n> \r\n> So ideally it would be nice if the data were hosted elsewhere than Google drive, to avoid the quota and time out issue.\r\n> HF can probably help with hosting the data if needed\r\n\r\nIt'd be great if the dataset can be hosted in HF. How should I proceed here though? Upload the dataset files as a community dataset and update the links in this pull request or is there a more straightforward way?", "Hi ! Ideally everything should be in the same place, so feel free to create a community dataset on the Hub and upload your data files as well as you dataset script (and also the readme.md and dataset_infos.json).\r\n\r\nThe only change you have to do in your dataset script is use a relative path to your data files instead of urls.\r\nFor example if your repository looks like this:\r\n```\r\nxlsum/\r\n├── data/\r\n│ ├── amharic_XLSum_v2.0.tar.bz2\r\n│ ├── ...\r\n│ └── yoruba_XLSum_v2.0.tar.bz2\r\n├── xlsum.py\r\n├── README.md\r\n└── dataset_infos.json\r\n```\r\nThen you just need to pass `\"data/amharic_XLSum_v2.0.tar.bz2\"` to `dl_manager.download_and_extract(...)`, instead of an url.\r\n\r\nLocally you can test that it's working as expected with\r\n```python\r\nload_dataset(\"path/to/my/directory/named/xlsum\")\r\n```\r\n\r\nThen once it's on the Hub, you can load it with\r\n```python\r\nload_dataset(\"username/xlsum\")\r\n```\r\n\r\nLet me know if you have questions :)", "Thank you for your detailed response regarding the community dataset building process. However, will this pull request be merged into the main branch?", "If XL-sum is available via the Hub we don't need to add it again in the `datasets` github repo ;)", "The dataset has now been uploaded on HF hub. It's available at https://huggingface.co/datasets/csebuetnlp/xlsum. Closing this pull request. Thank you for your contributions. ", "Thank you !" ]