| id (int64) | number (int64) | title (string) | state (string, 2 classes) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s], nullable) | html_url (string) | pull_request (dict) | user_login (string) | is_pull_request (bool) | comments (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|
1,182,157,056
| 4,030
|
Use a constant for the articles regex in SQuAD v2
|
closed
| 2022-03-26T23:06:30
| 2022-04-12T16:30:45
| 2022-04-12T11:00:24
|
https://github.com/huggingface/datasets/pull/4030
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4030",
"html_url": "https://github.com/huggingface/datasets/pull/4030",
"diff_url": "https://github.com/huggingface/datasets/pull/4030.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4030.patch",
"merged_at": "2022-04-12T11:00:24"
}
|
bryant1410
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,181,057,011
| 4,029
|
Add FAISS .range_search() method for retrieving all texts from dataset above similarity threshold
|
closed
| 2022-03-25T17:31:33
| 2022-05-06T08:35:52
| 2022-05-06T08:35:52
|
https://github.com/huggingface/datasets/issues/4029
| null |
MoritzLaurer
| false
|
[
"Hi ! You can access the faiss index with\r\n```python\r\nfaiss_index = my_dataset.get_index(\"my_index_name\").faiss_index\r\n```\r\nand then do whatever you want with it, e.g. query it using range_search:\r\n```python\r\nthreshold = 0.95\r\nlimits, distances, indices = faiss_index.range_search(x=xq, thresh=threshold)\r\n\r\ntexts = dataset[indices]\r\n```",
"wow, that's great, thank you for the explanation. (if that's not already in the documentation, could be worth adding it)\r\n\r\nwhich type of faiss index is Datasets using? I looked into faiss recently and I understand that there are several different types of indexes and the choice is important, e.g. regarding which distance metric you use (euclidian vs. cosine/dot product), the size of my dataset etc. can I chose the type of index somehow as well?",
"`Dataset.add_faiss_index` has a `string_factory` parameter, used to set the type of index (see the faiss documentation about [index factory](https://github.com/facebookresearch/faiss/wiki/The-index-factory)). Alternatively, you can pass an index you've defined yourself using faiss with the `custom_index` parameter of `Dataset.add_faiss_index` \r\n\r\nHere is the full documentation of `Dataset.add_faiss_index`: https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.Dataset.add_faiss_index",
"great thanks, I will try it out"
] |
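
The exchange above walks through reaching the raw FAISS index behind a `Dataset` and querying it with `range_search`. Below is a minimal, self-contained sketch of that recipe; the toy data, the index name `my_index_name`, and the `Flat` factory string are illustrative assumptions, and with a Flat (L2) index the threshold is a distance cutoff rather than a similarity.

```python
# Minimal sketch (toy data, hypothetical index name) of the recipe in the thread above:
# attach a FAISS index to a Dataset, grab the underlying faiss index, and use
# range_search to fetch every row within a distance threshold of a query vector.
import numpy as np
from datasets import Dataset

ds = Dataset.from_dict({
    "text": ["apple pie", "banana bread", "carrot cake", "apple tart"],
    "embeddings": np.random.rand(4, 8).astype("float32").tolist(),
})
ds.add_faiss_index(column="embeddings", index_name="my_index_name", string_factory="Flat")

faiss_index = ds.get_index("my_index_name").faiss_index
query = np.random.rand(1, 8).astype("float32")
threshold = 0.95  # with a Flat/L2 index this is a squared-distance cutoff, not a similarity
limits, distances, indices = faiss_index.range_search(x=query, thresh=threshold)

texts = ds[indices.tolist()]["text"]
print(texts)
```
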
1,181,022,675
| 4,028
|
Fix docs on audio feature installation
|
closed
| 2022-03-25T16:55:11
| 2022-03-31T16:20:47
| 2022-03-31T16:15:20
|
https://github.com/huggingface/datasets/pull/4028
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4028",
"html_url": "https://github.com/huggingface/datasets/pull/4028",
"diff_url": "https://github.com/huggingface/datasets/pull/4028.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4028.patch",
"merged_at": "2022-03-31T16:15:20"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,180,991,344
| 4,027
|
ElasticSearch Indexing example: TypeError: __init__() missing 1 required positional argument: 'scheme'
|
closed
| 2022-03-25T16:22:28
| 2022-04-07T10:29:52
| 2022-03-28T07:58:56
|
https://github.com/huggingface/datasets/issues/4027
| null |
MoritzLaurer
| false
|
[
"Hi, @MoritzLaurer, thanks for reporting.\r\n\r\nNormally this is due to a mismatch between the versions of your Elasticsearch client and server:\r\n- your ES client is passing only keyword arguments to your ES server\r\n- whereas your ES server expects a positional argument called 'scheme'\r\n\r\nIn order to fix this, you should align the major versions of both Elasticsearch client and server.\r\n\r\nYou can have more info:\r\n- on this other issue page: https://github.com/huggingface/datasets/issues/3956#issuecomment-1072115173\r\n- Elasticsearch client docs: https://www.elastic.co/guide/en/elasticsearch/client/python-api/current/overview.html#_compatibility\r\n\r\nFeel free to re-open this issue if the problem persists.\r\n\r\nDuplicate of:\r\n- #3956",
"1. Check elasticsearch version\r\n```\r\nimport elasticsearch\r\nprint(elasticsearch.__version__)\r\n```\r\nEx: 7.9.1\r\n2. Uninstall current elasticsearch package\r\n`pip uninstall elasticsearch`\r\n3. Install elasticsearch 7.9.1 package\r\n`pip install elasticsearch==7.9.1`"
] |
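
The replies above boil down to aligning the Elasticsearch client's major version with the server's. A small sketch of that check-and-reinstall step, assuming a hypothetical server version of 7.9.1 (substitute whatever your server reports):

```python
# Sketch of the version-alignment fix described above. SERVER_VERSION is a placeholder;
# replace it with the version your Elasticsearch server reports.
import subprocess
import sys

import elasticsearch

SERVER_VERSION = "7.9.1"  # e.g. version.number from `curl localhost:9200`
ver = elasticsearch.__version__
client_version = ver if isinstance(ver, str) else ".".join(map(str, ver))
print(f"client={client_version} server={SERVER_VERSION}")

if client_version.split(".")[0] != SERVER_VERSION.split(".")[0]:
    # A major-version mismatch is what triggers the
    # "__init__() missing 1 required positional argument: 'scheme'" error.
    subprocess.check_call(
        [sys.executable, "-m", "pip", "install", f"elasticsearch=={SERVER_VERSION}"]
    )
```
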
1,180,968,774
| 4,026
|
Support streaming xtreme dataset for bucc18 config
|
closed
| 2022-03-25T16:00:40
| 2022-03-25T16:26:50
| 2022-03-25T16:21:52
|
https://github.com/huggingface/datasets/pull/4026
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4026",
"html_url": "https://github.com/huggingface/datasets/pull/4026",
"diff_url": "https://github.com/huggingface/datasets/pull/4026.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4026.patch",
"merged_at": "2022-03-25T16:21:52"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,180,963,105
| 4,025
|
Missing argument in precision/recall
|
closed
| 2022-03-25T15:55:52
| 2022-03-28T09:53:06
| 2022-03-28T09:53:06
|
https://github.com/huggingface/datasets/issues/4025
| null |
Dref360
| false
|
[
"Thanks for the suggestion, @Dref360.\r\n\r\nWe are adding that argument. "
] |
1,180,951,817
| 4,024
|
Doc: image_process small tip
|
closed
| 2022-03-25T15:44:32
| 2022-03-31T15:35:35
| 2022-03-31T15:30:20
|
https://github.com/huggingface/datasets/pull/4024
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4024",
"html_url": "https://github.com/huggingface/datasets/pull/4024",
"diff_url": "https://github.com/huggingface/datasets/pull/4024.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4024.patch",
"merged_at": null
}
|
FrancescoSaverioZuppichini
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This tip is unnecessary, i.e., Pillow will already be installed since the `Image` feature requires it for encoding and decoding. Thanks anyway.\r\n\r\ncc @stevhliu I've noticed we are missing the installation section in the doc (`pip install datasets[vision]`). I can add it myself."
] |
1,180,840,399
| 4,023
|
Replace yahoo_answers_topics data url
|
closed
| 2022-03-25T14:08:57
| 2022-03-28T10:12:56
| 2022-03-28T10:07:52
|
https://github.com/huggingface/datasets/pull/4023
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4023",
"html_url": "https://github.com/huggingface/datasets/pull/4023",
"diff_url": "https://github.com/huggingface/datasets/pull/4023.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4023.patch",
"merged_at": "2022-03-28T10:07:52"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The CI is failing because of issues in the dataset cards that are unrelated to this PR - merging"
] |
1,180,816,682
| 4,022
|
Replace dbpedia_14 data url
|
closed
| 2022-03-25T13:47:21
| 2022-03-25T15:03:37
| 2022-03-25T14:58:49
|
https://github.com/huggingface/datasets/pull/4022
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4022",
"html_url": "https://github.com/huggingface/datasets/pull/4022",
"diff_url": "https://github.com/huggingface/datasets/pull/4022.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4022.patch",
"merged_at": "2022-03-25T14:58:49"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,180,805,092
| 4,021
|
Fix `map` remove_columns on empty dataset
|
closed
| 2022-03-25T13:36:29
| 2022-03-29T13:41:31
| 2022-03-29T13:35:44
|
https://github.com/huggingface/datasets/pull/4021
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4021",
"html_url": "https://github.com/huggingface/datasets/pull/4021",
"diff_url": "https://github.com/huggingface/datasets/pull/4021.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4021.patch",
"merged_at": "2022-03-29T13:35:44"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,180,636,754
| 4,020
|
Replace amazon_polarity data URL
|
closed
| 2022-03-25T10:50:57
| 2022-03-25T15:02:36
| 2022-03-25T14:57:41
|
https://github.com/huggingface/datasets/pull/4020
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4020",
"html_url": "https://github.com/huggingface/datasets/pull/4020",
"diff_url": "https://github.com/huggingface/datasets/pull/4020.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4020.patch",
"merged_at": "2022-03-25T14:57:41"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,180,628,293
| 4,019
|
Make yelp_polarity streamable
|
closed
| 2022-03-25T10:42:51
| 2022-03-25T15:02:19
| 2022-03-25T14:57:16
|
https://github.com/huggingface/datasets/pull/4019
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4019",
"html_url": "https://github.com/huggingface/datasets/pull/4019",
"diff_url": "https://github.com/huggingface/datasets/pull/4019.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4019.patch",
"merged_at": "2022-03-25T14:57:15"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The CI is failing because of the incomplete dataset card - this is unrelated to the goal of this PR so we can ignore it"
] |
1,180,622,816
| 4,018
|
Replace yelp_review_full data url
|
closed
| 2022-03-25T10:37:18
| 2022-03-25T15:01:02
| 2022-03-25T14:56:10
|
https://github.com/huggingface/datasets/pull/4018
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4018",
"html_url": "https://github.com/huggingface/datasets/pull/4018",
"diff_url": "https://github.com/huggingface/datasets/pull/4018.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4018.patch",
"merged_at": "2022-03-25T14:56:10"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,180,595,160
| 4,017
|
Support streaming scan dataset
|
closed
| 2022-03-25T10:11:28
| 2022-03-25T12:08:55
| 2022-03-25T12:03:52
|
https://github.com/huggingface/datasets/pull/4017
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4017",
"html_url": "https://github.com/huggingface/datasets/pull/4017",
"diff_url": "https://github.com/huggingface/datasets/pull/4017.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4017.patch",
"merged_at": "2022-03-25T12:03:52"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,180,557,828
| 4,016
|
Support streaming blimp dataset
|
closed
| 2022-03-25T09:39:10
| 2022-03-25T11:19:18
| 2022-03-25T11:14:13
|
https://github.com/huggingface/datasets/pull/4016
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4016",
"html_url": "https://github.com/huggingface/datasets/pull/4016",
"diff_url": "https://github.com/huggingface/datasets/pull/4016.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4016.patch",
"merged_at": "2022-03-25T11:14:13"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,180,510,856
| 4,015
|
Can not correctly parse the classes with imagefolder
|
closed
| 2022-03-25T08:51:17
| 2022-03-28T01:02:03
| 2022-03-25T09:27:56
|
https://github.com/huggingface/datasets/issues/4015
| null |
YiSyuanChen
| false
|
[
"I found that the problem arises because the image files in my folder are actually symbolic links (for my own reasons). After modifications, the classes can now be correctly parsed. Therefore, I close this issue.",
"HI, I have a question. How much time did you load the ImageNet data files? "
] |
1,180,481,229
| 4,014
|
Support streaming id_clickbait dataset
|
closed
| 2022-03-25T08:18:28
| 2022-03-25T08:58:31
| 2022-03-25T08:53:32
|
https://github.com/huggingface/datasets/pull/4014
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4014",
"html_url": "https://github.com/huggingface/datasets/pull/4014",
"diff_url": "https://github.com/huggingface/datasets/pull/4014.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4014.patch",
"merged_at": "2022-03-25T08:53:32"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,180,427,174
| 4,013
|
Cannot preview "hazal/Turkish-Biomedical-corpus-trM"
|
closed
| 2022-03-25T07:12:02
| 2022-04-04T08:05:01
| 2022-03-25T14:16:11
|
https://github.com/huggingface/datasets/issues/4013
| null |
hazalturkmen
| false
|
[
"Hi @hazalturkmen, thanks for reporting.\r\n\r\nNote that your dataset repository does not contain any loading script; it only contains a data file named `tr_article_2`.\r\n\r\nWhen there is no loading script but only data files, the `datasets` library tries to infer how to load the data by looking at the data file extensions. However, your data file does not have any extension.\r\n\r\nNote that current supported data file extensions are: 'csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip'.\r\n\r\nYou have more info on our docs: [How to share a dataset](https://huggingface.co/docs/datasets/share).",
"thanks for reply :)"
] |
1,180,350,083
| 4,012
|
Rename wer to cer
|
closed
| 2022-03-25T05:06:05
| 2022-03-28T13:57:25
| 2022-03-28T13:57:25
|
https://github.com/huggingface/datasets/pull/4012
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4012",
"html_url": "https://github.com/huggingface/datasets/pull/4012",
"diff_url": "https://github.com/huggingface/datasets/pull/4012.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4012.patch",
"merged_at": "2022-03-28T13:57:25"
}
|
pmgautam
| true
|
[] |
1,179,885,965
| 4,011
|
Fix SQuAD v2 metric docs on `references` format
|
closed
| 2022-03-24T18:27:10
| 2023-07-11T09:35:46
| 2023-07-11T09:35:15
|
https://github.com/huggingface/datasets/pull/4011
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4011",
"html_url": "https://github.com/huggingface/datasets/pull/4011",
"diff_url": "https://github.com/huggingface/datasets/pull/4011.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4011.patch",
"merged_at": null
}
|
bryant1410
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Metrics are deprecated in `datasets` and `evaluate` should be used instead: https://github.com/huggingface/evaluate"
] |
1,179,848,036
| 4,010
|
Fix None issue with Sequence of dict
|
closed
| 2022-03-24T17:58:59
| 2022-03-28T10:13:53
| 2022-03-28T10:08:40
|
https://github.com/huggingface/datasets/pull/4010
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4010",
"html_url": "https://github.com/huggingface/datasets/pull/4010",
"diff_url": "https://github.com/huggingface/datasets/pull/4010.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4010.patch",
"merged_at": "2022-03-28T10:08:40"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Merging since I'd like do do a patch release soon for this one"
] |
1,179,658,611
| 4,009
|
AMI load_dataset error: sndfile library not found
|
closed
| 2022-03-24T15:13:38
| 2022-03-24T15:46:38
| 2022-03-24T15:17:29
|
https://github.com/huggingface/datasets/issues/4009
| null |
i-am-neo
| false
|
[
"Issue unresolved, see [4000](https://github.com/huggingface/datasets/issues/4009#issue-1179658611)"
] |
1,179,591,068
| 4,008
|
Support streaming daily_dialog dataset
|
closed
| 2022-03-24T14:23:23
| 2022-03-24T15:29:01
| 2022-03-24T14:46:58
|
https://github.com/huggingface/datasets/pull/4008
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4008",
"html_url": "https://github.com/huggingface/datasets/pull/4008",
"diff_url": "https://github.com/huggingface/datasets/pull/4008.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4008.patch",
"merged_at": "2022-03-24T14:46:58"
}
|
albertvillanova
| true
|
[
"Yay! I love this dataset!"
] |
1,179,381,021
| 4,007
|
set_format does not work with multi dimension tensor
|
closed
| 2022-03-24T11:27:43
| 2022-03-30T07:28:57
| 2022-03-24T14:39:29
|
https://github.com/huggingface/datasets/issues/4007
| null |
phihung
| false
|
[
"Hi! Use the `ArrayXD` feature type (where X is the number of dimensions) to get correctly formated tensors. So in your case, define the dataset as follows :\r\n```python\r\nds = Dataset.from_dict({\"A\": [torch.rand((2, 2))]}, features=Features({\"A\": Array2D(shape=(2, 2), dtype=\"float32\")}))\r\n```\r\n",
"Hi @mariosasko I'm facing the same issue and the only work around I've found so far is to convert my `DatasetDict` to a dictionary and then create new objects with `Dataset.from_dict`.\r\n```\r\ndataset = load_dataset(\"my_dataset.py\")\r\ndataset = dataset.map(lambda example: blabla(example))\r\ndict_dataset_test = dataset[\"test\"].to_dict()\r\n...\r\ndataset_test = Dataset.from_dict(dict_dataset_test, features=Features(features))\r\n```\r\nHowever, converting a `Dataset` object to a dict takes quite a lot of time and memory... Is there a way to directly create an `Array2D` without having to transform the original `Dataset` to a dict?",
"Hi! Yes, you can directly pass the `Features` dictionary as `features` in `map` to cast the column to `Array2D`:\r\n```python\r\ndataset = dataset.map(lambda example: blabla(example), features=Features(features))\r\n```\r\nOr you can use `cast` after `map` to do that:\r\n```python\r\ndataset = dataset.map(lambda example: blabla(example))\r\ndataset = dataset.cast(Features(features))\r\n```",
"Fantastic thank you @mariosasko\r\nThe first option you suggested is indeed way faster 😃 "
] |
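
The fix sketched in the replies is to declare the column with an `ArrayXD` feature type so that `set_format` can return a real multi-dimensional tensor. A minimal runnable version of that suggestion, using toy 2x2 data:

```python
# Minimal sketch of the ArrayXD fix from the thread above: declare the column as
# Array2D so set_format("torch") yields a proper 2-D tensor instead of nested lists.
import torch
from datasets import Array2D, Dataset, Features

features = Features({"A": Array2D(shape=(2, 2), dtype="float32")})
ds = Dataset.from_dict(
    {"A": [torch.rand(2, 2).numpy(), torch.rand(2, 2).numpy()]},
    features=features,
)

ds.set_format("torch")
print(type(ds[0]["A"]), ds[0]["A"].shape)  # <class 'torch.Tensor'> torch.Size([2, 2])
```

As the thread also notes, an existing dataset can be moved onto the same feature type either by passing `features=...` to `map` or by calling `dataset.cast(...)` afterwards.
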
1,179,367,195
| 4,006
|
Use audio feature in ASR task template
|
closed
| 2022-03-24T11:15:22
| 2022-03-24T17:19:29
| 2022-03-24T16:48:02
|
https://github.com/huggingface/datasets/pull/4006
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4006",
"html_url": "https://github.com/huggingface/datasets/pull/4006",
"diff_url": "https://github.com/huggingface/datasets/pull/4006.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4006.patch",
"merged_at": "2022-03-24T16:48:02"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,179,365,663
| 4,005
|
Yelp not working
|
closed
| 2022-03-24T11:14:00
| 2022-03-25T14:59:57
| 2022-03-25T14:56:10
|
https://github.com/huggingface/datasets/issues/4005
| null |
patrickvonplaten
| false
|
[
"I don't think it's an issue with the dataset-viewer. Maybe @lhoestq or @albertvillanova could confirm.\r\n\r\n```python\r\n>>> from datasets import load_dataset, DownloadMode\r\n>>> import itertools\r\n>>> # without streaming\r\n>>> dataset = load_dataset(\"yelp_review_full\", name=\"yelp_review_full\", split=\"train\", download_mode=DownloadMode.FORCE_REDOWNLOAD)\r\n\r\nDownloading builder script: 4.39kB [00:00, 5.97MB/s]\r\nDownloading metadata: 2.13kB [00:00, 3.14MB/s]\r\nDownloading and preparing dataset yelp_review_full/yelp_review_full (download: 187.06 MiB, generated: 496.94 MiB, post-processed: Unknown size, total: 684.00 MiB) to /home/slesage/.cache/huggingface/datasets/yelp_review_full/yelp_review_full/1.0.0/13c31a618ba62568ec8572a222a283dfc29a6517776a3ac5945fb508877dde43...\r\nDownloading data: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.10k/1.10k [00:00<00:00, 1.39MB/s]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets/src/datasets/load.py\", line 1687, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 605, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 1104, in _download_and_prepare\r\n super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 676, in _download_and_prepare\r\n verify_checksums(\r\n File \"/home/slesage/hf/datasets/src/datasets/utils/info_utils.py\", line 40, in verify_checksums\r\n raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://drive.google.com/uc?export=download&id=0Bz8a_Dbh9QhbZlU4dXhHTFhZQU0']\r\n\r\n>>> # with streaming\r\n>>> dataset = load_dataset(\"yelp_review_full\", name=\"yelp_review_full\", split=\"train\", download_mode=DownloadMode.FORCE_REDOWNLOAD, streaming=True)\r\n\r\nDownloading builder script: 4.39kB [00:00, 5.53MB/s]\r\nDownloading metadata: 2.13kB [00:00, 3.14MB/s]\r\nTraceback (most recent call last):\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/implementations/http.py\", line 375, in _info\r\n await _file_info(\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/implementations/http.py\", line 736, in _file_info\r\n r.raise_for_status()\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/aiohttp/client_reqrep.py\", line 1000, in raise_for_status\r\n raise ClientResponseError(\r\naiohttp.client_exceptions.ClientResponseError: 403, message='Forbidden', url=URL('https://doc-0g-bs-docs.googleusercontent.com/docs/securesc/ha0ro937gcuc7l7deffksulhg5h7mbp1/gklhpdq1arj8v15qrg7ces34a8c3413d/1648144575000/07511006523564980941/*/0Bz8a_Dbh9QhbZlU4dXhHTFhZQU0?e=download')\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets/src/datasets/load.py\", line 1677, in load_dataset\r\n return builder_instance.as_streaming_dataset(\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 906, in 
as_streaming_dataset\r\n splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/yelp_review_full/13c31a618ba62568ec8572a222a283dfc29a6517776a3ac5945fb508877dde43/yelp_review_full.py\", line 102, in _split_generators\r\n data_dir = dl_manager.download_and_extract(my_urls)\r\n File \"/home/slesage/hf/datasets/src/datasets/utils/streaming_download_manager.py\", line 800, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"/home/slesage/hf/datasets/src/datasets/utils/streaming_download_manager.py\", line 778, in extract\r\n urlpaths = map_nested(self._extract, path_or_paths, map_tuple=True)\r\n File \"/home/slesage/hf/datasets/src/datasets/utils/py_utils.py\", line 306, in map_nested\r\n return function(data_struct)\r\n File \"/home/slesage/hf/datasets/src/datasets/utils/streaming_download_manager.py\", line 783, in _extract\r\n protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)\r\n File \"/home/slesage/hf/datasets/src/datasets/utils/streaming_download_manager.py\", line 372, in _get_extraction_protocol\r\n with fsspec.open(urlpath, **kwargs) as f:\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/core.py\", line 102, in __enter__\r\n f = self.fs.open(self.path, mode=mode)\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/spec.py\", line 978, in open\r\n f = self._open(\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/implementations/http.py\", line 335, in _open\r\n size = size or self.info(path, **kwargs)[\"size\"]\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/asyn.py\", line 88, in wrapper\r\n return sync(self.loop, func, *args, **kwargs)\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/asyn.py\", line 69, in sync\r\n raise result[0]\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/asyn.py\", line 25, in _runner\r\n result[0] = await coro\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/implementations/http.py\", line 388, in _info\r\n raise FileNotFoundError(url) from exc\r\nFileNotFoundError: https://drive.google.com/uc?export=download&id=0Bz8a_Dbh9QhbZlU4dXhHTFhZQU0&confirm=t\r\n```\r\n\r\nAnd this is before even trying to access the rows with\r\n\r\n```python\r\n>>> rows = list(itertools.islice(dataset, 100))\r\n>>> rows = list(dataset.take(100))\r\n```",
"Yet another issue related to google drive not being nice. Most likely your IP has been banned from using their API programmatically. Do you know if we are allowed to host and redistribute the data ourselves ?",
"Hi,\r\n\r\nFacing the same issue while loading the dataset: \r\n\r\n`Error: {NonMatchingChecksumError}Checksums didn't match for dataset source files`\r\n\r\nThanks",
"> Facing the same issue while loading the dataset:\r\n> \r\n> Error: {NonMatchingChecksumError}Checksums didn't match for dataset source files\r\n\r\nThanks for reporting. I think this is the same issue. Feel free to try again later, once Google Drive stopped blocking you. You can retry by passing `download_mode=\"force_redownload\"` to `load_dataset`",
"I noticed that FastAI hosts the Yelp dataset at https://s3.amazonaws.com/fast-ai-nlp/yelp_review_full_csv.tgz (from their catalog [here](https://course.fast.ai/datasets))\r\n\r\nLet's update the yelp dataset script to download from there instead of Google Drive",
"I updated the link to not use Google Drive anymore, we will do a release early next week with the updated download url of the dataset :)"
] |
1,179,320,795
| 4,004
|
ASSIN 2 dataset: replace broken Google Drive _URLS by links on github
|
closed
| 2022-03-24T10:37:39
| 2022-03-28T14:01:46
| 2022-03-28T13:56:39
|
https://github.com/huggingface/datasets/pull/4004
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4004",
"html_url": "https://github.com/huggingface/datasets/pull/4004",
"diff_url": "https://github.com/huggingface/datasets/pull/4004.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4004.patch",
"merged_at": "2022-03-28T13:56:39"
}
|
ruanchaves
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,179,286,877
| 4,003
|
ASSIN2 dataset checksum bug
|
closed
| 2022-03-24T10:08:50
| 2022-04-27T14:14:45
| 2022-03-28T13:56:39
|
https://github.com/huggingface/datasets/issues/4003
| null |
ruanchaves
| false
|
[
"Using latest code, I am still facing the issue.\r\n\r\n```python\r\n(base) vimos@vimosmu ➜ ~ ipython\r\nPython 3.6.7 | packaged by conda-forge | (default, Nov 6 2019, 16:19:42) \r\nType 'copyright', 'credits' or 'license' for more information\r\nIPython 7.11.1 -- An enhanced Interactive Python. Type '?' for help.\r\n\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: load_dataset(\"assin2\")\r\nDownloading builder script: 4.24kB [00:00, 244kB/s]\r\nDownloading metadata: 2.58kB [00:00, 2.19MB/s]\r\nUsing custom data configuration default\r\nDownloading and preparing dataset assin2/default (download: 2.02 MiB, generated: 1.21 MiB, post-processed: Unknown size, total: 3.23 MiB) to /home/vimos/.cache/huggingface/datasets/assin2/default/1.0.0/8467f7acbda82f62ab960ca869dc1e96350e0e103a1ef7eaa43bbee530b80061...\r\nDownloading data: 1.51MB [00:00, 102MB/s]\r\nDownloading data: 116kB [00:00, 63.6MB/s]\r\nDownloading data: 493kB [00:00, 95.8MB/s] \r\nDownloading data files: 100%|██████████████████████████████████████████| 3/3 [00:00<00:00, 8.27it/s]\r\n---------------------------------------------------------------------------\r\nExpectedMoreDownloadedFiles Traceback (most recent call last)\r\n<ipython-input-2-b367d1ffd68e> in <module>\r\n----> 1 load_dataset(\"assin2\")\r\n\r\n~/anaconda3/lib/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)\r\n 1694 ignore_verifications=ignore_verifications,\r\n 1695 try_from_hf_gcs=try_from_hf_gcs,\r\n-> 1696 use_auth_token=use_auth_token,\r\n 1697 )\r\n 1698\r\n\r\n~/anaconda3/lib/python3.6/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)\r\n 604 if not downloaded_from_gcs:\r\n 605 self._download_and_prepare(\r\n--> 606 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 607 )\r\n 608 # Sync info\r\n\r\n~/anaconda3/lib/python3.6/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos)\r\n 1102\r\n 1103 def _download_and_prepare(self, dl_manager, verify_infos):\r\n-> 1104 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)\r\n 1105\r\n 1106 def _get_examples_iterable_for_split(self, split_generator: SplitGenerator) -> ExamplesIterable:\r\n\r\n~/anaconda3/lib/python3.6/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 675 if verify_infos:\r\n 676 verify_checksums(\r\n--> 677 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), \"dataset source files\"\r\n 678 )\r\n 679\r\n\r\n~/anaconda3/lib/python3.6/site-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)\r\n 31 return\r\n 32 if len(set(expected_checksums) - set(recorded_checksums)) > 0:\r\n---> 33 raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))\r\n 34 if len(set(recorded_checksums) - set(expected_checksums)) > 0:\r\n 35 raise UnexpectedDownloadedFile(str(set(recorded_checksums) - set(expected_checksums)))\r\n\r\nExpectedMoreDownloadedFiles: {'https://drive.google.com/u/0/uc?id=1kb7xq6Mb3eaqe9cOAo70BaG9ypwkIqEU&export=download', 
'https://drive.google.com/u/0/uc?id=1J3FpQaHxpM-FDfBUyooh-sZF-B-bM_lU&export=download', 'https://drive.google.com/u/0/uc?id=1Q9j1a83CuKzsHCGaNulSkNxBm7Dkn7Ln&export=download'}\r\n```",
"That's true. Steps to reproduce the bug on Google Colab:\r\n\r\n```\r\ngit clone https://github.com/huggingface/datasets.git\r\ncd datasets\r\npip install -e .\r\npython -c \"from datasets import load_dataset; print(load_dataset('assin2')['train'][0])\"\r\n```\r\n\r\nHowever the dataset will load without any problems if you just install version 2.0.0:\r\n\r\n ```\r\npip install datasets\r\npython -c \"from datasets import load_dataset; print(load_dataset('assin2')['train'][0])\"\r\n```\r\n\r\nAny thoughts @lhoestq ?",
"Right indeed ! Let me open a PR to fix this.\r\nThe dataset_infos.json file that stores some metadata about the dataset to download (and is used to verify it was correctly downloaded) hasn't been updated correctly",
"Not sure what the status of this is, but personally I am still getting this error, with glue.",
"Can you open a new issue if you got an error with glue please ?",
"Have posted at #4241"
] |
1,179,263,787
| 4,002
|
Support streaming conll2012_ontonotesv5 dataset
|
closed
| 2022-03-24T09:49:56
| 2022-03-24T10:53:41
| 2022-03-24T10:48:47
|
https://github.com/huggingface/datasets/pull/4002
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4002",
"html_url": "https://github.com/huggingface/datasets/pull/4002",
"diff_url": "https://github.com/huggingface/datasets/pull/4002.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4002.patch",
"merged_at": "2022-03-24T10:48:47"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,179,231,418
| 4,001
|
How to use generate this multitask dataset for SQUAD? I am getting a value error.
|
closed
| 2022-03-24T09:21:51
| 2022-03-26T09:48:21
| 2022-03-26T03:35:43
|
https://github.com/huggingface/datasets/issues/4001
| null |
gsk1692
| false
|
[
"Hi! Replacing `nlp.<obj>` with `datasets.<obj>` in the script should fix the problem. `nlp` has been renamed to `datasets` more than a year ago, so please use `datasets` instead to avoid weird issues.",
"Thank You! Was able to solve with the help of this.",
"But I request you to please fix the same in the dataset hub explorer as well...",
"May I ask how to get this dataset?"
] |
1,178,844,616
| 4,000
|
load_dataset error: sndfile library not found
|
closed
| 2022-03-24T01:52:32
| 2022-03-25T17:53:33
| 2022-03-25T17:53:33
|
https://github.com/huggingface/datasets/issues/4000
| null |
i-am-neo
| false
|
[
"Hi @i-am-neo,\r\n\r\nThe audio support is an extra feature of `datasets` and therefore it must be installed as an additional optional dependency:\r\n```shell\r\npip install datasets[audio]\r\n```\r\nAdditionally, for specific MP3 support (which is not the case for AMI dataset, that contains WAV audio files), there is another third-party dependency on `torchaudio`.\r\n\r\nYou have all the information in our docs: https://huggingface.co/docs/datasets/audio_process#installation",
"Thanks @albertvillanova . Unfortunately the error persists after installing ```datasets[audio]```. Can you direct towards a solution?\r\n\r\n```\r\npip3 install datasets[audio]\r\n```\r\n### log\r\nRequirement already satisfied: datasets[audio] in ./.virtualenvs/hubert/lib/python3.7/site-packages (1.18.3)\r\nRequirement already satisfied: numpy>=1.17 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (1.21.5)\r\nRequirement already satisfied: xxhash in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (3.0.0)\r\nRequirement already satisfied: fsspec[http]>=2021.05.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (2022.2.0)\r\nRequirement already satisfied: dill in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (0.3.4)\r\nRequirement already satisfied: pandas in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (1.3.5)\r\nRequirement already satisfied: huggingface-hub<1.0.0,>=0.1.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (0.4.0)\r\nRequirement already satisfied: packaging in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (21.3)\r\nRequirement already satisfied: multiprocess in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (0.70.12.2)\r\nRequirement already satisfied: pyarrow!=4.0.0,>=3.0.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (7.0.0)\r\nRequirement already satisfied: tqdm>=4.62.1 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (4.63.1)\r\nRequirement already satisfied: aiohttp in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (3.8.1)\r\nRequirement already satisfied: importlib-metadata in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (4.11.3)\r\nRequirement already satisfied: requests>=2.19.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (2.27.1)\r\nRequirement already satisfied: librosa in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (0.9.1)\r\nRequirement already satisfied: pyyaml in ./.virtualenvs/hubert/lib/python3.7/site-packages (from huggingface-hub<1.0.0,>=0.1.0->datasets[audio]) (6.0)\r\nRequirement already satisfied: typing-extensions>=3.7.4.3 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from huggingface-hub<1.0.0,>=0.1.0->datasets[audio]) (4.1.1)\r\nRequirement already satisfied: filelock in ./.virtualenvs/hubert/lib/python3.7/site-packages (from huggingface-hub<1.0.0,>=0.1.0->datasets[audio]) (3.6.0)\r\nRequirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from packaging->datasets[audio]) (3.0.7)\r\nRequirement already satisfied: idna<4,>=2.5 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->datasets[audio]) (3.3)\r\nRequirement already satisfied: certifi>=2017.4.17 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->datasets[audio]) (2021.10.8)\r\nRequirement already satisfied: charset-normalizer~=2.0.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->datasets[audio]) (2.0.12)\r\nRequirement already satisfied: urllib3<1.27,>=1.21.1 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->datasets[audio]) (1.26.9)\r\nRequirement already satisfied: attrs>=17.3.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from 
aiohttp->datasets[audio]) (21.4.0)\r\nRequirement already satisfied: frozenlist>=1.1.1 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from aiohttp->datasets[audio]) (1.3.0)\r\nRequirement already satisfied: aiosignal>=1.1.2 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from aiohttp->datasets[audio]) (1.2.0)\r\nRequirement already satisfied: yarl<2.0,>=1.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from aiohttp->datasets[audio]) (1.7.2)\r\nRequirement already satisfied: asynctest==0.13.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from aiohttp->datasets[audio]) (0.13.0)\r\nRequirement already satisfied: multidict<7.0,>=4.5 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from aiohttp->datasets[audio]) (6.0.2)\r\nRequirement already satisfied: async-timeout<5.0,>=4.0.0a3 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from aiohttp->datasets[audio]) (4.0.2)\r\nRequirement already satisfied: zipp>=0.5 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from importlib-metadata->datasets[audio]) (3.7.0)\r\nRequirement already satisfied: decorator>=4.0.10 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (5.1.1)\r\nRequirement already satisfied: soundfile>=0.10.2 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (0.10.3.post1)\r\nRequirement already satisfied: numba>=0.45.1 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (0.55.1)\r\nRequirement already satisfied: pooch>=1.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (1.6.0)\r\nRequirement already satisfied: resampy>=0.2.2 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (0.2.2)\r\nRequirement already satisfied: audioread>=2.1.5 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (2.1.9)\r\nRequirement already satisfied: joblib>=0.14 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (1.1.0)\r\nRequirement already satisfied: scipy>=1.2.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (1.7.3)\r\nRequirement already satisfied: scikit-learn>=0.19.1 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (1.0.2)\r\nRequirement already satisfied: python-dateutil>=2.7.3 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from pandas->datasets[audio]) (2.8.2)\r\nRequirement already satisfied: pytz>=2017.3 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from pandas->datasets[audio]) (2022.1)\r\nRequirement already satisfied: setuptools in ./.virtualenvs/hubert/lib/python3.7/site-packages (from numba>=0.45.1->librosa->datasets[audio]) (60.10.0)\r\nRequirement already satisfied: llvmlite<0.39,>=0.38.0rc1 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from numba>=0.45.1->librosa->datasets[audio]) (0.38.0)\r\nRequirement already satisfied: appdirs>=1.3.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from pooch>=1.0->librosa->datasets[audio]) (1.4.4)\r\nRequirement already satisfied: six>=1.5 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from python-dateutil>=2.7.3->pandas->datasets[audio]) (1.16.0)\r\nRequirement already satisfied: threadpoolctl>=2.0.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from scikit-learn>=0.19.1->librosa->datasets[audio]) (3.1.0)\r\nRequirement already satisfied: cffi>=1.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from 
soundfile>=0.10.2->librosa->datasets[audio]) (1.15.0)\r\nRequirement already satisfied: pycparser in ./.virtualenvs/hubert/lib/python3.7/site-packages (from cffi>=1.0->soundfile>=0.10.2->librosa->datasets[audio]) (2.21)\r\n\r\n### reload\r\n```\r\npython3 -c \"from datasets import load_dataset; print(load_dataset('ami', 'headset-single', split='validation')[0])\"\r\n```\r\n\r\n### log\r\nDownloading and preparing dataset ami/headset-single (download: 10.71 GiB, generated: 49.99 MiB, post-processed: Unknown size, total: 10.76 GiB) to /home/neo/.cache/huggingface/datasets/ami/headset-single/1.6.2/2accdf810f7c0585f78f4bcfa47684fbb980e35d29ecf126e6906dbecb872d9e...\r\nAMI corpus cannot be downloaded using multi-processing. Setting number of downloaded processes `num_proc` to 1. \r\n100%|██████████████████████████████████████████████████████| 136/136 [00:00<00:00, 33542.59it/s]\r\n100%|█████████████████████████████████████████████████████████| 136/136 [00:06<00:00, 22.28it/s]\r\n100%|████████████████████████████████████████████████████████| 18/18 [00:00<00:00, 21558.39it/s]\r\n100%|█████████████████████████████████████████████████████████| 18/18 [00:00<00:00, 2996.41it/s]\r\n100%|████████████████████████████████████████████████████████| 16/16 [00:00<00:00, 23431.87it/s]\r\n100%|█████████████████████████████████████████████████████████| 16/16 [00:00<00:00, 2697.52it/s]\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/load.py\", line 1707, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py\", line 595, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py\", line 690, in _download_and_prepare\r\n ) from None\r\nOSError: Cannot find data file. 
\r\nOriginal error:\r\nsndfile library not found\r\n\r\n### just to double-check as per your docs\r\n```\r\npip3 install librosa torchaudio\r\n```\r\n\r\n### logs\r\nRequirement already satisfied: librosa in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (0.9.1)\r\nRequirement already satisfied: torchaudio in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (0.11.0+cu113)\r\nRequirement already satisfied: audioread>=2.1.5 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (2.1.9)\r\nRequirement already satisfied: joblib>=0.14 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (1.1.0)\r\nRequirement already satisfied: packaging>=20.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (21.3)\r\nRequirement already satisfied: scikit-learn>=0.19.1 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (1.0.2)\r\nRequirement already satisfied: scipy>=1.2.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (1.7.3)\r\nRequirement already satisfied: decorator>=4.0.10 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (5.1.1)\r\nRequirement already satisfied: resampy>=0.2.2 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (0.2.2)\r\nRequirement already satisfied: pooch>=1.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (1.6.0)\r\nRequirement already satisfied: numpy>=1.17.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (1.21.5)\r\nRequirement already satisfied: soundfile>=0.10.2 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (0.10.3.post1)\r\nRequirement already satisfied: numba>=0.45.1 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (0.55.1)\r\nRequirement already satisfied: torch==1.11.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from torchaudio) (1.11.0+cu113)\r\nRequirement already satisfied: typing-extensions in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from torch==1.11.0->torchaudio) (4.1.1)\r\nRequirement already satisfied: setuptools in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from numba>=0.45.1->librosa) (60.10.0)\r\nRequirement already satisfied: llvmlite<0.39,>=0.38.0rc1 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from numba>=0.45.1->librosa) (0.38.0)\r\nRequirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from packaging>=20.0->librosa) (3.0.7)\r\nRequirement already satisfied: requests>=2.19.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from pooch>=1.0->librosa) (2.27.1)\r\nRequirement already satisfied: appdirs>=1.3.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from pooch>=1.0->librosa) (1.4.4)\r\nRequirement already satisfied: six>=1.3 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from resampy>=0.2.2->librosa) (1.16.0)\r\nRequirement already satisfied: threadpoolctl>=2.0.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from scikit-learn>=0.19.1->librosa) (3.1.0)\r\nRequirement already satisfied: cffi>=1.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from soundfile>=0.10.2->librosa) (1.15.0)\r\nRequirement already satisfied: pycparser in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from cffi>=1.0->soundfile>=0.10.2->librosa) (2.21)\r\nRequirement 
already satisfied: charset-normalizer~=2.0.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->pooch>=1.0->librosa) (2.0.12)\r\nRequirement already satisfied: certifi>=2017.4.17 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->pooch>=1.0->librosa) (2021.10.8)\r\nRequirement already satisfied: urllib3<1.27,>=1.21.1 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->pooch>=1.0->librosa) (1.26.9)\r\nRequirement already satisfied: idna<4,>=2.5 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->pooch>=1.0->librosa) (3.3)\r\n\r\n### try loading again\r\n```\r\npython3 -c \"from datasets import load_dataset; print(load_dataset('ami', 'headset-single', split='validation')[0])\"\r\n```\r\n\r\n### same error\r\nDownloading and preparing dataset ami/headset-single (download: 10.71 GiB, generated: 49.99 MiB, post-processed: Unknown size, total: 10.76 GiB) to /home/neo/.cache/huggingface/datasets/ami/headset-single/1.6.2/2accdf810f7c0585f78f4bcfa47684fbb980e35d29ecf126e6906dbecb872d9e...\r\nAMI corpus cannot be downloaded using multi-processing. Setting number of downloaded processes `num_proc` to 1. \r\n100%|██████████████████████████████████████████████████████| 136/136 [00:00<00:00, 33542.59it/s]\r\n100%|█████████████████████████████████████████████████████████| 136/136 [00:06<00:00, 22.28it/s]\r\n100%|████████████████████████████████████████████████████████| 18/18 [00:00<00:00, 21558.39it/s]\r\n100%|█████████████████████████████████████████████████████████| 18/18 [00:00<00:00, 2996.41it/s]\r\n100%|████████████████████████████████████████████████████████| 16/16 [00:00<00:00, 23431.87it/s]\r\n100%|█████████████████████████████████████████████████████████| 16/16 [00:00<00:00, 2697.52it/s]\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/load.py\", line 1707, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py\", line 595, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py\", line 690, in _download_and_prepare\r\n ) from None\r\nOSError: Cannot find data file. \r\nOriginal error:\r\nsndfile library not found\r\n",
"Hi @i-am-neo, thanks again for your detailed report.\r\n\r\nOur `datasets` library support for audio relies on a third-party Python library called `librosa`, which is installed when you do:\r\n```shell\r\npip install datasets[audio]\r\n```\r\n\r\nHowever, the `librosa` library has a dependency on `soundfile`; and `soundfile` depends on a non-Python package called `sndfile`. \r\n\r\nOn Linux (which is your case), this must be installed manually using your operating system package manager, for example:\r\n```shell\r\nsudo apt-get install libsndfile1\r\n```\r\n\r\nPlease, let me know if this works and if so, I will update our docs with all this information.",
"@albertvillanova thanks, all good. The key is ```libsndfile1``` - it may help others to note that in your docs. I had installed libsndfile previously."
] |
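
The resolution above comes down to a missing system library: the `Audio` feature relies on `soundfile`, which in turn needs the native `libsndfile` (package `libsndfile1` on Debian/Ubuntu) in addition to `pip install datasets[audio]`. A small sanity-check sketch for that situation, assuming a Linux machine:

```python
# Quick sanity check for the "sndfile library not found" error discussed above:
# importing soundfile raises OSError when the native libsndfile is missing.
try:
    import soundfile as sf
    print("libsndfile available, version:", sf.__libsndfile_version__)
except OSError as err:
    raise SystemExit(
        "Native sndfile library missing; on Debian/Ubuntu run "
        "`sudo apt-get install libsndfile1`, then retry load_dataset('ami', ...)."
    ) from err
```
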
1,178,685,280
| 3,999
|
Docs maintenance
|
closed
| 2022-03-23T21:27:33
| 2022-03-30T17:01:45
| 2022-03-30T16:56:38
|
https://github.com/huggingface/datasets/pull/3999
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3999",
"html_url": "https://github.com/huggingface/datasets/pull/3999",
"diff_url": "https://github.com/huggingface/datasets/pull/3999.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3999.patch",
"merged_at": "2022-03-30T16:56:38"
}
|
stevhliu
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,178,631,986
| 3,998
|
Fix Audio.encode_example() when writing an array
|
closed
| 2022-03-23T20:32:13
| 2022-03-29T14:21:44
| 2022-03-29T14:16:13
|
https://github.com/huggingface/datasets/pull/3998
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3998",
"html_url": "https://github.com/huggingface/datasets/pull/3998",
"diff_url": "https://github.com/huggingface/datasets/pull/3998.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3998.patch",
"merged_at": "2022-03-29T14:16:13"
}
|
polinaeterna
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@albertvillanova do you think [this line](https://github.com/huggingface/datasets/pull/3998/files#diff-58e348f6e4deaa5f3119e420a5d48ebb82875a78c28628831748fb54f59b2c78R67) is enough? that's why we missed this bug, we didn't check this case"
] |
1,178,566,568
| 3,997
|
Sync Features dictionaries
|
closed
| 2022-03-23T19:23:51
| 2022-04-13T15:52:27
| 2022-04-13T15:46:19
|
https://github.com/huggingface/datasets/pull/3997
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3997",
"html_url": "https://github.com/huggingface/datasets/pull/3997",
"diff_url": "https://github.com/huggingface/datasets/pull/3997.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3997.patch",
"merged_at": "2022-04-13T15:46:19"
}
|
mariosasko
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,178,415,905
| 3,996
|
Audio.encode_example() throws an error when writing example from array
|
closed
| 2022-03-23T17:11:47
| 2022-03-29T14:16:13
| 2022-03-29T14:16:13
|
https://github.com/huggingface/datasets/issues/3996
| null |
polinaeterna
| false
|
[
"Good catch ! Yes I think passing `format=\"wav\"` is the right thing to do",
"Thanks @polinaeterna for reporting this issue.\r\n\r\nIn relation to the decoding of MP3 audio files without torchaudio, I remember Patrick made some tests and these had quite bad performance. That is why he proposed to support MP3 files only with torchaudio. But yes, nice to give an alternative to non-torchaudio users (with a big warning on performance).",
"> I remember Patrick made some tests and these had quite bad performance. That is why he proposed to support MP3 files only with torchaudio.\r\n\r\nYeah, I know, but as far as I understand, some users just categorically don't want to have torchaudio in their environment. Anyway, it's just a more or less random example, they can use any library they like following the same logic (I'm just not a big expert in decoding utils so if you can give me some presentation / resources about that I would really appreciate it 🤗)"
] |
1,178,232,623
| 3,995
|
Close `PIL.Image` file handler in `Image.decode_example`
|
closed
| 2022-03-23T14:51:48
| 2022-03-23T18:24:52
| 2022-03-23T18:19:27
|
https://github.com/huggingface/datasets/pull/3995
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3995",
"html_url": "https://github.com/huggingface/datasets/pull/3995",
"diff_url": "https://github.com/huggingface/datasets/pull/3995.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3995.patch",
"merged_at": "2022-03-23T18:19:26"
}
|
mariosasko
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,178,211,138
| 3,994
|
Change audio column from string path to Audio feature in ASR task
|
closed
| 2022-03-23T14:34:52
| 2022-03-23T15:43:43
| 2022-03-23T15:43:43
|
https://github.com/huggingface/datasets/pull/3994
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3994",
"html_url": "https://github.com/huggingface/datasets/pull/3994",
"diff_url": "https://github.com/huggingface/datasets/pull/3994.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3994.patch",
"merged_at": null
}
|
polinaeterna
| true
|
[] |
1,178,201,495
| 3,993
|
Streaming dataset + interleave + DataLoader hangs with multiple workers
|
open
| 2022-03-23T14:27:29
| 2023-02-28T14:14:24
| null |
https://github.com/huggingface/datasets/issues/3993
| null |
jpilaul
| false
|
[
"Same thing occurs when streaming files loaded from disk.",
"Hi ! Thanks for reporting, could this be related to https://github.com/huggingface/datasets/issues/3950 ?\r\n\r\nCurrently streaming datasets only works in single process, but we're working on having in work in distributed setups as well :) (EDIT: done)",
"Hi, thanks for your reply. It seems related :)",
"+1",
"Please update `datasets` if you're having this issue. What version are you using ?"
] |
1,177,946,153
| 3,992
|
Image column is not decoded in map when using with with_transform
|
closed
| 2022-03-23T10:51:13
| 2022-12-13T16:59:06
| 2022-12-13T16:59:06
|
https://github.com/huggingface/datasets/issues/3992
| null |
phihung
| false
|
[
"Hi! This behavior stems from this line: https://github.com/huggingface/datasets/blob/799b817d97590ddc97cbd38d07469403e030de8c/src/datasets/arrow_dataset.py#L1919\r\nBasically, the `Image`/`Audio` columns are decoded only if the `format_type` attribute is `None` (`set_format`/`with_format` and `set_transform`/`with_transform` assign a non-`None` value to it) and the `input_columns` param is not specified (see https://github.com/huggingface/datasets/issues/3756). We will remove these limitations soon.\r\n\r\n\r\n\r\n"
] |
1,177,362,901
| 3,991
|
Add Lung Image Database Consortium image collection (LIDC-IDRI) dataset
|
open
| 2022-03-22T22:16:05
| 2022-03-23T12:57:16
| null |
https://github.com/huggingface/datasets/issues/3991
| null |
omarespejel
| false
|
[] |
1,176,976,247
| 3,990
|
Improve AutomaticSpeechRecognition task template
|
closed
| 2022-03-22T15:41:08
| 2022-03-23T17:12:40
| 2022-03-23T17:12:40
|
https://github.com/huggingface/datasets/issues/3990
| null |
polinaeterna
| false
|
[
"There is an open PR to do that: #3364. I just haven't had time to finish it... ",
"> There is an open PR to do that: #3364. I just haven't had time to finish it...\r\n\r\n😬 thanks..."
] |
1,176,955,078
| 3,989
|
Remove old wikipedia leftovers
|
closed
| 2022-03-22T15:25:46
| 2022-03-31T15:35:26
| 2022-03-31T15:30:16
|
https://github.com/huggingface/datasets/pull/3989
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3989",
"html_url": "https://github.com/huggingface/datasets/pull/3989",
"diff_url": "https://github.com/huggingface/datasets/pull/3989.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3989.patch",
"merged_at": "2022-03-31T15:30:16"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> This makes me think we shouldn't advise the use of load_dataset in dataset scripts, since it doesn't guarantee that the cache will work as expected (the cache directory is not set correctly, and the required disk space for downloaded files is not recorded)\r\n\r\n@lhoestq, do you think it could be a good idea to add a comment in this script WARNING that using load_dataset in a script is not good practice and that people should avoid using that script as a template to create other scripts? ",
"good idea ! :)"
] |
1,176,858,540
| 3,988
|
More consistent references in docs
|
closed
| 2022-03-22T14:18:41
| 2022-03-22T17:06:32
| 2022-03-22T16:50:44
|
https://github.com/huggingface/datasets/pull/3988
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3988",
"html_url": "https://github.com/huggingface/datasets/pull/3988",
"diff_url": "https://github.com/huggingface/datasets/pull/3988.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3988.patch",
"merged_at": "2022-03-22T16:50:43"
}
|
mariosasko
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Looks good, thanks for working on this!"
] |
1,176,481,659
| 3,987
|
Fix Faiss custom_index device
|
closed
| 2022-03-22T09:11:24
| 2022-03-24T12:18:59
| 2022-03-24T12:14:12
|
https://github.com/huggingface/datasets/pull/3987
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3987",
"html_url": "https://github.com/huggingface/datasets/pull/3987",
"diff_url": "https://github.com/huggingface/datasets/pull/3987.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3987.patch",
"merged_at": "2022-03-24T12:14:12"
}
|
albertvillanova
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,176,429,565
| 3,986
|
Dataset loads indefinitely after modifying default cache path (~/.cache/huggingface)
|
open
| 2022-03-22T08:23:21
| 2023-03-06T16:55:04
| null |
https://github.com/huggingface/datasets/issues/3986
| null |
kelvinAI
| false
|
[
"Hi ! I didn't managed to reproduce the issue. When you kill the process, is there any stacktrace that shows at what point in the code python is hanging ?",
"Hi @lhoestq , I've traced the issue back to file locking. It's similar to this thread, using Lustre filesystem as well. https://github.com/huggingface/datasets/issues/329 . In this case the user was able to modify and add -o flock option while mounting and it solved the problem. \r\nHowever in other cases such as mine, we do not have the permissions to modify the commands while mounting. I'm still trying to figure out a workaround. Any ideas how can we use a mounted Lustre filesystem with no flock option?\r\n",
"Hi @kelvinAI , I've had this issue on our institution's system which uses Lustre (in addition to our compute nodes being siloed off from external network access). The workaround I made for downloading/loading datasets was to set the `$HFHOME` environment variable to a location on the node's local storage (SSD), effectively a location that gets cleared regularly and sometimes gets used for temporary or cached files which is pretty common, e.g. \"scratch\" storage. Maybe your sysadmins, if you have them, could point you to subdirectories on a node that aren't linked to the Lustre filesystem. After downloading to scratch I found that the transformers, modules, and metrics cached folders were fine to move to my user drives on the Lustre filesystem but cached datasets that had fingerprints still had some issues with filelock, so it would help to use the function `my_dataset.save_to_disk('path/on/lustre_fs')` and static class function `Dataset.load_from_disk('path/on/lustre_fs')`. In rough steps:\r\n\r\n1. Initially download to scratch storage with `ds = datasets.load_dataset(dataset_name)`\r\n2. Call `ds.save_to_disk(my_path_on_lustre)` with a path in your user space on the Lustre filesystem\r\n3. Load datasets with `from datasets import Dataset; new_ds = Dataset.load_from_disk(my_path_on_lustre)`\r\n\r\nObviously this hinges on there existing scratch storage on the nodes you're using. Fingers crossed.",
"Hi @jpmcd , thanks for sharing your experience. For my case, the Lustre filesystem (with more storage space) is the scratch storage like the one you've mentioned. We have a local storage for each user but unfortunately there's not enough space in it to 'cache' huge datasets, hence that is why I tried changing HF_HOME to point to the scratch disk with more space and encountered the flock issue. Unfortunately I'm not aware of any viable solution to this for now so I simply fall back to using torch dataset. ",
"@jpmcd your comment saved me from pulling my hair out in frustration. Setting `HF_HOME` to a directory that's not on Lustre works like a charm. ✨ "
] |
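A condensed sketch of the scratch-storage workaround described in the comments above; the paths and the `squad` dataset are illustrative placeholders, not part of the issue.

```python
import os

# Point the Hugging Face cache at node-local scratch storage instead of Lustre.
os.environ["HF_HOME"] = "/local_scratch/hf_home"  # must be set before importing datasets

from datasets import load_dataset, load_from_disk

# 1. Download and prepare the dataset where file locking works.
ds = load_dataset("squad", split="train")

# 2. Persist the prepared Arrow files to the Lustre filesystem.
ds.save_to_disk("/lustre/user/squad_train")

# 3. Later jobs reload it from Lustre without re-downloading.
ds = load_from_disk("/lustre/user/squad_train")
```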
1,175,982,937
| 3,985
|
[image feature] Too many files open error when image feature is returned as a path
|
closed
| 2022-03-21T21:54:05
| 2022-03-23T18:19:27
| 2022-03-23T18:19:27
|
https://github.com/huggingface/datasets/issues/3985
| null |
apsdehal
| false
|
[] |
1,175,822,117
| 3,984
|
Local and automatic tests fail
|
closed
| 2022-03-21T19:07:37
| 2023-07-25T15:18:40
| 2023-07-25T15:18:40
|
https://github.com/huggingface/datasets/issues/3984
| null |
MarkusSagen
| false
|
[
"Hi ! To be able to run the tests, you need to install all the test dependencies and additional ones with\r\n```\r\npip install -e .[tests]\r\npip install -r additional-tests-requirements.txt --no-deps\r\n```\r\n\r\nIn particular, you probably need to `sacrebleu`. It looks like it wasn't able to instantiate `sacrebleu.TER` properly."
] |
1,175,759,412
| 3,983
|
Infinitely attempting lock
|
closed
| 2022-03-21T18:11:57
| 2024-05-09T08:24:34
| 2022-05-06T16:12:18
|
https://github.com/huggingface/datasets/issues/3983
| null |
jyrr
| false
|
[
"Hi ! Thanks for reporting. We're using filelock` as our locking mechanism.\r\n\r\nCan you try deleting the .lock file mentioned in the logs and try again ? Make sure that no other process is generating the `cnn_dailymail` dataset.\r\n\r\nIf it doesn't work, could you try to set up a lock using the latest version of filelock` and see if it works ?\r\n\r\n```\r\npip install filelock\r\n```\r\nhere is a code example from the `filelock` documentation that you can try:\r\n\r\n```python\r\nfrom filelock import FileLock\r\n\r\nlock = FileLock(\"test.txt.lock\")\r\nwith lock:\r\n with open(\"test.txt\", \"a\") as f:\r\n f.write(\"foo\")\r\n```",
"I ran into this problem on my school server as well? Any update on how we can solve it? Thanks! ",
"Have you tried running the code above to check if FileLock works in your setup ? You may also be interested in checking the https://github.com/tox-dev/filelock repository for issues",
"Can you try using a different cache directory ? Maybe there are permissions issues with the default one.\r\n\r\nYou can do so by passing `cache_dir=...` to load_dataset()"
] |
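To summarize the suggestions in the comments above as code, a hedged sketch: clear a stale `.lock` file (only when no other process is preparing the dataset) and retry with a different cache directory. The paths are hypothetical.

```python
import glob
import os

from datasets import load_dataset

# Remove stale lock files left behind by a killed process (assumes the default cache location).
for lock_path in glob.glob(os.path.expanduser("~/.cache/huggingface/datasets/**/*.lock"), recursive=True):
    os.remove(lock_path)

# Retry with a different cache directory in case the default one has permission issues.
ds = load_dataset("cnn_dailymail", "3.0.0", split="train", cache_dir="/tmp/hf_cache")
```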
1,175,478,099
| 3,982
|
Exclude Google Drive tests of the CI
|
closed
| 2022-03-21T14:34:16
| 2022-03-31T16:38:02
| 2022-03-21T14:51:35
|
https://github.com/huggingface/datasets/pull/3982
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3982",
"html_url": "https://github.com/huggingface/datasets/pull/3982",
"diff_url": "https://github.com/huggingface/datasets/pull/3982.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3982.patch",
"merged_at": "2022-03-21T14:51:35"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I was thinking exactly the same: running unit tests that request continuously a third-party API is not a good idea."
] |
1,175,423,517
| 3,981
|
Add TER metric card
|
closed
| 2022-03-21T13:54:36
| 2022-03-29T13:57:11
| 2022-03-29T13:51:40
|
https://github.com/huggingface/datasets/pull/3981
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3981",
"html_url": "https://github.com/huggingface/datasets/pull/3981",
"diff_url": "https://github.com/huggingface/datasets/pull/3981.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3981.patch",
"merged_at": "2022-03-29T13:51:40"
}
|
emibaylor
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,175,412,905
| 3,980
|
Add tip on how to speed up loading with ImageFolder
|
closed
| 2022-03-21T13:45:58
| 2022-03-22T13:39:45
| 2022-03-22T13:34:56
|
https://github.com/huggingface/datasets/pull/3980
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3980",
"html_url": "https://github.com/huggingface/datasets/pull/3980",
"diff_url": "https://github.com/huggingface/datasets/pull/3980.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3980.patch",
"merged_at": "2022-03-22T13:34:56"
}
|
mariosasko
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for adding that tip! 👍 \r\n\r\nFor the docs syntax, it might be better if we hide the package name/full path to the class or function and only show the name of it. I think it's easier for users to read the function name (eg,`cast_column`) instead of the full path which can be a bit lengthy for some functions like `datasets.IterableDataset.remove_columns` (and if we like this idea, we can align the rest of the docs on it). ",
"> For the docs syntax, it might be better if we hide the package name/full path to the class or function and only show the name of it. I think it's easier for users to read the function name (eg,cast_column) instead of the full path which can be a bit lengthy for some functions like datasets.IterableDataset.remove_columns (and if we like this idea, we can align the rest of the docs on it).\r\n\r\nThat's also OK, as long as we are consistent.\r\n\r\n@lhoestq @albertvillanova @polinaeterna Which one of these two styles do you prefer?",
"Agree on hiding `datasets` name. Not sure about hiding class name as it's anyway not visible for users if they use `Dataset.cast_column` or `IterableDataset.cast_column` when working with their datasets. But I agree that the most important thing is to be consistent :)",
"Good points! :)\r\n\r\nI think it'll be good to show the class name since some functions have different parameters. For example, if users click on `IterableDataset.map` and then `Dataset.map`, they'll see different parameters and have to figure out why (which isn't too difficult I guess lol). But showing the class name avoids any confusion upfront. "
] |
1,175,258,969
| 3,979
|
Fix google drive streaming for small files
|
closed
| 2022-03-21T11:38:46
| 2023-09-24T09:55:19
| 2022-03-21T14:25:58
|
https://github.com/huggingface/datasets/pull/3979
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3979",
"html_url": "https://github.com/huggingface/datasets/pull/3979",
"diff_url": "https://github.com/huggingface/datasets/pull/3979.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3979.patch",
"merged_at": null
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Actually the CI fails because of this\r\n\r\n\r\nIt looks like we can't have a proper way to test google drive in the CI right now. Though it seems to work locally if you're not banned. I think I'll just disable those tests for now",
"this fix will not be included?",
"No we can't do anything except stop using google drive when possible"
] |
1,175,226,456
| 3,978
|
I can't view HFcallback dataset for ASR Space
|
open
| 2022-03-21T11:07:49
| 2023-09-25T12:19:53
| null |
https://github.com/huggingface/datasets/issues/3978
| null |
kingabzpro
| false
|
[
"the dataset viewer is working on this dataset. I imagine the issue is that we would expect to be able to listen to the audio files in the `Please Record Your Voice file` column, right?\r\n\r\nmaybe @lhoestq or @albertvillanova could help\r\n\r\n<img width=\"1019\" alt=\"Capture d’écran 2022-03-24 à 17 36 20\" src=\"https://user-images.githubusercontent.com/1676121/159966006-57dcf8f7-b65f-4200-ac8c-66859318a8bb.png\">\r\n",
"The structure of the dataset is not supported. Only the CSV file is parsed and the audio files are ignored.\r\n\r\nWe're working on supporting audio datasets with a specific structure in #3963 ",
"Got it.",
"Current error:\r\n\r\n```\r\nError code: StreamingRowsError\r\nException: LibsndfileError\r\nMessage: Error opening <File-like object HfFileSystem, datasets/kingabzpro/Urdu-ASR-flags@6a8878cfe3a41343fa86ec8b4254209fe56a0f0d/Please Record Your Voice/0.wav>: Format not recognised.\r\nTraceback: Traceback (most recent call last):\r\n File \"/src/services/worker/src/worker/utils.py\", line 263, in get_rows_or_raise\r\n return get_rows(\r\n File \"/src/services/worker/src/worker/utils.py\", line 204, in decorator\r\n return func(*args, **kwargs)\r\n File \"/src/services/worker/src/worker/utils.py\", line 241, in get_rows\r\n rows_plus_one = list(itertools.islice(ds, rows_max_number + 1))\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 1357, in __iter__\r\n example = _apply_feature_types_on_example(\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 1051, in _apply_feature_types_on_example\r\n decoded_example = features.decode_example(encoded_example, token_per_repo_id=token_per_repo_id)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py\", line 1902, in decode_example\r\n return {\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py\", line 1903, in <dictcomp>\r\n column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py\", line 1325, in decode_nested_example\r\n return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/audio.py\", line 187, in decode_example\r\n array, sampling_rate = sf.read(f)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/soundfile.py\", line 285, in read\r\n with SoundFile(file, 'r', samplerate, channels,\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/soundfile.py\", line 658, in __init__\r\n self._file = self._open(file, mode_int, closefd)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/soundfile.py\", line 1216, in _open\r\n raise LibsndfileError(err, prefix=\"Error opening {0!r}: \".format(self.name))\r\n soundfile.LibsndfileError: Error opening <File-like object HfFileSystem, datasets/kingabzpro/Urdu-ASR-flags@6a8878cfe3a41343fa86ec8b4254209fe56a0f0d/Please Record Your Voice/0.wav>: Format not recognised.\r\n```\r\n\r\nMaybe switch to a discussion here? https://huggingface.co/datasets/kingabzpro/Urdu-ASR-flags/discussions. cc @albertvillanova "
] |
1,175,049,927
| 3,977
|
Adapt `docs/README.md` for datasets
|
closed
| 2022-03-21T08:26:49
| 2023-02-27T10:32:37
| 2023-02-27T10:32:37
|
https://github.com/huggingface/datasets/issues/3977
| null |
qqaatw
| false
|
[
"Thanks for reporting @qqaatw.\r\n\r\nYes, we should definitely adapt that file for `datasets`. "
] |
1,175,043,780
| 3,976
|
Fix main classes reference in docs
|
closed
| 2022-03-21T08:19:46
| 2022-04-12T14:19:39
| 2022-04-12T14:19:38
|
https://github.com/huggingface/datasets/pull/3976
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3976",
"html_url": "https://github.com/huggingface/datasets/pull/3976",
"diff_url": "https://github.com/huggingface/datasets/pull/3976.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3976.patch",
"merged_at": null
}
|
qqaatw
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3976). All of your documentation changes will be reflected on that endpoint.",
"Not sure why some section titles end with `[[datasets.xxx]]`, like this: https://moon-ci-docs.huggingface.co/docs/datasets/pr_3976/en/package_reference/main_classes#datasetdict[[datasets.datasetdict]]",
"Thanks ! I think this has been fixed already in https://github.com/huggingface/datasets/pull/3925 though\r\n\r\nI'm closing this one then if it's fine for you"
] |
1,174,678,942
| 3,975
|
Update many missing tags to dataset README's
|
closed
| 2022-03-20T20:42:27
| 2022-03-21T18:39:52
| 2022-03-21T18:39:52
|
https://github.com/huggingface/datasets/pull/3975
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3975",
"html_url": "https://github.com/huggingface/datasets/pull/3975",
"diff_url": "https://github.com/huggingface/datasets/pull/3975.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3975.patch",
"merged_at": null
}
|
MarkusSagen
| true
|
[] |
1,174,485,044
| 3,974
|
Add XFUN dataset
|
closed
| 2022-03-20T09:24:54
| 2022-10-03T09:38:16
| 2022-10-03T09:36:22
|
https://github.com/huggingface/datasets/pull/3974
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3974",
"html_url": "https://github.com/huggingface/datasets/pull/3974",
"diff_url": "https://github.com/huggingface/datasets/pull/3974.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3974.patch",
"merged_at": null
}
|
qqaatw
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Not sure how to generate dummy data.\r\n\r\nThe downloaded file structure is \r\n\r\n- document file paths\r\n - (a json file containing all documents info, document images folder)\r\n - (a json file containing all documents info, document images folder)\r\n - ...",
"Hey @mariosasko, thanks for the review. I'm not sure how to suggest these changes to the owner @ranpox, and I did spend some time to write the model card and hope to get it on the official repo. Is that possible?",
"Since the author is not responding, maybe we can go ahead with this PR ?",
"Go for it!\n\nOn Tue, Apr 12, 2022 at 10:24 AM Quentin Lhoest ***@***.***>\nwrote:\n\n> Since the author is not responding, maybe we can go ahead with this PR ?\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/pull/3974#issuecomment-1096797650>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ATFNL66EVUFWS3P2FOAS7SLVEWBP3ANCNFSM5RFH3MXA>\n> .\n> You are receiving this because you are subscribed to this thread.Message\n> ID: ***@***.***>\n>\n",
"@qqaatw Do you plan to finish this PR? I can give you some pointers and help you with the code if needed.",
"@mariosasko Yes, I'll apply all of the suggestions when I have some time.",
"Thanks for your contribution, @qqaatw.\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you propose this changes there to the original repo. Please, feel free to tell us if you need some help."
] |
1,174,455,431
| 3,973
|
ConnectionError and SSLError
|
closed
| 2022-03-20T06:45:37
| 2022-03-30T08:13:32
| 2022-03-30T08:13:32
|
https://github.com/huggingface/datasets/issues/3973
| null |
yanyu2015
| false
|
[
"Hi ! You can download the `oscar.py` file from this repository at `/datasets/oscar/oscar.py`.\r\n\r\nThen you can load the dataset by passing the local path to `oscar.py` to `load_dataset`:\r\n```python\r\nload_dataset(\"path/to/oscar.py\", \"unshuffled_deduplicated_it\")\r\n```",
"it works,but another error occurs.\r\n```\r\nConnectionError: Couldn't reach https://s3.amazonaws.com/datasets.huggingface.co/oscar/1.0/unshuffled/deduplicated/it/it_sha256.txt (SSLError(MaxRetryError(\"HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/oscar/1.0/unshuffled/deduplicated/it/it_sha256.txt (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1129)')))\")))\r\n```\r\nI can access `https://s3.amazonaws.com/datasets.huggingface.co/oscar/1.0/unshuffled/deduplicated/it/it_sha256.txt` and `https://aws.amazon.com/cn/s3/` directly, so why it reports a SSLError, should I need tomodify the host file?",
"Could it be an issue with your python environment or your version of OpenSSL ?",
"you are so wise!\r\nit report [ConnectionError] in python 3.9.7\r\nand works well in python 3.8.12\r\n\r\nI need you help again: how can I specify the path for download files?\r\nthe data is too large and my C hardware is not enough",
"Cool ! And you can specify the path for download files with to the `cache_dir` parameter:\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('oscar', 'unshuffled_deduplicated_it', cache_dir='path/to/directory')",
"It takes me some days to download data completely, Despise sometimes it occurs again, change py version is feasible way to avoid this ConnectionEror.\r\nparameter `cache_dir` works well, thanks for your kindness again!"
] |
1,174,402,033
| 3,972
|
Adding Roman Urdu Hate Speech dataset
|
closed
| 2022-03-20T00:19:26
| 2022-03-25T15:56:19
| 2022-03-25T15:51:20
|
https://github.com/huggingface/datasets/pull/3972
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3972",
"html_url": "https://github.com/huggingface/datasets/pull/3972",
"diff_url": "https://github.com/huggingface/datasets/pull/3972.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3972.patch",
"merged_at": "2022-03-25T15:51:20"
}
|
bp-high
| true
|
[
"@lhoestq can you review when you have some time? Also were the previous CI fails due to the Google Drive tests which were excluded by #3982 ?",
"> were the previous CI fails due to the Google Drive tests which were excluded by https://github.com/huggingface/datasets/pull/3982 ?\r\n\r\nYes exactly, merging `master` into your branch fixed the CI ;)",
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,174,329,442
| 3,971
|
Applied index-filters on scores in search.py.
|
closed
| 2022-03-19T18:43:42
| 2022-04-12T14:48:23
| 2022-04-12T14:41:58
|
https://github.com/huggingface/datasets/pull/3971
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3971",
"html_url": "https://github.com/huggingface/datasets/pull/3971",
"diff_url": "https://github.com/huggingface/datasets/pull/3971.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3971.patch",
"merged_at": "2022-04-12T14:41:58"
}
|
vishalsrao
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,174,327,367
| 3,970
|
Apply index-filters on scores in get_nearest_examples and get_nearest…
|
closed
| 2022-03-19T18:32:31
| 2022-03-19T18:38:12
| 2022-03-19T18:38:12
|
https://github.com/huggingface/datasets/pull/3970
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3970",
"html_url": "https://github.com/huggingface/datasets/pull/3970",
"diff_url": "https://github.com/huggingface/datasets/pull/3970.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3970.patch",
"merged_at": null
}
|
vishalsrao
| true
|
[] |
1,174,273,824
| 3,969
|
Cannot preview cnn_dailymail dataset
|
closed
| 2022-03-19T14:08:57
| 2022-04-20T15:52:49
| 2022-04-20T15:52:49
|
https://github.com/huggingface/datasets/issues/3969
| null |
hasan-besh
| false
|
[
"I guess the cache got corrupted due to a previous issue with Google Drive service.\r\n\r\nThe cache should be regenerated, e.g. by passing `download_mode=\"force_redownload\"`.\r\n\r\nCC: @severo ",
"Note that the dataset preview uses its own cache, not `datasets`' cache. So `download_mode=\"force_redownload\"` doesn't help. But yes indeed the cache must be refreshed.\r\n\r\nThe CNN Dailymail dataste is currently hosted on Google Drive, which is an unreliable host and we've had many issues with it. Unless we found another most reliable host for the data, we will keep running into issues from time to time.\r\n\r\nAt Hugging Face we're not allowed to host the CNN Dailymail data by ourselves AFAIK",
"Yes @lhoestq, I didn't explain myself well: my previous message was addressed to @severo. ",
"I remove the tag dataset-viewer, since it's more an issue with the hosting on Google Drive",
"Sounds good. I was looking for another host of this dataset but couldn't find any (yet)",
"It seems like the issue is with the streaming mode, not with the hosting:\r\n\r\n```python\r\n>>> import datasets\r\n>>> dataset = datasets.load_dataset('cnn_dailymail', name=\"3.0.0\", split=\"train\", streaming=True, download_mode=\"force_redownload\")\r\nDownloading builder script: 9.35kB [00:00, 10.2MB/s]\r\nDownloading metadata: 9.50kB [00:00, 12.2MB/s]\r\n>>> len(list(dataset))\r\n0\r\n>>> dataset = datasets.load_dataset('cnn_dailymail', name=\"3.0.0\", split=\"train\", streaming=False)\r\nReusing dataset cnn_dailymail (/home/slesage/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234)\r\n>>> len(dataset)\r\n287113\r\n```\r\n\r\nNote, in particular, that the streaming mode is failing silently, returning 0 row while I would have expected an exception instead. The result is that the dataset viewer shows `No data` instead of a detailed error.\r\n\r\n<img width=\"1511\" alt=\"Capture d’écran 2022-04-12 à 11 50 46\" src=\"https://user-images.githubusercontent.com/1676121/162935341-d50f1e73-d053-41d4-917f-e79708a0ca23.png\">\r\n",
"Well this is because the host (Google Drive) returns a document that is not the actual data, but an error page",
"Do you think that `datasets` should detect this anyway and throw an exception?",
"Yes it definitely should ! I don't have the bandwidth to work on this right now though",
"Indeed, streaming was not supported: tgz archives were not properly iterated.\r\n\r\nI've opened a PR to support streaming.\r\n\r\nHowever, keep in mind that Google Drive will keep generating issues from time to time, like 403,..."
] |
1,174,193,962
| 3,968
|
Cannot preview 'indonesian-nlp/eli5_id' dataset
|
closed
| 2022-03-19T06:54:09
| 2022-03-24T16:34:24
| 2022-03-24T16:34:24
|
https://github.com/huggingface/datasets/issues/3968
| null |
cahya-wirawan
| false
|
[
"Hi @cahya-wirawan, thanks for reporting.\r\n\r\nYour dataset is working OK in streaming mode:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n ...: ds = load_dataset(\"indonesian-nlp/eli5_id\", split=\"train\", streaming=True)\r\n ...: item = next(iter(ds))\r\n ...: item\r\nUsing custom data configuration indonesian-nlp--eli5_id-9fe728a7e760fb7b\r\n\r\nOut[1]: \r\n{'q_id': '1oy5tc',\r\n 'title': 'dalam sepak bola apa gunanya menyia-nyiakan dua permainan pertama dengan terburu-buru - di tengah - bukan permainan terburu-buru biasa saya mendapatkannya',\r\n 'selftext': '',\r\n 'document': '',\r\n 'subreddit': 'explainlikeimfive',\r\n 'answers': {'a_id': ['ccwtgnz', 'ccwtmho', 'ccwt946', 'ccwvj0u'],\r\n 'text': ['Jaga pertahanan tetap jujur, rasakan operan terburu-buru, buka permainan yang lewat. Pelanggaran yang terlalu satu dimensi akan gagal. Dan mereka yang bergegas ke tengah kadang-kadang dapat dibuka lebar-lebar untuk ukuran yard yang besar.',\r\n 'Jika Anda melempar bola sepanjang waktu, maka pertahanan akan beradaptasi untuk selalu menutupi umpan. Dengan melakukan permainan lari sederhana sesekali, Anda memaksa pertahanan untuk tetap dekat dan menjaga dari lari. Terkadang, pelanggaran dapat membuat pertahanan lengah dengan berpura-pura berlari dan membebaskan penerima mereka. Selain itu, Anda tidak perlu mendapatkan yard besar di setiap permainan. Terkadang, paling baik mendapatkan beberapa yard sekaligus. Selama Anda mendapatkan yang pertama, Anda dalam kondisi yang baik.',\r\n 'Dalam kebanyakan kasus, O-Line seharusnya membuat lubang untuk dilalui kembali. Jika Anda menjalankan terlalu banyak permainan ke luar / melempar, pertahanan akan mengejar. Juga, 2 permainan 5 yard memberi Anda satu set down baru.',\r\n 'Saya Anda tidak suka jenis drama itu, tonton CFL. Kami hanya mendapatkan 3 down sehingga Anda tidak bisa menyia-nyiakannya. Lebih banyak lagi yang lewat.'],\r\n 'score': [3, 2, 2, 2]},\r\n 'title_urls': {'url': []},\r\n 'selftext_urls': {'url': []},\r\n 'answers_urls': {'url': []}}\r\n```\r\nTherefore, it should be properly rendered in the previewer. Let me ping @severo to have a look at it.",
"Thanks @albertvillanova for checking it. Btw, I have another dataset indonesian-nlp/lfqa_id which has the same issue. However, this dataset is still private, is it the reason why the preview doesn't work?",
"Yes, preview is not supported on private datasets yet. We are working on that though...",
"Thanks for the confirmation ",
"Fixed. Thanks for your feedback."
] |
1,174,107,128
| 3,967
|
[feat] Add TextVQA dataset
|
closed
| 2022-03-18T23:29:39
| 2022-05-05T06:51:31
| 2022-05-05T06:44:29
|
https://github.com/huggingface/datasets/pull/3967
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3967",
"html_url": "https://github.com/huggingface/datasets/pull/3967",
"diff_url": "https://github.com/huggingface/datasets/pull/3967.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3967.patch",
"merged_at": "2022-05-05T06:44:29"
}
|
apsdehal
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hey :) Have you had a chance to continue this PR ? Let me know if you have questions or if I can help",
"Hey @lhoestq, let me wrap this up soon. I will resolve your comments in next push."
] |
1,173,883,084
| 3,966
|
Create metric card for BERTScore
|
closed
| 2022-03-18T18:21:56
| 2022-03-22T13:35:28
| 2022-03-22T13:30:56
|
https://github.com/huggingface/datasets/pull/3966
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3966",
"html_url": "https://github.com/huggingface/datasets/pull/3966",
"diff_url": "https://github.com/huggingface/datasets/pull/3966.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3966.patch",
"merged_at": "2022-03-22T13:30:56"
}
|
sashavor
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,173,708,739
| 3,965
|
TypeError: Couldn't cast array of type for JSONLines dataset
|
closed
| 2022-03-18T15:17:53
| 2022-05-06T16:13:51
| 2022-05-06T16:13:51
|
https://github.com/huggingface/datasets/issues/3965
| null |
lewtun
| false
|
[
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] |
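One possible workaround for the failed dtype inference mentioned above is to declare the schema explicitly when loading the JSON Lines file. The file name and the choice of `string` for the problematic columns are assumptions for illustration; in practice the `Features` must describe every column present in the file.

```python
from datasets import Features, Value, load_dataset

# Hypothetical schema: spell out the columns whose dtype cannot be inferred (all-null values).
features = Features(
    {
        "title": Value("string"),
        "milestone": Value("string"),
        "performed_via_github_app": Value("string"),
    }
)

ds = load_dataset("json", data_files="issues.jsonl", features=features, split="train")
```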
1,173,564,993
| 3,964
|
Add default Audio Loader
|
closed
| 2022-03-18T12:58:55
| 2022-08-22T14:20:46
| 2022-08-22T14:20:46
|
https://github.com/huggingface/datasets/issues/3964
| null |
polinaeterna
| false
|
[] |
1,173,492,562
| 3,963
|
Add Audio Folder
|
closed
| 2022-03-18T11:40:09
| 2022-06-15T16:33:19
| 2022-06-15T16:33:19
|
https://github.com/huggingface/datasets/pull/3963
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3963",
"html_url": "https://github.com/huggingface/datasets/pull/3963",
"diff_url": "https://github.com/huggingface/datasets/pull/3963.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3963.patch",
"merged_at": null
}
|
polinaeterna
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3963). All of your documentation changes will be reflected on that endpoint.",
"Feel free to merge `master` into this branch to fix the CI errors related to Google Drive :)\r\n\r\nI think we can just remove the test that is based on dummy data, or make it have the `sampling_rate` parameter hardcoded in the test",
"IMO it's important to keep this loader aligned with `imagefolder`. I'm aware that the current `imagefolder` API is limiting because only labels can be inferred from the directory structure, which means it can only be used for classification and self-supervised pretraining. However, to make the loader more generic, we plan to support [metadata files](https://huggingface.slack.com/archives/C02JB9L6JKF/p1645450017434029?thread_ts=1645157416.389499&cid=C02JB9L6JKF) (will work on that this week), and in the audio case, these files can store transcripts.\r\n\r\nStreaming TAR archives (`iter_archive`) is not supported by any of the loaders currently, so we can add that in a separate PR for all of them (to keep this PR simple).\r\n\r\nWDYT?",
"> Streaming TAR archives (iter_archive) is not supported by any of the loaders currently, so we can add that in a separate PR for all of them (to keep this PR simple).\r\n\r\nYes definitely, we can see that later\r\n\r\n> to make the loader more generic, we plan to support [metadata files](https://huggingface.slack.com/archives/C02JB9L6JKF/p1645450017434029?thread_ts=1645157416.389499&cid=C02JB9L6JKF) (will work on that this week), and in the audio case, these files can store transcripts.\r\n\r\nCould you share an example of what the structure would look like in this case ?\r\n\r\nNote that for audio we ultimately should be able to load several splits at once (common voice, librispeech, etc. all have splits), unlike the current imagefolder implementation that puts everything in `train` (EDIT: I mean, when we pass `data_dir`). If we want consistency then we would need the same for imagefolder.",
"> I think we can just remove the test that is based on dummy data, or make it have the sampling_rate parameter hardcoded in the test\r\n\r\nNot sure what to do with `test_builder_class` and `test_load_dataset_offline`, I don't really want to drop these tests completely but do you think it's a good idea to hardcode builder loading like this: 🤔\r\n```\r\nif dataset_name == \"audiofolder\":\r\n builder = builder_cls(name=name, cache_dir=tmp_cache_dir, sampling_rate=16_000)\r\nelse:\r\n builder = builder_cls(name=name, cache_dir=tmp_cache_dir)\r\n```\r\n@mariosasko totally agree on that APIs should be aligned, do you think we should implement metadata support first? Or maybe we can merge this PR with explicit single transcript file and add full metadata support further.\r\n\r\nSplits support is definitely a required feature too, I think we can implement it in the future PR too. \r\n",
"btw i've found a workaround for splits generation :D\r\n\r\n```\r\nfrom datasets.data_files import DataFilesDict\r\n\r\nds = load_dataset(\r\n \"audiofolder\",\r\n data_files=DataFilesDict(\r\n {\r\n \"train\":\"../audiofolder/AudioTestSplits/train.zip\",\r\n \"test\": \"../audiofolder/AudioTestSplits/test.zip\"\r\n }\r\n ),\r\n sampling_rate=16_000\r\n)\r\n```",
"> Not sure what to do with test_builder_class and test_load_dataset_offline, I don't really want to drop these tests completely but do you think it's a good idea to hardcode builder loading like this: 🤔\r\n\r\nYes it's fine. If you you're not a fan of having such parameters directly at the core of the code you can declare a global variable `PACKAGED_MODULES_TEST_KWARGS = {\"audiofolder\": {\"sampling_rate\": 16_000}}` and do\r\n```python\r\nbuilder_kwargs = PACKAGED_MODULES_TEST_KWARGS.get(name, {})\r\nbuilder = builder_cls(name=name, cache_dir=tmp_cache_dir, **builder_kwargs)\r\n```\r\n\r\n> btw i've found a workaround for splits generation :D\r\n\r\nYes that works :) Note that you don't have to use `DataFilesDict` and you can pass a python dict directly (`DataFilesDict` is for internal usage only)",
"@lhoestq @mariosasko please take a look at the code and feel free to add your comments and discuss the potential issues\r\n \r\nafter we are satisfied with the code, I'll write the documentation ",
"@lhoestq it appeared that this PR already exists... https://github.com/huggingface/datasets/pull/3364",
"> The current problem with this loader is that it supports the ASR task by default, which could be surprising for the users thinking that this is the Image Folder counterpart for audio. To avoid this, we should support the audio classification task by default instead (we can add a template for it in this PR), where the label column is inferred from the directory structure.\r\n\r\nRight indeed, good catch. It's better to keep polishing the API rather than pushing fast something that can be confusing for users. Let's go for maximum alignment between the two then @polinaeterna ?",
"@mariosasko sorry, I didn't understand from your previous message that by aligning with the ImageFolder you mean inferring labels from directories names. Sure, that's not a problem, I can add the corresponding code. Do you also mean that in this version we should get rid of transcription file and feature and add it in the future when the metadata support https://github.com/huggingface/datasets/pull/4069 will be merged? \r\nMy understanding was that support for ASR task is more crucial than audio classification as it's more \"common\", but I would ask @anton-l and @patrickvonplaten about this. Anyway, it's not a problem to implement the classification task first, and the ASR one later. ",
"> Do you also mean that in this version we should get rid of transcription file and feature and add it in the future when the metadata support https://github.com/huggingface/datasets/pull/4069 will be merged?\r\n\r\nWe can wait for the linked PR to be merged first and then add the changes to this PR to have support for ASR from the get-go.",
"Don't follow 100% here, but as @polinaeterna said I think ASR is much more common than audio classification. Also, do you guys think a lot of users will use both the audio and image folder functionality ? Is it very important to have audio and image aligned here? Note that in Transformers while all models follow a common API, audio and vision models can be very different with respect to pre- and post-processing",
"> I think ASR is much more common than audio classification\r\n\r\nI agree, the main focus is ASR\r\n\r\n> do you guys think a lot of users will use both the audio and image folder functionality ?\r\n\r\nYup I think so, people don't just use public academic datasets right ? `imagefolder` is almost used 1k times a week, and it's just the beginning.\r\n\r\n> Is it very important to have audio and image aligned here?\r\n\r\nIf we can get some consistency for free, let's take it ^^ This way it will be easy for users to go from one modality to another, and documentation will be simpler.\r\n\r\n> Note that in Transformers while all models follow a common API, audio and vision models can be very different with respect to pre- and post-processing\r\n\r\nThat make total sense. Here this is mainly about raw data loading (before preprocessing) so we just need to make something generic, no matter what task the data is used for. Even though actually we know that ASR will be the main usage for now :p\r\n\r\nLet me know if it's clearer now or if you have other questions !"
] |
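Following the last exchange above, a minimal sketch of per-split loading with a plain Python dict instead of `DataFilesDict`; the archive paths are placeholders, and the in-progress loader in this PR additionally expected a `sampling_rate` argument.

```python
from datasets import load_dataset

# One archive per split; a plain dict is enough, DataFilesDict is internal.
ds = load_dataset(
    "audiofolder",
    data_files={
        "train": "path/to/train.zip",
        "test": "path/to/test.zip",
    },
)
```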
1,173,482,291
| 3,962
|
Fix flatten of Sequence feature type
|
closed
| 2022-03-18T11:27:42
| 2022-03-21T14:40:47
| 2022-03-21T14:36:12
|
https://github.com/huggingface/datasets/pull/3962
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3962",
"html_url": "https://github.com/huggingface/datasets/pull/3962",
"diff_url": "https://github.com/huggingface/datasets/pull/3962.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3962.patch",
"merged_at": "2022-03-21T14:36:12"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,173,223,086
| 3,961
|
Scores from Index at extra positions are not filtered out
|
closed
| 2022-03-18T06:13:23
| 2022-04-12T14:41:58
| 2022-04-12T14:41:58
|
https://github.com/huggingface/datasets/issues/3961
| null |
vishalsrao
| false
|
[
"Hi! Yes, that makes sense! Would you like to submit a PR to fix this?",
"Created PR https://github.com/huggingface/datasets/pull/3971"
] |
1,173,148,884
| 3,960
|
Load local dataset error
|
open
| 2022-03-18T03:32:49
| 2023-08-02T17:12:20
| null |
https://github.com/huggingface/datasets/issues/3960
| null |
TXacs
| false
|
[
"Hi! Instead of @nateraw's `image-folder`, I suggest using the newly released `imagefolder` dataset:\r\n```python\r\n>>> from datasets import load_dataset\r\n>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train/**'], 'validation': ['/ssd/datasets/imagenet/pytorch/val/**']}\r\n>>> ds = load_dataset('imagefolder', data_files=data_files, cache_dir='./', task='image-classification')\r\n```\r\n\r\n\r\nLet us know if that resolves the issue.",
"> Hi! Instead of @nateraw's `image-folder`, I suggest using the newly released `imagefolder` dataset:\r\n> \r\n> ```python\r\n> >>> from datasets import load_dataset\r\n> >>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train/**'], 'validation': ['/ssd/datasets/imagenet/pytorch/val/**']}\r\n> >>> ds = load_dataset('imagefolder', data_files=data_files, cache_dir='./', task='image-classification')\r\n> ```\r\n> \r\n> Let us know if that resolves the issue.\r\n\r\nSorry, replied late.\r\nThanks a lot! It's worked for me. But it seems much slower than before, and now gets stuck.....\r\n\r\n```\r\n>>> from datasets import load_dataset\r\n>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train/**'], 'validation': ['/ssd/datasets/imagenet/pytorch/val/**']}\r\n>>> ds = load_dataset('imagefolder', data_files=data_files, cache_dir='./', task='image-classification')\r\nResolving data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1281167/1281167 [00:02<00:00, 437283.97it/s]\r\nResolving data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50001/50001 [00:00<00:00, 89094.29it/s]\r\nUsing custom data configuration default-baebca6347576b33\r\nDownloading and preparing dataset image_folder/default to ./image_folder/default-baebca6347576b33/0.0.0/ee92df8e96c6907f3c851a987be3fd03d4b93b247e727b69a8e23ac94392a091...\r\nDownloading data files #0: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:00<00:00, 82289.56obj/s]\r\nDownloading data files #1: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:01<00:00, 73559.11obj/s]\r\nDownloading data files #2: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:00<00:00, 81600.46obj/s]\r\nDownloading data files #3: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:01<00:00, 79691.56obj/s]\r\nDownloading data files #4: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:00<00:00, 82341.37obj/s]\r\nDownloading data files #5: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:01<00:00, 75784.46obj/s]\r\nDownloading data files #6: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:00<00:00, 81466.18obj/s]\r\nDownloading data files #7: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:00<00:00, 82320.27obj/s]\r\nDownloading data files #8: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:01<00:00, 78094.00obj/s]\r\nDownloading data files #9: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:00<00:00, 84057.59obj/s]\r\nDownloading data files #10: 
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:00<00:00, 83082.31obj/s]\r\nDownloading data files #11: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:01<00:00, 79944.21obj/s]\r\nDownloading data files #12: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:00<00:00, 84569.77obj/s]\r\nDownloading data files #13: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:00<00:00, 84949.63obj/s]\r\nDownloading data files #14: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:00<00:00, 80666.53obj/s]\r\nDownloading data files #15: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80072/80072 [00:01<00:00, 76723.20obj/s]\r\n^[[Bloading data files #8: 94%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████▌ | 75061/80073 [00:00<00:00, 82609.89obj/s]\r\nDownloading data files #9: 85%|████████████████████████████████████████████████████████████████████████████████████████████████████▍ | 68120/80073 [00:00<00:00, 83868.54obj/s]\r\nDownloading data files #9: 96%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ | 76784/80073 [00:00<00:00, 84722.34obj/s]\r\nDownloading data files #10: 75%|███████████████████████████████████████████████████████████████████████████████████████▋ | 59995/80073 [00:00<00:00, 84148.19obj/s]\r\nDownloading data files #10: 97%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████ | 77412/80073 [00:00<00:00, 85724.53obj/s]\r\nDownloading data files #11: 71%|███████████████████████████████████████████████████████████████████████████████████▎ | 57032/80073 [00:00<00:00, 79930.58obj/s]\r\nDownloading data files #11: 92%|███████████████████████████████████████████████████████████████████████████████████████████████████████████ | 73277/80073 [00:00<00:00, 78091.27obj/s]\r\nDownloading data files #12: 86%|█████████████████████████████████████████████████████████████████████████████████████████████████████ | 69125/80073 [00:00<00:00, 84723.02obj/s]\r\nDownloading data files #12: 97%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████▋ | 77803/80073 [00:00<00:00, 85351.59obj/s]\r\nDownloading data files #13: 75%|████████████████████████████████████████████████████████████████████████████████████████▏ | 60356/80073 [00:00<00:00, 84833.35obj/s]\r\nDownloading data files #13: 97%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████ | 77368/80073 [00:00<00:00, 84475.10obj/s]\r\nDownloading data files #14: 72%|████████████████████████████████████████████████████████████████████████████████████▍ | 57751/80073 [00:00<00:00, 80727.33obj/s]\r\nDownloading data files #14: 92%|████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ | 74022/80073 [00:00<00:00, 78703.16obj/s]\r\nDownloading data files #15: 
78%|███████████████████████████████████████████████████████████████████████████████████████████▋ | 62724/80072 [00:00<00:00, 78387.33obj/s]\r\nDownloading data files #15: 99%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████▎ | 78933/80072 [00:01<00:00, 79353.63obj/s]\r\n```",
"Wait a long time, it completed. I don't know why it's so slow...",
"You can pass `ignore_verifications=True` in `load_dataset` to make it fast (to skip checksum verification). I'll add this tip to the docs.",
"> You can pass `ignore_verifications=True` in `load_dataset` to make it fast (to skip checksum verification). I'll add this tip to the docs.\r\n\r\nThanks!It's worked well.",
"> You can pass `ignore_verifications=True` in `load_dataset` to make it fast (to skip checksum verification). I'll add this tip to the docs.\r\n\r\nI find current `load_dataset` loads ImageNet still slowly, even add `ignore_verifications=True`.\r\nFirst loading, it costs about 20 min in my servers.\r\n```\r\nreal\t19m23.023s\r\nuser\t21m18.360s\r\nsys\t7m59.080s\r\n```\r\n\r\nSecond reusing, it costs about 15 min in my servers.\r\n```\r\nreal\t15m20.735s\r\nuser\t12m22.979s\r\nsys\t5m46.960s\r\n```\r\n\r\nI think it's too much slow, is there other method to make it faster?",
"And in transformers the [ViT example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-classification/run_image_classification.py), could you make some changes ? Like the `collect_fn`\r\n```python\r\ndef collate_fn(examples):\r\n pixel_values = torch.stack([example[\"pixel_values\"] for example in examples])\r\n labels = torch.tensor([example[\"labels\"] for example in examples])\r\n return {\"pixel_values\": pixel_values, \"labels\": labels}\r\n```\r\nHow to know the keys of example?",
"Loading the image files slowly, is it because the multiple processes load files at the same time?",
"Could you please share the output you get after the second loading? Also, feel free to interrupt (`KeyboardInterrupt`) the process while waiting for it to end and share a traceback to show us where the process hangs. \r\n\r\n> And in transformers the [ViT example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-classification/run_image_classification.py), could you make some changes ? Like the `collect_fn`\r\n> \r\n> ```python\r\n> def collate_fn(examples):\r\n> pixel_values = torch.stack([example[\"pixel_values\"] for example in examples])\r\n> labels = torch.tensor([example[\"labels\"] for example in examples])\r\n> return {\"pixel_values\": pixel_values, \"labels\": labels}\r\n> ```\r\n> \r\n> How to know the keys of example?\r\n\r\nWhat do you mean by \"could you make some changes\".The `ViT` script doesn't remove unused columns by default, so the keys of an example are equal to the columns of the given dataset.\r\n\r\n",
"> Could you please share the output you get after the second loading? Also, feel free to interrupt (`KeyboardInterrupt`) the process while waiting for it to end and share a traceback to show us where the process hangs.\r\n> \r\n> > And in transformers the [ViT example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-classification/run_image_classification.py), could you make some changes ? Like the `collect_fn`\r\n> > ```python\r\n> > def collate_fn(examples):\r\n> > pixel_values = torch.stack([example[\"pixel_values\"] for example in examples])\r\n> > labels = torch.tensor([example[\"labels\"] for example in examples])\r\n> > return {\"pixel_values\": pixel_values, \"labels\": labels}\r\n> > ```\r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > How to know the keys of example?\r\n> \r\n> What do you mean by \"could you make some changes\".The `ViT` script doesn't remove unused columns by default, so the keys of an example are equal to the columns of the given dataset.\r\n\r\nThanks for your reply!\r\n\r\n1. I did not record the second output, so I run it again. \r\n```\r\n(merak) txacs@master:/dat/txacs/test$ time python test.py \r\nResolving data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1281167/1281167 [00:02<00:00, 469497.89it/s]\r\nResolving data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50001/50001 [00:00<00:00, 70123.73it/s]\r\nUsing custom data configuration default-baebca6347576b33\r\nReusing dataset image_folder (./image_folder/default-baebca6347576b33/0.0.0/ee92df8e96c6907f3c851a987be3fd03d4b93b247e727b69a8e23ac94392a091)\r\n100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:10<00:00, 5.37s/it]\r\nLoading cached processed dataset at ./image_folder/default-baebca6347576b33/0.0.0/ee92df8e96c6907f3c851a987be3fd03d4b93b247e727b69a8e23ac94392a091/cache-cd3fbdc025e03f8c.arrow\r\nLoading cached processed dataset at ./image_folder/default-baebca6347576b33/0.0.0/ee92df8e96c6907f3c851a987be3fd03d4b93b247e727b69a8e23ac94392a091/cache-b5a9de701bbdbb2b.arrow\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['image', 'labels'],\r\n num_rows: 1281167\r\n })\r\n validation: Dataset({\r\n features: ['image', 'labels'],\r\n num_rows: 50000\r\n })\r\n})\r\n\r\nreal\t10m10.413s\r\nuser\t9m33.195s\r\nsys\t2m47.528s\r\n```\r\nAlthough it cost less time than the last, but still slowly.\r\n\r\n2. Sorry, forgive my poor statement. I solved it, updating to new script 'run_image_classification.py'.",
"Thanks for rerunning the code to record the output. Is it the `\"Resolving data files\"` part on your machine that takes a long time to complete, or is it `\"Loading cached processed dataset at ...\"˙`? We plan to speed up the latter by splitting bigger Arrow files into smaller ones, but your dataset doesn't seem that big, so not sure if that's the issue.",
"> Thanks for rerunning the code to record the output. Is it the `\"Resolving data files\"` part on your machine that takes a long time to complete, or is it `\"Loading cached processed dataset at ...\"˙`? We plan to speed up the latter by splitting bigger Arrow files into smaller ones, but your dataset doesn't seem that big, so not sure if that's the issue.\r\n\r\nSounds good! The main position, which costs long time, is from program starting to `\"Resolving data files\"`. I hope you can solve it early, thanks!",
"I'm getting this problem. Script has been stuck at this part for the past 15 or so minutes:\r\n \r\n`Resolving data files: 100%|█████████████████████████████████████████| 107/107 [00:00<00:00, 472.74it/s]`\r\n\r\nI had everything working fine on an AWS EC2 node with a single GPU. Then I created an image based on the single GPU machine, and spun up a new one with 4 GPUs, so I got all of the training data ready at .cache. \r\n\r\nTurned off all checks with `verification_mode='no_checks'`. Logged in with huggingface-cli again just to be sure.\r\n\r\nInterrupting shows the code is stuck here:\r\n\r\n```\r\nFile \"/opt/conda/envs/pytorch/lib/python3.10/site-packages/datasets/arrow_reader.py\", line 200, in _read_files\r\n pa_table: Table = self._get_table_from_filename(f_dict, in_memory=in_memory)\r\n File \"/opt/conda/envs/pytorch/lib/python3.10/site-packages/datasets/arrow_reader.py\", line 336, in _get_table_from_filename\r\n table = ArrowReader.read_table(filename, in_memory=in_memory)\r\n File \"/opt/conda/envs/pytorch/lib/python3.10/site-packages/datasets/arrow_reader.py\", line 357, in read_table\r\n return table_cls.from_file(filename)\r\n File \"/opt/conda/envs/pytorch/lib/python3.10/site-packages/datasets/table.py\", line 1059, in from_file\r\n table = _memory_mapped_arrow_table_from_file(filename)\r\n File \"/opt/conda/envs/pytorch/lib/python3.10/site-packages/datasets/table.py\", line 66, in _memory_mapped_arrow_table_from_file\r\n pa_table = opened_stream.read_all()\r\n```\r\n\r\nIs it just going to take a while or am I going to run out of money? :sweat_smile: \r\n\r\nedit: ping @mariosasko "
] |
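Putting together the two tips from this thread (the `imagefolder` call with per-split glob patterns and `ignore_verifications=True` to skip the slow checksum step); the local paths mirror the ones in the comments and are environment-specific.

```python
from datasets import load_dataset

data_files = {
    "train": ["/ssd/datasets/imagenet/pytorch/train/**"],
    "validation": ["/ssd/datasets/imagenet/pytorch/val/**"],
}

# Skipping checksum verification avoids the long wait reported above.
ds = load_dataset(
    "imagefolder",
    data_files=data_files,
    task="image-classification",
    ignore_verifications=True,
)
```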
1,172,872,695
| 3,959
|
Medium-sized dataset conversion from pandas causes a crash
|
closed
| 2022-03-17T20:20:35
| 2022-12-12T17:14:06
| 2022-04-20T12:35:37
|
https://github.com/huggingface/datasets/issues/3959
| null |
Antymon
| false
|
[
"Hi ! It looks like an issue with pyarrow, could you try updating pyarrow and try again ?",
"@albertvillanova did you find a solution to this?",
"I´m getting the same problem with some files, @albertvillanova did you find a solution to this?"
] |
1,172,657,981
| 3,958
|
Update Wikipedia metadata
|
closed
| 2022-03-17T17:50:05
| 2022-03-21T12:26:48
| 2022-03-21T12:26:47
|
https://github.com/huggingface/datasets/pull/3958
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3958",
"html_url": "https://github.com/huggingface/datasets/pull/3958",
"diff_url": "https://github.com/huggingface/datasets/pull/3958.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3958.patch",
"merged_at": "2022-03-21T12:26:47"
}
|
albertvillanova
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3958). All of your documentation changes will be reflected on that endpoint.",
"Once this last PR validated, I can take care of the integration of all the wikipedia update branch into master, @lhoestq. "
] |
1,172,401,455
| 3,957
|
Fix xtreme s metrics
|
closed
| 2022-03-17T13:39:04
| 2022-03-18T13:46:19
| 2022-03-18T13:42:16
|
https://github.com/huggingface/datasets/pull/3957
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3957",
"html_url": "https://github.com/huggingface/datasets/pull/3957",
"diff_url": "https://github.com/huggingface/datasets/pull/3957.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3957.patch",
"merged_at": "2022-03-18T13:42:16"
}
|
patrickvonplaten
| true
|
[
"Sorry for the commit history mess, but will be squashed anyways so should be fine",
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,172,272,327
| 3,956
|
TypeError: __init__() missing 1 required positional argument: 'scheme'
|
closed
| 2022-03-17T11:43:13
| 2023-11-21T04:26:20
| 2022-03-28T08:00:01
|
https://github.com/huggingface/datasets/issues/3956
| null |
amirj
| false
|
[
"Hi @amirj, thanks for reporting.\r\n\r\nAt first sight, your issue seems a version incompatibility between your Elasticsearch client and your Elasticsearch server.\r\n\r\nFeel free to have a look at Elasticsearch client docs: https://www.elastic.co/guide/en/elasticsearch/client/python-api/current/overview.html#_compatibility\r\n> Language clients are forward compatible; meaning that clients support communicating with greater or equal minor versions of Elasticsearch. Elasticsearch language clients are only backwards compatible with default distributions and without guarantees made.",
"@albertvillanova It doesn't seem a version incompatibility between the client and server, since the following code is working:\r\n\r\n```\r\nfrom elasticsearch import Elasticsearch\r\nes_client = Elasticsearch(\"http://localhost:9200\")\r\ndataset.add_elasticsearch_index(column=\"e1\", es_client=es_client, es_index_name=\"e1_index\")\r\n```",
"Hi @amirj, \r\n\r\nI really think it is a version incompatibility issue between your Elasticsearch client and server:\r\n- Your Elasticsearch server NodeConfig expects a positional argument named 'scheme'\r\n- Whereas your Elasticsearch client passes only keyword arguments: `NodeConfig(**options)`\r\n\r\nMoreover:\r\n- Looking at your stack trace, I deduce you are using Elasticsearch client **\"8\"** major version:\r\n - the Elasticsearch file \"elasticsearch/_sync/client/utils.py\" was created in version \"8.0.0a1\": https://github.com/elastic/elasticsearch-py/commit/21fa13b0f03b7b27ace9e19a1f763d40bd2e2ba4\r\n - you can check your Elasticsearch client version by running this Python code:\r\n ```python\r\n import elasticsearch\r\n print(elasticsearch.__version__)\r\n ```\r\n\r\n- However, in the *Environment info*, you informed that the major version of your Eleasticsearch cluster server is **\"7\"** (\"7.10.2-SNAPSHOT\")\r\n\r\nCould you please align the Elasticsearch client/server major versions (as pointed out in Elasticsearch docs) and check if the problem persists?",
"I'm closing this issue, @amirj.\r\n\r\nFeel free to re-open it if the problem persists. \r\n\r\n",
"```\r\nfrom elasticsearch import Elasticsearch\r\nes = Elasticsearch([{'host': 'localhost', 'port': 9200}])\r\n```\r\n```\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-8-675c6ffe5293> in <module>\r\n 1 #es = Elasticsearch([{'host':'localhost', 'port':9200}])\r\n 2 from elasticsearch import Elasticsearch\r\n----> 3 es = Elasticsearch([{'host': 'localhost', 'port': 9200}])\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\elasticsearch\\_sync\\client\\__init__.py in __init__(self, hosts, cloud_id, api_key, basic_auth, bearer_auth, opaque_id, headers, connections_per_node, http_compress, verify_certs, ca_certs, client_cert, client_key, ssl_assert_hostname, ssl_assert_fingerprint, ssl_version, ssl_context, ssl_show_warn, transport_class, request_timeout, node_class, node_pool_class, randomize_nodes_in_pool, node_selector_class, dead_node_backoff_factor, max_dead_node_backoff, serializer, serializers, default_mimetype, max_retries, retry_on_status, retry_on_timeout, sniff_on_start, sniff_before_requests, sniff_on_node_failure, sniff_timeout, min_delay_between_sniffing, sniffed_node_callback, meta_header, timeout, randomize_hosts, host_info_callback, sniffer_timeout, sniff_on_connection_fail, http_auth, maxsize, _transport)\r\n 310 \r\n 311 if _transport is None:\r\n--> 312 node_configs = client_node_configs(\r\n 313 hosts,\r\n 314 cloud_id=cloud_id,\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\elasticsearch\\_sync\\client\\utils.py in client_node_configs(hosts, cloud_id, **kwargs)\r\n 99 else:\r\n 100 assert hosts is not None\r\n--> 101 node_configs = hosts_to_node_configs(hosts)\r\n 102 \r\n 103 # Remove all values which are 'DEFAULT' to avoid overwriting actual defaults.\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\elasticsearch\\_sync\\client\\utils.py in hosts_to_node_configs(hosts)\r\n 142 \r\n 143 elif isinstance(host, Mapping):\r\n--> 144 node_configs.append(host_mapping_to_node_config(host))\r\n 145 else:\r\n 146 raise ValueError(\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\elasticsearch\\_sync\\client\\utils.py in host_mapping_to_node_config(host)\r\n 209 options[\"path_prefix\"] = options.pop(\"url_prefix\")\r\n 210 \r\n--> 211 return NodeConfig(**options) # type: ignore\r\n 212 \r\n 213 \r\n\r\nTypeError: __init__() missing 1 required positional argument: 'scheme'\r\n```",
"I am facing the same issue, and version is same for the both i.e(8.1.3)",
"@raj713335, thanks for reporting.\r\n\r\nPlease note that in your code example, you are not using our `datasets` library. \r\n\r\nThus, I think you should report that issue to `elasticsearch` library: https://github.com/elastic/elasticsearch-py\r\n\r\n",
"it is simple hack which shock you just replace https to http in scheme\r\n\r\n**In My Case:** ->\r\n\r\n`es = Elasticsearch([{'host': 'localhost', 'port': 9200, \"scheme\": \"http\"}])\r\n if es.ping():\r\n print('Connected to ES!')\r\n else:\r\n print('Could not connect!')\r\n sys.exit()`"
] |
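As a brief illustration of the version point made in the thread above: the 8.x `elasticsearch` Python client requires a connection scheme, and the client's major version should match the server's. The sketch below only shows client construction and the version check suggested in the thread; the final `add_elasticsearch_index` call is copied from the thread and assumes a `dataset` object with an `"e1"` column is already loaded.

```python
# Minimal sketch of the client setups discussed above; match the client's major
# version to the server (e.g. `pip install "elasticsearch>=7,<8"` for a 7.x cluster).
import elasticsearch
from elasticsearch import Elasticsearch

print(elasticsearch.__version__)  # check the installed client version (as suggested in the thread)

# With an 8.x client, provide the scheme -- either via a full URL ...
es_client = Elasticsearch("http://localhost:9200")
# ... or via an explicit "scheme" key in the host mapping:
# es_client = Elasticsearch([{"host": "localhost", "port": 9200, "scheme": "http"}])

# Then index a dataset column as in the thread (assumes `dataset` is already loaded):
# dataset.add_elasticsearch_index(column="e1", es_client=es_client, es_index_name="e1_index")
```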
1,172,246,647
| 3,955
|
Remove unncessary 'pylint disable' message in ReadMe
|
closed
| 2022-03-17T11:16:55
| 2022-04-12T14:28:35
| 2022-04-12T14:28:35
|
https://github.com/huggingface/datasets/pull/3955
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3955",
"html_url": "https://github.com/huggingface/datasets/pull/3955",
"diff_url": "https://github.com/huggingface/datasets/pull/3955.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3955.patch",
"merged_at": "2022-04-12T14:28:35"
}
|
Datta0
| true
|
[] |
1,172,141,664
| 3,954
|
The dataset preview is not available for tdklab/Hebrew_Squad_v1.1 dataset
|
closed
| 2022-03-17T09:38:11
| 2022-04-20T12:39:07
| 2022-04-20T12:39:07
|
https://github.com/huggingface/datasets/issues/3954
| null |
MatanBenChorin
| false
|
[
"Hi @MatanBenChorin, thanks for reporting.\r\n\r\nPlease, take into account that the preview may take some time until it properly renders (we are working to reduce this time).\r\n\r\nMaybe @severo can give more details on this.",
"Hi, \r\nThank you",
"Thanks for reporting. We are looking at it and will give updates here.",
"I imagine the dataset has been moved to https://huggingface.co/datasets/tdklab/Hebrew_Squad_v1, which still has an issue:\r\n\r\n```\r\nServer Error\r\n\r\nStatus code: 400\r\nException: NameError\r\nMessage: name 'HebrewSquad' is not defined\r\n```",
"The issue is not related to the dataset viewer but to the loading script (cc @albertvillanova @lhoestq @mariosasko)\r\n\r\n```python\r\n>>> import datasets as ds\r\n>>> hf_token = \"hf_...\" # <- required because the dataset is gated\r\n>>> d = ds.load_dataset('tdklab/Hebrew_Squad_v1', use_auth_token=hf_token)\r\n...\r\nNameError: name 'HebrewSquad' is not defined\r\n```",
"Yes indeed there is an error in [Hebrew_Squad_v1.py:L40](https://huggingface.co/datasets/tdklab/Hebrew_Squad_v1/blob/main/Hebrew_Squad_v1.py#L40)\r\n\r\nHere is the fix @MatanBenChorin :\r\n\r\n```diff\r\n- HebrewSquad(\r\n+ HebrewSquadConfig(\r\n```"
] |
1,172,123,736
| 3,953
|
Add ImageNet Sketch
|
closed
| 2022-03-17T09:20:31
| 2022-05-23T18:05:29
| 2022-05-23T18:05:29
|
https://github.com/huggingface/datasets/issues/3953
| null |
NielsRogge
| false
|
[
"Can you assign this task to me? @nreimers @mariosasko ",
"Hi! Sure! Let us know if you need any pointers."
] |
1,171,895,531
| 3,952
|
Checksum error for glue sst2, stsb, rte etc datasets
|
closed
| 2022-03-17T03:45:47
| 2022-03-17T07:10:15
| 2022-03-17T07:10:14
|
https://github.com/huggingface/datasets/issues/3952
| null |
ravindra-ut
| false
|
[
"Hi, @ravindra-ut.\r\n\r\nI'm sorry but I can't reproduce your problem:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"glue\", \"sst2\")\r\nDownloading builder script: 28.8kB [00:00, 11.6MB/s] \r\nDownloading metadata: 28.7kB [00:00, 12.9MB/s] \r\nDownloading and preparing dataset glue/sst2 (download: 7.09 MiB, generated: 4.81 MiB, post-processed: Unknown size, total: 11.90 MiB) to .../.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad...\r\nDownloading data: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7.44M/7.44M [00:01<00:00, 5.82MB/s]\r\nDataset glue downloaded and prepared to .../.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad. Subsequent calls will reuse this data. \r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 895.96it/s]\r\n\r\nIn [3]: ds\r\nOut[2]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 67349\r\n })\r\n validation: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 872\r\n })\r\n test: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 1821\r\n })\r\n})\r\n``` \r\n\r\nMoreover, I see in your traceback that your error was for an URL at https://firebasestorage.googleapis.com\r\nHowever, the URLs were updated on Sep 16, 2020 (`datasets` version 1.0.2) to https://dl.fbaipublicfiles.com: https://github.com/huggingface/datasets/commit/2f03041a21c03abaececb911760c3fe4f420c229\r\n\r\nCould you please try to update `datasets`\r\n```shell\r\npip install -U datasets\r\n```\r\nand then force redownload\r\n```python\r\nds = load_dataset(\"glue\", \"sst2\", download_mode=\"force_redownload\")\r\n```\r\nto update the cache?\r\n\r\nPlease, feel free to reopen this issue if the problem persists."
] |
1,171,568,814
| 3,951
|
Forked streaming datasets try to `open` data urls rather than use network
|
closed
| 2022-03-16T21:21:02
| 2022-06-10T20:47:26
| 2022-06-10T20:47:26
|
https://github.com/huggingface/datasets/issues/3951
| null |
dlwh
| false
|
[
"Thanks for reporting this second issue as well. We definitely want to make streaming datasets fully working in a distributed setup and with the best performance. Right now it only supports single process.\r\n\r\nIn this issue it seems that the streaming capabilities that we offer to dataset builders are not transferred to the forked process (so it fails to open remote files and start streaming data from them). In particular `open` is supposed to be mocked by our `xopen` function that is an extended open that supports remote files. Let me try to fix this"
] |
1,171,560,585
| 3,950
|
Streaming Datasets don't work with Transformers Trainer when dataloader_num_workers>1
|
closed
| 2022-03-16T21:14:11
| 2022-06-10T20:47:26
| 2022-06-10T20:47:26
|
https://github.com/huggingface/datasets/issues/3950
| null |
dlwh
| false
|
[
"Hi, thanks for reporting. This could be related to https://github.com/huggingface/datasets/issues/3148 too\r\n\r\nWe should definitely make `TorchIterableDataset` picklable by moving it in the main code instead of inside a function. If you'd like to contribute, feel free to open a Pull Request :)\r\n\r\nI'm also taking a look at your second issue, which is more technical"
] |
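A generic illustration of the pickling point raised above (the names below are made up, not the actual `datasets` internals): Python's `pickle` locates a class by its importable, module-level name, so instances of a class defined inside a function cannot be pickled, which is exactly what a multi-worker `DataLoader` needs to do when it sends the dataset object to worker processes.

```python
# Illustration only: why a class defined inside a function breaks pickling.
import pickle


class ModuleLevelDataset:
    """Picklable: importable by name as <module>.ModuleLevelDataset."""


def build_dataset():
    class LocalDataset:  # defined inside a function -> no importable name
        pass
    return LocalDataset()


pickle.dumps(ModuleLevelDataset())  # works

try:
    pickle.dumps(build_dataset())   # fails: pickle cannot resolve the local class
except Exception as err:
    print(f"{type(err).__name__}: {err}")
```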
1,171,467,981
| 3,949
|
Remove GLEU metric
|
closed
| 2022-03-16T19:35:31
| 2022-04-12T20:43:26
| 2022-04-12T20:37:09
|
https://github.com/huggingface/datasets/pull/3949
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3949",
"html_url": "https://github.com/huggingface/datasets/pull/3949",
"diff_url": "https://github.com/huggingface/datasets/pull/3949.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3949.patch",
"merged_at": "2022-04-12T20:37:09"
}
|
emibaylor
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,171,460,560
| 3,948
|
Google BLEU Metric Card
|
closed
| 2022-03-16T19:27:17
| 2022-03-21T16:04:26
| 2022-03-21T16:04:25
|
https://github.com/huggingface/datasets/pull/3948
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3948",
"html_url": "https://github.com/huggingface/datasets/pull/3948",
"diff_url": "https://github.com/huggingface/datasets/pull/3948.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3948.patch",
"merged_at": "2022-03-21T16:04:25"
}
|
emibaylor
| true
|
[
"A few things that aren't clear for me:\r\n- \"Because it performs better on individual sentence pairs as compared to BLEU, Google BLEU has also been used in RL experiments.\" -- why is this the case? why would that make it more usable for RL? (also, you should put \"Reinforcement Learning\" explicitly, not just the acronym)\r\n- (Minor issue) -- I put inputs before the first example code, I think that's clearer somehow\r\n\r\nOtherwise, it looks great, good job @emibaylor !\r\n"
] |
1,171,452,854
| 3,947
|
BLEU metric card
|
closed
| 2022-03-16T19:20:07
| 2022-03-29T14:59:50
| 2022-03-29T14:54:14
|
https://github.com/huggingface/datasets/pull/3947
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3947",
"html_url": "https://github.com/huggingface/datasets/pull/3947",
"diff_url": "https://github.com/huggingface/datasets/pull/3947.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3947.patch",
"merged_at": "2022-03-29T14:54:13"
}
|
emibaylor
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Some thoughts:\r\n- For values, e.g. \"Defaults to False\", I would put False in code: `False`. Same for : \"Defaults to `4`.\"\r\n- I would put the following remark in \"Limitations\": \r\n> \"BLEU's output is always a number between 0 and 1. This value indicates how similar the candidate text is to the reference texts, with values closer to 1 representing more similar texts. Few human translations will attain a score of 1, since this would indicate that the candidate is identical to one of the reference translations. For this reason, it is not necessary to attain a score of 1. Because there are more opportunities to match, adding additional reference translations will increase the BLEU score.\"\r\n\r\n- Add some values from the original BLEU paper (https://aclanthology.org/P02-1040.pdf)"
] |
1,171,239,287
| 3,946
|
Add newline to text dataset builder for controlling universal newlines mode
|
closed
| 2022-03-16T16:11:11
| 2023-09-24T10:10:50
| 2023-09-24T10:10:47
|
https://github.com/huggingface/datasets/pull/3946
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3946",
"html_url": "https://github.com/huggingface/datasets/pull/3946",
"diff_url": "https://github.com/huggingface/datasets/pull/3946.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3946.patch",
"merged_at": null
}
|
albertvillanova
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3946). All of your documentation changes will be reflected on that endpoint.",
"The failing CI test has nothing to do with this PR.",
"I'm closing this PR."
] |
1,171,222,257
| 3,945
|
Fix comet metric
|
closed
| 2022-03-16T15:56:47
| 2022-03-22T15:10:12
| 2022-03-22T15:05:30
|
https://github.com/huggingface/datasets/pull/3945
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3945",
"html_url": "https://github.com/huggingface/datasets/pull/3945",
"diff_url": "https://github.com/huggingface/datasets/pull/3945.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3945.patch",
"merged_at": "2022-03-22T15:05:30"
}
|
lhoestq
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Finally I'm done updating the dependencies ^^'\r\n\r\ncc @sashavor can you review my changes in the metric card please ?",
"Looks good to me! Just fixed a tiny typo :wink: ",
"Thanks !"
] |
1,171,209,510
| 3,944
|
Create README.md
|
closed
| 2022-03-16T15:46:26
| 2022-03-17T17:50:54
| 2022-03-17T17:47:05
|
https://github.com/huggingface/datasets/pull/3944
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3944",
"html_url": "https://github.com/huggingface/datasets/pull/3944",
"diff_url": "https://github.com/huggingface/datasets/pull/3944.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3944.patch",
"merged_at": "2022-03-17T17:47:05"
}
|
sashavor
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,171,185,070
| 3,943
|
[Doc] Don't use v for version tags on GitHub
|
closed
| 2022-03-16T15:28:30
| 2022-03-17T11:46:26
| 2022-03-17T11:46:25
|
https://github.com/huggingface/datasets/pull/3943
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3943",
"html_url": "https://github.com/huggingface/datasets/pull/3943",
"diff_url": "https://github.com/huggingface/datasets/pull/3943.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3943.patch",
"merged_at": "2022-03-17T11:46:25"
}
|
sgugger
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3943). All of your documentation changes will be reflected on that endpoint."
] |
1,171,177,122
| 3,942
|
reddit_tifu dataset: Checksums didn't match for dataset source files
|
closed
| 2022-03-16T15:23:30
| 2022-03-16T15:57:43
| 2022-03-16T15:39:25
|
https://github.com/huggingface/datasets/issues/3942
| null |
XingxingZhang
| false
|
[
"Hi @XingxingZhang, \r\n\r\nWe have already fixed this. You should update `datasets` version to at least 1.18.4:\r\n```shell\r\npip install -U datasets\r\n```\r\nAnd then force the redownload:\r\n```python\r\nload_dataset(\"...\", download_mode=\"force_redownload\")\r\n```\r\n\r\nDuplicate of:\r\n- #3773",
"thanks @albertvillanova . by upgrading to 1.18.4 and using `load_dataset(\"...\", download_mode=\"force_redownload\")` fixed \r\n the bug.\r\n\r\nusing the following as you suggested in another thread can also fixed the bug\r\n```\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\n",
"The latter solution (installing from GitHub) was proposed because the fix was not released yet. But last week we made the 1.18.4 patch release (with the fix), so no longer necessary to install from GitHub.\r\n\r\nYou can now install from PyPI, as usual:\r\n```shell\r\npip install -U datasets\r\n```\r\n"
] |
1,171,132,709
| 3,941
|
billsum dataset: Checksums didn't match for dataset source files:
|
closed
| 2022-03-16T14:52:08
| 2024-03-13T12:11:35
| 2022-03-16T15:46:44
|
https://github.com/huggingface/datasets/issues/3941
| null |
XingxingZhang
| false
|
[
"Hi @XingxingZhang, thanks for reporting.\r\n\r\nThis was due to a change in Google Drive service:\r\n- #3786 \r\n\r\nWe have already fixed it:\r\n- #3787\r\n\r\nYou should update `datasets` version to at least 1.18.4:\r\n```shell\r\npip install -U datasets\r\n```\r\nAnd then force the redownload:\r\n```python\r\nload_dataset(\"...\", download_mode=\"force_redownload\")\r\n```",
"thanks @albertvillanova ",
"@albertvillanova \r\nYOU Said: pip install git+ https://github.com/huggingface/datasets.git Then set: dataset=load_dataset (\"multinews\", download_mode=\"force-redownload\"). I changed the ’datautils‘ file according to this setting: traindata=load_dataset (path='wikitext ', name='wikitext-2-raw v1', split='train ', download_mode=\"force-redownload\")\r\nTestdata=load_dataset (path='wikitext ', name='wikitext-2-raw v1', split='test ', download_mode=\"force-redownload\")\r\nthen the bug is\r\n\r\n\r\nI have tried both versions\r\n\r\n"
] |
1,171,106,853
| 3,940
|
Create CoVAL metric card
|
closed
| 2022-03-16T14:31:49
| 2022-03-18T17:37:59
| 2022-03-18T17:35:14
|
https://github.com/huggingface/datasets/pull/3940
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3940",
"html_url": "https://github.com/huggingface/datasets/pull/3940",
"diff_url": "https://github.com/huggingface/datasets/pull/3940.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3940.patch",
"merged_at": "2022-03-18T17:35:14"
}
|
sashavor
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,170,882,331
| 3,939
|
Source links broken
|
closed
| 2022-03-16T11:17:47
| 2022-03-19T04:41:32
| 2022-03-19T04:41:32
|
https://github.com/huggingface/datasets/issues/3939
| null |
qqaatw
| false
|
[
"Thanks for reporting @qqaatw.\r\n\r\n@mishig25 @sgugger do you think this can be tweaked in the new doc framework?\r\n- From: https://github.com/huggingface/datasets/blob/v2.0.0/\r\n- To: https://github.com/huggingface/datasets/blob/2.0.0/",
"@qqaatw thanks a lot for notifying about this issue!\r\n\r\nin comparison, transformers tags start with `v` like [this one](https://github.com/huggingface/transformers/blob/v4.17.0/src/transformers/models/bert/configuration_bert.py#L54).\r\n\r\nTherefore, we have to do one of 2 options below:\r\n1. Make necessary changes on doc-builder side\r\nOR\r\n2. Make [datasets tags](https://github.com/huggingface/datasets/tags) start with `v`, just like [transformers](https://github.com/huggingface/transformers/tags) (so that tag naming can be consistent amongst hf repos)\r\n\r\nI'll let you decide @albertvillanova @lhoestq @sgugger ",
"I think option 2 is the easiest and would provide harmony in the HF ecosystem but we can also add a doc config parameter to decide whether the default version has a v or not if `datasets` folks prefer their tags without a v :-)",
"For me it is OK to conform to the rest of libraries and tag/release with a preceding \"v\", rather than adding an extra argument to the doc builder just for `datasets`.\r\n\r\nLet me know if it is also OK for you @lhoestq. ",
"https://github.com/huggingface/doc-build/commit/f41c1e8ff900724213af4c75d287d8b61ecf6141\r\n\r\nhotfix so that `datasets` docs source button works correctly on hf.co/docs/datasets",
"We could add a tag for each release without a 'v' but it could be confusing on github to see both tags `v2.0.0` and `2.0.0` IMO (not sure if many users check them though). Removing the tags without 'v' would break our versioning for github datasets: the library looks for dataset scripts at the URLs like `https://raw.githubusercontent.com/huggingface/datasets/{revision}/datasets/{path}/{name}` where `revision` is equal to `datasets.__version__` (which doesn't start with a 'v') for all released versions of `datasets`.\r\n\r\nI think we could just have a parameter for the documentation - and having different URLs schemes for the source links that the users don't even see (they simply click on a button) is probably fine",
"This is done in #3943 to go along with [doc-builder#146](https://github.com/huggingface/doc-builder/pull/146).\r\n\r\nNote that this will only work for future versions, so once those two are merged, the actual v2.0.0 doc should be fixed. The easiest is to cherry-pick this commit on the v2.0.0 release branch (or on a new branch created from the 2.0.0 tag, with a name that triggers the doc building job, for instance v2.0.0-release)",
"Thanks for fixing @sgugger."
] |
1,170,875,417
| 3,938
|
Avoid info log messages from transformers in FrugalScore metric
|
closed
| 2022-03-16T11:11:29
| 2022-03-17T08:37:25
| 2022-03-17T08:37:24
|
https://github.com/huggingface/datasets/pull/3938
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3938",
"html_url": "https://github.com/huggingface/datasets/pull/3938",
"diff_url": "https://github.com/huggingface/datasets/pull/3938.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3938.patch",
"merged_at": "2022-03-17T08:37:24"
}
|
albertvillanova
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3938). All of your documentation changes will be reflected on that endpoint."
] |
1,170,832,006
| 3,937
|
Missing languages in lvwerra/github-code dataset
|
closed
| 2022-03-16T10:32:03
| 2022-03-22T07:09:23
| 2022-03-21T14:50:47
|
https://github.com/huggingface/datasets/issues/3937
| null |
Eytan-S
| false
|
[
"Thanks for contacting @Eytan-S.\r\n\r\nI think @lvwerra could better answer this. ",
"That seems to be an oversight - I originally planned to include them in the dataset and for some reason they were in the list of languages but not in the query. Since there is an issue with the deduplication step I'll rerun the pipeline anyway and will double check the query.\r\n\r\nThanks for reporting this @Eytan-S!",
"Can confirm that the two languages are indeed missing from the dataset. Here are the file counts per language:\r\n```Python\r\n{'Assembly': 82847,\r\n 'Batchfile': 236755,\r\n 'C': 14127969,\r\n 'C#': 6793439,\r\n 'C++': 7368473,\r\n 'CMake': 175076,\r\n 'CSS': 1733625,\r\n 'Dockerfile': 331966,\r\n 'FORTRAN': 141963,\r\n 'GO': 2259363,\r\n 'Haskell': 340521,\r\n 'HTML': 11165464,\r\n 'Java': 19515696,\r\n 'JavaScript': 11829024,\r\n 'Julia': 58177,\r\n 'Lua': 576279,\r\n 'Makefile': 679338,\r\n 'Markdown': 8454049,\r\n 'PHP': 11181930,\r\n 'Perl': 497490,\r\n 'PowerShell': 136827,\r\n 'Python': 7203553,\r\n 'Ruby': 4479767,\r\n 'Rust': 321765,\r\n 'SQL': 655657,\r\n 'Scala': 0,\r\n 'Shell': 1382786,\r\n 'TypeScript': 0,\r\n 'TeX': 250764,\r\n 'Visual Basic': 155371}\r\n ```",
"@Eytan-S check out v1.1 of the `github-code` dataset where issue should be fixed:\r\n\r\n| | Language |File Count| Size (GB)|\r\n|---:|:-------------|---------:|-------:|\r\n| 0 | Java | 19548190 | 107.7 |\r\n| 1 | C | 14143113 | 183.83 |\r\n| 2 | JavaScript | 11839883 | 87.82 |\r\n| 3 | HTML | 11178557 | 118.12 |\r\n| 4 | PHP | 11177610 | 61.41 |\r\n| 5 | Markdown | 8464626 | 23.09 |\r\n| 6 | C++ | 7380520 | 87.73 |\r\n| 7 | Python | 7226626 | 52.03 |\r\n| 8 | C# | 6811652 | 36.83 |\r\n| 9 | Ruby | 4473331 | 10.95 |\r\n| 10 | GO | 2265436 | 19.28 |\r\n| 11 | TypeScript | 1940406 | 24.59 |\r\n| 12 | CSS | 1734406 | 22.67 |\r\n| 13 | Shell | 1385648 | 3.01 |\r\n| 14 | Scala | 835755 | 3.87 |\r\n| 15 | Makefile | 679430 | 2.92 |\r\n| 16 | SQL | 656671 | 5.67 |\r\n| 17 | Lua | 578554 | 2.81 |\r\n| 18 | Perl | 497949 | 4.7 |\r\n| 19 | Dockerfile | 366505 | 0.71 |\r\n| 20 | Haskell | 340623 | 1.85 |\r\n| 21 | Rust | 322431 | 2.68 |\r\n| 22 | TeX | 251015 | 2.15 |\r\n| 23 | Batchfile | 236945 | 0.7 |\r\n| 24 | CMake | 175282 | 0.54 |\r\n| 25 | Visual Basic | 155652 | 1.91 |\r\n| 26 | FORTRAN | 142038 | 1.62 |\r\n| 27 | PowerShell | 136846 | 0.69 |\r\n| 28 | Assembly | 82905 | 0.78 |\r\n| 29 | Julia | 58317 | 0.29 |",
"Thanks @lvwerra. "
] |
1,170,713,473
| 3,936
|
Fix Wikipedia version and re-add tests
|
closed
| 2022-03-16T08:48:04
| 2022-03-16T17:04:07
| 2022-03-16T17:04:05
|
https://github.com/huggingface/datasets/pull/3936
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3936",
"html_url": "https://github.com/huggingface/datasets/pull/3936",
"diff_url": "https://github.com/huggingface/datasets/pull/3936.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3936.patch",
"merged_at": "2022-03-16T17:04:05"
}
|
albertvillanova
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3936). All of your documentation changes will be reflected on that endpoint."
] |
1,170,292,492
| 3,934
|
Create MAUVE metric card
|
closed
| 2022-03-15T21:36:07
| 2022-03-18T17:38:14
| 2022-03-18T17:34:13
|
https://github.com/huggingface/datasets/pull/3934
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3934",
"html_url": "https://github.com/huggingface/datasets/pull/3934",
"diff_url": "https://github.com/huggingface/datasets/pull/3934.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3934.patch",
"merged_at": "2022-03-18T17:34:13"
}
|
sashavor
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,170,253,605
| 3,933
|
Update README.md
|
closed
| 2022-03-15T20:52:05
| 2022-03-17T17:51:24
| 2022-03-17T17:47:37
|
https://github.com/huggingface/datasets/pull/3933
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3933",
"html_url": "https://github.com/huggingface/datasets/pull/3933",
"diff_url": "https://github.com/huggingface/datasets/pull/3933.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3933.patch",
"merged_at": "2022-03-17T17:47:37"
}
|
sashavor
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,170,221,773
| 3,932
|
Create SARI metric card
|
closed
| 2022-03-15T20:37:23
| 2022-03-18T17:37:01
| 2022-03-18T17:32:55
|
https://github.com/huggingface/datasets/pull/3932
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3932",
"html_url": "https://github.com/huggingface/datasets/pull/3932",
"diff_url": "https://github.com/huggingface/datasets/pull/3932.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3932.patch",
"merged_at": "2022-03-18T17:32:55"
}
|
sashavor
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,170,097,208
| 3,931
|
Add align_labels_with_mapping docs
|
closed
| 2022-03-15T19:24:57
| 2022-03-18T16:28:31
| 2022-03-18T16:24:33
|
https://github.com/huggingface/datasets/pull/3931
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3931",
"html_url": "https://github.com/huggingface/datasets/pull/3931",
"diff_url": "https://github.com/huggingface/datasets/pull/3931.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3931.patch",
"merged_at": "2022-03-18T16:24:33"
}
|
stevhliu
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,170,087,793
| 3,930
|
Create README.md
|
closed
| 2022-03-15T19:16:59
| 2022-04-04T15:23:15
| 2022-04-04T15:17:28
|
https://github.com/huggingface/datasets/pull/3930
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3930",
"html_url": "https://github.com/huggingface/datasets/pull/3930",
"diff_url": "https://github.com/huggingface/datasets/pull/3930.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3930.patch",
"merged_at": "2022-04-04T15:17:28"
}
|
sashavor
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |