| repo | github_id | github_node_id | number | html_url | api_url | title | body | state | state_reason | locked | comments_count | labels | assignees | created_at | updated_at | closed_at | milestone_title | snapshot_id | extracted_at | author_login | author_id | author_node_id | author_type | author_site_admin |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
huggingface/transformers | 377,736,844 | MDU6SXNzdWUzNzc3MzY4NDQ= | 6 | https://github.com/huggingface/transformers/issues/6 | https://api.github.com/repos/huggingface/transformers/issues/6 | Failure during pytest (and solution for python3) | ```
foo@bar:~/foo/bar/pytorch-pretrained-BERT$ pytest -sv ./tests/
===================================================================================================================== test session starts =====================================================================================================================
platform linux -- Python 3.6.6, pytest-3.9.1, py-1.7.0, pluggy-0.8.0 -- /home/foo/.pyenv/versions/anaconda3-5.1.0/bin/python
cachedir: .pytest_cache
rootdir: /data1/users/foo/bar/pytorch-pretrained-BERT, inifile:
plugins: remotedata-0.3.0, openfiles-0.3.0, doctestplus-0.1.3, cov-2.6.0, arraydiff-0.2, flaky-3.4.0
collected 0 items / 3 errors
=========================================================================================================================== ERRORS ============================================================================================================================
___________________________________________________________________________________________________________ ERROR collecting tests/modeling_test.py ___________________________________________________________________________________________________________
ImportError while importing test module '/data1/users/foo/bar/pytorch-pretrained-BERT/tests/modeling_test.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
tests/modeling_test.py:25: in <module>
import modeling
E ModuleNotFoundError: No module named 'modeling'
_________________________________________________________________________________________________________ ERROR collecting tests/optimization_test.py _________________________________________________________________________________________________________
ImportError while importing test module '/data1/users/foo/bar/pytorch-pretrained-BERT/tests/optimization_test.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
tests/optimization_test.py:23: in <module>
import optimization
E ModuleNotFoundError: No module named 'optimization'
_________________________________________________________________________________________________________ ERROR collecting tests/tokenization_test.py _________________________________________________________________________________________________________
ImportError while importing test module '/data1/users/foo/bar/pytorch-pretrained-BERT/tests/tokenization_test.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
tests/tokenization_test.py:22: in <module>
import tokenization
E ModuleNotFoundError: No module named 'tokenization'
===Flaky Test Report===
===End Flaky Test Report===
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 3 errors during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
=================================================================================================================== 3 error in 0.60 seconds ==================================================================================================================
```
In python 3, `python -m pytest -sv tests/` works fine. | closed | completed | false | 1 | [] | [] | 2018-11-06T08:23:29Z | 2018-11-07T23:43:42Z | 2018-11-07T23:43:42Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | dandelin | 3,676,247 | MDQ6VXNlcjM2NzYyNDc= | User | false |
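The difference is that `python -m pytest` prepends the current directory to `sys.path`, so top-level modules such as `modeling` become importable, while a bare `pytest` does not. A minimal workaround sketch, assuming the tests are launched from the repository root, is a `conftest.py` in the root; the exact path handling below is an assumption, not code from the repository:

```python
# conftest.py -- hypothetical sketch: make top-level modules such as
# `modeling`, `optimization` and `tokenization` importable when the test
# suite is launched with a bare `pytest` instead of `python -m pytest`.
import os
import sys

# Assumption: this file lives in the repository root, next to modeling.py.
REPO_ROOT = os.path.dirname(os.path.abspath(__file__))
if REPO_ROOT not in sys.path:
    sys.path.insert(0, REPO_ROOT)
```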
huggingface/transformers | 377,698,378 | MDU6SXNzdWUzNzc2OTgzNzg= | 5 | https://github.com/huggingface/transformers/issues/5 | https://api.github.com/repos/huggingface/transformers/issues/5 | MRPC hyperparameters question | When describing how you reproduced the MRPC results, you say:
"Our test ran on a few seeds with the original implementation hyper-parameters gave evaluation results between 82 and 87."
and you link to the SQuAD hyperparameters (https://github.com/google-research/bert#squad).
Is the link a mistake? Or did you use the SQuAD hyperparameters for tuning on MRPC? More generally, I'm wondering if there's a reason the MRPC dev set accuracy is slightly lower (in [82, 87] vs. [84, 88] reported by Google) | closed | completed | false | 5 | [] | [] | 2018-11-06T05:30:36Z | 2018-11-08T02:04:37Z | 2018-11-07T23:42:51Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | ethanjperez | 6,402,205 | MDQ6VXNlcjY0MDIyMDU= | User | false |
huggingface/transformers | 378,935,595 | MDU6SXNzdWUzNzg5MzU1OTU= | 9 | https://github.com/huggingface/transformers/issues/9 | https://api.github.com/repos/huggingface/transformers/issues/9 | Crash at the end of training | Hi, I tried running the Squad model this morning (on a single GPU with gradient accumulation over 3 steps) but after 3 hours of training, my job failed with the following output:
I was running the code, unmodified, from commit 3bfbc21376af691b912f3b6256bbeaf8e0046ba8
Is this an issue you know about?
```
11/08/2018 17:50:03 - INFO - __main__ - device cuda n_gpu 1 distributed training False
11/08/2018 17:50:18 - INFO - __main__ - *** Example ***
11/08/2018 17:50:18 - INFO - __main__ - unique_id: 1000000000
11/08/2018 17:50:18 - INFO - __main__ - example_index: 0
11/08/2018 17:50:18 - INFO - __main__ - doc_span_index: 0
11/08/2018 17:50:18 - INFO - __main__ - tokens: [CLS] to whom did the virgin mary allegedly appear in 1858 in lou ##rdes france ? [SEP] architectural ##ly , the school has a catholic character . atop the main building ' s gold dome is a golden statue of the virgin mary . immediately in front of the main building and facing it , is a copper statue of christ with arms up ##rai ##sed with the legend " ve ##ni ##te ad me om ##nes " . next to the main building is the basilica of the sacred heart . immediately behind the basilica is the gr ##otto , a marian place of prayer and reflection . it is a replica of the gr ##otto at lou ##rdes , france where the virgin mary reputed ##ly appeared to saint bern ##ade ##tte so ##ub ##iro ##us in 1858 . at the end of the main drive ( and in a direct line that connects through 3 statues and the gold dome ) , is a simple , modern stone statue of mary . [SEP]
11/08/2018 17:50:18 - INFO - __main__ - token_to_orig_map: 17:0 18:0 19:0 20:1 21:2 22:3 23:4 24:5 25:6 26:6 27:7 28:8 29:9 30:10 31:10 32:10 33:11 34:12 35:13 36:14 37:15 38:16 39:17 40:18 41:19 42:20 43:20 44:21 45:22 46:23 47:24 48:25 49:26 50:27 51:28 52:29 53:30 54:30 55:31 56:32 57:33 58:34 59:35 60:36 61:37 62:38 63:39 64:39 65:39 66:40 67:41 68:42 69:43 70:43 71:43 72:43 73:44 74:45 75:46 76:46 77:46 78:46 79:47 80:48 81:49 82:50 83:51 84:52 85:53 86:54 87:55 88:56 89:57 90:58 91:58 92:59 93:60 94:61 95:62 96:63 97:64 98:65 99:65 100:65 101:66 102:67 103:68 104:69 105:70 106:71 107:72 108:72 109:73 110:74 111:75 112:76 113:77 114:78 115:79 116:79 117:80 118:81 119:81 120:81 121:82 122:83 123:84 124:85 125:86 126:87 127:87 128:88 129:89 130:90 131:91 132:91 133:91 134:92 135:92 136:92 137:92 138:93 139:94 140:94 141:95 142:96 143:97 144:98 145:99 146:100 147:101 148:102 149:102 150:103 151:104 152:105 153:106 154:107 155:108 156:109 157:110 158:111 159:112 160:113 161:114 162:115 163:115 164:115 165:116 166:117 167:118 168:118 169:119 170:120 171:121 172:122 173:123 174:123
11/08/2018 17:50:18 - INFO - __main__ - token_is_max_context: 17:True 18:True 19:True 20:True 21:True 22:True 23:True 24:True 25:True 26:True 27:True 28:True 29:True 30:True 31:True 32:True 33:True 34:True 35:True 36:True 37:True 38:True 39:True 40:True 41:True 42:True 43:True 44:True 45:True 46:True 47:True 48:True 49:True 50:True 51:True 52:True 53:True 54:True 55:True 56:True 57:True 58:True 59:True 60:True 61:True 62:True 63:True 64:True 65:True 66:True 67:True 68:True 69:True 70:True 71:True 72:True 73:True 74:True 75:True 76:True 77:True 78:True 79:True 80:True 81:True 82:True 83:True 84:True 85:True 86:True 87:True 88:True 89:True 90:True 91:True 92:True 93:True 94:True 95:True 96:True 97:True 98:True 99:True 100:True 101:True 102:True 103:True 104:True 105:True 106:True 107:True 108:True 109:True 110:True 111:True 112:True 113:True 114:True 115:True 116:True 117:True 118:True 119:True 120:True 121:True 122:True 123:True 124:True 125:True 126:True 127:True 128:True 129:True 130:True 131:True 132:True 133:True 134:True 135:True 136:True 137:True 138:True 139:True 140:True 141:True 142:True 143:True 144:True 145:True 146:True 147:True 148:True 149:True 150:True 151:True 152:True 153:True 154:True 155:True 156:True 157:True 158:True 159:True 160:True 161:True 162:True 163:True 164:True 165:True 166:True 167:True 168:True 169:True 170:True 171:True 172:True 173:True 174:True
11/08/2018 17:50:18 - INFO - __main__ - input_ids: 101 2000 3183 2106 1996 6261 2984 9382 3711 1999 8517 1999 10223 26371 2605 1029 102 6549 2135 1010 1996 2082 2038 1037 3234 2839 1012 10234 1996 2364 2311 1005 1055 2751 8514 2003 1037 3585 6231 1997 1996 6261 2984 1012 3202 1999 2392 1997 1996 2364 2311 1998 5307 2009 1010 2003 1037 6967 6231 1997 4828 2007 2608 2039 14995 6924 2007 1996 5722 1000 2310 3490 2618 4748 2033 18168 5267 1000 1012 2279 2000 1996 2364 2311 2003 1996 13546 1997 1996 6730 2540 1012 3202 2369 1996 13546 2003 1996 24665 23052 1010 1037 14042 2173 1997 7083 1998 9185 1012 2009 2003 1037 15059 1997 1996 24665 23052 2012 10223 26371 1010 2605 2073 1996 6261 2984 22353 2135 2596 2000 3002 16595 9648 4674 2061 12083 9711 2271 1999 8517 1012 2012 1996 2203 1997 1996 2364 3298 1006 1998 1999 1037 3622 2240 2008 8539 2083 1017 11342 1998 1996 2751 8514 1007 1010 2003 1037 3722 1010 2715 2962 6231 1997 2984 1012 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
11/08/2018 17:50:18 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
... [truncated] ...
Iteration: 100%|█████████▉| 29314/29324 [3:27:55<00:04, 2.36it/s][A
Iteration: 100%|█████████▉| 29315/29324 [3:27:55<00:03, 2.44it/s][A
Iteration: 100%|█████████▉| 29316/29324 [3:27:56<00:03, 2.26it/s][A
Iteration: 100%|█████████▉| 29317/29324 [3:27:56<00:02, 2.35it/s][A
Iteration: 100%|█████████▉| 29318/29324 [3:27:56<00:02, 2.44it/s][A
Iteration: 100%|█████████▉| 29319/29324 [3:27:57<00:02, 2.25it/s][A
Iteration: 100%|█████████▉| 29320/29324 [3:27:57<00:01, 2.35it/s][A
Iteration: 100%|█████████▉| 29321/29324 [3:27:58<00:01, 2.41it/s][A
Iteration: 100%|█████████▉| 29322/29324 [3:27:58<00:00, 2.25it/s][A
Iteration: 100%|█████████▉| 29323/29324 [3:27:59<00:00, 2.36it/s][ATraceback (most recent call last):
File "code/run_squad.py", line 929, in <module>
main()
File "code/run_squad.py", line 862, in main
loss = model(input_ids, segment_ids, input_mask, start_positions, end_positions)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/0x0d4ff90d01fa4168983197b17d73bb0c_dependencies/code/modeling.py", line 467, in forward
start_loss = loss_fct(start_logits, start_positions)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/loss.py", line 862, in forward
ignore_index=self.ignore_index, reduction=self.reduction)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 1550, in cross_entropy
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 1403, in nll_loss
if input.size(0) != target.size(0):
RuntimeError: dimension specified as 0 but tensor has no dimensions
Exception ignored in: <bound method tqdm.__del__ of Iteration: 100%|█████████▉| 29323/29324 [3:27:59<00:00, 2.36it/s]>
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tqdm/_tqdm.py", line 931, in __del__
self.close()
File "/usr/local/lib/python3.6/dist-packages/tqdm/_tqdm.py", line 1133, in close
self._decr_instances(self)
File "/usr/local/lib/python3.6/dist-packages/tqdm/_tqdm.py", line 496, in _decr_instances
cls.monitor.exit()
File "/usr/local/lib/python3.6/dist-packages/tqdm/_monitor.py", line 52, in exit
self.join()
File "/usr/lib/python3.6/threading.py", line 1053, in join
raise RuntimeError("cannot join current thread")
RuntimeError: cannot join current thread
``` | closed | completed | false | 2 | [] | [] | 2018-11-08T22:01:57Z | 2018-11-09T08:17:26Z | 2018-11-09T08:17:26Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | bkgoksel | 6,436,274 | MDQ6VXNlcjY0MzYyNzQ= | User | false |
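The traceback shows `cross_entropy` receiving a target with no dimensions, which can happen when the last batch holds a single example and a squeeze collapses `start_positions` to a scalar. A minimal defensive sketch, assuming that is the cause (this guard is an illustration, not the fix that was eventually merged):

```python
import torch

def ensure_batch_dim(positions: torch.Tensor) -> torch.Tensor:
    """Hypothetical helper: restore the batch dimension if squeezing a
    size-1 batch collapsed the positions tensor to a 0-dim scalar."""
    if positions.dim() == 0:
        positions = positions.unsqueeze(0)
    return positions

# Usage sketch inside the QA head, before computing the start/end losses:
# start_positions = ensure_batch_dim(start_positions)
# end_positions = ensure_batch_dim(end_positions)
```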
huggingface/transformers | 379,422,090 | MDU6SXNzdWUzNzk0MjIwOTA= | 12 | https://github.com/huggingface/transformers/issues/12 | https://api.github.com/repos/huggingface/transformers/issues/12 | py2 code | If I convert the code to a Python 2 version, it doesn't converge; would you provide Python 2 code? | closed | completed | false | 1 | [] | [] | 2018-11-10T13:23:31Z | 2018-11-10T15:06:35Z | 2018-11-10T15:06:35Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | antxiaojun | 44,923,827 | MDQ6VXNlcjQ0OTIzODI3 | User | false |
huggingface/transformers | 379,440,759 | MDU6SXNzdWUzNzk0NDA3NTk= | 13 | https://github.com/huggingface/transformers/issues/13 | https://api.github.com/repos/huggingface/transformers/issues/13 | Bug in run_classifier.py | If I run only evaluation and not training, there are errors because `tr_loss` and `nb_tr_steps` are undefined. | closed | completed | false | 0 | [] | [] | 2018-11-10T17:16:01Z | 2018-11-10T17:49:15Z | 2018-11-10T17:45:28Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | rawatprateek | 32,642,916 | MDQ6VXNlcjMyNjQyOTE2 | User | false |
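A minimal sketch of the kind of guard that avoids this, assuming the variable names used by the example script; the helper itself and its placement are assumptions, not the fix that was merged:

```python
def build_result(eval_loss, eval_accuracy, do_train,
                 tr_loss=None, nb_tr_steps=None, global_step=None):
    """Hypothetical helper: only report training statistics when training
    actually ran, so eval-only runs don't touch undefined variables."""
    result = {"eval_loss": eval_loss, "eval_accuracy": eval_accuracy}
    if do_train:
        result["global_step"] = global_step
        result["loss"] = tr_loss / nb_tr_steps
    return result
```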
huggingface/transformers | 377,592,631 | MDU6SXNzdWUzNzc1OTI2MzE= | 3 | https://github.com/huggingface/transformers/issues/3 | https://api.github.com/repos/huggingface/transformers/issues/3 | run_squad questions | Thanks a lot for the port! I have some minor questions, for the run_squad file, I see two options for accumulating gradients, accumulate_gradients and gradient_accumulation_steps but it seems to me that it can be combined into one. The other one is for the global_step variable, seems we are only counting but not using this variable in gradient accumulating. Thanks again! | closed | completed | false | 15 | [] | [
"thomwolf",
"VictorSanh"
] | 2018-11-05T21:35:51Z | 2018-11-12T13:59:43Z | 2018-11-07T22:37:09Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | ZhaoyueCheng | 3,590,333 | MDQ6VXNlcjM1OTAzMzM= | User | false |
huggingface/transformers | 380,271,134 | MDU6SXNzdWUzODAyNzExMzQ= | 15 | https://github.com/huggingface/transformers/issues/15 | https://api.github.com/repos/huggingface/transformers/issues/15 | activation function in BERTIntermediate | BERTConfig is not used for `BERTIntermediate`'s activation function. `intermediate_act_fn` is always `gelu`. Is this normal?
https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/modeling.py#L240 | closed | completed | false | 4 | [] | [] | 2018-11-13T15:09:33Z | 2018-11-13T15:18:30Z | 2018-11-13T15:17:39Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | lukovnikov | 1,732,910 | MDQ6VXNlcjE3MzI5MTA= | User | false |
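For reference, a minimal sketch of how `BERTIntermediate` could honour `config.hidden_act` instead of hard-coding `gelu`; the `ACT2FN` mapping below is an assumption modelled on the TensorFlow implementation, not code taken from this repository:

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def gelu(x):
    # GELU variant used by the original BERT implementation
    return x * 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0)))

ACT2FN = {"gelu": gelu, "relu": F.relu, "swish": lambda x: x * torch.sigmoid(x)}

class BERTIntermediate(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.dense = nn.Linear(config.hidden_size, config.intermediate_size)
        # Pick the activation from the config instead of hard-coding gelu.
        self.intermediate_act_fn = (
            ACT2FN[config.hidden_act]
            if isinstance(config.hidden_act, str)
            else config.hidden_act
        )

    def forward(self, hidden_states):
        return self.intermediate_act_fn(self.dense(hidden_states))
```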
huggingface/transformers | 380,555,132 | MDU6SXNzdWUzODA1NTUxMzI= | 19 | https://github.com/huggingface/transformers/issues/19 | https://api.github.com/repos/huggingface/transformers/issues/19 | will you push the pytorch code for the pre-training process? | Can you push the PyTorch code for the pre-training process, such as the MLM task, please?
I really want to study it, but I can't understand the TensorFlow code; it's so complex.
thanks!!! | closed | completed | false | 1 | [] | [] | 2018-11-14T06:30:59Z | 2018-11-17T21:55:41Z | 2018-11-17T21:55:41Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | koukoulala | 30,341,159 | MDQ6VXNlcjMwMzQxMTU5 | User | false |
huggingface/transformers | 381,387,717 | MDU6SXNzdWUzODEzODc3MTc= | 24 | https://github.com/huggingface/transformers/issues/24 | https://api.github.com/repos/huggingface/transformers/issues/24 | [Feature request] Port SQuAD 2.0 support | Recently the Google team added support for Squad 2.0:
https://github.com/google-research/bert/commit/60454702590a6c69bd45c5d4258c7e17b8a3e1da
Would be great to also have it available in the Pytorch version. | closed | completed | false | 1 | [] | [] | 2018-11-15T23:47:04Z | 2018-11-17T21:57:08Z | 2018-11-17T21:57:07Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | elyase | 1,175,888 | MDQ6VXNlcjExNzU4ODg= | User | false |
huggingface/transformers | 381,490,584 | MDU6SXNzdWUzODE0OTA1ODQ= | 25 | https://github.com/huggingface/transformers/issues/25 | https://api.github.com/repos/huggingface/transformers/issues/25 | can you push the run-pretraining and create_pretraining_data codes? | I just want to study the code; I don't need to reach the same pre-training performance. | closed | completed | false | 1 | [] | [] | 2018-11-16T08:15:33Z | 2018-11-17T21:57:19Z | 2018-11-17T21:57:19Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | koukoulala | 30,341,159 | MDQ6VXNlcjMwMzQxMTU5 | User | false |
huggingface/transformers | 381,835,436 | MDU6SXNzdWUzODE4MzU0MzY= | 28 | https://github.com/huggingface/transformers/issues/28 | https://api.github.com/repos/huggingface/transformers/issues/28 | speed is very slow | Converting samples to features is very slow. | closed | completed | false | 2 | [] | [] | 2018-11-17T06:51:54Z | 2018-11-17T22:02:38Z | 2018-11-17T22:02:38Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | susht3 | 12,723,964 | MDQ6VXNlcjEyNzIzOTY0 | User | false |
huggingface/transformers | 381,250,921 | MDU6SXNzdWUzODEyNTA5MjE= | 23 | https://github.com/huggingface/transformers/issues/23 | https://api.github.com/repos/huggingface/transformers/issues/23 | ValueError while using --optimize_on_cpu | > Traceback (most recent call last): | 1/87970 [00:00<8:35:35, 2.84it/s]
File "./run_squad.py", line 990, in <module>
main()
File "./run_squad.py", line 922, in main
is_nan = set_optimizer_params_grad(param_optimizer, model.named_parameters(), test_nan=True)
File "./run_squad.py", line 691, in set_optimizer_params_grad
if test_nan and torch.isnan(param_model.grad).sum() > 0:
File "/people/sanjay/anaconda2/envs/bert_pytorch/lib/python3.5/site-packages/torch/functional.py", line 289, in isnan
raise ValueError("The argument is not a tensor", str(tensor))
ValueError: ('The argument is not a tensor', 'None')
Command:
CUDA_VISIBLE_DEVICES=0 python ./run_squad.py \
--vocab_file bert_large/uncased_L-24_H-1024_A-16/vocab.txt \
--bert_config_file bert_large/uncased_L-24_H-1024_A-16/bert_config.json \
--init_checkpoint bert_large/uncased_L-24_H-1024_A-16/pytorch_model.bin \
--do_lower_case \
--do_train \
--do_predict \
--train_file squad_dir/train-v1.1.json \
--predict_file squad_dir/dev-v1.1.json \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir outputs \
--train_batch_size 4 \
--gradient_accumulation_steps 2 \
--optimize_on_cpu
Error while using --optimize_on_cpu only.
Works fine without the argument.
GPU: Nvidia GTX 1080Ti Single GPU.
PS: I can only fit in train_batch_size 4 on the memory of a single GPU. | closed | completed | false | 3 | [] | [] | 2018-11-15T16:53:12Z | 2018-11-18T10:17:01Z | 2018-11-17T21:56:46Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | rsanjaykamath | 18,527,321 | MDQ6VXNlcjE4NTI3MzIx | User | false |
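The crash comes from `torch.isnan` being called on a gradient that is `None` (some parameters receive no gradient during SQuAD fine-tuning). A minimal sketch of the kind of guard that avoids it, written in the spirit of `set_optimizer_params_grad` from the example script; skipping `None` gradients is an assumption about the intended behaviour, not the fix that was merged:

```python
import torch

def set_optimizer_params_grad(named_params_optimizer, named_params_model, test_nan=False):
    """Hedged sketch: copy GPU gradients onto the CPU copy of the parameters,
    skipping parameters whose gradient is None instead of crashing on them."""
    is_nan = False
    for (name_opti, param_opti), (name_model, param_model) in zip(
            named_params_optimizer, named_params_model):
        if name_opti != name_model:
            raise ValueError(f"parameter mismatch: {name_opti} != {name_model}")
        if param_model.grad is None:
            # Some weights may never receive a gradient in this task; skip them.
            continue
        if test_nan and torch.isnan(param_model.grad).sum() > 0:
            is_nan = True
        if param_opti.grad is None:
            param_opti.grad = torch.zeros_like(param_opti.data)
        param_opti.grad.data.copy_(param_model.grad.data)
    return is_nan
```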
huggingface/transformers | 381,998,040 | MDU6SXNzdWUzODE5OTgwNDA= | 35 | https://github.com/huggingface/transformers/issues/35 | https://api.github.com/repos/huggingface/transformers/issues/35 | issues with accents on convert_ids_to_tokens() | Hello, the BertTokenizer seems to lose accents when convert_ids_to_tokens() is used:
Example:
- original sentence: "great breakfasts in a nice furnished cafè, slightly bohemian."
- corresponding list of token produced : ['great', 'breakfast', '##s', 'in', 'a', 'nice', 'fur', '##nis', '##hed', 'cafe', ',', 'slightly', 'bohemia', '##n', '.']
Here the problem is in "cafe" that loses its accent. I'm using BertTokenizer.from_pretrained('Bert-base-multilingual') as the tokenizer, I also tried with "Bert-base-uncased" and experienced the same issue.
Thanks for this great work! | closed | completed | false | 2 | [] | [] | 2018-11-18T20:41:24Z | 2018-11-19T08:39:56Z | 2018-11-19T08:39:56Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | perezjln | 5,373,778 | MDQ6VXNlcjUzNzM3Nzg= | User | false |
huggingface/transformers | 381,965,833 | MDU6SXNzdWUzODE5NjU4MzM= | 34 | https://github.com/huggingface/transformers/issues/34 | https://api.github.com/repos/huggingface/transformers/issues/34 | Can not find vocabulary file for Chinese model | After I convert the TF model to pytorch model, I run a classification task on a new Chinese dataset, but get this:
CUDA_VISIBLE_DEVICES=3 python run_classifier.py --task_name weibo --do_eval --do_train --bert_model chinese_L-12_H-768_A-12 --max_seq_length 128 --train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir bert_result
11/18/2018 21:56:59 - INFO - __main__ - device cuda n_gpu 1 distributed training False
11/18/2018 21:56:59 - INFO - pytorch_pretrained_bert.tokenization - loading vocabulary file chinese_L-12_H-768_A-12
Traceback (most recent call last):
File "run_classifier.py", line 661, in <module>
main()
File "run_classifier.py", line 508, in main
tokenizer = BertTokenizer.from_pretrained(args.bert_model)
File "/home/lin/jpmorgan/pytorch-pretrained-BERT/pytorch_pretrained_bert/tokenization.py", line 141, in from_pretrained
tokenizer = cls(resolved_vocab_file, do_lower_case)
File "/home/lin/jpmorgan/pytorch-pretrained-BERT/pytorch_pretrained_bert/tokenization.py", line 94, in __init__
"model use `tokenizer = BertTokenizer.from_pretrained(PRETRAINED_MODEL_NAME)`".format(vocab_file))
ValueError: Can't find a vocabulary file at path 'chinese_L-12_H-768_A-12'. To load the vocabulary from a Google pretrained model use `tokenizer = BertTokenizer.from_pretrained(PRETRAINED_MODEL_NAME)` | closed | completed | false | 5 | [] | [] | 2018-11-18T14:33:58Z | 2018-11-19T11:13:14Z | 2018-11-19T03:17:31Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | zlinao | 33,000,929 | MDQ6VXNlcjMzMDAwOTI5 | User | false |
huggingface/transformers | 382,489,751 | MDU6SXNzdWUzODI0ODk3NTE= | 41 | https://github.com/huggingface/transformers/issues/41 | https://api.github.com/repos/huggingface/transformers/issues/41 | Typo in README | I think I spotted a typo in the README file under the Usage header. There is a piece of code that uses `BertTokenizer` and the typo is on this line:
`tokenized_text = "Who was Jim Henson ? Jim Henson was a puppeteer"`
I think `tokenized_text` should be replaced with `text`, since the next line is
`tokenized_text = tokenizer.tokenize(text)` | closed | completed | false | 1 | [] | [] | 2018-11-20T03:52:35Z | 2018-11-20T09:02:15Z | 2018-11-20T09:02:15Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | weiyumou | 9,312,916 | MDQ6VXNlcjkzMTI5MTY= | User | false |
huggingface/transformers | 382,300,869 | MDU6SXNzdWUzODIzMDA4Njk= | 39 | https://github.com/huggingface/transformers/issues/39 | https://api.github.com/repos/huggingface/transformers/issues/39 | Command-line interface Document Bug | There is a bug in README.md about Command-line interface:
`export BERT_BASE_DIR=chinese_L-12_H-768_A-12`
**Wrong:**
```
pytorch_pretrained_bert convert_tf_checkpoint_to_pytorch \
--tf_checkpoint_path $BERT_BASE_DIR/bert_model.ckpt.index \
--bert_config_file $BERT_BASE_DIR/bert_config.json \
--pytorch_dump_path $BERT_BASE_DIR/pytorch_model.bin
```
**Right:**
```
pytorch_pretrained_bert convert_tf_checkpoint_to_pytorch \
$BERT_BASE_DIR/bert_model.ckpt.index \
$BERT_BASE_DIR/bert_config.json \
$BERT_BASE_DIR/pytorch_model.bin
```
| closed | completed | false | 1 | [] | [] | 2018-11-19T16:42:56Z | 2018-11-20T09:03:06Z | 2018-11-20T09:03:06Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | delldu | 31,266,222 | MDQ6VXNlcjMxMjY2MjIy | User | false |
huggingface/transformers | 381,939,792 | MDU6SXNzdWUzODE5Mzk3OTI= | 33 | https://github.com/huggingface/transformers/issues/33 | https://api.github.com/repos/huggingface/transformers/issues/33 | [Bug report] Ineffective no_decay when using BERTAdam | https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_classifier.py#L505-L508
With this code, all parameters are decayed because the condition "parameter_name in no_decay" will never be satisfied.
I've made a PR #32 to fix it. | closed | completed | false | 1 | [] | [] | 2018-11-18T08:28:52Z | 2018-11-20T09:07:58Z | 2018-11-20T09:07:58Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | xiaoda99 | 6,015,633 | MDQ6VXNlcjYwMTU2MzM= | User | false |
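For reference, a sketch of the kind of corrected grouping (not necessarily identical to PR #32): the check has to look for the decay-exempt substrings inside each parameter name, rather than test whether the whole name equals one of them. Here `model` is assumed to be the `BertForSequenceClassification` instance built earlier in the script:

```python
# Hedged sketch of the intended parameter grouping in run_classifier.py.
param_optimizer = list(model.named_parameters())
no_decay = ['bias', 'gamma', 'beta']
optimizer_grouped_parameters = [
    # parameters whose name contains none of the no_decay substrings get weight decay
    {'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)],
     'weight_decay_rate': 0.01},
    # bias / LayerNorm parameters are excluded from weight decay
    {'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)],
     'weight_decay_rate': 0.0},
]
```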
huggingface/transformers | 382,579,717 | MDU6SXNzdWUzODI1Nzk3MTc= | 45 | https://github.com/huggingface/transformers/issues/45 | https://api.github.com/repos/huggingface/transformers/issues/45 | Issue of `bert_model` arg in `run_classify.py` | Hi,
I am trying to understand the `bert_model` arg in `run_classify.py`. In the file, I can see
```
tokenizer = BertTokenizer.from_pretrained(args.bert_model)
```
where `bert_model` is expected to be the vocab text file of the model
However, I also see
```
model = BertForSequenceClassification.from_pretrained(args.bert_model, len(label_list))
```
where `bert_model` is expected to be a archive file containing the model checkpoint and config.
Please advise on the correct use of `bert_model` if I have already converted my pretrained model locally.
Thanks! | closed | completed | false | 1 | [] | [] | 2018-11-20T09:48:09Z | 2018-11-20T13:07:14Z | 2018-11-20T13:07:14Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | llidev | 29,957,883 | MDQ6VXNlcjI5OTU3ODgz | User | false |
huggingface/transformers | 382,553,589 | MDU6SXNzdWUzODI1NTM1ODk= | 43 | https://github.com/huggingface/transformers/issues/43 | https://api.github.com/repos/huggingface/transformers/issues/43 | grad is None in squad example | Hi guys, I tried the `run_squad` example and got:
```
Traceback (most recent call last): | 0/7331 [00:00<?, ?it/s]
File "examples/run_squad.py", line 973, in <module>
main()
File "examples/run_squad.py", line 904, in main
param.grad.data = param.grad.data / args.loss_scale
AttributeError: 'NoneType' object has no attribute 'data'
```
I found that one of the param.grads is None, so param.grad.data doesn't exist.
By the way, I downloaded the data myself from the URLs in this project. My OS is Ubuntu 18.04, with PyTorch 0.4.1 and a GTX 1080 Ti GPU.
Has anyone else encountered this situation?
Any help would be appreciated; thanks in advance... | closed | completed | false | 2 | [] | [] | 2018-11-20T08:38:03Z | 2018-11-20T23:04:28Z | 2018-11-20T23:04:28Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | vpegasus | 22,723,154 | MDQ6VXNlcjIyNzIzMTU0 | User | false |
huggingface/transformers | 383,028,844 | MDU6SXNzdWUzODMwMjg4NDQ= | 49 | https://github.com/huggingface/transformers/issues/49 | https://api.github.com/repos/huggingface/transformers/issues/49 | Multilingual Issue | Dear authors,
I have two questions.
First, how can I use multilingual pre-trained BERT in pytorch?
Is it enough to download the model to $BERT_BASE_DIR?
Second is tokenization issue.
For Chinese and Japanese, the tokenizer may work; however, for Korean, it shows a different result than I expected.
```
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
text = "안녕하세요"
tokenized_text = tokenizer.tokenize(text)
print(tokenized_text)
```
` ['ᄋ', '##ᅡ', '##ᆫ', '##ᄂ', '##ᅧ', '##ᆼ', '##ᄒ', '##ᅡ', '##ᄉ', '##ᅦ', '##ᄋ', '##ᅭ']
The result is based not on 'character' but on 'byte-based character'.
It may come from a Unicode issue. (I expect ['안녕', '##하세요'])
| closed | completed | false | 1 | [] | [] | 2018-11-21T09:32:32Z | 2018-11-21T09:39:42Z | 2018-11-21T09:39:41Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | hahmyg | 3,884,429 | MDQ6VXNlcjM4ODQ0Mjk= | User | false |
huggingface/transformers | 383,586,156 | MDU6SXNzdWUzODM1ODYxNTY= | 52 | https://github.com/huggingface/transformers/issues/52 | https://api.github.com/repos/huggingface/transformers/issues/52 | UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 3920: character maps to <undefined> | Installed pytorch-pretrained-BERT from source, Python 3.7, Windows 10
When I run the following snippet:
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM
# Load pre-trained model tokenizer (vocabulary)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
I get the following:
---------------------------------------------------------------------------
UnicodeDecodeError Traceback (most recent call last)
<ipython-input-2-7725148c607d> in <module>()
3
4 # Load pre-trained model tokenizer (vocabulary)
----> 5 tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
~\Anaconda3\lib\site-packages\pytorch_pretrained_bert\tokenization.py in from_pretrained(cls, pretrained_model_name, do_lower_case)
139 vocab_file, resolved_vocab_file))
140 # Instantiate tokenizer.
--> 141 tokenizer = cls(resolved_vocab_file, do_lower_case)
142 except FileNotFoundError:
143 logger.error(
~\Anaconda3\lib\site-packages\pytorch_pretrained_bert\tokenization.py in __init__(self, vocab_file, do_lower_case)
93 "Can't find a vocabulary file at path '{}'. To load the vocabulary from a Google pretrained "
94 "model use `tokenizer = BertTokenizer.from_pretrained(PRETRAINED_MODEL_NAME)`".format(vocab_file))
---> 95 self.vocab = load_vocab(vocab_file)
96 self.ids_to_tokens = collections.OrderedDict(
97 [(ids, tok) for tok, ids in self.vocab.items()])
~\Anaconda3\lib\site-packages\pytorch_pretrained_bert\tokenization.py in load_vocab(vocab_file)
68 with open(vocab_file, "r", encoding="utf8") as reader:
69 while True:
---> 70 token = convert_to_unicode(reader.readline())
71 if not token:
72 break
~\Anaconda3\lib\encodings\cp1252.py in decode(self, input, final)
21 class IncrementalDecoder(codecs.IncrementalDecoder):
22 def decode(self, input, final=False):
---> 23 return codecs.charmap_decode(input,self.errors,decoding_table)[0]
24
25 class StreamWriter(Codec,codecs.StreamWriter):
UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 3920: character maps to <undefined>
| closed | completed | false | 2 | [] | [] | 2018-11-22T15:42:08Z | 2018-11-23T11:21:57Z | 2018-11-23T11:21:56Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | superchthonic | 5,455,837 | MDQ6VXNlcjU0NTU4Mzc= | User | false |
huggingface/transformers | 384,044,666 | MDU6SXNzdWUzODQwNDQ2NjY= | 55 | https://github.com/huggingface/transformers/issues/55 | https://api.github.com/repos/huggingface/transformers/issues/55 | Loss calculation error | https://github.com/huggingface/pytorch-pretrained-BERT/blob/982339d82984466fde3b1466f657a03200aa2ffb/pytorch_pretrained_bert/modeling.py#L744
Got `ValueError: Expected target size (1, 30522), got torch.Size([1, 11])` at line 744 of `modeling.py`. I think the line should be changed to `masked_lm_loss = loss_fct(prediction_scores.view([-1, self.config.vocab_size]), masked_lm_labels.view([-1]))`. | closed | completed | false | 3 | [] | [] | 2018-11-25T03:48:17Z | 2018-11-26T08:52:00Z | 2018-11-26T08:52:00Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | jwang-lp | 944,876 | MDQ6VXNlcjk0NDg3Ng== | User | false |
huggingface/transformers | 383,967,106 | MDU6SXNzdWUzODM5NjcxMDY= | 54 | https://github.com/huggingface/transformers/issues/54 | https://api.github.com/repos/huggingface/transformers/issues/54 | example in BertForSequenceClassification() conflicts with the api | Hi, firstly, I admire you for the great job, but I encountered 2 problems when I used it:
**1**. `UnicodeDecodeError: 'gbk' codec can't decode byte 0x85 in position 4527: illegal multibyte sequence`,
same problem as issue 52 when I execute `BertTokenizer.from_pretrained('bert-base-uncased')`, but I can successfully execute `BertForNextSentencePrediction.from_pretrained('bert-base-uncased')`, >.<
**2**. in the pytorch-pretrained-BERT/pytorch_pretrained_bert/modeling.py,
line 761 --> ```
`token_type_ids`: an optional torch.LongTensor of shape [batch_size, sequence_length] **with the token
types indices selected in [0, 1]**. Type 0 corresponds to a `sentence A` and type 1 corresponds to
a `sentence B` token (see BERT paper for more details).
```
but in the following example, in **line 784** --> `token_type_ids = torch.LongTensor([[0, 0, 1], [0, **2**, 0]])`, why does the '2' appear? I am confused. Also, is a pattern like '0, 1, 0' correct, or should it look like [000000111111], that is, continuous '0's followed by continuous '1's?
ty. | closed | completed | false | 1 | [] | [] | 2018-11-24T07:27:50Z | 2018-11-26T08:54:47Z | 2018-11-26T08:54:47Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | labixiaoK | 24,908,364 | MDQ6VXNlcjI0OTA4MzY0 | User | false |
huggingface/transformers | 383,162,319 | MDU6SXNzdWUzODMxNjIzMTk= | 51 | https://github.com/huggingface/transformers/issues/51 | https://api.github.com/repos/huggingface/transformers/issues/51 | Missing options/arguments in run_squad.py for BERT Large | Thanks for the great code. However, the `run_squad.py` for BERT Large seems not to have the `vocab_file` and `bert_config_file` (or other) options/arguments. Did you push the latest version?
Also, it is looking for a pytorch model file (a bin file). Does it need to be there?
I also had to add this line to the file to make BERT base to run on Squad 1.1:
`parser.add_argument('--do_lower_case', action="store_true", default=True, help="Lowercase the input")` | closed | completed | false | 1 | [] | [] | 2018-11-21T15:10:45Z | 2018-11-26T08:57:23Z | 2018-11-26T08:57:23Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | avisil | 43,005,718 | MDQ6VXNlcjQzMDA1NzE4 | User | false |
huggingface/transformers | 382,297,444 | MDU6SXNzdWUzODIyOTc0NDQ= | 38 | https://github.com/huggingface/transformers/issues/38 | https://api.github.com/repos/huggingface/transformers/issues/38 | truncated normal initializer | I have a reasonable truncated normal approximation. (Actually that is what tf does).
https://discuss.pytorch.org/t/implementing-truncated-normal-initializer/4778/16?u=ruotianluo
| closed | completed | false | 2 | [] | [] | 2018-11-19T16:35:08Z | 2018-11-26T09:42:42Z | 2018-11-26T09:42:42Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | ruotianluo | 16,023,153 | MDQ6VXNlcjE2MDIzMTUz | User | false |
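For reference, a sketch of the resampling approximation described in threads like the one linked (draw several candidates per element and keep one inside two standard deviations); the exact code below is an assumption, not code from this repository:

```python
import torch

def truncated_normal_(tensor: torch.Tensor, mean: float = 0.0, std: float = 0.02) -> torch.Tensor:
    """Approximate tf.truncated_normal_initializer: keep, for each element,
    a standard-normal draw that falls within two standard deviations."""
    size = tensor.shape
    tmp = tensor.new_empty(size + (4,)).normal_()   # 4 candidate draws per element
    valid = (tmp < 2) & (tmp > -2)                  # candidates inside +/- 2
    ind = valid.long().max(-1, keepdim=True)[1]     # index of a valid candidate
    tensor.data.copy_(tmp.gather(-1, ind).squeeze(-1))
    tensor.data.mul_(std).add_(mean)
    return tensor
```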
huggingface/transformers | 384,525,339 | MDU6SXNzdWUzODQ1MjUzMzk= | 57 | https://github.com/huggingface/transformers/issues/57 | https://api.github.com/repos/huggingface/transformers/issues/57 | Missing function convert_to_unicode in tokenization.py | The function _convert_to_unicode_ is not in tokenization.py but used to be there in v0.1.2. When fine tuning with run_classifier.py, you get an ImportError: cannot import name 'convert_to_unicode'.
https://github.com/huggingface/pytorch-pretrained-BERT/blob/ce37b8e4819142171b61558e64f7dcb0286e9937/examples/run_classifier.py#L33 | closed | completed | false | 1 | [] | [] | 2018-11-26T21:50:15Z | 2018-11-26T22:33:47Z | 2018-11-26T22:33:47Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | ptrichel | 15,148,709 | MDQ6VXNlcjE1MTQ4NzA5 | User | false |
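Until the example script is updated, a small compatibility shim with the same behaviour as the removed helper can be dropped in; this reimplementation is an assumption based on what the function used to do, not code copied from v0.1.2:

```python
def convert_to_unicode(text):
    """Python 3 sketch of the helper removed from tokenization.py:
    return `text` as a str, decoding UTF-8 bytes if necessary."""
    if isinstance(text, str):
        return text
    if isinstance(text, bytes):
        return text.decode("utf-8", "ignore")
    raise ValueError(f"Unsupported string type: {type(text)}")
```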
huggingface/transformers | 382,576,559 | MDU6SXNzdWUzODI1NzY1NTk= | 44 | https://github.com/huggingface/transformers/issues/44 | https://api.github.com/repos/huggingface/transformers/issues/44 | Race condition when prepare pretrained model in distributed training | Hi,
I launched two processes per node to run distributed run_classifier.py. However, I occasionally get the error below:
```
11/20/2018 09:31:48 - INFO - pytorch_pretrained_bert.file_utils - copying /tmp/tmpa25_y4es to cache at /root/.pytorch_pretrained_bert/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba
93%|█████████▎| 381028352/407873900 [00:11<00:01, 14366075.22B/s]
94%|█████████▍| 383812608/407873900 [00:11<00:01, 16210783.00B/s]
95%|█████████▍| 386455552/407873900 [00:11<00:01, 16205260.89B/s]11/20/2018 09:31:49 - INFO - pytorch_pretrained_bert.file_utils - creating metadata file for /root/.pytorch_pretrained_bert/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba
11/20/2018 09:31:49 - INFO - pytorch_pretrained_bert.file_utils - removing temp file /tmp/tmpa25_y4es
95%|█████████▌| 388946944/407873900 [00:11<00:01, 18097539.03B/s]11/20/2018 09:31:49 - INFO - pytorch_pretrained_bert.modeling - loading archive file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz from cache at /root/.pytorch_pretrained_bert/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba
11/20/2018 09:31:49 - INFO - pytorch_pretrained_bert.modeling - extracting archive file /root/.pytorch_pretrained_bert/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba to temp dir /tmp/tmpvxvnr8_1
97%|█████████▋| 393660416/407873900 [00:11<00:00, 22199883.93B/s]
98%|█████████▊| 399411200/407873900 [00:11<00:00, 27211860.00B/s]
99%|█████████▉| 405128192/407873900 [00:11<00:00, 32287252.94B/s]
100%|██████████| 407873900/407873900 [00:11<00:00, 34098120.40B/s]
11/20/2018 09:31:49 - INFO - pytorch_pretrained_bert.file_utils - copying /tmp/tmp5fcm4v8x to cache at /root/.pytorch_pretrained_bert/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba
Traceback (most recent call last):
File "examples/run_classifier.py", line 629, in <module>
main()
File "examples/run_classifier.py", line 485, in main
model = BertForSequenceClassification.from_pretrained(args.bert_model, len(label_list))
File "/azureml-envs/azureml_49b6ba977c83839baa597001c9b55a6f/lib/python3.6/site-packages/pytorch_pretrained_bert-0.1.2-py3.6.egg/pytorch_pretrained_bert/modeling.py", line 495, in from_pretrained
archive.extractall(tempdir)
File "/azureml-envs/azureml_49b6ba977c83839baa597001c9b55a6f/lib/python3.6/tarfile.py", line 2007, in extractall
numeric_owner=numeric_owner)
File "/azureml-envs/azureml_49b6ba977c83839baa597001c9b55a6f/lib/python3.6/tarfile.py", line 2049, in extract
numeric_owner=numeric_owner)
File "/azureml-envs/azureml_49b6ba977c83839baa597001c9b55a6f/lib/python3.6/tarfile.py", line 2119, in _extract_member
self.makefile(tarinfo, targetpath)
File "/azureml-envs/azureml_49b6ba977c83839baa597001c9b55a6f/lib/python3.6/tarfile.py", line 2168, in makefile
copyfileobj(source, target, tarinfo.size, ReadError, bufsize)
File "/azureml-envs/azureml_49b6ba977c83839baa597001c9b55a6f/lib/python3.6/tarfile.py", line 248, in copyfileobj
buf = src.read(bufsize)
File "/azureml-envs/azureml_49b6ba977c83839baa597001c9b55a6f/lib/python3.6/gzip.py", line 276, in read
return self._buffer.read(size)
File "/azureml-envs/azureml_49b6ba977c83839baa597001c9b55a6f/lib/python3.6/_compression.py", line 68, in readinto
data = self.read(len(byte_view))
File "/azureml-envs/azureml_49b6ba977c83839baa597001c9b55a6f/lib/python3.6/gzip.py", line 482, in read
raise EOFError("Compressed file ended before the "
EOFError: Compressed file ended before the end-of-stream marker was reached
```
It looks like a race condition where two processes are simultaneously writing the model file to `/root/.pytorch_pretrained_bert/`.
Please advise on any workaround. Thanks! | closed | completed | false | 4 | [] | [] | 2018-11-20T09:40:25Z | 2018-11-27T09:16:02Z | 2018-11-26T09:23:03Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | llidev | 29,957,883 | MDQ6VXNlcjI5OTU3ODgz | User | false |
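A common workaround sketch, assuming `torch.distributed` has already been initialised: let only local rank 0 download and extract the archive, and make the other ranks wait at a barrier before reading the now-populated cache. The helper name and its placement inside `run_classifier.py` are assumptions:

```python
import torch.distributed as dist

def load_pretrained_rank0_first(model_cls, bert_model, local_rank, *args, **kwargs):
    """Hypothetical helper: avoid concurrent downloads of the same archive by
    letting local rank 0 populate the cache before the other ranks load it."""
    if local_rank not in (-1, 0):
        dist.barrier()   # wait until rank 0 has downloaded and extracted the model
    model = model_cls.from_pretrained(bert_model, *args, **kwargs)
    if local_rank == 0:
        dist.barrier()   # release the waiting ranks
    return model
```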
huggingface/transformers | 383,946,736 | MDU6SXNzdWUzODM5NDY3MzY= | 53 | https://github.com/huggingface/transformers/issues/53 | https://api.github.com/repos/huggingface/transformers/issues/53 | Multi-GPU training vs Distributed training | Hi,
I have a question about Multi-GPU vs Distributed training, probably unrelated to BERT itself.
I have a 4-GPU server, and was trying to run `run_classifier.py` in two ways:
(a) run single-node distributed training with 4 processes and minibatch of 32 each
(b) run Multi-GPU training with minibatch of 128, and all other hyperparams keep the same
Intuitively I believe (a) and (b) should yield close accuracy and training times. Below please find my observations:
1. (a) runs ~20% faster than (b).
2. (b) yields a final evaluation accuracy ~4% better than (a).
The first looks reasonable, since I guess the loss.mean() is done by the CPU, which may be slower than using NCCL directly. However, I don't quite understand the second observation. Can you please give any hint or reference about the possible cause?
Thanks!
| closed | completed | false | 2 | [] | [] | 2018-11-24T00:49:45Z | 2018-11-27T09:22:06Z | 2018-11-26T09:03:23Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | llidev | 29,957,883 | MDQ6VXNlcjI5OTU3ODgz | User | false |
huggingface/transformers | 386,047,173 | MDU6SXNzdWUzODYwNDcxNzM= | 67 | https://github.com/huggingface/transformers/issues/67 | https://api.github.com/repos/huggingface/transformers/issues/67 | `TypeError: object of type 'NoneType' has no len()` when tuning on squad | When running the following command for tuning on squad, I am getting a petty error inside logger `TypeError: object of type 'NoneType' has no len()`. Any thoughts what could be the main cause of the problem?
Full log:
```
python3.6 examples/run_squad.py \
> --bert_model bert-base-uncased \
> --do_train \
> --do_predict \
> --train_file $SQUAD_DIR/train-v1.1.json \
> --predict_file $SQUAD_DIR/dev-v1.1.json \
> --train_batch_size 12 \
> --learning_rate 3e-5 \
> --num_train_epochs 2.0 \
> --max_seq_length 384 \
> --doc_stride 128 \
> --output_dir out
.
.
.
11/29/2018 23:10:14 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
11/29/2018 23:10:14 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
11/29/2018 23:10:14 - INFO - __main__ - start_position: 47
11/29/2018 23:10:14 - INFO - __main__ - end_position: 48
11/29/2018 23:10:14 - INFO - __main__ - answer: the 1870s
11/29/2018 23:14:38 - INFO - __main__ - Saving train features into cached file /shared/shelley/khashab2/pytorch-pretrained-BERT/squad/train-v1.1.json_bert-base-uncased_384_128_64
11/29/2018 23:14:51 - INFO - __main__ - ***** Running training *****
11/29/2018 23:14:51 - INFO - __main__ - Num orig examples = 87599
Traceback (most recent call last):
File "examples/run_squad.py", line 989, in <module>
main()
File "examples/run_squad.py", line 884, in main
logger.info(" Num split examples = %d", len(train_features))
TypeError: object of type 'NoneType' has no len()
``` | closed | completed | false | 1 | [] | [] | 2018-11-30T05:48:04Z | 2018-11-30T13:24:03Z | 2018-11-30T13:24:02Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | danyaljj | 2,441,454 | MDQ6VXNlcjI0NDE0NTQ= | User | false |
huggingface/transformers | 386,303,565 | MDU6SXNzdWUzODYzMDM1NjU= | 71 | https://github.com/huggingface/transformers/issues/71 | https://api.github.com/repos/huggingface/transformers/issues/71 | run_squad script gets stuck | Hello,
I am trying to run the squad fine tuning script, but it hangs after printing out a few predictions. I am attaching the log. Can you help take a look?
I am running the script on a machine with 8 M40s.
[bert_squad.log](https://github.com/huggingface/pytorch-pretrained-BERT/files/2634588/bert_squad.log)
Best,
Samyam | closed | completed | false | 3 | [] | [] | 2018-11-30T18:39:54Z | 2018-11-30T20:53:04Z | 2018-11-30T19:47:07Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | samyam | 3,409,344 | MDQ6VXNlcjM0MDkzNDQ= | User | false |
huggingface/transformers | 384,276,059 | MDU6SXNzdWUzODQyNzYwNTk= | 56 | https://github.com/huggingface/transformers/issues/56 | https://api.github.com/repos/huggingface/transformers/issues/56 | [Feature request ] Add support for the new cased version of the multilingual model | https://github.com/google-research/bert/commit/332a68723c34062b8f58e5fec3e430db4563320a | closed | completed | false | 1 | [] | [] | 2018-11-26T10:56:18Z | 2018-11-30T22:28:49Z | 2018-11-30T22:28:32Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | elyase | 1,175,888 | MDQ6VXNlcjExNzU4ODg= | User | false |
huggingface/transformers | 385,304,675 | MDU6SXNzdWUzODUzMDQ2NzU= | 61 | https://github.com/huggingface/transformers/issues/61 | https://api.github.com/repos/huggingface/transformers/issues/61 | BERTConfigs in example usages in `modeling.py` are not OK (?) | Hi!
In the `config` definition https://github.com/huggingface/pytorch-pretrained-BERT/blob/21f0196412115876da1c38652d22d1f7a14b36ff/pytorch_pretrained_bert/modeling.py#L848
in the Example usage of `BertForSequenceClassification` in `modeling.py`, there's things I don't understand:
- `vocab_size` is not an acceptable parameter name, judging by the `BertConfig` class definition https://github.com/huggingface/pytorch-pretrained-BERT/blob/21f0196412115876da1c38652d22d1f7a14b36ff/pytorch_pretrained_bert/modeling.py#L70
- even by changing `vocab_size` into `vocab_size_or_config_json_file`, for the choice of the other params given in the example i.e.
```
vocab_size=32000, hidden_size=512, num_hidden_layers=8, num_attention_heads=6, intermediate_size=1024
```
I get:
`ValueError: The hidden size (512) is not a multiple of the number of attention heads (6)`
I think that something similar may be true for the other classes as well, `BertForQuestionAnswering`, `BertForNextSentencePrediction`, etc.
Am I missing something?
| closed | completed | false | 1 | [] | [] | 2018-11-28T14:53:01Z | 2018-11-30T22:29:24Z | 2018-11-30T22:29:24Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | davidefiocco | 4,547,987 | MDQ6VXNlcjQ1NDc5ODc= | User | false |
huggingface/transformers | 385,368,286 | MDU6SXNzdWUzODUzNjgyODY= | 62 | https://github.com/huggingface/transformers/issues/62 | https://api.github.com/repos/huggingface/transformers/issues/62 | Specify a model from a specific directory for extract_features.py | I have downloaded the model and vocab files into a specific location, using their original file names, so my directory for bert-base-cased contains:
```
bert-base-cased-vocab.txt
bert_config.json
pytorch_model.bin
```
But when I try to specify the directory which contains these files for the `--bert_model` parameter of `extract_features.py` I get the following error:
```
ValueError: Can't find a vocabulary file at path <THEDIRECTORYPATHISPECIFIED> ...
```
When I specify a file that exists and is a proper file, the error messages seem to indicate that the program wants to untar and uncompress the files.
Is there no way to just specify a specific directory that contains the vocab, config, and model files? | closed | completed | false | 4 | [] | [] | 2018-11-28T17:04:39Z | 2018-11-30T22:30:12Z | 2018-11-30T22:30:12Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | johann-petrak | 619,106 | MDQ6VXNlcjYxOTEwNg== | User | false |
huggingface/transformers | 386,055,987 | MDU6SXNzdWUzODYwNTU5ODc= | 68 | https://github.com/huggingface/transformers/issues/68 | https://api.github.com/repos/huggingface/transformers/issues/68 | Accuracy on classification task is lower than the official tensorflow version | Hi, I am running the same task with the same hyperparameters as the official Google TensorFlow implementation of BERT; however, I am getting around 1.5% lower accuracy. Can you please give any hint about the possible cause?
Thanks! | closed | completed | false | 2 | [] | [] | 2018-11-30T06:30:56Z | 2018-11-30T22:56:45Z | 2018-11-30T22:56:45Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | ejld | 31,990,860 | MDQ6VXNlcjMxOTkwODYw | User | false |
huggingface/transformers | 386,489,436 | MDU6SXNzdWUzODY0ODk0MzY= | 76 | https://github.com/huggingface/transformers/issues/76 | https://api.github.com/repos/huggingface/transformers/issues/76 | Wrong signature in model call in run_classifier.py example (?) | I think that
https://github.com/huggingface/pytorch-pretrained-BERT/blob/063be09b714bf4d2fbbc3de7f52c45b8bc6817eb/examples/run_classifier.py#L608
may well have a problem, as it's not consistent with
https://github.com/huggingface/pytorch-pretrained-BERT/blob/063be09b714bf4d2fbbc3de7f52c45b8bc6817eb/examples/run_classifier.py#L549
nor with
https://github.com/huggingface/pytorch-pretrained-BERT/blob/063be09b714bf4d2fbbc3de7f52c45b8bc6817eb/pytorch_pretrained_bert/modeling.py#L875
and this currently breaks the example.
One quick patch would be to replace that line with
```
tmp_eval_loss = model(input_ids, segment_ids, input_mask, label_ids)
logits = model(input_ids, segment_ids, input_mask)
```
But I am not so sure, there are likely better ways. | closed | completed | false | 2 | [] | [] | 2018-12-01T19:34:40Z | 2018-12-02T12:02:34Z | 2018-12-02T12:02:34Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | davidefiocco | 4,547,987 | MDQ6VXNlcjQ1NDc5ODc= | User | false |
huggingface/transformers | 386,553,265 | MDU6SXNzdWUzODY1NTMyNjU= | 78 | https://github.com/huggingface/transformers/issues/78 | https://api.github.com/repos/huggingface/transformers/issues/78 | TypeError: object of type 'WindowsPath' has no len() | Hi, when I run "tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')", the error "TypeError: object of type 'WindowsPath' has no len()" occurs, what is the problem? Thank you for your excellent code! | closed | completed | false | 4 | [] | [] | 2018-12-02T12:03:51Z | 2018-12-02T15:30:43Z | 2018-12-02T15:30:43Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | Deep1994 | 24,366,782 | MDQ6VXNlcjI0MzY2Nzgy | User | false |
huggingface/transformers | 386,698,511 | MDU6SXNzdWUzODY2OTg1MTE= | 79 | https://github.com/huggingface/transformers/issues/79 | https://api.github.com/repos/huggingface/transformers/issues/79 | numpy.core._internal.AxisError: axis 1 is out of bounds for array of dimension 1 | Hello, when I run run_classifier.py with the MRPC dataset, there seems to be a mistake. The mistake is as follows:
<img width="752" alt="default" src="https://user-images.githubusercontent.com/29532760/49360256-9de0e100-f713-11e8-9a5c-d9f2bc5331e6.PNG">
The mistake happens when training is over and the model is being evaluated:
```
with torch.no_grad():
tmp_eval_loss, logits = model(input_ids, segment_ids, input_mask, label_ids)
```
Here I found that the size of logits is [].
I'm using Python 3.5 and torch 0.4.1; I don't know how to fix it.
| closed | completed | false | 1 | [] | [] | 2018-12-03T07:56:56Z | 2018-12-03T08:37:11Z | 2018-12-03T08:37:11Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | A-Rain | 29,532,760 | MDQ6VXNlcjI5NTMyNzYw | User | false |
huggingface/transformers | 386,887,965 | MDU6SXNzdWUzODY4ODc5NjU= | 82 | https://github.com/huggingface/transformers/issues/82 | https://api.github.com/repos/huggingface/transformers/issues/82 | AttributeError: 'tuple' object has no attribute 'backward' | Traceback (most recent call last): | 0/11 [00:00<?, ?it/s]
File "examples/run_classifier.py", line 637, in <module>
main()
File "examples/run_classifier.py", line 558, in main
loss.backward()
AttributeError: 'tuple' object has no attribute 'backward'
| closed | completed | false | 2 | [] | [] | 2018-12-03T16:06:20Z | 2018-12-04T07:27:06Z | 2018-12-04T07:27:06Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | Qzsl123 | 23,257,340 | MDQ6VXNlcjIzMjU3MzQw | User | false |
huggingface/transformers | 386,988,878 | MDU6SXNzdWUzODY5ODg4Nzg= | 83 | https://github.com/huggingface/transformers/issues/83 | https://api.github.com/repos/huggingface/transformers/issues/83 | Error while runing example | Hi!
I have a problem when running the example; could you please give me a hint about what I may be doing wrong?
I use:
`PYTHONPATH=. python examples/run_classifier.py --task_name MNLI --do_train --do_eval --do_lower_case --data_dir ../GLUE-baselines/glue_data/MNLI/ --bert_model bert-base-uncased --max_seq_len 40 --train_batch_size 10 --output_dir mnli/`
And obtain:
```
...
12/03/2018 21:11:10 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0
12/03/2018 21:11:10 - INFO - __main__ - label: entailment (id = 1)
12/03/2018 21:11:10 - INFO - __main__ - *** Example ***
12/03/2018 21:11:10 - INFO - __main__ - guid: train-3
12/03/2018 21:11:10 - INFO - __main__ - tokens: [CLS] how do you know ? all this is their information again . [SEP] this information belongs to them . [SEP]
12/03/2018 21:11:10 - INFO - __main__ - input_ids: 101 2129 2079 2017 2113 1029 2035 2023 2003 2037 2592 2153 1012 102 2023 2592 7460 2000 2068 1012 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
12/03/2018 21:11:10 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
12/03/2018 21:11:10 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
12/03/2018 21:11:10 - INFO - __main__ - label: entailment (id = 1)
12/03/2018 21:11:10 - INFO - __main__ - *** Example ***
12/03/2018 21:11:10 - INFO - __main__ - guid: train-4
12/03/2018 21:11:10 - INFO - __main__ - tokens: [CLS] yeah i tell you what though if you go price some of those tennis shoes i can see why now you know they ' re getting up in [SEP] the tennis shoes have a range of prices . [SEP]
12/03/2018 21:11:10 - INFO - __main__ - input_ids: 101 3398 1045 2425 2017 2054 2295 2065 2017 2175 3976 2070 1997 2216 5093 6007 1045 2064 2156 2339 2085 2017 2113 2027 1005 2128 2893 2039 1999 102 1996 5093 6007 2031 1037 2846 1997 7597 1012 102
12/03/2018 21:11:10 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
12/03/2018 21:11:10 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1
12/03/2018 21:11:10 - INFO - __main__ - label: neutral (id = 2)
12/03/2018 21:14:39 - INFO - __main__ - ***** Running training *****
12/03/2018 21:14:39 - INFO - __main__ - Num examples = 392702
12/03/2018 21:14:39 - INFO - __main__ - Batch size = 10
12/03/2018 21:14:39 - INFO - __main__ - Num steps = 117810
Epoch: 0%| | 0/3 [00:00<?, ?it/sTHCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1533672544752/work/aten/src/THC/generic/THCTensorMath.cu line=26 error=59 : device-side assert triggered | 0/39271 [00:00<?, ?it/s]
/opt/conda/conda-bld/pytorch_1533672544752/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [1,0,0] Assertion `t >= 0 && t < n_classes` failed.
/opt/conda/conda-bld/pytorch_1533672544752/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [3,0,0] Assertion `t >= 0 && t < n_classes` failed.
/opt/conda/conda-bld/pytorch_1533672544752/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [6,0,0] Assertion `t >= 0 && t < n_classes` failed.
Traceback (most recent call last):
File "examples/run_classifier.py", line 637, in <module>
main()
File "examples/run_classifier.py", line 558, in main
loss.backward()
File "/home/kchledowski/anaconda2/envs/glue/lib/python3.6/site-packages/torch/tensor.py", line 93, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/kchledowski/anaconda2/envs/glue/lib/python3.6/site-packages/torch/autograd/__init__.py", line 90, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: cuda runtime error (59) : device-side assert triggered at /opt/conda/conda-bld/pytorch_1533672544752/work/aten/src/THC/generic/THCTensorMath.cu:26
```
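In case it helps to narrow things down: the `t >= 0 && t < n_classes` assert usually means a label id fell outside `[0, num_labels)`. A minimal check I could run on the features (hypothetical helper; it assumes each `InputFeatures` object carries an integer `label_id`, as in `run_classifier.py`):
```python
def count_bad_labels(features, num_labels=3):
    """Count features whose label_id falls outside [0, num_labels); MNLI has 3 labels."""
    return sum(not 0 <= f.label_id < num_labels for f in features)
```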
I would be very grateful for any suggestions where to look. Thanks! | closed | completed | false | 2 | [] | [] | 2018-12-03T20:21:12Z | 2018-12-05T00:12:48Z | 2018-12-05T00:12:48Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | chledowski | 24,462,884 | MDQ6VXNlcjI0NDYyODg0 | User | false |
huggingface/transformers | 387,286,653 | MDU6SXNzdWUzODcyODY2NTM= | 88 | https://github.com/huggingface/transformers/issues/88 | https://api.github.com/repos/huggingface/transformers/issues/88 | Error when calculating loss and running backward | I'm using the sentence classification example. I used my own dataset for emotion classification (4 classes).
The hyper-parameters are as follows:
<pre>
args.max_seq_length = 100
args.do_train = True
args.do_eval = True
args.do_lower_case = True
args.train_batch_size = 32
args.eval_batch_size = 8
args.learning_rate = 2e-5
args.num_train_epochs = 3
args.warmup_proportion = 0.1
args.no_cuda = False
args.local_rank = -1
args.gpu_id = 1
args.seed = 412
args.gradient_accumulation_steps = 1
args.optimize_on_cpu = False
args.fp16 = False
args.loss_scale = 128
</pre>
I prepared my dataset accordingly and properly:
<pre>
12/04/2018 21:23:02 - INFO - __main__ - *** Example ***
12/04/2018 21:23:02 - INFO - __main__ - guid: train-1
12/04/2018 21:23:02 - INFO - __main__ - tokens: [CLS] but i don ' t [ sep ] u just did [ sep ] i don ##t want to talk to u [SEP]
12/04/2018 21:23:02 - INFO - __main__ - input_ids: 101 2021 1045 2123 1005 1056 1031 19802 1033 1057 2074 2106 1031 19802 1033 1045 2123 2102 2215 2000 2831 2000 1057 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
12/04/2018 21:23:02 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
12/04/2018 21:23:02 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
12/04/2018 21:23:02 - INFO - __main__ - label: angry (id = 3)
</pre>
When I run the following code, a runtime error occurs:
<pre>
for step, batch in enumerate(tqdm(train_dataloader, desc="Iteration")):
    batch = tuple(t.to(device) for t in batch)
    input_ids, input_mask, segment_ids, label_ids = batch
    loss = model(input_ids, segment_ids, input_mask, label_ids)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-20-1977b86302ed> in <module>()
17 try:
---> 18 loss.backward()
19 except RuntimeError:
/raid5/peixiang/anaconda3/lib/python3.6/site-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)
92 """
---> 93 torch.autograd.backward(self, gradient, retain_graph, create_graph)
94
/raid5/peixiang/anaconda3/lib/python3.6/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
89 tensors, grad_tensors, retain_graph, create_graph,
---> 90 allow_unreachable=True) # allow_unreachable flag
91
RuntimeError: cublas runtime error : the GPU program failed to execute at /opt/conda/conda-bld/pytorch_1532581333611/work/aten/src/THC/THCBlas.cu:411
</pre>
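One suspicion (not verified): with 4 emotion classes, the classifier head has to be built with `num_labels=4`; if it is left at the default of 2, label ids 2 and 3 go out of range on the GPU and can surface as cryptic cublas/assert errors. A minimal sketch, assuming `from_pretrained` forwards `num_labels` to the model constructor:
```python
from pytorch_pretrained_bert import BertForSequenceClassification

# hypothetical: build the classification head with one output per emotion class
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=4)
```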
What might be the cause? The dataset? I run the MRPC example without any issue. | closed | completed | false | 2 | [] | [] | 2018-12-04T13:30:58Z | 2018-12-05T03:41:38Z | 2018-12-05T03:41:38Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | zhongpeixiang | 11,826,803 | MDQ6VXNlcjExODI2ODAz | User | false |
huggingface/transformers | 387,233,714 | MDU6SXNzdWUzODcyMzM3MTQ= | 86 | https://github.com/huggingface/transformers/issues/86 | https://api.github.com/repos/huggingface/transformers/issues/86 | code in run_squad.py line 263 | # Zero-pad up to the sequence length.
while len(input_ids) < max_seq_length:
input_ids.append(0)
input_mask.append(0)
segment_ids.append(0)
In the segment_ids array, 1 indicates a token from the passage and 0 indicates a token from the query.
When padding, why is segment_ids filled with 0, which represents the query? | closed | completed | false | 3 | [] | [] | 2018-12-04T11:08:09Z | 2018-12-06T01:30:36Z | 2018-12-06T01:30:36Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | xilinniao123 | 11,830,865 | MDQ6VXNlcjExODMwODY1 | User | false |
huggingface/transformers | 388,713,951 | MDU6SXNzdWUzODg3MTM5NTE= | 100 | https://github.com/huggingface/transformers/issues/100 | https://api.github.com/repos/huggingface/transformers/issues/100 | Squad dataset has multiple answers to a question. | https://github.com/huggingface/pytorch-pretrained-BERT/blob/3ba5470eb85464df62f324bea88e20da234c423f/examples/run_squad.py#L143
The confusing part here is that in line 146, only the first answer is considered, so I am wondering why there is a check for multiple answers before it.
Also, SQuad dataset has multiple answers for the same question. Is this by design or am I fundamentally missing something? | closed | completed | false | 2 | [] | [] | 2018-12-07T16:02:00Z | 2018-12-08T11:57:22Z | 2018-12-08T11:57:22Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | nischalhp | 1,147,533 | MDQ6VXNlcjExNDc1MzM= | User | false |
huggingface/transformers | 388,930,579 | MDU6SXNzdWUzODg5MzA1Nzk= | 104 | https://github.com/huggingface/transformers/issues/104 | https://api.github.com/repos/huggingface/transformers/issues/104 | BERT for classification example training files | Are there any example training files for `run_classifier.py`? | closed | completed | false | 1 | [] | [] | 2018-12-08T15:16:50Z | 2018-12-08T15:19:17Z | 2018-12-08T15:19:17Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | artemlos | 6,392,760 | MDQ6VXNlcjYzOTI3NjA= | User | false |
huggingface/transformers | 386,786,079 | MDU6SXNzdWUzODY3ODYwNzk= | 81 | https://github.com/huggingface/transformers/issues/81 | https://api.github.com/repos/huggingface/transformers/issues/81 | There is some problem in supporting continuously training | I change the run_classfifier.py in order to support continuously training. i save the model.state_dict() and the BertAdam optimizer.state_dict(), and I load them when start continuously training. However, After some epochs, the loss will increase little by little and finally end with a large loss value. I do not know the reason. Please help me. | closed | completed | false | 1 | [] | [] | 2018-12-03T12:00:09Z | 2018-12-09T21:01:03Z | 2018-12-09T21:01:02Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | ZacharyWaseda | 16,608,767 | MDQ6VXNlcjE2NjA4NzY3 | User | false |
huggingface/transformers | 387,683,054 | MDU6SXNzdWUzODc2ODMwNTQ= | 89 | https://github.com/huggingface/transformers/issues/89 | https://api.github.com/repos/huggingface/transformers/issues/89 | bert-base-multilingual-cased - Text bigger than 512 | Hello,
I am trying to extract features from German text using bert-base-multilingual-cased. However, my text is bigger than 512 words.
Is there any way to use the pretrained BERT for text longer than 512 words? | closed | completed | false | 2 | [] | [] | 2018-12-05T10:11:21Z | 2018-12-09T21:04:53Z | 2018-12-09T21:04:53Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | agemagician | 6,087,313 | MDQ6VXNlcjYwODczMTM= | User | false |
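One common workaround (a sketch, not a built-in feature of this repo): split the document into overlapping windows of at most 512 wordpieces, run each window separately, and pool or concatenate the per-window features:
```python
def split_into_windows(tokens, window=510, stride=128):
    """Overlapping wordpiece windows, leaving room for [CLS] and [SEP] within 512 positions."""
    windows, start = [], 0
    while start < len(tokens):
        windows.append(tokens[start:start + window])
        if start + window >= len(tokens):
            break
        start += window - stride
    return windows
```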
huggingface/transformers | 388,994,586 | MDU6SXNzdWUzODg5OTQ1ODY= | 105 | https://github.com/huggingface/transformers/issues/105 | https://api.github.com/repos/huggingface/transformers/issues/105 | weights initialized two times | Hi,
I found that you initialized all weights twice:
The first one is in BertModel class:
https://github.com/huggingface/pytorch-pretrained-BERT/blob/3ba5470eb85464df62f324bea88e20da234c423f/pytorch_pretrained_bert/modeling.py#L586
And the second one is in classes of each tasks such as in BertForSequenceClassification class:
https://github.com/huggingface/pytorch-pretrained-BERT/blob/3ba5470eb85464df62f324bea88e20da234c423f/pytorch_pretrained_bert/modeling.py#L674
I think maybe you only need the second one? | closed | completed | false | 2 | [] | [] | 2018-12-09T07:06:52Z | 2018-12-09T21:17:51Z | 2018-12-09T21:17:51Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | friskit-china | 2,494,883 | MDQ6VXNlcjI0OTQ4ODM= | User | false |
huggingface/transformers | 389,201,876 | MDU6SXNzdWUzODkyMDE4NzY= | 106 | https://github.com/huggingface/transformers/issues/106 | https://api.github.com/repos/huggingface/transformers/issues/106 | Picking max_sequence_length in run_classifier.py CoLA task | Is there an upper bound for the max_sequence_length parameter when using run_classifier.py with CoLA task?
When I tested with the default max_sequence_length of 128, everything worked fine, but once I changed it to something else, e.g. 1024, it started the training and failed on the first iteration with the error shown below:
````
Traceback (most recent call last):
File "run_classifier.py", line 643, in <module>
main()
File "run_classifier.py", line 551, in main
loss = model(input_ids, segment_ids, input_mask, label_ids)
File "/jet/var/python/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/jet/var/python/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 868, in forward
_, pooled_output = self.bert(input_ids, token_type_ids, attention_mask, output_all_encoded_layers=False)
File "/jet/var/python/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/jet/var/python/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 609, in forward
embedding_output = self.embeddings(input_ids, token_type_ids)
File "/jet/var/python/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/jet/var/python/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 199, in forward
embeddings = self.dropout(embeddings)
File "/jet/var/python/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/jet/var/python/lib/python3.6/site-packages/torch/nn/modules/dropout.py", line 53, in forward
return F.dropout(input, self.p, self.training, self.inplace)
File "/jet/var/python/lib/python3.6/site-packages/torch/nn/functional.py", line 595, in dropout
return _functions.dropout.Dropout.apply(input, p, training, inplace)
File "/jet/var/python/lib/python3.6/site-packages/torch/nn/_functions/dropout.py", line 40, in forward
ctx.noise.bernoulli_(1 - ctx.p).div_(1 - ctx.p)
RuntimeError: Creating MTGP constants failed. at /jet/tmp/build/aten/src/THC/THCTensorRandom.cu:34
````
The command I ran is
```
python run_classifier.py \
--task_name CoLA \
--do_train \
--do_eval \
--do_lower_case \
--data_dir $GLUE_DIR/Test/ \
--bert_model bert-base-uncased \
--max_seq_length 128 \
--train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 3.0 \
--output_dir /tmp/BERT/test1
```` | closed | completed | false | 2 | [] | [] | 2018-12-10T09:04:47Z | 2018-12-10T15:14:47Z | 2018-12-10T15:14:47Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | artemlos | 6,392,760 | MDQ6VXNlcjYzOTI3NjA= | User | false |
huggingface/transformers | 388,915,407 | MDU6SXNzdWUzODg5MTU0MDc= | 103 | https://github.com/huggingface/transformers/issues/103 | https://api.github.com/repos/huggingface/transformers/issues/103 | Words after tokenization replaced with # | Hello,
When training the bert-base-multilingual-cased model for Question and Answering, I see that the tokens look like this :
```tokens: [CLS] what is the ins ##ured _ name ? [SEP] versi ##cherung ##ss ##che ##in erg ##o hau ##srat ##versi ##cherung hr - sv 927 ##26 ##49 ##2 ```
Any idea why words are getting replaced with #?
Here is the command I am using :
```python run_squad.py --bert_model bert-base-multilingual-cased --do_train --do_predict --train_file dataset_train.json --predict_file dataset_predict.json --train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2.0 --max_seq_length 400 --doc_stride 20 --output_dir output_dir``` | closed | completed | false | 6 | [] | [] | 2018-12-08T11:56:57Z | 2018-12-11T13:32:37Z | 2018-12-11T10:33:23Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | nischalhp | 1,147,533 | MDQ6VXNlcjExNDc1MzM= | User | false |
huggingface/transformers | 389,846,897 | MDU6SXNzdWUzODk4NDY4OTc= | 114 | https://github.com/huggingface/transformers/issues/114 | https://api.github.com/repos/huggingface/transformers/issues/114 | What is the best dataset structure for BERT? | First I want to say thanks for setting up all this!
I am using BertForSequenceClassification and am wondering what the optimal way is to structure my sequences.
Right now my sequences are blog posts which can be upwards of 400 words long.
Would it be better to split my blog posts in sentences and use the sentences as my sequences instead?
Thanks! | closed | completed | false | 0 | [] | [] | 2018-12-11T16:28:00Z | 2018-12-11T20:57:45Z | 2018-12-11T20:57:45Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | wahlforss | 73,305 | MDQ6VXNlcjczMzA1 | User | false |
huggingface/transformers | 389,549,868 | MDU6SXNzdWUzODk1NDk4Njg= | 110 | https://github.com/huggingface/transformers/issues/110 | https://api.github.com/repos/huggingface/transformers/issues/110 | Pretrained Tokenizer Loading Fails: 'PosixPath' object has no attribute 'rfind' | I was trying to work through the toy tokenization example from the main README, and I hit an error on the step of loading in a pre-trained BERT tokenizer.
```
~/bert_transfer$ python3 test_tokenizer.py
Traceback (most recent call last):
File "test_tokenizer.py", line 10, in <module>
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
File "/usr/local/lib/python3.5/dist-packages/pytorch_pretrained_bert/tokenization.py", line 117, in from_pretrained
resolved_vocab_file = cached_path(vocab_file, cache_dir=cache_dir)
File "/usr/local/lib/python3.5/dist-packages/pytorch_pretrained_bert/file_utils.py", line 88, in cached_path
return get_from_cache(url_or_filename, cache_dir)
File "/usr/local/lib/python3.5/dist-packages/pytorch_pretrained_bert/file_utils.py", line 169, in get_from_cache
os.makedirs(cache_dir, exist_ok=True)
File "/usr/lib/python3.5/os.py", line 226, in makedirs
head, tail = path.split(name)
File "/usr/lib/python3.5/posixpath.py", line 103, in split
i = p.rfind(sep) + 1
AttributeError: 'PosixPath' object has no attribute 'rfind'
~/bert_transfer$ python3 --version
Python 3.5.2
```
Exact usage in script:
```
from pytorch_pretrained_bert import BertTokenizer
test_sentence = "When PyTorch first launched in early 2017, it quickly became a popular choice among AI researchers, who found it ideal for rapid experimentation due to its flexible, dynamic programming environment and user-friendly interface"
if __name__ == "__main__":
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
```
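A workaround that appears to sidestep the crash on 3.5 (a guess, not verified): pass `cache_dir` explicitly as a plain string, so `os.makedirs` never receives a `Path` object:
```python
from pytorch_pretrained_bert import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', cache_dir='/tmp/bert_cache')
```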
I am curious if you're able to replicate this error on python 3.5.2, since the repo states support for 3.5+. | closed | completed | false | 2 | [] | [] | 2018-12-11T00:48:11Z | 2018-12-13T11:16:27Z | 2018-12-11T10:28:47Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | decodyng | 5,902,855 | MDQ6VXNlcjU5MDI4NTU= | User | false |
huggingface/transformers | 390,793,183 | MDU6SXNzdWUzOTA3OTMxODM= | 117 | https://github.com/huggingface/transformers/issues/117 | https://api.github.com/repos/huggingface/transformers/issues/117 | logging.basicConfig overrides user logging | I think logging.basicConfig should not be called inside library code
check out this SO thread
https://stackoverflow.com/questions/27016870/how-should-logging-be-used-in-a-python-package | closed | completed | false | 1 | [] | [] | 2018-12-13T17:58:02Z | 2018-12-14T13:46:51Z | 2018-12-14T13:46:51Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | asafamr | 5,182,534 | MDQ6VXNlcjUxODI1MzQ= | User | false |
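For reference, the usual library-side idiom (a sketch of what the request amounts to) is a module-level logger with a NullHandler, leaving configuration to the application:
```python
import logging

logger = logging.getLogger(__name__)
logger.addHandler(logging.NullHandler())  # stay silent unless the application configures logging
logger.info("library message")            # emitted only if the app sets up handlers/levels
```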
huggingface/transformers | 387,100,844 | MDU6SXNzdWUzODcxMDA4NDQ= | 85 | https://github.com/huggingface/transformers/issues/85 | https://api.github.com/repos/huggingface/transformers/issues/85 | How to use pre-trained SQUAD model? | After training squad, I have a model file in a local folder:
```
-rw-rw-r-- 1 khashab2 cs_danr 4.7M Nov 21 19:20 dev-v1.1.json
-rw-rw-r-- 1 khashab2 cs_danr 3.4K Nov 29 22:52 evaluate-v1.1.py
drwxrwsr-x 2 khashab2 cs_danr 10 Nov 30 14:57 out2
-rw-rw-r-- 1 khashab2 cs_danr 29M Nov 21 19:20 train-v1.1.json
-rw-rw-r-- 1 khashab2 cs_danr 490M Nov 29 23:14 train-v1.1.json_bert-base-uncased_384_128_64
-rw-rw-r-- 1 khashab2 cs_danr 490M Nov 30 15:05 train-v1.1.json_bert-large-uncased_384_128_64
```
I want to use this pre-trained model to make predictions. Is there any example I can follow for this? (If not, any pointers?) I looked through the instructions and didn't find anything relevant. | closed | completed | false | 1 | [] | [] | 2018-12-04T03:13:30Z | 2018-12-14T14:42:04Z | 2018-12-14T14:42:04Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | danyaljj | 2,441,454 | MDQ6VXNlcjI0NDE0NTQ= | User | false |
huggingface/transformers | 388,660,132 | MDU6SXNzdWUzODg2NjAxMzI= | 98 | https://github.com/huggingface/transformers/issues/98 | https://api.github.com/repos/huggingface/transformers/issues/98 | Problem about convert TF model and pretraining | First of all, Thank you for this great job. I use the official tensorflow implementation to pretrain on my corpus and then save the model. I want to convert this model to pytorch format and use it, but I got the error:
Traceback (most recent call last):
File "convert_tf_checkpoint_to_pytorch.py", line 105, in <module>
convert()
File "convert_tf_checkpoint_to_pytorch.py", line 86, in convert
pointer = getattr(pointer, l[0])
AttributeError: 'Parameter' object has no attribute 'adam_m'
Could you give me some advice? Thank you very much.
It would be great if you could release the pretraining code. I think it would be useful even if we cannot use a TPU, because we could fine-tune on top of Google's pretrained model. | closed | completed | false | 3 | [] | [] | 2018-12-07T13:42:59Z | 2018-12-14T14:42:40Z | 2018-12-14T14:42:40Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | zhezhaoa | 10,495,098 | MDQ6VXNlcjEwNDk1MDk4 | User | false |
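The traceback suggests the checkpoint still contains Adam optimizer slot variables (`adam_m` / `adam_v`), which the conversion script does not expect. A sketch of a guard that could skip them (hypothetical helper; the actual loop variable names in the script may differ):
```python
def is_optimizer_slot(tf_variable_name):
    """True for TF Adam slot variables (adam_m / adam_v) that are irrelevant for inference."""
    return any(part in ("adam_m", "adam_v") for part in tf_variable_name.split("/"))

print(is_optimizer_slot("bert/encoder/layer_0/attention/self/query/kernel/adam_m"))  # True
print(is_optimizer_slot("bert/encoder/layer_0/attention/self/query/kernel"))         # False
```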
huggingface/transformers | 389,950,888 | MDU6SXNzdWUzODk5NTA4ODg= | 115 | https://github.com/huggingface/transformers/issues/115 | https://api.github.com/repos/huggingface/transformers/issues/115 | How to run a saved model? | How can you run the model without training it again, if we have already trained a model with run_classifier? | closed | completed | false | 2 | [] | [] | 2018-12-11T20:58:38Z | 2018-12-14T14:43:43Z | 2018-12-14T14:43:43Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | wahlforss | 73,305 | MDQ6VXNlcjczMzA1 | User | false |
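A minimal sketch of reloading a classifier saved by `run_classifier.py` (hypothetical paths and label count; it assumes the script saved the full `BertForSequenceClassification` state dict as `pytorch_model.bin`):
```python
import torch
from pytorch_pretrained_bert import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=2)
model.load_state_dict(torch.load('output_dir/pytorch_model.bin', map_location='cpu'))
model.eval()  # ready for inference; feed it input_ids / segment_ids / input_mask without labels
```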
huggingface/transformers | 391,402,013 | MDU6SXNzdWUzOTE0MDIwMTM= | 120 | https://github.com/huggingface/transformers/issues/120 | https://api.github.com/repos/huggingface/transformers/issues/120 | RuntimeError: Expected object of type torch.LongTensor but found type torch.cuda.LongTensor for argument #3 'index' | I am using part of your evaluation code, with slight modifications:
https://github.com/danyaljj/pytorch-pretrained-BERT/blob/92e22d710287db1b4aa4fda951714887878fa728/examples/daniel_run.py#L582-L616
Wondering if you have encountered the following error:
```
(env3.6) khashab2@gissing:/shared/shelley/khashab2/pytorch-pretrained-BERT$ python3.6 examples/daniel_run.py
Better speed can be achieved with apex installed from https://www.github.com/nvidia/apex.
loaded the model to base . . .
loading the bert . . .
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1248501532/1248501532 [00:26<00:00, 46643749.96B/s]
Evaluating: 0%| | 0/1355 [00:00<?, ?it/s]
Traceback (most recent call last):
File "examples/daniel_run.py", line 817, in <module>
evaluate_model()
File "examples/daniel_run.py", line 606, in evaluate_model
batch_start_logits, batch_end_logits = model(input_ids, segment_ids, input_mask)
File "/shared/shelley/khashab2/pytorch-pretrained-BERT/env3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/shared/shelley/khashab2/pytorch-pretrained-BERT/pytorch_pretrained_bert/modeling.py", line 1096, in forward
sequence_output, _ = self.bert(input_ids, token_type_ids, attention_mask, output_all_encoded_layers=False)
File "/shared/shelley/khashab2/pytorch-pretrained-BERT/env3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/shared/shelley/khashab2/pytorch-pretrained-BERT/pytorch_pretrained_bert/modeling.py", line 626, in forward
embedding_output = self.embeddings(input_ids, token_type_ids)
File "/shared/shelley/khashab2/pytorch-pretrained-BERT/env3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/shared/shelley/khashab2/pytorch-pretrained-BERT/pytorch_pretrained_bert/modeling.py", line 193, in forward
words_embeddings = self.word_embeddings(input_ids)
File "/shared/shelley/khashab2/pytorch-pretrained-BERT/env3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/shared/shelley/khashab2/pytorch-pretrained-BERT/env3.6/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 110, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/shared/shelley/khashab2/pytorch-pretrained-BERT/env3.6/lib/python3.6/site-packages/torch/nn/functional.py", line 1110, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected object of type torch.LongTensor but found type torch.cuda.LongTensor for argument #3 'index'
``` | closed | completed | false | 1 | [] | [] | 2018-12-15T18:43:53Z | 2018-12-15T20:45:37Z | 2018-12-15T20:45:37Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | danyaljj | 2,441,454 | MDQ6VXNlcjI0NDE0NTQ= | User | false |
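The usual cause of this particular error (a guess): the input tensors were moved to the GPU while the model weights stayed on the CPU, or vice versa. A sketch of the fix, using the variable names from the traceback:
```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)  # weights must live on the same device as the index tensors

input_ids = input_ids.to(device)
segment_ids = segment_ids.to(device)
input_mask = input_mask.to(device)
batch_start_logits, batch_end_logits = model(input_ids, segment_ids, input_mask)
```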
huggingface/transformers | 391,458,997 | MDU6SXNzdWUzOTE0NTg5OTc= | 121 | https://github.com/huggingface/transformers/issues/121 | https://api.github.com/repos/huggingface/transformers/issues/121 | High accuracy for CoLA task | I try to reproduce the CoLA results from the BERT paper (BERTBase, Single GPU).
Running the following command
```
python run_classifier.py \
--task_name cola \
--do_train \
--do_eval \
--do_lower_case \
--data_dir $GLUE_DIR/CoLA/ \
--bert_model bert-base-uncased \
--max_seq_length 128 \
--train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 3.0 \
--output_dir $OUT_DIR/cola_output/
```
I get eval results of
```
12/16/2018 12:31:34 - INFO - __main__ - ***** Eval results *****
12/16/2018 12:31:34 - INFO - __main__ - eval_accuracy = 0.8302972195589645
12/16/2018 12:31:34 - INFO - __main__ - eval_loss = 0.5117322660925734
12/16/2018 12:31:34 - INFO - __main__ - global_step = 804
12/16/2018 12:31:34 - INFO - __main__ - loss = 0.17348005173644468
```
An accuracy of 0.83 would be fantastic, but compared to the 0.521 stated in the paper this doesn't seem very realistic.
Any suggestions on what I'm doing wrong?
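One possible explanation (a guess): the script reports plain accuracy, while the 0.521 in the paper is the Matthews correlation coefficient for CoLA, so the two numbers are not comparable. A quick way to compute the paper's metric from the eval predictions (hypothetical arrays):
```python
import numpy as np
from sklearn.metrics import matthews_corrcoef

all_logits = np.array([[0.2, 0.8], [0.9, 0.1], [0.3, 0.7], [0.6, 0.4]])  # collected during eval
all_label_ids = np.array([1, 0, 0, 0])

preds = np.argmax(all_logits, axis=1)
print("Matthews corrcoef:", matthews_corrcoef(all_label_ids, preds))
```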
| closed | completed | false | 2 | [] | [] | 2018-12-16T11:39:56Z | 2018-12-17T06:41:06Z | 2018-12-17T06:41:06Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | pfecht | 26,819,398 | MDQ6VXNlcjI2ODE5Mzk4 | User | false |
huggingface/transformers | 391,979,075 | MDU6SXNzdWUzOTE5NzkwNzU= | 123 | https://github.com/huggingface/transformers/issues/123 | https://api.github.com/repos/huggingface/transformers/issues/123 | big memory occupied | When I run the MRPC example, my program is always killed because of high memory usage. Has anyone encountered this issue? | closed | completed | false | 1 | [] | [] | 2018-12-18T03:13:11Z | 2018-12-18T08:04:38Z | 2018-12-18T08:04:38Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | AIRobotZhang | 20,748,608 | MDQ6VXNlcjIwNzQ4NjA4 | User | false |
huggingface/transformers | 392,409,375 | MDU6SXNzdWUzOTI0MDkzNzU= | 129 | https://github.com/huggingface/transformers/issues/129 | https://api.github.com/repos/huggingface/transformers/issues/129 | BERT + CNN classifier doesn't work after migrating from 0.1.2 to 0.4.0 | I used BERT in a very simple sentence classification task:
in `__init__` I have
```python3
self.bert = BertModel(config)
self.cnn_classifier = CNNClassifier(self.config.hidden_size, intent_cls_num)
```
and in forward it's just
```python3
encoded_layers, _ = self.bert(input_ids, token_type_ids, attention_mask, output_all_encoded_layers=False)
confidence_score = self.cnn_classifier(encoded_layers)
masked_lm_loss = loss_fct(confidence_score, ground_truth_labels)
```
This code works perfectly when I use version 0.1.2, but in 0.4.0 it:
- always predicts the most common class when trained on a large training set
- cannot even fit a dataset with only 4 samples (fed in as one batch), though it can fit a single sample
Why are these problems happening in 0.4.0? The only change in my code is that I changed `weight_decay_rate` to `weight_decay`... | closed | completed | false | 2 | [] | [] | 2018-12-19T01:57:22Z | 2018-12-20T00:20:48Z | 2018-12-20T00:20:48Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | jwang-lp | 944,876 | MDQ6VXNlcjk0NDg3Ng== | User | false |
huggingface/transformers | 378,996,831 | MDU6SXNzdWUzNzg5OTY4MzE= | 10 | https://github.com/huggingface/transformers/issues/10 | https://api.github.com/repos/huggingface/transformers/issues/10 | Is there a plan to have a FP16 for GPU so to have larger batch size or longer text documents support ? | Is there a plan to have an FP16 for GPU so to have a larger batch size or longer text documents support? | closed | completed | false | 4 | [] | [] | 2018-11-09T02:23:34Z | 2018-12-20T18:42:11Z | 2018-11-12T16:06:47Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | howardhsu | 10,661,375 | MDQ6VXNlcjEwNjYxMzc1 | User | false |
huggingface/transformers | 393,058,463 | MDU6SXNzdWUzOTMwNTg0NjM= | 136 | https://github.com/huggingface/transformers/issues/136 | https://api.github.com/repos/huggingface/transformers/issues/136 | It's possible to avoid download the pretrained model? | When I run the code `model = BertModel.from_pretrained('bert-base-uncased')`, it downloads a big file and sometimes that's very slow. I have now downloaded the model from [https://github.com/google-research/bert](url). So, is it possible to avoid downloading the pretrained model the first time I use pytorch-pretrained-BERT? | closed | completed | false | 3 | [] | [] | 2018-12-20T14:00:03Z | 2018-12-21T13:47:03Z | 2018-12-20T14:08:10Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | rxy1212 | 14,829,556 | MDQ6VXNlcjE0ODI5NTU2 | User | false |
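A sketch of what this would look like (assuming `from_pretrained` accepts a local directory containing `bert_config.json` and `pytorch_model.bin`; the TensorFlow checkpoint from the Google repo would first need to be converted with `convert_tf_checkpoint_to_pytorch.py`):
```python
from pytorch_pretrained_bert import BertModel, BertTokenizer

model = BertModel.from_pretrained('./bert-base-uncased/')    # local dir, no download
tokenizer = BertTokenizer('./bert-base-uncased/vocab.txt')   # vocab file from the same dir
```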
huggingface/transformers | 394,064,499 | MDU6SXNzdWUzOTQwNjQ0OTk= | 147 | https://github.com/huggingface/transformers/issues/147 | https://api.github.com/repos/huggingface/transformers/issues/147 | Does the final hidden state contains the <CLS> for Squad2.0 | Recently I have been modifying `run_squad.py` to run on CoQA. In Google's TensorFlow implementation, the probability assigned to the first token of the context segment (the position of `<CLS>`) is used as the probability that the question is unanswerable. I tried to modify `run_squad.py` in your implementation in the same way, but when I looked at the predictions I found that many answers are the first word of the context rather than the first token, `<CLS>`. So I want to know whether your implementation has removed the hidden states of the start and end tokens, or whether there may be some other problem? Thank you a lot! | closed | completed | false | 1 | [] | [] | 2018-12-26T02:05:34Z | 2018-12-26T02:48:04Z | 2018-12-26T02:48:04Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | SparkJiao | 16,469,472 | MDQ6VXNlcjE2NDY5NDcy | User | false |
huggingface/transformers | 394,310,682 | MDU6SXNzdWUzOTQzMTA2ODI= | 148 | https://github.com/huggingface/transformers/issues/148 | https://api.github.com/repos/huggingface/transformers/issues/148 | Embeddings from BERT for original tokens | I am trying out the `extract_features.py` example program. I noticed that a sentence gets split into tokens and the embeddings are generated for those. For example, the sentence “Definitely not” may be split into the wordpieces [“Def”, “##in”, “##ite”, “##ly”, “not”], and the embeddings are then generated for these pieces.
My question is how do I train an NER system on CoNLL dataset?
I want to extract embeddings for original tokens for training an NER with a neural architecture. If you have come across any resource that gives a clear explanation on how to carry this out, post it here. | closed | completed | false | 1 | [] | [] | 2018-12-27T06:48:23Z | 2018-12-28T09:17:16Z | 2018-12-28T09:17:16Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | nihalnayak | 5,679,782 | MDQ6VXNlcjU2Nzk3ODI= | User | false |
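One common recipe (a sketch, not an official feature): remember the index of the first wordpiece of every original token and gather those positions from the last hidden layer, so each CoNLL token gets exactly one vector:
```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()

words = ["Definitely", "not"]                  # original (CoNLL-style) tokens
tokens, first_piece_idx = ["[CLS]"], []
for word in words:
    first_piece_idx.append(len(tokens))        # position of this word's first wordpiece
    tokens += tokenizer.tokenize(word)
tokens.append("[SEP]")

input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])
with torch.no_grad():
    last_layer, _ = model(input_ids, output_all_encoded_layers=False)

word_vectors = last_layer[0, first_piece_idx]  # one 768-d vector per original word
```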
huggingface/transformers | 393,876,320 | MDU6SXNzdWUzOTM4NzYzMjA= | 146 | https://github.com/huggingface/transformers/issues/146 | https://api.github.com/repos/huggingface/transformers/issues/146 | BertForQuestionAnswering: Predicting span on the question? | Hello,
I have a question regarding the `BertForQuestionAnswering` implementation. If I am not mistaken, for this model the sequence should be of the form `Question tokens [SEP] Passage tokens`. Therefore, the embedded representation computed by `BertModel` returns the states of both the question and the passage (a tensor of length `passage + question + 1`).
If I am not mistaken, the span logits are then calculated for the whole sequence, i.e. **they can be calculated for the question** even if the answer is always in the passage (see [the model code](https://github.com/huggingface/pytorch-pretrained-BERT/blob/8da280ebbeca5ebd7561fd05af78c65df9161f92/pytorch_pretrained_bert/modeling.py#L1097) and the [squad script](https://github.com/huggingface/pytorch-pretrained-BERT/blob/8da280ebbeca5ebd7561fd05af78c65df9161f92/examples/run_squad.py#L899)). I wonder if this behavior is really desirable. Doesn't it confuse the model?
Thank you for your work! | closed | completed | false | 1 | [] | [] | 2018-12-24T12:51:49Z | 2018-12-28T09:20:49Z | 2018-12-28T09:20:49Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | valsworthen | 18,659,328 | MDQ6VXNlcjE4NjU5MzI4 | User | false |
huggingface/transformers | 393,167,784 | MDU6SXNzdWUzOTMxNjc3ODQ= | 139 | https://github.com/huggingface/transformers/issues/139 | https://api.github.com/repos/huggingface/transformers/issues/139 | Not able to use FP16 in pytorch-pretrained-BERT | I'm not able to work with FP16 for pytorch BERT code. Particularly for BertForSequenceClassification, which I tried and got the issue
**Runtime error: Expected scalar type object Half but got scalar type Float for argument #2 target**
when I enabled fp16.
Also when using
`logits = logits.half()
labels = labels.half()`
then the epoch time also increased.
_Originally posted by @Ashish-Gupta03 in https://github.com/huggingface/pytorch-pretrained-BERT/issue_comments#issuecomment-449096213_ | closed | completed | false | 0 | [] | [] | 2018-12-20T18:46:14Z | 2018-12-28T09:23:34Z | 2018-12-28T09:23:34Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | Ashish-Gupta03 | 7,694,700 | MDQ6VXNlcjc2OTQ3MDA= | User | false |
huggingface/transformers | 392,898,311 | MDU6SXNzdWUzOTI4OTgzMTE= | 132 | https://github.com/huggingface/transformers/issues/132 | https://api.github.com/repos/huggingface/transformers/issues/132 | NONE | closed | completed | false | 0 | [] | [] | 2018-12-20T05:42:29Z | 2018-12-28T14:04:26Z | 2018-12-28T13:56:36Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | HuXiangkun | 6,700,036 | MDQ6VXNlcjY3MDAwMzY= | User | false | |
huggingface/transformers | 394,865,030 | MDU6SXNzdWUzOTQ4NjUwMzA= | 154 | https://github.com/huggingface/transformers/issues/154 | https://api.github.com/repos/huggingface/transformers/issues/154 | the run_squad report "for training,each question should exactly have 1 answer" when I tried to fintune bert on squad2.0 | But some questions of train-v2.0.json are unanswerable. | closed | completed | false | 0 | [] | [] | 2018-12-30T11:33:29Z | 2018-12-30T11:48:50Z | 2018-12-30T11:48:50Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | zhaoguangxiang | 17,742,385 | MDQ6VXNlcjE3NzQyMzg1 | User | false |
huggingface/transformers | 391,564,653 | MDU6SXNzdWUzOTE1NjQ2NTM= | 122 | https://github.com/huggingface/transformers/issues/122 | https://api.github.com/repos/huggingface/transformers/issues/122 | _load_from_state_dict() takes 7 positional arguments but 8 were given | closed | completed | false | 3 | [] | [] | 2018-12-17T05:38:40Z | 2019-01-07T11:46:27Z | 2019-01-07T11:46:27Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | guanlongtianzi | 10,386,366 | MDQ6VXNlcjEwMzg2MzY2 | User | false | |
huggingface/transformers | 392,093,383 | MDU6SXNzdWUzOTIwOTMzODM= | 125 | https://github.com/huggingface/transformers/issues/125 | https://api.github.com/repos/huggingface/transformers/issues/125 | Warning/Assert when embedding sequences longer than positional embedding size | Hi team, love the work.
Just a feature suggestion: when running on GPU (presumably the CPU too), BERT will break when you try to run on sentences longer than 512 tokens (on bert-base).
This is because the position embedding matrix size is only 512 (or whatever else it is for the other bert models)
Could the tokenizer have an assert/warning on it that doesn't allow you to tokenize a sentence longer than the number of positional embeddings, so that you get a better error message than a slightly scary (uncatchable) CUDA error?
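Something like the following is what I have in mind (a sketch of the requested check, assuming 512 position embeddings for bert-base):
```python
def tokenize_checked(tokenizer, text, max_positions=512):
    """Fail early with a readable message instead of an opaque CUDA error inside the model."""
    tokens = tokenizer.tokenize(text)
    if len(tokens) > max_positions - 2:  # leave room for [CLS] and [SEP]
        raise ValueError("%d wordpieces exceed the %d position embeddings"
                         % (len(tokens), max_positions))
    return tokens
```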
| closed | completed | false | 2 | [] | [] | 2018-12-18T10:36:23Z | 2019-01-07T11:46:41Z | 2019-01-07T11:46:41Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | patrick-s-h-lewis | 15,031,366 | MDQ6VXNlcjE1MDMxMzY2 | User | false |
huggingface/transformers | 392,922,322 | MDU6SXNzdWUzOTI5MjIzMjI= | 133 | https://github.com/huggingface/transformers/issues/133 | https://api.github.com/repos/huggingface/transformers/issues/133 | lower accuracy on OMD(Obama-McCain Debate twitter sentiment dataset) | I ran the classification task with the pretrained BERT model, but the accuracy is much lower than other methods on the OMD dataset, which has 2 labels. The final accuracy is only 62% on this binary classification task! | closed | completed | false | 3 | [] | [] | 2018-12-20T07:27:11Z | 2019-01-07T12:11:22Z | 2019-01-07T12:11:22Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | AIRobotZhang | 20,748,608 | MDQ6VXNlcjIwNzQ4NjA4 | User | false |
huggingface/transformers | 393,142,144 | MDU6SXNzdWUzOTMxNDIxNDQ= | 138 | https://github.com/huggingface/transformers/issues/138 | https://api.github.com/repos/huggingface/transformers/issues/138 | Problem loading finetuned model for squad | Hi,
I'm trying to load a fine-tuned model for question answering which I trained with squad.py:
```
import torch
from pytorch_pretrained_bert import BertModel, BertForQuestionAnswering
from pytorch_pretrained_bert import modeling
config = modeling.BertConfig(attention_probs_dropout_prob=0.1, hidden_dropout_prob=0.1, hidden_size=768, initializer_range=0.02, intermediate_size=3072, max_position_embeddings=512, num_attention_heads=12, num_hidden_layers=12, vocab_size_or_config_json_file=30522)
model = modeling.BertForQuestionAnswering(config)
model_state_dict = "/home/ubuntu/bert_squad/bert_fine_121918/pytorch_model.bin"
model.bert.load_state_dict(torch.load(model_state_dict))
```
but I receive an error on the last line:
> Error(s) in loading state_dict for BertModel:
> Missing key(s) in state_dict: "embeddings.word_embeddings.weight", "embeddings.position_embeddings.weight", "embeddings.token_type_embeddings.weight", "embeddings.LayerNorm.weight", "embeddings.LayerNorm.bias", "encoder.layer.0.attention.self.query.weight",....
> Unexpected key(s) in state_dict: "bert.embeddings.word_embeddings.weight", "bert.embeddings.position_embeddings.weight", "bert.embeddings.token_type_embeddings.weight", "bert.embeddings.LayerNorm.weight", "bert.embeddings.LayerNorm.bias", "bert.encoder.layer.0.attention.self.query.weight",....
It looks like the model definition is not in the expected format. Could you point me to what went wrong? | closed | completed | false | 4 | [] | [] | 2018-12-20T17:27:40Z | 2019-01-07T12:17:58Z | 2019-01-07T12:17:58Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | ni40in | 9,155,183 | MDQ6VXNlcjkxNTUxODM= | User | false |
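A possible fix (untested sketch): the checkpoint was saved from the full `BertForQuestionAnswering` module, so its keys carry the `bert.` prefix; loading it into `model` rather than `model.bert` should make the keys line up:
```python
model_state_dict = "/home/ubuntu/bert_squad/bert_fine_121918/pytorch_model.bin"
model.load_state_dict(torch.load(model_state_dict, map_location='cpu'))
```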
huggingface/transformers | 393,167,870 | MDU6SXNzdWUzOTMxNjc4NzA= | 140 | https://github.com/huggingface/transformers/issues/140 | https://api.github.com/repos/huggingface/transformers/issues/140 | Not able to use FP16 in pytorch-pretrained-BERT. Getting error **Runtime error: Expected scalar type object Half but got scalar type Float for argument #2 target** | I'm not able to work with FP16 for pytorch BERT code. Particularly for BertForSequenceClassification, which I tried and got the issue
**Runtime error: Expected scalar type object Half but got scalar type Float for argument #2 target**
when I enabled fp16.
Also when using
`logits = logits.half()
labels = labels.half()`
then the epoch time also increased.
The training time without fp16 was 2.5 hrs per epoch after doing logits.half() and labels.half() the runtime per epoch shot up to 8hrs. | closed | completed | false | 3 | [] | [] | 2018-12-20T18:46:30Z | 2019-01-07T12:18:36Z | 2019-01-07T12:18:36Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | Ashish-Gupta03 | 7,694,700 | MDQ6VXNlcjc2OTQ3MDA= | User | false |
huggingface/transformers | 393,365,633 | MDU6SXNzdWUzOTMzNjU2MzM= | 143 | https://github.com/huggingface/transformers/issues/143 | https://api.github.com/repos/huggingface/transformers/issues/143 | bug in init_bert_weights | hi ,
there is a bug in init_bert_weights().
BERTLayerNorm is initialized twice: the first init is in the BERTLayerNorm module's __init__(), and the second init is in init_bert_weights().
If you want to pre-train a model from scratch rather than starting from Google's model, the second init leads to bad convergence in my experiments. gamma and beta should usually be 1 and 0, but the second init changes them.
first:
self.gamma = nn.Parameter(torch.ones(config.hidden_size))
self.beta = nn.Parameter(torch.zeros(config.hidden_size))
second:
elif isinstance(module, BERTLayerNorm):
module.beta.data.normal_(mean=0.0, std=config.initializer_range)
module.gamma.data.normal_(mean=0.0, std=config.initializer_range)
| closed | completed | false | 1 | [] | [] | 2018-12-21T08:29:40Z | 2019-01-07T12:18:49Z | 2019-01-07T12:18:49Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | mjc14 | 15,847,067 | MDQ6VXNlcjE1ODQ3MDY3 | User | false |
huggingface/transformers | 394,673,351 | MDU6SXNzdWUzOTQ2NzMzNTE= | 151 | https://github.com/huggingface/transformers/issues/151 | https://api.github.com/repos/huggingface/transformers/issues/151 | Using large model with fp16 enable causes the server down | I am using a server with Ubuntu 16.04 and 4 TITAN X GPUs. The server runs the base model with no problems. But it cannot run the large model with 32-bit float point, so I enabled fp16, and the server went down.
(When I successfully ran the base model, it consumes 8G GPU memory for each of the 4 GPUS. ) | closed | completed | false | 2 | [] | [] | 2018-12-28T16:32:05Z | 2019-01-07T12:24:34Z | 2019-01-07T12:24:34Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | hguan6 | 19,914,123 | MDQ6VXNlcjE5OTE0MTIz | User | false |
huggingface/transformers | 395,941,645 | MDU6SXNzdWUzOTU5NDE2NDU= | 164 | https://github.com/huggingface/transformers/issues/164 | https://api.github.com/repos/huggingface/transformers/issues/164 | pretrained model | Does the downloaded pretrained model include the word embeddings?
I do not see any embeddings in your code.
Please advise. | closed | completed | false | 4 | [] | [] | 2019-01-04T14:20:49Z | 2019-01-07T12:28:07Z | 2019-01-07T12:28:07Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | minmummax | 25,759,762 | MDQ6VXNlcjI1NzU5NzYy | User | false |
huggingface/transformers | 396,141,181 | MDU6SXNzdWUzOTYxNDExODE= | 167 | https://github.com/huggingface/transformers/issues/167 | https://api.github.com/repos/huggingface/transformers/issues/167 | Question about hidden layers from pretained model | In the example shown to get hidden states https://github.com/huggingface/pytorch-pretrained-BERT#usage
I want to confirm - the final hidden layer corresponds to the last element of `encoded_layers`, right? | closed | completed | false | 1 | [] | [] | 2019-01-05T07:09:20Z | 2019-01-07T12:28:19Z | 2019-01-07T12:28:19Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | mvss80 | 5,709,876 | MDQ6VXNlcjU3MDk4NzY= | User | false |
huggingface/transformers | 396,232,776 | MDU6SXNzdWUzOTYyMzI3NzY= | 168 | https://github.com/huggingface/transformers/issues/168 | https://api.github.com/repos/huggingface/transformers/issues/168 | Cannot reproduce the result of run_squad 1.1 | I train 5 epochs with learning rate 5e-5, but my evaluation result is {'exact_match': 32.04351939451277, 'f1': 36.53574674513405}.
What is the problem? | closed | completed | false | 5 | [] | [] | 2019-01-06T06:34:47Z | 2019-01-07T12:30:56Z | 2019-01-07T12:30:56Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | hmt2014 | 9,130,751 | MDQ6VXNlcjkxMzA3NTE= | User | false |
huggingface/transformers | 396,375,768 | MDU6SXNzdWUzOTYzNzU3Njg= | 170 | https://github.com/huggingface/transformers/issues/170 | https://api.github.com/repos/huggingface/transformers/issues/170 | How to pretrain my own data with this pytorch code? | I wonder how to pretrain with my own data. | closed | completed | false | 6 | [] | [] | 2019-01-07T07:22:53Z | 2019-01-07T13:05:35Z | 2019-01-07T12:29:44Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | Gpwner | 19,349,207 | MDQ6VXNlcjE5MzQ5MjA3 | User | false |
huggingface/transformers | 394,870,891 | MDU6SXNzdWUzOTQ4NzA4OTE= | 155 | https://github.com/huggingface/transformers/issues/155 | https://api.github.com/repos/huggingface/transformers/issues/155 | Why not the mlm use the information of adjacent sentences? |
I prepared two sentences for the MLM to predict the masked part: "Tom cant run fast. He [mask] his back a few years ago." The result of the model (uncased base) is 'got', which is meaningless. Obviously, "hurt" would be better.
I wonder how to make the MLM use the information from adjacent sentences. | closed | completed | false | 3 | [] | [] | 2018-12-30T13:08:53Z | 2019-01-08T07:01:28Z | 2019-01-07T12:25:24Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | l126t | 21,979,549 | MDQ6VXNlcjIxOTc5NTQ5 | User | false |
huggingface/transformers | 396,776,254 | MDU6SXNzdWUzOTY3NzYyNTQ= | 173 | https://github.com/huggingface/transformers/issues/173 | https://api.github.com/repos/huggingface/transformers/issues/173 | What 's the mlm accuracy of pretrained model? | What 's the mlm accuracy of pretrained model? In my case, I find the scores of candidate in top 10 are very close,but most are not suitable. Is this the same prediction as Google's original project?
_Originally posted by @l126t in https://github.com/huggingface/pytorch-pretrained-BERT/issues/155#issuecomment-452195676_ | closed | completed | false | 1 | [] | [] | 2019-01-08T07:08:35Z | 2019-01-08T10:07:23Z | 2019-01-08T10:07:23Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | l126t | 21,979,549 | MDQ6VXNlcjIxOTc5NTQ5 | User | false |
huggingface/transformers | 398,588,638 | MDU6SXNzdWUzOTg1ODg2Mzg= | 188 | https://github.com/huggingface/transformers/issues/188 | https://api.github.com/repos/huggingface/transformers/issues/188 | Weight Decay Fix Original Paper | Hi There!
Is the weight decay fix from?
https://arxiv.org/abs/1711.05101
Thanks! | closed | completed | false | 1 | [] | [] | 2019-01-12T20:22:45Z | 2019-01-14T01:08:36Z | 2019-01-14T01:08:36Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | PetrochukM | 7,424,737 | MDQ6VXNlcjc0MjQ3Mzc= | User | false |
huggingface/transformers | 394,864,622 | MDU6SXNzdWUzOTQ4NjQ2MjI= | 153 | https://github.com/huggingface/transformers/issues/153 | https://api.github.com/repos/huggingface/transformers/issues/153 | Did you suport squad2.0 | What is the command to reproduce the SQuAD 2.0 results reported in the BERT paper?
Thanks~ | closed | completed | false | 2 | [] | [] | 2018-12-30T11:25:55Z | 2019-01-14T09:03:51Z | 2019-01-14T09:03:50Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | zhaoguangxiang | 17,742,385 | MDQ6VXNlcjE3NzQyMzg1 | User | false |
huggingface/transformers | 397,703,107 | MDU6SXNzdWUzOTc3MDMxMDc= | 178 | https://github.com/huggingface/transformers/issues/178 | https://api.github.com/repos/huggingface/transformers/issues/178 | Can we use BERT for Punctuation Prediction? | Can we use the pre-trained BERT model for Punctuation Prediction for Conversational Speech? Let say punctuating an ASR output? | closed | completed | false | 1 | [] | [] | 2019-01-10T07:25:30Z | 2019-01-14T09:05:22Z | 2019-01-14T09:05:22Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | dalonlobo | 12,654,849 | MDQ6VXNlcjEyNjU0ODQ5 | User | false |
huggingface/transformers | 398,143,878 | MDU6SXNzdWUzOTgxNDM4Nzg= | 180 | https://github.com/huggingface/transformers/issues/180 | https://api.github.com/repos/huggingface/transformers/issues/180 | Weights not initialized from pretrained model | Thanks for your awesome work!
When I execute the following code for a named entity recognition task:
`model = BertForTokenClassification.from_pretrained("bert-base-uncased", num_labels=num_labels)`
Output the following information:
> Weights of BertForTokenClassification not initialized from pretrained model: ['classifier.weight', 'classifier.bias']
Weights from pretrained model not used in BertForTokenClassification: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias']
What puzzles me is that the parameters of the classifier are not initialized. | closed | completed | false | 3 | [] | [] | 2019-01-11T06:03:47Z | 2019-01-14T09:08:01Z | 2019-01-14T09:05:33Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | lemonhu | 22,219,073 | MDQ6VXNlcjIyMjE5MDcz | User | false |
huggingface/transformers | 398,148,589 | MDU6SXNzdWUzOTgxNDg1ODk= | 181 | https://github.com/huggingface/transformers/issues/181 | https://api.github.com/repos/huggingface/transformers/issues/181 | All about the training speed in classification job | I run the bert-base-uncased model with task 'mrpc' in ubuntu,nvidia p4000 8G.
It's a classification problem, and I use the default demo data.
But the training speed is only about 2 batches per second. Is there a problem?
I think it may be too slow, but I cannot find out why. I have another task with 1,300,000 examples that costs 6 hours per epoch. | closed | completed | false | 1 | [] | [] | 2019-01-11T06:27:39Z | 2019-01-14T09:09:04Z | 2019-01-14T09:09:04Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | zhusleep | 17,355,556 | MDQ6VXNlcjE3MzU1NTU2 | User | false |
huggingface/transformers | 398,208,606 | MDU6SXNzdWUzOTgyMDg2MDY= | 184 | https://github.com/huggingface/transformers/issues/184 | https://api.github.com/repos/huggingface/transformers/issues/184 | Python 3.5 + Torch 1.0 does not work | When running `run_lm_finetuning.py` to fine-tune language model with default settings (see command below), sometimes I could run successfully, but sometimes I received different errors like `RuntimeError: The size of tensor a must match the size of tensor b at non-singleton dimension 1`, `RuntimeError: Creating MTGP constants failed. at /pytorch/aten/src/THC/THCTensorRandom.cu:35` or `RuntimeError: Dimension out of range (expected to be in range of [-1, 0], but got 1)`. This problem can be solved when updating `python3.5` to `python3.6`.
```
python run_lm_finetuning.py \
--bert_model ~/bert/models/bert-base-uncased/ \
--do_train \
--train_file ~/bert/codes/samples/sample_text.txt \
--output_dir ~/bert/exp/lm \
--num_train_epochs 5.0 \
--learning_rate 3e-5 \
--train_batch_size 32 \
--max_seq_length 128 \
--on_memory
``` | closed | completed | false | 2 | [] | [] | 2019-01-11T09:43:43Z | 2019-01-14T09:10:03Z | 2019-01-14T09:10:02Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | yuhui-zh15 | 17,669,473 | MDQ6VXNlcjE3NjY5NDcz | User | false |
huggingface/transformers | 398,229,727 | MDU6SXNzdWUzOTgyMjk3Mjc= | 186 | https://github.com/huggingface/transformers/issues/186 | https://api.github.com/repos/huggingface/transformers/issues/186 | BertOnlyMLMHead is a duplicate of BertLMPredictionHead | https://github.com/huggingface/pytorch-pretrained-BERT/blob/35becc6d84f620c3da48db460d6fb900f2451782/pytorch_pretrained_bert/modeling.py#L387-L394
I don't understand how it is useful to wrap the BertLMPredictionHead class like that; perhaps it was forgotten in some refactoring? I can do a PR if you confirm that it can be replaced.
BertOnlyMLMHead is only used in BertForMaskedLM. | closed | completed | false | 1 | [] | [] | 2019-01-11T10:35:36Z | 2019-01-14T09:14:56Z | 2019-01-14T09:14:56Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | artemisart | 9,201,969 | MDQ6VXNlcjkyMDE5Njk= | User | false |
huggingface/transformers | 397,243,635 | MDU6SXNzdWUzOTcyNDM2MzU= | 175 | https://github.com/huggingface/transformers/issues/175 | https://api.github.com/repos/huggingface/transformers/issues/175 | RuntimeError: Dimension out of range (expected to be in range of [-1, 0], but got 1) | Sir, I was pretraining our BERT-Base model with multi-GPU training on 8 GPUs. Preprocessing succeeded, but the next step, training, raised the error below in run_lm_finetuning.py.
--
`python3 run_lm_finetuning.py --bert_model bert-base-uncased --do_train --train_file vocab007.txt --output_dir models --num_train_epochs 5.0 --learning_rate 3e-5 --train_batch_size 32 --max_seq_length 128 `
```
Traceback (most recent call last):
File "run_lm_finetuning.py", line 646, in <module>
main()
File "run_lm_finetuning.py", line 594, in main
loss = model(input_ids, segment_ids, input_mask, lm_label_ids, is_next)
File "/mnt/newvolume/pytorch_bert_env/lib/python3.5/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/mnt/newvolume/pytorch_bert_env/lib/python3.5/site-packages/torch/nn/parallel/data_parallel.py", line 143, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/mnt/newvolume/pytorch_bert_env/lib/python3.5/site-packages/torch/nn/parallel/data_parallel.py", line 153, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/mnt/newvolume/pytorch_bert_env/lib/python3.5/site-packages/torch/nn/parallel/parallel_apply.py", line 83, in parallel_apply
raise output
File "/mnt/newvolume/pytorch_bert_env/lib/python3.5/site-packages/torch/nn/parallel/parallel_apply.py", line 59, in _worker
output = module(*input, **kwargs)
File "/mnt/newvolume/pytorch_bert_env/lib/python3.5/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/mnt/newvolume/pytorch_bert_env/lib/python3.5/site-packages/pytorch_pretrained_bert/modeling.py", line 695, in forward
output_all_encoded_layers=False)
File "/mnt/newvolume/pytorch_bert_env/lib/python3.5/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/mnt/newvolume/pytorch_bert_env/lib/python3.5/site-packages/pytorch_pretrained_bert/modeling.py", line 626, in forward
embedding_output = self.embeddings(input_ids, token_type_ids)
File "/mnt/newvolume/pytorch_bert_env/lib/python3.5/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/mnt/newvolume/pytorch_bert_env/lib/python3.5/site-packages/pytorch_pretrained_bert/modeling.py", line 187, in forward
seq_length = input_ids.size(1)
RuntimeError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
```
Thanks.
| closed | completed | false | 11 | [] | [] | 2019-01-09T07:26:46Z | 2019-01-14T09:15:38Z | 2019-01-14T09:15:11Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | MuruganR96 | 35,978,784 | MDQ6VXNlcjM1OTc4Nzg0 | User | false |
huggingface/transformers | 398,771,339 | MDU6SXNzdWUzOTg3NzEzMzk= | 194 | https://github.com/huggingface/transformers/issues/194 | https://api.github.com/repos/huggingface/transformers/issues/194 | run_classifier.py doesn't save any configurations and I can't load the trained model. | closed | completed | false | 2 | [] | [] | 2019-01-14T07:16:07Z | 2019-01-14T09:19:59Z | 2019-01-14T09:19:59Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | anz2 | 24,385,276 | MDQ6VXNlcjI0Mzg1Mjc2 | User | false | |
huggingface/transformers | 381,872,071 | MDU6SXNzdWUzODE4NzIwNzE= | 30 | https://github.com/huggingface/transformers/issues/30 | https://api.github.com/repos/huggingface/transformers/issues/30 | [Feature request] Add example of finetuning the pretrained models on custom corpus | closed | completed | false | 2 | [] | [] | 2018-11-17T15:19:58Z | 2019-01-15T14:27:27Z | 2018-11-17T22:03:43Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | elyase | 1,175,888 | MDQ6VXNlcjExNzU4ODg= | User | false | |
huggingface/transformers | 397,673,308 | MDU6SXNzdWUzOTc2NzMzMDg= | 177 | https://github.com/huggingface/transformers/issues/177 | https://api.github.com/repos/huggingface/transformers/issues/177 | run_lm_finetuning.py does not define a do_lower_case argument | The file references `args.do_lower_case`, but doesn't have the corresponding `parser.add_argument` call.
As an aside, has anyone successfully applied LM fine-tuning for a downstream task (using this code, or maybe using the original tensorflow implementation)? I'm not even sure if the code will run in its current state. And after fixing this issue locally, I've had no luck using the output from fine-tuning: I have a model that gets state-of-the-art results when using pre-trained BERT, but after fine-tuning it performs no better than omitting BERT/pre-training entirely! I don't know whether to suspect that there are might be other bugs in the example code, or if the hyperparameters in the README are just a very poor starting point for what I'm doing. | closed | completed | false | 7 | [] | [] | 2019-01-10T05:01:17Z | 2019-01-15T14:34:15Z | 2019-01-14T09:04:46Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | nikitakit | 252,225 | MDQ6VXNlcjI1MjIyNQ== | User | false |
huggingface/transformers | 399,155,566 | MDU6SXNzdWUzOTkxNTU1NjY= | 196 | https://github.com/huggingface/transformers/issues/196 | https://api.github.com/repos/huggingface/transformers/issues/196 | TODO statement on Question/Answering Model | Has this been confirmed?
https://github.com/huggingface/pytorch-pretrained-BERT/blob/647c98353090ee411e1ef9016b2a458becfe36f9/pytorch_pretrained_bert/modeling.py#L1084 | closed | completed | false | 1 | [] | [] | 2019-01-15T01:56:48Z | 2019-01-16T12:23:14Z | 2019-01-16T12:23:14Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | phatlast96 | 10,504,024 | MDQ6VXNlcjEwNTA0MDI0 | User | false |
huggingface/transformers | 398,252,066 | MDU6SXNzdWUzOTgyNTIwNjY= | 187 | https://github.com/huggingface/transformers/issues/187 | https://api.github.com/repos/huggingface/transformers/issues/187 | issue is, that ##string will repeats at intermediate, it collapses all index for mask words | ```
----------------------------------> how much belan i havin my credit card and also debitcard
----------------------------------> ['how', 'much', 'belan', 'i', 'havin', 'my', 'credit', 'card', 'and', 'also', 'debitcard']
----------------------------------> ['**belan**', '**havin**']
----------------------------------> [2, 4]
----------------------------------> ['how', 'much', '**belan**', 'i', '**havin**', 'my', 'credit', 'card', 'and', 'also', 'debitcard']
----------------------------------> how much belan i havin my credit card and also debitcard
before_tokenized_text-------------> ['how', 'much', **'bela'**, **'##n'**, 'i', **'ha'**, **'##vin'**, 'my', 'credit', 'card', 'and', 'also', '**de'**, **'##bit',** '**##card']**
index_useless---------------------> [2, 4]
after_tokenized_text--------------> ['how', 'much', '[MASK]', '##n', '[MASK]', 'ha', '##vin', 'my', 'credit', 'card', 'and', 'also', 'de', '##bit', '##card']
########## ['more', 'most']
########## 2 <---------index_useless_length
########## 2 <---------predicted_words_len
########## how much [MASK] n [MASK] ha vin my credit card and also de bit card <---------tokenized_text
########## index_tk_aft [2, 4]
########## how much more n most ha vin my credit card and also de bit card
########## how much more n most ha vin my credit card and also de bit card <---------Result
```
i think As you understood. that spelling mistake words [2, 4] as Masking to predict.
but in this place, what happened,
##string -> '##n' , '##vin', like this spoil the predict final output.
i found and try so many ways. but all useless still.
**how to predict and fetch two more masking words?**
Thanks.
| closed | completed | false | 3 | [] | [] | 2019-01-11T11:35:06Z | 2019-01-18T09:07:34Z | 2019-01-14T09:16:36Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | MuruganR96 | 35,978,784 | MDQ6VXNlcjM1OTc4Nzg0 | User | false |
huggingface/transformers | 400,968,613 | MDU6SXNzdWU0MDA5Njg2MTM= | 209 | https://github.com/huggingface/transformers/issues/209 | https://api.github.com/repos/huggingface/transformers/issues/209 | Missing softmax in BertForQuestionAnswering after linear layer? | https://github.com/huggingface/pytorch-pretrained-BERT/blob/0a9d7c7edb20a3e82cfbb4b72515575543784823/pytorch_pretrained_bert/modeling.py#L1089-L1113
It seems there should be a softmax after the linear layer, or did I miss something? | closed | completed | false | 1 | [] | [] | 2019-01-19T06:55:30Z | 2019-01-19T08:26:35Z | 2019-01-19T08:26:35Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | jianyucai | 28,853,070 | MDQ6VXNlcjI4ODUzMDcw | User | false |
huggingface/transformers | 400,582,170 | MDU6SXNzdWU0MDA1ODIxNzA= | 204 | https://github.com/huggingface/transformers/issues/204 | https://api.github.com/repos/huggingface/transformers/issues/204 | Two to Three mask word prediction at the same sentence is very complex | Two to Three mask word prediction at the same sentence also very complex.
how to get good accuracy?
if i have to pretrained bert model and own dataset with **masked_lm_prob=0.25** (https://github.com/google-research/bert#pre-training-with-bert), what will happened?
Thanks. | closed | completed | false | 2 | [] | [] | 2019-01-18T05:52:40Z | 2019-01-22T16:51:09Z | 2019-01-22T16:50:03Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | MuruganR96 | 35,978,784 | MDQ6VXNlcjM1OTc4Nzg0 | User | false |
huggingface/transformers | 402,103,567 | MDU6SXNzdWU0MDIxMDM1Njc= | 219 | https://github.com/huggingface/transformers/issues/219 | https://api.github.com/repos/huggingface/transformers/issues/219 | How can I get the confidence score for the classification task | In evaluation step, it seems it only shows the predicted label for the data instance.
How can I get the confidence score for each class? | closed | completed | false | 1 | [] | [] | 2019-01-23T07:21:51Z | 2019-01-23T07:36:01Z | 2019-01-23T07:35:25Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | fenneccat | 22,452,009 | MDQ6VXNlcjIyNDUyMDA5 | User | false |
huggingface/transformers | 401,890,579 | MDU6SXNzdWU0MDE4OTA1Nzk= | 216 | https://github.com/huggingface/transformers/issues/216 | https://api.github.com/repos/huggingface/transformers/issues/216 | Training classifier does not work for more than two classes | I am trying to run a classifier on the AGN data which has four classes. I am using the following command to train and evaluate the classifier.
python examples/run_classifier.py \
--task_name agn \
--do_train \
--do_eval \
--do_lower_case \
--data_dir $GLUE_DIR/AGN/ \
--bert_model bert-base-uncased \
--max_seq_length 128 \
--train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 2.0 \
--output_dir /tmp/agn_output/
I have created a task named agn similar to cola, mnli and others. The model is trained properly but during evaluation it throws the following error.
'''
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [2,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [3,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [4,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [5,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [6,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [7,0,0] Assertion `t >= 0 && t < n_classes` failed.
Traceback (most recent call last):
File "examples/run_classifier.py", line 690, in <module>
main()
File "examples/run_classifier.py", line 663, in main
logits = logits.detach().cpu().numpy()
RuntimeError: CUDA error: device-side assert triggered
'''
The reason for this issue is:
The model is trained with output size of 4 (since four classes), but during testing the model has output size of 2 because the BertForSequenceClassification class has default value for num_labels as 2.
So, if we change the following line in run_classifier.py
model = BertForSequenceClassification.from_pretrained(args.bert_model, state_dict=model_state_dict)
to
model = BertForSequenceClassification.from_pretrained(args.bert_model, state_dict=model_state_dict, num_labels=num_labels), the issue will be resolved.
Please let me know If I can push the changes. | closed | completed | false | 2 | [] | [] | 2019-01-22T18:14:52Z | 2019-01-23T13:38:42Z | 2019-01-23T13:38:42Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | satyakesav | 7,447,204 | MDQ6VXNlcjc0NDcyMDQ= | User | false |
huggingface/transformers | 400,544,254 | MDU6SXNzdWU0MDA1NDQyNTQ= | 203 | https://github.com/huggingface/transformers/issues/203 | https://api.github.com/repos/huggingface/transformers/issues/203 | Add some new layers from BertModel and then 'grad' error occurs | I wanna do the fine-tuning work by adding a textcnn on the base of BertModel. I write a new class and add two layers of conv (like a textcnn) basically on Embedding Layer. And then an error occurs, called "grad can be implicitly created only for scalar outputs" i search for the Internet and can't find a good solution to that, hope someone can solve it | closed | completed | false | 2 | [] | [] | 2019-01-18T02:19:58Z | 2019-01-23T16:34:28Z | 2019-01-23T16:34:28Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | lhbrichard | 33,123,730 | MDQ6VXNlcjMzMTIzNzMw | User | false |
huggingface/transformers | 402,517,534 | MDU6SXNzdWU0MDI1MTc1MzQ= | 224 | https://github.com/huggingface/transformers/issues/224 | https://api.github.com/repos/huggingface/transformers/issues/224 | how to add new vocabulary? | for specific task, it is required to add new vocabulary for tokenizer.
It is ok that re-training for those vocabulary for me :)
Is it possible to add new vocabulary for tokenizer?
| closed | completed | false | 1 | [] | [] | 2019-01-24T02:42:38Z | 2019-01-24T05:13:11Z | 2019-01-24T05:13:10Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | hahmyg | 3,884,429 | MDQ6VXNlcjM4ODQ0Mjk= | User | false |
huggingface/transformers | 403,125,784 | MDU6SXNzdWU0MDMxMjU3ODQ= | 226 | https://github.com/huggingface/transformers/issues/226 | https://api.github.com/repos/huggingface/transformers/issues/226 | Logical error in the run_lm_finetuning? | Hi,
@thomwolf @nhatchan
@tholor @deepset-ai
Many thanks for amazing work with this repository =)
I maybe grossly wrong or just missed some line of the code somewhere, but it seems to me that there is a glaring issue in the overall logic of `examples/run_lm_finetuning.py` - I guess you never pre-trained the model till convergence from scratch, right?
_________________________________________
**Context**
I have already been able to fit the model to the Russian version of the SQUAD dataset from scratch (so-called **SberSQUAD** from sdsj 2017), and I was able to obtain **~40% EM w/o any pre-training**. Afaik, ~60% EM is about the top result on this dataset, achieved using BiDAF, so the model worksm which is good =).
Anyway this was a sanity check for me to see that the model is sound, obviously to **achieve good results you need to pre-train first** (afaik the authors of the BERT paper did not even post any results w/o pre-training, right?).
So now I am planning to pre-train BERT for the Russian language with various pre-processing ideas:
- BPE (like in the original);
- Embedding bag (works well for "difficult" languages) + ;
_________________________________________
**The Problem**
First of all let's quote the paper
```
In order to train a deep bidirectional representation, we take a straightforward approach of masking
some percentage of the input tokens at random, and then predicting only those masked tokens.
We refer to this procedure as a “masked LM” (MLM), although it is often referred to as a Cloze task in
the literature (Taylor, 1953). In this case, the fi- nal hidden vectors corresponding to the mask tokens are
fed into an output softmax over the vo- cabulary, as in a standard LM. In all of our exper- iments, we
mask 15% of all WordPiece tokens in each sequence at random. In contrast to denoising auto-encoders
(Vincent et al., 2008), we only pre- dict the masked words rather than reconstructing the entire input.
```
So as far as I can see:
- We mask / alter some of the input (afaik the masking scheme [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_lm_finetuning.py#L276) is correct) and make the model correct our "mistakes". It only makes sense - we break the input, and the model corrects it;
- But if you look [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_lm_finetuning.py#L142), [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_lm_finetuning.py#L331-L334) and [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_lm_finetuning.py#L371) - it seems to me that in the code:
- Just padded / processed tokens are passed as input;
- The lm targets are the "messed up" tokens;
So, the training is kind of reversed.
The correct sequence is passed, but the incorrect sequence is the target.
Anyway - I may just have missed some line of code, that changes everything.
I am just trying to understand the model properly, because I need to do a total rewrite of the pre-processing, because in my domain usage of embedding bags proved to be more beneficial than BPE.
Many thanks!
| closed | completed | false | 2 | [] | [] | 2019-01-25T11:51:02Z | 2019-01-25T14:35:21Z | 2019-01-25T14:35:21Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | snakers4 | 12,515,440 | MDQ6VXNlcjEyNTE1NDQw | User | false |
huggingface/transformers | 403,423,004 | MDU6SXNzdWU0MDM0MjMwMDQ= | 228 | https://github.com/huggingface/transformers/issues/228 | https://api.github.com/repos/huggingface/transformers/issues/228 | Freezing base transformer weights | As I understand, say if I'm doing a classification task, then the transformer weights, along with the top classification layer weights, are both trainable (i.e. `requires_grad=True`), correct? If so, is there a way to freeze the transformer weights, but only train the top layer? Is that a good idea in general when I have a small dataset? | closed | completed | false | 2 | [] | [] | 2019-01-26T09:09:36Z | 2019-01-26T09:45:04Z | 2019-01-26T09:45:04Z | null | 20260320T144313Z | 2026-03-20T14:43:13Z | ZhaofengWu | 11,954,789 | MDQ6VXNlcjExOTU0Nzg5 | User | false |
Transformers PR Slop Dataset
Normalized snapshots of issues, pull requests, comments, reviews, and linkage data from huggingface/transformers.
Files:
- issues.parquet
- pull_requests.parquet
- comments.parquet
- issue_comments.parquet (derived view of issue discussion comments)
- pr_comments.parquet (derived view of pull request discussion comments)
- pr_files.parquet
- pr_diffs.parquet
- reviews.parquet
- review_comments.parquet
- links.parquet
- events.parquet
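A minimal sketch of reading one of these files, assuming the parquet files listed above have already been downloaded locally; it only touches columns that appear in the preview (number, title, state, comments_count):

```python
# Minimal sketch: load the issues table locally with pandas.
# Assumes issues.parquet has been downloaded from this dataset into the
# working directory; only columns visible in the preview are used.
import pandas as pd

issues = pd.read_parquet("issues.parquet")

closed = issues[issues["state"] == "closed"]
print(len(issues), "issues total,", len(closed), "closed")

# Most-discussed issues by stored comment count.
top = issues.sort_values("comments_count", ascending=False)
print(top[["number", "title", "comments_count"]].head(10).to_string(index=False))
```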
Use cases:
- duplicate PR and issue analysis
- triage and ranking experiments
- eval set creation
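As one possible starting point for the duplicate-analysis use case, the sketch below pairs issue titles by TF-IDF cosine similarity; the row slice and the similarity threshold are arbitrary illustrative choices, not recommendations from the dataset authors:

```python
# Rough duplicate-title baseline: TF-IDF over issue titles plus cosine
# similarity. The 2,000-row slice and the 0.8 threshold are illustrative.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

issues = pd.read_parquet("issues.parquet").head(2000)  # small slice; the full pairwise matrix would be large
titles = issues["title"].fillna("").tolist()

vectors = TfidfVectorizer(stop_words="english").fit_transform(titles)
sims = cosine_similarity(vectors)

# Print highly similar title pairs (upper triangle only).
for i in range(len(titles)):
    for j in range(i + 1, len(titles)):
        if sims[i, j] > 0.8:
            print(issues["number"].iloc[i], "~", issues["number"].iloc[j], "|", titles[i], "<->", titles[j])
```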
Notes:
- updated daily
- latest snapshot: 20260320T144313Z
- raw data only; no labels or moderation decisions
- PR metadata, file-level patch hunks, and full unified diffs are included
- full file contents for changed files are not included
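A hypothetical sketch of working with the diff data follows; the pr_diffs.parquet column names used here ("number", "diff") are assumptions, since that file's schema is not shown in this preview:

```python
# Illustrative only: the column names below are assumptions, not
# documented fields of pr_diffs.parquet.
import pandas as pd

diffs = pd.read_parquet("pr_diffs.parquet")

def count_changed_files(diff_text) -> int:
    # Count "diff --git" headers in a unified diff.
    return sum(1 for line in str(diff_text).splitlines() if line.startswith("diff --git"))

diffs["files_changed"] = diffs["diff"].map(count_changed_files)
print(diffs[["number", "files_changed"]].head())
```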
Bootstrap status:
- root dataset is currently promoted from merged checkpoints
- full historical backfill is still running in scheduled jobs
- state/watermark.json is intentionally omitted until a fully successful snapshot lands