Should We Still Pretrain Encoders with Masked Language Modeling? • Paper • arXiv:2507.00994 • Published Jul 1