`jina-reranker-v3` is a 0.6B parameter multilingual document reranker with a novel *last but not late interaction* architecture. Unlike ColBERT's separate encoding with multi-vector matching, this model performs causal self-attention between query and documents within the same context window, extracting contextual embeddings from the last token of each document.
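
To make the mechanism concrete, here is a minimal sketch of the idea, not the official inference code: the query and all documents are packed into one causal context, the decoder runs once, and each document is represented by the hidden state of its final token. Only the backbone name and the projector shape come from this card; the packing layout, the GELU activation, and the randomly initialized projector weights below are illustrative assumptions.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")
backbone = AutoModel.from_pretrained("Qwen/Qwen3-0.6B")

query = "how do rerankers compare to ColBERT?"
docs = [
    "Rerankers reorder retrieved passages by relevance to the query.",
    "Bananas are a good source of potassium.",
]

# Pack query + documents into a single causal context and record where each
# segment ends, so its last-token hidden state can be read off afterwards.
# NOTE: the real prompt template is not shown on this card; this layout is
# an assumption for illustration.
input_ids, last_token_idx = [], []
for segment in [f"Query: {query}"] + [f"Document: {d}" for d in docs]:
    input_ids.extend(tokenizer(segment, add_special_tokens=False).input_ids)
    last_token_idx.append(len(input_ids) - 1)

with torch.no_grad():
    hidden = backbone(torch.tensor([input_ids])).last_hidden_state[0]

# MLP projector with the 1024 -> 512 -> 256 shape described below; these
# weights are random here, the released model ships trained ones.
projector = torch.nn.Sequential(
    torch.nn.Linear(1024, 512),
    torch.nn.GELU(),
    torch.nn.Linear(512, 256),
)
embeddings = projector(hidden[last_token_idx])
query_emb, doc_embs = embeddings[0], embeddings[1:]
scores = torch.nn.functional.cosine_similarity(
    query_emb.unsqueeze(0), doc_embs, dim=-1
)
print(scores)  # one relevance score per document (untrained projector here)
```

Because every document attends to the query (and to earlier documents) in a single forward pass, the extracted embeddings are contextual, which is what separates this from ColBERT-style late interaction, where query and documents never see each other during encoding.
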
Built on Qwen3-0.6B with 28 transformer layers and a lightweight MLP projector (1024→512→256), it processes up to 64 documents simultaneously within 131K token context. The model achieves state-of-the-art BEIR performance with 61.94 nDCG@10 while being 10× smaller than generative listwise rerankers.
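
For actual inference, a hedged usage sketch follows. Loading with `trust_remote_code=True` matches other Jina reranker model cards, but the `rerank` entry point and its return format are assumptions; consult the usage section of this card for the exact API. The chunking reflects the 64-document-per-pass limit stated above.

```python
from transformers import AutoModel

# Load the released checkpoint; remote code supplies the reranking head.
model = AutoModel.from_pretrained(
    "jinaai/jina-reranker-v3",
    torch_dtype="auto",
    trust_remote_code=True,  # custom modeling code, as with earlier Jina rerankers
)

query = "What is the capital of France?"
documents = [
    "Paris is the capital of France.",
    "Berlin is the capital of Germany.",
] * 40  # 80 candidate documents

# One forward pass scores up to 64 documents, so split larger candidate
# lists into chunks and merge the per-chunk results afterwards.
results = []
for i in range(0, len(documents), 64):
    results.extend(model.rerank(query, documents[i:i + 64]))  # assumed helper
print(results)
```

Chunking at 64 documents mirrors the stated per-pass limit; the 131K-token window bounds the total packed length of the query plus documents in each chunk.
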
| Model | Size | BEIR | MIRACL | MKQA | CoIR |