Update README.md
3. Init Weights: Vanilla **bert-base-uncased**. No corpus awareness, unlike the official SPLADE++ / ColBERT.
4. Yet it achieves a competitive effectiveness of MRR@10 **37.22** on in-domain data (& OOD), with a retrieval latency of **47.27 ms** (multi-threaded), all on **consumer-grade GPUs** with **only 5 negatives per query**.
5. For industry settings: effectiveness on custom domains needs more than just **trading FLOPS for tiny gains**, and the premise that "SPLADE++ is not well suited to mono-CPU retrieval" does not hold.
6. Owing to query-time inference latency, we still need two models, one for queries and one for documents. This is the document model; the query model will be **released soon.**
<img src="./ID.png" width=750 height=650/>
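For readers unfamiliar with how a SPLADE model turns MLM logits into a sparse lexical vector, here is a minimal sketch of the standard SPLADE pooling step (log-saturated ReLU over the vocabulary, max-pooled across tokens). It uses random dummy logits in place of a real BERT MLM head, and the function name `splade_pool` is illustrative, not part of this release:

```python
import torch

def splade_pool(logits: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """SPLADE pooling: w = max over tokens of log(1 + ReLU(MLM logits)).

    logits:         (batch, seq_len, vocab_size) MLM output scores
    attention_mask: (batch, seq_len) 1 for real tokens, 0 for padding
    returns:        (batch, vocab_size) non-negative sparse term weights
    """
    # Saturate activations so a few strong terms dominate each document.
    weights = torch.log1p(torch.relu(logits))
    # Zero out padding positions before pooling over the sequence axis.
    weights = weights * attention_mask.unsqueeze(-1)
    return weights.max(dim=1).values

# Dummy example: batch of 1, sequence of 4 tokens (last one padding), vocab of 10.
logits = torch.randn(1, 4, 10)
mask = torch.tensor([[1, 1, 1, 0]])
vec = splade_pool(logits, mask)
```

The resulting `vec` is a vocabulary-sized vector with many zeros, which is what makes SPLADE representations indexable by an inverted index and cheap to search.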