Dataset preview. Every row shown is a canonical English math item with split = "test", subcategories = "Canonical", category = "Mathematical & Scientific Notation", lang = "eng_Latn", and variation_id = 1.0; the id encodes set_id and variation_id (e.g., id "200-1.0" corresponds to set_id 200.0, variation_id 1.0). The second_lang and notes columns are constant across the split and omitted here:

| id | question | choices | answer | answer_label |
|---|---|---|---|---|
| 200-1.0 | 7 + 7 is | ["15", "13", "16", "14"] | 3 | D |
| 201-1.0 | 1/2 + 1/4 = 2/4 + 1/4 equals | ["3/4", "1/2", "3/8", "1/3"] | 0 | A |
| 202-1.0 | The area of a 4 by 3 rectangle is | ["10 square units", "13 square units", "14 square units", "12 square units"] | 3 | D |
| 203-1.0 | In the pattern 2, 4, 6, 8, the next number is | ["10", "9", "8", "11"] | 0 | A |
| 204-1.0 | The sum of angles in a triangle is | ["90 degrees", "60 degrees", "360 degrees", "180 degrees"] | 3 | D |
| 205-1.0 | In the sequence 15, 20, 25, ___, 35, the missing number is | ["28", "24", "30", "32"] | 2 | C |
| 206-1.0 | The number of sides in a square is | ["3", "4", "5", "6"] | 1 | B |
| 207-1.0 | Half of a circle is colored blue. The fraction that is shaded is | ["1/3", "1/4", "1/2", "1/5"] | 2 | C |
| 208-1.0 | In the Pythagorean theorem a² + b² = c², variable 'c' represents | ["The hypotenuse", "The shortest side", "The base", "The height"] | 0 | A |
| 209-1.0 | 24 stickers minus 8 stickers leaves | ["15", "16", "14", "17"] | 1 | B |
| 210-1.0 | 3 apples at 25 cents each costs | ["70 cents", "75 cents", "85 cents", "80 cents"] | 1 | B |
| 211-1.0 | 47 rounded to the nearest 10 is | ["40", "55", "50", "45"] | 2 | C |
| 212-1.0 | 9 × 9 equals | ["62", "81", "36", "64"] | 1 | B |
| 213-1.0 | A pizza is cut into 8 equal slices. If you eat 3 slices, 5 slices will be left. The fraction of pizza which is left will be | ["5/8", "3/8", "5/3", "3/5"] | 0 | A |
| 214-1.0 | If x + 6 = 12, then x = 12 - 6 equals | ["8", "6", "7", "17"] | 1 | B |
| 215-1.0 | 50% of 60 is | ["30", "25", "35", "40"] | 0 | A |
| 216-1.0 | 5,000,000,000 / 1,000 is equal to | ["5,000,000", "50,000,000", "1,000", "500,000"] | 0 | A |
| 217-1.0 | The value of 16^(1/2) is equal to | ["3", "5", "4", "6"] | 2 | C |
| 218-1.0 | Calculate 3.1416 × 2 | ["7.2831", "628.33", "62.832", "6.2832"] | 3 | D |
| 219-1.0 | The unit of volume is | ["m^2", "m", "m^3", "kg"] | 2 | C |
| 220-1.0 | The last digit of 1982354 is | ["4", "1", "8", "2"] | 0 | A |

Each row additionally carries three dicts keyed by the benchmark's 14 tokenizers (CohereLabs/aya-expanse-8b, Qwen/Qwen3-8B, bigscience/bloom, common-pile/comma-v0.1-1t, facebook/xglm-564M, google-bert/bert-base-multilingual-cased, google/byt5-small, google/gemma-2-2b, gpt2, meta-llama/Llama-3.2-1B, microsoft/Phi-3-mini-4k-instruct, mistralai/tekken, tiktoken/gpt-4o, tokenmonster/englishcode-32000-consistent-v1): vanilla_cos_sim_to_canonical and trimmed_cos_sim_to_canonical, which equal 1 for every tokenizer on these canonical rows, and token_counts, the question's length in tokens under each tokenizer (for example, "7 + 7 is" spans 4 tokens under gpt2 but 8 under the byte-level google/byt5-small).
Dataset Card for Tokenization Robustness Math
TokSuite Benchmark (English Collection)
Dataset Description
This dataset is part of TokSuite, a comprehensive benchmark designed to measure how different tokenization strategies affect language model performance and robustness. This subset contains English-language multiple-choice text-completion questions with various real-world perturbations that test tokenizer robustness.
- Curated by: R3 Research Team
- Language(s): English (eng)
- License: MIT License
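For quick orientation, here is a minimal sketch of loading this subset with the Hugging Face `datasets` library. The repository id below is a placeholder, not the dataset's confirmed Hub path; substitute the actual one.

```python
from datasets import load_dataset

# Placeholder repository id; substitute the dataset's actual Hub path.
ds = load_dataset("toksuite/tokenization-robustness-math", split="test")

row = ds[0]
print(row["question"])      # e.g., "7 + 7 is"
print(row["choices"])       # e.g., ["15", "13", "16", "14"]
print(row["answer_label"])  # e.g., "D"
```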
Dataset Summary
TokSuite addresses a fundamental challenge in language model research: understanding how tokenization choices impact model behavior in isolation. The English subset specifically measures model performance on canonical questions and various perturbations, including {LIST_KEY_PERTURBATION_TYPES}.
Key Features:
- {NUM_CANONICAL_QUESTIONS} canonical questions covering mathematical and scientific notation
- Multiple perturbation types reflecting real-world text variations in English
- Parallel structure with the TokSuite benchmark (available in English, Turkish, Italian, Chinese, Farsi)
- Native speaker curation ensuring linguistic authenticity
Supported Tasks
- Multiple-Choice Question Answering: text-completion format with 4 answer choices (a scoring sketch follows this list)
- Tokenizer Robustness Evaluation: measuring performance degradation under various text perturbations
- Multilingual NLP Benchmarking: evaluating language models on English text understanding
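A common way to score text-completion multiple choice is to rank the answer options by the log-probability a causal language model assigns to each continuation. The sketch below is one plausible protocol, not necessarily the paper's exact setup; `gpt2` is used purely as a small illustrative model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def choice_logprob(question: str, choice: str) -> float:
    """Total log-probability of `choice` as a continuation of `question`.

    Assumes the prompt tokenization is a prefix of the full-sequence
    tokenization, which holds here because the continuation starts at a space.
    """
    prompt_len = tok(question, return_tensors="pt").input_ids.shape[1]
    full_ids = tok(question + " " + choice, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # logits at position t predict token t+1, so drop the last position
    # and read off the log-prob of each continuation token.
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
    choice_ids = full_ids[0, prompt_len:]
    return logprobs[prompt_len - 1 :].gather(1, choice_ids.unsqueeze(1)).sum().item()

question = "7 + 7 is"
choices = ["15", "13", "16", "14"]
pred = max(range(len(choices)), key=lambda i: choice_logprob(question, choices[i]))
print(pred, choices[pred])  # the gold answer for this row is index 3 ("14")
```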
Languages
The dataset contains text in English written in the Latin script (language code: eng_Latn).
Dataset Structure
Data Instances
An example from the dataset:
{
  "question": "7 + 7 is",
  "choices": ["15", "13", "16", "14"],
  "answer": 3,
  "answer_label": "D",
  "split": "test",
  "subcategories": "Canonical",
  "category": "Mathematical & Scientific Notation",
  "lang": "eng_Latn",
  "second_lang": "",
  "coding_lang": "",
  "notes": "",
  "id": "200-1.0",
  "set_id": 200.0,
  "variation_id": 1.0
}
Data Fields
| Field | Type | Description |
|---|---|---|
| question | string | The question text in English (Latin script) |
| choices | list[string] | Four multiple-choice answer options in English |
| answer | int64 | Index of the correct answer (0-3) |
| answer_label | string | Letter label of the correct answer (A, B, C, or D) |
| split | string | Dataset split identifier (all entries are "test") |
| subcategories | string | Perturbation category ("Canonical" for unperturbed questions) |
| category | string | Topic category ("Mathematical & Scientific Notation" for this subset) |
| lang | string | Language code (eng_Latn = English in Latin script) |
| second_lang | string | English translation or description of the question (empty for this English subset) |
| coding_lang | string | Not applicable for this dataset (empty string) |
| notes | string | Additional context about the question or perturbation type |
| id | string | Unique question identifier (e.g., "200-1.0") |
| set_id | float64 | Question set grouping identifier (ranges from {ID_RANGE_START}-{ID_RANGE_END}) |
| variation_id | float64 | Variation number within a question set (1.0 for canonical rows) |
| vanilla_cos_sim_to_canonical | dict | Per-tokenizer cosine similarity of the row's text to its canonical form (1 for canonical rows) |
| trimmed_cos_sim_to_canonical | dict | As above, computed on trimmed text |
| token_counts | dict | Per-tokenizer token count of the question |
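The `answer` index and `answer_label` letter encode the same information. A small consistency check, using the first preview row above:

```python
LABELS = "ABCD"

row = {
    "question": "7 + 7 is",
    "choices": ["15", "13", "16", "14"],
    "answer": 3,
    "answer_label": "D",
}

gold_choice = row["choices"][row["answer"]]          # "14"
assert row["answer_label"] == LABELS[row["answer"]]  # "D" == "ABCD"[3]
print(gold_choice)
```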
Dataset Creation
Curation Rationale
This dataset was created to:
- Systematically evaluate how different tokenization strategies handle English text
- Measure robustness against real-world text perturbations specific to English
- Support research into tokenization's impact on language model behavior
- Provide standardized benchmarks for English language models
The questions were designed to be straightforward with high baseline accuracy, allowing researchers to cleanly measure performance degradation when perturbations are applied.
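Because canonical accuracy is designed to be high, robustness can be read off as the accuracy gap between canonical and perturbed rows. A minimal bookkeeping sketch, assuming you already have a per-row correctness flag and that `subcategories` is the grouping key for perturbation type (as documented in the field table above):

```python
from collections import defaultdict

def accuracy_by_subcategory(rows, correct):
    """rows: dataset rows; correct: parallel iterable of booleans."""
    buckets = defaultdict(list)
    for row, ok in zip(rows, correct):
        buckets[row["subcategories"]].append(ok)
    return {name: sum(oks) / len(oks) for name, oks in buckets.items()}

# acc = accuracy_by_subcategory(ds, per_row_correct)
# Degradation of a perturbation relative to the unperturbed baseline:
# drop = acc["Canonical"] - acc[perturbation_name]
```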
Source Data
Data Collection and Processing
- Canonical Questions: {NUM_BASE_QUESTIONS} baseline questions in English were created covering general knowledge topics
- Translation: Native {LANGUAGE_NAME} speakers translated questions to {LANGUAGE_NAME}
- Perturbations: Each question underwent targeted perturbations designed to reflect {LINGUISTIC_CHARACTERISTICS}
- Validation: Model-in-the-loop process ensured high baseline accuracy across 14 different tokenizers
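The per-row `token_counts` field can be approximated for the Hub-hosted tokenizers with `AutoTokenizer`. A sketch follows; the dataset's exact counting convention (e.g., treatment of special tokens) is an assumption here, and only tokenizers loadable via `transformers` are used.

```python
from transformers import AutoTokenizer

# Three of the benchmark's 14 tokenizers that load via AutoTokenizer;
# mistralai/tekken and tiktoken/gpt-4o require their own libraries.
names = ["gpt2", "google/byt5-small", "google-bert/bert-base-multilingual-cased"]

question = "7 + 7 is"
for name in names:
    tok = AutoTokenizer.from_pretrained(name)
    n = len(tok(question, add_special_tokens=False).input_ids)
    print(f"{name}: {n} tokens")  # byte-level ByT5 yields one token per byte
```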
Perturbation Categories
- Canonical: {DESCRIPTION_OF_CANONICAL}
- {PERTURBATION_NAME_1}: {DESCRIPTION_1}
- {PERTURBATION_NAME_2}: {DESCRIPTION_2}
- {PERTURBATION_NAME_3}: {DESCRIPTION_3}
- {PERTURBATION_NAME_4}: {DESCRIPTION_4}
- {PERTURBATION_NAME_5}: {DESCRIPTION_5}
- {PERTURBATION_NAME_6}: {DESCRIPTION_6}
- {PERTURBATION_NAME_7}: {DESCRIPTION_7}
Model Performance Comparison
| model_name | canonical | {PERTURBATION_COL_1} | {PERTURBATION_COL_2} | {PERTURBATION_COL_3} | {PERTURBATION_COL_4} | {PERTURBATION_COL_5} | {PERTURBATION_COL_6} | {PERTURBATION_COL_7} |
|---|---|---|---|---|---|---|---|---|
| Aya | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
| BLOOM | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
| ByT5 | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
| Comma | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
| GPT-2 | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
| GPT-4o | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
| Gemma-2 | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
| Llama-3.2 | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
| Phi-3 | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
| Qwen-3 | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
| Tekken | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
| TokenMonster | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
| XGLM | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
| mBERT | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} | {VAL} |
Who are the source data producers?
Native English speakers curated and validated all questions and perturbations. The TokSuite research team at R3 designed the overall benchmark framework.
Annotations
Annotation process
Questions were manually created and translated by native speakers. Each perturbation was carefully designed to reflect authentic variations encountered in real-world English text processing.
Who are the annotators?
Native English speakers with expertise in linguistics and NLP, working as part of the TokSuite project.
Personal and Sensitive Information
The dataset contains only general knowledge questions and does not include any personal or sensitive information.
Considerations for Using the Data
Social Impact of Dataset
This dataset contributes to improving language technology for English speakers by:
- Enabling better understanding of tokenization challenges in English
- Supporting development of more robust multilingual models
- Providing standardized evaluation for English NLP research
Discussion of Biases
- Language variety: The dataset uses standard English and may not fully represent dialectal variations
- Script focus: All questions are written in the Latin script (eng_Latn)
- Domain coverage: Questions focus on general knowledge and may not represent domain-specific language use
- Question simplicity: Designed for high baseline accuracy, which may not reflect real-world task complexity
Other Known Limitations
- Relatively small dataset size (designed for evaluation, not training)
- Focus on multiple-choice format may not capture all aspects of language understanding
- Perturbations are specific to the characteristics of English, and findings may not generalize to all languages
- Models evaluated were trained at ~1B parameters; results may differ at larger scales
Additional Information
Dataset Curators
The dataset was curated by the TokSuite research team at R3.
Licensing Information
MIT license
Citation Information
If you use this dataset in your research, please cite the TokSuite paper:
@inproceedings{toksuite2026,
title={TokSuite: Measuring the Impact of Tokenizer Choice on Language Model Behavior},
author={Altıntaş, Gül Sena and Ehghaghi, Malikeh and Lester, Brian and Liu, Fengyuan and Zhao, Wanru and Ciccone, Marco and Raffel, Colin},
booktitle={Preprint.},
year={2026},
url={TBD}
}
Paper: TokSuite: Measuring the Impact of Tokenizer Choice on Language Model Behavior
Contributions
This dataset is part of TokSuite, which includes:
- 14 language models with identical architectures but different tokenizers
- Multilingual benchmark datasets (English, Turkish, Italian, Farsi, Chinese)
- Comprehensive analysis of tokenization's impact on model behavior
Contact
For questions or issues related to this dataset, please refer to the TokSuite project or contact the authors through the paper submission system.
Part of the TokSuite Project
Understanding Tokenization's Role in Language Model Behavior