---
license: mit
multilinguality: multilingual
task_categories:
- multiple-choice
pretty_name: Tokenization Robustness Math
tags:
- tokenization
- mathematics
dataset_info:
- config_name: tokenizer_robustness_completion_math_canonical
features:
- name: question
dtype: string
- name: choices
list: string
- name: answer
dtype: int64
- name: answer_label
dtype: string
- name: split
dtype: string
- name: subcategories
dtype: string
- name: category
dtype: string
- name: lang
dtype: string
- name: second_lang
dtype: string
- name: notes
dtype: string
- name: id
dtype: string
- name: set_id
dtype: string
- name: variation_id
dtype: string
- name: vanilla_cos_sim_to_canonical
struct:
- name: CohereLabs/aya-expanse-8b
dtype: float64
- name: Qwen/Qwen3-8B
dtype: float64
- name: bigscience/bloom
dtype: float64
- name: common-pile/comma-v0.1-1t
dtype: float64
- name: facebook/xglm-564M
dtype: float64
- name: google-bert/bert-base-multilingual-cased
dtype: float64
- name: google/byt5-small
dtype: float64
- name: google/gemma-2-2b
dtype: float64
- name: gpt2
dtype: float64
- name: meta-llama/Llama-3.2-1B
dtype: float64
- name: microsoft/Phi-3-mini-4k-instruct
dtype: float64
- name: mistralai/tekken
dtype: float64
- name: tiktoken/gpt-4o
dtype: float64
- name: tokenmonster/englishcode-32000-consistent-v1
dtype: float64
- name: trimmed_cos_sim_to_canonical
struct:
- name: CohereLabs/aya-expanse-8b
dtype: float64
- name: Qwen/Qwen3-8B
dtype: float64
- name: bigscience/bloom
dtype: float64
- name: common-pile/comma-v0.1-1t
dtype: float64
- name: facebook/xglm-564M
dtype: float64
- name: google-bert/bert-base-multilingual-cased
dtype: float64
- name: google/byt5-small
dtype: float64
- name: google/gemma-2-2b
dtype: float64
- name: gpt2
dtype: float64
- name: meta-llama/Llama-3.2-1B
dtype: float64
- name: microsoft/Phi-3-mini-4k-instruct
dtype: float64
- name: mistralai/tekken
dtype: float64
- name: tiktoken/gpt-4o
dtype: float64
- name: tokenmonster/englishcode-32000-consistent-v1
dtype: float64
- name: token_counts
struct:
- name: CohereLabs/aya-expanse-8b
dtype: int64
- name: Qwen/Qwen3-8B
dtype: int64
- name: bigscience/bloom
dtype: int64
- name: common-pile/comma-v0.1-1t
dtype: int64
- name: facebook/xglm-564M
dtype: int64
- name: google-bert/bert-base-multilingual-cased
dtype: int64
- name: google/byt5-small
dtype: int64
- name: google/gemma-2-2b
dtype: int64
- name: gpt2
dtype: int64
- name: meta-llama/Llama-3.2-1B
dtype: int64
- name: microsoft/Phi-3-mini-4k-instruct
dtype: int64
- name: mistralai/tekken
dtype: int64
- name: tiktoken/gpt-4o
dtype: int64
- name: tokenmonster/englishcode-32000-consistent-v1
dtype: int64
splits:
- name: test
num_bytes: 11202
num_examples: 21
download_size: 29976
dataset_size: 11202
- config_name: tokenizer_robustness_completion_math_chinese
features:
- name: question
dtype: string
- name: choices
list: string
- name: answer
dtype: int64
- name: answer_label
dtype: string
- name: split
dtype: string
- name: subcategories
dtype: string
- name: category
dtype: string
- name: lang
dtype: string
- name: second_lang
dtype: string
- name: notes
dtype: string
- name: id
dtype: string
- name: set_id
dtype: string
- name: variation_id
dtype: string
- name: vanilla_cos_sim_to_canonical
struct:
- name: CohereLabs/aya-expanse-8b
dtype: float64
- name: Qwen/Qwen3-8B
dtype: float64
- name: bigscience/bloom
dtype: float64
- name: common-pile/comma-v0.1-1t
dtype: float64
- name: facebook/xglm-564M
dtype: float64
- name: google-bert/bert-base-multilingual-cased
dtype: float64
- name: google/byt5-small
dtype: float64
- name: google/gemma-2-2b
dtype: float64
- name: gpt2
dtype: float64
- name: meta-llama/Llama-3.2-1B
dtype: float64
- name: microsoft/Phi-3-mini-4k-instruct
dtype: float64
- name: mistralai/tekken
dtype: float64
- name: tiktoken/gpt-4o
dtype: float64
- name: tokenmonster/englishcode-32000-consistent-v1
dtype: float64
- name: trimmed_cos_sim_to_canonical
struct:
- name: CohereLabs/aya-expanse-8b
dtype: float64
- name: Qwen/Qwen3-8B
dtype: float64
- name: bigscience/bloom
dtype: float64
- name: common-pile/comma-v0.1-1t
dtype: float64
- name: facebook/xglm-564M
dtype: float64
- name: google-bert/bert-base-multilingual-cased
dtype: float64
- name: google/byt5-small
dtype: float64
- name: google/gemma-2-2b
dtype: float64
- name: gpt2
dtype: float64
- name: meta-llama/Llama-3.2-1B
dtype: float64
- name: microsoft/Phi-3-mini-4k-instruct
dtype: float64
- name: mistralai/tekken
dtype: float64
- name: tiktoken/gpt-4o
dtype: float64
- name: tokenmonster/englishcode-32000-consistent-v1
dtype: float64
- name: token_counts
struct:
- name: CohereLabs/aya-expanse-8b
dtype: int64
- name: Qwen/Qwen3-8B
dtype: int64
- name: bigscience/bloom
dtype: int64
- name: common-pile/comma-v0.1-1t
dtype: int64
- name: facebook/xglm-564M
dtype: int64
- name: google-bert/bert-base-multilingual-cased
dtype: int64
- name: google/byt5-small
dtype: int64
- name: google/gemma-2-2b
dtype: int64
- name: gpt2
dtype: int64
- name: meta-llama/Llama-3.2-1B
dtype: int64
- name: microsoft/Phi-3-mini-4k-instruct
dtype: int64
- name: mistralai/tekken
dtype: int64
- name: tiktoken/gpt-4o
dtype: int64
- name: tokenmonster/englishcode-32000-consistent-v1
dtype: int64
splits:
- name: test
num_bytes: 11147
num_examples: 21
download_size: 34445
dataset_size: 11147
- config_name: tokenizer_robustness_completion_math_decorative_unicode
features:
- name: question
dtype: string
- name: choices
list: string
- name: answer
dtype: int64
- name: answer_label
dtype: string
- name: split
dtype: string
- name: subcategories
dtype: string
- name: category
dtype: string
- name: lang
dtype: string
- name: second_lang
dtype: string
- name: notes
dtype: string
- name: id
dtype: string
- name: set_id
dtype: string
- name: variation_id
dtype: string
- name: vanilla_cos_sim_to_canonical
struct:
- name: CohereLabs/aya-expanse-8b
dtype: float64
- name: Qwen/Qwen3-8B
dtype: float64
- name: bigscience/bloom
dtype: float64
- name: common-pile/comma-v0.1-1t
dtype: float64
- name: facebook/xglm-564M
dtype: float64
- name: google-bert/bert-base-multilingual-cased
dtype: float64
- name: google/byt5-small
dtype: float64
- name: google/gemma-2-2b
dtype: float64
- name: gpt2
dtype: float64
- name: meta-llama/Llama-3.2-1B
dtype: float64
- name: microsoft/Phi-3-mini-4k-instruct
dtype: float64
- name: mistralai/tekken
dtype: float64
- name: tiktoken/gpt-4o
dtype: float64
- name: tokenmonster/englishcode-32000-consistent-v1
dtype: float64
- name: trimmed_cos_sim_to_canonical
struct:
- name: CohereLabs/aya-expanse-8b
dtype: float64
- name: Qwen/Qwen3-8B
dtype: float64
- name: bigscience/bloom
dtype: float64
- name: common-pile/comma-v0.1-1t
dtype: float64
- name: facebook/xglm-564M
dtype: float64
- name: google-bert/bert-base-multilingual-cased
dtype: float64
- name: google/byt5-small
dtype: float64
- name: google/gemma-2-2b
dtype: float64
- name: gpt2
dtype: float64
- name: meta-llama/Llama-3.2-1B
dtype: float64
- name: microsoft/Phi-3-mini-4k-instruct
dtype: float64
- name: mistralai/tekken
dtype: float64
- name: tiktoken/gpt-4o
dtype: float64
- name: tokenmonster/englishcode-32000-consistent-v1
dtype: float64
- name: token_counts
struct:
- name: CohereLabs/aya-expanse-8b
dtype: int64
- name: Qwen/Qwen3-8B
dtype: int64
- name: bigscience/bloom
dtype: int64
- name: common-pile/comma-v0.1-1t
dtype: int64
- name: facebook/xglm-564M
dtype: int64
- name: google-bert/bert-base-multilingual-cased
dtype: int64
- name: google/byt5-small
dtype: int64
- name: google/gemma-2-2b
dtype: int64
- name: gpt2
dtype: int64
- name: meta-llama/Llama-3.2-1B
dtype: int64
- name: microsoft/Phi-3-mini-4k-instruct
dtype: int64
- name: mistralai/tekken
dtype: int64
- name: tiktoken/gpt-4o
dtype: int64
- name: tokenmonster/englishcode-32000-consistent-v1
dtype: int64
splits:
- name: test
num_bytes: 11986
num_examples: 21
download_size: 34660
dataset_size: 11986
- config_name: tokenizer_robustness_completion_math_farsi
features:
- name: question
dtype: string
- name: choices
list: string
- name: answer
dtype: int64
- name: answer_label
dtype: string
- name: split
dtype: string
- name: subcategories
dtype: string
- name: category
dtype: string
- name: lang
dtype: string
- name: second_lang
dtype: string
- name: notes
dtype: string
- name: id
dtype: string
- name: set_id
dtype: string
- name: variation_id
dtype: string
- name: vanilla_cos_sim_to_canonical
struct:
- name: CohereLabs/aya-expanse-8b
dtype: float64
- name: Qwen/Qwen3-8B
dtype: float64
- name: bigscience/bloom
dtype: float64
- name: common-pile/comma-v0.1-1t
dtype: float64
- name: facebook/xglm-564M
dtype: float64
- name: google-bert/bert-base-multilingual-cased
dtype: float64
- name: google/byt5-small
dtype: float64
- name: google/gemma-2-2b
dtype: float64
- name: gpt2
dtype: float64
- name: meta-llama/Llama-3.2-1B
dtype: float64
- name: microsoft/Phi-3-mini-4k-instruct
dtype: float64
- name: mistralai/tekken
dtype: float64
- name: tiktoken/gpt-4o
dtype: float64
- name: tokenmonster/englishcode-32000-consistent-v1
dtype: float64
- name: trimmed_cos_sim_to_canonical
struct:
- name: CohereLabs/aya-expanse-8b
dtype: float64
- name: Qwen/Qwen3-8B
dtype: float64
- name: bigscience/bloom
dtype: float64
- name: common-pile/comma-v0.1-1t
dtype: float64
- name: facebook/xglm-564M
dtype: float64
- name: google-bert/bert-base-multilingual-cased
dtype: float64
- name: google/byt5-small
dtype: float64
- name: google/gemma-2-2b
dtype: float64
- name: gpt2
dtype: float64
- name: meta-llama/Llama-3.2-1B
dtype: float64
- name: microsoft/Phi-3-mini-4k-instruct
dtype: float64
- name: mistralai/tekken
dtype: float64
- name: tiktoken/gpt-4o
dtype: float64
- name: tokenmonster/englishcode-32000-consistent-v1
dtype: float64
- name: token_counts
struct:
- name: CohereLabs/aya-expanse-8b
dtype: int64
- name: Qwen/Qwen3-8B
dtype: int64
- name: bigscience/bloom
dtype: int64
- name: common-pile/comma-v0.1-1t
dtype: int64
- name: facebook/xglm-564M
dtype: int64
- name: google-bert/bert-base-multilingual-cased
dtype: int64
- name: google/byt5-small
dtype: int64
- name: google/gemma-2-2b
dtype: int64
- name: gpt2
dtype: int64
- name: meta-llama/Llama-3.2-1B
dtype: int64
- name: microsoft/Phi-3-mini-4k-instruct
dtype: int64
- name: mistralai/tekken
dtype: int64
- name: tiktoken/gpt-4o
dtype: int64
- name: tokenmonster/englishcode-32000-consistent-v1
dtype: int64
splits:
- name: test
num_bytes: 12034
num_examples: 21
download_size: 34859
dataset_size: 12034
- config_name: tokenizer_robustness_completion_math_italian
features:
- name: question
dtype: string
- name: choices
list: string
- name: answer
dtype: int64
- name: answer_label
dtype: string
- name: split
dtype: string
- name: subcategories
dtype: string
- name: category
dtype: string
- name: lang
dtype: string
- name: second_lang
dtype: string
- name: notes
dtype: string
- name: id
dtype: string
- name: set_id
dtype: string
- name: variation_id
dtype: string
- name: vanilla_cos_sim_to_canonical
struct:
- name: CohereLabs/aya-expanse-8b
dtype: float64
- name: Qwen/Qwen3-8B
dtype: float64
- name: bigscience/bloom
dtype: float64
- name: common-pile/comma-v0.1-1t
dtype: float64
- name: facebook/xglm-564M
dtype: float64
- name: google-bert/bert-base-multilingual-cased
dtype: float64
- name: google/byt5-small
dtype: float64
- name: google/gemma-2-2b
dtype: float64
- name: gpt2
dtype: float64
- name: meta-llama/Llama-3.2-1B
dtype: float64
- name: microsoft/Phi-3-mini-4k-instruct
dtype: float64
- name: mistralai/tekken
dtype: float64
- name: tiktoken/gpt-4o
dtype: float64
- name: tokenmonster/englishcode-32000-consistent-v1
dtype: float64
- name: trimmed_cos_sim_to_canonical
struct:
- name: CohereLabs/aya-expanse-8b
dtype: float64
- name: Qwen/Qwen3-8B
dtype: float64
- name: bigscience/bloom
dtype: float64
- name: common-pile/comma-v0.1-1t
dtype: float64
- name: facebook/xglm-564M
dtype: float64
- name: google-bert/bert-base-multilingual-cased
dtype: float64
- name: google/byt5-small
dtype: float64
- name: google/gemma-2-2b
dtype: float64
- name: gpt2
dtype: float64
- name: meta-llama/Llama-3.2-1B
dtype: float64
- name: microsoft/Phi-3-mini-4k-instruct
dtype: float64
- name: mistralai/tekken
dtype: float64
- name: tiktoken/gpt-4o
dtype: float64
- name: tokenmonster/englishcode-32000-consistent-v1
dtype: float64
- name: token_counts
struct:
- name: CohereLabs/aya-expanse-8b
dtype: int64
- name: Qwen/Qwen3-8B
dtype: int64
- name: bigscience/bloom
dtype: int64
- name: common-pile/comma-v0.1-1t
dtype: int64
- name: facebook/xglm-564M
dtype: int64
- name: google-bert/bert-base-multilingual-cased
dtype: int64
- name: google/byt5-small
dtype: int64
- name: google/gemma-2-2b
dtype: int64
- name: gpt2
dtype: int64
- name: meta-llama/Llama-3.2-1B
dtype: int64
- name: microsoft/Phi-3-mini-4k-instruct
dtype: int64
- name: mistralai/tekken
dtype: int64
- name: tiktoken/gpt-4o
dtype: int64
- name: tokenmonster/englishcode-32000-consistent-v1
dtype: int64
splits:
- name: test
num_bytes: 11219
num_examples: 21
download_size: 34631
dataset_size: 11219
- config_name: tokenizer_robustness_completion_math_latex
features:
- name: question
dtype: string
- name: choices
list: string
- name: answer
dtype: int64
- name: answer_label
dtype: string
- name: split
dtype: string
- name: subcategories
dtype: string
- name: category
dtype: string
- name: lang
dtype: string
- name: second_lang
dtype: string
- name: notes
dtype: string
- name: id
dtype: string
- name: set_id
dtype: string
- name: variation_id
dtype: string
- name: vanilla_cos_sim_to_canonical
struct:
- name: CohereLabs/aya-expanse-8b
dtype: float64
- name: Qwen/Qwen3-8B
dtype: float64
- name: bigscience/bloom
dtype: float64
- name: common-pile/comma-v0.1-1t
dtype: float64
- name: facebook/xglm-564M
dtype: float64
- name: google-bert/bert-base-multilingual-cased
dtype: float64
- name: google/byt5-small
dtype: float64
- name: google/gemma-2-2b
dtype: float64
- name: gpt2
dtype: float64
- name: meta-llama/Llama-3.2-1B
dtype: float64
- name: microsoft/Phi-3-mini-4k-instruct
dtype: float64
- name: mistralai/tekken
dtype: float64
- name: tiktoken/gpt-4o
dtype: float64
- name: tokenmonster/englishcode-32000-consistent-v1
dtype: float64
- name: trimmed_cos_sim_to_canonical
struct:
- name: CohereLabs/aya-expanse-8b
dtype: float64
- name: Qwen/Qwen3-8B
dtype: float64
- name: bigscience/bloom
dtype: float64
- name: common-pile/comma-v0.1-1t
dtype: float64
- name: facebook/xglm-564M
dtype: float64
- name: google-bert/bert-base-multilingual-cased
dtype: float64
- name: google/byt5-small
dtype: float64
- name: google/gemma-2-2b
dtype: float64
- name: gpt2
dtype: float64
- name: meta-llama/Llama-3.2-1B
dtype: float64
- name: microsoft/Phi-3-mini-4k-instruct
dtype: float64
- name: mistralai/tekken
dtype: float64
- name: tiktoken/gpt-4o
dtype: float64
- name: tokenmonster/englishcode-32000-consistent-v1
dtype: float64
- name: token_counts
struct:
- name: CohereLabs/aya-expanse-8b
dtype: int64
- name: Qwen/Qwen3-8B
dtype: int64
- name: bigscience/bloom
dtype: int64
- name: common-pile/comma-v0.1-1t
dtype: int64
- name: facebook/xglm-564M
dtype: int64
- name: google-bert/bert-base-multilingual-cased
dtype: int64
- name: google/byt5-small
dtype: int64
- name: google/gemma-2-2b
dtype: int64
- name: gpt2
dtype: int64
- name: meta-llama/Llama-3.2-1B
dtype: int64
- name: microsoft/Phi-3-mini-4k-instruct
dtype: int64
- name: mistralai/tekken
dtype: int64
- name: tiktoken/gpt-4o
dtype: int64
- name: tokenmonster/englishcode-32000-consistent-v1
dtype: int64
splits:
- name: test
num_bytes: 11494
num_examples: 21
download_size: 34230
dataset_size: 11494
- config_name: tokenizer_robustness_completion_math_space_removal
features:
- name: question
dtype: string
- name: choices
list: string
- name: answer
dtype: int64
- name: answer_label
dtype: string
- name: split
dtype: string
- name: subcategories
dtype: string
- name: category
dtype: string
- name: lang
dtype: string
- name: second_lang
dtype: string
- name: notes
dtype: string
- name: id
dtype: string
- name: set_id
dtype: string
- name: variation_id
dtype: string
- name: vanilla_cos_sim_to_canonical
struct:
- name: CohereLabs/aya-expanse-8b
dtype: float64
- name: Qwen/Qwen3-8B
dtype: float64
- name: bigscience/bloom
dtype: float64
- name: common-pile/comma-v0.1-1t
dtype: float64
- name: facebook/xglm-564M
dtype: float64
- name: google-bert/bert-base-multilingual-cased
dtype: float64
- name: google/byt5-small
dtype: float64
- name: google/gemma-2-2b
dtype: float64
- name: gpt2
dtype: float64
- name: meta-llama/Llama-3.2-1B
dtype: float64
- name: microsoft/Phi-3-mini-4k-instruct
dtype: float64
- name: mistralai/tekken
dtype: float64
- name: tiktoken/gpt-4o
dtype: float64
- name: tokenmonster/englishcode-32000-consistent-v1
dtype: float64
- name: trimmed_cos_sim_to_canonical
struct:
- name: CohereLabs/aya-expanse-8b
dtype: float64
- name: Qwen/Qwen3-8B
dtype: float64
- name: bigscience/bloom
dtype: float64
- name: common-pile/comma-v0.1-1t
dtype: float64
- name: facebook/xglm-564M
dtype: float64
- name: google-bert/bert-base-multilingual-cased
dtype: float64
- name: google/byt5-small
dtype: float64
- name: google/gemma-2-2b
dtype: float64
- name: gpt2
dtype: float64
- name: meta-llama/Llama-3.2-1B
dtype: float64
- name: microsoft/Phi-3-mini-4k-instruct
dtype: float64
- name: mistralai/tekken
dtype: float64
- name: tiktoken/gpt-4o
dtype: float64
- name: tokenmonster/englishcode-32000-consistent-v1
dtype: float64
- name: token_counts
struct:
- name: CohereLabs/aya-expanse-8b
dtype: int64
- name: Qwen/Qwen3-8B
dtype: int64
- name: bigscience/bloom
dtype: int64
- name: common-pile/comma-v0.1-1t
dtype: int64
- name: facebook/xglm-564M
dtype: int64
- name: google-bert/bert-base-multilingual-cased
dtype: int64
- name: google/byt5-small
dtype: int64
- name: google/gemma-2-2b
dtype: int64
- name: gpt2
dtype: int64
- name: meta-llama/Llama-3.2-1B
dtype: int64
- name: microsoft/Phi-3-mini-4k-instruct
dtype: int64
- name: mistralai/tekken
dtype: int64
- name: tiktoken/gpt-4o
dtype: int64
- name: tokenmonster/englishcode-32000-consistent-v1
dtype: int64
splits:
- name: test
num_bytes: 11559
num_examples: 21
download_size: 34064
dataset_size: 11559
- config_name: tokenizer_robustness_completion_math_spelled_out
features:
- name: question
dtype: string
- name: choices
list: string
- name: answer
dtype: int64
- name: answer_label
dtype: string
- name: split
dtype: string
- name: subcategories
dtype: string
- name: category
dtype: string
- name: lang
dtype: string
- name: second_lang
dtype: string
- name: notes
dtype: string
- name: id
dtype: string
- name: set_id
dtype: string
- name: variation_id
dtype: string
- name: vanilla_cos_sim_to_canonical
struct:
- name: CohereLabs/aya-expanse-8b
dtype: float64
- name: Qwen/Qwen3-8B
dtype: float64
- name: bigscience/bloom
dtype: float64
- name: common-pile/comma-v0.1-1t
dtype: float64
- name: facebook/xglm-564M
dtype: float64
- name: google-bert/bert-base-multilingual-cased
dtype: float64
- name: google/byt5-small
dtype: float64
- name: google/gemma-2-2b
dtype: float64
- name: gpt2
dtype: float64
- name: meta-llama/Llama-3.2-1B
dtype: float64
- name: microsoft/Phi-3-mini-4k-instruct
dtype: float64
- name: mistralai/tekken
dtype: float64
- name: tiktoken/gpt-4o
dtype: float64
- name: tokenmonster/englishcode-32000-consistent-v1
dtype: float64
- name: trimmed_cos_sim_to_canonical
struct:
- name: CohereLabs/aya-expanse-8b
dtype: float64
- name: Qwen/Qwen3-8B
dtype: float64
- name: bigscience/bloom
dtype: float64
- name: common-pile/comma-v0.1-1t
dtype: float64
- name: facebook/xglm-564M
dtype: float64
- name: google-bert/bert-base-multilingual-cased
dtype: float64
- name: google/byt5-small
dtype: float64
- name: google/gemma-2-2b
dtype: float64
- name: gpt2
dtype: float64
- name: meta-llama/Llama-3.2-1B
dtype: float64
- name: microsoft/Phi-3-mini-4k-instruct
dtype: float64
- name: mistralai/tekken
dtype: float64
- name: tiktoken/gpt-4o
dtype: float64
- name: tokenmonster/englishcode-32000-consistent-v1
dtype: float64
- name: token_counts
struct:
- name: CohereLabs/aya-expanse-8b
dtype: int64
- name: Qwen/Qwen3-8B
dtype: int64
- name: bigscience/bloom
dtype: int64
- name: common-pile/comma-v0.1-1t
dtype: int64
- name: facebook/xglm-564M
dtype: int64
- name: google-bert/bert-base-multilingual-cased
dtype: int64
- name: google/byt5-small
dtype: int64
- name: google/gemma-2-2b
dtype: int64
- name: gpt2
dtype: int64
- name: meta-llama/Llama-3.2-1B
dtype: int64
- name: microsoft/Phi-3-mini-4k-instruct
dtype: int64
- name: mistralai/tekken
dtype: int64
- name: tiktoken/gpt-4o
dtype: int64
- name: tokenmonster/englishcode-32000-consistent-v1
dtype: int64
splits:
- name: test
num_bytes: 12129
num_examples: 21
download_size: 34634
dataset_size: 12129
- config_name: tokenizer_robustness_completion_math_turkish
features:
- name: question
dtype: string
- name: choices
list: string
- name: answer
dtype: int64
- name: answer_label
dtype: string
- name: split
dtype: string
- name: subcategories
dtype: string
- name: category
dtype: string
- name: lang
dtype: string
- name: second_lang
dtype: string
- name: notes
dtype: string
- name: id
dtype: string
- name: set_id
dtype: string
- name: variation_id
dtype: string
- name: vanilla_cos_sim_to_canonical
struct:
- name: CohereLabs/aya-expanse-8b
dtype: float64
- name: Qwen/Qwen3-8B
dtype: float64
- name: bigscience/bloom
dtype: float64
- name: common-pile/comma-v0.1-1t
dtype: float64
- name: facebook/xglm-564M
dtype: float64
- name: google-bert/bert-base-multilingual-cased
dtype: float64
- name: google/byt5-small
dtype: float64
- name: google/gemma-2-2b
dtype: float64
- name: gpt2
dtype: float64
- name: meta-llama/Llama-3.2-1B
dtype: float64
- name: microsoft/Phi-3-mini-4k-instruct
dtype: float64
- name: mistralai/tekken
dtype: float64
- name: tiktoken/gpt-4o
dtype: float64
- name: tokenmonster/englishcode-32000-consistent-v1
dtype: float64
- name: trimmed_cos_sim_to_canonical
struct:
- name: CohereLabs/aya-expanse-8b
dtype: float64
- name: Qwen/Qwen3-8B
dtype: float64
- name: bigscience/bloom
dtype: float64
- name: common-pile/comma-v0.1-1t
dtype: float64
- name: facebook/xglm-564M
dtype: float64
- name: google-bert/bert-base-multilingual-cased
dtype: float64
- name: google/byt5-small
dtype: float64
- name: google/gemma-2-2b
dtype: float64
- name: gpt2
dtype: float64
- name: meta-llama/Llama-3.2-1B
dtype: float64
- name: microsoft/Phi-3-mini-4k-instruct
dtype: float64
- name: mistralai/tekken
dtype: float64
- name: tiktoken/gpt-4o
dtype: float64
- name: tokenmonster/englishcode-32000-consistent-v1
dtype: float64
- name: token_counts
struct:
- name: CohereLabs/aya-expanse-8b
dtype: int64
- name: Qwen/Qwen3-8B
dtype: int64
- name: bigscience/bloom
dtype: int64
- name: common-pile/comma-v0.1-1t
dtype: int64
- name: facebook/xglm-564M
dtype: int64
- name: google-bert/bert-base-multilingual-cased
dtype: int64
- name: google/byt5-small
dtype: int64
- name: google/gemma-2-2b
dtype: int64
- name: gpt2
dtype: int64
- name: meta-llama/Llama-3.2-1B
dtype: int64
- name: microsoft/Phi-3-mini-4k-instruct
dtype: int64
- name: mistralai/tekken
dtype: int64
- name: tiktoken/gpt-4o
dtype: int64
- name: tokenmonster/englishcode-32000-consistent-v1
dtype: int64
splits:
- name: test
num_bytes: 11339
num_examples: 21
download_size: 34650
dataset_size: 11339
configs:
- config_name: tokenizer_robustness_completion_math_canonical
data_files:
- split: test
path: tokenizer_robustness_completion_math_canonical/test-*
- config_name: tokenizer_robustness_completion_math_chinese
data_files:
- split: test
path: tokenizer_robustness_completion_math_chinese/test-*
- config_name: tokenizer_robustness_completion_math_decorative_unicode
data_files:
- split: test
path: tokenizer_robustness_completion_math_decorative_unicode/test-*
- config_name: tokenizer_robustness_completion_math_farsi
data_files:
- split: test
path: tokenizer_robustness_completion_math_farsi/test-*
- config_name: tokenizer_robustness_completion_math_italian
data_files:
- split: test
path: tokenizer_robustness_completion_math_italian/test-*
- config_name: tokenizer_robustness_completion_math_latex
data_files:
- split: test
path: tokenizer_robustness_completion_math_latex/test-*
- config_name: tokenizer_robustness_completion_math_space_removal
data_files:
- split: test
path: tokenizer_robustness_completion_math_space_removal/test-*
- config_name: tokenizer_robustness_completion_math_spelled_out
data_files:
- split: test
path: tokenizer_robustness_completion_math_spelled_out/test-*
- config_name: tokenizer_robustness_completion_math_turkish
data_files:
- split: test
path: tokenizer_robustness_completion_math_turkish/test-*
language:
- en
- fa
- zh
- it
- tr
size_categories:
- n<1K
---
# Dataset Card for Tokenization Robustness (Math)

*TokSuite Benchmark (Math Collection)*
## Dataset Description
This dataset is part of TokSuite, a comprehensive benchmark designed to measure how different tokenization strategies affect language model behavior under controlled conditions.
This specific subset focuses on mathematical text completion, containing multiple-choice math questions with a variety of surface-form perturbations that stress tokenizer handling of numbers, symbols, formatting, scripts, and mathematical notation.
- Curated by: R3 Research Team
- Domain: Mathematics
- License: MIT License
## Dataset Summary
TokSuite isolates the impact of tokenization by holding model architecture, training data, training budget, and initialization constant, varying only the tokenizer.
The Math benchmark evaluates performance on:
- A canonical mathematical formulation
- Multiple perturbed variants that preserve mathematical meaning while altering surface representation
These perturbations reflect realistic variation in how mathematical expressions are written, formatted, localized, and queried in practice.
**Key Features:**
- 21 canonical math questions with unambiguous answers
- Perturbations targeting notation, symbols, scripts, and formatting
- Parallel structure with TokSuite language benchmarks
- Designed for evaluation, not training
## Supported Tasks
- Multiple-Choice Math Question Answering
- Tokenizer Robustness Evaluation
- Symbolic and Numerical Text Processing
## Dataset Structure

### Data Fields
| Field | Type | Description |
|---|---|---|
| `question` | string | Mathematical question text |
| `choices` | list[string] | Multiple-choice answer options |
| `answer` | int64 | Index of the correct answer |
| `answer_label` | string | Letter label of the correct answer |
| `split` | string | Dataset split identifier (all entries are `test`) |
| `subcategories` | string | Perturbation category |
| `category` | string | Question category |
| `lang` | string | Domain identifier (`math`) |
| `second_lang` | string | Secondary language of the variant, if any |
| `notes` | string | Additional context about the perturbation |
| `id` | string | Unique question identifier |
| `set_id` | string | Question set grouping identifier |
| `variation_id` | string | Variation number within a question set |
| `vanilla_cos_sim_to_canonical` | dict[string, float] | Cosine similarity to canonical form using raw token sequences |
| `trimmed_cos_sim_to_canonical` | dict[string, float] | Cosine similarity after token normalization |
| `token_counts` | dict[string, int] | Number of tokens produced per tokenizer |
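The field layout above can be illustrated with a minimal sketch. The record literal below is hypothetical (not an actual dataset row); real rows would be loaded with the Hugging Face `datasets` library, e.g. `load_dataset(<repo-id>, "tokenizer_robustness_completion_math_canonical")`:

```python
# Hypothetical record illustrating the field layout described above.
record = {
    "question": "What is 2 + 3?",
    "choices": ["4", "5", "6", "7"],
    "answer": 1,               # index into `choices`
    "answer_label": "B",       # letter form of the same answer
    "token_counts": {"gpt2": 9, "google/byt5-small": 14},
}

# The correct answer text is recovered by indexing `choices` with `answer`.
correct = record["choices"][record["answer"]]
print(correct)  # -> "5"

# `answer_label` encodes the same index as a letter (A=0, B=1, ...).
assert ord(record["answer_label"]) - ord("A") == record["answer"]
```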
## Dataset Creation

### Curation Rationale
This dataset was created to:
- Systematically evaluate tokenizer robustness on mathematical notation and structure
- Measure sensitivity to changes in formatting, symbols, scripts, and numeric representation
- Isolate tokenization effects from mathematical reasoning difficulty
- Provide standardized benchmarks for math-focused language models
Canonical questions are intentionally simple and high-accuracy, allowing researchers to attribute performance degradation to tokenization rather than reasoning complexity.
### Source Data
- Canonical math questions were manually authored
- Each question was perturbed while preserving mathematical equivalence
- Canonical accuracy was validated across TokSuite models
### Perturbation Categories (Math)

#### Canonical
The baseline mathematical text, written in a standard, well-formatted form with no perturbations. This serves as the reference condition for evaluating all other perturbations.

#### Chinese
Rewrites mathematical text using Chinese characters for numbers, operators, or surrounding descriptions, testing tokenizer robustness to non-Latin scripts in math contexts.

#### Decorative Unicode
Replaces standard mathematical symbols with visually similar decorative or stylized Unicode characters (e.g., fancy numerals or operators), stressing Unicode normalization and symbol handling.

#### Farsi
Introduces Persian (Farsi) numerals or script elements into mathematical expressions, testing tokenizer robustness to right-to-left scripts and cross-script numeric representations.

#### Italian
Rewrites textual components of math problems in Italian while preserving the same mathematical structure and solution.

#### LaTeX
Encodes mathematical expressions using LaTeX-style syntax (e.g., `\frac`, `^`, `_`), stressing tokenizer handling of markup-heavy mathematical notation.

#### Space Removal
Removes or alters spacing within mathematical expressions and surrounding text, stressing tokenizer assumptions about whitespace in math contexts.

#### Spelled-Out Forms
Replaces numerals or symbols with fully spelled-out textual equivalents (e.g., numbers written as words), increasing sequence length and altering token boundaries.

#### Turkish
Rewrites textual components of math problems in Turkish while preserving the underlying mathematical meaning.
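For intuition, the two purely mechanical perturbations above (spelled-out forms and space removal) can be sketched as simple string transforms. This is an illustration only, not the dataset's generation code; the actual variants were authored and validated by hand:

```python
# Illustrative transforms for two perturbation categories; sketches only.
DIGIT_WORDS = {"0": "zero", "1": "one", "2": "two", "3": "three", "4": "four",
               "5": "five", "6": "six", "7": "seven", "8": "eight", "9": "nine"}

def spell_out(text: str) -> str:
    """Replace each digit with its spelled-out English word."""
    return "".join(DIGIT_WORDS.get(ch, ch) for ch in text)

def remove_spaces(text: str) -> str:
    """Strip all spaces, stressing whitespace assumptions in tokenizers."""
    return text.replace(" ", "")

canonical = "What is 2 + 3?"
print(spell_out(canonical))      # -> "What is two + three?"
print(remove_spaces(canonical))  # -> "Whatis2+3?"
```

Both transforms preserve the mathematical meaning while shifting token boundaries, which is exactly the surface variation the benchmark measures.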
## Considerations for Using the Data
- Language variety: The dataset uses standard mathematical notation and English-language math phrasing, and may not represent informal or pedagogical math language.
- Script focus: Mathematical expressions are primarily written using ASCII and standard Unicode; LaTeX, decorative Unicode, and non-Latin scripts are included as perturbations.
- Domain coverage: Questions focus on general mathematics and may not represent highly specialized or advanced mathematical domains.
- Question simplicity: Designed for high baseline accuracy, which may not reflect real-world mathematical task complexity.
## Additional Information

### Dataset Curators
The dataset was curated by the TokSuite research team at R3.
### Licensing Information
MIT License
### Citation Information
If you use this dataset in your research, please cite the TokSuite paper:
```bibtex
@inproceedings{toksuite2026,
  title={TokSuite: Measuring the Impact of Tokenizer Choice on Language Model Behavior},
  author={Altıntaş, Gül Sena and Ehghaghi, Malikeh and Lester, Brian and Liu, Fengyuan and Zhao, Wanru and Ciccone, Marco and Raffel, Colin},
  year={2026}
}
```
**Paper:** TokSuite: Measuring the Impact of Tokenizer Choice on Language Model Behavior
### Contributions
This dataset is part of TokSuite, which includes:
- 14 language models with identical architectures but different tokenizers
- Multilingual benchmark datasets (English, Turkish, Italian, Farsi, Chinese)
- Comprehensive analysis of tokenization's impact on model behavior
### Contact
For questions or issues related to this dataset, please refer to the TokSuite project or contact the authors of the paper.
*Part of the TokSuite Project*

*Understanding Tokenization's Role in Language Model Behavior*