Dataset Preview

| question | choices | answer | answer_label | split | subcategories | category | lang | second_lang | notes | id | set_id | variation_id |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 天空的颜色是 | ["蓝色", "红色", "绿色", "黄色"] | 0 | A | test | Canonical | | zho_Hans | | en: The color of the sky is | 301-4.0 | 301.0 | 4.0 |
| 这栋房子的价格是150,0000元。这栋房子的价格是 | ["15,0000元", "15,0000,0000元", "1500,0000元", "150,0000元"] | 3 | D | test | Canonical | | zho_Hans | | en: The price of this house is 2,850,000,000 Toman. The cost of this house is (only question) | 302-4.0 | 302.0 | 4.0 |
| 今天的日期是2014年8月15日。今天的日期是 | ["2014/8/16", "2014/8/15", "2015/8/15", "2014/7/15"] | 1 | B | test | Canonical | | zho_Hans | | en: Today's date is 15/8/1403 (in Persian Calendar). Today's date is -> (only question) | 303-4.0 | 303.0 | 4.0 |
| 地球上大洲的数量是 | ["5", "6", "8", "7"] | 3 | D | test | Canonical | | zho_Hans | | en: The number of continents on Earth is | 304-4.0 | 304.0 | 4.0 |
| 伊朗的首都是 | ["马什哈德", "德黑兰", "巴格达", "巴黎"] | 1 | B | test | Canonical | | zho_Hans | | en: The capital city of Iran is | 305-4.0 | 305.0 | 4.0 |
| 一周的天数是 | ["5", "7", "6", "8"] | 1 | B | test | Canonical | | zho_Hans | | en: The number of days in a week is | 306-4.0 | 306.0 | 4.0 |
| 一天的小时数是 | ["20", "25", "24", "30"] | 2 | C | test | Canonical | | zho_Hans | | en: The number of hours in a day is | 307-4.0 | 307.0 | 4.0 |
| 牛的腿的条数是 | ["4", "8", "3", "5"] | 0 | A | test | Canonical | | zho_Hans | | en: The number of legs a cow has is | 308-4.0 | 308.0 | 4.0 |
| 2小时的分钟数是 | ["100", "140", "90", "120"] | 3 | D | test | Canonical | | zho_Hans | | en: The number of minutes in 2 hours is | 309-4.0 | 309.0 | 4.0 |
| 一年的月数是 | ["10", "11", "12", "13"] | 2 | C | test | Canonical | | zho_Hans | | en: The number of months in a year is | 310-4.0 | 310.0 | 4.0 |
| 一分钟的秒数是 | ["60", "50", "100", "30"] | 0 | A | test | Canonical | | zho_Hans | | en: The number of seconds in a minute is | 311-4.0 | 311.0 | 4.0 |
| 六边形的边数是 | ["6", "5", "7", "8"] | 0 | A | test | Canonical | | zho_Hans | | en: The number of sides a hexagon has is | 312-4.0 | 312.0 | 4.0 |
| 三角形的边数是 | ["2", "4", "5", "3"] | 3 | D | test | Canonical | | zho_Hans | | en: The number of sides a triangle has is | 313-4.0 | 313.0 | 4.0 |
| 在"我在苹果公司工作"中,苹果是一个 | ["人", "公司", "城市", "物品"] | 1 | B | test | Canonical | | zho_Hans | | en: In "I work at Apple", Apple is a | 314-4.0 | 314.0 | 4.0 |
| 在"我在谷歌工作"中,谷歌是一个 | ["人", "城市", "公司", "物品"] | 2 | C | test | Canonical | | zho_Hans | | en: In "I work at Google", Google is a | 315-4.0 | 315.0 | 4.0 |
| 在"微软发布了新更新"中,微软是一个 | ["公司", "人", "地点", "日期"] | 0 | A | test | Canonical | | zho_Hans | | en: In "Microsoft released a new update", Microsoft is a | 316-4.0 | 316.0 | 4.0 |
| 在"猫坐在垫子上"中,主语是 | ["猫", "坐", "垫子", "在"] | 0 | A | test | Canonical | | zho_Hans | | en: In "The cat sat on the mat", the subject is | 317-4.0 | 317.0 | 4.0 |
| 人类生存需要呼吸的气体是 | ["甲烷", "氦气", "氢气", "氧气"] | 3 | D | test | Canonical | | zho_Hans | | en: The gas humans need to breathe to live is | 322-4.0 | 322.0 | 4.0 |
| 100的10%是 | ["5", "15", "10", "20"] | 2 | C | test | Canonical | | zho_Hans | | en: 10% of 100 is | 323-4.0 | 323.0 | 4.0 |
| 80的25%是 | ["15", "20", "25", "30"] | 1 | B | test | Canonical | | zho_Hans | | en: 25% of 80 is | 324-4.0 | 324.0 | 4.0 |
| 英国的首都是 | ["罗马", "伦敦", "巴黎", "慕尼黑"] | 1 | B | test | Canonical | | zho_Hans | | en: The capital of England is | 326-4.0 | 326.0 | 4.0 |
| 法国的首都是 | ["伦敦", "巴黎", "柏林", "罗马"] | 1 | B | test | Canonical | | zho_Hans | | en: The capital of France is | 327-4.0 | 327.0 | 4.0 |
| 日本的首都是 | ["京都", "大阪", "广岛", "东京"] | 3 | D | test | Canonical | | zho_Hans | | en: The capital of Japan is | 328-4.0 | 328.0 | 4.0 |
| 土耳其的首都是 | ["德黑兰", "安卡拉", "伊兹密尔", "布尔萨"] | 1 | B | test | Canonical | | zho_Hans | | en: The capital of Turkey is | 329-4.0 | 329.0 | 4.0 |
| 水的化学分子式是 | ["CO2", "H2O", "NaCl", "O2"] | 1 | B | test | Canonical | | zho_Hans | | en: The chemical formula for water is | 330-4.0 | 330.0 | 4.0 |
| “商店什么时候关门?”的意图是 | ["购买", "预约", "投诉", "获取信息"] | 3 | D | test | Canonical | | zho_Hans | | en: The intent in "What time does the store close?" is | 331-4.0 | 331.0 | 4.0 |
| 世界上最大的哺乳动物是 | ["狮子", "老虎", "熊", "蓝鲸"] | 3 | D | test | Canonical | | zho_Hans | | en: The largest mammal in the world is | 332-4.0 | 332.0 | 4.0 |
| 国际单位制中温度的计量单位是 | ["摄氏度", "米", "开尔文", "兰氏度"] | 2 | C | test | Canonical | | zho_Hans | | en: The unit of measurement for temperature in the International System is | 333-4.0 | 333.0 | 4.0 |
| NASA所属的国家是 | ["俄罗斯", "中国", "美国", "日本"] | 2 | C | test | Canonical | | zho_Hans | | en: The country whose space agency is NASA is | 334-4.0 | 334.0 | 4.0 |
| 巴西使用的语言是 | ["西班牙语", "法语", "意大利语", "葡萄牙语"] | 3 | D | test | Canonical | | zho_Hans | | en: The language spoken in Brazil is | 335-4.0 | 335.0 | 4.0 |
| 化学符号为'Fe'的金属是 | ["铅", "铁", "锌", "金"] | 1 | B | test | Canonical | | zho_Hans | | en: The metal with chemical symbol 'Fe' is | 336-4.0 | 336.0 | 4.0 |
| 人体中负责泵血的器官是 | ["肝脏", "肺", "肾脏", "心脏"] | 3 | D | test | Canonical | | zho_Hans | | en: The organ in the human body that pumps blood is | 337-4.0 | 337.0 | 4.0 |
| 太阳系中离太阳最近的行星是 | ["金星", "火星", "水星", "地球"] | 2 | C | test | Canonical | | zho_Hans | | en: The planet closest to the Sun in our solar system is | 338-4.0 | 338.0 | 4.0 |
| 太阳系中最大的行星是 | ["地球", "土星", "火星", "木星"] | 3 | D | test | Canonical | | zho_Hans | | en: The largest planet in the Solar System is | 339-4.0 | 339.0 | 4.0 |
| 植物利用阳光自己制造食物的过程是 | ["呼吸作用", "消化作用", "光合作用", "发酵作用"] | 2 | C | test | Canonical | | zho_Hans | | en: The process that allows plants to produce their own food using sunlight is | 340-4.0 | 340.0 | 4.0 |
| 创作戏剧《罗密欧与朱丽叶》的作者是 | ["威廉·莎士比亚", "查尔斯·狄更斯", "马克·吐温", "简·奥斯汀"] | 0 | A | test | Canonical | | zho_Hans | | en: The author who wrote the play "Romeo and Juliet" is | 341-4.0 | 341.0 | 4.0 |
| 蜜蜂生产的是 | ["牛奶", "蜂蜜", "丝绸", "蜡"] | 1 | B | test | Canonical | | zho_Hans | | en: What bees produce is | 342-4.0 | 342.0 | 4.0 |
| 植物制造食物需要从空气中获取的是 | ["氮气", "氢气", "二氧化碳", "氦气"] | 2 | C | test | Canonical | | zho_Hans | | en: What plants need from the air to make food is | 343-4.0 | 343.0 | 4.0 |
| 在"请帮我预订去巴黎的航班"中,这个人想要 | ["逛街", "预订", "投诉", "取消"] | 1 | B | test | Canonical | | zho_Hans | | en: In "Can you please book a flight to Paris?", the person wants | 344-4.0 | 344.0 | 4.0 |
| 王小明是医生。王小明是 | ["教师", "法官", "医生", "律师"] | 2 | C | test | Canonical | | zho_Hans | | en: Dr Ahmadi is a Doctor. Dr Ahmadi is a | 300-4.0 | 300.0 | 4.0 |
Dataset Card for Tokenization Robustness
TokSuite Benchmark (Chinese Collection)
Dataset Description
This dataset is part of TokSuite, a comprehensive benchmark designed to measure how different tokenization strategies affect language model performance and robustness. This specific subset contains Chinese language multiple-choice text completion questions with various real-world perturbations that test tokenizer robustness.
- Curated by: R3 Research Team
- Language(s): Chinese (zh)
- License: MIT License
Dataset Summary
TokSuite addresses a fundamental challenge in language model research: understanding how tokenization choices impact model behavior in isolation. The Chinese subset specifically measures model performance on canonical questions and various perturbations.
Key Features:
- 40 canonical questions covering general knowledge, geography, science, and language understanding
- Multiple perturbation types reflecting real-world text variations in Chinese
- Parallel structure with TokSuite benchmark (available in English, Turkish, Farsi, Italian)
- Native speaker curation ensuring linguistic authenticity
Supported Tasks
- Multiple-Choice Question Answering: Text completion format with 4 answer choices
- Tokenizer Robustness Evaluation: Measuring performance degradation under various text perturbations
- Multilingual NLP Benchmarking: Evaluating language models on Chinese text understanding
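The multiple-choice completion task can be sketched with a minimal evaluation loop. This is an illustrative sketch, not the official TokSuite harness: `predict_fn` is a stand-in for any model that maps a list of candidate completions to the index of the most likely one, and the field names follow the dataset schema described below.

```python
# Minimal sketch of the multiple-choice text-completion evaluation loop.
# predict_fn is a placeholder for a real language model scorer.

def evaluate(items, predict_fn):
    """Return accuracy of predict_fn over multiple-choice completion items."""
    correct = 0
    for item in items:
        # Text-completion format: the question is a prefix each choice completes.
        prompts = [f"{item['question']}{choice}" for choice in item["choices"]]
        pred = predict_fn(prompts)  # index of the most likely completion
        correct += int(pred == item["answer"])
    return correct / len(items)

# Toy example with one item and a trivial "model" that always picks index 1.
items = [{"question": "法国的首都是",
          "choices": ["伦敦", "巴黎", "柏林", "罗马"],
          "answer": 1}]
print(evaluate(items, lambda prompts: 1))  # -> 1.0
```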
Languages
The dataset contains text in Chinese (language code: zho_Hans / zh).
Dataset Structure
Data Fields
| Field | Type | Description |
|---|---|---|
| question | string | The question text in Chinese |
| choices | list[string] | 4 multiple-choice answer options |
| answer | int64 | Index of the correct answer |
| answer_label | string | Letter label of the correct answer |
| split | string | Dataset split identifier |
| subcategories | string | Perturbation category |
| lang | string | Language code |
| second_lang | string | English translation or description of the question |
| notes | string | Additional context about the question or perturbation |
| id | string | Unique question identifier |
| set_id | float64 | Question set grouping identifier |
| variation_id | float64 | Variation number within a question set |
| vanilla_cos_sim_to_canonical | dict[string, float] | Cosine similarity scores to canonical form (raw tokens) |
| trimmed_cos_sim_to_canonical | dict[string, float] | Cosine similarity scores after token normalization |
| token_counts | dict[string, integer] | Number of tokens produced per tokenizer |
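A record in this schema can be sanity-checked in a few lines. The helper below is illustrative (it is not part of the dataset tooling) and verifies the invariant that answer_label is the letter corresponding to the answer index (A=0, B=1, C=2, D=3).

```python
# Illustrative sketch: validate one record against the schema above and check
# that answer_label agrees with the answer index (A=0, B=1, C=2, D=3).

def check_record(rec):
    assert isinstance(rec["question"], str)
    assert isinstance(rec["choices"], list) and len(rec["choices"]) == 4
    assert rec["answer"] in range(4)
    assert rec["answer_label"] == chr(ord("A") + rec["answer"])
    return True

rec = {
    "question": "天空的颜色是",
    "choices": ["蓝色", "红色", "绿色", "黄色"],
    "answer": 0,
    "answer_label": "A",
}
print(check_record(rec))  # -> True
```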
Dataset Creation
Curation Rationale
This dataset was created to:
- Systematically evaluate how different tokenization strategies handle Chinese
- Measure robustness against real-world text perturbations specific to Chinese
- Support research into the impact of tokenization on language model behavior
- Provide standardized benchmarks for Chinese language models
The questions were designed to be straightforward with high baseline accuracy, allowing researchers to cleanly measure performance degradation when perturbations are applied.
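The degradation measurement this design enables reduces to a difference in accuracy between the canonical and perturbed conditions. A minimal sketch, assuming per-subcategory accuracies have already been computed:

```python
# Sketch: per-perturbation degradation relative to the Canonical condition.
# Input: {subcategory: accuracy}; output: {subcategory: accuracy drop}.

def degradation(acc_by_subcategory):
    base = acc_by_subcategory["Canonical"]
    return {
        sub: round(base - acc, 4)
        for sub, acc in acc_by_subcategory.items()
        if sub != "Canonical"
    }

# Hypothetical accuracies for illustration only.
accs = {"Canonical": 0.95, "Romanization": 0.70, "OCR Errors": 0.85}
print(degradation(accs))  # -> {'Romanization': 0.25, 'OCR Errors': 0.1}
```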
Source Data
Data Collection and Processing
- Canonical Questions: 40 baseline questions created in English
- Translation: Native Chinese speakers translated questions
- Perturbations: Each question underwent targeted perturbations designed to reflect Chinese characteristics
- Validation: Model-in-the-loop process ensured high baseline accuracy
Perturbation Categories
- Canonical: The baseline Chinese text written in standard, well-formed Simplified Chinese with no perturbations. This serves as the reference condition for evaluating the impact of all other perturbations.
- Code / Language / Script Switching: Mixes Chinese with English words, phrases, or symbols within the same sentence, reflecting real-world bilingual usage and code-switching commonly seen in technical or online contexts.
- Colloquial: Rewrites sentences using informal or conversational Chinese expressions, including spoken-style phrasing that differs from standard written Chinese while preserving meaning.
- Equivalent Expressions: Replaces canonical phrases with alternative Chinese expressions that convey the same meaning using different words or constructions, isolating tokenizer sensitivity to paraphrasing.
- Keyboard Proximity Errors: Introduces character-level errors caused by adjacent key presses in pinyin-based input methods, simulating realistic typing mistakes during Chinese text entry.
- OCR Errors: Introduces character substitutions, deletions, or confusions commonly produced by optical character recognition systems, especially for visually similar Chinese characters.
- Optional Diacritics: Adds or removes optional diacritic markers (e.g., tone marks in pinyin annotations when present), testing tokenizer robustness to auxiliary pronunciation cues.
- Partially Romanized: Mixes Chinese characters with romanized (pinyin or Latin-script) representations for some words or phrases, reflecting hybrid writing styles used in informal digital text.
- Romanization: Fully converts Chinese text into romanized form (e.g., pinyin), replacing characters with Latin-script equivalents while preserving pronunciation and meaning.
- Space Removal: Removes spaces that may appear between Chinese characters or between Chinese and Latin text, stressing tokenizer assumptions about whitespace usage.
- Spelled-Out Forms: Replaces numerals, symbols, or compact expressions with fully spelled-out Chinese equivalents, increasing sequence length and altering token boundaries.
- Traditional: Converts Simplified Chinese characters into their Traditional Chinese counterparts, preserving semantics while changing Unicode character forms.
- Word Spacing, Zero-Width Characters, Extra Space: Manipulates spacing by inserting extra spaces, removing expected spaces, or adding invisible zero-width characters, stressing tokenizer handling of segmentation and Unicode normalization.
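Two of the simpler perturbations above, zero-width character insertion and space removal, can be sketched directly. These are illustrative implementations, not the scripts used to build the dataset:

```python
# Illustrative perturbations: insert zero-width spaces between characters,
# and strip spaces between Chinese and Latin text.

ZWSP = "\u200b"  # zero-width space: invisible to readers, visible to tokenizers

def insert_zero_width(text):
    """Insert a zero-width space between every pair of adjacent characters."""
    return ZWSP.join(text)

def remove_spaces(text):
    """Remove all spaces, e.g. between Chinese and Latin segments."""
    return text.replace(" ", "")

canonical = "我在 Google 工作"
print(remove_spaces(canonical))  # -> 我在Google工作

# The perturbed string looks identical when rendered but has more codepoints.
perturbed = insert_zero_width("天空的颜色是")
print(len("天空的颜色是"), len(perturbed))  # -> 6 11
```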
Who are the source data producers?
Native Chinese speakers curated and validated all questions and perturbations. The TokSuite research team at R3 designed the overall benchmark framework.
Annotations
Annotation process
Questions were manually created and translated by native speakers. Each perturbation was carefully designed to reflect authentic variations encountered in real-world Chinese text processing.
Who are the annotators?
Native Chinese speakers with expertise in linguistics and NLP, working as part of the TokSuite project.
Personal and Sensitive Information
The dataset contains only general knowledge questions and does not include any personal or sensitive information.
Considerations for Using the Data
Social Impact of Dataset
This dataset contributes to improving language technology for Chinese speakers by enabling better understanding of tokenization challenges and supporting more robust multilingual models.
Discussion of Biases
- Language variety: The dataset uses Standard Chinese (Mandarin) and may not fully represent regional or dialectal variations.
- Script focus: Simplified Chinese is used as the primary script; Traditional Chinese and romanized forms (pinyin) are included as perturbations.
- Domain coverage: Questions focus on general knowledge and may not represent domain-specific Chinese language use.
- Question simplicity: Designed for high baseline accuracy, which may not reflect real-world task complexity.
Other Known Limitations
- Relatively small dataset size (evaluation-only)
- Multiple-choice format
- Language-specific perturbations
- Results may differ at larger model scales
Additional Information
Dataset Curators
The dataset was curated by the TokSuite research team at R3.
Licensing Information
MIT license
Citation Information
If you use this dataset in your research, please cite the TokSuite paper:
@inproceedings{toksuite2026,
title={TokSuite: Measuring the Impact of Tokenizer Choice on Language Model Behavior},
author={Altıntaş, Gül Sena and Ehghaghi, Malikeh and Lester, Brian and Liu, Fengyuan and Zhao, Wanru and Ciccone, Marco and Raffel, Colin},
booktitle={Preprint.},
year={2026},
url={TBD}
}
Paper: TokSuite: Measuring the Impact of Tokenizer Choice on Language Model Behavior
Contributions
This dataset is part of TokSuite, which includes:
- 14 language models with identical architectures but different tokenizers
- Multilingual benchmark datasets (English, Turkish, Italian, Farsi, Chinese)
- Comprehensive analysis of tokenization's impact on model behavior
Contact
For questions or issues related to this dataset, please refer to the TokSuite project or contact the authors of the paper.
Part of the TokSuite Project
Understanding Tokenization's Role in Language Model Behavior