---
language: en
size_categories: 10K<n<100K
task_categories:
- text-classification
task_ids:
- multi-class-classification
- topic-classification
tags:
- long context
dataset_info:
- config_name: abstract
  features:
  - name: text
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': Human Necessities
          '1': Performing Operations; Transporting
          '2': Chemistry; Metallurgy
          '3': Textiles; Paper
          '4': Fixed Constructions
          '5': Mechanical Engineering; Lightning; Heating; Weapons; Blasting
          '6': Physics
          '7': Electricity
          '8': General tagging of new or cross-sectional technology
  splits:
  - name: train
    num_bytes: 17225101
    num_examples: 25000
  - name: validation
    num_bytes: 3472854
    num_examples: 5000
  - name: test
    num_bytes: 3456733
    num_examples: 5000
  download_size: 12067953
  dataset_size: 24154688
- config_name: patent
  features:
  - name: text
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': Human Necessities
          '1': Performing Operations; Transporting
          '2': Chemistry; Metallurgy
          '3': Textiles; Paper
          '4': Fixed Constructions
          '5': Mechanical Engineering; Lightning; Heating; Weapons; Blasting
          '6': Physics
          '7': Electricity
          '8': General tagging of new or cross-sectional technology
  splits:
  - name: train
    num_bytes: 466788625
    num_examples: 25000
  - name: validation
    num_bytes: 95315107
    num_examples: 5000
  - name: test
    num_bytes: 93844869
    num_examples: 5000
  download_size: 272966251
  dataset_size: 655948601
configs:
- config_name: abstract
  data_files:
  - split: train
    path: abstract/train-*
  - split: validation
    path: abstract/validation-*
  - split: test
    path: abstract/test-*
- config_name: patent
  data_files:
  - split: train
    path: patent/train-*
  - split: validation
    path: patent/validation-*
  - split: test
    path: patent/test-*
  default: true
---
**Patent Classification: a classification of patents and abstracts (9 classes).**

This dataset is intended for long-context classification: the non-abstract documents are longer than 512 tokens.
Data are sampled from "BIGPATENT: A Large-Scale Dataset for Abstractive and Coherent Summarization" by Eva Sharma, Chen Li and Lu Wang.

* See: https://aclanthology.org/P19-1212.pdf
* See: https://evasharma.github.io/bigpatent/

It contains 9 unbalanced classes and 35k patents and abstracts, divided into 3 splits: train (25k), validation (5k) and test (5k).

**Note that documents are uncased and space-separated (by the authors).**
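Both configurations share the same label set and splits. A minimal loading sketch (assuming the `datasets` library is installed):

```
from datasets import load_dataset

# "patent" loads the full documents; use "abstract" for the shorter abstract-only texts
dataset = load_dataset("ccdv/patent-classification", "patent")

print(dataset)                                   # train / validation / test splits
print(dataset["train"].features["label"].names)  # the 9 class names listed above
print(dataset["train"][0]["text"][:200])         # uncased, space-separated text
```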
Compatible with the [run_glue.py](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) script:
| ``` | |
| export MODEL_NAME=roberta-base | |
| export MAX_SEQ_LENGTH=512 | |
| python run_glue.py \ | |
| --model_name_or_path $MODEL_NAME \ | |
| --dataset_name ccdv/patent-classification \ | |
| --do_train \ | |
| --do_eval \ | |
| --max_seq_length $MAX_SEQ_LENGTH \ | |
| --per_device_train_batch_size 8 \ | |
| --gradient_accumulation_steps 4 \ | |
| --learning_rate 2e-5 \ | |
| --num_train_epochs 1 \ | |
| --max_eval_samples 500 \ | |
| --output_dir tmp/patent | |
| ``` |