---
language:
  - en
license: apache-2.0
size_categories:
  - 10K<n<100K
task_categories:
  - audio-classification
pretty_name: 'SINE: Speech INfilling Edit Dataset'
tags:
  - audio
  - speech
  - deepfake-detection
configs:
  - config_name: preview
    data_files:
      - split: train
        path: preview/train-*
dataset_info:
  config_name: preview
  features:
    - name: audio
      dtype:
        audio:
          sampling_rate: 16000
    - name: filename
      dtype: string
    - name: category
      dtype: string
    - name: timestamp
      dtype: string
    - name: label
      dtype: int64
    - name: manipulation_type
      dtype: string
  splits:
    - name: train
      num_bytes: 10309938
      num_examples: 30
  download_size: 10039423
  dataset_size: 10309938
---

SINE Dataset

Overview

The Speech INfilling Edit (SINE) dataset is a comprehensive collection for speech deepfake detection and audio authenticity verification. This dataset contains ~87GB of audio data distributed across 32 splits, featuring both authentic and synthetically manipulated speech samples.
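
For a quick look without downloading the full archives, the 30-sample preview configuration declared in the metadata above can be loaded with the Hugging Face datasets library. A minimal sketch; the hub repository id sungfengh/SINE is inferred from context and may need adjusting:

```python
from datasets import load_dataset

# Load the 30-sample "preview" configuration (repo id assumed).
preview = load_dataset("sungfengh/SINE", "preview", split="train")

sample = preview[0]
print(sample["category"], sample["manipulation_type"], sample["label"])
print(sample["audio"]["sampling_rate"], len(sample["audio"]["array"]))
```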

Dataset Statistics

  • Total Size: ~87GB
  • Number of Splits: 32 (split-0.tar.gz to split-31.tar.gz)
  • Audio Format: WAV files
  • Source: Speech from the LibriLight dataset, edited using transcripts obtained from LibriHeavy
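
The full data is distributed as 32 tar archives (split-0.tar.gz through split-31.tar.gz). A hedged sketch for fetching and unpacking one of them with huggingface_hub; the repository id and the assumption that the archives sit at the repository root are both unverified here:

```python
import tarfile
from huggingface_hub import hf_hub_download

# Download one archive from the dataset repository (repo id assumed).
archive = hf_hub_download(
    repo_id="sungfengh/SINE",
    filename="split-0.tar.gz",   # split-0.tar.gz ... split-31.tar.gz
    repo_type="dataset",
)

# Unpack locally; this should produce split-0/ with the layout shown under
# "Data Structure" below.
with tarfile.open(archive, "r:gz") as tar:
    tar.extractall("data")
```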

Audio Statistics

| Audio Type | Subset | # Samples | # Speakers | Duration (h) | Min Length (s) | Max Length (s) |
|------------|--------|-----------|------------|--------------|----------------|----------------|
| Real/Resyn | train  | 26,547    | 70         | 51.82        | 6.00           | 8.00           |
| Real/Resyn | val    | 8,676     | 100        | 16.98        | 6.00           | 8.00           |
| Real/Resyn | test   | 8,494     | 900        | 16.60        | 6.00           | 8.00           |
| Infill/CaP | train  | 26,546    | 70         | 51.98        | 5.40           | 9.08           |
| Infill/CaP | val    | 8,686     | 100        | 16.99        | 5.45           | 8.76           |
| Infill/CaP | test   | 8,493     | 903        | 16.64        | 5.49           | 8.85           |

Data Structure

Each split (e.g., split-0/) contains:

split-X/
├── combine/                    # Directory containing all audio files (~11,076 files)
│   ├── dev_real_medium-*.wav          # Authentic audio samples
│   ├── dev_edit_medium-*.wav          # Edited audio samples
│   ├── dev_cut_paste_medium-*.wav     # Cut-and-paste manipulated samples
│   └── dev_resyn_medium-*.wav         # Resynthesized audio samples
├── medium_real.txt             # Labels for authentic audio (2,769 entries)
├── medium_edit.txt             # Labels for edited audio (2,769 entries)
├── medium_cut_paste.txt        # Labels for cut-paste audio (2,769 entries)
└── medium_resyn.txt            # Labels for resynthesized audio (2,769 entries)
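
A short, untested sketch that walks one extracted split and pairs each label entry with its WAV file in combine/; it assumes the layout above and that each audio file is named after the label line's first field plus a .wav extension:

```python
from pathlib import Path

split_dir = Path("data/split-0")

for label_file in sorted(split_dir.glob("medium_*.txt")):
    manipulation = label_file.stem.replace("medium_", "")  # real / edit / cut_paste / resyn
    with open(label_file) as f:
        for line in f:
            name, timestamp, label = line.split()
            # ASSUMPTION: audio files are "<first field>.wav" inside combine/.
            wav_path = split_dir / "combine" / f"{name}.wav"
            print(manipulation, wav_path.name, timestamp, label)
```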

Audio Categories

1. Authentic Speech (dev_real_medium-*)

  • Original, unmodified speech recordings from LibriVox audiobooks
  • Labeled as class 1 (authentic)
  • Simple time annotation format: filename start-end-T label

2. Resynthesized Speech (dev_resyn_medium-*)

  • Speech regenerated from the mel-spectrogram using a HiFi-GAN vocoder
  • Labeled as class 1 (authentic)
  • Simple time annotation format

3. Edited Speech (dev_edit_medium-*)

  • Audio samples with artificial modifications/edits
  • Labeled as class 0 (manipulated)
  • Complex time annotation with T/F segments indicating real/fake portions

4. Cut-and-Paste Speech (dev_cut_paste_medium-*)

  • Audio created by cutting and pasting segments from different sources
  • Labeled as class 0 (manipulated)
  • Complex time annotation showing spliced segments
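
The four categories reduce to a small prefix-to-(category, label) mapping, which can be handy when building a classification manifest. The prefixes below are the file-name patterns listed under Data Structure:

```python
# Category and binary label implied by each file-name prefix
# (1 = authentic, 0 = manipulated), per the descriptions above.
CATEGORY_LABELS = {
    "dev_real_medium": ("authentic", 1),
    "dev_resyn_medium": ("resynthesized", 1),
    "dev_edit_medium": ("edited", 0),
    "dev_cut_paste_medium": ("cut_and_paste", 0),
}

def categorize(filename: str):
    """Return (category, label) for a SINE file name, or None if unrecognized."""
    for prefix, info in CATEGORY_LABELS.items():
        if filename.startswith(prefix):
            return info
    return None

assert categorize("dev_edit_medium-100-emerald_city_librivox_64kb_mp3"
                  "-emeraldcity_02_baum_64kb_21") == ("edited", 0)
```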

Label Format

Simple Format (Real/Resyn)

filename start_time-end_time-T label

Example:

dev_real_medium-100-emerald_city_librivox_64kb_mp3-emeraldcity_02_baum_64kb_21 0.00-7.92-T 1

Complex Format (Edit/Cut-Paste)

filename time_segment1-T/time_segment2-F/time_segment3-T label

Example:

dev_edit_medium-100-emerald_city_librivox_64kb_mp3-emeraldcity_02_baum_64kb_21 0.00-4.89-T/4.89-5.19-F/5.19-8.01-T 0

Where:

  • T = True/Authentic segment
  • F = False/Manipulated segment
  • label: 1 = Authentic, 0 = Manipulated
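
Both formats can be handled by one parser, since the simple format is just the complex one with a single all-T segment. A sketch that turns a label line into (start, end, is_real) tuples:

```python
def parse_label_line(line: str):
    """Parse '<filename> <seg1>/<seg2>/... <label>' into structured fields."""
    filename, segments_str, label = line.split()
    segments = []
    for seg in segments_str.split("/"):
        start, end, flag = seg.split("-")
        segments.append((float(start), float(end), flag == "T"))
    return filename, segments, int(label)

# Example line from the "Complex Format" section above.
line = ("dev_edit_medium-100-emerald_city_librivox_64kb_mp3-"
        "emeraldcity_02_baum_64kb_21 0.00-4.89-T/4.89-5.19-F/5.19-8.01-T 0")
name, segments, label = parse_label_line(line)
assert label == 0 and segments[1] == (4.89, 5.19, False)
```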

Applications

This dataset is suitable for:

  • Speech Deepfake Detection: Binary classification of authentic vs. manipulated speech
  • Temporal Localization: Identifying specific time segments that contain manipulations (see the sketch after this list)
  • Manipulation Type Classification: Distinguishing between different types of audio manipulation
  • Robustness Testing: Evaluating detection systems across various manipulation techniques
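
For the temporal-localization use case, the T/F segments can be rasterized into frame-level targets. A hedged sketch building on parse_label_line above; the 20 ms frame hop is an assumption, not part of the dataset specification:

```python
import numpy as np

# Segments as produced by parse_label_line for the example edit line above.
segments = [(0.00, 4.89, True), (4.89, 5.19, False), (5.19, 8.01, True)]

def segments_to_frame_labels(segments, hop_s: float = 0.02):
    """Rasterize (start, end, is_real) spans into per-frame targets using the
    card's convention: 1 = authentic frame, 0 = manipulated frame."""
    total_dur = max(end for _, end, _ in segments)
    n_frames = int(round(total_dur / hop_s))
    labels = np.ones(n_frames, dtype=np.int64)
    for start, end, is_real in segments:
        if not is_real:
            labels[int(start / hop_s):int(np.ceil(end / hop_s))] = 0
    return labels

frame_labels = segments_to_frame_labels(segments)
print(frame_labels.shape, int((frame_labels == 0).sum()))  # manipulated-frame count
```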

Citation

This dataset is joint work by NVIDIA and National Taiwan University. If you use it, please cite:

@inproceedings{huang2024detecting,
  title={Detecting the Undetectable: Assessing the Efficacy of Current Spoof Detection Methods Against Seamless Speech Edits},
  author={Huang, Sung-Feng and Kuo, Heng-Cheng and Chen, Zhehuai and Yang, Xuesong and Yang, Chao-Han Huck and Tsao, Yu and Wang, Yu-Chiang Frank and Lee, Hung-yi and Fu, Szu-Wei},
  booktitle={2024 IEEE Spoken Language Technology Workshop (SLT)},
  pages={652--659},
  year={2024},
  organization={IEEE}
}

License

This dataset is released under the Apache 2.0 License.


Note: This dataset is intended for research purposes in speech authenticity verification and deepfake detection. Please use responsibly and in accordance with applicable laws and regulations.