# Nuremberg Trials Complete Dataset
The most comprehensive digital collection of Nuremberg Trials documents available.
## Contents

### Harvard Law School Transcripts (518MB)
- 13 trials with complete transcripts
- 153,010 pages of testimony
- Source: Harvard Law School Nuremberg Trials Project
| Trial | Name | Pages |
|---|---|---|
| 1 | Medical Case (Karl Brandt et al.) | 11,681 |
| 2 | Milch Case | 3,105 |
| 3 | Justice Case | 11,122 |
| 4 | Hostages Case | 10,549 |
| 5 | Pohl Case | 8,224 |
| 6 | Einsatzgruppen Case | 6,893 |
| 7 | IMT - Main Trial (Goering et al.) | 17,268 |
| 8 | Flick Case | 11,087 |
| 9 | IG Farben Case | 15,628 |
| 10 | RuSHA Case | 5,415 |
| 11 | Krupp Case | 13,447 |
| 12 | Ministries Case | 29,077 |
| 13 | High Command Case | 9,514 |
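Each trial is stored as page-level records alongside a per-trial metadata record. The field names below come from the dataset's Arrow schema; the values are illustrative placeholders, not actual data:

```python
# Page-level record (field names from the dataset's Arrow schema;
# values are placeholders, not real transcript content).
page_record = {
    "seq": 0,                        # int64: position of the page within the trial
    "page_number": "1",              # string: printed page number
    "date": "1947-01-01T00:00:00",   # timestamp[ns]: session date
    "date_iso": "1947-01-01",        # string: the same date in ISO form
    "text": "...",                   # string: full page text
    "speakers": ["..."],             # list<string>: speakers appearing on the page
    "trial_id": 1,                   # int64: trial number (see table above)
    "trial_name": "Medical Case (Karl Brandt et al.)",
}

# Per-trial metadata record (a separate schema):
trial_record = {
    "trial_id": 1,
    "trial_name": "Medical Case (Karl Brandt et al.)",
    "total_pages": 11681,                            # int64: page count for the trial
    "date_range": {"first": "...", "last": "..."},   # struct<first, last>
    "dates": ["..."],                                # list<string>: all session dates
    "speakers": ["..."],                             # list<string>: all speakers in the trial
}
```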
### Yale Avalon Project (192MB)
- 857 documents
- 11.3 million words
- Includes: Judgments, indictments, London Charter, trial proceedings (22 volumes), Nazi Conspiracy & Aggression documents
- Source: Yale Law School Avalon Project
### Wikipedia (5.5MB)
- 49 pages of structured data
- 24 defendant biographies
- 11 trial overview pages
- 8 key prosecutors/judges
- 6 Nazi organization pages
## Usage

```python
from datasets import load_dataset

# Load Harvard transcripts
dataset = load_dataset("Adherence/nuremberg-trials-complete", data_dir="harvard_transcripts")
```
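Note that `harvard_transcripts/` mixes two record types (per-page records and per-trial metadata), so a blanket `load_dataset` call can fail with a `pyarrow.lib.ArrowInvalid` schema-mismatch error. A possible workaround, sketched below with a hypothetical file-name filter, is to list the repository contents and pass only files that share one schema via `data_files`:

```python
from datasets import load_dataset
from huggingface_hub import list_repo_files

# Inspect the repo layout to find which files hold the page-level records
files = list_repo_files("Adherence/nuremberg-trials-complete", repo_type="dataset")
transcript_files = sorted(f for f in files if f.startswith("harvard_transcripts/"))
print(transcript_files[:10])

# Then load only same-schema files. The "pages" substring below is a
# hypothetical name filter; check the printed listing and adjust it to
# the real file names.
# pages = load_dataset(
#     "json",
#     data_files=[
#         f"hf://datasets/Adherence/nuremberg-trials-complete/{f}"
#         for f in transcript_files
#         if "pages" in f
#     ],
# )
```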
```python
import json

from huggingface_hub import hf_hub_download

# Load Yale documents
yale_path = hf_hub_download(
    repo_id="Adherence/nuremberg-trials-complete",
    filename="yale_avalon/all_documents.json",
    repo_type="dataset",
)
with open(yale_path) as f:
    yale_docs = json.load(f)
```
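The internal structure of `all_documents.json` isn't documented above; assuming it parses to a list of document dicts (an assumption worth verifying against the file), a quick sanity check could look like this:

```python
# Sanity check; assumes yale_docs is a list of document dicts.
print(type(yale_docs))
if isinstance(yale_docs, list):
    print(len(yale_docs))       # expect 857 documents
    print(yale_docs[0].keys())  # inspect the per-document fields
```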
## License
This dataset aggregates public domain historical documents. The compilation is released under CC-BY-4.0.
## Citation
If you use this dataset, please cite the original sources:
- Harvard Law School Nuremberg Trials Project
- Yale Law School Avalon Project
- Wikipedia contributors
## About
Created to make the historical record of the Nuremberg Trials accessible through AI. Part of the Nuremberg Trials AI project.