---
license: mit
pretty_name: Metamath Proof Graphs (10k)
task_categories:
- graph-ml
tags:
- graphs
- gnn
- metamath
- pytorch-geometric
- topobench
size_categories:
- 10K<n<100K
dataset_summary: >
  A graph-based dataset of 10,000 Metamath theorems and their 10,000
  corresponding proof DAGs, including CodeBERT node embeddings, conclusion
  masking, rare-label collapsing, and fixed train/val/test splits.
---
# Metamath Proof Graphs (10k)

This repository provides a PyTorch Geometric dataset designed for the TAG-DS TopoBench challenge.
It contains 20,000 graphs total: 10,000 theorem-only DAGs and 10,000 full proof DAGs drawn from the first 10k theorems in the Metamath [1] database.
## Contents

### `data.pt`

A preprocessed PyG dataset containing:
- `data`: global collated storage of all nodes, edges, and labels
- `slices`: pointers for reconstructing individual graphs
- `train_idx`, `val_idx`, `test_idx`: fixed graph-level splits
## Dataset Structure

### 1. Theorem Graphs (indices 0–9,999)
Each theorem is represented as a small DAG consisting only of:
- its hypothesis nodes
- its conclusion node
No proof-step nodes are included: these graphs encode the statement only, not the derivation.
### 2. Proof Graphs (indices 10,000–19,999)
For each of the same theorems, the full proof DAG is included, containing:
- hypothesis nodes
- intermediate proof steps
- the same conclusion node
Thus each theorem appears twice:
- once as a theorem-only graph
- once as the complete proof of that theorem
This pairing enables:
- learning from theorem statements
- evaluating on masked proof conclusions
- a consistent label space across both halves
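
To make the pairing concrete, the sketch below maps a theorem to its two graph indices. It assumes the two halves are aligned in order, i.e. theorem `i` sits at index `i` and its full proof at index `i + 10,000`; this offset convention is inferred from the index ranges above and should be verified against the data.

```python
# Hypothetical pairing helper. Assumption: theorem-only graph i and its
# full proof graph are stored at indices i and i + 10_000, respectively.
NUM_THEOREMS = 10_000

def paired_indices(theorem_id: int) -> tuple[int, int]:
    """Return (theorem_graph_idx, proof_graph_idx) for a given theorem."""
    if not 0 <= theorem_id < NUM_THEOREMS:
        raise ValueError(f"theorem_id must be in [0, {NUM_THEOREMS})")
    return theorem_id, theorem_id + NUM_THEOREMS

# Example: theorem 42's statement-only graph and its proof graph
thm_idx, proof_idx = paired_indices(42)  # (42, 10042)
```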
## Additional Details
- Total graphs: 20,000
- Node embeddings: 768-dimensional CodeBERT vectors
- Graph type: directed acyclic graphs (DAGs)
- Label space: 3,557 justification labels; all labels with fewer than 5 training occurrences are collapsed into `UNK`
- Conclusion masking: the conclusion node’s embedding is zeroed out; the model must infer its label from the graph structure and the other nodes
- Monotonicity constraint: in Metamath, a proof may only use theorems whose index is less than or equal to that of the current theorem, so later theorems never appear in earlier graphs
- Theorem-only graphs are included in training as prior knowledge for downstream proof prediction.
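
As an illustration of the masking scheme, here is a minimal sketch for locating the masked conclusion node in a reconstructed graph. It assumes node features live in the standard PyG attribute `x` (shape `[num_nodes, 768]`) and that the conclusion is the only node whose embedding is all zeros; both assumptions should be checked against the actual tensors.

```python
import torch
from torch_geometric.data import Data

def masked_conclusion_nodes(graph: Data) -> torch.Tensor:
    """Return indices of nodes whose 768-dim embedding is entirely zero.

    Under the conclusion-masking scheme described above, this is expected to
    pick out the masked conclusion node of each graph (assumption: no other
    node has an all-zero CodeBERT embedding).
    """
    zero_rows = graph.x.abs().sum(dim=1) == 0
    return zero_rows.nonzero(as_tuple=True)[0]
```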
## Basic Usage
```python
import torch

# Load the collated dataset: PyG-style data/slices plus fixed splits
obj = torch.load("data.pt", weights_only=False)

data = obj["data"]            # global collated storage of all nodes, edges, and labels
slices = obj["slices"]        # pointers for reconstructing individual graphs
train_idx = obj["train_idx"]  # fixed graph-level splits
val_idx = obj["val_idx"]
test_idx = obj["test_idx"]
```
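
The loaded object is only the collated storage; to iterate over individual graphs, the `slices` still have to be applied. One way to do this, assuming `data` and `slices` follow PyG's standard `InMemoryDataset` collated format (which the key names suggest), is a thin wrapper like the sketch below; the class name is arbitrary.

```python
from torch_geometric.data import InMemoryDataset

class MetamathProofGraphs(InMemoryDataset):
    """Thin wrapper exposing the pre-collated graphs as an indexable PyG dataset."""

    def __init__(self, data, slices):
        super().__init__(root=None)            # no download or processing step needed
        self.data, self.slices = data, slices  # standard PyG collated storage

dataset = MetamathProofGraphs(data, slices)
print(len(dataset))             # expected: 20000
graph_0 = dataset[0]            # index 0: a theorem-only DAG
train_set = dataset[train_idx]  # subset selected by the fixed train split
```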
## Acknowledgements
Thanks to the Erdős Institute for providing the project-based, collaborative environment where key components of the preprocessing pipeline were first developed.
## References
[1] Metamath Official Site — https://us.metamath.org/index.html