# UncertSAM Benchmark

## Dataset Summary
UncertSAM is a curated multi-domain benchmark designed to stress-test segmentation foundation models (specifically the Segment Anything Model, SAM) under challenging conditions. It comprises eight datasets spanning diverse visual degradations and environments, including shadows, transparency, camouflage, and medical imaging.
This benchmark was introduced in the paper "Towards Integrating Uncertainty for Domain-Agnostic Segmentation" to investigate whether uncertainty quantification can enhance model generalisability and robustness in shifted or limited-knowledge domains.
## Included Datasets and Licensing
⚠️ Important: The UncertSAM benchmark aggregates existing datasets. While the benchmark curation allows for domain-agnostic analysis, users must adhere to the original licenses of the individual subsets. Below is the licensing information for each component as detailed in the paper:
| Dataset | Source / Reference | License | Domain / Challenge |
|---|---|---|---|
| BIG | Cheng et al. [2020] | Research Only | Fine-grained salient objects |
| COIFT | Liew et al. [2021] | Attribution-NonCommercial 4.0 | Fine-grained salient objects |
| COD10K-v3 | Fan et al. [2022] | Research Only | Camouflaged objects |
| MSD Spleen | Antonelli et al. [2022] | Attribution-ShareAlike 4.0 | Medical CT scans |
| ISTD | Wang et al. [2018] | Research Only | Shadows |
| SBU | Vicente et al. [2016] | Unknown | Shadows |
| Flare7K | Dai et al. [2022] | S-Lab License 1.0 | Lighting artifacts (Flares) |
| Trans10K | Xie et al. [2021] | Research Only | Transparent objects |
| SA-1B (Subset) | Kirillov et al. [2023] | SA-1B V1.0 | General (Training subset) |
Note: For the SBU dataset, whose license is listed as "Unknown", please exercise caution and refer to the original publication for terms of use.
## Dataset Statistics
I = number of images, M = number of corresponding segmentation masks per split.
| Dataset | Train (I) | Train (M) | Val (I) | Val (M) | Test (I) | Test (M) |
|---|---|---|---|---|---|---|
| Trans | 7,289 | 16,443 | 1,560 | 3,604 | 1,560 | 3,609 |
| MSD | 668 | 668 | 135 | 135 | 135 | 135 |
| ISTD | 1,309 | 1,408 | 275 | 295 | 285 | 300 |
| COD | 3,505 | 4,049 | 760 | 885 | 750 | 846 |
| BIG | 97 | 120 | 19 | 21 | 30 | 36 |
| COIFT | 209 | 209 | 41 | 41 | 30 | 30 |
| Flare | 140 | 163 | 30 | 33 | 30 | 38 |
| SBU | 3,278 | 7,505 | 705 | 1,643 | 705 | 1,592 |
## Dataset Preprocessing
To standardise the benchmarks for evaluation with SAM, several preprocessing steps were applied to the original data. The specific processing methods used for different subsets are described below:
### 1. Connected Component Analysis (CCA)
Applied to datasets containing images with multiple disconnected surfaces (potentially valid masks under SAM's entity-part strategy) to separate them into distinct masks; a code sketch follows the steps below.
- Morphological Closing: Applied twice using a 3x3 kernel to connect small disconnected parts that belong to a larger coherent target.
- Component Extraction: Connected components are extracted from the closed mask.
- Refinement: A binary AND operation is performed between the resulting mask and the initial mask to remove artifacts introduced by closing. Components smaller than 1,000 pixels are discarded.
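For illustration, the steps above translate roughly to the following Python/OpenCV sketch. This is a reconstruction of the described procedure, not the authors' released code; the function name and the `min_area` default are assumptions based on the description.

```python
import cv2
import numpy as np

def split_into_components(mask: np.ndarray, min_area: int = 1000) -> list[np.ndarray]:
    """Split a binary uint8 mask (values 0/255) into per-component masks."""
    kernel = np.ones((3, 3), np.uint8)
    # Morphological closing, applied twice, to bridge small gaps within a coherent target.
    closed = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel, iterations=2)

    # Extract connected components from the closed mask (label 0 is the background).
    num_labels, labels = cv2.connectedComponents(closed)

    components = []
    for label in range(1, num_labels):
        component = (labels == label).astype(np.uint8) * 255
        # Binary AND with the initial mask to remove artifacts introduced by closing.
        component = cv2.bitwise_and(component, mask)
        # Discard components smaller than 1,000 pixels.
        if cv2.countNonZero(component) >= min_area:
            components.append(component)
    return components
```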
### 2. Colour Coded (CC) Processing
For datasets where masks are colour-coded (multiple classes/objects in one mask file), unique RGB values are extracted, and the mask is split into multiple binary targets accordingly.
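A minimal NumPy sketch of this split is shown below; the function name and the assumption that black (0, 0, 0) encodes the background are ours, not taken from the paper.

```python
import numpy as np

def split_colour_coded_mask(mask_rgb: np.ndarray) -> list[np.ndarray]:
    """Split a colour-coded (H, W, 3) mask into one binary mask per unique RGB value."""
    colours = np.unique(mask_rgb.reshape(-1, 3), axis=0)
    binary_masks = []
    for colour in colours:
        # Skip the background colour (assumed to be black here).
        if not colour.any():
            continue
        binary_masks.append(np.all(mask_rgb == colour, axis=-1).astype(np.uint8))
    return binary_masks
```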
### 3. CT Scan Processing (MSD Spleen)
The original 3D CT volumes (.nii.gz format) were processed into 2D images; a code sketch follows the steps below:
- Slicing: Volumes were sliced along the axial plane, retaining only slices containing foreground labels.
- Normalisation: Z-score normalisation was applied.
- Clipping: Voxel intensities were clipped to the [0.05, 99.95] percentile range.
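A minimal sketch of this pipeline, assuming nibabel for reading the `.nii.gz` volumes and following the order of steps stated above; the function name and file arguments are hypothetical.

```python
import nibabel as nib
import numpy as np

def volume_to_slices(image_path: str, label_path: str) -> list:
    """Convert a 3D CT volume and its label map into 2D axial slices."""
    volume = nib.load(image_path).get_fdata()
    labels = nib.load(label_path).get_fdata()

    slices = []
    for z in range(volume.shape[-1]):  # iterate along the axial dimension
        img, lab = volume[..., z], labels[..., z]
        # Retain only slices that contain foreground labels.
        if not lab.any():
            continue
        # Z-score normalisation of the slice intensities.
        img = (img - img.mean()) / (img.std() + 1e-8)
        # Clip to the [0.05, 99.95] percentile range.
        lo, hi = np.percentile(img, [0.05, 99.95])
        img = np.clip(img, lo, hi)
        slices.append((img, (lab > 0).astype(np.uint8)))
    return slices
```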
## Citation
If you use this benchmark in your research, please cite the original paper:
```bibtex
@inproceedings{brouwers2025towards,
  title={Towards Integrating Uncertainty for Domain-Agnostic Segmentation},
  author={Brouwers, Jesse and Xing, Xiaoyan and Timans, Alexander},
  booktitle={NeurIPS 2025 Workshop on Frontiers in Probabilistic Inference},
  year={2025}
}
```
Please also ensure you cite the original authors of the individual datasets included in this benchmark.