nielsr (HF Staff) committed
Commit c374b48 · verified · 1 Parent(s): ab5bf33

Improve dataset card: Add task category, paper/code links, and LMDB sample usage


This PR enhances the `CPathPatchFeature` dataset card by:

* Adding `task_categories: ['image-feature-extraction']` to the metadata, which accurately reflects the dataset's purpose.
* Including direct links to the associated paper ([Revisiting End-to-End Learning with Slide-level Supervision in Computational Pathology](https://huggingface.co/papers/2506.02408)) and the GitHub repository ([https://github.com/DearCaat/E2E-WSI-ABMILX](https://github.com/DearCaat/E2E-WSI-ABMILX)) at the top of the content for improved discoverability.
* Adding a "Sample Usage" section with a Python code snippet, directly sourced from the project's GitHub README, demonstrating how to load data from an LMDB dataset.

These updates provide more comprehensive information and make the dataset easier for researchers to understand and use; a brief metadata-check sketch follows below.
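For context only (not part of this PR), here is a minimal sketch of how the metadata touched by these changes could be checked programmatically with `huggingface_hub` once the PR is merged; the expected values simply mirror the YAML changes in the diff below:

```python
from huggingface_hub import DatasetCard

# Load the dataset card and inspect the metadata fields this PR updates.
card = DatasetCard.load("Dearcat/CPathPatchFeature")
print(card.data.task_categories)  # expected: ['image-feature-extraction']
print(card.data.license)          # expected: 'apache-2.0'
print(card.data.size_categories)  # expected: ['100B<n<1T']
```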

Files changed (1)
1. README.md (+60 -3)
README.md CHANGED
@@ -1,16 +1,21 @@
  ---
- license: apache-2.0
  language:
  - en
+ license: apache-2.0
+ size_categories:
+ - 100B<n<1T
  tags:
  - medical
  - pathology
- size_categories:
- - 100B<n<1T
+ task_categories:
+ - image-feature-extraction
  ---

  # CPathPatchFeature: Pre-extracted WSI Features for Computational Pathology

+ Paper: [Revisiting End-to-End Learning with Slide-level Supervision in Computational Pathology](https://huggingface.co/papers/2506.02408)
+ Code: [https://github.com/DearCaat/E2E-WSI-ABMILX](https://github.com/DearCaat/E2E-WSI-ABMILX)
+
  ## Dataset Summary

  This dataset provides a comprehensive collection of pre-extracted features from Whole Slide Images (WSIs) for various cancer types, designed to facilitate research in computational pathology. The features are extracted using multiple state-of-the-art encoders, offering a rich resource for developing and evaluating Multiple Instance Learning (MIL) models and other deep learning architectures.
@@ -73,6 +78,58 @@ Then, clone the dataset repository:
  git clone https://huggingface.co/datasets/Dearcat/CPathPatchFeature
  ```

+ ## Sample Usage
+
+ Here is an example of loading data from an LMDB dataset, as provided in the GitHub repository:
+
+ ```python
+ import lmdb
+ import torch
+ import pickle
+ from datasets.utils import imfrombytes  # utility provided in the project repository
+
+ slide_name = "xxxx"  # Example slide name
+ path_to_lmdb = "YOUR_PATH_TO_LMDB_FILE"  # e.g., "/path/to/my_dataset_256_level0.lmdb"
+
+ # Open LMDB dataset
+ env = lmdb.open(path_to_lmdb, subdir=False, readonly=True, lock=False,
+                 readahead=False, meminit=False, map_size=100 * (1024**3))
+
+ with env.begin(write=False) as txn:
+     # Get patch count for the slide
+     pn_dict = pickle.loads(txn.get(b'__pn__'))
+     if slide_name not in pn_dict:
+         raise ValueError(f"Slide ID {slide_name} not found in LMDB metadata.")
+     num_patches = pn_dict[slide_name]
+
+     # Generate patch IDs
+     patch_ids = [f"{slide_name}-{i}" for i in range(num_patches)]
+
+     # Allocate memory for patches (adjust dimensions and dtype as needed)
+     # Assuming patches are 224x224, 3 channels, and will be normalized later
+     patches_tensor = torch.empty((len(patch_ids), 3, 224, 224), dtype=torch.float32)
+
+     # Load and decode data into torch.tensor
+     for i, key_str in enumerate(patch_ids):
+         patch_bytes = txn.get(key_str.encode('ascii'))
+         if patch_bytes is None:
+             print(f"Warning: Key {key_str} not found in LMDB.")
+             continue
+         # Assuming the stored value is pickled image bytes
+         img_array = imfrombytes(pickle.loads(patch_bytes).tobytes())
+         patches_tensor[i] = torch.from_numpy(img_array.transpose(2, 0, 1))  # HWC to CHW
+
+ # Normalize the data (example using ImageNet stats)
+ # Ensure values are in [0, 255] before this normalization if they aren't already
+ mean = torch.tensor([0.485, 0.456, 0.406]).view((1, 3, 1, 1)) * 255.0
+ std = torch.tensor([0.229, 0.224, 0.225]).view((1, 3, 1, 1)) * 255.0
+
+ # If your patches_tensor is already in [0,1] range, remove * 255.0 from mean/std
+ # If your patches_tensor is uint8 [0,255], convert to float first: patches_tensor.float()
+ patches_tensor = (patches_tensor.float() - mean) / std
+
+ env.close()
+ ```

  ### Citation
  This dataset has been used in the following publications. If you find it useful for your research, please consider citing them:
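As a usage note beyond the snippet added in the diff, here is a minimal sketch of how the same LMDB layout (the `__pn__` patch-count dictionary and `"<slide>-<index>"` keys) could be wrapped in a PyTorch `Dataset`. `LMDBPatchDataset` is a hypothetical helper name introduced here, and `imfrombytes` is assumed to come from the project repository as in the snippet above:

```python
import pickle

import lmdb
import torch
from torch.utils.data import Dataset

from datasets.utils import imfrombytes  # assumed project utility, as in the snippet above


class LMDBPatchDataset(Dataset):
    """Serves the patches of one slide from an LMDB file, one patch per item."""

    def __init__(self, path_to_lmdb: str, slide_name: str):
        self.path_to_lmdb = path_to_lmdb
        self.slide_name = slide_name
        self.env = None  # opened lazily so the object stays picklable for DataLoader workers

        # Read the patch count once; '__pn__' maps slide names to patch counts.
        env = lmdb.open(path_to_lmdb, subdir=False, readonly=True, lock=False,
                        readahead=False, meminit=False)
        with env.begin(write=False) as txn:
            self.num_patches = pickle.loads(txn.get(b'__pn__'))[slide_name]
        env.close()

    def __len__(self):
        return self.num_patches

    def __getitem__(self, idx):
        if self.env is None:
            self.env = lmdb.open(self.path_to_lmdb, subdir=False, readonly=True,
                                 lock=False, readahead=False, meminit=False)
        with self.env.begin(write=False) as txn:
            patch_bytes = txn.get(f"{self.slide_name}-{idx}".encode('ascii'))
        # Decode to an HWC uint8 array, then convert to a CHW float tensor.
        img = imfrombytes(pickle.loads(patch_bytes).tobytes())
        return torch.from_numpy(img.transpose(2, 0, 1)).float()
```

Opening the environment lazily in `__getitem__` keeps the dataset object picklable, which matters when it is used with a multi-worker `DataLoader`; normalization with the ImageNet statistics shown in the diff can then be applied per batch.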