---
license: apache-2.0
size_categories:
- 1K<n<10K
---
# Image as an IMU: Real-world Finetuning Dataset

Official real-world finetuning dataset from *Image as an IMU: Estimating Camera Motion from a Single Motion-Blurred Image* (ICCV 2025 Oral).

[[arXiv](https://arxiv.org/abs/2503.17358)] [[Webpage](https://jerredchen.github.io/image-as-imu/)] [[GitHub](https://github.com/jerredchen/image-as-an-imu)]

**[PIXL, University of Oxford](https://pixl.cs.ox.ac.uk/)**

[Jerred Chen](https://jerredchen.github.io/), [Ronald Clark](https://ronnie-clark.co.uk/)

---

## Dataset Details

This dataset consists of 32 sequences of real-world motion-blurred videos of various indoor scenes, captured with an iPhone 13 camera.

`dataset_train_real-world.csv` and `dataset_val_real-world.csv` are the CSV files used for training and validating the model in the paper. They can be plugged directly into the dataloader provided in the GitHub repository.

The CSVs provide the following columns:
- blurred: the relative path to the (resized 320x224) motion-blurred RGB image
- ts1,ts2: the timestamps of the previous and next RGB frames
- fx,fy,cx,cy: the *scaled* camera intrinsics, corresponding to the 320x224 image
- bRa_qx,bRa_qy,bRa_qz,bRa_qw: the body-frame rotational velocity, parameterized as a quaternion
- bta_x,bta_y,bta_z: the body-frame translational velocity
- exposure: the exposure time of the given image
- sequence: the sequence name
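
For illustration, here is a minimal sketch of reading one sample from these columns, assuming pandas, OpenCV, and SciPy are available (none of which the dataset mandates); the dataloader in the GitHub repository is the reference implementation, and the CSV path below is illustrative:

```python
import numpy as np
import pandas as pd
import cv2
from scipy.spatial.transform import Rotation

df = pd.read_csv("dataset_train_real-world.csv")
row = df.iloc[0]

# Paths in the "blurred" column are relative to the dataset root.
image = cv2.imread(row["blurred"])

# Intrinsics are already scaled to the 320x224 resolution.
K = np.array([[row["fx"], 0.0, row["cx"]],
              [0.0, row["fy"], row["cy"]],
              [0.0, 0.0, 1.0]])

# Body-frame rotational velocity as a quaternion (x, y, z, w order,
# matching SciPy's scalar-last convention).
rotation = Rotation.from_quat([row["bRa_qx"], row["bRa_qy"],
                               row["bRa_qz"], row["bRa_qw"]])
# Body-frame translational velocity.
translation = np.array([row["bta_x"], row["bta_y"], row["bta_z"]])

print(row["sequence"], row["exposure"], rotation.as_rotvec(), translation)
```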

## Sequence Details

Each sequence consists of the following:
```
sequence1/
├── blurry_frames_320x224
│   ├── XXXXXX.jpg
│   └── ...
├── confidence
│   ├── XXXXXX.png
│   └── ...
├── depth
│   ├── XXXXXX.png
│   └── ...
├── rgb
│   ├── XXXXXX.jpg
│   └── ...
├── rgb_320x224
│   ├── XXXXXX.jpg
│   └── ...
├── blurred_frames_320x224.csv
├── camera_matrix.csv
├── camera_matrix_320x224.csv
├── imu.csv
├── odometry.csv
└── velocities.csv
```

Sequences were recorded using the [StrayScanner app](https://apps.apple.com/us/app/stray-scanner/id1557051662), slightly modified to also obtain the exposure time from ARKit.
`confidence`, `depth`, `rgb`, `camera_matrix.csv`, `imu.csv`, and `odometry.csv` are the original outputs from StrayScanner.

We provide the following data in addition to the StrayScanner outputs:
- `rgb_320x224` contains the recorded RGB images resized to 320x224
- `blurry_frames_320x224` contains the frames identified as having more extensive blur, using the FFT-based measure described in [Liu et al.](https://ieeexplore.ieee.org/document/4587465) (a simplified sketch of such a measure follows this list)
- `camera_matrix_320x224.csv` contains the correspondingly scaled camera intrinsics
- `velocities.csv` contains the translational velocities computed from the ARKit poses in `odometry.csv` and the rotational velocities taken directly from the gyroscope
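
As a rough illustration of frequency-domain blur detection, the sketch below scores an image by the fraction of its spectral energy at high frequencies; heavily blurred frames retain less high-frequency content, so lower scores suggest more extensive blur. This is a simplified stand-in written for this README, not the exact measure of Liu et al., and the file path is hypothetical:

```python
import numpy as np
import cv2

def high_freq_energy_ratio(gray: np.ndarray, radius: int = 20) -> float:
    """Fraction of FFT magnitude outside a low-frequency disc around DC."""
    spectrum = np.fft.fftshift(np.fft.fft2(gray.astype(np.float32)))
    magnitude = np.abs(spectrum)
    h, w = gray.shape
    yy, xx = np.ogrid[:h, :w]
    low_freq = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2
    return float(magnitude[~low_freq].sum() / magnitude.sum())

# Lower scores indicate more extensive blur.
gray = cv2.imread("sequence1/rgb_320x224/000000.jpg", cv2.IMREAD_GRAYSCALE)
print(high_freq_energy_ratio(gray))
```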

Of course, the RGB images and camera intrinsics can be resized and scaled on the fly during training; we provide the resized versions to maintain consistency with our own training.
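
For reference, pinhole intrinsics scale linearly with the resize factors; a minimal sketch of on-the-fly resizing, assuming OpenCV:

```python
import cv2

def resize_with_intrinsics(image, fx, fy, cx, cy, new_w=320, new_h=224):
    """Resize an image and scale its pinhole intrinsics to match."""
    h, w = image.shape[:2]
    sx, sy = new_w / w, new_h / h
    resized = cv2.resize(image, (new_w, new_h))
    return resized, fx * sx, fy * sy, cx * sx, cy * sy
```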

Since the ARKit-computed poses can have very large errors, `dataset_train_real-world.csv` contains only manually filtered samples, excluding those with large outlier pose estimates.