---
license: apache-2.0
size_categories:
- 1K<n<10K
---
# Image as an IMU: Real-world Finetuning Dataset

Official real-world finetuning dataset from *Image as an IMU: Estimating Camera Motion from a Single Motion-Blurred Image* (ICCV 2025 Oral).

[[arXiv](https://arxiv.org/abs/2503.17358)] [[Webpage](https://jerredchen.github.io/image-as-imu/)] [[GitHub](https://github.com/jerredchen/image-as-an-imu)]

**[PIXL, University of Oxford](https://pixl.cs.ox.ac.uk/)**

[Jerred Chen](https://jerredchen.github.io/), [Ronald Clark](https://ronnie-clark.co.uk/)

---

## Dataset Details

This dataset consists of 32 sequences of real-world motion-blurred videos of various indoor scenes, captured with an iPhone 13 camera.

`dataset_train_real-world.csv` and `dataset_val_real-world.csv` are the CSV files used for training and validating the model in the paper. They can be plugged directly into the dataloader provided in the GitHub repository.

Each row of the CSVs provides the following (a minimal loading sketch follows the list):
- blurred: the relative path to the (resized 320x224) motion-blurred RGB image
- ts1,ts2: the timestamps of the previous and next RGB frames
- fx,fy,cx,cy: the *scaled* camera intrinsics, corresponding to the 320x224 image
- bRa_qx,bRa_qy,bRa_qz,bRa_qw: the body-frame rotational velocity, parameterized as a quaternion
- bta_x,bta_y,bta_z: the body-frame translational velocity
- exposure: the exposure time of the given image
- sequence: the sequence name

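As a quick orientation, here is a minimal sketch of reading one sample with pandas. The column names come from the list above; the SciPy quaternion conversion, and its assumed (x, y, z, w) ordering, is our own illustration rather than part of the official dataloader:

```python
import numpy as np
import pandas as pd
from scipy.spatial.transform import Rotation

# Load the training split (paths inside are relative to the dataset root).
df = pd.read_csv("dataset_train_real-world.csv")
row = df.iloc[0]

# Scaled pinhole intrinsics for the 320x224 image.
K = np.array([[row["fx"], 0.0, row["cx"]],
              [0.0, row["fy"], row["cy"]],
              [0.0, 0.0, 1.0]])

# Body-frame rotational velocity; SciPy expects (x, y, z, w) quaternion order,
# which we assume matches the bRa_* columns.
bRa = Rotation.from_quat([row["bRa_qx"], row["bRa_qy"], row["bRa_qz"], row["bRa_qw"]])
bta = np.array([row["bta_x"], row["bta_y"], row["bta_z"]])  # body-frame translational velocity

print(row["blurred"], row["sequence"], row["exposure"])
```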

## Sequence Details

Each sequence consists of the following:
```
sequence1/
├─ blurry_frames_320x224
│  ├─ XXXXXX.jpg
│  └─ ...
├─ confidence
│  ├─ XXXXXX.png
│  └─ ...
├─ depth
│  ├─ XXXXXX.png
│  └─ ...
├─ rgb
│  ├─ XXXXXX.jpg
│  └─ ...
├─ rgb_320x224
│  ├─ XXXXXX.jpg
│  └─ ...
├─ blurred_frames_320x224.csv
├─ camera_matrix.csv
├─ camera_matrix_320x224.csv
├─ imu.csv
├─ odometry.csv
└─ velocities.csv
```
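
For orientation, a minimal sketch of opening one sequence follows; the comma-separated 3x3 layout assumed for `camera_matrix_320x224.csv` is our guess, so check the file before relying on it:

```python
import numpy as np
from pathlib import Path
from PIL import Image

seq = Path("sequence1")

# Scaled 3x3 intrinsics; we assume a plain comma-separated 3x3 layout here.
K = np.loadtxt(seq / "camera_matrix_320x224.csv", delimiter=",")

# Motion-blurred frames at the training resolution.
frames = sorted((seq / "blurry_frames_320x224").glob("*.jpg"))
img = Image.open(frames[0])
print(K.shape, img.size)  # expected: (3, 3) and (320, 224)
```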

Sequences were recorded using the [StrayScanner app](https://apps.apple.com/us/app/stray-scanner/id1557051662), slightly modified to also obtain the exposure time from ARKit.
`confidence`, `depth`, `rgb`, `camera_matrix.csv`, `imu.csv`, and `odometry.csv` are the original outputs from StrayScanner.

We provide the following data in addition to the StrayScanner outputs:
- `rgb_320x224` contains the resized recorded RGB images
- `blurry_frames_320x224` contains the frames identified as exhibiting more extensive blur, using the FFT-based measure described in [Liu et al.](https://ieeexplore.ieee.org/document/4587465)
- `camera_matrix_320x224.csv` contains the correspondingly scaled camera intrinsics
- `velocities.csv` consists of the translational velocities computed from the ARKit poses in `odometry.csv` and the rotational velocities taken directly from the gyroscope (see the sketch after this list)
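
The exact computation is in the GitHub repository, but the idea behind the translational velocities can be sketched as finite differences over consecutive ARKit poses; the array names and odometry layout below are hypothetical:

```python
import numpy as np

# Hypothetical inputs parsed from odometry.csv: timestamps in seconds and
# world-frame camera positions (names and layout are our assumptions).
timestamps = np.array([0.000, 0.033, 0.067, 0.100])
positions = np.array([[0.000, 0.000, 0.000],
                      [0.010, 0.001, 0.002],
                      [0.021, 0.001, 0.004],
                      [0.033, 0.002, 0.005]])

# Finite-difference translational velocity between consecutive poses (m/s).
# Note: this yields world-frame velocities; the CSVs store body-frame values,
# which additionally requires rotating by the camera orientation at each pose.
velocities = np.diff(positions, axis=0) / np.diff(timestamps)[:, None]
print(velocities.shape)  # (N - 1, 3)
```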

Of course, the RGB images and camera intrinsics can also be resized and scaled on the fly during training; we provide the pre-resized versions to stay consistent with our own training setup.
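
If you do rescale on the fly, pinhole intrinsics scale linearly with each image dimension. A minimal sketch (the helper name and original resolution arguments are placeholders):

```python
def scale_intrinsics(fx, fy, cx, cy, orig_w, orig_h, new_w=320, new_h=224):
    """Rescale pinhole intrinsics when resizing an orig_w x orig_h image
    to new_w x new_h; each axis scales independently."""
    sx, sy = new_w / orig_w, new_h / orig_h
    return fx * sx, fy * sy, cx * sx, cy * sy
```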

Since the ARKit-computed poses can have very large errors, `dataset_train_real-world.csv` consists of manually filtered samples without large outlier pose estimates.