Purrturbed but Stable: Human–Cat Paired Egocentric Frames

This dataset contains strictly paired image frames that support cross-species comparisons between human-style and cat-style visual inputs. The corpus is constructed from point-of-view videos of domestic cats and a biologically informed cat vision filter that approximates key properties of feline early vision.

The dataset was introduced in the paper:

Purrturbed but Stable: Human–Cat Invariant Representations Across CNNs, ViTs and Self-Supervised ViTs, 2025. Paper website: Purrturbed But Stable

Python package: CatVision

Read the paper on arXiv: https://arxiv.org/abs/2511.02404

Please cite the paper if you use this dataset in your work; the BibTeX entry is provided in the Citation section below.

Dataset summary

  • Modality
    • RGB images.
  • Domains
    • Human-like original frames.
    • Cat vision filtered frames.
  • Source
    • Public point-of-view videos of domestic cats recorded with a neck-mounted camera.
  • Structure
    • 191 videos.
    • Over 300,000 human–cat frame pairs.
    • One to one pairing at the filename level.
    • Mirrored directory structures across human and cat domains.
  • Use case
    • Analysis of representation invariances under cross-species viewing conditions.
    • Comparison across CNNs, ViTs, and self-supervised ViTs.
    • Robustness and invariance benchmarks with strict pairing.

The core design principle is to hold scene content fixed while changing the visual domain. Every frame in the human domain has at most one corresponding cat vision frame, and pairs with missing or corrupted counterparts are excluded. Identifiers are stable across the pipeline to enable reproducible joins and cross-model analyses.

Directory structure

The dataset is organized as follows:

  • frames/

    • Original video frames in human-like form.
    • Subdirectories per video: video1/, video2/, …, video191/.
    • Inside each subdirectory: individual JPEG frames.
  • cat_frames/

    • Cat vision filtered frames produced by the biologically motivated transformation.
    • Mirrored subdirectory structure: video1/, video2/, …, video191/.
    • File names match the corresponding entries in frames/.

Example layout:

dataset_root/
  frames/
    video1/
      frame_000001.jpg
      frame_000002.jpg
      ...
  cat_frames/
    video1/
      frame_000001.jpg
      frame_000002.jpg
      ...
  cat_vision_pairs_metadata.csv

The metadata CSV file lists only those pairs for which both domains are present.
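
Because the two domains mirror each other at the filename level, pairs can also be enumerated directly from the directory tree. The following is a minimal sketch, assuming the dataset has been downloaded to a local dataset_root directory:

from pathlib import Path

dataset_root = Path("dataset_root")  # adjust to your local copy
human_dir = dataset_root / "frames"
cat_dir = dataset_root / "cat_frames"

pairs = []
for human_frame in sorted(human_dir.glob("video*/*.jpg")):
    # The cat vision counterpart mirrors the video subdirectory and file name.
    cat_frame = cat_dir / human_frame.relative_to(human_dir)
    if cat_frame.exists():
        pairs.append((human_frame, cat_frame))

print(f"Found {len(pairs)} frame pairs present in both domains")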

Metadata CSV

We provide a CSV file that encodes stable metadata for each paired frame:

  • File
    • cat_vision_pairs_metadata.csv
  • Columns
    • pair_id – stable identifier for the pair, combining video id and frame filename.
    • video_id – video identifier, for example video42.
    • frame_filename – frame file name, for example frame_000123.jpg.
    • human_frame – relative path to the human like frame, for example frames/video42/frame_000123.jpg.
    • cat_frame – relative path to the cat vision frame, for example cat_frames/video42/frame_000123.jpg.

Paths are relative to the dataset root directory.

The CSV only includes pairs where both the human frame and the cat vision frame are present and valid RGB images.
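
A minimal loading sketch, assuming pandas and Pillow are installed and the CSV sits in the dataset root:

import pandas as pd
from pathlib import Path
from PIL import Image

dataset_root = Path("dataset_root")  # adjust to your local copy
meta = pd.read_csv(dataset_root / "cat_vision_pairs_metadata.csv")

# Paths in the CSV are relative to the dataset root, so they can be joined directly.
row = meta.iloc[0]
human_img = Image.open(dataset_root / row["human_frame"]).convert("RGB")
cat_img = Image.open(dataset_root / row["cat_frame"]).convert("RGB")
print(row["pair_id"], human_img.size, cat_img.size)

Joining downstream results on pair_id keeps analyses reproducible across models.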

Cat vision filter

The cat vision frames in cat_frames/ are generated using a biologically informed transformation that approximates several aspects of feline early vision and optics. The implementation is provided as the script cat_vision_filter.py in this repository.

The filter models:

  • Spectral sensitivity with rod dominance

    • Smooth spectral sensitivity curves for short-wavelength cones, long-wavelength cones, and rods.
    • Approximate peaks around 450 nm for S cones, 556 nm for L cones, and 498 nm for rods.
    • Rod-dominated weighting with a rod–cone ratio of 25:1.
    • Reduced long-wavelength (red) sensitivity and enhanced blue–green sensitivity.
  • Spatial acuity and peripheral falloff

    • Frequency-domain low-pass filtering that reduces high spatial frequencies.
    • Effective spatial acuity set to about one-sixth of typical human high-contrast acuity.
    • Center–surround acuity mapping that keeps the center relatively sharper and blurs the periphery.
  • Geometric optics and field of view

    • Vertical slit pupil approximation with a 3:1 vertical aspect ratio.
    • Barrel-like distortion that broadens the effective field of view.
    • Field of view parameters around 200 degrees horizontal and 140 degrees vertical.
  • Temporal sensitivity and flicker fusion

    • Temporal response that peaks near 10 Hz.
    • Reduced gain above roughly 50 to 60 Hz, consistent with the elevated flicker fusion threshold in cats.
    • Temporal processing operates on sequences of frames and modulates motion related changes.
  • Motion sensitivity with horizontal bias

    • Optical flow estimation with Lucas–Kanade style updates.
    • Motion magnitude and direction are combined with a bias toward horizontal motion.
    • Direction-dependent gain favors horizontally oriented motion vectors.
  • Tapetum lucidum low light enhancement

    • Luminance-dependent gain modulation that boosts responses in low-light scenes.
    • Additional blue–green tint that mimics the reflective properties of the tapetum lucidum.

The filter is presented as an engineering approximation rather than a fully detailed optical, retinal, and cortical model. It omits wavelength-dependent blur, detailed retinal mosaics, chromatic aberrations, and dynamic pupil control. It is intended as a biologically motivated stressor that modifies images in a way that is qualitatively consistent with feline visual characteristics while remaining computationally tractable. A minimal, illustrative sketch of the spectral and spatial stages is shown below.
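
For orientation only, the following toy sketch illustrates the first two stages (a channel reweighting toward blue–green with reduced red, followed by a frequency-domain low-pass). It is not the reference implementation in cat_vision_filter.py, and the gains and cutoff are placeholder values chosen for illustration:

import numpy as np

def illustrative_cat_filter(rgb: np.ndarray) -> np.ndarray:
    """Toy approximation of two filter stages on an HxWx3 float image in [0, 1]."""
    # Stage 1: spectral reweighting with reduced red and boosted blue-green,
    # loosely mimicking rod-dominated, dichromat-like sensitivity.
    weights = np.array([0.4, 1.1, 1.2])  # placeholder R, G, B gains
    reweighted = np.clip(rgb * weights, 0.0, 1.0)

    # Stage 2: frequency-domain low-pass on each channel to emulate the
    # roughly one-sixth-of-human spatial acuity described above.
    h, w = rgb.shape[:2]
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    radius = np.sqrt(fx**2 + fy**2)
    lowpass = (radius <= (0.5 / 6.0)).astype(float)  # crude cutoff at 1/6 of Nyquist

    out = np.empty_like(reweighted)
    for c in range(3):
        spectrum = np.fft.fft2(reweighted[..., c])
        out[..., c] = np.real(np.fft.ifft2(spectrum * lowpass))
    return np.clip(out, 0.0, 1.0)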

Intended uses

  • Primary uses

    • Studying representational alignment and invariance across models and architectures.
    • Comparing CNNs, supervised ViTs, and self-supervised ViTs under cross-species visual conditions.
    • Probing how models respond to changes in low level statistics, spectral content, and motion cues that mimic feline vision.
  • Potential downstream tasks

    • Analysis of invariance with strictly paired inputs.
    • Egocentric vision studies using animal-mounted cameras.
    • Robustness analysis for models under structured shifts in early vision.

The dataset does not include semantic labels. Models are evaluated using representations extracted from frozen encoders and analyzed with similarity and alignment measures.
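
As an illustration of this kind of analysis, the sketch below computes linear CKA between feature matrices extracted from matched human and cat vision frames. The encoder is left unspecified, the feature arrays are random stand-ins, and CKA is used only as one example of a similarity measure, not necessarily the measure used in the paper:

import numpy as np

def linear_cka(x: np.ndarray, y: np.ndarray) -> float:
    """Linear CKA between two feature matrices of shape (n_pairs, dim)."""
    x = x - x.mean(axis=0, keepdims=True)
    y = y - y.mean(axis=0, keepdims=True)
    numerator = np.linalg.norm(y.T @ x, "fro") ** 2
    denominator = np.linalg.norm(x.T @ x, "fro") * np.linalg.norm(y.T @ y, "fro")
    return float(numerator / denominator)

# Random stand-ins for frozen-encoder features on 512 strictly paired frames.
rng = np.random.default_rng(0)
human_feats = rng.normal(size=(512, 768))                        # human-domain features
cat_feats = human_feats + 0.1 * rng.normal(size=(512, 768))      # matched cat vision features
print(f"Linear CKA: {linear_cka(human_feats, cat_feats):.3f}")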

Data collection and ethics

  • Frames are derived from publicly available, in-the-wild recordings of domestic cats with neck-mounted cameras.
  • Personal identifiers are not present in the dataset as curated for the experiments.
  • Frames are used only for representational analyses and not for identity recognition.

Users are responsible for ensuring that their own use complies with local regulations and with the terms of the original video sources.

Citation

If you use this dataset, please cite:

@misc{shah2025purrturbedstablehumancatinvariant,
      title        = {Purrturbed but Stable: Human-Cat Invariant Representations Across CNNs, ViTs and Self-Supervised ViTs},
      author       = {Arya Shah and Vaibhav Tripathi},
      year         = {2025},
      eprint       = {2511.02404},
      archivePrefix= {arXiv},
      primaryClass = {cs.CV},
      url          = {https://arxiv.org/abs/2511.02404}
}