Resources
The RobotCycle DevKit streamlines data loading, synchronisation, and analysis for multimodal robotic perception. Built around PyTorch, it provides robust, sensor-specific dataset classes with integrated timestamping, calibration, and preprocessing utilities.
RobotCycle DevKit
A deep-learning-first Python SDK.
- Comprehensive data loading and processing
- PyTorch Dataset classes for each sensor
- Built-in synchronisation and timestamping
- Optional auto-rectification and calibration
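As a sketch of what a sensor-specific Dataset class with integrated timestamping might look like, the class below mirrors the torch.utils.data.Dataset interface (__len__/__getitem__). The class name, file layout, and timestamp-in-filename convention are assumptions for illustration, not the DevKit's actual API:

```python
from bisect import bisect_left
from pathlib import Path

class LidarScanDataset:
    """Hypothetical sensor dataset mirroring torch.utils.data.Dataset.
    Assumes one .bin scan per sample, named by its capture timestamp."""

    def __init__(self, scan_dir):
        self.scan_paths = sorted(Path(scan_dir).glob("*.bin"))
        # Timestamps recovered from filenames, kept sorted for lookup.
        self.timestamps = [int(p.stem) for p in self.scan_paths]

    def __len__(self):
        return len(self.scan_paths)

    def __getitem__(self, idx):
        # A real loader would decode the point cloud here.
        return {"path": str(self.scan_paths[idx]),
                "timestamp": self.timestamps[idx]}

    def nearest(self, t):
        # Index of the scan closest in time to t: the hook that
        # cross-sensor synchronisation builds on.
        i = bisect_left(self.timestamps, t)
        if i == 0:
            return 0
        if i == len(self.timestamps):
            return i - 1
        before, after = self.timestamps[i - 1], self.timestamps[i]
        return i if after - t < t - before else i - 1
```

The nearest-timestamp lookup is the basic primitive for pairing, say, a LiDAR scan with the camera frame captured closest to it.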
Semantic Annotations
The dataset provides rich multimodal semantic labelling:
- RGB images labelled with SAM (240 images across 60 scenes)
- LiDAR pseudo-labels via Cylinder3D and DBSCAN
- Semantic taxonomy aligned with SemanticKITTI and nuScenes
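The LiDAR pseudo-labelling step can be illustrated with a minimal, pure-Python DBSCAN: after a network such as Cylinder3D assigns a semantic class to each point, density-based clustering separates the points of one class into object instances. The sketch below runs on 2D points for readability and is a didactic stand-in, not the actual pipeline:

```python
import math

def dbscan(points, eps=0.5, min_pts=3):
    """Minimal DBSCAN; returns one cluster label per point (-1 = noise)."""
    n = len(points)
    labels = [None] * n

    def neighbours(i):
        # Brute-force range query; a real pipeline would use a KD-tree.
        return [j for j in range(n) if math.dist(points[i], points[j]) <= eps]

    cluster = 0
    for i in range(n):
        if labels[i] is not None:
            continue
        nb = neighbours(i)
        if len(nb) < min_pts:
            labels[i] = -1          # provisionally noise
            continue
        labels[i] = cluster         # i is a core point: start a cluster
        queue = [j for j in nb if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # noise reachable from a core = border
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb_j = neighbours(j)
            if len(nb_j) >= min_pts:
                queue.extend(nb_j)   # j is also core: keep expanding
        cluster += 1
    return labels
```

Running this per semantic class yields instance pseudo-labels: each dense blob of, say, "cyclist" points becomes one cluster, while isolated points are marked as noise.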
Ontology and Knowledge Graphs
A structured OWL2/RDFS ontology captures the relationships between:
- Road agents, vehicles, and infrastructure
- Spatial and temporal relations (hasPath, hasLocation, hasTimestamp)
- Sensor metadata and observation links
This enables multimodal reasoning, scene understanding, and traffic knowledge queries.
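A toy example of the kind of query such a knowledge graph supports. The entity names below are hypothetical; only the relation names (hasPath, hasLocation, hasTimestamp) come from the ontology, and a production system would use an RDF triple store and SPARQL rather than Python tuples:

```python
# Illustrative subject-predicate-object triples.
triples = [
    ("cyclist_01", "hasPath", "path_12"),
    ("cyclist_01", "hasLocation", "junction_a"),
    ("car_07", "hasLocation", "junction_a"),
    ("obs_42", "hasTimestamp", "2023-06-01T10:15:00Z"),
]

def query(s=None, p=None, o=None):
    """Match triples against a pattern; None acts as a wildcard."""
    return [(ts, tp, to) for (ts, tp, to) in triples
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)]

# Traffic-style query: which other agents share cyclist_01's location?
loc = query("cyclist_01", "hasLocation")[0][2]
co_located = [s for (s, _, _) in query(p="hasLocation", o=loc)
              if s != "cyclist_01"]
```

Pattern matching over triples is the core operation behind the scene-understanding and traffic queries mentioned above; OWL2/RDFS adds class hierarchies and inference on top of it.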
Visualisation and Analysis Tools
- Risk & proximity analysis for agent interactions
- Eye-gaze heatmap generation for extracting attention patterns
- Traffic flow estimation and trajectory tracking
- Map overlays and infrastructure annotation
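As an illustration of the proximity-analysis idea, the sketch below computes the closest approach between two agents whose trajectories are sampled at shared timestamps. The data and function name are hypothetical, not the toolkit's API:

```python
import math

def min_proximity(traj_a, traj_b):
    """Closest approach between two agents.
    Each trajectory maps timestamp -> (x, y) position in metres;
    returns (distance, timestamp) at the moment of closest approach."""
    shared = sorted(set(traj_a) & set(traj_b))
    return min((math.dist(traj_a[t], traj_b[t]), t) for t in shared)

# Hypothetical example: a cyclist and a vehicle converging over 3 samples.
cyclist = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (2.0, 0.0)}
vehicle = {0: (5.0, 0.0), 1: (3.0, 0.0), 2: (2.5, 1.0)}
closest_dist, closest_t = min_proximity(cyclist, vehicle)
```

Thresholding the closest-approach distance is one simple way to flag risky agent interactions for further review.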
For more information about the data collection methodology, sensor calibration, open-source toolkit, and analysis results, please refer to our paper:
Efimia Panagiotaki, Divya Thuremella, Jumana Baghabrah, Samuel Sze, Lanke Frank Tarimo Fu, Benjamin Hardin, Tyler Reinmund, Tobit Flatscher, Daniel Marques, Chris Prahacs, Lars Kunze, Daniele De Martini. Transactions on Field Robotics.