Ph.D. · Former Postdoc · Industry Researcher
I build end-to-end autonomous systems at the intersection of SLAM, active 3D perception, and robotic platforms (UAVs and mobile manipulation). Previously at the Italian Institute of Technology (ADVR & CCHT); currently in industry, developing drone-based 3D vision systems.
Systems that connect perception, planning, and deployment, not just standalone models.
A curated selection of systems I’ve designed and developed. Visuals are included where available, and placeholders indicate future drop-in demos.
Developing SLAM and 3D reconstruction pipelines for large-scale field scanning, combining autonomous missions, real-time RGB streaming, and benchmarking workflows.
Built real-time SLAM pipelines based on neural scene representations to support immersive teleoperation and mapping in VR.
Developed an autonomous scanning pipeline using UR manipulators and 3D sensing, including 6-DoF pose tracking for complex artifacts. System delivered and deployed on-site.
Research on LiDAR–camera fusion and multi-view 3D understanding for robust scene perception and object localization.
I’m prototyping an autonomous robotic platform that digitizes physical products into standardized, high-quality 3D assets with minimal supervision.
Product digitization today is often manual, expensive, and hard to scale. The key insight is to treat digitization as an embodied intelligence problem: the robot actively decides where to move and what to observe, adapting to each object and scene.
This section outlines a research-driven product direction; technical details and system design are intentionally abstracted at this stage.
Peer-reviewed and preprint work across 3D perception, multimodal fusion, and robotics.
A quick snapshot of my trajectory across research and industry.
I’m open to collaboration across robotics, 3D perception, and product-oriented autonomous systems.