PhD Student: Nathan Shankar
Supervisors: Pawel Ladosz | Hujun Yin
Robots struggle when the lights go out: cameras are blinded, localization drifts, and navigation breaks down. This research aims to give robots the ability to perceive and navigate reliably in complete darkness by combining infrared sensing, learning-based image enhancement, and spatial awareness.
The goal is to make vision reliable anywhere without relying on a multitude of sensors. By leveraging infrared perception and learning-based enhancement, robots can reconstruct their surroundings, recognize objects, and navigate safely in environments where conventional vision fails. This research has significant implications for autonomous exploration, search and rescue, warehouse fleets, nuclear inspection, and planetary robotics.
Drag the slider to see the image enhancement in real time.
Noisy Input
Clean Output
Under the hood: The network compresses the noisy input into a compact set of latent features (the bottleneck) and then reconstructs a clean image from them. The visualization reveals each layer's internal feature maps during processing.
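The compress-and-reconstruct idea above can be sketched as a tiny denoising autoencoder forward pass. This is an illustrative toy with random, untrained weights and assumed dimensions (an 8x8 patch, an 8-dimensional bottleneck); the project's actual architecture is not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple nonlinearity between encoder and decoder
    return np.maximum(x, 0.0)

# Random, untrained weights purely to illustrate shapes and data flow
W_enc = rng.normal(scale=0.1, size=(64, 8))   # encoder: 64-dim patch -> 8-dim latent
W_dec = rng.normal(scale=0.1, size=(8, 64))   # decoder: latent -> reconstruction

clean = rng.uniform(size=64)                      # a flattened 8x8 "image patch"
noisy = clean + rng.normal(scale=0.1, size=64)    # simulated sensor noise

latent = relu(noisy @ W_enc)   # bottleneck: compressed latent features
recon = latent @ W_dec         # reconstructed (denoised) patch

print(latent.shape, recon.shape)  # (8,) (64,)
```

Training would then adjust `W_enc` and `W_dec` to minimize the reconstruction error between `recon` and `clean`; the intermediate activations like `latent` are what the layer visualizations display.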
Deploying autonomous vision where humans cannot go.
Finding survivors in low-light buildings or collapsed tunnels where standard cameras are blinded and GPS fails.
Navigating the permanently shadowed craters of the Moon or Mars caves without draining battery on heavy floodlights.
Monitoring high-risk zones in nuclear plants or deep mines, keeping human workers safe from radiation and hazards.