Autonomous cars will need to ‘see’ and understand their surroundings as well as humans do before their roll-out can become truly widespread – but what if they could do even better? What if, for example, they could see around corners?
That is the goal for researchers at Stanford University in California. “People talk about building a camera that can see as well as humans for applications such as autonomous cars and robots, but we want to build systems that go well beyond that,” said electrical engineer Gordon Wetzstein. “We want to see things in 3D, around corners and beyond the visible light spectrum.”
The Stanford team made crucial progress towards those goals with its new camera system, building on a previous around-the-corner camera. Using hardware, scanning and image-processing speeds that are “already common” in autonomous car vision systems, the system scans a laser across walls opposite scenes of interest. The laser light bounces off the wall, hits the objects in the scene, then bounces back to the wall and on to the camera sensors. Only “specks” of light remain by the time it reaches the camera, the team said, but the sensor captures them all and a bespoke algorithm “untangles the echoes” to reveal the scene – albeit as very blurry, fuzzy shapes and colours.
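To give a flavour of what “untangling the echoes” means, here is a deliberately simplified sketch. It is not the team’s algorithm – their reconstruction method is far more sophisticated and GPU-accelerated – and every number in it (scan positions, time-bin width, scene geometry) is invented for illustration. The toy simulates single-bounce echo arrival times for a laser scanned across a wall, then recovers one hidden point by naive backprojection: each candidate location accumulates the measurements consistent with its own round-trip delay.

```python
import numpy as np

C = 3e8              # speed of light (m/s)
BIN_WIDTH = 100e-12  # 100 ps time bins -> roughly 15 mm one-way range resolution
BINS = 128

# Hypothetical confocal setup: the laser and detector share each wall point,
# and a single hidden scatterer sits around the corner.
wall_x = np.linspace(-0.5, 0.5, 32)   # scan positions along the wall (m)
hidden = np.array([0.1, 0.6])         # (x, depth) of the hidden point (m)

# Simulate the transients: each echo arrives after the round trip
# wall point -> hidden object -> wall point.
transients = np.zeros((wall_x.size, BINS))
for i, x in enumerate(wall_x):
    delay = 2 * np.hypot(x - hidden[0], hidden[1]) / C
    b = int(np.rint(delay / BIN_WIDTH))
    if b < BINS:
        transients[i, b] = 1.0

# "Untangle the echoes" by backprojection: every candidate voxel sums the
# measurements that are consistent with its own round-trip delay.
xs = np.linspace(-0.5, 0.5, 64)       # candidate lateral positions (m)
zs = np.linspace(0.1, 1.0, 64)        # candidate depths behind the wall (m)
volume = np.zeros((xs.size, zs.size))
for i, x in enumerate(wall_x):
    dist = np.hypot(x - xs[:, None], zs[None, :])
    b = np.rint(2 * dist / C / BIN_WIDTH).astype(int)
    volume += np.where(b < BINS, transients[i, np.minimum(b, BINS - 1)], 0.0)

# The brightest voxel lands near the true hidden point.
j, k = np.unravel_index(volume.argmax(), volume.shape)
print(f"recovered hidden point: x={xs[j]:.2f} m, depth={zs[k]:.2f} m")
```

Even this naive version hints at why the real problem is hard: the echoes from every point in the scene overlap in the measurements, and the blurry reconstructions described above come from separating them again.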
Inspired by seismic imaging, the system scans at four frames per second (fps) but can reconstruct a scene at 60fps using a graphics processing unit. It reconstructed static objects with a range of surface types, including disco balls, books and “intricately textured” statues. It also reconstructed graduate student David Lindell as he stretched, walked and hopped about, letting his colleagues watch his movements despite him being around the corner.
A practical system for autonomous cars or robots will nonetheless require further enhancements, the team said. “It's very humble steps. The movement still looks low-resolution and it's not super-fast, but compared to the state-of-the-art last year it is a significant improvement,” said Wetzstein. “We were blown away the first time we saw these results because we've captured data that nobody has seen before.”
The team hopes to test the system on autonomous research cars, while also looking into other possible applications such as medical imaging. They also hope to tackle challenging visual conditions such as fog, rain, sandstorms and snow, and to improve the system’s speed and resolution.
The researchers will present their research at SIGGRAPH 2019 on 1 August in Los Angeles.
Content published by Professional Engineering does not necessarily represent the views of the Institution of Mechanical Engineers.