Imaging system lets camera peer around corners

What's around the corner? Soon imaging systems may be able to tell us.

A new imaging system could use opaque walls, doors or floors as 'mirrors' to gather information about scenes outside its line of sight.

In December, MIT Media Lab researchers in the United States caused a stir by releasing a slow-motion video of a burst of light traveling the length of a plastic bottle. But the experimental setup that enabled that video was designed for a much different application: a camera that can see around corners.

In a paper appearing this week in the journal Nature Communications, the researchers describe using their system to produce recognisable 3D images of a wooden figurine and of foam cutouts outside their camera’s line of sight.

The research could ultimately lead to imaging systems that allow emergency responders to evaluate dangerous environments or vehicle navigation systems that can negotiate blind turns, among other applications.

The principle behind the system is essentially that of the periscope. But instead of using angled mirrors to redirect light, the system uses ordinary walls, doors or floors — surfaces that aren’t generally thought of as reflective.

The system exploits a device called a femtosecond laser, which emits bursts of light so short that their duration is measured in quadrillionths of a second. To peer into a room that’s outside its line of sight, the system might fire femtosecond bursts of laser light at the wall opposite the doorway.
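To put those intervals in perspective, here is a rough, illustrative calculation (the specific pulse and detector figures are assumptions, not values from the paper): light covers only micrometres during a femtosecond-scale pulse, and well under a millimetre between picosecond-scale detector samples.

```python
# Illustrative timing arithmetic; the pulse and detector figures below are
# assumed, not taken from the paper.
C = 3.0e8  # speed of light in metres per second (approximate)

pulse_duration = 50e-15    # a 50-femtosecond pulse, a plausible order of magnitude
detector_sampling = 2e-12  # a detector sampling every ~2 picoseconds

print(C * pulse_duration)    # ~1.5e-05 m: light moves only ~15 micrometres per pulse
print(C * detector_sampling) # ~6e-04 m: under a millimetre between detector samples
```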

The light would reflect off the wall and into the room, then bounce around and re-emerge, ultimately striking a detector that can take measurements every few picoseconds, or trillionths of a second. Because the light bursts are so short, the system can gauge how far they’ve travelled by measuring the time it takes them to reach the detector.
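In code, the core of that time-of-flight measurement is simple. The sketch below, with illustrative names and numbers, converts an arrival time into the total distance the light has travelled:

```python
# Minimal time-of-flight sketch: the total distance a detected photon has
# travelled (laser -> wall -> hidden scene -> wall -> detector) follows
# directly from its arrival time. Names and numbers are illustrative.
C = 3.0e8  # speed of light, m/s

def path_length(arrival_time_s):
    """Total distance travelled by light detected arrival_time_s after the pulse fired."""
    return C * arrival_time_s

print(path_length(10e-9))  # a 10-nanosecond round trip corresponds to ~3 metres of travel
```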

The system performs this procedure several times, bouncing light off several different spots on the wall, so that it enters the room at several different angles. The detector, too, measures the returning light at different angles. By comparing the times at which returning light strikes different parts of the detector, the system can piece together a picture of the room’s geometry.
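One simplified way to picture the reconstruction: each combination of laser spot, detector spot and arrival time says the hidden reflection happened somewhere on an ellipsoid whose foci are those two wall spots, and overlapping many such constraints on a grid of candidate points makes the hidden surfaces stand out. The sketch below illustrates that accumulation step; it is a hedged simplification with made-up array shapes, not the paper's algorithm.

```python
import numpy as np

# Hedged sketch of the accumulation (backprojection) step described above.
# measurements[i, j, t] is the intensity recorded for laser spot i, detector
# spot j and time bin t; laser_spots, detector_spots and voxel_centres are
# (N, 3) arrays of 3-D coordinates. All shapes and names are illustrative,
# and the fixed laser-to-wall and wall-to-camera distances are ignored.
C = 3.0e8

def backproject(measurements, laser_spots, detector_spots, voxel_centres, bin_width_s):
    volume = np.zeros(len(voxel_centres))
    for i, ls in enumerate(laser_spots):
        for j, ds in enumerate(detector_spots):
            # Path length from the illuminated wall spot to each candidate
            # voxel and back to the wall spot the detector is watching.
            d = (np.linalg.norm(voxel_centres - ls, axis=1) +
                 np.linalg.norm(voxel_centres - ds, axis=1))
            t_bin = (d / C / bin_width_s).astype(int)
            valid = t_bin < measurements.shape[2]
            volume[valid] += measurements[i, j, t_bin[valid]]
    return volume  # voxels that accumulate a lot of intensity suggest hidden surfaces
```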

Previously, femtosecond lasers had been used to produce extremely high-speed images of biochemical processes in a laboratory setting, where the trajectories of the laser pulses were carefully controlled.

"Four years ago, when I talked to people in ultrafast optics about using femtosecond lasers for room-sized scenes, they said it was totally ridiculous," Ramesh Raskar, an associate professor at the MIT Media Lab, who led the new research, said.

Andreas Velten, a former postdoc in Raskar’s group who is now at the University of Wisconsin at Madison, conducted the experiments reported in Nature Communications using hardware in the lab of MIT chemist Moungi Bawendi, who’s collaborating on the project.

Velten fired femtosecond bursts of laser light at an opaque screen, which reflected the light onto objects suspended in front of another opaque panel standing in for the back wall of a room.

The data collected by the ultrafast sensor were processed by algorithms that Raskar and Velten developed in collaboration with Otkrist Gupta, a graduate student in Raskar’s group; Thomas Willwacher, a mathematics postdoc at Harvard University; and Ashok Veeraraghavan, an assistant professor of electrical engineering and computer science at Rice University.

The 3D images produced by the algorithms were blurry but easily recognisable.

Raskar envisions that a future version of the system could be used by emergency responders — firefighters looking for people in burning buildings or police determining whether rooms are safe to enter — or by vehicle navigation systems, which could bounce light off the ground to look around blind corners. It could also be used with endoscopic medical devices, to produce images of previously obscure regions of the human body.

The maths required to knit multiple femtosecond-laser measurements into visual images is complicated, but Andrew Fitzgibbon, a principal researcher at Microsoft Research who specialises in computer vision, says it does build on research in related fields.

"There are areas of computer graphics which have used that sort of math(s)," Fitzgibbon said.

"In computer graphics, you’re making a picture. Applying that math(s) to acquiring a picture is a great idea."

Raskar adds that his team’s image-reconstruction algorithm uses a technique called filtered backprojection, which is the basis of CAT scans.
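The "filtered" part of filtered backprojection amounts to sharpening the blurry volume that plain accumulation produces, much as a CAT scanner sharpens its projections. One common way to do this, sketched below under the assumption of a regular 3-D grid (the paper's exact filter may differ), is a second-derivative filter along the depth axis:

```python
import numpy as np

# Sharpening step of filtered backprojection, sketched for a regular 3-D grid
# of backprojected intensities with depth along the last axis. This is an
# illustrative high-pass filter; the filter used in the paper may differ.
def filter_volume(volume_xyz):
    second_derivative = np.diff(volume_xyz, n=2, axis=2)  # discrete second derivative along depth
    return np.maximum(-second_derivative, 0.0)            # keep sharp peaks, drop the smooth background
```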

Indeed, Fitzgibbon says, the real innovation behind the project was the audacity to try it.

"Coming at it from both ends, from the raw scientific question — because, you know, it is kind of a scientific question: ‘Could we see around a corner?’ — to the extreme engineering of it — ‘Can we time these pulses to femtoseconds?’ — that combination, I think, is rare."

In its work so far, Raskar says, his group has discovered that the problem of peering around a corner has a great deal in common with that of using multiple antennas to determine the direction of incoming radio signals.
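The antenna analogy can be made concrete with a two-receiver example: the difference in arrival times of the same wavefront at two points a known distance apart pins down its direction. The snippet below is an illustrative sketch of that idea, not the group's method; all numbers are assumed.

```python
import math

# Two-receiver direction-of-arrival sketch: a plane wave reaching one receiver
# delta_t seconds before the other, across a baseline of baseline_m metres,
# arrives at an angle of asin(c * delta_t / baseline) from broadside.
C = 3.0e8

def direction_of_arrival(delta_t_s, baseline_m):
    ratio = max(-1.0, min(1.0, C * delta_t_s / baseline_m))
    return math.asin(ratio)  # radians from broadside

# A 1-picosecond timing difference across a 1-millimetre baseline
# corresponds to roughly 17.5 degrees off broadside.
print(math.degrees(direction_of_arrival(1e-12, 1e-3)))
```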

Going forward, Raskar hopes to use that insight to improve the quality of the images the system produces and to enable it to handle visual scenes with a lot more clutter.
