Fast and energy-efficient, a new AI camera system from Stanford promises to give a boost to autonomous cars, among other technologies.
Many companies are working to develop driverless cars, and even autonomous ships, and bring them to the consumer market.
Some automakers, like Tesla and Mercedes, already offer driver-assistance technologies that let cars run semi-autonomously, a sort of rehearsal until they can forgo human assistance altogether.
High-tech companies such as Google and Uber are also in the race for autonomous cars.
To function without any human assistance, autonomous cars need lidar (Light Detection and Ranging) sensors, sonar, cameras, and artificial intelligence to oversee everything.
An array of cameras gives the car 360° vision and image recognition capability, so it can detect objects and survey its environment: traffic signs, obstacles, and other driverless vehicles on the road.
Read More: MIT’s Self-Driving System to let Cars Navigate Roads Without a Map
The image recognition system continuously streams data to the onboard AI computer, where it is processed and interpreted.
Self-driving cars rely on this AI brain for the critical ability to make decisions fast, almost in real time.
The issue with the computers that run image recognition algorithms in autonomous cars, and in other technologies like drones, is that they are slow, bulky, and power-hungry.
“That autonomous car you just passed has a relatively huge, relatively slow, energy intensive computer in its trunk,” said Gordon Wetzstein, an electrical engineer at Stanford University.
Wetzstein led a research team that designed a new AI-powered camera system that takes up less space, consumes less energy, and classifies images much faster than current systems.
To create their AI-powered camera, the team combined two types of computer into one hybrid system built around convolutional neural networks (CNNs).
“The first layer of the prototype camera is a type of optical computer, which does not require the power-intensive mathematics of digital computing. The second layer is a traditional digital electronic computer.”
Read More: New Laser Technology Might Allow Driverless Cars to See Around Corners
In a way, Stanford engineers have “outsourced some of the math of artificial intelligence into the optics,” to save on computational costs and time, and enhance performance.
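To make that division of labor concrete, here is a minimal sketch in Python/PyTorch; it is an illustration of the general idea, not the Stanford implementation, and names such as HybridOptoElectronicNet and the layer sizes are made up for the example. The first convolution is frozen to stand in for the fixed optical element that does its math "in light," while everything after it plays the role of the conventional digital electronic computer.

```python
# Illustrative sketch (not the Stanford prototype): a hybrid classifier whose
# first convolutional layer stands in for the optical front end. Its weights
# are frozen, mimicking a fixed optical element that performs the convolution
# for free, while the remaining layers represent the digital electronic stage.
import torch
import torch.nn as nn

class HybridOptoElectronicNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # "Optical" first layer: a fixed convolution, never updated by training.
        self.optical_layer = nn.Conv2d(3, 16, kernel_size=5, padding=2, bias=False)
        for p in self.optical_layer.parameters():
            p.requires_grad = False  # implemented in optics, so no gradient updates
        # Digital second stage: a small electronic classifier.
        self.digital_layers = nn.Sequential(
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.digital_layers(self.optical_layer(x))

# Example: classify a batch of four 32x32 RGB images.
model = HybridOptoElectronicNet()
scores = model(torch.randn(4, 3, 32, 32))  # -> shape (4, 10)
```

In this toy version the savings come from the first layer's computation being "free"; in the real system that layer is carried out by light passing through custom optics rather than by a processor.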
When the researchers tested their imaging system, it successfully identified different objects and animals in natural settings.
The system, while fast, is still a prototype and can hardly be described as small. However, the engineers who developed it think they can scale it down "to fit in a handheld video camera or an aerial drone."
“Some future version of our system would be especially useful in rapid decision-making applications, like autonomous vehicles,” said Wetzstein.
In the paper, published in the Nature journal Scientific Reports, the researchers write:
“We take steps toward the goal of an optical CNN from a computational imaging perspective, integrating image acquisition with computation via co-design of optics and algorithms. Computational cameras exploit the physical propagation of light through custom optics to encode information about a scene that would be lost in a standard 2D image capture. Here we present a computational imaging system modeled after a feed-forward CNN that assists in performing classification of input images.”
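As a rough illustration of what "encoding information about a scene through custom optics" means computationally, the toy sketch below models the optics as a point spread function (PSF) and image capture as a convolution with it; the PSF here is an arbitrary placeholder and is not the learned optical element described in the paper.

```python
# Toy model of the computational-imaging idea quoted above: the custom optics
# are represented by a point spread function (PSF), and "capturing" an image
# amounts to convolving the scene with that PSF before any digital processing.
import numpy as np
from scipy.signal import fftconvolve

def capture(scene: np.ndarray, psf: np.ndarray) -> np.ndarray:
    """Simulate image formation: propagation through the optics encodes the
    scene as a convolution with the system's PSF."""
    return fftconvolve(scene, psf, mode="same")

rng = np.random.default_rng(0)
scene = rng.random((128, 128))       # toy grayscale scene
psf = np.ones((5, 5)) / 25.0         # placeholder PSF (simple blur kernel)
measurement = capture(scene, psf)    # what the sensor would record
# A digital back end would then classify `measurement` rather than `scene`.
```

The point of the co-design described in the paper is that the PSF itself is chosen jointly with the downstream algorithm, so the measurement already contains the features the classifier needs.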