Tech Explained: Depth Sensors
The latest iPhone comes equipped with a dual camera and laser sensor. Even if you do not plan on ever owning an iPhone, this laser and camera combo is an important innovation. It will allow the phone to act as a depth sensor. This in turn will open up new possibilities for augmented reality, facial recognition and security. So how does depth sensing work, and why is it an important innovation?
A conventional camera translates the 3D world into a 2D image. However, humans do not see the world in 2D. In order for computers to see the world as humans do, they need some way to determine depth. Depth carries critical information. Applications such as autonomous driving, robotics, and virtual reality rely on the ability to determine how far apart objects are.
There are three main methods for depth sensing: structured light, time of flight and camera arrays. In structured-light sensing, a laser projects a known pattern onto the scene. A receiver then detects distortions in the reflected pattern and uses them to calculate depth. This method is very accurate, but it is sensitive to ambient brightness, so it is usually used only indoors or in dark environments.
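To make the idea concrete, here is a minimal Python sketch of the triangulation behind structured light: the further away a surface is, the less a projected dot appears to shift sideways between the projector and the camera. The focal length, baseline and pixel shift below are illustrative assumptions, not values from any particular device.

```python
# Minimal sketch of structured-light triangulation (illustrative numbers, not a real device API).
# A projector casts a known dot pattern; the camera sees each dot shifted sideways by an
# amount ("disparity") that depends on how far away the surface is. With the projector and
# camera separated by a known baseline, depth follows from similar triangles:
#     depth = focal_length_px * baseline_m / disparity_px

def depth_from_pattern_shift(disparity_px: float,
                             focal_length_px: float = 1000.0,   # assumed focal length in pixels
                             baseline_m: float = 0.05) -> float: # assumed projector-camera baseline (5 cm)
    """Return depth in metres for one projected dot, given its observed shift in pixels."""
    if disparity_px <= 0:
        raise ValueError("Dot shift must be positive; zero shift would mean an infinitely distant surface.")
    return focal_length_px * baseline_m / disparity_px

# A dot shifted by 25 pixels lies roughly 2 m away with these assumed parameters.
print(f"{depth_from_pattern_shift(25.0):.2f} m")
```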
In time-of-flight sensing, a laser sends a short pulse of light towards a target object, and a sensor records how long the pulse takes to reflect back. Because the speed of light is constant, that round-trip time tells the system how far away the object is. The components needed for this must be highly accurate and are expensive; they also draw a lot of power, so they are usually found only in high-performance devices. Time of flight can also be measured indirectly, by sending out a modulated light source and detecting the phase shift of the reflected light. An LED can serve as the modulated source, which is cheaper and more energy efficient than a laser.
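A rough Python sketch of both calculations follows. The 10-nanosecond round trip and the 20 MHz modulation frequency are example values chosen for illustration, not figures from a specific sensor, and real devices perform these calculations on dedicated hardware.

```python
# Minimal sketch of the two time-of-flight calculations described above.
import math

C = 299_792_458.0  # speed of light in m/s

def distance_from_pulse(round_trip_time_s: float) -> float:
    """Direct ToF: the pulse travels to the object and back, so halve the round trip."""
    return C * round_trip_time_s / 2.0

def distance_from_phase(phase_shift_rad: float, modulation_freq_hz: float) -> float:
    """Indirect ToF: modulated light (e.g. from an LED) returns phase-shifted.
    The shift maps to distance within one unambiguous range of c / (2 * f_mod)."""
    return C * phase_shift_rad / (4.0 * math.pi * modulation_freq_hz)

# A 10-nanosecond round trip corresponds to about 1.5 m.
print(f"pulse: {distance_from_pulse(10e-9):.2f} m")
# A quarter-turn phase shift (pi/2) at 20 MHz modulation is about 1.87 m.
print(f"phase: {distance_from_phase(math.pi / 2, 20e6):.2f} m")
```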
The third major type of depth sensor is a camera array. In this approach, multiple cameras placed at different positions capture images of the same target, and the system uses this ‘stereoscopic’ view to calculate depth. The simplest and most common camera array, found in some smartphones, uses two cameras separated to mimic human eyes. Although the camera array may seem like the simplest option, calculating depth means finding matching points between the images, which requires complex, and increasingly machine-learning-based, algorithms.
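The toy Python sketch below illustrates the stereo principle on a single image row: find where a small patch from the left camera reappears in the right camera, measure the horizontal shift, and convert it to depth. The block-matching search, the 800-pixel focal length and the 6 cm baseline are all simplified assumptions; production systems use far more robust, often learned, matching over full images.

```python
# Toy stereo sketch: locate a "matching point" between two camera views by comparing
# small patches (sum of absolute differences), then triangulate depth from the shift.
import numpy as np

def find_disparity(left_row: np.ndarray, right_row: np.ndarray,
                   x_left: int, patch: int = 3, max_disparity: int = 40) -> int:
    """Return the pixel shift at which the patch around x_left best matches the right row."""
    template = left_row[x_left - patch: x_left + patch + 1]
    best_d, best_cost = 0, np.inf
    for d in range(max_disparity):
        x_right = x_left - d                      # the match shifts left in the right image
        if x_right - patch < 0:
            break
        candidate = right_row[x_right - patch: x_right + patch + 1]
        cost = np.abs(template - candidate).sum()
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

def depth_from_disparity(disparity_px: int,
                         focal_length_px: float = 800.0,   # assumed focal length in pixels
                         baseline_m: float = 0.06) -> float:  # assumed camera separation (6 cm)
    """Same triangulation as structured light: depth = focal length * baseline / disparity."""
    return focal_length_px * baseline_m / max(disparity_px, 1)

# A bright "object" centred at x=50 in the left row appears at x=38 in the right row.
left = np.zeros(100); left[48:53] = 255
right = np.zeros(100); right[36:41] = 255
d = find_disparity(left, right, x_left=50)
print(f"disparity: {d} px, depth: {depth_from_disparity(d):.2f} m")
```

The closer the object, the larger the shift between the two views, which is why two slightly separated cameras are enough to recover depth.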
Despite the challenges, depth sensors are becoming more common for a range of uses. For example, depth information is necessary for human-machine interaction in virtual and augmented reality devices. Depth sensors allow virtual objects to be placed in the correct locations in the VR/AR environment. This is very important for VR/AR applications that are used, for example, to train surgeons or pilots.
Depth sensing is also key to navigation, localisation, mapping and collision avoidance. In order for vehicles to move and navigate on their own, they need to know where they are in relation to everything else in the environment. Robots used in warehouses also rely on depth sensing to know where the target object is and how to reach it.
Most facial recognition systems use a 2D camera to capture a photo and send it to an algorithm to determine the person’s identity. However, this type of system is easy to fool if the algorithm cannot tell a photo from a real person. Depth sensing allows more accurate facial recognition because it can measure more features of the face. This leads to better security and can have other applications: at Springwise, we have already seen the development of algorithms that detect emotions using the iPhone’s depth sensor, and time-of-flight depth sensors are already being used in gaming to detect hand gestures.
Takeaway: Given the number of applications that rely on depth sensing, the field is set to become a huge market in the next few years. Many current depth-sensing technologies still have plenty of room for improvement, and as they improve, they will enable innovations such as facial recognition to be applied to more systems. What are some other uses for depth sensing?