Depth-sensing cameras: How many types are there and how do they work?
Depth-sensing camera modules are now a key technology in embedded systems, robotics, industrial automation, and autonomous vehicles. They enable machines to "see" the world in three dimensions, much as humans do. Depth-sensing technologies such as Time-of-Flight (ToF), LiDAR, and structured light give machines precise spatial perception, enabling a high degree of interactivity and automation across applications ranging from autonomous vehicles and robotic navigation to industrial automation and augmented reality. This article takes a deep dive into how depth-sensing cameras work, the different technology types, and their diverse applications in modern technology. In our previous articles, we introduced ToF and other 3D mapping cameras; please refer to them for more details.
Different types of depth sensing cameras and their basic implementation principles
Before understanding each type of depth sensing camera, let's first understand what depth sensing is.
What is depth sensing?
Depth sensing is a technique for measuring the distance between a device and an object, or between two objects. It can be achieved with a 3D depth-sensing camera, which automatically detects any object near the device and continuously measures the distance to it. This technology is especially valuable for devices that integrate depth-sensing cameras and for autonomous mobile applications that make real-time decisions based on measured distance.
Among the depth sensing technologies used today, the three most commonly used are:
1. Structured light
2. Stereo vision
3. Time of flight (ToF)
   - Direct time of flight (dToF), including LiDAR
   - Indirect time of flight (iToF)
Let's take a closer look at the principles of each depth sensing technology.
Structured light
Structured light cameras calculate the depth and contour of an object by projecting a known light pattern (typically stripes, generated by a laser or LED) onto the target and analyzing how the reflected pattern deforms across its surface. The technology offers high accuracy and stability under controlled lighting conditions, but its limited operating range confines it mostly to 3D scanning and modeling.
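The geometry behind this is ordinary triangulation: given the baseline between projector and camera, the angle at which a pattern stripe is projected, and the angle at which the camera observes its reflection, the depth is fixed. A minimal sketch, with all numeric values as illustrative assumptions:

```python
import math

def triangulate_depth(baseline_m: float, proj_angle_deg: float, cam_angle_deg: float) -> float:
    """Depth of a point triangulated from a projector-camera pair.

    Angles are measured from the baseline up toward the point.
    With x measured along the baseline: z/x = tan(proj) and
    z/(b - x) = tan(cam), which combine to the formula below.
    """
    tp = math.tan(math.radians(proj_angle_deg))
    tc = math.tan(math.radians(cam_angle_deg))
    return baseline_m * tp * tc / (tp + tc)

# 10 cm baseline, both angles 80 degrees from the baseline
print(round(triangulate_depth(0.1, 80.0, 80.0), 3))  # 0.284 (metres)
```

Real structured-light systems solve this per pattern stripe across the whole image, which is where the dense depth map comes from.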
Stereo vision
Stereo vision cameras work much like human binocular vision: two cameras set a known distance apart (the baseline) capture the scene, and software detects and matches feature points between the two images to compute depth from their horizontal offset. This technology suits real-time applications under varied lighting conditions, such as industrial automation and augmented reality.
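The depth computation reduces to one relation: depth is focal length times baseline divided by disparity (the pixel offset of a matched feature between the two images). A minimal sketch, with the parameter values below as illustrative assumptions:

```python
def stereo_depth(focal_length_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth (metres) of a point whose image shifts by `disparity_px`
    pixels between the left and right cameras."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Example: 700 px focal length, 6 cm baseline, 20 px disparity
print(stereo_depth(700.0, 0.06, 20.0))  # 2.1 (metres)
```

Note the inverse relationship: small disparities mean distant points, so depth resolution degrades with range, which is why stereo accuracy is quoted at centimeter level.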
Time of flight camera
Time of flight (ToF) refers to the time it takes light to travel a certain distance. Time of flight cameras use this principle to estimate the distance to an object based on the time it takes for emitted light to reflect from the surface of the object and return to the sensor.
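The underlying arithmetic is simple: the measured time covers the round trip, so distance is the speed of light times the time, divided by two. A minimal sketch:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_s: float) -> float:
    """Distance (metres) from a round-trip travel time (seconds).
    Divide by two because the light travels out and back."""
    return C * round_trip_s / 2.0

# A pulse returning after 20 ns corresponds to roughly 3 m
print(round(tof_distance(20e-9), 3))  # 2.998
```

The nanosecond scale of these times is why ToF hardware needs very fast detectors and timing circuits: a 1 ns timing error already corresponds to about 15 cm of distance error.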
A time-of-flight camera has three main components:
- A light source (typically an infrared laser or LED)
- A ToF sensor and sensor module that capture the returning light
- A depth sensor that converts the timing measurements into distance
ToF can be divided into two types based on the method used by the depth sensor to determine distance: direct time-of-flight (DToF) and indirect time-of-flight (iToF). Let's take a closer look at the differences between these two types.
Direct Time-of-Flight (dToF)
Direct time-of-flight (dToF) technology measures distance directly: it emits infrared laser pulses and times how long each pulse takes to travel from the emitter to the object and back.
dToF camera modules use special light-sensitive pixels, such as single-photon avalanche diodes (SPADs), to detect the sudden surge of photons in a reflected light pulse. When a pulse returns from an object, the SPAD registers this photon peak, allowing the module to measure the interval between emission and detection with high precision.
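In practice a dToF pixel accumulates photon arrival times over many pulses into a histogram and takes the fullest timing bin as the round-trip time, which separates the signal peak from stray background photons. A simplified sketch of that idea; the timestamps and 1 ns bin width are illustrative assumptions, not a specific sensor's parameters:

```python
from collections import Counter

C = 299_792_458.0   # speed of light, m/s
BIN_WIDTH_S = 1e-9  # 1 ns timing bins (assumed)

def distance_from_timestamps(timestamps_s) -> float:
    """Histogram photon arrival times, take the fullest bin as the
    round-trip time, and convert it to distance."""
    bins = Counter(int(t / BIN_WIDTH_S) for t in timestamps_s)
    peak_bin, _count = bins.most_common(1)[0]
    round_trip = (peak_bin + 0.5) * BIN_WIDTH_S  # use the bin centre
    return C * round_trip / 2.0

# Two stray background photons plus a cluster of signal photons near 20 ns
photons = [3e-9, 11e-9, 20.1e-9, 20.3e-9, 20.6e-9, 33e-9]
print(round(distance_from_timestamps(photons), 2))  # 3.07 (metres)
```

The histogram approach is also why dToF copes well with ambient light: background photons spread across many bins while the true return piles up in one.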
dToF cameras typically offer lower resolution, but their small size and low cost make them ideal for compact applications that do not demand high resolution.
LiDAR
Since we are talking about using infrared laser pulses to measure distance, let's talk about LiDAR cameras.
LiDAR (Light Detection and Ranging) cameras use a laser transmitter to sweep a pattern of laser light back and forth across the scene. Distance is measured by timing how long each pulse takes to reach an object and reflect back to the sensor.
LiDAR sensors typically use one of two infrared wavelengths: 905 nanometers or 1550 nanometers. The shorter 905 nm wavelength is less strongly absorbed by water in the atmosphere and can be detected with inexpensive silicon sensors, but its output power must be limited to protect the human eye. The longer 1550 nm wavelength is absorbed by the fluid of the eye before reaching the retina, so it can be operated at higher power; this makes it both eye-safe, for example for robots operating around humans, and well suited to long-range measurement.
Indirect Time of Flight (iToF)
Unlike direct time of flight, indirect time-of-flight (iToF) cameras do not time individual pulses. Instead, they illuminate the entire scene with continuously modulated light and measure, at each sensor pixel, the phase shift between the emitted and received signals; this phase shift is proportional to distance.
With an iToF camera, the distance to every point in the scene can therefore be determined in a single shot.
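A common way iToF pixels recover that phase is four-sample ("four-bucket") demodulation: each pixel integrates the returning modulated light at four phase offsets (0°, 90°, 180°, 270°), and the phase shift, hence distance, falls out of an arctangent. A sketch under assumed values; the sample amplitudes and 20 MHz modulation frequency are illustrative, not a specific sensor's figures:

```python
import math

C = 299_792_458.0  # speed of light, m/s
F_MOD = 20e6       # 20 MHz modulation frequency (assumed)

def itof_distance(a0: float, a90: float, a180: float, a270: float) -> float:
    """Distance from four samples of the modulated signal taken at
    phase offsets of 0/90/180/270 degrees. The differences cancel
    ambient light, and atan2 recovers the phase shift."""
    phase = math.atan2(a90 - a270, a0 - a180) % (2 * math.pi)
    return C * phase / (4 * math.pi * F_MOD)

# Samples consistent with a 90-degree phase shift (ambient offset 100)
print(round(itof_distance(100, 150, 100, 50), 2))  # 1.87 (metres)
```

Because phase wraps every 2π, a 20 MHz modulation gives an unambiguous range of c / (2 · f_mod) ≈ 7.5 m; real iToF cameras combine several modulation frequencies to extend this.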
| Property | Structured Light | Stereo Vision | LiDAR | dToF | iToF |
|---|---|---|---|---|---|
| Principle | Projected pattern distortion | Dual-camera image comparison | Time of flight of reflected light | Time of flight of reflected light | Phase shift of modulated light |
| Software complexity | High | High | Low | Low | Medium |
| Cost | High | Low | Variable | Low | Medium |
| Accuracy | Micrometer-level | Centimeter-level | Range-dependent | Millimeter to centimeter | Millimeter to centimeter |
| Operating range | Short | ~6 meters | Highly scalable | Scalable | Scalable |
| Low-light performance | Good | Weak | Good | Good | Good |
| Outdoor performance | Weak | Good | Good | Moderate | Moderate |
| Scanning speed | Slow | Medium | Slow | Fast | Very fast |
| Compactness | Medium | Low | Low | High | Medium |
| Power consumption | High | Low to scalable | High to scalable | Medium | Scalable to medium |
Common application fields of depth-sensing cameras
- Autonomous vehicles: Depth sensing cameras provide autonomous vehicles with the necessary environmental perception capabilities, allowing them to identify and avoid obstacles while performing accurate navigation and path planning.
- Security and surveillance: Depth sensing cameras are used in the security field for facial recognition, crowd monitoring, and intrusion detection, improving safety and response speed.
- Augmented reality (AR): Depth sensing technology is used in augmented reality applications to accurately overlay virtual images onto the real world, providing users with an immersive experience.
Sinoseen provides you with the right depth sensing camera
As a mature camera module manufacturer, Sinoseen has extensive experience in designing, developing, and manufacturing OEM camera modules. We provide high-performance depth ToF camera modules compatible with interfaces such as USB, GMSL, and MIPI, with support for advanced image processing features including global shutter and infrared imaging.
If your embedded vision application requires a depth ToF sensing camera module, please feel free to contact us; we are confident our team can provide a solution that meets your needs. You can also browse our camera module product list to see whether an existing module fits your requirements.