3D sensors play a key role in environmental perception and capture

According to Max Consulting, as the price of 3D sensing modules declines and their performance improves, 3D vision, or depth sensing, is enabling a steady stream of new applications, including helping robots build environmental maps, complete tasks, and safely avoid people. Other applications include object picking and placement, assembly and inspection, and moving objects from one location to another. All of these rely on economical yet capable 3D vision sensors. Many technologies currently compete in this field, each with its own advantages and disadvantages in working distance, resolution, processing requirements, and cost. Each has found a meaningful place in the market, largely because no single solution handles every application scenario best.

3D imaging plays a key role in environmental perception for autonomous vehicles

“Depending on the performance and market constraints of each application, all of these technologies can find their respective uses,” said Alexis Debray, optoelectronics technology and market analyst at Yole. Competing 3D vision technologies include the many variants of lidar (LiDAR) and time-of-flight (ToF) sensors, which emit light and derive distance by measuring how long the return signal takes. Structured light determines distance by measuring the deformation of a light pattern projected onto the object (a form of triangulation). Laser triangulation extracts depth from where the laser spot appears in the camera’s field of view. Finally, stereo depth vision computes distance from the disparity between matched feature points in two camera images. All of these methods except stereo vision require some form of active illumination. That can be an advantage in industrial environments with uncertain lighting, but illumination increases power consumption, which can cause power supply problems for battery-powered devices.

Power supply and lighting

These 3D sensing technologies have improved dramatically in both power and size. “Structured light depth sensing used about 10W in 2009, and the module was roughly the size of a brick. Today, a time-of-flight module is only the size of a thumbnail, and its power consumption is just 250mW,” said Mitch Reifel, vice president of sales and business development at time-of-flight sensor manufacturer pmd technologies (hereinafter pmd).
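To make the two ranging principles above concrete, here is a minimal, illustrative sketch of the underlying arithmetic: time-of-flight depth from the round-trip travel time of light, and stereo depth from pixel disparity. The focal length, baseline, and disparity values are made-up examples, not figures from any vendor mentioned in the article.

```python
# Illustrative depth calculations for two of the sensing principles
# described above (not any vendor's actual implementation).

C = 299_792_458.0  # speed of light, m/s

def tof_depth(round_trip_time_s: float) -> float:
    """Time of flight: light travels to the object and back,
    so depth is half the round-trip distance."""
    return C * round_trip_time_s / 2.0

def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Stereo vision: depth Z = f * B / d, where d is the pixel
    disparity between matched feature points in the two cameras."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A 10 ns round trip corresponds to roughly 1.5 m of depth.
print(tof_depth(10e-9))             # ≈ 1.499 m
# 700 px focal length, 10 cm baseline, 20 px disparity.
print(stereo_depth(700, 0.10, 20))  # 3.5 m
```

Note how the two formulas expose the trade-offs the article describes: ToF accuracy depends on timing resolution, while stereo accuracy falls off with distance as disparity shrinks.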

The evolution of 3D vision, or depth imaging, solutions. The image above shows three solutions with roughly equivalent performance, built on two different technologies (structured light and time of flight). Over the past ten years, module size has shrunk by roughly a factor of ten.

These advances allow new applications to emerge in a steady stream, such as robots that move shelves in large warehouses. Depth sensing helps a robot recognize which shelf is which and track the objects on it. Garrett Place, head of robotic perception business development at ifm efector in Pennsylvania, USA, said the use of depth sensing in robotics is growing. The American company is a subsidiary of Germany’s ifm electronic, an automation solutions provider, as is pmd.

One new application made possible by better 3D vision is depalletizing. As the name suggests, it involves using robots to remove goods from pallets and place them on a conveyor belt, a common task when a factory receives goods. A 3D camera can locate the goods faster, improving operational efficiency. “Apart from the motion of the robot itself, there is no delay between finishing one piece and picking up the next,” Place said. “That is very powerful. On average, it saves 3 to 5 seconds per item.”

Another new application is the automated forklift. Here, a self-driving forklift must position its forks precisely against the goods it needs to move. Using a 3D camera to gather the required information completes the task faster and more accurately, improving both the speed and the safety of the operation.

Emerging applications enabled by falling 3D module prices can also be found in the service industry, for example robotic arms that hand people drinks or other objects, where depth sensing ensures the objects are delivered safely.
But when the robot arm approaches the target position, the arm, the object, or both may block the robot’s “line of sight.” A low-cost 3D sensor mounted on the arm itself can therefore provide a close-range 3D view, improving safety and performance. Doing so, however, requires synchronizing and overlaying data between 3D cameras, as well as fusing that information with all the other sensors. This fusion becomes especially important when multiple 3D technologies are used together, as with the products of Israel’s Newsight Imaging. Eli Assoolin, CEO and co-founder of Newsight Imaging, said the company provides solutions based on laser triangulation and enhanced ToF, with the goal of producing a point cloud of distance data.
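The "point cloud of distance data" mentioned above is typically produced by back-projecting a depth image through the camera model. Below is a minimal sketch using the standard pinhole model; the intrinsics and the tiny 2x2 depth image are invented for illustration and do not reflect Newsight's actual pipeline.

```python
# Minimal sketch: turning a depth image into a 3D point cloud
# with the pinhole camera model. Intrinsics here are made up.
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project an HxW depth image (meters) into an Nx3 point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx  # lateral offset scales with depth
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop invalid (zero-depth) pixels

# Tiny 2x2 depth image as a demo; one pixel has no valid return.
depth = np.array([[1.0, 1.0],
                  [0.0, 2.0]])
cloud = depth_to_point_cloud(depth, fx=500, fy=500, cx=1.0, cy=1.0)
print(cloud.shape)  # (3, 3): three valid pixels, each an (x, y, z) point
```

In a multi-camera setup like the one described above, each camera's cloud would additionally be transformed into a common coordinate frame before fusion.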

A handheld scanner using structured light technology, able to capture accurate 3D data from 4.5m away

The cost of collecting such data has dropped dramatically, from the tens of thousands of dollars that lidar-based solutions cost a few years ago to hundreds of dollars today. Solutions using other 3D sensing technologies are cheaper still, perhaps only tens of dollars, Assoolin said. Costs will continue to fall, and may even fall faster, because 3D imaging will soon appear in every smartphone. Smartphone modules cannot be applied directly in industrial environments, but volume production of the underlying chips means prices for non-consumer applications will drop as well. As for what these new depth sensing solutions enable, one possibility is inspection, Assoolin said. Today’s advanced sensors can capture lines of several points at relatively short range. Newsight now offers chips that greatly improve detection and can be deployed in a much wider range of environments. “Our chip runs at 40,000 frames per second and can detect micron-level defects,” Assoolin said. “We can make everything faster, more accurate, and more economical.”

Declining prices and increasing performance make new 3D imaging applications possible, such as access control

When considering 3D vision applications in an industrial setting, the environment has a significant impact on performance, according to Shabtay Negry, chief commercial officer of Israel’s Mantis Vision. Mantis Vision provides 3D scanning, imaging, and video capture solutions built on patented structured light algorithms that, Negry said, deliver the most accurate models in terms of data quality.

Temperature and vibration affect the accuracy of 3D sensing

Temperature changes and vibration both degrade the accuracy of 3D sensing, and both are common on factory floors and in other industrial environments. Temperature tends to drift gradually, while vibration, from engines starting or stopping or objects colliding, tends to cause abrupt disturbances. Both shift the structured light pattern, which can lead to inaccurate 3D results, so countermeasures are needed, Negry said. In Mantis Vision’s case, the company’s products use a proprietary real-time auto-calibration method to correct for these environmental effects.

Regarding emerging applications, real-time 3D streaming developed for gaming and entertainment can also be used for industrial training and simulation. For example, an instructor can demonstrate a task while multiple 3D and other sensors capture every movement; the system can then replay the demonstration remotely. “Real-time 3D streaming using multiple synchronized sensors opens up new ways of communicating and new markets,” Negry said.

Raymond Boridy, project and product manager in Teledyne DALSA’s industrial division, said improvements in 3D system performance have indeed expanded the range of possible applications. The Montreal-based system manufacturer is launching a line of laser triangulation products; other Teledyne divisions offer ToF and stereo vision technology. “Advances in laser triangulation have brought higher speed, greater throughput, and better accuracy. Teledyne DALSA’s products can resolve objects to 10 microns, which is very important for high-precision inspection,” Boridy said. “We can distinguish scratches, defects, and holes as small as 20 microns.”

That level of performance is valuable in certain environments, but most applications do not require such fine inspection. What they do need is repeatable results and the ability to find defects quickly. The 3D vision system must therefore match the application’s requirements, and a single depth sensing technology may not be optimal. This is why every technical approach currently holds some market share, and none has a clear lead across all applications. Regardless of the underlying 3D sensing technology, however, a complete solution requires more than imaging, because measuring distance alone is not enough; it also requires multi-sensor data fusion and artificial intelligence algorithms.
The lidar-based depth sensing used in self-driving cars illustrates the point well. “Lidar data must be computed, translated, and fused with other sensors through artificial intelligence. This is an important challenge; if it is not properly addressed, lidar becomes meaningless,” Yole’s Debray emphasized, underscoring the need for a complete solution.
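As a toy illustration of the sensor fusion the article calls for, the sketch below combines two noisy distance measurements (say, from a lidar and a depth camera) by inverse-variance weighting, one of the simplest standard fusion techniques. The sensor names, distances, and variances are invented for the example; real automotive fusion stacks are far more elaborate.

```python
# Toy illustration of fusing distance estimates from two sensors
# (e.g. lidar and a depth camera) by inverse-variance weighting.
def fuse(d1, var1, d2, var2):
    """Combine two noisy distance measurements into one estimate
    whose variance is lower than either input's."""
    w1, w2 = 1.0 / var1, 1.0 / var2   # more precise sensor gets more weight
    fused = (w1 * d1 + w2 * d2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var

# Lidar reads 10.0 m (variance 0.01); camera reads 10.4 m (variance 0.04).
d, v = fuse(10.0, 0.01, 10.4, 0.04)
print(round(d, 3), round(v, 4))  # 10.08 0.008
```

The fused estimate sits closer to the lower-variance lidar reading, and its variance (0.008) is smaller than either sensor's alone, which is the whole point of fusing rather than picking one sensor.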
