For several decades, machine vision technologies have helped manufacturers — from automotive to semiconductor and electronics — automate processes, improve productivity and efficiency, and drive revenue. Machine vision technologies, as well as artificial intelligence (AI) software and robots, have become more important as companies safeguard against disruptions from the labor shortage and COVID-19.
In the manufacturing space, these technologies continue to evolve to meet the ever-changing needs of today’s production processes. The main challenge for engineers is keeping pace with new advancements, understanding their capabilities, and evaluating which are best suited to a particular application.
The following looks at new developments in deep learning, cloud computing, sensor technology, and 3D imaging, and at how these open unprecedented opportunities for machine vision systems and the manufacturers who deploy them.
Deep Learning Is Not Magic
One of the key machine vision trends ahead is the continued rise of deep learning in automated inspection applications in manufacturing. Deep learning has been hyped in the industry for some time now, presenting both opportunities and challenges.
On the opportunities side, deep learning represents a paradigm change: a machine vision application is configured from real-life data, in effect from accumulated experience, rather than from hand-coded rules alone. This is something manufacturing engineers can relate to, as they expect such a system to be reliable and easy to implement.
As always, hyped technologies raise expectations. With great expectations comes the risk of high disenchantment. While it is true that deep learning can make implementation of vision systems easier, it is not the ideal solution for every application, and success does not come without effort.
The first challenge for engineers and systems integrators is to figure out whether deep learning is the right technology for their application. It might not be — just as machine vision itself might not be the best solution for any given problem. The key is understanding the application and understanding how deep learning works. And today, there is still a fair bit of confusion regarding deep learning.
One widespread myth is that deep learning makes it easy to program an inspection system, even with poor-quality images. This is not true. As with any vision application, the quality of the input data — the images — is essential to the quality of the output. This is particularly true for data used to train an algorithm. Deep learning is not magic. The better the input data, the better the application’s performance will be.
The workflow for implementing a deep learning vision system is different from that of a system using exclusively rules-based algorithms. It might be easier, but that does not mean it requires less diligence or a weaker understanding of the application. Success starts with the training data: not only must its quality be good, but enough data must be available in the first place.
Fortunately, most manufacturing processes do not produce many defective parts. However, this is a curse for machine learning, because engineers often don’t have enough example images of documented defects to train a system. If the dataset is too small or its quality is insufficient, deep learning may not be the right technology for that application.
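As a rough illustration, a short script can reveal early whether a defect image set is large and balanced enough to be worth training on. The directory layout and the 100-images-per-class floor below are hypothetical placeholders, not industry standards:

```python
from collections import Counter
from pathlib import Path

# Hypothetical layout: one subdirectory per defect class, e.g. dataset/scratch/*.png.
# The 100-images-per-class floor is an illustrative rule of thumb, not a standard.
MIN_IMAGES_PER_CLASS = 100

def audit_dataset(root: str) -> Counter:
    """Count labeled images per class and flag classes too sparse to train on."""
    counts = Counter()
    for class_dir in Path(root).iterdir():
        if class_dir.is_dir():
            counts[class_dir.name] = sum(1 for _ in class_dir.glob("*.png"))
    for name, n in sorted(counts.items()):
        verdict = "OK" if n >= MIN_IMAGES_PER_CLASS else "too few; reconsider deep learning"
        print(f"{name:>12}: {n:5d} images  ({verdict})")
    return counts

audit_dataset("dataset")
```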
When enough training data is available, proper labeling is essential for the implementation of a deep learning inspection system. Are resources available for that task? Is there a clear and common understanding of what is and what is not a defect? These are aspects to consider. Tools that support collaborative labeling, error analysis, and validation can help.
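One informal way to test whether that common understanding exists is to measure how often two annotators assign the same label to the same images. The sketch below uses hypothetical labels and simple percent agreement; a formal statistic such as Cohen’s kappa would be more rigorous:

```python
def percent_agreement(labels_a: dict, labels_b: dict) -> float:
    """Fraction of shared images that two annotators labeled identically.
    Low agreement suggests the definition of a defect is not yet settled."""
    shared = set(labels_a) & set(labels_b)
    if not shared:
        return 0.0
    return sum(labels_a[k] == labels_b[k] for k in shared) / len(shared)

# Hypothetical labels from two inspectors reviewing the same three images.
inspector_1 = {"img_001": "scratch", "img_002": "ok",   "img_003": "dent"}
inspector_2 = {"img_001": "scratch", "img_002": "dent", "img_003": "dent"}
print(f"Agreement: {percent_agreement(inspector_1, inspector_2):.0%}")  # 67%
```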
Another aspect is the output data. Deep learning helps identify defects reliably and produces a lot of defect data, but it does not provide a root-cause analysis of how defects were produced and how to eliminate them. Analyzing output data to solve problems and continuously improve the process is the next challenge facing manufacturing engineers.
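A first step in that analysis might be cross-tabulating the system’s defect output against production variables to see where defects cluster. The log format and values here are hypothetical:

```python
from collections import Counter

# Hypothetical inspection log: (defect_type, production_line) pairs emitted by
# the vision system. Cross-tabulating them is a first step toward root cause.
log = [("scratch", "line_A"), ("scratch", "line_A"), ("dent", "line_B"),
       ("scratch", "line_A"), ("dent", "line_B"), ("scratch", "line_B")]

by_type = Counter(defect for defect, _ in log)
by_type_and_line = Counter(log)

print(by_type.most_common())            # Pareto view: which defect dominates
print(by_type_and_line.most_common(2))  # here, scratches cluster on line_A
```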
Cloud or Edge Computing?
The cloud computing trend, which has already transformed many other industries, will certainly reach the machine vision industry as well. However, its adoption will probably be limited in industrial vision applications, mainly for two reasons.
First, industrial inspection typically requires high-speed, real-time processing with low latency, which at this stage is hard to achieve with cloud computing. Second, networking industrial inspection systems into a company’s IT infrastructure is a complex endeavor that raises IT security issues. It also introduces the risk that a production line must be shut down for hours if a remote server breaks down or needs maintenance, and that downtime can cost millions of dollars in a very short time.
The likelier scenario for cloud computing in machine vision will be processing on the edge. This involves carrying out image processing locally in real time, with the results uploaded to the cloud for further analysis. In a deep learning application, training and data storage can take place in the cloud while the actual inference is executed on the edge.
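A minimal sketch of that split might look like the following, assuming a model trained in the cloud and exported to ONNX; the runtime choice, model path, endpoint URL, and pass/fail threshold are all illustrative, not prescribed:

```python
import numpy as np
import onnxruntime as ort  # one common edge inference runtime; others work too
import requests

# Hypothetical artifacts: a model trained in the cloud and exported to ONNX,
# plus a cloud endpoint that collects lightweight inspection results.
MODEL_PATH = "inspection_model.onnx"
CLOUD_ENDPOINT = "https://example.com/api/inspection-results"

session = ort.InferenceSession(MODEL_PATH)
input_name = session.get_inputs()[0].name

def inspect(image: np.ndarray) -> dict:
    """Run inference locally for low latency; assumes a single-image batch."""
    scores = session.run(None, {input_name: image[np.newaxis].astype(np.float32)})[0]
    return {"defect_score": float(scores.max()), "pass": bool(scores.max() < 0.5)}

def report(result: dict) -> None:
    """Upload only the compact verdict, not the raw image, to the cloud."""
    requests.post(CLOUD_ENDPOINT, json=result, timeout=2.0)
```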
Sensor Technology Innovations: Are More Megapixels Always Better?
Another major trend in the vision industry is sensor technology innovation. Sensor resolution is constantly increasing, with smaller and smaller pixels. Short-wave infrared (SWIR) sensitivity is rapidly gaining ground, driven by Sony’s latest SenSWIR sensors.
Event-based sensors have also opened new opportunities for vision applications. While all these new technologies are exciting for engineers at manufacturing facilities, it would be a trap to believe that one significant development will cover the needs of all applications.
For example, a high-resolution sensor may not always be the best solution compared to an array of several lower-resolution cameras or one camera moving across a field of view. Technical challenges on the optics and lighting side, or cost considerations, may justify a lower-resolution setup. There again, an in-depth analysis of an application’s specific requirements is necessary to select the appropriate technology.
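As a back-of-the-envelope illustration with hypothetical numbers, the resolution a task actually needs can be estimated from the field of view and the smallest defect that must be resolved:

```python
# Hypothetical requirements for a surface inspection task.
fov_mm = 600.0          # width of the field of view
min_defect_mm = 0.2     # smallest defect that must be detected
pixels_per_defect = 3   # rule of thumb: cover a defect with at least 3 pixels

pixels_needed = fov_mm / min_defect_mm * pixels_per_defect   # 9,000 px across
print(f"{pixels_needed:.0f} pixels needed across the field of view")

# A single 9k-wide sensor is costly and hard to light and focus uniformly;
# an array of ~3k-wide cameras, or one camera scanned across the part,
# may meet the same requirement more economically.
camera_width_px = 3072
print(f"Cameras in an array: {int(-(-pixels_needed // camera_width_px))}")
```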
3D Imaging Growth Continues
3D imaging has been a trend for a few years now and continues to mature. Thanks to 3D imaging advances, objects no longer need to be fixed and positioned in a preset way for inspection. Thus, the market comes closer to the holy grail of machine vision: bin picking, the ability to grab objects randomly positioned in a bulk container.
Today, the efficiency of bin picking still heavily depends on the geometry of the object: How easily can it be grabbed? How easily can it be separated from the rest? 3D imaging has made a lot of progress, and it is no longer necessary to infer an object’s geometry from pixel contrast analysis of a 2D image. Thanks to this technology, actual geometric measurements of an object can now be performed directly.
Several technologies enable the capture of 3D information: laser triangulation, structured light, stereoscopy, and time of flight. There again, engineering expertise and a deep understanding of the application are required to select the right technology for a given use case. For example, engineers must know what level of precision is required. Laser triangulation is accurate but requires motion of the object or the 3D scanner; is that compatible with the overall application setup? These are the types of questions that must be addressed when choosing a 3D imaging technology for a manufacturing scenario.
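To make those trade-offs concrete, here is a deliberately simplified, illustrative decision helper; the thresholds and rules are hypothetical, and real selection weighs many more factors, such as cost, surface reflectivity, ambient light, and cycle time:

```python
# A deliberately simplified, illustrative decision helper; the thresholds and
# rules are hypothetical, and real selection weighs many more factors.
def suggest_3d_technology(precision_um: float, part_moves: bool,
                          long_range: bool) -> str:
    if long_range:
        return "time of flight"       # coarser, but works at long distances
    if precision_um <= 50:
        # Laser triangulation is accurate but needs relative motion between
        # the scanner and the part; structured light suits static scenes.
        return "laser triangulation" if part_moves else "structured light"
    return "stereoscopy"              # moderate precision, flexible setup

print(suggest_3d_technology(precision_um=20, part_moves=True, long_range=False))
```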
Engineering Expertise Needed
With so many new technologies and possibilities emerging in the machine vision market, expectations are high. However, these technologies can deliver on their promises only if they are properly implemented in the right application use cases. Given the costs and labor associated with defective products, including scrapped and reworked parts, recalls, and damaged reputations, manufacturers must be diligent when identifying the right applications and technologies for disparate automated inspection tasks.
David L. Dechow is an experienced engineer, programmer, and entrepreneur specializing in the integration of machine vision, robotics, and other automation technologies, with an extensive career in the industry. He is the vice president of Outreach and Vision Technology at Landing AI.