
Looking at How AI Can Improve Computer Vision

March 10, 2020
This Q&A with Levatas' Johann Beukes takes a closer look at how utilizing AI can significantly impact how manufacturers utilize machine vision.

The definition of a smart manufacturer can vary significantly from one organization to another. However, truly becoming a “smart manufacturer” often depends heavily on an organization’s ability to seamlessly integrate the latest tools and technologies into existing production environments, and to do so in a manner that delivers noticeable improvements in productivity, efficiency and capability.

The ability to blend artificial intelligence into the computer vision process could be a prime example. I recently had the opportunity to connect with Johann Beukes, vice president of data science and analytics at Levatas, to hear his insights.

IW: In what ways can AI and computer vision work together to help a smart manufacturer?

Beukes: The manufacturing space allows for many opportunities to improve real-time decision support for business systems, such as increased energy efficiency, supply chain and network optimization, and planned maintenance. However, capturing the data to reach these decision-support objectives typically requires new sensors, which can be cost-prohibitive as well as functionally limited. This is where computer vision (CV) could be a more cost-effective and flexible “data-capture” option.

Using autonomous or semi-autonomous robots such as Spot® from Boston Dynamics, an engineering and robotics design company, further enhances the accessibility and reach of computer vision data-capture. Instead of installing static sensors, a single robot with CV capabilities can be deployed to cover several current use cases, and easily be adjusted to start capturing data for new use cases.

Additionally, computer vision can use the same images captured, but extract new information using new models in the backend. Unlike a sensor that is typically single-use, computer vision is flexible in its application, since application occurs at the software layer. For example, capturing a video stream of a conveyor belt and the products being transported allows several applications, such as anomaly detection and quality control.
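As a rough illustration of that software-layer flexibility, the sketch below (all names hypothetical, not an actual Levatas or Boston Dynamics API) runs several independent model callables over the same captured frame, so a new analysis can be added without touching the capture hardware:

```python
# Hypothetical sketch: one captured frame, several independent "models".
# Each model is just a callable applied at the software layer, so new
# analyses can be added without installing new sensors.

def count_items(frame):
    # Stand-in for an object-counting model.
    return {"item_count": len(frame["detections"])}

def detect_anomalies(frame):
    # Stand-in for an anomaly-detection model: flag unknown labels.
    known = {"box", "crate"}
    unknown = [d for d in frame["detections"] if d not in known]
    return {"anomalies": unknown}

def analyze(frame, models):
    """Run every registered model over the same image data."""
    results = {}
    for model in models:
        results.update(model(frame))
    return results

# One simulated frame from a conveyor-belt camera.
frame = {"detections": ["box", "box", "pallet"]}
print(analyze(frame, [count_items, detect_anomalies]))
# {'item_count': 3, 'anomalies': ['pallet']}
```

Adding a quality-control check later would mean registering one more callable, with no change to the capture side.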

IW: What are the challenges to getting the most out of AI and computer vision?

Beukes: The performance of even state-of-the-art CV models ranges from mediocre to better-than-human perception, but achieving high levels of performance comes at a cost. Current CV approaches use deep learning architectures that require large amounts of data, and thus investment in expensive, optimized processing units, which can be hard to justify in cases where well-performing models are still unavailable.

That said, there are several ways to overcome, or at least improve, model performance without incurring a high upfront investment. First, use transfer learning, where you begin with previously trained models and then train only the final layers of a neural network on a small dataset, focusing it on the specific problem area being addressed. Frequently, this approach helps ensure strong results from as little as a few thousand images, versus starting from scratch with hundreds of thousands of images.
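The transfer-learning idea can be sketched in plain Python (everything here is a toy stand-in, not a real pretrained network): treat the pretrained backbone as a frozen feature extractor and fit only a small final layer on the new task's data.

```python
import math

# Hypothetical transfer-learning sketch: the "pretrained backbone" is
# frozen (never updated); only the final layer is trained on the small
# task-specific dataset.

def frozen_backbone(image):
    # Stand-in for a pretrained feature extractor: maps raw pixels to a
    # compact feature vector. In practice this would be a deep network.
    return [sum(image) / len(image), max(image) - min(image)]

def train_final_layer(images, labels, lr=0.5, epochs=200):
    """Fit a tiny logistic-regression head on top of frozen features."""
    feats = [frozen_backbone(img) for img in images]
    w = [0.0] * len(feats[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(feats, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))        # sigmoid
            g = p - y                              # gradient of log loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, image):
    x = frozen_backbone(image)
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Tiny toy dataset: "bright" images labeled 1, "dark" images labeled 0.
images = [[0.9, 0.8, 1.0], [0.8, 0.9, 0.9], [0.1, 0.2, 0.0], [0.2, 0.1, 0.1]]
labels = [1, 1, 0, 0]
w, b = train_final_layer(images, labels)
print(predict(w, b, [0.95, 0.9, 0.85]))  # expected: 1
```

Because only the small head is trained, a handful of labeled examples is enough here; the same reasoning is why real transfer learning needs thousands rather than hundreds of thousands of images.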

Another approach, which applies beyond computer vision but has been seamlessly integrated into machine-learning pipelines by companies like Levatas, is human-in-the-loop. Instead of deploying a model only once it reaches, for example, 85% accuracy, you can include humans in the pipeline and thus ‘shrink the haystack’ to find that needle much faster. This does not replace humans, but rather optimizes the time humans spend by using machine learning, while also tagging data so the model improves over time.
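A minimal sketch of that “shrink the haystack” pattern (hypothetical names and thresholds; not Levatas' actual pipeline): the model auto-accepts confident predictions, routes uncertain ones to a human reviewer, and keeps the human's labels as training data for the next retraining cycle.

```python
# Hypothetical human-in-the-loop sketch: confident predictions pass
# through automatically; uncertain ones go to a review queue, and the
# human's answers become labeled data for future retraining.

CONFIDENCE_THRESHOLD = 0.85  # assumed cutoff, tuned per application

def triage(predictions, ask_human):
    """Split model output into auto-accepted labels and human-reviewed ones."""
    accepted, reviewed, training_data = [], [], []
    for item, label, confidence in predictions:
        if confidence >= CONFIDENCE_THRESHOLD:
            accepted.append((item, label))
        else:
            human_label = ask_human(item)            # the human-in-the-loop step
            reviewed.append((item, human_label))
            training_data.append((item, human_label))  # feeds retraining
    return accepted, reviewed, training_data

# Simulated model output: (item_id, predicted_label, confidence).
predictions = [
    ("img-001", "box", 0.97),
    ("img-002", "box", 0.62),   # uncertain -> human review
    ("img-003", "crate", 0.91),
]
auto, reviewed, new_training = triage(predictions, ask_human=lambda item: "pallet")
print(auto)      # [('img-001', 'box'), ('img-003', 'crate')]
print(reviewed)  # [('img-002', 'pallet')]
```

The human only sees the one uncertain image out of three, which is the time saving, while their answer simultaneously becomes tagged data for the model.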

IW: Any best practices to deploying computer vision in today's increasingly digital manufacturing environment?

Beukes: The ROI on implementing the right solution relies on many factors, one being the implementation details. An important aspect of CV models is that, over time, they can suffer from what is called “concept drift,” where the conditions being monitored change from the original data on which the model was trained.

For example, one of the real-world cases we’ve come across had a computer vision model trained to track inventory by counting and classifying boxes as they enter a packaging area. That data fed into other systems, including the packing and supplies ordering system. One of the attributes captured by the computer vision model was the boxes’ dimensions, providing a real-time count of the number of boxes and specific sizes being taken out of the inventory. A new box size was added to the packaging process with which the model was unfamiliar, and it subsequently had to be retrained to capture this new size. Fortunately, in this case, there was also an anomaly detection model using the same images that captured this “non-standard” box type as an anomaly, which triggered a supervisor to take action.
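The retraining trigger in that story can be sketched roughly like this (dimensions and logic are invented for illustration): the counting model only knows the box sizes it was trained on, while a parallel check over the same measurements flags anything unfamiliar for a supervisor.

```python
# Hypothetical concept-drift sketch: the classifier was trained on a fixed
# set of box sizes; a parallel anomaly check on the same measurements flags
# unfamiliar sizes so a supervisor can act (and the model be retrained).

KNOWN_SIZES = {(30, 20, 15), (40, 30, 20), (60, 40, 40)}  # cm, from training data

def classify_box(dimensions):
    """Return the known size class, or None if the model has never seen it."""
    return dimensions if dimensions in KNOWN_SIZES else None

def monitor(stream):
    counts, alerts = {}, []
    for dims in stream:
        size = classify_box(dims)
        if size is None:
            alerts.append(dims)   # "non-standard" box -> supervisor alert
        else:
            counts[size] = counts.get(size, 0) + 1
    return counts, alerts

# Simulated measurements from the packaging-area camera; last box is new.
stream = [(30, 20, 15), (40, 30, 20), (30, 20, 15), (50, 35, 25)]
counts, alerts = monitor(stream)
print(counts)  # {(30, 20, 15): 2, (40, 30, 20): 1}
print(alerts)  # [(50, 35, 25)] -> queue for retraining
```

Without the parallel check, the unfamiliar box would simply be dropped from the counts and the inventory system would silently drift.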

The lesson here is that for computer vision, as well as most machine-learning algorithms, to pay off, they need to be deployed with a plan for how they will be monitored and adjusted. Many production deployments take this into account, either using the aforementioned anomaly-detection feedback loop or building a framework that allows human-in-the-loop data-tagging, or other feedback loops, into the machine-learning pipeline. In either case, ‘setting and forgetting’ is currently a non-option for successful computer vision deployments.

IW: When it comes to leveraging AI, where are today's manufacturers missing out on key opportunities?

Beukes: Often, manufacturing focuses on the areas with the greatest activity, such as the actual manufacturing process or, perhaps, safety. However, other opportunities for optimization and cost reduction are left on the table by thinking myopically about where to apply AI.

In the span of a day, there are hundreds of decisions being made by humans that can affect the bottom line, and using AI to enable smarter, faster, and more informed decision-making is just one aspect of leveraging AI. Another area that seems a natural fit for manufacturing, but is under-utilized, is the automation of processes and procedures, often referred to as robotic process automation (RPA). Relevant areas include procurement automation, collecting actionable insights that can be mobilized in real time during the manufacturing process, and assisting with regulatory requirements, which often require costly manual work with minimal ROI.

In manufacturing’s future, computer vision will become crucial to workforce optimization, capturing the movement of people and equipment through the process. Having a digital twin of a manufacturing plant that can run different operational processes in a simulated environment, given different conditions and constraints, will become as natural as the current use of flight simulators for pilot training. Where AI really shines is in its ability to influence and experiment with different features, such as optimizing labor-force size, energy efficiency, or maximum throughput. Although it all starts with computer vision, the wide spectrum of machine learning and artificial intelligence enables the best possible outcomes.

