Taking Your AI Projects from Pilot to Production

Dec. 10, 2019
Six tips to keep from getting stuck in the proof-of-concept stage.

The rise of AI has made it possible for automated visual inspection systems to identify anomalies in manufactured products with high accuracy. If implemented successfully, these systems can greatly improve quality control and optimize costs. Although many manufacturers are trying to integrate such systems into their workflows, very few have managed to reach full-scale production.

The disconnect occurs because proof-of-concept solutions are put together in a controlled setting, largely by trial and error. When pushed into the real world, with real-world constraints like variable environmental conditions, real-time requirements, and integration with existing workflows, the proof of concept often breaks down.

In a 2019 white paper, the International Institute for Analytics estimates that fewer than 10% of AI pilot projects have reached full-scale production. Across multiple customer engagements, we have identified six practices that can help your machine-learning projects succeed.

1. Have a clear data-acquisition strategy.

Data is crucial for any AI project, so be ready to collect a first version of the dataset quickly, then iterate rapidly, because the first version will never be perfect. Keep in mind that the more diverse your data is, and the closer it is to real operating conditions, the better your project's chances of success. You can make this stage easier by dedicating a small team to data labeling or by using existing labeling services and tools. If your data-acquisition strategy works, the rest of the project will flow more smoothly. Be prepared to spend the majority of the project working directly with the data.
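
The tooling will differ from plant to plant, but the bookkeeping can start very simply. Below is a minimal sketch of a versioned labeling manifest; the file paths, label names, and manifest layout are illustrative assumptions, not a prescribed format:

```python
import json
from pathlib import Path

# Hypothetical manifest layout: one JSON record per labeled image.
MANIFEST = Path("dataset/manifest_v1.jsonl")

def add_record(image_path, label, labeler, conditions):
    """Append one labeled example, recording who labeled it and the
    operating conditions under which it was captured."""
    record = {
        "image": str(image_path),
        "label": label,            # e.g. "scratch", "dent", "ok"
        "labeled_by": labeler,
        "conditions": conditions,  # e.g. {"line": "A", "lighting": "low"}
    }
    MANIFEST.parent.mkdir(parents=True, exist_ok=True)
    with MANIFEST.open("a") as f:
        f.write(json.dumps(record) + "\n")

add_record("images/line_a/000123.png", "scratch", "inspector_2",
           {"line": "A", "lighting": "low"})
```

Recording the capture conditions next to each label makes it easy to check later whether the dataset actually covers the diversity of real operating conditions.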

2. Identify all project dependencies and risks up front.

To go from proof of concept to production, identify the various conditions in which the system is expected to perform. It may not be possible to solve for all of them, but once they are identified, engineers can understand how the environment might affect model performance. Most importantly, document the operating conditions even if they cannot be controlled; model builders can then create alarms for when those conditions drift outside acceptable ranges. Operating conditions can include air quality, noise, light, and power supply—anything that might affect the model's performance.
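
As a rough illustration of such an alarm (the condition names and thresholds below are hypothetical), it can be as simple as comparing live readings against the documented operating ranges:

```python
# Hypothetical documented operating ranges for the inspection station.
OPERATING_RANGES = {
    "light_lux": (400.0, 900.0),   # lighting at the camera
    "temp_c":    (10.0, 35.0),     # ambient temperature
    "voltage_v": (220.0, 240.0),   # power supply
}

def check_conditions(readings):
    """Return a list of alarms for readings outside the ranges the model
    was validated under; its output may not be trustworthy while any
    alarm is active."""
    alarms = []
    for name, value in readings.items():
        low, high = OPERATING_RANGES[name]
        if not (low <= value <= high):
            alarms.append(f"{name}={value} outside [{low}, {high}]")
    return alarms

print(check_conditions({"light_lux": 120.0, "temp_c": 22.5, "voltage_v": 231.0}))
# -> ['light_lux=120.0 outside [400.0, 900.0]']
```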

Rank your risks and mitigate them in order of priority. If you identify a potential blocker to going live, start working to remove it early in the project.

3. Formalize stakeholder agreement on performance criteria and success metrics.

Too often, misunderstandings lead engineers to build solutions for problems they were never meant to solve, while the target problem goes unresolved. There should be zero ambiguity. This requires clear communication and transparency between the business leaders and the technologists building the solution. All stakeholders must agree on the performance and quality criteria the solution is expected to meet. Have the stakeholders sign off on documentation that defines the goals and the measures of success.

It's also important to understand what human-level performance means as a benchmark for success. Human-level performance is simply that of a human doing exactly the same task the machine is designed to do, not necessarily what humans currently do to solve the problem. If the machine works from a 2D image, human-level performance should be estimated using the same representation.

You will often find that the performance of the current system is not well quantified. Your goal is to help the people involved quantify it. The customer may want a solution that exceeds human-level performance, but all stakeholders must first agree on what human-level performance means.
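
One way to make that concrete is to score the model and the human inspector on the same held-out, labeled examples, using the same 2D images the model sees. A minimal sketch, with illustrative labels rather than real data:

```python
# Ground truth, human inspector decisions, and model predictions
# for the same set of 2D images (values are illustrative).
ground_truth = ["ok", "defect", "ok", "defect", "ok", "defect"]
human_calls  = ["ok", "defect", "ok", "ok",     "ok", "defect"]
model_calls  = ["ok", "defect", "ok", "defect", "defect", "defect"]

def accuracy(predictions, truth):
    return sum(p == t for p, t in zip(predictions, truth)) / len(truth)

human_level = accuracy(human_calls, ground_truth)
model_level = accuracy(model_calls, ground_truth)
print(f"human-level: {human_level:.2%}, model: {model_level:.2%}")
```

The exact numbers matter less than the fact that both are measured on the same task and the same representation, so "exceeds human-level performance" becomes a claim every stakeholder can verify.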

4. Don’t reinvent the wheel.

Reuse ready-made solutions where you have them, or look for open-source or commercial tools that do similar jobs. You don't want to spend time solving problems that somebody has already solved. Worse, you risk introducing errors into the system by working on things that are not directly related to the model.

Sometimes, for example, the test data and labels are wrong for the business objective at hand and need to be edited. That editing requires tools that track versions of the dataset and let multiple people collaborate on the data. Those tools can be adapted from open-source solutions or purchased from a vendor. Don't waste time building them yourself.
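
Purely to illustrate the kind of bookkeeping an off-the-shelf data-versioning tool handles for you (the directory layout here is hypothetical), the core idea is that any edit to the images or labels should yield a new, comparable version identifier:

```python
import hashlib
from pathlib import Path

def dataset_fingerprint(root):
    """Hash every file under `root` in a stable order so any change to
    the images or labels produces a different version id."""
    digest = hashlib.sha256()
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            digest.update(str(path.relative_to(root)).encode())
            digest.update(path.read_bytes())
    return digest.hexdigest()[:12]

print("dataset version:", dataset_fingerprint("dataset"))
```

Mature open-source and commercial tools add remote storage, branching, and collaborative label editing on top of this idea, which is exactly why building them yourself is rarely worth the time.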

5. Focus on delivering value, not 100% accuracy.

Your AI projects don't require 100% accurate models to be successful in a production environment. Machine-learning algorithms are approximate algorithms, which by definition always have some error rate. Designing for 100% accuracy is like designing a bridge that will withstand any event, including an asteroid strike. The right solution isn't to pour an enormous investment into basic research to find an unbreakable material from which to build your bridge. The solution is to frame your machine-learning task in a way that delivers significant value with less than 100% accuracy.

Engineers tend to dive deep into the technical aspects of the algorithm. High-performing AI teams, however, build baselines quickly and iterate rapidly to improve them. One way to speed up your AI project is to integrate daily sprints (a one-day, well-defined development step, experiment, or change of direction) into your workflow. If your goal is going live, start simple, use existing tools, and avoid adding unnecessary complexity to the solution.
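
A baseline does not need to be sophisticated; the point is to have a number to iterate against on day one. A sketch using scikit-learn, with placeholder arrays standing in for downsampled inspection images:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Placeholder data: in practice X would hold flattened, downsampled
# grayscale images and y the "ok"/"defect" labels.
rng = np.random.default_rng(0)
X = rng.random((200, 64 * 64))
y = rng.integers(0, 2, size=200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

baseline = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("baseline accuracy:", accuracy_score(y_test, baseline.predict(X_test)))
```

Each day's sprint can then try one well-defined change, compare it against this number, and keep or discard it.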

6. Keep humans in the loop.

It is important to agree on which problems are worth solving with machine learning and which are better addressed by a human. While there is a perception that AI systems replace people, machine-learning models are only meant to improve human efficiency.

When humans label images for a computer-vision system to learn what those images represent, they are in effect mapping human perception onto a network built of computer code. But we cannot download all human knowledge or reasoning into a deep-learning network. No matter how complete and well annotated the training data, it will miss some percentage of relevant cases. That's where the human comes in. Trying to solve for every case can be prohibitively time-consuming and expensive, if it is possible at all. Establish a protocol for punting those cases to a human.
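
A common form of that protocol is a confidence threshold: the model decides the cases it is sure about and punts the rest to a person. A minimal sketch (the threshold and field names are assumptions to be agreed on with stakeholders):

```python
CONFIDENCE_THRESHOLD = 0.90  # tune on held-out data with stakeholders

def route_prediction(label, confidence):
    """Decide automatically only when the model is confident;
    otherwise escalate the image to a human inspector."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"decision": label, "decided_by": "model"}
    return {"decision": "needs_review", "decided_by": "human",
            "model_suggestion": label, "confidence": confidence}

print(route_prediction("defect", 0.97))  # model decides
print(route_prediction("ok", 0.62))      # escalated to a person
```

Lowering the threshold automates more decisions; raising it sends more work to inspectors. Where to set it is as much a business decision as a technical one.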

Trusting your AI solution is also a process. Create a sequential strategy to transition from the current process to the AI-based solution—a practice that we at Landing AI call “shadowing mode.” Always hold out part of your data to test how your solution behaves when it faces unseen conditions.
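
In shadow mode the model runs alongside the existing process, but its output is only logged, never acted on, so disagreements can be reviewed before any switch-over. A rough sketch of that logging (the record fields are illustrative):

```python
import json
import time

SHADOW_LOG = "shadow_log.jsonl"

def log_shadow(image_id, human_decision, model_decision, confidence):
    """Record both decisions for later review; production still follows
    the human decision while the model is shadowing."""
    record = {
        "ts": time.time(),
        "image": image_id,
        "human": human_decision,
        "model": model_decision,
        "confidence": confidence,
        "agree": human_decision == model_decision,
    }
    with open(SHADOW_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")
    return human_decision  # the human's call remains authoritative

log_shadow("line_a/000123.png", human_decision="ok",
           model_decision="defect", confidence=0.81)
```

Reviewing the disagreement rate over a few weeks of shadowing, together with the held-out test set, gives stakeholders concrete evidence before they let the model start making decisions.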

To sum up, adopting AI is going to be a process rather than a pill.

Greg Diamos is the engineering tech lead at Landing AI, a Silicon Valley-based startup dedicated to empowering companies to jumpstart AI adoption and realize practical value. He leads a team of engineers who partner with customers in the manufacturing industry to develop AI-enabled visual inspection solutions, including leak detection, micro-particle detection, and surface defect detection. Prior to Landing AI, Greg served as a research scientist at Baidu's Silicon Valley AI Lab. He holds a Ph.D. from the Georgia Institute of Technology, where he led the development of the GPU-Ocelot dynamic compiler, which targeted CPUs and GPUs from the same program representation.
