Industry 4.0 made use cases for artificial intelligence (AI) and machine learning (ML) widely popular. Over time, expectations on management's end skyrocketed, while the outlook at the enterprise IT and operations level slid from elation to cautious optimism. Reality set in as challenges arose, including a lack of skilled personnel and an insufficient understanding of the algorithms, which made it difficult to select the right people for the project.
I have had many discussions with IT professionals who handle AI and ML projects as the main part of their jobs. Their knowledge and experiences are based on countless dead ends, sleepless nights and fierce conversations with customers and other stakeholders.
Here are some of the main points revealed during our discussions:
Let’s skip the givens, such as the need to set the right expectations, business case creation, governance, and stakeholder involvement, and move right to project management. When working with AI and ML, the experts’ overall experience was that project management should follow an agile approach rather than a waterfall one. Working with ML technology requires a different approach than working with commodity AI. Hence, the development team must determine the purpose and desired parameters from the outset.
During the initial stages of the projects, AI/ML experts realized how isolated ML knowledge is from other IT skills available in the traditional IT department.
At the very start of the project, certain questions need to be asked, the most obvious being: Should we develop the solution on our own or buy one? Then come the less obvious questions: Who will operate the model? Who will take care of retraining it? Who will fine-tune the hyperparameters of the neural networks? Who will be responsible for monitoring their quality, and for how long? Will the solution be deployed in one workplace, or can the neural network cover more locations?
And closing with the million-dollar question: Who should be on the development and deployment team?
Experts also note the following important considerations:
- How will the neural networks be monitored?
- What infrastructure do we need? Is it in line with the company’s technology development strategy? What safety measures will we need to take?
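The monitoring question above is often answered with a surprisingly simple first step. As a minimal sketch (the class name, window size, and threshold below are illustrative assumptions, not a prescribed design), a rolling average of the model's confidence scores can flag when the network's behavior in production starts to drift from what was seen during training:

```python
from collections import deque

class ConfidenceMonitor:
    """Tracks a rolling window of model confidence scores and flags
    potential drift when the window average falls below a threshold."""

    def __init__(self, window_size=100, threshold=0.8):
        self.scores = deque(maxlen=window_size)
        self.threshold = threshold

    def record(self, confidence):
        # Called once per prediction made in production.
        self.scores.append(confidence)

    def drift_suspected(self):
        # Until the window is full, assume the model is healthy.
        if len(self.scores) < self.scores.maxlen:
            return False
        return sum(self.scores) / len(self.scores) < self.threshold
```

Real deployments typically add label-based accuracy checks and alerting on top, but even a crude signal like this answers the "who monitors it, and how" question concretely.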
Another important area to be aware of is the set of pitfalls related to the development of ML-based models.
According to the experts, the most underestimated ML-related activity is the labeling process: identifying raw data (images, text files, videos, etc.) and adding one or more meaningful, informative labels to provide context. The accuracy of a neural network depends on the quality of its inputs, and these labels are typically provided by humans. An effective labeling process is absolutely crucial for the success of the final model. Typically, thousands of images must be assessed, sorted, and labeled. Targeted accuracy levels suffer when the network makes decisions based on statuses that were badly defined or not defined at all.
However, the labeling process is intensive and tedious and can’t be carried out for long stretches at a time. It must also be defined exactly what an expert has to label manually before data is admitted to the network. This level of difficulty is often poorly understood by clients, who wonder why neural networks can’t be trained to recognize the statuses autonomously.
Hence, project leadership should provide a competent person who is responsible for determining whether the object is OK or not OK, rather than relying wholly on the IT function. The experts point out that one way to improve the labeling outcomes would be to involve competent production quality or process engineers in the process.
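One lightweight way to enforce the discipline described above is to make undefined statuses impossible to record in the first place. The sketch below is an assumption-laden illustration (the status vocabulary and field names are hypothetical): each labeled sample carries the engineer's verdict and identity, and any status outside the agreed vocabulary is rejected at entry:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical status vocabulary. In practice this list should come
# from production quality or process engineers, not the IT team.
ALLOWED_STATUSES = {"OK", "NOT_OK", "NEEDS_REVIEW"}

@dataclass
class LabeledSample:
    image_path: str
    status: str
    labeled_by: str  # e.g. the quality engineer's ID
    labeled_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def __post_init__(self):
        # Reject statuses that were never defined. Undefined statuses
        # are exactly what degrades the trained network's accuracy.
        if self.status not in ALLOWED_STATUSES:
            raise ValueError(f"Undefined status: {self.status!r}")
```

Recording who labeled each sample also makes it possible to audit labeling quality later, when the model's errors need to be traced back to their training data.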
As an illustration of the extent of the case mentioned here, one expert from the automotive industry revealed that to train the model to identify 17 types of non-conformities (defects), over 140,000 samples had to be used for the training. The software system—including a whole ecosystem of infrastructure, backend, integrated layers, and databases—went through 25 iterations. And the neural network model had 52 iterations!
Projects related to the development and deployment of AI-powered technology in manufacturing in the operational environment require a combination of specific skillsets and expertise. Therefore, the right team setup is crucial. The experts identified three main groups of project stakeholders:
Customers: These are the people who will be operating in the environment where the technology is being deployed (manufacturing or maintenance engineers, quality control managers, etc.). They define the problem and/or process that is managed by the solution, as well as who will be using the outputs of the solution. One of their key responsibilities is to define the inputs for the machine learning models.
Translators: These are people who know the process and can explain the situation, status, or problem to the data scientists. Translators are also able to interpret the scientists’ outputs and integrate them into the operational process.
Developers: It takes a village to put together a team with the required—and often even new—skills. You are halfway there if you have the skillsets of a data scientist, data engineer, data architect, UX designer, and delivery manager in place.
To boost the capabilities of the AI team, the best practice most often mentioned was building an ecosystem around AI. That includes technology vendors, universities, private education initiatives, and active membership in industry clusters. Some gained value from internal training programs, so-called “analytics boot camps.” This includes management training for a better understanding of where AI-supported technology could be deployed and what value it could bring.
Unfortunately, many initial expectations for deploying AI-powered technology stem from the straightforward plan of saving workforce costs. However, the experts warn against this way of thinking. In their experience, someone still needs to be present in the production environment to check the model’s outputs against the originally intended input. Without a person carrying out this function, there is a chance that an inaccurate item will be passed on to the next step in the process.
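This human-in-the-loop check can be built into the deployment itself. As a minimal sketch (the function name, labels, and threshold are hypothetical), only confident "OK" calls are accepted automatically; every rejection and every low-confidence call is routed to a human operator instead of straight to the next process step:

```python
def route_prediction(label, confidence, review_threshold=0.9):
    """Route a model prediction: auto-accept only confident 'OK' calls;
    everything else goes to a human operator for inspection."""
    if label == "OK" and confidence >= review_threshold:
        return "auto_accept"
    return "human_review"
```

The design choice here is asymmetric on purpose: a false "OK" that slips downstream is usually far more expensive than the operator time spent double-checking a borderline call.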
Hence, the collaboration between the customer (production) and data team is not finished when the model is handed over to operations. As quality control requirements or technology conditions change over time, so must the related technology.
Last but not least, the experts shared another practical recommendation: After the solution is handed over to production, it is important to have visibility into the extent to which it is being used by the operators. This is key for the project’s long-term success, because if the solution is underutilized or improperly utilized, production may become discouraged with the results and be inclined to revert to the original process.
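Measuring that visibility can start very simply. As an illustrative sketch (assuming inspection events are logged as either going through the AI solution or through the manual fallback), the utilization rate is just the share of events handled by the model:

```python
def utilization_rate(events):
    """Fraction of inspection events handled via the AI solution
    rather than the manual fallback process."""
    if not events:
        return 0.0  # no events logged yet
    used = sum(1 for e in events if e == "ai")
    return used / len(events)
```

A declining rate over successive weeks is exactly the early-warning signal the experts describe: operators quietly reverting to the original process.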
AI-powered technology has definitely proved its value for a handful of use cases across industries. However, the path toward a fully industrial and scalable solution is still bumpy and winding. Whether you are a pioneer or an experienced expert, there is still the potential for pitfalls and unpleasant surprises.
On the other hand, there is no need to reinvent the wheel. Most of the challenges have already been identified across industries and in different types of operational environments. The foundation of such projects must be built on a solid ecosystem of IT vendors, advisors, and technology and AI experts. In addition, to enhance local competence, organizations must keep investing in the development and training of the internal workforce.
In conclusion, organizations need to focus on the big picture. Knowing the internal client and persevering side by side are the cornerstones of success. Nevertheless, taking small steps will be important, as a focus on the little things will prepare organizations to tackle much larger obstacles. Achieving quick wins will provide teams with experience and encourage further investments from management.
Digital transformation is not only a matter of technology, but also of culture and collaboration. Nothing can be built and run properly without people believing in it. So define the need, address it with technology and let people search for a solution. There is no guaranteed failsafe method, but this has proven to be the most viable approach.
Jan Burian is senior director, head of IDC Manufacturing Insights EMEA and leader of Europe: Future of Operations Practice.