Automation Won’t Save You If Nobody Uses It Correctly

Adoption failure and dashboard blind spots are killing enterprise program efficiency. Here’s what actually works—from the field.
May 4, 2026
6 min read

Key Highlights

  • Successful automation depends heavily on human adoption, not just technology deployment.
  • Regular enablement sessions and ongoing support significantly improve user engagement and data quality.
  • Design dashboards to highlight risks and escalation points, not just status updates, to facilitate proactive decision-making.
  • Assign clear ownership of dashboard metrics to ensure data remains relevant and actionable.
  • The real power of automation is realized when dashboards prompt behavior change and continuous improvement.

You've spent money on the tooling. You've designed and built the automation. You've presented the dashboard. Three months later, half of your engineering team is still creating tickets manually, project reports show green for work that is actually off track, and IT leadership is left wondering why the spend hasn't paid off.

This isn't a technology problem. It's an adoption problem, and it's far more common than your dashboards will ever show.

I've led reporting and workflow automation implementations at Ford Motor Company and Mopec Group. In both cases, the hardest part wasn't the build. It was getting people to buy in, and building dashboards that drove decisions rather than simply displayed information.

The gap between a working automation and a useful one is entirely human. And it’s 100% solvable.

Why Adoption Fails — And How to Fix It

At Ford, I built a Jira automation pipeline to eliminate manual ticket creation for engineering teams working on hardware-in-the-loop and software-in-the-loop validation testing.

Before the automation, vehicle program releases were tracked in Blueprint, a legacy internal system that was cumbersome to use and created unnecessary friction for the teams relying on it every day. That data now flows automatically into Jira, a modern, widely adopted project management platform, where it arrives already organized, assigned to the right teams, and populated with everything needed to start work immediately, eliminating the manual effort of the old process.
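The pipeline itself is internal, but its shape is easy to sketch. The snippet below is a minimal illustration, not the actual Ford implementation: the legacy record fields (`release_id`, `team`, `notes`) and the project key are invented, and only the standard Jira REST API issue payload shape is assumed.

```python
# Sketch: map a legacy release record onto Jira's standard issue fields.
# The record shape and project/team names are hypothetical.

def build_issue_payload(record: dict, project_key: str) -> dict:
    """Build a Jira create-issue payload from a legacy release record."""
    return {
        "fields": {
            "project": {"key": project_key},
            "issuetype": {"name": "Task"},
            "summary": f"Validate release {record['release_id']}",
            "description": record.get("notes", ""),
            "labels": [record["team"]],
        }
    }

record = {"release_id": "R-1042", "team": "hil-validation", "notes": "Build drop"}
payload = build_issue_payload(record, "VAL")
print(payload["fields"]["summary"])  # Validate release R-1042

# Creating the issue is then one authenticated call, e.g. with requests:
#   requests.post(f"{base_url}/rest/api/2/issue", json=payload, auth=auth)
```

The point of isolating the payload-building step is that it can be tested and reviewed without touching the live Jira instance.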

The automation operated at 95% accuracy during testing. But in the first weeks of rollout, data quality suffered because some engineers were working around the system instead of through it.

The problem wasn’t resistance. It was friction. The new process required engineers to understand field dependencies they had never had to think about before. When those fields were unintuitive, people defaulted to their old behavior, and the automation lost out.

Here’s what actually moved the needle:

Biweekly enablement sessions, not one-time training. A single training session produces only temporary changes in behavior. Recurring sessions let teams raise problems and get questions answered while they’re hitting real issues, not theoretical ones. Our adoption rates were 20% higher with every additional biweekly session.

Tracking data quality, not just activity. Login counts and ticket volume tell you how often the tool is being accessed. Field completion rates and routing accuracy tell you whether it is being used correctly. These are separate metrics, and you need both before the reports you generate can be trusted.
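As a rough illustration of the distinction, both kinds of metrics can be computed from the same ticket data; the required fields and team attributes below are invented for the sketch, not an actual Jira schema.

```python
# Sketch: data-quality metrics vs. raw activity over the same tickets.
# Field names are illustrative assumptions.

REQUIRED_FIELDS = ["component", "fix_version", "assignee"]

def completion_rate(tickets: list[dict]) -> float:
    """Share of tickets with every required field populated."""
    complete = sum(all(t.get(f) for f in REQUIRED_FIELDS) for t in tickets)
    return complete / len(tickets)

def routing_accuracy(tickets: list[dict]) -> float:
    """Share of tickets that landed with the team that actually owns them."""
    correct = sum(t.get("assigned_team") == t.get("owning_team") for t in tickets)
    return correct / len(tickets)

tickets = [
    {"component": "brakes", "fix_version": "24B", "assignee": "a",
     "assigned_team": "hil", "owning_team": "hil"},
    {"component": "brakes", "fix_version": None, "assignee": "b",
     "assigned_team": "hil", "owning_team": "sil"},
]
print(completion_rate(tickets), routing_accuracy(tickets))  # 0.5 0.5
```

Here ticket volume alone looks healthy (two tickets filed), while the quality metrics reveal that half the data is incomplete or misrouted.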

Making the right action easier than the wrong one. We redesigned field defaults and required-field configurations so that filling out a ticket correctly took fewer clicks than filling it out incorrectly. Adoption isn’t about persuasion; it’s about making the correct path the path of least resistance.

This led to a 20% increase in team velocity, and most importantly, program leadership had reliable data to base release decisions on.

Dashboards That Drive Decisions, Not Just Displays

Three days before a major PI planning review at Ford, I pulled up the project dashboard and saw green across workstreams I knew were in trouble. Blocked tickets. Overdue milestones. Epics with no recent activity.

There was nothing wrong with the dashboard. It displayed exactly what it had been configured to display. But the underlying flags, the impediment markers that trigger the escalation process, hadn’t been updated by any team in more than 15 days.

A dashboard that shows what people entered is not the same as a dashboard that shows reality, and the gap between the two is where management decision-making goes wrong.

Here’s what distinguishes dashboards that drive efficiency from ones that just generate slides:

Build for escalation, not status. The most valuable dashboards don’t answer “Where are we?” They answer “What needs attention before it becomes a crisis?” Configure thresholds that surface risk signals automatically: items overdue with no owner change, milestones with no activity in seven or more days, and items flagged as blocked but missing a linked dependency.
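Those three threshold rules can be expressed as a simple filter over work items. This is a sketch only: the field names and the seven-day threshold are illustrative assumptions, not a specific Jira or Power BI configuration.

```python
from datetime import date

# Sketch: surface escalation-worthy risk signals instead of status colors.
# Field names and thresholds are illustrative.

def risk_signals(item: dict, today: date) -> list[str]:
    """Return the list of risk rules an item currently trips."""
    signals = []
    if item["due"] < today and not item.get("owner_changed_since_due"):
        signals.append("overdue with no owner change")
    if (today - item["last_activity"]).days >= 7:
        signals.append("no activity in 7+ days")
    if item.get("blocked") and not item.get("linked_dependency"):
        signals.append("blocked without linked dependency")
    return signals

today = date(2026, 5, 4)
item = {
    "due": date(2026, 4, 28),
    "last_activity": date(2026, 4, 20),
    "blocked": True,
    "linked_dependency": None,
}
print(risk_signals(item, today))  # trips all three rules
```

An item like this would show plain green on a status-only dashboard; a signal-based view escalates it instead.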

Tie every dashboard input to an accountable owner. A metric that no team or individual owns will go stale sooner or later. Every data point feeding an executive dashboard should belong to a specific team or person responsible for keeping it current.
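One lightweight way to enforce that rule, sketched here with invented metric and team names, is to keep an explicit owner registry beside the dashboard definition and flag any metric that lacks one before it ships.

```python
# Sketch: every dashboard metric maps to an accountable owner.
# Metric and team names are invented for illustration.

METRIC_OWNERS = {
    "field_completion_rate": "tooling-team",
    "routing_accuracy": "tooling-team",
    "blocked_item_age": "program-office",
}

def unowned(dashboard_metrics: list[str]) -> list[str]:
    """Metrics on the dashboard that have no accountable owner assigned."""
    return [m for m in dashboard_metrics if m not in METRIC_OWNERS]

dashboard = ["field_completion_rate", "blocked_item_age", "velocity_trend"]
print(unowned(dashboard))  # ['velocity_trend'] -- the metric that will go stale
```

Running a check like this in review makes missing ownership a build failure rather than a discovery three days before a planning session.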

Match refresh cycles with decision cycles. Having real-time or daily performance data at your fingertips is one thing. Using it to improve decision-making and program effectiveness in real time is another. The difference is in the analytics and processes that turn data into answers to questions that can directly affect outcomes.

After I rebuilt the Power BI risk dashboard at Ford to incorporate live flag data and stakeholder inputs, the first thing leadership noticed wasn’t the new visuals. It was that three workstreams they thought were on track were now showing red. The conversation shifted from a status update into a problem-solving session. That shift, from reporting to action, is what a dashboard is supposed to produce.

The question isn’t whether you have a dashboard. It’s whether anyone changes their behavior because of it.

The Real Measure of Enterprise Automation

Enterprise automation delivers return on investment in two specific ways: efficiency when it works, and visibility when it doesn’t. Achieving either requires the same two things: a healthy team using the system as intended, and a dashboard that surfaces discrepancies and challenges rather than hiding them.

Teams that drive adoption as deliberately as implementation, and that use their dashboards to inform the hard decisions between the wins and the setbacks, find that automation snowballs rather than fizzles.

The tooling is the easy part. The discipline is the differentiator.

About the Author

Devashish Kedar

Automation and Project Management Professional

Devashish Kedar is a program management and automation professional with experience at Ford Motor Company and Mopec Group. He specializes in enterprise workflow automation, Agile program delivery, and analytics-driven decision infrastructure for engineering teams. He holds CSPO, CSM, and Lean Six Sigma Green Belt certifications.
