
You Paid a Fortune for AI + ERP, but Got Only 10% of the Value — The “Last Mile” Problem Nobody Talks About

90% of enterprise AI projects fail—not because the technology is bad, but because people refuse to change. HBR and erp.today expose the Last Mile problem that costs companies millions every year.

26 Mar 2026 · 13 min read
Tags: AI ERP · Last Mile · Digital Transformation · Change Management · Enterprise AI · ROI

Where Did the Money Go?

Picture this: your company has just spent tens of millions of baht on a new ERP system with a full suite of AI modules. The vendor completed the implementation, delivered a polished demo, and management applauded — but six months later, employees are still buried in Excel, doing things the same old way.

Sound familiar?

Data from Talyx.ai makes it painfully clear: 90% of enterprise AI projects fail. Not just “underperform” — they fail outright, consuming budget without delivering the value that was promised. Even the projects that technically “succeed” often produce only 10–30% of the expected value.

The uncomfortable question few people ask is: where did the other 70–90% go?

The answer is not in the technology itself. It is not because the wrong vendor was selected. And it is not because the budget was too small. It lies in the “Last Mile” — the final stage where technology must turn into real human action inside the organization.


What Is the “Last Mile” — and Why Is It Where AI + ERP Breaks Down?

In logistics, the term “Last Mile” refers to the final leg of delivery — from the distribution center to the customer’s doorstep. It is usually the most expensive and difficult part, even though it covers the shortest distance.

In AI + ERP, it works the same way. The Last Mile is the point where AI makes recommendations, but people still have to decide whether they trust them enough to act.

In its March 2026 issue, Harvard Business Review put it bluntly: the Last Mile problem is significantly slowing AI transformation — not because AI is bad at thinking, but because organizations have failed to redesign decision-making processes to support AI.

erp.today goes even further: AI failure in ERP is almost never a technology failure. It happens because organizations deploy intelligence into their systems without redesigning how decisions are actually made.

Think about it: you install highly capable AI into your ERP system, but the people using that system still work the old way, rely on the same instincts, and do not trust AI-generated insights. The result? The AI works perfectly well — but no one listens to it.


90% Fail — But Not Because the Technology Is Bad

If you are an executive who recently approved an AI + ERP budget, these numbers may keep you awake at night:

  • 95% of AI pilots fail due to governance issues and lack of trust
  • 60% of AI projects without proper data readiness will be canceled by 2026
  • Only 48% of AI projects actually make it into production — the other half die in the pilot stage
  • IBM reports that 53% of executives say difficulties integrating AI with legacy systems caused projects to collapse

But more important than the statistics is the real root cause. Across report after report, the conclusion is the same: the issue is not that AI is not smart enough. The issue is that the organization never changed.

McKinsey estimates that AI-ERP integration issues are causing average losses of €500,000 per company in Europe, and the situation in Asia is no better. Organizations buy world-class technology — and get village-level outcomes.

Why does this happen? Because technology can change in a day, but organizational culture takes years. And most companies do not want to invest in something intangible.


A Key Warning Sign: Employees Go Back to Excel

How can you tell if your organization is facing a Last Mile problem? There are several warning signs, and they are often overlooked:

1. Employees go back to spreadsheets — This is the number-one red flag. When teams export data from the ERP system into Excel and redo the analysis themselves, it means they either do not trust the AI’s output or do not know how to interpret it.

2. AI recommendations are overridden as a habit — Not occasionally, but routinely. The procurement team receives an AI recommendation on how much to order, but the team lead insists on using the same numbers “we’ve always used,” ignoring the new data entirely.

3. The system is used only for compliance — Staff enter data into the system because they are required to, but real decisions happen elsewhere: in meeting rooms, on LINE groups, or inside a manager’s head.

4. “AI doesn’t understand our business” — This is a dangerous phrase. Sometimes it is true, but most of the time it is an excuse to avoid changing how work gets done. If you hear it often, ask a harder question: “Have we ever actually followed the AI’s recommendation and compared the results?”

5. ERP data no longer reflects reality — Once employees lose faith in the system, they also stop caring about data quality. The result is garbage in, bad recommendations out, and even less trust in the system — a vicious cycle that becomes harder and harder to fix.

What makes this especially dangerous is that these symptoms often do not show up on executive dashboards. The system may still report “Active Users: 500”, but in reality, 400 of them are only logging in to do the bare minimum. They are not using AI in any meaningful decision-making at all.


The 16% That Succeed — What Are They Doing Differently?

Amid all the bad news, there is one encouraging insight: according to Impact Advisors, the 16% of organizations that invest seriously in Last Mile adoption achieve above-average revenue growth.

So what are they doing differently?

They redesign processes, not just install technology

Successful organizations do not simply layer AI on top of existing workflows. They rethink the process from the ground up, asking what decision-making should look like when AI becomes part of the team.

For example, instead of asking AI to recommend something and then waiting for human approval every single time, they divide decisions by level. Low-risk decisions are automated. High-risk decisions still involve people.
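That tiering can be made explicit in code. The sketch below routes an AI purchasing recommendation to one of three approval paths by order value; the thresholds and field names are illustrative assumptions, not taken from any specific ERP or from the organizations described above.

```python
from dataclasses import dataclass

# Hypothetical risk thresholds for illustration only -- real cutoffs
# would come from the organization's own risk policy.
AUTO_APPROVE_LIMIT = 50_000   # below this value, act on AI output directly
REVIEW_LIMIT = 500_000        # above this value, the decision stays fully human

@dataclass
class Recommendation:
    item: str
    order_value: float  # monetary value of the AI-suggested order

def route_decision(rec: Recommendation) -> str:
    """Route an AI recommendation to the right approval tier."""
    if rec.order_value < AUTO_APPROVE_LIMIT:
        return "auto_execute"          # low risk: automated
    if rec.order_value < REVIEW_LIMIT:
        return "human_review"          # medium risk: human approves or overrides
    return "human_decision_ai_input"   # high risk: human decides, AI informs

print(route_decision(Recommendation("packaging film", 12_000)))   # auto_execute
print(route_decision(Recommendation("new production line", 2_000_000)))
```

The point is not the thresholds themselves but that they are written down: everyone can see which decisions the AI owns and which it merely informs.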

They build trust before they scale

Instead of deploying AI across the entire organization at once, they start with small, open-minded teams. Those teams see real results first, then become internal advocates to other teams.

People always trust colleagues more than vendor presentations.

They measure behavior, not just deployment

Their KPI is not “100% system installation completed.” It is “what percentage of decisions are actually using AI-driven insights?” That is a completely different metric. A deployed system that no one uses is no different from a system that was never deployed.

They invest in people as much as technology

Deloitte data shows that most employees still lack practical experience using AI. Organizations that succeed address this with training that goes beyond button-clicking. They focus on changing mindsets — helping people understand that AI is not a threat coming to replace them, but a tool that helps them do their jobs better.


How to “Close the Last Mile” in Thai Organizations

In the Thai context, the Last Mile problem comes with additional layers: hierarchical culture, deference, and fear of making mistakes all make the challenge even harder.

1. Start with the pain point, not the technology

Do not begin with the question, “What AI should we use?” Start with, “Which process is causing people the most frustration?” Then look at how AI can help solve that specific problem.

When people see that AI is addressing something they genuinely struggle with, resistance drops dramatically.

2. Redesign the decision framework

Be explicit about which decisions AI can make on its own, which decisions require human review, and which decisions must remain fully human but use AI as an input.

Ambiguity here is one of the main reasons people choose not to use AI at all. If they do not know who is accountable when something goes wrong, they will avoid the risk.

3. Create a feedback loop people can see

Let teams compare outcomes: “What happens if we follow the AI recommendation?” versus “What happens if we stick to our old instincts?” When people repeatedly see real numbers, trust starts to build naturally.
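A minimal version of that feedback loop is just a log and a comparison: record both the AI's number and the human's chosen number, then score each against what actually happened. The figures below are invented for illustration and the error metric (mean absolute error) is one simple choice among many.

```python
def mean_abs_error(predicted, actual):
    """Average absolute gap between a forecast and reality."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

# One quarter of demand decisions (units): AI suggestion vs. human override
ai_recs    = [120, 95, 140, 80]
human_nums = [150, 150, 150, 150]   # "the numbers we've always used"
actuals    = [118, 102, 135, 76]

ai_err    = mean_abs_error(ai_recs, actuals)
human_err = mean_abs_error(human_nums, actuals)
print(f"AI error: {ai_err:.1f} units, gut-feel error: {human_err:.1f} units")
```

When a team sees a comparison like this every month, trust stops being a matter of opinion.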

4. Do not go Big Bang — use a phased approach

Rather than deploying AI across the company at once, start with the department that has the highest readiness. Prove the outcome there, then expand. This reduces risk, lowers resistance, and creates internal success stories that help drive wider adoption.

5. Measure what actually matters

Stop measuring “number of users who logged in” and start measuring:

  • The rate of decisions that use AI-based insights
  • Decision-making time (is it actually decreasing?)
  • Decision accuracy (is it improving?)
  • How often AI recommendations are overridden (is it happening too much?)
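These behavioral metrics can all be computed from a simple decision log. The sketch below assumes a hypothetical log schema (`used_ai`, `overridden`, `minutes`); it is not a real ERP export, just a shape such a log could take.

```python
# Hypothetical decision log: one record per business decision made this period.
decisions = [
    {"used_ai": True,  "overridden": False, "minutes": 12},
    {"used_ai": True,  "overridden": True,  "minutes": 30},
    {"used_ai": False, "overridden": False, "minutes": 45},
    {"used_ai": True,  "overridden": False, "minutes": 10},
]

total = len(decisions)
ai_used = [d for d in decisions if d["used_ai"]]

# Share of decisions informed by AI, and how often its advice was rejected
ai_usage_rate = len(ai_used) / total
override_rate = sum(d["overridden"] for d in ai_used) / len(ai_used)
avg_minutes   = sum(d["minutes"] for d in decisions) / total

print(f"AI usage: {ai_usage_rate:.0%}, override rate: {override_rate:.0%}, "
      f"avg decision time: {avg_minutes:.0f} min")
```

Tracked over time, these numbers reveal the Last Mile directly: a flat usage rate or a climbing override rate is the Excel exodus showing up in data before it shows up in results.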

Conclusion — Do Not Blame AI If You Refuse to Change How Work Gets Done

If you buy the best car in the world but still ride a horse to work, you will never get the benefit of that car.

It is the same with AI + ERP. The technology is not the problem. People and processes are.

The 90% that fail do not fail because they bought the wrong tools. They fail because they never changed the way they work to align with the new technology. The 16% that succeed do not succeed because they spent more money. They succeed because they invested in the Last Mile — in people, in process, and in organizational culture.

So the next time a vendor pitches you on how AI will transform your business, ask them this: “How exactly will you help us change the way people work inside our organization?”

If they cannot answer that, you are about to waste a lot of money.


Looking for advisors who understand both technology and organizational change? Contact the Enersys team to assess your organization’s AI readiness — before you spend millions more on a system no one actually uses.

