Avery Intelligence

Why 85% of AI Projects Fail (And How to Be in the 15%)

8 min read

Every AI conference shares the same sobering stat: 85% of AI initiatives fail. Gartner says it; Harvard Business Review confirms it. After years of shipping AI products and watching both successes and spectacular failures, I've learned something important: it's not AI technology that's failing us so much as the absence of basic product management.

The Convenient Excuses

When AI projects fail, teams more often than not reach for familiar technical excuses: "We need better data." "The model needs to be more accurate." "We can't trust LLMs because of hallucinations."

That hasn't been our experience with LLMs at all. The technology is ready to be used today, but a 99% accurate solution to the wrong problem has 0% business value. I recently watched a team spend six months perfecting an LLM that could answer any company policy question. Impressive, except employees didn't have policy questions - they needed approval workflows. One user interview could have saved them half a year.

The Real Problem: Skipping the Fundamentals

The pattern is predictable. Teams get so excited about implementing GPT-4 or building RAG architectures that they forget to ask if users actually want what they're building. Six months of development before the first user conversation. Demo day arrives, executives applaud, then... nothing.

I've seen teams debate for months between different LLMs while never talking to a single user. The best model is the one that solves a validated user problem.

What Actually Works

Start where every successful product starts: with users. Before writing code or calling APIs, shadow users doing their current tasks. Map their actual workflows. Find where AI reduces friction instead of adding features.

Your first AI implementation should be embarrassingly simple. I helped a financial services team that wanted an all-knowing AI analyst. User research revealed people just needed help writing email summaries of reports.

The Bottom Line

The 85% failure rate is what happens when teams build AI products without product discipline. The AI revolution is real, but it won't be won by those with the best models - it'll be won by those who remember that even the most sophisticated AI is worthless if users won't use it.

Three questions determine your fate:

  • When did you last watch a user try to complete their task?
  • What specific workflow does your AI improve?
  • How do you measure user success, not model performance?

If you can't answer these immediately, you're probably heading for the 85%. But now you know how to change course.


We're Still in the Terminal Era of AI Interfaces

Remember when using a computer meant memorizing DOS commands? When you had to know exactly what to type to make anything happen? That's where we are with AI today. Open ChatGPT's blank text field and you're met by a blinking cursor that might as well be a command prompt from 1985.

We've built incredibly powerful AI systems and then given users the equivalent of a terminal window to access them. No wonder AI products struggle with churn - we're asking normal humans to think like engineers.

The Blank Slate Problem

Traditional software guides you. Buttons show what's possible. Menus reveal options. Error messages tell you what went wrong. Most AI products today show you a text box. Good luck.

Teams build sophisticated RAG systems, fine-tune models for months, then present users with... a chat interface. Six months later, adoption is near zero. The technology worked perfectly. The interface didn't.

What Actually Works

The breakthrough AI products aren't building better chatbots - they're hiding the chat entirely.

GitHub Copilot doesn't ask you to describe what code you want - it watches you type and suggests completions. No prompting required. Developers don't even realize they're "using AI"; they're just coding faster.

Notion AI appears right in your document when you type "/". It's not a separate AI experience, it's just another formatting option. Users don't have to context-switch or learn a new interface.

Midjourney still has text input, but they added sliders for chaos, stylization, and quality. Complex creative parameters became simple visual controls. Suddenly, non-technical artists could create professional imagery without mastering prompt engineering.
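To make the slider idea concrete, here's a minimal sketch of how visual controls can compile down to text parameters. The flag names mirror Midjourney's documented `--chaos`, `--stylize`, and `--quality` parameters, but the function, ranges, and defaults below are illustrative assumptions, not Midjourney's actual implementation:

```python
def build_prompt(subject: str, chaos: int = 0, stylize: int = 100,
                 quality: float = 1.0) -> str:
    """Translate slider positions into a Midjourney-style prompt string.

    Flag names follow Midjourney's documented parameters; the ranges and
    defaults here are illustrative, not the product's real internals.
    """
    # Clamp slider values so the UI can never emit an out-of-range flag.
    chaos = max(0, min(100, chaos))
    stylize = max(0, min(1000, stylize))
    quality = max(0.25, min(1.0, quality))
    return f"{subject} --chaos {chaos} --stylize {stylize} --quality {quality}"

# A slider UI calls this on change; the user never types a flag.
print(build_prompt("a lighthouse at dusk", chaos=30, stylize=250))
```

The design point is that the prompt string becomes an internal serialization format: users manipulate familiar controls, and the "prompt engineering" happens behind the scenes.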

The pattern is clear: successful AI products don't make users work to discover value. They bring intelligence to where users already are, in forms they already understand.

What This Means for Builders

Stop defaulting to chat interfaces. Start with these questions instead:

  • Where does the user already spend time? Put AI there.
  • What decision are they trying to make? Give them controls, not a text box.
  • What would this look like if the user never had to type anything?

I've started approaching every AI project with this question: "Would my mom understand how to use this in 10 seconds?" If the answer involves teaching her to write better prompts, I'm building for the wrong era.

The Terminal Era Won't Last

Today's AI interfaces will look as primitive to future users as DOS commands look to us. The winners will be whoever figures out the GUI equivalent for AI – the interface breakthrough that makes artificial intelligence accessible to everyone, not just those willing to learn prompt engineering.

Furthermore, the next generation of AI products won't feel like AI products at all. They'll just be products that happen to be intelligent. No prompting, no chat windows, no "talking to the computer." Just tools that understand context and intent without being explicitly told.

The terminal era of AI is ending. The question is: are you building for what's next, or are you still asking users to type commands into a box?

© 2025 Avery Intelligence