Spenser
Founder, Obelisk

Why AI Projects Fail

I stopped counting AI pilots after the first couple dozen fizzled. The failure rate everyone quotes, 90 to 95 percent, tracks with what I see in the wild: slick demos, enthusiastic steering committees, and then nothing in production.

The models are not the choke point. GPT-2 was good enough for half the experiments people are spiking today. The real drag lives in the plumbing: data access, ownership, incentives. MIT and BCG have published the same warning for years, but if you have ever sat through a post-mortem, you already know.

Memory Keeps Going Missing

Most enterprise AI still behaves like a goldfish. It handles a single ticket, forgets the conversation, loses the documents, drops the customer history. That is fine for a proof of concept and deadly in the back office. Teams end up standing a human next to the workflow just to stitch continuity back together.
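
What "remembering" looks like in practice can be unglamorous: persist the customer's history somewhere the next interaction can reach it. Here is a minimal sketch in Python; the table layout, class name, and field names are my own assumptions for illustration, not how any particular platform does it.

```python
# Hypothetical sketch: persist per-customer context between tickets so the
# next interaction starts with history instead of a blank slate.
import json
import sqlite3


class TicketMemory:
    """Tiny key-value store for conversation and document context."""

    def __init__(self, path="memory.db"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS context (customer_id TEXT PRIMARY KEY, payload TEXT)"
        )

    def load(self, customer_id):
        row = self.conn.execute(
            "SELECT payload FROM context WHERE customer_id = ?", (customer_id,)
        ).fetchone()
        return json.loads(row[0]) if row else {"history": [], "documents": []}

    def save(self, customer_id, context):
        self.conn.execute(
            "INSERT OR REPLACE INTO context VALUES (?, ?)",
            (customer_id, json.dumps(context)),
        )
        self.conn.commit()


# Usage: pull history before handling a ticket, append the outcome after.
memory = TicketMemory()
ctx = memory.load("cust-42")
ctx["history"].append({"ticket": "T-1001", "summary": "refund approved"})
memory.save("cust-42", ctx)
```

Nothing clever is happening there, which is the point: continuity is a plumbing problem, not a modeling problem.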

No Feedback Loop, No Improvement

Too many launches treat "go live" as the finish line. They deploy the model, clap, and move on. Three months later the edge cases pile up, nobody knows who is curating corrections, and the accuracy chart slides. MIT's Winning with AI study called this out in 2020; nothing has changed except the marketing gloss.
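
Keeping the accuracy chart honest can be as simple as logging every reviewer correction and watching the trend. A rough sketch follows; the class name, window size, and alert threshold are invented for illustration.

```python
# Hypothetical sketch: record model outputs against reviewer corrections and
# flag when rolling accuracy drifts, so "go live" is not the last measurement.
from collections import deque


class FeedbackLoop:
    def __init__(self, window=200, alert_below=0.90):
        self.recent = deque(maxlen=window)  # 1 = accepted, 0 = corrected
        self.alert_below = alert_below
        self.corrections = []  # kept for retraining or prompt updates

    def record(self, model_output, reviewer_output):
        accepted = model_output == reviewer_output
        self.recent.append(1 if accepted else 0)
        if not accepted:
            self.corrections.append((model_output, reviewer_output))
        return accepted

    def rolling_accuracy(self):
        return sum(self.recent) / len(self.recent) if self.recent else 1.0

    def drifting(self):
        return self.rolling_accuracy() < self.alert_below


loop = FeedbackLoop()
loop.record("approve", "approve")
loop.record("approve", "escalate")
print(loop.rolling_accuracy(), loop.drifting())
```

The hard part is organizational, not technical: someone has to own the corrections list and actually act on it.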

Internal Builds Lose Steam

I love an in-house tool, but AI programs chew through attention. Between vendor APIs, data contracts, compliance sign-offs, and stakeholder politics, the internal champion has to keep ten plates spinning. The minute they step away, the whole thing drifts. Vendors are not magic, but at least their job is to obsess over the boring parts.

Data Access Makes or Breaks It

The best model cannot help if it is working blind. I keep seeing teams wire an LLM to a stale export and call it a day. No context, no fresh signals, no shot. Give it real-time data, or do not bother.
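
The difference is whether the model sees today's records or last quarter's dump. Here is a sketch of the pattern; fetch_latest_records and call_model are stand-ins for whatever data layer and model client a team actually uses, not any specific vendor API.

```python
# Hypothetical sketch: assemble fresh context at request time instead of
# prompting against a stale export. fetch_latest_records and call_model are
# placeholders, not real vendor APIs.
from datetime import datetime, timezone


def fetch_latest_records(customer_id):
    # Stand-in for a live query against the system of record.
    return [{
        "order": "O-881",
        "status": "delayed",
        "as_of": datetime.now(timezone.utc).isoformat(),
    }]


def call_model(prompt):
    # Stand-in for whatever LLM client the team uses.
    return f"(model response to {len(prompt)} chars of context)"


def answer_ticket(customer_id, question):
    records = fetch_latest_records(customer_id)  # fresh signals, not a quarterly CSV
    prompt = f"Customer records: {records}\nQuestion: {question}"
    return call_model(prompt)


print(answer_ticket("cust-42", "Where is my order?"))
```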

Humans Still Carry Judgment

The point of automation is not to erase people; it is to stop wasting their time on janitorial work. The teams that win treat AI as a teammate. They keep the review lane tight, measure why interventions happen, and feed that back into the system without drama.
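
Measuring why interventions happen does not require a platform. One lightweight approach, sketched below, is to tag each human override with a reason code and review the counts; the reason categories here are made up for illustration.

```python
# Hypothetical sketch: tag each human intervention with a reason code so the
# review lane produces data, not just one-off fixes.
from collections import Counter

interventions = Counter()


def log_intervention(ticket_id, reason):
    # Example reasons: "missing_context", "policy_exception", "model_error"
    interventions[reason] += 1


log_intervention("T-1001", "missing_context")
log_intervention("T-1002", "missing_context")
log_intervention("T-1003", "policy_exception")

# The most common reasons point at what to feed back into the system first.
print(interventions.most_common())
```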

Those are the customers we enjoy the most at Obelisk. They are not chasing buzzwords; they are quietly replacing brittle playbooks with resilient workflows. That is our favorite kind of work.

References

MIT Sloan Management Review & Boston Consulting Group (2020). "Winning With AI." Research on why AI pilots stall without continuous learning loops.

MIT Media Lab, Project NANDA (2025). "Why Enterprise AI Implementations Fail." Field notes on back-office AI deployments struggling with context and maintenance.