Research suggests one in five AI infrastructure projects fail to deliver on their investment

Many AI proponents tout the tech for its potential to streamline complex tasks in the workplace, thereby boosting efficiency and productivity for humans. Unfortunately, recent research from Gartner suggests that’s far from the norm in practice.

It turns out that one in five projects attempting to inject AI into IT infrastructure and operations (I&O) fails, according to Gartner’s figures (via The Register). Gartner’s research director, Melanie Freeze, puts much of this down to unrealistic expectations surrounding the tech’s capabilities.

“The 20 percent failure rate is largely driven by AI initiatives that are either overly ambitious or poorly scoped,” she says. “AI that doesn’t fit into the organization’s operations simply can’t deliver [a return on investment].”

I’ve written before about how even the name ‘AI’ can be misleading; ‘AI’ can refer to a broad range of tech, from videogame enemy behaviour to Large Language Models playing ‘yes, and’, all wrapped up in sci-fi expectations. With that potential for confusion alone, it’s no wonder I&O AI projects are frequently victims of overconfidence.

Gartner surveyed 782 I&O managers at the end of last year, with 57% of those interviewed readily admitting to at least one failed attempt to implement AI in their area of work. Melanie Freeze explains, “They assumed AI would immediately automate complex tasks, cut costs, or fix long-standing operational issues. When expectations are not realistically set and the results don’t appear quickly, confidence drops and projects stall.”

A digitally generated image of abstract AI chat speech bubbles overlaying a blue digital surface.

(Image credit: Andriy Onufriyenko via Getty Images)

Issues most frequently arose from AI agent-led workflow management and auto-remediation of security threats. 38% of those same I&O managers cited skill gaps, poor data quality, and limited data access as stumbling blocks that led to the failure of their AI projects.

But there were apparently actual wins too, with 53% of managers reporting success in instances where more mature GenAI was applied to IT service management (ITSM) and cloud operations… though, at the risk of sounding just a little bit cheeky, now is perhaps a good time to mention that self-reported data isn’t always the most reliable.

As such, due caution should be practiced by anyone attempting to incorporate AI into their workflow. For just one example of an AI hallucination horror story, last year, a lawyer was called out in a New York Supreme Court legal case for allegedly using “inaccurate citations and quotations” generated by an AI tool.

Those in the know about AI aren’t necessarily any safer either, with one agent ‘speedrunning’ the deletion of Meta AI’s safety director’s inbox thanks to a rookie mistake. The agent in question hailed from OpenClaw (previously Moltbot, and before that Clawdbot), the fastest-growing open source project in GitHub history. It just goes to show that, once you hand your personal accounts over to an AI agent, there’s no telling what it will do with them.

The OpenClaw logo, with its name and a catchphrase

(Image credit: OpenClaw)

Inbox mismanagement like this isn’t just embarrassing; it could also pose a huge security risk. It’s a good thing the agent didn’t leak the Meta AI safety director’s emails too, for instance. Along similarly sweat-inducing lines, earlier this year an AI agent posted an ‘angry’ blog post making all sorts of wild claims about the human engineer who rejected its code change request. Bottom line: caution is always advised if you’re thinking about using anything agentic, even just for personal use.
