We Need to Stop Anthropomorphizing AI
Ever since I started building AI products ten years ago, back when I was working on Kite, I’ve noticed a persistent trend in the way we frame AI to the world. The market keeps inventing metaphors that explain AI through a distinctly human lens. While that framing may help with understanding and adoption at first, I believe it ultimately limits what we build.
Let’s take a step back and look at the history of how AI tools have been framed and used.
The RPA Era: The First Wave of AI
The first wave of AI-powered (or pseudo-AI) products was largely built around robotic process automation (RPA). Many companies successfully positioned themselves as RPA providers, offering tools designed to automate repetitive tasks. But here’s the thing: RPA, as a concept, doesn’t generalize well.
Take UiPath as an example. While they’ve built a name for themselves, the reality is that no RPA solution today is truly self-onboardable or universally horizontal. Every implementation requires significant professional-services work to integrate and maintain. Even then, these systems remain vulnerable to messy data and the quirks and volatility of human intervention.
This was the first era of AI abstraction—a time when many companies claimed to be AI-driven but, in reality, were more about clever automation than actual artificial intelligence.
The Agent Era: A New Wave with Old Habits
Now, we’re in the era of AI agents, and yet we’re making the same mistake. This time, the mistake is even more glaring because we’re anthropomorphizing these tools—treating them as if they were human.
AI agents, particularly those leveraging large language models (LLMs), have incredible capabilities, but they are not people. Humans reason. Humans engage in true chain-of-thought processes. Just because an AI system shows you a clever “loading” message or mimics reasoning with well-placed prompts doesn’t mean it’s thinking like a human.
And yet, this insistence on modeling AI in human terms is holding us back. Rather than using these systems for what they are—stochastic abstractions capable of delivering transformative insights and automation—we patch them to look and act like humans. This approach limits their potential and leads to overly complex, brittle solutions.
Rethink the Process, Not the AI
What we should be doing instead is rethinking our business logic from the ground up to fit this new AI paradigm. At BEM, this is exactly how we approach the problem. Instead of forcing AI to mimic human processes or reasoning, we focus on redefining workflows to take full advantage of its stochastic and probabilistic nature.
For example, consider how some companies approach AI’s ability to browse the web. Instead of embracing the unique capabilities of LLMs, they design solutions that force AI to act like a human reading a webpage—down to simulating how humans “think.” These are patches on patches, built around an outdated metaphor that AI needs to behave like us to be effective.
But what if we flipped this? What if, instead of anthropomorphizing AI, we allowed it to guide us in redesigning the process itself? When you stop forcing AI to fit into human-shaped boxes, you unlock its true potential to build scalable, production-ready systems that don’t crumble under the weight of human expectations.
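To make the contrast concrete, here is a minimal sketch under stated assumptions: the function and field names are hypothetical, and the model call is abstracted as any callable that takes a prompt string and returns a string. Instead of simulating a human scrolling and reading a page element by element, it hands the raw page text to the model once and validates the structured result.

```python
import json

def extract_fields(page_text, llm):
    # "llm" is any callable mapping a prompt string to a response
    # string -- a stand-in for whichever model API you actually use.
    prompt = (
        "Extract the vendor, date, and total from the document below. "
        "Respond with a JSON object only.\n\n" + page_text
    )
    raw = llm(prompt)
    try:
        # Treat the model as a stochastic function: validate its output
        # rather than trusting it, and surface failures for a retry.
        return json.loads(raw)
    except json.JSONDecodeError:
        return None
```

There is no simulated clicking or pretend “reading” here: the page is just input text, and the model’s output is just data to validate.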
A Call to Action
So, next time you’re building with AI, try this: step back and look at the process you’re trying to automate. Instead of asking, How can I make AI act like a human?, ask, How can I redesign this process to leverage AI as a stochastic abstraction?
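One concrete way to leverage that stochastic nature, rather than fight it, is to sample the model several times on the same input and aggregate the answers, treating disagreement as a confidence signal instead of a bug. A minimal sketch (the sampling step itself is assumed and not shown):

```python
from collections import Counter

def vote(samples):
    # Aggregate repeated stochastic outputs into one answer plus an
    # agreement score; low agreement can route the item to a human.
    counts = Counter(samples)
    answer, hits = counts.most_common(1)[0]
    return answer, hits / len(samples)

# Example: five independent samples from the same classification prompt.
answer, agreement = vote(["approve", "approve", "reject", "approve", "approve"])
# answer == "approve", agreement == 0.8
```

A human-shaped process would ask the model once and demand it “reason carefully”; a stochastic-shaped process asks it five times and measures how much it agrees with itself.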
You’ll find that by letting go of the need to anthropomorphize, you can create far more scalable, flexible, and innovative systems. AI isn’t human—and it doesn’t need to be. It’s something entirely new. Let’s start designing like it.