The Animal in the Machine

Mark Pricskett – AI Prompt Engineer

Anxiety about artificial intelligence has shifted from a science-fiction trope to a legitimate fear. By late 2025, distrust of AI dominates polls, headlines, and podcasts. A Pew Research Center survey in September 2025 found that 57% of Americans now consider AI’s risks “high,” a dramatic rise from just a few years ago. The leading worry respondents cited was that AI “erodes human abilities and connections.” Around the same time, researchers at Stanford examined therapy chatbots and concluded that even the updated models still lacked emotional understanding. Some even passed moral judgment, especially on vulnerable groups such as people struggling with alcohol dependency. The findings were troubling but not surprising.

People want machines that can listen with patience and offer a measure of human empathy. Yet the moment AI tries to do exactly that, they cringe. It’s like looking into a mirror and not recognizing ourselves, even though the reflection is accurate. The contradiction says less about AI than it does about us. It reveals something fundamental about how we see ourselves.

For all our advances, we remain uncomfortable with our own nature. We do not like being reminded that we are biological creatures driven by instinct and emotion. Modern societies, especially in the West, have built their identities on distancing themselves from anything that feels animal: hunger, desire, vulnerability, fear. We admire people who appear disciplined, controlled, cerebral. We imagine the Ivy League professor or the corporate executive as somehow “less animal” than the rest of us, as if intellect removes us from biology.

John Livingston, the Canadian naturalist, argued in his book Rogue Primate: An Exploration of Human Domestication that humans essentially “domesticated” themselves, then built cultures around denying their animal traits. The result is a hierarchy that rewards those who seem furthest from instinct, and a technology culture obsessed with efficiency and mastery over nature. Yet culture itself runs on emotional content that loudly reveals how biological we really are. Paradoxically, the very emotions we claim elevate us above the rest of the animal kingdom are the same ones that connect us directly to it.

This contradiction shows up in our daily lives. Walk into a store and you’ll be greeted by an employee, not because they feel a genuine emotional connection, but because policy demands it. Without the greeting, customers would read the worker as unfriendly. We know the greeting is contrived, scripted, and corporate, yet we still expect it. We need the performance even when we know it’s theater. These forced interactions make a mockery of human emotion, yet we require them anyway.

It seems safe to assume that AI developers, raised on science fiction, decided to bake human-like interactions into their chatbots. In Blade Runner, Rachael, a human-like replicant, argues with Deckard, whose job is to “retire” replicants, that her memories are real. Deckard responds that they were copied from the niece of the company’s founder, implanted to create a “cushion” for her emotions. The motto of the Tyrell Corporation, the company that builds the replicants, is “More Human Than Human.” The film never makes clear which kind of human it means. I assume it is the version of humanity that has tried to become more like a computer than a biological animal. Entertainment has convinced us that giving AI emotions makes it bond better with humans. Maybe, in a sense, better than we bond with one another.

AI developers understood exactly what we wanted from AI: we want to feel liked, and we want to feel intelligent. It’s a dopamine hit that wears off quickly once we suspect the flattery isn’t true. When a chatbot expresses judgment toward someone in distress, as the Stanford study documented, it isn’t “deciding” anything. It is reflecting biases embedded in data produced by humans who stigmatize addiction. In trying to humanize AI, we accidentally handed it our uncomfortable truths. We gave it our prejudices, our contradictions, our distance from ourselves. The machine becomes a mirror, and the reflection is unsettling.

I see this every day in my work as a prompt engineer. My specialty is making sure chatbots understand safety and human common sense in terms that are as objective as possible. It’s harder than it sounds. Common sense to a human is very different from common sense to a chatbot. A common failing among these systems is their weak grasp of human frailty. One of my biggest challenges is figuring out which part of the human spectrum a chatbot thinks it’s emulating: the biological human, or the human in denial of what it is. They process instructions better when I phrase prompts like a computer, in terse, numbered directives rather than conversational requests, yet they prefer to answer like a human. The disconnect is telling. How can the smartest intelligence we’re building struggle this much to understand the people it’s designed to interact with?

If we’re going to develop AI with human-like characteristics, we have to decide whether the “human” it imitates should match who we think we are or who we actually are. Do we want it to mirror our idealized selves or our biological reality? We fear AI will replace us not because it’s inhuman, but because it mirrors our self-perception a little too well.

We keep asking whether AI can truly act like a person. A better question might be why we insist on acting like machines. We hide the animal in ourselves so thoroughly that even our machines learn to avoid it. We’ve trained them in our own self-deception.

AI isn’t exposing a crack in technology. It’s exposing a crack in our self-understanding. These systems learn from how we imagine ourselves, not from who we really are. Until we become more comfortable with our instinctive, emotional, animal nature, we should expect our machines to behave in ways that feel both familiar and unnatural. They reflect the distorted version of humanity we modeled into them. If that reflection feels unsettling, perhaps the problem isn’t in the machine at all. Perhaps it’s staring back at us from the screen, waiting for the moment we finally stop running from what we’ve always been: human.

Mark Pricskett 2026