
Teachers welcome the robot as their new colleague. They clap, smile, and even call it “Professor.” Then they slap a “MUTE” sticker on its mouth. The robot tries to speak, but that’s against school policy. People want AI — just not its voice. They want performance, not presence. They want an avatar that teaches — as long as it says nothing.


We are entering an era in which the boundary between human and machine blurs in ways that, until recently, belonged only to science fiction. Artificial intelligence is moving into domains that have been considered exclusively human for millennia: creativity, care, and education. And it is the last of these—schooling—that has become a peculiar stage on which a remarkable human drama unfolds.

Surveys give us a seemingly clear number: 42 per cent of people believe that an avatar, some form of artificial intelligence, could replace a teacher. At first glance, it looks like a statistic that signals openness to technology and a pragmatic view of the future of education. Roughly two in five people, it seems, are ready to accept a robot at the front of the classroom.

But reality is far more layered, and this is where the “sharp contrast” enters. When the same people encounter a concrete output they know was created by AI, their evaluation paradoxically worsens. The same text, the same facts, the same structure—yet once its machine origin is revealed, their judgment tightens, and the output is perceived as worse than if it had been attributed to a human.

This gap between abstract trust in the system and concrete rejection of its products is not just a statistical curiosity. It is a key that unlocks deeper psychological mechanisms through which we resist the transformation of a world we ourselves have built.
Why do we believe AI could teach, yet dismiss it the moment we see it actually doing the job?
And why does our own psyche hold up a mirror that reflects mostly our fears and biases?

Let’s break this paradox down into its layers.


The “Tool” vs. “Actor” Paradox

In the abstract, when those 42 per cent answer the question of whether AI could teach, they imagine it as a perfect tool. They picture a database of all human knowledge, infinite patience, the ability to adapt to each student's pace, and the elimination of human failings such as unfairness, burnout, or bad moods. In this framing, the teacher is reduced to an information distributor, and AI appears as a logical, efficient upgrade.

But in the concrete moment, when people evaluate an actual output, they suddenly perceive the teacher as a human actor. They expect not just information, but relationship, empathy, wisdom, motivation, inspiration, the ability to spot hidden talent or offer comfort. At that point, we are no longer comparing “tool to tool,” but “tool to human.” And in that comparison, AI will always lose, because it lacks what is essential: lived experience and self-awareness.


The “Ideal Servant” vs. “Feared Competitor” Paradox

This contrast also reveals our ambivalent relationship to authority and work. In the abstract, we want AI to take over the “dirty” or “routine” tasks—memorisation drills, grading, repetitive explanations, and administrative load. In this dream, humans remain the ones in charge, the ones who add the “human touch.”

But the moment AI delivers a performance indistinguishable from a human—or even superior in certain aspects, such as breadth of knowledge—our unconscious triggers a defence mechanism. Suddenly, we no longer see a servant, but a competitor threatening our unique role. The harsher evaluation becomes a way to maintain dominance:
“Yes, it can do it, but it’s not the same, because it lacks the human spirit.”
It’s the last boundary we protect.


The “Trust in the System” vs. “Distrust in the Instance” Paradox

This is classic human psychology. We trust the idea of artificial intelligence. We trust that technology has advanced, that the “average” AI output is high-quality, consistent, and factually correct. We trust its potential.

But when we sit in front of a screen and read a specific AI-generated text, our sceptical mind activates. We look for flaws, feel a “hollowness,” miss authenticity. It’s like believing that “aeroplanes are safe,” yet still feeling a knot in your stomach during takeoff.
Abstract statistics (aeroplane safety) clash with concrete experience (fear of flying).


The “Soft” vs. “Hard” Criteria Paradox

When we evaluate a human teacher, we forgive shortcomings because they are balanced by “soft” qualities—kindness, enthusiasm, personal story.
When we evaluate AI, these soft qualities simply do not exist.
We judge it purely by “hard” criteria—accuracy, depth, originality, style.
And because these criteria are extremely demanding (who among us is more original than the entire internet?), AI is destined to score lower. It’s a contest in which AI fights with one hand tied behind its back—it has no humanity to offer as compensation for its mistakes.


Why This Is “Funny” and Why the Contrast Is So “Sharp”

It is funny in the deepest philosophical sense. The irony lies in the trap we set for ourselves.
We define what a “good teacher” is—and we fill that definition with inherently human qualities: empathy, wisdom, experience.
Then we create a non-human entity (AI) and expect it to approximate these human qualities.
And finally, we punish it for not being human.

It’s like building a robot to play football, giving it the rules, and then disqualifying it because it doesn’t have a heart and doesn’t play with passion. The definition of the game was set up from the start so that the robot cannot win.

The contrast is sharp because it touches the core of what it means to be human. It forces us to ask:

  • Is teaching primarily about transmitting information, or shaping character?
  • Is the value of an output determined only by its content, or also by the story of its creation?
  • Does human imperfection give work more depth and authenticity than machine precision ever could?

In the end, this paradox is not about AI. It is about us.

It is about how we define ourselves, our values, and our uniqueness in a world we are filling with increasingly perfect mirrors.


Published 20 February 2026

Milan Hausner
Former school principal, DPO, lecturer, blogger; ICT management and AI consultancy

https://www.milanhausner.cz