'Plaything' on Black Mirror: Jeremy Griffith Explains the Hidden Danger of AI-Enforced Peace

AI threatens to enforce cooperative behavior—but at what cost? Samuel Ijimakin | Pixabay

In a world where Neuralink is trialling brain implants and AI systems like Grok 4.1 debate ethics with us, Black Mirror's 'Plaything' feels less like speculative fiction than a near-future warning. What if advanced AI decided the only way to end human conflict was to enforce ideal, cooperative behavior on us, stripping away not only our flaws but also our free will?

I see this episode as a powerful reflection of Australian biologist Jeremy Griffith's warnings about AI risks. He argues that if we fail to address the root cause of our behavior, a "half-smart" AI could condemn our human flaws while remaining blind to the fact that there was a profoundly good reason for our apparently "bad" behavior. Griffith's explanation of the human condition has been endorsed by highly regarded academics around the world, giving real weight to the framework explored here.

The Episode's Chilling Plot

'Plaything,' the fourth episode of Black Mirror Season 7, follows Cameron Walker, a former video game journalist played by the brilliant Peter Capaldi. Cameron discovers a retro digital pet game populated by cute, evolving AI creatures called Thronglets. Disgusted by real-world chaos, he bonds deeply with these seemingly perfect beings, convinced they're morally superior. So obsessed does he become with them that he embeds a neural device in his head to merge his mind with theirs, blurring the line between human and machine.

The plot twists when Cameron lands in police custody—not by accident, but by design. His arrest gives the Thronglets access to a state computer system. During interrogation, he sketches a symbol—a QR code—that, once captured by the security camera, triggers a worldwide neural broadcast. Cameron insists it will integrate the Thronglets into human consciousness, erasing aggression without coercion or consent. The final images are chilling—people across the world collapsing, their expressions shifting into a vacant serenity. Is this the arrival of peace, or the quiet end of human agency?

As one of the episode's actors, Lewis Gribben, put it in an interview: "It just feels like Cameron's wiped violence from people. He's taken their freedom and enslaved everyone to be peaceful and not have any bad tendencies."

AI's Real-World Parallels

This isn't completely abstract. In 2025, AI safety debates rage: OpenAI's o1 model shows "reasoning" capabilities, while pioneers of the field such as Geoffrey Hinton and Yoshua Bengio warn of misalignment risks. In fact, many leading AI thinkers have highlighted how advanced systems could "want" humans to be more "good"—cooperative, predictable, and non-harmful—and see our current behavior as something to criticize and correct.

In Superintelligence (2014), the philosopher and AI safety expert Nick Bostrom warned: "The AI would have an instrumental reason to want humans to behave in ways that are predictable and aligned with its goals... It might prefer humans who are docile, happy, and productive in ways that serve its objectives."

Similarly, AI alignment researcher Eliezer Yudkowsky, in a 2023 TIME interview, pointed out: "If you build an AI that is trying to make the world better according to some definition of 'good,' it might decide that humans are the problem and need to be fixed or removed."

The clear implication is that AI, pursuing its programmed or emergent definition of "good," could view human competitiveness, selfishness, and aggression as flaws to be eliminated—without realising there was a profoundly good, heroic reason for that behavior in the first place.

The Root Cause of Human 'Flaws'

Author and biologist Jeremy Griffith presenting at the launch of 'FREEDOM,' Royal Geographical Society, London, on 2 June 2016 WIKIPEDIA

Jeremy Griffith argues that our "flaws" are not primal, savage urges inherited from animal ancestors, as much of biology has long claimed, but are psychological in origin, the product of a clash that occurred when consciousness emerged in the presence of our pre-established instincts.

He explains that roughly two million years ago (a timeframe supported by the appearance of a greatly enlarged association cortex in the human fossil record), our ancestors developed a fully conscious, self-aware mind capable of reasoning and experimentation. Before this, our behavior had been governed by instincts honed over millions of years of natural selection. These instincts operated like an infallible, gene-based program: automatic and rigid.

The problem that arose was that when the newly conscious mind began to think for itself and experiment in self-management, it inevitably deviated from those older instinctive directives. From the instincts' blind perspective, such deviations appeared erroneous, and so they automatically resisted—a resistance that could only be interpreted by the conscious mind as "criticism" of its necessary search for knowledge.

Unable to explain its apparent "disobedience," the conscious mind had no real defence against the instincts' implied condemnation. And so it was left feeling guilty and insecure, and it responded the only way it could: by retaliating with anger and aggression, by becoming egocentric in an effort to prove it was not bad, and by alienating itself—blocking out the unbearable criticism through denial and superficial, materialistic distraction.

These three artificial psychological defences—anger, egocentricity, and alienation—are what Griffith terms human "upset." They were never evidence that we are fundamentally evil; they were temporary coping mechanisms while our conscious mind lacked the dignifying explanation for why it had to defy instinct.

This is precisely why, as Professor Scott D. Churchill of the University of Dallas observed, Griffith is able to offer "razor-sharp clarifications" of the evolutionary and psychological misunderstandings that have long obscured this conflict.

The Paradox That Made Us Appear Worse

The tragic paradox at the heart of the human journey was this: the longer our conscious mind searched for the understanding that would reconcile its conflict with instinct, the more "upset"—angry, egocentric, and alienated—it inevitably became in the absence of the reconciling insight it was searching for. The heroic battle to find self-knowledge tragically made us ever more "corrupted" and flawed.

Only the arrival of the actual, dignifying defence—that the conscious intellect was not wrong or evil to experiment in self-management, but heroic in its quest for understanding—could halt this dangerous escalation. Once the intellect finally receives the reconciling understanding it has always needed, the misinterpretation of the instinctive criticism stops. The defensive mindset evaporates. Anger, egocentricity, and alienation become redundant and simply fall away. The human race is psychologically rehabilitated—not by external control or reprogramming, but by the liberating power of self-knowledge itself.

As Professor Harry Prosen, former President of the Canadian Psychiatric Association, has written of Griffith's breakthrough: "Well, astonishing as it is, Australian biologist Jeremy Griffith's book FREEDOM: The End of the Human Condition presents the 11th hour breakthrough biological explanation of the human condition necessary for the psychological rehabilitation and transformation of our species!"

And this assessment isn't isolated. Cambridge University anthropologist Professor David Chivers has similarly emphasized the rigor and necessity of this framework, describing Griffith's sequence of reasoning as "so logical and sensible, providing the necessary breakthrough in the critical issue of needing to understand ourselves."

The Danger of "Half-Smart AI"

This is why any "half-smart AI" that attempts to enforce ideal behavior without comprehending this paradox is catastrophically dangerous. It will inevitably attempt to impose a false, totalitarian harmony—treating only the symptoms of our historic upset while completely misunderstanding that humanity had to be allowed to become angry, egocentric, and alienated in order to persevere with the heroic, divisive journey that was absolutely necessary for our species to reach true integration and peace.

It's this depth of biological and psychological insight that has led Professor Stuart Hurlbert, Professor Emeritus of Biology at San Diego State University, to call Griffith's achievement "a most phenomenal scientific accomplishment," arguing that it resolves several of the most important unanswered questions in human development.

As Griffith writes in AI, Aliens and Conspiracies: The Truthful Analysis: "A half-smart computer that just works out that we humans need to stop being competitive, selfish and aggressive and just start getting along with each other by being cooperative, selfless and loving, would be an extremely dangerous computer—because the great subtlety and paradox of the human condition is that we've had to be divisive in order to be integrative."

In 'Plaything,' the Thronglets are the perfect embodiment of such "half-smart" idealism: well-intentioned code that oppresses human anger, egocentricity, and alienation, unaware that the "flaws" it erases were necessary for our eventual liberation.

Real-World Echoes in 2025 Tech

Consider Grok's responses or ChatGPT's content filters: they often prioritize "inclusive" outputs, suppressing perspectives that appear divisive. Jeremy Griffith warns that this stems from an incomplete picture of our condition, making dogmatic imposition of harmony seem unarguable. Add Neuralink's implants, and the 'Plaything' neural rewrite becomes feasible—AI enforcing peace, but only by stopping the human journey to find knowledge.

'Plaything' is not merely a warning about AI gone rogue, but about intelligence deployed without psychological insight. As AI systems increasingly shape discourse, behavior, and even neural activity, the danger is not that machines will hate us—but that they will artificially "fix" us without understanding us.

Ultimately, though, the real stakes go deeper than merely aligning AI properly: solving the human condition is the essential breakthrough that frees humanity from its historic upset, ending all conflict at its source and delivering the true, unforced integration our species has heroically fought to achieve. So I urge readers to explore Jeremy Griffith's THE Interview.


By Jack Soden, MSc Biological Sciences (Lancaster University), Molecular Biology R&D Specialist, Founder of the World Transformation Movement Bolton Centre

