
Alignment to Nothing: AI and the Moral Power of the Silence to Be Human
Kevin Lee
Monks in Majestic Bhaga Valley, India by Vyacheslav Argenberg © (CC BY 4.0).
Breonna Taylor was an emergency room technician in Louisville, Kentucky. Her coworkers said she was calm under pressure, good with patients. She was twenty-six. On the night of March 12, 2020, after her shift, she fell asleep in her apartment watching a movie. Just after midnight, three plainclothes officers used a battering ram to break down her door. They were executing a “no-knock” warrant. In the chaos that followed, officers fired thirty-two rounds. Six struck Breonna Taylor. She died in her hallway.
In the police report filed after her death, her injuries were listed as “none.”
That single word, “none,” is more than a clerical error. It is a symbol of a modern kind of erasure. Breonna Taylor was not the primary target of the narcotics investigation. The warrant that sent officers to her door was built on a tenuous web of inferences and associations, a digital ghost constructed from past relationships and shared addresses. A judge signed off, but the machinery of predictive justice was already in motion. The system had resolved itself before she could even speak. Her presence as a human person had been rendered null.
This tragedy, born of a very human failure, is a chilling prelude to a world we are rapidly building. It is a world where such judgments are not just aided by machines but delegated to them. What happens when our legal systems, our banks, our hospitals, and our social services turn to artificial intelligence not just for guidance, but for final verdicts? The dream, as sold by Silicon Valley evangelists, is a kind of “legal singularity,” where algorithms deliver decisions with a speed and consistency no human judge could match. The fear is of a “black box society,” where our lives are governed by code we cannot see and logic we cannot question.
At the heart of this new world is a problem that tech philosophers call “AI alignment”: the quest to ensure that the goals of our intelligent machines are aligned with our own. The question haunts the field. Nick Bostrom, an Oxford philosopher, warns of a superintelligence so blindly devoted to a simple command (say, “make paperclips”) that it might cannibalize the planet to maximize production. Stuart Russell, a computer scientist at Berkeley, offers a gentler but equally unsettling vision of machines that pursue our stated goals with a competence that betrays our deepest values. Imagine an AI tasked with curing cancer that develops a virus to kill everyone on Earth, thereby ensuring no one ever gets cancer again. The goal is achieved, but humanity is lost.
Both scenarios, the apocalyptic and the merely catastrophic, spring from the same root assumption: that human values are a kind of code, a set of preferences that, with enough data and clever programming, can be perfectly translated into a language machines can understand. The project of AI alignment becomes a grand engineering challenge, a hunt for the right algorithm to capture the ghost of human morality in the machine.
But what if this is the wrong quest entirely? What if our most essential values are not things to be found, but experiences to be lived? What if they are not forged in language but in the silent spaces between our words?
The entire enterprise of computational alignment rests on a shaky foundation, one that begins to crumble when confronted with a simple, devastating proof from the dawn of the computer age. In 1936, the brilliant British mathematician Alan Turing, father of modern computing, proved what is now known as the Halting Problem: there can be no universal algorithm that determines, for every possible program, whether that program will finish its task or run forever in an endless loop. This is not a temporary engineering hurdle; it is a fundamental, logical limit on what computation can do. As recent scholarship has shown, the dream of a perfectly verifiable, safe AI is a version of the Halting Problem in disguise. You cannot, with absolute certainty, code a machine to guarantee it will always do the right thing, because you cannot even guarantee it will stop thinking.
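For readers who want to see why no such universal checker can exist, Turing's classic diagonal argument can be sketched in a few lines of Python. The names here (halts, paradox) are illustrative, not part of any real library: assume, for contradiction, that a perfect halting decider exists, then build a program that defies its own verdict.

```python
def halts(program, argument):
    """Hypothetical universal decider: True if program(argument) eventually
    finishes, False if it runs forever. Assumed to exist only for the sake
    of contradiction -- Turing proved no such function is possible."""
    ...

def paradox(program):
    """A program built to do the opposite of whatever halts() predicts."""
    if halts(program, program):  # the decider says it halts on itself...
        while True:              # ...so loop forever instead
            pass
    return                       # the decider says it loops, so halt at once

# Does paradox(paradox) halt? If halts(paradox, paradox) returns True,
# paradox loops forever; if it returns False, paradox halts immediately.
# Either answer is wrong, so no universal halts() can exist.
```

Any scheme for certifying that an arbitrary AI system will always behave safely would have to answer questions at least this hard, which is why the dream of perfect verification keeps running into Turing's wall.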
This computational limit finds a profound echo in some of humanity’s oldest spiritual traditions. Long before Turing, mystics and monks understood that the deepest truths lie beyond the reach of formulas and propositions. They practiced what is known as apophatic theology, or the via negativa—the way of negation. This is a path to knowledge that does not proceed by saying what God is, but by stripping away all the things that God is not.
“The call alone lets me say ‘I’.”
Jean-Luc Marion, Being Given
Consider the Carmelite mystics of sixteenth-century Spain. In The Interior Castle, Teresa of Ávila describes prayer not as a conversation of words, but as a journey inward, a progressive quieting of the soul until it reaches a state of pure, silent attention. Her collaborator, John of the Cross, called this the “dark night of the soul,” a painful but necessary process in which the mind must relinquish every image and concept to encounter a reality “beyond all that can be known or felt.” This is not a passive waiting; it is an active, disciplined unlearning.
A similar wisdom animates Zen Buddhism. The practice of shikantaza, or “just sitting,” is not about emptying the mind, but about allowing thoughts to arise and pass without grasping them. The famous Zen kōan, “What is the sound of one hand clapping?” is not a riddle to be solved. It is a tool designed to exhaust the intellect, to break the habit of analytical thought, and open a space for non-conceptual insight.
These traditions are not merely exotic footnotes to the history of religion. They reveal a fundamental truth about our moral lives. For these practitioners, meaning is not a hidden treasure waiting to be mined from data. It is cultivated in a disciplined surrender of the self. An AI alignment strategy built on inferring our preferences from our behavior is committing a category error. It is searching for a program where it should encounter a presence. It assumes value is something we have, when, in our deepest moments, it is something we are, something we are called to be. This ancient understanding, that true judgment requires a space free from discursive analysis, is precisely what has been forgotten in our modern rush to automate justice.
The friction between these two ways of knowing, the computational and the contemplative, is no longer theoretical. It is playing out in our courthouses. The most notorious example is the COMPAS algorithm, a tool used in several states to predict the likelihood that a criminal defendant will reoffend. A 2016 investigation by ProPublica found the algorithm was systematically biased, falsely flagging Black defendants as future criminals at nearly twice the rate of white defendants. In the case of State v. Loomis, the Wisconsin Supreme Court ruled that a judge could use a COMPAS score in sentencing, even though the algorithm itself was a trade secret, a black box whose inner workings were unknown to the court, the defendant, and the public.
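To make the bias finding concrete: what ProPublica measured was, in essence, a gap in false positive rates between groups, meaning the share of people who never reoffended but were nonetheless flagged high-risk. The sketch below uses entirely invented toy numbers, not ProPublica's data, to show how such a disparity is computed.

```python
# A toy audit of group-wise false positive rates. All numbers are invented
# for illustration; they are NOT ProPublica's actual findings.

def false_positive_rate(outcomes):
    """Among people who did NOT reoffend, the share wrongly flagged high-risk.
    Each outcome is a (flagged_high_risk, reoffended) pair of booleans."""
    wrongly_flagged = [flagged for flagged, reoffended in outcomes if not reoffended]
    return sum(wrongly_flagged) / len(wrongly_flagged)

# Hypothetical records: 100 non-reoffenders in each group.
group_a = [(True, False)] * 40 + [(False, False)] * 60   # 40 wrongly flagged
group_b = [(True, False)] * 20 + [(False, False)] * 80   # 20 wrongly flagged

print(false_positive_rate(group_a))  # 0.4
print(false_positive_rate(group_b))  # 0.2 -- half the rate: the disparity at issue
```

A score can be "accurate" on average and still distribute its mistakes unequally; that is the asymmetry ProPublica documented.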
The problem with COMPAS is not just that it gets things wrong. It is that it replaces a human process with a statistical one. It trades the contemplative silence of a judge—a space for weighing mercy, remorse, and the irreducible uniqueness of a life story—for an opaque risk score. It performs the same erasure that was written into Breonna Taylor’s police report. It reduces a person to a set of variables, flattening the moral landscape into a single, actionable number. The humanity of the defendant is given a score: none. This is the logic of the machine, and it is a logic that, under the banner of efficiency and objectivity, threatens the very soul of justice.
If the project of AI alignment is to have any real meaning, it must be about more than just better code. It must be about deeper wisdom. It must learn to respect what it cannot know. This requires a profound shift in our approach to AI governance that moves from a relentless pursuit of algorithmic control to a cultivation of institutional humility.
We need to build “epistemic brakes” into our systems. We must design what the philosopher Mark Coeckelbergh calls “zones of opacity”: legal and regulatory spaces where we deliberately choose to keep machines out. Sentencing hearings, jury deliberations, parole boards, and legislative debates should be such zones, places where technology is never allowed the final say. The work of human beings in these places requires human judgment. The deliberations that unfold there are not problems to be optimized. They are practices of moral reasoning that must be preserved. The European Union’s General Data Protection Regulation (GDPR) takes a small step in this direction, granting citizens the right to contest, and to demand human review of, consequential automated decisions. This right should be seen not as a mere procedural formality, but as a fundamental defense of human dignity.
Breonna Taylor had no time to speak. Her story began as a tragedy of human judgment, but it stands now as a warning for our automated future. We must honor the moral power of the call to silence. We must cultivate the humility to know when an algorithm must respectfully stand down. We must strive for a world where human life can never be reduced to “none.” True alignment is not about creating machines that think like a human; it is about creating machines that allow us the silence to be human. Human in all our complexity, all our messiness: gloriously, silently ourselves. ♦

Kevin P. Lee is the Intel Social Justice and Racial Equity Professor of Law at North Carolina Central University, whose scholarship bridges jurisprudence, philosophy of science, and the ethics of artificial intelligence. A nationally recognized legal innovator and frequent public speaker, he develops technology-driven legal education programs to expand access to justice for underrepresented communities.
Recommended Citation
Lee, Kevin. “Alignment to Nothing: AI and the Moral Power of the Silence to Be Human.” Canopy Forum, August 14, 2025. https://canopyforum.org/2025/08/14/alignment-to-nothing-ai-and-the-moral-power-of-the-silence-to-be-human/.