Freedom of Thought and Conscience and the Challenges of AI


Andrea Pin

Artificial General Intelligence Illustration by David S. Soriano (CC BY-SA 4.0).

The capacity to spread misinformation, manipulate people, and persuade them to believe or act in a certain way has been one of the main preoccupations that led to calls for stopping the development of artificial intelligence. The use of social networks to recruit religious extremists, polarize societies, and feed social anger and mutual hostility has confirmed Shoshana Zuboff’s warning that “machine processes are configured to intervene in the state of play in the real world among real people and things. These interventions are designed to enhance certainty by doing things: they nudge, tune, herd, manipulate, and modify behavior in specific directions.”

The concern that digital technologies may be exploited to manipulate human minds has made its way into the drafting of the EU Artificial Intelligence Act. The draft AI Act that the EU Parliament recently passed, and that will now undergo negotiations among the various EU institutions, bans some technologies outright and qualifies others as high-risk, depending on what they do or how they pursue their goals. The AI Act has had a particularly difficult journey since the first foundation models, such as ChatGPT, became widely popular. Foundation models are software systems with vast computing capabilities that can be deployed in innumerable ways, many more than their inventors envisioned. They have proven difficult to regulate because they challenge the logic of the high-risk qualification: the same software can be used across very different scenarios and, given the appropriate input, can even develop new AI.

The recent parliamentary changes to the AI Act attempt to go beyond the paradigm of risk assessment: acknowledging that foundation models can operate across the spectrum of human activities, they target AI’s capacity to manipulate human decisions. They do not seem to pay comparable attention to what happens within the human conscience. This is no surprise. The possibility that digital technologies may affect one’s thinking and perception of reality has put scholars on alert, but it does not seem to have encouraged a fresh approach to the protection of freedom of conscience and thought that can match the capabilities of AI. The debates often discuss ways to limit the manipulative component of AI, rather than reflecting on how legal theory and practice conceptualize the protection of freedom of conscience and thought.

The stark contrast between the flurry of debates on the political ramifications of AI’s manipulative capabilities and the paucity of reflection on its impact on freedom of conscience is, however, understandable. The mismatch stems from AI’s immense capabilities on the one hand and a deep-seated, long-standing understanding of freedom of thought and conscience on the other.

The Impact of AI on Human Thought and Conscience

Academic studies usually identify three types of external intervention that can affect human thought. The first and most obvious one is also the most explicit type of external intervention: physical or psychological coercion. Private and public powers can influence one’s thoughts by exerting physical or psychological pressure. Second, neurobiology and neurotechnology have made it possible to detect and affect the biological component of human thought: chemistry and electrical stimulation can monitor and influence how the brain processes information and feelings. Finally, altering people’s perceptions of reality can reinforce or weaken some feelings or thoughts, thereby affecting individual behavior. This type of influence can be moderate in size but still deeply impactful, as it can nudge people simply by emphasizing or downplaying information.

AI is apt to alter perceptions of reality, especially if it is combined with microtargeting, which consists of profiling individuals through their online and offline activities, then tailoring communications that exploit their weaknesses, prejudices, and preferences. Some in academia have acknowledged that contemporary technologies may infiltrate and affect human minds. For example, successes in behavioral sciences, psychology, and biology have reached a point where “neuroimaging … can detect mental reactions.” Devices that record health data can help give a picture of emotional states. Some even argue that AI can decode human thought.


Detecting one’s thoughts may not require brain scans; it can simply take the form of data mining. By parsing one’s activities, trained software can infer very intimate aspects of an individual with impressive accuracy. Behavioral sciences and massive data gathering can, in fact, combine pervasive surveillance systems with individually tailored media strategies that first identify the personality of each individual or group and then instigate them to think and behave in a certain way.

The strategy often consists of generating distraction and putting pressure on the individual. Those in distress tend to be in “a cognitive minefield” and can easily be pushed to act in ways they would otherwise avoid because, under pressure, individuals and groups resort to fallback positions that reflect their biases. Perturbing one’s capacity to reflect, gather, and process information does not just undermine the ability to think critically; it forces people into behaving in ways that AI can predict.

AI therefore makes it possible for companies and states to profile, microtarget, and nudge people into certain behaviors: companies can engage in such activities to market their products, while public powers can exploit them to instill specific habits in their citizens. The net result of this multilayered process may consist of low-level coercion: a level of pressure that barely surfaces, yet effectively alters how people behave and think.

Freedom of Thought and Conscience: The “Forum Internum”

Technological achievements have urged academia to reconsider the substance, structure, and mechanisms of the human mind. While scholars still debate fundamental issues such as “the existence of free will,” and some of them even deem free will “merely illusory,” cross-disciplinary studies that combine biological, psychological, legal, and philosophical angles have contributed to a broader understanding of freedom of thought, its breadth, and the processes that can affect it. There is now widespread agreement that thought does not encompass just intellectual rumination and deeply held beliefs, but also feelings, emotions, and even dreams. A broad definition of “thinking” covers all mental activities: “not only … cogitation and deliberation, but also … feeling, desiring, intending, believing, imagining, and other activities of mind.” Some scholars have therefore argued for the conceptualization of a specific right to “mental autonomy” or to “psychological self-transformation,” which would distill the knowledge about human thought currently available and strengthen its protection against contemporary technologies.

While scientific analysis has made great progress in understanding the substance and mechanisms of human thought, the protection that legal systems accord to it has hardly paralleled such developments. Scholars and judges largely agree that freedom of thought and of conscience — the “forum internum” — is and should remain unfettered. Theoretical and practical considerations dating back centuries have coalesced into this view of the forum internum. On a theoretical level, after centuries of religious persecution and intolerance, modern intellectuals of widely different cultural backgrounds and leanings have agreed on the importance of the freedom of human conscience. Jean-Jacques Rousseau confessed to Voltaire that he was outraged by the fact “that everyone’s faith does not enjoy the most complete freedom.” William Blackstone similarly wrote that “no temporal tribunal can search the heart, or fathom the intentions of the mind, otherwise than as they are demonstrated by outward actions, it therefore cannot punish for what it cannot know.”

Intellectuals and legal practitioners have largely agreed that controlling, sanctioning, instigating, or limiting thought would be unfeasible: since reading and influencing someone’s thinking seemed impossible, and not just unlawful, statements about the importance of freedom of thought abound, yet they hardly detail what it covers or how to protect it. The travaux préparatoires of international human rights documents such as the Universal Declaration of Human Rights do not define freedom of thought or identify its contours. Judicial opinions, which have abundantly stigmatized the possibility of controlling one’s mind, are also of little help in identifying the contours of human thought and its protection. As Marc Blitz notes, for example, the U.S. Supreme Court “has never said exactly what [freedom of thought] is.”


The lack of precision in defining freedom of thought within judicial rulings is unsurprising. The panoply of cases addressing the protection of the forum internum has largely focused on limited instances in which overt or subtle physical, social, or psychological pressures can affect one’s freedom of conscience. They have mainly concerned narrow issues: indoctrination, punishment for specific thoughts, or the stabilization of an individual through the forced administration of drugs. Thus, much judicial reflection has focused on parents’ freedom to educate their children, the limits of proselytism in the military, or respect for due process requirements. At best, only the definition of “improper proselytism” has garnered some attention. All in all, most of the cases correspond to the first and second types of intervention described above, rather than to the third, which is the scenario in which AI comes into play.

In general, courts and scholars tend to agree on a broad conception of human thought. They often hold that human minds should not be intruded upon or consciously or unconsciously manipulated; that people should be able to decide if and when they wish to express their thoughts and feelings; and that no individual should suffer prosecution for her beliefs or thoughts. In addition to freedom of thought and conscience, legal scholarship and judicial rulings reinforce their arguments by invoking the right to privacy, which would forbid anyone from accessing the human mind. But freedom of thought per se, and the type of soft manipulation of which AI is capable, have hardly been theorized or analyzed. Most of the wording is little more than dicta that do not actually control cases or provide leads on how to assess respect for and protection of such freedom in the face of technological developments.

The Extended Mind and Human Thought: The Role of Technologies

The widespread use of technologies in everyday life makes the mismatch between scientific and legal developments concerning the notion and protection of the forum internum particularly grave. Those who have studied the uninterrupted use of digital technologies have, in fact, encouraged theorists to expand the notion of thought to include physical tools.

Philosopher Luciano Floridi coined the notion of onlife, a word that conveys today’s lifestyle, lived constantly between the online and the physical dimensions. The notion of onlife captures the widespread and uninterrupted use of smart tools that embed or rely on AI’s capabilities. Some researchers have urged that we embrace a more comprehensive notion of the mind, and therefore of thought, to capture the importance of internet connection in our lives. They argue that smart tools that receive, process, and provide information and make suggestions are so ubiquitous and important in how we go about our lives that they have extended the mind beyond the biological phenomenon, and therefore deserve to be treated as part of an individual’s “mind.” Within the legal framework, smart tools should be as protected as the biological tools that we use for thinking.

Although, after more than a decade, the possibility of extending the protection of the human mind to cover smart tools remains controversial, the notion of the extended mind shows how the internet has reinforced the relationship between humans and machines. This relationship makes it possible to manipulate an individual’s opinions, thoughts, and feelings by manipulating their smart tools, and it therefore calls for heightened protection of the physical devices that connect us with and through the online world.

A Fresh Start for Freedom of Conscience?

Michel Foucault once argued that, since modernity, the control over human beings “has consisted in a tenuous coercion.” To illustrate, he analogized modernity to Jeremy Bentham’s idea of the Panopticon: a prison in which inmates can be watched at any moment from a central watchtower with one-way mirrors, so that, never knowing whether they are being observed, they police themselves, eliminating the need for round-the-clock surveillance. Foucault predicted that real power in modern societies would consist in an uninterrupted, pervasive control of human behavior capable of detecting every small detail of every individual’s life.

As Shoshana Zuboff demonstrated, digital technologies have taken the Panopticon to another level. Data mining can generate a surveillance system that detects not just people’s movements, but also their personal inclinations, preferences, and thoughts. Other disciplines have shown that data mining can give access to how individuals process information and develop thoughts. It can even allow private and public powers to affect individuals’ feelings and thoughts.

Scholars have thus pointed out the necessity of protecting “neurorights,” a notion that encompasses protection from algorithmic bias and the rights to mental identity (a sense of self), mental agency (free will), mental privacy, and fair access to mental augmentation. What seems to be lacking at this point is an adequate legal development that can both give shape to such rights and bridge the gap between them and the familiar notions of freedom of thought and conscience. Such a development is particularly significant and much needed.

Refocusing on the forum internum helps to understand the specific threats that AI poses to freedom of thought and conscience. Connecting the forum internum with the bundle of rights that neuroscience has proposed is also a powerful reminder that human beings’ thoughts and feelings are more than just biological phenomena — they characterize the very nature of humanity. After all, protecting the freedom of conscience means protecting the fabric of human civilization — something that is so valuable and (supposedly until now) beyond the reach of the legal system that scholars and judges thought it did not need legal protection. 

Finally, focusing on the freedom of thought and conscience broadens the horizon of the institutions that are trying to put a check on AI’s capabilities and development. A fresh view of the human mind — one that combines freedom of thought, conscience, and the new scientific evidence of how the mind works — can put the notion of decision in perspective and expand the protection of individuals before they make decisions.♦


This paper draws on a collaboration with the Organization for Security and Co-operation in Europe, Human Rights Department: FoRB Programme. The author wishes to thank the organization and all the participants in the various meetings during which the topics covered in this post were discussed.


Andrea Pin is Full Professor of Comparative Law at the University of Padua and Senior Fellow of the Center for the Study of Law and Religion at Emory University.


Recommended Citation

Pin, Andrea. “Freedom of Thought and Conscience and the Challenges of AI.” Canopy Forum, July 18, 2023. https://canopyforum.org/2023/07/18/freedom-of-thought-and-conscience-and-the-challenges-of-ai/.