AI Regulation and the Risk of Ideological Capture: When Tech Becomes Religion


Nature (AI) by Alan Warburton (CC BY 4.0).

In 2010, users of LessWrong, an internet forum, encountered a post by a user named Roko, who posited that humanity would invent a superintelligence. This superintelligence would have the power to create fully accurate computerized simulations of people, essentially ending death. The AI, knowing that its creation is best for humanity, would punish those who failed to work for its creation by torturing their virtual selves forever. For many at LessWrong, a computerized duplicate of you is effectively you, so the AI threatens present-day humanity with eternal torment for disobedience. Roko’s Basilisk, as the concept came to be called, had more than a few similarities to Pascal’s Wager. In an earlier era, Christian theologians had debated whether those who had never heard of Christianity could achieve salvation. Now, believers in artificial intelligence’s redemptive power argued about AI’s ability to help them attain immortality.

Effective altruists, self-described rationalists, believers in the singularity, and transhumanists do not just have a lot in common with conventional religions; by legal metrics, their beliefs are religious. There are distinctions between these communities, but as Adam Becker observes in his book More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity, “[t]hese groups—the longtermists, the advocates of the Singularity, the rationalists, and more—share deep connections. They’re connected directly by people—there is a great deal of overlap in membership among these groups—and they are connected by a set of common aims and beliefs.” Becker argues that these groups share core beliefs, aim to solve problems with technology, pursue objectives that align with those of the tech industry, and offer adherents the possibility of transcendence. Many of these groups also share a history, with members having links to the extropian movement of the late 1980s and 1990s. Perhaps equally critically, Becker and other writers have pointed to the shared funding structures behind many of these groups. Much as it is coherent and sometimes analytically useful to speak of conservative Christians, uniting Latter-Day Saints, Pentecostals, Evangelicals, and Roman Catholics in one phrase despite their differences, it can be a useful tool of analysis to examine AI beliefs together.

They hold millennialist views that the development of superintelligence will allow humanity to transcend its limitations and achieve goals that include ending death. They often fear the creation of evil artificial general intelligence (AGI), which they believe will bring about human extinction. They make assumptions about the nature of existence and the teleological direction of history. In order to fully grasp both the contents and implications of these views, we need the analytical tools of religious studies and legal scholarship.

The functional religiosity of AI beliefs poses a problem for AI policy. Those with apocalyptic beliefs about AI exert disproportionate influence over political leaders and academic circles. These voices should not dominate AI policy; instead, they should be treated like religious groups weighing in on the public sphere. Policymakers should be cautious about allowing one community to shape regulations and policies with limited oversight. The risk is twofold: regulatory capture by a religious community and the misdirection of regulatory efforts.

When it comes to other religious beliefs, policymakers know that while religious groups can offer insight, their understandings of truth are grounded in their particular beliefs. American evangelicals who believed in premillennialism regularly predicted nuclear conflicts that would end the world before Christ returned. While some of these believers did affect foreign affairs, their ideas were generally not accepted as mainstream guidance for policymaking. Or take the example of Quakers, who offered numerous policy suggestions during the Cold War; policymakers, however, were aware that Quakers were pacifists and that their recommendations reflected that commitment. Roman Catholic bioethicists have doctrinally mandated positions on issues like ectopic pregnancy; they make vital contributions to bioethics, but it is useful for others to understand how their perspective is grounded in religion. Religious communities are often responsive to guidance from their leadership on political or social questions, and effective altruists likewise have a leadership structure that provides such coordination. Many of the beliefs about AI embraced by policymakers are religious, and non-believers are not obligated to treat these views as dispassionate expertise.

The concerns here are similar to those at play in John Rawls’ notion of public reason: citizens in a democracy need a shared basis of concepts to justify laws and policies. Reasons grounded in religious traditions that are not shared can lack political legitimacy. While the debate over AI regulation may look secular, if its underlying logics are not, then it is not really being conducted through public reason.

Connecting Tech and Religion

The idea that tech has some connection to religion builds on current conversations in tech law and AI spaces. Karen Hao’s Empire of AI opens with an epigraph from Sam Altman, in which the OpenAI CEO muses that the “most successful founders… are on a mission to create something closer to a religion, and at some point it turns out that forming a company is the easiest way to do so.” Claudia E. Haupt and Margaret Hu have discussed the First Amendment and the Separation of Tech and State, understanding technology firms as akin to a religious establishment. Several years ago, Timnit Gebru and Émile P. Torres discussed effective altruism as part of a broader movement that uses language about safety from artificial intelligence to evade accountability. Gideon Lewis-Kraus’s profile of William MacAskill in The New Yorker mentions the resemblance of effective altruism to religion in passing, so the idea that these movements seem religious is hardly new.

Sociologist Carolyn Chen’s Work Pray Code made the case in 2022 that, for many people in Silicon Valley, work has come to replace the social connections and the role that traditional religious institutions once provided. Companies also used sacred and religious trappings to gain greater control over their workers and make them more productive without additional compensation. Chen wrote before LLMs became publicly prominent. The companies Chen studied may have filled the role of churches, but they did not have the coherent ideology that religions do; that is no longer the case.

Defining religion is notoriously difficult. Religious studies scholars have largely abandoned any effort to consistently define the term, observing that classifying certain practices and beliefs as religion has an arbitrariness to it, often reflecting social and political considerations. U.S. law, however, has had to provide standards for what counts as “religion” because the law treats religion differently from non-religion. Because groups might seek advantages by classifying themselves as “religious,” either to gain the protection of the Free Exercise Clause or to seek advantage from the state in ways that would violate the Establishment Clause of the Constitution, U.S. law does not rely on groups’ self-definitions. The law thus provides fixed standards for determining whether something is religious.

The U.S. Supreme Court has never offered an explicit definition of religion. In the conscientious objector cases during the Vietnam War, the Court backed an expansive notion of religion, finding in United States v. Seeger that whatever filled the place of a traditional God counted as a religious belief in a Supreme Being, and ruling that both agnostics and atheists could be religious conscientious objectors to war. Only a few years later, in Wisconsin v. Yoder, the Court upheld the ability of the Amish not to send their children to public school, arguing that religion was different from philosophical belief. The Court specifically contrasted the Amish with Henry David Thoreau and his views of society in Walden, observing, “Thoreau’s choice was philosophical and personal rather than religious, and such belief does not rise to the demands of the Religion Clauses.”

Courts often rely on the 1979 Third Circuit case Malnak v. Yogi to define religion. In Malnak, the appellants unsuccessfully argued that their Transcendental Meditation practices were not religious and hence did not fall under the Establishment Clause. Judge Arlin Adams, in his concurrence, articulated a three-factor test for determining whether a belief system constitutes a religion. The factors were:

  1. Does the belief system deal with fundamental questions (ultimate concerns)?
  2. Is it comprehensive in nature, offering a “systematic series of answers”?
  3. Are there formal, external, or surface signs such as rituals, ceremony, holidays, observances, or a clergy?

The test was later criticized, particularly on the second point, since many religious systems do not offer a comprehensive explanation of reality; many Jewish communities, for example, are not explicit about what happens after death.

Applying Legal Definitions of Religion to Tech

I want to address the Malnak indices point by point while discussing effective altruism and rationalism, which I believe are the most coherently articulated AI-related positions. Courts in the United States have adopted other legal metrics of religion, but the factors in the Malnak concurrence are particularly insightful because they provide a concise heuristic for showing that something legally “is” a religion, even when the group being analyzed does not define itself as one.

1. Does the belief system deal with ultimate concerns? 

AI religious beliefs deal with both the origins of reality and the ultimate fate of humanity. Many effective altruists believe in simulation theory, the idea that all of our existence is a computer projection of an external reality that we cannot perceive. This is similar to what is depicted in science fiction films such as World on a Wire, The Thirteenth Floor, and The Matrix. It is also roughly consistent with Platonic forms and most forms of classical theism, in that the world is presented as a projection or creation of another being or beings, who possess special or omniscient insight into our world and may intervene to affect our reality. The writing of philosopher Nick Bostrom on simulation theory has echoes of theological proofs for God. Elon Musk has claimed that the odds are “billions to one” that we live in a simulation rather than base reality, which would make him a great deal more certain about ultimate reality than many clergy members are about theism. Simulation theory gives reality a gnostic, hidden meaning. Some subset of ChatGPT users believe that their discussions with the LLM have connected them with the “secrets of the universe.”

2. Is the belief system comprehensive in nature?

AI theorization offers a comprehensive view, comparable to that of most theological systems. Simulation theory is inherently a view that explains reality and all existence. People who believe in the inevitable progress of AI, it turns out, have an easier time believing they can transcend death. The title of a 2023 Rolling Stone article on the views of Ray Kurzweil and Eliezer Yudkowsky summarizes the most extreme futures that AI theorists contemplate: “How A.I. Could Reincarnate Your Dead Grandparents — or Wipe Out Your Kids.” The idea is that AI will bring either complete apocalyptic destruction or utopia, with the world poised between these two extremes.

Like conventional religions, effective altruism and AI belief have a textual canon. William MacAskill’s What We Owe the Future and Peter Singer’s The Most Good You Can Do: How Effective Altruism Is Changing Ideas About Living Ethically provide an intellectual justification for EA beliefs in furthering the long-term good of humanity. Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies, a book about the dangers of AI, has been influential. One less academic work, Eliezer Yudkowsky’s fan fiction Harry Potter and the Methods of Rationality, popularized a set of rationalist ideas. Shared community around texts can be a way to create a cohesive tradition.

3. Are there formal “signs” such as rituals, ceremony, or a clergy?

The third prong of Malnak is the hardest for modern AI beliefs to satisfy, but there are ritualistic behaviors worth highlighting. People in effective altruist circles take giving pledges, promising to commit ten percent of their income to “effective charities.” Giving What We Can, the main EA charity, advocates for this kind of practice. Ten percent is the same percentage many Christians use as a tithe, a figure derived from the biblical command that the Israelites give a tenth of their produce to God. Taking the pledge has been likened to conversion, and EAs often solicit “converts” through outreach to university students, proselytizing for their faith. Another outward marker is that effective altruists have at times embraced cryonics, the freezing of human bodies in the belief that future technology will enable individuals to be revived. Discussion of brain uploading has also become a frequent fixture of their circles.

While there is no trained clergy, various AI companies themselves may constitute a kind of for-profit religious corporation. Anthropic’s connections to effective altruism are so considerable that it should probably be considered an instrument of that movement. The company’s president, Daniela Amodei, has denied there is a connection, but her husband, Holden Karnofsky, who also works at the company, is one of the founders of Open Philanthropy, one of the main effective altruist grantmakers. Daniela’s brother, Dario Amodei, the CEO, was an early signatory of the Giving What We Can Pledge and lived in an EA community.

The Peril of AI Religion in Policymaking

If AI advocates do fulfill these three prongs of the Malnak test, we should accept that their beliefs are religious in a legal sense. That matters because these believers are exerting a profound influence on companies, policymaking, and the development of law.

The movement has also had an influence on OpenAI, currently the most prominent AI developer. From its founding as a nonprofit, OpenAI had connections with effective altruism. When the OpenAI board briefly ousted Sam Altman as CEO in 2023, one of its members, Tasha McCauley, was a leader in the EA movement. Many of the people most connected with EA, however, seem to have left for Anthropic.

This AI religion also shapes academic institutions. Many AI safety groups at universities were funded by Open Philanthropy, which paid undergraduate student leaders up to $80,000 a year (plus extra for health insurance) to participate in the movement. As a result of this funding and of activism by EA groups like Stanford’s AI Alignment Group, university AI safety circles are closely connected with the movement. The FTX Future Fund, the charitable arm of FTX, the EA-linked crypto exchange that imploded due to fraud, was a major funder of academics. Until last year, Professor Nick Bostrom ran Oxford University’s Future of Humanity Institute, which was funded by those linked to EA. The institute shared an office at Trajan House in Oxford with the Centre for Effective Altruism, which we might regard as EA’s headquarters in the UK.

EAs and related believers have come to define public discourse on AI. Ross Douthat had Daniel Kokotajlo as a guest on his podcast, Interesting Times, where Kokotajlo expressed the belief that AI would transform the world by 2027 or possibly 2028, leading to oligarchy and human extinction. Kokotajlo was billed as an “AI researcher.” His involvement with effective altruism was not mentioned, nor was the fact that he is a prolific poster on LessWrong, the rationalist internet forum. His AI 2027 scenario predicting the rise of superintelligence was co-authored with Scott Alexander, a major leader in the rationalist community. The point is not that Kokotajlo should be ignored, but that his timeline is informed by a discourse built on beliefs that are neither publicly circulated nor widely agreed upon. A comparison might be made to Billy Graham, who in 1951 predicted the end of the world within two years, a reasonable speculation during the Cold War but one also informed by his religious convictions.

In early October 2025, billionaire Peter Thiel concluded a series of lectures at the Commonwealth Club in San Francisco, laying out his belief that the Antichrist would be a figure akin to environmentalist Greta Thunberg or to Yudkowsky. Thiel had previously backed Yudkowsky’s work, funded his Machine Intelligence Research Institute, and launched the Singularity Summit conferences with him (conferences that featured Bostrom and Kurzweil as speakers). Thiel now presented the desire to constrain AI development as contrary to Christianity and as a world-threatening evil. He has expanded on these views with his co-author, Sam Wolfe, in an article in the conservative Christian magazine First Things. Debates about the regulation of AI are now about theology and religion, with a form of AI-focused Christianity on one side.

With regard to AI regulation, we should be concerned that the discourse is religious, whether overtly (in Thiel’s case) or implicitly (as among other AI believers). There is also the concern of a kind of religious regulatory capture, in which the experts citing one another, running the conferences, staffing the universities, serving on the boards of AI companies, and sitting in policy meetings are all members of the same community. Listeners to the New York Times podcast know only that Kokotajlo is an “AI researcher.” Users of Anthropic’s products see it simply as a technology company, unaware of its effective altruist origins. In making policies about AI, those with functionally religious beliefs about AI are often the loudest voices. We expect experts to be biased by their corporate ties, and those ties are scrutinized, but shared religious and ideological kinship can be far less apparent. We should be wary of policy decisions that no longer rely on public reason but rest instead on the internal beliefs of discrete communities.

An agenda dictated by the worldview of AI “believers,” focused on AI risks, minimizes the contemporary upheaval caused by this emerging technology, concentrating instead on the more distant prospect of AI behaving like the villainous Skynet in The Terminator. Large language models and current AI systems create environmental harms, have the potential to cause unemployment across a variety of sectors, and require huge amounts of low-paid human labor to analyze and sort through often deeply disturbing content. Policymakers still need to address liability for damages when AI is misused, the role of such technology in growing wealth inequality, and debates about ownership of intellectual property. Seeing AI as either a technological God or the devil risks ignoring what it is doing right now in the world we live in.

Giving any single community such authority over the future of technology is perilous. Regulating AI will take a diversity of religious and theoretical perspectives. AI believers should be welcome; they have deeply felt and carefully thought-out ideas, some useful and some harmful, but in conversations about regulation and policy, they cannot be the only people in the room.♦


Thanks to Aida Barnes-May, Michael McGovern, and the Yale Law ISP Writing Workshop for offering comments on an earlier draft of this piece.


Isaac Barnes May is a resident fellow of the Information Society Project at Yale Law School. A graduate of Yale Law and Harvard Divinity Schools, Isaac holds a PhD in Religious Studies from the University of Virginia and has written two books on American religion.


Recommended Citation

May, Isaac. “AI Regulation and the Risk of Ideological Capture: When Tech Becomes Religion.” Canopy Forum, October 27, 2025. https://canopyforum.org/2025/10/27/ai-regulation-and-the-risk-of-ideological-capture-when-tech-becomes-religion/
