The AI Debate Is Rigged by the Companies Building It
Don’t let corporations define one of the most urgent political conversations of our time.
The rise of AI has sparked a cultural conversation so contentious and misinformed that anyone with even a slight understanding of large language models (LLMs) may wonder whether genuine public understanding is possible.
Many states have passed AI-related legislation, and federal action grows likelier by the day, even though Congress is filled with leaders who are arguably compromised or simply not literate in AI technologies. The prospect of expanded oversight has spurred companies into political action, each hoping to strike while the iron is hot and shape the rules to its liking.
OpenAI is fighting to keep regulation light, arguing that inept legislation would hamper innovation and allow international AI development to outpace the U.S. The company contends this would put the country at a strategic, perhaps even dangerous, disadvantage, all while hurting its bottom line.
Anthropic advocates a regulatory framework to head off what it sees as the potentially catastrophic outcomes of unchecked development. The practical benefit is that such legislation could keep competitors from outpacing it while stymying smaller firms that couldn't easily comply. Anthropic is often associated with the effective altruism (EA) movement and, according to critics, with catastrophism.
The incentives for both perspectives are a combination of genuine ideology and self-interest. The OpenAI camp finds the Anthropic camp to be alarmist safety-maximalists standing in the way of progress. The Anthropic camp considers the OpenAI camp reckless to the point of danger in pursuit of profit.
If all of this sounds like the quibbling of self-important tech bros, you're not alone in thinking so. The problem is that these companies have a lot of money, and mailboxes are already being stuffed with propaganda for and against their preferred candidates. None of it will properly educate the public about the actual state of AI safety.
The issue of AI safety is urgent and real, even if concerns about artificial general intelligence (AGI) remain speculative and the conversation around environmental impact is often misinformed. The technology as it exists today poses genuine challenges to jobs, local environmental health, national security, and even human rights as AI weaponry becomes a live issue.
The problem is that the industry wishes to set the terms of its own regulation. For all the chest-thumping about safety, there's not as much daylight between Anthropic and OpenAI as they'd have you believe. Both have been integrated into military systems. Both score poorly on independent AI safety scorecards. Both are driven by profit.
Much as industries like banking and pharmaceuticals have entrenched themselves deeply enough in our government apparatus that their interests overpower those of citizens, the AI industry is looking to follow in their footsteps.
We need sober, informed conversations about AI that reject the terms being forced upon us by moneyed interests and instead root themselves in the real challenges at hand. We're grappling with technological change that is reshaping our world faster than we're adapting.
The rise of AI doesn’t have to be a catastrophe for the planet, and it also won’t solve the human condition. But like the printing press, the World Wide Web, the smartphone, and a host of other innovations that came before, this technology will change the course of history. Whether that change is for the better or worse is up to us.