The company’s current battle with the Pentagon distracts from the military use of its flagship model.
The San Francisco–based AI firm Anthropic has recently captured national headlines amid a significant public dispute with the Pentagon over AI safety protocols and concerns that its technology could be deployed for widespread domestic surveillance. Its resistance to the Trump administration’s demands—which resulted last month in a federal ban on contracts with the company, a ban Anthropic has since challenged in court—has spurred a wave of support for the company and its CEO, Dario Amodei, from influential Silicon Valley figures.
This confrontation has rekindled a level of Silicon Valley solidarity reminiscent of the industry’s liberal period prior to the Trump era. The New York Times characterized the swell in backing as evolving from a subtle murmur into a pronounced outcry.
Independent journalist Jack Poulson told The American Conservative that Anthropic’s success in attracting such endorsements may be deliberate. He argues that the dispute with the Trump administration serves less as a genuine defense of civil rights than as a calculated publicity effort aimed at liberal and progressive circles in Silicon Valley. These groups, while wary of government surveillance, remain willing to equip the U.S. and allied governments with advanced capabilities to monitor, censor, and even eliminate perceived adversaries, provided the price is right.
Poulson, who resigned from his senior scientist role at Google in 2018 in protest of the company’s work on a censored search engine for China, emphasized that Anthropic’s brand identity hinges largely on its ethical positioning as distinct from OpenAI. Anthropic’s engagement in a highly visible conflict—which reportedly pushed downloads of its Claude chatbot past ChatGPT in March—fits neatly within that brand narrative. He suggests this could be a strategy for Anthropic to portray itself as “#resistance,” helping its staff maintain acceptance within liberal circles.
Anthropic co-founder Jack Clark has experience with such tactical marketing from his tenure as OpenAI’s policy director, where he contributed to that company’s exploitation of its nonprofit status for financial gain.
Poulson highlighted several lesser-known revelations that undermine Anthropic’s portrayal as a firm uniquely devoted to ethical concerns about AI misuse and surveillance. In 2023, he uncovered a leaked meeting booklet showing Anthropic staff participating in confidential intelligence sessions involving senior CIA leaders—including the agency’s CTO and AI director—as well as officials from the Australian government.
This workshop, convened by entities linked to former Google CEO Eric Schmidt’s Special Competitive Studies Project and the Australian Strategic Policy Institute, was part of an initiative examining how large language models might be integrated within Western security frameworks. Concurrently, Anthropic has expanded its government dealings by appointing Steve Sloss, a longtime Palantir veteran, to lead its U.S. government sales and promoting its AI capabilities to intelligence organizations such as the National Geospatial-Intelligence Agency.
These facts prompt broader concerns about Anthropic’s connections to the deep state. Poulson noted the company’s collaboration with Palantir—known for its data fusion platforms serving both commercial and governmental sectors—despite Anthropic’s public cautions against such systems. He also referenced the CIA’s Open Source Enterprise, which has long explored using large language models to analyze immense volumes of publicly accessible data, raising questions about Anthropic’s role or involvement in those efforts following the 2023 meeting.
While the company now openly opposes certain Pentagon mandates, its flagship AI, Claude, is reportedly embedded in the U.S. military’s targeting systems as part of Palantir’s Maven platform. There it aids military planners by interpreting intelligence inputs and generating prioritized strike lists, reportedly including for operations in Iran, and it has been linked to the bombing of a school that killed more than 160 people, most of them children. Anthropic’s technology was also reportedly used in an operation to abduct Venezuelan leader Nicolas Maduro that left more than 80 dead.
Although Anthropic promotes a stance against fully autonomous weapons, it appears to place few restrictions on its AI being employed to target non-American lives.
Despite being lauded as defenders of civil liberties and ethical AI, Anthropic’s deep ties with the U.S. security apparatus and the increasing military application of its technology suggest the need for a more critical perspective on its public image.
Original article: theamericanconservative.com
