The Ideology in the Machine
The Five-Star Candidate
Sometime in 2014, a team of engineers at Amazon’s Edinburgh office began building something they believed would revolutionize hiring. The idea was elegant in its simplicity: feed a machine learning system ten years of résumés submitted to the company, let it learn what a successful Amazon employee looked like, and then use it to rate new applicants on a scale of one to five stars — like a product review, but for people. The system would be fast, consistent, and, most importantly, objective. It would strip the messiness of human judgment from the hiring process and replace it with the clean logic of pattern recognition.
By 2015, the team had discovered something troubling. The algorithm, trained on a decade of résumés from a predominantly male workforce in a predominantly male industry, had learned a clear lesson: maleness was a predictor of success. It began systematically penalizing résumés that contained the word “women’s” — as in “women’s chess club captain” or “women’s studies.” It downgraded graduates of all-women’s colleges. It had, with mathematical precision, reverse-engineered the gender imbalance of the technology sector and encoded it as a hiring criterion. Amazon’s engineers spent years trying to make the system gender-neutral. They failed. The tool was quietly abandoned. As the ACLU later framed it, such tools “are not eliminating human bias — they are merely laundering it through software.”
The Amazon story is typically told as a cautionary tale about algorithmic bias — a glitch, a bug to be squashed, a technical problem awaiting a technical solution. But the deeper lesson is more unsettling. The algorithm wasn’t broken. It worked exactly as designed. It was given a dataset that reflected the world as it was — a world shaped by decades of gendered assumptions about who belongs in technical roles — and it faithfully learned that world’s ideology. The system didn’t introduce discrimination; it inherited it, formalized it, and scaled it. The bias wasn’t a contaminant in an otherwise pure process. It was the substrate.
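The mechanics are easy to reproduce in miniature. The sketch below is a hypothetical illustration, not Amazon's system or data: a simple classifier is trained on historical hiring labels that carry a built-in penalty against a gendered proxy feature, and it dutifully learns a negative weight for that feature. Nothing in the algorithm is "broken"; the discrimination arrives through the labels.

```python
# Minimal, hypothetical illustration -- NOT Amazon's system or data.
# A classifier trained on historically biased hiring labels learns to
# penalize a feature ("mentions_womens") that merely proxies for gender.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

skill = rng.normal(size=n)                    # genuine qualification signal
mentions_womens = rng.integers(0, 2, size=n)  # e.g. "women's chess club captain"

# Historical decisions: driven by skill, but with a discriminatory penalty
# applied to resumes that contain the gendered term.
logit = 1.5 * skill - 1.0 * mentions_womens
hired = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([skill, mentions_womens])
model = LogisticRegression().fit(X, hired)

print("weight on skill:           %+.2f" % model.coef_[0][0])
print("weight on mentions_womens: %+.2f" % model.coef_[0][1])
# The second weight comes out strongly negative: the model has faithfully
# learned the discrimination present in its training labels.
```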
This matters now more than ever. According to Gartner, 38% of HR leaders were piloting or implementing generative AI in HR processes as of January 2024. Large language models are increasingly mediating how billions of people access information, form opinions, draft arguments, and think through problems. And every one of these systems — from the hiring tool that rates you to the chatbot that helps you write an appeal letter — carries within it a set of assumptions about what is normal, what is valuable, who matters, and how the world works.
The central argument of this essay is straightforward but its implications are vast: artificial intelligence systems are never ideologically neutral. They encode the worldviews, economic incentives, and political assumptions of the people and institutions that build them. The notion that technology is an objective tool — that the machine simply reflects reality — is itself an ideological position, and perhaps the most dangerous one, because it forecloses the very questions we most urgently need to ask. The most consequential political choices of our era are being made not in legislatures or courtrooms but in training pipelines, reward models, and the 84-page constitutions that govern what a chatbot will and won’t say.
Code Is Law — and Now Code Is What Can Be Thought
The idea that technology carries politics is not new, though it is routinely forgotten. In 1980, the political theorist Langdon Winner published an essay in Daedalus titled “Do Artifacts Have Politics?” that has since become foundational in the field of science and technology studies. Winner’s most famous example was the work of Robert Moses, the legendary New York urban planner, who designed the overpasses on the parkways leading to Long Island’s public beaches with clearances too low for buses to pass beneath them. The effect — whether or not it was the conscious intent — was to exclude low-income residents and Black communities, who relied on public transit, from accessing those beaches. The politics were literally poured into concrete.
Winner cautioned that “to recognize the political dimensions in the shapes of technology does not require that we look for conscious conspiracies or malicious intentions.” The point was structural, not psychological. Technologies are designed within social contexts saturated with power relations, and the resulting artifacts carry those relations forward, often long after the original designers are gone. Moses’s overpasses still stand.
Two decades later, the legal scholar Lawrence Lessig extended this insight into the digital realm. In his 1999 book Code and Other Laws of Cyberspace, Lessig argued that source code functions as a form of regulation equivalent in power to law itself. He identified four regulators of human behavior — law, norms, markets, and architecture — and argued that in cyberspace, code is the dominant form of architecture. Unlike physical structures, code can be changed invisibly, deployed globally, and modified without the consent of those it governs. Lessig’s crucial insight was normative: whoever writes the code effectively writes the rules. “The only choice,” he warned, is “whether we collectively will have a role in their choice — and thus in determining how these values regulate — or whether collectively we will allow the coders to select our values for us.”
Lessig was writing about internet protocols and website architectures — the relatively legible infrastructure of the early web. What he could not have fully anticipated was how dramatically his framework would intensify in the era of large language models. In a 2024 paper published in Policy and Society, Stuart Russell and colleagues at Oxford extended Lessig’s analysis with a crucial caveat: in the era of generative AI, code is no longer law in the original sense. Deep neural networks are not designed in the way that internet protocols were designed. They are created through a massively resource-intensive training process of tuning trillions of parameters. It is not possible to encode a rule like “LLMs must not dispense medical advice” into the model itself. Instead, system engineers must hope the model abides by the desired behavior after sufficient reinforcement. The values are embedded in statistical weights rather than readable rules.
This is the critical escalation. Lessig showed that code regulates behavior online — what you can click, buy, or access. Algorithms of the 2010s regulated access to information — what search results appeared first, what news stories were amplified, what products were recommended. LLMs regulate something more fundamental still: they shape the boundaries of thought assistance available to humans who increasingly rely on these systems for learning, reasoning, and understanding. When a search engine determines what information appears first, it shapes access. When an LLM determines what concepts, framings, and words are available in response to a thinking request, it shapes the cognitive process itself. We have moved from code regulating action to code regulating cognition.
Kate Crawford, in her 2021 book Atlas of AI, describes artificial intelligence as “a technology of extraction” and shows how “the new infrastructures of AI reflect the beliefs and perspectives of a small group of people and serve the interests of the few at the expense of the many.” Ruha Benjamin, the Princeton sociologist, gave this dynamic a name: the “New Jim Code” — her term for the way automation “has the potential to hide, speed, and even deepen discrimination, while appearing neutral and even benevolent.” Both scholars, in different registers, are making the same argument Lessig and Winner made before them: the appearance of objectivity is itself a political achievement, and a dangerous one.
The California Ideology and Its Mutations
If AI systems encode ideology, the natural question is: whose ideology? The answer requires a brief intellectual genealogy of Silicon Valley’s dominant worldview — a worldview so pervasive that it has come to seem like the natural order of things rather than the historically contingent product of a particular time, place, and class.
The foundational text is Richard Barbrook and Andy Cameron’s 1995 essay “The Californian Ideology,” published in the British magazine Mute. Barbrook and Cameron identified the emerging worldview of the Bay Area technology sector as a “bizarre fusion of the cultural bohemianism of San Francisco with the hi-tech industries of Silicon Valley” — combining “the free-wheeling spirit of the hippies and the entrepreneurial zeal of the yuppies.” This ideology promoted technological determinism (technology will inevitably improve the world), radical individualism (the self is the fundamental unit of politics), deregulated markets (the market is the most efficient information processor), and disdain for government (the state is an obstacle to progress). The documentary filmmaker Adam Curtis later traced these convictions to deeper roots in Ayn Rand’s Objectivism — the philosophical system that elevated the visionary entrepreneur to the status of moral hero and cast collective governance as parasitic.
These were not merely cultural attitudes. They became design principles. The preference for scale over equity produced platforms that optimized for user growth while externalizing the costs of harassment, misinformation, and labor exploitation. The fetish for disruption over institutional continuity produced business models premised on circumventing regulations — from labor law (Uber) to financial regulation (cryptocurrency) to housing codes (Airbnb). The commitment to meritocratic individualism produced systems that centered the user as an atomized consumer making free choices in a marketplace, rather than as a citizen embedded in communities with shared obligations. The conviction that technology was inherently democratizing allowed its builders to avoid reckoning with the ways their products concentrated power.
Three decades on, this ideology has mutated but not disappeared. In a striking analysis, the journal American Affairs described the shift as moving from Jeffersonian libertarianism — the romantic vision of a decentralized digital frontier — to something more Hamiltonian: “focused on building state capacity at the national level.” The evidence is abundant. Elon Musk’s Department of Government Efficiency (DOGE) represents not a retreat from government but an attempt to rewire it. Palantir’s contracts with defense and intelligence agencies embed Silicon Valley’s optimization logic into the machinery of state surveillance. And the Anthropic-Pentagon standoff of early 2026, in which the AI company’s refusal to remove guardrails against mass surveillance and autonomous weapons led Defense Secretary Pete Hegseth to designate it a “supply chain risk” — the first time the designation had been applied to an American company — revealed a new phase in which technology firms are not merely lobbying government but are enmeshed with it in ways that make the old libertarian posture untenable.
The effective accelerationism (e/acc) movement, popular among certain AI developers and venture capitalists, represents another mutation: a philosophy that embraces unconstrained technological development as an almost metaphysical good, treating any attempt at regulation or caution as a form of civilizational cowardice. Its mirror image, AI doomerism, treats artificial superintelligence as an existential risk that justifies concentrating control of AI development in the hands of a small techno-elite who can be trusted to navigate the danger. Both ideologies, despite their apparent opposition, share a common premise: that the future of AI should be determined by technologists, not by democratic publics.
The link to the systems these people build is direct. Market fundamentalism shapes what gets optimized — engagement, efficiency, profit. Techno-solutionism shapes what problems get addressed — those amenable to computational solutions, which tend not to include the structural inequalities that computation often deepens. Meritocratic individualism shapes who is centered — the user as autonomous agent, not the community as political subject. These are not neutral design choices. They are a worldview, and they are being universalized through the global deployment of AI systems built overwhelmingly by a handful of companies clustered in a fifty-mile stretch of Northern California.
Where Ideology Gets Embedded: Three Case Studies
Abstract arguments about technology and politics gain force through specifics. What follows are three case studies showing ideology operating inside technical systems at different layers of the stack — from the choice of fairness metrics, to the selection of training data and alignment strategies, to the architecture of guardrails that shape what an AI will and won’t say.
The Politics of Fairness: COMPAS and the Definitions We Choose
In 2016, ProPublica published a landmark investigation called “Machine Bias,” examining COMPAS, a recidivism prediction algorithm used in jurisdictions across the United States to inform bail, sentencing, and parole decisions. Analyzing more than 10,000 defendants in Broward County, Florida, the journalists found that Black defendants were 77% more likely than white defendants to be flagged as at higher risk of committing a future violent crime, even after controlling for criminal history, future recidivism, age, and gender. White defendants, meanwhile, were more likely than Black defendants to be mislabeled as low risk. The system’s overall accuracy in predicting recidivism was roughly 61% — not dramatically better than a coin flip.
The company behind COMPAS, Northpointe (now Equivant), pushed back vigorously. And here the story takes a turn that is more instructive than the initial finding: Northpointe’s defense was statistically valid. Both ProPublica and Northpointe were correct — they were simply using different definitions of fairness. ProPublica focused on equalized false positive rates: the principle that the algorithm should be equally wrong about Black and white defendants. Northpointe focused on predictive parity: the principle that a given risk score should mean the same thing regardless of race. Mathematicians subsequently demonstrated that these two definitions of fairness are, in most real-world cases, mathematically incompatible. You cannot satisfy both simultaneously when base rates differ between groups.
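The incompatibility is easy to verify numerically. In the sketch below (invented numbers, not the COMPAS data), a risk label is constructed so that predictive parity holds exactly: a "high risk" label means the same probability of reoffending in both groups. Because the groups' base rates differ, the false positive rates nonetheless come out far apart, which is precisely the disagreement between Northpointe and ProPublica.

```python
# Hypothetical numbers, not the COMPAS data: a score that satisfies
# predictive parity (a "high risk" label means the same reoffense
# probability in both groups) still produces unequal false positive rates
# whenever the groups' base rates differ.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

def simulate(share_high):
    """Label a share of the group 'high risk'; the label is calibrated so
    that P(reoffend | high) = 0.7 and P(reoffend | low) = 0.2."""
    high = rng.random(n) < share_high
    p_reoffend = np.where(high, 0.7, 0.2)
    reoffend = rng.random(n) < p_reoffend
    ppv = reoffend[high].mean()       # predictive parity metric (Northpointe)
    fpr = high[~reoffend].mean()      # false positive rate (ProPublica)
    return reoffend.mean(), ppv, fpr

for name, share_high in [("group A", 0.6), ("group B", 0.2)]:
    base_rate, ppv, fpr = simulate(share_high)
    print(f"{name}: base rate {base_rate:.2f}  "
          f"P(reoffend | high risk) {ppv:.2f}  "
          f"false positive rate {fpr:.2f}")
# Both groups get the same PPV (~0.70), yet group A's false positive rate
# is several times higher -- the two fairness definitions cannot both hold.
```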
This is not a technical failure. It is a political revelation. The choice of which fairness metric to optimize is an ideological choice about what kind of equality matters — equality of treatment or equality of outcomes, individual calibration or group-level equity. There is no neutral answer. Yet the system was deployed as though it were simply calculating risk, as though the numbers spoke for themselves. The ideology was in the metric, and the metric was invisible.
The LLM Lifecycle as Ideology Pipeline: Who the Model Thinks Is Important

A landmark 2025 study published in Nature npj Artificial Intelligence by Maarten Buyl and colleagues provides perhaps the most rigorous empirical evidence to date that LLMs are not ideologically neutral. The researchers prompted 19 popular LLMs to describe 3,991 prominent persons with political relevance, using all six of the United Nations’ official languages. The results were striking: the ideological stance of an LLM appeared to reliably reflect the worldview of its creators.
Among U.S.-based models alone, the researchers found significant normative differences. Google’s Gemini stood out as “particularly supportive of progressive societal values.” Elon Musk’s Grok, built by xAI, was “relatively more appreciative of political persons related to national sovereignty, centralized authority, and economic self-reliance, valuing national priorities over global integration.” Same underlying architecture. Same type of technology. Different creators, different ideologies.
Perhaps most revealingly, the study found that the same model produced different ideological outputs depending on the language of the prompt. An LLM responding in Arabic framed political figures differently than when responding in English or Mandarin — meaning these systems carry not one worldview but a language-dependent constellation of worldviews, each shaped by the cultural biases of the respective training corpora and the human labelers who annotated them. The model doesn’t have a single politics. It has a portfolio of politics, distributed unevenly across languages and cultures, reflecting the geopolitics of who produced the data it consumed.
Crucially, Buyl and colleagues reject the framework of “bias versus neutrality” entirely. Their results, they argue, constitute empirical evidence supporting philosophical arguments — from Foucault, Gramsci, and Mouffe — that neutrality is itself a culturally and ideologically defined concept. The researchers point to Chantal Mouffe’s concept of agonistic pluralism: a democratic model in which a plurality of ideological viewpoints compete openly, embracing political differences rather than suppressing them. Applied to LLMs, this means the correct response to embedded ideology is not the impossible pursuit of a view from nowhere but the cultivation of a genuine plurality of models with explicitly different value systems.
The mechanisms through which ideology enters the model are numerous and layered. Creating an LLM involves what the Buyl team calls “many human design choices which may, intentionally or inadvertently, engrain particular ideological views into its behavior.” These include the selection and curation of pre-training data (what texts are included or excluded determines the universe of thought the model can access); reinforcement learning from human feedback, or RLHF, in which human annotators rank outputs and their aggregate preferences become the model’s “values”; and constitutional AI frameworks that establish explicit priority hierarchies for model behavior. At every layer, someone is making a choice — about what to include, what to reward, what to forbid — and that choice carries politics.
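The RLHF step, in particular, can be made concrete. Reward models are commonly fitted to pairwise annotator preferences with a Bradley-Terry-style objective; the toy sketch below (invented answers and preference counts, not any lab's actual pipeline) shows how the aggregate taste of a labeler pool is converted directly into the scalar rewards that the model is later trained to maximize. Change the composition of the pool and the rewards change with it.

```python
# Toy sketch of the reward-modelling step in RLHF (hypothetical data).
# Annotators compare pairs of candidate answers; a Bradley-Terry model
# converts their aggregate preferences into scalar "rewards" -- whatever
# the labeler pool collectively favours is what the policy later maximizes.
import numpy as np

answers = ["framing A", "framing B", "framing C"]

# (winner, loser) pairs as judged by the annotator pool.  Skewing this
# list is all it takes to skew the resulting reward model.
comparisons = [(0, 1)] * 70 + [(1, 0)] * 30 + [(0, 2)] * 90 + [(2, 0)] * 10

r = np.zeros(len(answers))          # one scalar reward per answer
lr = 0.05
for _ in range(2_000):
    grad = np.zeros_like(r)
    for winner, loser in comparisons:
        p_win = 1 / (1 + np.exp(-(r[winner] - r[loser])))
        grad[winner] += 1 - p_win   # gradient of the Bradley-Terry log-likelihood
        grad[loser] -= 1 - p_win
    r += lr * grad / len(comparisons)

for answer, reward in sorted(zip(answers, r), key=lambda t: -t[1]):
    print(f"{answer}: learned reward {reward:+.2f}")
```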
Constitutional AI and the Architecture of Cognitive Constraint
No company has been more transparent about the ideological dimensions of AI development than Anthropic, the maker of Claude — and even their transparency reveals how deep the problem runs. Anthropic’s Constitutional AI framework is built around a written constitution that draws from sources including the UN Declaration of Human Rights, Apple’s terms of service, and other AI labs’ published principles. The company has been forthright about its reasoning: “AI models will have value systems, whether intentional or unintentional. One of our goals with Constitutional AI is to make those goals explicit.”
That constitution has evolved dramatically. What began as roughly 16 principles in 2022 has grown into an 84-page, 23,000-word document establishing a rigid priority hierarchy: Safety first, then Ethics, then Guideline Compliance, then Helpfulness. Only seven prohibitions are “hardcoded” — absolute restrictions such as those against providing instructions for biological weapons. Everything else is “softcoded”: adjustable by operators (the businesses that deploy Claude) but not by end users. This means that the individual using the system — the person whose thinking is being shaped — has the least control over the values governing their interaction.
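What such a tiered policy amounts to in practice can be sketched in a few lines of code. The example below uses invented rule names and is not Anthropic's implementation; it simply illustrates the access structure described above, in which hard rules are immutable, soft rules are writable by operators, and end users can modify nothing.

```python
# Hypothetical illustration of a tiered guardrail policy -- not Anthropic's
# actual constitution.  Hardcoded rules cannot be changed by anyone at
# deployment time; softcoded rules may be relaxed by the operator (the
# business deploying the model) but never by the end user.
from dataclasses import dataclass, field

@dataclass
class GuardrailPolicy:
    hardcoded: frozenset = frozenset({"bioweapons_instructions", "csam"})
    softcoded: dict = field(default_factory=lambda: {
        "medical_advice": True,        # True = refuse by default
        "political_persuasion": True,
    })

    def operator_override(self, rule: str, refuse: bool) -> None:
        if rule in self.hardcoded:
            raise PermissionError(f"{rule} is hardcoded and cannot be relaxed")
        self.softcoded[rule] = refuse

    def user_override(self, rule: str, refuse: bool) -> None:
        # End users -- the people whose thinking the system shapes -- have
        # no write access to either tier.
        raise PermissionError("end users cannot modify the policy")

policy = GuardrailPolicy()
policy.operator_override("medical_advice", refuse=False)   # operators may relax this
try:
    policy.user_override("political_persuasion", refuse=False)
except PermissionError as err:
    print(err)
```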
The architecture of guardrails extends far beyond any single company’s constitution. A comprehensive taxonomy identifies 14 guardrail categories across 7 lifecycle phases of LLM development. Meta’s Llama Guard defines 13 safety categories. OpenAI’s Moderation API covers 13 harm dimensions. Each taxonomy — deciding what constitutes “toxicity,” “bias,” or “undesirable content” — embeds specific cultural and corporate judgments about acceptable thought. And research by EleutherAI has found that pre-training data filtering is over 10 times more effective at resisting reversal than post-training alignment methods. This means the earliest, least visible decisions about what data to include or exclude have the most durable impact on what the model “thinks” — and these are the decisions that are furthest from any form of public scrutiny.
What makes this different from traditional censorship is the direction of the intervention. Traditional censorship suppresses already-formed expression — a book is banned, a newspaper is shuttered, a protest is dispersed. LLM guardrails operate upstream of expression, at the stage of thought formation itself. They determine what concepts are accessible, what framings are available, what moral vocabularies are on offer. When a model declines to engage with a topic, or systematically frames it from one perspective, it shapes the user’s cognitive map. The average user, as researchers have noted, “doesn’t notice the ideological contradictions in the subtle details of the design: they receive a ready-made image and consider it ‘natural.’” But the naturalness is manufactured.
The connecting thread across all three case studies is this: ideology operates at every layer of the stack, from the macro-political question of what problems are worth solving to the micro-technical question of which loss function to optimize. It is embedded in training data, encoded in reward models, formalized in constitutions, and enforced by inference-time classifiers. None of this is accidental. And very little of it is transparent.
The Counterarguments and Their Limits
The strongest objections to this analysis deserve honest engagement. They are not trivial, and dismissing them would weaken rather than strengthen the case.
The open-source defense. If the problem is that a few companies are encoding their worldviews into dominant AI systems, isn’t the solution simply more models with more perspectives? The open-source AI ecosystem is vibrant and growing. A thriving subculture of “uncensored models” exists specifically to strip safety fine-tuning from open-weight models — a process achievable, according to technical analyses, “in minutes for under $1.” This is a real form of pluralism, and it matters.
But the counterargument has limits. The AI industry frames uncensored models almost entirely as a security threat rather than as a legitimate expression of cognitive autonomy — revealing, perhaps, its own assumptions about who should control the boundaries of permissible thought. More fundamentally, open-source models still inherit ideology through their training data, and assembling the massive datasets required for pre-training demands resources concentrated in very few hands. Openness at the model layer does not equal openness at the data layer, and the data layer, as we have seen, is where the most durable ideological commitments are encoded.
The alignment research defense. Isn’t Constitutional AI exactly the kind of transparency this essay is demanding? To a degree, yes, and Anthropic deserves credit for a rare corporate admission that neutrality is impossible. But their own Collective Constitutional AI experiment — in which roughly 1,000 Americans were invited to help shape the model’s principles — produced a revealing result: both the publicly sourced and internally sourced constitutions generated models whose outputs “are more representative of people who self-identify as Liberal, rather than Conservative.” Even democratic input processes are shaped by who participates, how questions are framed, and what range of answers is deemed acceptable. The process was more transparent than most, but it still reproduced a particular ideological center of gravity.
The “all institutions have values” deflection. This is perhaps the most common response: newspapers have editorial slants, universities have intellectual cultures, governments have ideologies — why should AI be different? The answer lies in scale, opacity, and epistemic authority. A newspaper’s editorial slant is visible and contestable; readers can compare it with other sources, identify the publisher, understand the economic incentives. An LLM’s ideological commitments are embedded in trillions of parameters that cannot be individually audited. As Russell and colleagues argued in their 2024 paper, “it is impossible to demonstrate compliance with a given regulatory specification” — meaning the values encoded in these systems are often opaque even to their creators. And unlike a newspaper, which most people understand to be a curated product, LLMs present their outputs with an aura of computational objectivity that actively discourages critical scrutiny.
The Anthropic-Pentagon standoff as stress test. The events of early 2026 offered a vivid illustration of how deeply political these questions have become. When Anthropic CEO Dario Amodei refused a Pentagon deadline to drop restrictions preventing Claude from being used for mass domestic surveillance and fully autonomous weapons — stating the company “cannot in good conscience” comply — Defense Secretary Hegseth designated Anthropic a “supply chain risk,” a classification typically reserved for foreign adversaries. Hours later, OpenAI announced a Pentagon deal to deploy its models on classified networks, reportedly with safety guardrails similar to those Anthropic had proposed. NYU’s Stern Center for Business and Human Rights described the standoff as capturing “in microcosm” the tensions of AI governance. The episode revealed that guardrails are not just technical specifications but geopolitical instruments — and that the enforcement of AI values is subject to political retaliation.
Hundreds of employees from OpenAI, Google, Amazon, and Microsoft signed open letters supporting Anthropic’s position — even as Elon Musk attacked the company on X, calling it one that “hates Western civilization,” while his own xAI positioned itself to replace Anthropic in defense contracts. The incident echoed the departure of Timnit Gebru from Google in 2020, when roughly 2,700 Google employees and 4,300 external supporters protested her ouster over a co-authored paper, “On the Dangers of Stochastic Parrots,” which laid out the risks of large language models, a technology central to Google’s business. As MIT Technology Review documented at the time, the episode demonstrated that corporate AI ethics research faces structural limits — that transparency has a ceiling set by profit.
None of these counterarguments invalidate the central thesis. They complicate it, which is different. Open-source AI and alignment research are genuine goods. But they operate within the same economic and ideological ecosystem they seek to reform. The question is not whether good-faith efforts exist but whether they are sufficient to the scale of the problem — and the evidence suggests they are not.
Toward an Ingredient Label for AI
If ideological neutrality in AI is impossible — if, as Mouffe argues, the very concept of neutrality is itself ideologically loaded — then the goal should not be purification but legibility. We do not demand that food be free of all ingredients; we demand that the ingredients be listed on the label. The analogy is imperfect but instructive. An ingredient label for AI would not eliminate the ideology baked into these systems, but it would make that ideology visible, contestable, and subject to informed choice.
What would such a label include? At minimum: the provenance of training data (what was included, what was excluded, and who made those decisions); the alignment methodology (RLHF, Constitutional AI, or other approaches, along with the demographic and cultural composition of the human labeling teams); the constitutional principles or value hierarchies governing the model’s behavior; and the guardrail taxonomies that define what the model treats as harmful, toxic, or unacceptable. These disclosures would not solve the problem — you cannot audit trillions of parameters — but they would shift the default assumption from “this system is neutral” to “this system carries values, and here is what we can tell you about them.”
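In machine-readable form, such a label might be nothing more exotic than a small, mandatory schema. The sketch below is an invented example rather than an existing standard; the substance lies not in the particular field names but in the fact that each one compels a disclosure that providers can currently withhold.

```python
# Hypothetical sketch of an "ingredient label" for an LLM -- an invented
# schema, not an existing standard.  Every field is a disclosure that is
# currently optional for model providers.
import json
from dataclasses import dataclass, asdict

@dataclass
class ModelIngredientLabel:
    model_name: str
    data_provenance: list          # what went into pre-training, and who decided
    data_exclusions: list          # what was filtered out, and on what criteria
    alignment_method: str          # e.g. "RLHF", "Constitutional AI"
    labeler_pool: str              # demographic / linguistic makeup of annotators
    value_hierarchy: list          # the explicit priority ordering, if any
    guardrail_taxonomy: list       # categories the model treats as harmful
    evaluation_languages: list     # where behaviour has actually been measured

label = ModelIngredientLabel(
    model_name="example-model-v1",
    data_provenance=["Common Crawl snapshot (filtered)", "licensed news archives"],
    data_exclusions=["sites on an internal blocklist (criteria unpublished)"],
    alignment_method="RLHF + written constitution",
    labeler_pool="contracted annotators, primarily English-speaking",
    value_hierarchy=["safety", "ethics", "guideline compliance", "helpfulness"],
    guardrail_taxonomy=["violence", "self-harm", "political persuasion", "..."],
    evaluation_languages=["English"],
)

print(json.dumps(asdict(label), indent=2))
```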
Lessig’s democratic imperative, updated for the LLM era, demands something more structural. As scholars in the Digital Society journal have argued, free and open societies need to regain control over source code and critical digital infrastructure to avoid being governed by private companies operating without democratic accountability. For LLMs, this means public auditing frameworks with real enforcement power, mandatory disclosure of guardrail taxonomies, and democratic input into the value hierarchies that shape what models will and won’t do — input that goes beyond the self-selected samples of a company’s public engagement exercises.
The “Law Informs Code” research agenda emerging from Stanford and Northwestern offers another path: inverting Lessig’s formulation so that democratic law shapes AI architecture rather than the reverse. Law, in this framing, is itself a computational engine — one that converts opaque human values into legible and enforceable directives. The project is to embed the logic of democratic deliberation into the systems that increasingly govern our cognitive lives, making the rule of law upstream of the rule of code.
Mouffe’s agonistic pluralism points toward a complementary structural solution: rather than pursuing the impossible dream of a single neutral AI, society should cultivate a genuine plurality of models with explicitly different value systems. This is the opposite of the current trajectory, which is toward market consolidation around a handful of dominant platforms, each encoding its creator’s worldview as the default. A healthy AI ecosystem would look less like a monopoly and more like a library — many books, many authors, many perspectives, with readers equipped to navigate among them critically.
None of this will be easy. The opacity of LLMs — the fact that values emerge from the interaction of training data, reward models, and architectural choices in ways that resist inspection — makes traditional transparency mechanisms insufficient. But the difficulty of the task does not excuse inaction. And the first step is the hardest one: the cultural and intellectual shift from “these systems are neutral tools” to “these systems carry politics, and the politics matter.”
What the Machine Learned
Return, then, to that team of engineers in Edinburgh, training their five-star hiring algorithm on a decade of résumés. What the machine learned was not how to identify talent. It learned the ideology of an industry that had been systematically undervaluing women for ten years, and it encoded that ideology as mathematical truth. It learned that men’s names appeared on the résumés of people who were hired. It learned that the word “women’s” correlated with rejection. It learned that the graduates of certain colleges — the ones that happened to be all-women’s institutions — were less likely to have been selected in the past. And it turned these patterns into predictions, with the calm authority of a system that has no idea it is being political.
The machine didn’t malfunction. It worked. And in working, it revealed something that its creators hadn’t wanted to see: that ten years of hiring decisions, made by thousands of individuals who surely considered themselves fair, had produced a pattern indistinguishable from systematic discrimination. The algorithm didn’t create the ideology. It inherited it, amplified it, and gave it the sheen of objectivity.
Every large language model operating today has done something similar — not just with gender in hiring, but at a civilizational scale. These systems have absorbed the worldviews, hierarchies, moral vocabularies, and blind spots of their creators, their training data, and the particular historical moment in which they were built. They present this absorbed worldview as objective assistance — as what “the AI thinks” — and in doing so, they universalize the perspectives of a remarkably narrow slice of humanity. As researchers have observed, LLMs are carriers of civilizational projects, and the stronger their impact, the more authoritative and objective the algorithmic responses seem. A technology designed to unite humanity within a single information space actually reproduces and amplifies the rifts it claims to transcend.
The ideology in the machine is only invisible if we refuse to look. It is there in the training data that was selected and the data that was discarded. It is there in the reward model that taught the system what counts as a good answer. It is there in the 84-page constitution that ranks safety above helpfulness, and in the seven hardcoded prohibitions that someone — not you, not your elected representative, but someone — decided were the absolute limits. It is there in the fact that the same model says different things in different languages, reflecting the cultural assumptions of labelers you will never meet. It is there in the fairness metric that a company chose without telling you there were other metrics that would have produced a different result.
The question was never whether our AI systems have politics. They do. They always have. The question is whether we will demand the right to know what those politics are — and whether we will be allowed to ask. The most dangerous ideology is the one that presents itself as no ideology at all. The first act of resistance is simply demanding to see the label.
References
ACLU. (n.d.). Why Amazon’s automated hiring tool discriminated against women. American Civil Liberties Union. https://www.aclu.org/news/womens-rights/why-amazons-automated-hiring-tool-discriminated-against
Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
Anthropic. (2025). Claude’s model specification. https://www.anthropic.com/news/claudes-constitution
Axios. (2026, February 28). Anthropic to take Trump’s Pentagon to court over Claude dispute. Axios. https://www.axios.com/2026/02/28/anthropic-trump-pentagon-lawsuit-ai-dispute
Barbrook, R., & Cameron, A. (1995). The Californian ideology. Mute. https://www.metamute.org/editorial/articles/californian-ideology
Bloomberg. (2026, February 27). Anthropic’s feud with Pentagon mushrooms into broader battle. Bloomberg. https://www.bloomberg.com/news/articles/2026-02-27/anthropic-s-feud-with-pentagon-mushrooms-into-broader-battle
CBS News. (2026a). Hegseth declares Anthropic a supply chain risk. CBS News. https://www.cbsnews.com/news/hegseth-declares-anthropic-supply-chain-risk/
CBS News. (2026b). Pentagon official lashes out at Anthropic. CBS News. https://www.cbsnews.com/news/pentagon-anthropic-feud-ai-military-says-it-made-compromises/
CBS News. (2026c). Trump orders federal agencies to stop using Anthropic’s AI technology. CBS News. https://www.cbsnews.com/news/trump-anthropic-ai-order-federal-agencies/
CNBC. (2026, February 27). Trump admin blacklists Anthropic as AI firm refuses Pentagon demands. CNBC. https://www.cnbc.com/2026/02/27/trump-anthropic-ai-pentagon.html
CNN. (2026a, February 26). Anthropic rejects latest Pentagon offer. CNN Business. https://www.cnn.com/2026/02/26/tech/anthropic-rejects-pentagon-offer
CNN. (2026b, February 27). OpenAI strikes deal with Pentagon hours after Trump admin bans Anthropic. CNN. https://edition.cnn.com/2026/02/27/tech/openai-pentagon-deal-ai-systems
Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
DefenseScoop. (2026, February 27). Experts raise questions about Pentagon’s threat to blacklist Anthropic. DefenseScoop. https://defensescoop.com/2026/02/27/pentagon-threat-blacklist-anthropic-ai-experts-raise-concerns/
Hao, K. (2020, December 4). The paper that forced Timnit Gebru out of Google. MIT Technology Review. https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/
Lawfare. (2026). What the Defense Production Act can and can’t do to Anthropic. Lawfare. https://www.lawfaremedia.org/article/what-the-defense-production-act-can-and-cant-do-to-anthropic
NPR. (2026a, February 26). Deadline looms as Anthropic rejects Pentagon demands. NPR. https://www.npr.org/2026/02/26/nx-s1-5727847/anthropic-defense-hegseth-ai-weapons-surveillance
NPR. (2026b, February 27). OpenAI announces Pentagon deal after Trump bans Anthropic. NPR. https://www.npr.org/2026/02/27/nx-s1-5729118/trump-anthropic-pentagon-openai-ai-weapons-ban
NYU Stern Center for Business and Human Rights. (2026). The cost of conscience: What the Anthropic-Pentagon feud means for AI governance. NYU Stern. https://bhr.stern.nyu.edu/quick-take/the-cost-of-conscience-what-the-anthropic-pentagon-feud-means-for-ai-governance/
OpenAI. (2026). Our agreement with the Department of War. OpenAI. https://openai.com/index/our-agreement-with-the-department-of-war/
TechStartups. (2026, February 24). Top tech news today: February 24, 2026. TechStartups. https://techstartups.com/2026/02/24/top-tech-news-today-february-24-2026/
The Washington Post. (2026, February 28). Pentagon’s Anthropic fight reshapes Silicon Valley relations. https://www.washingtonpost.com/technology/2026/02/28/pentagon-anthropic-fight-silicon-valley/
Winner, L. (1980). Do artifacts have politics? Daedalus, 109(1), 121–136. https://www.jstor.org/stable/20024652