There is no AI race
Arnaud Bertrand <https://substack.com/@arnaudbertrand>
Substack - Apr 24, 2026 ∙ Paid
https://open.substack.com/pub/arnaudbertrand/p/there-is-no-ai-race?r=y77av&u...

The Chinese have this great principle: “Seek truth from facts” (实事求是). It’s commonly associated with the Communist Party - because it’s indeed a key slogan of theirs - but, as is often the case in China, it’s just a modern usage of a much older idiom, first recorded in the Book of Han <https://en.wikipedia.org/wiki/Book_of_Han> (111 CE).

What does it mean? Essentially, it’s an anti-ideology principle: rather than starting with a doctrine and looking at facts through its lens, you should go the other way round - “truth” is extracted from the world as it is. It’s basically an ode to empirical pragmatism.

“Seeking truth from facts” is precisely what’s missing in the conversation on AI, which is stunningly doctrinal and ideological: apocalyptic doomers on one end, deluded techno-utopians on the other, all of it made worse by the great-power framing of the so-called “AI race.” Everyone is starting with the conclusion - be it “China bad, so they must lose the AI race,” or “AGI will kill us all,” or “AGI will herald a new era of abundance” - and working backwards to find facts that fit.

It’s interesting to contrast this with the early days of the internet, which I’m sadly old enough to have witnessed as a late teenager and young adult. There were also, at the time, ideological dimensions and a lot of naivety - we were definitely not “seeking truth from facts” either - but the mood was fundamentally optimistic, universalist, and free-spirited. These were doctrinal beliefs in the sense that nobody had actually checked whether any of it was true, but it was a shared doctrine. Everyone globally held roughly the same one, so there was no ideological battle to be had.
For instance, it’s pretty comical to look back at Bill Clinton’s famous 2000 assertion that the internet would inevitably liberalize China and that the government’s efforts to control it were "like trying to nail jello to the wall <https://www.iatp.org/sites/default/files/Full_Text_of_Clintons_Speech_on_Chi...>" - arguing that this was “an argument for accelerating the effort to bring China into the world.”

Compare and contrast this with the current AI framing about China: today not only is there no talk of “bringing China in” - the entire policy architecture, from export controls to chip bans, is explicitly designed to keep China out. Nor is anybody expecting that AI will liberalize anyone: on the contrary, each side is convinced the other will use it to further its power with malign intent, surveil its population, and ultimately dominate the world. And, to be fair, the Chinese side is right to be convinced of this because that’s literally what the U.S. side is saying it will use AI for, which is also a complete contrast with the early internet discourse.

Back then, the early web was largely built by kids in dorm rooms and garages who saw themselves as contributing to a global commons. Today the people building AI in the U.S. - a handful of labs working hand-in-glove with the national security state - are explicitly framing their work as an instrument of U.S. dominance. Take Palantir’s recent manifesto, which they published on X <https://x.com/PalantirTech/status/2045574398573453312?s=20>: it has zero pretense of building for the world, instead arguing that the “engineering elite of Silicon Valley has an affirmative obligation to participate in the defense of the nation,” that Western civilization must “prevail,” that hard power in this century "will be built on software" and “A.I. weapons,” and that coexistence with others is, implicitly, off the table <https://x.com/RnaudBertrand/status/2045767857997484359?s=20>.
And, just in case anyone hadn’t gotten the message, they recently changed their tagline to “Software that dominates <https://www.wired.com/story/palantir-what-the-company-does/>.” Looking back, it should have been obvious that a company that named itself after the palantíri - the seeing-stones that Sauron, Tolkien’s representation of absolute evil, used to corrupt and dominate the peoples of Middle-earth - was probably not going to be building tools for human flourishing…

And it’s not just Palantir: it's pretty much the official position of the entire frontier U.S. lab ecosystem. As another illustration, take Dario Amodei, the CEO of Anthropic (the company behind Claude AI), who argues for an “entente strategy” <https://www.darioamodei.com/essay/machines-of-loving-grace> in which the West should use AI to achieve “robust military superiority (the stick) while at the same time offering to distribute the benefits of powerful AI (the carrot) to a wider and wider group of countries in exchange for supporting the coalition’s strategy to promote democracy.” In essence, Amodei views AI as both a tool of military dominance and a tool of blackmail to force countries to align themselves with the West politically. Not exactly the open, universalist spirit of the early web, and a position virtually indistinguishable from that of Palantir.

If one adopts a “seek truth from facts” approach to Anthropic, the reality of that company stands in very stark contrast to its public image. Back in February, there was a huge media story around Anthropic refusing the Pentagon's demand that Claude be made available for mass domestic surveillance and fully autonomous weapons, and the subsequent (seeming) power struggle between the company and Pete Hegseth.
The story, as told by virtually every mainstream outlet, was unambiguous: here was a responsible AI lab that had drawn an ethical line in the sand, "trying to do their best to help us from ourselves," as a Republican Senator put it <https://www.pbs.org/newshour/nation/anthropic-cannot-in-good-conscience-acce...>. The National Catholic Register even reported <https://www.ncregister.com/news/judge-sides-with-anthropic-2026-03-30-0cz4n7...> that a group of 14 Catholic moral theologians and ethicists had filed an amicus brief in the case, stating that “the teaching of the Catholic Church supports Anthropic’s decision”.

What no one spent much time mentioning was why the Pentagon was negotiating these terms with Anthropic in the first place: in January 2026, Secretary of War Pete Hegseth had issued a memorandum <https://media.defense.gov/2026/Jan/12/2003855671/-1/-1/0/ARTIFICIAL-INTELLIG...> aimed at “accelerating America's military AI dominance” that directed all Pentagon AI contracts to incorporate "any lawful use" language within 180 days - basically allowing the Pentagon to use AI for any purpose the Department considered lawful.

Why did this matter for Anthropic specifically? Because Anthropic had spent the previous year and a half aggressively working to become the Pentagon's most deeply integrated frontier AI lab - and, at the time, its only one. In November 2024 they partnered <https://www.axios.com/2024/11/08/anthropic-palantir-amazon-claude-defense-ai> with - of all companies - Palantir “to make Anthropic's models available to U.S. intelligence and defense agencies.” In June 2025, they launched Claude Gov <https://www.anthropic.com/news/claude-gov-models-for-u-s-national-security-c...> - a dedicated product line custom-built for U.S. national security customers, already being deployed by agencies at the highest classification levels.
A month later, in July 2025, they won a $200 million Pentagon contract <https://www.anthropic.com/news/anthropic-and-the-department-of-defense-to-ad...>. No other AI lab was remotely as deep in the U.S. military and defense apparatus. This all means that, contrary to the "principled holdout" story the media ran with, Hegseth's memo didn't pull Anthropic into the war machine. It affected them because they were already fully embedded in it, more than any other player.

Notably, Anthropic’s Claude was used by the Pentagon to capture Maduro, as reported by the WSJ <https://www.wsj.com/politics/national-security/pentagon-used-anthropics-clau...>: a story that came out less than two weeks before the whole media frenzy about this supposed “clash” between Anthropic and the Pentagon over AI ethics. Which really makes you wonder whether the "clash" was a genuine ethical dispute at all, or a PR operation designed to distract from the fact that Anthropic's AI had just been used by the U.S. military to illegally capture a foreign head of state…

It’s also interesting to look at what the “clash” was about. What Anthropic said they refused was the use of their AI for “*mass* *domestic* surveillance” (emphasis on both *mass* and *domestic*) and “*fully autonomous* weapons” (emphasis on both *fully* and *autonomous*). That is their exact wording <https://www.anthropic.com/news/statement-department-of-war>. Which means, concretely, that AI surveillance is fine domestically as long as it’s not “mass.” It also means, critically, that mass AI surveillance is fine as long as it’s not domestic. So the rest of the world is on notice: Anthropic has absolutely no problem with the U.S. military-industrial complex using their AI to surveil all 8 billion inhabitants of Earth, provided it excludes the 340 million Americans. And even the latter *can* be surveilled, just not in a “mass” way (whatever that means). This, incidentally, is actually merely a restatement of U.S. law.
Mass domestic surveillance of Americans is prohibited anyhow by the Fourth Amendment, and mass foreign surveillance is authorized under FISA Section 702 and Executive Order 12333 - the legal architecture Edward Snowden exposed in 2013. So Anthropic’s so-called “principled holdout” stance amounts to simply restating the current U.S. legal status quo, rebranding it as a “red line,” and being congratulated by theologians for “following the teaching of the Catholic Church” for it. Even though that very same legal architecture they’re defending was, when the Snowden revelations broke in 2013, rightly condemned as the most sweeping surveillance regime in the world (which it factually is).

In effect, when one “seeks truth from facts,” Anthropic pulled off the pretty impressive PR feat of getting applauded - even getting virtually sainted by Catholic theologians - for making the U.S.’s sweeping surveillance apparatus more grimly powerful with frontier AI. In Tolkien’s terms: sharpening the eye of Sauron. You have to hand it to them: impressive branding work, their PR folks definitely deserve a raise for that one.

Same story with the “*fully autonomous* weapons” aspect of their “ethical stance.” First of all, what this concretely means is that if there were a situation where the Pentagon decided to commit a Gaza-like genocide with AI, asking it - hypothetically - to select targets, optimize timing, and execute the operation, Anthropic’s red line would be fully honored provided Pete Hegseth personally clicked the final “go.” That’s the “ethical” principle at play here: not *whether* Claude helps plan awful deeds, only *whether a human is in the loop when it happens*.

And it goes further than this, actually: in their statement on this matter <https://www.anthropic.com/news/statement-department-of-war> Anthropic specified that they don't even object to fully autonomous weapons as a category. They specifically write that such weapons "may prove critical for our national defense."
Their only objection is that today's AI isn't reliable enough yet. And they helpfully offer “to work directly with the Department of War on R&D to improve the reliability of these systems.” In other words: Anthropic isn’t at all refusing to help build autonomous killing machines. They just want them to be good before they’re deployed: it’s an objection over the quality of the killing, not its ethics. Again, what an extraordinary PR feat: getting Catholic theologians to bless what is functionally a pitch for accurate autonomous killing machines.

And while Anthropic makes for a particularly instructive case study - given their near-saintly public image - they're really just one example among many. I could have equally picked on OpenAI, which quietly deleted <https://www.cnbc.com/2024/01/16/openai-quietly-removes-ban-on-military-use-o...> the prohibition on military use from its policies in January 2024 and then partnered with Anduril <https://www.anduril.com/news/anduril-partners-with-openai-to-advance-u-s-art...> to build AI for battlefield systems, with Sam Altman writing Washington Post op-eds <https://www.washingtonpost.com/opinions/2024/07/25/sam-altman-ai-democracy-a...> about the need for “democratic AI” to prevail over “authoritarian AI” - a framing indistinguishable from Amodei's or Palantir's, and laughable on its face when you consider that this so-called “democratic AI” is built for the global domination of others (the very opposite of democracy) and explicitly set up, as we just saw, for mass surveillance and autonomous killings. Or Google, which in February 2025 abandoned its long-standing pledge <https://www.cnbc.com/2025/02/04/google-removes-pledge-to-not-use-ai-for-weap...> not to develop AI for weapons or surveillance. Or Meta, which opened Llama up for U.S. national security use <https://www.theguardian.com/technology/2024/nov/05/meta-allows-national-secu...> in November 2024.

This isn't a few bad apples; it’s virtually the entire ecosystem.
So taking a step back, that's what you have on one side of the ledger: a U.S. obviously dead set on using AI not as a global commons but as a tool of submission and dominance for the United States.

Now, smart readers (which is, of course, all of you) will think they know how the rest of this article is going to go: “he’ll present the other side of the ledger - i.e. China - with its open source models, say that’s the way to go, that we all ought to cheer for the camp that does in fact value some form of openness and universalism, bla bla bla”. Well… at the risk of disappointing you, I’m actually not going to do that, because a) I like to surprise, and b) that’d be wrong.

The point is that, if we do indeed believe that AI ought to be a global commons, if we do believe in “seeking truth from facts” as opposed to ideology, by definition there shouldn’t be sides of a ledger in the first place: defaulting to “team China” is just the other side of the same mistake we've spent this whole article documenting. The framing itself - AI as a contest between civilizations, a race to be won - is the pathology. You can't seek truth from facts while still holding on to the premise that there must be a “winner”.

Think back to, say, electricity: would it have been right to frame its development as a race to be won by one civilization over another? To cheer for whichever country happened to be ahead on transformers in 1890? To make electricity a matter of national allegiance, something you rooted for the way you’d root for a football team? It sounds absurd because it is absurd. Electricity was a general-purpose technology destined to become part of the shared operating system of human life, and the only sensible stance toward it was to want it developed well and diffused widely for everyone’s benefit, full stop.

Sure, it’s absolutely true that China today has an infinitely better posture towards AI than the U.S. does.
Case in point: just as I was in the process of writing this article, DeepSeek V4 was released <https://huggingface.co/deepseek-ai/DeepSeek-V4-Pro>, and it's hard to imagine a more perfect illustration of the contrast. V4 is open-source under an MIT license, meaning anyone anywhere can download the weights, modify them, and run them on whatever hardware they choose. It's competitive with GPT-5.5 and Claude Opus 4.7 on most AI benchmarks, and priced at a small fraction - or even “free” if you choose to download it and run it yourself.

But the single most striking thing about V4 isn't the benchmarks or even the price. It's that V4 has zero dependency on Nvidia’s CUDA anywhere in its stack - it runs entirely on Huawei Ascend chips via Huawei's own CANN framework. In other words: China now not only has its own frontier AI models, it has its own domestic AI stack, top to bottom. And it's giving the whole thing away to the world - the exact opposite of the “hoard and dominate” posture of the U.S. labs. This is China seeing AI as a general-purpose technology built into the economy, shared across borders, and iterated on openly. And in fact one of the DeepSeek researchers wrote this on X <https://x.com/victor207755822/status/2047518146689732858?s=20> with the release of V4: “we stay true to long-termism and open source for all. AGI belongs to everyone.”

When you cheer for this, you’re not cheering for the “China side” to win; you’re cheering for the principle that there shouldn’t be any side: that this technology - perhaps the most consequential general-purpose technology humanity has ever developed - should belong to everyone, should be built in the open, should be allowed to diffuse the way electricity or the internal combustion engine or antibiotics diffused, imperfectly but broadly, across the whole of human civilization.

Let’s go back to electricity and look at what the world would have been like had a country decided to take towards it the approach that the U.S.
is currently taking towards AI: imagine if, say, the United States in 1890 had declared electricity a matter of national security, classified the designs of Edison’s dynamos and Tesla’s induction motors as export-controlled, integrated its electrical companies directly into the War Department, framed the generator as a strategic weapon rather than a general-purpose technology, and spent the following century building its foreign policy around ensuring that only it, and politically-aligned nations, had access to the light bulb. Batshit insane, right? Well, that’s EXACTLY the posture it’s taking towards AI.

Had that happened, it’s painfully obvious we’d ALL have been immeasurably poorer for it, materially and morally. And the United States first and foremost, given that for electricity - as will undoubtedly be the case for AI - the real value didn’t lie in control of the technology but in its widespread diffusion and in what you built on top of it. Think about the U.S.’s “electricity giants”: companies like GE, Whirlpool or RCA didn’t get rich by “owning” electricity - they got rich by *selling what electricity made possible* into a world that was electrifying as fast as it could. The U.S.’s electrical fortune was built on the world electrifying alongside it, not against it.

Does the analogy hold for AI? Yes, surprisingly well. I like Jensen Huang’s (Nvidia’s CEO) recent description of AI <https://www.dwarkesh.com/p/jensen-huang> as a “5-layer cake” made of 1) energy, 2) chips, 3) infrastructure, 4) models and lastly 5) applications. The implication of his point is that every layer save the last one - the application layer - will ultimately be largely commoditized, and as such that’s where the real value lies: in the millions of specific products, services, and industrial processes that get built on top of the other 4 layers. It’s the typical pattern of network-building: the layers underneath eventually become utilities, and utilities are low-margin commodity businesses.
It happened with electricity, it happened with phones, it happened with railroads, it happened with the internet itself. The operators of each layer got commoditized over time, while the durable, century-defining fortunes accrued at the top of the stack: GE on top of electricity, Apple on top of the mobile and telecom infrastructure, Amazon and Google on top of the internet. The pattern is so consistent across technologies that it's essentially a law of how general-purpose infrastructure creates value. There's no reason to think AI - a general-purpose technology of the same order - will turn out any different. If anything, the pattern may be even more pronounced, because the surface area of potential AI applications is larger than that of any previous general-purpose technology: every industry, every knowledge-work process, every product category can in principle be rebuilt with AI inside it.

If that is correct, it means the approach the U.S. is taking is strategically incoherent on its own terms. Seeing AI diffusion as something that ought to be resisted, slowed, and controlled gets the economics exactly backwards: diffusion is *precisely* the thing that creates the value in the first place. Arguing against it is just as misguided as if American legislators had argued for restrictions on global mobile adoption in 2007, in the name of “winning” the mobile revolution: the $3 trillion Apple of today exists precisely because it did the opposite. Diffusion wasn't the threat to Apple's value - diffusion *was* Apple's value.

Now I can already hear you retort: “sure, but AI is different - what about AGI? Surely the country that reaches it first will have a tremendous competitive advantage over everyone else, no?” Let’s look into this because it’s probably the main argument hawks are making in favor of the current American approach. The claim is that AGI is not like electricity. Electricity was a tool that humans used to do things.
AGI will be an agent - a system that can itself reason, plan, conduct research, and improve itself. The moment a sufficiently capable AGI comes online, the argument runs, it will be able to compound its own advantages: designing better chips, writing its own successor systems, solving the bottlenecks that currently constrain human science and engineering. The country that controls that system will, in effect, have added a superhuman research-and-development engine to its economy, its military, and its intelligence apparatus. Rival countries will not be able to catch up by copying, because the leader will be using AGI to move faster than copying can achieve. By this logic, AGI is the last general-purpose technology - the one that hands permanent advantage to whoever gets it first - and treating it like “just another technology to diffuse” isn't pragmatism, it's catastrophic strategic naïveté.

First of all, note the assumption implied in this thinking: that it would be acceptable - desirable, even - for one country to achieve permanent, structural dominance over every other country on Earth, provided, of course, that country is the United States. It’s presented as self-evident, the natural order of things, but let’s be very clear about what this vision means when you strip away the techno-utopist language: the permanent subjugation of every human being who doesn't happen to be American.

If you’re based outside the U.S. and you’re deluded enough to go along with this, let me suggest a thought experiment. Picture the current U.S. administration and imagine it with ten times its current leverage over your country. Because that's roughly what “permanent American AGI dominance” actually means for you. Every tariff decision, every threat that you already resent? Multiply it by an order of magnitude, make it permanent, and make it enforced by a technology your country cannot meaningfully match or resist.
That is the future you quite literally cheer for if you happen to reflexively root for the Palantir-Anthropic-OpenAI vision for AI. Thankfully, though, this is purely theoretical because it has no chance of happening anytime soon.

Let’s come back down to earth: AI is at a stage right now where it cannot even reliably perform tasks a competent six-year-old can perform. Current frontier models still routinely hallucinate facts, fail at basic arithmetic, lose track of long conversations, and cannot navigate a physical world they have no experience of. They are, to be clear, extraordinary tools - vastly more useful than what existed even two years ago. But the leap from “extraordinary tool that still can't multiply two four-digit numbers reliably” to “self-improving superintelligence capable of reorganizing global power structures” is - at this stage - a leap of religious faith.

If you don’t believe me, go try to change your flight date by chatting with any airline's AI customer service bot, and report back on how your imminent civilizational transformation feels. Or try to resolve literally *any* problem with the customer service bot of your bank or your telecom provider. These systems are all powered by current frontier AI, deployed at scale, by well-funded companies with every incentive to make them work. And yet the universal experience of interacting with them is an exercise in immense frustration as they fail to understand what you’re asking, misremember what you said two messages ago, confidently invent policies that don’t exist, and eventually escalate you to a human anyway.

If this is what the technology can actually do when deployed in production by companies that have spent millions on integration - if THIS is the state of the art - the notion that it’s on the edge of becoming a self-improving god-like intelligence capable of dominating geopolitics is, let’s be charitable, difficult to reconcile with observable reality.
The “seek truth from facts” reality is that model companies have an inherent interest in making everyone believe that AGI is just around the corner - because their entire business model, their valuations, and their political influence depend on it being true. Strip away the AGI narrative and the frontier U.S. labs are infrastructure providers in a brutally commoditizing market. In a world where this commoditization is allowed to run its course, the model labs become utilities - high fixed costs, eroding pricing power, the AT&T of the AI century. Their trillion-dollar valuations, their hundred-billion-dollar capital raises - all of it evaporates. Unless, of course, AGI is imminent - which it isn’t ¯\_(ツ)_/¯

And, in any case, the point is moot because even if AGI were imminent, a “seek truth from facts” look at the scoreboard tells you the hoarding strategy has already failed. DeepSeek V4 matches U.S. frontier models on most benchmarks that matter <https://huggingface.co/deepseek-ai/DeepSeek-V4-Pro>. Alibaba's Qwen, Moonshot's Kimi, and Zhipu's GLM are all at or near the frontier. The entire architecture of export controls, chip bans, and military-industrial integration was designed to prevent exactly this. It didn't work. Not “might not work” - it has already factually failed, today, on April 24, 2026. China has caught up, is releasing open-weight frontier models, and has just demonstrated with DeepSeek V4 that it can do so on entirely domestic silicon. Continuing to act as if dominance is still achievable - as Palantir, Anthropic, and OpenAI keep insisting in their strategy documents - is just denial of reality.

This is actually the main risk in AI right now. Not the superintelligence scenario the labs keep warning us about. Instead, it’s the labs themselves - or more precisely their lobbying power over the U.S. government, which has quietly become one of the most successful regulatory capture operations in history.
A handful of companies - OpenAI, Anthropic, Palantir, their close allies - have managed to make every piece of public discourse about AI happen on their terms, which is leading the U.S. into concrete policy choices that are bad not only for the world, but also for the U.S. itself.

Bad for the U.S. because it systematically pours American capital into the layers that are commoditizing - chips, infrastructure, models - while actively shrinking the addressable market of the one layer where durable, century-defining fortunes actually get built. And these policies simultaneously encourage the exact outcome they’re designed to prevent: they’ve empirically helped create a credible alternative AI stack that is just as good, free, genuinely open, and shipping on entirely non-American silicon - thereby amplifying the very commoditization they’re trying to delay.

Bad for the world because while the U.S. spends its energy on export controls, chip bans, and strategic denial, the actual work of figuring out how humanity integrates a transformative new technology - the norms, the institutions, the shared understanding of what AI should and shouldn’t do - is being crowded out of the conversation entirely. Every minute of political oxygen consumed by “how do we win” is a minute not spent on “how do we do this well.” And the darkest irony is that “how do we win” isn’t even a question the American public asked for - it’s a question manufactured, funded, and amplified by the small cluster of labs - the Anthropics, the OpenAIs, the Palantirs - whose business model depends on humanity never getting around to asking the better one.

Long story short, there is, in fact, no AI race:

1. It's a narrative manufactured and amplified by a handful of labs whose valuations depend on it.
2. The economics of general-purpose technologies actively punish “race winners.”
3.
Even on the hawks' own terms - AGI - there's no race to win, since it’s already empirically been lost.

A hundred years from now, the idea that anyone seriously framed AI as a race between nations will sound exactly as absurd as a race to “win” electricity sounds to us today. The Book of Han had it right in 111 CE: seek truth from facts.
It's a long, slightly boring story (I did not make to reach the end), which reminds me the Vietnam times and the evergreen concept of "imperialism". Is there anything really new to say about USA...? ----- Original Message ----- From: Maurizio Borghi via nexa To: Nexa Sent: Friday, April 24, 2026 1:30 PM Subject: [nexa] There is no AI race There is no AI race Arnaud Bertrand Substack - Apr 24, 2026 ∙ Paid https://open.substack.com/pub/arnaudbertrand/p/there-is-no-ai-race?r=y77av&u... The Chinese have this great principle: “Seek truth from facts” (实事求是). It’s commonly associated with the Communist Party - because it’s indeed a key slogan of theirs - but, as is often the case in China, it’s just a modern usage of a much older idiom, first recorded in the Book of Han (111 CE). What does it mean? Essentially, it’s an anti-ideology principle: rather than starting with a doctrine and looking at facts through its lens, you should go the other way round - “truth” is extracted from the world as it is. It’s basically an ode to empirical pragmatism. “Seeking truth from facts” is precisely what’s missing in the conversation on AI, which is stunningly doctrinal and ideological: apocalyptic doomers on one end, deluded techno-utopians on the other, all of it made worse by the great-power framing of the so-called “AI race.” Everyone is starting with the conclusion - be it “China bad, so they must lose the AI race,” or “AGI will kill us all,” or “AGI will herald a new era of abundance” - and working backwards to find facts that fit. It’s interesting to contrast this with the early days of the internet, which I’m sadly old enough to have witnessed as a late teenager and young adult. There was also, at the time, some ideological dimensions and a lot of naivety - we were definitely not “seeking truth from facts” either - but the mood was fundamentally optimistic, universalist, and free-spirited. 
These were doctrinal beliefs in the sense that nobody had actually checked whether any of it was true, but it was a shared doctrine. Everyone globally held roughly the same one, so there was no ideological battle to be had. For instance it’s pretty comical to look back at Bill Clinton’s famous 2000 assertion that the internet would inevitably liberalize China and that the government’s efforts to control it was "like trying to nail jello to the wall" - arguing that it was “an argument for accelerating the effort to bring China into the world.” Compare and contrast this with the current AI framing about China: today not only is there no talk of “bringing China in” - the entire policy architecture, from export controls to chip bans, is explicitly designed to keep China out. Nor is anybody expecting that AI will liberalize anyone: on the contrary, each side is convinced the other will use it to further their power with malign intent, surveil its population, and ultimately dominate the world. And, to be fair, the Chinese side is right to be convinced about this because that’s literally what the U.S. side is saying they’ll use AI for, which is also a complete contrast with the early internet discourse. Back then, the early web was largely built by kids in dorm rooms and garages who saw themselves as contributing to a global commons. Today the people building AI in the U.S. - a handful of labs working hand-in-glove with the national security state - are explicitly framing their work as an instrument of U.S. dominance. Take Palantir’s recent manifesto, which they published on X: it has zero pretense of building for the world, instead arguing that the “engineering elite of Silicon Valley has an affirmative obligation to participate in the defense of the nation,” that Western civilization must “prevail,” that hard power in this century "will be built on software" and “A.I. weapons,” and that coexistence with others is, implicitly off the table. 
And, just in case anyone hadn’t gotten the message, they recently changed their tagline to “Software that dominates <https://www.wired.com/story/palantir-what-the-company-does/>.”

Looking back, it should have been obvious that a company that named itself after the palantíri - the seeing-stones that Sauron, Tolkien’s representation of absolute evil, used to corrupt and dominate the peoples of Middle-earth - was probably not going to be building tools for human flourishing…

And it’s not just Palantir: it’s pretty much the official position of the entire frontier U.S. lab ecosystem.

As another illustration, take Dario Amodei, the CEO of Anthropic (the company behind Claude AI), who argues for an “entente strategy” <https://www.darioamodei.com/essay/machines-of-loving-grace> in which the West should use AI to achieve “robust military superiority (the stick) while at the same time offering to distribute the benefits of powerful AI (the carrot) to a wider and wider group of countries in exchange for supporting the coalition’s strategy to promote democracy.”

In essence, Amodei views AI as both a tool of military dominance and a tool of blackmail to force countries to align themselves with the West politically. Not exactly the open, universalist spirit of the early web, and a position virtually indistinguishable from Palantir’s.

If one adopts a “seek truth from facts” approach to Anthropic, the reality of the company stands in very stark contrast to its public image.

Back in February, there was a huge media story around Anthropic refusing the Pentagon’s demand that Claude be made available for mass domestic surveillance and fully autonomous weapons, and the subsequent (seeming) power struggle between the company and Pete Hegseth.

The story, as told by virtually every mainstream outlet, was unambiguous: here was a responsible AI lab that had drawn an ethical line in the sand, “trying to do their best to help us from ourselves,” as a Republican Senator put it.
The National Catholic Register even reported that a group of 14 Catholic moral theologians and ethicists had filed an amicus brief in the case, stating that “the teaching of the Catholic Church supports Anthropic’s decision”.

What no one spent much time mentioning was why the Pentagon was negotiating these terms with Anthropic in the first place: in January 2026, Secretary of War Pete Hegseth had issued a memorandum aimed at “accelerating America's military AI dominance” that directed all Pentagon AI contracts to incorporate "any lawful use" language within 180 days - essentially allowing the Pentagon to use AI for any purpose the Department considered lawful.

Why did this matter for Anthropic specifically? Because Anthropic had spent the previous year and a half aggressively working to become the Pentagon's most deeply integrated frontier AI lab - and, at the time, its only one. In November 2024 they partnered <https://www.axios.com/2024/11/08/anthropic-palantir-amazon-claude-defense-ai> with - of all companies - Palantir “to make Anthropic's models available to U.S. intelligence and defense agencies.” In June 2025, they launched Claude Gov - a dedicated product line custom-built for U.S. national security customers, already being deployed by agencies at the highest classification levels. A month later, in July 2025, they won a $200 million Pentagon contract. No other AI lab was remotely as deep in the U.S. military and defense apparatus.

This all means that, contrary to the "principled holdout" story the media ran with, Hegseth's memo didn't pull Anthropic into the war machine. It affected them because they were already fully embedded in it, more than any other player.

Notably, Anthropic’s Claude was used by the Pentagon to capture Maduro, as reported by the WSJ: a story that came out less than two weeks before the whole media frenzy about this supposed “clash” between Anthropic and the Pentagon over AI ethics.
Which really makes you wonder whether the "clash" was a genuine ethical dispute at all, or a PR operation designed to distract from the fact that Anthropic's AI had just been used by the U.S. military to illegally capture a foreign head of state…

It’s also interesting to look at what the “clash” was about. What Anthropic said they refused was the use of their AI for “mass domestic surveillance” (emphasis on both mass and domestic) and “fully autonomous weapons” (emphasis on both fully and autonomous). That is their exact wording <https://www.anthropic.com/news/statement-department-of-war>.

Which means, concretely, that AI surveillance is fine domestically as long as it’s not “mass.” It also means, critically, that mass AI surveillance is fine as long as it’s not domestic.

So the rest of the world is on notice: Anthropic has absolutely no problem with the U.S. military-industrial complex using their AI to surveil all 8 billion inhabitants of Earth, provided it excludes the 340 million Americans. And even the latter can be surveilled, just not in a “mass” way (whatever that means).

This, incidentally, is merely a restatement of U.S. law. Mass domestic surveillance of Americans is prohibited anyhow by the Fourth Amendment, and mass foreign surveillance is authorized under FISA Section 702 and Executive Order 12333 - the legal architecture Edward Snowden exposed in 2013.

So Anthropic’s so-called “principled holdout” stance amounts to restating the current U.S. legal status quo, rebranding it as a “red line,” and being congratulated by theologians for “following the teaching of the Catholic Church.” Even though that very same legal architecture they’re defending was, when the Snowden revelations broke in 2013, rightly condemned as the most sweeping surveillance regime in the world (which it factually is).

In effect, when one “seeks truth from facts,” Anthropic pulled off the pretty impressive PR feat of getting applauded - even virtually sainted by Catholic theologians - for making the U.S.’s sweeping surveillance apparatus more grimly powerful with frontier AI. In Tolkien’s terms: sharpening the Eye of Sauron. You have to hand it to them: impressive branding work; their PR folks definitely deserve a raise for that one.

Same story with the “fully autonomous weapons” aspect of their “ethical stance.” First of all, what this concretely means is that if the Pentagon decided to commit a Gaza-like genocide with AI - asking it, hypothetically, to select targets, optimize timing, and execute the operation - Anthropic’s red line would be fully honored provided Pete Hegseth personally clicked the final “go.” That’s the “ethical” principle at play here: not whether Claude helps plan awful deeds, only whether a human is in the loop when it happens.

And it actually goes further than this: in their statement on the matter, Anthropic specified that they don't even object to fully autonomous weapons as a category. They specifically write that such weapons "may prove critical for our national defense." Their only objection is that today's AI isn't reliable enough yet. And they helpfully offer “to work directly with the Department of War on R&D to improve the reliability of these systems.”

In other words: Anthropic isn’t refusing to help build autonomous killing machines at all. They just want them to be good before they’re deployed: it’s an objection over the quality of the killing, not its ethics. Again, what an extraordinary PR feat: getting Catholic theologians to bless what is functionally a pitch for accurate autonomous killing machines.

And while Anthropic makes for a particularly instructive case study - given their near-saintly public image - they're really just one example among many.
I could have equally picked on OpenAI, who quietly deleted the prohibition on military use from their policies in January 2024 and then partnered with Anduril to build AI for battlefield systems, with Sam Altman writing Washington Post op-eds about the need for “democratic AI” to prevail over “authoritarian AI” - a framing indistinguishable from Amodei's or Palantir's, and laughable on its face when you consider that this so-called “democratic AI” is built for the global domination of others (the very opposite of democracy) and explicitly set up, as we just saw, for mass surveillance and autonomous killings.

Or Google, which in February 2025 abandoned its long-standing pledge not to develop AI for weapons or surveillance. Or Meta, which opened Llama up for U.S. national security use in November 2024. This isn't a few bad apples, it’s virtually the entire ecosystem.

So taking a step back, that's what you have on one side of the ledger: a U.S. obviously dead set on using AI not as a global commons but as a tool of submission and dominance for the United States.

Now, smart readers (which is, of course, all of you) will think they know how the rest of this article is going to go: “he’ll present the other side of the ledger - i.e. China - with its open source models, say that’s the way to go, that we all ought to cheer for the camp that does in fact value some form of openness and universalism, bla bla bla”. Well… at the risk of disappointing you, I’m actually not going to do that, because a) I like to surprise, and b) that’d be wrong.

The point is that, if we do indeed believe that AI ought to be a global commons, if we do believe in “seeking truth from facts” as opposed to ideology, by definition there shouldn’t be sides of a ledger in the first place: defaulting to “team China” is just the other side of the same mistake we've spent this whole article documenting. The framing itself - AI as a contest between civilizations, a race to be won - is the pathology.
You can't seek truth from facts while still holding on to the premise that there must be a “winner”.

Think back to, say, electricity: would it have been right to frame its development as a race to be won by one civilization over another? To cheer for whichever country happened to be ahead on transformers in 1890? To make electricity a matter of national allegiance, something you rooted for the way you’d root for a football team? It sounds absurd because it is absurd. Electricity was a general-purpose technology destined to become part of the shared operating system of human life, and the only sensible stance toward it was to want it developed well and diffused widely for everyone’s benefit, full stop.

Sure, it’s absolutely true that China today has an infinitely better posture towards AI than the U.S. does. Case in point: just as I was writing this article, DeepSeek V4 was released, and it's hard to imagine a more perfect illustration of the contrast.

V4 is open-source under an MIT license, meaning anyone anywhere can download the weights, modify them, and run them on whatever hardware they choose. It's competitive with GPT-5.5 and Claude Opus 4.7 on most AI benchmarks, and priced at a small fraction - or even “free” if you choose to download it and run it yourself.

But the single most striking thing about V4 isn't the benchmarks or even the price. It's that V4 has zero dependency on Nvidia’s CUDA anywhere in its stack - it runs entirely on Huawei Ascend chips via Huawei's own CANN framework. In other words: China now not only has its own frontier AI models, it has its own domestic AI stack, top to bottom. And it's giving the whole thing away to the world - the exact opposite of the “hoard and dominate” posture of the U.S. labs. This is China seeing AI as a general-purpose technology built into the economy, shared across borders, and iterated on openly.
And in fact one of the DeepSeek researchers wrote this on X with the release of V4: “we stay true to long-termism and open source for all. AGI belongs to everyone.”

When you cheer for this, you’re not cheering for the “China side” to win, you’re cheering for the principle that there shouldn’t be any side: that this technology - perhaps the most consequential general-purpose technology humanity has ever developed - should belong to everyone, should be built in the open, should be allowed to diffuse the way electricity or the internal combustion engine or antibiotics diffused, imperfectly but broadly, across the whole of human civilization.

Let’s go back to electricity and look at what the world would have been like had a country taken towards it the approach the U.S. is currently taking towards AI: imagine if, say, the United States in 1890 had declared electricity a matter of national security, classified the designs of Edison’s dynamos and Tesla’s induction motors as export-controlled, integrated its electrical companies directly into the War Department, framed the generator as a strategic weapon rather than a general-purpose technology, and spent the following century building its foreign policy around ensuring that only it, and politically aligned nations, had access to the light bulb.

Batshit insane, right? Well, that’s EXACTLY the posture the U.S. is taking towards AI.

Had that happened, it’s painfully obvious we’d ALL have been immeasurably poorer for it, materially and morally. And the United States first and foremost, given that for electricity - as will undoubtedly be the case for AI - the real value didn’t lie in control of the technology but in its widespread diffusion and in what you built on top of it.

Think about the U.S.’s “electricity giants”: companies like GE, Whirlpool or RCA didn’t get rich by “owning” electricity - they got rich by selling what electricity made possible into a world that was electrifying as fast as it could.
The U.S.’s electrical fortune was built on the world electrifying alongside it, not against it.

Does the analogy hold for AI? Yes, surprisingly well. I like Nvidia CEO Jensen Huang’s recent description of AI as a “5-layer cake” made of 1) energy, 2) chips, 3) infrastructure, 4) models and lastly 5) applications. His point implies that every layer except the last - the application layer - will ultimately be largely commoditized, and that the application layer is therefore where the real value lies: in the millions of specific products, services, and industrial processes that get built on top of the other four layers.

It’s typical network building: the layers underneath eventually become utilities, and utilities are low-margin commodity businesses. It happened with electricity, it happened with phones, it happened with railroads, it happened with the internet itself. The operators of each layer got commoditized over time, while the durable, century-defining fortunes accrued at the top of the stack: GE on top of electricity, Apple on top of the mobile and telecom infrastructure, Amazon and Google on top of the internet. The pattern is so consistent across technologies that it's essentially a law of how general-purpose infrastructure creates value.

There's no reason to think AI - a general-purpose technology of the same order - will turn out any different. If anything, the pattern may be even more pronounced, because the surface area of potential AI applications is larger than for any previous general-purpose technology: every industry, every knowledge-work process, every product category can in principle be rebuilt with AI inside it.

If that is correct, the approach the U.S. is taking is strategically incoherent on its own terms. Seeing AI diffusion as something that ought to be resisted, slowed, and controlled gets the economics exactly backwards: diffusion is precisely what creates the value in the first place.
Arguing against it is just as misguided as if American legislators had argued for restrictions on global mobile adoption in 2007, in the name of “winning” the mobile revolution: the $3 trillion Apple of today exists precisely because it did the opposite. Diffusion wasn't the threat to Apple's value - diffusion was Apple's value.

Now I can already hear you retort: “sure, but AI is different - what about AGI? Surely the country that reaches it first will have a tremendous competitive advantage over everyone else, no?” Let’s look into this, because it’s probably the main argument hawks make in favor of the current American approach.

The claim is that AGI is not like electricity. Electricity was a tool that humans used to do things. AGI will be an agent - a system that can itself reason, plan, conduct research, and improve itself. The moment a sufficiently capable AGI comes online, the argument runs, it will be able to compound its own advantages: designing better chips, writing its own successor systems, solving the bottlenecks that currently constrain human science and engineering. The country that controls that system will, in effect, have added a superhuman research-and-development engine to its economy, its military, and its intelligence apparatus. Rival countries will not be able to catch up by copying, because the leader will be using AGI to move faster than copying can achieve. By this logic, AGI is the last general-purpose technology - the one that hands permanent advantage to whoever gets it first - and treating it like “just another technology to diffuse” isn't pragmatism, it's catastrophic strategic naïveté.

First of all, note the implied assumption in this thinking: that it would be acceptable - desirable, even - for one country to achieve permanent, structural dominance over every other country on Earth - provided, of course, that country is the United States.
It’s presented as self-evident, the natural order of things, but let’s be very clear about what this vision means when you strip away the techno-utopian language: the permanent subjugation of every human being who doesn't happen to be American.

If you’re based outside the U.S. and you’re deluded enough to go along with this, let me suggest a thought experiment. Picture the current U.S. administration and imagine it with ten times its current leverage over your country - because that's roughly what “permanent American AGI dominance” actually means for you. Every tariff decision, every threat that you already resent? Multiply it by an order of magnitude, make it permanent, and make it enforced by a technology your country cannot meaningfully match or resist. That is the future you quite literally cheer for if you reflexively root for the Palantir-Anthropic-OpenAI vision for AI.

Thankfully, though, this is purely theoretical, because it has no chance of happening anytime soon. Let’s come back down to earth: AI is at a stage right now where it cannot even reliably perform tasks a competent six-year-old can. Current frontier models still routinely hallucinate facts, fail at basic arithmetic, lose track of long conversations, and cannot navigate a physical world they have no experience of. They are, to be clear, extraordinary tools - vastly more useful than what existed even two years ago. But the leap from “extraordinary tool that still can't reliably multiply two four-digit numbers” to “self-improving superintelligence capable of reorganizing global power structures” is, at this stage, a leap of religious faith.

If you don’t believe me, go try to change your flight date by chatting with any airline's AI customer service bot, and report back on how your imminent civilizational transformation feels. Or try to resolve literally any problem with the customer service bot of your bank or your telecom provider.
These systems are all powered by current frontier AI, deployed at scale, by well-funded companies with every incentive to make them work. And yet the universal experience of interacting with them is an exercise in immense frustration as they fail to understand what you’re asking, misremember what you said two messages ago, confidently invent policies that don’t exist, and eventually escalate you to a human anyway.

If this is what the technology can actually do when deployed in production by companies that have spent millions on integration - if THIS is the state of the art - the notion that it’s on the edge of becoming a self-improving god-like intelligence capable of dominating geopolitics is, let’s be charitable, difficult to reconcile with observable reality.

The “seek truth from facts” reality is that model companies have an inherent interest in making everyone believe that AGI is just around the corner - because their entire business model, their valuations, and their political influence depend on it being true. Strip away the AGI narrative and the frontier U.S. labs are infrastructure providers in a brutally commoditizing market. In a world where this commoditization is allowed to run its course, the model labs become utilities - high fixed costs, eroding pricing power, the AT&T of the AI century. Their trillion-dollar valuations, their hundred-billion-dollar capital raises - all of it evaporates. Unless, of course, AGI is imminent - which it isn’t ¯\_(ツ)_/¯

And, in any case, the point is moot because even if AGI were imminent, a “seek truth from facts” look at the scoreboard tells you the hoarding strategy has already failed. DeepSeek V4 matches U.S. frontier models on most benchmarks that matter. Alibaba's Qwen, Moonshot's Kimi, and Zhipu's GLM are all at or near the frontier. The entire architecture of export controls, chip bans, and military-industrial integration was designed to prevent exactly this. It didn't work.
Not “might not work” - it has already factually failed, today, on April 24, 2026. China has caught up, is releasing open-weight frontier models, and has just demonstrated with DeepSeek V4 that it can do so on entirely domestic silicon. Continuing to act as if dominance is still achievable - as Palantir, Anthropic, and OpenAI keep insisting in their strategy documents - is just denial of reality.

This is actually the main risk in AI right now. Not the superintelligence scenario the labs keep warning us about, but the labs themselves - or more precisely, their lobbying power over the U.S. government, which has quietly become one of the most successful regulatory capture operations in history. A handful of companies - OpenAI, Anthropic, Palantir, and their close allies - have managed to make every piece of public discourse about AI happen on their terms, which is leading the U.S. into concrete policy choices that are bad not only for the world but for the U.S. itself.

Bad for the U.S. because it systematically pours American capital into the layers that are commoditizing - chips, infrastructure, models - while actively shrinking the addressable market of the one layer where durable, century-defining fortunes actually get built. And these policies simultaneously encourage the exact outcome they’re designed to prevent: they’ve empirically helped create a credible alternative AI that is just as good, free, genuinely open, and shipping on entirely non-American silicon - thereby amplifying the very commoditization they’re trying to delay.

Bad for the world because while the U.S. spends its energy on export controls, chip bans, and strategic denial, the actual work of figuring out how humanity integrates a transformative new technology - the norms, the institutions, the shared understanding of what AI should and shouldn’t do - is being crowded out of the conversation entirely.
Every minute of political oxygen consumed by “how do we win” is a minute not spent on “how do we do this well.” And the darkest irony is that “how do we win” isn’t even a question the American public asked for - it’s a question manufactured, funded, and amplified by the small cluster of labs - the Anthropics, the OpenAIs, the Palantirs - whose business model depends on humanity never getting around to asking the better one.

Long story short, there is, in fact, no AI race:

1. It's a narrative manufactured and amplified by a handful of labs whose valuations depend on it.
2. The economics of general-purpose technologies actively punish “race winners.”
3. Even on the hawks' own terms - AGI - there's no race to win, since it’s already empirically been lost.

A hundred years from now, the idea that anyone seriously framed AI as a race between nations will sound exactly as absurd as a race to “win” electricity sounds to us today. The Book of Han had it right in 111 CE: seek truth from facts.
Thank you Maurizio, really very interesting and useful. Giulio

On Sat 25 Apr 2026 at 02:20, Alfredo Bregni via nexa <nexa@server-nexa.polito.it> wrote:
And, just in case anyone hadn’t gotten the message, they recently changed their tagline to “Software that dominates <https://www.wired.com/story/palantir-what-the-company-does/>.”
Looking back, it should have been obvious that a company that named itself after the palantíri - the seeing-stones that Sauron, Tolkien’s representation of absolute evil, used to corrupt and dominate the peoples of Middle-earth - was probably not going to be building tools for human flourishing…
And it’s not just Palantir: it's pretty much the official position of the entire frontier U.S. lab ecosystem.
As another illustration, take Dario Amodei, the CEO of Anthropic (the company behind Claude AI), who argues for an “entente strategy” <https://www.darioamodei.com/essay/machines-of-loving-grace> in which the West should use AI to achieve “robust military superiority (the stick) while at the same time offering to distribute the benefits of powerful AI (the carrot) to a wider and wider group of countries in exchange for supporting the coalition’s strategy to promote democracy.”
In essence, Amodei views AI as both a tool of military dominance and a tool of blackmail to force countries to align themselves with the West politically. Not exactly the open, universalist spirit of the early web, and a position virtually indistinguishable from that of Palantir.
If one adopts a “seek truth from facts” approach to Anthropic, the reality of that company is a - very - stark contrast to their public image.
Back in February, there was a huge media story around Anthropic refusing the Pentagon's demand that Claude be made available for mass domestic surveillance and fully autonomous weapons, and the subsequent (seeming) power struggle between the company and Pete Hegseth.
The story, as told by virtually every mainstream outlet, was unambiguous: here was a responsible AI lab that had drawn an ethical line in the sand, "trying to do their best to help us from ourselves" as a Republican Senator put it <https://www.pbs.org/newshour/nation/anthropic-cannot-in-good-conscience-acce...>. The National Catholic Register even reported <https://www.ncregister.com/news/judge-sides-with-anthropic-2026-03-30-0cz4n7...> that a group of 14 Catholic moral theologians and ethicists had filed an amicus brief in the case, stating that “the teaching of the Catholic Church supports Anthropic’s decision”.
What no-one spent too much time mentioning was the reason why the Pentagon was negotiating these terms with Anthropic in the first place: it stemmed from the fact that in January 2026, Secretary of War Pete Hegseth had issued a memorandum <https://media.defense.gov/2026/Jan/12/2003855671/-1/-1/0/ARTIFICIAL-INTELLIG...> aimed at “accelerating America's military AI dominance” that directed all Pentagon AI contracts to incorporate "any lawful use" language within 180 days - basically allowing the Pentagon to use AI for any purpose the Department considered lawful.
Why did this matter for Anthropic specifically? Because Anthropic had spent the previous year and a half aggressively working to become the Pentagon's most deeply integrated frontier AI lab - and, at the time, its only one. In November 2024 they partnered <https://www.axios.com/2024/11/08/anthropic-palantir-amazon-claude-defense-ai> with - of all companies - Palantir “to make Anthropic's models available to U.S. intelligence and defense agencies.” In June 2025, they launched Claude Gov <https://www.anthropic.com/news/claude-gov-models-for-u-s-national-security-c...> - a dedicated product line custom-built for U.S. national security customers, already being deployed by agencies at the highest classification levels. A month later, in July 2025, they won a $200 million Pentagon contract <https://www.anthropic.com/news/anthropic-and-the-department-of-defense-to-ad...>. No other AI lab was remotely as deep as they were in the U.S. military and defense apparatus.
This all means that, contrary to the "principled holdout" story the media ran with, Hegseth's memo didn't pull Anthropic into the war machine. It affected them because they were already fully embedded in it, more than any other player.
Notably, Anthropic’s Claude was used by the Pentagon to capture Maduro, as reported by the WSJ <https://www.wsj.com/politics/national-security/pentagon-used-anthropics-clau...>: a story that came out less than 2 weeks before the whole media frenzy about this supposed “clash” between Anthropic and the Pentagon over AI ethics. Which really makes you wonder whether the "clash" was a genuine ethical dispute at all, or a PR operation designed to distract from the fact that Anthropic's AI had just been used by the U.S. military to illegally capture a foreign head of state…
It’s also interesting to look at what the “clash” was about. What Anthropic said they refused was the use of their AI for “*mass* *domestic* surveillance” (emphasis on both *mass* and *domestic*) and “*fully autonomous* weapons” (emphasis on both *fully* and *autonomous*). That is their exact wording <https://www.anthropic.com/news/statement-department-of-war>.
Which means, concretely, that AI surveillance is fine domestically as long as it’s not “mass.” It also means, critically, that mass AI surveillance is fine as long as it’s not domestic.
So the rest of the world is on notice: Anthropic has absolutely no problem with the U.S. military-industrial complex using their AI to surveil the billions of people on Earth who aren't among the 340 million Americans. And even the latter *can* be surveilled, just not in a “mass” way (whatever that means).
This, incidentally, is merely a restatement of U.S. law. Mass domestic surveillance of Americans is already prohibited by the Fourth Amendment, while mass foreign surveillance is authorized under FISA Section 702 and Executive Order 12333 - the legal architecture Edward Snowden exposed in 2013.
So Anthropic’s so-called “principled holdout” stance amounts to restating the current U.S. legal status quo, rebranding it as a “red line,” and being congratulated by theologians for “following the teaching of the Catholic Church.” Even though that very same legal architecture they’re defending was, when the Snowden revelations broke in 2013, rightly condemned as the most sweeping surveillance regime in the world (which it factually is).
In effect, when one “seeks truth from facts,” Anthropic pulled off the rather impressive PR feat of getting applauded - virtually sainted by Catholic theologians, even - for making the U.S.’s sweeping surveillance apparatus more grimly powerful with frontier AI. In Tolkien’s terms: sharpening the Eye of Sauron.
You have to hand it to them: impressive branding work, their PR folks definitely deserve a raise over that one.
Same story with the “*fully autonomous* weapons” aspect of their “ethical stance.”
First of all, what this concretely means is that if there were a situation where the Pentagon decided to commit a Gaza-like genocide with AI - asking it, hypothetically, to select targets, optimize timing, and execute the operation - Anthropic’s red line would be fully honored provided Pete Hegseth personally clicked the final “go.” That’s the “ethical” principle at play here: not *whether* Claude helps plan awful deeds, only *whether a human is in the loop when it happens*.
And it goes further than this actually: in their statement on this matter <https://www.anthropic.com/news/statement-department-of-war> Anthropic specified that they don't even object to fully autonomous weapons as a category. They specifically write that such weapons "may prove critical for our national defense." Their only objection is that today's AI isn't reliable enough yet. And they helpfully offer “to work directly with the Department of War on R&D to improve the reliability of these systems.”
In other words: Anthropic isn’t refusing to help build autonomous killing machines at all. They just want them to work well before they’re deployed: it’s an objection over the quality of the killing, not its ethics.
Again, what an extraordinary PR feat: getting Catholic theologians to bless what is functionally a pitch for accurate autonomous killing machines.
And while Anthropic makes for a particularly instructive case study - given their near-saintly public image - they're really just one example among many. I could have equally picked on OpenAI, who quietly deleted <https://www.cnbc.com/2024/01/16/openai-quietly-removes-ban-on-military-use-o...> the prohibition on military use from their policies in January 2024 and then partnered with Anduril <https://www.anduril.com/news/anduril-partners-with-openai-to-advance-u-s-art...> to build AI for battlefield systems, with Sam Altman writing Washington Post op-eds <https://www.washingtonpost.com/opinions/2024/07/25/sam-altman-ai-democracy-a...> about the need for “democratic AI” to prevail over “authoritarian AI” - a framing indistinguishable from Amodei's or Palantir's, and laughable on its face when you consider that this so-called “democratic AI” is built for the global domination of others (the very opposite of democracy) and explicitly set up, as we just saw, for mass surveillance and autonomous killings. Or Google, which in February 2025 abandoned its long-standing pledge <https://www.cnbc.com/2025/02/04/google-removes-pledge-to-not-use-ai-for-weap...> not to develop AI for weapons or surveillance. Or Meta, which opened Llama up for U.S. national security use <https://www.theguardian.com/technology/2024/nov/05/meta-allows-national-secu...> in November 2024.
This isn't a few bad apples; it’s virtually the entire ecosystem.
So taking a step back, that's what you have on one side of the ledger: a U.S. obviously dead set on using AI not as a global commons but as an instrument of dominance for the United States - a tool to make everyone else submit.
Now, smart readers (which is, of course, all of you) will think they know how the rest of this article is going to go: “he’ll present the other side of the ledger - i.e. China - with its open source models, say that’s the way to go, that we all ought to cheer for the camp that does in fact value some form of openness and universalism, bla bla bla”.
Well… at the risk of disappointing you, I’m actually not going to do that, because a) I like to surprise, and b) that’d be wrong.
The point is that, if we do indeed believe that AI ought to be a global commons, if we do believe in “seeking truth from facts” as opposed to ideology, by definition there shouldn’t be sides of a ledger in the first place: defaulting to “team China” is just the other side of the same mistake we've spent this whole article documenting. The framing itself - AI as a contest between civilizations, a race to be won - is the pathology. You can't seek truth from facts while still holding on to the premise that there must be a “winner”.
Think back to, say, electricity: would it have been right to frame its development as a race to be won by one civilization over another? To cheer for whichever country happened to be ahead on transformers in 1890? To make electricity a matter of national allegiance, something you rooted for the way you’d root for a football team? It sounds absurd because it is absurd. Electricity was a general-purpose technology destined to become part of the shared operating system of human life, and the only sensible stance toward it was to want it developed well and diffused widely for everyone’s benefit, full stop.
Sure, it’s absolutely true that China today has an infinitely better posture towards AI than the U.S. does. Case in point: just as I was writing this article, DeepSeek V4 was released <https://huggingface.co/deepseek-ai/DeepSeek-V4-Pro>, and it's hard to imagine a more perfect illustration of the contrast.
V4 is open-source under an MIT license, meaning anyone anywhere can download the weights, modify them, run them on whatever hardware they choose. It's competitive with GPT-5.5 and Claude Opus 4.7 on most AI benchmarks, and priced at a small fraction - or even “free” if you choose to download it and run it yourself. But the single most striking thing about V4 isn't the benchmarks or even the price. It's that V4 has zero dependency on Nvidia’s CUDA anywhere in its stack - it runs entirely on Huawei Ascend chips via Huawei's own CANN framework. In other words: China now not only has its own frontier AI models, it has its own domestic AI stack, top to bottom. And it's giving the whole thing away to the world - the exact opposite of the “hoard and dominate” posture of the U.S. labs.
This is China seeing AI as a general-purpose technology built into the economy, shared across borders, and iterated on openly. And in fact one of the DeepSeek researchers wrote this on X <https://x.com/victor207755822/status/2047518146689732858?s=20> with the release of V4: “we stay true to long-termism and open source for all. AGI belongs to everyone.”
When you cheer for this, you’re not cheering for the “China side” to win, you’re cheering for the principle that there shouldn’t be any side: that this technology - perhaps the most consequential general-purpose technology humanity has ever developed - should belong to everyone, should be built in the open, should be allowed to diffuse the way electricity or the internal combustion engine or antibiotics diffused, imperfectly but broadly, across the whole of human civilization.
Let’s go back to electricity and look at what the world would have been like had a country decided to take towards it the approach that the U.S. is currently taking towards AI: imagine if, say, the United States in 1890 had declared electricity a matter of national security, classified the designs of Edison’s dynamos and Tesla’s induction motors as export-controlled, integrated its electrical companies directly into the War Department, framed the generator as a strategic weapon rather than a general-purpose technology, and spent the following century building its foreign policy around ensuring that only it, and politically-aligned nations, had access to the light bulb.
Batshit insane, right? Well, that’s EXACTLY the posture the U.S. is currently taking towards AI.
Had that happened, it’s painfully obvious we’d ALL have been immeasurably poorer for it, materially and morally. And the United States first and foremost, given that for electricity - as will undoubtedly be the case for AI - the real value didn’t lie in control of the technology but in its widespread diffusion and in what you built on top of it. Think about the U.S.’s “electricity giants”: companies like GE, Whirlpool or RCA didn’t get rich by “owning” electricity - they got rich by *selling what electricity made possible* into a world that was electrifying as fast as it could. The U.S.’s electrical fortune was built on the world electrifying alongside it, not against it.
Does the analogy hold for AI? Yes, surprisingly well. I like Nvidia CEO Jensen Huang’s recent description of AI <https://www.dwarkesh.com/p/jensen-huang> as a “5-layer cake” made of 1) energy, 2) chips, 3) infrastructure, 4) models, and lastly 5) applications.
The implication of his point is that every layer except the last - the application layer - will ultimately be largely commoditized, and that’s therefore where the real value lies: in the millions of specific products, services, and industrial processes that get built on top of the other four.
It’s the typical pattern of infrastructure build-outs: the layers underneath eventually become utilities, and utilities are low-margin commodity businesses. It happened with electricity, it happened with phones, it happened with railroads, it happened with the internet itself. The operators of each layer got commoditized over time, while the durable, century-defining fortunes accrued at the top of the stack: GE on top of electricity, Apple on top of mobile and telecom infrastructure, Amazon and Google on top of the internet. The pattern is so consistent across technologies that it's essentially a law of how general-purpose infrastructure creates value.
There's no reason to think AI - a general-purpose technology of the same order - will turn out any different. If anything, the pattern may be even more pronounced, because the surface area of potential AI applications is larger than any previous general-purpose technology: every industry, every knowledge-work process, every product category can in principle be rebuilt with AI inside it.
If that is correct, the approach the U.S. is taking is strategically incoherent on its own terms. Treating AI diffusion as something that ought to be resisted, slowed, and controlled gets the economics exactly backwards: diffusion is *precisely* what creates the value in the first place. Arguing against it is as misguided as if American legislators had pushed to restrict global mobile adoption in 2007 in the name of “winning” the mobile revolution: the $3 trillion Apple of today exists precisely because the opposite happened. Diffusion wasn't the threat to Apple's value - diffusion *was* Apple's value.
Now I can already hear you retort: “sure, but AI is different, what about AGI, surely the country that reaches it first will have a tremendous competitive advantage over everyone else, no?”
Let’s look into this because it’s probably the main argument hawks are making in favor of the current American approach. The claim being that AGI is not like electricity. Electricity was a tool that humans used to do things. AGI will be an agent - a system that can itself reason, plan, conduct research, and improve itself. The moment a sufficiently capable AGI comes online, the argument runs, it will be able to compound its own advantages: designing better chips, writing its own successor systems, solving the bottlenecks that currently constrain human science and engineering.
The country that controls that system will, in effect, have added a superhuman research-and-development engine to its economy, its military, and its intelligence apparatus. Rival countries will not be able to catch up by copying, because the leader will be using AGI to move faster than copying can achieve. By this logic, AGI is the last general-purpose technology - the one that hands permanent advantage to whoever gets it first - and treating it like “just another technology to diffuse” isn't pragmatism, it's catastrophic strategic naïveté.
First of all, note the assumption implied in this thinking: that it would be acceptable - desirable, even - for one country to achieve permanent, structural dominance over every other country on Earth, provided, of course, that country is the United States. It’s presented as self-evident, the natural order of things, but let’s be very clear about what this vision means once you strip away the techno-utopian language: the permanent subjugation of every human being who doesn't happen to be American.
If you’re based outside the U.S. and you’re deluded enough to go along with this, let me suggest a thought experiment. Picture the current U.S. administration and imagine it with ten times its current leverage over your country. Because that's roughly what “permanent American AGI dominance” actually means for you. Every tariff decision, every threat that you already resent? Multiply it by an order of magnitude, make it permanent, and make it enforced by a technology your country cannot meaningfully match or resist. That is the future you quite literally cheer for if you happen to reflexively root for the Palantir-Anthropic-OpenAI vision for AI.
Thankfully though, this is purely theoretical because it has no chance of happening anytime soon. Let’s come back down to earth: AI is at a stage right now where it cannot even reliably perform tasks a competent six-year-old can handle. Current frontier models still routinely hallucinate facts, fail at basic arithmetic, lose track of long conversations, and cannot navigate a physical world they have no experience of.
They are, to be clear, extraordinary tools - vastly more useful than what existed even two years ago. But the leap from “extraordinary tool that still can't multiply two four-digit numbers reliably” to “self-improving superintelligence capable of reorganizing global power structures” is - at this stage - a leap of religious faith.
If you don’t believe me, go try to change your flight date by chatting with any airline's AI customer service bot, and report back on how your imminent civilizational transformation feels. Or try to resolve literally *any* problem with the customer service bot of your bank or your telecom provider.
These systems are all powered by current frontier AI, deployed at scale, by well-funded companies with every incentive to make them work. And yet the universal experience of interacting with them is an exercise in immense frustration as they fail to understand what you’re asking, misremember what you said two messages ago, confidently invent policies that don’t exist, and eventually escalate you to a human anyway.
If this is what the technology can actually do when deployed in production by companies that have spent millions on integration - if THIS is the state of the art - the notion that it’s on the edge of becoming a self-improving god-like intelligence capable of dominating geopolitics is, let’s be charitable, difficult to reconcile with observable reality.
The “seek truth from facts” reality is that model companies have an inherent interest in making everyone believe that AGI is just around the corner - because their entire business model, their valuations, and their political influence depend on it being true. Strip away the AGI narrative and the frontier U.S. labs are infrastructure providers in a brutally commoditizing market. In a world where this commoditization is allowed to run its course, the model labs become utilities - high fixed costs, eroding pricing power, the AT&T of the AI century. Their trillion-dollar valuations, their hundred-billion-dollar capital raises - all of it evaporates.
Unless, of course, AGI is imminent - which it isn’t ¯\_(ツ)_/¯
And, in any case, the point is moot because even if AGI were imminent, a “seek truth from facts” look at the scoreboard tells you the hoarding strategy has already failed. DeepSeek V4 matches U.S. frontier models on most benchmarks that matter <https://huggingface.co/deepseek-ai/DeepSeek-V4-Pro>. Alibaba's Qwen, Moonshot's Kimi, Zhipu's GLM are all at or near the frontier. The entire architecture of export controls, chip bans, and military-industrial integration was designed to prevent exactly this. It didn't work. Not “might not work” - it has already factually failed, today, on April 24, 2026. China has caught up, is releasing open-weight frontier models and just demonstrated with DeepSeek V4 it can do so on entirely domestic silicon. Continuing to act as if dominance is still achievable - as Palantir, Anthropic, and OpenAI keep insisting in their strategy documents - is just denial of reality.
This is actually the main risk in AI right now. Not the superintelligence scenario the labs keep warning us about, but the labs themselves - or more precisely their lobbying power over the U.S. government, which has quietly become one of the most successful regulatory capture operations in history. A handful of companies - OpenAI, Anthropic, Palantir, and their close allies - have managed to make every piece of public discourse about AI happen on their terms, which is leading the U.S. into concrete policy choices that are not only bad for the world but also bad for the U.S. itself.
Bad for the U.S. because it systematically pours American capital into the layers that are commoditizing - chips, infrastructure, models - while actively shrinking the addressable market of the one layer where durable, century-defining fortunes actually get built. And these policies simultaneously encourage the exact outcome they’re designed to prevent: they’ve empirically helped create a credible AI alternative that is just as good, free, genuinely open, and shipping on entirely non-American silicon - thereby amplifying the very commoditization they’re trying to delay.
Bad for the world because while the U.S. spends its energy on export controls, chip bans, and strategic denial, the actual work of figuring out how humanity integrates a transformative new technology - the norms, the institutions, the shared understanding of what AI should and shouldn’t do - is being crowded out of the conversation entirely. Every minute of political oxygen consumed by “how do we win” is a minute not spent on “how do we do this well.” And the darkest irony is that “how do we win” isn’t even a question the American public asked for - it’s a question manufactured, funded, and amplified by the small cluster of labs - the Anthropics, the OpenAIs, the Palantirs - whose business model depends on humanity never getting around to asking the better one.
Long story short, there is, in fact, no AI race:
1. It's a narrative manufactured and amplified by a handful of labs whose valuations depend on it.
2. The economics of general-purpose technologies actively punish “race winners.”
3. Even on the hawks' own terms - AGI - there's no race to win, since it’s already empirically been lost.
A hundred years from now, the idea that anyone seriously framed AI as a race between nations will sound exactly as absurd as a race to “win” electricity sounds to us today. The Book of Han had it right in 111 CE: seek truth from facts.
...
V4 is open-source under an MIT license, meaning anyone anywhere can download the weights, modify them, run them on whatever hardware they choose. It's competitive with GPT-5.5 and Claude Opus 4.7 on most AI benchmarks, and priced at a small fraction - or even “free” if you choose to download it and run it yourself. But the single most striking thing about V4 isn't the benchmarks or even the price. It's that V4 has zero dependency on Nvidia’s CUDA anywhere in its stack - it runs entirely on Huawei Ascend chips via Huawei's own CANN framework. In other words: China now not only has its own frontier AI models, it has its own domestic AI stack, top to bottom. And it's giving the whole thing away to the world - the exact opposite of the “hoard and dominate” posture of the U.S. labs. ...
This part isn't entirely clear. Far be it from me to defend the American "hoarders and dominators," but in practice the situation is this: until now, anyone who wanted to enter the field of AI computation needed an Nvidia chip and a framework called CUDA (in practice a U.S. monopoly). From now on, at equal computing power, there is a Chinese alternative: Huawei Ascend chips and the CANN framework. What will the rest of the world do? Keep choosing the Americans, and therefore Nvidia, or the Chinese, and therefore Huawei? As we've said many times, using the term open-source for AI is inappropriate. A.