Palantir’s manifesto on X

Palantir occupies an extreme position in Silicon Valley, but it also represents a new and growing ethos within Big Tech. We saw the company’s official X message today.

A few early comments:

This is the nationalist turn. Big Tech now explicitly acts as an agent of a nation-state, which in turn justifies state regulation of software worldwide. You can no longer claim that a state is authoritarian merely because it restricts or bans software. The Chinese/Russian/Turkish approach of digital sovereignty wins; the global internet freedom agenda, once championed by the State Department, is dead.

This is a fascist declaration. It is also triumphalist, proud, and absolutist, written with an intensity that makes it read as if it were composed by an incel.

I asked Perplexity for initial reactions:

Palantir’s X post is being read as a proud, almost triumphalist declaration that warfare is now software‑defined and that Palantir sits at the center of this new regime; reactions so far are sharply polarized between investors/defense circles on one side and human‑rights, tech‑ethics, and parts of mainstream media on the other.[1][2][3][4]

Because the tweet itself is so short and absolutist, most reactions are interpretive: people are mapping it onto Palantir’s visible role in the Iran war, in Israel/Palestine, and in the Anthropic–Pentagon conflict rather than treating it as a standalone statement.[5][2][3][6][1]

How the declaration is being interpreted

  • Commenters are tying the tweet directly to Shyam Sankar’s and Alex Karp’s recent line that the Iran war is “the first large‑scale combat operation really driven” by AI and that the side that masters AI‑enabled targeting first wins.[7][5][2][8]
  • Many critics see the post as Palantir openly embracing the idea that algorithmic targeting and battle‑management systems are as decisive—and as dangerous—as tactical nukes (borrowing Karp’s own earlier metaphor), but without acknowledging civilian harm or accountability.[5][3]
  • Supporters in defense/VC networks tend to read it as a victory lap: proof that Palantir’s bet on “war software” is paying off in real conflicts, supposedly validating its strategy against more cautious AI companies.[1][8][6][7]

Early critical reactions (rights, media, AI ethics)

  • Human‑rights and humanitarian law circles are reacting very negatively, folding the tweet into ongoing accusations that Palantir is enabling unlawful or indiscriminate targeting in Gaza and Iran, especially where AI systems are involved in target selection with limited “meaningful verification.”[9][5][10][3]
  • Commentators highlight the discrepancy between Palantir’s public messaging about “ethical AI in defense decision‑support” and its celebratory tone about high‑tempo AI‑driven strike campaigns (e.g., thousands of targets in Iran within 24 hours), reading the tweet as evidence that speed and “productivity” are prioritized over civilian protection.[1][7][5][11]
  • In critical media (Democracy Now!, rights‑oriented outlets, Business & Human Rights Resource Centre), the declaration is folded into a broader narrative: AI‑driven warfare is being normalized and branded as a competitive edge, while mechanisms for legal and democratic oversight lag far behind.[5][10][3]

Supportive and strategic reactions (defense, investors)

  • Defense‑tech boosters, some retired officers, and venture/defense commentators are largely positive: they see the declaration as candid realism that AI targeting and command‑and‑control have already changed war, and that the West must move fast or “fall behind.”[1][2][12][8][6]
  • Posts and op‑eds in the defense‑industry ecosystem praise Palantir’s ability to deploy AI‑enabled systems like Maven extremely quickly (the “1,000 targets in 24 hours” meme) and explicitly contrast this with Anthropic’s decision to draw red lines on autonomous weapons and surveillance.[7][5][8][6]
  • Some financial analysis frames the declaration as consistent with Palantir’s long‑running self‑presentation at WEF and in investor communications: a firm that will unapologetically build war‑fighting infrastructure for Western allies, which is seen as both a moat and a source of political risk.[12][8][13][14][1]

Intra‑tech and Anthropic/OpenAI adjacent reactions

  • Within the AI sector, Palantir’s tweet is being juxtaposed with Anthropic’s and other labs’ refusals to support certain Pentagon uses (mass surveillance, fully autonomous weapons), and with Trump’s order to purge Anthropic from federal systems; Palantir’s declaration is read as a statement that it will occupy the space vacated by more cautious labs.[5][10][8][6]
  • Critics in the AI‑safety and alignment community argue that Palantir is effectively saying: “we build the war plumbing, others worry about the ethics,” especially given UK Palantir leadership’s insistence that responsibility for AI targeting lies with militaries, not with the company.[9][10][15]
  • There is also some technical skepticism: researchers point out that Palantir’s framing of AI as the decisive factor in war obscures messy realities—data quality, human operators, and political constraints—and may oversell what current systems can reliably do, particularly in complex environments like urban Gaza or Iran.[1][8][5]

Political and democratic‑theory angles

  • Karp’s recent statements about AI “redefining power, war, and economies” at Davos—and his explicit talk about who will be left behind—are being used to interpret the tweet as not just about military doctrine but also about domestic power: who controls AI infrastructure and how that reshapes democratic and geopolitical hierarchies.[12][14][16]
  • Commentators in more critical venues argue that Palantir’s declaration reflects a broader shift: core questions about when and how to use force are being embedded in proprietary software and data pipelines, where elected representatives and publics have little visibility.[5][10][3][14]

If you’re planning to write on this, what angle are you most drawn to—human‑rights/legal responsibility, intra‑AI industry politics (Anthropic vs. Palantir), or the democratic‑theory implications of “war as software”?

  1. https://www.wired.com/story/palantir-developer-conference-ai-war-alex-karp/
  2. https://www.bloomberg.com/news/articles/2026-03-24/palantir-cto-sees-iran-war-as-first-major-conflict-driven-by-ai
  3. https://www.business-humanrights.org/en/latest-news/palantir-allegedly-enables-israels-ai-targeting-amid-israels-war-in-gaza-raising-concerns-over-war-crimes/
  4. https://gizmodo.com/ai-military-weapons-reaim-summit-human-rights-1850128562/5
  5. https://www.democracynow.org/2026/3/18/ai_warfare
  6. https://fortune.com/2026/03/05/palantir-ceo-alex-karp-anthropic-pentagon-r-word-anthropic-pentagon/
  7. https://www.linkedin.com/posts/nicolaschaillan_incompetent-activity-7446600014212157440-3LXx
  8. https://lifearchitect.ai/palantir/
  9. https://www.bbc.com/news/articles/cdrm52g4pl2o
  10. https://www.business-humanrights.org/en/latest-news/palantir-ceo-comments-on-applying-ai-to-warfare-ai-dangers-to-humanity/
  11. https://blog.palantir.com/ethical-ai-in-defense-decision-support-systems-e3d57505db4f
  12. https://futurist.com/2026/01/24/palantirs-ceo-alex-karp-warns-ai-will-redefine-power-war-and-economies/
  13. https://finance.yahoo.com/markets/stocks/articles/palantir-stock-takes-burry-beatdown-120104102.html
  14. https://passblue.com/2025/10/12/palantir-seemingly-everywhere-all-at-once/
  15. https://www.aol.com/news/palantir-uk-boss-says-militaries-160354718.html
  16. https://newrepublic.com/post/207693/palantir-ceo-karp-disrupting-democratic-power
  17. https://www.facebook.com/cnbcinternational/posts/palantir-ceo-alex-karp-told-cnbc-that-his-companys-technology-is-being-used-in-t/1296268789027631/
  18. https://www.youtube.com/watch?v=1LcH4lP9XbA
