A trove of nearly 3,200 disclosure records, which ProPublica made public this week, reveals a web of financial ties between Trump officials and the industries they help regulate.
A Pew survey found that 53% of Americans rate fellow citizens' morality as "bad," the opposite of the pattern in most of the 24 other countries polled. Canada led in positivity, with 92% calling fellow Canadians good.
An LLM does not need to pull a trigger or automate a missile to serve the cause of war. It can work to make obscene violence feel justified and rational — to give you the illusion that you have “thought the matter through.” 🧵
Musk shares Curtis Yarvin's summary of the great replacement theory
Here’s Elon Musk again spreading the white nationalist great replacement theory - this time as summarized by Curtis Yarvin, an internet troll turned MAGA “intellectual” who has influenced VP JD Vance’s worldview. In the full quote, Yarvin compares the U.S. to South Africa - Musk’s native country.
"A classified report by the National Intelligence Council found that even a large-scale assault on Iran launched by the United States would be unlikely to oust the Islamic republic’s entrenched military and clerical establishment..."
Interactions with large language models blur the line between what feels like human conversation and the more typical experience of using technology—which seems to be causing confusion even among users who should be experienced and sophisticated, writes Tech Policy Press fellow James Ball.
New: The White House is transforming the Iran strikes into a meme war. An "aesthetic of bloodlust" that gives Americans the empathy-free, Hollywood, video-game version of deadly combat www.washingtonpost.com/technology/2...
Join the Tech Policy Press newsletter. It comes out on Sunday. It is free! www.techpolicy.press/newsletter/
As the House Energy and Commerce Committee considers legislation aimed at protecting children online, it risks advancing bills that, while well-intentioned, may be neither effective nor in line with what some parents and teens actually want, Aliya Bhatia and Michal Luria write.
Former top DOGE official in his mid-twenties named head of Pentagon AI efforts.
"Kliger, in social media posts between October 2024 and January 2025, has voiced controversial views and reposted content from white supremacist Nick Fuentes and self-described misogynist Andrew Tate."
The Trump administration’s escalating campaign in Iran marks the beginning of America’s first war in the age of large language models. These events make clear that those who work on AI safety must confront the limits of so-called “alignment to human values,” writes Eryk Salvaggio.
Tehran changed irrevocably after 2009. Following a year of mass protests against the disputed presidential election that returned Mahmoud Ahmadinejad to power, CCTV cameras proliferated across the capital. Surveillance systems rapidly expanded into universities, schools, kindergartens, cafés, and restaurants. Business owners were permitted to operate only if they granted security forces access to their footage. Urban space was folded into an integrated architecture of monitoring and control.

That same year, without the knowledge of Iranian authorities, the Stuxnet worm infiltrated the Natanz nuclear facilities, marking the first confirmed instance of a cyberweapon causing physical destruction of critical infrastructure. It was a watershed moment: digital code had crossed decisively into kinetic effect.

On March 2, the Financial Times reported that Tehran's traffic cameras had been compromised for years by Israeli intelligence—by the same unit that had run Stuxnet. Detailed knowledge of the movements of Ali Khamenei reportedly enabled a targeted strike at his residence. If accurate, this represents a striking inversion: surveillance systems built for internal repression repurposed for external attack.
Azadeh Akbari: Professor of Critical Data & Surveillance Studies at the Center for Critical Computational Studies, Goethe University Frankfurt, and Founder and Director of the Surveillance in the Majority World Research Network (@azadehakbari.bsky.social):
The war in Iran and the wider Middle East is full-stack evidence that the contemporary battlefield is increasingly privatized. Whether in predictive targeting, situational awareness, information warfare, or related domains, many of these emerging warfighting capabilities are developed, operated, and supplied by the private sector. And yet, who holds private companies accountable when the very states meant to regulate them are also hiring them as subcontractors? Unlike traditional arms contractors, today's generation of venture-backed defense tech firms is embedded deeply and continuously in the battlefield in real time – integrated from the software layer upward.

Take AI, which now plays a central role in the war in Iran. Anthropic may have recently red-lined the Pentagon's use of its systems for autonomous weapons and the mass surveillance of Americans, but its technology has effectively empowered an illegal attack on the sovereign nation of Iran, ultimately leading to the overthrow and death of its Supreme Leader. And this is not the first time this year that the San Francisco-based company has been used in an operation that illegally overthrew a foreign leader: its intelligence analysis, planning, and decision-support tasks were reportedly part of Trump's arsenal in the Maduro overthrow.

The privatization of warfare extends beyond software and algorithms. If there are boots on the ground in Iran, troops may soon be using Meta + Anduril wearables to enhance soldier perception and decision-making on the streets of Tehran. These companies form, and will continue to form, the very exoskeleton of the modern military.
Brett Solomon (@brettsolomon.bsky.social), Senior Research Fellow, and Betsy Popken (@betsypopken.bsky.social), Executive Director, Human Rights Center at the University of California, Berkeley School of Law:
The spiraling conflict in the Middle East is the first large-scale, global conflict to test platform responses in an era when most of them have reduced their trust and safety teams and degraded their ability to fact-check or add context to war propaganda. While it is nearly impossible to quantify how those decisions have impacted the visibility and reach of false and misleading content about the war, there is a sense that, on at least some platforms, the guardrails are completely off. On X, for example, Iranian state-sponsored propaganda—including obvious instances of state-backed media outlets promoting AI-generated images alleging destruction of US facilities—is not only spreading without labels or community notes but is being served to some users in their “for you” feeds. In a chaotic breaking news environment, we can’t expect platforms to be able to respond to everything, but the very intentional decision to remove labels identifying state-sponsored media outlets means that audiences encountering this content are doing so without any contextual clues. Coupled with the ubiquity of AI-generated content across all platforms, the information environment feels demonstrably worse than it did during Russia’s full-scale invasion of Ukraine, when there was at least a sense that the platforms were trying.
Melanie Smith and Bret Schafer: Senior Directors of Information Operations, Institute for Strategic Dialogue (@isdglobal.bsky.social)
The first wave of American attacks during Operation Epic Fury saw the operational debut of LUCAS, a low-cost unmanned combat attack system, the first precise mass system fielded by the United States military. This long-range, one-way loitering munition was reverse-engineered from the Iranian Shahed-136, developed in 18 months, and integrated into CENTCOM in December 2025, just five months later. This pace was much faster than the Pentagon’s usual technology adoption timelines.
Lauren Kahn: Senior Research Analyst, Center for Security and Emerging Technology (CSET) at Georgetown University
The dispute over the use of Claude in autonomous weapons also lays bare the dangers of lethal targeting without sufficient human oversight. The laws of war require the military to distinguish between combatants and civilians, and refrain from attacks that cause excessive civilian harm. These determinations are often context specific and may require judgment that AI is ill-equipped to exercise. The Defense Department’s directive on autonomous weapons raises more questions than it answers about how the military addresses these risks. It requires senior Pentagon leaders to review whether autonomous weapons enable “appropriate levels of human judgment over the use of force.” But it’s unclear how this standard is satisfied when the weapon leaves no room for commanders or operators to override technical blind spots in life-and-death decisions. Congress should urgently impose safeguards to align autonomous weapons with the laws of war — and restrict the use of weapons that fall short.
Emile Ayoub and Amos Toh: Senior Counsels, Liberty and National Security Program, Brennan Center for Justice (@emileayoub.bsky.social and @amostoh.bsky.social):
In the coming days, I will be looking at the following issues: How accurate are the Iran target lists, especially after high-priority targets have been expended? Do second- and third-tier targets represent legitimate military objectives, and is due attention being paid to preventing collateral civilian harm? As the data analyzed by Claude becomes noisier and more susceptible to distortion (the AI slop problem), how is the model compensating for potentially lower accuracy or limited verifiability? What type of oversight is the US military exercising over AI-generated target lists? Given the unprecedented speed at which targets are being produced and then struck, are target verification procedures holding up? How does the Pentagon's legal review process for ensuring compliance with the laws of armed conflict interface with the model? And how are after-action reviews of strikes being conducted? Reporting indicates that AI models are also evaluating strikes after they have been carried out; is it appropriate for these tools to conduct self-assessments of lethal strikes?
Steven Feldstein: Senior Fellow, Democracy, Conflict, and Governance Program at the Carnegie Endowment for International Peace (@stevenfeldstein.bsky.social):
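One way to make Feldstein's oversight questions concrete is a minimal, entirely hypothetical sketch of a human-in-the-loop gate between machine-generated target nominations and any strike decision. Every name, field, and threshold below is invented for illustration; it describes no real system, only the structural requirement that a named human reviewer, not a model score, authorizes the use of force.

```python
# Hypothetical sketch only: all names, fields, and thresholds are invented.
# It illustrates one structural answer to the oversight question: a model
# may nominate targets, but only a named human reviewer may approve one.
from dataclasses import dataclass, field

@dataclass
class Nomination:
    target_id: str
    model_confidence: float                # model's self-reported score
    corroborating_sources: list = field(default_factory=list)

def needs_escalation(nom: Nomination, min_conf: float = 0.9,
                     min_sources: int = 2) -> bool:
    """Flag nominations lacking confidence or independent corroboration
    for senior review before they reach a reviewer's queue at all."""
    return (nom.model_confidence < min_conf
            or len(nom.corroborating_sources) < min_sources)

def approve(nom: Nomination, reviewer: str | None) -> bool:
    """No named human reviewer, no approval, regardless of model score.
    The legal analysis (distinction, proportionality) happens here,
    outside the model."""
    if reviewer is None:
        return False
    if needs_escalation(nom):
        return False  # route to senior review instead of approving
    return True       # the named reviewer is accountable for this decision

# Even a high-confidence nomination cannot be auto-approved:
nom = Nomination("T-041", model_confidence=0.97,
                 corroborating_sources=["sigint", "imagery"])
assert approve(nom, reviewer=None) is False
```

The point of the sketch is only that the approval path is auditable: each of Feldstein's questions maps onto a checkpoint that can be logged and examined in an after-action review.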
We are in dangerous territory, where large language models (LLMs) and generative AI are being normalized as a valid technology to be instrumented within both AI decision-support systems (DSS) and lethal autonomous weapons systems (LAWS) for targeting purposes. Yet current framing of Anthropic's and OpenAI's negotiations with the US Department of War risks overindexing on myopic interpretations of human oversight, or on a particular company's so-called 'red lines', papering over what should be the real target of our scrutiny: that generative AI algorithms are a flawed and inaccurate technology that fabricates and "hallucinates" outputs, often with only 50% accuracy, and is unlikely to solve tasks outside its training data distribution. Generative AI's inability to handle the novel scenarios that arise from the fog of war thus raises serious questions about whether it can succeed in military settings. Furthermore, these "hallucinations" are an inherent property of these models given their probabilistic nature, with model providers themselves stating that these issues will persist.
Heidy Khlaaf: Chief AI Scientist, AI Now Institute (@heidykhlaaf.bsky.social):
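Khlaaf's point about out-of-distribution failure can be seen in miniature. The toy model below is a deliberately trivial bigram sampler, not a stand-in for any real LLM; it only illustrates the structural problem she names: a probabilistic generator always emits something fluent-looking, and has no built-in signal that a prompt has left its training distribution.

```python
# Toy illustration, not any real model: a bigram sampler trained on a
# tiny corpus. It always produces locally fluent output, and it degrades
# silently, rather than abstaining, when a prompt is out of distribution.
import random
from collections import defaultdict

corpus = ("the report was verified . analysts verified the report . "
          "the strike was reported . analysts reviewed the strike .").split()

transitions = defaultdict(list)          # bigram transition table
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    word, out = start, [start]
    for _ in range(length):
        options = transitions.get(word)
        if not options:                  # out of distribution: the model
            options = list(transitions)  # has no evidence, but guesses anyway
        word = random.choice(options)
        out.append(word)
    return " ".join(out)

print(generate("the"))      # in distribution: locally plausible
print(generate("convoy"))   # out of distribution: fluent, confident, baseless
```

Scaled-up models fail less visibly, but the mechanism is the one Khlaaf describes: sampling from a learned distribution has no notion of truth, which is why hallucination is a property of the method rather than a bug awaiting a patch.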
The expanding war in Iran brought to the fore questions about the role of technology in armed conflict, including the controversial use of new artificial intelligence technologies. Tech Policy Press invited perspectives from experts on what they are watching for as the situation unfolds.
For politically active billionaires and their allies in Washington, social media is becoming an instrument of political power, writes Paddy Leerssen. Broadly, a new regulatory paradigm for content moderation is emerging: the EU writes laws, the US buys shares.
In this Tech Policy Press piece, I criticize how framings of Anthropic's & OpenAI's negotiations with the US Department of War overindex on myopic interpretations of human oversight, papering over what should be the real target of our scrutiny: that generative AI algorithms are a flawed and inaccurate technology.
Anthropic has much more in common with the Department of War than we have differences. We both are committed to advancing US national security and defending the American people, and agree on the urgency of applying AI across the government. All our future decisions will flow from that shared premise.
Amodei: "Anthropic will provide our models to the Department of War and national security community, at nominal cost and with continuing support from our engineers, for as long as is necessary to make that transition, and for as long as we are permitted to do so." www.anthropic.com/news/where-s...
Could also be titled "Wednesday"