
Justin Hendrix

@justinhendrix

Concerned with tech, media and democracy. CEO & Editor at Tech Policy Press. Research & Adjunct Professor at NYU Tandon School of Engineering. Opinions mine.

135,382
Followers
1,493
Following
5,640
Posts
28.04.2023
Joined

Latest posts by Justin Hendrix @justinhendrix

Documents Reveal a Web of Financial Ties Between Trump Officials and the Industries They Help Regulate ProPublica is releasing a trove of disclosure records that detail the finances of more than 1,500 Trump appointees, including former lobbyists, industry executives and at least a dozen officials who d...

A trove of nearly 3,200 disclosure records, which ProPublica made public this week, reveals a web of financial ties between Trump officials and the industries they help regulate.

07.03.2026 17:00 👍 647 🔁 356 💬 22 📌 37
U.S. was only country in a worldwide survey to say most fellow citizens are bad people A new Pew survey shows that other countries’ citizens tend to look more favorably on their neighbors.

A Pew survey found 53% of Americans rate fellow citizens' morality as "bad," the opposite of most of the 24 other countries polled. Canada led positivity, with 92% calling fellow Canadians good.

07.03.2026 16:21 👍 52 🔁 20 💬 8 📌 1

An LLM does not need to pull a trigger or automate a missile to serve the cause of war. It can work to make obscene violence feel justified and rational — to give you the illusion that you have “thought the matter through.” 🧵

06.03.2026 15:02 👍 55 🔁 22 💬 3 📌 1
Musk shares Curtis Yarvin great replacement theory

Here’s Elon Musk again spreading the white nationalist great replacement theory - this time as summarized by Curtis Yarvin, an internet troll turned MAGA “intellectual” who has influenced VP JD Vance’s worldview. In the full quote, Yarvin compares the U.S. to South Africa - Musk’s native country.

07.03.2026 01:11 👍 241 🔁 74 💬 26 📌 6
Intel report warns large-scale war ‘unlikely’ to oust Iran’s regime A classified U.S. report doubts that Iran’s opposition would take power following either a short or extended U.S. military campaign.

"A classified report by the National Intelligence Council found that even a large-scale assault on Iran launched by the United States would be unlikely to oust the Islamic republic’s entrenched military and clerical establishment..."

07.03.2026 13:02 👍 51 🔁 21 💬 1 📌 0
Anthropomorphism Is Breaking Our Ability to Judge AI Tech Policy Press fellow James Ball asks, how should we interact with a technology designed to ‘speak’ with us on what appear to be human terms?

Interactions with large language models blur the lines between what feels like human conversation and the more typical experience of using technology—which seems to be causing confusion even among what should be experienced and sophisticated users, writes Tech Policy Press fellow James Ball.

07.03.2026 09:54 👍 44 🔁 15 💬 2 📌 1
The White House is transforming the Iran strikes into a meme war The White House is using memes that make light of violent combat in Iran, mixing footage of real missile strikes with clips from action films and video games.

New: The White House is transforming the Iran strikes into a meme war. An "aesthetic of bloodlust" that gives Americans the empathy-free, Hollywood, video-game version of deadly combat www.washingtonpost.com/technology/2...

07.03.2026 00:09 👍 128 🔁 51 💬 11 📌 12
Newsletter Tech Policy Press is a nonprofit media and community venture intended to provoke new ideas, debate and discussion at the intersection of technology and democracy. We publish opinion and analysis.

Join the Tech Policy Press newsletter. It comes out on Sunday. It is free! www.techpolicy.press/newsletter/

07.03.2026 00:35 👍 10 🔁 1 💬 0 📌 1
Congress’ Child Safety Bills Sound Good. Families Suggest They Won't Work. Lawmakers risk advancing bills that may be neither effective nor in line with what some parents and teens actually want, Michal Luria and Aliya Bhatia write.

As the House Energy and Commerce Committee considers legislation aimed at protecting children online, it risks advancing bills that, while well-intentioned, may be neither effective nor in line with what some parents and teens actually want, Aliya Bhatia and Michal Luria write.

06.03.2026 20:33 👍 7 🔁 4 💬 1 📌 0
DOGE Aide Who Helped Gut CFPB Was Warned About Potential Conflicts of Interest Before he helped fire most Consumer Financial Protection Bureau staffers, DOGE’s Gavin Kliger was warned about his investments and advised to not take any actions that could benefit him personally, ac...

ProPublica last year: www.propublica.org/article/cfpb...

06.03.2026 17:43 👍 38 🔁 13 💬 1 📌 1
Pentagon taps former DOGE official to lead its AI efforts The Pentagon on Friday named as Chief Data Officer a computer scientist who aided billionaire Elon Musk's efforts to overhaul the government last year and who has boosted white supremacists and miso...

Former top DOGE official in his mid-twenties named head of Pentagon AI efforts.

"Kliger, in social media posts between October 2024 and January 2025, has voiced controversial views and reposted content from white supremacist Nick Fuentes and self-described misogynist Andrew Tate."

06.03.2026 17:41 👍 323 🔁 201 💬 23 📌 34
America's First War in Age of LLMs Exposes Myth of AI Alignment The military is turning to tools that relieve the burden of conscience and function like a moral sedative, writes Eryk Salvaggio.

The Trump administration’s escalating campaign in Iran marks the beginning of America’s first war in the age of large language models. These events make clear that those who work on AI safety must confront the limits of so-called “alignment to human values,” writes Eryk Salvaggio.

06.03.2026 14:29 👍 50 🔁 28 💬 3 📌 6

hahahaha

06.03.2026 13:38 👍 9 🔁 0 💬 0 📌 0
Tehran changed irrevocably after 2009. Following a year of mass protests against the disputed presidential election that returned Mahmoud Ahmadinejad to power, CCTV cameras proliferated across the capital. Surveillance systems rapidly expanded into universities, schools, kindergartens, cafés, and restaurants. Business owners were permitted to operate only if they granted security forces access to their footage. Urban space was folded into an integrated architecture of monitoring and control.

That same year, without the knowledge of Iranian authorities, the Stuxnet worm infiltrated the Natanz nuclear facilities, marking the first confirmed instance of a cyberweapon causing physical destruction of critical infrastructure. It was a watershed moment: digital code had crossed decisively into kinetic effect.

On March 2, the Financial Times reported that Tehran’s traffic cameras had been compromised for years by Israeli intelligence—by the same unit that had run Stuxnet. Detailed knowledge of the movements of Ali Khamenei reportedly enabled a targeted strike at his residence. If accurate, this represents a striking inversion: surveillance systems built for internal repression repurposed for external attack.

Azadeh Akbari (@azadehakbari.bsky.social):
Professor of Critical Data & Surveillance Studies at the Center for Critical Computational Studies, Goethe University Frankfurt and Founder and Director of the Surveillance in the Majority World Research Network

06.03.2026 12:56 👍 7 🔁 0 💬 0 📌 0
The war in Iran and the wider Middle East is full-stack evidence that the contemporary battlefield is increasingly privatized. Whether in predictive targeting, situational awareness, information warfare, or related domains, many of these emerging warfighting capabilities are increasingly developed, operated, and supplied by the private sector.

And yet, who holds private companies accountable when the very states meant to regulate them are also hiring them as subcontractors? Today’s generation of venture-backed defense tech firms is, unlike traditional arms contractors, embedded deeply and continuously in the battlefield in real time – integrated from the software layer upward.

Take the role of AI, which now plays a central role in the war in Iran. Anthropic may have recently red-lined the Pentagon’s use of its systems for autonomous weapons and the mass surveillance of Americans, but its technology has effectively empowered an illegal attack on the sovereign nation of Iran, ultimately leading to the overthrow and death of its Supreme Leader.

And it’s not the first time this year that this San Francisco-based company has been used in an operation that illegally overthrew a foreign leader: its intelligence analysis, planning, and decision-support tasks were reportedly part of Trump’s arsenal in the Maduro overthrow.

The privatization of warfare extends beyond software or the algorithm. If there are boots on the ground in Iran, troops may soon be using Meta + Anduril wearables to enhance soldier perception and decision-making on the streets of Tehran. These companies are building, and will continue to build, the very exoskeletons of the modern military.

Brett Solomon (@brettsolomon.bsky.social)
Senior Research Fellow, Human Rights Center at the University of California, Berkeley School of Law

Betsy Popken (@betsypopken.bsky.social)
Executive Director, Human Rights Center, at the University of California, Berkeley School of Law

06.03.2026 12:55 👍 10 🔁 0 💬 1 📌 1
The spiraling conflict in the Middle East is the first large-scale, global conflict to test platform responses in an era when most of them have reduced their trust and safety teams and degraded their ability to fact-check or add context to war propaganda. While it is nearly impossible to quantify how those decisions have impacted the visibility and reach of false and misleading content about the war, there is a sense that, on at least some platforms, the guardrails are completely off.

On X, for example, Iranian state-sponsored propaganda—including obvious instances of state-backed media outlets promoting AI-generated images alleging destruction of US facilities—is not only spreading without labels or community notes but is being served to some users in their “for you” feeds. In a chaotic breaking news environment, we can’t expect platforms to be able to respond to everything, but the very intentional decision to remove labels identifying state-sponsored media outlets means that audiences encountering this content are doing so without any contextual clues. Coupled with the ubiquity of AI-generated content across all platforms, the information environment feels demonstrably worse than it did during Russia’s full-scale invasion of Ukraine, when there was at least a sense that the platforms were trying.

Melanie Smith and Bret Schafer: Senior Directors of Information Operations, Institute for Strategic Dialogue (@isdglobal.bsky.social)

06.03.2026 12:54 👍 6 🔁 0 💬 1 📌 0
The first wave of American attacks during Operation Epic Fury saw the operational debut of LUCAS, a low-cost unmanned combat attack system, the first precise mass system fielded by the United States military. This long-range, one-way loitering munition was reverse-engineered from the Iranian Shahed-136, developed in 18 months, and integrated into CENTCOM in December 2025, just five months later. This pace was much faster than the Pentagon’s usual technology adoption timelines.

Lauren Kahn: Senior Research Analyst, Center for Security and Emerging Technology (CSET) at Georgetown University

06.03.2026 12:52 👍 6 🔁 0 💬 1 📌 0
The dispute over the use of Claude in autonomous weapons also lays bare the dangers of lethal targeting without sufficient human oversight. The laws of war require the military to distinguish between combatants and civilians, and refrain from attacks that cause excessive civilian harm. These determinations are often context specific and may require judgment that AI is ill-equipped to exercise.

The Defense Department’s directive on autonomous weapons raises more questions than it answers about how the military addresses these risks. It requires senior Pentagon leaders to review whether autonomous weapons enable “appropriate levels of human judgment over the use of force.” But it’s unclear how this standard is satisfied when the weapon leaves no room for commanders or operators to override technical blind spots in life-and-death decisions. Congress should urgently impose safeguards to align autonomous weapons with the laws of war — and restrict the use of weapons that fall short.

Emile Ayoub and Amos Toh: Senior Counsels, Liberty and National Security Program, Brennan Center for Justice ( @emileayoub.bsky.social and @amostoh.bsky.social):

06.03.2026 12:51 👍 11 🔁 4 💬 1 📌 0
In the coming days, I will be looking at the following issues:

How accurate are the Iran target lists, especially after high-priority targets have been expended? Do second- and third-tier targets represent legitimate military objectives, and is due attention being paid to preventing collateral civilian harm? As the data analyzed by Claude becomes noisier and more susceptible to distortion (the AI slop problem), how is the model compensating for potentially lower accuracy or limited verifiability?
What type of oversight is the US military exercising over AI-generated target lists? Given the unprecedented speed at which targets are being produced and then struck, are target verification procedures holding up sufficiently? How does the Pentagon’s legal review process to ensure compliance with the laws of armed conflict interface with the model?
When it comes to after-action reviews of strikes, how are these being conducted? Reporting indicates that AI models are also evaluating strikes after they have been carried out; is it appropriate for these tools to conduct self-assessments regarding lethal strikes?

Steven Feldstein: Senior Fellow, Democracy, Conflict, and Governance Program at the Carnegie Endowment for International Peace (@stevenfeldstein.bsky.social):

06.03.2026 12:50 👍 7 🔁 1 💬 1 📌 0
We are in dangerous territory, where Large Language Models (LLMs) and generative AI are being normalized as a valid technology to be instrumented within both AI Decision Support Systems (DSS) and Lethal Autonomous Weapons Systems (LAWS) for targeting purposes. Yet current framing regarding Anthropic’s and OpenAI’s negotiations with the US’s Department of War instead risks overindexing on myopic interpretations of human oversight, or a particular company's so-called 'red lines', papering over what should be the real target of our scrutiny: that generative AI algorithms are a flawed and inaccurate technology that fabricates and "hallucinates" outputs, at times with accuracy rates as low as 50%, and is unlikely to be able to solve tasks outside of its data distribution and training data sets.

Generative AI’s inability to handle novel scenarios that would arise from the fog of war thus raises serious questions about whether these systems can be successful in military settings. Furthermore, these “hallucinations” are an inherent property of these models given their probabilistic nature, with model providers stating that these issues are likely to persist.

Heidy Khlaaf: Chief AI Scientist, AI Now Institute (@heidykhlaaf.bsky.social):

06.03.2026 12:48 👍 9 🔁 2 💬 1 📌 0
Key Questions on the Role of Technology in the Expanding Middle East War Tech Policy Press asked experts working at the intersection of technology policy, security, and international affairs to share what they are watching.

The expanding war in Iran brought to the fore questions about the role of technology in armed conflict, including the controversial use of new artificial intelligence technologies. Tech Policy Press invited perspectives from experts on what they are watching for as the situation unfolds.

06.03.2026 12:46 👍 51 🔁 27 💬 6 📌 3
Shareholder Control and the New Politics of Platform Regulation The TikTok deal in the US reveals a new era of tech oligarchy. Paddy Leerssen unpacks why platform ownership matters and how it can be held accountable.

For politically active billionaires and their allies in Washington, social media is becoming an instrument of political power, writes Paddy Leerssen. Broadly, a new regulatory paradigm for content moderation is emerging: the EU writes laws, the US buys shares.

06.03.2026 12:46 👍 22 🔁 18 💬 1 📌 2

In this Tech Policy piece, I criticize how framings of Anthropic’s & OpenAI’s negotiations with the US’s DoW overindex on myopic interpretations of human oversight, papering over what should be the real target of our scrutiny: that generative AI algorithms are a flawed and inaccurate technology.

06.03.2026 12:17 👍 38 🔁 16 💬 3 📌 0
A Timeline of the Anthropic-Pentagon Dispute The dispute raises a variety of policy, legal, and ethical questions, and its outcome could set an important precedent.

Updated this timeline with recent developments.

06.03.2026 05:08 👍 37 🔁 14 💬 2 📌 3

WTF

06.03.2026 05:06 👍 2 🔁 0 💬 1 📌 0

Get in touch!

06.03.2026 05:05 👍 2 🔁 0 💬 0 📌 0
Anthropic has much more in common with the Department of War than we have differences. We both are committed to advancing US national security and defending the American people, and agree on the urgency of applying AI across the government. All our future decisions will flow from that shared premise.

06.03.2026 03:23 👍 14 🔁 4 💬 0 📌 2
Where things stand with the Department of War A statement from Dario Amodei

Amodei: "Anthropic will provide our models to the Department of War and national security community, at nominal cost and with continuing support from our engineers, for as long as is necessary to make that transition, and for as long as we are permitted to do so." www.anthropic.com/news/where-s...

06.03.2026 03:22 👍 23 🔁 10 💬 4 📌 7

Could also be titled "Wednesday"

06.03.2026 03:11 👍 7 🔁 0 💬 0 📌 0

bsky.app/profile/rgoo...

06.03.2026 02:51 👍 14 🔁 3 💬 1 📌 1