Experts from industry and civil society will explore:
🔹 How companies can safeguard sensitive health data
🔹 Legal & reputational risks around data use
🔹 Practical tools for HR, compliance & leadership
📅 Mar. 10 | 1 PM ET | Virtual
Register:
Data Privacy Webinar.
EVENT: In the wake of Dobbs, everyday digital data can be used as evidence in reproductive health investigations. How should companies respond? Join us March 10 at 1 PM ET for a discussion on health privacy, corporate responsibility, and the risks of commercial data. cdt.org/event/when-c...
This kind of pricing “tailor-fits” prices to a person’s perceived willingness to pay. Like a custom suit, the price is fitted to the individual, and the targeted consumer ends up paying more. As George Slover explained, bespoke pricing is unfair, discriminatory, and turns Adam Smith’s “invisible hand” against consumers.
CDT’s George Slover testified before a joint hearing of the PA Assembly’s House Majority Policy Committee & Senate Democratic Policy Committee on dynamic pricing. He spoke about “bespoke pricing” — using consumers’ personal data profiles to secretly charge them more.
CDT research shows that LGBTQ+ students and their parents oppose these policies. We’ll be watching closely to see how states and ED respond to this litigation.
CDT’s recently published blog examines how student privacy policies for gender-expansive students are evolving nationwide, and the mounting pressure within the federal government, especially at ED, to require forced outing in K-12 schools.
California’s SAFETY Act, the law protecting gender-expansive student privacy & currently the subject of a U.S. Dept of Education FERPA enforcement action, was paused by a Supreme Court ruling this week.
CNBC covers CDT’s coalition letter urging Congress to investigate the Pentagon’s decision to designate Anthropic a supply chain risk — warning the move sets a “dangerous precedent” and could undermine U.S. competitiveness in AI.
In a new op-ed, CDT’s Aliya Bhatia & Michal Luria explain why many families are skeptical of mandates like selfie-based age checks & platform restrictions. Parents & teens say they want flexible tools & stronger privacy protections.
In a new letter, we raise concerns about bills like KOSA, the Safe Messaging for Kids Act, Sammy’s Law, & the App Store Accountability Act. These proposals could expand access to children’s data, create security risks, & chill the use of essential online communications services.
The House Energy & Commerce Committee is holding a markup on kids’ online safety bills today, but privacy protections must be both strong and constitutional to actually protect children.
Congress, not contract disputes, should set the rules for AI and national security. Read the letter:
The signers warn the move could undermine U.S. innovation, raise serious First Amendment concerns, and set a dangerous precedent for government retaliation against companies over AI safety safeguards.
Using this authority against a U.S. company would be a major departure from its purpose — protecting against foreign adversaries, not American firms.
A bipartisan group of military, national security, and AI leaders is urging Congress to scrutinize the Pentagon’s threatened designation of Anthropic as a supply chain risk. cdt.org/insights/def...
Today @cdt.org and allies shared a letter from dozens of military, national security and AI leaders raising deep concerns about the Pentagon's threat to designate Anthropic a supply chain risk: cdt.org/insights/def...
“This data broker industry is largely unregulated at the federal level, and the Department of Defense is entering into contracts to purchase such data. Apparently, it wants to reserve Anthropic’s AI to analyze that data and draw intelligence from it, even when the data pertains to Americans.”
CDT’s Nojeim: “There is a whole industry of data brokers who purchase and sell location information about Americans.”
“Greg Nojeim, the director of CDT’s Project on Security and Surveillance, told Gizmodo that the DOD is purchasing commercially available information on Americans, and it’s all technically legal right now.”
“Jain [added] that, ‘transparency around what the models have done to mitigate those risks … makes sense to address.’”
CDT’s Jain: “The federal government obviously has an important and legitimate interest in mitigating those kinds of risks.”
“Samir Jain, vice president of policy at CDT, suggested that safety standards could be focused specifically on national security risks, such as models developing chemical or biological weapons.”
CDT’s Bogen: “When people are engaging deeply over days or weeks, I don't think we know anywhere near enough about the prevalence of unfortunate events where people are drawn into acute mental health crises.”
CDT’s Bogen: “As AI models implement memory, they initially appear to be more helpful. But the longer conversations tend to go, the more fragile the guardrails seem to become.”
“Miranda Bogen, the director of the CDT’s AI Governance Lab, says that a potential factor in this tragedy was Google’s decision in August 2025 to make Gemini’s memory automatic and persistent.”
“According to CDT, 1 in 5 teens reported either having a romantic relationship with an AI companion or knowing someone who has. Teens expressed that these [types of] conversations [are] more satisfying than interactions with real people.”
Encrypted Client Hello is now RFC 9849
This RFC defines an extension to Transport Layer Security that improves privacy for web users. Huge team effort and a win for the internet at large. Now to get deployment up...
Some words I wrote about this for @cdt.org: cdt.org/insights/enc...
🔹 What to watch as similar federal efforts move forward
CDT’s analysis of the White House’s Executive Order on Preventing Woke AI looks at what these requirements could mean for government tech procurement, what the First Amendment allows, and how developers may respond. cdt.org/insights/wha...
🔹 Why “anti-woke AI” is a technical mirage
We break down the significant technical hurdles that make truly “ideologically neutral” AI systems essentially impossible in practice. cdt.org/insights/ant...