
Jeremy Malcolm

@jere.my

Trust and safety in a human rights framework. Founder, @AskLex.ai. Chair, @c4osl.org. Head of Trust & Safety, @liberato.io. jere.my

363 Followers · 503 Following · 474 Posts · Joined 26.07.2023

Latest posts by Jeremy Malcolm @jere.my

"Too often, online policy debates treat safety and freedom as a zero-sum choice: more safety is assumed to require less liberty, and more liberty is assumed to tolerate harm." https://c4osl.org/february-2026-newsletter #Newsletter #Freedom #Safety #Harm


06.03.2026 18:01 👍 2 🔁 2 💬 0 📌 0

Someone on Twitter, or here? If the former, and it's who I think you're talking about, they are not worth taking seriously. Yes, the allegations they're making are false, but with the Nazi links that you've pointed out, they don't have any credibility anyway.

03.03.2026 08:17 👍 0 🔁 0 💬 0 📌 0

Criminal penalties are not the right instrument for addressing content that is offensive but causes no direct harm. Porn tends to focus on less offensive scenarios like adult step-siblings, but mainstream works like Game of Thrones could also be chilled, if not criminalized, should this come into effect.

25.02.2026 23:29 👍 0 🔁 0 💬 0 📌 0

The amendments have been tabled but not yet passed.

25.02.2026 23:24 👍 0 🔁 0 💬 1 📌 0
Against 'chat control': we can't eliminate child abuse by eliminating privacy. Banning online anonymity tools like Tor won't stop crime. It will only drive people underground and normalize government control over the internet.

The UK is about to criminalize incest-themed porn. There is zero doubt that this will send more people to the dark web. Governments have to accept that their quixotic attempt to be the final arbiters of Internet morality is doing far more harm than good. www.theguardian.com/commentisfre...

25.02.2026 12:49 👍 3 🔁 4 💬 2 📌 2

I'll be at @rightscon.org this May with an amazing panel of professionals representing human rights activists, artists, the LGBTQ+ community, and the trust and safety profession, to talk about how we can better draw the line between personal expression such as novels, and lived abuse like deepfakes.

18.02.2026 02:09 👍 4 🔁 0 💬 0 📌 0

When criminal law punishes fiction because it is offensive, rather than because it harms, we've crossed an important line. The decisive question isn't whether content is offensive. It's whether it harms a real, non-consenting victim. That's the line child protection law must defend.

18.02.2026 01:55 👍 5 🔁 0 💬 1 📌 0
Deepfakes, Fiction, and the Future of CSAM Law - Center for Online Safety and Liberty. CSAM is defined by victimhood. Remove the victim, and the concept becomes hollow — and its enforcement unjust.

"But loosening the definition of CSAM to include victimless content is neither necessary nor sufficient to address this problem." I address three better solutions that target the problem directly: education, non-consensual intimate imagery (NCII) laws, and data protection laws.

18.02.2026 01:52 👍 2 🔁 0 💬 1 📌 0

Deepfakes of real children are different. They are a form of image-based sexual abuse. They involve non-consensual use of a real child's likeness, which causes real harm and deserves a strong response.

18.02.2026 01:50 👍 4 🔁 0 💬 1 📌 1

CSAM (Child Sexual Abuse Material) was coined to replace "child pornography" to emphasize one thing: real victims. Its moral and legal force comes from the fact that someone was harmed. Expanding the definition of CSAM to include victimless content doesn't strengthen the law.
It dilutes it.

18.02.2026 01:46 👍 4 🔁 0 💬 2 📌 0

An Australian writer was just convicted of child abuse for a novel about fictional roleplay that would be legal in real life.

A week earlier, UNICEF called for expanding CSAM definitions to include AI content "even without an identifiable victim."

These are not unrelated developments.

18.02.2026 01:29 👍 6 🔁 7 💬 1 📌 1

To make things worse, Australian authorities do not even track the difference between real child abuse cases and obscenity prosecutions like this one. As a result, nobody knows how many real child abuse cases Australia has.

11.02.2026 22:33 👍 3 🔁 4 💬 0 📌 0

This is an appalling decision that trivialises the experiences of actual victims of child abuse. Mastrosa is in no way a #FemaleAbuser but a victim of an out-of-control legal system that has lost sight of its priorities.

11.02.2026 05:12 👍 1 🔁 0 💬 0 📌 0

I’m OK with this.

09.02.2026 04:37 👍 2 🔁 0 💬 0 📌 0
Beyond the Filter: Tech-Facilitated Gender-Based Violence - Center for Online Safety and Liberty Jeremy and Brandy talk to Sofia Bonilla about tech-facilitated gender-based violence or TFGBV, including the recent Grok deepfakes scandal.

"Tech-facilitated gender-based violence is not a new harm — it's old gender violence reproduced and amplified in digital spaces, without geographical limits." https://c4osl.org/beyond-the-filter-tech-facilitated-gender-based-violence/ #Podcast #DigitalSpaces #OnlineSafety

02.02.2026 20:01 👍 1 🔁 2 💬 0 📌 0

Another tactic: gratuitously highlighting the worst of the worst, to trigger the reader's disgust reflex so fast that they overlook overreach. Nobody speaks up, due to stigma.

01.02.2026 09:44 👍 5 🔁 0 💬 0 📌 1

This does not surprise me in the slightest. I presume you know the history of Meta's internal analyses of reports to NCMEC?

It's almost as if they benefit from publishing really huge mediatastic numbers that somehow misrepresent the reality and extent of the challenges.

alecmuffett.com/article/15902

01.02.2026 09:41 👍 6 🔁 3 💬 1 📌 0
Drawing the Line Watchlist 2025 – Drawing the Line

(A side-note: I don't endorse the use of the term AI-CSAM in the linked article. CSAM refers to, and should only ever be used to refer to, material with real and identifiable victims. To do otherwise diminishes the gravity of CSA and diverts resources away from fighting it.)

01.02.2026 09:06 👍 4 🔁 0 💬 1 📌 0
Letter to NCMEC about AI-CSAM Report Statistics Today, Bloomberg issued a jaw-dropping report about the hundreds of thousands of CyberTipline reports with a generative AI component that Amazon filed to the National Center for Missing and Exploited ...

There's been a lot of press about an explosion of AI generated content that resembles CSAM. But the thing is, none of that reporting was true. It was based on the misclassification of a large volume of reports of real CSAM. cyberlaw.stanford.edu/letter-to-nc...

01.02.2026 09:00 👍 11 🔁 6 💬 1 📌 0

I will be at @rightscon.org this May, leading a workshop on the Drawing the Line Watchlist which exposes how the line between abuse material and art and fiction is being deliberately blurred worldwide, and the costs that this exacts on creators, fans, and survivors. More details soon.

19.01.2026 04:30 👍 2 🔁 0 💬 0 📌 0

Australia, which hides these statistics by not tracking them, needs to get the same memo. Instead, authorities are bragging about supposed child abuse arrests and seizures that are actually just some guy with anime on his phone. www.abf.gov.au/newsroom-sub...

19.01.2026 04:22 👍 1 🔁 1 💬 1 📌 0

The UK's increasing focus on diverting child protection resources into victimless obscenity crimes (which now constitute 40% of all image-based prosecutions) has accompanied a 60% drop in real CSAM prosecutions since 2017. drawingthelineprinciples.org/watchlist

19.01.2026 04:19 👍 2 🔁 1 💬 1 📌 0

Irish authorities, at least, may be finally stepping away from wasting sex crime prosecution resources on anime fans.

19.01.2026 04:17 👍 12 🔁 5 💬 1 📌 0
False Positives, Real Harm: When Child Safety Systems Get It Wrong - Center for Online Safety and Liberty A growing number of innocent Internet users are being falsely accused of child exploitation by inaccurate AI systems.

New from me: "Shortcomings in online child-safety reporting systems cut both ways. While it is unacceptable when real CSAM remains online without being taken down, falsely accusing innocent people of serious crimes isn't a trade-off that we should have to accept." c4osl.org/false-positi...

11.01.2026 22:54 👍 2 🔁 0 💬 0 📌 0
Drawing the line: When child safety laws lose sight of real children Newly released data reveal how a no-compromise approach to AI-generated and other fictional sexual content depicting children has diverted resources away from prosecuting real child sexual abuse mater...

policyreview.info/articles/new... An interesting piece from @jere.my (who I don't always agree with but he's right to be concerned about censorship).

It seems there are multiple issues here.

1) A lack of transparency from governments and private actors.

2) Sensationalistic framing practices.

04.01.2026 08:49 👍 3 🔁 4 💬 1 📌 0

In my legal practice, I'm encountering an increasing volume of questions about false positives in online child-safety reporting systems and the legal risks they create for adults engaging in lawful conduct. I've been sounding the alarm about this for years, but it's only getting worse, not better.

29.12.2025 18:18 👍 10 🔁 4 💬 0 📌 1
Beyond the Filter: Drawing the Line Watchlist - Center for Online Safety and Liberty. In this special edition of Beyond the Filter, hosts Brandy Brightman and Jeremy Malcolm present the global launch of the Drawing the […]

"Governments are redirecting resources from protecting children to policing expression." — Jeremy Malcolm
Listen to the Watchlist episode:
-> https://c4osl.org/beyond-the-filter-drawing-the-line-watchlist/ #BeyondTheFilter #DrawingTheLine #FreeExpression

19.12.2025 18:01 👍 3 🔁 3 💬 0 📌 0

From last week's Drawing the Line webinar, arts advocate Emma Shapiro explains how child safety is being used as a pretext to censor visual arts online. Continue watching: https://c4osl.org/drawing-the-line-webinar #DrawingTheLine #ChildSafety #FreeExpression

20.12.2025 20:02 👍 1 🔁 2 💬 0 📌 0

"From a linguistic lens, there are so few pieces of language that are in and of themselves harmful. It all depends on context." — Zora Rush
Hear more in this week's Beyond the Filter:
-> https://c4osl.org/beyond-the-filter-drawing-the-line-watchlist/

18.12.2025 18:00 👍 2 🔁 3 💬 0 📌 0

From last week's launch webinar for the Drawing the Line Watchlist 2025, when I was asked a question about the Australian police prosecuting a woman for writing an 18+ fetish novel. Watch more at c4osl.org/drawing-the-line-webinar.

19.12.2025 06:17 👍 2 🔁 1 💬 0 📌 0