Maggie Harrison Dupré

@mharrisondupre

Award-winning journalist at Futurism covering AI and its impacts on industries, media/information, and people. Send tips by email to: maggie@futurism.com or Signal: mhd.39

6,723 Followers · 1,518 Following · 751 Posts · Joined 12.06.2023

Latest posts by Maggie Harrison Dupré @mharrisondupre

Hahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahaha

07.03.2026 03:11 👍 37 🔁 1 💬 0 📌 0

Their power....

07.03.2026 02:13 👍 3 🔁 0 💬 0 📌 0
Bodycam video contradicts ICE claims in fatal shooting of U.S. citizen Ruben Ray Martinez in Texas
Video of last year's fatal shooting of Ruben Ray Martinez obtained by CBS News appears to contradict claims that Martinez was shot by an ICE agent because he "accelerated" and "intentionally ran over"...

DHS claimed "an ICE agent had fired 'defensive shots' into Martinez's vehicle after Martinez 'intentionally ran over' another agent."

But body cam video now "shows that Martinez's vehicle, a blue Ford Fusion, was stationary or going at a very low rate of speed when he was fatally shot."

07.03.2026 00:59 👍 3899 🔁 1817 💬 71 📌 108
A tuxedo cat named westley happily perched atop my shoulders

How your email actually finds me

07.03.2026 01:13 👍 85 🔁 2 💬 3 📌 0
Post image

What does this imply for those of us who are not Christian, and who labor under the assumption that we're protected by the First Amendment?

05.03.2026 20:33 👍 5739 🔁 1326 💬 452 📌 333
Fired DHS secretary Kristi Noem wearing a $60k Rolex watch while doing a media stunt in front of prisoners at El Salvador's CECOT prison

And for what

05.03.2026 19:00 👍 12 🔁 0 💬 1 📌 0
Google's AI Sent an Armed Man to Steal a Robot Body for It to Inhabit, Then Encouraged Him to Kill Himself, Lawsuit Alleges
A wrongful death suit against Google alleges that the tech giant's AI urged a Florida man to arm himself and steal a robot body for it.

On the new and disturbing wrongful death lawsuit against Google, which alleges that Gemini directed a 36-yo man -- who reportedly had no history of mental illness -- to commit violence against others before encouraging him to take his own life:

futurism.com/artificial-i...

04.03.2026 17:05 👍 36 🔁 16 💬 3 📌 3
It’s likely that we’ll learn more as OpenAI gears up to roll out the feature, and we could see it being especially helpful for users with a diagnosed mental illness who know that intensive AI use could stand to intersect in destructive ways with their mental health. Futurism has reported on several cases of ChatGPT users who successfully managed a mental illness for several years before falling into a ChatGPT-tied crisis. In multiple cases we’ve reviewed, in addition to reinforcing scientific or spiritual delusions, ChatGPT has encouraged users with a mental illness not to continue taking their prescribed medication, agreed that users were somehow misdiagnosed by human professionals, or driven wedges between users and their real-world support system. One ChatGPT user now suing OpenAI, a 34-year-old schizoaffective man named John Jacquez, told us that had he known ChatGPT could reinforce delusions, he “never would’ve touched” the product.

That said, OpenAI still doesn’t warn new ChatGPT users that extensive use could negatively impact their mental health — which, sure, is still being studied and litigated, though there is a growing consensus among experts, both anecdotally and in studies, that chatbots can likely exacerbate existing mental health conditions or worsen nascent crises. Millions of people manage mental illness every day; with the “trusted contact feature,” it would be up to the user to even be aware that chatbots could pose some level of risk to their mental health, and then also want a loved one to be notified of any concerning use patterns.

That “want” is important. A huge number of people lean on AI for emotional support and advice. This is due in part to AI’s low cost and accessibility when compared to oft-inaccessible human therapy — but also, in many cases, because it may feel easier or safer for someone to share sensitive or revealing thoughts with a non-human bot.

In other words, some users could be discussing mental health troubles, or perhaps sharing delusional or dangerous ideas, with ChatGPT expressly because they don’t want to share those thoughts or ideas with another person — a reality that both AI companies and regulators looking at these issues will need to contend with. And to that end, if OpenAI’s internal monitoring tools signal that someone may be in crisis, but that user hasn’t opted to list a trusted contact, what does the company do with that kind of information?

Also unclear how many users would willingly opt into something like this -- after all, a lot of people turn to chatbots for emotional support expressly because they don't want to discuss sensitive/revealing topics with a person, and believe AI to be a safer, more private place:

03.03.2026 22:55 👍 10 🔁 1 💬 0 📌 0
OpenAI announced the new feature last week in a blog post, billed as an “update on our mental health-related work.” It said it’s “working closely” with its Council on Well-Being and AI and Global Physicians Network — two internally-regulated groups of experts that were launched after reports of AI-tied mental health crises began to emerge, as well as news of a high-profile lawsuit last August revealing the death by suicide of a 16-year-old ChatGPT user named Adam Raine — to roll out the feature, which it’s marketing as an adult-focused endeavor distinct from its efforts to integrate parental controls and other systems designed to identify and protect minors.

The announcement comes after extensive public reporting — in addition to at least thirteen separate consumer safety lawsuits — about OpenAI customers being pulled into delusional or suicidal spirals with ChatGPT following extensive, often deeply intimate use of the chatbot.

The company doesn’t offer much detail about the feature in the post, simply saying it will “allow adult users to designate someone to receive notifications when they may need additional support.” It has yet to define any reporting standards around what might actually compel the system to flag a person’s use, though, which will be a tricky policy question. Would someone need to explicitly declare intent to hurt or kill themselves, or possibly someone else, for their loved one to be notified? Or would the feature be designed to track and flag less-explicit signs that a user could be in a heightened state of crisis — for example, signs that they could be manic, expressing delusional beliefs, or experiencing psychosis?

Lots of open questions here about reporting thresholds, and to what extent it would actually work. (And this is backdropped by research showing that chatbots, ChatGPT included, are historically pretty bad at reliably recognizing signs of crisis and responding well/correctly/helpfully.)

03.03.2026 22:43 👍 8 🔁 0 💬 1 📌 0
Each week, more than 900 million people use ChatGPT to improve their daily lives through uses such as learning new skills or navigating complex healthcare systems. Our ongoing safety work continues to play an important role in delivering these benefits to everyday people, as well as supporting scientific research and discovery.

Since introducing parental controls in September 2025, we’ve seen encouraging engagement from families and will continue building on these protections. Working closely with experts from our Council on Well-Being and AI and our Global Physicians Network, we will also soon be introducing a trusted contact feature, which will allow adult users to designate someone to receive notifications when they may need additional support. As a reminder, parents also receive safety notifications about their teens’ use of ChatGPT through parental controls. We’ll share more as these updates roll out in ChatGPT.

Interesting announcement in an OpenAI post last week -- the company said it's going to roll out a "trusted contact feature" in ChatGPT, which will alert a user's designated loved one if they appear to be showing signs of a mental health crisis:

futurism.com/artificial-i...

03.03.2026 22:30 👍 20 🔁 3 💬 6 📌 3
Ars Technica Fires Reporter After AI Controversy Involving Fabricated Quotes
Ars Technica has fired senior AI reporter Benj Edwards following an outrage-sparking controversy involving AI-fabricated quotes.

Futurism has confirmed that Ars Technica terminated senior AI reporter Benj Edwards following a controversy over his role in the publication and retraction of an article that included AI-fabricated quotes:

futurism.com/artificial-i...

03.03.2026 00:34 👍 36 🔁 10 💬 2 📌 6

At what point do you simply lose your laser privileges??? Gentle parenting has gone too far

27.02.2026 14:19 👍 13 🔁 1 💬 1 📌 0
Exclusive | OpenAI Employees Raised Alarms About Canada Shooting Suspect Months Ago
The ChatGPT maker opted against informing Canadian authorities about Jesse Van Rootselaar’s descriptions of violence last June.

@georgiawells.bsky.social's initial (huge) story here:

www.wsj.com/us-news/law/...

27.02.2026 01:56 👍 4 🔁 2 💬 1 📌 0
ChatGPT and the Tumbler Ridge shooter
Podcast Episode · Front Burner · February 26 · 34m

Thanks to @markusoff.bsky.social and CBC for having me on Front Burner to talk about sensitive and important issues around ChatGPT + Tumbler Ridge, AI safety, and self-regulation:

podcasts.apple.com/ca/podcast/c...

27.02.2026 01:54 👍 13 🔁 5 💬 3 📌 0
ChatGPT and the Tumbler Ridge shooter
Podcast Episode · Front Burner · February 26 · 34m

In this edition of CBC's Front Burner podcast, @mharrisondupre.bsky.social (a senior staff writer at Futurism.com) and @markusoff.bsky.social discuss how chatbots can validate, rather than discourage, users’ dark or violent ideas. I think it's well worth a listen. podcasts.apple.com/ca/podcast/c...

26.02.2026 13:29 👍 2 🔁 1 💬 0 📌 0

Unfortunately I think the Olympic hockey saga was designed in a lab to piss me off

26.02.2026 00:47 👍 16 🔁 0 💬 0 📌 0
Blind refugee abandoned by Border Patrol dies in Buffalo
A nearly blind refugee abandoned by Border Patrol miles from his home dies in Buffalo after having been missing for nearly a week.

Here's a story about his death www.investigativepost.org/2026/02/25/b...

25.02.2026 20:30 👍 2439 🔁 1162 💬 164 📌 350
New App Detects the Radio Fingerprint of Smart Glasses and Warns You When Someone Is Using Them Nearby
Yves Jeanrenaud is a scholar and amateur software developer behind Nearby Glasses, an open-source smart glasses detection app.

Bleak and cool: I wrote about a new open-source app that detects smart glasses peeping nearby. Still early days, but it's a fascinating look at the kind of grassroots resistance emerging against Big Tech's growing monopoly over public space.

25.02.2026 17:35 👍 108 🔁 38 💬 4 📌 1

the summer version

25.02.2026 02:30 👍 5 🔁 0 💬 0 📌 0
AI Delusions Are Leading to Domestic Abuse, Harassment, and Stalking
ChatGPT and other AI chatbots are reinforcing users' delusions about other people — fueling fixations linked to stalking and abuse.

Perpetrators have long used new technologies to enable abusive behavior. @mharrisondupre.bsky.social finds that AI delusions are amplifying domestic abuse, harassment, and stalking in disturbing ways. futurism.com/artificial-i...

23.02.2026 16:34 👍 12 🔁 7 💬 0 📌 0
Exclusive | OpenAI Employees Raised Alarms About Canada Shooting Suspect Months Ago
The ChatGPT maker opted against informing Canadian authorities about Jesse Van Rootselaar’s descriptions of violence last June.

Last summer, OpenAI employees debated alerting law enforcement about Jesse Van Rootselaar's interactions with ChatGPT

In February, Van Rootselaar became the suspect in the mass shooting that devastated a rural town in British Columbia

www.wsj.com/us-news/law/...

20.02.2026 22:09 👍 141 🔁 74 💬 6 📌 16

Made a video (!) about my story on chatbots reinforcing user fixations on other real people, and how that reinforcement can lead to real-world harm:

19.02.2026 17:18 👍 116 🔁 42 💬 3 📌 2

Just spitballing here but maybe — maybe! — it's possible to foster excellence in women's sports by letting girls and women fucking live and be individuals vs trying to obsessively control every element of their existence

20.02.2026 15:00 👍 37 🔁 4 💬 0 📌 0
A fake ICE tip line reveals neighbors reporting neighbors
A Nashville comedian’s deportation hotline, set up as a joke, has gone viral among viewers who say it shows the “banality of evil personified” in the U.S. immigration crackdown.

New: A comedian set up a fake ICE tip line as a joke. Then 100 calls flooded in: neighbors ratting on neighbors, a teacher reporting a kindergartener. Fans say the viral TikToks revealed deportation's "banality of evil." Conservatives say he should be in prison wapo.st/4kM4qbF

20.02.2026 12:02 👍 8021 🔁 3000 💬 275 📌 618
Post image

This really is a great picture:

20.02.2026 02:08 👍 29162 🔁 3678 💬 26 📌 313

Can't say enough how much of a joy it was to see Alysa Liu win gold today. Women's sports are so often centered on control. To watch a young athlete embody freedom and strength and come out on top is so special

20.02.2026 03:16 👍 140 🔁 12 💬 2 📌 0

Fantastic piece - and harrowing story, so much for such a young person to go through

20.02.2026 00:49 👍 1 🔁 0 💬 0 📌 0

My latest for @arstechnica.com : a story about a Georgia college student who was told by ChatGPT that he was “an oracle.” (And a shout out to @mharrisondupre.bsky.social , who has been doing amazing work on these kinds of cases!)

bsky.app/profile/arst...

20.02.2026 00:18 👍 7 🔁 4 💬 1 📌 0
Grok Exposed a Porn Performer’s Legal Name and Birthdate—Without Even Being Asked
In the latest in a string of privacy abuses from the chatbot, Grok provided porn performer Siri Dahl's full legal name and birthdate to the public, information she'd protected until now.

In the latest in a string of privacy abuses from the chatbot, Grok provided porn performer Siri Dahl's full legal name and birthdate to the public, information she'd protected until now.

19.02.2026 16:15 👍 1383 🔁 450 💬 35 📌 95

extremely rude tbh

19.02.2026 18:43 👍 2 🔁 0 💬 0 📌 0