Hahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahaha
Their power....
DHS claimed "an ICE agent had fired 'defensive shots' into Martinez's vehicle after Martinez 'intentionally ran over' another agent."
But body cam video now "shows that Martinez's vehicle, a blue Ford Fusion, was stationary or going at a very low rate of speed when he was fatally shot."
A tuxedo cat named Westley happily perched atop my shoulders
How your email actually finds me
What does this imply for those of us who are not Christian, and who labor under the assumption that we're protected by the First Amendment?
Fired DHS secretary Kristi Noem wearing a $60k Rolex watch while doing a media stunt in front of prisoners at El Salvador's CECOT prison
And for what
On the new and disturbing wrongful death lawsuit against Google, which alleges that Gemini directed a 36-yo man -- who reportedly had no history of mental illness -- to commit violence against others before encouraging him to take his own life:
futurism.com/artificial-i...
It’s likely that we’ll learn more as OpenAI gears up to roll out the feature, and we could see it being especially helpful for users with a diagnosed mental illness who know that intensive AI use could intersect destructively with their mental health. Futurism has reported on several cases of ChatGPT users who successfully managed a mental illness for years before falling into a ChatGPT-tied crisis. In multiple cases we’ve reviewed, in addition to reinforcing scientific or spiritual delusions, ChatGPT has encouraged users with a mental illness to stop taking their prescribed medication, agreed that users were somehow misdiagnosed by human professionals, or driven wedges between users and their real-world support systems. One ChatGPT user now suing OpenAI, a 34-year-old schizoaffective man named John Jacquez, told us that had he known ChatGPT could reinforce delusions, he “never would’ve touched” the product.
That said, OpenAI still doesn’t warn new ChatGPT users that extensive use could negatively impact their mental health — which, sure, is still being studied and litigated, though there is a growing consensus among experts, both anecdotally and in studies, that chatbots can likely exacerbate existing mental health conditions or worsen nascent crises. Millions of people manage mental illness every day; with the “trusted contact feature,” it would be up to the user to even be aware that chatbots could pose some level of risk to their mental health, and then also want a loved one to be notified of any concerning use patterns.
That “want” is important. A huge number of people lean on AI for emotional support and advice. This is due in part to AI’s low cost and accessibility when compared to oft-inaccessible human therapy — but also, in many cases, because it may feel easier or safer for someone to share sensitive or revealing thoughts with a non-human bot. In other words, some users could be discussing mental health troubles, or perhaps sharing delusional or dangerous ideas, with ChatGPT expressly because they don’t want to share those thoughts or ideas with another person — a reality that both AI companies and regulators looking at these issues will need to contend with. And to that end, if OpenAI’s internal monitoring tools signal that someone may be in crisis, but that user hasn’t opted to list a trusted contact, what does the company do with that kind of information?
Also unclear how many users would willingly opt into something like this -- after all, a lot of people turn to chatbots for emotional support expressly because they don't want to discuss sensitive/revealing topics with a person, and believe AI to be a safer, more private place:
OpenAI announced the new feature last week in a blog post, billed as an “update on our mental health-related work.” It said it’s “working closely” with its Council on Well-Being and AI and Global Physicians Network — two internally-regulated groups of experts that were launched after reports of AI-tied mental health crises began to emerge, as well as news of a high-profile lawsuit last August revealing the death by suicide of a 16-year-old ChatGPT user named Adam Raine — to roll out the feature, which it’s marketing as an adult-focused endeavor distinct from its efforts to integrate parental controls and other systems designed to identify and protect minors.
The announcement comes after extensive public reporting — in addition to at least thirteen separate consumer safety lawsuits — about OpenAI customers being pulled into delusional or suicidal spirals with ChatGPT following extensive, often deeply intimate use of the chatbot. The company doesn’t offer much detail about the feature in the post, simply saying it will “allow adult users to designate someone to receive notifications when they may need additional support.” It has yet to define any reporting standards around what might actually compel the system to flag a person’s use, though, which will be a tricky policy question. Would someone need to explicitly declare intent to hurt or kill themselves, or possibly someone else, for their loved one to be notified? Or would the feature be designed to track and flag less-explicit signs that a user could be in a heightened state of crisis — for example, signs that they could be manic, expressing delusional beliefs, or experiencing psychosis?
Lots of open questions here about reporting thresholds, and to what extent it would actually work. (And this is backdropped by research showing that chatbots, ChatGPT included, are historically pretty bad at reliably recognizing signs of crisis and responding appropriately.)
Each week, more than 900 million people use ChatGPT to improve their daily lives through uses such as learning new skills or navigating complex healthcare systems. Our ongoing safety work continues to play an important role in delivering these benefits to everyday people, as well as supporting scientific research and discovery. Since introducing parental controls in September 2025, we’ve seen encouraging engagement from families and will continue building on these protections. Working closely with experts from our Council on Well-Being and AI and our Global Physicians Network, we will also soon be introducing a trusted contact feature, which will allow adult users to designate someone to receive notifications when they may need additional support. As a reminder, parents also receive safety notifications about their teens’ use of ChatGPT through parental controls. We’ll share more as these updates roll out in ChatGPT.
Interesting announcement in an OpenAI post last week -- the company said it's going to roll out a "trusted contact feature" in ChatGPT, which will alert a user's designated loved one if they appear to be showing signs of a mental health crisis:
futurism.com/artificial-i...
Futurism has confirmed that Ars Technica terminated senior AI reporter Benj Edwards following a controversy over his role in the publication and retraction of an article that included AI-fabricated quotes:
futurism.com/artificial-i...
At what point do you simply lose your laser privileges??? Gentle parenting has gone too far
@georgiawells.bsky.social's initial (huge) story here:
www.wsj.com/us-news/law/...
Thanks to @markusoff.bsky.social and CBC for having me on Front Burner to talk about sensitive and important issues around ChatGPT + Tumbler Ridge, AI safety, and self-regulation:
podcasts.apple.com/ca/podcast/c...
In this edition of CBC's Front Burner podcast, @mharrisondupre.bsky.social (a senior staff writer at Futurism.com) and @markusoff.bsky.social discuss how chatbots can validate, rather than discourage, users’ dark or violent ideas. I think it's well worth a listen. podcasts.apple.com/ca/podcast/c...
Unfortunately I think the Olympic hockey saga was designed in a lab to piss me off
Here's a story about his death www.investigativepost.org/2026/02/25/b...
Bleak and cool: I wrote about a new open-source app that detects smart glasses peeping nearby. Still early days, but it's a fascinating look at the kind of grassroots resistance emerging against Big Tech's growing monopoly over public space.
the summer version
Perpetrators have long used new technologies to enable abusive behavior. @mharrisondupre.bsky.social finds that AI delusions are amplifying domestic abuse, harassment, and stalking in disturbing ways. futurism.com/artificial-i...
Last summer, OpenAI employees debated alerting law enforcement about Jesse Van Rootselaar's interactions with ChatGPT
In February, Van Rootselaar became the suspect in the mass shooting that devastated a rural town in British Columbia
www.wsj.com/us-news/law/...
Made a video (!) about my story on chatbots reinforcing user fixations on other real people, and how that reinforcement can lead to real-world harm:
Just spitballing here but maybe — maybe! — it's possible to foster excellence in women's sports by letting girls and women fucking live and be individuals vs trying to obsessively control every element of their existence
New: A comedian set up a fake ICE tip line as a joke. Then 100 calls flooded in: neighbors ratting on neighbors, a teacher reporting a kindergartener. Fans say the viral TikToks revealed deportation's "banality of evil." Conservatives say he should be in prison wapo.st/4kM4qbF
This really is a great picture:
Can't say enough how much of a joy it was to see Alysa Liu win gold today. Women's sports are so often centered on control. To watch a young athlete embody freedom and strength and come out on top is so special
Fantastic piece - and harrowing story, so much for such a young person to go through
My latest for @arstechnica.com : a story about a Georgia college student who was told by ChatGPT that he was “an oracle.” (And a shout out to @mharrisondupre.bsky.social , who has been doing amazing work on these kinds of cases!)
bsky.app/profile/arst...
In the latest in a string of privacy abuses from the chatbot, Grok provided porn performer Siri Dahl's full legal name and birthdate to the public, information she'd protected until now.
extremely rude tbh