It hardly matters if a person gives final approval. They have been provided with a convincing counterfeit of an intelligence report. Automation bias and institutional culture discourage rigorous independent checking. It's an inevitable result of systemic choices. Choices that are being celebrated.
06.03.2026 15:18 · 3 likes · 0 reposts · 0 replies · 0 quotes
The Most Moral Army | Los Angeles Review of Books
Mary Turfah examines Israeli officials' weaponization of language, particularly that of medicine, in an attempt to reframe their genocide in Gaza.
Media will keep drooling over the "surgical precision" of "AI" targeting. Nobody involved is behaving as though they will face accountability. The chatbot creates permission; it grants plausible deniability. Discourse often gets hung up on human intent, but this is the impact.
06.03.2026 15:18 · 3 likes · 0 reposts · 1 reply · 0 quotes
It's easily lost, but the US military does not need a chatbot to *find* or execute a double-tap bombing of a girls' elementary school. The chatbot provides something shaped like a target analysis without the diligence, cognitive processes, and accountability of a human conducting a target analysis.
06.03.2026 15:18 · 4 likes · 0 reposts · 1 reply · 0 quotes
Nine mostly white dudes on stage in a manel.
Everyone wants to sign letters The Future of Life Institute puts out every few years, it seems.
Take a look at this manel, which happened around their first letter 👇
The billionaires and eugenicists on this manel are the actual existential risks to humanity we should worry about.
04.03.2026 23:26 · 85 likes · 25 reposts · 5 replies · 2 quotes
Yes, banned from Bluesky. Blacksky recently moved to its own appview so Bluesky-banned users like ลink are available. This is obviously still limiting, but there could be a future in which Bluesky is not a singular power in the network. (I moved in the fall; can recommend.)
05.03.2026 05:03 · 1 like · 0 reposts · 0 replies · 0 quotes
Another day, another chatbot convincing someone to kill themselves. But yeah, let's keep talking about how to put these in our classrooms!
04.03.2026 16:11 · 112 likes · 50 reposts · 1 reply · 0 quotes
Only "shocking" in how these products are marketed and widespread failure to reflect on well-known cognitive biases going back to the ELIZA Effect (1966). These problems are inherent to the tech because LLMs are purely linguistic devices. Convincing (at times) counterfeiters, always epistemic void.
03.03.2026 17:16 · 0 likes · 0 reposts · 0 replies · 0 quotes
Fraudulent citation of real sources is a Grammarly feature (not even ironically). See it (and more counterfeiting) in action in this video:
www.carleton.edu/ai/blog/gram...
03.03.2026 17:05 · 4 likes · 2 reposts · 1 reply · 0 quotes
AI and Ethics
AI and Ethics seeks to promote informed debate and discussion of the ethical, regulatory, and policy implications that arise from the development of AI. It ...
CFP, AI Resistance, Refusal, Reclamation & Reimagining: Ethical Imperatives and Emerging Practices! Note "This collection is focused ...on the strategies & actions of individuals, communities, organisations & collectives to actively resist, refuse, reimagine & reclaim 'artificial intelligence'."
03.03.2026 15:30 · 41 likes · 23 reposts · 0 replies · 2 quotes
Over 50% of game developers now think generative AI is bad for the industry, a dramatic increase from just 2 years ago: 'I'd rather quit the industry than use generative AI'
The latest GDC survey also found that managers are more likely to use generative AI than their employees.
Also note that many developers are subject to coercive measures to maximize "AI" use and to self-report success (performance evals), quality be damned. And contrast the rosy claims of the Potemkin performance for investors with what developers say when able to speak freely.
03.03.2026 14:27 · 7 likes · 1 repost · 0 replies · 0 quotes
Excellent analysis of that study.
03.03.2026 14:18 · 2 likes · 0 reposts · 1 reply · 0 quotes
Evidence for the first case is surprisingly flimsy, even contradicted by Anthropic's own study (as well as the 2025 METR report showing strong cognitive bias in self-assessment). And people switch to maladaptive use modes thinking it's helping, even when it causes a 30% drop in understanding.
03.03.2026 14:16 · 5 likes · 0 reposts · 1 reply · 0 quotes
Good thread on the psychological violence and emptiness of wrangling synthetic text.
Still, note that "hallucination" is a misnomer; the epistemic nihilism is always present. Meaning is strictly in the eye of the beholder, and correctness or falsehood can only ever be incidental, linguistic serendipity.
03.03.2026 07:05 · 20 likes · 8 reposts · 0 replies · 0 quotes
They are making money inside companies by automating reconciliation workflows. By qualifying inbound leads. By generating compliance documentation. By reducing customer support overhead. By stitching together painful operational tasks that humans hate doing.
These people tell on themselves like clockwork. It's great at creating plausible deniability for financial fraud, regulatory compliance fraud, and deceptive responses to customers (unless a court says we have to honor its promises). Just cuz we believe these jobs don't deserve to be done correctly.
03.03.2026 05:03 · 3 likes · 0 reposts · 1 reply · 0 quotes
Try now? ลink has been working on staging.blacksky.community since mid-Jan, and only today on main (not the first time I tried on the new appview, but it started working sometime within the past few hours for me).
03.03.2026 03:18 · 1 like · 0 reposts · 1 reply · 0 quotes
Fun that it's been up over 14 hours with an identical quote printed twice. The quote itself is so generic it could be slop. Totally normal and intentional editorial standards.
03.03.2026 01:41 · 1 like · 0 reposts · 0 replies · 0 quotes
AI is a lack-of-consent machine that is designed to move us all toward slavery
02.03.2026 19:42 · 79 likes · 37 reposts · 0 replies · 0 quotes
And yet universities continue behaving as though a heartfelt plea to Russ Vought will convince this administration on the merits to stop impounding funds and dismantling the agencies.
02.03.2026 15:33 · 20 likes · 9 reposts · 0 replies · 0 quotes
The decline in gas while there is a run on turbine manufacturing is interesting. With hundreds of TWh of new US data centers supposedly coming by 2030, you gotta wonder if coal will bear more of that demand than has been forecast.
02.03.2026 15:22 · 0 likes · 0 reposts · 0 replies · 0 quotes
Ah, yes. The signature of a product that totally stands on its own merits and will soon deliver on its promise to be a $2T/year industry.
02.03.2026 01:14 · 22 likes · 1 repost · 0 replies · 0 quotes
Text that reads, "what if, rather than talking about AI broadly, which includes a wide range of technologies that have existed long before ChatGPT's 2022 launch and that have a variety of functionalities, purposes, and implications, we use more specific terms like "(text/image/code) generative AI," "LLMs," or "chatbots"? What if rather than talking about "AI" writing, we identified LLM outputs as "synthetic text," "synthetic media," or simply "output"? What if we stopped saying that LLMs can "read" or "think,"—which they can't—and instead described what is occurring in these moments as "processing"? What if, rather than "hallucination," we used "inaccuracy," "error," "misinformation," or even "disinformation"? How might we, as rhetoricians and as computers and writing scholars, use our expertise to more critically study the discourses and rhetorics that are used to discuss these products, in ways that go beyond isolated experiences and single use cases, to analyze the broader social, political, and global contexts in which generative AI is embedded, including how it might function to "reinforce dominant ideologies and power structures"? And how might we then build systems and infrastructures that meaningfully take up what we find from such analyses?"
In this talk, I interrogated the use of the word "critical" in conversations about generative AI in education and argued for care and precision in how we talk about these products. wacclearinghouse.org/docs/proceed...
Check out the full Proceedings here: wacclearinghouse.org/.../cw2025/p...
01.03.2026 14:49 · 23 likes · 11 reposts · 1 reply · 0 quotes
Reminder that text extruders never have intent. Each model can emit positive or negative responses based on linguistic perturbation of the prompt. Some models have stronger biases than others, but even that is not an ideology. The epistemic nihilism is inherent to all such models.
01.03.2026 20:05 · 30 likes · 13 reposts · 0 replies · 0 quotes
When systems have to work, you need people with an accurate, nuanced mental model. Experts with a mediocre product are more capable than poorly-oriented engineers with a great product, and vibe-coding does not produce great products. Knowledge is an essential output of software engineering.
01.03.2026 19:53 · 5 likes · 1 repost · 0 replies · 0 quotes
We should entertain the possibility that the "dispute" was a viral marketing campaign.
01.03.2026 18:44 · 3 likes · 0 reposts · 0 replies · 0 quotes
The inevitability frame is insidious, but it unravels if we apply agency standards (e.g., Air Canada retracting their chatbot when the court held them responsible for its promises) or strict liability (as in the People-First Chatbot Bill: epic.org/wp-content/u...). Also useful 👇
01.03.2026 17:00 · 17 likes · 2 reposts · 0 replies · 0 quotes
OFFS. When Iran tried to interfere in 2020, researchers caught them and called them out. Then the Benz-Weiss-Taibbi-Musk-Jordan-Trump axis labeled those researchers "censors" ... and set about defunding them and dismantling their organizations.
28.02.2026 16:43 · 3878 likes · 1269 reposts · 36 replies · 42 quotes
People are proud of using the Permission Manufacturing Machine. Strict scrutiny must be applied to such decisions.
01.03.2026 15:22 · 5 likes · 1 repost · 1 reply · 0 quotes
I don't know if this relation is causal, but these are the results one should expect when using an LLM for "target identification". People are choosing to place these products in such positions, where their purpose is to launder liability.
www.middleeasteye.net/news/least-2...
01.03.2026 06:43 · 13 likes · 2 reposts · 2 replies · 0 quotes
Unions are the locus of power best positioned to refuse "AI" weaponized as an anti-labor device, which is its primary function in the workplace. Unions are us, and thus are what we make of them. Talk to your colleagues, speak up at union meetings, demand solidarity; that is where success will come from.
28.02.2026 15:29 · 18 likes · 11 reposts · 1 reply · 0 quotes