I read the whole interview: a crisis communications team spent three days working every answer nearly to death.
Universal design makes access easier for everyone.
A profligate, partisan ploy: paying a princely price for a performative probe, then pocketing the proof to protect powerful people.
bsky.app/profile/keun...
S.M.A.R.T. Goals.
Why are people carrying rat poison in checked baggage?
The novel legal defense theory at work: "No one has ever been sued for this, we should be safe."
In a statement to The Verge, Alex Gay, vice president of product and corporate marketing at Grammarly parent company Superhuman, commented: "The Expert Review agent doesn't claim endorsement or direct participation from those experts; it provides suggestions inspired by works of experts and points users toward influential voices whose scholarship they can then explore more deeply."
This is going to be an interesting set of lawsuits:
www.theverge.com/ai-artificia...
You don't have to participate in AI's massive hype inflation, writes critical informatics scholar Britt S. Paris. You have a right to refuse the "inevitable."
242 days till U.S. midterms.
The New Yorker: Kristi Noem will be remembered as the most incompetent Secretary in the 23-year history of the D.H.S.
I don't think the incompetence actually is going to be what Kristi Noem is remembered for.
It will be the killings, the lawlessness, and the grift, if I had to guess.
Cats evolved as a species 10,000 years ago and it took us till the 1970s to agree they had emotions.
GenAI is all of three years old but yes of course it has sentience.
"One" beer.
Fun thread about AI:
I am recommending to everyone thinking it is a trivial process to shift work to AI and then collapse multiple job responsibilities into newly defined human roles to read: www.hup.harvard.edu/books/978067...
I actually started reading more non-fiction during the pandemic and now am back to 50/50.
By rule, I RT any post referencing 'Seeing Like a State' (or 'Normal Accidents').
the harder the industry invests in pushing narratives of AI as (only) positive, inevitable, and inherently good for society/business, the more any criticism of this narrative becomes "too radical," "unworkable," and "unrealistic"
"In testing by CalMatters, they [the chatbots] often answered general questions correctly but struggled with more specific ones. East Los Angeles College's bot couldn't even correctly name its own president."
I don't understand politicians who want to "bring back real macho industrial jobs like coal mining" and also fawn over tech CEOs who argue that a chunk of software they wrote has developed feelings and anxieties.
Here is a true fact: GenAI is making it possible for PR companies to create factually true but fine-tuned mass customization of messages for individual and mass social distribution.
Meanwhile, journalism will always be slower and more expensive with fewer AI-generated stock photos.
I am here but I am missing all of the festivities this year. I will look for that film online tho. Thanks.
Media literacy now is 80% information hygiene. On social media you get one strike: I block or mute at the first instance of using AI as a source of news or analysis.
It is self-preservation. Garbage in, garbage out.
The most human thing about AI is its built-in cognitive biases. This one is "social desirability bias."
A Mar 4, 2026 Bluesky post claimed to explain why we bombed a school in Iran using "INFORMATION THAT IS A DECADE OLD," and cited a Claude chat answer as its source.
Hey guys, this is not a viable source. Besides the fact that any AI system being used by the government presumably has access to more up-to-date data (I sure hope), Claude is not a person: it can't answer questions about why it did something, it confidently lies, and it doesn't have access to others' internal state.
But since it was written by AI (from notes, I guess), the trap is expending human cognitive effort to argue with it. Which is why its view of the future is largely false.
The premise assumes that because a tool (hypothetically, and still unproven) makes research cheaper and faster, the social structures of academia will automatically bend to that efficiency.
That is predicting highways but not traffic jams: infrastructure, not social structures and their consequences.
I have tried to avoid commenting on this post, but I read it again and it is such classic Tech Utopianism.
(It also claims to be AI-authored, so tough to know if the faults are the model's or the author's?)
open.substack.com/pub/alexande...
Such a fascinating potential story that I would not retweet with a 10-foot pole without additional details and sourcing.
Took the Regional Redeye back from the west coast last night. That's where you take an early evening flight back to the hub, sleep there, and take the earliest flight home the next morning.
Which is a thing you do not to lie, but to cast aspersions on truth as something that can be known.