@rachelcoldicutt
Hopeful technologist. Community tech, careful innovation, socially progressive tech policy. https://www.careful.industries https://hopeful.technology https://buttondown.email/justenoughinternet DMs don't work but hello@careful.industries will find me
Yep. There is an underlying issue here that reliance on LLMs also undermines safety critical assessments in a distributed way, creating a bigger threat surface. But also, I don't get the impression that Trump particularly gives a damn about that.
Anyway, sorry to distribute my anxiety via the timeline but ... this doesn't feel good at all.
But also a lot of things about the present moment make me feel professionally more anxious than usual, which is not ... a good sign. We've had over a year of the US govt routinely showing unthinkable things can happen, and that decades of warnings from civil liberties orgs were not overblown.
This is where we are with the findings on our foresight review www.lrfoundation.org.uk/news/an-end-... if you're working on sociotechnical assurance or safety by design am v keen to hear of best practice ...
It's just like at school: you only need one idiot not to follow the rules and suddenly you're all in detention. I think if Anthropic had any backbone they'd respond to this moment by stopping development of general purpose models and withdrawing some tools.
Or, to put it another way, I've changed my mind: I think existential threat is now a relevant framing for AI safety. But it's not the existential threat of loss of control of AGI, it's the existential threat of the US govt having the ability to rashly use LLMs that have been deployed in haste
Purpose limitation and acceptable use policies hinge on the tacit acceptance of a social contract and shared norms around rights and justice. When a technology can be used to enable more or less anything, and when safe conditions of use can't be guaranteed, that changes the purpose of innovation
The Trump administration has drawn up tight rules for civilian artificial intelligence contracts that would require AI companies to allow "any lawful" use of their models amid a stand-off between the Pentagon and Anthropic. A draft of new government guidelines, seen by the FT, mandates that AI groups that want to do business with the government grant the US an irrevocable licence to use their systems for all legal purposes. The guidance from the US General Services Administration (GSA) would apply to civilian contracts and is part of a government-wide effort to strengthen procurement of AI services.
One of the things I'm working on at the moment is a foresight review on the safe adoption of AI, and I'm pretty sure this move by the US govt busts any myth that it's possible to robustly assure general purpose AI giftarticle.ft.com/giftarticle/...
[Image: a woman wearing colourful clothes and a sash standing in front of an audience, with a panel that says "Data and AI"]
At the @connectedbydata.org Power and Participation in Public Data conference today, hearing from the brilliant @jenitennison.com
oh also "Britain" doesn't include N Ireland (the full designation is Great Britain and Northern Ireland), so UK tends to be preferred
"Britons" usually refers to the ancient Celtic people - "British people" would be more usual, but I also suspect it's a deliberate choice to invoke "citizens" rather than all people in the UK
I read a lot of terrible tech surveys that retread old ground but this seems like a genuinely useful and novel bit of research, that is hopefully close enough to the Labour Party for some ministers to actually read and take seriously. (Although, honestly, "Britons" is more than a bit weird.)
[Chart: Established Liberals, Sceptical Scrollers and the Incrementalist Left are much more likely to use AI than their peers]
Also whodathunk that AI is a centrist technology (this made me laugh, re: some of the "Bluesky hates AI" discourse)
[Chart: Established Liberals and Progressive Activists are more likely than most to consider themselves tech savvy; on this chart Rooted Patriots are the least tech-savvy segment]
Also, reckon this would have been useful to have to hand when digital ID was being announced as an anti-immigration tool last year. A little look at the "rooted patriots" might have saved quite a lot of bother.
Honestly, absolute LOLs on the terminology front
[Chart: over half of Britons concerned that change is happening too fast - fairly tied results, with most people thinking a little too fast or about the right pace]
Fairly consistent thoughts across the board on the pace of technological change, which is *interesting* given how divided views are along political lines among the opinionati. Also a little pro-tech boost for the Greens.
[Chart: Britons would rather their employers have the most say in how new technologies are introduced in the workplace. Question asked: "Who do you think should have the most say over how automation and new technologies are introduced in the workplace?"]
Most surprising result to me is that people seem to be happy delegating decisions about technologies to employers rather than to workers or the labour movement. (Also what a sign of the times that Green voters are more in favour of workers' autonomy than Lab voters.)
[Chart: allowing AI to control drones and robots to make autonomous decisions to use lethal force, running an entire news publication autonomously, replacing medical professionals for basic medical advice, using AI to make employment decisions, recommending criminal sentences in court, deciding whether to arrest or charge someone, and replacing teachers with AI in the classroom all score around 50% or higher on the negative scale]
The questions on what people do and don't want AI to be used for are unusually good and specific and, in the main, the results are really interesting - perhaps also assisted by the fact that more people have used AI now and have an idea what it does. (Formatting is a bit b0rked but red = no.)
Once I got past the fact that the use of the term Britons (and sometimes "Britains") made me feel a bit queasy, there is a load of interesting stuff in this Labour Digital/More in Common polling on public attitudes to tech. www.moreincommon.org.uk/blog/britons...
The thing that is different, I think, is that women are behaving differently: lots of senior women I know are great allies, tired of BS, making change whenever they can. So things are different. On paper they might look "better", and they are in some ways, but that change hasn't been a straight line
And these days, certainly in the fields of tech and policy that I work in, the glass ceiling remains, patronising sexism is the norm, and the structural barriers that looked like they might shift in the 2010s - access to funding, women in C-suite roles - are back where they were in the 2000s.
For instance, in my own working life, I've seen work place sexism change - but that's to say it's different, not gone. Pre-2010, many women I knew in professional jobs experienced sexual assault at work. It was normal. In the 2010s, "diversity" was more fashionable, but so was being patronised.
In reality social change happens in clusters and is full of forwards and backwards motion. Not everything can be reduced to a statistical pattern of the kind that can be predicted by an LLM.
Secondly, the belief that change - or progress, perhaps - happens in a straight line seems to be very prevalent at the moment. Whether it's technology adoption or social change, change is messy and complex. I wonder if it's because the enchartification of everything demands seeing coherent patterns.
Reading this on the attitudes of Gen Z men, 2 thoughts occur. Firstly there is presumably no comparative data about what Boomer men thought in their 20s. It seems highly likely that the actual experience of being married will play a part in shaping one's attitudes www.theguardian.com/world/2026/m...
I woke up at 4:30 today and broke my resolution about not looking at the news in the morning. Let me tell you, it was not worth it.
A week continues to be a long time in technology news giftarticle.ft.com/giftarticle/...
*I know about the problems! kthxbai
<tentatively enters the discourse> One of the many* problems with genAI is that it is no good at writing, and the people who are best at having opinions on Bluesky are the ones who are good at writing and whose work may have been nicked for training data, so obvs they don't like it. </runs away>