This sounds fun!
🧵 I recently spoke with Taylor Owen on @theglobeandmail.com's Machines Like Us podcast about an urgent question: How do you make a technology safe when the political will to govern it has evaporated? And what happens if we don't? www.theglobeandmail.com/podcasts/mac...
Modi offers his MANAV ('human' in Hindi) vision for AI @ #AIImpactSummit
M: moral and ethical AI systems
A: accountable AI governance
N: national AI + data sovereignty
A: accessible and inclusive AI
V: valid and verifiable AI
What these principles mean in practice in India remains to be seen.
So excited about our new board appointments: Ed Humpherson, @mmitchell.bsky.social and @geomblog.bsky.social
With expertise ranging across AI research, computer science and public statistics, they are aligned with Ada's values, sharing a focus on the public interest, accountability, fairness and rigour
Governments both regulate AI and deploy it across public services. This AI Policy & Governance Working Group panel @ #AIImpactSummit examines accountability, procurement and safety when the state is both regulator and user. @gaiamarcus.bsky.social @ruchowdh.bsky.social @futureoflife.org impact.indiaai.gov.in
For people who care about that sort of thing, the membership of the National Data Library Expert Advisory Group has been published today - including me www.gov.uk/government/g...
You were very good and balanced, I thought
Listening to BBC Today Programme - great to hear thoughtful discussion from parents on the consequences & pitfalls of a social media ban - & why we should be worried about adults' screen time, too (or maybe more). Very live to the question of where young people are supposed to go now, with the reduction of third spaces for them
if you're passionate about AI accountability research and enjoy working in a vibrant lab with a multi-disciplinary team, but aren't interested in doing traditional academic work, this position might be for you
I can choose my bank, I can choose my online supermarket, I can pick where I buy books and clothes and watch TV. I cannot choose whether or not to interact with the state; it is not the same as going shopping.
"One of the most interventionist approaches to technology governance in the United States in a generation has cloaked itself in the language of deregulation."
The Trump administration is engaged in norm destruction - breaking expectations about transparent governance and public oversight while installing new assumptions about how technological development should be directed. What it has advanced is not the absence of AI regulation but its rearrangement, often by caprice: intensive state intervention operating through industrial policy, trade restrictions, immigration controls, equity stakes in private firms (selected by the state), the redirection of research funding, and the strategic preemption of state authority. Many of these actions face legal challenge, and some may not survive judicial review. But the pattern itself - the systematic preference for executive discretion over deliberative process - reveals an approach to governance that will shape AI policy regardless of how individual cases are decided. This is not deregulation. Not in the least. It is hyper-regulation by other means.
In a new commentary in Science that I think I'll be referencing a lot on Tech Policy Press, Alondra Nelson (@alondra.bsky.social) says that while the Trump administration's approach to AI is widely understood as "deregulation," when you zoom out, that's not really what's going on. www.science.org/doi/10.1126/...
About the PhD: Audits and evaluation of AI systems - and the broader context that AI systems operate in - have become central to conceptualising, quantifying, measuring and understanding the operations, failures, limitations, underlying assumptions, and downstream societal implications of AI systems. Existing AI audit and evaluation efforts are fractured, done in a siloed and ad-hoc manner, with little deliberation and reflection around conceptual rigour and methodological validity.

This PhD is for a candidate who is passionate about exploring what conceptually cogent, methodologically sound, and well-founded AI evaluation and safety research might look like. This requires grappling with questions such as: What does it mean to represent 'ground truth' in proxies, synthetic data, or computational simulation? How do we reliably measure abstract and complex phenomena? What are the epistemological or methodological implications of the quantification and measurement approaches we choose to employ? In particular, what underlying presuppositions, values, or perspectives do they entail? How do we ensure the lived experiences of impacted communities play a critical role in the development and justification of measurement metrics and proxies?

Through exploration of these questions, the candidate is expected to engage with core concepts in the philosophy of science, history of science, Black feminist epistemologies, and similar schools of thought to develop an in-depth understanding of existing practices, with the aim of applying it to advance shared standards and best practice in AI evaluation. The candidate is expected to integrate empirical (for example, analysis or evaluation of existing benchmarks) or practical (for example, executing evaluations of AI systems) components into the overall work.
are you displeased with today's AI safety evaluation landscape and curious about what greater conceptual clarity, methodological soundness, and rigour in AI evaluation could look like? if so, consider coming to Dublin to pursue a PhD with me
apply here: aial.ie/hiring/phd-a...
pls repost
...safeguards, to the decision not to have regulation that covers or partially covers entirely predictable harms.
I'd agree, but I'd say this is always the case. All technologies are the result of a series of decisions a series of people have made. In this case everything from the data the models were trained on, to the capabilities that were prioritised, to (presumably?) fine-tuning, to releasing a tool without...
I'd file having appropriate regulation and governance under "our ability to manage the risks" - regulation is essentially one of the tools for ensuring that those able to manage risks are held to do so. But 100% agree these aren't risks that can't be managed, they just aren't being.
Impossible on which axes? As in technically or politically or both?
This is a pivotal opportunity for the UK government to distinguish itself as a leader in effective AI governance, and build a regulatory system that prevents harms before they happen.
Learn more about our polling on AI regulation here: Great (public) expectations | share.google/WlX20c8lmYRD...
[Image: stat card - 89% of the UK public say it is important to regulate AI independently]
📢 This isn't an unpopular idea: nearly 9 in 10 people in the UK want independent AI regulation. Yet the current oversight of AI falls far behind that of other sectors (like aviation, pharmaceuticals and financial services), with no clear plans for improvement.
[Image: Ada graph of sector regulation showing how absent AI regulation is compared to other sectors. Unlike aviation, financial services, pharma and food safety, foundation models/general-purpose AI lack real coverage beyond voluntary standards across: proactive risk monitoring, safety standards, independent standards, market entry authorisation, post-market monitoring, independent regulator, enforcement powers, accountability measures, transparency/reporting requirements, and routes for redress]
We cannot stay ahead of these harms without robust AI regulation. Currently, we are chasing after individual tragedies and scandals, attempting to plug the gaps with existing laws and regulation.
This isn't enough: we need to manage harms at the source, not just treat the symptoms.
Are these the AI futures you were hoping for?!
At Ada towers we've been doing a lot of reflecting on the recent Grok scandal.
It shows what happens when AI capabilities outpace our ability to manage their risks, & when impacts on people and society aren't front of mind for those developing tech.
🧵
Mind the gap?! The public has (great!) expectations... and doesn't think AI should be seen as being A-Exceptional.
Our nationally representative polling shows a growing divide between public expectations and discomfort with the status quo, and government's lack of action
lnkd.in/e638XGXU
🧵
If you're new here and happen to see this, here's a Critical Tech starter pack, full of interesting folk to follow go.bsky.app/VFxSES6 And if you have a small account (<10k followers) and would like to be added for visibility, let me know! (Although response times may be slow)
In our last blog for 2025, @gaiamarcus.bsky.social reflects on whether the 'AI train' is on the right track: www.adalovelaceinstitute.org/blog/mind-th...
I was unbelievably lucky to have thoughtful, respectful and skilled staff who talked me through options, understood my concerns and personal risk factors, and spoke to me of numbers, evidence and different treatment options. We need public discussion to be similarly nuanced.
...was a far better option than being given 70% odds of further intervention (likely: forceps/ventouse and/or an emergency C-section) if I proceeded to 'full' induction
Anecdata, as I can't find a reference - but via someone who worked for the UK's Evaluation Taskforce, so likely true - my understanding is that only 1 in 10 first births in the UK are 'without intervention'. In my case, a semi-elective C-section after three days of failed induction...
It is incredibly infuriating to hear caesarean birth being positioned as the other option to a 'straightforward' birth. Very, very few people are having the latter, especially for first births.
"The current lack of oversight departs from established norms: regulatory mechanisms that are standard in other high-impact domains remain largely absent in AI governance"
New study on public attitudes to AI regulation, from the Ada Lovelace Institute: www.adalovelaceinstitute.org/policy-brief...
AI chatbots are already causing real harm, yet UK law offers almost no meaningful protection. New analysis from the Ada Lovelace Institute shows existing regulations don't cover the risks posed by Advanced AI Assistants, writes Julia Smakman.