
Gaia Marcus

@gaiamarcus

- Director @adalovelaceinst.bsky.social:ensuring data & AI work for ppl & society - Stint in government - led #NationalDataStrategy; roles in Cabinet Office, ONS & MHCLG - Charity roles inc. Samaritans Trustee; staff @ The RSA, Centrepoint, ParkinsonsUK

11,004
Followers
2,715
Following
242
Posts
25.08.2024
Joined

Latest posts by Gaia Marcus @gaiamarcus

This sounds fun!

04.03.2026 12:21 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
When did common sense AI policy become radical? How do you make a technology safe when nobody seems particularly interested in regulating it – and what might happen if we don’t?

🧡 I recently spoke with Taylor Owen @theglobeandmail.com Machines Like Us podcast about an urgent question: How do you make a technology safe when the political will to govern it has evaporated? And what happens if we don't? www.theglobeandmail.com/podcasts/mac...

24.02.2026 13:34 πŸ‘ 43 πŸ” 19 πŸ’¬ 1 πŸ“Œ 0

Modi offers his MANAV (or human) Vision for AI @ #AIImpactSummit

M: moral and ethical AI systems
A: accountable AI governance
N: national AI + data sovereignty
A: accessible and inclusive AI
V: AI should be valid and verifiable

What these principles mean in practice in India remains to be seen.

19.02.2026 05:53 πŸ‘ 11 πŸ” 3 πŸ’¬ 3 πŸ“Œ 0

So excited about our new board appointments: Ed Humpherson, @mmitchell.bsky.social and @geomblog.bsky.social

With expertise ranging across AI research, computer science and public statistics, they are aligned with Ada's values, with a shared focus on the public interest, accountability, fairness and rigour

16.02.2026 11:00 πŸ‘ 5 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Governments regulate AI and deploy it across public uses. This AI Policy & Governance Working Group panel @ #AIImpactSummit examines accountability, procurement and safety when the state is both regulator and user. @gaiamarcus.bsky.social @ruchowdh.bsky.social @futureoflife.org impact.indiaai.gov.in

10.02.2026 02:27 πŸ‘ 18 πŸ” 5 πŸ’¬ 1 πŸ“Œ 0
National Data Library Expert Advisory Group Information about the National Data Library (NDL) Expert Advisory Group including its role and members.

For people who care about that sort of thing, the membership of the National Data Library Expert Advisory Group has been published today - including me www.gov.uk/government/g...

05.02.2026 12:56 πŸ‘ 41 πŸ” 9 πŸ’¬ 3 πŸ“Œ 2

You were very good and balanced, I thought

02.02.2026 08:11 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Listening to the BBC Today Programme - great to hear thoughtful discussion from parents on the consequences & pitfalls of the social media ban - & why we should be worried about adults' screen time, too (or maybe more). Very live to the question of where young people are supposed to go now, with the reduction of third spaces for them

23.01.2026 07:41 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

if you’re passionate about AI accountability research and enjoy working in a vibrant lab with a multi-disciplinary team but not interested in doing traditional academic work, this position might be for you

21.01.2026 16:25 πŸ‘ 45 πŸ” 40 πŸ’¬ 1 πŸ“Œ 0

I can choose my bank, I can choose my online supermarket, I can pick where I buy books and clothes and watch TV. I cannot choose whether or not to interact with the state, it is not the same as going shopping.

17.01.2026 15:51 πŸ‘ 173 πŸ” 28 πŸ’¬ 9 πŸ“Œ 1
The mirage of AI deregulation One of the most interventionist approaches to technology governance in the United States in a generation has cloaked itself in the language of deregulation. In early December 2025, President Donald Tr...

"One of the most interventionist approaches to technology governance in the United States in a generation has cloaked itself in the language of deregulation."

16.01.2026 15:42 πŸ‘ 33 πŸ” 11 πŸ’¬ 1 πŸ“Œ 0
The Trump administration is engaged in norm destructionβ€”breaking expectations about transparent governance and public oversight while installing new assumptions about how technological development should be directed. What it has advanced is not the absence of AI regulation but its rearrangement, often by caprice: intensive state intervention operating through industrial policy, trade restrictions, immigration controls, equity stakes in private firms (selected by the state), the redirection of research funding, and the strategic preemption of state authority. Many of these actions face legal challenge, and some may not survive judicial review. But the pattern itselfβ€”the systematic preference for executive discretion over deliberative processβ€”reveals an approach to governance that will shape AI policy regardless of how individual cases are decided. This is not deregulation. Not in the least. It is hyper-regulation by other means.

In a new commentary in Science that I think I'll be referencing a lot on Tech Policy Press, Alondra Nelson (@alondra.bsky.social) says that while the Trump administration's approach to AI is widely understood as "deregulation," when you zoom out, that's not really what's going on. www.science.org/doi/10.1126/...

16.01.2026 15:41 πŸ‘ 50 πŸ” 29 πŸ’¬ 3 πŸ“Œ 4
About the PhD: 
Audits and evaluation of AI systems β€” and the broader context that AI systems operate in β€” have become central to conceptualising, quantifying, measuring and understanding the operations, failures, limitations, underlying assumptions, and downstream societal implications of AI systems. Existing AI audit and evaluation efforts are fractured, done in a siloed and ad-hoc manner, and with little deliberation and reflection around conceptual rigour and methodological validity.

This PhD is for a candidate who is passionate about exploring what conceptually cogent, methodologically sound, and well-founded AI evaluation and safety research might look like. This requires grappling with questions such as:

    What does it mean to represent β€œground truth” in proxies, synthetic data, or computational simulation?
    How do we reliably measure abstract and complex phenomena?
    What are the epistemological or methodological implications of quantification and measurement approaches we choose to employ? Particularly, what underlying presuppositions, values, or perspectives do they entail?
    How do we ensure the lived experiences of impacted communities play a critical role in the development and justification of measurement metrics and proxies?

Through exploration of these questions, the candidate is expected to engage with core concepts in the philosophy of science, history of science, Black feminist epistemologies, and similar schools of thought to develop an in-depth understanding of existing practices, with the aim of applying it to advance shared standards and best practice in AI evaluation.

The candidate is expected to integrate empirical (for example, through analysis or evaluation of existing benchmarks) or practical (for example, by executing evaluation of AI systems) components into the overall work.

are you displeased with today’s AI safety evaluation landscape and curious about what greater conceptual clarity, methodological soundness, and rigour in AI evaluation could look like? if so, consider coming to Dublin to pursue a PhD with me

apply here: aial.ie/hiring/phd-a...

pls repost

15.01.2026 11:55 πŸ‘ 190 πŸ” 140 πŸ’¬ 6 πŸ“Œ 12

...safeguards, to the decision not to have regulation that covers, or only partially covers, entirely predictable harms.

14.01.2026 19:34 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

I'd agree, but I'd say this is always the case. All technologies are the result of a series of decisions that a series of people have made. In this case, everything from the data the models were trained on, to the capabilities that were prioritised, to (presumably?) fine-tuning, to releasing a tool without ...

14.01.2026 19:34 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

I'd file having appropriate regulation and governance under "our ability to manage the risks" - regulation is essentially one of the tools for ensuring that those able to manage risks are held to do so. But 100% agree these aren't risks that can't be managed, they just aren't being.

14.01.2026 19:30 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Impossible on which axes? As in technically or politically or both?

14.01.2026 17:34 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

πŸ† This is a pivotal opportunity for the UK government to distinguish itself as a leader in effective AI governance, and build a regulatory system that prevents harms before they happen.

πŸ“– Learn more about our polling on AI regulation here: Great (public) expectations | share.google/WlX20c8lmYRD...

14.01.2026 17:33 πŸ‘ 3 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Stat picture card - 89% of the UK public say it is important to regulate AI independently

πŸ“’ This isn’t an unpopular idea: nearly 9 in 10 people in the UK want independent AI regulation. Yet the current oversight of AI falls far behind that of other sectors (like aviation, pharmaceuticals and financial services), with no clear plans for improvement.

14.01.2026 17:30 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Ada graph of sector regulation showing how absent AI regulation is compared to a range of other sectors: aviation, financial services, pharma and food safety. Foundation models/general-purpose AI lack real coverage beyond voluntary standards across: proactive risk monitoring, safety standards, independent standards, market entry authorisation, post-market monitoring, an independent regulator, enforcement powers, accountability measures, transparency/reporting requirements, and routes for redress.

βš–οΈ We cannot stay ahead of these harms without robust AI regulation. Currently, we are chasing after individual tragedies and scandals, attempting to plug the gaps with existing laws and regulation (image).

This isn’t enough: we need to manage harms at the source, and not just manage the symptoms.

14.01.2026 17:28 πŸ‘ 2 πŸ” 3 πŸ’¬ 2 πŸ“Œ 0

πŸ€”Are these the AI futures you were hoping for?!

At Ada towers we've been doing a lot of reflecting on the recent Grok scandal.

It shows what happens when AI capabilities outpace our ability to manage their risks, & when people and societal impacts aren't front of mind for those developing tech.

🧡

14.01.2026 17:22 πŸ‘ 2 πŸ” 3 πŸ’¬ 2 πŸ“Œ 0

πŸš‚πŸš‚Mind the gap?! The public has (great!) expectations...and doesn't think AI should be seen as being A-Exceptional.πŸš‚πŸš‚

Our nationally representative polling shows a growing divide between public expectations and discomfort with the status quo, and government's lack of action

lnkd.in/e638XGXU

🧡

04.12.2025 12:21 πŸ‘ 10 πŸ” 2 πŸ’¬ 1 πŸ“Œ 1

If you're new here and happen to see this, here's a Critical Tech starter pack, full of interesting folk to follow go.bsky.app/VFxSES6 And if you have a small account (<10k followers) and would like to be added for visibility, let me know! (Although response times may be slow)

08.01.2026 12:27 πŸ‘ 48 πŸ” 20 πŸ’¬ 7 πŸ“Œ 3
Mind the gap: reflections on 2025 Is the β€˜AI train’ on the right track?

In our last blog for 2025, @gaiamarcus.bsky.social reflects on whether the 'AI train' is on the right track: www.adalovelaceinstitute.org/blog/mind-th...

18.12.2025 18:10 πŸ‘ 7 πŸ” 7 πŸ’¬ 0 πŸ“Œ 0

I was unbelievably lucky to have thoughtful, respectful and skilled staff who talked me through options, understood my concerns and personal risk factors, and spoke to me of numbers, evidence and different treatment options. We need public discussion to be similarly nuanced.

17.12.2025 08:40 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

...was a far better option than being given 70% odds of further intervention (likely forceps/ventouse and/or an emergency C-section) if I proceeded to 'full' induction

17.12.2025 08:38 πŸ‘ 1 πŸ” 0 πŸ’¬ 2 πŸ“Œ 0

Anecdata, as I can't find a reference - but via someone who worked for the UK's Evaluation Taskforce, so likely true - my understanding is that only 1 in 10 first births in the UK are 'without intervention'. In my case, a semi-elective C-section after three days of failed induction...

17.12.2025 08:36 πŸ‘ 4 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

It is incredibly infuriating to hear caesarean birth being positioned as the other option to a 'straightforward' birth. Very, very few people are having the latter, especially for first births.

17.12.2025 08:32 πŸ‘ 6 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0
Great (public) expectations New polling shows the public expect AI to be governed with far more rigour than current policy delivers

"The current lack of oversight departs from established norms: regulatory mechanisms that are standard in other high-impact domains remain largely absent in AI governance"

New study on public attitudes to AI regulation, from the Ada Lovelace Institute: www.adalovelaceinstitute.org/policy-brief...

05.12.2025 11:34 πŸ‘ 8 πŸ” 3 πŸ’¬ 0 πŸ“Œ 0
UK Laws Do Not Provide Effective Protection From Chatbot Harms | TechPolicy.Press Julia Smakman explores how the UK is falling behind on laws and policies needed to protect people from the growing risks of AI chatbots and advanced assistants.

AI chatbots are already causing real harm, yet UK law offers almost no meaningful protection. New analysis from Ada Lovelace Institute shows existing regulations don’t cover the risks posed by Advanced AI Assistants, writes Julia Smakman.

08.12.2025 19:02 πŸ‘ 11 πŸ” 8 πŸ’¬ 0 πŸ“Œ 0