Israeli attacks on UNIFIL peacekeepers tonight. Irish servicemen were not directly hit, but they aided their counterparts from Ghana. This will further deteriorate Irish-Israeli relations.
@abeba
Founder & PI @aial.ie. Assistant Professor of AI, School of Computer Science & Statistics, @tcddublin.bsky.social AI accountability, AI audits & evaluation, critical data studies. Cognitive scientist by training. Ethiopian in Ireland. She/her
I definitely want to hear more deep thoughts from the guy who claimed Elon Musk's purchase of twitter would "extend the light of consciousness"
I am once again tapping the "support independent, worker-owned, billionaire free news" sign. the most powerful people in the world should not be deciding whether or not we know about this sort of thing. putting Marisa's subscription link below
pervert glasses. so apt
This is what I keep saying when people play the AI productivity card. A lot of productivity tech doesn't actually solve productivity problems, it enables or even exacerbates them.
if the invention of emails was to make us efficient and effective, how come i can do emails till the cows come home yet feel unproductive, inefficient, and irritable
a. piece. of. software. is. not. the. type. of. entity. that. can. gain. consciousness.
the harder the industry invests in pushing narratives of AI as (only) positive, inevitable, and inherently good for society/business, the more any criticism of this narrative becomes "too radical", "unworkable" and "unrealistic"
Hey, surprise! The big foundation model providers are not complying with the @ec.europa.eu AI Act.
My student Théo de Pinho (University of Lille) made this video showing the dynamics of a multi-agent simulation of the evolution of speciation.
youtu.be/GJsBIk-9iyE?...
I think one of the most staggering industry shifts in my 16 years as a tech reporter is that it's no longer a question of "should our product help the government kill and/or surveil people?" but "to what extent?"
www.anthropic.com/news/where-s...
Workshop on genetics, eugenics and scientific racism next week! #Philsci #Philosophy #Ethics #HPS #Sociology #AcademicSky
We will be conducting hybrid sessions, you can find the zoom link at www.imseam.uni-heidelberg.de/en/heinzelma...
I am convinced we are on the verge of the first "AI agent worm". This looks like the closest hint of it, though it isn't quite it yet: an attack on a PR agent that got it to install openclaw with full access on 4k machines grith.ai/blog/clineje...
because the rest are cowards
Reuters Exclusive
"U.S. military investigators believe it is likely that U.S. forces were responsible for an apparent strike on an Iranian girls' school."
"The strike would rank among the worst cases of civilian casualties in decades of U.S. conflicts in the Middle East."
This law to increase rents has had the completely foreseen side effect of also increasing evictions (so the landlords can put up rents).
www.thejournal.ie/your-stories...
- Wilfully lying about people seeking asylum and other migrants
- Framing migrants as criminals or a "burden"
- Inflammatory rhetoric
- Increased deportations
- Deporting children
- Targeting specific nationalities
Etc etc
Labour is absolutely copying Trump
www.theguardian.com/uk-news/2026...
Tech companies have until August 2026 to comply with the EU AI Act's rule requiring public summaries of AI model training data.
New analysis shows that the big AI players (OpenAI, Google, Anthropic, Microsoft) have so far neglected this requirement for GPT, Gemini, Claude, and other models.
The team (Dick Blankvoort, @harshp.com, & Maximilian Gahntz) will be presenting this work at FAccT in June, and you can access the preprint and analysis here: aial.ie/research/gpa...
If you have feedback and/or are interested in collaborating with them, please reach out.
end/
And here for more coverage: www.euractiv.com/news/researc...
16/
This work is timely and is already being covered by media:
www.techpolicy.press/how-big-ai-d...
15/
Even though the obligations entered into effect last August, enforcement won't start until this August. We hope this work helps ensure accountability from Day 1, with the AI Office identifying and taking swift action when training is not transparent and rights are not being respected.
14/
For the big providers, this is a warning that typically used tactics like obfuscation, intentional misinterpretation, or making claims without backing them up will not necessarily work. This quality assessment surfaces such tactics, and we hope enforcement and penalties follow.
13/
For smaller providers, especially open-source and hobby projects, the analysed summaries and their high scores should provide assurance that good compliance standards are being met, and that meeting the obligations is not a significant burden.
12/
A key challenge has been obtaining the public summaries: locations, names, and modalities vary across providers, which makes the documents difficult to locate. The team recommends that the AI Office set up a centralised portal to host these summaries.
11/
The framework helps identify the extent to which these summaries meet the intended goals of the obligation and the template, and provides a quantifiable measure of where improvements are needed or where enforcement action should be taken.
10/
The team also found a sparse document in Microsoft Phi's repo containing sections that matched the summary, but it wasn't labelled as such. Since Microsoft hasn't provided any other document, they scored it anyway. It received a failing grade, as it is nowhere close to complete.
9/
Bielik also scored well, though considerably lower on Usefulness due to missing key information. Still, its good scores in the other sections are commendable. The team reached out to all four providers; only Apertus responded, and it agreed to improve its next summary.
8/
The team scored summaries from Apertus, Bria, and SmolLM, whose documents scored high, were clearly provided in good faith, and are both transparent and useful. Apertus in particular scored the highest. All had minor issues that can be trivially fixed, and none affects the exercise of rights.
7/
To measure transparency and usefulness, the team created a set of 242 questions that comprehensively assess each field of the public summary: how it was provided and whether it gives all required information, with bonus points for optional content, to derive a score and a grade.
6/
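As a rough illustration of the kind of rubric described above, here is a minimal sketch of turning per-question answers into a score and a letter grade. The field names, bonus handling, and grade bands are illustrative assumptions, not the team's actual 242-question framework.

```python
# Hypothetical rubric scoring sketch: required questions contribute to the
# base score, optional questions add bonus points, and the percentage maps
# to a letter grade. All details here are assumptions for illustration.

def score_summary(required, bonus):
    """required: dict of question id -> bool (requirement satisfied).
    bonus: dict of optional question id -> bool (bonus point earned)."""
    base = sum(required.values())
    extra = sum(bonus.values())
    # Bonus points can push the raw percentage above 100; cap it there.
    pct = 100.0 * (base + extra) / len(required)
    return min(pct, 100.0)

def grade(pct):
    # Illustrative grade bands, not the paper's actual thresholds.
    if pct >= 90: return "A"
    if pct >= 75: return "B"
    if pct >= 60: return "C"
    if pct >= 50: return "D"
    return "F"
```

For example, a summary answering 8 of 10 required questions plus one bonus question would score 90% and receive an "A" under these made-up bands.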