
Kris Shrishak

@krisshrishak

Enforce Senior Fellow at @iccl.bsky.social | Tech-policy, algorithmic decision making, privacy, cryptography, Internet security, human rights | KrisShrishak@eupolicy.social

864 Followers · 22 Following · 169 Posts · Joined 08.02.2024

Latest posts by Kris Shrishak @krisshrishak

Thanks to the support of @sarahaapalainen.bsky.social, Delfine Gaillard and Ayça Dibekoğlu from @coe.int and all the Equality bodies involved in the project. A special thanks to Louise Hooper, Nele Roekens and Milla Vidina for feedback that improved this report.

16.02.2026 16:00 👍 2 🔁 0 💬 1 📌 0
European policy guidelines on AI and algorithm-driven discrimination for equality bodies and other national human rights structures As artificial intelligence and automated decision-making systems become embedded in public administration and key private sectors, these guidelines empower equality bodies to identify, prevent and add...

Last month the @coe.int published a report written by @soizicpenicaud.bsky.social and me for equality bodies and other national human rights structures in Europe. It provides policy guidelines on AI and algorithm-driven discrimination.

edoc.coe.int/en/artificia...

16.02.2026 16:00 👍 2 🔁 0 💬 1 📌 0

@shamimmalekmian.bsky.social @ihrec.bsky.social

16.02.2026 15:35 👍 1 🔁 0 💬 0 📌 0

Finally, dug my own live example (from 12 June 2025) out of the screenshot pile. Unfortunately I didn't capture the question - it would've been something about when exactly I was supposed to cancel my TP status when switching over to a work permit.

03.02.2026 14:59 👍 2 🔁 2 💬 3 📌 0

I would think of this as "algorithm bias", which encompasses the various algorithmic elements, including the training process, that contribute to biased algorithmic systems.

A screenshot from a report I did for EDPB (data protection regulators in the EU) www.edpb.europa.eu/our-work-too...

28.12.2025 17:26 👍 2 🔁 2 💬 0 📌 0

I think it's also important to recognize that LLM "summaries" are not epistemically grounded in the ideas of the source material. When they're correct, it's more because *other* writing (in the training corpus) contains similar language. That makes it especially bad when applied to novel results.

17.12.2025 06:18 👍 36 🔁 8 💬 3 📌 1

Ethical AI is an oxymoron, like automated science

16.12.2025 22:10 👍 448 🔁 107 💬 16 📌 4

@ageaction.bsky.social @disabilityfed.bsky.social @claredotc0m.bsky.social @ocoireland.bsky.social

17.12.2025 08:16 👍 1 🔁 0 💬 0 📌 0
Ireland Unprepared for AI Act Implementation None of the nine bodies in Ireland which will receive additional powers through the EU’s Artificial Intelligence (AI) Act to protect fundamental rights have received additional resources from the Iris...

6. "Additional funding and resources to nine agencies responsible for protecting human rights under the EU AI Act."

@ihrec.bsky.social

www.iccl.ie/news/ireland...

17.12.2025 08:16 👍 3 🔁 0 💬 1 📌 0

5. "All public bodies and semi-state entities using AI in public services must publish annual evidence-based reports detailing benefits, disadvantages, and any inequalities identified. These reports should be made publicly accessible to ensure transparency and accountability."

@abeba.bsky.social

17.12.2025 08:16 👍 4 🔁 0 💬 1 📌 0
ICCL to address Oireachtas Joint Committee on AI ICCL is calling for an independent national AI Office and clarity on how and when AI and algorithmic systems are used by public bodies.

3. "Developing a national AI risk register within the national AI office to identify and monitor systemic risks across sectors."

4. "Introducing mandatory algorithmic impact assessments for high-risk AI systems in public services."

@soizicpenicaud.bsky.social

www.iccl.ie/press-releas...

17.12.2025 08:16 👍 4 🔁 0 💬 1 📌 0
Submission to the Irish Government on AI Act Implementation 17 July 2024 ICCL Enforce responded to the consultation from the Irish government on the national implementation of the EU AI Act. By 2 August 2025, the Irish government will appoint the regulators re...

2. Involvement of people affected by AI: "establishing a Citizens’ Assembly on Artificial Intelligence Digitalisation and Technology to facilitate inclusive public dialogue and democratic input on AI policy and ethics."

www.iccl.ie/news/submiss...

17.12.2025 08:16 👍 3 🔁 0 💬 1 📌 0

... We should treat it [#AIAct] as a minimum baseline for national AI regulation, not a maximum standard."

@iccl.bsky.social

17.12.2025 08:16 👍 3 🔁 0 💬 1 📌 0
Joint Committee on Artificial Intelligence publishes First Interim Report with 85 recommendations

Ireland's Joint Committee on Artificial Intelligence published its interim report yesterday. The Committee makes a number of recommendations, which include:

1. "Ireland must not shy away from the EU #AI Act or try to dilute it...

www.oireachtas.ie/en/press-cen...

17.12.2025 08:16 👍 18 🔁 11 💬 1 📌 0
12 December 2025
Thoughtfully Shaping Our Digital Future
To the parties forming the government in the Senate and House of Representatives, as well as the outgoing administration,

We are writing to you in recognition of your crucial responsibility for shaping current and future AI policy, overseeing digitization, and upholding public values. We are a coalition of scientists, experts, and representatives of civil society organizations. We believe it is essential to address these matters together.

This letter has two objectives:

a) Provide context for current plans, including the AI Delta Plan, the AIC4NL position paper, and the Invest-NL AI Deep Dive. These investment proposals often rely on assumptions that lack scientific evidence and do not fully reflect public values.

b) To offer a constructive, well-substantiated alternative approach to digital futures based on people, nature, and democracy. We believe a collaborative process should guide decisions on the needs, scope, and nature of investments by bringing together scientists, civil society organizations, and stakeholders.


📝 OPEN LETTER 📝

Are you based in NL 🇳🇱 ? Do you also want government to thoughtfully shape our digital future, with care for people and nature? Please share and sign 🖊️ this letter addressed to parties forming the new Dutch government and outgoing administration.

📝 openletter.earth/zorgvuldig-a...

13.12.2025 19:52 👍 53 🔁 37 💬 2 📌 7

@robin.berjon.com I think you will enjoy this paper if you have not read it yet repository.ubn.ru.nl/bitstream/ha...

17.12.2025 07:25 👍 5 🔁 0 💬 2 📌 0
Finally, connectionism, and cognitive science generally, can rid ourselves of the hidden conflicts of interest inherent in taking industry funding to build and use such models (Forbes & Guest, 2025; Gerdes, 2022; Liesenfeld & Dingemanse, 2024; Liesenfeld et al., 2023). This is possible by requesting that we and our fellow practitioners disclose such conflicts during and at the point of publication. Relatedly, we need to acknowledge that such relationships to industry effectively bend our metatheoretical positions towards un-, or minimally a-, scientific reasoning that we are under obligation to keep in check if not at bay (also see Andrews et al., 2024; Bender et al., 2021; Birhane & Guest, 2021; Forbes & Guest, 2025; Gerdes, 2022; Guest, 2024; Spanton & Guest, 2022). Ultimately, it is up to us, theoreticians and modelers alike, to decide on the fate of our own fields and on the basis on which we create, understand, and reason about and over our models. Connectionism can be perhaps be redeemed, but it requires us to: sacrifice superficial understanding of what role models play and what they constitute; halt the "anything goes" antiscientific dictum of industry funding; and become aware of what follows from our reasoning when we engage mechanistic and/or functional explanations; and if done carelessly, we risk being incoherent or self-undermining. Snatching defeat from the jaws of victory seems to be connectionists' speciality, however the only difference may be that, this time round the stakes are higher both for science specifically and society at large.


Two, the conflicts of interest for modern connectionism practitioners and supporters need to start to be disclosed and dismantled: "halt the “anything goes” antiscientific dictum of industry funding; and become aware of what follows from our reasoning"

For more see: doi.org/10.1037/rev0...

11/n

17.10.2025 13:43 👍 11 🔁 6 💬 1 📌 0

Very disappointing that ACM is using AI summaries that are highly likely to lead readers astray.

We can expect more research papers misrepresenting other papers. We can thank AI summaries for that.

17.12.2025 07:16 👍 2 🔁 1 💬 0 📌 0

Shouldn't it be an opt-in? I don't think it should be the responsibility of authors to opt-out of an AI tool that has been imposed on them and their work.

17.12.2025 07:09 👍 2 🔁 0 💬 0 📌 0

"One of the front lines in the Algorithm Wars is Ireland. Meta, TikTok, YouTube, X and Snapchat all have international bases here. Meta alone ran €85 billion worth of revenue through its Dublin HQ last year... Meta’s corporation tax payment of €367 million."

16.12.2025 15:26 👍 1 🔁 1 💬 0 📌 0

"The EU’s rather timid and belated attempts to limit the damage inflicted by the social media oligarchs are not the only reason for Trumpworld’s determination to take it down. Climate change (and the threat to America’s vast fossil fuel industries from the EU’s green agenda) has to be factored in."

16.12.2025 15:26 👍 1 🔁 0 💬 1 📌 0

"For opium, think algorithms designed to get children hooked on poisonous images delivered straight into their developing brains. And for China then, think the EU now."

16.12.2025 15:26 👍 2 🔁 0 💬 1 📌 0

From the excellent @fotoole.bsky.social
"Now, as then, the world’s largest superpower has fused its interests with those of pusher cartels – for the opium lords of 1839, think the tech bros of 2025."

16.12.2025 15:26 👍 3 🔁 2 💬 1 📌 0

The bottom of the article lists some of the errors. As @wendylyon.bsky.social, who identified these errors, emphasises, these are only a selection of the errors.

bsky.app/profile/wend...

19.11.2025 16:16 👍 1 🔁 1 💬 0 📌 0

The Minister states that "the chatbot was tested extensively...", checking responses to "55 questions out of which 53 were deemed successful".

Wendy Lyon, an immigration & human rights solicitor at Abbey Law, found that many of these 53 "successful" responses were wrong, unhelpful & misleading.

19.11.2025 12:36 👍 130 🔁 71 💬 3 📌 6
Was the Department of Justice using experimental chatbots to give immigration advice? “The public should not be used as guinea pigs, particularly vulnerable groups in a legal process which could be impacted by a chatbot giving an incorrect answer.”

The excellent @shamimmalekmian.bsky.social @dublininquirer.com
whose earlier piece triggered our investigation has also written about this

www.dublininquirer.com/was-the-depa...

19.11.2025 11:13 👍 9 🔁 5 💬 2 📌 0

And the department is not immune to the #AIHype.

One of the documents received through FOI requests makes this claim:

“Copilot doesn’t just connect ChatGPT with Microsoft 365,” but it “turn[s] your words into the most powerful productivity tool on the planet.”

19.11.2025 11:13 👍 3 🔁 1 💬 2 📌 0
Department of Justice chatbots mislead people seeking information Irish Department of Justice internalises the AI hype and takes no responsibility for misleading chatbots.

Our investigation into Irish Department of Justice use of chatbots. The department hides behind disclaimers while deploying misleading chatbots.

@iccl.bsky.social @abeba.bsky.social @johnnyryan.bsky.social

www.iccl.ie/news/irish-d...

19.11.2025 11:13 👍 48 🔁 28 💬 1 📌 9

what a great honour to receive the 25th Irish Tatler Women of the Year Award in the category of Innovation. the optimist in me thinks this marks a turn for recognition of the importance of critical work and accountability in AI

www.businesspost.ie/life-luxury/...

18.11.2025 18:13 👍 181 🔁 22 💬 20 📌 0

I fully endorse this. Political leaders are falling prey to AI hype, and seem to lack any kind of help from reputable sources who could counter the false claims of tech CEOs. Where are their science advisers???

17.11.2025 19:37 👍 62 🔁 27 💬 3 📌 0