@navatintarev
(she/her) Full Professor of Explainable AI, Maastricht University, NL. Director of the lab on Trustworthy AI in Media (TAIM). Director of Research at the Department of Advanced Computing Sciences. IPN board member (incoming 2026). navatintarev.com
We're happy to share that our lab has received a Collaboration Award during the ICAI Day, in recognition of our dedicated efforts to organize cross-lab events that foster collaboration and amplify our collective impact.
Bulat Khaertdinov is presenting our recommender systems demo at the Dutch-Belgian Information Retrieval Workshop. (Photo credit: Alain Starke)
Two submissions will be presented at MediaEval: "Beyond Similarity: Two-Stage Retrieval for News Image Search" (NewsImages track) and "Early Fusion and Pre-text Task Learning for Video Memorability Prediction" (Memorability track). Led by Bulat Khaertdinov and Aashutosh Ganesh, among others.
The proceedings of ECAI-2025 are online (including my Frontiers in AI position paper):
dx.doi.org/10.3233/FAIA...
Job alert: Tenure-Track Faculty in Artificial Intelligence and Machine Learning at @cispa.de
Saarbrücken & St. Ingbert, Germany
Apply by Nov 18th
https://career.cispa.de/jobs/tenure-track-faculty-in-artificial-intelligence-and-machine-learning-f-m-d-2025-2026-73
@xai-at-dacs.bsky.social @umdacs.bsky.social @maastrichtu.bsky.social
We answered many interesting questions, including the legislation of platforms (DSA), possible commercialization of our research vs. collaboration with companies, whether our research could help with polarization, and whether to train LLMs on smaller, more representative data!
What an honor to represent Maastricht University and to highlight the impact of research on society. We (with Cedric Waterschoot) enjoyed talking about how scientists could inform how we see information online.
The report of Dagstuhl Seminar 25142 "Explainability in Focus: Advancing Evaluation through Reusable Experiment Design" is now published as part of the periodical Dagstuhl Reports: drops.dagstuhl.de/entities/doc...
Organized by: Simone Stumpf, Elizabeth Daly and Stefano Teso
Looking forward to presenting this position paper in the Frontiers in AI track at ECAI!
Measuring Explanation Quality – a Path Forward
ecai2025.org/frontiers-in...
On the 16th of October in Leiden, Cedric Waterschoot and I will attend the "Avond van Wetenschap & Maatschappij" (Evening of Science and Society).
*** Why am I seeing this? ***
Provocative thesis: Scientists should play an important role in shaping how people see and interpret information online.
"It actually doesn't take much to be considered a difficult woman. That's why there are so many of us." ~ Jane Goodall
"You cannot get through a single day without having an impact on the world around you. What you do makes a difference, and you have to decide what kind of difference you want to make." ~ Jane Goodall
A preliminary call for papers for #umap2026 is now available on the conference's website. Check it out, mark your calendars, and get to work on those papers. www.um.org/umap2026/cal...
@umapconf.bsky.social (#recsys2025)
I'm hiring again! Please share. I'm recruiting a postdoc research fellow in human-centred AI for scalable decision support. Join us to investigate how to balance scalability and human control in medical decision support. Closing date: 4 October (AEST).
uqtmiller.github.io/recruitment/
Our PhD students have had two papers accepted: one at SIGIR 2025, "RecGaze: The First Eye Tracking and User Interaction Dataset for Carousel Interfaces" by Jingwei Kang, and one at NAACL 2025, "kNN For Whisper And Its Effect On Bias And Speaker Adaptation" by Maya Nachesa. Please check them out!
A new addition for the summer is a placeholder gallery for visual explanation interfaces so visitors can see what these are and just how varied they can be (not yet platform-proofed).
navatintarev.com
Delayed summer announcement: my new website is up and should be more mobile-friendly than its predecessor.
Our joint PhD student Adarsa Sivaprasad is presenting her work at an AI and Healthcare conference: Patient-Centred Explainability in IVF Outcome Prediction. She has been studying what kind of explanations users need from OPIS, a tool that predicts the likelihood of success in IVF.
She identified four reasons for explaining why a feature is ignored:
1) Data shows the feature does not matter: the feature is ignored because the data shows that it has minimal impact on the model's prediction.
2) Domain knowledge shows that the feature does not matter: scientific evidence shows that the feature does not make a significant difference.
3) Insufficient data: the feature may matter, but the model builders did not have sufficient high-quality training data to reliably model the feature's impact.
4) Domain shift: the world has changed since the model was built. This includes societal changes (e.g., the legalisation of same-sex marriage) and changes in scientific knowledge and interventions.
Ehud Reiter writes more about this in his blog here: ehudreiter.com/2025/06/25/p...
Pre-print here: arxiv.org/abs/2506.18760
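As a rough illustration of how these four reason categories could drive patient-facing explanation text in a tool like OPIS, here is a minimal Python sketch; the enum, template wording, and function names are all hypothetical and not taken from the paper:

```python
from enum import Enum, auto

class IgnoredFeatureReason(Enum):
    """Why a model ignores a feature, following the four categories above."""
    DATA_SHOWS_NO_EFFECT = auto()        # 1) data shows minimal impact
    DOMAIN_KNOWLEDGE_NO_EFFECT = auto()  # 2) scientific evidence says it does not matter
    INSUFFICIENT_DATA = auto()           # 3) too little high-quality data to model it
    DOMAIN_SHIFT = auto()                # 4) the world changed since the model was built

# Hypothetical explanation templates, one per reason category.
TEMPLATES = {
    IgnoredFeatureReason.DATA_SHOWS_NO_EFFECT:
        "Our data shows that {feature} has minimal impact on the prediction.",
    IgnoredFeatureReason.DOMAIN_KNOWLEDGE_NO_EFFECT:
        "Scientific evidence shows that {feature} does not make a significant difference.",
    IgnoredFeatureReason.INSUFFICIENT_DATA:
        "{feature} may matter, but we lack enough high-quality data to model it reliably.",
    IgnoredFeatureReason.DOMAIN_SHIFT:
        "The model predates recent changes relevant to {feature}, so it is not used.",
}

def explain_ignored_feature(feature: str, reason: IgnoredFeatureReason) -> str:
    """Render a one-sentence explanation for why a feature is ignored."""
    return TEMPLATES[reason].format(feature=feature)

print(explain_ignored_feature("smoking status", IgnoredFeatureReason.INSUFFICIENT_DATA))
```

The point of dispatching on an explicit reason category, rather than emitting one generic "this feature is not used" message, is that each category warrants a different justification to the user.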
To summarize, Maastricht University and our Explainable Artificial Intelligence theme are heading to ACM RecSys 2025 with a line-up of contributions!
Here's where you can find us:
• Demo Track – Bulat Khaertdinov (with Mirela Carmia Popa) showcasing VisualReF: an Interactive Image Search Prototype with Visual Relevance Feedback
• The RecSys Challenge 2025 – Francesco Barile as co-organizer of this year's challenge! More info here: www.recsyschallenge.com/2025/
• Doctoral Consortium – Dina Zilbershtein on Fair and Transparent Recommender Systems for Advertisements
• Short Paper Track – Cedric Waterschoot (with Francesco Barile) asking: "Consistent Explainers or Unreliable Narrators? Understanding LLM-generated Group Recommendations"
Many thanks to the colleagues who supplied feedback on early drafts, and to others with whom I simply discussed these ideas less formally! Pre-print here: navatintarev.com/fai_tintarev....
I conclude by proposing constructive strategies for balancing empirical rigor with practical realities when assessing the quality of explainable AI:
a) systematic reporting of user, task, and context;
b) an investment in reproducibility studies, and
c) more meta-analyses of experiments.
If changing the user, task, or context "changes" explanation quality by 10%, it may not be meaningful to report a 2-3% performance improvement that does not control for these variables.
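To make that arithmetic concrete, here is a toy simulation (my illustration, not from the paper): a hypothetical method B beats method A by a true 2.5% margin, but uncontrolled user/task/context variation shifts each study's measured quality by around 10%, so individual studies frequently report the wrong winner.

```python
import random

random.seed(0)

TRUE_GAP = 0.025    # assumed true advantage of method B over A (2.5%)
CONTEXT_SD = 0.10   # assumed spread due to user/task/context (10%)
N_STUDIES = 10_000

flips = 0
for _ in range(N_STUDIES):
    # Each study evaluates A and B under different, uncontrolled contexts.
    score_a = 0.60 + random.gauss(0, CONTEXT_SD)
    score_b = 0.60 + TRUE_GAP + random.gauss(0, CONTEXT_SD)
    if score_a > score_b:  # observed ranking contradicts the true ranking
        flips += 1

print(f"Studies where A appears to beat B: {flips / N_STUDIES:.1%}")
# With these assumed numbers, roughly 43% of studies report the wrong winner,
# which is why systematically reporting user, task, and context matters.
```

Systematic reporting (strategy a) lets later reproducibility studies and meta-analyses (strategies b and c) separate the 2-3% method effect from the much larger context effect.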