
Prof. Nava Tintarev

@navatintarev

(she/her) Full Professor of Explainable AI, Maastricht University, NL. Director of the lab on Trustworthy AI in Media (TAIM). Director of Research at the Department of Advanced Computing Sciences. IPN board member (incoming 2026). navatintarev.com

215 Followers · 236 Following · 95 Posts · Joined 16.12.2024

Latest posts by Prof. Nava Tintarev @navatintarev


We're happy to share that our lab has received a Collaboration Award during the ICAI Day in recognition of our dedicated efforts to organize cross-lab events that foster collaboration and amplify our collective impact.

30.10.2025 16:02 👍 1 🔁 1 💬 0 📌 0

Bulat Khaertdinov is presenting our recommender systems demo at the Dutch-Belgian Information Retrieval Workshop. (Photo credit: Alain Starke)

27.10.2025 11:23 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Two submissions to be presented at MediaEval: "Beyond Similarity: Two-Stage Retrieval for News Image Search" (NewsImages track) and "Early Fusion and Pre-text Task Learning for Video Memorability Prediction" (Memorability track). Led by Bulat Khaertdinov and Aashutosh Ganesh, among others.

27.10.2025 11:23 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Preview
IOS Press Ebooks - ECAI 2025 - 28th European Conference on Artificial Intelligence, 25-30 October 2025, Bologna, Italy – Including 14th Conference on Prestigious Applications of Intelligent Systems (P... The term Artificial Intelligence was first used in 1956 by Professor John McCarthy. Since then, the field of AI has grown enormously, and pervades many aspects of daily life. This publication presents...

The proceedings of ECAI-2025 are online (including my Frontiers in AI position paper):

dx.doi.org/10.3233/FAIA...

24.10.2025 09:14 👍 3 🔁 0 💬 0 📌 0
Preview: Tenure-Track Faculty in Artificial Intelligence and Machine Learning (f/m/d) 2025/2026 | Career

🏹 Job alert: Tenure-Track Faculty in Artificial Intelligence and Machine Learning at @cispa.de

📍 Saarbrücken & St. Ingbert 🇩🇪
📅 Apply by Nov 18th
🔗 https://career.cispa.de/jobs/tenure-track-faculty-in-artificial-intelligence-and-machine-learning-f-m-d-2025-2026-73

22.10.2025 09:37 👍 6 🔁 2 💬 0 📌 0

@xai-at-dacs.bsky.social @umdacs.bsky.social @maastrichtu.bsky.social

17.10.2025 06:48 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Post image Post image Post image

We answered many interesting questions, including legislation of platforms (the DSA), possible commercialization of our research vs. collaboration with companies, whether our research could help with polarization, and whether to train LLMs on smaller, more representative data!

17.10.2025 06:48 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

What an honor to represent Maastricht University and to highlight the impact of research on society. We (with Cedric Waterschoot) enjoyed talking about how scientists could inform how we see information online.

17.10.2025 06:48 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

The report of Dagstuhl Seminar 25142 "Explainability in Focus: Advancing Evaluation through Reusable Experiment Design" is now published as part of the periodical Dagstuhl Reports: drops.dagstuhl.de/entities/doc...

Organized by: Simone Stumpf, Elizabeth Daly and Stefano Teso

13.10.2025 08:26 👍 1 🔁 1 💬 0 📌 0

Looking forward to presenting this position paper in the Frontiers in AI track at ECAI!

Measuring Explanation Quality – a Path Forward

ecai2025.org/frontiers-in...

13.10.2025 06:49 👍 1 🔁 0 💬 0 📌 0

avondwenm.nl/wp-content/u...

13.10.2025 06:48 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

On the 16th of October in Leiden, Cedric Waterschoot and I will attend the "Avond van wetenschap & maatschappij" (Evening of Science and Society).
💡 Provocative thesis: Scientists should play an important role in shaping how people see and interpret information online.

13.10.2025 06:48 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

“It actually doesn't take much to be considered a difficult woman. That's why there are so many of us.” ~ Jane Goodall

03.10.2025 07:49 👍 2 🔁 0 💬 0 📌 0

“You cannot get through a single day without having an impact on the world around you. What you do makes a difference, and you have to decide what kind of difference you want to make.” ~ Jane Goodall

03.10.2025 07:48 👍 0 🔁 0 💬 0 📌 0
Preliminary Call for Papers – ACM UMAP 2026

A preliminary call for papers for #umap2026 is now available on the conference's website. Check it out, mark your calendars, and get to work on those papers. www.um.org/umap2026/cal...
@umapconf.bsky.social (#recsys2025)

24.09.2025 07:44 👍 4 🔁 7 💬 0 📌 0
Recruitment

I'm hiring again! Please share. I'm recruiting a postdoc research fellow in human-centred AI for scalable decision support. Join us to investigate how to balance scalability and human control in medical decision support. Closing date: 4 October (AEST).
uqtmiller.github.io/recruitment/

16.09.2025 04:34 👍 2 🔁 7 💬 1 📌 1

We have two papers by our PhD students accepted: one at SIGIR 2025, “RecGaze: The First Eye Tracking and User Interaction Dataset for Carousel Interfaces” by Jingwei Kang, and one at NAACL 2025, “kNN For Whisper And Its Effect On Bias And Speaker Adaptation” by Maya Nachesa. Please check them out!

27.05.2025 10:00 👍 1 🔁 1 💬 0 📌 0
Prof. Nava Tintarev

A new addition for the summer is a placeholder gallery for visual explanation interfaces so visitors can see what these are and just how varied they can be (not yet platform-proofed).
navatintarev.com

10.09.2025 13:32 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Delayed summer announcement: my new website is up and should be more mobile-friendly than its predecessor.

10.09.2025 13:32 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Ehud Reiter writes more about this in his blog here: ehudreiter.com/2025/06/25/p...

Pre-print here: arxiv.org/abs/2506.18760

10.09.2025 13:31 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

4) Domain shift: The world has changed since the model was built. This includes societal changes (e.g., legalisation of same-sex marriage) and changes in scientific knowledge and interventions.

10.09.2025 13:31 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

2) Domain knowledge shows that the feature does not matter: Scientific evidence shows that the feature does not make a significant difference.
3) Insufficient data: The feature may matter, but the model builders did not have sufficient high-quality training data to reliably model the feature's impact.

10.09.2025 13:31 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

She identified four reasons for explaining why a feature is ignored:
1) Data shows the feature does not matter: The feature is ignored because the data shows that the feature has minimal impact on the model's prediction.

10.09.2025 13:31 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Our joint PhD student Adarsa Sivaprasad is presenting her work at an AI and Healthcare conference: Patient-Centred Explainability in IVF Outcome Prediction. She has been studying what kind of explanations users need from OPIS, which is a tool that predicts the likelihood of success in IVF.

10.09.2025 13:31 πŸ‘ 2 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0
RecSys Challenge 2025

🔹 Demo Track – Bulat Khaertdinov (with Mirela Carmia Popa) showcasing VisualReF: an Interactive Image Search Prototype with Visual Relevance Feedback 🔍🖼️
🔹 The RecSys Challenge 2025 – Francesco Barile as co-organizer of this year's challenge! 🔥 More info here: www.recsyschallenge.com/2025/

10.09.2025 13:28 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

🔹 Doctoral Consortium – Dina Zilbershtein on Fair and Transparent Recommender Systems for Advertisements 💡
🔹 Short Paper Track – Cedric Waterschoot (with Francesco Barile) asking: “Consistent Explainers or Unreliable Narrators? Understanding LLM-generated Group Recommendations” 🤖📚

10.09.2025 13:28 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

🚀 To summarize, Maastricht University and our Explainable Artificial Intelligence theme are heading to ACM RecSys 2025 with a line-up of contributions 🎉
✨ Here's where you can find us:

10.09.2025 13:28 πŸ‘ 4 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0

Many thanks to the colleagues who supplied feedback on early drafts, and to those with whom I discussed these ideas less formally! Pre-print here: navatintarev.com/fai_tintarev....

10.09.2025 13:27 πŸ‘ 1 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0

I conclude by proposing constructive strategies for balancing empirical rigor with practical realities when assessing the quality of explainable AI:
a) systematic reporting of user, task, and context;
b) investment in reproducibility studies; and
c) more meta-analyses of experiments.

10.09.2025 13:27 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

If changing the user, task, or context 'changes' explanation quality by 10%, it may not be meaningful to report a 2-3% performance improvement that does not control for these variables.

10.09.2025 13:27 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0