
Nanne van Noord

@nanne

Assistant Professor of Visual Culture and Multimedia at University of Amsterdam. http://nanne.github.io

299
Followers
164
Following
48
Posts
08.09.2023
Joined

Latest posts by Nanne van Noord @nanne

Interested to find out why there are fewer women artists in Dutch museums? A few days left to apply!

19.02.2026 14:41 👍 0 🔁 0 💬 0 📌 0
Vacancy: Postdoc Quantifying Gender Inequality in Visual Art. Are you passionate about art, gender equality, and data-driven research? Join the HERAtlas project to uncover the "invisible" women of art history. We are looking for a Postdoc to combine data science, psychology, and history to reveal the structural barriers behind gender inequality in the creative industries and translate these insights into public storytelling.

For the HERAtlas project at the University of Amsterdam (Netherlands) we are looking for a Postdoc to combine data science, psychology, and digital history to uncover the "invisible" women of art history!

More info at: werkenbij.uva.nl/en/vacancies...

29.01.2026 13:04 👍 8 🔁 5 💬 0 📌 1

Congrats @mila-oiva.bsky.social! Excited to see all the amazing things you'll be doing at FAU 🤩

29.01.2026 09:58 👍 1 🔁 0 💬 0 📌 0
JUTTERS (YouTube video by Meike)

Two students of our lab are presenting an artwork at NeurIPS, how amazing is that? Really impressed with the project openreview.net/pdf?id=BZjSU..., and the video they made for it!

www.youtube.com/watch?v=L631...

19.11.2025 13:53 👍 3 🔁 0 💬 0 📌 0

Wait, you get summaries of your own papers? That seems like a step up from the "I see you work on <insert topic I've not touched in my life>" emails at least

19.11.2025 08:39 👍 2 🔁 0 💬 0 📌 0

And lastly, if @neuripsconf.bsky.social would choose to reverse the decisions on the papers affected by space constraints, we would be happy and able to accommodate their presentation

19.09.2025 10:01 👍 26 🔁 11 💬 0 📌 0

You're arguing in bad faith, so this will be my last reply.

But yes, if you actually want to learn about multimodality then you shouldn't read about MLLMs.

27.07.2025 20:02 👍 0 🔁 0 💬 1 📌 0

I'm not sure what the point here is, but if you're going to believe Gemini over actual research done by AI researchers there isn't much more to discuss.

If you're willing to actually learn about this then you can start here: arxiv.org/abs/2505.19614, or even here: academic.oup.com/dsh/article/...

27.07.2025 19:14 👍 0 🔁 0 💬 1 📌 0

That's a bit sealion-y, but I'll bite - *artificial* neural networks are a poor analogy.

Those different details also matter a lot; especially because the brain isn't just floating in a jar, it's part of an embodied system.

27.07.2025 19:05 👍 1 🔁 0 💬 1 📌 0

This is where your misunderstanding is happening, as they are not elementary pieces. For the visual tokens a lot of the semantics have already been determined, and hence the interpretations it can arrive at are limited.

The brain analogy really doesn't hold here. NN != brains.

27.07.2025 14:59 👍 0 🔁 0 💬 1 📌 0

It's clearly not; neural nets are a poor analogy for the brain, and clearly don't work the same way.

27.07.2025 14:54 👍 0 🔁 0 💬 1 📌 0

This, plus the (initial) interpretation of the modalities should not be independent - even at the pixel/word-level we may want to interpret differently depending on the other modalities (e.g., sense disambiguation)

Partial Information Decomposition has been used to formalise some of this

27.07.2025 08:48 👍 1 🔁 0 💬 1 📌 0
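[For readers unfamiliar with the formalism mentioned above: a sketch of the standard Williams-Beer Partial Information Decomposition, with T a prediction target and X_1, X_2 the two modalities (e.g. image and text).]

```latex
% PID splits the joint mutual information into four non-negative parts:
\[
  I(T; X_1, X_2)
    = \underbrace{\mathrm{Red}(T; X_1, X_2)}_{\text{redundant}}
    + \underbrace{\mathrm{Unq}(T; X_1)}_{\text{unique to } X_1}
    + \underbrace{\mathrm{Unq}(T; X_2)}_{\text{unique to } X_2}
    + \underbrace{\mathrm{Syn}(T; X_1, X_2)}_{\text{synergistic}}
\]
% The synergy term captures exactly the cross-modal effects (such as
% sense disambiguation) that independent per-modality interpretation
% cannot account for.
```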

No... that's not how any of that works 😵‍💫

27.07.2025 08:13 👍 0 🔁 0 💬 1 📌 0

It means I said 'mix' to explain the process, but I obviously know this involves attention - so the Gemini explanation is not meaningfully different.

Potential limited: if key visual info is missing, then attention won't recover it. So a lot of 'decisions' about the visual input are made before fusion

26.07.2025 23:00 👍 0 🔁 0 💬 1 📌 0

Ah, I see how you and Gemini misunderstood. I was talking about extracting visual tokens, and mix referred to attention.

That doesn't make it meaningfully multimodal; the potential of the visual tokens is still limited by the visual encoder.

Anyway, if I wanted to talk to an LLM I would do that directly

26.07.2025 22:37 👍 1 🔁 0 💬 1 📌 0

Please do explain then how whatever you're referring to is different and actually meaningfully multimodal.

26.07.2025 22:08 👍 0 🔁 0 💬 1 📌 0

*all semantic information* is quite the claim; in our experiments they miss a lot of the semantics from the visual modality

'text space' in that after the image encoder the visual information is fixed, and then mixed with text tokens for seq2text - which is not how multimodality works...

26.07.2025 20:41 👍 1 🔁 1 💬 1 📌 0
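[The pipeline described in this thread can be sketched in a few lines. All shapes, the random "weights", and the single-head attention are hypothetical stand-ins, not any particular model: a frozen image encoder emits visual tokens, a projector maps them into the text embedding space, and attention then "mixes" them with text tokens.]

```python
import numpy as np

rng = np.random.default_rng(0)
d_vis, d_txt = 64, 32      # encoder width vs. text embedding width
n_vis, n_txt = 4, 6        # tokens per modality

visual_tokens = rng.normal(size=(n_vis, d_vis))  # fixed after the encoder
text_tokens = rng.normal(size=(n_txt, d_txt))

W_proj = rng.normal(size=(d_vis, d_txt))         # learned projector (random here)
projected = visual_tokens @ W_proj               # visual info now in "text space"

sequence = np.concatenate([projected, text_tokens], axis=0)

# Single-head self-attention over the fused sequence: the "mix".
# Any visual detail the encoder already discarded cannot be recovered
# by this step, no matter how the attention weights fall.
scores = sequence @ sequence.T / np.sqrt(d_txt)
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)
mixed = weights @ sequence                       # mixed.shape == (10, 32)
```

The point of the sketch: the only place visual and textual information interact is after `projected` is computed, which is why the fusion is "late" and the visual semantics are capped by the encoder.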

Natively is a bit of an exaggeration, as it's mostly just other modalities mapped to text space as input - but this makes their 'understanding' rather shallow

26.07.2025 19:51 👍 3 🔁 1 💬 1 📌 0
Identifying Prompted Artist Names from Generated Images A common and controversial use of text-to-image models is to generate pictures by explicitly naming artists, such as "in the style of Greg Rutkowski". We introduce a benchmark for prompted-artist reco...

This paper on identifying prompted artist names from generated images is such a fun and creative take on data attribution arxiv.org/abs/2507.18633

Wonder if it would do something meaningful for analysing artistic influence for human-made art 🤔

25.07.2025 07:20 👍 5 🔁 1 💬 0 📌 0

This paper is 💯

Generally, I have the impression NLP does better at this than CV - but clearly both fields should push studying culture beyond just looking at national identities

24.07.2025 08:45 👍 5 🔁 0 💬 0 📌 0

If the priority is to dunk on people who know less about AI, instead of being accurate, that could be a conclusion I guess.

18.07.2025 16:09 👍 0 🔁 0 💬 0 📌 0
Visual Geometry Group - Computer Vision group from the University of Oxford

It would be weird to describe this 2012 system, which does search, as an SVM classifier doing search: www.robots.ox.ac.uk/~vgg/publi...

Similarly, I wouldn't describe an LLM that translates a query to a destination for a Waymo as an 'LLM driving a car'

18.07.2025 15:41 👍 0 🔁 0 💬 1 📌 0

I'm not questioning your definition of searching, I'm questioning your use of "LLMs".

I don't think defining an LLM as a transformer-based NN is inaccurate, in which case it isn't doing search by itself, and then it would be fine to argue that it can only hallucinate.

18.07.2025 15:41 👍 0 🔁 0 💬 2 📌 0

That statement mostly seems to apply to hosted commercial systems. It takes more than just downloading an LLM from huggingface to have a system that does this.

Sure, an LLM can be trained to formulate queries and process results, but the system doing the searching is more than 'just' an LLM.

18.07.2025 14:51 👍 1 🔁 0 💬 0 📌 0

Fair, but still meaningful to make the distinction between LLMs and reasoning models, as not all LLMs are reasoning models. Especially if the point is to communicate across silos.

18.07.2025 13:52 👍 0 🔁 0 💬 1 📌 0

Do LLMs do search? Afaik there have been systems built around LLMs that do search, and then send these results back to them (i.e., RAG-like) - but that isn't the same as an LLM doing search.

18.07.2025 13:30 👍 0 🔁 0 💬 1 📌 0
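[The RAG-like division of labour described above can be made concrete with stubs. Both `llm` and `search_index` below are purely illustrative stand-ins, not real APIs: the LLM only drafts a query and consumes results, while the searching happens in a separate component.]

```python
def llm(prompt: str) -> str:
    # stand-in for a language-model call: just echoes the prompt's last line
    return prompt.splitlines()[-1]

def search_index(query: str) -> list[str]:
    # stand-in for an external retrieval system (the part that searches)
    return [f"snippet matching {query!r}"]

def rag_answer(question: str) -> str:
    query = llm("Formulate a search query for:\n" + question)  # LLM drafts query
    snippets = search_index(query)          # retrieval happens outside the LLM
    context = "\n".join(snippets)
    return llm(context + "\n" + question)   # LLM conditions on retrieved text
```

Swap the stubs for a real model and a real index and `rag_answer` becomes a system that searches; the bare `llm` function on its own never does.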

I couldn't find EurIPS registration costs; hopefully they can address this by lowering costs for authors

But yes - this has been absurd; especially for those with visa issues - and I do think for that group this is a (minor) improvement

17.07.2025 08:52 👍 1 🔁 0 💬 0 📌 0

Not my intention to defend the requirement for a full registration, but this has been common practice for a while across multiple conferences.

The main change from the new locations seems primarily to be that those with US visa issues will be able to present somewhere. But it doesn't really change costs

17.07.2025 08:31 👍 1 🔁 0 💬 1 📌 0

This considers registration only, no? One could register for in-person attendance but not go - folks with visa issues have had to do this

17.07.2025 08:13 👍 1 🔁 0 💬 1 📌 0

This distinction is also useful because it makes it harder to avoid responsibility, as it's easy to avoid directly working on surveillance - yet harder to avoid doing CV work that is surveillance-enabling.

Unless your position is that these are the same?

30.06.2025 10:09 👍 0 🔁 0 💬 0 📌 0