Leonie Bossemeyer

@bossemel

PhD student at The University of Edinburgh

268
Followers
832
Following
7
Posts
01.02.2024
Joined

Latest posts by Leonie Bossemeyer @bossemel

Post image

FGVC's not dead!

The 13th Workshop on Fine-Grained Visual Categorization has been accepted to CVPR 2026, in Denver, Colorado!

CALL FOR PAPERS: sites.google.com/view/fgvc13/

From Ecology to Medical Imaging, join us as we tackle the long tail and the limits of visual discrimination! #CVPR2026 #AI

13.01.2026 18:10 πŸ‘ 20 πŸ” 8 πŸ’¬ 1 πŸ“Œ 3
Post image

We have two open PhD positions at the interface of AI and ecology. Start dates are Sept 2026.

We are looking for candidates with a background in AI/CS, Math, Stats, or Physics who are passionate about solving challenging problems in these domains.

Application deadline is in two weeks.

05.01.2026 14:14 πŸ‘ 19 πŸ” 14 πŸ’¬ 1 πŸ“Œ 0

I will be presenting this work from 11AM-2PM at #NeurIPS2025 in San Diego today! Come by poster #2012 to learn more :)

04.12.2025 17:15 πŸ‘ 2 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0
Preview
CleverBirds: A Multiple-Choice Benchmark for Fine-grained Human Knowledge Tracing Mastering fine-grained visual recognition, essential in many expert domains, can require that specialists undergo years of dedicated training. Modeling the progression of such expertise in humans rema...

Heading to San Diego now for NeurIPS! Send me a message if you want to meet and chat about CV & human learning, or meet me at the poster session Thursday morning where I'll present CleverBirds:
arxiv.org/abs/2511.08512

01.12.2025 10:45 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Post image

Excited to share our paper Representational Difference Explanations (RDX) was accepted to #NeurIPS2025! πŸŽ‰ RDX is a new method for model diffing, designed to isolate πŸ” representational differences. 1/7

19.11.2025 16:49 πŸ‘ 8 πŸ” 4 πŸ’¬ 1 πŸ“Œ 2

More info:
πŸ“„ Paper: arxiv.org/abs/2511.08512
πŸ—‚οΈ Data: huggingface.co/datasets/bos...
πŸ’» Code: github.com/visipedia/cl...
🌐 Project website: cleverbirds-benchmark.github.io

5/5

12.11.2025 15:34 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Post image

Even strong sequence models struggle here: predicting how recognition evolves is genuinely hard.
CleverBirds sets a new challenge for understanding visual learning dynamics.
4/5

12.11.2025 15:34 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Post image Post image

Collected in collaboration with #eBird, CleverBirds spans 10K+ species and 40K learners across six years.

It’s one of the largest datasets for visual expertise, tracking how people build recognition ability over time.

3/5

12.11.2025 15:33 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Two panels. Left panel shows how humans answer the quiz. Right panel shows how quiz answers are stacked per user to create the knowledge tracing task.

In CleverBirds, ML models have to predict human learning: inferring skills from past answers to anticipate future recognition.

2/5

12.11.2025 15:31 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Preview
CleverBirds: A Multiple-Choice Benchmark for Fine-grained Human Knowledge Tracing Mastering fine-grained visual recognition, essential in many expert domains, can require that specialists undergo years of dedicated training. Modeling the progression of such expertise in humans rema...

From medicine to geo-guessing, humans can get incredibly good at solving visual recognition tasks.
But how is this skill learned, and can we model its progression?
We present CleverBirds, accepted to #NeurIPS2025, a large-scale benchmark for visual knowledge tracing.
πŸ“„ arxiv.org/abs/2511.08512
1/5

12.11.2025 15:29 πŸ‘ 7 πŸ” 2 πŸ’¬ 1 πŸ“Œ 4
Post image Post image Post image

Prof. @tokehoye.bsky.social (Aarhus University) and I have an open PhD position (jointly advised) on biodiversity monitoring with camera trap networks. Deadline: 15-Jan-2026

Please help us share this post among students you know with an interest in Machine Learning and Biodiversity! πŸ€–πŸͺ²πŸŒ±

11.11.2025 13:12 πŸ‘ 20 πŸ” 11 πŸ’¬ 1 πŸ“Œ 2
Post image

Excited to share my first work as a PhD student at EdinburghNLP that I will be presenting at EMNLP!

RQ1: Can we achieve scalable oversight across modalities via debate?

Yes! We show that debating VLMs leads to higher-quality model answers on reasoning tasks.

01.11.2025 19:29 πŸ‘ 2 πŸ” 2 πŸ’¬ 1 πŸ“Œ 0

There are now millions of publicly available AI models – which one is right for you?

We introduce CODA (#ICCV2025 Highlight!), a method for *active model selection.* CODA selects the best model for your data with any labeling budget – often as few as 25 labeled examples. 1/

@iccv.bsky.social

13.10.2025 18:00 πŸ‘ 12 πŸ” 6 πŸ’¬ 2 πŸ“Œ 1
Post image

Interested in doing a PhD in machine learning at the University of Edinburgh starting Sept 2026?

My group works on topics in vision, machine learning, and AI for climate.

For more information and details on how to get in touch, please check out my website:
homepages.inf.ed.ac.uk/omacaod

16.10.2025 09:15 πŸ‘ 39 πŸ” 18 πŸ’¬ 2 πŸ“Œ 0
Post image

Congratulations to everyone who got their @neuripsconf.bsky.social papers accepted πŸŽ‰πŸŽ‰πŸŽ‰

At #EurIPS we are looking forward to welcoming presentations of all accepted NeurIPS papers, including a new β€œSalon des RefusΓ©s” track for papers which were rejected due to space constraints!

19.09.2025 09:13 πŸ‘ 50 πŸ” 16 πŸ’¬ 1 πŸ“Œ 5

Reminder that the deadlines for submitting papers to the FGVC workshop at #CVPR2025 are coming up soon.

The scope of the workshop is quite broad, e.g. fine-grained learning, multi-modal learning, human-in-the-loop methods, etc.

More info here:
sites.google.com/view/fgvc12/...

@cvprconference.bsky.social

01.03.2025 09:48 πŸ‘ 7 πŸ” 4 πŸ’¬ 0 πŸ“Œ 0

New working paper with @tobiasbergmann.bsky.social on the deficit-investment trade-off of deficit rules. Comments and feedback are more than welcome! πŸ“©

24.01.2025 15:54 πŸ‘ 7 πŸ” 3 πŸ’¬ 0 πŸ“Œ 0
Post image

❓How can we predict where a species may be found when observations are limited?

✨Introducing Le-SINR: a text-to-range-map model that enables scientists to produce more accurate range maps with fewer observations.

Thread 🧡

09.12.2024 15:11 πŸ‘ 20 πŸ” 8 πŸ’¬ 1 πŸ“Œ 1
Post image

🎯 How can we empower scientific discovery in millions of nature photos?

Introducing INQUIRE: a benchmark testing whether AI vision-language models can help scientists find biodiversity patterns – from disease symptoms to rare behaviors – hidden in vast image collections.

ThreadπŸ‘‡πŸ§΅

06.12.2024 20:28 πŸ‘ 88 πŸ” 33 πŸ’¬ 3 πŸ“Œ 3