What does it mean for visual tokens to be "interpretable" to an LLM? And how do we measure it?
These, and many more pressing questions, are addressed!
Introducing LatentLens -- a new, more faithful tool for interpretability! Honoured to have collaborated with
@bennokrojer.bsky.social on this!
11.02.2026 17:11
Check out this amazing work by @karstanczak.bsky.social on rethinking LLM alignment through frameworks from multiple disciplines!
04.03.2025 20:17
Check out the new MMTEB benchmark if you are looking for an extensive, reproducible, and open-source evaluation of text embedders!
20.02.2025 15:44
#Repl4NLP will be co-located with NAACL this year in Albuquerque, New Mexico!
24.12.2024 17:02
Excited to be at #NeurIPS2024 this week. Happy to meet up and chat about retrievers, RAG, embedders, etc., or anything LLM-related!
10.12.2024 18:06
Would love to join! Thanks!
29.11.2024 18:12
(2/2) For me, the most important contribution of the work is ScholarQABench, an expert-curated benchmark for scientific literature surveys.
I'll be using OpenScholar for the next few weeks, I hope to find some good papers!
29.11.2024 16:59
(1/2) Jumping back into this! Read OpenScholar by @akariasai.bsky.social et al.
I am quite excited by the abilities of LLMs to assist in scientific discovery and literature review.
29.11.2024 16:59
Restarting an old routine, "Daily Dose of Good Papers", together with @vaibhavadlakha.bsky.social
Sharing my notes and thoughts here 🧵
23.11.2024 00:04
Honoured to be on the list! https://t.co/15CucCbxOu
20.11.2024 17:55
Join us and be part of an amazing research community! Feel free to reach out if you want to know more about Mila or the application process. https://t.co/Z3QT7hFAS7
15.10.2024 15:45
Completely agree, super well organised and executed! https://t.co/wGkts8EGAb
09.10.2024 20:46
Excited to welcome @COLM_conf to the city of best bagels! 🥯 Looking forward to it! https://t.co/wUxyrDr3x6
09.10.2024 20:45
A little teaser for LLM2Vec @COLM_conf!
Stop by the Tuesday morning poster session to learn how we officiated the marriage of BERTs and Llamas! 🦙 https://t.co/E3HB1mwVvv
05.10.2024 03:59
RIP freedom of speech! https://t.co/PXMS9xMnvH
28.08.2024 18:07
LLMs are the new text encoders! https://t.co/4FZ2LXCPSd
28.08.2024 14:45
Amazing talk by @PontiEdoardo. It is interesting how many different ways exist to make LLMs more efficient! https://t.co/2f2L8zLiH3
15.08.2024 09:50
First-ever arena for embedding models!
Excited to see how this will change evaluation in this space! https://t.co/H4FoMJrQaA
30.07.2024 16:40
Looking for an emergency reviewer for EMNLP / ARR familiar with RAG and language models. Please reach out if you can review a paper in the next couple of days.
22.07.2024 07:53
Great to see LLM2Vec being used for multilingual machine translation! I believe LLM2Vec will serve as the backbone of many more applications in the future! https://t.co/G18aqJ2xuv
19.06.2024 15:32
However, this could mean we are past the point where MTEB serves as a useful signal. Improving beyond the numbers we see today (by training on synthetic data) carries the risk of optimizing for the benchmark rather than building general-purpose embedding models. 5/N
30.04.2024 20:30
Interestingly, Meta-Llama-3-8B only slightly outperforms Mistral-7B, the previous best model, when combined with LLM2Vec. We might have reached a point where better base models are not sufficient to make substantial improvements on MTEB. 3/N
30.04.2024 20:30
In the supervised setting, applying LLM2Vec to Meta-Llama-3-8B leads to a new state-of-the-art performance (65.01) on MTEB among models trained on publicly available data only. 2/N https://t.co/UJoOTJ4L5r
30.04.2024 20:30
Exciting discovery! Triggers DON'T transfer universally. Check out the paper for detailed experiments and analysis. https://t.co/Op7gGWBEdb
25.04.2024 14:54
Applying LLM2Vec costs about the same as two cappuccinos! https://t.co/O6iFXAJgoB
22.04.2024 14:21
Very nice and intuitive explanation of our work LLM2Vec by @IntuitMachine!
Using causal LLMs for representation tasks without any architecture modifications is like driving a sports car in reverse.
All resources available at our project page - https://t.co/hwAiv2yrPT https://t.co/RfBNydFW9y
12.04.2024 15:31
Great summary of our recent LLM2Vec paper! Thanks @ADarmouni!
All resources available at our project page - https://t.co/hwAiv2yrPT https://t.co/bY4DoP5ms1
12.04.2024 15:23
This is going to be my new way of bookmarking papers! https://t.co/Dbn5juFBD9
11.04.2024 02:48
Hugging Face paper page by @_akhaliq - https://t.co/MRuPwtYCsZ
10.04.2024 03:18
This work was done with wonderful collaborators - @ParishadBehnam @mariusmosbach @DBahdanau @NicolasChapados and @sivareddyg 10/N
10.04.2024 00:20