To appear @ #ICLR2025! We show that LMs represent semantically equivalent inputs across languages, modalities, etc. similarly. This shared representation space is structured by the LM's dominant language, which is also relevant to recent phenomena where LMs "think" in Chinese in English contexts
22.01.2025 18:10
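A minimal sketch of the kind of layer-wise comparison the post above describes, assuming a HuggingFace causal LM; the model name, sentence pair, and mean-pooling are placeholder assumptions, not the paper's exact setup:

```python
# Compare how similarly a causal LM represents a sentence and its translation,
# layer by layer, via cosine similarity of mean-pooled hidden states.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2-0.5B"  # placeholder: swap in any multilingual causal LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def layer_states(text):
    """Mean-pooled hidden state of `text` at every layer."""
    ids = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    # out.hidden_states: (num_layers + 1) tensors of shape [1, seq_len, hidden]
    return [h[0].mean(dim=0) for h in out.hidden_states]

en = layer_states("The cat sat on the mat.")
zh = layer_states("猫坐在垫子上。")  # the same sentence in Chinese

for i, (a, b) in enumerate(zip(en, zh)):
    sim = torch.cosine_similarity(a, b, dim=0).item()
    print(f"layer {i:2d}: cross-lingual cosine similarity = {sim:.3f}")
```

If the shared-space claim holds, similarity should rise in the middle layers, where both inputs converge on a common representation, and fall again near the output.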
We have released our code at github.com/ZhaofengWu/s.... We hope it will be useful for future studies of how LMs work!
17.12.2024 15:26
Would love to be added! Thank youuu!
05.12.2024 22:09
💡 We find that models "think" in English (or, in general, their dominant language) when processing distinct non-English or even non-language data types 🤯 like texts in other languages, arithmetic expressions, code, visual inputs, & audio inputs‼️ 🧵⬇️ arxiv.org/abs/2411.04986
02.12.2024 18:08
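A hedged sketch of a logit-lens-style probe in the spirit of the result above (not the paper's released code): decode each layer's hidden state at the last position through the output embedding and check whether the top token looks English, even for a non-English prompt. The model name and prompt are placeholder assumptions, and a full logit lens would also apply the model's final normalization, omitted here for brevity:

```python
# Decode intermediate hidden states through the output embedding ("logit lens")
# to see which vocabulary token each layer is closest to producing.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2-0.5B"  # placeholder: swap in any multilingual causal LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

prompt = "Le chat est assis sur le"  # French prompt: does the model route through English?
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

unembed = model.get_output_embeddings().weight  # [vocab_size, hidden]
for layer, h in enumerate(out.hidden_states):
    logits = h[0, -1] @ unembed.T  # decode the last position's hidden state
    top = tokenizer.decode(logits.argmax().item())
    print(f"layer {layer:2d} -> top token {top!r}")
```

On a dominant-English model, the intermediate layers often surface English tokens here before the final layers commit to the input language.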
Hi Marc! Do you mind adding me to the pack? Thanks!
01.12.2024 04:37
Thank you thank you!!
25.11.2024 04:14