Thanks for sharing! I searched for "language industrialization" on Google before, but found nothing 😂
I wrote a short piece on “language industrialization”: hal.science/view/index/d... I came up with the idea of “language industrialization” when I was writing my research proposal last year, but I’m curious why so few people have discussed it under this term or similar ones before...
This reminds me of a talk www.ens.psl.eu/agenda/works... More specifically gauthierroussilhe.com
It's a bit off topic, but knowing a little Latin and Greek sometimes helps when looking for a good acronym for a model or a method 😂
Many thanks again to my collaborators! Looking forward to meeting more people at upcoming conferences!
(--) Are Large Language Models Chameleons? An Attempt to Simulate Social Surveys arxiv.org/abs/2405.19323 (on another topic, oral presentation at ESRA)
And one new paper I've mentioned to many people:
(5) code_transformed: The Influence of Large Language Models on Code arxiv.org/abs/2506.12014
(3) LLM as a Broken Telephone: Iterative Generation Distorts Information aclanthology.org/2025.acl-lon...
(4) Wikipedia in the Era of LLMs: Evolution and Risks arxiv.org/abs/2503.02879
(1) Human-LLM Coevolution: Evidence from Academic Writing aclanthology.org/2025.finding...
(2) The Impact of Large Language Models in Academia: from Writing to Speaking aclanthology.org/2025.finding...
Another unforgettable summer!
I was glad to present some of my recent work on "the impact of LLMs in society" at ACL (*3+1), IC2S2 (*2), ESRA, Youth in HD, and ICSSI.
Here are the papers and posters:
Previous work:
(1) Is ChatGPT Transforming Academics' Writing Style? arxiv.org/abs/2404.08627
(2) The Impact of Large Language Models in Academia: from Writing to Speaking arxiv.org/abs/2409.13686
[New preprint] Human-LLM Coevolution: Evidence from Academic Writing arxiv.org/abs/2502.09606
Hint 1: To delve or not to delve, that is the intricate question!
Hint 2: A short and easy-to-read paper!
Still the word frequency in arXiv abstracts! 👇👇👇
This translation has some issues 😂 ChatGPT does better. "Fly Over Southern University of Science and Technology: One Continuous Shot Covering 2,970 Mu in 11 Minutes · Double First-Class · SUSTech · Giant Campus · Aerial Campus View" PS: 1 mu ≈ 0.165 acres
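The mu-to-acre conversion mentioned above can be sketched in a couple of lines (assuming the usual definition 1 mu = 666.7 m², which gives roughly 0.1647 acres per mu):

```python
# Rough mu-to-acre conversion: 1 mu = 666.7 m² ≈ 0.1647 acres
MU_TO_ACRES = 0.1647

def mu_to_acres(mu: float) -> float:
    """Convert an area in mu to acres using the approximate factor above."""
    return mu * MU_TO_ACRES

campus_acres = mu_to_acres(2970)  # the 2,970-mu campus from the video title
print(f"{campus_acres:.0f} acres")  # roughly 489 acres
```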
Interesting, I know all four students in the video 😂 It was clearly filmed between 2013 and 2017, and the campus looks quite different now 🧐
You can find a little if you search her name 😬 My guess is that not many Chinese have moved here from X
Probably the only one that mentions her name so far:
bsky.app/profile/yili...
More discussion on X:
x.com/sunjiao123su...
Almost no discussion about Rosalind Picard here. Can I assume that most Chinese AI researchers are still on Twitter/X?
We are more interested in the density of LLM-style text and its relative value (comparisons across categories and over time) than in establishing how many people use LLMs – that can be estimated with questionnaires, and an accurate estimate is not possible from simulated data alone.
And, in an earlier paper 🧐😎 arxiv.org/abs/2404.08627
Interesting work! In addition to what you mentioned, we also noticed that more LLM-style words have started to appear in the presentations of ML conferences: arxiv.org/abs/2409.13686
Maybe it's hard to define the previous distribution? (not PI, just intuition) 👀 "All happy families are alike; each unhappy family is unhappy in its own way."