Assistant professor in Natural Language Processing at the University of Edinburgh and visiting professor at NVIDIA | A Kleene star shines on the hour of our meeting.
A business analyst at heart who enjoys delving into AI, ML, data engineering, data science, data analytics, and modeling. My views are my own.
You can also find me at threads: @sung.kim.mw
Interested in cognition and artificial intelligence. Research Scientist at Google DeepMind. Previously cognitive science at Stanford. Posts are mine.
lampinen.github.io
Researcher in ML/NLP at the University of Edinburgh (faculty at Informatics and EdinburghNLP), Co-Founder/CTO at www.miniml.ai, ELLIS (@ELLIS.eu) Scholar, Generative AI Lab (GAIL, https://gail.ed.ac.uk/) Fellow -- www.neuralnoise.com, he/they
Post-doc at Cornell Tech NYC
Working on the representations of LMs and pretraining methods
https://nathangodey.github.io
Google Chief Scientist, Gemini Lead. Opinions stated here are my own, not those of Google. Gemini, TensorFlow, MapReduce, Bigtable, Spanner, ML things, ...
Postdoc in ML/NLP at the University of Edinburgh.
Interested in Bottlenecks in Neural Networks; Unargmaxable Outputs.
https://grv.unargmaxable.ai/
Multimodal research @huggingface
Post-doc @ VU Amsterdam, prev University of Edinburgh.
Neurosymbolic Machine Learning, Generative Models, commonsense reasoning
https://www.emilevankrieken.com/
Driven by industry progress, inspired by provocative leadership, plus don't mind a good pair of shoes or a great @PennStateFball scoreboard either.
The 2025 Conference on Language Modeling will take place at the Palais des Congrès in Montreal, Canada from October 7-10, 2025
PhD student @CambridgeLTL; Previously @DLAB @EPFL; Interested in NLP and CSS. Apple Scholar, Gates Scholar.
PhD student @ Language Technology Lab, University of Cambridge. Making GPUs go brrrr
NLP PhD Student @ University of Cambridge
research scientist @deepmind. language & multi-agent rl & interpretability. phd @BrownUniversity '22 under ellie pavlick (she/her)
https://roma-patel.github.io
Author of Interpretable Machine Learning and other books
Newsletter: https://mindfulmodeler.substack.com/
Website: https://christophmolnar.com/
Robustness, Data & Annotations, Evaluation & Interpretability in LLMs
http://mimansajaiswal.github.io/
Enjoy not enjoying ideals | Interpretability of modular convnets applied to 👁️ and 🛰️🐝 | she/her 🦒💕
variint.github.io
NLP assistant prof at KU Leuven, PI @lagom-nlp.bsky.social. I like syntax more than most people. Also multilingual NLP, interpretability, mountains and beer. (She/her)
Assistant Professor in NLP (Fairness, Interpretability and lately interested in Political Science) at the University of Copenhagen ✨
Before: PostDoc in NLP at Uni of CPH, PhD student in ML at TU Berlin
INSERM group leader @ Neuromodulation Institute and NeuroSpin (Paris) in computational neuroscience.
How and why are computations enabling cognition distributed across the brain?
Expect neuroscience and ML content.
jbarbosa.org
Full of childlike wonder. Building friendly robots. UT Austin PhD student, MIT ‘20.
Associate professor at IT University of Copenhagen: NLP, language models, interpretability, AI & society. Co-editor-in-chief of ACL Rolling Review. #NLProc #NLP
Postdoc at Linköping University🇸🇪. Doing NLP, particularly explainability, language adaptation, modular LLMs. I'm also into🌋🏕️🚴.
AI for storytelling, games, explainability, safety, ethics. Professor at Georgia Tech. Director of ML Center at GT. Time travel expert. Geek. Dad. he/him
Professor of Statistical Machine Learning at the University of Adelaide.
https://sejdino.github.io/
Principal Researcher @ CENTAI.eu | Leading the Responsible AI Team. Building Responsible AI through Explainable AI, Fairness, and Transparency. Researching Graph Machine Learning, Data Science, and Complex Systems to understand collective human behavior.
Research in NLP (mostly LM interpretability & explainability).
Assistant prof at UMD CS + CLIP.
Previously @ai2.bsky.social @uwnlp.bsky.social
Views my own.
sarahwie.github.io
Linguist in AI & CogSci 🧠👩💻🤖 PhD student @ ILLC, University of Amsterdam
🌐 https://mdhk.net/
🐘 https://scholar.social/@mdhk
🐦 https://twitter.com/mariannedhk
Tenure-track faculty at the Max Planck Institute for Software Systems
Previously postdoc at UW and AI2, working on Natural Language Processing
Recruiting PhD students!
🌐 https://lasharavichander.github.io/
Postdoc AI Researcher (NLP) @ ITU Copenhagen
🧭 https://mxij.me
Comm tech & social media research professor by day, symphony violinist by night, outside as much as possible otherwise. German/American Pacific Northwestern New Englander, #firstgen academic, she/her, 🏳️🌈
https://anne-oeldorf-hirsch.uconn.edu
Machine Learner by day, 🦮 Statistician at ❤️
In search of statistical intuition for modern ML & simple explanations for complex things👀
Interested in the mysteries of modern ML, causality & all of stats. Opinions my own.
https://aliciacurth.github.io
Technology specialist at the EU AI Office / AI Safety / Prev: University of Amsterdam, EleutherAI, BigScience
Thoughts & opinions are my own and do not necessarily represent my employer.
Assistant Professor at PoliTo 🇮🇹 |
Former Visiting scholar at UCSC 🇺🇸 |
she/her | TrustworthyAI, XAI, Fairness in AI
https://elianap.github.io/
Organic machine turning tea into theorems ☕️
AI @ Microsoft Research ➡️ Goal: Teach models (and humans) to reason better
Let’s connect re: AI for social good, graphs & network dynamics, discrete math, logic 🧩, 🥾,🎨
Organizing for democracy.🗽
www.rlaw.me
Junior Professor CNRS (previously EPFL, TU Darmstadt) -- AI Interpretability, causal machine learning, and NLP. Currently visiting @NYU
https://peyrardm.github.io
PhD student in Computer Science @UCSD. Studying interpretable AI and RL to improve people's decision-making.
PhD Student @ LMU Munich
Munich Center for Machine Learning (MCML)
Research in Interpretable ML / Explainable AI
🎓 PhD student @cvisionfreiburg.bsky.social @UniFreiburg
💡 interested in mechanistic interpretability, robustness, AutoML & ML for climate science
https://simonschrodi.github.io/
PostDoc @ Uni Tübingen
explainable AI, causality
gunnarkoenig.com
MIT PhD candidate in the VIS group working on interpretability and human-AI alignment
Research scientist at Google DeepMind.
Intersection of cognitive science and AI. Reinforcement learning, decision making, structure learning, abstraction, cognitive modeling, interpretability.
Ph.D. in NLP Interpretability from Mila. Previously: independent researcher, freelancer in ML, and Node.js core developer.
PhD student in Interpretable Machine Learning at @tuberlin.bsky.social & @bifold.berlin
https://web.ml.tu-berlin.de/author/laura-kopf/
Research Fellow @ Stanford Intelligent Systems Laboratory and Hoover Institution at Stanford University | Focusing on interpretable, safe, and ethical AI/LLM decision-making. Ph.D. from TUM.
Computer Science PhD Student @ Stanford | Geopolitics & Technology Fellow @ Harvard Kennedy School/Belfer | Vice Chair EU AI Code of Practice | Views are my own
Retired UNESCO Dir for Digital Inclusion, Policies & Transformation. Chair, UN University, eGov Institute.
UNESCO Women in STEM Committee
Some pottery and cyanotyping
Profile picture is of my face and torso
Banner is a picture I took of a light garden
Seeking superhuman explanations.
Senior researcher at Microsoft Research, PhD from UC Berkeley, https://csinva.io/
The Milan Natural Language Processing Group #NLProc #AI
milanlproc.github.io
PhD student in NLP at Cambridge | ELLIS PhD student
https://lucasresck.github.io/
#NLP / #NLProc , #dataScience, #AI / #ArtificialIntelligence, #linguistics (#syntax, #semantics, …), occasional #parenting, #gardening, & what not. PhD. Adjunct prof once in a full red moon. Industry / technical mentor. Not my opinion, never my employer’s
PhD student at Language Technology Lab, University of Cambridge
Working on fully open-source LLMs and training data. We believe in community-owned AI.
https://www.llm360.ai
I make sure that OpenAI et al. aren't the only people who are able to study large scale AI systems.
LM/NLP/ML researcher ¯\_(ツ)_/¯
yoavartzi.com / associate professor @ Cornell CS + Cornell Tech campus @ NYC / nlp.cornell.edu / associate faculty director @ arXiv.org / researcher @ ASAPP / starting @colmweb.org / building RecNet.io
I built a C library that lets you compile 12kb static binaries that run natively on Linux, Mac, Windows, FreeBSD, OpenBSD, NetBSD and BIOS using just GCC/Clang.
AI @ OpenAI, Tesla, Stanford
Working on AI and access to knowledge at Harvard. Executive Director of the Institutional Data Initiative; Chief Technologist of the Berkman Klein Center.
Llama Farmer
Ex CLO Hugging Face, Xoogler
Open, transparent AI for real world impact. Built for developers, creators, and teams shaping what’s next.
Also an architect, GIS enthusiast, sailor.
🥇 LLMs together (co-created model merging, BabyLM, textArena.ai)
🥈 Spreading science over hype in #ML & #NLP
Proud shareLM💬 Donor
@IBMResearch & @MIT_CSAIL
Cofounded and lead PyTorch at Meta. Also dabble in robotics at NYU.
AI is delicious when it is accessible and open-source.
http://soumith.ch
I lead Cohere For AI. Formerly Research, Google Brain. ML Efficiency, LLMs, @trustworthy_ml.
Professor, Programmer in NYC.
Cornell, Hugging Face 🤗
The AI community building the future!
An LLN - large language Nathan - (RL, RLHF, society, robotics), athlete, yogi, chef
Writes http://interconnects.ai
At Ai2 via HuggingFace, Berkeley, and normal places
I like tokens! Lead for OLMo data at @ai2.bsky.social (Dolma 🍇) w @kylelo.bsky.social. Open source is fun 🤖☕️🍕🏳️🌈 Opinions are sampled from my own stochastic parrot
more at https://soldaini.net
https://Answer.AI & https://fast.ai founding CEO; previous: hon professor @ UQ; leader of masks4all; founding CEO Enlitic; founding president Kaggle; various other stuff…
AI prof at Mila (HEC) trying to make the future more cooperative and cool 😎🌍️
Deep learning, real-world generalization, responsible AI, safety, risk, climate, ecology, artscience, opensource, anticolonial AI
they/she
teganmaharaj.neocities.org
Science of language models @uwnlp.bsky.social and @ai2.bsky.social with @PangWeiKoh and @nlpnoah.bsky.social. https://ianmagnusson.github.io
proud Mediterranean 🧿 open-sourceress at hugging face 🤗 multimodality, zero-shot vision, vision language models, transformers
I make open source projects related to GenAI
https://github.com/Mihaiii
Building:
Productivity tools for Claude-Code & other CLI agents:
https://github.com/pchalasani/claude-code-tools
Langroid - Multi-Agent LLM framework: https://github.com/langroid/langroid
IIT CS, CMU/PhD/ML.
Ex- ASU, Los Alamos, Goldman Sachs, Yahoo
PhD candidate at EPFL doing research in #NLProc
👩🏻💻 https://agromanou.github.io/
Hacker & entrepreneur. Founder helix.ml, private GenAI stack, getting business value out of local open source LLMs
#NLProc PhD Student at EPFL
Master student at ENS Paris-Saclay / aspiring AI safety researcher / improviser
Prev research intern @ EPFL w/ wendlerc.bsky.social and Robert West
MATS Winter 7.0 Scholar w/ neelnanda.bsky.social
https://butanium.github.io
Postdoc at Northeastern and incoming Asst. Prof. at Boston U. Working on NLP, interpretability, causality. Previously: JHU, Meta, AWS
Interpretable Deep Networks. http://baulab.info/ @davidbau
https://mega002.github.io
Gemini Post-Training ⚫️ Research Scientist at Google DeepMind ⚫️ PhD from ETH Zurich
AI Safety Research // Software Engineering
Postdoc @ Northeastern, @ndif-team.bsky.social w/ @davidbau.bsky.social. Interpretability ∩ HCI ∩ #NLProc. Built @inseq.org. Prev: PhD @gronlp.bsky.social, ML @awscloud.bsky.social
gsarti.com
Waiting on a robot body. All opinions are universal and held by both employers and family. ML/NLP professor.
nsaphra.net
Machine learning haruspex
NLP PhD student at Imperial College London and Apple AI/ML Scholar.
Machine learning PhD student @ Blei Lab in Columbia University
Working in mechanistic interpretability, nlp, causal inference, and probabilistic modeling!
Previously at Meta for ~3 years on the Bayesian Modeling & Generative AI teams.
🔗 www.sweta.dev
Machine Learning PhD Student
@ Blei Lab & Columbia University.
Working on probabilistic ML | uncertainty quantification | LLM interpretability.
Excited about everything ML, AI and engineering!
PhD student at Vector Institute / University of Toronto. Building tools to study neural nets and find out what they know. He/him.
www.danieldjohnson.com