When shaping your research agenda, your objective is to find the weirdest niche possible that still has the potential to change everything.
🚨 NLP4DH 2026 deadline has been extended to March 13! Submission link here: openreview.net/group?id=NLP...
Writing an HCI paper about an AI-powered system for a venue like UIST 2026 or CHI 2027? Wondering what reviewers expect you to report, and how to approach paper framing and writing? Check out our reporting guidelines: medium.com/p/7c3ae86341...
Without such eval, rushed integration of AI into classrooms may exacerbate existing academic achievement gaps.
See our paper for more (inc. a study where I redrew 300+ images by hand): arxiv.org/abs/2603.00925
@ai2.bsky.social @kylelo.bsky.social
We argue that eval around AI for education should be disaggregated in a manner that pinpoints whether models can discern when a student may need pedagogical support, and whether models equitably serve students across different levels of proficiency.
Question: How many dots did the student include in their array?
For an erroneous student response — Model answer: 12. True answer: The student didn't include an array.
True answer for a non-erroneous student response: The student included 12 dots in their array.
Question: How many squares did the student draw to show the number of cups of red paint?
For an erroneous student response — Model answer: The student drew 9 squares to show the number of cups of red paint. True answer: The student drew 12 squares to represent the cups of red paint.
True answer for a non-erroneous student response: The student drew 9 squares to show the number of cups of red paint.
Models' mistakes often assume the student's math is correct. Typically, models are trained on "high quality" math so that they can hill-climb on GSM8K, MATH, etc. However, dev pipelines that favor correct math are in tension with education, where math errors require extra attention.
A bar chart disaggregating results for four VLMs across different question types. Content description QA consistently drives the gap in VLM performance between student responses that contain errors versus those that do not. In addition, questions related to students' correctness and errors are still the most difficult.
We find that this gap is primarily driven by QA related to content description. In addition, VLMs struggle to identify cases when help is needed; the most challenging QA are those related to assessing studentsโ correctness and errors.
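The disaggregation idea above can be sketched in a few lines: instead of one aggregate score, compute accuracy separately for every (question type, error status) cell. This is a minimal illustration, not the paper's actual evaluation code, and the record fields ("qa_type", "has_error", "correct") are hypothetical names.

```python
# Minimal sketch of disaggregated evaluation: accuracy split by question
# type and by whether the student response contains an error.
# Record schema here is illustrative, not DrawEduMath's actual format.
from collections import defaultdict

def disaggregate(records):
    """Return {(qa_type, has_error): accuracy} over evaluation records."""
    totals = defaultdict(int)
    hits = defaultdict(int)
    for r in records:
        key = (r["qa_type"], r["has_error"])
        totals[key] += 1
        hits[key] += int(r["correct"])
    return {k: hits[k] / totals[k] for k in totals}

# Toy records: the "content" cell with errors scores 0.5,
# while the same question type without errors scores 1.0.
records = [
    {"qa_type": "content", "has_error": True,  "correct": False},
    {"qa_type": "content", "has_error": True,  "correct": True},
    {"qa_type": "content", "has_error": False, "correct": True},
    {"qa_type": "correctness", "has_error": True, "correct": False},
]
scores = disaggregate(records)
```

Reporting the per-cell scores (rather than averaging them away) is exactly what exposes a gap like the one in the bar chart above.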
Title, author list, and two figures from the paper.
Title: The Aftermath of DrawEduMath: Vision Language Models Underperform with Struggling Students and Misdiagnose Errors
Authors: Li Lucy, Albert Zhang, Nathan Anderson, Ryan Knight, Kyle Lo
Figure 1: On the left is a math problem, where students are asked to draw x < 5/2 on a number line. The right side shows two example student responses that differ in correctness. DrawEduMath pairs each math problem with one student response, and prompts VLMs to answer questions about the student response.
Figure 2: VLMs consistently perform worse on answering DrawEduMath benchmark questions pertaining to erroneous student responses. Performance on non-erroneous student responses is labeled with specific VLMs' names; that same model's performance on erroneous student responses is directly below.
Models are now expert math solvers, and so AI for math education is receiving increasing attention.
Our new preprint evaluates 11 VLMs on our QA benchmark, DrawEduMath. We highlight a startling gap: models perform worse on inputs from K-12 students who need more help. 🧵
1/7 🧵 The GPT-4 technical report featured detailed calibration curves.
Since then, not a single major model release has reported calibration. The field quietly stopped measuring whether models know what they don't know.
Our new position paper argues this is a mistake. Here's why.
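Measuring "whether models know what they don't know" usually starts with a calibration metric. Below is a sketch of the standard expected calibration error (ECE) recipe over equal-width confidence bins; it is a generic illustration, not the protocol of any particular model release or of the position paper.

```python
# Basic calibration check: expected calibration error (ECE).
# Inputs: model confidences in [0, 1] and 0/1 correctness labels.
# ECE = sum over bins of (bin weight) * |avg confidence - accuracy|.
def expected_calibration_error(confidences, correct, n_bins=10):
    bins = [[] for _ in range(n_bins)]
    for c, y in zip(confidences, correct):
        idx = min(int(c * n_bins), n_bins - 1)  # clamp c == 1.0 into last bin
        bins[idx].append((c, y))
    n = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(y for _, y in b) / len(b)
        ece += (len(b) / n) * abs(avg_conf - accuracy)
    return ece
```

A well-calibrated model that says "80% confident" and is right 80% of the time gets an ECE near zero; a model that is always maximally confident but right only half the time gets an ECE of 0.5.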
Abstract submissions close on March 3rd!
We are also extending a ✨ call for mentored reviewers ✨: if you advise excellent graduate or postdoctoral researchers, you are welcome to recommend them to review for IC2S2 2026. Email IC2S2@uvm.edu to nominate mentored reviewers (or faculty colleagues).
CORRECTION: Claude Code launched in February 2025, suggesting a roughly 13% increase above expectations.
I remember that muttering from time to time!! Curious: Chinese-speaking culture in mainland China, the US, or elsewhere??
Agents of Chaos -- what are autonomous OpenClaw agents up to? How do they interact with each other? Read our investigation of OpenClaw at
researchgate.net/publication/...
And an interactive website agentsofchaos.baulab.info
@davidbau.bsky.social @natalieshapira.bsky.social @openclaw-x.bsky.social
I'm hiring a postdoc at @cmu.edu (w/ far.ai & @dgrand.bsky.social + @gordpennycook.bsky.social)!
How do LLMs shape human beliefs, and what do we do about it? AI safety meets behavioral science.
Open to technical and social science backgrounds.
New research: The AI Fluency Index.
We tracked 11 behaviors across thousands of Claude.ai conversations (for example, how often people iterate and refine their work with Claude) to measure how well people collaborate with AI.
Read more: https://www.anthropic.com/research/AI-fluency-index
We've alllllmost gotten all the Jan26 ARR reviews in, but I'm still trying to track down new emergency reviewers for papers on the following topics:
1) agents
2) jailbreaking
3) coding
4) RL
5) reasoning
6) LLM for finance
7) AMR
8) alignment
If you can review any (in next 24-48h) please DM me 🙏🙏🙏
I was taught that to have a great job talk narrative, you really only need ~3 high quality papers
How horrible to be a CS grad student under pressure to submit multiple first-author papers to every conference deadline, whether they feel ready or not. This serves no one's best interests in the long run (science included). But lots of students appear to be getting advice that it's necessary to compete.
"Humans across multiple languages spontaneously associate the nonwords kiki & bouba with spiky & round shapes, respectively...We tested the bouba-kiki effect in baby chickens. Similar to humans, they spontaneously chose a spiky shape when hearing a kiki sound & a round shape when hearing a bouba." 🧪
I have a small project that is taking me outside of academia to dip into industry, just ever so briefly.
I engage a lot with AI. I was not at all prepared for how industry is using it. Not. at. all.
This brief little window is definitely helping me better frame my teaching in this new world.
My contribution to the discourse, which I've said before and will say again: DH isn't over. DH has won. 1/
Postdoc positions at UC Berkeley, including with the fabulous Cultural Analytics group: aprecruit.berkeley.edu/JPF05222
I asked Gemini to "defend itself," and say what the big benefits of LLMs have been since 2020:
"Since 2020, the volume of digital noise has increased, and LLMs have provided the first reliable shield against it."
I had some fun pulling OpenAI's mission statement out of their IRS tax filings from 2016 to 2024, loading them into a git repo with fake commit dates and then taking a look at the diffs simonwillison.net/2026/Feb/13/...
I doubt it. I would read the author's piece very literally. He just put this preprint on arxiv: arxiv.org/pdf/2601.19062 I think some (and my read, this includes the author) are realizing that much more than AI is disempowering us. Many of us have known this for a very long time, of course.
I wrote a short article on AI Model Evaluation for the Open Encyclopedia of Cognitive Science 👇👇
Hope this is helpful for anyone who wants a super broad, beginner-friendly intro to the topic!
Thanks @mcxfrank.bsky.social and @asifamajid.bsky.social for this amazing initiative!
Well done @zdenekkasner.bsky.social et al!
LLMs as Span Annotators: A Comparative Study of LLMs and Humans has been accepted to multilingual-multicultural-evaluation.github.io 🎉
See paper arxiv.org/abs/2504.08697
If you think labeling text spans with LLMs is easy, you probably have not tried it yourself (we have!).
Any method you can think of, be it tagging, matching, or indexing, has flaws.
In our new preprint, we tested them all 💪 We also proposed how to improve one of them.
arxiv.org/abs/2601.16946
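To make the "matching" flaw concrete, here is a minimal sketch of that approach: the LLM returns the span's text verbatim, and we locate it in the source by substring search. The function name and the fallback policy are hypothetical, not from the preprint, but the two failure modes it surfaces (paraphrased quotes and ambiguous repeats) are exactly the kind of flaw the post describes.

```python
# Sketch of the "matching" approach to LLM span annotation: the model
# quotes the span, and we map it back to character offsets in the source.
def locate_span(text, quoted):
    """Return (start, end) of quoted in text, or None if absent or ambiguous."""
    start = text.find(quoted)
    if start == -1:
        return None  # model paraphrased: no exact match in the source
    if text.find(quoted, start + 1) != -1:
        return None  # ambiguous: the quoted text occurs more than once
    return (start, start + len(quoted))

text = "The cat sat. The cat ran."
span = locate_span(text, "sat")        # unique match -> offsets
dupe = locate_span(text, "The cat")    # repeated -> None
miss = locate_span(text, "slept")      # paraphrase/absent -> None
```

Even this tiny version shows why matching alone is fragile; tagging and indexing trade these failures for different ones (broken markup, off-by-one indices).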
I am looking for 2 emergency reviewers for the ARR Ethics, Bias & Fairness track. Please DM me if you are available 🙏
Screen shot of title page of a preprint. Title: Should generative AI be used in reflexive qualitative research? Authors: Elida Izani Ibrahim, Laura K. Nelson, and Andrea Voyer
Recent publications arguing against the use of genAI in reflexive qual research inspired us (Elida Ibrahim and @andreavoyer.bsky.social) to write our own perspective. Not to convince anyone to use genAI, but for those who might be interested and are looking for guidance.
osf.io/preprints/so...