‼️
Soon hiring a lab manager! Looking for someone who is really interested in language neuroscience, who is organised, motivated, a great communicator, and who works well in a research team. Express interest by submitting this form: tinyurl.com/glysn-labman...
Reposts appreciated!
The Visual Learning Lab is hiring TWO lab coordinators!
Both positions are ideal for someone looking for research experience before applying to graduate school. Application deadline is Feb 10th (approaching fast!)—with flexible summer start dates.
Hopkins Cog Sci is hiring! We have two open faculty positions: one in vision, and one in language. Please repost!
Why do AI models struggle with social scenes? 🧐 Our new preprint with @lisik.bsky.social reveals a fundamental gap: most AI vision models lack explicit 3D pose information that humans rely on for social judgments.
Read the full work: arxiv.org/abs/2511.03988
Excited to share our work on mechanisms of naturalistic audiovisual processing in the human brain 🧠🎬!!
www.biorxiv.org/content/10.1...
Call for applications to cognitive science PhD program with QR code to the link above
The department of Cognitive Science @jhu.edu is seeking motivated students interested in joining our interdisciplinary PhD program! Applications due 1 Dec
Our PhD students also run an application mentoring program for prospective students. Mentoring requests due November 15.
tinyurl.com/2nrn4jf9
🚨New preprint w/ @lisik.bsky.social!
Aligning Video Models with Human Social Judgments via Behavior-Guided Fine-Tuning
We introduce a ~49k-triplet social video dataset, uncover a modality gap (language > video), and close it via novel behavior-guided fine-tuning.
🔗 arxiv.org/abs/2510.01502
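The behavior-guided fine-tuning mentioned above is built around triplet judgments; here is a minimal numpy sketch of a standard triplet margin loss on embeddings (illustrative only, not the paper's exact objective or data):

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet margin loss on embedding vectors.

    Pulls the anchor toward the positive and pushes it away from
    the negative by at least `margin` (in squared Euclidean distance).
    """
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

# Toy embeddings: the anchor is already much closer to the positive
# than to the negative, so the margin is satisfied and the loss is 0.
a = np.array([1.0, 0.0])
p = np.array([0.9, 0.1])
n = np.array([-1.0, 0.0])
loss = triplet_margin_loss(a, p, n)
```

In fine-tuning, gradients of a loss like this nudge the video model's embedding space so that clips humans judge as similar end up closer together.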
🚨🚨🚨 The Subjectivity Lab is looking for a lab manager! The position is available immediately. We want someone who can help coordinate our large sample fMRI study, plus other behavioral work. Because *gestures at everything* the job was approved only now (ends in June 2026). Great opportunity! 🧵 1/4
My lab at USC is recruiting!
1) research coordinator: perfect for a recent graduate looking for research experience before applying to PhD programs: usccareers.usc.edu REQ20167829
2) PhD students: see FAQs on lab website dornsife.usc.edu/hklab/faq/
These findings highlight the importance of visual-semantic signals, above and beyond spoken language content, across cortex, even in the language network.
The code to replicate the analyses and figures is available here: github.com/Isik-lab/ubi...
8/8
Follow-up analyses showed that both social perception and language regions were best predicted by later vision model layers, which map onto high-level social-semantic signals (valence, the presence of a social interaction, faces).
7/n
Importantly, vision and language embeddings are only weakly correlated throughout the movie, suggesting that each set of embeddings predicts distinct variance in the neural responses.
6/n
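The logic above, that weakly correlated feature sets explain distinct variance, can be sketched with toy data (not the paper's pipeline): the joint model's R² exceeds either feature set alone by the unique variance the other explains.

```python
import numpy as np

def r2(y, y_hat):
    """Coefficient of determination."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def ols_predict(X, y):
    """Least-squares fit and in-sample prediction."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ w

# Toy data: a response driven by two independent (hence weakly
# correlated in sample) feature sets, "vision" and "language".
rng = np.random.default_rng(1)
vision = rng.standard_normal((500, 2))
language = rng.standard_normal((500, 2))
y = vision @ [1.0, -0.5] + language @ [0.8, 0.3] + 0.5 * rng.standard_normal(500)

r2_vision = r2(y, ols_predict(vision, y))
r2_joint = r2(y, ols_predict(np.hstack([vision, language]), y))
unique_language = r2_joint - r2_vision  # variance only language explains
```

If the two embedding sets were strongly correlated instead, `unique_language` would shrink toward zero because the joint model would add little over vision alone.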
We find that vision embeddings dominate prediction across cortex. Surprisingly, even language-selective regions were predicted by vision model embeddings as well as, or better than, by language model features.
5/n
We densely labeled the vision and language features of the movie using a combination of human annotations and vision and language deep neural network (DNN) models, and linearly mapped these features to fMRI responses using an encoding model.
4/n
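The linear feature-to-voxel mapping described above is commonly fit with ridge regression; here is a minimal closed-form sketch on simulated data (dimensions and regularization are illustrative, not the study's actual settings):

```python
import numpy as np

def fit_ridge(X, Y, alpha=1.0):
    """Closed-form ridge regression: W = (X'X + alpha*I)^-1 X'Y.

    X: (n_timepoints, n_features) stimulus features per fMRI TR
    Y: (n_timepoints, n_voxels) BOLD responses
    Returns W: (n_features, n_voxels) encoding weights.
    """
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ Y)

# Toy example: simulate voxel responses that depend linearly on the
# features, fit on a training split, and predict the held-out split.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))        # 200 TRs, 5 features
W_true = rng.standard_normal((5, 3))     # 3 voxels
Y = X @ W_true + 0.1 * rng.standard_normal((200, 3))

W = fit_ridge(X[:150], Y[:150], alpha=1.0)
Y_hat = X[150:] @ W
r = np.corrcoef(Y_hat[:, 0], Y[150:, 0])[0, 1]  # per-voxel prediction accuracy
```

In encoding-model studies, a held-out correlation like `r` is computed per voxel and mapped across cortex to show where a given feature set predicts well.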
To address this, we collected fMRI data from 34 participants while they watched a 45-minute naturalistic audiovisual movie. Critically, we used functional localizer experiments to identify social interaction perception and language-selective regions in the same participants.
3/n
Humans effortlessly extract social information from both the vision and language signals around us. However, most work (even most naturalistic fMRI encoding work) is limited to studying unimodal processing. How does the brain process simultaneous multimodal social signals?
2/n
Excited to share new work with @hleemasson.bsky.social , Ericka Wodka, Stewart Mostofsky and @lisik.bsky.social! We investigated how simultaneous vision and language signals are combined in the brain using naturalistic+controlled fMRI. Read the paper here: osf.io/b5p4n
1/n
What shapes the topography of high-level visual cortex?
Excited to share a new pre-print addressing this question with connectivity-constrained interactive topographic networks, titled "Retinotopic scaffolding of high-level vision", w/ Marlene Behrmann & David Plaut.
🧵 ↓ 1/n
Despite everything going on, I may have funds to hire a postdoc this year 😬🤞🧑‍🔬 Open to a wide variety of possible projects in social and cognitive neuroscience. Get in touch if you are interested! Reposts appreciated.
📢 Excited to announce our paper at #ICLR2025: “Modeling dynamic social vision highlights gaps between deep learning and humans”! w/ @emaliemcmahon.bsky.social, Colin Conwell, Mick Bonner, @lisik.bsky.social
📆 Thu, Apr 24, 3:00-5:30: Poster session 2 (#64)
📄 bit.ly/4jISKES [1/6]
Shown is an example image that participants viewed in the EEG, fMRI, and behavioral annotation tasks. There is also a schematic of a regression procedure for jointly predicting fMRI responses from stimulus features and EEG activity.
I am excited to share our recent preprint and the last paper of my PhD! Here, @imelizabeth.bsky.social, @lisik.bsky.social, Mick Bonner, and I investigate the spatiotemporal hierarchy of social interactions in the lateral visual stream using EEG-fMRI.
osf.io/preprints/ps...
#CogSci #EEG
This is incredibly cool: if you search for a condition that’s affected your family, the site returns stats on how much NIH has done for that disease, *and* a contact form for reaching out to tell your Members of Congress why you want to see them defend NIH.
Pass it on!
New paper! 🧠 **The cerebellar components of the human language network**
with: @hsmall.bsky.social @moshepoliak.bsky.social @gretatuckute.bsky.social @benlipkin.bsky.social @awolna.bsky.social @aniladmello.bsky.social and @evfedorenko.bsky.social
www.biorxiv.org/content/10.1...
1/n 🧵
I’m hiring a full-time lab tech for two years starting May/June. Strong coding skills required, ML a plus. Our research on the human brain uses fMRI, ANNs, intracranial recording, and behavior. A great stepping stone to grad school. Apply here:
careers.peopleclick.com/careerscp/cl...
Substantial updates to the list of cancelled grants👇
- THANK YOU to all who have contributed. Crowdsourcing restores faith in humanity.
- It's still a work in progress. You'll see more updates shortly.
- There are multiple teams & efforts engaged in tracking & advocacy. More to come soon!
As a result of Trump’s slashes to research funding, dozens of graduate programs have announced reductions and cancellations of graduate admissions slots.
If you are an impacted applicant, please fill out this survey: docs.google.com/forms/d/e/1F...
🧪🧠🧬🔬🥼👩🏼‍🔬🧑‍🔬
Our language neuroscience lab (evlab.mit.edu) is looking for a new lab manager/FT RA to start in the summer. Apply here: tinyurl.com/3r346k66 We'll start reviewing apps in early Mar. (Unfortunately, MIT does not sponsor visas for these positions, but OPT works.)
Hey Bsky friends on #neuroskyence! Very excited to share our @iclr-conf.bsky.social paper: TopoNets! High-performing vision and language models with brain-like topography! Expertly led by grad students Mayukh and Mainak! A brief thread...
✨i'm hiring a lab manager, with a start date of ~September 2025! to express interest, please complete this google form: forms.gle/GLyAbuD779Rz...
looking for someone to join our multi-disciplinary team, using OPM, EEG, iEEG and computational techniques to study speech and language processing! 🧠