Congrats!!
I was thinking you can just
uv add --editable
But I'm not 100% sure
Why not uv add
First they took "alignment", and now they are taking "EM"?? Interpretability research transgressions know no limits.
BREAKING: going outside is nice
I hope it's a burn it all down and restart the system of incentivization for teaching well and learning well moment.
I'm looking for a reviewer for a paper on measuring syntactic productivity (lots of maths!) due a week from now. Please shoot me an email if you could review!
Is there no way to set your own master default? I always forget to change them and it drives me crazy!
Just realized that upon graduating, my university publishes my dissertation to ProQuest but then I immediately lose access to ProQuest lol. Feels very symbolic.
I defended last month, attended NAACL, spent some time in Japan, and now I am looking forward to joining Kensho as a research scientist next month!
I think most LLM research has mainly pursued three questions: Are models undertrained (need more compute)? Do models have too few parameters (need a bigger model)? Is my test data out of distribution (need more data)?
Space Jam
Thank you!
Are you guys still active on the server? Is there another link :D?
I'm giving a lecture on language model debiasing to my undergrad NLP course on Friday but I'm not super up to date on the research. Does anyone have any suggestions for papers/topics to cover?
They always come the day after the deadline 🤷
That feeling when a single \small makes your paper exactly 8 pages.
What do you consider to be the sota LLM for massively multilingual NLP? Does it heavily depend on language/task/domain?
I just gave a talk where I referred to it that way in my slides, but later was searching for it in recent papers and noticed that maybe no one says that anymore haha.
I recall some years ago people started calling pretraining tasks (like causal language modeling) self-supervised. I am not sure if anyone uses this term anymore -- is that because we just assume pretraining == next token prediction? What do we call it now?
Why do so many job applications require you to fill out today's date? The form is literally being processed on a today's-date-tracking-machine
Saying "nascent" is the best.
I learn so much from reviewing, it's the papers I review that I keep coming back to for my own ideas and citations. They broaden and deepen my view on the field. Let's give it the time it deserves.
Here is a list of ML OSS & Open Source / Science enthusiasts I found on Bluesky 🦋
go.bsky.app/8MFcfXd
Let me know if you find such people here!
I'm still new here, and the list probably misses many must-add people, so let's build it together 💪
As opposed to now, where NLP is a subfield of LLMs.
Trying to put together a list of multilingual* NLP people. If you want to be added, let me know! bsky.app/profile/did:...
*Also, what do we call this intersection of multilingual and crosslingual research? I hate it when people say multilingual NLP to mean NLP for a non-English language.
🙋