@pbontrager
AI researcher & engineer @Meta working on @PyTorch torchtune in NYC; interests in generative models, RL, and evolutionary strategies. https://github.com/pbontrager https://tinyurl.com/philips-papers
This thread is a bit long, but I thought it'd be interesting to share just one of the mundane parts of the deep learning stack that break and have to be rethought as models and training scale.
What goes into saving checkpoints is not something that many people think about, but as models get bigger this becomes a challenge. The biggest open models now have checkpoints over 700 GB that can take tens of minutes every time you want to consolidate into a checkpoint.
Distributed Checkpoints (DCP) solve this by having every GPU save its own checkpoint asynchronously, so you can save a checkpoint in less than a second. But this creates a new problem: the next time you want to use the model, you might have a different number of GPUs.
On startup, DCP has to map your old GPU layout to your new one so each GPU knows which file to read from and reads only the data it needs. But there's one last problem: when you're ready to take your model to another tool (serving, eval, etc.), it expects safetensors checkpoints.
Safetensors are great for hosting checkpoints and make no assumptions about whether your model is distributed, since they save full unsharded parameters. To work natively with safetensors, DCP needs to tell each GPU the exact slice of data to read without loading the full parameter.
To save, you need to let each GPU save its own partial safetensors, because communication is slow, and then line up the memory blocks and merge them into one file.
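To make the "line up the memory blocks and merge" idea concrete, here's a minimal pure-Python sketch (not the real torch.distributed.checkpoint API; names like save_shard and consolidate are hypothetical): each simulated rank saves only its shard of a 1-D parameter plus the offset metadata needed to merge, and consolidation stitches the shards back together by offset.

```python
def save_shard(storage, rank, param_name, shard, offset):
    """Each rank saves only its own slice, plus enough metadata to merge.
    `storage` stands in for per-rank checkpoint files."""
    storage[(param_name, rank)] = {"offset": offset, "data": list(shard)}

def consolidate(storage, param_name, full_size):
    """Line the shards up by offset and merge into one full parameter."""
    full = [None] * full_size
    for (name, _rank), shard in storage.items():
        if name != param_name:
            continue
        off = shard["offset"]
        full[off:off + len(shard["data"])] = shard["data"]
    return full

# Simulate 4 ranks, each holding a quarter of an 8-element parameter.
storage = {}
param = list(range(8))
for rank in range(4):
    off = rank * 2
    save_shard(storage, rank, "w", param[off:off + 2], off)

merged = consolidate(storage, "w", 8)
assert merged == param
```

The real system does this with tensor byte ranges inside safetensors files rather than Python lists, but the bookkeeping problem (offset + length per shard) is the same.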
pytorch.org/blog/hugging...
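And a hedged sketch of the startup-time layout mapping described above, assuming simple contiguous, even 1-D sharding (real DCP metadata is richer): given the old and new world sizes, each new rank computes which old per-rank files it must read, and which slice of each.

```python
def shard_range(rank, world_size, numel):
    """Contiguous even sharding: the half-open [start, stop) owned by `rank`."""
    per = numel // world_size
    return rank * per, (rank + 1) * per

def read_plan(new_rank, new_world, old_world, numel):
    """For one new rank, list (old_rank, start, stop) slices to read
    from the old per-rank checkpoint files. Only overlapping ranges count,
    so each rank reads exactly the data it needs."""
    lo, hi = shard_range(new_rank, new_world, numel)
    plan = []
    for old_rank in range(old_world):
        olo, ohi = shard_range(old_rank, old_world, numel)
        start, stop = max(lo, olo), min(hi, ohi)
        if start < stop:
            plan.append((old_rank, start, stop))
    return plan

# Saved on 4 GPUs, resumed on 2: new rank 0 reads all of old ranks 0 and 1.
assert read_plan(0, 2, 4, 8) == [(0, 0, 2), (1, 2, 4)]
```

Going the other way (2 old GPUs to 4 new ones) works with the same intersection logic; each new rank just reads a sub-slice of one old file.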
I'm enjoying it while it lasts before everything fully homogenizes again
We've built a simulated driving agent that we trained on 1.6 billion km of driving with no human data.
It is SOTA on every planning benchmark we tried.
In self-play, it goes 20 years between collisions.
Aren't these two paradoxes functionally the same? en.m.wikipedia.org/wiki/Braess%...
In the Alice In Wonderland (github.com/LAION-AI/AIW) reasoning and generalization benchmark, DeepSeek R1 appears to perform much more like o1-mini than o1-preview. (Plot from laion-ai)
What are the best benchmarks for reasoning models?
Can we just study LLM activations/behavior because it's interesting and it can tell us things about language and AI without imbuing artificial importance or meaning on top of it?
Haha, that wasn't lost on me. Facebook's still going strong, but it's a different site and users from when I was in HS.
If you can choose who follows you, that sounds more like "friends" from the old Facebook days.
I found out about Warp because I was on jury duty with one of their devs. It's been great compared to the Mac's default terminal.
How do you add these?
Maybe let's go the other direction and include blog posts in CVs too.
That would imply that we solved self-driving (image recognition) and search (language understanding), among other things.
This could be a good case for mixed models. The model parsing the text could likely be smaller, or fairly cheap like DeepSeek.
Thankfully in a small startup you only have to sell an idea to a couple of people and you can get going.
One startup I joined had a model getting 95% on benchmarks but terrible in practice. We spent the first 6 months developing new benchmarks instead of a new model.
I always set out to propose a new idea and end up having to propose a new benchmark instead
What if humanity knows X and wants to understand Z. If a computer can give us Y so that we can understand Z, that would be useful for science. Though I'd say that we still don't know Y ourselves.
Imagine if under the hood o1 is just calling "write better code" over and over again
I posted about this recently. Benchmarks show what models canβt do, not what they can do.
Plagiarize other peopleβs research
Imagine being an editor for an LLM: so much work, with low confidence that you'll have something interesting in the end.
I remember a lot of focus being on the loss function. My impression was that we thought we had models that would work well if only we had a good perceptual loss to train them with. In comes the GAN.
Base models are closer, but they're still affected by the company's decisions on which data to filter out and, more indirectly, by what data is given free hosting on the internet.