Amazing, congrats!
Indeed. Additionally, if computing the forces in HMC is computationally cheap, it's extremely hard to beat with learned samplers in terms of wall-clock time. A similar challenge arises for neural PDE solvers.
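To make the wall-clock point concrete, here is a minimal HMC sketch (my own toy example, not from the thread): for a standard Gaussian target the force is just -x, so each leapfrog step costs a handful of flops, which is the baseline a learned sampler has to beat.

```python
import numpy as np

def grad_log_p(x):
    # log p(x) = -0.5 * ||x||^2 + const, so the "force" is simply -x
    return -x

def hmc_step(x, rng, step_size=0.1, n_leapfrog=20):
    p = rng.standard_normal(x.shape)          # resample momentum
    x_new, p_new = x.copy(), p.copy()
    # Leapfrog integration of the Hamiltonian dynamics
    p_new += 0.5 * step_size * grad_log_p(x_new)
    for _ in range(n_leapfrog - 1):
        x_new += step_size * p_new
        p_new += step_size * grad_log_p(x_new)
    x_new += step_size * p_new
    p_new += 0.5 * step_size * grad_log_p(x_new)
    # Metropolis accept/reject on the Hamiltonian
    h_old = 0.5 * (x @ x + p @ p)
    h_new = 0.5 * (x_new @ x_new + p_new @ p_new)
    if rng.random() < np.exp(h_old - h_new):
        return x_new
    return x

rng = np.random.default_rng(0)
x = np.zeros(5)
samples = []
for _ in range(2000):
    x = hmc_step(x, rng)
    samples.append(x.copy())
samples = np.array(samples)
# After burn-in, the marginal std should be close to 1.
```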
Happy Newton's birthday to those who celebrate
Should cite Newton first
In arxiv.org/abs/2303.00848, @dpkingma.bsky.social and @ruiqigao.bsky.social had suggested that noise augmentation could be used to make other likelihood-based models optimise perceptually weighted losses, like diffusion models do. So cool to see this working well in practice!
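As a rough illustration of the augmentation itself (a sketch under my own naming, not the paper's code): sample a noise level per example, add Gaussian noise, and train the likelihood model on the noisy input conditioned on that level, which implicitly reweights the loss across frequencies the way diffusion losses do.

```python
import numpy as np

def noise_augment(batch, rng, sigma_min=0.01, sigma_max=1.0):
    # Sample a noise level per example (log-uniform is a common
    # choice) and add Gaussian noise at that level. The model is
    # then trained by maximum likelihood on the noisy inputs,
    # conditioned on sigma.
    sigma = np.exp(rng.uniform(np.log(sigma_min), np.log(sigma_max),
                               size=(batch.shape[0], 1)))
    noisy = batch + sigma * rng.standard_normal(batch.shape)
    return noisy, sigma

rng = np.random.default_rng(0)
batch = rng.standard_normal((4, 8))  # stand-in for a batch of images
noisy, sigma = noise_augment(batch, rng)
# Training loss would be -log p_model(noisy | sigma), averaged over
# sampled noise levels: a perceptually weighted objective.
```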
Very nice results!
This is your monthly reminder that understanding deep learning does not require rethinking generalization, and it never did.
Brilliant talk by Ilya, but he's wrong on one point.
We are NOT running out of data. We are running out of human-written text.
We have more videos than we know what to do with. We just haven't solved pre-training in vision.
Just go out and sense the world. Data is easy.
Hi!
Neat!
More a linear algebra meme than a QM meme
Super excited to share a preprint of our work developing a Biomolecular Emulator (BioEmu): scalable emulation of protein equilibrium ensembles with generative deep learning, from @msftresearch.bsky.social AI for Science.
www.biorxiv.org/content/10.1...
Amazing work!
Just use GitHub
My former student @msalbergo.bsky.social has beautiful papers describing the approach on the right. "Flow matching" and "stochastic interpolants" were concurrent developments of the same core idea.
arxiv.org/abs/2209.15571
arxiv.org/abs/2303.08797
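The shared core idea can be sketched as a simple regression loss; this is an illustrative minimal version of the conditional flow-matching objective with a linear interpolant (all names here are hypothetical, not from either paper):

```python
import numpy as np

def cfm_loss(velocity_model, x0, x1, rng):
    # Draw a random time per pair, form the linear interpolant
    # x_t = (1 - t) x0 + t x1, and regress the model onto the
    # interpolant's velocity x1 - x0.
    t = rng.uniform(size=(x0.shape[0], 1))
    x_t = (1.0 - t) * x0 + t * x1
    target = x1 - x0
    pred = velocity_model(x_t, t)
    return float(np.mean((pred - target) ** 2))

rng = np.random.default_rng(0)
zero_model = lambda x_t, t: np.zeros_like(x_t)  # stand-in "network"
x0 = rng.standard_normal((16, 2))               # base (noise) samples
x1 = rng.standard_normal((16, 2)) + 3.0         # "data" samples
loss = cfm_loss(zero_model, x0, x1, rng)
```

A trained model's velocity field is then integrated as an ODE from the base to the data distribution.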
🧵 Today with @polymathicai.bsky.social and others we're releasing two massive datasets that span dozens of fields, from bacterial growth to supernovae!
We want this to enable multi-disciplinary foundation model research.
As a statistical inequality hoarder, I approve
A paper a day, episode 15.
You liked the Matrix Cookbook? You're gonna love this one: 100 statistics inequalities just for your personal enjoyment. As they say in French, moi j'ai Bienaymé cet article! (A pun on "bien aimé": "I really liked this article.")
arxiv.org/abs/2102.07234
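For context on the pun (my addition, not from the post): the Bienaymé in question is the one from Bienaymé's identity, which states that for pairwise uncorrelated random variables the variance of the sum is the sum of the variances:

```latex
\operatorname{Var}\!\left(\sum_{i=1}^{n} X_i\right)
  = \sum_{i=1}^{n} \operatorname{Var}(X_i)
  \qquad \text{when } \operatorname{Cov}(X_i, X_j) = 0 \ \text{for } i \neq j.
```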
What do you mean by physical? E.g. if the target density lives on a specific manifold, RFs will generally not respect that naively (e.g. think of data living on a torus). One would have to choose paths along geodesics, for example, and then properly integrate the flow on the manifold.
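For the torus example, "paths along geodesics" just means interpolating each angle along its shortest arc rather than along the naive straight line in R^n; a tiny sketch (my own illustration):

```python
import numpy as np

def torus_geodesic(theta0, theta1, t):
    # Interpolate angles along the shortest arc of the circle;
    # componentwise, this gives geodesics on a flat torus.
    delta = np.mod(theta1 - theta0 + np.pi, 2 * np.pi) - np.pi
    return np.mod(theta0 + t * delta, 2 * np.pi)

# Crossing the "seam": a straight line from 6.0 rad to 0.5 rad would
# sweep backwards through most of the circle, but the geodesic takes
# the short way through 2*pi, so the midpoint lands near the seam.
mid = torus_geodesic(np.array([6.0]), np.array([0.5]), 0.5)
```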
Yes, though a simplification is that the log-det-Jacobian is the integral of the divergence, so you "just" need to solve a 1D integral.
You can, you just need to integrate its divergence to get the likelihood.
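A toy illustration of that divergence integral (my own example, chosen so the answer has a closed form): for a linear vector field v(x) = A x, the divergence is trace(A) everywhere, so integrating it over time t in [0, T] gives trace(A) * T, and the change-of-variables formula reads log p_1(x_1) = log p_0(x_0) - that integral.

```python
import numpy as np

# Linear vector field v(x) = A @ x; its divergence is trace(A).
# (In general you'd get the divergence via autodiff, e.g. with a
# Hutchinson trace estimator.)
A = np.array([[0.5, 0.2],
              [0.0, -0.3]])

def integrate(x, T=1.0, n_steps=1000):
    """Euler-integrate the flow and the divergence side by side."""
    dt = T / n_steps
    log_det = 0.0
    for _ in range(n_steps):
        log_det += np.trace(A) * dt   # accumulate the 1D integral
        x = x + dt * (A @ x)
    return x, log_det

x0 = np.array([1.0, -1.0])
x1, log_det = integrate(x0)
# Here the integral is trace(A) * T = 0.2 exactly, and
# log p_1(x1) = log p_0(x0) - log_det.
```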
What do you mean? That's the whole field of optimisation (preconditioning, proximal, natural, etc.).
🥹
I have quite a few of those
For the theoretical side, thereβs no better resource than the 2024 book by Blondel and Roulet:
arxiv.org/abs/2403.14606
For the practical side, my friend @willtebbutt.bsky.social wrote stellar documentation explaining autodiff with mutation:
compintell.github.io/Mooncake.jl/...
Not sure it works well here either. Still seeing lots of cartoons and cat pictures instead of papers.
We're hiring! Our team at Google DeepMind is looking for a research engineer to join us. More details in the link below.
boards.greenhouse.io/deepmind/job...
π€
A cargo train would probably be the most efficient by far?
This video takes pedagogy to a whole new level in teaching the Schrödinger equation! youtu.be/uVKMY-WTrVo?...