Naver Labs Europe is organizing the 4th edition of its Workshop on AI for Robotics in Grenoble, in the French Alps. This year's topic is 'Spatial AI', and registration is open!
Looking for a multi-view depth method that just works?
We're excited to share MVSAnywhere, which we will present at #CVPR2025. MVSAnywhere produces sharp depths, generalizes well, is robust to all kinds of scenes, and is scale-agnostic.
More info:
nianticlabs.github.io/mvsanywhere/
Cool work! Do you think there's any architectural bias that prevents learning extrema, rather than only minima or only maxima? Or is it mostly the repeatability issue? It has me thinking about how classical SIFT DoG and Harris handle both light and dark features.
This is a fun example with a continuous transition between distinct 3D scenes!
Here's a reconstruction of a movie establishing shot
We've had fun testing the limits of MASt3R-SLAM on in-the-wild videos. Here's the drone video of a Minnesota bowling alley that we've always wanted to reconstruct! Different scene scales, dynamic objects, specular surfaces, and fast motion.
I'm not too familiar with it, but it seems there is some equivalence noted in Section 3.2 of "Feature preserving point set surfaces based on non-linear kernel regression".
Introducing MASt3R-SLAM, the first real-time monocular dense SLAM with MASt3R as a foundation.
Easy to use like DUSt3R/MASt3R: from an uncalibrated RGB video it recovers accurate, globally consistent poses & a dense map.
With @ericdexheimer.bsky.social* @ajdavison.bsky.social (*Equal Contribution)