
Sergio Izquierdo

@sizquierdo

PhD candidate at University of Zaragoza. Previously intern at Niantic Labs and Skydio. Working on 3D reconstruction and Deep Learning. serizba.github.io

54
Followers
66
Following
12
Posts
06.12.2024
Joined

Latest posts by Sergio Izquierdo @sizquierdo

Thanks for sharing!

I put care and love into the cover, creating it in QGIS and Illustrator to showcase my beloved Zaragoza

20.01.2026 12:41 👍 6 🔁 1 💬 0 📌 0
Civil Software Licenses

One concern I have as an AI researcher when publishing code is that it can end up in dual-use applications.
To address this, we propose Civil Software Licenses: they prevent dual use while imposing only minimal restrictions:

civil-software-licenses.github.io

31.07.2025 17:36 👍 16 🔁 3 💬 3 📌 0

Presenting today at #CVPR poster 81.

Code is available at github.com/nianticlabs/...

Want to try it on an iPhone video? On Android? On any other sequence you have? We got you covered. Check the repo.

14.06.2025 14:25 👍 3 🔁 0 💬 0 📌 0

Presenting it now at #CVPR

14.06.2025 14:24 👍 4 🔁 0 💬 0 📌 0

Happy to be one of them

15.05.2025 10:45 👍 2 🔁 0 💬 0 📌 0

We focused on depth from videos and, as you pointed out, we didn't train on datasets with different captures per scene.

31.03.2025 15:51 👍 0 🔁 0 💬 1 📌 0
MVSAnywhere: Zero-Shot Multi-View Stereo, CVPR 2025

Check the website: nianticlabs.github.io/mvsanywhere/
And the paper: arxiv.org/pdf/2503.22430
Code coming soon!

Great work with @mohamedsayed.bsky.social @mdfirman.bsky.social @guiggh.bsky.social D. Turmukhambetov @jcivera.bsky.social @oisinmacaodha.bsky.social @gbrostow.bsky.social J. Watson

31.03.2025 12:52 👍 3 🔁 0 💬 0 📌 0

💡Use case:

We show how the accurate and robust depths from MVSAnywhere can regularize Gaussian splats, yielding much cleaner scene reconstructions.

As MVSAnywhere is agnostic to the scene scale, this is plug-and-play for your splats!

31.03.2025 12:52 👍 3 🔁 0 💬 1 📌 0
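The depth-regularization idea above can be sketched as a simple per-pixel loss. This is a minimal illustration under my own assumptions (function name, boolean mask handling, and the L1 penalty are all invented), not the released code. The key point from the post carries over: since MVSAnywhere predicts depth at the same scale as the input cameras, the rendered splat depth and the predicted depth can be compared directly, with no per-image scale alignment.

```python
import numpy as np

def depth_regularization_loss(rendered_depth, mvs_depth, valid_mask):
    """Hypothetical L1 depth-supervision term for Gaussian splat training.

    rendered_depth: depth rendered from the current splats (H, W)
    mvs_depth:      depth predicted by the MVS model (H, W)
    valid_mask:     boolean mask of pixels with a valid prediction (H, W)

    Because the predicted depth shares the cameras' scale, no scale/shift
    alignment is applied before comparison.
    """
    diff = np.abs(rendered_depth - mvs_depth)
    return float(diff[valid_mask].mean())
```

In a training loop this term would be added, with some weight, to the usual photometric rendering loss.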
Quantitative results of MVSAnywhere

πŸ†Results:

MVSAnywhere achieves state-of-the-art results on the Robust Multi-View Depth Benchmark, showing its strong generalization performance.

31.03.2025 12:52 👍 4 🔁 0 💬 1 📌 0

🧩Challenge: Varying Depth Scales & Unknown Ranges

🔹Most models require a known depth range to build the cost volume.
✅MVSAnywhere estimates an initial range from the camera scale and setup, then refines it. It predicts at the same scale as the input cameras!

31.03.2025 12:52 👍 2 🔁 0 💬 1 📌 0
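One simple way to guess an initial depth range from the camera setup alone is to scale the baselines between the reference camera and its source views. The sketch below is a heuristic of my own; the factors are invented hyperparameters, and the actual estimate-and-refine procedure in the paper may work differently.

```python
import numpy as np

def initial_depth_range(ref_pos, src_positions, near_factor=0.5, far_factor=50.0):
    """Guess a (near, far) depth range from camera positions only.

    ref_pos:       3D position of the reference camera
    src_positions: 3D positions of the source cameras, shape (N, 3)

    Heuristic sketch: scale the median reference-to-source baseline.
    near_factor/far_factor are made-up values; a refinement pass would
    then tighten this coarse range around the observed scene.
    """
    baselines = np.linalg.norm(np.asarray(src_positions, dtype=float) - ref_pos, axis=1)
    median_baseline = float(np.median(baselines))
    return near_factor * median_baseline, far_factor * median_baseline
```

Since the range is derived from the cameras themselves, the resulting depth predictions naturally live at the same scale as the input poses.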
Qualitative results of MVSAnywhere

🧩Challenge: Domain Generalization

🔹Previous models struggle across different domains (indoor🏠 vs outdoor🏞️).
✅MVSAnywhere uses a transformer architecture and is trained on a large array of varied synthetic datasets.

31.03.2025 12:52 👍 3 🔁 0 💬 1 📌 0
MVSAnywhere works with dynamic objects and casually captured videos.

🧩Challenge: Robustness to casually captured videos

🔹MVS methods rely entirely on cost-volume matches (which fail under low overlap and with dynamic objects).
✅MVSAnywhere successfully combines strong single-view image priors with multi-view information from our cost volume.

31.03.2025 12:52 👍 3 🔁 0 💬 1 📌 0
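The combination described above can be illustrated with an explicit per-pixel blend: trust the multi-view estimate where matching is confident, fall back to the single-view prior where it fails. To be clear, this weighted average is my own toy illustration; in the paper the two cues are fused inside the network rather than by an explicit formula like this.

```python
import numpy as np

def fuse_depth(mono_depth, mvs_depth, match_confidence):
    """Blend a single-view depth prior with cost-volume depth, per pixel.

    Where matching is reliable (textured, overlapping, static regions)
    the multi-view depth dominates; where it breaks down (low overlap,
    dynamic objects) the monocular prior takes over.
    """
    w = np.clip(match_confidence, 0.0, 1.0)
    return w * mvs_depth + (1.0 - w) * mono_depth
```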

πŸ”Looking for a multi-view depth method that just works?

We're excited to share MVSAnywhere, which we will present at #CVPR2025. MVSAnywhere produces sharp depths, generalizes well, is robust to all kinds of scenes, and is scale-agnostic.

More info:
nianticlabs.github.io/mvsanywhere/

31.03.2025 12:52 👍 40 🔁 10 💬 2 📌 4

MASt3R-SLAM code release!
github.com/rmurai0610/M...

Try it out on videos or with a live camera

Work with
@ericdexheimer.bsky.social*,
@ajdavison.bsky.social (*Equal Contribution)

25.02.2025 17:23 👍 51 🔁 10 💬 2 📌 3

MegaLoc: One Retrieval to Place Them All
@berton-gabri.bsky.social Carlo Masone

tl;dr: DINOv2-SALAD, trained on all available VPR datasets, works very well.
Code should appear at github.com/gmberton/Meg..., but it's not up yet.
arxiv.org/abs/2502.17237

25.02.2025 10:03 👍 13 🔁 3 💬 1 📌 0
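The retrieval step behind a visual place recognition model like this can be sketched as nearest-neighbour search over global descriptors. In a MegaLoc-style pipeline the descriptors would come from the trained DINOv2-SALAD model; in this hedged sketch they are plain arrays, and the brute-force cosine-similarity search stands in for a real index.

```python
import numpy as np

def retrieve_top_k(query_desc, db_descs, k=5):
    """Rank database images by cosine similarity of global descriptors.

    query_desc: descriptor of the query image, shape (D,)
    db_descs:   descriptors of the database images, shape (N, D)
    Returns the indices of the k most similar database images.
    """
    q = query_desc / np.linalg.norm(query_desc)
    db = db_descs / np.linalg.norm(db_descs, axis=1, keepdims=True)
    sims = db @ q                      # cosine similarity per database image
    return np.argsort(-sims)[:k]       # highest similarity first
```

The top-ranked database images (whose geotags are known) then localize the query.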