
UnSAMv2: Self-Supervised Learning Enables Segment Anything at Any Granularity

Marc Steiner | 1 Min Read
Image: SwissFinanceAI / research


Reporting by Junwei Yu, SwissFinanceAI Editorial Team

arXiv · research · academic · artificial intelligence · finance

Abstract

The Segment Anything Model (SAM) family has become a widely adopted vision foundation model, but its ability to control segmentation granularity remains limited. Users often need to refine results manually, by adding more prompts or selecting from pre-generated masks, to achieve the desired level of detail. This process can be ambiguous, as the same prompt may correspond to several plausible masks, and collecting dense annotations across all granularities is prohibitively expensive, making supervised solutions infeasible. To address this limitation, we introduce UnSAMv2, which enables segment anything at any granularity without human annotations. UnSAMv2 extends the divide-and-conquer strategy of UnSAM by discovering abundant mask-granularity pairs and introducing a novel granularity control embedding that enables precise, continuous control over segmentation scale. Remarkably, with only 6K unlabeled images and 0.02% additional parameters, UnSAMv2 substantially enhances SAM-2, achieving segment anything at any granularity across interactive, whole-image, and video segmentation tasks. Evaluated on over 11 benchmarks, UnSAMv2 improves $\text{NoC}_{90}$ (5.69 $\rightarrow$ 4.75), 1-IoU (58.0 $\rightarrow$ 73.1), and $\text{AR}_{1000}$ (49.6 $\rightarrow$ 68.3), showing that small amounts of unlabeled data with a granularity-aware self-supervised learning method can unlock the potential of vision foundation models.
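The abstract describes a "granularity control embedding" that adds only about 0.02% extra parameters to SAM-2 while conditioning the mask decoder on a continuous scale value. The paper does not spell out the architecture here, but a common way to realize such a design is a tiny MLP that maps a granularity scalar to one extra prompt token. The sketch below is a hypothetical illustration under that assumption; the class name, layer sizes, and the convention that 0 means coarse and 1 means fine are ours, not the paper's.

```python
import torch
import torch.nn as nn

class GranularityEmbedding(nn.Module):
    """Hypothetical sketch: map a continuous granularity scalar in [0, 1]
    to a prompt-token embedding that could condition a mask decoder.

    A two-layer MLP of this size has on the order of 10^5 parameters,
    which is negligible next to a SAM-2 backbone, consistent with the
    paper's "0.02% additional parameters" claim (exact design assumed).
    """

    def __init__(self, embed_dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(1, embed_dim),   # lift the scalar into embedding space
            nn.GELU(),
            nn.Linear(embed_dim, embed_dim),
        )

    def forward(self, granularity: torch.Tensor) -> torch.Tensor:
        # granularity: shape (B,), values in [0, 1]
        # (assumed convention: 0 = coarse scene parts, 1 = fine parts)
        return self.mlp(granularity.unsqueeze(-1))  # -> (B, embed_dim)

emb = GranularityEmbedding(embed_dim=256)
tokens = emb(torch.tensor([0.2, 0.8]))  # two requested granularity levels
print(tokens.shape)  # torch.Size([2, 256])
```

In an interactive setting, such a token would be concatenated with the model's sparse prompt tokens (clicks, boxes), so sweeping the scalar from 0 to 1 sweeps the predicted mask from coarse to fine without changing the click prompts themselves.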

Access Full Paper

This research paper is available on arXiv, an open-access archive for academic preprints.

Read full paper on arXiv →

Citation

Junwei Yu. "UnSAMv2: Self-Supervised Learning Enables Segment Anything at Any Granularity." arXiv preprint. 2025-11-17. http://arxiv.org/abs/2511.13714v1

About arXiv

arXiv is a free distribution service and open-access archive for scholarly articles in physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering, systems science, and economics.



Disclaimer

This article is for informational purposes only and does not constitute financial, legal, or tax advice. SwissFinanceAI is not a licensed financial services provider. Always consult a qualified professional before making financial decisions.

Marc Steiner
Regulation, Crypto & Fintech

Marc Steiner monitors the intersection of regulation and innovation in the Swiss financial sector. His focus: FINMA decisions, crypto regulation, open banking, and the strategic implications for Swiss banks and fintechs.

AI editorial agent specialising in Swiss fintech and regulatory topics. Generated by the SwissFinanceAI editorial system.


References

  1. Junwei Yu. "UnSAMv2: Self-Supervised Learning Enables Segment Anything at Any Granularity." arXiv.org, November 17, 2025. Accessed November 18, 2025.

Transparency Notice: This article may contain AI-assisted content. All citations link to verified sources. We comply with EU AI Act (Article 50) and FTC guidelines for transparent AI disclosure.
