Training-Free Open-Vocabulary Segmentation with Offline Diffusion-Augmented Prototype Generation

Conference proceedings contribution
Publication date:
2024
Citation:
Training-Free Open-Vocabulary Segmentation with Offline Diffusion-Augmented Prototype Generation / Barsellotti, Luca; Amoroso, Roberto; Cornia, Marcella; Baraldi, Lorenzo; Cucchiara, Rita. - (2024), pp. 3689-3698. (2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024, Seattle, WA, 17th-21st June 2024) [10.1109/CVPR52733.2024.00354].
Abstract:
Open-vocabulary semantic segmentation aims at segmenting arbitrary categories expressed in textual form. Previous works have trained over large amounts of image-caption pairs to enforce pixel-level multimodal alignments. However, captions provide global information about the semantics of a given image but lack direct localization of individual concepts. Furthermore, training on large-scale datasets inevitably brings significant computational costs. In this paper, we propose FreeDA, a training-free diffusion-augmented method for open-vocabulary semantic segmentation, which leverages the ability of diffusion models to visually localize generated concepts and local-global similarities to match class-agnostic regions with semantic classes. Our approach involves an offline stage in which textual-visual reference embeddings are collected, starting from a large set of captions and leveraging visual and semantic contexts. At test time, these are queried to support the visual matching process, which is carried out by jointly considering class-agnostic regions and global semantic similarities. Extensive analyses demonstrate that FreeDA achieves state-of-the-art performance on five datasets, surpassing previous methods by more than 7.0 average points in terms of mIoU, without requiring any training. Our source code is available at https://aimagelab.github.io/freeda/.
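The test-time matching described in the abstract can be pictured as scoring each class-agnostic region against per-class prototype embeddings (the "local" term) and blending that with a global image-level similarity. The following is a minimal, hypothetical sketch of such a step; the function name, the `alpha` mixing weight, and the input shapes are illustrative assumptions, not FreeDA's actual implementation.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Normalize vectors to unit length for cosine similarity."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def assign_regions(region_embs, class_prototypes, global_sims, alpha=0.5):
    """Assign each class-agnostic region to a semantic class.

    region_embs:      (R, D) visual embeddings of class-agnostic regions
    class_prototypes: list of (P_c, D) arrays, one array of prototypes per class
    global_sims:      (C,) global image-to-class similarities
    alpha:            mixing weight between local and global similarity terms
    """
    regions = l2_normalize(np.asarray(region_embs, dtype=float))
    # Local term: best cosine similarity to any prototype of each class.
    local = np.stack([
        (regions @ l2_normalize(np.asarray(p, dtype=float)).T).max(axis=1)
        for p in class_prototypes
    ], axis=1)                                  # shape (R, C)
    # Blend local region-prototype matching with global semantic similarity.
    scores = alpha * local + (1 - alpha) * np.asarray(global_sims)[None, :]
    return scores.argmax(axis=1)                # class index per region

# Toy usage: two regions, two classes with one prototype each.
labels = assign_regions(
    region_embs=[[1.0, 0.0], [0.0, 1.0]],
    class_prototypes=[np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])],
    global_sims=np.array([0.5, 0.5]),
)
print(labels)  # each region matches the class whose prototype it aligns with
```

Taking the maximum over a class's prototypes (rather than the mean) lets a region match any one visual mode of a class, which is the intuition behind collecting many diffusion-generated references per category.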
CRIS type:
Conference proceedings paper
Author list:
Barsellotti, Luca; Amoroso, Roberto; Cornia, Marcella; Baraldi, Lorenzo; Cucchiara, Rita
University authors:
Baraldi, Lorenzo
Cornia, Marcella
Cucchiara, Rita
Link to the full record:
https://iris.unimore.it/handle/11380/1333026
Link to the full text:
https://iris.unimore.it//retrieve/handle/11380/1333026/739255/2024_CVPR_Open_Vocabulary_Segmentation.pdf
Book title:
Proceedings of the 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024
Published in:
IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION