Semantic Residual Prompts for Continual Learning

Conference Paper
Publication Date:
2025
Short description:
Semantic Residual Prompts for Continual Learning / Menabue, M.; Frascaroli, E.; Boschini, M.; Sangineto, E.; Bonicelli, L.; Porrello, A.; Calderara, S. - 15119 LNCS (2025), pp. 1-18. (18th European Conference on Computer Vision, ECCV 2024, Milano, Italy, 29 Sep - 4 Oct 2024) [10.1007/978-3-031-73030-6_1].
Abstract:
Prompt-tuning methods for Continual Learning (CL) freeze a large pre-trained model and train a few parameter vectors termed prompts. Most of these methods organize these vectors in a pool of key-value pairs and use the input image as a query to retrieve the prompts (values). However, as keys are learned while tasks progress, the prompt selection strategy is itself subject to catastrophic forgetting, an issue often overlooked by existing approaches: prompts introduced to accommodate new tasks may end up interfering with previously learned prompts. To make the selection strategy more stable, we leverage a foundation model (CLIP) to select our prompts within a two-level adaptation mechanism. The first level leverages a standard textual prompt pool for the CLIP textual encoder, leading to stable class prototypes. The second level then uses these prototypes, along with the query image, as keys to index a second pool; the retrieved prompts serve to adapt a pre-trained ViT, granting plasticity. In doing so, we also propose a novel residual mechanism to transfer CLIP semantics to the ViT layers. Through extensive analysis on established CL benchmarks, we show that our method significantly outperforms both state-of-the-art CL approaches and the zero-shot CLIP baseline. Notably, our findings hold even for datasets with a substantial domain gap w.r.t. the pre-training knowledge of the backbone model, as showcased by experiments on satellite imagery and medical datasets. The codebase is available at https://github.com/aimagelab/mammoth.
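To make the two-level selection concrete, here is a minimal, self-contained PyTorch sketch of such a scheme. It is illustrative only: the names (select_prompts, residual_inject), the pool sizes, and the Linear layers standing in for the frozen CLIP encoders are hypothetical and are not taken from the paper or its codebase; the point is only the flow "stable prototypes → keys → retrieved prompts → residual injection".

import torch
import torch.nn.functional as F

torch.manual_seed(0)
D, N_CLASSES, POOL_SIZE, TOP_K = 512, 10, 20, 4

# Frozen stand-ins for the pre-trained CLIP text and image encoders.
frozen_text_enc = torch.nn.Linear(D, D).requires_grad_(False)
frozen_image_enc = torch.nn.Linear(D, D).requires_grad_(False)

# Level 1: learnable textual prompts; pushed through the frozen text
# encoder they yield one stable prototype per class.
text_prompts = torch.nn.Parameter(torch.randn(N_CLASSES, D))
class_prototypes = F.normalize(frozen_text_enc(text_prompts), dim=-1)

# Level 2: a pool of visual prompts whose keys are derived from the stable
# prototypes, so the selection rule itself is less exposed to forgetting
# than freely learned keys would be.
visual_prompts = torch.nn.Parameter(torch.randn(POOL_SIZE, D))
prompt_keys = class_prototypes[torch.randint(0, N_CLASSES, (POOL_SIZE,))]

def select_prompts(image_feat: torch.Tensor) -> torch.Tensor:
    """Retrieve the TOP_K visual prompts whose keys best match the query."""
    query = F.normalize(frozen_image_enc(image_feat), dim=-1)
    scores = query @ prompt_keys.T               # cosine similarity to keys
    idx = scores.topk(TOP_K, dim=-1).indices     # best-matching pool entries
    return visual_prompts[idx]                   # (batch, TOP_K, D)

def residual_inject(vit_tokens: torch.Tensor, clip_feat: torch.Tensor) -> torch.Tensor:
    """Add CLIP-derived features to ViT tokens as a simple residual term."""
    return vit_tokens + clip_feat.unsqueeze(1)   # broadcast over token dim

if __name__ == "__main__":
    image_feat = torch.randn(2, D)               # stand-in for image features
    prompts = select_prompts(image_feat)         # (2, TOP_K, D)
    vit_tokens = torch.randn(2, 16, D)           # stand-in for ViT patch tokens
    adapted = residual_inject(vit_tokens, prompts.mean(dim=1))
    print(prompts.shape, adapted.shape)

In the actual method, per the abstract, the retrieved prompts adapt a pre-trained ViT and the residual mechanism transfers CLIP semantics into the ViT layers; see the released codebase at https://github.com/aimagelab/mammoth for the real implementation.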
IRIS type:
Conference proceedings paper (Relazione in Atti di Convegno)
Keywords:
Continual learning; Prompt-based learning; Rehearsal-free
List of contributors:
Menabue, M.; Frascaroli, E.; Boschini, M.; Sangineto, E.; Bonicelli, L.; Porrello, A.; Calderara, S.
Authors of the University:
CALDERARA Simone
PORRELLO Angelo
SANGINETO Enver
Handle:
https://iris.unimore.it/handle/11380/1372319
Book title:
Proceedings of the 18th European Conference on Computer Vision
Published in:
LECTURE NOTES IN COMPUTER SCIENCE (series)
URL:
http://arxiv.org/abs/2403.06870v3