Publication Date:
2025
Short description:
Shifted Window Fourier Transform and Retention for Image Captioning / Hu, Jia Cheng; Cavicchioli, R.; Capotondi, A. - 15293:(2025), pp. 165-179. (31st International Conference on Neural Information Processing, ICONIP 2024, New Zealand, 2024) [10.1007/978-981-96-6596-9_12].
abstract:
Image Captioning is an important Vision-and-Language task that finds application in a variety of contexts, ranging from healthcare to autonomous vehicles. As many real-world applications rely on devices with limited resources, much effort in the field has been put into the development of lighter and faster models. However, most current optimizations focus on the Transformer architecture, despite the existence of more efficient methods. In this work, we introduce SwiFTeR, an architecture based almost entirely on the Fourier Transform and Retention, to tackle the main efficiency bottlenecks of current light image captioning models: the cost of the visual backbone and the quadratic complexity of the decoder. SwiFTeR has only 20M parameters and requires 3.1 GFLOPs for a single forward pass. Additionally, it scales better with caption length, and its small memory footprint allows more images to be processed in parallel than traditional Transformer-based architectures; for instance, it can generate 400 captions in one second. Although the caption quality is currently lower (110.2 CIDEr-D), most of the decrease is attributable not to the architecture but to an incomplete training practice, which leaves much room for improvement. Overall, SwiFTeR points toward a promising direction for new efficient architectural designs. The implementation code will be released in the future.
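The abstract contrasts the quadratic cost of Transformer self-attention with Fourier-based token mixing. SwiFTeR's implementation is not yet public, so as a rough, hypothetical illustration only, the sketch below shows FNet-style Fourier mixing (a 2D FFT over the sequence and hidden dimensions, keeping the real part), which runs in O(n log n) rather than O(n²) in the sequence length; the function name and shapes are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def fourier_mixing(x):
    """FNet-style token mixing (illustrative, not SwiFTeR's code):
    apply a 2D FFT over the sequence and hidden dimensions and
    keep only the real part. This mixes information across tokens
    in O(n log n), replacing quadratic self-attention."""
    return np.fft.fft2(x).real

rng = np.random.default_rng(0)
tokens = rng.standard_normal((8, 4))  # 8 tokens, hidden size 4
mixed = fourier_mixing(tokens)
print(mixed.shape)  # (8, 4): output shape matches the input
```

Because the mixing layer has no learned parameters, it also contributes nothing to the parameter count, which is consistent with the small 20M-parameter budget the abstract reports.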
Iris type:
Conference paper in proceedings (Relazione in Atti di Convegno)
Keywords:
Fourier-Transform; Image-Captioning; Retention
List of contributors:
Hu, Jia Cheng; Cavicchioli, R.; Capotondi, A.
Book title:
Lecture Notes in Computer Science
Published in: