AZIM: Arabic-Centric Zero-Shot Inference for Multilingual Topic Modeling With Enhanced Performance on Summarized Text

Academic Article
Publication Date:
2025
Short description:
AZIM: Arabic-Centric Zero-Shot Inference for Multilingual Topic Modeling With Enhanced Performance on Summarized Text / Aftar, Sania; Rehman, Abdul; Bergamaschi, Sonia; Gagliardelli, Luca. - In: IEEE ACCESS. - ISSN 2169-3536. - 13:(2025), pp. 114370-114383. [10.1109/access.2025.3584309]
Abstract:
Topic modeling is an unsupervised learning technique that is extensively used for discovering latent topics in large text corpora. However, existing models often fall short in cross-lingual scenarios, particularly for morphologically rich and low-resource languages such as Arabic. Cross-lingual topic analysis extracts shared topics across languages but often relies on resource-intensive datasets or limited translation dictionaries, restricting its diversity and effectiveness. Transfer learning offers a promising solution to these challenges. This paper presents AZIM, an Arabic-centric extension of ZeroShotTM, adapted to use Arabic as the training language for zero-shot multilingual topic modeling. The model’s performance is evaluated across diverse Latin-script and non-Latin-script languages, focusing on its adaptability to Modern Standard Arabic (MSA) and Classical Arabic (CA). Additionally, the study explores the impact of summarized versus general text. The results show that the summarized versions of the datasets consistently outperform their baselines in terms of interpretability and coherence. The model also demonstrates robust cross-lingual generalization, with non-Latin scripts such as Persian and Urdu outperforming certain Latin-script languages. However, performance variations across languages reflect the complex nature of multilingual embeddings. The performance gap between Modern Standard Arabic and Classical Arabic reveals a limitation of the pre-trained embeddings, namely their bias toward modern corpora. These findings underscore the importance of adapting techniques to morphologically rich and low-resource languages in order to enhance cross-lingual topic modeling.
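The zero-shot pathway the abstract describes (train the topic-inference network on Arabic contextualized embeddings, then reuse it unchanged for other languages) can be sketched schematically. The sketch below is illustrative only, not the authors' implementation: the weights are randomly initialized, and the dimensions (`EMB_DIM`, `N_TOPICS`) and the `infer_topics` helper are invented for the example. It shows the inference step of a ZeroShotTM-style model, where a document embedding from any language passes through the same network because a multilingual encoder maps all languages into one shared space.

```python
import numpy as np

rng = np.random.default_rng(0)

EMB_DIM = 768    # dimension of the multilingual sentence embedding (assumed)
N_TOPICS = 20    # number of latent topics (assumed)

# Inference-network parameters; in a real model these would be learned
# from Arabic training documents, here they are random placeholders.
W = rng.normal(scale=0.02, size=(EMB_DIM, N_TOPICS))
b = np.zeros(N_TOPICS)

def infer_topics(doc_embedding: np.ndarray) -> np.ndarray:
    """Map a contextualized document embedding to a topic distribution
    via a linear layer followed by a softmax."""
    logits = doc_embedding @ W + b
    exp = np.exp(logits - logits.max())   # numerically stable softmax
    return exp / exp.sum()

# Because the encoder is multilingual, an Arabic document and, say, an
# Urdu document are embedded into the same space, so the identical
# network performs inference for both — no retraining, no translation.
arabic_emb = rng.normal(size=EMB_DIM)   # stand-in for encoder output
urdu_emb = rng.normal(size=EMB_DIM)     # stand-in for encoder output

theta_ar = infer_topics(arabic_emb)
theta_ur = infer_topics(urdu_emb)
```

Each resulting vector is a valid probability distribution over the shared topics, which is what makes the cross-lingual comparison in the paper possible.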
IRIS type:
Journal article (Articolo su rivista)
Keywords:
Low resource languages; MSA and classical Arabic; multilingual embeddings; zero-shot cross-lingual topic modeling
List of contributors:
Aftar, Sania; Rehman, Abdul; Bergamaschi, Sonia; Gagliardelli, Luca
Authors of the University:
BERGAMASCHI Sonia
GAGLIARDELLI Luca
Handle:
https://iris.unimore.it/handle/11380/1398444
Full Text:
https://iris.unimore.it//retrieve/handle/11380/1398444/956640/AZIM_Arabic-Centric_Zero-Shot_Inference_for_Multilingual_Topic_Modeling_With_Enhanced_Performance_on_Summarized_Text.pdf
Published in:
IEEE ACCESS (Journal)
