Summarizing Indian Languages using Multilingual Transformers based Models

03/29/2023
by Dhaval Taunk, et al.

With the advent of multilingual models such as mBART, mT5, and IndicBART, summarization in low-resource Indian languages is receiving growing attention. However, the number of available datasets remains small. In this work, we (Team HakunaMatata) study how these multilingual models perform on summarization datasets whose source and target texts are in Indian languages. We experiment with the IndicBART and mT5 models and report ROUGE-1, ROUGE-2, ROUGE-3, and ROUGE-4 scores as performance metrics.
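Since the abstract reports ROUGE-1 through ROUGE-4, a minimal sketch of how ROUGE-N recall is computed may be useful. This is an illustrative implementation, not the authors' evaluation code; published results typically use an established package such as `rouge-score`.

```python
from collections import Counter

def rouge_n(reference: str, candidate: str, n: int) -> float:
    """ROUGE-N recall: overlapping n-grams divided by reference n-grams.

    Illustrative sketch using whitespace tokenization; real evaluations
    apply proper tokenization and often stemming.
    """
    def ngrams(text: str) -> Counter:
        tokens = text.split()
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    ref, cand = ngrams(reference), ngrams(candidate)
    overlap = sum((ref & cand).values())  # clipped n-gram matches
    total = sum(ref.values())
    return overlap / total if total else 0.0

# Example: unigram and bigram overlap between a reference and a summary
print(rouge_n("the cat sat on the mat", "the cat sat", 1))  # → 0.5
print(rouge_n("the cat sat on the mat", "the cat sat", 2))  # → 0.4
```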
