πŸ“… 09 January 2025
DOI: 10.26877/asset.v7i1.1211

Comparative Performance of Transformer Models for Cultural Heritage in NLP Tasks

Advance Sustainable Science, Engineering and Technology
Universitas Persatuan Guru Republik Indonesia Semarang

πŸ“„ Abstract

AI and machine learning are crucial in advancing technology, especially for processing large, complex datasets. The transformer model, a primary approach in natural language processing (NLP), enables applications such as translation, text summarization, and question-answering (QA) systems. This study compares two popular transformer models, FlanT5 and mT5, which are widely used yet often struggle to capture the specific context of a reference text. Using a unique Goddess Durga QA dataset containing specialized cultural knowledge about Indonesia, this research tests how effectively each model handles culturally specific QA tasks. The study involved data preparation, initial model training, ROUGE metric evaluation (ROUGE-1, ROUGE-2, ROUGE-L, and ROUGE-Lsum), and result analysis. Findings show that FlanT5 outperforms mT5 on multiple metrics, making it better at preserving cultural context. These results are relevant to NLP applications that rely on cultural insight, such as cultural-preservation QA systems and context-based educational platforms.
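The evaluation step described above scores model answers against reference answers with ROUGE. As a rough illustration of what those metrics measure, below is a minimal pure-Python sketch of ROUGE-1 and ROUGE-L F1 as they are typically defined (n-gram overlap and longest-common-subsequence overlap); the study itself presumably used a standard ROUGE implementation, so this is illustrative only.

```python
from collections import Counter

def rouge_n(reference: str, candidate: str, n: int = 1) -> float:
    """ROUGE-N F1: n-gram overlap between a reference and a candidate answer."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    ref_ngrams = ngrams(reference.split(), n)
    cand_ngrams = ngrams(candidate.split(), n)
    overlap = sum((ref_ngrams & cand_ngrams).values())  # clipped n-gram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand_ngrams.values())
    recall = overlap / sum(ref_ngrams.values())
    return 2 * precision * recall / (precision + recall)

def rouge_l(reference: str, candidate: str) -> float:
    """ROUGE-L F1: based on the longest common subsequence (LCS) of tokens."""
    ref, cand = reference.split(), candidate.split()
    # Dynamic-programming table for LCS length.
    dp = [[0] * (len(cand) + 1) for _ in range(len(ref) + 1)]
    for i, r in enumerate(ref, 1):
        for j, c in enumerate(cand, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if r == c else max(dp[i - 1][j], dp[i][j - 1])
    lcs = dp[-1][-1]
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(cand), lcs / len(ref)
    return 2 * precision * recall / (precision + recall)
```

For example, comparing a generated answer that differs from the reference by one word out of four yields an F1 of 0.75 on both metrics, whereas an exact match yields 1.0. ROUGE-2 and ROUGE-Lsum follow the same pattern (bigrams, and LCS applied sentence by sentence, respectively).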

πŸ”– Keywords

Artificial Intelligence; Machine Learning; NLP Cultural Knowledge; Transformer Models; QA Goddess Durga Dataset

ℹ️ Publication Information

Publication Date
09 January 2025
Volume / Number / Year
Volume 7, Number 1, 2025

πŸ“ HOW TO CITE

Suryanto, Tri Lathif Mardi; Wibawa, Aji Prasetya; Hariyono, Hariyono; Nafalski, Andrew, "Comparative Performance of Transformer Models for Cultural Heritage in NLP Tasks," Advance Sustainable Science, Engineering and Technology, vol. 7, no. 1, Jan. 2025.


πŸ“š References & Citations

This article has been cited by 6 other publications.
