AURORA-M: Open Source Continual Pre-training for Multilingual Language and Code
Affiliations: Institute of Science Tokyo, Japan; MIT-IBM Watson Lab; Sapienza University of Rome, Italy; Babelscape; LAION; and others.
2025 (English). In: Proceedings of the 31st International Conference on Computational Linguistics: Industry Track, ed. Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert, Kareem Darwish, and Apoorv Agarwal. Association for Computational Linguistics (ACL), 2025, pp. 656-678. Conference paper, published paper (Other academic).
Abstract [en]

Pretrained language models are an integral part of AI applications, but their high training cost limits accessibility. Initiatives such as Bloom and StarCoder aim to democratize access to pretrained models for collaborative community development. Despite these efforts, such models face challenges including limited multilingual capabilities, the risk of catastrophic forgetting during continual pretraining, and the high cost of training models from scratch, alongside the need to align with AI safety standards and regulatory frameworks. This paper presents Aurora-M, a 15B-parameter multilingual open-source model trained on English, Finnish, Hindi, Japanese, Vietnamese, and code. Continually pretrained from StarCoderPlus on 435B additional tokens, Aurora-M surpasses 2T tokens in total training count. It is the first open-source multilingual model fine-tuned on human-reviewed safety instructions, aligning its development not only with conventional red-teaming considerations but also with the specific concerns articulated in the Biden-Harris Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. We evaluate Aurora-M across a wide range of tasks and languages, showcasing its robustness against catastrophic forgetting and its superior performance in multilingual settings, particularly in safety evaluations. We open-source Aurora-M and its variants to encourage responsible open-source development of large language models at https://huggingface.co/aurora-m.
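The abstract points to released checkpoints at https://huggingface.co/aurora-m. As a minimal sketch of how such a checkpoint could be loaded with the Hugging Face transformers library, the snippet below assumes an illustrative repository id (aurora-m/aurora-m-base); the actual variant names should be taken from the hub page.

# Minimal sketch: loading an Aurora-M checkpoint with Hugging Face transformers.
# The repository id "aurora-m/aurora-m-base" is an assumption; check
# https://huggingface.co/aurora-m for the actual published variants.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aurora-m/aurora-m-base"  # assumed variant name, not confirmed by this record

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 15B parameters: half precision to reduce memory
    device_map="auto",           # spread layers across available devices
)

# The model is trained on natural language and code, so a code prompt works too.
prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))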

Place, publisher, year, edition, pages
Association for Computational Linguistics (ACL), 2025, pp. 656-678.
National Category
Natural Language Processing
Research subject
Machine Learning
Identifiers
URN: urn:nbn:se:ltu:diva-112342
Scopus ID: 2-s2.0-105000111106
OAI: oai:DiVA.org:ltu-112342
DiVA id: diva2:1951605
Conference
31st International Conference on Computational Linguistics (COLING 2025), Abu Dhabi, UAE, January 19-24, 2025
Note

ISBN for host publication: 979-8-89176-197-1

Available from: 2025-04-11. Created: 2025-04-11. Last updated: 2025-10-21. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Scopus
Publisher's full text

Authority records

Adewumi, Tosin
