Where are we Still Split on Tokenization?


Many Natural Language Processing (NLP) tasks are labeled at the token level, and for these tasks the first step is to identify the tokens (tokenization). Because this step is often considered to be a solved problem, gold tokenization is commonly assumed. In this paper, we investigate whether this task is indeed solved by supervised tokenizers. To this end, we propose an efficient multi-task model for tokenization that performs on par with the state of the art. We use this model to reflect on the status of performance on the tokenization task by evaluating on 122 languages in 20 different scripts. We show that tokenization performance depends mainly on the amount and consistency of annotated data, as well as on the difficulty of the task in different writing systems. We conclude that, apart from inconsistencies in the data and exceptional cases, the task can be considered solved for Latin-script languages in in-dataset settings ($>99.5$ F1). However, performance is 0.75 F1 points lower on average for datasets in other scripts, and it deteriorates further in cross-dataset setups.\footnote{Code is
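As a concrete illustration of the evaluation setup described above, the following is a minimal sketch (not the paper's implementation) of how tokenization is typically scored with F1: predicted and gold tokens are mapped to character-offset spans, and a predicted token counts as correct only if both of its boundaries match a gold token. The function names `token_spans` and `tokenization_f1` are illustrative, not from the paper.

```python
def token_spans(text, tokens):
    """Map a token sequence onto (start, end) character offsets in `text`."""
    spans, pos = [], 0
    for tok in tokens:
        start = text.index(tok, pos)  # assumes tokens appear in order in the text
        end = start + len(tok)
        spans.append((start, end))
        pos = end
    return spans

def tokenization_f1(text, gold_tokens, pred_tokens):
    """Span-level precision, recall, and F1 between gold and predicted tokens."""
    gold = set(token_spans(text, gold_tokens))
    pred = set(token_spans(text, pred_tokens))
    tp = len(gold & pred)  # tokens whose start and end both match a gold span
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f1

# Example: a contraction split differently by gold and predicted tokenizations.
text = "Don't stop."
gold = ["Do", "n't", "stop", "."]
pred = ["Don't", "stop", "."]
print(tokenization_f1(text, gold, pred))  # (0.667, 0.5, 0.571)
```

Scoring spans rather than surface strings matters here: a tokenizer that outputs the right strings at the wrong positions, or merges two gold tokens into one, is penalized on both precision and recall, which is what makes near-perfect F1 on Latin-script datasets a strong claim.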