ArXiv Preprint
Sanskrit is a classical language with about 30 million extant manuscripts fit
for digitisation, available in written, printed or scanned-image forms. However,
it is still considered to be a low-resource language when it comes to available
digital resources. In this work, we release a post-OCR text correction dataset
containing around 218,000 sentences, with 1.5 million words, from 30 different
books. Texts in Sanskrit are known to be diverse in terms of their linguistic
and stylistic usage since Sanskrit was the 'lingua franca' for discourse in the
Indian subcontinent for about 3 millennia. Keeping this in mind, we release a
multi-domain dataset drawn from areas as diverse as astronomy, medicine and
mathematics, with some of the texts as old as 18 centuries. Further, we release
multiple strong baselines as benchmarks for the task, based on pre-trained
Seq2Seq language models. We find that our best-performing model, consisting of
byte-level tokenization in conjunction with phonetic encoding (ByT5+SLP1),
yields an improvement of around 23 percentage points over the OCR output in terms of word and character
error rates. Moreover, we perform extensive experiments evaluating these
models and analyse common causes of mispredictions at both the graphemic and
lexical levels. Our code and dataset are publicly available at
https://github.com/ayushbits/pe-ocr-sanskrit.
Ayush Maheshwari, Nikhil Singh, Amrith Krishna, Ganesh Ramakrishnan
2022-11-15
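
To make the two technical ingredients of the abstract concrete, the sketch below illustrates (a) the SLP1 phonetic encoding that feeds a byte-level (ByT5-style) tokenizer and (b) a character error rate (CER) computation of the kind used to compare OCR output against corrected text. This is a minimal illustration, not the authors' released code: it assumes the third-party indic_transliteration package, and the example sentences and the plain Levenshtein-based CER are hypothetical stand-ins for the paper's own evaluation scripts, which are available in the linked repository.

    # Minimal sketch (not the paper's pipeline): SLP1 encoding + a simple CER metric.
    # Assumes `pip install indic_transliteration`; the authors may use other tooling.
    from indic_transliteration import sanscript
    from indic_transliteration.sanscript import transliterate


    def to_slp1(devanagari_text: str) -> str:
        """Map Devanagari to SLP1, a one-ASCII-character-per-phoneme romanisation,
        so a byte-level tokenizer sees short, unambiguous units."""
        return transliterate(devanagari_text, sanscript.DEVANAGARI, sanscript.SLP1)


    def edit_distance(a: str, b: str) -> int:
        """Plain character-level Levenshtein distance (dynamic programming)."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, start=1):
            curr = [i] + [0] * len(b)
            for j, cb in enumerate(b, start=1):
                curr[j] = min(prev[j] + 1,               # deletion
                              curr[j - 1] + 1,           # insertion
                              prev[j - 1] + (ca != cb))  # substitution
            prev = curr
        return prev[-1]


    def cer(hypothesis: str, reference: str) -> float:
        """Character error rate: edit distance normalised by reference length."""
        return edit_distance(hypothesis, reference) / max(len(reference), 1)


    if __name__ == "__main__":
        gold = "धर्मक्षेत्रे कुरुक्षेत्रे"   # hypothetical reference text
        ocr = "धर्मक्षेत्र कुरुक्षेत्रे"     # hypothetical noisy OCR output
        print(to_slp1(gold))                  # e.g. "Darmakzetre kurukzetre"
        print(f"CER of OCR output: {cer(to_slp1(ocr), to_slp1(gold)):.3f}")

In this setup, a post-OCR correction model is judged better than the raw OCR output when the CER (and the analogous word error rate) of its predictions against the gold text is lower.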