In International journal of information technology : an official journal of Bharati Vidyapeeth's Institute of Computer Applications and Management

The usage of various software applications has grown tremendously due to the onset of Industry 4.0, giving rise to the accumulation of all forms of data. Scientific, biological, and social media text collections demand efficient machine learning methods for data interpretability, which organizations need in decision-making of all sorts. Topic models can be applied in text mining of biomedical articles, scientific articles, Twitter data, and blog posts. This paper analyzes and compares the performance of the Latent Dirichlet Allocation (LDA), Dynamic Topic Model (DTM), and Embedded Topic Model (ETM) techniques. An incremental topic model with word embedding (ITMWE) is proposed that processes large text data in an incremental environment and extracts the latent topics that best describe the document collections. Experiments in both offline and online settings on large real-world document collections such as CORD-19, NIPS papers, and Tweet datasets show that, while LDA and DTM are good models for discovering word-level topics, ITMWE discovers better document-level topic groups more efficiently in a dynamic environment, which is crucial in text mining applications.

Avasthi Sandhya, Chauhan Ritu, Acharjya Debi Prasanna

2022-Nov-20

Embedded topic model, Probabilistic machine learning, Scientific documents, Topic embedding, Topic model, Twitter data