
In IEEE Transactions on Neural Networks and Learning Systems

Training machines to understand natural language and interact with humans is one of the major goals of artificial intelligence. Recent years have witnessed an evolution from matching networks to pretrained language models (PrLMs). In contrast to the plain-text modeling that PrLMs focus on, dialog texts involve multiple speakers and exhibit special characteristics, such as topic transitions and structural dependencies between distant utterances. However, the related PrLM models commonly represent dialogs sequentially by processing the pairwise dialog history as a whole. Thus, the hierarchical information on utterance interrelations and speaker roles coupled in such representations is not well addressed. In this work, we propose compositional learning for holistic interaction across the utterances, beyond the sequential contextualization from PrLMs, in order to capture the utterance-aware and speaker-aware representations entailed in a dialog history. We decouple the contextualized word representations by masking mechanisms in the transformer-based PrLM, making each word attend only to the words in the current utterance, in other utterances, and in each of the two speaker roles (i.e., utterances of the sender and utterances of the receiver), respectively. In addition, we employ domain-adaptive training strategies to help the model adapt to the dialog domains. Experimental results show that our method substantially boosts strong PrLM baselines on four public benchmark datasets, achieving new state-of-the-art performance over previous methods.
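The abstract describes decoupling contextualized representations with attention masks so that each token attends only to the current utterance, to other utterances, or to one speaker's utterances. The following is a minimal sketch of how such masks could be constructed; it is not the authors' released code, and the function and variable names (build_decoupling_masks, utt_ids, spk_ids) are illustrative assumptions.

```python
import torch

def build_decoupling_masks(utt_ids: torch.Tensor, spk_ids: torch.Tensor):
    """Build four [seq_len, seq_len] boolean attention masks.

    utt_ids: utterance index of each token, shape [seq_len]
    spk_ids: speaker id of each token (0 = sender, 1 = receiver), shape [seq_len]
    A True entry means the query token (row) may attend to the key token (column).
    """
    # Token pairs that belong to the same utterance
    same_utt = utt_ids.unsqueeze(0) == utt_ids.unsqueeze(1)

    current_mask = same_utt          # attend only within the current utterance
    other_mask = ~same_utt           # attend only to other utterances

    # Attend only to keys spoken by one role, regardless of the query token
    sender_mask = (spk_ids == 0).unsqueeze(0).expand_as(same_utt)
    receiver_mask = (spk_ids == 1).unsqueeze(0).expand_as(same_utt)

    return current_mask, other_mask, sender_mask, receiver_mask

# Toy dialog: utterance 0 by the sender, utterance 1 by the receiver
utt_ids = torch.tensor([0, 0, 0, 1, 1])
spk_ids = torch.tensor([0, 0, 0, 1, 1])
masks = build_decoupling_masks(utt_ids, spk_ids)
```

Each mask would then be applied to a transformer self-attention layer (e.g., as a boolean or additive attention mask) to yield one decoupled view of the dialog history, and the resulting utterance-aware and speaker-aware representations could be fused downstream.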

Zhuosheng Zhang, Hai Zhao, Longxiang Liu

2022-Nov-14