
In Frontiers in Robotics and AI

Natural language is inherently a discrete symbolic representation of human knowledge. Recent advances in machine learning (ML) and natural language processing (NLP) seem to contradict this intuition: discrete symbols are fading away, replaced by vectors and tensors called distributed and distributional representations. However, there is a strict link between distributed/distributional representations and discrete symbols, the former being an approximation of the latter. A clearer understanding of this link may well lead to radically new deep learning networks. In this paper we present a survey that aims to renew the connection between symbolic representations and distributed/distributional representations. This is the right time to revitalize the study of how discrete symbols are represented inside neural networks.
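The claim that distributed representations approximate discrete symbols can be made concrete with a minimal sketch (not taken from the survey itself): random projection maps one-hot symbol vectors into a dense, lower-dimensional space in which the original symbols remain approximately recoverable by nearest-neighbour decoding. All names, sizes, and the toy vocabulary below are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["cat", "dog", "runs", "sleeps", "quickly"]  # hypothetical toy vocabulary
V, d = len(vocab), 64  # V discrete symbols, d-dimensional distributed space

# One-hot encoding: a purely discrete, symbolic representation
one_hot = np.eye(V)

# Distributed encoding: a random projection of the one-hot vectors.
# Scaling by 1/sqrt(d) keeps each row's norm close to 1.
W = rng.standard_normal((V, d)) / np.sqrt(d)
distributed = one_hot @ W  # each row is a dense vector standing in for one symbol

def decode(vec):
    """Approximately recover the discrete symbol nearest to a dense vector."""
    sims = distributed @ vec  # dot-product similarity against every symbol vector
    return vocab[int(np.argmax(sims))]

# With d large relative to V, random vectors are nearly orthogonal,
# so each symbol decodes back to itself with high probability.
recovered = [decode(distributed[i]) for i in range(V)]
```

The point of the sketch is that the dense vectors carry the same distinctions as the discrete symbols only approximately: shrinking `d` increases interference between symbols, which is the sense in which distributed representations approximate symbolic ones.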

Lorenzo Ferrone, Fabio Massimo Zanzotto


compositional distributional semantic models, compositionality, concatenative compositionality, deep learning (DL), distributed representation, natural language processing (NLP)