
In Computational Intelligence and Neuroscience

With the rapid development of computer technology, the loss of long-distance information during transmission has become a prominent problem in English machine translation. To address this, the self-attention (SA) mechanism is combined with the convolutional neural network (CNN) and the long short-term memory (LSTM) network, and an English intelligent translation model based on LSTM-SA is proposed; its performance is compared against other deep neural network models. The study adds SA to the LSTM neural network and constructs an LSTM-SA attention-embedded English translation model. Compared with other deep learning algorithms such as RNN and GRU, the LSTM-SA algorithm converges faster and reaches a lower loss, with the loss value finally stabilizing at about 8.6. Under all three adaptability values, the LSTM-SA network structure is more accurate than the plain LSTM, and when the adaptability is 1, the accuracy of the LSTM-SA network improves fastest, by nearly 20%. Compared with other deep learning algorithms, the LSTM-SA algorithm also achieves a better translation level under each of the three hidden-layer configurations. The proposed LSTM-SA model can thus carry out English intelligent translation more effectively, enhancing the representation of source-language context information and improving the performance and quality of the English machine translation model.
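The abstract does not give the model's exact wiring, but the core idea of embedding self-attention into an LSTM encoder can be illustrated with a minimal PyTorch sketch. Everything below is an assumption for illustration: the class name `LSTMSelfAttention`, the layer sizes, the bidirectional LSTM, and the single multi-head attention layer are not taken from the paper.

```python
import torch
import torch.nn as nn

class LSTMSelfAttention(nn.Module):
    """Hypothetical sketch of an LSTM encoder with self-attention (LSTM-SA).

    Layer sizes and wiring are illustrative assumptions, not the paper's
    actual architecture.
    """

    def __init__(self, vocab_size, embed_dim=256, hidden_dim=256, num_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # A bidirectional LSTM captures local, sequential context.
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        # Self-attention re-weights the LSTM states so that distant source
        # tokens can directly influence each position, mitigating the
        # long-distance information loss the abstract describes.
        self.attn = nn.MultiheadAttention(2 * hidden_dim, num_heads,
                                          batch_first=True)
        self.proj = nn.Linear(2 * hidden_dim, vocab_size)

    def forward(self, src_tokens):
        x = self.embed(src_tokens)    # (batch, seq, embed_dim)
        h, _ = self.lstm(x)           # (batch, seq, 2 * hidden_dim)
        ctx, _ = self.attn(h, h, h)   # self-attention over all LSTM states
        return self.proj(ctx)         # per-position vocabulary scores

# Toy usage with random token ids (checks shapes only; not a trained model).
model = LSTMSelfAttention(vocab_size=1000)
tokens = torch.randint(0, 1000, (2, 12))
print(model(tokens).shape)  # torch.Size([2, 12, 1000])
```

The key design point this sketch reflects is that attention is applied on top of the recurrent states rather than replacing them, so the model keeps the LSTM's sequential encoding while letting every position attend to the whole source sentence.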

Yang Yifang

2022