In Journal of Chemical Information and Modeling

Efforts to build a robust multitask model that captures intertask correlations have continued for many years. The multitask deep neural network, the most widely used multitask framework, nevertheless suffers from several issues, such as inconsistent performance gains over independent single-task baselines. This research introduces an alternative framework based on problem transformation methods. We build our multitask models on the stacking of a base regressor and classifier, where multitarget predictions are produced by an additional training stage on an expanded molecular feature space. The architecture is evaluated on the QM9, Alchemy, and Tox21 datasets with a variety of baseline machine learning techniques. The resulting multitask models show a 1-10% improvement in predictive accuracy, with task performance consistently exceeding that of the independent single-target models. The proposed method demonstrates notable strength in handling intertarget dependence and considerable potential for modeling a wide range of molecular properties under the transformation framework.
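The two-stage scheme the abstract describes can be sketched minimally as follows. This is an illustrative assumption of the stacking idea, not the authors' exact setup: the toy descriptors, the correlated targets, and the choice of closed-form ridge regression as the base learner are all hypothetical stand-ins.

```python
import numpy as np


def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression weights: (X^T X + lam*I)^-1 X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)


rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                 # toy "molecular descriptors"
W = rng.normal(size=(8, 3))
Y = X @ W + 0.1 * rng.normal(size=(200, 3))   # three correlated targets

# Stage 1: independent single-target baseline models, one per property.
W1 = [ridge_fit(X, Y[:, t]) for t in range(Y.shape[1])]
P1 = np.column_stack([X @ w for w in W1])

# Stage 2: expand the feature space with the stage-1 predictions and
# retrain, so each target can exploit intertarget dependence.
X_exp = np.hstack([X, P1])
W2 = [ridge_fit(X_exp, Y[:, t]) for t in range(Y.shape[1])]
P2 = np.column_stack([X_exp @ w for w in W2])

print(P2.shape)  # multitarget predictions, one column per property
```

In this sketch the stage-2 models see both the original descriptors and every stage-1 prediction, which is how the expanded feature space lets correlated targets inform each other; a classifier base learner would play the analogous role for the Tox21 classification tasks.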

Tan Zheng, Li Yan, Shi Weimei, Yang Shiqing