In Studies in health technology and informatics ; h5-index 23.0

In previous work, we implemented a deep learning model with CamemBERT and PyTorch, and built a microservices architecture using the TorchServe serving library. Without TorchServe, inference was three times faster when the model was loaded once into memory than when it was reloaded for each request. The preloaded model without TorchServe showed inference times comparable to those of the TorchServe instance. However, using a preloaded PyTorch model in a web application without TorchServe would require implementing functionality that TorchServe already provides.
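The cost difference between per-request loading and preloading can be sketched as follows. This is a minimal illustration of the preload-once pattern, not the paper's implementation: `DummyModel` is a hypothetical stand-in for the CamemBERT model, with a sleep simulating the expensive weight-loading step.

```python
import time

class DummyModel:
    """Hypothetical stand-in for a CamemBERT model; construction is the costly step."""
    def __init__(self):
        time.sleep(0.05)  # simulate loading weights from disk

    def predict(self, text):
        return len(text) % 2  # placeholder inference, not a real prediction

def infer_per_request(texts):
    # Anti-pattern: the model is reloaded for every request.
    return [DummyModel().predict(t) for t in texts]

_PRELOADED = DummyModel()  # loaded once, e.g. at service startup

def infer_preloaded(texts):
    # Preload pattern: the in-memory model is reused across requests.
    return [_PRELOADED.predict(t) for t in texts]

texts = ["exemple"] * 10
t0 = time.perf_counter()
infer_per_request(texts)
per_request = time.perf_counter() - t0

t0 = time.perf_counter()
infer_preloaded(texts)
preloaded = time.perf_counter() - t0
```

With a real model the gap comes from deserializing weights rather than a sleep, but the structure is the same; a serving layer such as TorchServe applies this preloading automatically and adds batching, versioning, and monitoring on top.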

Guerdoux Guillaume, Tiffet Théophile, Bousquet Cedric


Artificial Intelligence, COVID-19, MLOps, Social Media, Vaccines