
In Advanced Materials (Deerfield Beach, Fla.)

Deep learning has become ubiquitous, touching daily lives across the globe. Today, traditional computer architectures are stressed to their limits in efficiently executing the growing complexity of data and models. Compute-in-memory (CIM) can play an important role in developing efficient hardware solutions that reduce data movement between compute units and memory, known as the von Neumann bottleneck. At its heart is a cross-bar architecture with nodal non-volatile memory elements that performs an analog multiply-and-accumulate operation, enabling the matrix-vector multiplications used repeatedly in all neural network workloads. The memory materials can significantly influence final system-level characteristics and chip performance, including speed, power, and classification accuracy. With an overarching co-design viewpoint, this review assesses the use of cross-bar-based CIM for neural networks, connecting the material properties and the associated design constraints and demands to application, architecture, and performance. We consider both digital and analog memory, assess the status of both training and inference, and provide metrics for the collective set of properties that existing and new non-volatile memory materials will need to demonstrate for a successful CIM technology.
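To make the crossbar multiply-and-accumulate concrete, the sketch below models the idealized physics the abstract alludes to: weights stored as device conductances G, inputs applied as word-line voltages V, and each bit-line current summing contributions per Kirchhoff's current law, so the column currents realize one matrix-vector multiplication. All names and values here are illustrative assumptions, not from the article, and the model ignores real-device effects (noise, nonlinearity, wire resistance) that the review discusses.

```python
def crossbar_mvm(conductances, voltages):
    """Ideal crossbar matrix-vector multiply.

    conductances: rows x cols grid of device conductances (siemens),
                  one non-volatile memory element per cross-point
    voltages:     per-row input voltages (volts)
    returns:      per-column output currents (amperes), I_j = sum_i V_i * G_ij
    """
    n_rows = len(conductances)
    n_cols = len(conductances[0])
    currents = [0.0] * n_cols
    for i in range(n_rows):        # each word line carries one input voltage
        for j in range(n_cols):    # each bit line accumulates current from every row
            currents[j] += voltages[i] * conductances[i][j]
    return currents

# Hypothetical 2x3 weight matrix mapped to conductances, one input vector.
G = [[1.0, 0.5, 0.0],
     [0.2, 0.3, 1.0]]
V = [0.1, 0.2]
print(crossbar_mvm(G, V))  # roughly [0.14, 0.11, 0.2]
```

The point of the analog scheme is that this entire multiply-and-accumulate happens in one step across the array, rather than as the nested loop a digital processor would execute, which is where the data-movement savings come from.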

Wilfried Haensch, Anand Raghunathan, Kaushik Roy, Bhaswar Chakrabarti, Charudatta M. Phatak, Cheng Wang, Supratik Guha

2022-Dec-29

Compute-in-memory, analog compute, cross-bar array, non-volatile memory