Type of work:
The so-called memory wall presents an opportunity for neuromorphic accelerators that perform computations directly inside the memory array where the network's parameters are stored, namely Compute-in-Memory (CIM). However, this approach faces several challenges for large-scale system design. The main bottlenecks lie in the power-hungry analog-to-digital converters (ADCs) and in process-variability effects arising from analog computation. The basic processing element in such a system is an array (or crossbar) of memory devices. This work focuses on FeFET technology as the storage unit for the memory crossbar and targets architectures that rely on bit slicing/decomposition (10.1109/ASAP49362.2020.00027). The main motivation behind this work is to design and optimize the memory array and the mixed-signal blocks surrounding it in order to address the aforementioned bottlenecks in CIM.
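The bit-slicing idea referred to above can be illustrated with a minimal functional sketch: an N-bit weight matrix is decomposed into 1-bit slices, each slice is mapped to its own crossbar (modeled here as a simple dot product standing in for analog current summation), and the per-slice partial sums are recombined digitally by shift-and-add. All names, the 4-bit precision, and the unsigned-weight assumption are illustrative, not taken from the referenced architecture.

```python
W_BITS = 4  # assumed weight precision (illustrative)

def bit_slices(weights, n_bits=W_BITS):
    """Decompose unsigned integer weights into 0/1 bit slices, LSB first."""
    return [[(w >> b) & 1 for w in weights] for b in range(n_bits)]

def crossbar_dot(slice_bits, inputs):
    """One crossbar column: analog current summation, modeled as a dot product."""
    return sum(s * x for s, x in zip(slice_bits, inputs))

def sliced_mvm(weights, inputs, n_bits=W_BITS):
    """Recombine per-slice partial sums by shift-and-add."""
    return sum(crossbar_dot(sl, inputs) << b
               for b, sl in enumerate(bit_slices(weights, n_bits)))

weights = [5, 3, 12, 7]   # 4-bit unsigned weights
inputs  = [1, 0, 2, 1]    # digital input activations
# The sliced result must match the direct matrix-vector product.
assert sliced_mvm(weights, inputs) == sum(w * x for w, x in zip(weights, inputs))
```

In a real CIM pipeline, each `crossbar_dot` output would pass through an ADC before the shift-and-add step, which is why reducing ADC precision per slice is central to the power budget.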
- CMOS Analog and Mixed-signal design techniques
- Deep neural networks
- Memory technologies