3rd Intl. Workshop on Emerging Memory Solutions & Applications
Co-located with the DATE'18 conference on March 23, 2018 in Dresden, Germany
Evolving memory technologies are triggering intense interdisciplinary activity. They have the potential to provide many benefits, such as energy efficiency, density, re-configurability, non-volatility, novel computational structures and approaches, and massive parallelism, and could thereby overcome both technology and design limitations (e.g., leakage and the memory wall) to meet the requirements of today's and future applications such as IoT, big data, and healthcare. These characteristics may force a deep rethinking of existing computing and storage paradigms.
This workshop (third edition) aims at providing a forum to discuss challenges, trends, solutions and applications of these rapidly evolving memory technologies by gathering researchers and engineers from academia and industry; it also aims at creating a unique network of competence and experts in all aspects of emerging memory solutions and technologies including manufacturing, architectures, design, automation and test.
Call for Papers
- General Chair: Said Hamdioui – TU Delft (NL)
- Program Co-Chair: Christian Weis – TU Kaiserslautern (DE)
- Program Co-Chair: Bastien Giraud – CEA-LETI (FR)
- Panel Chair: Ian O’Connor – INL (FR)
- Publicity Chair: Matthias Jung – Fraunhofer IESE (DE)
- Proceedings Chair: Jean-Philippe Noel – CEA-LETI (FR)
- Steering Committee Member: Erik Jan Marinissen – IMEC (BE)
You are invited to participate and submit your contributions to the DATE 2018 Friday Workshop on Emerging Memory Solutions and Applications. The areas of interest include (but are not limited to) the following topics:
- Volatile embedded memories (DRAM, SRAM, CAM etc.),
- Advanced non-volatile memories (3D Flash, ReRAM, PCM, MRAM, STT-MRAM etc.),
- Memories based on emerging devices/technologies (TFET, nanowires, CNTs, etc.), 3D stacking of memories,
- Logic and arithmetic designs based on emerging memristive devices,
- Application-specific memory solutions such as near memory computing,
- Computing paradigms based on emerging memristive devices such as neuromorphic computing, computing-in-memory,
- Automation, compilers, programming models, etc. for the above designs and architectures.
Submissions are invited in the form of (extended) abstracts not exceeding two pages and must be sent as a PDF file to <email@example.com> with “DATE18-EMS-WS” as the subject. All submissions will be evaluated for suitability for the workshop, originality, and technical quality. Selected submissions will be accepted for oral or poster presentation. At the workshop, an Electronic Workshop Digest will be made available to all participants, including all material that authors are willing to provide: abstract, paper, slides, poster, etc.
|Paper Submission Deadline:||January 8, 2018 |
|Notification of Acceptance:||January 10, 2018|
|Camera-Ready Material:||February 28, 2018|
This one-day event will consist of two plenary keynotes, regular and poster presentations, and a panel session. The panel of the workshop will be about the topic "Deep Learning: what is the best memory to choose?". We are looking forward to a very lively discussion.
|08:40||Opening||Welcome Address||Christian Weis (TUK)|
|08:45||Keynote||"Processing Data Where It Makes Sense in Modern Computing Systems: Enabling In-Memory Computation"||Onur Mutlu (ETHZ, CMU)|
|Special Session||on Emerging Neuromorphic Applications||Chair: Pascal Vivet (CEA)|
|SpiNNaker2: Energy Efficient Neuromorphic Computing in 22nm FDSOI CMOS||Sebastian Höppner|
|10:00||Coffee Break||Poster Session 1|
|10:30||Industry Talk||"Applications of phase-change memory in non-von Neumann computing"||Manuel Le Gallo (IBM)|
|11:00||Panel||"Deep Learning: what is the best memory to choose?"||Chair: Ian O'Connor (ECL)|
|Panelists: Andrew Walker (Spin Transfer Technologies, Schiltron), Elisa Vianello (CEA-Leti), Manuel Le Gallo (IBM), Paul Franzon (NCSU), Pieter Weckx (IMEC)|
|13:00||Keynote||"Memory-Centric Architectures for Artificial Intelligence"||Paul Franzon (NCSU)|
|13:30||Special Session||on In-Memory Computing||Chair: Said Hamdioui (TU Delft)|
|Memristive Memory Processing Unit (mMPU) for Real Processing in Memory||Nishil Talati (TECHNION)|
|The Processing In Memory Revolution||Fabrice Devaux (UPMEM)|
|Conceptual design of a RISC-V compatible processor core using ternary arithmetic unit with memristive MLC storage||Dietmar Fey (FAU)|
|14:30||Coffee Break||Poster Session 2|
|Special Session||on Emerging RRAMs and Alternatives||Chair: Bastien Giraud (CEA)|
|Challenges for Memristive Circuit Design||Anne Siemon (RWTH)|
|Reliability Modeling Framework for||Rajendra Bishnoi (KIT)|
|Compact modeling of resistive switching memories||Marc Bocquet (IM2NP)|
|16:00||Paper Session||Open Call Paper Session||Chair: Matthias Jung (Fraunhofer)|
|Quantifying the Performance Overhead of NVML||William Wang (ARM)|
|RRAM-based Automata Processor||Yu (TU Delft)|
|Energy Efficient DRAM Cache with Unconventional Row Buffer Size||
|3-D Priority Address Encoder for 3-D Content Addressable Memories||
|Asynchronous Ultra Wide Voltage Range Two-Port SRAM Circuit for Fast Wake-up IoT Platform||
Keynote and Invited Talk Speakers
Prof. Onur Mutlu:
Onur Mutlu is a Professor of Computer Science at ETH Zurich. He is also a faculty member at Carnegie Mellon University, where he previously held the William D. and Nancy W. Strecker Early Career Professorship. His current broader research interests are in computer architecture, systems, and bioinformatics. He is especially interested in interactions across domains and between applications, system software, compilers, and microarchitecture, with a major current focus on memory and storage systems. A variety of techniques he and his group have invented over the years have influenced industry and have been employed in commercial microprocessors and memory/storage systems. He obtained his PhD and MS in ECE from the University of Texas at Austin and BS degrees in Computer Engineering and Psychology from the University of Michigan, Ann Arbor. His industrial experience spans starting the Computer Architecture Group at Microsoft Research (2006-2009), and various product and research positions at Intel Corporation, Advanced Micro Devices, VMware, and Google. He received the inaugural IEEE Computer Society Young Computer Architect Award, the inaugural Intel Early Career Faculty Award, faculty partnership awards from various companies, and a healthy number of best paper or "Top Pick" paper recognitions at various computer systems and architecture venues. His computer architecture course lectures and materials are freely available on YouTube, and his research group makes software artifacts freely available online. For more information, please see his webpage at http://people.inf.ethz.ch/omutlu/.
"Processing Data Where It Makes Sense in Modern Computing Systems: Enabling In-Memory Computation"
Abstract: Today's systems are overwhelmingly designed to move data to computation. This design choice goes directly against at least three key trends in systems that cause performance, scalability and energy bottlenecks: 1) data access from memory is already a key bottleneck as applications become more data-intensive and memory bandwidth and energy do not scale well, 2) energy consumption is a key constraint, especially in mobile and server systems, 3) data movement is very expensive in terms of bandwidth, energy and latency, much more so than computation. These trends are felt especially severely in the data-intensive server and energy-constrained mobile systems of today.
At the same time, conventional memory technology is facing many scaling challenges in terms of reliability, energy, and performance. As a result, memory system architects are open to organizing memory in different ways and making it more intelligent, at the expense of slightly higher cost. The emergence of 3D-stacked memory plus logic, as well as the adoption of error-correcting codes inside the latest DRAM chips, is evidence of this trend.
In this talk, I will discuss some recent research that aims to practically enable computation close to data. After motivating trends in applications as well as technology, we will discuss at least two promising directions: 1) performing massively-parallel bulk operations in memory by exploiting the analog operational properties of DRAM, with low-cost changes, 2) exploiting the logic layer in 3D-stacked memory technology in various ways to accelerate important data-intensive applications. In both approaches, we will discuss relevant cross-layer research, design, and adoption challenges in devices, architecture, systems, and programming models. Our focus will be the development of in-memory processing designs that can be adopted in real computing platforms at low cost.
Prof. Paul D. Franzon:
Paul D. Franzon is currently the Cirrus Logic Distinguished Professor of Electrical and Computer Engineering at North Carolina State University. He earned his Ph.D. from the University of Adelaide, Adelaide, Australia in 1988. He has also worked at AT&T Bell Laboratories, DSTO Australia, Australia Telecom and three companies he cofounded, Communica, LightSpin Technologies and Polymer Braille Inc. His current interests center on the technology and design of complex microsystems incorporating VLSI, MEMS, advanced packaging and nano-electronics. He has led several major efforts and published over 200 papers in these areas. In 1993 he received an NSF Young Investigators Award; in 2001 he was selected to join the NCSU Academy of Outstanding Teachers; in 2003 he was selected as a Distinguished Alumni Professor; and in 2005 he received the Alcoa Research Award. He served with the Australian Army Reserve for 13 years as an Infantry Soldier and Officer. He is a Fellow of the IEEE.
"Memory-Centric Architectures for Artificial Intelligence"
Abstract: There is much current interest in building custom accelerators for machine learning and machine intelligence algorithms. However, at their root, many of these algorithms are very memory intensive in both capacity and bandwidth needs. Thus there is a need for memory-processor codesign to get the most out of these algorithms. This talk presents multiple options to achieve high performance.
A 3DIC logic-on-memory stack has been designed to support the compute needs of multiple parallel inference engines running deep networks simultaneously, for example as would be needed for autonomous vehicles. The DRAM is adapted from the Tezzaron DiRAM4 and supports over 130 Tbps of memory bandwidth with potential for 64 GB of capacity.
A Processor In Memory architecture with Application Specific Instruction features has been designed to support Sparse Hierarchical Temporal algorithms that permit in-situ learning. It achieves over a 20x performance improvement and over a 500x power-efficiency improvement compared with a GPU implementation.
Currently we are designing 2.5D versions of these accelerators as well as accelerators for quantized deep learning and Long Short Term Memory (LSTM) algorithms. Preliminary results from this activity will be presented.