Processing-in-Memory

Processing deep learning workloads inside memory

Research Description

Processing-in-memory (PIM) is an innovative computing paradigm that aims to overcome the limitations of the traditional von Neumann architecture, which separates the processing units (CPU/GPU) from the memory units (RAM). This separation creates a bottleneck known as the “memory wall,” where the time and energy costs of data movement between the processor and memory significantly impact overall system performance.

PIM integrates computational capabilities directly into memory units. Instead of shuttling data back and forth between the CPU/GPU and RAM, PIM allows data to be processed within the memory itself. This approach can drastically reduce data movement, leading to improvements in speed, energy efficiency, and overall system performance.
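The memory wall can be made concrete with a back-of-envelope estimate. The sketch below models a fully connected layer (a matrix-vector multiply, a typical memory-bound DNN kernel that PIM targets) and compares the time spent computing against the time spent fetching weights from DRAM. All hardware numbers here are illustrative assumptions, not measurements of any particular system.

```python
# Hypothetical back-of-envelope model of the "memory wall" for a
# fully connected layer (matrix-vector multiply). All parameter
# values below are assumptions chosen for illustration.

M, N = 4096, 4096              # layer dimensions (assumed)
flops = 2 * M * N              # one multiply-accumulate per weight
bytes_moved = M * N * 2        # FP16 weights streamed from DRAM

compute_tput = 10e12           # 10 TFLOP/s peak compute (hypothetical)
dram_bw = 50e9                 # 50 GB/s DRAM bandwidth (hypothetical)

t_compute = flops / compute_tput       # time if compute were the limit
t_memory = bytes_moved / dram_bw       # time to stream the weights

print(f"compute time: {t_compute * 1e6:.2f} us")
print(f"memory time:  {t_memory * 1e6:.2f} us")
# Under these assumptions, data movement takes ~200x longer than the
# arithmetic: the processor mostly idles waiting on DRAM, which is
# exactly the gap PIM tries to close by computing inside the memory.
```

Because each weight is used only once per inference, no amount of caching helps; the kernel is bandwidth-limited by construction.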

Your Job

  • Understanding the concepts of computational intensity, memory bandwidth, and memory-bound workloads.
  • Understanding various memory technologies.
  • Evaluating workloads and identifying computational or memory bottlenecks.
  • Designing a novel PIM architecture or PIM software stack.
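The first and third items above amount to a roofline-style analysis: a kernel is memory-bound when its arithmetic intensity (FLOPs per byte of memory traffic) falls below the machine balance point, peak compute divided by peak bandwidth. A minimal sketch of that check, with illustrative (assumed) hardware numbers:

```python
# Roofline-style classification of a workload as compute- or
# memory-bound. The hardware parameters are illustrative assumptions.

def is_memory_bound(flops, bytes_moved, peak_flops, peak_bw):
    """Return True if the kernel's arithmetic intensity (FLOPs per
    byte) is below the machine balance point peak_flops / peak_bw."""
    intensity = flops / bytes_moved
    balance = peak_flops / peak_bw
    return intensity < balance

# GEMV: 2*M*N FLOPs over M*N two-byte weights -> ~1 FLOP/byte,
# far below a balance point of 10 TFLOP/s / 50 GB/s = 200 FLOP/byte.
M = N = 4096
print(is_memory_bound(2 * M * N, 2 * M * N,
                      peak_flops=10e12, peak_bw=50e9))  # True
```

Kernels that land below the balance point gain little from faster arithmetic units; they are the natural candidates for a PIM architecture, which raises the effective bandwidth by moving the compute to the data.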

Related Papers:

  1. SCIE
    S-FLASH: A NAND Flash-Based Deep Neural Network Accelerator Exploiting Bit-Level Sparsity
    Myeonggu Kang, Hyeonuk Kim, Hyein Shin, Jaehyeong Sim, Kyeonghan Kim, and Lee-Sup Kim
    IEEE Transactions on Computers, vol. 71, no. 6, pp. 1291–1304, 2021
  2. Top-Tier
    A PVT-Robust Customized 4T Embedded DRAM Cell Array for Accelerating Binary Neural Networks
    Hyein Shin, Jaehyeong Sim, Daewoong Lee, and Lee-Sup Kim
    In 2019 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)
  3. Top-Tier
    An Energy-Efficient Processing-in-Memory Architecture for Long Short Term Memory in Spin Orbit Torque MRAM
    Kyeonghan Kim, Hyein Shin, Jaehyeong Sim, Myeonggu Kang, and Lee-Sup Kim
    In 2019 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)
  4. Top-Tier
    NAND-Net: Minimizing Computational Complexity of In-Memory Processing for Binary Neural Networks
    Hyeonuk Kim, Jaehyeong Sim, Yeongjae Choi, and Lee-Sup Kim
    In 2019 IEEE International Symposium on High Performance Computer Architecture (HPCA)
  5. Top-Tier
    NID: Processing Binary Convolutional Neural Network in Commodity DRAM
    Jaehyeong Sim, Hoseok Seol, and Lee-Sup Kim
    In 2018 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)