AI Computing Platform Laboratory

Department of Computer Science and Engineering, Artificial Intelligence Convergence, Ewha Womans University


The AI Computing Platform Laboratory (ACPL) has been a lab of excellence for efficient AI hardware & software research since its founding in 2021.

Our main research goal is to make AI models run faster and more energy-efficiently through HW/SW co-design. Specifically, our research interests include:

  • Designing Neural Processing Unit (NPU) and domain-specific hardware
  • Making AI models efficient through techniques such as quantization, pruning, and knowledge distillation (see the sketch after this list)
  • Hardware-aware neural architecture search (HW-Aware NAS) and neural architecture accelerator search (NAAS)
  • Processing-in-memory (PIM)
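
For readers unfamiliar with these topics, here is a minimal, hypothetical sketch of one of them: symmetric 8-bit post-training weight quantization. The function names and the toy weight matrix are illustrative assumptions, not code from ACPL.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights to int8 with a single per-tensor scale (symmetric scheme)."""
    scale = np.max(np.abs(weights)) / 127.0           # largest magnitude maps to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights, e.g. to check the accuracy impact."""
    return q.astype(np.float32) * scale

# Toy example: a random 256x256 weight matrix standing in for a real layer.
w = np.random.randn(256, 256).astype(np.float32)
q, s = quantize_int8(w)
print("mean abs quantization error:", np.mean(np.abs(w - dequantize(q, s))))
```
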

Lab introduction video

Information on current member recruitment and guidance for prospective lab members

news

Oct 11, 2024 Doeun Kim has joined our group as an undergraduate intern. Welcome!
Oct 01, 2024 Suhyeon Kim has joined our group as an undergraduate intern. Welcome!
Sep 19, 2024 Jaeyoung Choi has joined our group as an undergraduate intern. Welcome!
Sep 14, 2024 Our lab has two papers accepted for presentation at the 2024 International Conference on Communications, Computing, Cybersecurity, and Informatics (CCCI).
Sep 05, 2024 Wonhui Roh and Jungyeon Ha have joined our group as undergraduate interns. Welcome!

latest publications

  1. Accepted
    AutoCaps-Zero: Searching for Hardware-Efficient Squash Function in Capsule Networks
    Jieui Kang, Sooyoung Kwon, Hyojin Kim, and Jaehyeong Sim
    In 2024 International Conference on Communications, Computing, Cybersecurity, and Informatics (CCCI)
  2. Accepted
    OCW: Enhancing Few-Shot Learning with Optimized Class-Weighting Methods
    Jieui Kang, Subean Lee, Eunseo Kim, Soeun Choi, and Jaehyeong Sim
    In 2024 International Conference on Communications, Computing, Cybersecurity, and Informatics (CCCI)
  3. An Energy-Efficient Hardware Accelerator for On-Device Inference of YOLOX
    Kyungmi Kim, Soeun Choi, Eunkyeol Hong, Yoonseo Jang, and Jaehyeong Sim
    In 2024 21st International SoC Design Conference (ISOCC)
  4. BS2: Bit-Serial Architecture Exploiting Weight Bit Sparsity for Efficient Deep Learning Acceleration
    Eunseo Kim, Subean Lee, Chaeyun Kim, HaYoung Lim, Jimin Nam, and Jaehyeong Sim
    In 2024 21st International SoC Design Conference (ISOCC)
  5. AlphaAccelerator: An Automatic Neural FPGA Accelerator Design Framework Based on GNNs
    Jiho Lee, Jieui Kang, Eunjin Lee, Yejin Lee, and Jaehyeong Sim
    In 2024 21st International SoC Design Conference (ISOCC)