AI Computing Platform Laboratory
Department of Computer Science and Engineering, Artificial Intelligence Convergence, Ewha Womans University

As AI models continue to grow in size, they require vast amounts of energy, making sustainable AI infeasible if current trends persist. Consequently, the robust computing HW/SW infrastructure underpinning AI is becoming critical.
Our main research goal is to run AI models faster and more energy-efficiently through HW/SW co-design. Specifically, our research interests include:
- Neural Processing Unit (NPU), domain-specific hardware, FPGA
- Quantization, pruning, and knowledge distillation
- Hardware-aware neural architecture search (HW-Aware NAS) and neural architecture accelerator search (NAAS)
- Processing-in-memory (PIM)
- Efficient LLM serving including KV Caching and other optimizations
- On-Device AI
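As a small illustration of one technique from the list above, here is a minimal sketch of symmetric per-tensor int8 post-training quantization. The function names are illustrative only and do not come from any lab codebase.

```python
def quantize_int8(weights):
    """Map float weights to int8 using a single per-tensor scale.

    The scale maps the largest-magnitude weight to +/-127; a zero
    tensor falls back to scale 1.0 to avoid division by zero.
    """
    scale = max(abs(w) for w in weights) / 127.0 or 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

# Example: quantize a tiny weight vector and measure the round-trip error.
w = [0.12, -0.5, 0.33, 0.9]
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# Per-weight reconstruction error is bounded by scale / 2.
```

Real deployments typically use per-channel scales and calibration data rather than a single per-tensor scale, but the core idea of trading precision for memory and energy is the same.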
news
Aug 08, 2025 | Our lab has two papers accepted for presentation at the 22nd International SoC Design Conference (ISOCC).
Jul 01, 2025 | Jihyun Lee, Yoon Ji Choi, Nadam Park, and Jaelin Lee have joined our group as undergraduate interns. Welcome!
Jun 23, 2025 | Dahyun Choi has joined our group as an undergraduate intern. Welcome!
May 23, 2025 | The paper authored by our lab members Minseo Kim, Suhyeon Kim, and Jiyeon Ha, who are concurrently participating in a capstone design project and an undergraduate research internship, has been accepted for presentation at the 2025 Summer Annual Conference of IEIE (2025 대한전자공학회 하계학술대회). Congratulations!
Apr 02, 2025 | Prof. Sim received a Teaching Excellence Award for the 2024-2 semester.
latest publications
- [Accepted] LoRA-PIM: In-Memory Delta-Weight Injection for Multi-Adapter LLM Serving. In 2025 22nd International SoC Design Conference (ISOCC).
- [Accepted] GATHER: A Gated-Attention Accelerator for Efficient LLM Inference. In 2025 22nd International SoC Design Conference (ISOCC).
- [Undergraduate achievement] An Integrated HPO-NAS Framework for Hardware-Optimized Transformer Design under Memory Capacity Constraints. In 2025 Summer Annual Conference of IEIE (2025년도 대한전자공학회 하계학술대회).
- ViT-Slim: Genetic Algorithm-based NAS Framework for Efficient Vision Transformer Design. In 2025 IEEE International Conference on Artificial Intelligence (CAI).