AI Computing Platform Laboratory
Department of Computer Science and Engineering, Artificial Intelligence Convergence, Ewha Womans University

As AI models continue to grow in size, they consume vast amounts of energy, making sustainable AI infeasible if current trends persist. Consequently, the robust computing HW/SW infrastructure underpinning AI is becoming critical.
Our main research goal is to run AI models faster and more energy-efficiently through HW/SW co-design. Specifically, our research interests include:
- Neural Processing Unit (NPU), domain-specific hardware, FPGA
- Quantization, pruning, and knowledge distillation
- Hardware-aware neural architecture search (HW-Aware NAS) and neural architecture accelerator search (NAAS)
- Processing-in-memory (PIM)
- Efficient LLM serving including KV Caching and other optimizations
- On-Device AI
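To give a flavor of the model-compression topics above, the following is a minimal, illustrative sketch of symmetric per-tensor INT8 post-training quantization, one of the simplest forms of the quantization techniques we study (function names are hypothetical, not from any lab codebase):

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor INT8 quantization: w ~= scale * q."""
    scale = np.abs(w).max() / 127.0  # map the largest magnitude to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float tensor from INT8 codes."""
    return q.astype(np.float32) * scale

# Quantize a random weight matrix and check the reconstruction error,
# which for symmetric rounding is at most half a quantization step.
w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
assert np.max(np.abs(w - w_hat)) <= s / 2 + 1e-6
```

Storing `q` instead of `w` cuts weight memory by 4x versus FP32, which is the kind of trade-off (accuracy vs. memory/energy) that hardware-aware compression research quantifies.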
news
| Sep 15, 2025 | Eunseo Ko has joined our group as an undergraduate intern. Welcome! |
| Sep 01, 2025 | Jihyo Han has joined our group as a Master's student. Welcome! |
| Sep 01, 2025 | Jihee Choi, Hyun Young Kim, Wonjung Moon, Ingyeong Yang, Eunseo Shin, and Aria Yeonju Park have joined our group as undergraduate interns. Welcome! |
| Aug 08, 2025 | Our lab has two papers accepted for presentation at the 22nd International SoC Design Conference (ISOCC). |
| Jul 01, 2025 | Jihyun Lee, Yoon Ji Choi, Nadam Park, and Jaelin Lee have joined our group as undergraduate interns. Welcome! |
latest publications
- (Accepted) LoRA-PIM: In-Memory Delta-Weight Injection for Multi-Adapter LLM Serving. In 2025 22nd International SoC Design Conference (ISOCC)
- (Accepted) GATHER: A Gated-Attention Accelerator for Efficient LLM Inference. In 2025 22nd International SoC Design Conference (ISOCC)
- (Undergraduate achievement) An Integrated HPO-NAS Framework for Hardware-Optimized Transformer Design under Memory Capacity Constraints. In 2025 IEIE Summer Annual Conference
- ViT-Slim: Genetic Algorithm-based NAS Framework for Efficient Vision Transformer Design. In 2025 IEEE International Conference on Artificial Intelligence (CAI)