AI Computing Platform Laboratory
Department of Computer Science and Engineering, Artificial Intelligence Convergence, Ewha Womans University

As AI models continue to grow in size, they require vast amounts of energy, making sustainable AI infeasible if current trends persist. Consequently, robust computing HW/SW infrastructure underpinning AI will become increasingly critical.
Our main research goal is to run AI models faster and more energy-efficiently through HW/SW co-design. Specifically, our research interests include:
- Neural Processing Unit (NPU), domain-specific hardware, FPGA
- Quantization, pruning, and knowledge distillation
- Hardware-aware neural architecture search (HW-Aware NAS) and neural architecture accelerator search (NAAS)
- Processing-in-memory (PIM)
- Efficient LLM serving including KV Caching and other optimizations
- On-Device AI
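As a minimal illustration of one of these directions, the sketch below shows symmetric per-tensor int8 post-training quantization, one of the simplest compression schemes: weights are mapped to 8-bit integers with a single shared scale factor. The function names and values here are illustrative, not taken from any of our projects.

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: w is approximated by scale * q."""
    scale = max(abs(w) for w in weights) / 127.0  # map the largest magnitude to 127
    q = [max(-127, min(127, round(w / scale))) for w in weights]  # clamp to int8 range
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 codes."""
    return [v * scale for v in q]

weights = [0.82, -1.27, 0.03, 0.5]
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
# Rounding error per weight is bounded by scale / 2
max_err = max(abs(a - b) for a, b in zip(weights, recovered))
```

In practice, per-channel scales and calibration over activation statistics reduce the accuracy loss further, which is where hardware-aware co-design (e.g., NPU datapath width) interacts with the compression choice.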
news
- May 23, 2025: The paper authored by our lab members Minseo Kim, Suhyeon Kim, and Jiyeon Ha, who are concurrently participating in a capstone design project and an undergraduate research internship, has been accepted for presentation at the 2025 Summer Annual Conference of IEIE (2025 대한전자공학회 하계학술대회). Congratulations!
- Apr 02, 2025: Prof. Sim received the Teaching Excellence Award for the 2024-2 semester.
- Mar 04, 2025: Our lab has one paper accepted for presentation at the IEEE Conference on Artificial Intelligence (CAI) 2025.
- Feb 24, 2025: HaYoung, Subean, and Kyungmi have graduated. Congratulations!
- Dec 24, 2024: Our lab has one paper accepted for publication in IEEE Access.
latest publications
- [Undergraduate achievement] An Integrated HPO-NAS Framework for Hardware-Optimized Transformer Design under Memory Capacity Constraints. In 2025 Summer Annual Conference of IEIE (2025년도 대한전자공학회 하계학술대회)
- ViT-Slim: Genetic Algorithm-based NAS Framework for Efficient Vision Transformer Design. In 2025 IEEE International Conference on Artificial Intelligence (CAI)
- Enhancing Gender Prediction Model Performance through Automatic Individual Entity Extraction and Class Balance. In 2025 IEEE International Conference on Big Data and Smart Computing (BigComp)
- [SCIE] PRISM-Med: Parameter-efficient Robust Interdomain Specialty Model for Medical Language Tasks. IEEE Access, vol. 13, pp. 4957-4965, 2025. Collaborative research conducted with NVIDIA
- [SCIE] SpDRAM: Efficient In-DRAM Acceleration of Sparse Matrix-Vector Multiplication. IEEE Access, vol. 12, pp. 176009-176021, 2024