Fifth International Workshop on
Domain Specific System Architecture (DOSSA-5)


Orlando, FL, USA, June 17, 2023
http://prism.sejong.ac.kr/dossa-5


In conjunction with the 50th IEEE International Symposium on Computer Architecture (ISCA-50)


Workshop Schedule

2:00 pm - 2:05 pm (EST)
Workshop Introduction

2:05 pm - 2:40 pm (EST) Invited Talk I
Euicheol Lim, Fellow and leader of the Memory Solution Product Design team, SK Hynix
"Cost-effective GPT inference accelerator using AiM"
(slide)

2:40 pm - 3:15 pm (EST) Invited Talk II
Byeongho Kim, Hardware Engineer at Samsung Electronics DRAM Design Team
"Exploring Processing-in-Memory for memory-bound applications in computing systems"
(slide)

3:15 pm - 3:30 pm (EST) Paper I
Akhil Shekar (University of Virginia); Sabiha Tajdari (University of Virginia); Morteza Baradaran (University of Virginia)*; Kevin Skadron (University of Virginia)
"HashMem : PIM Architecture for accelerated Hashmap Performance"
(slide) (paper)


3:30 pm - 4:00 pm (EST) Break time

4:00 pm - 4:35 pm (EST) Invited Talk III
Amir Yazdanbakhsh, Google Research, Brain Team
"Breaking Barriers: Embracing Machine Learning for the Next Generation of Domain Specific Accelerators"
(slide)

4:35 pm - 4:50 pm (EST) Paper II
Jackson Woodruff (University of Edinburgh)*; Chris Cummins (Facebook AI Research)
"Designing CGRAs with Deep Reinforcement Learning"
(slide) (paper)

4:50 pm - 5:05 pm (EST) Paper III
Hansung Kim (University of California, Berkeley)*; Angie Wang (Apple); Sizhuo Zhang (Apple); Sophia Shao (University of California, Berkeley)
"Cost of Divergence in Ray Tracing: Performance Characterization on CPU and GPU"
(slide) (paper)

5:05 pm - 5:20 pm (EST) Paper IV
Stefan Abi-Karam (Georgia Institute of Technology); Rishov R. Sarkar (Georgia Institute of Technology); Callie Hao (Georgia Institute of Technology)*
"AINR-DSP: FPGA Acceleration of Arbitrary-Order Gradient Computations for Implicit Neural Representation Processing"
(slide) (paper)

5:20 pm - 5:35 pm (EST) Paper V
Zishen Wan (Georgia Institute of Technology)*; Yiming Gan (University of Rochester); Bo Yu (PerceptIn); Arijit Raychowdhury (Georgia Institute of Technology); Yuhao Zhu (University of Rochester)
"VPP: The Vulnerability-Proportional Protection Paradigm Towards Reliable Autonomous Machines"
(slide) (paper)

5:35 pm - 5:40 pm (EST) Closing

CALL FOR PAPERS

Domain-specific systems are an increasingly important computing environment for many people and businesses. As information technology spreads into real-world applications such as autonomous driving, the Internet of Things (IoT), cyber-physical systems (CPS), and health care in the 4th industrial revolution era, interest in specialized domain-specific computing systems is increasing significantly. Beyond the challenges of conventional computing platforms, domain-specific computing systems pose many design challenges of their own, including specialized hardware components such as hardware accelerators, optimized libraries, and domain-specific languages. This workshop focuses on domain-specific system design in both its hardware and software aspects, and on their interaction, in order to improve availability and efficiency in emerging real-world applications. The main theme of this year's workshop is HW/SW components for domain-specific systems. Topics of particular interest include, but are not limited to:

Application analysis and workload characterization to design domain-specific systems for emerging applications such as autonomous driving, IoT, and health care;
Domain-specific processor/system architectures and hardware features for domain-specific systems;
Hardware accelerators for domain-specific systems;
Storage architectures for domain-specific systems;
Experiences in domain-specific system development;
Novel techniques to improve responsiveness by exploiting domain-specific systems;
Novel techniques to improve performance/energy for domain-specific systems;
Performance evaluation methodologies for domain-specific systems;
Application benchmarks for domain-specific systems;
Enabling technologies for domain-specific systems (smart edge devices, smart sensors, energy harvesting, sensor networks, sensor fusion, etc.).

The workshop aims to provide a forum for researchers, engineers, and students from academia and industry to discuss their latest research on designing domain-specific systems for various emerging application areas in the 4th industrial revolution era, to bring their ideas and research problems to the attention of others, and to obtain valuable and immediate feedback from fellow researchers. One of the goals of the workshop is to facilitate lively and rigorous, yet friendly, discussion of research problems in architecture, implementation, networking, and programming, and thus to pave the way for novel solutions that improve both the hardware and software of future domain-specific systems.

Invited Talk I

- Speaker : Eui-cheol Lim, Research Fellow, SK Hynix

- Talk Title : Cost-effective GPT inference accelerator using AiM (SK hynix’s PIM)

- Abstract :
   ChatGPT has opened up the mainstream market for AI services, but it comes with considerably higher operating costs and substantially longer service latency. As GPT model sizes continue to increase, memory-intensive functions take up most of the GPT inference operation, which is why even the latest GPU systems do not provide sufficient performance and energy efficiency. To resolve this, we introduce a faster and cheaper GPT inference accelerator solution using AiM, SK hynix's first PIM device.

- Bio :
    Eui-cheol Lim is a Fellow and the leader of the Memory Solution Product Design team at SK Hynix. He received the B.S. and M.S. degrees from Yonsei University, Seoul, Korea, in 1993 and 1995, and the Ph.D. degree from Sungkyunkwan University, Suwon, Korea, in 2006. Dr. Lim joined SK Hynix in 2016 as a system architect in memory system R&D. Before joining SK Hynix, he worked as an SoC architect at Samsung Electronics, leading the architecture of most Exynos mobile SoCs. His recent research interests are memory and storage system architectures based on new memory media, and new memory solutions such as CXL memory and processing-in-memory.

Invited Talk II

- Speaker : Byeongho Kim, Hardware Engineer, DRAM Design Team, Samsung Electronics

- Talk Title : Exploring Processing-in-Memory for memory-bound applications in computing systems

- Abstract :
   Deep neural networks and big data applications have become mainstream workloads in computer systems due to their effectiveness. Many of these applications are memory-bound, demanding larger memory capacity and bandwidth than traditional applications. Processing-in-memory (PIM) has been proposed as a solution to overcome these limitations, and because PIM devices cost-effectively deliver energy efficiency and bandwidth, they have gained significant interest from both industry and academia. This talk introduces the in-memory processing devices designed by Samsung and the systems that utilize them, and delves into the suitability of PIM systems for accelerating the latest AI applications.

- Bio :
    Byeongho Kim is a hardware engineer in the DRAM Design team at Samsung Electronics. He specializes in processing-in-memory architecture and its associated systems, with a focus on HBM-PIM and next-generation PIM architectures. He holds a Ph.D. and a B.S. from Seoul National University, earned in 2022 and 2017, respectively. His Ph.D. research focused on intelligent memory systems, particularly in-memory processing, high-performance computing, and AI acceleration.

Invited Talk III

- Speaker : Amir Yazdanbakhsh, Google Research, Brain Team

- Talk Title : Breaking Barriers: Embracing Machine Learning for the Next Generation of Domain Specific Accelerators

- Abstract :
    In recent years, computer architecture research has been enriched by the advent of machine learning (ML) techniques. In this talk, we will discuss the interplay between ML and the design of domain-specific architectures. We will then delve into the synergies between these two domains, highlighting their collaborative potential to advance the design of computer systems. We will explore the diverse range of opportunities that ML affords for optimizing various aspects across the entire compute stack, from algorithmic to system-level optimization. Furthermore, we will embark on a journey towards Architecture 2.0, envisioning a future where ML-assisted architecture research takes center stage. This discussion will emphasize the importance of nurturing and fostering a community that embraces data-driven solutions in computer architecture design.

- Bio :
    Amir Yazdanbakhsh received his Ph.D. degree in computer science from the Georgia Institute of Technology. His Ph.D. work has been recognized by various awards, including the Microsoft PhD Fellowship and the Qualcomm Innovation Fellowship. Amir is currently a Research Scientist at Google DeepMind, where he is the co-founder and co-lead of the Machine Learning for Computer Architecture team. His work focuses on leveraging recent machine learning methods and advancements to innovate and design better hardware accelerators. He is also interested in designing large-scale distributed systems for training machine learning applications, and he led the development of a massively large-scale distributed reinforcement learning system that scales to TPU Pods and efficiently manages thousands of actors to solve complex, real-world tasks. His team's work has been covered by media outlets including WIRED, ZDNet, AnalyticsInsight, and InfoQ. Amir was inducted into the ISCA Hall of Fame in 2023.

SUBMISSION GUIDELINE

Submit a 2-page presentation abstract to the web-based submission system (https://cmt3.research.microsoft.com/DoSSA2023) by May 3, 2023. Notification of acceptance will be sent out by May 17, 2023. The final paper and presentation material (to be posted on the workshop web site) are due June 7, 2023. For additional information regarding paper submissions, please contact the organizers.

IMPORTANT DATES

Abstract submission : May 3, 2023
Author notification : May 17, 2023
Final camera-ready paper : June 7, 2023
Workshop : June 17, 2023

Workshop Organizers

Hyesoon Kim, Georgia Tech (hyesoon@cc.gatech.edu)
Giho Park, Sejong Univ. (ghpark@sejong.ac.kr)
Jaewoong Sim, Seoul National Univ. (jaewoong@snu.ac.kr)

Web Chair

Chiwon Han, Sejong Univ. (hc930104@sju.ac.kr)
Sungwun Bae, Sejong Univ. (bay1028@sju.ac.kr)

Prior DOSSA

DOSSA-1 (http://prism.sejong.ac.kr/dossa-1)
DOSSA-2 (http://prism.sejong.ac.kr/dossa-2)
DOSSA-3 (http://prism.sejong.ac.kr/dossa-3)
DOSSA-4 (http://prism.sejong.ac.kr/dossa-4)