http://prism.sejong.ac.kr

CALL FOR PAPERS

In conjunction with the 42nd International Symposium on Computer Architecture (ISCA-42)


 



One of the most important principles in designing today's computing systems is to exploit parallelism. Mobile platforms are no exception, and we find increasingly many instances of parallelism in them. At the hardware level there are multiple processor cores, GPGPUs, accelerators, multiple banks of memory, multiple channels to non-volatile memory chips, and multiple radios, to name a few. At the software level, parallel and concurrent threading techniques are commonly employed to improve responsiveness and throughput in the OS and applications alike. We anticipate that future mobile platforms will make even more extensive and creative use of parallelism.
This workshop focuses on how parallelism is, and can be, utilized in hardware, in software, and in their interaction to improve the user experience on mobile platforms. Topics of particular interest include, but are not limited to:

- Emerging parallel application processor architectures and hardware features in mobile platforms;
- Compelling future applications on mobile platforms that call for unprecedented parallelism;
- Mobile GPGPU architectures and programming models;
- Hardware accelerators for mobile applications;
- Storage architectures in mobile platforms;
- Radio and networking architectures in mobile platforms;
- Compiler support for parallel mobile platforms;
- OS support to accommodate and promote parallelism in mobile platforms;
- Experiences in parallel mobile applications development;
- Novel techniques to improve responsiveness by exploiting parallelism;
- Novel techniques to improve performance/energy by exploiting parallelism;
- Mobile platform performance evaluation methodologies;
- Application benchmarks for mobile platforms;
- Characterization of emerging workloads on mobile platforms; and
- Impact and interaction of emerging technologies on mobile platforms.

The workshop aims at providing a forum for researchers, engineers and students from academia and industry to discuss their latest research in designing mobile platforms and systems, to bring their ideas and research problems to the attention of others, and to obtain valuable and instant feedback from fellow researchers.

Invited talk I

- David Hansquine, Qualcomm
- Title : Mobile Processor Design Pitfalls (slides)
- Abstract : Mobile processor performance has surpassed that of laptop and even desktop computers. While years of research on more conventional processors have provided a trail of breadcrumbs to follow, they may not be entirely applicable to mobile. Tight thermal constraints, limited energy budgets imposed by battery operation, and the requirement to support a diverse and varying set of workloads complicate the design process. This talk explores some of the "traps" or "pitfalls" in simply following the breadcrumbs by examining the underlying assumptions and technologies while presenting data from commercial mobile devices.
- Bio : Since joining Qualcomm in 1995, David Hansquine has designed various wireless modem and microprocessor-related blocks while leading more than a dozen ASICs for both mobile handsets and infrastructure products. He was responsible for several of Qualcomm's baseband processor chips as well as GHz and quad-core application processors for smartphones and tablets. Currently, David is leading a Processor Research team investigating novel circuit and architecture techniques to improve power and performance in mobile devices. On the side, he likes to dabble in developing Android apps. He has 14 patents plus several pending.

Invited talk II

- Ivan Jibaja, Intel
- Title : SIMD.js: Bringing the Full Power of Modern Hardware to JavaScript (slides)
- Abstract : The performance gaps between the web platform and native models are disappearing one after another. This talk focuses on JavaScript language support for hardware data-parallel instructions. As JavaScript performance approaches that of native programming languages through Just-In-Time (JIT) compilation and aggressive type inference, parallelism offers exciting new leaps in performance and power efficiency. While JavaScript programming models for multicore are still under development, SIMD.js, a JavaScript API under active development in a collaboration between Intel, Mozilla, Google, ARM, and Microsoft, brings the low-level data-parallel capabilities of modern microprocessors, namely their Single-Instruction Multiple Data (SIMD) vector instructions, to JavaScript. Already landed in Firefox Nightly, implemented in Microsoft's Edge browser, and fully prototyped in the Chromium browser, SIMD.js delivers speedups averaging 4x on compute-intensive kernels. Historically, SIMD has been very successful in dramatically improving application performance in certain domains, including gaming, image processing, and computer vision. With the recent TC39 approval of advancing SIMD.js to the next stage of standardization for inclusion in ES2016 (ES7), JavaScript execution performance is about to get a lot faster for such domains. (A brief illustrative sketch of the API follows below.)
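
For readers unfamiliar with the API, here is a minimal, hypothetical sketch of a 4-wide vector kernel written against SIMD.js as proposed at the time. The SIMD global and the Float32x4 operations used below (splat, load, mul, store) are assumptions taken from the proposal drafts, are not part of standard JavaScript, and may differ across browser prototypes; the TypeScript declaration is only there to make the sketch self-contained.

    // Hypothetical sketch: the SIMD global and its Float32x4 operations are
    // assumed from the SIMD.js proposal drafts and are not standard JavaScript.
    declare const SIMD: any;

    // Scale four audio samples per loop iteration instead of one.
    function scaleSamples(samples: Float32Array, gain: number): void {
      const g = SIMD.Float32x4.splat(gain);          // broadcast gain into all 4 lanes
      for (let i = 0; i + 4 <= samples.length; i += 4) {
        const v = SIMD.Float32x4.load(samples, i);   // load 4 contiguous floats
        SIMD.Float32x4.store(samples, i, SIMD.Float32x4.mul(v, g)); // multiply, write back
      }
      // A real kernel would also handle a remainder of fewer than 4 samples.
    }

Under a JIT that maps such Float32x4 operations onto native vector instructions, loops like this are the kind of compute-intensive kernels targeted by the speedups cited in the abstract.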

Invited talk III

- Nat Duca, Google
- Title : Chrome Performance at Chrome Scale
- Abstract : Browsers are a key part of both the mobile and desktop experience, and the speed of one's browser is a prominent factor in user satisfaction with a device. Making Chrome faster is thus a huge priority for us. Fortunately, web browsers are so complex that there are endless improvements to be made!
For us, speed is limited not by technology but by cognition: a browser, like an OS, is often so complex that it is hard to understand the structure of the system amid the chaos, and then doubly hard to explain it to others. For instance, a single touch-down event passes asynchronously through 13 threads in order to update the screen in Chrome, with about a half dozen caveats that vary depending on the content of the page at that moment. If you can master the threads' roles and their myriad interactions, you can figure out how to make Chrome faster ... in that one part of Chrome. Making Chrome feel faster overall requires us to repeat this process at scale, across thousands of subsystems and hundreds of engineers.
We solve this insight-at-scale problem with a quirky profiling tool we call "chrome://tracing." The tool gathers huge swaths of data from every data source we can get hold of and then smashes it into a super-dense visualization that lets us see how the system was actually behaving over time. In this talk, I will present a few things we know about Chrome's bottlenecks on mobile devices, showing how those issues manifest in our tracing tools. As will become clear, once you know how to use these tools, making the web faster is a matter of cracking open one of your favorite web pages, grabbing a trace, and digging in.
- Bio : Nat Duca is an engineer on the Chrome web platform team focused on making Chrome go faster, splitting his time between hands-on performance work and benchmark and tool development. From new APIs for the web to raw speed and efficiency improvements, the goal is to make Chrome feel faster, every six weeks.

 

Workshop Program

  9:00 - 9:10 : Welcome and Workshop Introduction

  9:10 - 10:10 : Industrial Invited Talk I: David Hansquine, Qualcomm

Paper Session 1: Mobile GPU

  10:10 - 10:35 : "Leverage Mobile GPUs for Flexible High-Speed Wireless Communication" (slides), Qi Zheng, Cao Gao, Trevor Mudge, University of Michigan

  10:35 - 11:00 : "Offloading to the GPU: An Objective Approach" (slides), Ajaykumar Kannan, Mario Badr, Parisa Khadem Hamedani, Natalie Enright Jerger, University of Toronto

  11:00 - 11:30 : Break

  11:30 - 12:30 : Industrial Invited Talk II: Ivan Jibaja, Intel

  12:30 - 1:30 : Lunch

  1:30 - 2:30 : Industrial Invited Talk III: Nat Duca, Google

Paper Session 2: Cache Architecture

  2:30 - 2:55 : "Adaptive Cache Partitioning on a Composite Core" (slides), Jiecao Yu, Andrew Lukefahr, Shruti Padmanabha, Reetuparna Das, Scott Mahlke, University of Michigan

  2:55 - 3:20 : "Unified Cache: A Case for Low-Latency Communication", Khalid Al-Hawaj, Simone Campanoni, Gu-Yeon Wei, David Brooks, Harvard University

SUBMISSION GUIDELINES

Submit a 2-page presentation abstract via the web-based submission system (https://cmt.research.microsoft.com/PRISM2015/) by March 31, 2015 (extended to April 7, 2015). Notification of acceptance will be sent out by April 22, 2015. Final presentation material (to be posted on the workshop web site) is due June 4, 2015. For additional information regarding paper submissions, please contact the organizers.

IMPORTANT DATES

Abstract submission : April 7, 2015 (extended from March 31, 2015)
Author notification : April 22, 2015
Final camera-ready paper : June 4, 2015
Workshop : June 14, 2015

Workshop Organizers

Sangyeun Cho, Samsung Electronics Co. (sangyeun.cho@samsung.com)
Hyesoon Kim, Georgia Tech (hyesoon@cc.gatech.edu)
Hsien-Hsin Lee, TSMC (hhleeq@tsmc.com)
Giho Park, Sejong Univ. (ghpark@sejong.ac.kr)
Vijay Janapa Reddi, UT Austin (vj@ece.utexas.edu)



Web Chairs

Minkwan Kee, Sejong Univ. (mkkee@sju.ac.kr)
Hyunsu Seon, Sejong Univ. (hsseon@sju.ac.kr)

Prior PRISM

PRISM-1
PRISM-2