University of Michigan
November 2, 2017
11:00am - 12:00pm
Approximate Computing is Easy if You Don't Care about Output Quality
The growth of the microprocessor industry, fueled by Moore's law, has led to great progress in data-intensive applications such as deep learning, natural language processing, and computer vision. However, energy consumption has emerged as a barrier to this growth: transistor threshold voltages have not kept pace with technology scaling, causing dramatic rises in chip power density and limiting future performance scaling.

A key characteristic shared by many of these applications is that absolute correctness of the output is not essential for proper operation. This opens up a new design dimension, known as approximate computing, that trades output correctness for performance and energy savings. Image and video processing applications are well-known candidates for approximation, as consumers can tolerate occasional dropped frames or small losses in resolution during video playback. Machine learning and data analysis on massive data sets offer the opportunity to process representative subsets of the input in a fraction of the time while still yielding high accuracy.

In this talk, I will present our recent work on compiler and run-time software techniques for more generalized approximate computing. While new approximation techniques are important, a key lesson from our work is that quality control is critical: how do we ensure the user experience meets a prescribed level of quality? Current approaches either do not monitor output quality at all or check small, presumably representative subsets of the output by sampling. While these approaches have been shown to produce acceptable average error, they often miss large errors and provide no means to take corrective action. Thus, to bring approximate computing to the mainstream, online detection and correction of large approximation errors is necessary.
I will talk about our work on efficient quality control and how we generalize these techniques to the broader deep learning space.
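To make the two ideas in the abstract concrete, the following is a minimal Python sketch (an illustration, not the speaker's actual system): an expensive per-element kernel is approximated by loop perforation, a random sample of outputs is spot-checked against the exact kernel, and exact re-execution serves as the corrective action when a large error is detected. The function names (`exact`, `approximate`, `checked`) and all thresholds are hypothetical.

```python
import random

def exact(x):
    # Stand-in for an expensive kernel (here, a trivial square).
    return x * x

def approximate(xs, skip=4):
    # Loop perforation: run the exact kernel only on every `skip`-th
    # element and reuse the most recent exact value for skipped ones.
    out = []
    last = 0.0
    for i, x in enumerate(xs):
        if i % skip == 0:
            last = exact(x)
        out.append(last)
    return out

def checked(xs, max_rel_err=0.1, sample_frac=0.1, seed=0):
    # Online quality control: spot-check a random sample of the
    # approximate outputs against the exact kernel. If any sampled
    # relative error exceeds the threshold, take corrective action
    # by recomputing the whole output exactly.
    rng = random.Random(seed)
    out = approximate(xs)
    idxs = rng.sample(range(len(xs)), max(1, int(sample_frac * len(xs))))
    for i in idxs:
        truth = exact(xs[i])
        rel_err = abs(out[i] - truth) / (abs(truth) + 1e-12)
        if rel_err > max_rel_err:
            return [exact(x) for x in xs]  # corrective re-execution
    return out
```

Note the weakness the abstract points out: because only a sample of outputs is checked, a large error at an unsampled index can slip through, which is why purely sampling-based quality control is insufficient on its own.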
Scott Mahlke is a Professor in the Electrical Engineering and Computer Science Department at the University of Michigan, where he leads the Compilers Creating Custom Processors group (http://cccp.eecs.umich.edu). The CCCP group delivers technologies in the areas of customized processors for energy-efficient computing, reliable system design, and compiler code generation for heterogeneous systems. Prior to joining academia, Scott was a Senior Researcher at Hewlett-Packard Laboratories. He was one of the original contributors to both the OpenImpact and Trimaran compilers for VLIW processors. Scott's achievements have been recognized by several awards, including the Morris Wellman Professorship in 2004, the ACM SIGARCH/IEEE-CS TCCA Most Influential Paper Award in 2006, the University of Illinois Young Alumni Achievement Award in 2007, the Ted Kennedy Family Team Excellence Award in 2009, the EECS Outstanding Achievement Award in 2011, and the 2014 Monroe-Brown Foundation Education Excellence Award. Scott received his Ph.D. in Electrical Engineering from the University of Illinois at Urbana-Champaign in 1997 and is an IEEE Fellow.