
IEEE CASS-SV Artificial Intelligence For Industry Forum

Event Information


Date and Time



Intel Santa Clara SC12 auditorium

3600 Juliette Ln

Santa Clara, CA 95054


Event description


IEEE Circuits and Systems Society-Silicon Valley (CAS-SV) Artificial Intelligence For Industry Forum

Topic: Algorithm-architecture co-design for energy-efficient deep learning, including algorithm optimization (e.g., novel numerical representation, network pruning/compression) and accelerator architectures (e.g., programmable SoC).


1:00-1:20pm Sign-in and networking

1:20-1:30pm Opening

1:30-2:15pm Dr. Debbie Marr (Intel)

TITLE: Architecture for Machine Learning

BIO: Debbie is Director of the Accelerator Architecture Research Lab (AAL) in Intel Labs. She leads research in hardware acceleration for machine learning. Her team is exploring innovative and efficient hardware acceleration techniques to address the rapid pace of machine learning algorithm innovation. Their research scope encompasses CPUs, GPUs, accelerators, and FPGAs. In the past, Debbie played leading roles on many of Intel’s key CPU products, from the 386SL to Intel’s leading-edge 2017 Core/Xeon products. She was the chief architect for the Haswell Core, and she was the chief architect of advanced research and development for Intel’s 2017/2018 Core generation. Debbie has a PhD in electrical and computer engineering from the University of Michigan and holds over 20 patents.

2:15-3:00pm Prof. Vivienne Sze (MIT)

TITLE: Energy-Efficient Edge Computing for AI-driven Applications

ABSTRACT: Edge computing near the sensor is preferred over the cloud due to privacy or latency concerns for a wide range of applications including robotics/drones, self-driving cars, smart Internet of Things, and portable/wearable electronics. However, at the sensor there are often stringent constraints on energy consumption and cost in addition to throughput and accuracy requirements. In this talk, we will describe how joint algorithm and hardware design can be used to reduce energy consumption while delivering real-time and robust performance for applications including deep learning, computer vision, autonomous navigation and video/image processing. We will show how energy-efficient techniques that exploit correlation and sparsity to reduce compute, data movement and storage costs can be applied to various AI tasks including object detection, image classification, depth estimation, super-resolution, localization and mapping. Finally, we will discuss how to efficiently maintain flexibility when building energy efficient and high performance accelerators in the rapidly moving field of deep learning.

3:00-3:20pm Networking break

3:20-4:05pm Dr. Mark Sandler (Google)

TITLE: MobileNet: designing efficient architectures for mobile classification, detection and segmentation.

ABSTRACT: In this talk we present the lessons and insights that led us to the design of MobileNet V1 and V2, discuss common optimization techniques, such as quantization, and common pitfalls when designing efficient architectures, as well as show how our insights can guide automated architecture search.

BIO: Mark Sandler is a research scientist at Google, working, among other things, on next-generation high-performance neural networks for mobile vision.

4:05-4:50pm Dr. Jongsoo Park (Facebook)

TITLE: Deep Learning Inference in Facebook Data Centers: Characterization, Performance Optimizations, and Hardware Implications

ABSTRACT: Machine learning (ML), particularly deep learning (DL), is used in many social network services. Despite the recent proliferation of DL accelerators, to provide flexibility, availability, and low latency, many inference workloads are evaluated on CPU servers in the datacenter. As DL models grow in complexity, they take more time to evaluate and thus result in higher compute and energy demands in the datacenter. This talk will present characterizations of DL models used in Facebook social network services to illustrate the need for better co-design of DL algorithms, numerics, and hardware. I will present the computational characteristics of our models, describe high-performance optimizations targeting existing systems, point out limitations of these systems, and suggest implications for future general-purpose/accelerated inference hardware.

4:50-5:20pm Q&A and closing networking


IEEE Circuits and Systems Society

Intel Corporation


IEEE Circuits and Systems Society Santa Clara Valley Chapter

IEEE Communications Society Santa Clara Valley Chapter

IEEE Computational Intelligence Society Santa Clara Valley Chapter

IEEE Computer Society Technical Committee on Multimedia Computing

IEEE Signal Processing Society Santa Clara Valley Chapter

Tau Beta Pi San Francisco Bay Area Alumni Chapter

IEEE Computer Society Technical Committee on Semantic Computing
