Sold Out

NVIDIA Deep Learning Institute Workshop @ University of Sydney - Day 1

Event Information


Date and Time



Business School, ABS Collaborative Learning Studio 3190

University of Sydney

Darlington Campus

Sydney, NSW 2006



Event description


NVIDIA Deep Learning Institute, together with the University of Sydney, is hosting a two-day Accelerated Computing workshop, starting with:

Day 1: CUDA and OpenACC development on GPUs, comprising lectures and hands-on labs, exclusively for students, staff and researchers with verifiable academic affiliation.

In this workshop, you will start with basic CUDA and OpenACC programming skills and quickly move on to learning how to solve real-world problems with them.

Attendees must bring their own laptops and power supply. Connectivity will be available through the University Wi-Fi.

You will learn to:

  • Understand CUDA programming skills
  • Parallelize a matrix multiply algorithm with CUDA
  • Learn data management with GPU Unified Memory
  • Understand OpenACC programming skills

Agenda for Day 1 (February 26)

09:15 Registration

09:30 Guest Speech: How GPU Computing is Boosting the Research of Artificial Intelligence (Prof. Dacheng Tao, The University of Sydney)

10:20 Morning Break

10:30 Lecture: CUDA Programming Skills

11:30 Hands-on lab: Accelerating Applications with CUDA C/C++

13:00 Lunch Break

14:00 Lecture: OpenACC Programming Skills

15:00 Afternoon Break

15:15 Hands-on lab: OpenACC - 2X in 4 Steps

17:00 Running GPU jobs on the USYD Artemis HPC (The University of Sydney)

Content Level: Beginner

Prerequisite: A basic understanding of C/C++ will be helpful for some exercises.

IMPORTANT: To reserve your seat, you MUST register with a valid university email address and follow these pre-workshop instructions.

  • You must bring your own laptop, charger and adaptor (if needed) to this workshop.
  • Please note that access to the same email address used to register for this event on Eventbrite will be required on the event day to create the Qwiklabs account used to run the hands-on labs. Please ensure you use only your university email address.

Training Syllabus - Day 1

Guest Lecture: How GPU Computing is Boosting the Research of Artificial Intelligence

Since the concept of the Turing machine was first proposed in 1936, the capability of machines to perform intelligent tasks has grown exponentially. Artificial Intelligence (AI), as an essential accelerator, pursues the goal of making machines as intelligent as human beings. It has already transformed how we live, work, learn, discover and communicate. In this talk, I will review our recent progress on AI by introducing some representative advances from algorithms to applications, and illustrate the stairway to its realization, from perceiving to learning, reasoning and behaving. To push AI from the narrow to the general, many challenges lie ahead. I will bring some examples out into the open and shed light on our future targets. Today, we teach machines how to be as intelligent as we are. Tomorrow, they will be our partners in daily life.

GPU Lecture 1: CUDA Programming Skills

This lecture will explain the GPU architecture and teach basic CUDA programming skills, such as how to write CUDA kernels, how to utilize GPU threads, and how to optimize GPU memory access. After this lecture, you will understand how to parallelize your sequential code with CUDA and be ready to write and optimize your own CUDA programs.
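As a taste of the concepts named above (this is an illustrative sketch, not workshop material), a minimal CUDA program launches a kernel with one thread per array element and uses Unified Memory so host and device share allocations. It requires nvcc and an NVIDIA GPU to build and run.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Kernel: each GPU thread computes one element of c = a + b.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];                  // guard against overshoot
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Unified Memory: a single pointer usable from both host and device.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;                        // threads per block
    int blocks = (n + threads - 1) / threads; // enough blocks to cover n
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();                  // wait for the kernel to finish

    printf("c[0] = %f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Build with something like `nvcc vecadd.cu -o vecadd`; the lecture covers how the block/grid launch configuration maps onto the GPU's hardware threads.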

GPU Lab 1: Accelerating Applications with CUDA C/C++

Learn how to accelerate your C/C++ application using CUDA to harness the massively parallel power of NVIDIA GPUs. In 90 minutes, you will work through seven exercises, including:

  • Hello Parallelism!
  • Accelerate the simple SAXPY algorithm
  • Accelerate a basic Matrix Multiply algorithm with CUDA
  • Error checking GPU code
  • Querying GPU Devices for capabilities
  • Data management with Unified Memory
  • A case study implementing most of the above
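The SAXPY and error-checking exercises above might look roughly like the following sketch (not the actual lab code; names and structure are assumptions, and it requires nvcc and a GPU):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// SAXPY: y = a*x + y, one element per thread.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

// Minimal error-checking helper, in the spirit of the "Error checking" exercise.
#define CUDA_CHECK(call) do {                                        \
    cudaError_t err = (call);                                        \
    if (err != cudaSuccess) {                                        \
        fprintf(stderr, "CUDA error: %s\n", cudaGetErrorString(err));\
        return 1;                                                    \
    }                                                                \
} while (0)

int main() {
    const int n = 1 << 16;
    float *x, *y;
    CUDA_CHECK(cudaMallocManaged(&x, n * sizeof(float)));  // Unified Memory
    CUDA_CHECK(cudaMallocManaged(&y, n * sizeof(float)));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    CUDA_CHECK(cudaGetLastError());           // catch kernel-launch errors
    CUDA_CHECK(cudaDeviceSynchronize());      // catch runtime errors

    printf("y[0] = %f\n", y[0]);              // 2*1 + 2 = 4
    cudaFree(x); cudaFree(y);
    return 0;
}
```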

GPU Lecture 2: OpenACC Programming Skills

GPU Lab 2: OpenACC - 2X in 4 Steps

Learn how to accelerate your C/C++ or Fortran application using OpenACC to harness the massively parallel power of NVIDIA GPUs. OpenACC is a directive-based approach to computing: you provide compiler hints to accelerate your code, instead of writing the accelerator code yourself. In 90 minutes, you will experience a four-step process for accelerating applications using OpenACC:

  1. Characterize and profile your application
  2. Add compute directives
  3. Add directives to optimize data movement
  4. Optimize your application using kernel scheduling

Running GPU jobs on the USYD Artemis HPC (The University of Sydney):

This session will explain how to connect to the HPC cluster and its V100 GPUs hosted at the University of Sydney.
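For orientation only, a GPU job submission on a PBS-based scheduler such as the one Artemis runs might look like the sketch below. The project code, queue name, resource numbers and module name are all placeholders, not actual Artemis values; the session (and the Artemis user documentation) covers the real ones.

```shell
#!/bin/bash
# Hypothetical PBS job script for a single-GPU job.
#PBS -P your_project_code
#PBS -q defaultQ
#PBS -l select=1:ncpus=4:ngpus=1:mem=16gb
#PBS -l walltime=01:00:00

cd "$PBS_O_WORKDIR"        # run from the directory the job was submitted in
module load cuda           # module name/version depends on the cluster
./my_cuda_app              # e.g. a binary built from the lab exercises
```

A script like this would be submitted with `qsub` and monitored with `qstat`.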

Guest Speaker:

Prof. Dacheng Tao (University of Sydney)

Dacheng Tao (F’15) is Professor of Computer Science and ARC Laureate Fellow in the School of Information Technologies and the Faculty of Engineering and Information Technologies, and the Inaugural Director of the UBTECH Sydney Artificial Intelligence Centre, at the University of Sydney. He mainly applies statistics and mathematics to Artificial Intelligence and Data Science. His research interests spread across computer vision, data science, image processing, machine learning, and video surveillance. His research results are expounded in one monograph and 500+ publications at prestigious journals and prominent conferences, such as IEEE T-PAMI, T-NNLS, T-IP, JMLR, IJCV, NIPS, ICML, CVPR, ICCV, ECCV, ICDM, and ACM SIGKDD, with several best paper awards, such as the best theory/algorithm paper runner-up award at IEEE ICDM’07, the best student paper award at IEEE ICDM’13, the distinguished student paper award at IJCAI 2017, the 2014 ICDM 10-year highest-impact paper award, and the 2017 IEEE Signal Processing Society Best Paper Award. He received the 2015 Australian Scopus-Eureka Prize, the 2015 ACS Gold Disruptor Award and the 2015 UTS Vice-Chancellor’s Medal for Exceptional Research. He is a Fellow of the IEEE, AAAS, OSA, IAPR and SPIE.

NVIDIA Experts:

Nicolas Walker

Nicolas Walker is a Senior Solution Architect at NVIDIA. He supports customers in South East Asia developing data center and workstation solutions in the areas of High Performance Computing, Deep Learning, Virtualized Desktops and Professional Graphics. Before joining NVIDIA in February 2016, Nicolas held solution architect roles at IBM and Lenovo for 15 years, focusing on enterprise infrastructure and HPC. Before moving to Singapore, he was based in Italy, Scotland and Malaysia. He holds a BSc (Hons) in Software Engineering.

Maggie Zhang

Maggie Zhang is a Solutions Architect for HPC and Deep Learning at NVIDIA ANZ. Before joining NVIDIA, she worked as a postdoctoral researcher at Lero (The Irish Software Research Center) in Ireland and as a research fellow at the National University of Defense Technology, China. Her research areas are GPU/CPU heterogeneous computing, compiler optimization, computer architecture and deep learning. She received her PhD in Computer Science & Engineering from the University of New South Wales in 2013.

Michael Lang

Michael Lang has been active in the virtualization and VDI spaces for well over a decade, as a partner, a vendor, and always as an advocate for improved business outcomes. Whether it is with a mining, government or defense customer, or an architectural firm looking to solve challenges or increase productivity, he brings a wealth of knowledge and experience to benefit his customers. In addition, he is the Intelligent Video Analytics Solutions Architect for Deep Learning based solutions at NVIDIA ANZ. Michael is an NVIDIA Certified Deep Learning Instructor who holds multiple technical certifications, a Master's in IT Security, and degrees in Psychology and Philosophy.
