This is a two-day, hands-on workshop for programmers who want to learn how to program and harness the parallel computing power of the Graphics Processing Unit (GPU) using NVIDIA’s CUDA programming framework.
Attendees will need some basic C or C++ knowledge, but no prior knowledge of parallel computing concepts is necessary. A laptop will be required to connect remotely to a CUDA-enabled machine and try out the programming exercises. The course will be held in English.
What Will I Learn?
By the end of this workshop you will have built a number of CUDA-enabled applications and gained an understanding of the CUDA programming methodology that you can apply to your own problems.
1. Introduction & Setup
The course will start by introducing the concepts of general-purpose GPU programming and walk through installing and setting up the development environment on the three operating systems that support CUDA. We will also talk about CUDA bindings for other languages such as Java, Python and Ruby.
2. CUDA Basic Concepts
Then we will give a hands-on introduction to CUDA, introducing the concepts of threads and blocks to learn the fundamental way that CUDA exposes parallelism.
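To give a flavor of this section, here is a minimal vector-addition program of the kind we will write together. It is an illustrative sketch only (names like `vecAdd` are ours); it shows how a grid of blocks and threads maps onto the elements of an array:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread computes one element of c = a + b.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)                                      // guard against out-of-range threads
        c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1024;
    const size_t bytes = n * sizeof(float);

    float h_a[n], h_b[n], h_c[n];
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // 4 blocks of 256 threads each cover all 1024 elements.
    vecAdd<<<4, 256>>>(d_a, d_b, d_c, n);
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %.1f\n", h_c[0]);
    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    return 0;
}
```

The `<<<blocks, threads>>>` launch syntax is the fundamental way CUDA expresses how much parallelism a kernel should use.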
3. Hardware & Memory
The core of the course will involve learning CUDA memory management together with the hardware capabilities of the GPU we are developing on. This will lead us into the different types of GPU memory available to the programmer and their optimal utilization.
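As one example of what this section covers, the sketch below stages data in fast on-chip shared memory to compute a per-block sum, a pattern we will develop step by step in the course (the kernel name and block size are our own choices):

```cuda
// Sketch: block-wise sum reduction using shared memory.
__global__ void blockSum(const float *in, float *out, int n)
{
    __shared__ float cache[256];            // one slot per thread in the block
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    cache[threadIdx.x] = (i < n) ? in[i] : 0.0f;
    __syncthreads();                        // wait until the whole tile is loaded

    // Tree reduction within the block, halving the active threads each step.
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (threadIdx.x < s)
            cache[threadIdx.x] += cache[threadIdx.x + s];
        __syncthreads();
    }
    if (threadIdx.x == 0)
        out[blockIdx.x] = cache[0];         // one partial sum per block
}
```

Shared memory is orders of magnitude faster than global memory, so knowing when and how to use it is central to writing fast kernels.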
4. Streams, Atomics & Graphics
Once we are familiar with the core concepts, we will look at the interoperability of CUDA with graphics rendering, and at atomic primitives, which let us safely accomplish in parallel what is trivial in the traditional CPU case. Then we will cover CUDA streams and the idea of concurrency.
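A classic example of why atomics matter, and one we will work through, is a histogram: many threads may try to increment the same bin at once, so a plain `bins[x]++` would lose updates. A hedged sketch:

```cuda
// Sketch: histogram of byte values using atomicAdd, so concurrent
// increments of the same bin cannot race with one another.
__global__ void histogram(const unsigned char *data, int n, unsigned int *bins)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        atomicAdd(&bins[data[i]], 1u);  // serialized only when bins collide
}
```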
5. Optimization Techniques
We will go into various GPU kernel optimization techniques to get the most out of the hardware we are writing code for.
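One optimization we will spend time on is memory coalescing: adjacent threads should access adjacent addresses so the hardware can combine their loads into a few wide memory transactions. The two illustrative kernels below copy the same data, but the strided version can be dramatically slower:

```cuda
// Coalesced: thread i reads element i, so neighbouring threads
// touch neighbouring addresses.
__global__ void copyCoalesced(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i];
}

// Strided: neighbouring threads hit addresses far apart, wasting
// memory bandwidth (illustrative access pattern only).
__global__ void copyStrided(const float *in, float *out, int n, int stride)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[(i * stride) % n];
}
```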
6. External Libraries
This will lead us to different external libraries, both third-party and those provided by NVIDIA, which offer optimized algorithms running on the GPU for applications ranging from finance to medical imaging.
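For instance, with NVIDIA's cuBLAS library a single-precision `y = alpha*x + y` (SAXPY) over device arrays is one call rather than a hand-written kernel (the wrapper function below is our own; error checking is omitted for brevity):

```cuda
#include <cublas_v2.h>

// Sketch: SAXPY on device pointers d_x and d_y via cuBLAS.
void saxpy(int n, float alpha, const float *d_x, float *d_y)
{
    cublasHandle_t handle;
    cublasCreate(&handle);
    cublasSaxpy(handle, n, &alpha, d_x, 1, d_y, 1);  // y = alpha*x + y on the GPU
    cublasDestroy(handle);
}
```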
7. Future (time permitting)
Finally, we will talk about the future of GPU computing, in particular the new features in CUDA 5.x and GPUs in the cloud. We will also give an introduction to OpenCL, since many of the concepts from this course carry over to it, and discuss its advantages and disadvantages.
When & Where
Dr. Kashif Rasul
Kashif has a PhD in Mathematics from the Freie Universität Berlin and is currently working for a startup. He presented at NVIDIA's GTC in 2009 and contributes to the open-source Java CUDA bindings.
Prior to finishing his PhD, he worked at the Max Planck Institute in Golm, Germany, and as a software developer at Visage Imaging in Berlin.