float x = input[threadID];
float y = func(x);
output[threadID] = y;
NVIDIA Corporation 2009
Thread Cooperation
The Missing Piece: threads may need to cooperate
Thread cooperation is a powerful and valuable feature of CUDA
Share results to avoid redundant computation
Share memory accesses
Bandwidth reduction
Cooperation between a monolithic array of threads is not scalable
Cooperation within smaller batches of threads is scalable
Kernel launches a grid of thread blocks
Threads within a block cooperate via shared memory
Threads within a block can synchronize
Threads in different blocks cannot cooperate
Allows programs to transparently scale to different GPUs
Thread Batching
Figure: a grid of thread blocks (Thread Block 0, 1, ..., N-1), each with its own shared memory on a multiprocessor
Shared memory: on-chip, small, fast
Global memory: off-chip, large, uncached, persistent across kernel launches, used for kernel I/O
Physical Memory Layout
Local memory resides in device DRAM
Use registers and shared memory to minimize local memory use
Host can read and write global memory but not shared memory
Figure: host side (CPU, chipset, host DRAM) connected to the device; device DRAM holds local and global memory, while each GPU multiprocessor has registers and shared memory
Execution Model
Software          Hardware
Thread        ->  Thread Processor
Thread Block  ->  Multiprocessor
Grid          ->  Device

Threads are executed by thread processors
Thread blocks are executed on multiprocessors
Thread blocks do not migrate
Several concurrent thread blocks can reside on one multiprocessor, limited by multiprocessor resources (shared memory and register file)
A kernel is launched as a grid of thread blocks
Only one kernel can execute on a device at one time
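To make the resource limit above concrete, here is a small host-side C sketch (not from the slides) that computes how many blocks can be co-resident on one multiprocessor; the per-multiprocessor limits of 16 KB shared memory and 16384 registers are assumed values typical of 2009-era hardware, not figures from this deck:

```c
#include <assert.h>

/* Assumed per-multiprocessor limits (typical of 2009-era GPUs). */
#define SM_SHARED_BYTES 16384
#define SM_REGISTERS    16384

/* Blocks that can be co-resident, limited by whichever
   resource (shared memory or register file) runs out first. */
static int resident_blocks(int sharedBytesPerBlock, int regsPerThread,
                           int threadsPerBlock)
{
    int byShared = SM_SHARED_BYTES / sharedBytesPerBlock;
    int byRegs   = SM_REGISTERS / (regsPerThread * threadsPerBlock);
    return byShared < byRegs ? byShared : byRegs;
}
```

For example, a block using 4 KB of shared memory and 256 threads at 16 registers each is limited to 4 resident blocks by both resources.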
CUDA Programming Basics
Part I - Software Stack and Memory Management
Outline of CUDA Programming Basics
Part I
CUDA software stack and compilation
GPU memory management
Part II
Kernel launches
Some specifics of GPU code
NOTE: only the basic features are covered
See the Programming Guide for many more API functions
CUDA Software Development Environment
Main components
Device Driver (part of display driver)
Toolkit (compiler, documentation, libraries)
SDK (example codes, white papers)
Consult Quickstart Guides for installation instructions on different platforms
http://www.nvidia.com/cuda
CUDA Software Development Tools
Profiler
Available now for all supported OSs
Command-line or GUI
Sampling signals on GPU for:
Memory access parameters
Execution (serialization, divergence)
Debugger
Currently Linux only (gdb)
Runs on the GPU
Emulation mode
Compile with -deviceemu
Compiler
Any source file containing language extensions, like <<< >>>, must be compiled with nvcc
nvcc is a compiler driver
Invokes all the necessary tools and compilers, like cudacc, g++, cl, ...
nvcc can output either:
C code (CPU code)
That must then be compiled with the rest of the application using another tool
PTX or object code directly
An executable requires linking to:
Runtime library (cudart)
Core library (cuda)
Compiling
Figure: NVCC takes CPU/GPU source; CPU source is passed to the host compiler, while GPU code is compiled to PTX code (virtual), and a PTX-to-target compiler then produces target code for the physical GPU (e.g. G80)
nvcc & PTX Virtual Machine
EDG: separates CPU and GPU code
Open64: generates GPU PTX assembly
Parallel Thread eXecution (PTX)
Virtual machine and ISA
Programming model
Execution resources and state
Compiler Flags
Important flags:
-arch=sm_13 enables double precision on compatible hardware
-G enables debugging of device code
--ptxas-options=-v shows register and memory usage
--maxrregcount=N limits the number of registers per thread to N
-use_fast_math uses the fast math library
GPU Memory Management
Memory spaces
CPU and GPU have separate memory spaces
Data is moved across PCIe bus
Use functions to allocate/set/copy memory on GPU
Very similar to corresponding C functions
Pointers are just addresses
Can't tell from the pointer value whether the address is on the CPU or GPU
Must exercise care when dereferencing:
Dereferencing a CPU pointer on the GPU will likely crash, and vice versa
GPU Memory Allocation / Release
Host (CPU) manages device (GPU) memory
cudaMalloc(void **pointer, size_t nbytes)
cudaMemset(void *pointer, int value, size_t count)
cudaFree(void *pointer)

int n = 1024;
int nbytes = n*sizeof(int);
int *a_d = 0;
cudaMalloc( (void**)&a_d, nbytes );
cudaMemset( a_d, 0, nbytes );
cudaFree(a_d);
Data Copies
cudaMemcpy(void *dst, void *src, size_t nbytes, enum cudaMemcpyKind direction);
direction specifies the locations (host or device) of src and dst
Blocks the CPU thread: returns after the copy is complete
Doesn't start copying until previous CUDA calls complete
enum cudaMemcpyKind
cudaMemcpyHostToDevice
cudaMemcpyDeviceToHost
cudaMemcpyDeviceToDevice
Data Movement Example

#include <stdlib.h>
#include <assert.h>
#include <cuda_runtime.h>

int main(void)
{
    float *a_h, *b_h; // host data
    float *a_d, *b_d; // device data
    int N = 14, nBytes, i;
    nBytes = N*sizeof(float);
    a_h = (float *)malloc(nBytes);
    b_h = (float *)malloc(nBytes);
    cudaMalloc((void **) &a_d, nBytes);
    cudaMalloc((void **) &b_d, nBytes);
    for (i=0; i<N; i++) a_h[i] = 100.f + i;
    cudaMemcpy(a_d, a_h, nBytes, cudaMemcpyHostToDevice);
    cudaMemcpy(b_d, a_d, nBytes, cudaMemcpyDeviceToDevice);
    cudaMemcpy(b_h, b_d, nBytes, cudaMemcpyDeviceToHost);
    for (i=0; i<N; i++) assert( a_h[i] == b_h[i] );
    free(a_h); free(b_h); cudaFree(a_d); cudaFree(b_d);
    return 0;
}
CUDA Programming Basics
Part II - Kernels
Outline of CUDA Basics
Part I
CUDA software stack and compilation
GPU memory management
Part II
Kernel launches
Some specifics of GPU code
NOTE: only the basic features are covered
See the Programming Guide for many more API functions
CUDA Programming Model
Parallel code (kernel) is launched and executed on a device by many threads
Threads are grouped into thread blocks
Parallel code is written for a thread
Each thread is free to execute a unique code path
Built-in thread and block ID variables
Thread Hierarchy
Threads launched for a parallel section are partitioned into thread blocks
Grid = all blocks for a given launch
Thread block is a group of threads that can:
Synchronize their execution
Communicate via shared memory
Executing Code on the GPU
Kernels are C functions with some restrictions
Cannot access host memory
Must have void return type
No variable number of arguments (varargs)
Not recursive
No static variables
Function arguments automatically copied from host to device
Function Qualifiers
Kernels designated by function qualifier:
__global__
Function called from host and executed on device
Must return void
Other CUDA function qualifiers
__device__
Function called from device and run on device
Cannot be called from host code
__host__
Function called from host and executed on host (default)
__host__ and __device__ qualifiers can be combined to generate both CPU and GPU code
Launching Kernels
Modified C function call syntax:
kernel<<<dim3 dG, dim3 dB>>>()
Execution Configuration (<<< >>>)
dG - dimension and size of grid in blocks
Two-dimensional: x and y
Blocks launched in the grid: dG.x*dG.y
dB - dimension and size of blocks in threads:
Three-dimensional: x, y, and z
Threads per block: dB.x*dB.y*dB.z
Unspecified dim3 fields initialize to 1
Execution Configuration Examples
kernel<<<32,512>>>(...);
dim3 grid, block;
grid.x = 2; grid.y = 4;
block.x = 8; block.y = 16;
kernel<<<grid, block>>>(...);
dim3 grid(2, 4), block(8,16);
kernel<<<grid, block>>>(...);
Equivalent assignment using
constructor functions
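The thread and block counts implied by these configurations can be checked with a small host-side C sketch; dim3_t here is a hypothetical stand-in for CUDA's dim3, with all fields written out explicitly:

```c
#include <assert.h>

typedef struct { unsigned x, y, z; } dim3_t;  /* stand-in for CUDA's dim3 */

/* Blocks launched in the grid: dG.x * dG.y (grids are at most 2D here). */
static unsigned blocks_in_grid(dim3_t dG)
{
    return dG.x * dG.y;
}

/* Threads per block: dB.x * dB.y * dB.z. */
static unsigned threads_per_block(dim3_t dB)
{
    return dB.x * dB.y * dB.z;
}
```

kernel<<<32,512>>> thus launches 32 blocks of 512 threads, while the grid(2,4), block(8,16) example launches 8 blocks of 128 threads.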
CUDA Built-in Device Variables
All __global__ and __device__ functions have access to these automatically defined variables
dim3 gridDim;
Dimensions of the grid in blocks (at most 2D)
dim3 blockDim;
Dimensions of the block in threads
dim3 blockIdx;
Block index within the grid
dim3 threadIdx;
Thread index within the block
Unique Thread IDs
Built-in variables are used to determine unique thread IDs
Map from the local thread ID (threadIdx) to a global ID that can be used as an array index

Example with blockDim.x = 5:
blockIdx.x:              0          1          2
threadIdx.x:         0 1 2 3 4  0 1 2 3 4  0 1 2 3 4
blockIdx.x*blockDim.x
  + threadIdx.x:     0 1 2 3 4  5 6 7 8 9  10 11 12 13 14
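The mapping above can be verified with a plain C sketch (host-side code, not device code):

```c
#include <assert.h>

/* Global thread ID for a 1D launch, as on the slide. */
static int global_id(int blockIdx_x, int blockDim_x, int threadIdx_x)
{
    return blockIdx_x * blockDim_x + threadIdx_x;
}
```

With blockDim.x = 5, the last thread of block 2 gets global ID 2*5 + 4 = 14, matching the table.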
Minimal Kernels
__global__ void kernel( int *a )
{
    int idx = blockIdx.x*blockDim.x + threadIdx.x;
    a[idx] = 7;
}
// Output: 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7

__global__ void kernel( int *a )
{
    int idx = blockIdx.x*blockDim.x + threadIdx.x;
    a[idx] = blockIdx.x;
}
// Output: 0 0 0 0 0 1 1 1 1 1 2 2 2 2 2

__global__ void kernel( int *a )
{
    int idx = blockIdx.x*blockDim.x + threadIdx.x;
    a[idx] = threadIdx.x;
}
// Output: 0 1 2 3 4 0 1 2 3 4 0 1 2 3 4
Increment Array Example
CPU program CUDA program
void inc_cpu(int *a, int N)
{
    int idx;
    for (idx = 0; idx < N; idx++)
        a[idx] = a[idx] + 1;
}
int main()
{
    inc_cpu(a, N);
}

__global__ void inc_gpu(int *a_d, int N)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < N)
        a_d[idx] = a_d[idx] + 1;
}
int main()
{
    dim3 dimBlock(blocksize);
    dim3 dimGrid(ceil(N / (float)blocksize));
    inc_gpu<<<dimGrid, dimBlock>>>(a_d, N);
}
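The ceil(N/(float)blocksize) above rounds the grid size up so that all N elements are covered; the idx < N guard then masks off the extra threads in the last block. The same rounding in pure integer arithmetic, as a C sketch:

```c
#include <assert.h>

/* Number of blocks needed to cover n elements: integer ceiling division. */
static int grid_size(int n, int blocksize)
{
    return (n + blocksize - 1) / blocksize;
}
```

For N = 14 and blocksize = 5 this gives 3 blocks (15 threads), one more thread than elements, which the idx < N test keeps from writing out of bounds.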
Host Synchronization
All kernel launches are asynchronous
control returns to CPU immediately
kernel executes after all previous CUDA calls have
completed
cudaMemcpy() is synchronous
control returns to CPU after copy completes
copy starts after all previous CUDA calls have completed
cudaThreadSynchronize()
blocks until all previous CUDA calls complete
Host Synchronization Example