Fundamentals of Deep Learning for Multi-GPUs

Find out how to use multiple GPUs to effectively parallelize the training of deep neural networks with TensorFlow.

8 hours of instruction

OBJECTIVES

  1. Stochastic Gradient Descent, a crucial tool in parallelized training
  2. Batch size and its effect on training time and accuracy
  3. Transforming a single-GPU implementation to a Horovod multi-GPU implementation
  4. Techniques for maintaining high accuracy when training across multiple GPUs

PREREQUISITES

None

SYLLABUS & TOPICS COVERED

  1. Introduction
    • Meet the instructor
    • Create an account
  2. Stochastic Gradient Descent And The Effects Of Batch Size
    • Understand the issues with sequential single-thread data processing and the theory behind speeding up applications with parallel processing
    • Explore loss function, gradient descent, and stochastic gradient descent (SGD)
    • Learn the effect of batch size on accuracy and training time (a batch-size sketch follows this syllabus)
  3. Training On Multiple GPUs With Horovod
    • Discover the benefits of training on multiple GPUs with Horovod
    • Learn to transform single-GPU training on the Fashion MNIST dataset to a Horovod multi-GPU implementation (see the Horovod sketch after this syllabus)
  4. Maintaining Model Accuracy When Scaling To Multiple GPUs
    • Understand why accuracy can decrease when parallelizing training on multiple GPUs
    • Explore tools for maintaining accuracy when scaling training to multiple GPUs (see the learning-rate warmup sketch after this syllabus)
  5. Final Review
    • Review key learnings and answer questions.
    • Complete the assessment and earn a certificate.
    • Complete the workshop survey.
    • Learn how to set up your own AI application development environment.
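
The sketches below are illustrative only and are not taken from the course materials. First, the batch-size trade-off from module 2: the same small model trained at several batch sizes. The architecture and hyperparameters are hypothetical placeholders, not the course's.

    import tensorflow as tf

    # Load and normalize Fashion MNIST, the dataset used in the course labs.
    (x_train, y_train), _ = tf.keras.datasets.fashion_mnist.load_data()
    x_train = x_train / 255.0

    def train(batch_size, epochs=2):
        # Hypothetical small model; the point is the batch_size argument.
        model = tf.keras.Sequential([
            tf.keras.layers.Flatten(input_shape=(28, 28)),
            tf.keras.layers.Dense(128, activation="relu"),
            tf.keras.layers.Dense(10, activation="softmax"),
        ])
        model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.1),
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        # Each SGD step averages gradients over one mini-batch, so steps per
        # epoch = len(x_train) / batch_size: larger batches mean fewer,
        # smoother updates per epoch and usually shorter wall-clock time,
        # but can cost accuracy if the learning rate is left unchanged.
        model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs)

    for bs in (32, 256, 2048):
        train(bs)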
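
Module 3 converts a single-GPU Fashion MNIST script to Horovod. The sketch below follows Horovod's documented tf.keras pattern; the model and hyperparameters are placeholders, and exact details vary with TensorFlow and Horovod versions.

    import tensorflow as tf
    import horovod.tensorflow.keras as hvd

    # Initialize Horovod and pin each worker process to a single GPU.
    hvd.init()
    gpus = tf.config.experimental.list_physical_devices("GPU")
    if gpus:
        tf.config.experimental.set_visible_devices(gpus[hvd.local_rank()], "GPU")

    (x_train, y_train), _ = tf.keras.datasets.fashion_mnist.load_data()
    x_train = x_train / 255.0

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

    # Scale the learning rate by the number of workers and wrap the
    # optimizer so gradients are averaged across GPUs via allreduce.
    opt = tf.keras.optimizers.SGD(learning_rate=0.1 * hvd.size())
    opt = hvd.DistributedOptimizer(opt)

    model.compile(optimizer=opt,
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    model.fit(x_train, y_train, batch_size=64, epochs=2,
              # Broadcast rank 0's initial weights so all workers start
              # from identical parameters.
              callbacks=[hvd.callbacks.BroadcastGlobalVariablesCallback(0)],
              verbose=1 if hvd.rank() == 0 else 0)

A script like this is launched with one process per GPU, for example: horovodrun -np 4 python train.py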
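
For module 4, one standard technique for keeping accuracy at scale is the linear learning rate scaling rule combined with warmup; Horovod provides a Keras callback for the warmup part. A minimal sketch, assuming the Horovod setup above (base_lr is a hypothetical single-GPU learning rate, and the callback signature shown is that of recent Horovod releases):

    import horovod.tensorflow.keras as hvd

    hvd.init()
    base_lr = 0.1  # hypothetical single-GPU learning rate

    callbacks = [
        # Start all workers from identical weights.
        hvd.callbacks.BroadcastGlobalVariablesCallback(0),
        # Linear scaling rule: the target learning rate is base_lr times
        # the number of workers, reached gradually over the first epochs
        # to avoid divergence at large effective batch sizes.
        hvd.callbacks.LearningRateWarmupCallback(
            initial_lr=base_lr * hvd.size(), warmup_epochs=5, verbose=1),
    ]
    # Pass `callbacks` to model.fit() as in the previous sketch.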

SOFTWARE REQUIREMENTS

Each participant will be provided with dedicated access to a fully configured, GPU-accelerated workstation in the cloud.
