Title: Mixed Precision Training
Authors: Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, Hao Wu
Published: 10th October 2017 (Tuesday) @ 17:42:04
Link: http://arxiv.org/abs/1710.03740v3

Abstract

Deep neural networks have enabled progress in a wide variety of applications. Growing the size of the neural network typically results in improved accuracy. As model sizes grow, the memory and compute requirements for training these models also increase. We introduce a technique to train deep neural networks using half-precision floating point numbers. In our technique, weights, activations and gradients are stored in IEEE half-precision format. Half-precision floating point numbers have a limited numerical range compared to single-precision numbers. We propose two techniques to handle this loss of information. Firstly, we recommend maintaining a single-precision copy of the weights that accumulates the gradients after each optimizer step. This single-precision copy is rounded to half-precision format during training. Secondly, we propose scaling the loss appropriately to handle the loss of information with half-precision gradients. We demonstrate that this approach works for a wide variety of models including convolutional neural networks, recurrent neural networks and generative adversarial networks. This technique works for large-scale models with more than 100 million parameters trained on large datasets. Using this approach, we can reduce the memory consumption of deep learning models by nearly 2x. On future processors, we can also expect a significant computation speedup from half-precision hardware units.
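
To make the range limitation concrete, the short NumPy sketch below (not from the paper; NumPy and the scale factor of 1024 are illustrative choices) shows a small gradient value that is representable in FP32 flushing to zero when cast to FP16, and surviving once it has been scaled.

    import numpy as np

    # IEEE half precision has a much narrower range than single precision:
    # the largest finite FP16 value is 65504 and the smallest positive
    # normal value is about 6.1e-5 (subnormals reach down to about 6.0e-8).
    print(np.finfo(np.float16).max, np.finfo(np.float16).tiny)

    # A gradient value that is fine in FP32 flushes to zero in FP16 ...
    grad_fp32 = np.float32(1e-8)
    print(np.float16(grad_fp32))      # 0.0 -- this update is silently lost

    # ... but survives once the loss (and hence the gradient) is scaled.
    S = 1024.0
    print(np.float16(grad_fp32 * S))  # about 1.0e-5, representable in FP16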



Three Techniques for Successful Training of DNNs with Mixed Precision

  1. Accumulation of FP16 products into FP32 (illustrated in the sketch after this list)
  2. Loss scaling
  3. Maintaining an FP32 master copy of the weights
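
As a rough illustration of technique 1, the NumPy sketch below (an assumption of this note, not code from the paper; the vector length and values are arbitrary) accumulates the same FP16 products once into an FP16 running sum and once into an FP32 accumulator.

    import numpy as np

    # Dot product of two FP16 vectors: the products are FP16 either way,
    # but the accumulator's precision determines whether small terms survive.
    a = np.full(10000, 0.01, dtype=np.float16)
    b = np.full(10000, 0.01, dtype=np.float16)

    # Naive FP16 accumulation: once the running sum grows, each small
    # FP16 product rounds away to nothing when added to it.
    acc16 = np.float16(0.0)
    for x, y in zip(a, b):
        acc16 = np.float16(acc16 + np.float16(x * y))

    # Accumulating the same FP16 products into FP32 keeps the small terms.
    acc32 = np.float32(0.0)
    for x, y in zip(a, b):
        acc32 = np.float32(acc32 + np.float32(np.float16(x * y)))

    print(acc16)  # far below 1.0 -- additions stall once the sum grows
    print(acc32)  # approximately 1.0, the true value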

Mixed-Precision Training Iteration

The three techniques introduced above can be combined into the following sequence of steps for each training iteration; steps 1, 3, and 5 are the additions to the traditional iteration procedure. A minimal sketch of a full iteration follows the list.

  1. Make an FP16 copy of the weights
  2. Forward propagate using FP16 weights and activations
  3. Multiply the resulting loss by the scale factor S
  4. Backward propagate using FP16 weights, activations, and their gradients
  5. Multiply the weight gradients by 1/S
  6. Optionally process the weight gradients (gradient clipping, weight decay, etc.)
  7. Update the master copy of weights in FP32
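
The following NumPy sketch walks one such iteration for a toy linear model, following the numbered steps above; the names (w_master, S, lr), sizes, and values are illustrative assumptions, not the paper's code.

    import numpy as np

    # Toy linear model y = x @ w, trained for a single mixed-precision step.
    rng = np.random.default_rng(0)
    x = rng.standard_normal((32, 64)).astype(np.float32)
    y = rng.standard_normal((32, 1)).astype(np.float32)

    w_master = rng.standard_normal((64, 1)).astype(np.float32) * 0.01  # FP32 master weights
    S = 1024.0   # loss scale factor
    lr = 1e-3    # learning rate

    # 1. Make an FP16 copy of the weights.
    w16 = w_master.astype(np.float16)

    # 2. Forward propagate with FP16 weights and activations.
    x16 = x.astype(np.float16)
    pred = x16 @ w16                                   # FP16 activations
    loss = np.mean((pred.astype(np.float32) - y) ** 2)
    print(float(loss))

    # 3./4. Scale the loss by S and backward propagate in FP16.
    #       Scaling the loss by S is equivalent to scaling its gradient by S:
    #       d(S * mse)/dpred = 2 * S * (pred - y) / N
    grad_pred = np.float16(2.0 * S / x.shape[0]) * (pred - y.astype(np.float16))
    grad_w16 = x16.T @ grad_pred                       # scaled FP16 weight gradients

    # 5. Unscale the gradients in FP32.
    grad_w = grad_w16.astype(np.float32) / S

    # 6. (Optional) gradient clipping, weight decay, etc. would go here.

    # 7. Update the FP32 master copy of the weights.
    w_master -= lr * grad_w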

Examples of how to add the mixed-precision training steps to the scripts of various DNN training frameworks can be found in the Training with Mixed Precision User Guide.
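
As one concrete example of such framework support (PyTorch's automatic mixed precision API, not taken from the paper or the User Guide), a training loop might look roughly like the sketch below; the model, data, and hyperparameters are placeholders, and a CUDA device is assumed.

    import torch

    # Synthetic batch; torch.cuda.amp targets GPU execution.
    x = torch.randn(32, 64, device="cuda")
    y = torch.randn(32, 1, device="cuda")

    model = torch.nn.Linear(64, 1).cuda()              # parameters stay in FP32
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    scaler = torch.cuda.amp.GradScaler()               # dynamic loss scaling

    for _ in range(10):
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():                # eligible ops run in FP16
            loss = torch.nn.functional.mse_loss(model(x), y)
        scaler.scale(loss).backward()                  # backprop the scaled loss
        scaler.step(optimizer)                         # unscale, skip step on inf/nan, update
        scaler.update()                                # adjust the scale factor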

Results

Table 1: ILSVRC12 classification top-1 accuracy