Data echoing for improved training speed
Google also proposed a new optimization technique: speeding up neural network training with data echoing. The idea is simple: when an early stage of the training pipeline (say, reading or preprocessing data) is the bottleneck, its output gets "echoed", i.e. reused for several downstream training steps, so the accelerator keeps computing instead of sitting idle while the next fresh batch is prepared. With a well-chosen echoing factor, this reduces wall-clock training time while preserving predictive performance. This is cool work, and hopefully it'll get upstreamed to TensorFlow for everyone to use.
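For intuition, here's a minimal sketch of what example-level echoing could look like in a tf.data pipeline. This is an illustration, not the paper's actual implementation: `ECHO_FACTOR` and the `expensive_preprocess` stand-in are assumptions made up for this example.

```python
import tensorflow as tf

ECHO_FACTOR = 2  # hypothetical reuse count; in practice it's tuned per workload

def expensive_preprocess(x):
    # stand-in for the slow, bottlenecked stage (e.g. decoding/augmentation)
    return tf.cast(x, tf.float32) / 255.0

def echo(example):
    # reuse each already-preprocessed example ECHO_FACTOR times
    return tf.data.Dataset.from_tensors(example).repeat(ECHO_FACTOR)

dataset = (
    tf.data.Dataset.range(1_000)
    .map(expensive_preprocess, num_parallel_calls=tf.data.AUTOTUNE)
    .flat_map(echo)              # insert echoing right after the bottleneck
    .shuffle(1_000)              # shuffle so echoed copies aren't adjacent
    .batch(32)
    .prefetch(tf.data.AUTOTUNE)
)

for batch in dataset.take(1):
    print(batch.shape)  # (32,)
```

The paper also looks at where in the pipeline to echo (e.g. before vs. after augmentation or batching); the further downstream you echo, the cheaper each repeat is, but the more correlated the repeated data becomes.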