Deep learning is driving significant advances in artificial intelligence. In this talk, I will describe the design and implementation of TensorFlow, Google's platform for large-scale training of neural networks. The talk will focus on one important aspect of the system: the distributed execution of iterative TensorFlow programs.
Yuan Yu is a research scientist at Google Research, working on the Google Brain project. His research focuses on programming abstractions, compilers, and runtimes for large-scale parallel and distributed computing. Before joining Google, he was a researcher at Microsoft Research and at the DEC/Compaq Systems Research Center.