PipelineAI Tensorflow GPU Dev Summit
Sat, September 16, 2017, 9:00 AM – 5:00 PM PDT
Build an End-to-End, Continuous Deep Learning Model Training and Deployment Pipeline with Tensorflow, Spark, Kafka, Kubernetes, Jupyter Notebook, and GPUs!
Note: A GPU-based cloud instance will be provided to each attendee as part of the event
We will each build an end-to-end, continuous Spark ML and Tensorflow AI model training and deployment pipeline on our own GPU-based cloud instance.
At the end, we will combine our cloud instances to create the LARGEST Spark ML and Tensorflow AI Training and Deployment Cluster in the WORLD!
Tensorflow on Spark
Tensorflow Data Ingestion (TF Input Readers, TFRecords, and Queues)
Creating Custom Input Readers (Kafka)
Tensorflow and Hadoop Integration (HDFS)
Trade-offs of CPU vs. GPU, Scale Up vs. Scale Out
Single-node Tensorflow AI Model Training
Distributed, Multi-node Tensorflow AI Model Training (Distributed Tensorflow)
Model Checkpointing, Saving, Exporting, and Importing
Centralized Logging and Visualizing Distributed Tensorflow AI Model Training (Tensorboard)
Distributed, Multi-node Tensorflow AI Model Serving/Predicting (Tensorflow Serving)
Continuous Tensorflow AI Model Training and Deployment (Tensorflow AI, Airflow, Jenkins)
Hybrid Cross-Cloud and On-Premise Deployments (Kubernetes)
Highly-Scalable and Highly-Available Model Deployments using Spring Boot and NetflixOSS-based Microservices (Spring Boot, NetflixOSS)
High-Performance and Fault-Tolerance using Request Batching and Circuit Breakers (NetflixOSS)
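To give a flavor of the distributed multi-node training topic above: distributed Tensorflow workers are wired together with a cluster spec naming the parameter servers and workers. A TF_CONFIG-style configuration looks roughly like this (the hostnames, ports, and task assignment below are placeholders, not the event's actual instances):

```json
{
  "cluster": {
    "ps": ["ps0.example.com:2222"],
    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"]
  },
  "task": {"type": "worker", "index": 0}
}
```

Each process in the cluster gets the same `cluster` section but its own `task` entry identifying which role and index it plays.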
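For the Kubernetes-based serving topics, a minimal Tensorflow Serving deployment might be sketched like this (the deployment name, model name, and model path are hypothetical placeholders; `tensorflow/serving` is the official serving image):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tf-serving                  # hypothetical name
spec:
  replicas: 3                       # scale out across nodes for availability
  selector:
    matchLabels:
      app: tf-serving
  template:
    metadata:
      labels:
        app: tf-serving
    spec:
      containers:
      - name: tf-serving
        image: tensorflow/serving
        args:
        - "--model_name=mymodel"            # placeholder model name
        - "--model_base_path=/models/mymodel"
        ports:
        - containerPort: 8500               # Tensorflow Serving gRPC port
```

Fronting replicas like these with a Kubernetes Service is what makes the deployment highly available, and the same manifest can target on-premise or cloud clusters.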
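The circuit-breaker idea from the last item (popularized by NetflixOSS Hystrix) can be sketched in a few lines of plain Python. This is a minimal illustration of the pattern, not the NetflixOSS implementation: after a configurable number of consecutive failures the breaker "opens" and serves a fallback immediately, without calling the failing model server, until a reset timeout elapses.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: after `max_failures` consecutive failures,
    calls are short-circuited to the fallback for `reset_timeout` seconds."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args, fallback=None, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                return fallback  # circuit open: fail fast, skip the call
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            return fallback
        self.failures = 0  # a success resets the failure count
        return result
```

A prediction client would wrap each model-server request in `breaker.call(predict_fn, fallback=cached_prediction)`, so a down server degrades to stale-but-fast answers instead of piling up timeouts.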