This document shows how to create a cluster of TensorFlow servers, and how to distribute a computation graph across that cluster. We assume that you are familiar with the basic concepts of writing TensorFlow programs.
To see a simple TensorFlow cluster in action, execute the following:
# Start a TensorFlow server as a single-process "cluster".
$ python
>>> import tensorflow as tf
>>> c = tf.constant("Hello, distributed TensorFlow!")
>>> server = tf.train.Server.create_local_server()
>>> sess = tf.Session(server.target)  # Create a session on the server.
>>> sess.run(c)
'Hello, distributed TensorFlow!'
The tf.train.Server.create_local_server() method creates a single-process cluster, with an in-process server.
A TensorFlow “cluster” is a set of “tasks” that participate in the distributed execution of a TensorFlow graph. Each task is associated with a TensorFlow “server”, which contains a “master” that can be used to create sessions, and a “worker” that executes operations in the graph. A cluster can also be divided into one or more “jobs”, where each job contains one or more tasks.
To create a cluster, you start one TensorFlow server per task in the cluster. Each task typically runs on a different machine, but you can run multiple tasks on the same machine (e.g. to control different GPU devices). In each task, do the following:
1. Create a tf.train.ClusterSpec that describes all of the tasks in the cluster. This should be the same for each task.

2. Create a tf.train.Server, passing the tf.train.ClusterSpec to the constructor, and identifying the local task with a job name and task index.
Create a tf.train.ClusterSpec to describe the cluster

The cluster specification dictionary maps job names to lists of network addresses. Pass this dictionary to the tf.train.ClusterSpec constructor.
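For example, the following two specifications are illustrative (the hostnames are placeholders): a single job named "local" with two tasks, and a cluster with a three-task worker job and a two-task ps job:

tf.train.ClusterSpec({"local": ["localhost:2222", "localhost:2223"]})
# Defines tasks /job:local/task:0 and /job:local/task:1.

tf.train.ClusterSpec({
    "worker": [
        "worker0.example.com:2222",
        "worker1.example.com:2222",
        "worker2.example.com:2222"
    ],
    "ps": [
        "ps0.example.com:2222",
        "ps1.example.com:2222"
    ]})
# Defines tasks /job:worker/task:0 through /job:worker/task:2,
# and /job:ps/task:0 and /job:ps/task:1.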
Create a tf.train.Server instance in each task

A tf.train.Server object contains a set of local devices, a set of connections to other tasks in its tf.train.ClusterSpec, and a “session target” that can use these to perform a distributed computation. Each server is a member of a specific named job and has a task index within that job. A server can communicate with any other server in the cluster.
For example, to launch a cluster with two servers running on localhost:2222 and localhost:2223, run the following snippets in two different processes on the local machine:
# In task 0:
cluster = tf.train.ClusterSpec({"local": ["localhost:2222", "localhost:2223"]})
server = tf.train.Server(cluster, job_name="local", task_index=0)
# In task 1:
cluster = tf.train.ClusterSpec({"local": ["localhost:2222", "localhost:2223"]})
server = tf.train.Server(cluster, job_name="local", task_index=1)
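As a quick sanity check once both servers are up, a sketch like the following (the string constant is arbitrary) places an op on task 1 and executes it through the session target of task 0's server, demonstrating that the two tasks can communicate:

with tf.device("/job:local/task:1"):
  c = tf.constant("Hello from task 1")

# Connect to task 0's master; the op above executes on task 1.
with tf.Session("grpc://localhost:2222") as sess:
  print(sess.run(c))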
Note: Manually specifying these cluster specifications can be tedious, especially for large clusters. We are working on tools for launching tasks programmatically, e.g. using a cluster manager like Kubernetes. If there are particular cluster managers for which you'd like to see support, please raise a GitHub issue.
To place operations on a particular process, you can use the same tf.device() function that is used to specify whether ops run on the CPU or GPU. For example:
with tf.device("/job:ps/task:0"): weights_1 = tf.Variable(...) biases_1 = tf.Variable(...) with tf.device("/job:ps/task:1"): weights_2 = tf.Variable(...) biases_2 = tf.Variable(...) with tf.device("/job:worker/task:7"): input, labels = ... layer_1 = tf.nn.relu(tf.matmul(input, weights_1) + biases_1) logits = tf.nn.relu(tf.matmul(layer_1, weights_2) + biases_2) # ... train_op = ... with tf.Session("grpc://worker7.example.com:2222") as sess: for _ in range(10000): sess.run(train_op)
In the above example, the variables are created on two tasks in the ps job, and the compute-intensive part of the model is created in the worker job. TensorFlow will insert the appropriate data transfers between the jobs (from ps to worker for the forward pass, and from worker to ps for applying gradients).
A common training configuration, called “data parallelism,” involves multiple tasks in a worker job training the same model on different mini-batches of data, updating shared parameters hosted in one or more tasks in a ps job. All tasks typically run on different machines. There are many ways to specify this structure in TensorFlow, and we are building libraries that will simplify the work of specifying a replicated model. Possible approaches include:
In-graph replication. In this approach, the client builds a single tf.Graph that contains one set of parameters (in tf.Variable nodes pinned to /job:ps); and multiple copies of the compute-intensive part of the model, each pinned to a different task in /job:worker. A sketch of this approach follows this list.
Between-graph replication. In this approach, there is a separate client for each /job:worker task, typically in the same process as the worker task. Each client builds a similar graph containing the parameters (pinned to /job:ps as before, using tf.train.replica_device_setter() to map them deterministically to the same tasks); and a single copy of the compute-intensive part of the model, pinned to the local task in /job:worker.
Asynchronous training. In this approach, each replica of the graph has an independent training loop that executes without coordination. It is compatible with both forms of replication above.
Synchronous training. In this approach, all of the replicas read the same values for the current parameters, compute gradients in parallel, and then apply them together. It is compatible with in-graph replication (e.g. using gradient averaging as in the CIFAR-10 multi-GPU trainer), and between-graph replication (e.g. using the tf.train.SyncReplicasOptimizer); a sketch of wrapping an optimizer this way also follows the list.
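To make in-graph replication concrete, here is a minimal sketch; build_model() and num_workers are hypothetical placeholders (neither appears elsewhere in this document), and the variable shape is illustrative:

# Sketch of in-graph replication: a single client builds one graph with
# shared parameters on /job:ps and one model copy per worker task.
with tf.device("/job:ps/task:0"):
  # Shared parameters; the shape is illustrative.
  weights = tf.Variable(tf.zeros([784, 10]))

losses = []
for i in range(num_workers):  # num_workers is a hypothetical count.
  with tf.device("/job:worker/task:%d" % i):
    # Each copy reads the shared parameters and computes a loss on its
    # own mini-batch. build_model is a hypothetical helper.
    losses.append(build_model(weights))

# A single training op applies the combined gradients to the shared
# parameters.
train_op = tf.train.AdagradOptimizer(0.01).minimize(tf.add_n(losses))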
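For synchronous between-graph training, the usual pattern is to wrap the optimizer. The following minimal sketch reuses the loss and global_step names from the trainer skeleton below; replicas_to_aggregate=2 is an assumption for a two-worker cluster, and the chief-specific initialization that tf.train.SyncReplicasOptimizer also requires is omitted:

# Sketch: aggregate gradients from 2 replicas before each update.
# The value 2 assumes the two-worker cluster used later in this document.
opt = tf.train.SyncReplicasOptimizer(
    tf.train.AdagradOptimizer(0.01),
    replicas_to_aggregate=2,
    total_num_replicas=2)
train_op = opt.minimize(loss, global_step=global_step)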
The following code shows the skeleton of a distributed trainer program, implementing between-graph replication and asynchronous training. It includes the code for the parameter server and worker tasks.
import tensorflow as tf

# Flags for defining the tf.train.ClusterSpec
tf.app.flags.DEFINE_string("ps_hosts", "",
                           "Comma-separated list of hostname:port pairs")
tf.app.flags.DEFINE_string("worker_hosts", "",
                           "Comma-separated list of hostname:port pairs")

# Flags for defining the tf.train.Server
tf.app.flags.DEFINE_string("job_name", "", "One of 'ps', 'worker'")
tf.app.flags.DEFINE_integer("task_index", 0, "Index of task within the job")

FLAGS = tf.app.flags.FLAGS


def main(_):
  ps_hosts = FLAGS.ps_hosts.split(",")
  worker_hosts = FLAGS.worker_hosts.split(",")

  # Create a cluster from the parameter server and worker hosts.
  cluster = tf.train.ClusterSpec({"ps": ps_hosts, "worker": worker_hosts})

  # Create and start a server for the local task.
  server = tf.train.Server(cluster,
                           job_name=FLAGS.job_name,
                           task_index=FLAGS.task_index)

  if FLAGS.job_name == "ps":
    server.join()
  elif FLAGS.job_name == "worker":

    # Assigns ops to the local worker by default.
    with tf.device(tf.train.replica_device_setter(
        worker_device="/job:worker/task:%d" % FLAGS.task_index,
        cluster=cluster)):

      # Build model...
      loss = ...
      global_step = tf.Variable(0)

      train_op = tf.train.AdagradOptimizer(0.01).minimize(
          loss, global_step=global_step)

      saver = tf.train.Saver()
      summary_op = tf.merge_all_summaries()
      init_op = tf.initialize_all_variables()

    # Create a "supervisor", which oversees the training process.
    sv = tf.train.Supervisor(is_chief=(FLAGS.task_index == 0),
                             logdir="/tmp/train_logs",
                             init_op=init_op,
                             summary_op=summary_op,
                             saver=saver,
                             global_step=global_step,
                             save_model_secs=600)

    # The supervisor takes care of session initialization, restoring from
    # a checkpoint, and closing when done or an error occurs.
    with sv.managed_session(server.target) as sess:
      # Loop until the supervisor shuts down or 1000000 steps have completed.
      step = 0
      while not sv.should_stop() and step < 1000000:
        # Run a training step asynchronously.
        # See `tf.train.SyncReplicasOptimizer` for additional details on how to
        # perform *synchronous* training.
        _, step = sess.run([train_op, global_step])

    # Ask for all the services to stop.
    sv.stop()


if __name__ == "__main__":
  tf.app.run()
To start the trainer with two parameter servers and two workers, use the following command line (assuming the script is called trainer.py):
# On ps0.example.com:
$ python trainer.py \
     --ps_hosts=ps0.example.com:2222,ps1.example.com:2222 \
     --worker_hosts=worker0.example.com:2222,worker1.example.com:2222 \
     --job_name=ps --task_index=0

# On ps1.example.com:
$ python trainer.py \
     --ps_hosts=ps0.example.com:2222,ps1.example.com:2222 \
     --worker_hosts=worker0.example.com:2222,worker1.example.com:2222 \
     --job_name=ps --task_index=1

# On worker0.example.com:
$ python trainer.py \
     --ps_hosts=ps0.example.com:2222,ps1.example.com:2222 \
     --worker_hosts=worker0.example.com:2222,worker1.example.com:2222 \
     --job_name=worker --task_index=0

# On worker1.example.com:
$ python trainer.py \
     --ps_hosts=ps0.example.com:2222,ps1.example.com:2222 \
     --worker_hosts=worker0.example.com:2222,worker1.example.com:2222 \
     --job_name=worker --task_index=1