The first few TensorFlow tutorials guide you through training and testing a simple neural network to classify handwritten digits from the MNIST database of digit images.

If you're new to machine learning, we recommend starting here. You'll learn about a classic problem, handwritten digit classification (MNIST), and get a gentle introduction to multiclass classification.

If you're already familiar with other deep learning software packages and with MNIST, this tutorial gives you a very brief primer on TensorFlow.

This is a technical tutorial, where we walk you through the details of using TensorFlow infrastructure to train models at scale. We use MNIST as the example.

A quick introduction to tf.contrib.learn, a high-level API for TensorFlow. Build, train, and evaluate a neural network with just a few lines of code.

An overview of tf.contrib.learn's rich set of tools for working with linear models in TensorFlow.

This tutorial walks you through the code for building a linear model using tf.contrib.learn.

This tutorial shows you how to use tf.contrib.learn to jointly train a linear model and a deep neural net to harness the advantages of each type of model.

This tutorial shows you how to use TensorFlow's logging capabilities and the Monitor API to audit the in-progress training of a neural network.

An introduction to TensorFlow Serving, a flexible, high-performance system for serving machine learning models, designed for production environments.

An introduction to convolutional neural networks using the CIFAR-10 data set. Convolutional neural nets are particularly tailored to images, since they exploit translation invariance to yield more compact and effective representations of visual content.
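The translation property mentioned above can be seen directly in the convolution operation itself: shifting the input image shifts the resulting feature map by the same amount. A minimal NumPy sketch (the `conv2d_valid` helper and the grid sizes are illustrative, not from the tutorial) demonstrates this:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 2-D 'valid' cross-correlation, the core operation of a conv layer."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))
kernel = rng.standard_normal((3, 3))

shifted = np.roll(image, shift=1, axis=1)  # translate the image one pixel right
out = conv2d_valid(image, kernel)
out_shifted = conv2d_valid(shifted, kernel)

# Away from the border, the feature map of the shifted image is
# just the translated feature map of the original image.
assert np.allclose(out_shifted[:, 1:], out[:, :-1])
```

Because the same small kernel is reused at every position, a convolutional layer also needs far fewer parameters than a fully connected layer over the same image, which is the "more compact" part of the claim.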

How to run object recognition using a convolutional neural network trained on the ImageNet Challenge data and label set.

Building on the Inception recognition model, we will release a TensorFlow version of the Deep Dream neural network visual hallucination software.

This tutorial motivates why it is useful to learn to represent words as vectors (called *word embeddings*). It introduces the word2vec model as an efficient method for learning embeddings. It also covers the high-level details behind noise-contrastive training methods (the biggest recent advance in training embeddings).
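Before any embeddings are trained, the skip-gram variant of word2vec turns raw text into (center word, context word) training pairs drawn from a sliding window. This small sketch (the function name and window size are illustrative, not part of the tutorial's code) shows how those pairs are generated:

```python
def skipgram_pairs(tokens, window=2):
    """Generate (center, context) training pairs for skip-gram word2vec.

    For each position, every other token within `window` positions
    becomes a context word for the center token.
    """
    pairs = []
    for i, center in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

pairs = skipgram_pairs("the quick brown fox".split(), window=1)
# pairs == [("the", "quick"), ("quick", "the"), ("quick", "brown"),
#           ("brown", "quick"), ("brown", "fox"), ("fox", "brown")]
```

Noise-contrastive training then fits the embeddings by asking the model to distinguish these true pairs from randomly sampled "noise" pairs, which avoids computing a full softmax over the vocabulary.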

An introduction to RNNs, wherein we train an LSTM network to predict the next word in an English sentence, a task sometimes called language modeling.

A follow-on to the RNN tutorial, where we assemble a sequence-to-sequence model for machine translation. You will learn to build your own English-to-French translator, entirely machine learned, end-to-end.

An introduction to SyntaxNet, a Natural Language Processing framework for TensorFlow.

TensorFlow can be used for computation that has nothing to do with machine learning. Here's a naive implementation of Mandelbrot set visualization.
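The tutorial expresses this computation with TensorFlow ops; the underlying escape-time iteration is simple enough to sketch in plain NumPy (grid bounds, resolution, and iteration count here are illustrative choices, not the tutorial's):

```python
import numpy as np

def mandelbrot(width=200, height=200, max_iter=50):
    """Count, for each grid point c, the iterations of z <- z**2 + c before |z| exceeds 2."""
    # Grid of complex starting points c covering the classic viewing window.
    x = np.linspace(-2.0, 1.0, width)
    y = np.linspace(-1.3, 1.3, height)
    c = x[np.newaxis, :] + 1j * y[:, np.newaxis]

    z = np.zeros_like(c)
    counts = np.zeros(c.shape, dtype=np.int32)
    for _ in range(max_iter):
        # Update only the points that have not yet diverged.
        mask = np.abs(z) <= 2.0
        z[mask] = z[mask] ** 2 + c[mask]
        counts[mask] += 1
    return counts

counts = mandelbrot()
```

Plotting `counts` as an image (e.g. with matplotlib's `imshow`) yields the familiar fractal; the TensorFlow version performs the same whole-array updates as ops in a graph.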

As another example of non-machine learning computation, we offer an example of a naive PDE simulation of raindrops landing on a pond.
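The pond simulation boils down to stepping a damped wave equation on a grid. A minimal NumPy sketch of one such scheme (the grid size, time step, and damping constant here are illustrative assumptions, not the tutorial's exact values) looks like:

```python
import numpy as np

def laplacian(u):
    """Five-point discrete Laplacian with a fixed zero boundary."""
    lap = np.zeros_like(u)
    lap[1:-1, 1:-1] = (u[:-2, 1:-1] + u[2:, 1:-1] +
                       u[1:-1, :-2] + u[1:-1, 2:] -
                       4.0 * u[1:-1, 1:-1])
    return lap

def step(u, ut, dt=0.03, damping=0.04):
    """One explicit time step of the damped wave equation u_tt = lap(u) - damping * u_t."""
    ut_new = ut + dt * (laplacian(u) - damping * ut)
    u_new = u + dt * ut_new
    return u_new, ut_new

# Start with a single "raindrop": a small bump in the middle of a calm pond.
n = 64
u = np.zeros((n, n))   # surface height
ut = np.zeros((n, n))  # surface velocity
u[n // 2, n // 2] = 1.0

for _ in range(100):
    u, ut = step(u, ut)
```

As in the Mandelbrot example, the TensorFlow version expresses the same whole-array update as graph ops, so the ripples can be computed on a GPU and rendered frame by frame.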