
[Tensorflow] Core Concepts and Common Confusions

This post is written from my personal perspective, so readers should already have a basic idea of what deep learning is, and preferably be familiar with PyTorch.

The official documentation recommends using Estimators and Datasets, but I personally chose to start from the Layers API and the low-level APIs, which offer the kind of access similar to PyTorch's, and work my way up to Estimators and Datasets.

TensorFlow implicitly defines a default graph for you, but I prefer to define the graph explicitly and group all graph definitions inside a context manager. The run method is basically the whole point of creating a session.
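A minimal sketch of this pattern (using the TF1-style API via `tensorflow.compat.v1` so it also runs under TF2; in TF1 you would just `import tensorflow as tf`):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()  # needed under TF2 for graph/session semantics

# Define the graph explicitly and group all ops inside its context.
graph = tf.Graph()
with graph.as_default():
    a = tf.constant(2)
    b = tf.constant(3)
    c = a + b

# Nothing has been computed yet; a session actually runs the graph.
with tf.Session(graph=graph) as sess:
    result = sess.run(c)
    print(result)  # 5
```

Keeping everything under `graph.as_default()` makes it obvious which graph each op belongs to, instead of relying on the implicit default graph.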

(You can also use variables instead of tensors in feeds, although it’s not very common.) In the following example from the official tutorial, y is passed as the sole element of fetches, and the value for the placeholder x is passed in a dictionary. In PyTorch, by contrast, a variable is part of the automatic differentiation module and a wrapper around a tensor.
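A sketch of the fetches/feeds mechanics described above (the op `y = x * 2` is my own placeholder example, not the tutorial's exact code):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# x is a placeholder: a graph input whose value is supplied at run time.
x = tf.placeholder(tf.float32, shape=[3])
y = x * 2  # hypothetical op standing in for the tutorial's computation

with tf.Session() as sess:
    # y is the sole fetch; the value for x is fed via a dictionary.
    result = sess.run(y, feed_dict={x: [1.0, 2.0, 3.0]})
    print(result)  # [2. 4. 6.]
```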

(The official documentation mentions that modifications to variables are visible across multiple sessions, but that seems to apply only to concurrent sessions running on multiple workers.) Saving variables/weights in TensorFlow usually involves serializing all variables into a file.

Saving models in TensorFlow involves defining a Saver in the graph definition and invoking its save method inside a session. There is also a SavedModel class that saves not only the variables but also the graph and the graph's metadata for you.
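A minimal sketch of the Saver workflow (the variable name `w` and the checkpoint path `/tmp/model.ckpt` are arbitrary choices for illustration):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

graph = tf.Graph()
with graph.as_default():
    w = tf.get_variable("w", initializer=tf.constant([1.0, 2.0]))
    # The Saver must be defined as part of the graph definition.
    saver = tf.train.Saver()

with tf.Session(graph=graph) as sess:
    sess.run(tf.global_variables_initializer())
    # save() serializes all variables to checkpoint files at this path.
    save_path = saver.save(sess, "/tmp/model.ckpt")
```

Restoring later is symmetric: build the same graph, then call `saver.restore(sess, save_path)` instead of initializing.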

TensorBoard groups operations according to the namespaces they belong to, and generates a nice visual representation of the graph for you. An example of a tensor name is scope_outer/scope_inner/tensor_a:0.
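A quick sketch of how nested name scopes produce a tensor name like the one above:

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Each name_scope prefixes the names of the ops defined inside it.
with tf.name_scope("scope_outer"):
    with tf.name_scope("scope_inner"):
        tensor_a = tf.constant(1.0, name="tensor_a")

print(tensor_a.name)  # scope_outer/scope_inner/tensor_a:0
```

The trailing `:0` means this is the first output of the `tensor_a` op; these prefixes are exactly what TensorBoard uses to collapse the graph into groups.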

This Stack Overflow answer gives a brilliant explanation of the differences between the two, as illustrated in the following graph. It turns out there is only one difference: tf.variable_scope affects tf.get_variable, while tf.name_scope doesn’t.
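The single difference can be demonstrated directly (the scope and variable names here are my own, chosen for illustration):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

with tf.variable_scope("var_scope"):
    v = tf.get_variable("v", shape=[1])  # picks up the variable scope

with tf.name_scope("name_scope"):
    w = tf.get_variable("w", shape=[1])  # name_scope is ignored by get_variable
    op = tf.add(v, w)                    # ordinary ops do get the name scope

print(v.name)   # var_scope/v:0
print(w.name)   # w:0
print(op.name)  # name_scope/Add:0
```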

Here’s an example of mixing tf.variable_scope and tf.name_scope from the official documentation. IMO, you usually want variable_scope unless you need to put operations and variables at different levels of the namespace hierarchy.
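The mixing example, adapted from the official documentation (scope names "foo" and "bar" are the documentation's own):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

with tf.variable_scope("foo"):
    with tf.name_scope("bar"):
        v = tf.get_variable("v", [1])  # only the variable_scope applies
        x = 1.0 + v                    # ops pick up both scopes

print(v.name)      # foo/v:0
print(x.op.name)   # foo/bar/add
```

Note how the variable lands under `foo/` only, while the add op is nested under `foo/bar/`, which is exactly the situation where mixing the two scope types is useful.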