“When we open-sourced TensorFlow we were hoping to build a machine learning platform for everyone in the world,” said Jeff Dean at the first annual TensorFlow Developer Summit. They’ve done well in achieving that goal: TensorFlow is undeniably the most popular ML project on GitHub with over 77K stars.

Let’s take a look at some of the new features in TensorFlow 1.4!

Keras

One of the biggest changes in 1.4 has to be the addition of the extremely popular ML framework Keras, which has graduated into the core package as tf.keras. Its high-level API is meant to minimize the time between your idea and a working implementation, and it integrates smoothly with other TensorFlow APIs, including the Estimator API. Since Keras is now part of the TensorFlow core, it can be relied upon for production workflows.
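To get a feel for the tf.keras API, here is a minimal sketch of defining and compiling a small model; the layer sizes and loss are arbitrary choices for illustration, not from the release notes.

```python
import tensorflow as tf

# A tiny feed-forward classifier built with the tf.keras Sequential API.
# The layer widths (32, 10) and 16-dimensional input are illustrative.
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(32, activation='relu', input_shape=(16,)),
    tf.keras.layers.Dense(10, activation='softmax'),
])

# Compile with a named optimizer and loss, ready for model.fit(...).
model.compile(optimizer='adam', loss='categorical_crossentropy')
```

From here, training is a single call to model.fit on your data, and because tf.keras models are regular TensorFlow objects, they can also feed into the Estimator API.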

Datasets

Here’s another API that’s made the jump to the core package: tf.data. With version 1.4, the Dataset API graduates into core and gains support for Python generators.

Going forward, the TensorFlow team strongly recommends the Dataset API for building input pipelines for TensorFlow models. Why? It provides much more functionality than the older approaches, such as feed_dict or queue-based pipelines, delivers better all-around performance, and is much cleaner and easier to use.

Additionally, the TensorFlow team has made it clear that future development will focus on the Dataset API, rather than on bringing the older input methods up to speed.

Bug fixes, breaking changes, and known issues

It’s not an update without at least one issue. Let’s count ourselves lucky that these are minimal. We’ve got a few of the bigger ones here, but you should head over to the release notes for a full accounting of all the changes.

Custom op libraries must link against libtensorflow_framework.so (installed at tf.sysconfig.get_lib()).
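In practice, this means adding a -ltensorflow_framework flag to your custom op build. A hedged sketch of what that might look like, following the pattern from TensorFlow's custom-op documentation (my_op.cc and my_op.so are hypothetical file names):

```shell
# Locate TensorFlow's headers and the directory holding
# libtensorflow_framework.so.
TF_INC=$(python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())')
TF_LIB=$(python -c 'import tensorflow as tf; print(tf.sysconfig.get_lib())')

# Compile the op into a shared library, now linking against
# libtensorflow_framework.so as 1.4 requires.
g++ -std=c++11 -shared my_op.cc -o my_op.so -fPIC \
    -I"$TF_INC" -L"$TF_LIB" -ltensorflow_framework -O2
```

If you have an existing custom-op build script, the -L"$TF_LIB" -ltensorflow_framework pair is the piece you'll likely need to add.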

tf.nn.rnn_cell.DropoutWrapper is now more careful about dropping out LSTM states. Specifically, it no longer ever drops the c (memory) state of an LSTMStateTuple. The new behavior leads to proper dropout behavior for LSTMs and stacked LSTMs. This bug fix follows recommendations from published literature, but is a behavioral change. State dropout behavior may be customized via the new dropout_state_filter_visitor argument.