Args:

feature_columns: An iterable containing all the feature columns used by
the model. All items in the set should be instances of classes derived
from FeatureColumn.

n_batches_per_layer: the number of batches to collect statistics per
layer. The total number of batches is the total number of training
examples divided by the batch size.

model_dir: Directory to save model parameters, graph, etc. This can
also be used to load checkpoints from the directory into an estimator to
continue training a previously saved model.

n_classes: number of label classes. Defaults to 2 (binary classification).
Multiclass support is not yet implemented.

weight_column: A string or a NumericColumn created by
tf.fc_old.numeric_column defining the feature column representing
weights. It is used to down-weight or boost examples during training; it
will be multiplied by the loss of the example. If it is a string, it is
used as a key to fetch the weight tensor from the features. If it is a
NumericColumn, the raw tensor is fetched by key weight_column.key, then
weight_column.normalizer_fn is applied to it to produce the weight tensor.

label_vocabulary: A list of strings representing possible label values. If
given, labels must be of string type and take values from
label_vocabulary. If it is not given, labels are assumed to be already
encoded as integers or floats within [0, 1] for n_classes=2, or encoded
as integer values in {0, 1, ..., n_classes-1} for n_classes>2. An error
will be raised if the vocabulary is not provided and the labels are
strings.

n_trees: number of trees to be created.

max_depth: maximum depth of the tree to grow.

learning_rate: shrinkage parameter to be used when a tree is added to the
model.

l1_regularization: regularization multiplier applied to the absolute
weights of the tree leaves.

l2_regularization: regularization multiplier applied to the squared weights
of the tree leaves.

tree_complexity: regularization factor to penalize trees with more leaves.

min_node_weight: minimum hessian a node must have for a split to be
considered. The value will be compared with
sum(leaf_hessian)/(batch_size * n_batches_per_layer).

config: RunConfig object to configure the runtime settings.

center_bias: Whether bias centering needs to occur. Bias centering refers
to the first node in the very first tree returning the prediction that
is aligned with the original labels distribution. For example, for
regression problems, the first node will return the mean of the labels.
For binary classification problems, it will return a logit for a prior
probability of label 1.

pruning_mode: one of 'none', 'pre', or 'post', indicating no pruning,
pre-pruning (do not split a node if not enough gain is observed), or
post-pruning (build the tree up to a max depth and then prune branches
with negative gain). For pre- and post-pruning, you MUST provide
tree_complexity > 0.

quantile_sketch_epsilon: float between 0 and 1. Error bound for quantile
computation. This is only used for float feature columns, and the number
of buckets generated per float feature is 1/quantile_sketch_epsilon.

train_in_memory: bool. When true, it assumes the dataset is in memory,
i.e., input_fn should return the entire dataset as a single batch,
n_batches_per_layer should be set to 1, num_worker_replicas should be 1,
and num_ps_replicas should be 0 in tf.estimator.RunConfig.
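
As a rough illustration, constructing and feeding the estimator might look
like the sketch below. The feature names, dataset, and hyperparameter
values are hypothetical, not prescribed by this API:

    import tensorflow as tf

    # Hypothetical raw float features; quantile_sketch_epsilon lets the
    # estimator bucketize them internally.
    age = tf.feature_column.numeric_column('age')
    fare = tf.feature_column.numeric_column('fare')

    def train_input_fn():
        # Toy in-memory dataset returned as a single batch, matching
        # train_in_memory=True and n_batches_per_layer=1.
        features = {'age': [25.0, 40.0, 60.0], 'fare': [7.25, 71.28, 8.05]}
        labels = [0, 1, 0]
        return tf.data.Dataset.from_tensors((features, labels))

    classifier = tf.estimator.BoostedTreesClassifier(
        feature_columns=[age, fare],
        n_batches_per_layer=1,  # one batch holds the whole dataset
        n_trees=50,
        max_depth=3,
        learning_rate=0.1,
        quantile_sketch_epsilon=0.01,
        train_in_memory=True)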

Attributes:

config

model_dir

model_fn: Returns the model_fn which is bound to self.params.

params

Raises:

ValueError: when wrong arguments are given or unsupported functionalities
are requested.

Eager Compatibility

Estimators can be used while eager execution is enabled. Note that input_fn
and all hooks are executed inside a graph context, so they have to be written
to be compatible with graph mode. input_fn code using tf.data generally
works in both graph and eager modes.

Methods

eval_dir

Args:

name: Name of the evaluation if user needs to run multiple evaluations on
different data sets, such as on training data vs test data. Metrics for
different evaluations are saved in separate folders, and appear
separately in tensorboard.
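
For instance, a named evaluation's metrics directory can be located as in
this sketch; the name 'validation' is hypothetical, and the exact layout
under model_dir is an implementation detail:

    # Typically resolves to a subdirectory of model_dir, e.g. eval_validation.
    print(classifier.eval_dir(name='validation'))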

evaluate

Args:

input_fn: A function that constructs the input data for evaluation. See
Premade Estimators for more information. The function should construct
and return one of the following:

A tf.data.Dataset object: Outputs of Dataset object must be a
tuple (features, labels) with same constraints as below.

A tuple (features, labels): Where features is a tf.Tensor or a
dictionary of string feature name to Tensor and labels is a
Tensor or a dictionary of string label name to Tensor. Both
features and labels are consumed by model_fn. They should
satisfy the expectation of model_fn from inputs.

steps: Number of steps for which to evaluate model. If None, evaluates
until input_fn raises an end-of-input exception.

hooks: List of tf.train.SessionRunHook subclass instances. Used for
callbacks inside the evaluation call.

checkpoint_path: Path of a specific checkpoint to evaluate. If None, the
latest checkpoint in model_dir is used. If there are no checkpoints
in model_dir, evaluation is run with newly initialized Variables
instead of ones restored from checkpoint.

name: Name of the evaluation if user needs to run multiple evaluations on
different data sets, such as on training data vs test data. Metrics for
different evaluations are saved in separate folders, and appear
separately in tensorboard.

Returns:

A dict containing the evaluation metrics specified in model_fn keyed by
name, as well as an entry global_step which contains the value of the
global step for which this evaluation was performed. For canned
estimators, the dict contains the loss (mean loss per mini-batch) and
the average_loss (mean loss per sample). Canned classifiers also return
the accuracy. Canned regressors also return the label/mean and the
prediction/mean.
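
A minimal call, assuming the classifier constructed above and a
hypothetical eval_input_fn:

    # Evaluate on 10 batches and read back canned-classifier metrics.
    metrics = classifier.evaluate(input_fn=eval_input_fn, steps=10)
    print(metrics['accuracy'], metrics['average_loss'], metrics['global_step'])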

experimental_export_all_saved_models

For each mode passed in via the input_receiver_fn_map,
this method builds a new graph by calling the input_receiver_fn to obtain
feature and label Tensors. Next, this method calls the Estimator's
model_fn in the passed mode to generate the model graph based on
those features and labels, and restores the given checkpoint
(or, lacking that, the most recent checkpoint) into the graph.
Only one of the modes is used for saving variables to the SavedModel
(order of preference: tf.estimator.ModeKeys.TRAIN,
tf.estimator.ModeKeys.EVAL, then
tf.estimator.ModeKeys.PREDICT), such that up to three
tf.MetaGraphDefs are saved with a single set of variables in a single
SavedModel directory.

For the variables and tf.MetaGraphDefs, this method builds a timestamped
export directory below export_dir_base, and writes a SavedModel into it
containing the tf.MetaGraphDef for the given mode and its associated
signatures.

For prediction, the exported MetaGraphDef will provide one SignatureDef
for each element of the export_outputs dict returned from the model_fn,
named using the same keys. One of these keys is always
tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY,
indicating which
signature will be served when a serving request does not specify one.
For each signature, the outputs are provided by the corresponding
tf.estimator.export.ExportOutputs, and the inputs are always the input
receivers provided by
the serving_input_receiver_fn.

For training and evaluation, the train_op is stored in an extra
collection,
and loss, metrics, and predictions are included in a SignatureDef for the
mode in question.

Extra assets may be written into the SavedModel via the assets_extra
argument. This should be a dict, where each key gives a destination path
(including the filename) relative to the assets.extra directory. The
corresponding value gives the full path of the source file to be copied.
For example, the simple case of copying a single file without renaming it
is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.

Args:

export_dir_base: A string containing a directory in which to create
timestamped subdirectories containing exported SavedModels.

input_receiver_fn_map: dict of tf.estimator.ModeKeys to
input_receiver_fn mappings, where the input_receiver_fn is a
function that takes no arguments and returns the appropriate subclass of
InputReceiver.

assets_extra: A dict specifying how to populate the assets.extra directory
within the exported SavedModel, or None if no extra assets are
needed.

as_text: whether to write the SavedModel proto in text format.

checkpoint_path: The checkpoint path to export. If None (the default),
the most recent checkpoint found within the model directory is chosen.

Returns:

The string path to the exported directory.

Raises:

ValueError: if any input_receiver_fn is None, no export_outputs
are provided, or no checkpoint can be found.
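
As a sketch, exporting only the predict graph might look like the
following; the feature spec and export path are assumptions:

    # Parsing receiver that expects serialized tf.Example protos.
    feature_spec = {'age': tf.io.FixedLenFeature([1], tf.float32),
                    'fare': tf.io.FixedLenFeature([1], tf.float32)}
    receiver_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
        feature_spec)
    path = classifier.experimental_export_all_saved_models(
        export_dir_base='/tmp/exports',
        input_receiver_fn_map={tf.estimator.ModeKeys.PREDICT: receiver_fn})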

experimental_predict_with_explanations

Args:

input_fn: A function that provides input data for predicting as
minibatches. See Premade Estimators
for more information. The function should construct and return one of
the following:

A tf.data.Dataset object: Outputs of Dataset object must be a
tuple (features, labels) with same constraints as below.

A tuple (features, labels): Where features is a tf.Tensor or a
dictionary of string feature name to Tensor and labels is a
Tensor or a dictionary of string label name to Tensor. Both
features and labels are consumed by model_fn. They should
satisfy the expectation of model_fn from inputs.

predict_keys: list of str, names of the keys to predict. It is used if
the tf.estimator.EstimatorSpec.predictions is a dict. If
predict_keys is used, the rest of the predictions will be filtered
from the dictionary, with the exception of 'bias' and 'dfc', which will
always be in the dictionary. If None, returns all keys in the prediction
dict, as well as two new keys, 'dfc' and 'bias'.

hooks: List of tf.train.SessionRunHook subclass instances. Used for
callbacks inside the prediction call.

checkpoint_path: Path of a specific checkpoint to predict. If None, the
latest checkpoint in model_dir is used. If there are no checkpoints
in model_dir, prediction is run with newly initialized Variables
instead of ones restored from checkpoint.

Yields:

Evaluated values of predictions tensors. The predictions tensors will
contain at least two keys, 'dfc' and 'bias', for model explanations. The
dfc value corresponds to the contribution of each feature to the overall
prediction for this instance (positive indicating that the feature makes
the model more likely to select class 1, and negative less likely). The
dfc is an OrderedDict, where the keys are the feature column names and
the values are the contributions, sorted by the absolute value of the
contribution (e.g., OrderedDict([('age', -0.54), ('gender', 0.4), ('fare',
0.21)])). The 'bias' value will be the same across all the instances,
corresponding to the probability (classification) or prediction
(regression) of the training data distribution.
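
For example, the explanations can be inspected as in this sketch, with a
hypothetical eval_input_fn:

    for pred in classifier.experimental_predict_with_explanations(
            input_fn=eval_input_fn):
        # dfc is an OrderedDict sorted by absolute contribution.
        print(pred['bias'], list(pred['dfc'].items())[:3])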

Raises:

ValueError: when wrong arguments are given or unsupported functionalities
are requested.

export_saved_model

This method builds a new graph by first calling the
serving_input_receiver_fn to obtain feature Tensors, and then calling
this Estimator's model_fn to generate the model graph based on those
features. It restores the given checkpoint (or, lacking that, the most
recent checkpoint) into this graph in a fresh session. Finally it creates
a timestamped export directory below the given export_dir_base, and writes
a SavedModel into it containing a single tf.MetaGraphDef saved from this
session.

The exported MetaGraphDef will provide one SignatureDef for each
element of the export_outputs dict returned from the model_fn, named
using
the same keys. One of these keys is always
tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY,
indicating which
signature will be served when a serving request does not specify one.
For each signature, the outputs are provided by the corresponding
tf.estimator.export.ExportOutputs, and the inputs are always the input
receivers provided by
the serving_input_receiver_fn.

Extra assets may be written into the SavedModel via the assets_extra
argument. This should be a dict, where each key gives a destination path
(including the filename) relative to the assets.extra directory. The
corresponding value gives the full path of the source file to be copied.
For example, the simple case of copying a single file without renaming it
is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.

The experimental_mode parameter can be used to export a single
train/eval/predict graph as a SavedModel.
See experimental_export_all_saved_models for full docs.

Args:

export_dir_base: A string containing a directory in which to create
timestamped subdirectories containing exported SavedModels.
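
A minimal export sketch, assuming raw tensors at serving time; the
placeholder shapes and the export path are assumptions:

    receiver_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(
        {'age': tf.compat.v1.placeholder(tf.float32, [None, 1]),
         'fare': tf.compat.v1.placeholder(tf.float32, [None, 1])})
    export_path = classifier.export_saved_model('/tmp/saved_model', receiver_fn)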

predict

Please note that interleaving two predict outputs does not work. See:
issue/20506

Args:

input_fn: A function that constructs the features. Prediction continues
until input_fn raises an end-of-input exception
(tf.errors.OutOfRangeError or StopIteration).
See Premade Estimators
for more information. The function should construct and return one of
the following:

A tf.data.Dataset object: Outputs of Dataset object must have
same constraints as below.

features: A tf.Tensor or a dictionary of string feature name to
Tensor. features are consumed by model_fn. They should satisfy
the expectation of model_fn from inputs.

A tuple, in which case the first item is extracted as features.

predict_keys: list of str, names of the keys to predict. It is used if
the tf.estimator.EstimatorSpec.predictions is a dict. If
predict_keys is used, the rest of the predictions will be filtered
from the dictionary. If None, returns all.

hooks: List of tf.train.SessionRunHook subclass instances. Used for
callbacks inside the prediction call.

checkpoint_path: Path of a specific checkpoint to predict. If None, the
latest checkpoint in model_dir is used. If there are no checkpoints
in model_dir, prediction is run with newly initialized Variables
instead of ones restored from checkpoint.

yield_single_examples: If False, yields the whole batch as returned by
the model_fn instead of decomposing the batch into individual
elements. This is useful if model_fn returns some tensors whose first
dimension is not equal to the batch size.

Yields:

Evaluated values of predictions tensors.

Raises:

ValueError: If the batch length of predictions is not the same across
prediction tensors and yield_single_examples is True.
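
A typical iteration, with a hypothetical predict_input_fn; the keys shown
('class_ids', 'probabilities') are the usual ones for canned classifiers
but are assumptions here:

    for pred in classifier.predict(input_fn=predict_input_fn):
        print(pred['class_ids'], pred['probabilities'])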

train

Args:

input_fn: A function that provides input data for training as minibatches.
See Premade Estimators
for more information. The function should construct and return one of
the following:

A tf.data.Dataset object: Outputs of Dataset object must be
a tuple (features, labels) with same constraints as below.

A tuple (features, labels): Where features is a tf.Tensor or
a dictionary of string feature name to Tensor and labels is a
Tensor or a dictionary of string label name to Tensor. Both
features and labels are consumed by model_fn. They should
satisfy the expectation of model_fn from inputs.

hooks: List of tf.train.SessionRunHook subclass instances. Used for
callbacks inside the training loop.

steps: Number of steps for which to train the model. If None, train
forever or until input_fn generates the tf.errors.OutOfRange error or
StopIteration exception. steps works incrementally: if you call
train(steps=10) twice, training occurs for a total of 20 steps. If
OutOfRange or StopIteration occurs in the middle, training stops before
20 steps. If you don't want incremental behavior, set max_steps
instead. If set, max_steps must be None.

max_steps: Number of total steps for which to train the model. If None,
train forever or until input_fn generates the tf.errors.OutOfRange
error or StopIteration exception. If set, steps must be None. If
OutOfRange or StopIteration occurs in the middle, training stops before
max_steps steps. Two calls to train(steps=100) mean 200 training
iterations; on the other hand, two calls to train(max_steps=100) mean
that the second call will not do any iterations, since the first call
did all 100 steps.

saving_listeners: list of CheckpointSaverListener objects. Used for
callbacks that run immediately before or after checkpoint saves.
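
As a sketch of the incremental behavior described above, reusing the
hypothetical train_input_fn from the constructor example:

    classifier.train(input_fn=train_input_fn, steps=100)  # global step reaches 100
    classifier.train(input_fn=train_input_fn, steps=100)  # continues to step 200
    # By contrast, a second call with max_steps=100 would do no iterations.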