This layer takes input of shape (batch_size, units) or (batch_size, 1) and
transforms it using units separate lookup tables, satisfying the
monotonicity and bounds constraints if specified. If multi-dimensional
input is provided, each output corresponds to its respective input;
otherwise all calibration functions act on the same input. All units share
the same layer configuration, but each one has its own set of trained
parameters.

Input shape:

Rank-2 tensor with shape: (batch_size, units) or (batch_size, 1).

Output shape:

Rank-2 tensor with shape: (batch_size, units).

Example:

calibrator = tfl.layers.CategoricalCalibration(
    # Number of categories.
    num_buckets=3,
    # Output can be bounded.
    output_min=0.0,
    output_max=1.0,
    # For the categorical calibration layer, monotonicity is specified as
    # pairs of category indices. The output for the first category in each
    # pair will be less than or equal to the output for the second category.
    monotonicities=[(0, 1), (0, 2)])
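
A minimal usage sketch (assuming tf and tfl are imported; values are
illustrative): the layer expects integer category indices in
[0, num_buckets).

inputs = tf.constant([[0], [1], [2], [1]])  # shape (4, 1), dtype int32
# Output has shape (4, 1); the layer's constraints keep values
# within [0.0, 1.0].
outputs = calibrator(inputs)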

input

Retrieves the input tensor(s) of a layer.

Only applicable if the layer has exactly one input,
i.e. if it is connected to one incoming layer.

input_spec

InputSpec instance(s) describing the input format for this layer.

When you create a layer subclass, you can set self.input_spec to enable
the layer to run input compatibility checks when it is called.
Consider a Conv2D layer: it can only be called on a single input tensor
of rank 4. As such, you can set, in __init__():

self.input_spec = tf.keras.layers.InputSpec(ndim=4)

Now, if you try to call the layer on an input that isn't rank 4
(for instance, an input of shape (2,)), it will raise a nicely-formatted
error:

ValueError: Input 0 of layer conv2d is incompatible with the layer:
expected ndim=4, found ndim=1. Full shape received: [2]
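
A minimal sketch of this pattern (the layer class is hypothetical):

import tensorflow as tf

class MyRank4Layer(tf.keras.layers.Layer):
    def __init__(self):
        super().__init__()
        # Input compatibility check: only rank-4 tensors are accepted.
        self.input_spec = tf.keras.layers.InputSpec(ndim=4)

    def call(self, inputs):
        return inputs

layer = MyRank4Layer()
layer(tf.zeros([2, 8, 8, 3]))  # OK: rank 4.
# layer(tf.zeros([2]))  # raises the ValueError shown above.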

Methods

add_loss

add_loss(
    losses, inputs=None
)

Add loss tensor(s), potentially dependent on layer inputs.

Some losses (for instance, activity regularization losses) may be dependent
on the inputs passed when calling a layer. Hence, when reusing the same
layer on different inputs a and b, some entries in layer.losses may
be dependent on a and some on b. This method automatically keeps track
of dependencies.

This method can be used inside a subclassed layer or model's call
function, in which case losses should be a Tensor or list of Tensors.

Example:
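
A minimal sketch of this usage inside a subclassed layer (the class name
is illustrative):

class ActivityRegularizationLayer(tf.keras.layers.Layer):
    def call(self, inputs):
        # Inputs-dependent (conditional) loss.
        self.add_loss(0.1 * tf.reduce_sum(inputs))
        return inputs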

This method can also be called directly on a Functional Model during
construction. In this case, any loss Tensors passed to this Model must
be symbolic and be able to be traced back to the model's Inputs. These
losses become part of the model's topology and are tracked in get_config.
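
For instance, a sketch of the Functional-Model case (shapes and names are
illustrative):

inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Symbolic loss, traceable back to the model's Inputs.
model.add_loss(tf.abs(tf.reduce_mean(x)))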

If this is not the case for your loss (if, for example, your loss references
a Variable of one of the model's layers), you can wrap your loss in a
zero-argument lambda. These losses are not tracked as part of the model's
topology since they can't be serialized.
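
Continuing the sketch above, a loss that references a layer Variable
(here the kernel of the Dense layer d) must be wrapped:

inputs = tf.keras.Input(shape=(10,))
d = tf.keras.layers.Dense(10)
x = d(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Zero-argument lambda; not tracked as part of the model's topology.
model.add_loss(lambda: tf.reduce_mean(d.kernel))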

The get_losses_for method allows you to retrieve the losses relevant to a
specific set of inputs.

Arguments

losses

Loss tensor, or list/tuple of tensors. Rather than tensors, losses
may also be zero-argument callables which create a loss tensor.

inputs

Ignored when executing eagerly. If anything other than None is
passed, it signals the losses are conditional on some of the layer's
inputs, and thus they should only be run where these inputs are
available. This is the case for activity regularization losses, for
instance. If None is passed, the losses are assumed
to be unconditional, and will apply across all dataflows of the layer
(e.g. weight regularization losses).

add_metric

add_metric(
    value, aggregation=None, name=None
)

Adds metric tensor to the layer.

Args

value

Metric tensor.

aggregation

Sample-wise metric reduction function. If aggregation=None,
it indicates that the metric tensor provided has already been
aggregated; e.g., bin_acc = BinaryAccuracy(name='acc') followed by
model.add_metric(bin_acc(y_true, y_pred)). If aggregation='mean', the
given metric tensor will be sample-wise reduced using the mean function;
e.g., model.add_metric(tf.reduce_sum(outputs), name='output_mean',
aggregation='mean').
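
For example, a sketch of the aggregation='mean' case described above
(shapes are illustrative):

inputs = tf.keras.Input(shape=(10,))
outputs = tf.keras.layers.Dense(1)(inputs)
model = tf.keras.Model(inputs, outputs)
# Unaggregated tensor: sample-wise reduced with the mean function.
model.add_metric(tf.reduce_sum(outputs), name='output_mean',
                 aggregation='mean')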

count_params

count_params()

Count the total number of scalars composing the weights.

Returns

An integer count.

Raises

ValueError
if the layer isn't yet built
(in which case its weights aren't yet defined).

from_config

@classmethod
from_config(
    config
)

Creates a layer from its config.

This method is the reverse of get_config,
capable of instantiating the same layer from the config
dictionary. It does not handle layer connectivity
(handled by Network), nor weights (handled by set_weights).

Arguments

config

A Python dictionary, typically the
output of get_config.

Returns

A layer instance.
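
For example, a minimal round-trip sketch using the calibrator constructed
earlier:

config = calibrator.get_config()
clone = tfl.layers.CategoricalCalibration.from_config(config)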

get_config

get_config()

Returns the serializable config of the layer as a Python dictionary.
The same layer can be re-instantiated from this config via from_config
(without its trained weights).

get_weights

get_weights()

Returns the current weights of the layer.

The weights of a layer represent the state of the layer. This function
returns both trainable and non-trainable weight values associated with this
layer as a list of Numpy arrays, which can in turn be used to load state
into similarly parameterized layers.

For example, a Dense layer returns a list of two values: per-output
weights and the bias value. These can be used to set the weights of
another Dense layer:
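
A minimal sketch (initializers and values are illustrative):

layer_a = tf.keras.layers.Dense(
    1, kernel_initializer=tf.constant_initializer(1.))
a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]]))
layer_a.get_weights()
# [array([[1.], [1.], [1.]], dtype=float32), array([0.], dtype=float32)]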

set_weights

set_weights(
    weights
)

Sets the weights of the layer, from Numpy arrays.

The weights of a layer represent the state of the layer. This function
sets the weight values from numpy arrays. The weight values should be
passed in the order they are created by the layer. Note that the layer's
weights must be instantiated before this function is used, by first
calling the layer on an input.

For example, a Dense layer returns a list of two values: per-output
weights and the bias value. These can be used to set the weights of
another Dense layer:
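
Continuing the get_weights sketch above:

layer_b = tf.keras.layers.Dense(
    1, kernel_initializer=tf.constant_initializer(2.))
b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]]))
layer_b.set_weights(layer_a.get_weights())
# layer_b now has the same weights as layer_a:
layer_b.get_weights()
# [array([[1.], [1.], [1.]], dtype=float32), array([0.], dtype=float32)]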

__call__

Wraps call, applying pre- and post-processing steps.

If the layer's call method takes a mask argument (as some Keras
layers do), its default value will be set to the mask generated
for inputs by the previous layer (if input did come from
a layer that generated a corresponding mask, i.e. if it came from
a Keras layer with masking support).