tf.keras.layers.TimeDistributed

Class TimeDistributed

This wrapper allows you to apply a layer to every temporal slice of an input.

The input should be at least 3D, and the dimension at index one
will be considered the temporal dimension.

Consider a batch of 32 samples,
where each sample is a sequence of 10 vectors of 16 dimensions.
The batch input shape of the layer is then (32, 10, 16),
and the input_shape, not including the samples dimension, is (10, 16).

You can then use TimeDistributed to apply a Dense layer
to each of the 10 timesteps, independently:
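# A minimal sketch: wrap a Dense layer so the same weights are applied
# to each of the 10 timesteps independently.
import tensorflow as tf

model = tf.keras.Sequential()
model.add(tf.keras.layers.TimeDistributed(
    tf.keras.layers.Dense(8), input_shape=(10, 16)))
# now model.output_shape == (None, 10, 8)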

Properties

scope_name

trainable

trainable_variables

trainable_weights

updates

variables

Returns the list of all layer variables/weights.

Returns:

A list of variables.

weights

Returns the list of all layer variables/weights.

Returns:

A list of variables.

Methods

__init__

__init__(
    layer,
    **kwargs
)

Arguments:

layer: a layer instance.

__call__

__call__(
    inputs,
    *args,
    **kwargs
)

Wrapper around self.call(), for handling internal references.

If a Keras tensor is passed:
- We call self._add_inbound_node().
- If necessary, we build the layer to match the shape of the input(s).
- We update the _keras_history of the output tensor(s) with the current layer.

This is done as part of _add_inbound_node().
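For instance, calling the wrapper on a Keras tensor exercises this path (a minimal sketch; the shapes follow the example above):

import tensorflow as tf

inputs = tf.keras.Input(shape=(10, 16))   # a Keras tensor
layer = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(8))
outputs = layer(inputs)   # __call__ builds the layer, then invokes call()
# outputs now carries a _keras_history entry referencing this layer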

Arguments:

inputs: Can be a tensor or list/tuple of tensors.

*args: Additional positional arguments to be passed to call(). Only
allowed in subclassed Models with custom call() signatures. In other
cases, Layer inputs must be passed using the inputs argument and
non-inputs must be keyword arguments.

**kwargs: Additional keyword arguments to be passed to call().

Returns:

Output of the layer's call method.

Raises:

ValueError: in case the layer is missing shape information
for its build call.

TypeError: If positional arguments are passed and this Layer is not a
subclassed Model.

__deepcopy__

__deepcopy__(memo)

add_loss

add_loss(
    losses,
    inputs=None
)

Add loss tensor(s), potentially dependent on layer inputs.

Some losses (for instance, activity regularization losses) may be dependent
on the inputs passed when calling a layer. Hence, when reusing the same
layer on different inputs a and b, some entries in layer.losses may
be dependent on a and some on b. This method automatically keeps track
of dependencies.

The get_losses_for method allows you to retrieve the losses relevant to a
specific set of inputs.

Note that add_loss is not supported when executing eagerly. Instead,
variable regularizers may be added through add_variable. Activity
regularization is not supported directly (but such losses may be returned
from Layer.call()).
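A minimal sketch of both cases in graph mode (the 0.01 coefficients are arbitrary):

import tensorflow as tf

x = tf.keras.Input(shape=(16,))
dense = tf.keras.layers.Dense(8)
y = dense(x)

# Unconditional loss: depends only on the layer's weights.
dense.add_loss(0.01 * tf.reduce_sum(tf.square(dense.kernel)))

# Conditional loss: depends on the input x, so pass inputs=.
dense.add_loss(0.01 * tf.reduce_mean(tf.square(y)), inputs=[x])

dense.get_losses_for([x])  # returns only the conditional entry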

Arguments:

losses: Loss tensor, or list/tuple of tensors.

inputs: If anything other than None is passed, it signals the losses
are conditional on some of the layer's inputs,
and thus they should only be run where these inputs are available.
This is the case for activity regularization losses, for instance.
If None is passed, the losses are assumed
to be unconditional, and will apply across all dataflows of the layer
(e.g. weight regularization losses).

Raises:

RuntimeError: If called in Eager mode.

add_update

add_update(
    updates,
    inputs=None
)

Add update op(s), potentially dependent on layer inputs.

Weight updates (for instance, the updates of the moving mean and variance
in a BatchNormalization layer) may be dependent on the inputs passed
when calling a layer. Hence, when reusing the same layer on
different inputs a and b, some entries in layer.updates may be
dependent on a and some on b. This method automatically keeps track
of dependencies.

The get_updates_for method allows you to retrieve the updates relevant to a
specific set of inputs.

This call is ignored in Eager mode.

Arguments:

updates: Update op, or list/tuple of update ops.

inputs: If anything other than None is passed, it signals the updates
are conditional on some of the layer's inputs,
and thus they should only be run where these inputs are available.
This is the case for BatchNormalization updates, for instance.
If None, the updates will be taken into account unconditionally,
and you are responsible for making sure that any dependency they might
have is available at runtime.
A step counter might fall into this category.
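Both cases in a minimal sketch (graph mode; tf.assign_add is the TF 1.x update op):

import tensorflow as tf

# Conditional: BatchNormalization registers its moving-average updates
# against the inputs it was called on.
bn = tf.keras.layers.BatchNormalization()
x = tf.keras.Input(shape=(16,))
y = bn(x, training=True)
bn.get_updates_for([x])   # the moving mean/variance update ops

# Unconditional: a step counter that should run on every dataflow.
step = tf.Variable(0, trainable=False, name="step")
bn.add_update(tf.assign_add(step, 1))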

add_variable

add_variable(
    name,
    shape,
    dtype=None,
    initializer=None,
    regularizer=None,
    trainable=True,
    constraint=None,
    partitioner=None
)

Adds a new variable to the layer, or gets an existing one; returns it.

Arguments:

name: variable name.

shape: variable shape.

dtype: The type of the variable. Defaults to self.dtype or float32.

initializer: initializer instance (callable).

regularizer: regularizer instance (callable).

trainable: whether the variable should be part of the layer's
"trainable_variables" (e.g. variables, biases)
or "non_trainable_variables" (e.g. BatchNorm mean, stddev).
Note, if the current variable scope is marked as non-trainable
then this parameter is ignored and any added variables are also
marked as non-trainable.

constraint: constraint instance (callable).

partitioner: (optional) partitioner instance (callable). If
provided, when the requested variable is created it will be split
into multiple partitions according to partitioner. In this case,
an instance of PartitionedVariable is returned. Available
partitioners include tf.fixed_size_partitioner and
tf.variable_axis_size_partitioner. For more details, see the
documentation of tf.get_variable and the "Variable Partitioners
and Sharding" section of the API guide.

Returns:

The created variable. Usually either a Variable or ResourceVariable
instance. If partitioner is not None, a PartitionedVariable
instance is returned.

Raises:

RuntimeError: If called with partitioned variable regularization and
eager execution is enabled.
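A sketch of typical usage inside a custom layer's build() (the layer and names here are illustrative, not part of this API):

import tensorflow as tf

class MyDense(tf.keras.layers.Layer):
  def __init__(self, units, **kwargs):
    super(MyDense, self).__init__(**kwargs)
    self.units = units

  def build(self, input_shape):
    # Created on the first call; retrieved unchanged on later calls.
    self.kernel = self.add_variable(
        name="kernel",
        shape=[int(input_shape[-1]), self.units],
        initializer=tf.keras.initializers.glorot_uniform(),
        regularizer=tf.keras.regularizers.l2(1e-4),
        trainable=True)
    super(MyDense, self).build(input_shape)

  def call(self, inputs):
    return tf.matmul(inputs, self.kernel)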