weight1PlusWeight2.train.map { result =>
result should be(310.0f)
weight2.data should be < 300.0f
weight1.data should be < 10.0f
}

Note

Unlike DoubleLayers, a DoubleLayer in this CumulativeDoubleLayers will share Tapes
created in the forward pass among all dependencies, avoiding re-evaluation
in the case of diamond dependencies in a neural network.
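The effect of sharing can be sketched in plain Scala (a simplified model of the behavior, not the DeepLearning.scala API): in a diamond dependency, the top layer is consumed by two branches, and a shared, cached result — analogous to a shared Tape — is computed only once.

```scala
// A simplified model of Tape sharing in a diamond dependency:
// `top` is consumed by two branches, which are then combined.
object DiamondSharing extends App {
  var forwardCount = 0

  // Shared evaluation, analogous to a shared Tape in CumulativeDoubleLayers:
  // the body runs once, and both branches reuse the cached result.
  lazy val top: Double = { forwardCount += 1; 10.0 + 300.0 }

  val branch1 = top * 2.0
  val branch2 = top * 3.0
  val bottom = branch1 + branch2

  assert(forwardCount == 1) // the top of the diamond was evaluated only once
  println(bottom)
}
```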

weight1PlusWeight2.train.map { result =>
result should be(310.0f)
weight2.data should be < 300.0f
weight1.data should be < 10.0f
}

Note

Unlike FloatLayers, a FloatLayer in this CumulativeFloatLayers will share Tapes
created in the forward pass among all dependencies, avoiding re-evaluation
in the case of diamond dependencies in a neural network.

Author:

杨博 (Yang Bo)

Note

Unlike INDArrayLayers, an INDArrayLayer in this CumulativeINDArrayLayers will share Tapes
created in the forward pass among all dependencies, avoiding re-evaluation
in the case of diamond dependencies in a neural network.

Author:

杨博 (Yang Bo)

Note

By default, the computation in a DoubleLayer is re-evaluated again and again
if the DoubleLayer is used by multiple other operations.
This behavior is very inefficient if there are diamond dependencies in a neural network.
It's wise to use CumulativeDoubleLayers instead of this DoubleLayers in such a neural network.
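The cost of re-evaluation can be sketched in plain Scala (a simplified model, not the DoubleLayers API): without sharing, the top of a diamond dependency is recomputed once per consumer, like a `def` whose body re-runs on every use.

```scala
// A simplified model of the re-evaluation described above:
// a `def` re-runs its body on every use, so the top of a
// diamond dependency is computed once per consumer.
object DiamondReevaluation extends App {
  var forwardCount = 0

  def top: Double = { forwardCount += 1; 10.0 + 300.0 }

  val bottom = top * 2.0 + top * 3.0 // two consumers, two evaluations

  assert(forwardCount == 2) // the same computation ran twice
  println(bottom)
}
```

In a deep network, this duplication compounds at every diamond, which is why the cumulative plugins cache the forward-pass result instead.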

A plugin that provides differentiable operators
on neural networks whose Data and Delta are scala.Float.

Author:

杨博 (Yang Bo)

Note

By default, the computation in a FloatLayer is re-evaluated again and again
if the FloatLayer is used by multiple other operations.
This behavior is very inefficient if there are diamond dependencies in a neural network.
It's wise to use CumulativeFloatLayers instead of this FloatLayers in such a neural network.

Author:

杨博 (Yang Bo)

Note

By default, the computation in an INDArrayLayer is re-evaluated again and again
if the INDArrayLayer is used by multiple other operations.
This behavior is very inefficient if there are diamond dependencies in a neural network.
It's wise to use CumulativeINDArrayLayers instead of this INDArrayLayers in such a neural network.