Selective fully connected layer. Different from fc, the output
of this layer may be sparse. It requires an additional input to indicate
several selected columns for output. If the selected columns are not
specified, selective_fc acts exactly like fc.
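
A minimal sketch of the usage (the select argument name and the data types
below are assumptions for illustration, not a verified signature):

import paddle.v2 as paddle

feature = paddle.layer.data(
    name='feature', type=paddle.data_type.dense_vector(128))
# The selection input indicates which output columns to compute.
sel = paddle.layer.data(
    name='select', type=paddle.data_type.sparse_binary_vector(10))
out = paddle.layer.selective_fc(
    input=feature, select=sel, size=10, act=paddle.activation.Sigmoid())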

Different from img_conv, conv_op is an Operator, which can be used
in mixed. conv_op takes two inputs to perform convolution: the first
input is the image and the second is the filter kernel. It only
supports GPU mode.
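
A hedged sketch of that pattern; the operator name conv_operator and its
argument names follow common PaddlePaddle usage but are assumptions here:

import paddle.v2 as paddle

image = paddle.layer.data(
    name='image', type=paddle.data_type.dense_vector(1 * 32 * 32))
filt = paddle.layer.data(
    name='filter', type=paddle.data_type.dense_vector(3 * 3 * 64))

# Two inputs: the image and the filter kernel (GPU mode only).
op = paddle.layer.conv_operator(
    img=image, filter=filt, filter_size=3, num_filters=64, num_channels=1)
out = paddle.layer.mixed(input=op)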

Convolution transpose (deconv) layer for images. Paddle currently supports
both square and non-square input.

For details of the convolution transpose layer, please refer to the
following explanation and the references therein:
<http://datascience.stackexchange.com/questions/6107/what-are-deconvolutional-layers/>.
num_channel means the input image's channel number. It may be 1 or 3 when the
input is raw image pixels (mono or RGB), or it may be the previous layer's
num_filters * num_group.

There are several groups of filters in the PaddlePaddle implementation.
Each group processes some of the input channels. For example, if
num_channel = 256, group = 4, and num_filter = 32, PaddlePaddle will create
32 * 4 = 128 filters to process the input. The channels will be split into
4 pieces: the first 256 / 4 = 64 channels will be processed by the first 32
filters, and the remaining channels will be processed by the remaining
groups of filters.
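
The bookkeeping in this example can be verified with a few lines of plain
Python:

num_channel, group, num_filter = 256, 4, 32

total_filters = num_filter * group          # 32 * 4 = 128 filters in total
channels_per_group = num_channel // group   # 256 / 4 = 64 channels per group
print(total_filters, channels_per_group)    # 128 64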

This layer simply reorganizes the input sequence, combining context_len
elements into one context starting from context_start. context_start is set
to -(context_len - 1) / 2 by default. If a context position falls outside
the sequence length, the padding is filled with zeros when
padding_attr = False; otherwise the padding is trainable.
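
As a hedged sketch, a context projection inside a mixed layer might be
configured like this (sizes are illustrative):

import paddle.v2 as paddle

emb = paddle.layer.data(
    name='word_emb', type=paddle.data_type.dense_vector_sequence(128))

# context_len=3 combines each step with its neighbors; context_start
# defaults to -(3 - 1) / 2 = -1, i.e. one step to the left.
context = paddle.layer.mixed(
    size=128 * 3,
    input=paddle.layer.context_projection(input=emb, context_len=3))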

padding_attr (bool|paddle.v2.attr.ParameterAttribute) – Padding Parameter Attribute. If False, the padding
is always zero. Otherwise the padding is learnable, and
its parameter attribute is set by this argument.

batch_norm_type (None|string, None or "batch_norm" or "cudnn_batch_norm") – We have batch_norm and cudnn_batch_norm. batch_norm
supports both CPU and GPU. cudnn_batch_norm requires
cuDNN version v4 or later (>= v4), and it is faster and
needs less memory than batch_norm. By default (None),
PaddlePaddle will automatically select cudnn_batch_norm
for GPU and batch_norm for CPU; otherwise, the batch
norm type is selected based on the specified value. If
you use cudnn_batch_norm, we suggest using the latest
cuDNN version, such as v5.1.

num_channels (int) – Number of image channels, or the previous layer's
number of filters. If None, it is automatically
inferred from the layer's input.

bias_attr (paddle.v2.attr.ParameterAttribute) – \(\beta\), best initialized to zero. So
initial_std=0, initial_mean=0 is best practice.

param_attr (paddle.v2.attr.ParameterAttribute) – \(\gamma\), best initialized to one. So
initial_std=0, initial_mean=1 is best practice.

layer_attr (paddle.v2.attr.ExtraAttribute) – Extra Layer Attribute.

use_global_stats (bool|None) – Whether to use moving mean/variance statistics
during the testing period. If None or True,
moving mean/variance statistics are used during
testing. If False, the mean and variance of the
current batch of test data are used instead.

moving_average_fraction (float) – Factor used in the moving average
computation, referred to as factor:
\(runningMean = newMean*(1-factor)
+ runningMean*factor\)
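
Putting these parameters together, a typical call might look like the
following sketch (layer shapes are illustrative):

import paddle.v2 as paddle

img = paddle.layer.data(
    name='image', type=paddle.data_type.dense_vector(3 * 32 * 32))
conv = paddle.layer.img_conv(
    input=img, filter_size=3, num_filters=64, num_channels=3,
    act=paddle.activation.Linear())

# batch_norm_type=None lets PaddlePaddle pick cudnn_batch_norm on GPU
# and batch_norm on CPU.
bn = paddle.layer.batch_norm(
    input=conv, act=paddle.activation.Relu(),
    batch_norm_type=None, moving_average_fraction=0.9)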

Normalize a layer's output. This layer is necessary for SSD.
It applies normalization across the channels of each sample of a conv
layer's output and scales the output by a group of trainable factors
whose dimension equals the number of channels.
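
A minimal sketch, assuming the layer is exposed as cross_channel_norm in
the v2 API and that conv below is a convolution output:

import paddle.v2 as paddle

img = paddle.layer.data(
    name='image', type=paddle.data_type.dense_vector(3 * 300 * 300))
conv = paddle.layer.img_conv(
    input=img, filter_size=3, num_filters=512, num_channels=3,
    act=paddle.activation.Relu())

# One trainable scale factor per channel (512 here) is applied after
# normalizing across channels.
norm = paddle.layer.cross_channel_norm(input=conv)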

NOTE: In PaddlePaddle’s implementation, the multiplications
\(W_{xi}x_{t}\) , \(W_{xf}x_{t}\),
\(W_{xc}x_t\), \(W_{xo}x_{t}\) are not done in the lstmemory layer,
so an additional mixed with full_matrix_projection or a fc must
be included in the configuration file to complete the input-to-hidden
mappings before lstmemory is called.
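
A hedged sketch of that pattern: the mixed layer computes the four
input-to-hidden projections at once (hence size = hidden_dim * 4), and
lstmemory consumes the result:

import paddle.v2 as paddle

hidden_dim = 128
data = paddle.layer.data(
    name='word_vec', type=paddle.data_type.dense_vector_sequence(256))

# Input-to-hidden mapping: lstmemory expects W_x * x_t for all four
# gates to be precomputed, so the mixed layer is 4x the hidden size.
ih = paddle.layer.mixed(
    size=hidden_dim * 4,
    input=[paddle.layer.full_matrix_projection(input=data)])
lstm = paddle.layer.lstmemory(input=ih)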

NOTE: This is a low-level user interface. You can use network.simple_lstm
to configure a simple plain LSTM layer.

Please refer to Generating Sequences With Recurrent Neural Networks for
more details about LSTM.

1. update gate \(z\): defines how much of the previous memory to
keep around, i.e., how much the unit updates its activations. The update
gate is computed by:

\[z_t = \sigma(W_{z}x_{t} + U_{z}h_{t-1} + b_z)\]

2. reset gate \(r\): determines how to combine the new input with the
previous memory. The reset gate is computed similarly to the update gate:

\[r_t = \sigma(W_{r}x_{t} + U_{r}h_{t-1} + b_r)\]

3. The candidate activation \(\tilde{h_t}\) is computed similarly to
that of the traditional recurrent unit:

\[{\tilde{h_t}} = \tanh(W x_{t} + U (r_{t} \odot h_{t-1}) + b)\]

4. The hidden activation \(h_t\) of the GRU at time t is a linear
interpolation between the previous activation \(h_{t-1}\) and the
candidate activation \(\tilde{h_t}\):

\[h_t = (1 - z_t) h_{t-1} + z_t {\tilde{h_t}}\]
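
The four equations can be traced with a small NumPy sketch of one GRU step
(shapes and the sigmoid helper are illustrative, not PaddlePaddle code):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, Wz, Uz, bz, Wr, Ur, br, W, U, b):
    z_t = sigmoid(np.dot(Wz, x_t) + np.dot(Uz, h_prev) + bz)   # update gate
    r_t = sigmoid(np.dot(Wr, x_t) + np.dot(Ur, h_prev) + br)   # reset gate
    h_tilde = np.tanh(np.dot(W, x_t)
                      + np.dot(U, r_t * h_prev) + b)           # candidate
    return (1 - z_t) * h_prev + z_t * h_tilde                  # interpolation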

NOTE: In PaddlePaddle’s implementation, the multiplication operations
\(W_{r}x_{t}\), \(W_{z}x_{t}\) and \(W x_t\) are not computed in
gate_recurrent layer. Consequently, an additional mixed with
full_matrix_projection or a fc must be included before grumemory
is called.
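
Analogously to the LSTM case, a hedged sketch (the 3x size covers the
update, reset, and candidate projections):

import paddle.v2 as paddle

hidden_dim = 128
data = paddle.layer.data(
    name='word_vec', type=paddle.data_type.dense_vector_sequence(256))

# grumemory expects W_z*x_t, W_r*x_t and W*x_t to be precomputed,
# hence the 3x hidden size of the mixed layer.
ih = paddle.layer.mixed(
    size=hidden_dim * 3,
    input=[paddle.layer.full_matrix_projection(input=data)])
gru = paddle.layer.grumemory(input=ih)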

Recurrent layer group is an extremely flexible recurrent unit in
PaddlePaddle. As long as the user defines the calculation done within a
time step, PaddlePaddle will iterate such a recurrent calculation over the
sequence input. This is extremely useful for attention-based models and
Neural Turing Machine-like models.

step (callable) – The recurrent one-time-step function. The input of this
function is the input of the group. The return value of
this function will be the recurrent group's return value.

The recurrent group scatters a sequence into time steps. For each time
step, it invokes the step function and collects that step's result. It
then gathers the outputs of all time steps into the layer group's output.
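
A hedged sketch of the scatter/step/gather flow; the memory/fc pairing
inside the step follows common PaddlePaddle usage, and sizes are
illustrative:

import paddle.v2 as paddle

data = paddle.layer.data(
    name='seq', type=paddle.data_type.dense_vector_sequence(128))

def step(input):
    # Invoked once per time step; `input` is one scattered element.
    prev = paddle.layer.memory(name='state', size=64)
    state = paddle.layer.fc(
        input=[input, prev], size=64, name='state',
        act=paddle.activation.Tanh())
    return state

out = paddle.layer.recurrent_group(name='rnn', step=step, input=data)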

name (basestring) – recurrent_group’s name.

input (LayerOutput|StaticInput|SubsequenceInput|list|tuple) –

Input links array.

LayerOutput will be scattered into time steps.
SubsequenceInput will be scattered into sequence steps.
StaticInput will be imported into each time step and does not change
through time. It is a mechanism to access a layer outside the step function.

reverse (bool) – If reverse is set to True, the recurrent unit will process
the input sequence in reverse order.

targetInlink (LayerOutput|SubsequenceInput) –

The input layer which shares info with the layer group's output.

The input param specifies multiple input layers. For
SubsequenceInput inputs, the config should assign one input
layer that shares info (the number of sentences and the number
of words in each sentence) with all the layer group's outputs.
targetInlink should be one of the layer group's inputs.

is_generating – If generating, none of the input types should be LayerOutput;
otherwise, for training or testing, one of the input types
must be LayerOutput.

name (basestring) – Name of the recurrent unit that generates sequences.

step (callable) –

A callable function that defines the calculation in a time
step, and it is applied to sequences of arbitrary length by
sharing the same set of weights.

You can refer to the first parameter of recurrent_group, or
demo/seqToseq/seqToseq_net.py for more details.

input (list) – Input data for the recurrent unit

bos_id (int) – Index of the start symbol in the dictionary. The start symbol
is a special token for NLP task, which indicates the
beginning of a sequence. In the generation task, the start
symbol is essential, since it is used to initialize the RNN
internal state.

eos_id (int) – Index of the end symbol in the dictionary. The end symbol is
a special token for NLP task, which indicates the end of a
sequence. The generation process will stop once the end
symbol is generated, or a pre-defined max iteration number
is exceeded.

max_length (int) – Max generated sequence length.

beam_size (int) – Beam search for sequence generation is an iterative search
algorithm. To maintain tractability, each iteration stores
only a predetermined number of the most promising next
words, called the beam_size. The greater the beam size,
the fewer candidate words are pruned.

num_results_per_sample (int) – Number of generated results per input
sequence. This number must always be less
than the beam size.
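
A hedged sketch wiring these parameters together; GeneratedInput supplies
the embedding of the previously generated word at each step, the vocabulary
size and embedding parameter name are illustrative assumptions, and a real
decoder step would also carry recurrent state:

import paddle.v2 as paddle

dict_size = 10000

def decoder_step(current_word_emb):
    # Trivial step for illustration: predict the next word from the
    # embedding of the previously generated one.
    return paddle.layer.fc(
        input=current_word_emb, size=dict_size,
        act=paddle.activation.Softmax())

gen = paddle.layer.beam_search(
    name='decoder',
    step=decoder_step,
    input=[paddle.layer.GeneratedInput(
        size=dict_size, embedding_name='word_emb', embedding_size=128)],
    bos_id=0, eos_id=1,
    beam_size=5, max_length=50, num_results_per_sample=3)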

Get a layer's output by name. In PaddlePaddle, a layer may compute multiple
outputs but returns only one by default. If you want to use an output other
than the default one, use get_output first to fetch the desired output from
the input layer.
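
For example, lstmemory also exposes its cell state; a hedged sketch of
fetching it (the arg_name value 'state' follows common PaddlePaddle usage):

import paddle.v2 as paddle

data = paddle.layer.data(
    name='x', type=paddle.data_type.dense_vector_sequence(512))
ih = paddle.layer.mixed(
    size=512, input=[paddle.layer.full_matrix_projection(input=data)])
lstm = paddle.layer.lstmemory(input=ih)  # hidden size is 512 / 4 = 128

# Fetch the cell state instead of the default hidden output.
cell = paddle.layer.get_output(input=lstm, arg_name='state')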

If stride > 0, this layer slides a window whose size is determined by
stride, and returns the last value of the window as the output. Thus, a
long sequence will be shortened. Note that for sequences with
sub-sequences, the default value of stride is -1.

If stride > 0, this layer slides a window whose size is determined by
stride, and returns the first value of the window as the output. Thus, a
long sequence will be shortened. Note that for sequences with
sub-sequences, the default value of stride is -1.
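
For example (a sketch; the stride value is illustrative):

import paddle.v2 as paddle

seq = paddle.layer.data(
    name='seq', type=paddle.data_type.dense_vector_sequence(64))

# With stride=5, each window of 5 consecutive steps contributes one value,
# so a length-20 sequence is shortened to 4 outputs.
last = paddle.layer.last_seq(input=seq, stride=5)
first = paddle.layer.first_seq(input=seq, stride=5)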

bias_attr (paddle.v2.attr.ParameterAttribute or None or bool) – The Bias Attribute. If no bias is needed, pass
False or anything that is not of type paddle.v2.attr.ParameterAttribute.
None will give a default bias.

The expand method is the same as in ExpandConvLayer, but the transposed
value is saved. After expanding, output.sequenceStartPositions will store
the timeline. The number of time steps is outputH * outputW and the
dimension of each time step is block_y * block_x * num_channels. This layer
can be used after a convolutional neural network and before a recurrent
neural network.
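
A hedged sketch of block_expand between a convolution and a recurrent layer
(block and stride values are illustrative):

import paddle.v2 as paddle

img = paddle.layer.data(
    name='image', type=paddle.data_type.dense_vector(1 * 48 * 160))
conv = paddle.layer.img_conv(
    input=img, filter_size=3, num_filters=16, num_channels=1,
    act=paddle.activation.Relu())

# Each time step carries block_y * block_x * num_channels values; the
# resulting sequence can feed a recurrent layer (e.g. for OCR).
seq = paddle.layer.block_expand(
    input=conv, num_channels=16,
    block_x=1, block_y=3, stride_x=1, stride_y=1)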

A layer for reshaping the sequence. Assume the input sequence has T
instances, the dimension of each instance is M, and the given reshape_size
is N; then the output sequence has T*M/N instances, each of dimension N.

Note that T*M/N must be an integer.

The example usage is:

reshape = seq_reshape(input=layer, reshape_size=4)

Parameters:

input (paddle.v2.config_base.Layer) – Input layer.

reshape_size (int) – The dimension of each instance in the reshaped sequence.

name (basestring) – Layer name.

act (paddle.v2.Activation.Base) – Activation type.

layer_attr (paddle.v2.attr.ExtraAttribute) – extra layer attributes.

bias_attr (paddle.v2.attr.ParameterAttribute or None or bool) – The Bias Attribute. If no bias is needed, pass
False or anything that is not of type paddle.v2.attr.ParameterAttribute.
None will give a default bias.

input (paddle.v2.config_base.Layer) – Samples of the same query should be loaded as a sequence.

score – The second input, the score of each sample.

NDCG_num (int) – The size of NDCG (Normalized Discounted Cumulative Gain),
e.g., 5 for NDCG@5. It must be less than or equal to the
minimum size of the lists.

max_sort_size (int) – The size of partial sorting used when calculating the
gradient. If max_sort_size = -1, then for each list,
the algorithm will sort the entire list to get the
gradient. In other cases, max_sort_size must be greater
than or equal to NDCG_num, and if max_sort_size is
greater than the size of a list, the algorithm will
sort the entire list to get the gradient.

name (None|basestring) – The name of this layer. It is optional.

A layer for calculating the decoding sequence of a sequential conditional
random field model. The decoding sequence is stored in output.ids.
If a second input is provided, it is treated as the ground-truth label, and
this layer will also calculate the error: output.value[i] is 1 for an
incorrect decoding and 0 for a correct one.
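
A hedged sketch with an optional ground-truth label input (sizes are
illustrative):

import paddle.v2 as paddle

num_tags = 10
feats = paddle.layer.data(
    name='feats', type=paddle.data_type.dense_vector_sequence(128))
emission = paddle.layer.fc(
    input=feats, size=num_tags, act=paddle.activation.Linear())
label = paddle.layer.data(
    name='label', type=paddle.data_type.integer_value_sequence(num_tags))

# Without `label`, output.ids holds the decoded sequence; with it,
# output.value[i] flags each decoding as incorrect (1) or correct (0).
decoded = paddle.layer.crf_decoding(
    input=emission, size=num_tags, label=label)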

Considering the 'blank' label needed by CTC, you need to use
(num_classes + 1) as the input size, where num_classes is the category
number and 'blank' is the last category index. So the size of the 'input'
layer (e.g., an fc with softmax activation) should be num_classes + 1, and
the size of ctc should also be num_classes + 1.
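
A hedged sketch of the sizing rule (the class count is illustrative):

import paddle.v2 as paddle

num_classes = 26  # 'blank' takes the last index, num_classes

feats = paddle.layer.data(
    name='feats', type=paddle.data_type.dense_vector_sequence(128))
label = paddle.layer.data(
    name='label',
    type=paddle.data_type.integer_value_sequence(num_classes + 1))

# Both the input layer and the ctc layer are sized num_classes + 1.
probs = paddle.layer.fc(
    input=feats, size=num_classes + 1, act=paddle.activation.Softmax())
cost = paddle.layer.ctc(input=probs, label=label, size=num_classes + 1)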

A layer integrating the open-source warp-ctc library
(<https://github.com/baidu-research/warp-ctc>), which is used in
Deep Speech 2: End-to-End Speech Recognition in English and Mandarin
(<https://arxiv.org/pdf/1512.02595v1.pdf>), to compute Connectionist
Temporal Classification (CTC) loss.

Let num_classes represent the category number. Considering the ‘blank’
label needed by CTC, you need to use (num_classes + 1) as the input
size. Thus, the size of both warp_ctc and ‘input’ layer should
be set to num_classes + 1.

You can set 'blank' to any value in the range [0, num_classes], which
should be consistent with the one used in your labels.

Since a native 'softmax' activation is integrated into the warp-ctc
library, a 'linear' activation is expected in the 'input' layer instead.
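
A hedged sketch mirroring the ctc example, but with a linear activation
since the softmax lives inside warp-ctc (the blank index choice is
illustrative):

import paddle.v2 as paddle

num_classes = 26

feats = paddle.layer.data(
    name='feats', type=paddle.data_type.dense_vector_sequence(128))
label = paddle.layer.data(
    name='label',
    type=paddle.data_type.integer_value_sequence(num_classes + 1))

# Linear activation: warp-ctc applies its own softmax internally.
logits = paddle.layer.fc(
    input=feats, size=num_classes + 1, act=paddle.activation.Linear())
cost = paddle.layer.warp_ctc(
    input=logits, label=label, size=num_classes + 1, blank=num_classes)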

neg_distribution (list|tuple|collections.Sequence|None) – The distribution for generating the random negative labels.
A uniform distribution will be used if not provided.
If not None, its length must be equal to num_classes.

Organize the classes into a binary tree. At each node, a sigmoid function
is used to calculate the probability of belonging to the right branch.
This idea is from “F. Morin, Y. Bengio (AISTATS 05):
Hierarchical Probabilistic Neural Network Language Model.”

The example usage is:

cost = hsigmoid(input=[layer1, layer2], label=data)

Parameters:

input (paddle.v2.config_base.Layer|list|tuple) – Input layers. It could be a paddle.v2.config_base.Layer
or a list/tuple of paddle.v2.config_base.Layer.