This function writes the given input variable to the position indicated
by the array index in an output LOD_TENSOR_ARRAY. If the output
LOD_TENSOR_ARRAY is not given (None), a new one will be created and
returned.

Parameters:

x (Variable|list) – The input tensor to be written to the output LOD_TENSOR_ARRAY.

i (Variable|list) – The index of the output LOD_TENSOR_ARRAY, pointing to
the position to which the input tensor will be
written.

array (Variable|list) – The output LOD_TENSOR_ARRAY to which the input
tensor will be written. If this parameter is
None, a new LOD_TENSOR_ARRAY will be created and
returned.
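The write-or-create behavior can be illustrated with a plain-Python sketch that models the LOD_TENSOR_ARRAY as a list; this is only an illustration of the semantics, not the fluid API:

```python
def array_write(x, i, array=None):
    # Write x at position i of a tensor array (modeled as a plain list).
    # When array is None a new one is created; it grows as needed.
    if array is None:
        array = []
    while len(array) <= i:
        array.append(None)
    array[i] = x
    return array

arr = array_write("t0", 0)       # a new array is created
arr = array_write("t2", 2, arr)  # written at index 2; the gap is padded
```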

The dynamic RNN can process a batch of sequence data, where the length of
each sample sequence can be different. This API automatically processes
them in batch.

The input lod must be set. Please reference lod_tensor

>>> import paddle.fluid as fluid
>>> data = fluid.layers.data(name='sentence', dtype='int64', lod_level=1)
>>> embedding = fluid.layers.embedding(input=data, size=[65535, 32],
>>>                                    is_sparse=True)
>>>
>>> drnn = fluid.layers.DynamicRNN()
>>> with drnn.block():
>>>     word = drnn.step_input(embedding)
>>>     prev = drnn.memory(shape=[200])
>>>     hidden = fluid.layers.fc(input=[word, prev], size=200, act='relu')
>>>     drnn.update_memory(prev, hidden)  # set prev to hidden
>>>     drnn.output(hidden)
>>>
>>> # last is the last time step of rnn. It is the encoding result.
>>> last = fluid.layers.sequence_last_step(drnn())

The dynamic RNN will unfold sequence into timesteps. Users need to define
how to process each time step during the with block.

The memory is used for staging data across time steps. The initial value of
memory can be zero or another variable.

The dynamic RNN can mark multiple variables as its output. Use drnn() to
get the output sequence.

If the init is not None, memory will be initialized by
this variable. The need_reorder flag is used to reorder the memory to
match the input variable. It should be set to True when the initialized
memory depends on the input sample.

Input(X) is a batch of sequences. Input(RankTable) stores new orders of the
input sequence batch. The reorder_lod_tensor_by_rank operator reorders the
Input(X) according to the information provided by Input(RankTable).

For example:

If the indices stored in the Input(RankTable) are [3, 0, 2, 1], the
Input(X) will be reordered so that the fourth sequence in Input(X) becomes
the first one, followed by the original first, third, and second ones.

If the LoD information of Input(X) is empty, this means Input(X) is not sequence
data. This is also identical to a batch of sequences where each sequence has a
fixed length 1. In this case, the reorder_lod_tensor_by_rank operator reorders
each slice of Input(X) along the first axis according to Input(RankTable).

That is:
X = [Slice0, Slice1, Slice2, Slice3] and its LoD information is empty. The
indices in RankTable are [3, 0, 2, 1].
Out = [Slice3, Slice0, Slice2, Slice1] with no LoD information appended.
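For the empty-LoD case, the reordering amounts to indexing the slices along the first axis by the rank indices; a plain-Python sketch:

```python
def reorder_by_rank(slices, rank):
    # Reorder slices along the first axis according to the RankTable indices.
    return [slices[i] for i in rank]

out = reorder_by_rank(['Slice0', 'Slice1', 'Slice2', 'Slice3'], [3, 0, 2, 1])
# out == ['Slice3', 'Slice0', 'Slice2', 'Slice1']
```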

NOTE: This operator sorts Input(X) according to a given LoDRankTable which does
not need to be calculated according to Input(X). It can be calculated according
to another different sequence, and then this operator sorts Input(X) according
to the given LoDRankTable.

Parameters:

x – (LoDTensor), the input lod tensor to be reordered according to Input(RankTable).

rank_table – (LoDRankTable), the rank table according to which Input(X) is reordered.

ParallelDo is used to represent multi-thread data parallel processing.

Its vanilla implementation can be shown as the following (\(|\) means
single thread and \(||||\) means multiple threads)

In the forward pass
| Split input onto different devices
| Copy parameter onto different devices
|||| Compute forward pass in parallel
| Merge output from different devices
In the backward pass
| Split output@grad onto different devices
|||| Compute backward pass in parallel
| Accumulate param@grad from different devices to the first device
| Merge input@grad from different devices
| Copy param@grad to the place of parallel_do_op

This function takes in the input and, based on whether the data has
to be returned as a minibatch, creates a global variable using the
helper functions. The global variable can be accessed by all the
following operators in the graph.

All the input variables of this function are passed in as local variables
to the LayerHelper constructor.

Parameters:

name (str) – The name/alias of the function

shape (list) – List of integers declaring the shape.

append_batch_size (bool) – Whether or not to append the data as a batch.

dtype (int|float) – The data type: float32, float16, int, etc.

type (VarType) – The output type. By default it is LOD_TENSOR.

lod_level (int) – The LoD Level. 0 means the input data is not a sequence.

This layer takes a list of files to read from and returns a Reader Variable.
Via the Reader Variable, we can get data from the given files. All files must
have name suffixes indicating their formats, e.g., ‘*.recordio’.

Parameters:

filenames (list) – The list of file names.

shapes (list) – List of tuples declaring the data shapes.

lod_levels (list) – List of ints declaring the lod_level of each input.

dtypes (list) – List of strings declaring the data types.

thread_num (int|None) – The number of threads used to read the files.
Default: min(len(filenames), cpu_number).

is_test (bool|None) – Whether open_files is used for testing. If it
is used for testing, the order of the generated data is the same as
the file order. Otherwise, the order of data between epochs is not
guaranteed to be the same. Default: False.

Returns:

A Reader Variable via which we can get file data.

Return type:

Variable

Examples

reader = fluid.layers.io.open_files(filenames=['./data1.recordio',
                                               './data2.recordio'],
                                    shapes=[(3, 224, 224), (1,)],
                                    lod_levels=[0, 0],
                                    dtypes=['float32', 'int64'])

# Via the reader, we can use 'read_file' layer to get data:
image, label = fluid.layers.io.read_file(reader)

This layer is a reader decorator. It takes a reader and adds
‘batching’ decoration on it. When reading with the result
decorated reader, output data will be automatically organized
to the form of batches.

Parameters:

reader (Variable) – The reader to be decorated with ‘batching’.

batch_size (int) – The batch size.

Returns:

The reader which has been decorated with ‘batching’.

Return type:

Variable

Examples

raw_reader = fluid.layers.io.open_files(filenames=['./data1.recordio',
                                                   './data2.recordio'],
                                        shapes=[(3, 224, 224), (1,)],
                                        lod_levels=[0, 0],
                                        dtypes=['float32', 'int64'],
                                        thread_num=2,
                                        buffer_size=2)
batch_reader = fluid.layers.batch(reader=raw_reader, batch_size=5)

# If we read data with the raw_reader:
#     data = fluid.layers.read_file(raw_reader)
# we can only get data instance by instance.
#
# However, if we read data with the batch_reader:
#     data = fluid.layers.read_file(batch_reader)
# each 5 adjacent instances will be automatically combined together
# to become a batch. So what we get ('data') is a batch of data instead
# of a single instance.

This layer returns a Reader Variable.
Instead of opening a file and reading data from it, this
Reader Variable generates float uniform random data by itself.
It can be used as a dummy reader to test a network without
opening a real file.

Parameters:

low (float) – The lower bound of data’s uniform distribution.

high (float) – The upper bound of data’s uniform distribution.

shapes (list) – List of tuples declaring the data shapes.

lod_levels (list) – List of ints declaring the lod_level of each input.

for_parallel (Bool) – Set it as True if you are going to run
subsequent operators in parallel.

Returns:

A Reader Variable from which we can get random data.

Return type:

Variable

Examples

reader = fluid.layers.random_data_generator(low=0.0,
                                            high=1.0,
                                            shapes=[[3, 224, 224], [1]],
                                            lod_levels=[0, 0])

# Via the reader, we can use 'read_file' layer to get data:
image, label = fluid.layers.read_file(reader)

This layer returns a Reader Variable.
The Reader provides decorate_paddle_reader() and
decorate_tensor_provider() to set a Python generator as the data
source in Python side. When Executor::Run() is invoked in C++
side, the data from the generator would be read automatically. Unlike
DataFeeder.feed(), the data reading process and
Executor::Run() process can run in parallel using
py_reader. The start() method of the Reader should be
called when each pass begins, while the reset() method should be
called when the pass ends and fluid.core.EOFException raises.
Note that Program.clone() method cannot clone py_reader.

This function creates a fully connected layer in the network. It can take
multiple tensors as its inputs. It creates a variable called weights for
each input tensor, which represents a fully connected weight matrix from
each input unit to each output unit. The fully connected layer multiplies
each input tensor with its corresponding weight to produce an output Tensor.
If multiple input tensors are given, the results of the multiplications
will be summed up. If bias_attr is not None, a bias variable will be created
and added to the output. Finally, if activation is not None, it will be applied
to the output as well.

This process can be formulated as follows:

\[Out = Act({\sum_{i=0}^{N-1}X_iW_i + b})\]

In the above equation:

\(N\): Number of input tensors.

\(X_i\): The input tensor.

\(W\): The weights created by this layer.

\(b\): The bias parameter created by this layer (if needed).

\(Act\): The activation function.

\(Out\): The output tensor.
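The equation can be sketched as a toy plain-Python implementation, with lists of lists standing in for tensors; this illustrates the math only, not the fluid layer:

```python
def fc_forward(xs, ws, b, act=lambda v: v):
    # xs: list of input matrices X_i; ws: matching weight matrices W_i;
    # b: bias row. Computes act(sum_i X_i W_i + b) element by element.
    def matmul(X, W):
        return [[sum(x * w for x, w in zip(row, col)) for col in zip(*W)]
                for row in X]
    rows, cols = len(xs[0]), len(ws[0][0])
    out = [[0.0] * cols for _ in range(rows)]
    for X, W in zip(xs, ws):
        for r, prow in enumerate(matmul(X, W)):
            out[r] = [o + p for o, p in zip(out[r], prow)]
    return [[act(o + bj) for o, bj in zip(orow, b)] for orow in out]
```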

Parameters:

input (Variable|list of Variable) – The input tensor(s) of this layer, and the dimension of
the input tensor(s) is at least 2.

size (int) – The number of output units in this layer.

num_flatten_dims (int, default 1) – The fc layer can accept an input tensor with more than
two dimensions. If this happens, the multidimensional tensor will first be flattened
into a 2-dimensional matrix. The parameter num_flatten_dims determines how the input
tensor is flattened: the first num_flatten_dims (inclusive, index starts from 1)
dimensions will be flattened to form the first dimension of the final matrix (height of
the matrix), and the rest rank(X) - num_flatten_dims dimensions are flattened to
form the second dimension of the final matrix (width of the matrix). For example, suppose
X is a 5-dimensional tensor with shape [2, 3, 4, 5, 6], and num_flatten_dims = 3.
Then, the flattened matrix will have a shape [2 x 3 x 4, 5 x 6] = [24, 30].

param_attr (ParamAttr|list of ParamAttr, default None) – The parameter attribute for learnable
parameters/weights of this layer.

bias_attr (ParamAttr|list of ParamAttr, default None) – The parameter attribute for the bias
of this layer. If it is set to False, no bias will be added to the output units.
If it is set to None, the bias is initialized zero. Default: None.

act (str, default None) – Activation to be applied to the output of this layer.
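The num_flatten_dims behavior described above can be sketched by computing the flattened 2-D shape in plain Python:

```python
from functools import reduce
import operator

def flattened_shape(shape, num_flatten_dims):
    # height: product of the first num_flatten_dims dims;
    # width: product of the remaining dims
    height = reduce(operator.mul, shape[:num_flatten_dims], 1)
    width = reduce(operator.mul, shape[num_flatten_dims:], 1)
    return [height, width]

flattened_shape([2, 3, 4, 5, 6], 3)  # -> [24, 30]
```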

padding_idx (int|long|None) – If None, it has no effect on the lookup.
Otherwise the given padding_idx indicates that the output is padded
with zeros whenever the lookup encounters it in the input. If
\(padding\_idx < 0\), the index actually used in the lookup is
\(size[0] + padding\_idx\).
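A plain-Python sketch of this padding behavior, modeling the lookup table as a list of rows (negative padding indices resolved as in Python indexing; an illustration, not the fluid API):

```python
def embedding_lookup(table, ids, padding_idx=None):
    # table: size[0] x dim list of rows; zeros are returned for padding ids.
    if padding_idx is not None and padding_idx < 0:
        padding_idx = len(table) + padding_idx
    dim = len(table[0])
    return [[0.0] * dim if i == padding_idx else list(table[i]) for i in ids]

# padding_idx=-1 resolves to the last row, which is zeroed in the output:
out = embedding_lookup([[1.0], [2.0], [3.0]], [0, 1, 2], padding_idx=-1)
# out == [[1.0], [2.0], [0.0]]
```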

W terms denote weight matrices (e.g. \(W_{xi}\) is the matrix of weights from the input gate to the input), and \(W_{ic}, W_{fc}, W_{oc}\) are diagonal weight matrices for peephole connections. In our implementation, we use vectors to represent these diagonal weight matrices.

The b terms denote bias vectors (\(b_i\) is the input gate bias vector).

\(\sigma\) is the non-linear activation, such as the logistic sigmoid function.

\(i, f, o\) and \(c\) are the input gate, forget gate, output gate, and cell activation vectors, respectively, all of which have the same size as the cell output activation vector \(h\).

\(\odot\) is the element-wise product of the vectors.

\(act_g\) and \(act_h\) are the cell input and cell output activation functions; tanh is usually used for them.

\(\tilde{c_t}\) is also called the candidate hidden state, which is computed based on the current input and the previous hidden state.

Note that these \(W_{xi}x_{t}, W_{xf}x_{t}, W_{xc}x_{t}, W_{xo}x_{t}\) operations on the input \(x_{t}\) are NOT included in this operator. Users can choose to use a fully-connected operator before the LSTM operator.

Parameters:

input (Variable) – (LoDTensor) the first input is a LoDTensor, which supports variable-time length input sequences. The underlying tensor in this LoDTensor is a matrix with shape (T X 4D), where T is the total time steps in this mini-batch and D is the hidden size.

size (int) – 4 * hidden size.

h_0 (Variable) – The initial hidden state is an optional input, default is zero.
This is a tensor with shape (N x D), where N is the
batch size and D is the hidden size.

c_0 (Variable) – The initial cell state is an optional input, default is zero.
This is a tensor with shape (N x D), where N is the
batch size. h_0 and c_0 can be NULL but only at the same time.

param_attr (ParamAttr|None) –

The parameter attribute for the learnable
hidden-hidden weights.

Weights = {\(W_{ch}, W_{ih}, W_{fh}, W_{oh}\)}

The shape is (D x 4D), where D is the hidden
size.

bias_attr (ParamAttr|None) –

The bias attribute for the learnable bias
weights, which contains two parts, input-hidden
bias weights and peephole connections weights if
setting use_peepholes to True.

LSTMP (LSTM with recurrent projection) layer has a separate projection
layer after the LSTM layer, projecting the original hidden state to a
lower-dimensional one. It was proposed to reduce the number of total
parameters and the computational complexity of the LSTM,
especially for the case where the size of the output units is relatively
large (https://research.google.com/pubs/archive/43905.pdf).

Note that these \(W_{xi}x_{t}, W_{xf}x_{t}, W_{xc}x_{t}, W_{xo}x_{t}\)
operations on the input \(x_{t}\) are NOT included in this operator.
Users can choose to use fully-connected layer before LSTMP layer.

Parameters:

input (Variable) – The input of dynamic_lstmp layer, which supports
variable-time length input sequence. The underlying
tensor in this Variable is a matrix with shape
(T X 4D), where T is the total time steps in this
mini-batch, D is the hidden size.

size (int) – 4 * hidden size.

proj_size (int) – The size of projection output.

param_attr (ParamAttr|None) –

The parameter attribute for the learnable
hidden-hidden weight and projection weight.

Hidden-hidden weight = {\(W_{ch}, W_{ih}, W_{fh}, W_{oh}\)}.

The shape of hidden-hidden weight is (P x 4D),
where P is the projection size and D the hidden
size.

Projection weight = {\(W_{rh}\)}.

The shape of projection weight is (D x P).

bias_attr (ParamAttr|None) –

The bias attribute for the learnable bias
weights, which contains two parts, input-hidden
bias weights and peephole connections weights if
setting use_peepholes to True.

name (str|None) – A name for this layer(optional). If set None, the layer
will be named automatically.

Returns:

A tuple of two output variables: the projection of the hidden state and the cell state of LSTMP. The shape of the projection is (T x P) and the shape of the cell state is (T x D); the LoD of both is the same as that of the input.

The \(\odot\) is the element-wise product of the vectors. \(act_g\)
is the update gate and reset gate activation function and \(sigmoid\)
is usually used for it. \(act_c\) is the activation function for
candidate hidden state and \(tanh\) is usually used for it.

Note that these \(W_{ux}x_{t}, W_{rx}x_{t}, W_{cx}x_{t}\) operations on
the input \(x_{t}\) are NOT included in this operator. Users can choose
to use a fully-connected layer before the GRU layer.

Parameters:

input (Variable) – The input of dynamic_gru layer, which supports
variable-time length input sequence. The underlying tensor in this
Variable is a matrix with shape \((T \times 3D)\), where
\(T\) is the total time steps in this mini-batch, \(D\)
is the hidden size.

size (int) – The dimension of the gru cell.

param_attr (ParamAttr|None) –

The parameter attribute for the learnable
hidden-hidden weight matrix. Note:

The shape of the weight matrix is \((D \times 3D)\), where
\(D\) is the hidden size.

All elements in the weight matrix can be divided into two parts.
The first part are weights of the update gate and reset gate with
shape \((D \times 2D)\), and the second part are weights for
candidate hidden state with shape \((D \times D)\).

h_0 (Variable) – The initial hidden state. If not set, it defaults to
zero. This is a tensor with shape (N x D), where N is the batch size
and D is the hidden size.

Returns:

The hidden state of GRU. The shape is \((T \times D)\), and sequence length is the same with the input.

The inputs of gru unit includes \(z_t\), \(h_{t-1}\). In terms
of the equation above, the \(z_t\) is split into 3 parts -
\(xu_t\), \(xr_t\) and \(xm_t\). This means that in order to
implement a full GRU unit operator for an input, a fully
connected layer has to be applied, such that \(z_t = W_{fc}x_t\).

The terms \(u_t\) and \(r_t\) represent the update and reset gates
of the GRU cell. Unlike LSTM, GRU has one fewer gate. However, there is
an intermediate candidate hidden output, which is denoted by \(m_t\).
This layer has three outputs: \(h_t\), \(dot(r_t, h_{t-1})\),
and the concatenation of \(u_t\), \(r_t\) and \(m_t\).

Linear chain CRF is a special case of CRF that is useful for sequence labeling task. Sequence labeling tasks do not assume a lot of conditional independences among inputs. The only constraint they impose is that the input and output must be linear sequences. Thus, the graph of such a CRF is a simple chain or a line, which results in the linear chain CRF.

1. Denote Input(Emission) to this operator as \(x\) here.
2. The first D values of Input(Transition) to this operator are the starting weights, denoted as \(a\) here.
3. The next D values of Input(Transition) of this operator are the ending weights, denoted as \(b\) here.
4. The remaining values of Input(Transition) are the transition weights, denoted as \(w\) here.
5. Denote Input(Label) as \(s\) here.

where \(Z\) is a normalization value so that the sum of \(P(s)\) over all possible sequences is 1, and \(x\) is the emission feature weight to the linear chain CRF.

Finally, the linear chain CRF operator outputs the logarithm of the conditional likelihood of each training sample in a mini-batch.

NOTE:

The feature function for a CRF is made up of the emission features and the transition features. The emission feature weights are NOT computed in this operator. They MUST be computed first before this operator is called.

Because this operator performs global normalization over all possible sequences internally, it expects UNSCALED emission feature weights. Please do not call this op with the emission feature being output of any nonlinear activation.

The 2nd dimension of Input(Emission) MUST be equal to the tag number.

Parameters:

input (Variable) – (LoDTensor, default LoDTensor<float>) A 2-D LoDTensor with shape [N x D], where N is the size of the mini-batch and D is the total tag number. The unscaled emission weight matrix for the linear chain CRF.

input – (Tensor, default Tensor<float>) A 2-D Tensor with shape [(D + 2) x D]. The learnable parameter for the linear_chain_crf operator. See more details in the operator’s comments

label (Variable) – (LoDTensor, default LoDTensor<int64_t>) A LoDTensor with shape [N x 1], where N is the total element number in a mini-batch. The ground truth

param_attr (ParamAttr) – The attribute of the learnable parameter.

Returns:

(Tensor, default Tensor<float>) A 2-D Tensor with shape [N x D]. The exponentials of Input(Emission). This is an intermediate computational result in forward computation, and will be reused in backward computation

output(Variable): (Tensor, default Tensor<float>) A 2-D Tensor with shape [(D + 2) x D]. The exponentials of Input(Transition). This is an intermediate computational result in forward computation, and will be reused in backward computation

output(Variable): (Tensor, default Tensor<float>) The logarithm of the conditional likelihood of each training sample in a mini-batch. This is a 2-D tensor with shape [S x 1], where S is the sequence number in a mini-batch. Note: S is equal to the sequence number in a mini-batch. The output is no longer a LoDTensor

The crf_decoding operator reads the emission feature weights and the transition feature weights learned by the linear_chain_crf operator. It implements the Viterbi algorithm which is a dynamic programming algorithm for finding the most likely sequence of hidden states, called the Viterbi path, that results in a sequence of observed tags.

The output of this operator changes according to whether Input(Label) is given:

Input(Label) is given: This happens in training. This operator is used to co-work with the chunk_eval operator. When Input(Label) is given, the crf_decoding operator returns a row vector with shape [N x 1] whose values are fixed to be 0, indicating an incorrect prediction, or 1 indicating a tag is correctly predicted. Such an output is the input to chunk_eval operator.

Input(Label) is not given: This is the standard decoding process.

The crf_decoding operator returns a row vector with shape [N x 1] whose values range from 0 to the maximum tag number - 1. Each element indicates the index of a predicted tag.
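The Viterbi recursion itself can be sketched in plain Python; this is a toy version using only starting and transition weights (the ending weights described for the CRF operator are omitted), not the crf_decoding implementation:

```python
def viterbi_decode(emission, start, trans):
    # emission: T x D emission scores; start: D starting weights;
    # trans: D x D transition weights.
    T, D = len(emission), len(emission[0])
    score = [start[j] + emission[0][j] for j in range(D)]
    backpointers = []
    for t in range(1, T):
        new_score, bp = [], []
        for j in range(D):
            best = max(range(D), key=lambda i: score[i] + trans[i][j])
            bp.append(best)
            new_score.append(score[best] + trans[best][j] + emission[t][j])
        score = new_score
        backpointers.append(bp)
    # Walk back along the best path.
    path = [max(range(D), key=lambda j: score[j])]
    for bp in reversed(backpointers):
        path.append(bp[path[-1]])
    return list(reversed(path))
```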

Parameters:

input (Variable) – (LoDTensor, default: LoDTensor<float>). A LoDTensor with shape [N x D] where N is the size of the mini-batch and D is the total tag number. This input is the unscaled emission weight matrix of the linear_chain_crf operator

param_attr (ParamAttr) – The parameter attribute for training.

label (Variable) – (LoDTensor, LoDTensor<int64_t>). The ground truth with shape [N x 1]. This input is optional. See more details in the operator’s comments

Returns:

(LoDTensor, LoDTensor<int64_t>). The decoding results. What to return changes depending on whether the Input(Label) (the ground truth) is given. See more details in the operator’s comment

The input X and Y must have the same shape, except that the 1st dimension of input Y could be just 1 (different from input X), which will be broadcasted to match the shape of input X before computing their cosine similarity.

Both the input X and Y can carry the LoD (Level of Details) information, or not. But the output only shares the LoD information with input X.

Please make sure that in this case the summation of each row of label
equals one.

One-hot cross-entropy with vectorized label:

As a special case of 2), when each row of ‘label’ has only one
non-zero element which is equal to 1, soft-label cross-entropy degenerates
to a one-hot cross-entropy with one-hot label representation.
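A plain-Python sketch of the soft-label cross-entropy and its one-hot degenerate case, with rows standing in for the [N x D] tensors (an illustration of the math, not the fluid operator):

```python
import math

def soft_label_xent(prob_row, label_row):
    # -(sum over classes) label * log(prob); the label row sums to one.
    return -sum(l * math.log(p)
                for l, p in zip(label_row, prob_row) if l > 0)

# With a one-hot label this degenerates to -log(prob of the true class):
loss = soft_label_xent([0.1, 0.7, 0.2], [0.0, 1.0, 0.0])  # == -log(0.7)
```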

Parameters:

input (Variable|list) – a 2-D tensor with shape [N x D], where N is the
batch size and D is the number of classes. This
input is a probability computed by the previous
operator, which is almost always the result of
a softmax operator.

label (Variable|list) – the ground truth which is a 2-D tensor. When
soft_label is set to False, label is a
tensor<int64> with shape [N x 1]. When
soft_label is set to True, label is a
tensor<float/double> with shape [N x D].

ChunkEvalOp computes the precision, recall, and F1-score of chunk detection,
and supports IOB, IOE, IOBES and IO (also known as plain) tagging schemes.
Here is a NER example of labeling for these tagging schemes:

There are three chunk types(named entity types) including PER(person), ORG(organization)
and LOC(LOCATION), and we can see that the labels have the form <tag type>-<chunk type>.

Since the calculations actually use label ids rather than labels, extra attention
should be paid when mapping labels to ids to make ChunkEvalOp work. The key point
is that the listed equations are satisfied by the ids.

tag_type = label % num_tag_type

chunk_type = label / num_tag_type

where num_tag_type is the number of tag types in the tagging scheme, num_chunk_type
is the number of chunk types, and tag_type gets its value from the following table.

Scheme   Begin   Inside   End   Single
plain    0       -        -     -
IOB      0       1        -     -
IOE      -       0        1     -
IOBES    0       1        2     3

Still use NER as example, assuming the tagging scheme is IOB while chunk types are ORG,
PER and LOC. To satisfy the above equations, the label map can be like this:

B-ORG   0
I-ORG   1
B-PER   2
I-PER   3
B-LOC   4
I-LOC   5
O       6

It’s not hard to verify the equations, noting that the number of chunk types
is 3 and the number of tag types in the IOB scheme is 2. For example, the label
id of I-LOC is 5, the tag type id of I-LOC is 1, and the chunk type id of
I-LOC is 2, which is consistent with the results from the equations.
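The id decomposition can be checked in plain Python, using the IOB label map above (num_tag_type is 2 for IOB; integer division stands in for the chunk_type equation):

```python
def decompose(label, num_tag_type):
    # Recover (tag_type, chunk_type) from a label id.
    return label % num_tag_type, label // num_tag_type

# I-LOC has label id 5 under the map above:
tag_type, chunk_type = decompose(5, 2)
# tag_type == 1 (Inside), chunk_type == 2 (LOC)
```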

Parameters:

input (Variable) – prediction output of the network.

label (Variable) – label of the test data set.

chunk_scheme (str) – The labeling scheme indicating how to encode the chunks. Must be IOB, IOE, IOBES or plain. See the description for details.

num_chunk_types (int) – The number of chunk type. See the description for details

excluded_chunk_types (list) – A list including chunk type ids indicating chunk types that are not counted. See the description for details

This function creates the op for sequence_conv, using the inputs and
other convolutional configurations for the filters and stride as given
in the input parameters to the function.

Parameters:

input (Variable) – (LoDTensor) the input(X) is a LodTensor, which supports variable-time length input sequence. The underlying tensor in this LoDTensor is a matrix with shape (T, N), where T is the total time steps in this mini-batch and N is the input_hidden_size

The convolution2D layer calculates the output based on the input, filter
and strides, paddings, dilations, groups parameters. Input and
Output are in NCHW format, where N is batch size, C is the number of
channels, H is the height of the feature, and W is the width of the feature.
Filter is in MCHW format, where M is the number of output image channels,
C is the number of input image channels, H is the height of the filter,
and W is the width of the filter. If the groups is greater than 1,
C will equal the number of input image channels divided by the groups.
Please refer to UFLDL’s convolution
for more details.
If bias attribution and activation type are provided, bias is added to the
output of the convolution, and the corresponding activation function is
applied to the final result.

For each input \(X\), the equation is:

\[Out = \sigma (W \ast X + b)\]

Where:

\(X\): Input value, a tensor with NCHW format.

\(W\): Filter value, a tensor with MCHW format.

\(\ast\): Convolution operation.

\(b\): Bias value, a 2-D tensor with shape [M, 1].

\(\sigma\): Activation function.

\(Out\): Output value, the shape of \(Out\) and \(X\) may be different.

groups (int) – The groups number of the Conv2d Layer. According to grouped
convolution in Alex Krizhevsky’s Deep CNN paper: when group=2,
the first half of the filters is only connected to the first half
of the input channels, while the second half of the filters is only
connected to the second half of the input channels. Default: groups=1

The convolution3D layer calculates the output based on the input, filter
and strides, paddings, dilations, groups parameters. Input(Input) and
Output(Output) are in NCDHW format, where N is batch size, C is the number of
channels, D is the depth of the feature, H is the height of the feature,
and W is the width of the feature. Convolution3D is similar to Convolution2D
but adds one dimension (depth). If bias attribution and activation type are
provided, bias is added to the output of the convolution, and the
corresponding activation function is applied to the final result.

For each input \(X\), the equation is:

\[Out = \sigma (W \ast X + b)\]

In the above equation:

\(X\): Input value, a tensor with NCDHW format.

\(W\): Filter value, a tensor with MCDHW format.

\(\ast\): Convolution operation.

\(b\): Bias value, a 2-D tensor with shape [M, 1].

\(\sigma\): Activation function.

\(Out\): Output value, the shape of \(Out\) and \(X\) may be different.

groups (int) – The groups number of the Conv3d Layer. According to grouped
convolution in Alex Krizhevsky’s Deep CNN paper: when group=2,
the first half of the filters is only connected to the first half
of the input channels, while the second half of the filters is only
connected to the second half of the input channels. Default: groups=1

This function computes the softmax activation among all time-steps for each
sequence. The dimension of each time-step should be 1. Thus, the shape of
input Tensor can be either \([N, 1]\) or \([N]\), where \(N\)
is the sum of the length of all sequences.

For example, for a mini-batch of 3 sequences with variable-length,
each containing 2, 3, 2 time-steps, the lod of which is [0, 2, 5, 7],
then softmax will be computed among \(X[0:2, :]\), \(X[2:5, :]\),
\(X[5:7, :]\), and \(N\) turns out to be 7.
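The per-sequence computation can be sketched in plain Python, with the lod offsets marking the sequence boundaries (scalars stand in for the 1-dimensional time steps; an illustration, not the fluid layer):

```python
import math

def sequence_softmax(x, lod):
    # x: flat list of N scalar time steps; lod: offsets, e.g. [0, 2, 5, 7]
    out = []
    for start, end in zip(lod, lod[1:]):
        seg = x[start:end]
        m = max(seg)                       # subtract max for stability
        exps = [math.exp(v - m) for v in seg]
        s = sum(exps)
        out.extend(e / s for e in exps)
    return out
```

Each slice of the output then sums to 1 independently of the other sequences in the batch.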

Parameters:

input (Variable) – The input variable which is a LoDTensor.

bias_attr (ParamAttr|None) – attributes for bias

param_attr (ParamAttr|None) – attributes for parameter

use_cudnn (bool) – Use cudnn kernel or not, it is valid only when the cudnn library is installed. Default: True

The input of the softmax operator is a tensor of any rank. The output tensor
has the same shape as the input.

The input tensor will first be logically flattened to a 2-D matrix. The matrix’s
second dimension (row length) is the same as the last dimension of the input
tensor, and the first dimension (column length) is the product of all other
dimensions of the input tensor. For each row of the matrix, the softmax operator
squashes the K-dimensional (K is the width of the matrix, which is also the size
of the input tensor’s last dimension) vector of arbitrary real values to a
K-dimensional vector of real values in the range [0, 1] that add up to 1.

It computes the exponential of the given dimension and the sum of
exponential values of all dimensions in the K-dimensional vector input.
The ratio of the exponential of the given dimension to the sum of
exponential values of all dimensions is then the output of the softmax
operator.

For each row \(i\) and each column \(j\) in the matrix, we have:

\[Out[i, j] = \frac{\exp(X[i, j])}{\sum_j \exp(X[i, j])}\]
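The formula for one row can be sketched directly in plain Python, with the usual max-subtraction for numerical stability (which leaves the result unchanged):

```python
import math

def softmax_row(xs):
    m = max(xs)                        # for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

row = softmax_row([1.0, 2.0, 3.0])
# each value lies in [0, 1] and the row sums to 1
```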

Parameters:

input (Variable) – The input variable.

bias_attr (ParamAttr) – attributes for bias

param_attr (ParamAttr) – attributes for parameter

use_cudnn (bool) – Use cudnn kernel or not, it is valid only when the cudnn library is installed.

The pooling2d operation calculates the output based on the input, pooling_type and ksize, strides, paddings parameters. Input(X) and output(Out) are in NCHW format, where N is batch size, C is the number of channels, H is the height of the feature, and W is the width of the feature. Parameters(ksize, strides, paddings) are two elements. These two elements represent height and width, respectively. The input(X) size and output(Out) size may be different.

input (Variable) – The input tensor of pooling operator. The format of
input tensor is NCHW, where N is batch size, C is
the number of channels, H is the height of the
feature, and W is the width of the feature.

pool_size (int) – The side length of pooling windows. All pooling
windows are squares with pool_size on a side.

pool_type – (string), pooling type, can be “max” for max-pooling and “avg” for average-pooling

pool_stride (int) – stride of the pooling layer.

pool_padding (int) – padding size.

global_pooling – (bool, default false) Whether to use the global pooling. If global_pooling = true, ksize and paddings will be ignored

Beam Search Decode Layer. This layer constructs the full hypotheses for
each source sentence by walking back along the LoDTensorArray ids
whose lods can be used to restore the path in the beam search tree.
Please see the following demo for a fully beam search usage example:

fluid/tests/book/test_machine_translation.py

Parameters:

ids (Variable) – The LodTensorArray variable containing the selected ids
of all steps.

scores (Variable) – The LodTensorArray variable containing the selected
scores of all steps.

beam_size (int) – The beam width used in beam search.

end_id (int) – The id of end token.

name (str|None) – A name for this layer(optional). If set None, the layer
will be named automatically.

Returns:

The LodTensor pair containing the generated id sequences and the corresponding scores. The shapes and lods of the two LodTensors are the same. The lod level is 2, and the two levels respectively indicate how many hypotheses each source sentence has and how many ids each hypothesis has.

The convolution2D transpose layer calculates the output based on the input,
filter, and dilations, strides, paddings. Input(Input) and output(Output)
are in NCHW format. Where N is batch size, C is the number of channels,
H is the height of the feature, and W is the width of the feature.
The parameters dilations, strides, and paddings each contain two elements,
which represent height and width, respectively. For the details of the
convolution transpose layer, please refer to the following explanation and
the references therein.
If bias attribution and activation type are provided, bias is added to
the output of the convolution, and the corresponding activation function
is applied to the final result.

For each input \(X\), the equation is:

\[Out = \sigma (W \ast X + b)\]

Where:

\(X\): Input value, a tensor with NCHW format.

\(W\): Filter value, a tensor with MCHW format.

\(\ast\): Convolution operation.

\(b\): Bias value, a 2-D tensor with shape [M, 1].

\(\sigma\): Activation function.

\(Out\): Output value, the shape of \(Out\) and \(X\) may be different.

num_filters (int) – The number of filters. It is the same as the number of
output image channels.

output_size (int|tuple|None) – The output image size. If output size is a
tuple, it must contain two integers, (image_H, image_W). This
parameter only works when filter_size is None.

filter_size (int|tuple|None) – The filter size. If filter_size is a tuple,
it must contain two integers, (filter_size_H, filter_size_W).
Otherwise, the filter will be a square. If it is None, filter_size is
calculated from output_size.

groups (int) – The groups number of the Conv2d transpose layer. Inspired by
grouped convolution in Alex Krizhevsky’s Deep CNN paper, in which
when group=2, the first half of the filters is only connected to the
first half of the input channels, while the second half of the
filters is only connected to the second half of the input channels.
Default: groups=1

The convolution3D transpose layer calculates the output based on the input,
filter, and dilations, strides, paddings. Input(Input) and output(Output)
are in NCDHW format. Where N is batch size, C is the number of channels,
D is the depth of the feature, H is the height of the feature, and W
is the width of the feature. The parameters dilations, strides, and
paddings each contain three elements, which represent depth, height, and
width, respectively. For the details of the convolution transpose layer,
please refer to the following explanation and the references therein.
If bias attribution and activation type are provided, bias is added to
the output of the convolution, and the corresponding activation function
is applied to the final result.

For each input \(X\), the equation is:

\[Out = \sigma (W \ast X + b)\]

In the above equation:

\(X\): Input value, a tensor with NCDHW format.

\(W\): Filter value, a tensor with MCDHW format.

\(\ast\): Convolution operation.

\(b\): Bias value, a 2-D tensor with shape [M, 1].

\(\sigma\): Activation function.

\(Out\): Output value, the shape of \(Out\) and \(X\) may be different.

num_filters (int) – The number of filters. It is the same as the number of
output image channels.

output_size (int|tuple|None) – The output image size. If output size is a
tuple, it must contain three integers, (image_D, image_H, image_W). This
parameter only works when filter_size is None.

filter_size (int|tuple|None) – The filter size. If filter_size is a tuple,
it must contain three integers, (filter_size_D, filter_size_H, filter_size_W).
Otherwise, the filter will be a cube. If it is None, filter_size is
calculated from output_size.

groups (int) – The groups number of the Conv3d transpose layer. Inspired by
grouped convolution in Alex Krizhevsky’s Deep CNN paper, in which
when group=2, the first half of the filters is only connected to the
first half of the input channels, while the second half of the
filters is only connected to the second half of the input channels.
Default: groups=1

Sequence Expand Layer. This layer will expand the input variable x
according to specified level lod of y. Please note that lod level of
x is at most 1 and rank of x is at least 2. When rank of x
is greater than 2, then it would be viewed as a 2-D tensor.
Following examples will explain how sequence_expand works:

This operator pads the sequences in the same batch to a consistent length. The length is specified by the attribute ‘padded_length’. New elements, whose values are specified by the input ‘PadValue’, will be appended to the end of each sequence to make the final lengths consistent.

pad_value (Variable) – The Variable that holds the values to be filled
into the padded steps. It can be a scalar or a tensor whose shape
equals the time-step size of the sequences. If it is a scalar, it will be
automatically broadcast to the shape of a time step.

maxlen (int, default None) – The length of padded sequences. It can be
None or any positive int. When it is None, all sequences will be
padded up to the length of the longest one among them; when it is a
certain positive value, it must be greater than the length of the
longest original sequence.

The inputs of the lstm unit include \(x_t\), \(h_{t-1}\) and
\(c_{t-1}\). The 2nd dimensions of \(h_{t-1}\) and \(c_{t-1}\)
should be the same. The implementation separates the linear
transformation from the non-linear transformation. Here, we take
\(i_t\) as an example. The linear transformation is applied by calling
a fc layer and the equation is:

\[L_{i_t} = W_{x_i}x_{t} + W_{h_i}h_{t-1} + b_i\]

The non-linear transformation is applied by calling lstm_unit_op and the
equation is:

\[i_t = \sigma(L_{i_t})\]

This layer has two outputs: \(h_t\) and \(c_t\).

Parameters:

x_t (Variable) – The input value of current step, a 2-D tensor with shape
M x N, M for batch size and N for input size.

param_attr (ParamAttr) – The attributes of parameter weights, used to set
initializer, name etc.

bias_attr (ParamAttr) – The attributes of bias weights, if not False,
bias weights will be created and be set to default value.

name (str|None) – A name for this layer(optional). If set None, the layer
will be named automatically.

Returns:

The hidden value and cell value of lstm unit.

Return type:

tuple

Raises:

ValueError – Raised if the ranks of x_t, hidden_t_prev, and cell_t_prev
are not 2, or the 1st dimensions of x_t, hidden_t_prev,
and cell_t_prev are not the same, or the 2nd dimensions of
hidden_t_prev and cell_t_prev are not the same.

dim (list|int|None) – The dimensions along which the sum is performed. If
None, sum all elements of input and return a
Tensor variable with a single element; otherwise it must be in the
range \([-rank(input), rank(input))\). If \(dim[i] < 0\),
the dimension to reduce is \(rank(input) + dim[i]\).

keep_dim (bool|False) – Whether to reserve the reduced dimension in the
output Tensor. The result tensor will have one fewer dimension
than the input unless keep_dim is true.

name (str|None) – A name for this layer(optional). If set None, the layer
will be named automatically.

Computes the mean of the input tensor’s elements along the given dimension.

Parameters:

input (Variable) – The input variable which is a Tensor or LoDTensor.

dim (list|int|None) – The dimension along which the mean is computed. If
None, compute the mean over all elements of input
and return a variable with a single element, otherwise it
must be in the range \([-rank(input), rank(input))\). If
\(dim[i] < 0\), the dimension to reduce is
\(rank(input) + dim[i]\).

keep_dim (bool) – Whether to reserve the reduced dimension in the
output Tensor. The result tensor will have one fewer dimension
than the input unless keep_dim is true.

name (str|None) – A name for this layer(optional). If set None, the layer
will be named automatically.

dim (list|int|None) – The dimension along which the maximum is computed.
If None, compute the maximum over all elements of
input and return a Tensor variable with a single element;
otherwise it must be in the range \([-rank(input), rank(input))\).
If \(dim[i] < 0\), the dimension to reduce is \(rank(input) + dim[i]\).

keep_dim (bool) – Whether to reserve the reduced dimension in the
output Tensor. The result tensor will have one fewer dimension
than the input unless keep_dim is true.

name (str|None) – A name for this layer(optional). If set None, the layer
will be named automatically.

dim (list|int|None) – The dimensions along which the minimum is computed.
If None, compute the minimum over all elements of
input and return a Tensor variable with a single element;
otherwise it must be in the range \([-rank(input), rank(input))\).
If \(dim[i] < 0\), the dimension to reduce is \(rank(input) + dim[i]\).

keep_dim (bool) – Whether to reserve the reduced dimension in the
output Tensor. The result tensor will have one fewer dimension
than the input unless keep_dim is true.

name (str|None) – A name for this layer(optional). If set None, the layer
will be named automatically.

dim (list|int|None) – The dimensions along which the product is performed. If
None, multiply all elements of input and return a
Tensor variable with a single element; otherwise it must be in the
range \([-rank(input), rank(input))\). If \(dim[i] < 0\),
the dimension to reduce is \(rank(input) + dim[i]\).

keep_dim (bool|False) – Whether to reserve the reduced dimension in the
output Tensor. The result tensor will have one fewer dimension
than the input unless keep_dim is true.

name (str|None) – A name for this layer(optional). If set None, the
layer will be named automatically.

Drop or keep each element of x independently. Dropout is a regularization
technique for reducing overfitting by preventing neuron co-adaptation during
training. The dropout operator randomly sets (according to the given dropout
probability) the outputs of some units to zero, while the others remain
unchanged.

Parameters:

x (Variable) – The input tensor variable.

dropout_prob (float) – Probability of setting units to zero.

is_test (bool) – A flag indicating whether it is in the test phase or not.

seed (int) – A Python integer used to create random seeds. If this
parameter is set to None, a random seed is used.
NOTE: If an integer seed is given, the same output
units will always be dropped. DO NOT use a fixed seed in training.

name (str|None) – A name for this layer(optional). If set None, the layer
will be named automatically.

num_or_sections (int|list) – If num_or_sections is an integer,
it indicates the number of equal-sized sub-tensors
that the tensor will be divided into. If num_or_sections
is a list of integers, the length of the list indicates the number of
sub-tensors, and the integers indicate the sizes of the sub-tensors
along dimension dim, in order.

dim (int) – The dimension along which to split. If \(dim < 0\), the
dimension to split along is \(rank(input) + dim\).

name (str|None) – A name for this layer(optional). If set None, the layer
will be named automatically.

input (Variable) – (LoDTensor<float>), the probabilities of
variable-length sequences, which is a 2-D Tensor with
LoD information. Its shape is [Lp, num_classes + 1],
where Lp is the sum of all input sequences’ lengths and
num_classes is the true number of classes (not
including the blank label).

The EditDistance operator computes the edit distances between a batch of
hypothesis strings and their references. Edit distance, also called
Levenshtein distance, measures how dissimilar two strings are by counting
the minimum number of operations required to transform one string into
another. Here the operations include insertion, deletion, and substitution.

For example, given hypothesis string A = “kitten” and reference
B = “sitting”, the edit distance is 3, since A can be transformed into B
by at least two substitutions and one insertion:

“kitten” -> “sitten” -> “sittin” -> “sitting”

The input is a LoDTensor consisting of all the hypothesis strings, with
the total number denoted by batch_size and the separation specified
by the LoD information. The batch_size reference strings are arranged
in the same order in the other input LoDTensor.

The output contains the batch_size results and each stands for the edit
distance for a pair of strings respectively. If Attr(normalized) is true,
the edit distance will be divided by the length of reference string.

Currently, the input tensors may have any rank, but when the rank of
either input is bigger than 3, the two inputs’ ranks should be equal.

The actual behavior depends on the shapes of \(x\), \(y\) and the
flag values of transpose_x, transpose_y. Specifically:

If a transpose flag is specified, the last two dimensions of the tensor
are transposed. If the tensor is rank-1 of shape \([D]\), then for
\(x\) it is treated as \([1, D]\) in nontransposed form and as
\([D, 1]\) in transposed form, whereas for \(y\) it is the
opposite: It is treated as \([D, 1]\) in nontransposed form and as
\([1, D]\) in transposed form.

After transpose, the two tensors are 2-D or n-D and matrix multiplication
performs in the following way.

If both are 2-D, they are multiplied like conventional matrices.

If either is n-D, it is treated as a stack of matrices residing in the
last two dimensions and a batched matrix multiply supporting broadcast
applies on the two tensors.

Also note that if the raw tensor \(x\) or \(y\) is rank-1 and
nontransposed, the prepended or appended dimension \(1\) will be
removed after matrix multiplication.
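NumPy's matmul follows the same rank-1 and broadcasting conventions, so the behavior can be illustrated directly:

```python
import numpy as np

a = np.ones(3)           # rank-1, treated as [1, 3] on the left
b = np.ones(3)           # rank-1, treated as [3, 1] on the right
dot = np.matmul(a, b)    # the prepended/appended 1-dims are removed: a scalar

x = np.ones((2, 5, 3, 4))
y = np.ones((4, 6))
z = np.matmul(x, y)      # batched matmul with broadcast: shape (2, 5, 3, 6)
```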

This operator is used to find values and indices of the k largest entries
for the last dimension.

If the input is a vector (1-D Tensor), finds the k largest entries in the vector
and outputs their values and indices as vectors. Thus values[j] is the j-th
largest entry in input, and its index is indices[j].

If the input is a Tensor with higher rank, this operator computes the top k
entries along the last dimension.
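The top-k semantics can be sketched in NumPy (`topk` here is a hypothetical helper, not the fluid operator):

```python
import numpy as np

def topk(x, k):
    # Values and indices of the k largest entries along the last
    # dimension, sorted in descending order of value.
    idx = np.argsort(-x, axis=-1)[..., :k]
    vals = np.take_along_axis(x, idx, axis=-1)
    return vals, idx

values, indices = topk(np.array([1.0, 5.0, 3.0, 2.0]), k=2)
# values[0] is the largest entry, indices[0] is its position.
```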

An operator integrating the open source Warp-CTC library
(https://github.com/baidu-research/warp-ctc)
to compute Connectionist Temporal Classification (CTC) loss.
It can be aliased as softmax with CTC, since a native softmax activation
is integrated into the Warp-CTC library to normalize values for each row
of the input tensor.

Parameters:

input (Variable) – The unscaled probabilities of variable-length sequences,
which is a 2-D Tensor with LoD information.
Its shape is [Lp, num_classes + 1], where Lp is the sum of all input
sequences’ lengths and num_classes is the true number of classes
(not including the blank label).

label (Variable) – The ground truth of variable-length sequence,
which is a 2-D Tensor with LoD information. It is of the shape [Lg, 1],
where Lg is the sum of all labels’ lengths.

norm_by_times (bool, default false) – Whether to normalize the gradients
by the number of time steps, which is also the sequence’s length.
There is no need to normalize the gradients if the warpctc layer is
followed by a mean_op.

Returns:

The Connectionist Temporal Classification (CTC) loss,
which is a 2-D Tensor of the shape [batch_size, 1].

This layer will rearrange the input sequences. The new dimension is set by
the user. The length of each sequence is computed from the original length,
the original dimension, and the new dimension. The following example helps
illustrate the function of this layer:

Extracts image patches from the input tensor to form a tensor of shape
{input.batch_size * output_height * output_width, filter_size_H *
filter_size_W * input.channels}, which is similar to im2col.
This op uses a filter / kernel to scan images and convert them to
sequences. After expanding, the number of time steps is
output_height * output_width for an image, in which output_height and
output_width are calculated by the equation below:

input_image_size (Variable) – the input that contains the real sizes of
the images. Its dim is [batch_size, 2]. It is optional and used only for
batch inference.

out_stride (int|tuple) – The scaling of the image through the CNN. It is
optional, and valid only when input_image_size is not null.
If out_stride is a tuple, it must contain two integers,
(out_stride_H, out_stride_W). Otherwise,
out_stride_H = out_stride_W = out_stride.

name (int) – The name of this layer. It is optional.

Returns:

The output is a LoDTensor with shape
{input.batch_size * output_height * output_width,
filter_size_H * filter_size_W * input.channels}.
If we regard output as a matrix, each row of this matrix is
a step of a sequence.

The hierarchical sigmoid operator is used to accelerate the training
process of language models. This operator organizes the classes into a
complete binary tree: each leaf node represents a class (a word) and each
internal node acts as a binary classifier. For each word there is a unique
path from the root to its leaf node; hsigmoid calculates the cost for each
internal node on the path and sums them to get the total cost. hsigmoid can
achieve an acceleration from \(O(N)\) to \(O(\log N)\), where \(N\)
represents the size of the word dictionary.

This layer does the search in beams for one time step. Specifically, it
selects the top-K candidate word ids of current step from ids
according to their scores for all source sentences, where K is
beam_size and ids,scores are predicted results from the
computation cell. Additionally, pre_ids and pre_scores are
the output of beam_search at previous step, they are needed for special use
to handle ended candidate translations.

Note that the scores passed in should be accumulated scores, and,
if needed, length penalty should be applied with extra operators before
computing the accumulated scores. It is also suggested to find the top-K
candidates beforehand and pass only those top-K candidates in.

Please see the following demo for a fully beam search usage example:

fluid/tests/book/test_machine_translation.py

Parameters:

pre_ids (Variable) – The LodTensor variable which is the output of
beam_search at previous step. It should be a LodTensor with shape
\((batch_size, 1)\) and lod
\([[0, 1, ... , batch_size], [0, 1, ..., batch_size]]\) at the
first step.

pre_scores (Variable) – The LodTensor variable which is the output of
beam_search at previous step.

ids (Variable) – The LodTensor variable containing the candidate ids.
Its shape should be \((batch_size \times beam_size, K)\),
where \(K\) is supposed to be beam_size.

scores (Variable) – The LodTensor variable containing the accumulated
scores corresponding to ids and its shape is the same as
the shape of ids.

beam_size (int) – The beam width used in beam search.

end_id (int) – The id of end token.

level (int, default 0) – It can be ignored and must not be changed
currently. It means the source level of lod, which is explained as
follows. The lod level of ids should be 2. The first level is the source
level, which describes how many prefixes (branches) there are for each
source sentence (beam), and the second level is the sentence level, which
describes how these candidates belong to the prefixes. The paths
linking prefixes and selected candidates are organized and preserved
in lod.

name (str|None) – A name for this layer(optional). If set None, the layer
will be named automatically.

Returns:

The LodTensor pair containing the selected ids and the corresponding scores.

Return type:

Variable

Examples

# Suppose `probs` contains predicted results from the computation
# cell and `pre_ids` and `pre_scores` is the output of beam_search
# at previous step.
topk_scores, topk_indices = layers.topk(probs, k=beam_size)
accu_scores = layers.elementwise_add(
    x=layers.log(x=topk_scores),
    y=layers.reshape(pre_scores, shape=[-1]),
    axis=0)
selected_ids, selected_scores = layers.beam_search(
    pre_ids=pre_ids,
    pre_scores=pre_scores,
    ids=topk_indices,
    scores=accu_scores,
    beam_size=beam_size,
    end_id=end_id)

The main motivation is that a bidirectional RNN, useful in DeepSpeech-like speech models, learns a representation for a sequence by performing a forward and a backward pass through the entire sequence. However, unlike unidirectional RNNs, bidirectional RNNs are challenging to deploy in an online, low-latency setting. The lookahead convolution incorporates information from future subsequences in a computationally efficient manner to improve unidirectional recurrent neural networks. The row convolution operator is different from the 1D sequence convolution and is computed as follows:

Given an input sequence \(in\) of length \(t\) and input dimension \(d\), and a filter (\(W\)) of size \(context \times d\), the output sequence is convolved as:

input (Variable) – the input(X) is a LodTensor, which supports variable time-length input sequences. The underlying tensor in this LoDTensor is a matrix with shape (T x N), where T is the total time steps in this mini-batch and N is the input data dimension.

Referring to the given index variable, this layer selects rows from the input variables to construct a multiplex variable. Assume that there are \(m\) input variables, \(I_i\) represents the i-th input variable, and \(i\) is in [0, \(m\)). All input variables are tensors with the same shape [\(d_0\), \(d_1\), ..., \(d_R\)]. Please note that the rank of the input tensor should be at least 2. Each input variable will be treated as a 2-D matrix with shape [\(M\), \(N\)], where \(M\) stands for \(d_0\) and \(N\) stands for \(d_1 \times d_2 \times ... \times d_R\). Let \(I_i[j]\) be the j-th row of the i-th input variable. The given index variable should be a 2-D tensor with shape [\(M\), 1]. Let ID[i] be the i-th index value of the index variable. Then the output variable will be a tensor with shape [\(d_0\), \(d_1\), ..., \(d_R\)]. If we treat the output tensor as a 2-D matrix with shape [\(M\), \(N\)] and let \(O[i]\) be the i-th row of the matrix, then O[i] is equal to \(I_{ID[i]}[i]\).

Ids: the index tensor.

X[0 : N - 1]: the candidate tensors for output (N >= 2).

For each index i from 0 to batchSize - 1, the output is the i-th row of the (Ids[i])-th tensor.

For i-th row of the output tensor:

$$ y[i] = x_{k}[i] $$

where \(y\) is the output tensor, \(x_{k}\) is the k-th input tensor, and \(k = Ids[i]\).
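The row-selection rule can be sketched in NumPy (an illustration of the semantics; `multiplex` here is a hypothetical helper, not the fluid operator):

```python
import numpy as np

def multiplex(inputs, ids):
    # inputs: list of tensors with identical shape [M, N];
    # ids: [M, 1] index tensor. Row i of the output is row i of
    # the (ids[i])-th input.
    stacked = np.stack(inputs)           # shape [num_inputs, M, N]
    rows = np.arange(stacked.shape[1])
    return stacked[ids.reshape(-1), rows]

x0 = np.array([[0.0, 1.0], [2.0, 3.0]])
x1 = np.array([[10.0, 11.0], [12.0, 13.0]])
ids = np.array([[1], [0]])
out = multiplex([x0, x1], ids)  # row 0 from x1, row 1 from x0
```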

Assume feature vectors exist on dimensions begin_norm_axis...rank(input) and calculate the moment statistics along these dimensions for each feature vector \(a\) with size \(H\), then normalize each feature vector using the corresponding statistics. After that, apply learnable gain and bias on the normalized tensor to scale and shift if scale and shift are set.

Cross entropy loss with softmax is used as the output layer extensively. This
operator computes the softmax normalized values for each row of the input
tensor, after which cross-entropy loss is computed. This provides a more
numerically stable gradient.

Because this operator performs a softmax on logits internally, it expects
unscaled logits. This operator should not be used with the output of
softmax operator since that would produce incorrect results.

When the attribute soft_label is set to false, this operator expects mutually
exclusive hard labels, i.e. each sample in a batch is in exactly one class with a
probability of 1.0. Each sample in the batch will have a single label.

logits (Variable) – The unscaled log probabilities, which is a 2-D tensor
with shape [N x K]. N is the batch_size, and K is the class number.

label (Variable) – The ground truth which is a 2-D tensor. If soft_label
is set to false, Label is a Tensor<int64> with shape [N x 1]. If
soft_label is set to true, Label is a Tensor<float/double> with
shape [N x K].

soft_label (bool) – A flag to indicate whether to interpret the given
labels as soft labels. By default, soft_label is set to False.

This layer computes the smooth L1 loss for Variables x and y.
It takes the first dimension of x and y as the batch size.
For each instance, it computes the smooth L1 loss element by element first
and then sums all the losses. So the shape of the output Variable is
[batch_size, 1].
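Assuming the commonly used formulation with sigma = 1 (an assumption; the operator also takes optional inside/outside weights), the per-instance loss can be sketched in NumPy:

```python
import numpy as np

def smooth_l1(x, y):
    # Element-wise smooth L1 with sigma = 1, then summed per instance:
    # 0.5 * d^2 if |d| < 1, else |d| - 0.5, where d = x - y.
    d = np.abs(x - y)
    elem = np.where(d < 1.0, 0.5 * d * d, d - 0.5)
    return elem.reshape(x.shape[0], -1).sum(axis=1, keepdims=True)

x = np.array([[0.0, 2.0]])
y = np.array([[0.5, 0.0]])
loss = smooth_l1(x, y)  # shape [batch_size, 1]
```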

y (Variable) – A tensor with rank at least 2. The target value of smooth
L1 loss op with same shape as x.

inside_weight (Variable|None) – A tensor with rank at least 2. This
input is optional and should have same shape with x. If
provided, the result of (x - y) will be multiplied
by this tensor element by element.

outside_weight (Variable|None) – A tensor with rank at least 2. This
input is optional and should have same shape with x. If
provided, the out smooth L1 loss will be multiplied by this tensor
element by element.

The target shape can be given by shape or actual_shape.
shape is a list of integers, while actual_shape is a tensor
variable. actual_shape has a higher priority than shape
if it is provided, while shape still should be set correctly to
guarantee shape inference at compile time.

Some tricks exist when specifying the target shape.

1. -1 means the value of this dimension is inferred from the total element
number of x and the remaining dimensions. Thus one and only one dimension can
be set to -1.

2. 0 means the actual dimension value is going to be copied from the
corresponding dimension of x. The indices of 0s in shape cannot exceed
Rank(X).

Here are some examples to explain it.

1. Given a 3-D tensor x with a shape [2, 4, 6], and the target shape
is [6, 8], the reshape operator will transform x into a 2-D tensor with
shape [6, 8] and leaving x’s data unchanged.

2. Given a 3-D tensor x with a shape [2, 4, 6], and the target shape
specified is [2, 3, -1, 2], the reshape operator will transform x into a
4-D tensor with shape [2, 3, 4, 2] and leaving x’s data unchanged. In this
case, one dimension of the target shape is set to -1, the value of this
dimension is inferred from the total element number of x and remaining
dimensions.

3. Given a 3-D tensor x with a shape [2, 4, 6], and the target shape
is [-1, 0, 3, 2], the reshape operator will transform x into a 4-D tensor
with shape [2, 4, 3, 2] and leaving x’s data unchanged. In this case,
besides -1, 0 means the actual dimension value is going to be copied from
the corresponding dimension of x.
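The -1 and 0 tricks can be sketched as a small shape-inference helper (`infer_shape` is a hypothetical name; it reproduces the three examples above):

```python
def infer_shape(x_shape, target):
    # Resolve the 0 and -1 tricks: 0 copies the corresponding dimension
    # of x; the single -1 is inferred from the remaining element count.
    total = 1
    for d in x_shape:
        total *= d
    out = []
    for i, d in enumerate(target):
        out.append(x_shape[i] if d == 0 else d)
    known = 1
    for d in out:
        if d != -1:
            known *= d
    return [total // known if d == -1 else d for d in out]

shape1 = infer_shape([2, 4, 6], [6, 8])         # example 1
shape2 = infer_shape([2, 4, 6], [2, 3, -1, 2])  # example 2
shape3 = infer_shape([2, 4, 6], [-1, 0, 3, 2])  # example 3
```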

Parameters:

x (variable) – The input tensor.

shape (list) – The new shape. At most one dimension of the new shape can
be -1.

actual_shape (variable) – An optional input. If provided, reshape
according to this given shape rather than
the shape specified by shape. That is to
say, actual_shape has a higher priority
than shape.

act (str) – The non-linear activation to be applied to output variable.

inplace (bool) – If this flag is set true, the output
shares data with input without copying, otherwise
a new output tensor is created
whose data is copied from input x.

Set the LoD of x to a new one specified by y or
target_lod. When y is provided, y.lod is
considered as the target LoD first; otherwise y.data is
considered as the target LoD. If y is not provided, the target LoD should
be specified by target_lod. If the target LoD is specified by
y.data or target_lod, only one-level LoD is supported.

Pads a tensor with a constant value given by pad_value, and the
padded width is specified by paddings.

Specifically, the number of values padded before the contents of x
in dimension i is indicated by paddings[2*i], and the number
of values padded after the contents of x in dimension i is
indicated by paddings[2*i+1].

Label smoothing is a mechanism to regularize the classifier layer and is
called label-smoothing regularization (LSR).

Label smoothing is proposed to encourage the model to be less confident,
since optimizing the log-likelihood of the correct label directly may
cause overfitting and reduce the ability of the model to adapt. Label
smoothing replaces the ground-truth label \(y\) with the weighted sum
of itself and some fixed distribution \(\mu\). For class \(k\), we
have

\[\tilde{y_k} = (1 - \epsilon) * y_k + \epsilon * \mu_k,\]

where \(1 - \epsilon\) and \(\epsilon\) are the weights,
respectively, and \(\tilde{y_k}\) is the smoothed label. Usually a
uniform distribution is used for \(\mu\).
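The smoothing formula with a uniform \(\mu\) can be sketched in NumPy (epsilon = 0.1 here is an illustrative choice):

```python
import numpy as np

def label_smooth(y, epsilon=0.1):
    # Mix the one-hot label with a uniform distribution mu_k = 1/K:
    # y_tilde = (1 - epsilon) * y + epsilon * mu.
    k = y.shape[-1]
    return (1.0 - epsilon) * y + epsilon / k

y = np.array([[0.0, 0.0, 1.0, 0.0]])
smoothed = label_smooth(y, epsilon=0.1)
# The smoothed label is still a valid distribution (sums to 1).
```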

input (Variable) – (Tensor), the input of ROIPoolOp. The format of input tensor is NCHW. Where N is batch size, C is the number of input channels, H is the height of the feature, and W is the width of the feature

Resize a batch of images. The short edge of the input images will be
resized to the given out_short_len. The long edge of the input images
will be resized proportionally so that each image’s aspect ratio is
preserved.

Parameters:

input (Variable) – The input tensor of image resize layer,
This is a 4-D tensor of the shape
(num_batches, channels, in_h, in_w).

out_short_len (int) – The length of output images’ short edge.

resample (str) – resample method, default: BILINEAR.

Returns:

The output is a 4-D tensor of the shape
(num_batches, channels, out_h, out_w).

This operator takes a batch of instances and does random cropping on each instance. The cropping positions differ between instances and are determined by a uniform random generator. All cropped instances have the same shape, which is determined by the operator’s attribute ‘shape’.

Parameters:

x (Variable) – A batch of instances to random crop

shape (INTS) – The shape of a cropped instance

seed (int|Variable|None) – The random seed. By default, the seed is
obtained from random.randint(-65536, 65535).

Mean Intersection-Over-Union is a common evaluation metric for
semantic image segmentation, which first computes the IOU for each
semantic class and then computes the average over classes.
IOU is defined as follows:

The predictions are accumulated in a confusion matrix and mean-IOU
is then calculated from it.

Parameters:

input (Variable) – A Tensor of prediction results for semantic labels with type int32 or int64.

label (Variable) – A Tensor of ground truth labels with type int32 or int64.
Its shape should be the same as input.

num_classes (int) – The possible number of labels.

Returns:

A Tensor representing the mean intersection-over-union with shape [1].
out_wrong (Variable): A Tensor with shape [num_classes], the number of wrong predictions for each class.
out_correct (Variable): A Tensor with shape [num_classes], the number of correct predictions for each class.

shape (Variable|list/tuple of integer) – The output shape is specified
by shape, which can be a Variable or a list/tuple of integers.
If it is a tensor Variable, its rank must be the same as x. This way
is suitable for the case that the output shape may be changed each
iteration. If it is a list/tuple of integers, its length must be the same
as the rank of x.

offsets (Variable|list/tuple of integer|None) – Specifies the cropping
offsets at each dimension. It can be a Variable or a list/tuple
of integers. If it is a tensor Variable, its rank must be the same as x.
This way is suitable for the case that the offsets may be changed
each iteration. If it is a list/tuple of integers, its length must be the
same as the rank of x. If None, the offsets are 0 at each
dimension.

name (str|None) – A name for this layer(optional). If set None, the layer
will be named automatically.

P = {0, 1} or {0, 0.5, 1}, where 0.5 means that there is no information
about the rank of the input pair.

Rank loss layer takes three inputs: left (o_i), right (o_j) and
label (P_{i,j}). The inputs respectively represent RankNet’s output scores
for documents A and B and the value of label P. The following equation
computes rank loss C_{i,j} from the inputs:

label (Variable): Indicates whether A is ranked higher than B or not.
left (Variable): RankNet’s output score for doc A.
right (Variable): RankNet’s output score for doc B.
name (str|None): A name for this layer (optional). If set None, the layer
will be named automatically.

axis (int) – Indicate up to which input dimensions (exclusive) should
be flattened to the outer dimension of the output.
The value for axis must be in the range [0, R], where R
is the rank of the input tensor. When axis = 0, the shape
of the output tensor is (1, (d_0 X d_1 ... d_n)), where the
shape of the input tensor is (d_0, d_1, ..., d_n).

name (str|None) – A name for this layer (optional). If set to None, the layer
will be named automatically.

Returns:

A 2D tensor with the contents of the input tensor, with input
dimensions up to axis flattened to the outer dimension of
the output and the remaining input dimensions flattened into the
inner dimension of the output.
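The flattening rule above can be sketched with a plain NumPy reshape; a minimal sketch of the described semantics, not the operator implementation:

```python
import numpy as np

def flatten(x, axis):
    """Dimensions [0, axis) form the outer dimension of the output,
    the remaining dimensions [axis, ndim) form the inner one."""
    outer = int(np.prod(x.shape[:axis]))  # np.prod(()) == 1 for axis == 0
    inner = int(np.prod(x.shape[axis:]))
    return x.reshape(outer, inner)

x = np.zeros((2, 3, 4, 5))
print(flatten(x, axis=2).shape)  # (6, 20)
print(flatten(x, axis=0).shape)  # (1, 120)
```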

This operator is used to perform matrix multiplication for input \(X\) and \(Y\).

The equation is:

$$Out = X * Y$$

Both the input \(X\) and \(Y\) can carry the LoD (Level of Details) information,
or not. But the output only shares the LoD information with input \(X\).

Parameters:

x – (Tensor), The first input tensor of mul op.

y – (Tensor), The second input tensor of mul op.

x_num_col_dims (INT) – (int, default 1), The mul_op can take tensors with more than two
dimensions as its inputs. If the input \(X\) is a tensor with more
than two dimensions, \(X\) will be flattened into a two-dimensional
matrix first. The flattening rule is: the first num_col_dims
will be flattened to form the first dimension of the final matrix
(the height of the matrix), and the rest rank(X) - num_col_dims
dimensions are flattened to form the second dimension of the final
matrix (the width of the matrix). As a result, height of the
flattened matrix is equal to the product of \(X\)‘s first
x_num_col_dims dimensions’ sizes, and width of the flattened
matrix is equal to the product of \(X\)‘s last rank(x) - num_col_dims
dimensions’ size. For example, suppose \(X\) is a 5-dimensional
tensor with the shape [2, 3, 4, 5, 6], and x_num_col_dims = 3.
Thus, the flattened matrix will have a shape [2 x 3 x 4, 5 x 6] =
[24, 30].

y_num_col_dims (INT) – (int, default 1), The mul_op can take tensors with more than two
dimensions as its inputs. If the input \(Y\) is a tensor with more
than two dimensions, \(Y\) will be flattened into a two-dimensional
matrix first. The attribute y_num_col_dims determines how \(Y\) is
flattened. See comments of x_num_col_dims for more details.
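The flattening-then-multiply rule above can be sketched in NumPy; an illustrative sketch of the described semantics, not the mul_op implementation:

```python
import numpy as np

def mul(x, y, x_num_col_dims=1, y_num_col_dims=1):
    """Reshape X to [prod(first x_num_col_dims dims), prod(rest)],
    Y analogously, then do an ordinary matrix multiply."""
    xm = x.reshape(int(np.prod(x.shape[:x_num_col_dims])), -1)
    ym = y.reshape(int(np.prod(y.shape[:y_num_col_dims])), -1)
    return xm @ ym

x = np.ones((2, 3, 4, 5, 6))  # flattened to [24, 30] with x_num_col_dims=3
y = np.ones((5, 6, 7))        # flattened to [30, 7] with y_num_col_dims=2
print(mul(x, y, x_num_col_dims=3, y_num_col_dims=2).shape)  # (24, 7)
```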

This measures the element-wise probability error in classification tasks
in which each class is independent. This can be thought of as predicting labels
for a data-point, where labels are not mutually exclusive.
For example, a news article can be about politics, technology or sports
at the same time or none of these.

The logistic loss is given as follows:

$$loss = -Labels * log(sigma(X)) - (1 - Labels) * log(1 - sigma(X))$$

We know that $$sigma(X) = \frac{1}{1 + exp(-X)}$$. By substituting this we get:

$$loss = X - X * Labels + log(1 + exp(-X))$$

For stability and to prevent overflow of $$exp(-X)$$ when X < 0,
we reformulate the loss as follows:

$$loss = max(X, 0) - X * Labels + log(1 + exp(-|X|))$$

Both the input X and Labels can carry the LoD (Level of Details) information.
However the output only shares the LoD with input X.

Parameters:

x – (Tensor, default Tensor<float>), a 2-D tensor with shape N x D, where N is the batch size and D is the number of classes. This input is a tensor of logits computed by the previous operator. Logits are unscaled log probabilities given as log(p/(1-p)).

label – (Tensor, default Tensor<float>), a 2-D tensor of the same type and shape as X. This input is a tensor of probabilistic labels for each logit.
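The equivalence of the naive and numerically stable formulations above can be checked with a small NumPy sketch (illustrative only, not the operator implementation):

```python
import numpy as np

def naive_loss(x, labels):
    """Direct logistic loss: -y*log(sigma(x)) - (1-y)*log(1-sigma(x))."""
    sig = 1.0 / (1.0 + np.exp(-x))
    return -labels * np.log(sig) - (1 - labels) * np.log(1 - sig)

def stable_loss(x, labels):
    """Overflow-safe form: max(x, 0) - x*y + log(1 + exp(-|x|))."""
    return np.maximum(x, 0) - x * labels + np.log1p(np.exp(-np.abs(x)))

x = np.array([-3.0, -0.5, 0.0, 2.0])
labels = np.array([0.0, 1.0, 0.5, 1.0])
print(np.allclose(naive_loss(x, labels), stable_loss(x, labels)))  # True
```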

This operator limits the L2 norm of the input \(X\) within \(max\_norm\).
If the L2 norm of \(X\) is less than or equal to \(max\_norm\), \(Out\) will be
the same as \(X\). If the L2 norm of \(X\) is greater than \(max\_norm\), \(X\) will
be linearly scaled to make the L2 norm of \(Out\) equal to \(max\_norm\), as
shown in the following formula:
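The clipping rule above can be sketched in NumPy; a minimal sketch of the described behavior, not the operator implementation:

```python
import numpy as np

def clip_by_norm(x, max_norm):
    """Scale x down only when its L2 norm exceeds max_norm; otherwise
    return it unchanged."""
    norm = np.linalg.norm(x)
    if norm <= max_norm:
        return x
    return x * (max_norm / norm)

x = np.array([3.0, 4.0])                # L2 norm is 5
print(clip_by_norm(x, max_norm=1.0))    # scaled to unit norm, about [0.6, 0.8]
print(clip_by_norm(x, max_norm=10.0))   # unchanged
```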

seed (INT) – (int, default 0) Random seed used for generating samples. 0 means use a seed generated by the system. Note that if seed is not 0, this operator will always generate the same random numbers every time.

Used to initialize tensors with a gaussian random generator.
The default mean of the distribution is 0.0 and the default standard
deviation (std) is 1.0. Users can set mean and std
by input arguments.

Produces a slice of the input tensor along multiple axes. Similar to numpy:
https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html
Slice uses the axes, starts and ends attributes to specify the start and
end indices for each axis in the list of axes, and uses this information
to slice the input data tensor. If a negative value is passed for any of
the start or end indices, it is counted from the end
of that dimension. If the value passed to start or end is larger than
n (the number of elements in this dimension), it is treated as n.
For slicing to the end of a dimension with unknown size, it is recommended
to pass in INT_MAX. If axes are omitted, they are set to [0, ..., ndim-1].
Following examples will explain how slice works:
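The slice semantics can be sketched with ordinary NumPy indexing, which handles negative and overlarge indices the same way (a sketch of the described behavior, not the operator itself):

```python
import numpy as np

def slice_op(x, axes, starts, ends):
    """Build a tuple of Python slices, leaving omitted axes untouched."""
    idx = [slice(None)] * x.ndim
    for axis, s, e in zip(axes, starts, ends):
        idx[axis] = slice(s, e)
    return x[tuple(idx)]

data = np.arange(20).reshape(4, 5)
# axes=[0, 1], starts=[1, 0], ends=[3, 3]  ->  rows 1..2, cols 0..2
print(slice_op(data, [0, 1], [1, 0], [3, 3]).tolist())  # [[5, 6, 7], [10, 11, 12]]
# a negative start counts from the end of that dimension:
print(slice_op(data, [0], [-2], [4]).shape)  # (2, 5)
```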

The slope should be positive. The offset can be either positive or negative.
The default slope and shift are set according to the above reference.
It is recommended to use the defaults for this activation.

This operator initializes a tensor with random values sampled from a
uniform distribution. The random result is in the range [min, max].

Parameters:

shape (INTS) – The shape of the output tensor

min (FLOAT) – Minimum value of uniform random. [default -1.0].

max (FLOAT) – Maximum value of uniform random. [default 1.0].

seed (INT) – Random seed used for generating samples. 0 means use a seed generated by the system. Note that if seed is not 0, this operator will always generate the same random numbers every time. [default 0].

The cumulative sum of the elements along a given axis.
By default, the first element of the result is the same as the first element of
the input. If exclusive is true, the first element of the result is 0.
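The default and exclusive variants described above can be sketched with NumPy (illustrative only, not the operator implementation):

```python
import numpy as np

def cumsum(x, exclusive=False):
    """By default the first output element equals the first input element;
    with exclusive=True the result is shifted right so it starts at 0."""
    out = np.cumsum(x)
    if exclusive:
        out = np.concatenate(([0], out[:-1]))
    return out

x = np.array([1, 2, 3, 4])
print(cumsum(x).tolist())                  # [1, 3, 6, 10]
print(cumsum(x, exclusive=True).tolist())  # [0, 1, 3, 6]
```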

When training a model, it is often recommended to lower the learning rate as the
training progresses. By using this function, the learning rate will be decayed by
‘decay_rate’ every ‘decay_steps’ steps.
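The schedule above can be sketched as follows, assuming the staircase variant in which the rate drops at whole multiples of decay_steps (the continuous variant would use a fractional exponent instead):

```python
def decayed_lr(base_lr, decay_rate, decay_steps, step):
    """Staircase exponential decay: multiply base_lr by decay_rate once
    every decay_steps steps."""
    return base_lr * decay_rate ** (step // decay_steps)

print(decayed_lr(0.1, 0.5, decay_steps=100, step=0))    # 0.1
print(decayed_lr(0.1, 0.5, decay_steps=100, step=250))  # 0.1 * 0.5**2 = 0.025
```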

Generate prior boxes for the SSD (Single Shot MultiBox Detector) algorithm.
Each position of the input produces N prior boxes, where N is determined by
the count of min_sizes, max_sizes and aspect_ratios. The size of each
box is in the range (min_size, max_size), generated in
sequence according to the aspect_ratios.

Parameters:

input (Variable) – The Input Variables, the format is NCHW.

image (Variable) – The input image data of PriorBoxOp,
the layout is NCHW.

min_max_aspect_ratios_order (bool) – If set True, the output prior box is
in order of [min, max, aspect_ratios], which is consistent with
Caffe. Please note, this order affects the weight order of the
following convolution layer and does not affect the final
detection results. Default: False.

Returns:

A tuple with two Variable (boxes, variances)

boxes: the output prior boxes of PriorBox.
The layout is [H, W, num_priors, 4].
H is the height of input, W is the width of input,
num_priors is the total
box count of each position of input.

variances: the expanded variances of PriorBox.
The layout is [H, W, num_priors, 4].
H is the height of input, W is the width of input,
num_priors is the total
box count of each position of input.

min_max_aspect_ratios_order (bool) – If set True, the output prior box is
in order of [min, max, aspect_ratios], which is consistent with
Caffe. Please note, this order affects the weight order of the
following convolution layer and does not affect the final
detection results. Default: False.

Returns:

A tuple with four Variables. (mbox_loc, mbox_conf, boxes, variances)

mbox_loc: The predicted boxes’ location of the inputs. The layout
is [N, H*W*Priors, 4], where Priors is the number of predicted
boxes at each position of each input.

mbox_conf: The predicted boxes’ confidence of the inputs. The layout
is [N, H*W*Priors, C], where Priors is the number of predicted boxes
at each position of each input and C is the number of classes.

boxes: the output prior boxes of PriorBox. The layout is [num_priors, 4].
num_priors is the total box count of each position of inputs.

variances: the expanded variances of PriorBox. The layout is
[num_priors, 4]. num_priors is the total box count of each position of inputs.

This operator implements a greedy bipartite matching algorithm, which is
used to obtain the matching with the maximum distance based on the input
distance matrix. For input 2D matrix, the bipartite matching algorithm can
find the matched column for each row (matched means the largest distance),
also can find the matched row for each column. This operator only
calculates matched indices from column to row. For each instance,
the number of matched indices is the column number of the input distance
matrix.

There are two outputs, matched indices and distance.
Briefly, this algorithm matches the best (maximum distance)
row entity to each column entity, and the matched indices are not duplicated
in each row of ColToRowMatchIndices. If a column entity is not matched to
any row entity, -1 is set in ColToRowMatchIndices.

NOTE: the input DistMat can be LoDTensor (with LoD) or Tensor.
If LoDTensor with LoD, the height of ColToRowMatchIndices is batch size.
If Tensor, the height of ColToRowMatchIndices is 1.
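The greedy matching described above can be sketched in plain NumPy: repeatedly take the largest remaining entry, record the (row, column) pair, and exclude both from further matching. This is an illustrative sketch; the operator's exact tie-breaking may differ.

```python
import numpy as np

def bipartite_match(dist):
    """Greedy bipartite matching on a [K, M] distance matrix.
    Returns, for each column, the matched row index (or -1)."""
    dist = dist.astype(float).copy()
    K, M = dist.shape
    col_to_row = np.full(M, -1, dtype=int)
    for _ in range(min(K, M)):
        i, j = np.unravel_index(np.argmax(dist), dist.shape)
        if dist[i, j] < 0:       # everything left is already masked
            break
        col_to_row[j] = i
        dist[i, :] = -1.0        # row i is taken
        dist[:, j] = -1.0        # column j is taken
    return col_to_row

dist = np.array([[0.9, 0.1, 0.3],
                 [0.2, 0.8, 0.4]])
print(bipartite_match(dist).tolist())  # [0, 1, -1]
```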

NOTE: This API is a very low level API. It is used by the ssd_loss
layer. Please consider using ssd_loss instead.

Parameters:

dist_matrix (Variable) –

This input is a 2-D LoDTensor with shape
[K, M]. It is pair-wise distance matrix between the entities
represented by each row and each column. For example, assumed one
entity is A with shape [K], another entity is B with shape [M]. The
dist_matrix[i][j] is the distance between A[i] and B[j]. The bigger
the distance is, the better matching the pairs are.

NOTE: This tensor can contain LoD information to represent a batch
of inputs. One instance of this batch can contain different numbers
of entities.

match_type (string|None) – The type of matching method, should be
‘bipartite’ or ‘per_prediction’. [default ‘bipartite’].

dist_threshold (float|None) – If match_type is ‘per_prediction’,
this threshold is to determine the extra matching bboxes based
on the maximum distance, 0.5 by default.

Returns:

A tuple with two elements is returned. The first is
matched_indices, the second is matched_distance.

The matched_indices is a 2-D Tensor with shape [N, M] in int type.
N is the batch size. If match_indices[i][j] is -1, it
means B[j] does not match any entity in i-th instance.
Otherwise, it means B[j] is matched to row
match_indices[i][j] in i-th instance. The row number of
i-th instance is saved in match_indices[i][j].

The matched_distance is a 2-D Tensor with shape [N, M] in float type.
N is the batch size. If match_indices[i][j] is -1,
match_distance[i][j] is also -1.0. Otherwise, assumed
match_distance[i][j] = d, and the row offsets of each instance
are called LoD. Then match_distance[i][j] =
dist_matrix[d+LoD[i]][j].

Given the target bounding boxes or labels, this operator assigns
classification and regression targets to each prediction, as well as
weights for each prediction. The weights are used to specify which
predictions do not contribute to the training loss.

For each instance, the outputs out and out_weight are assigned based on
match_indices and negative_indices.
Assumed that the row offset for each instance in input is called lod,
this operator assigns classification/regression targets by performing the
following steps:

matched_indices (Variable) – The input matched indices are a 2D
Tensor<int32> with shape [N, P]. If MatchIndices[i][j] is -1,
the j-th entity of column is not matched to any entity of row in
the i-th instance.

negative_indices (Variable) – The input negative example indices are
an optional input with shape [Neg, 1] and int32 type, where Neg is
the total number of negative example indices.

mismatch_value (float32) – Fill this value to the mismatched location.

Returns:

A tuple (out, out_weight) is returned. out is a 3D Tensor with
shape [N, P, K], where N and P are the same as they are in
match_indices and K is the same as it is in the input X. If
match_indices[i][j] is -1, the corresponding output is filled with
mismatch_value. out_weight is the weight for the output, with
the shape of [N, P, 1].

This operation is to get the detection results by performing the following
two steps:

Decode input bounding box predictions according to the prior boxes.

Get the final detection results by applying multi-class non maximum
suppression (NMS).

Please note, this operation doesn’t clip the final output bounding boxes
to the image window.

Parameters:

loc (Variable) – A 3-D Tensor with shape [N, M, 4] represents the
predicted locations of M bounding bboxes. N is the batch size,
and each bounding box has four coordinate values and the layout
is [xmin, ymin, xmax, ymax].

scores (Variable) – A 3-D Tensor with shape [N, M, C] represents the
predicted confidence predictions. N is the batch size, C is the
class number, M is the number of bounding boxes. For each category
there are in total M scores corresponding to the M bounding boxes.

prior_box (Variable) – A 2-D Tensor with shape [M, 4] holds M boxes,
each box is represented as [xmin, ymin, xmax, ymax],
[xmin, ymin] is the left top coordinate of the anchor box,
if the input is image feature map, they are close to the origin
of the coordinate system. [xmax, ymax] is the right bottom
coordinate of the anchor box.

background_label (float) – The index of background label,
the background label will be ignored. If set to -1, then all
categories will be considered.

nms_threshold (float) – The threshold to be used in NMS.

nms_top_k (int) – Maximum number of detections to be kept according
to the confidences after filtering detections based on
score_threshold.

keep_top_k (int) – Number of total bboxes to be kept per image after
NMS step. -1 means keeping all bboxes after NMS step.

score_threshold (float) – Threshold to filter out bounding boxes with
low confidence score. If not provided, consider all boxes.

nms_eta (float) – The parameter for adaptive NMS.

Returns:

The detection output is a LoDTensor with shape [No, 6].
Each row has six values: [label, confidence, xmin, ymin, xmax, ymax].
No is the total number of detections in this mini-batch. For each
instance, the offsets in the first dimension are called LoD, the offset
number is N + 1, where N is the batch size. The i-th image has
LoD[i + 1] - LoD[i] detected results; if it is 0, the i-th image
has no detected results. If no image has any detected results,
all the elements in LoD are 0, and the output tensor only contains one
value, which is -1.

This layer computes the detection loss for SSD given the location offset
predictions, confidence predictions, prior boxes, ground-truth bounding
boxes and labels, and the type of hard example mining. The returned loss
is a weighted sum of the localization loss (or regression loss) and the
confidence loss (or classification loss), computed by performing the following steps:

Apply hard example mining to get the negative example indices and update
the matched indices.

Assign classification and regression targets

4.1. Encoded bbox according to the prior boxes.

4.2. Assign regression targets.

4.3. Assign classification targets.

Compute the overall objective loss.

5.1 Compute confidence loss.

5.2 Compute localization loss.

5.3 Compute the overall weighted loss.

Parameters:

location (Variable) – The location predictions are a 3D Tensor with
shape [N, Np, 4], N is the batch size, Np is total number of
predictions for each instance. 4 is the number of coordinate values,
the layout is [xmin, ymin, xmax, ymax].

confidence (Variable) – The confidence predictions are a 3D Tensor
with shape [N, Np, C], N and Np are the same as they are in
location, C is the class number.

gt_box (Variable) – The ground-truth bounding boxes (bboxes) are a 2D
LoDTensor with shape [Ng, 4], Ng is the total number of ground-truth
bboxes of mini-batch input.

detect_res – (LoDTensor) A 2-D LoDTensor with shape [M, 6] represents the detections. Each row has 6 values: [label, confidence, xmin, ymin, xmax, ymax], where M is the total number of detection results in this mini-batch. For each instance, the offsets in the first dimension are called LoD; the number of offsets is N + 1. If LoD[i + 1] - LoD[i] == 0, there is no detected data.

label – (LoDTensor) A 2-D LoDTensor represents the labeled ground-truth data. Each row has 6 values: [label, xmin, ymin, xmax, ymax, is_difficult] or 5 values: [label, xmin, ymin, xmax, ymax], where N is the total number of ground-truth data in this mini-batch. For each instance, the offsets in the first dimension are called LoD; the number of offsets is N + 1. If LoD[i + 1] - LoD[i] == 0, there is no ground-truth data.

class_num – (int) The class number

background_label – (int, default: 0) The index of background label, the background label will be ignored. If set to -1, then all categories will be considered.

input_states – If not None, it contains 3 elements:
1. pos_count (Tensor) A tensor with shape [Ncls, 1], storing the input positive example count of each class, where Ncls is the number of classes. This input is used to pass the AccumPosCount generated by the previous mini-batch when cumulative calculation across multiple mini-batches is carried out. When the input(PosCount) is empty, the cumulative calculation is not carried out, and only the results of the current mini-batch are calculated.
2. true_pos (LoDTensor) A 2-D LoDTensor with shape [Ntp, 2], storing the input true positive examples of each class. This input is used to pass the AccumTruePos generated by the previous mini-batch when cumulative calculation across multiple mini-batches is carried out.
3. false_pos (LoDTensor) A 2-D LoDTensor with shape [Nfp, 2], storing the input false positive examples of each class. This input is used to pass the AccumFalsePos generated by the previous mini-batch when cumulative calculation across multiple mini-batches is carried out.

out_states – If not None, it contains 3 elements:
1. accum_pos_count (Tensor) A tensor with shape [Ncls, 1], storing the positive example count of each class. It combines the input(PosCount) and the positive example count computed from input(Detection) and input(Label).
2. accum_true_pos (LoDTensor) A LoDTensor with shape [Ntp’, 2], storing the true positive examples of each class. It combines the input(TruePos) and the true positive examples computed from input(Detection) and input(Label).
3. accum_false_pos (LoDTensor) A LoDTensor with shape [Nfp’, 2], storing the false positive examples of each class. It combines the input(FalsePos) and the false positive examples computed from input(Detection) and input(Label).

Given the Intersection-over-Union (IoU) overlap between anchors and
ground-truth boxes, this layer assigns classification and
regression targets to each anchor; these target labels are used to
train the RPN. The classification target is a binary class label (of being
an object or not). Following the Faster-RCNN paper, positive labels
are assigned to two kinds of anchors: (i) the anchor/anchors with the highest IoU
overlap with a ground-truth box, or (ii) an anchor that has an IoU overlap
higher than rpn_positive_overlap (0.7) with any ground-truth box. Note
that a single ground-truth box may assign positive labels to multiple
anchors. An anchor is labeled negative when its IoU ratio is lower than
rpn_negative_overlap (0.3) for all ground-truth boxes. Anchors that are
neither positive nor negative do not contribute to the training objective.
The regression targets are the encoded ground-truth boxes associated with
the positive anchors.

Parameters:

loc (Variable) – A 3-D Tensor with shape [N, M, 4] represents the
predicted locations of M bounding bboxes. N is the batch size,
and each bounding box has four coordinate values and the layout
is [xmin, ymin, xmax, ymax].

scores (Variable) – A 3-D Tensor with shape [N, M, C] represents the
predicted confidence predictions. N is the batch size, C is the
class number, M is the number of bounding boxes. For each category
there are in total M scores corresponding to the M bounding boxes.

anchor_box (Variable) – A 2-D Tensor with shape [M, 4] holds M boxes,
each box is represented as [xmin, ymin, xmax, ymax],
[xmin, ymin] is the left top coordinate of the anchor box,
if the input is image feature map, they are close to the origin
of the coordinate system. [xmax, ymax] is the right bottom
coordinate of the anchor box.

gt_box (Variable) – The ground-truth bounding boxes (bboxes) are a 2D
LoDTensor with shape [Ng, 4], Ng is the total number of ground-truth
bboxes of mini-batch input.

rpn_positive_overlap (float) – Minimum overlap required between an anchor
and ground-truth box for the (anchor, gt box) pair to be a positive
example.

rpn_negative_overlap (float) – Maximum overlap allowed between an anchor
and ground-truth box for the (anchor, gt box) pair to be a negative
example.

Returns:

A tuple (predicted_scores, predicted_location, target_label,
target_bbox) is returned. predicted_scores and
predicted_location are the predicted results of the RPN, and
target_label and target_bbox are the corresponding ground
truth. predicted_location is a 2D Tensor with shape
[F, 4], and the shape of target_bbox is the same as the shape of
predicted_location, where F is the number of foreground
anchors. predicted_scores is a 2D Tensor with shape
[F + B, 1], and the shape of target_label is the same as the shape
of predicted_scores, where B is the number of background
anchors. F and B depend on the input of this operator.

Generate anchors for the Faster RCNN algorithm.
Each position of the input produces N anchors, where N =
size(anchor_sizes) * size(aspect_ratios). The generated anchors
loop first over aspect_ratios and then over anchor_sizes.

Computes intersection-over-union (IOU) between two box lists.
Box list ‘X’ should be a LoDTensor and ‘Y’ is a common Tensor;
boxes in ‘Y’ are shared by all instances of the batched inputs of X.
Given two boxes A and B, the calculation of IOU is as follows:

$$
IOU(A, B) =
\frac{area(A\cap B)}{area(A)+area(B)-area(A\cap B)}
$$

Parameters:

x – (LoDTensor, default LoDTensor<float>) Box list X is a 2-D LoDTensor with shape [N, 4] holding N boxes, each represented as [xmin, ymin, xmax, ymax]. [xmin, ymin] is the left top coordinate of the box; if the input is an image feature map, they are close to the origin of the coordinate system. [xmax, ymax] is the right bottom coordinate of the box. This tensor can contain LoD information to represent a batch of inputs. One instance of this batch can contain different numbers of entities.

y – (Tensor, default Tensor<float>) Box list Y holds M boxes, each represented as [xmin, ymin, xmax, ymax]; the shape of Y is [M, 4]. [xmin, ymin] is the left top coordinate of the box if the input is an image feature map, and [xmax, ymax] is the right bottom coordinate of the box.

Returns:

(LoDTensor, the lod is same as input X) The output of iou_similarity op, a tensor with shape [N, M] representing pairwise iou scores.
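The IOU formula above can be sketched for a single pair of boxes in [xmin, ymin, xmax, ymax] layout (illustrative only, not the operator implementation):

```python
def iou(a, b):
    """IoU of two boxes: intersection area over union area."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

a = [0, 0, 2, 2]    # area 4
b = [1, 1, 3, 3]    # area 4, intersection area 1
print(iou(a, b))    # 1 / (4 + 4 - 1) = 0.142857...
```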

prior_box – (Tensor, default Tensor<float>) Box list PriorBox is a 2-D Tensor with shape [M, 4] holds M boxes, each box is represented as [xmin, ymin, xmax, ymax], [xmin, ymin] is the left top coordinate of the anchor box, if the input is image feature map, they are close to the origin of the coordinate system. [xmax, ymax] is the right bottom coordinate of the anchor box.

prior_box_var – (Tensor, default Tensor<float>, optional) PriorBoxVar is a 2-D Tensor with shape [M, 4] holding M groups of variance. PriorBoxVar will set all elements to 1 by default. Optional.

target_box – (LoDTensor or Tensor) This input can be a 2-D LoDTensor with shape [N, 4] when code_type is ‘encode_center_size’. This input also can be a 3-D Tensor with shape [N, M, 4] when code_type is ‘decode_center_size’. [N, 4], each box is represented as [xmin, ymin, xmax, ymax], [xmin, ymin] is the left top coordinate of the box if the input is image feature map, they are close to the origin of the coordinate system. [xmax, ymax] is the right bottom coordinate of the box. This tensor can contain LoD information to represent a batch of inputs. One instance of this batch can contain different numbers of entities.

(LoDTensor or Tensor) When code_type is ‘encode_center_size’, the output tensor of box_coder_op has shape [N, M, 4], representing the result of N target boxes encoded with M prior boxes and variances. When code_type is ‘decode_center_size’, N represents the batch size and M represents the number of decoded boxes.

PolygonBoxTransform Operator is used to transform the coordinate shift to the real coordinate.

The input is the final geometry output in detection network.
We use 2*n numbers to denote the coordinate shift from n corner vertices of
the polygon_box to the pixel location. As each distance offset contains two numbers (xi, yi),
the geometry output contains 2*n channels.

This function computes the accuracy using the input and label.
If the correct label occurs in the top k predictions, then correct is incremented by one.
Note: the dtype of accuracy is determined by the input. The input and label dtypes can be different.

Parameters:

input (Variable) – The input of the accuracy layer, which is the predictions of the network.
Carrying LoD information is supported.
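The top-k accuracy described above can be sketched in NumPy; an illustrative sketch over a small batch, not the layer implementation:

```python
import numpy as np

def topk_accuracy(preds, labels, k=1):
    """A sample counts as correct when its label appears among the k
    highest-scoring predictions for that sample."""
    topk = np.argsort(-preds, axis=1)[:, :k]
    correct = sum(int(labels[i] in topk[i]) for i in range(len(labels)))
    return correct / len(labels)

preds = np.array([[0.1, 0.7, 0.2],
                  [0.6, 0.3, 0.1]])
labels = np.array([1, 1])
print(topk_accuracy(preds, labels, k=1))  # 0.5 (second sample's top-1 is class 0)
print(topk_accuracy(preds, labels, k=2))  # 1.0 (class 1 is within top-2 for both)
```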

This implementation computes the AUC according to forward output and label.
It is used very widely in binary classification evaluation.

Note: If input label contains values other than 0 and 1, it will be cast
to bool. Find the relevant definitions here.

There are two types of possible curves:

ROC: Receiver operating characteristic;

PR: Precision Recall

Parameters:

input (Variable) – A floating-point 2D Variable, values are in the range
[0, 1]. Each row is sorted in descending order. This
input should be the output of topk. Typically, this
Variable indicates the probability of each label.

label (Variable) – A 2D int Variable indicating the label of the training
data. The height is batch size and width is always 1.

curve (str) – Curve type, can be ‘ROC’ or ‘PR’. Default ‘ROC’.

num_thresholds (int) – The number of thresholds to use when discretizing
the ROC curve. Default 200.

topk (int) – Only the top k predictions will be used for computing the AUC.

Returns:

A scalar representing the current AUC.

Return type:

Variable

Examples

# network is a binary classification model and label is the ground truth
prediction = network(image, is_infer=True)
auc_out = fluid.layers.auc(input=prediction, label=label)