Moving RMS

Description

The Moving RMS block computes the moving root mean square (RMS) of the input
signal along each channel independently over time. The block uses either the sliding window
method or the exponential weighting method to compute the moving RMS. In the sliding window
method, a window of specified length moves over the data sample by sample, and the block computes
the RMS over the data in the window. In the exponential weighting method, the block squares the
data samples, multiplies them with a set of weighting factors, and sums the weighed data. The
block then computes the RMS by taking the square root of the sum. For more details on these
methods, see Algorithms.

Ports

Input

x — Data input
column vector | row vector | matrix

Data over which the block computes the moving RMS. The block accepts real-valued or
complex-valued multichannel inputs, that is, m-by-n
size inputs, where m ≥ 1 and n ≥ 1. The block also
accepts variable-size inputs. During simulation, you can change the size of each input
channel. However, the number of channels cannot change.

This port is unnamed until you set Method to
Exponential weighting and select the Specify forgetting
factor from input port parameter.

Data Types: single | double
Complex Number Support: Yes

lambda — Forgetting factor
positive real scalar in the range (0,1]

The forgetting factor determines how much weight past data is given. A forgetting factor
of 0.9 gives more weight to the older data than a forgetting factor of 0.1 does. A forgetting
factor of 1.0 indicates infinite memory: all previous samples are given equal
weight.

Dependencies

This port appears when you set Method to Exponential
weighting and select the Specify forgetting factor from input
port parameter.

Data Types: single | double

Output

Port_1 — Moving RMS output
column vector | row vector | matrix

The size of the moving RMS output matches the size of the input. The block uses either
the sliding window method or the exponential weighting method to compute the moving RMS, as
specified by the Method parameter. For more details, see Algorithms.

Parameters

Method — Moving RMS method

Sliding window — A window of length Window
length moves over the input data along each channel. For every sample the
window moves over, the block computes the RMS over the data in the window.

Exponential weighting — The block multiplies the
squares of the samples by a set of weighting factors. The magnitude of the weighting
factors decreases exponentially as the age of the data increases, but the magnitude never
reaches zero. To compute the RMS, the algorithm sums the weighted data and takes a square
root of the sum.

Specify window length — Flag to specify the window length

When you select this check box, the length of the sliding window is equal to the value
you specify in Window length. When you clear this check box, the length
of the sliding window is infinite. In this mode, the block computes the RMS of the current
sample and all the previous samples in the channel.

Specify forgetting factor from input port — Flag to specify the forgetting factor

When you select this check box, the forgetting factor is input through the
lambda port. When you clear this check box, you specify the forgetting
factor on the block dialog through the Forgetting factor
parameter.

Dependencies

This parameter appears only when you set Method to
Exponential weighting.

Forgetting factor — Exponential weighting factor

The forgetting factor determines how much weight past data is given. A forgetting factor
of 0.9 gives more weight to the older data than a forgetting factor of 0.1 does. A forgetting
factor of 1.0 indicates infinite memory: all previous samples are given equal
weight.

Tunable: Yes

Dependencies

This parameter appears when you set Method to
Exponential weighting and clear the Specify forgetting
factor from input port check box.

Simulate using — Type of simulation to run

Code generation

Simulate model using generated C code. The first time you run a simulation,
Simulink® generates C code for the block. The C code is reused for subsequent
simulations, as long as the model does not change. This option requires additional startup
time but provides faster simulation speed than Interpreted
execution.

Interpreted execution

Simulate model using the MATLAB® interpreter. This option shortens startup time but has slower
simulation speed than Code generation.

Block Characteristics

Data Types: double | single
Multidimensional Signals: No
Variable-Size Signals: Yes

Algorithms

Sliding Window Method

In the sliding window method, the output for each input sample is the RMS of the current
sample and the Len – 1 previous samples. Len is
the length of the window. To compute the first Len – 1 outputs, when
the window does not have enough data yet, the algorithm fills the window with zeros. As
an example, to compute the RMS when the second input sample comes in, the algorithm
fills the window with Len – 2 zeros. The data vector,
x, is then the two data samples followed by
Len – 2 zeros.

When you do not specify the window length, the algorithm chooses
an infinite window length. In this mode, the output is the moving
RMS of the current sample and all the previous samples in the channel.

Consider an example of computing the moving RMS of streaming input data using
the sliding window method with a window length of 4. With each input sample that
comes in, the window of length 4 moves along the data.
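The sliding window method described above can be sketched in Python. This is an illustrative helper (the name moving_rms_sliding and the zero-padded startup handling are assumptions for the sketch), not the block's actual implementation:

```python
import math

def moving_rms_sliding(x, win_len):
    """Moving RMS over a sliding window of length win_len.

    Samples before the start of the signal count as zeros, so the
    divisor is always win_len (the zero-padded startup described above).
    """
    out = []
    for n in range(len(x)):
        # Current sample plus up to win_len - 1 previous samples;
        # missing history contributes zeros to the sum of squares.
        window = x[max(0, n - win_len + 1):n + 1]
        out.append(math.sqrt(sum(v * v for v in window) / win_len))
    return out
```

For the input [2, 2, 2, 2] with a window length of 4, the outputs are 1, √2, √3, and 2: the divisor stays at the window length while the window fills with zeros.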

Exponential Weighting Method

In the exponential weighting method, the moving RMS is computed
recursively using these formulas:

w_{N,λ} = λ w_{N−1,λ} + 1

x²_{RMS,N,λ} = (1 − 1/w_{N,λ}) x²_{RMS,N−1,λ} + (1/w_{N,λ}) x²_N

where:

x_{RMS,N,λ} — Moving
RMS at the current sample

x²_N — Square
of the current input data sample

x_{RMS,N−1,λ} — Moving
RMS at the previous sample

λ — Forgetting factor

w_{N,λ} — Weighting
factor applied to the current data sample

(1 − 1/w_{N,λ}) x²_{RMS,N−1,λ} — Effect
of the previous data on the RMS

For the first sample, where N = 1, the algorithm
initializes the weighting factor to w_{1,λ} = 1. For each subsequent
sample, the weighting factor is updated and used to compute the RMS,
as per the recursive equations. As the age of the data increases, the
magnitude of the applied weight decreases exponentially but never
reaches zero. In other words, the recent data has more influence on
the current RMS than the older data.
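The recursion above can be sketched in Python. This is an illustrative helper (the name moving_rms_exp is an assumption for the sketch), not the block's generated code:

```python
import math

def moving_rms_exp(x, lam):
    """Exponentially weighted moving RMS via the recursive update:

        w_N  = lam * w_{N-1} + 1
        ms_N = (1 - 1/w_N) * ms_{N-1} + (1/w_N) * x_N**2

    The output at each step is sqrt(ms_N).
    """
    w = 0.0   # weighting factor; the update gives w = 1 at the first sample
    ms = 0.0  # running weighted mean square
    out = []
    for xn in x:
        w = lam * w + 1.0
        ms = (1.0 - 1.0 / w) * ms + (xn * xn) / w
        out.append(math.sqrt(ms))
    return out
```

With lam = 1.0, w_N grows as N and the update reduces to the cumulative RMS of all samples so far, matching the infinite-memory case.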

The value of the forgetting factor determines the rate of change
of the weighting factors. A forgetting factor of 0.9 gives more weight
to the older data than does a forgetting factor of 0.1. A forgetting
factor of 1.0 indicates infinite memory. All the previous samples
are given an equal weight.
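To see how the forgetting factor controls the decay of the weights, unroll the recursion: after N samples, the sample k steps in the past carries the normalized weight λ^k / w_N, where w_N = 1 + λ + … + λ^(N−1). A small sketch (the helper name effective_weights is an assumption) makes the comparison concrete:

```python
def effective_weights(lam, n):
    """Normalized weight applied to each of n samples, oldest first.

    Unrolling the recursive update shows the sample k steps in the
    past carries weight lam**k / w_n, where w_n = sum of lam**j
    for j = 0 .. n-1, and the weights sum to 1.
    """
    w_n = sum(lam ** j for j in range(n))
    return [lam ** (n - 1 - i) / w_n for i in range(n)]
```

For five samples, λ = 0.9 leaves the oldest sample with roughly 16% of the total weight, λ = 0.1 leaves it with nearly none, and λ = 1.0 weights all five samples equally.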

Here is an example of computing the moving RMS using the exponential
weighting method. The forgetting factor is 0.9.