15.1 Introduction

This chapter describes SVG's declarative filter effects
feature set, which when combined with the 2D power of SVG can
describe much of the common artwork on the Web in such a way
that client-side generation and alteration can be performed
easily. In addition, the ability to apply filter effects to SVG
graphics elements
and container
elements helps to maintain the semantic structure of the
document, instead of resorting to images, which, aside from
generally being fixed-resolution, tend to obscure the original
semantics of the elements they replace. This is especially true
for effects applied to text.

A filter effect consists of a series of graphics operations
that are applied to a given source
graphic to produce a modified graphical result. The
result of the filter effect is rendered to the target device
instead of the original source graphic. The following
illustrates the process:

Each 'filter' element contains a set
of filter primitives as its children.
Each filter primitive performs a single fundamental graphical
operation (e.g., a blur or a lighting effect) on one or more
inputs, producing a graphical result. Because most of the
filter primitives represent some form of image processing, in
most cases the output from a filter primitive is a single RGBA
image.

The original source graphic or the result from a filter
primitive can be used as input into one or more other filter
primitives. A common application is to use the source graphic
multiple times. For example, a simple filter could replace one
graphic by two by adding a black copy of the original source
graphic, offset to create a drop shadow. In effect, there are
now two layers of graphics, both derived from the same original
source graphic.
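A drop shadow of this kind might be sketched as follows (an illustrative sketch, not normative spec text; the filter name is hypothetical):

```xml
<filter id="dropShadow">
  <!-- SourceAlpha yields a black copy of the source graphic;
       feOffset shifts it down and to the right -->
  <feOffset in="SourceAlpha" dx="3" dy="3" result="shadow"/>
  <!-- Layer the original source graphic over the shifted black copy -->
  <feMerge>
    <feMergeNode in="shadow"/>
    <feMergeNode in="SourceGraphic"/>
  </feMerge>
</filter>
```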

When applied to
container elements such as 'g', the
'filter' property applies to the contents of the
group as a whole. The group's children do not render to the
screen directly; instead, the graphics commands necessary to
render the children are stored temporarily. Typically, the
graphics commands are executed as part of the processing of the
referenced 'filter' element via use of the
keywords SourceGraphic or
SourceAlpha. Filter effects can be applied to container elements
with no content (e.g., an empty 'g' element), in which
case the SourceGraphic or
SourceAlpha consists of a transparent black rectangle
that is the size of the filter effects region.

Sometimes filter primitives result in undefined pixels. For
example, filter primitive
'feOffset' can shift an image down and to the right,
leaving undefined pixels at the top and left. In these cases,
the undefined pixels are set to transparent black.

The following pictures show the intermediate image results
from each of the six filter primitives:

Source graphic

After filter primitive 1

After filter primitive 2

After filter primitive 3

After filter primitive 4

After filter primitive 5

After filter primitive 6

Filter primitive 'feGaussianBlur' takes input
SourceAlpha, which is the alpha channel of the
source graphic. The result is stored in a temporary buffer
named "blur". Note that "blur" is used as input to both
filter primitives 2 and 3.

Filter primitive 'feOffset' takes buffer
"blur", shifts the result in a positive direction in both x
and y, and creates a new buffer named "offsetBlur". The
effect is that of a drop shadow.

Filter primitive 'feSpecularLighting'
uses buffer "blur" as a model of a surface elevation and
generates a lighting effect from a single point source. The
result is stored in buffer "specOut".

Filter primitive 'feComposite' masks out the
result of filter primitive 3 by the original source graphic's
alpha channel so that the intermediate result is no bigger
than the original source graphic.

Filter primitive 'feComposite' composites the
result of the specular lighting with the original source
graphic.

Filter primitive 'feMerge' composites two
layers together. The lower layer consists of the drop shadow
result from filter primitive 2. The upper layer consists of
the specular lighting result from filter primitive 5.
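The six primitives described above might be assembled along the following lines (a sketch; the filter name and numeric parameter values are illustrative):

```xml
<filter id="MyFilter" filterUnits="userSpaceOnUse"
        x="0" y="0" width="200" height="120">
  <!-- 1: blur the alpha channel of the source -->
  <feGaussianBlur in="SourceAlpha" stdDeviation="4" result="blur"/>
  <!-- 2: shift the blur to form the drop shadow -->
  <feOffset in="blur" dx="4" dy="4" result="offsetBlur"/>
  <!-- 3: use the blur as a surface elevation model, lit by a point source -->
  <feSpecularLighting in="blur" surfaceScale="5" specularConstant=".75"
                      specularExponent="20" lighting-color="#bbbbbb"
                      result="specOut">
    <fePointLight x="-5000" y="-10000" z="20000"/>
  </feSpecularLighting>
  <!-- 4: mask the lighting result by the source alpha channel -->
  <feComposite in="specOut" in2="SourceAlpha" operator="in" result="specOut"/>
  <!-- 5: composite the specular lighting with the original source graphic -->
  <feComposite in="SourceGraphic" in2="specOut" operator="arithmetic"
               k1="0" k2="1" k3="1" k4="0" result="litPaint"/>
  <!-- 6: merge the drop shadow (lower) and the lit paint (upper) -->
  <feMerge>
    <feMergeNode in="offsetBlur"/>
    <feMergeNode in="litPaint"/>
  </feMerge>
</filter>
```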

primitiveUnits = { userSpaceOnUse | objectBoundingBox }
Specifies the coordinate system for the various length
values within the filter primitives and for the attributes
that define the filter
primitive subregion.
If primitiveUnits="userSpaceOnUse", any
length values within the filter definitions represent
values in the current user coordinate system in place at
the time when the 'filter' element is
referenced (i.e., the user coordinate system for the
element referencing the 'filter' element via a
'filter' property).
If primitiveUnits="objectBoundingBox",
then any length values within the filter definitions
represent fractions or percentages of the bounding box on
the referencing element (see Object bounding box
units).
If attribute primitiveUnits
is not specified, then the effect is as if a value of userSpaceOnUse were
specified.
Animatable: yes.

A URI
reference to another
'filter' element within the current SVG document
fragment. Any attributes which are defined on the
referenced 'filter'
element which are not defined on this element are inherited
by this element. If this element has no defined filter
nodes, and the referenced element has defined filter nodes
(possibly due to its own
href attribute), then this element inherits the
filter nodes defined from the referenced 'filter' element. Inheritance
can be indirect to an arbitrary level; thus, if the
referenced 'filter'
element inherits attributes or its filter node
specification due to its own
href attribute, then the current element can inherit
those attributes or filter node specifications.
Animatable: yes.
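This inheritance might be used as follows (a sketch; the ids are hypothetical, and the reference attribute is written here as xlink:href):

```xml
<filter id="baseBlur">
  <feGaussianBlur in="SourceGraphic" stdDeviation="3"/>
</filter>
<!-- Inherits the feGaussianBlur filter node from #baseBlur,
     but overrides the filter region attributes -->
<filter id="wideBlur" x="-20%" y="-20%" width="140%" height="140%"
        xlink:href="#baseBlur"/>
```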

Properties
inherit into the 'filter'
element from its ancestors; properties do not inherit
from the element referencing the
'filter' element.

'filter' elements are
never rendered directly; their only usage is as something that
can be referenced using the
'filter' property. The
'display' property does not apply to the 'filter' element; thus, 'filter' elements are not directly
rendered even if the 'display' property is set to
a value other than none, and
'filter' elements are
available for referencing even when the
'display' property on the 'filter' element or any of its
ancestors is set to none.

A URI
reference to a 'filter' element which
defines the filter effects that shall be applied to this
element.

none

Do not apply any filter effects to this element.
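For example, an element might reference a filter as follows (a sketch; the id is hypothetical):

```xml
<rect x="10" y="10" width="100" height="50"
      fill="crimson" filter="url(#coolEffect)"/>
```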

15.5 Filter effects region

A 'filter' element can define a
region on the canvas to which a given filter effect applies and
can provide a resolution for any intermediate continuous tone
images used to process any raster-based filter primitives. The
'filter' element has the
following attributes which work together to define the filter
effects region:

filterUnits = { userSpaceOnUse | objectBoundingBox }
Defines the coordinate system for attributes x, y, width, height.
If filterUnits="userSpaceOnUse", x, y, width, height represent
values in the current user coordinate system in place at the
time when the 'filter' element is
referenced (i.e., the user coordinate system for the element
referencing the 'filter' element via a
'filter' property).
If filterUnits="objectBoundingBox", then x, y, width, height represent
fractions or percentages of the bounding box on the
referencing element (see Object bounding box
units).
If attribute filterUnits is
not specified, then the effect is as if a value of objectBoundingBox were
specified.
Animatable: yes.

x, y,
width, height, which define a rectangular region on
the canvas to which this filter applies.
The amount of memory and processing time required to apply
the filter are related to the size of this rectangle and the
filterRes attribute of the
filter.
The coordinate system for these attributes depends on the
value for attribute filterUnits.
Negative values for width or
height are an error (see Error processing).
Zero values disable rendering of the element which referenced
the filter.
The bounds of this rectangle act as a hard clipping region
for each
filter primitive included with a given
'filter' element; thus, if the effect of a given
filter primitive would extend beyond the bounds of the
rectangle (this sometimes happens when using a
'feGaussianBlur' filter primitive with a very
large stdDeviation), parts of the
effect will get clipped.
If x or y is not
specified, the effect is as if a value of "-10%" were
specified.
If width or height is not
specified, the effect is as if a value of "120%" were
specified.
Animatable: yes.
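For instance, a filter region using object bounding box units might be declared as follows (a sketch; the x, y, width and height values shown are the defaults):

```xml
<filter id="padded" filterUnits="objectBoundingBox"
        x="-10%" y="-10%" width="120%" height="120%">
  <!-- 10% padding on each side leaves room for effects such as
       blurs that extend beyond the element's tight bounding box -->
  <feGaussianBlur in="SourceGraphic" stdDeviation="4"/>
</filter>
```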

filterRes (which has the form
x-pixels [y-pixels]) indicates the width and height of
the intermediate images in pixels. If not provided, then a
reasonable default resolution appropriate for the target
device will be used. (For displays, an appropriate display
resolution, preferably the current display's pixel
resolution, is the default. For printing, an appropriate
common printer resolution, such as 400dpi, is the
default.)
Care should be taken when assigning a non-default value to
this attribute. Too small of a value may result in unwanted
pixelation in the result. Too large of a value may result in
slow processing and large memory usage.
Negative values are an error (see Error processing).
Zero values disable rendering of the element which referenced
the filter.
Animatable: yes.

Note that both of the two possible values for filterUnits (i.e., objectBoundingBox and userSpaceOnUse) result in a filter
region whose coordinate system has its X-axis and Y-axis each
parallel to the X-axis and Y-axis, respectively, of the user
coordinate system for the element to which the filter will be
applied.

Sometimes implementers can achieve faster performance when
the filter region can be mapped directly to device pixels;
thus, for best performance on display devices, it is suggested
that authors define their region such that the SVG user agent can
align the filter region pixel-for-pixel with the background. In
particular, for best filter effects performance, avoid rotating
or skewing the user coordinate system. Explicit values for
attribute filterRes can either
help or harm performance. If
filterRes is smaller than the automatic (i.e., default)
filter resolution, then the filter effect might have faster
performance (usually at the expense of quality). If filterRes is larger than the automatic
(i.e., default) filter resolution, then filter effects
performance will usually be slower.

It is often necessary to provide padding space because the
filter effect might impact bits slightly outside the
tight-fitting bounding box on a given object. For these
purposes, it is possible to provide negative percentage values
for x, y and percentages values greater than
100% for width, height. This, for example, is
why the defaults for the filter effects region are x="-10%"
y="-10%" width="120%" height="120%".

15.6 Accessing the background image

Two possible pseudo input images for filter effects are BackgroundImage and BackgroundAlpha, which each
represent an image snapshot of the canvas under the filter
region at the time that the
'filter' element is invoked. BackgroundImage represents both the
color values and alpha channel of the canvas (i.e., RGBA pixel
values), whereas BackgroundAlpha
represents only the alpha channel.

Implementations of SVG user agents often will need to
maintain supplemental background image buffers in order to
support the BackgroundImage and
BackgroundAlpha pseudo input
images. Sometimes, the background image buffers will contain an
in-memory copy of the accumulated painting operations on the
current canvas.

Because in-memory image buffers can take up significant
system resources, SVG content must explicitly indicate to the
SVG user agent that the document needs access to the background
image before BackgroundImage and
BackgroundAlpha pseudo input
images can be used. The property which enables access to the
background image is 'enable-background':

'enable-background' is only
applicable to
container elements and specifies how the SVG user agents
manages the accumulation of the background image.

A value of new indicates two things:

It enables the ability of children of the current container element
to access the background image.

It indicates that a new (i.e., initially transparent
black) background image canvas is established and that (in
effect) all children of the current container element
shall be rendered into the new background image canvas in
addition to being rendered onto the target device.

The meaning of enable-background: accumulate
(the initial/default value) depends on context:

If an ancestor container element has a property value of
'enable-background:new', then all graphics elements within
the current container element are rendered both onto the
parent container element's background image canvas and onto
the target device.

Otherwise, there is no current background image canvas,
so it is only necessary to render graphics elements
onto the target device. (No need to render to the background
image canvas.)

If a filter effect specifies either the BackgroundImage or the BackgroundAlpha pseudo input images
and no ancestor
container element has a property value of
'enable-background:new', then the background image request is
technically in error. Processing will proceed without
interruption (i.e., no error message) and a transparent black
image shall be provided in response to the request.

The optional
<x>,<y>,<width>,<height>
parameters on the new value indicate the
subregion of the
container element's user
space where access to the background image is allowed to
happen. These parameters enable the SVG user agent potentially
to allocate smaller temporary image buffers than the default
values, which might require the SVG user agent to allocate
buffers as large as the current viewport. Thus, the values
<x>,<y>,<width>,<height> act as a
clipping rectangle on the background image canvas. Negative
values for <width> or <height> are an error (see Error processing). If
more than zero but less than four of the values
<x>,<y>,<width> and <height> are
specified or if zero values are specified for <width> or
<height>, BackgroundImage
and BackgroundAlpha are
processed as if background image processing were not
enabled.

Assume you have an element E in the document and that E has
a series of ancestors A1 (its immediate parent),
A2, etc. (Note: A0 is E.) Each ancestor
Ai will have a corresponding temporary background
image offscreen buffer BUFi. The contents of the
background image available to a
'filter' referenced by E is defined as follows:

Find the element Ai with the smallest
subscript i (including A0=E) for which the 'enable-background' property has
the value new. (Note: if
there is no such ancestor element, then there is no
background image available to E, in which case a transparent
black image will be used as E's background image.)

For each Ai (from i=n to 1), initialize
BUFi to transparent black. Render all children of
Ai up to but not including Ai-1 into
BUFi. The children are painted, then filtered,
clipped, masked and composited using the various painting,
filtering, clipping, masking and object opacity settings on
the given child. Any filter effects, masking and group
opacity that might be set on Ai do not
apply when rendering the children of Ai into
BUFi.
(Note that for the case of A0=E, the graphical
contents of E are not rendered into BUF1 and thus
are not part of the background image available to E. Instead,
the graphical contents of E are available via the SourceGraphic and SourceAlpha pseudo input
images.)

Then, for each Ai (from i=1 to n-1), composite
BUFi into BUFi+1.

The accumulated result (i.e., BUFn) represents
the background image available to E.

The first set is the reference graphic. The reference
graphic consists of a red rectangle followed by a 50%
transparent 'g' element. Inside the
'g' is a green circle that partially overlaps the
rectangle and a blue triangle that partially overlaps the
circle. The three objects are then outlined by a rectangle
stroked with a thin blue line. No filters are applied to the
reference graphic.

The second set enables background image processing and
adds an empty 'g' element which invokes the
ShiftBGAndBlur filter. This filter takes the current
accumulated background image (i.e., the entire reference
graphic) as input, shifts its offscreen down, blurs it, and
then writes the result to the canvas. Note that the offscreen
for the filter is initialized to transparent black, which
allows the already rendered rectangle, circle and triangle to
show through after the filter renders its own result to the
canvas.
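A filter of this kind, and its invocation from an empty 'g', might look roughly like this (a sketch; the region coordinates are illustrative):

```xml
<g enable-background="new">
  <!-- ... rectangle, circle and triangle rendered here ... -->
  <filter id="ShiftBGAndBlur" filterUnits="userSpaceOnUse"
          x="0" y="0" width="1100" height="1100">
    <!-- Shift the accumulated background image down, then blur it -->
    <feOffset in="BackgroundImage" dx="0" dy="125"/>
    <feGaussianBlur stdDeviation="8"/>
  </filter>
  <!-- An empty 'g' whose filter draws the shifted, blurred background -->
  <g filter="url(#ShiftBGAndBlur)"/>
</g>
```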

The third set enables background image processing and
instead invokes the ShiftBGAndBlur filter on the inner
'g' element. The accumulated background at the
time the filter is applied contains only the red rectangle.
Because the children of the inner
'g' (i.e., the circle and triangle) are not part
of the inner 'g' element's background and
because ShiftBGAndBlur ignores SourceGraphic, the children of
the inner 'g' do not appear in the
result.

The fourth set enables background image processing and
invokes the ShiftBGAndBlur on the
'polygon' element that draws the triangle. The
accumulated background at the time the filter is applied
contains the red rectangle plus the green circle ignoring the
effect of the 'opacity' property on the inner
'g' element. (Note that the blurred green circle
at the bottom does not let the red rectangle show through on
its left side. This is due to ignoring the effect of the
'opacity' property.) Because the triangle itself
is not part of the accumulated background and because
ShiftBGAndBlur ignores SourceGraphic, the triangle does not
appear in the result.

The fifth set is the same as the fourth except that
filter ShiftBGAndBlur_WithSourceGraphic is invoked instead of
ShiftBGAndBlur. ShiftBGAndBlur_WithSourceGraphic performs the
same effect as ShiftBGAndBlur, but then renders the
SourceGraphic on top of the shifted, blurred background
image. In this case, SourceGraphic is the blue triangle;
thus, the result is the same as in the fourth case except
that the blue triangle now appears.

15.7 Filter primitives overview

15.7.1 Overview

This section describes the various filter primitives that can
be assembled to achieve a particular filter effect.

Unless otherwise stated, all image filters operate on
premultiplied RGBA samples. Filters which work more naturally
on non-premultiplied data (feColorMatrix and
feComponentTransfer) will temporarily undo and redo
premultiplication as specified. All raster effect filtering
operations take 1 to N input RGBA images, additional attributes
as parameters, and produce a single output RGBA image.

The RGBA result from each filter primitive will be clamped
into the allowable ranges for colors and opacity values. Thus,
for example, the result from a given filter primitive will have
any negative color values or opacity values adjusted up to
color/opacity of zero.

15.7.2 Common attributes

The width of the subregion which restricts calculation
and rendering of the given filter primitive. See filter
primitive subregion.
A negative value is an error (see Error processing).
A value of zero disables the effect of the given filter
primitive (i.e., the result is a transparent black
image).
Animatable: yes.

The height of the subregion which restricts calculation
and rendering of the given filter primitive. See filter
primitive subregion.
A negative value is an error (see Error processing).
A value of zero disables the effect of the given filter
primitive (i.e., the result is a transparent black
image).
Animatable: yes.

result =
"<filter-primitive-reference>"

Assigned name for this filter primitive. If supplied,
then graphics that result from processing this filter
primitive can be referenced by an in attribute on a subsequent
filter primitive within the same 'filter' element. If no
value is provided, the output will only be available for
re-use as the implicit input into the next filter primitive
if that filter primitive provides no value for its in attribute.
Note that a <filter-primitive-reference> is
not an XML ID; instead, a
<filter-primitive-reference> is only meaningful
within a given 'filter' element and thus
has only local scope. It is legal for the same
<filter-primitive-reference> to appear multiple
times within the same 'filter' element.
When referenced, the
<filter-primitive-reference> will use the
closest preceding filter primitive with the given
result.
Animatable: yes.

Identifies input for the given filter primitive. The
value can be either one of six keywords or can be a
string which matches a previous result attribute value
within the same 'filter' element.
If no value is provided and this is the first filter
primitive, then this filter primitive will use SourceGraphic as its input.
If no value is provided and this is a subsequent filter
primitive, then this filter primitive will use the result
from the previous filter primitive as its input.

If the value for result
appears multiple times within a given 'filter' element, then a
reference to that result will use the closest preceding
filter primitive with the given value for attribute result. Forward references to
results are an
error.
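To illustrate the result/in chaining rules (a sketch; the filter and buffer names are hypothetical):

```xml
<filter id="chain">
  <feGaussianBlur in="SourceGraphic" stdDeviation="2" result="blur"/>
  <!-- No 'in' attribute: implicitly takes the previous result ("blur") -->
  <feColorMatrix type="saturate" values="0.2"/>
  <!-- A named result can also be referenced explicitly -->
  <feOffset in="blur" dx="5" dy="5"/>
</filter>
```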

This keyword represents the graphics
elements that were the original input into the 'filter' element. For
raster effects filter primitives, the graphics
elements will be rasterized into an initially clear
RGBA raster in image space. Pixels left untouched by
the original graphic will be left clear. The image is
specified to be rendered in linear RGBA pixels. The
alpha channel of this image captures any anti-aliasing
specified by SVG. (Since the raster is linear, the
alpha channel of this image will represent the exact
percent coverage of each pixel.)

This keyword represents the graphics
elements that were the original input into the 'filter' element. SourceAlpha has all of the
same rules as SourceGraphic
except that only the alpha channel is used. The input
image is an RGBA image consisting of implicitly black
color values for the RGB channels, but whose alpha
channel is the same as SourceGraphic. If this
option is used, then some implementations might need to
rasterize the
graphics elements in order to extract the alpha
channel.

This keyword represents the value of the 'fill' property on the
target element for the filter effect. The FillPaint
image has conceptually infinite extent. Frequently this
image is opaque everywhere, but it might not be if the
"paint" itself has alpha, as in the case of a gradient
or pattern which itself includes transparent or
semi-transparent parts.

This keyword represents the value of the 'stroke' property on the
target element for the filter effect. The StrokePaint
image has conceptually infinite extent. Frequently this
image is opaque everywhere, but it might not be if the
"paint" itself has alpha, as in the case of a gradient
or pattern which itself includes transparent or
semi-transparent parts.

15.7.3 Filter primitive subregion

All filter primitives have attributes x,
y, width and
height which identify a subregion which restricts
calculation and rendering of the given filter primitive. These
attributes are defined according to the same rules as other
filter primitives' coordinate and length attributes and thus
represent values in the coordinate system established by
attribute primitiveUnits on the
'filter' element.

x, y,
width and height default to the union
(i.e., tightest fitting bounding box) of the subregions defined
for all referenced nodes. If there are no referenced nodes
(e.g., for 'feImage' or 'feTurbulence'), or one or more
of the referenced nodes is a standard input (one of
SourceGraphic, SourceAlpha,
BackgroundImage,
BackgroundAlpha, FillPaint or
StrokePaint), or for
'feTile' (which is special because its principal
function is to replicate the referenced node in X and Y and
thereby produce a usually larger result), the default subregion
is 0%,0%,100%,100%, where percentages are relative to the
dimensions of the filter region.

x, y,
width and height act as a hard
clipping rectangle.

All intermediate offscreens are defined to not exceed the
intersection of x, y,
width and height with the filter region. The
filter region and any of the x,
y, width and height
subregions are to be set up such that all offscreens are made
big enough to accommodate any pixels which even partly
intersect with either the filter region or the x,y,width,height
subregions.

'feTile' references a previous
filter primitive and then stitches the tiles together based on
the x, y,
width and height values of the
referenced filter primitive in order to fill its own filter primitive
subregion.
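A sketch of tiling via a filter primitive subregion (parameter values are illustrative):

```xml
<filter id="tileIt" filterUnits="userSpaceOnUse"
        x="0" y="0" width="200" height="200">
  <!-- Restrict the first primitive to a 25x25 subregion... -->
  <feFlood x="0" y="0" width="25" height="25"
           flood-color="green" flood-opacity="0.5" result="cell"/>
  <!-- ...then replicate that subregion across the filter region -->
  <feTile in="cell"/>
</filter>
```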

X location for the light source in the coordinate
system established by attribute primitiveUnits on the 'filter' element.
If the attribute is not specified, then the effect is as
if a value of 0 were
specified.
Animatable: yes.

Y location for the light source in the coordinate
system established by attribute primitiveUnits on the 'filter' element.
If the attribute is not specified, then the effect is as
if a value of 0 were
specified.
Animatable: yes.

Z location for the light source in the coordinate
system established by attribute primitiveUnits on the 'filter' element, assuming
that, in the
initial coordinate system, the positive Z-axis comes
out towards the person viewing the content and assuming
that one unit along the Z-axis equals one unit in X or
Y.
If the attribute is not specified, then the effect is as
if a value of 0 were
specified.
Animatable: yes.

X location for the light source in the coordinate
system established by attribute primitiveUnits on the 'filter' element.
If the attribute is not specified, then the effect is as
if a value of 0 were
specified.
Animatable: yes.

Y location for the light source in the coordinate
system established by attribute primitiveUnits on the 'filter' element.
If the attribute is not specified, then the effect is as
if a value of 0 were
specified.
Animatable: yes.

Z location for the light source in the coordinate
system established by attribute primitiveUnits on the 'filter' element, assuming
that, in the
initial coordinate system, the positive Z-axis comes
out towards the person viewing the content and assuming
that one unit along the Z-axis equals one unit in X or
Y.
If the attribute is not specified, then the effect is as
if a value of 0 were
specified.
Animatable: yes.

X location in the coordinate system established by
attribute primitiveUnits on the 'filter' element of the
point at which the light source is pointing.
If the attribute is not specified, then the effect is as
if a value of 0 were
specified.
Animatable: yes.

Y location in the coordinate system established by
attribute primitiveUnits on the 'filter' element of the
point at which the light source is pointing.
If the attribute is not specified, then the effect is as
if a value of 0 were
specified.
Animatable: yes.

Z location of the point at which the light source is
pointing, assuming that, in the initial
coordinate system, the positive Z-axis comes out
towards the person viewing the content and assuming that
one unit along the Z-axis equals one unit in X or Y.
If the attribute is not specified, then the effect is as
if a value of 0 were
specified.
Animatable: yes.

A limiting cone which restricts the region where the
light is projected. No light is projected outside the cone.
limitingConeAngle represents
the angle between the spot light axis (i.e., the axis
between the light source and the point at which it is
pointing) and the spot light cone. User agents should
apply a smoothing technique such as anti-aliasing at the
boundary of the cone.
If no value is specified, then no limiting cone will be
applied.
Animatable: yes.
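For example, a specular lighting primitive lit by a spot light might be sketched as follows (attribute values are illustrative):

```xml
<feSpecularLighting surfaceScale="5" specularConstant="1"
                    specularExponent="10" lighting-color="white"
                    result="spotlit">
  <!-- Light at (50,25,80) aimed at (100,100,0),
       restricted to a 30 degree cone -->
  <feSpotLight x="50" y="25" z="80"
               pointsAtX="100" pointsAtY="100" pointsAtZ="0"
               limitingConeAngle="30"/>
</feSpecularLighting>
```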

15.9 Filter primitive 'feBlend'

This filter composites two objects together using commonly
used imaging software blending modes. It performs a pixel-wise
combination of two input images.

in2 = "(see in attribute)"

The second input image to the blending operation. This
attribute can take on the same values as the in attribute.
Animatable: yes.

For all feBlend modes, the result opacity is computed as
follows:

qr = 1 - (1-qa)*(1-qb)

For the compositing formulas below, the following
definitions apply:

cr = Result color (RGB) - premultiplied
qr = Result opacity
qa = Opacity value at a given pixel for image A
qb = Opacity value at a given pixel for image B
ca = Color (RGB) at a given pixel for image A - premultiplied
cb = Color (RGB) at a given pixel for image B - premultiplied

The following
formulas define the available image blending modes:

normal:    cr = (1 - qa)*cb + ca
multiply:  cr = (1 - qa)*cb + (1 - qb)*ca + ca*cb
screen:    cr = cb + ca - ca*cb
darken:    cr = Min[(1 - qa)*cb + ca, (1 - qb)*ca + cb]
lighten:   cr = Max[(1 - qa)*cb + ca, (1 - qb)*ca + cb]
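An illustrative blend primitive (a sketch):

```xml
<!-- Blend the source graphic into the accumulated background
     using the multiply mode -->
<feBlend mode="multiply" in="SourceGraphic" in2="BackgroundImage"/>
```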

15.10 Filter primitive 'feColorMatrix'

This filter applies a matrix transformation
on the RGBA color and alpha values of every pixel on the
input graphics to produce a result with a new set of RGBA color
and alpha values.

The calculations are performed on non-premultiplied color
values. If the input graphics consists of premultiplied color
values, those values are automatically converted into
non-premultiplied color values for this operation.

These matrices often perform an identity mapping in the
alpha channel. If that is the case, an implementation can avoid
the costly undoing and redoing of the premultiplication for all
pixels with A = 1.

Indicates the type of matrix operation. The keyword
matrix indicates that a full
5x4 matrix of values will be provided. The other keywords
represent convenience shortcuts to allow commonly used
color operations to be performed without specifying a
complete matrix.
Animatable: yes.

If the attribute is not specified, then the default
behavior depends on the value of attribute type. If type="matrix", then this
attribute defaults to the identity matrix. If type="saturate", then this
attribute defaults to the value
1, which results in the identity matrix. If type="hueRotate", then this
attribute defaults to the value
0, which results in the identity matrix.
Animatable: yes.
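Sketches of the matrix operation types (values are illustrative):

```xml
<!-- Fully desaturate (grayscale) -->
<feColorMatrix in="SourceGraphic" type="saturate" values="0"/>
<!-- Rotate hues by 90 degrees -->
<feColorMatrix in="SourceGraphic" type="hueRotate" values="90"/>
<!-- Full 5x4 matrix: identity mapping written out explicitly -->
<feColorMatrix in="SourceGraphic" type="matrix"
               values="1 0 0 0 0
                       0 1 0 0 0
                       0 0 1 0 0
                       0 0 0 1 0"/>
```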

Example feColorMatrix shows
examples of the four types of feColorMatrix operations.

15.11 Filter primitive
'feComponentTransfer'

This filter primitive performs component-wise remapping of
data as follows:

R' = feFuncR( R )
G' = feFuncG( G )
B' = feFuncB( B )
A' = feFuncA( A )

for every pixel. It allows operations like brightness
adjustment, contrast adjustment, color balance or
thresholding.

The calculations are performed on non-premultiplied color
values. If the input graphics consists of premultiplied color
values, those values are automatically converted into
non-premultiplied color values for this operation. (Note that
the undoing and redoing of the premultiplication can be avoided
if feFuncA is the
identity transform and all alpha values on the source graphic
are set to 1.)

Indicates the type of component transfer function. The
type of function determines the applicability of the
other attributes.

For identity:

C' = C

For table, the
function is defined by linear interpolation into a
lookup table by attribute tableValues, which
provides a list of n+1 values (i.e.,
v0 to vn) in order to identify
n interpolation ranges. Interpolations use
the following formula.

For a value C pick a k
such that:

k/n <= C < (k+1)/n

The result C' is given by:

C' = vk + (C - k/n)*n * (vk+1 - vk)
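As a worked example (hypothetical values): with tableValues="0 1 0.5" there are two interpolation ranges; C = 0.3 falls in the first range (k = 0), so:

```
C' = v0 + (0.3 - 0/2)*2 * (v1 - v0)
   = 0 + 0.6 * (1 - 0)
   = 0.6
```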

For discrete, the
function is defined by the step function defined by
attribute tableValues, which
provides a list of n values (i.e.,
v0 to vn-1) in order to
identify a step function consisting of n
steps. The step function is defined by the following
formula:

For a value C, pick a k such that:

k/n <= C < (k+1)/n

The result C' is given by:

C' = vk

tableValues = "(list of <number>s)"

When type="table", the
list of
<number>s v0,v1,...vn, separated by
white space and/or a comma, defines the lookup table.
An empty list results in an identity transfer
function.
If the attribute is not specified, then the effect is as
if an empty list were provided.
Animatable: yes.
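For instance, a component transfer sketch (the table values are illustrative):

```xml
<feComponentTransfer in="SourceGraphic">
  <!-- Invert the red channel via a two-entry lookup table -->
  <feFuncR type="table" tableValues="1 0"/>
  <!-- Steepen green contrast around the midpoint -->
  <feFuncG type="table" tableValues="0 0.2 0.8 1"/>
  <!-- Leave blue and alpha untouched -->
  <feFuncB type="identity"/>
  <feFuncA type="identity"/>
</feComponentTransfer>
```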

15.12 Filter primitive
'feComposite'

This filter performs the combination of the two input images
pixel-wise in image space using one of the Porter-Duff [PORTERDUFF] compositing
operations: over, in, atop, out, xor. Additionally, a
component-wise arithmetic operation (with the result
clamped between [0..1]) can be applied.

The arithmetic operation is useful for combining
the output from the 'feDiffuseLighting' and
'feSpecularLighting' filters with texture data. It
is also useful for implementing dissolve. If the
arithmetic operation is chosen, each result pixel is
computed using the following formula:

result = k1*i1*i2 + k2*i1 + k3*i2 + k4

For this filter primitive, the extent of the resulting image
might grow as described in the section that describes the filter primitive
subregion.
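For example, with operator="arithmetic" the k values select the combination; setting k2 and k3 to 1 (and k1, k4 to zero) simply adds the two inputs (a sketch; the input names echo the drop-shadow example earlier in this chapter):

```xml
<!-- result = k1*i1*i2 + k2*i1 + k3*i2 + k4 ; here: i1 + i2 -->
<feComposite in="specOut" in2="SourceGraphic" operator="arithmetic"
             k1="0" k2="1" k3="1" k4="0"/>
```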

operator = "over | in | out | atop | xor | arithmetic"

The compositing operation that is to be performed. All
of the operator types except
arithmetic match the
corresponding operation as described in [PORTERDUFF]. The arithmetic operator is described
above.
Animatable: yes.

in2 = "(see in attribute)"

The second input image to the compositing operation.
This attribute can take on the same values as the in attribute.
Animatable: yes.

Example feComposite shows
examples of the six types of feComposite operations. It also
shows two different techniques for using the BackgroundImage as part of the
compositing operation.

The first two rows render bluish triangles into the
background. A filter is applied which composites reddish
triangles into the bluish triangles using one of the
compositing operations. The result from compositing is drawn
onto an opaque white temporary surface, and then that result is
written to the canvas. (The opaque white temporary surface
obliterates the original bluish triangle.)

The last two rows apply the same compositing operations of
reddish triangles into bluish triangles. However, the
compositing result is directly blended into the canvas (the
opaque white temporary surface technique is not used). In some
cases, the results are different than when a temporary opaque
white surface is used. The original bluish triangle from the
background shines through wherever the compositing operation
results in completely transparent pixel. In other cases, the
result from compositing is blended into the bluish triangle,
resulting in a different final color value.

15.13 Filter primitive
'feConvolveMatrix'

feConvolveMatrix applies a matrix convolution filter effect.
A convolution combines pixels in the input image with
neighboring pixels to produce a resulting image. A wide variety
of imaging operations can be achieved through convolutions,
including blurring, edge detection, sharpening, embossing and
beveling.

A matrix convolution is based on an n-by-m matrix (the
convolution kernel) which describes how a given pixel value in
the input image is combined with its neighboring pixel values
to produce a resulting pixel value. Each result pixel is
determined by applying the kernel matrix to the corresponding
source pixel and its neighboring pixels. The basic convolution
formula which is applied to each color value for a given pixel
is:

where "orderX" and "orderY" represent the X and Y values for
the order attribute, "targetX"
represents the value of the targetX attribute, "targetY"
represents the value of the targetY attribute,
"kernelMatrix" represents the value of the kernelMatrix attribute,
"divisor" represents the value of the divisor attribute, and
"bias" represents the value of the bias attribute.

Note in the above formulas that the values in the kernel
matrix are applied such that the kernel matrix is rotated 180
degrees relative to the source and destination images in order
to match convolution theory as described in many computer
graphics textbooks.
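
The kernel application, including the 180-degree rotation noted above, can be sketched as follows (a non-normative Python sketch; duplicate-style edge handling is assumed here for brevity):

```python
def convolve_pixel(src, x, y, kernel, order_x, order_y,
                   target_x, target_y, divisor, bias):
    """Sketch of the basic feConvolveMatrix formula for one channel of
    one pixel. `src` is a 2D list indexed [y][x]; `kernel` is the
    kernelMatrix in row-major order."""
    total = 0.0
    for i in range(order_y):
        for j in range(order_x):
            sx = x - target_x + j
            sy = y - target_y + i
            # duplicate-style edge extension (an assumption of this sketch)
            sx = min(max(sx, 0), len(src[0]) - 1)
            sy = min(max(sy, 0), len(src) - 1)
            # 180-degree rotation: kernelMatrix(orderX-j-1, orderY-i-1)
            k = kernel[(order_y - i - 1) * order_x + (order_x - j - 1)]
            total += src[sy][sx] * k
    return total / divisor + bias
```

A kernel of all ones with divisor 9 yields a 3x3 box average, for example.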

To illustrate, suppose you have a input image which is 5
pixels by 5 pixels, whose color values for one of the color
channels are as follows:

Let's focus on the color value at the second row and second
column of the image (source pixel value is 120). Assuming the
simplest case (where the input image's pixel grid aligns
perfectly with the kernel's pixel grid) and assuming default
values for attributes divisor,
targetX and targetY, then resulting color value
will be:

Because they operate on pixels, matrix convolutions are
inherently resolution-dependent. To make 'feConvolveMatrix' produce
resolution-independent results, an explicit value should be
provided for either or both of the filterRes attribute on the
'filter' element and the kernelUnitLength attribute.

kernelUnitLength, in
combination with the other attributes, defines an implicit
pixel grid in the filter effects coordinate system (i.e., the
coordinate system established by the primitiveUnits attribute). If the
pixel grid established by kernelUnitLength is not
scaled to match the pixel grid established by attribute filterRes (implicitly or
explicitly), then the input image will be temporarily rescaled
to match its pixels with kernelUnitLength. The
convolution happens on the resampled image. After applying the
convolution, the image is resampled back to the original
resolution.

When the image must be resampled to match the coordinate
system defined by kernelUnitLength prior to
convolution, or resampled to match the device coordinate system
after convolution, it is recommended that high
quality viewers make use of appropriate interpolation
techniques, for example bilinear or bicubic. Depending on the
speed of the available interpolants, this choice may be
affected by the 'image-rendering' property
setting. Note that implementations might choose approaches that
minimize or eliminate resampling when not necessary to produce
proper results, such as when the document is zoomed out such
that kernelUnitLength is
considerably smaller than a device pixel.

order = "<number-optional-number>"

Indicates the number of cells in each dimension for
kernelMatrix. The values
provided must be
<integer>s greater than zero. The first number,
<orderX>, indicates the number of columns in the
matrix. The second number, <orderY>, indicates the
number of rows in the matrix. If <orderY> is not
provided, it defaults to <orderX>.
A typical value is order="3". It is recommended that only
small values (e.g., 3) be used; higher values may result in
very high CPU overhead and usually do not produce results
that justify the impact on performance.
If the attribute is not specified, the effect is as if a
value of "3" were specified.
Animatable: yes.

kernelMatrix = "<list of numbers>"

The list of
<number>s that make up the kernel matrix for the
convolution. Values are separated by space characters
and/or a comma. The number of entries in the list must
equal <orderX> times <orderY>.
Animatable: yes.

divisor = "<number>"

After applying the
kernelMatrix to the input image to yield a number,
that number is divided by
divisor to yield the final destination color value.
A divisor that is the sum of all the matrix values tends to
have an evening effect on the overall color intensity of
the result. It is an error to specify a divisor of zero.
The default value is the sum of all values in kernelMatrix,
with the exception that if the sum is zero, then the
divisor is set to 1.
Animatable: yes.

bias = "<number>"

After applying the
kernelMatrix to the input image to yield a number
and applying the divisor, the bias attribute is added to each
component. One application of
bias is when it is desirable to have .5 gray value
be the zero response of the filter. If bias is not specified, then the
effect is as if a value of zero were specified.
Animatable: yes.

targetX = "<integer>"

Determines the positioning in X of the convolution
matrix relative to a given target pixel in the input image.
The leftmost column of the matrix is column number zero.
The value must be such that: 0 <= targetX < orderX.
By default, the convolution matrix is centered in X over
each pixel of the input image (i.e., targetX = floor (
orderX / 2 )).
Animatable: yes.

targetY = "<integer>"

Determines the positioning in Y of the convolution
matrix relative to a given target pixel in the input image.
The topmost row of the matrix is row number zero. The value
must be such that: 0 <= targetY < orderY. By default,
the convolution matrix is centered in Y over each pixel of
the input image (i.e., targetY = floor ( orderY / 2
)).
Animatable: yes.

edgeMode = "duplicate | wrap | none"

Determines how to extend the input image as necessary
with color values so that the matrix operations can be
applied when the kernel is positioned at or near the edge
of the input image.

"duplicate" indicates that the input image is extended
along each of its borders as necessary by duplicating the
color values at the given edge of the input image.

"wrap" indicates that the input image is extended by
taking the color values from the opposite edge of the
image.

"none" indicates that the input image is extended with
pixel values of zero for R, G, B and A.

If the attribute is not specified, then the effect is as
if a value of duplicate were specified.
Animatable: yes.

kernelUnitLength = "<number-optional-number>"

The first number is the <dx> value. The second
number is the <dy> value. If the <dy> value is
not specified, it defaults to the same value as <dx>.
Indicates the intended distance in current filter units
(i.e., units as determined by the value of attribute primitiveUnits) between
successive columns and rows, respectively, in the kernelMatrix. By specifying
value(s) for
kernelUnitLength, the kernel becomes defined in a
scalable, abstract coordinate system. If kernelUnitLength is not specified,
the default value is one pixel in the offscreen bitmap,
which is a pixel-based coordinate system, and thus
potentially not scalable. For some level of consistency
across display media and user agents, it is necessary that
a value be provided for at least one of filterRes and kernelUnitLength. In some
implementations, the most consistent results and the
fastest performance will be achieved if the pixel grid of
the temporary offscreen images aligns with the pixel grid
of the kernel.
A negative or zero value is an error (see Error
processing).
Animatable: yes.

preserveAlpha = "false | true"

A value of false
indicates that the convolution will apply to all channels,
including the alpha channel.
A value of true indicates
that the convolution will only apply to the color channels.
In this case, the filter will temporarily unpremultiply the
color component values, apply the kernel, and then
re-premultiply at the end.
If preserveAlpha is not
specified, then the effect is as if a value of false were specified.
Animatable: yes.
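
The temporary un-premultiply/re-premultiply round trip used when preserveAlpha="true" can be sketched as follows (non-normative; function names are illustrative):

```python
def unpremultiply(r, g, b, a):
    """Sketch of the temporary un-premultiply step: divide color
    components by alpha (a fully transparent pixel maps to zero)."""
    if a == 0:
        return 0.0, 0.0, 0.0, 0.0
    return r / a, g / a, b / a, a

def premultiply(r, g, b, a):
    """Inverse step: re-premultiply the color components at the end."""
    return r * a, g * a, b * a, a
```

The convolution kernel would be applied between these two steps, on the color channels only.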

15.14 Filter primitive
'feDiffuseLighting'

This filter primitive lights an image using the alpha
channel as a bump map. The resulting image is an RGBA opaque
image based on the light color with alpha = 1.0 everywhere. The
lighting calculation follows the standard diffuse component of
the Phong lighting model. The resulting image depends on the
light color, light position and surface geometry of the input
bump map.

The light map produced by this filter primitive can be
combined with a texture image using the multiply term of the
arithmetic 'feComposite' compositing
method. Multiple light sources can be simulated by adding
several of these light maps together before applying them to the
texture image.

The formulas below make use of 3x3 filters. Because they
operate on pixels, such filters are inherently
resolution-dependent. To make
'feDiffuseLighting' produce resolution-independent
results, an explicit value should be provided for either or both
of the filterRes attribute on the
'filter' element and the kernelUnitLength attribute.

kernelUnitLength, in
combination with the other attributes, defines an implicit
pixel grid in the filter effects coordinate system (i.e., the
coordinate system established by the primitiveUnits attribute). If the
pixel grid established by kernelUnitLength is not
scaled to match the pixel grid established by attribute filterRes (implicitly or
explicitly), then the input image will be temporarily rescaled
to match its pixels with kernelUnitLength. The 3x3
filters are applied to the resampled image. After applying the
filter, the image is resampled back to its original
resolution.

When the image must be resampled, it is recommended that high
quality viewers make use of appropriate interpolation
techniques, for example bilinear or bicubic. Depending on the
speed of the available interpolants, this choice may be
affected by the 'image-rendering' property
setting. Note that implementations might choose approaches that
minimize or eliminate resampling when not necessary to produce
proper results, such as when the document is zoomed out such
that kernelUnitLength is
considerably smaller than a device pixel.

For the formulas that follow, the
Norm(Ax,Ay,Az) function
is defined as:

Norm(Ax,Ay,Az) =
sqrt(Ax^2+Ay^2+Az^2)

The resulting RGBA image is computed as follows:

Dr = kd * N.L *
Lr
Dg = kd * N.L * Lg
Db = kd * N.L * Lb
Da = 1.0

where

kd = diffuse lighting constant
N = surface normal unit vector, a function of x and y
L = unit vector pointing from surface to light, a function
of x and y in the point and spot light cases
Lr,Lg,Lb = RGB components
of light, a function of x and y in the spot light case
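
The per-pixel diffuse formula above can be sketched as follows (a non-normative Python sketch; the function name is illustrative, and N and L are assumed to be unit vectors already computed):

```python
def diffuse_pixel(kd, n, l, light_rgb):
    """Sketch of the feDiffuseLighting per-pixel result:
    D = kd * (N . L) * Lcolor for each of R, G, B, with alpha
    forced to 1.0. `n` and `l` are unit vectors (x, y, z)."""
    ndotl = n[0] * l[0] + n[1] * l[1] + n[2] * l[2]
    lr, lg, lb = light_rgb
    return (kd * ndotl * lr, kd * ndotl * lg, kd * ndotl * lb, 1.0)
```

A flat surface lit from directly above (N = L = (0, 0, 1)) with kd = 1 and white light returns full intensity.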

N is a function of x and y and depends on the surface
gradient as follows:

The surface described by the input alpha image
Ain (x,y) is:

Z (x,y) = surfaceScale *
Ain (x,y)

Surface normal is calculated using the Sobel gradient 3x3
filter. Different filter kernels are used depending on whether
the given pixel is on the interior or an edge. For each case,
the formula is:

Nx (x,y) = - surfaceScale * FACTORx * (Kx applied to the
3x3 alpha neighborhood of (x,y))
Ny (x,y) = - surfaceScale * FACTORy * (Ky applied to the
3x3 alpha neighborhood of (x,y))
N = (Nx, Ny, 1.0) / Norm(Nx, Ny, 1.0)
In these formulas, the dx and dy
values (e.g., I(x-dx,y-dy)), represent deltas
relative to a given (x,y) position for the purpose
of estimating the slope of the surface at that point. These
deltas are determined by the value (explicit or implicit) of
attribute kernelUnitLength.

Top/left corner:

FACTORx=2/(3*dx)
Kx =
| 0 0 0 |
| 0 -2 2 |
| 0 -1 1 |

FACTORy=2/(3*dy)
Ky =
| 0 0 0 |
| 0 -2 -1 |
| 0 2 1 |

Top row:

FACTORx=1/(3*dx)
Kx =
| 0 0 0 |
| -2 0 2 |
| -1 0 1 |

FACTORy=1/(2*dy)
Ky =
| 0 0 0 |
| -1 -2 -1 |
| 1 2 1 |

Top/right corner:

FACTORx=2/(3*dx)
Kx =
| 0 0 0 |
| -2 2 0 |
| -1 1 0 |

FACTORy=2/(3*dy)
Ky =
| 0 0 0 |
| -1 -2 0 |
| 1 2 0 |

Left column:

FACTORx=1/(2*dx)
Kx =
| 0 -1 1 |
| 0 -2 2 |
| 0 -1 1 |

FACTORy=1/(3*dy)
Ky =
| 0 -2 -1 |
| 0 0 0 |
| 0 2 1 |

Interior pixels:

FACTORx=1/(4*dx)
Kx =
| -1 0 1 |
| -2 0 2 |
| -1 0 1 |

FACTORy=1/(4*dy)
Ky =
| -1 -2 -1 |
| 0 0 0 |
| 1 2 1 |

Right column:

FACTORx=1/(2*dx)
Kx =
| -1 1 0|
| -2 2 0|
| -1 1 0|

FACTORy=1/(3*dy)
Ky =
| -1 -2 0 |
| 0 0 0 |
| 1 2 0 |

Bottom/left corner:

FACTORx=2/(3*dx)
Kx =
| 0 -1 1 |
| 0 -2 2 |
| 0 0 0 |

FACTORy=2/(3*dy)
Ky =
| 0 -2 -1 |
| 0 2 1 |
| 0 0 0 |

Bottom row:

FACTORx=1/(3*dx)
Kx =
| -1 0 1 |
| -2 0 2 |
| 0 0 0 |

FACTORy=1/(2*dy)
Ky =
| -1 -2 -1 |
| 1 2 1 |
| 0 0 0 |

Bottom/right corner:

FACTORx=2/(3*dx)
Kx =
| -1 1 0 |
| -2 2 0 |
| 0 0 0 |

FACTORy=2/(3*dy)
Ky =
| -1 -2 0 |
| 1 2 0 |
| 0 0 0 |
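
The interior-pixel case can be sketched as follows (a non-normative Python sketch; the assembly N = (Nx, Ny, 1)/Norm(Nx, Ny, 1), with Nx and Ny negated kernel sums scaled by FACTORx and FACTORy, is taken from the full specification text and is an assumption here):

```python
def interior_normal(alpha3x3, surface_scale, dx=1.0, dy=1.0):
    """Sketch of the surface normal for an interior pixel, using the
    interior Sobel kernels Kx and Ky from the tables above.
    `alpha3x3` is the 3x3 alpha neighborhood indexed [row][col]."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    fx, fy = 1.0 / (4 * dx), 1.0 / (4 * dy)   # FACTORx, FACTORy
    gx = sum(kx[i][j] * alpha3x3[i][j] for i in range(3) for j in range(3))
    gy = sum(ky[i][j] * alpha3x3[i][j] for i in range(3) for j in range(3))
    nx = -surface_scale * fx * gx
    ny = -surface_scale * fy * gy
    norm = (nx * nx + ny * ny + 1.0) ** 0.5
    return (nx / norm, ny / norm, 1.0 / norm)
```

A constant alpha neighborhood (a flat surface) yields the straight-up normal (0, 0, 1).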

L, the unit vector from the image sample to the light, is
calculated as follows:

For Distant light:

Lx = cos(azimuth) * cos(elevation)
Ly = sin(azimuth) * cos(elevation)
Lz = sin(elevation)

For Point and Spot light:

Lx = Lightx - x
Ly = Lighty - y
Lz = Lightz - Z(x,y)
L = (Lx, Ly, Lz) / Norm(Lx, Ly, Lz)

kernelUnitLength = "<number-optional-number>"

The first number is the <dx> value. The second
number is the <dy> value. If the <dy> value is
not specified, it defaults to the same value as <dx>.
Indicates the intended distance in current filter units
(i.e., units as determined by the value of attribute primitiveUnits) for
dx and dy, respectively, in the
surface
normal calculation formulas. By specifying value(s) for
kernelUnitLength, the kernel
becomes defined in a scalable, abstract coordinate system.
If kernelUnitLength is not
specified, the dx and dy values
should represent very small deltas relative to a given
(x,y) position, which might be implemented in
some cases as one pixel in the intermediate image offscreen
bitmap, which is a pixel-based coordinate system, and thus
potentially not scalable. For some level of consistency
across display media and user agents, it is necessary that
a value be provided for at least one of filterRes and kernelUnitLength. Discussion of
intermediate images is in the Introduction and in
the description of attribute filterRes.
A negative or zero value is an error (see Error
processing).
Animatable: yes.

15.15 Filter primitive
'feDisplacementMap'

This filter primitive uses the pixel values from the image
from in2 to spatially displace the image from in. This is the
transformation to be performed:

P'(x,y) <- P( x + scale * (XC(x,y) - .5), y + scale * (YC(x,y) - .5))

where P(x,y) is the input image, in, and P'(x,y) is the
destination. XC(x,y) and YC(x,y) are the component values of
the channel designated by the xChannelSelector and
yChannelSelector. For example, to use the R component of
in2 to control displacement in x
and the G component of in2 to control displacement in y, set
xChannelSelector to "R" and yChannelSelector
to "G".

The displacement map defines the inverse of the mapping
performed.

The calculations using the pixel values from in2 are performed using
non-premultiplied color values. If the image from in2 consists of premultiplied
color values, those values are automatically converted into
non-premultiplied color values before performing this
operation.
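
Assuming the standard displacement rule P'(x,y) = P(x + scale*(XC(x,y) - .5), y + scale*(YC(x,y) - .5)), the sampling can be sketched as follows (non-normative; nearest-neighbor sampling and edge clamping are simplifications of this sketch):

```python
def displace(src, xc, yc, scale, x, y):
    """Sketch of feDisplacementMap sampling for one output pixel.
    `xc` and `yc` are the already-selected channel images of in2
    (non-premultiplied values in [0, 1]); `src` is indexed [y][x]."""
    sx = x + scale * (xc[y][x] - 0.5)
    sy = y + scale * (yc[y][x] - 0.5)
    # nearest-neighbor sampling; high quality viewers would interpolate
    ix = min(max(int(round(sx)), 0), len(src[0]) - 1)
    iy = min(max(int(round(sy)), 0), len(src) - 1)
    return src[iy][ix]
```

A channel value of 0.5 everywhere produces zero displacement, matching the scale = 0 behavior.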

This filter can have an arbitrary non-localized effect on the
input which might require substantial buffering in the
processing pipeline. However, with this formulation, any
intermediate buffering needs can be determined by
scale, which represents the maximum range of displacement
in either x or y.

When applying this filter, the source pixel location will
often lie between several source pixels. In this case it is
recommended that high
quality viewers apply an interpolant on the surrounding
pixels, for example bilinear or bicubic, rather than simply
selecting the nearest source pixel. Depending on the speed of
the available interpolants, this choice may be affected by the
'image-rendering' property
setting.

scale = "<number>"

Displacement scale factor. The amount is expressed in
the coordinate system established by attribute primitiveUnits on the 'filter' element.
When the value of this attribute is 0, this operation has no effect
on the source image.
If the attribute is not specified, then the effect is as
if a value of 0 were specified.
Animatable: yes.

xChannelSelector = "R | G | B
| A"

Indicates which channel from in2 to use to displace the
pixels in in along the x-axis.
Animatable: yes.

yChannelSelector = "R | G | B
| A"

Indicates which channel from in2 to use to displace the
pixels in in along the y-axis.
Animatable: yes.

in2 = "(see in attribute)"

The second input image, which is used to displace the
pixels in the image from attribute in. This attribute can take on
the same values as the in attribute.
Animatable: yes.

15.16 Filter primitive
'feFlood'

This filter primitive creates a rectangle filled with the
color and opacity values from properties 'flood-color' and 'flood-opacity'. The rectangle is as
large as the
filter primitive subregion established by the x, y, width and height attributes on the 'feFlood' element.

The 'flood-color' property
indicates what color to use to flood the current filter primitive
subregion. The keyword
currentColor and ICC colors can be specified in the same
manner as within a
<paint> specification for the
'fill' and 'stroke' properties.

15.17 Filter primitive
'feGaussianBlur'

This filter primitive performs a Gaussian blur on the input
image.

The value of stdDeviation can be either one or
two numbers. If two numbers are provided, the first number
represents a standard deviation value along the x-axis of the
current coordinate system and the second value represents a
standard deviation in Y. If one number is provided, then that
value is used for both X and Y.

Even if only one value is provided for stdDeviation, this can be
implemented as a separable convolution.

For larger values of 's' (s >= 2.0), an approximation can
be used: Three successive box-blurs build a piece-wise
quadratic convolution kernel, which approximates the Gaussian
kernel to within roughly 3%.

let d = floor(s * 3*sqrt(2*pi)/4 +
0.5)

... if d is odd, use three box-blurs of size 'd', centered
on the output pixel.

... if d is even, two box-blurs of size 'd' (the first one
centered on the pixel boundary between the output pixel and the
one to the left, the second one centered on the pixel boundary
between the output pixel and the one to the right) and one box
blur of size 'd+1' centered on the output pixel.
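
The box-size selection above can be sketched as follows (a non-normative Python sketch; the function name and list return form are illustrative):

```python
import math

def box_blur_plan(s):
    """Sketch of the three-box-blur approximation for stdDeviation
    s >= 2.0: returns the three box sizes. For even d, the first two
    boxes are centered on the left and right pixel boundaries."""
    d = math.floor(s * 3 * math.sqrt(2 * math.pi) / 4 + 0.5)
    if d % 2 == 1:
        # three box-blurs of size d, all centered on the output pixel
        return [d, d, d]
    # two box-blurs of size d plus one of size d + 1
    return [d, d, d + 1]
```

For s = 2.0 this gives d = 4 (even), so two boxes of size 4 and one of size 5.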

Frequently this operation will take place on alpha-only
images, such as that produced by the built-in input,
SourceAlpha. The implementation may notice this and
optimize the single channel case. If the input has infinite
extent and is constant, this operation has no effect. If the
input has infinite extent and is a tile, the filter is
evaluated with periodic boundary conditions.

stdDeviation = "<number-optional-number>"

The standard deviation for the blur operation. If two
<number>s are
provided, the first number represents a standard deviation
value along the x-axis of the coordinate system established
by attribute primitiveUnits on the 'filter' element. The
second value represents a standard deviation in Y. If one
number is provided, then that value is used for both X and
Y.
A negative value is an error (see Error processing).
A value of zero disables the effect of the given filter
primitive (i.e., the result is a transparent black
image).
If the attribute is not specified, then the effect is as
if a value of 0 were specified.
Animatable: yes.

The example at the
start of this chapter makes use of the feGaussianBlur filter primitive to
create a drop shadow effect.

15.18 Filter primitive
'feImage'

This filter primitive refers to a graphic external to this
filter element, which is loaded or rendered into an RGBA raster
and becomes the result of the filter primitive.

This filter primitive can refer to an external image or can
be a reference to another piece of SVG. It produces an image
similar to the built-in image source
SourceGraphic except that the graphic comes from an
external source.

If the xlink:href references
a stand-alone image resource such as a JPEG, PNG or SVG file,
then the image resource is rendered according to the behavior
of the 'image' element; otherwise, the
referenced resource is rendered according to the behavior of
the 'use' element. In either case,
the current user coordinate system depends on the value of
attribute primitiveUnits on the
'filter' element. The processing of the
preserveAspectRatio attribute on the 'feImage' element is identical to
that of the 'image' element.

When the referenced image must be resampled to match the
device coordinate system, it is recommended that high
quality viewers make use of appropriate interpolation
techniques, for example bilinear or bicubic. Depending on the
speed of the available interpolants, this choice may be
affected by the 'image-rendering' property
setting.

15.19 Filter primitive
'feMerge'

This filter primitive composites input image layers on top
of each other using the over operator with
Input1 (corresponding to the first 'feMergeNode' child element) on
the bottom and the last specified input, InputN
(corresponding to the last 'feMergeNode' child element),
on top.

Many effects produce a number of intermediate layers in
order to create the final output image. This filter allows us
to collapse those into a single image. Although this could be
done by using n-1 Composite-filters, it is more convenient to
have this common operation available in this form, and it offers
the implementation some additional flexibility.

Each 'feMerge' element can have any number of 'feMergeNode'
subelements, each of which has an
in attribute.

The canonical implementation of feMerge is to render the
entire effect into one RGBA layer, and then render the
resulting layer on the output device. In certain cases (in
particular if the output device itself is a continuous tone
device), and since merging is associative, it might be a
sufficient approximation to evaluate the effect one layer at a
time and render each layer individually onto the output device
bottom to top.

If the topmost image input is
SourceGraphic and this
'feMerge' is the last filter primitive in the filter,
the implementation is encouraged to render the layers up to
that point, and then render the
SourceGraphic directly from its vector description
on top.

The example at the
start of this chapter makes use of the feMerge filter primitive to
composite two intermediate filter results together.

15.20 Filter primitive
'feMorphology'

This filter primitive performs "fattening" or "thinning" of
artwork. It is particularly useful for fattening or thinning an
alpha channel.

The dilation (or erosion) kernel is a rectangle with a width
of 2*x-radius and a height of 2*y-radius. In
dilation, the output pixel is the individual component-wise
maximum of the corresponding R,G,B,A values in the input
image's kernel rectangle. In erosion, the output pixel is the
individual component-wise minimum of the corresponding R,G,B,A
values in the input image's kernel rectangle.
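
The component-wise maximum/minimum can be sketched as follows (a non-normative Python sketch; the (2*rx+1)-by-(2*ry+1) neighborhood and edge clamping are simplifications of this sketch):

```python
def morph_pixel(src, x, y, rx, ry, op):
    """Sketch of feMorphology for one channel of one pixel:
    component-wise max (op="dilate") or min (op="erode") over the
    kernel rectangle around (x, y), clipped at the image edges."""
    h, w = len(src), len(src[0])
    pick = max if op == "dilate" else min
    best = None
    for sy in range(max(0, y - ry), min(h, y + ry + 1)):
        for sx in range(max(0, x - rx), min(w, x + rx + 1)):
            best = src[sy][sx] if best is None else pick(best, src[sy][sx])
    return best
```

On an alpha-only input, dilation fattens the coverage and erosion thins it.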

Frequently this operation will take place on alpha-only
images, such as that produced by the built-in input,
SourceAlpha. In that case, the implementation might
want to optimize the single channel case.

If the input has infinite extent and is constant, this
operation has no effect. If the input has infinite extent and
is a tile, the filter is evaluated with periodic boundary
conditions.

Because 'feMorphology'
operates on premultiplied color values, it will always result in
color values less than or equal to the alpha channel.

radius = "<number-optional-number>"

The radius (or radii) for the operation. If two <number>s are
provided, the first number represents an x-radius and the
second value represents a y-radius. If one number is
provided, then that value is used for both X and Y. The
values are in the coordinate system established by
attribute primitiveUnits on the 'filter' element.
A negative value is an error (see Error processing).
A value of zero disables the effect of the given filter
primitive (i.e., the result is a transparent black
image).
If the attribute is not specified, then the effect is as
if a value of 0 were specified.
Animatable: yes.

Example feMorphology shows
examples of the four types of feMorphology operations.

15.21 Filter primitive
'feOffset'

This filter primitive offsets the input image relative to
its current position in the image space by the specified
vector.

This is important for effects like drop shadows.

When applying this filter, the destination location may be
offset by a fraction of a pixel in device space. In this case a
high
quality viewer should make use of appropriate interpolation
techniques, for example bilinear or bicubic. This is especially
recommended for dynamic viewers where this interpolation
provides visually smoother movement of images. For static
viewers this is less of a concern. Close attention should be
paid to the 'image-rendering' property
setting to determine the author's intent.

dx = "<number>"

The amount to offset the input graphic along the
x-axis. The offset amount is expressed in the coordinate
system established by attribute primitiveUnits on the 'filter' element.
If the attribute is not specified, then the effect is as
if a value of 0 were specified.
Animatable: yes.

dy = "<number>"

The amount to offset the input graphic along the
y-axis. The offset amount is expressed in the coordinate
system established by attribute primitiveUnits on the 'filter' element.
If the attribute is not specified, then the effect is as
if a value of 0 were specified.
Animatable: yes.

The example at the
start of this chapter makes use of the feOffset filter primitive to offset
the drop shadow from the original source graphic.

15.22 Filter primitive
'feSpecularLighting'

This filter primitive lights a source graphic using the
alpha channel as a bump map. The resulting image is an RGBA
image based on the light color. The lighting calculation
follows the standard specular component of the Phong lighting
model. The resulting image depends on the light color, light
position and surface geometry of the input bump map. The result
of the lighting calculation is added. The filter primitive
assumes that the viewer is at infinity in the z direction
(i.e., the unit vector in the eye direction is (0,0,1)
everywhere).

This filter primitive produces an image which contains the
specular reflection part of the lighting calculation. Such a
map is intended to be combined with a texture using the
add term of the arithmetic
'feComposite' method. Multiple light sources can be
simulated by adding several of these light maps together before
applying them to the texture image.

Unlike 'feDiffuseLighting', the 'feSpecularLighting' filter
produces a non-opaque image. This is because the
specular result
(Sr,Sg,Sb,Sa) is
meant to be added to the textured image. The alpha channel of
the result is the max of the color components, so that where
the specular light is zero, no additional coverage is added to
the image and a fully white highlight will add opacity.

The
'feDiffuseLighting' and
'feSpecularLighting' filters will often be applied
together. An implementation may detect this and calculate both
maps in one pass, instead of two.

The example at the
start of this chapter makes use of the feSpecularLighting filter primitive
to achieve a highly reflective, 3D glowing effect.

15.23 Filter primitive
'feTile'

This filter primitive fills a target rectangle with a
repeated, tiled pattern of an input image. The target rectangle
is as large as the filter primitive
subregion established by the x, y, width and height attributes on the 'feTile' element.

Typically, the input image has been defined with its own filter primitive
subregion in order to define a reference tile. 'feTile' replicates the reference
tile in both X and Y to completely fill the target rectangle.
The top/left corner of each given tile is at location
(x+i*width,y+j*height), where (x,y)
represents the top/left of the input image's filter primitive
subregion, width and height represent
the width and height of the input image's filter primitive
subregion, and i and j can be any
integer value. In most cases, the input image will have a
smaller filter primitive subregion than the 'feTile' in order to achieve a
repeated pattern effect.
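
The tile-coordinate mapping above can be sketched as follows (a non-normative Python sketch; the function name is illustrative):

```python
def tile_source_point(x, y, tile_x, tile_y, tile_w, tile_h):
    """Sketch of the feTile lookup: map an output point back into the
    reference tile whose filter primitive subregion has top/left
    (tile_x, tile_y) and dimensions tile_w by tile_h."""
    return (tile_x + (x - tile_x) % tile_w,
            tile_y + (y - tile_y) % tile_h)
```

Points already inside the reference tile map to themselves; Python's modulo also handles points to the left of or above the tile.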

Implementers must take appropriate measures in constructing
the tiled image to avoid artifacts between tiles, particularly
in situations where the user to device transform includes shear
and/or rotation. Unless care is taken, interpolation can lead
to edge pixels in the tile having opacity values lower or
higher than expected due to the interaction of painting
adjacent tiles which each have partial overlap with particular
pixels.

15.24 Filter primitive
'feTurbulence'

This filter primitive creates an image using the Perlin
turbulence function. It allows the synthesis of artificial
textures like clouds or marble. For a detailed description
of the Perlin turbulence function, see "Texturing and
Modeling", Ebert et al, AP Professional, 1994. The resulting
image will fill the entire filter primitive
subregion for this filter primitive.

It is possible to create bandwidth-limited noise by
synthesizing only one octave.

The C code below shows the exact algorithm used for this
filter effect.

For fractalSum, you get a turbFunctionResult that is aimed
at a range of -1 to 1 (the actual result might exceed this
range in some cases). To convert to a color value, use the
formula colorValue = ((turbFunctionResult * 255) + 255) /
2, then clamp to the range 0 to 255.

For turbulence, you get a turbFunctionResult that is aimed
at a range of 0 to 1 (the actual result might exceed this range
in some cases). To convert to a color value, use the formula
colorValue = (turbFunctionResult * 255), then
clamp to the range 0 to 255.
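
The two result-to-color mappings above can be sketched together as follows (a non-normative Python sketch; the function name and `kind` parameter are illustrative):

```python
def to_color_value(result, kind):
    """Sketch of converting a turbFunctionResult to a colorValue,
    clamped to [0, 255]; `kind` is "fractalSum" or "turbulence"."""
    if kind == "fractalSum":
        # result aimed at [-1, 1]
        value = (result * 255 + 255) / 2
    else:
        # turbulence: result aimed at [0, 1]
        value = result * 255
    return max(0.0, min(255.0, value))
```

The clamp step covers the cases where the actual result exceeds the nominal range.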

The following order is used for applying the pseudo random
numbers. An initial seed value is computed based on attribute
seed. Then the implementation
computes the lattice points for R, then continues getting
additional pseudo random numbers relative to the last generated
pseudo random number and computes the lattice points for G, and
so on for B and A.

baseFrequency = "<number-optional-number>"

The base frequency (frequencies) parameter(s) for the
noise function. If two
<number>s are provided, the first number
represents a base frequency in the X direction and the
second value represents a base frequency in the Y
direction. If one number is provided, then that value is
used for both X and Y.
A negative value for base frequency is an error (see Error
processing).
If the attribute is not specified, then the effect is as
if a value of 0 were specified.
Animatable: yes.

seed = "<number>"

The starting number for the pseudo random number
generator.
If the attribute is not specified, then the effect is as
if a value of 0 were specified.
Animatable: yes.

stitchTiles = "stitch |
noStitch"

If
stitchTiles="noStitch", no attempt is made to
achieve smooth transitions at the border of tiles which
contain a turbulence function. Sometimes the result will
show clear discontinuities at the tile borders.
If stitchTiles="stitch",
then the user agent will automatically adjust
baseFrequency-x and baseFrequency-y values such that the
feTurbulence node's width and height (i.e., the width and
height of the current subregion) contains an integral
number of the Perlin tile width and height for the first
octave. The baseFrequency will be adjusted up or down
depending on which way has the smallest relative (not
absolute) change as follows: Given the frequency, calculate
lowFreq=floor(width*frequency)/width and
hiFreq=ceil(width*frequency)/width. If
frequency/lowFreq < hiFreq/frequency then use lowFreq,
else use hiFreq. While generating turbulence values,
generate lattice vectors as normal for Perlin Noise, except
for those lattice points that lie on the right or bottom
edges of the active area (the size of the resulting tile).
In those cases, copy the lattice vector from the opposite
edge of the active area.
Animatable: yes.
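
The frequency-snapping rule above, for one axis, can be sketched as follows (a non-normative Python sketch; the zero-frequency guard is an assumption of this sketch):

```python
import math

def stitch_frequency(frequency, width):
    """Sketch of the stitchTiles="stitch" adjustment for one axis:
    snap baseFrequency to whichever of floor/ceil gives the smaller
    relative (not absolute) change."""
    if frequency == 0 or width == 0:
        return frequency
    low = math.floor(width * frequency) / width
    high = math.ceil(width * frequency) / width
    if low > 0 and frequency / low < high / frequency:
        return low
    return high
```

For width 100 and frequency 0.123 the snapped value is 0.12 (relative change 1.025 versus 1.057).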

type = "fractalNoise | turbulence"

Indicates whether the filter primitive should perform a
noise or turbulence function.
Animatable: yes.

Example feTurbulence shows
the effects of various parameter settings for feTurbulence.