Abstract

Filter effects are a way of processing an element's rendering before it
is displayed in the document. Typically, rendering an element via CSS or
SVG can conceptually be described as if the element, including its children,
were drawn into a buffer (such as a raster image) and that buffer were then
composited into the element's parent. Filters apply an effect before the
compositing stage. Examples of such effects are blurring, changing color
intensity and warping the image.

Although originally designed for use in SVG, filter effects are a set
of operations to apply on an image buffer and therefore can be applied
to nearly any presentational environment, including CSS. They are
triggered by a style instruction (the ‘filter’ property). This specification
describes filters in a manner that allows them to be used in content
styled by CSS, such as HTML and SVG. It also defines a CSS property value
function that produces a CSS <image> value.

Status of This Document

This section describes the status of this document at the time of
its publication. Other documents may supersede this document. A list of
current W3C publications and the latest revision of this technical report
can be found in the W3C technical reports
index at http://www.w3.org/TR/.

Publication as a Working Draft does not imply endorsement by the W3C
Membership. This is a draft document and may be updated, replaced or
obsoleted by other documents at any time. It is inappropriate to cite this
document as other than work in progress.

The (archived) public
mailing list public-fx@w3.org (see
instructions) is preferred
for discussion of this specification. When sending e-mail, please put the
text “filter-effects” in the subject, preferably like this:
“[filter-effects] …summary of comment…”

1. Introduction

A filter effect is a graphical operation that is applied to an element
as it is drawn into the document. It is an image-based effect, in that it
takes zero or more images as input, a number of parameters specific to the
effect, and then produces an image as output. The output image is either
rendered into the document instead of the original element, used as an
input image to another filter effect, or provided as a CSS image value.

A simple example of a filter effect is a "flood". It takes no image
inputs but has a parameter defining a color. The effect produces an output
image that is completely filled with the given color. A slightly more
complex example is an "inversion" which takes a single image input
(typically an image of the element as it would normally be rendered into
its parent) and adjusts each pixel such that they have the opposite color
values.

Filter effects are exposed with three levels of complexity:

A small set of canned filter functions that are given by name. While
not particularly powerful, these are convenient, easily understood and
provide a simple approach to achieving common effects, such as blurring
(a minimal example is given after this list).

A graph of individual filter effects described in markup that define
an overall effect. The graph is agnostic to its input in that the effect
can be applied to any content. While such graphs are the combination of
effects that may be simple in isolation, the graph as a whole can produce
complex effects. An example is given below.

A customizable system that exposes a shading language allowing
control over the geometry and pixel values of filtered output.
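For the first level, a minimal CSS sketch (the selector and length are
illustrative, not normative):

img { filter: blur(5px); }  /* canned function: blur with a 5px standard deviation */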

The following pictures show the intermediate image results from each of
the six filter primitives:

Source graphic

After filter primitive 1

After filter primitive 2

After filter primitive 3

After filter primitive 4

After filter primitive 5

After filter primitive 6

Filter primitive ‘feGaussianBlur’ takes input SourceAlpha, which is the alpha channel of the
source graphic. The result is stored in a temporary buffer named "blur".
Note that "blur" is used as input to both filter primitives 2 and 3.

Filter primitive ‘feOffset’
takes buffer "blur", shifts the result in a positive direction in both x
and y, and creates a new buffer named "offsetBlur". The effect is that of
a drop shadow.

Filter primitive ‘feSpecularLighting’ uses buffer "blur" as a
model of a surface elevation and generates a lighting effect from a
single point source. The result is stored in buffer "specOut".

Filter primitive ‘feComposite’ masks out the result of filter
primitive 3 by the original source graphic's alpha channel so that the
intermediate result is no bigger than the original source graphic.

Filter primitive ‘feComposite’ composites the result of the
specular lighting with the original source graphic.

Filter primitive ‘feMerge’
composites two layers together. The lower layer consists of the drop
shadow result from filter primitive 2. The upper layer consists of the
specular lighting result from filter primitive 5.
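As a sketch, the markup for such a six-primitive graph might look like the
following (attribute values here are illustrative, chosen to match the
description above, not normative):

<filter id="MyFilter" filterUnits="userSpaceOnUse" x="0" y="0" width="200" height="120">
  <!-- 1: blur the alpha channel of the source -->
  <feGaussianBlur in="SourceAlpha" stdDeviation="4" result="blur"/>
  <!-- 2: shift the blur to create a drop shadow -->
  <feOffset in="blur" dx="4" dy="4" result="offsetBlur"/>
  <!-- 3: light the blurred surface from a point source -->
  <feSpecularLighting in="blur" surfaceScale="5" specularConstant=".75"
                      specularExponent="20" lighting-color="#bbbbbb" result="specOut">
    <fePointLight x="-5000" y="-10000" z="20000"/>
  </feSpecularLighting>
  <!-- 4: clip the lighting result to the source alpha -->
  <feComposite in="specOut" in2="SourceAlpha" operator="in" result="specOut"/>
  <!-- 5: composite the lighting onto the source graphic -->
  <feComposite in="SourceGraphic" in2="specOut" operator="arithmetic"
               k1="0" k2="1" k3="1" k4="0" result="litPaint"/>
  <!-- 6: merge the drop shadow below the lit paint -->
  <feMerge>
    <feMergeNode in="offsetBlur"/>
    <feMergeNode in="litPaint"/>
  </feMerge>
</filter>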

2. Reading This Document

Each section of this document is normative unless otherwise specified.

This document contains explicit conformance criteria that overlap with
some RelaxNG definitions in requirements. If there is any conflict between
the two, the explicit conformance criteria are the definitive reference.

Note that even though this specification references parts of
SVG 1.1 [SVG11] it
does not require an SVG 1.1 implementation.

Add link to conformance classes
here.

3. Definitions

When used in this specification, terms have the meanings assigned in
this section.

null filter

The null filter output is all transparent black pixels. If applied to
an element it means that the element (and children if any) becomes
invisible. Note that it does not affect event processing.

bounding client rect

The union of all border boxes of the element and its descendant
elements, if the element has an associated CSS layout box and is not in
the http://www.w3.org/2000/svg namespace. Or the object bounding box [SVG11], if the
element does not have an associated CSS layout box and is in the
http://www.w3.org/2000/svg namespace (see getBoundingClientRect [CSSOM]).

local coordinate system

In general, a coordinate system defines locations and distances on
the current canvas. The current local coordinate system (also user
coordinate system) is the coordinate system that is currently active and
which is used to define how coordinates and lengths are located and
computed, respectively, on the current canvas [CSS3-TRANSFORMS].

For elements that have an associated CSS layout box, the current user
coordinate system has its origin at the top-left corner of the bounding client rect and one
unit equals one CSS pixel. The viewport for resolving percentage values
is defined by the width and height of the bounding client rect.

If the element does not have an associated CSS layout box and is in
the http://www.w3.org/2000/svg namespace, the current local
coordinate system has its origin at the top-left corner of the element's
nearest viewport.

Applies to:

All elements. In SVG 1.1 it applies only to
"container elements (except ‘mask’) and graphics elements"

Inherited:

no

Percentages:

N/A

Media:

visual

Animatable:

yes

If the value of the ‘filter’ property is ‘none’ then there is no filter effect applied.
Otherwise, the list of functions (described below) is applied in order.

The application of the ‘filter’ property to an element formatted with
the CSS box model establishes a pseudo-stacking-context the same way that
CSS ‘opacity’ does, and all the element's
descendants are rendered together as a group with the filter effect
applied to the group as a whole.

The ‘filter’
property has no effect on the geometry of the target element's CSS boxes,
even though ‘filter’
can cause painting outside of an element's border box.

The compositing model follows the SVG compositing model [SVG11]:
first any filter effect is applied, then any clipping, masking and
opacity. These effects all apply after any other CSS effects such as
‘border’. As per SVG, the application of
‘filter’ has no
effect on hit-testing.

The CSS WG wants to look into how to apply the filter to
certain parts of an object (background, border) instead of just the whole
group with descendants.

The filter margin properties specify the width of the filter region of a box. The ‘filter-margin’
shorthand property sets the margin for all four sides while the other
margin properties only set their respective side. These properties apply
to all elements (which is different from the regular margin properties
where vertical margins will not have any effect on non-replaced inline
elements).
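A minimal sketch of these properties, assuming the per-side properties
follow a filter-margin-top naming pattern (the lengths are illustrative):

div { filter-margin: 10px; }        /* all four sides */
div { filter-margin-top: 20px; }    /* one side only */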

‘auto’

Indicates that the user agent can choose either the ‘sRGB’ or
‘linearRGB’ space for filter effects color operations. This option
indicates that the author doesn’t require that color operations occur
in a particular color space.

‘sRGB’

Indicates that filter effects color operations should occur in the
sRGB color space.

Note that ‘color-interpolation-filters’ has a different
initial value than ‘color-interpolation’.
‘color-interpolation-filters’ has an initial
value of ‘linearRGB’, whereas ‘color-interpolation’ has an initial value of
‘sRGB’. Thus, in the default case, filter
effects operations occur in the linearRGB color space, whereas all other
color interpolations occur by default in the sRGB color space.

At the moment it is unspecified how the ‘color-interpolation-filters’ property applies
to shorthand filters. The FXTF has discussed applying it to the whole
filter chain.

7. Filter Functions

The value of the ‘filter’ property is a list of <filter-function>s applied in the order
provided. The individual filter functions are separated by whitespace. The
set of allowed filter functions is given below.

An IRI
reference to a ‘filter’ element that defines the filter
effect. For example ‘url(commonfilters.xml#large-blur)’. If the IRI
references a non-existent object or the referenced object is not a ‘filter’ element, then the null filter will be applied instead.

grayscale(<number> | <percentage>)

Converts the input image to grayscale. The passed parameter defines
the proportion of the conversion. A value of 100% is completely
grayscale. A value of 0% leaves the input unchanged. Values between 0%
and 100% are linear multipliers on the effect. If the parameter is
missing, a value of 100% is used. The markup equivalent of this function
is given below.

sepia(<number> | <percentage>)

Converts the input image to sepia. The passed parameter defines the
proportion of the conversion. A value of 100% is completely sepia. A
value of 0% leaves the input unchanged. Values between 0% and 100% are
linear multipliers on the effect. If the parameter is missing, a value of
100% is used. The markup equivalent of this function is given below.

saturate(<number> | <percentage>)

Saturates the input image. The passed parameter defines the
proportion of the conversion. A value of 0% is completely un-saturated. A
value of 100% leaves the input unchanged. Other values are linear
multipliers on the effect. Values of amount over 100% are allowed,
providing super-saturated results. If the parameter is missing, a value
of 100% is used. The markup equivalent of this function is given below.
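For instance, the markup equivalent of saturate(200%) would be an
‘feColorMatrix’ of type saturate (a sketch; the surrounding ‘filter’
element is elided):

<feColorMatrix type="saturate" values="2"/>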

hue-rotate(<angle>)

Applies a hue rotation on the input image. The passed parameter
defines the number of degrees around the color circle the input samples
will be adjusted. A value of 0deg leaves the input unchanged. If the
parameter is missing, a value of 0deg is used. Implementations should not
normalize this value in order to allow animations beyond 360deg. The
markup equivalent of this function is given below.

invert(<number> | <percentage>)

Inverts the samples in the input image. The passed parameter defines
the proportion of the conversion. A value of 100% is completely inverted.
A value of 0% leaves the input unchanged. Values between 0% and 100% are
linear multipliers on the effect. If the parameter is missing, a value of
100% is used. The markup equivalent of this function is given below.

opacity(<number> | <percentage>)

Applies transparency to the samples in the input image. The passed
parameter defines the proportion of the conversion. A value of 0% is
completely transparent. A value of 100% leaves the input unchanged.
Values between 0% and 100% are linear multipliers on the effect. This is
equivalent to multiplying the input image samples by amount. If the
parameter is missing, a value of 100% is used. The markup equivalent of
this function is given below.

brightness(<number> | <percentage>)

Applies a linear multiplier to the input image, making it appear more or
less bright. A value of 0% will create an image that is completely black.
A value of 100% leaves the input unchanged. Other values are linear
multipliers on the effect. Values of amount over 100% are allowed,
providing brighter results. If the parameter is missing, a value of 100%
is used. The markup equivalent of this function is given below.

contrast(<number> | <percentage>)

Adjusts the contrast of the input. A value of 0% will create an image
that is completely black. A value of 100% leaves the input unchanged.
Values of amount over 100% are allowed, providing results with more
contrast. If the parameter is missing, a value of 100% is used. The
markup equivalent of this function is given
below.

blur(<length>)

Applies a Gaussian blur to the input image. The passed parameter (the
radius) defines the value of the standard deviation to the Gaussian
function. If no parameter is provided, then a value of 0 is used. The
parameter is specified as a CSS length, but does not accept percentage
values. The markup equivalent of this function is given below.

drop-shadow(<shadow>)

Applies a drop shadow effect to the input image. A drop shadow is
effectively a blurred, offset version of the input image's alpha mask
drawn in a particular color, composited below the image. The function
accepts a parameter of type <shadow> (defined in CSS3 Backgrounds),
with the exception that the ‘inset’
keyword is not allowed. The markup equivalent of this function is given below.
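For example, a possible use of the function (the offsets, radius and
color are illustrative):

filter: drop-shadow(4px 4px 8px black);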

The <vertex-shader> and
<fragment-shader> parameters reference the shader
sources. The <vertex-mesh> parameter specifies the
granularity of the vertex mesh. The <params> specify
the parameters and corresponding values passed to the shader programs.
The markup equivalent is given below.

It might be clearer to name the custom()
function the shader() function instead and introduce an
feCustomShader filter primitive instead of ‘feCustom’.

The above list is a collection of effects that can be easily defined
in terms of SVG filters. However, there are many more interesting effects
that can be considered for inclusion. If accepted, there will have to be
equivalent XML elements for the effect. Effects considered include:

brightness, contrast, exposure

halftone

motion-blur(radius, angle)

posterize(levels)

bump(x, y, radius, intensity)

generators

circle-crop(x, y, radius)

affine-transform(some matrix)

crop(x, y, w, h)

bloom(radius, intensity)

gloom(radius, intensity)

mosaic(w,h)

displace(url, intensity)

edge-detect(intensity)

pinch(x, y, radius, scale)

twirl(x, y, radius, angle)

The first function in the list takes the element (SourceGraphic) as the input image. Subsequent
operations take the output from the previous function as the input image.
The exception is the function that references a ‘filter’
element, which can specify an alternate input, but still uses the previous
output as its SourceGraphic.
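For example, in the following declaration (the filter reference
#myFilter is hypothetical), the referenced ‘filter’ element receives the
element as SourceGraphic, its output becomes the input to grayscale(),
and the grayscale output becomes the input to blur():

filter: url(#myFilter) grayscale(50%) blur(2px);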

Specifies the coordinate system for the various length values within
the filter primitives and
for the attributes that define the filter primitive subregion.
If primitiveUnits="userSpaceOnUse", any
length values within the filter definitions represent values in the
current user coordinate
system in place at the time when the ‘filter’
element is referenced (i.e., the user coordinate system for the element
referencing the ‘filter’ element via a ‘filter’ property).
If primitiveUnits="objectBoundingBox",
then any length values within the filter definitions represent fractions
or percentages of the bounding box on the referencing element (see object bounding box units). Note
that if only one number was specified in a <number-optional-number>
value this number is expanded out before the ‘primitiveUnits’ computation takes place.
The lacuna value for ‘primitiveUnits’ is userSpaceOnUse. Animatable: yes.

An IRI
reference to another ‘filter’
element within the current SVG
document fragment. Any attributes which are defined on the
referenced ‘filter’ element which are not defined on
this element are inherited by this element. If this element has no
defined filter nodes, and the referenced element has defined filter
nodes (possibly due to its own href
attribute), then this element inherits the filter nodes defined from the
referenced ‘filter’ element. Inheritance can be indirect
to an arbitrary level; thus, if the referenced ‘filter’
element inherits attributes or its filter node specification due to its
own href attribute, then the current
element can inherit those attributes or filter node specifications.

This attribute is deprecated and should not be used in
new content; it is included for backwards compatibility reasons only.

Animatable: yes.

Properties inherit into the ‘filter’ element
from its ancestors; properties do not inherit from the element
referencing the ‘filter’ element.

‘filter’ elements are never rendered directly;
their only usage is as something that can be referenced using the ‘filter’ property. The ‘display’ property does not apply to the ‘filter’ element; thus, ‘filter’
elements are not directly rendered even if the ‘display’ property is set to a value other than
none, and ‘filter’
elements are available for referencing even when the ‘display’ property on the ‘filter’ element
or any of its ancestors is set to none.

9. Filter effects region

A ‘filter’ element can define a region on the
canvas to which a given filter effect applies and can provide a resolution
for any intermediate continuous tone images used to process any
raster-based filter primitives.
The ‘filter’ element has the following attributes
which work together to define the filter effects region:

If filterUnits="userSpaceOnUse", ‘x’, ‘y’,
‘width’, ‘height’ represent values in the current user
coordinate system in place at the time when the ‘filter’
element is referenced (i.e., the user coordinate system for the element
referencing the ‘filter’ element via a ‘filter’ property).

These attributes define a rectangular region on the canvas to which
this filter applies.

The amount of memory and processing time required to apply the filter
are related to the size of this rectangle and the ‘filterRes’ attribute of the filter.

The coordinate system for these attributes depends on the value for
attribute ‘filterUnits’.

The bounds of this rectangle act as a hard clipping region for each
filter primitive included with
a given ‘filter’ element; thus, if the effect of a
given filter primitive would extend beyond the bounds of the rectangle
(this sometimes happens when using a ‘feGaussianBlur’ filter primitive with a very
large ‘stdDeviation’), parts of the effect will get
clipped.

Defines the width and height of the intermediate images in pixels. If
not provided, then the user agent will use reasonable values to produce
a high-quality result on the output device.

Care should be taken when assigning a non-default value to this
attribute. Too small of a value may result in unwanted pixelation in the
result. Too large of a value may result in slow processing and large
memory usage.

Negative or zero
values disable rendering of the element which referenced the
filter.

Animatable: yes.

Note that both of the two possible values for ‘filterUnits’ (i.e., objectBoundingBox and userSpaceOnUse) result in a filter region whose coordinate system has
its X-axis and Y-axis each parallel to the X-axis and Y-axis,
respectively, of the user coordinate
system for the element to which the filter will be applied.

Sometimes implementers can achieve faster
performance when the filter region can be mapped directly to device
pixels; thus, for best performance on display devices, it is suggested
that authors define their region such that the user agent can align the filter region pixel-for-pixel with the
background. In particular, for best filter effects performance, avoid
rotating or skewing the user coordinate system. Explicit values for
attribute ‘filterRes’ can either help or harm
performance. If ‘filterRes’ is smaller than the automatic
(i.e., default) filter resolution, then the filter effect might have faster
performance (usually at the expense of quality). If ‘filterRes’ is larger than the automatic (i.e.,
default) filter resolution, then filter effects performance will usually
be slower.

It is often necessary to provide padding space
because the filter effect might impact bits slightly outside the
tight-fitting bounding
box on a given object. For these purposes, it is possible to provide
negative percentage values for ‘x’,
‘y’ and percentage values greater than 100% for ‘width’ and ‘height’. This, for example, is why the
defaults for the filter effects region are x="-10%"
y="-10%" width="120%" height="120%".

Implementations will often need to
maintain supplemental background image buffers in order to support the BackgroundImage and BackgroundAlpha pseudo input images.
Sometimes, the background image buffers will contain an in-memory copy of
the accumulated painting operations on the current canvas.

Because in-memory image buffers can take up significant system
resources, content must explicitly indicate to the
user agent that the document needs access to the background image before
BackgroundImage and BackgroundAlpha pseudo input images can be
used.

A background image is what has been rendered before the current
element. The host language is responsible for defining what "rendered
before" means in this context. For SVG, which uses the painter's
algorithm, "rendered before" means all of the elements that precede,
in pre-order traversal, the element to which the filter is applied.

Indicates that a new (i.e., initially transparent black) background
image canvas is established and that (in effect) all children of the
current container element shall be rendered into the new background
image canvas in addition to being rendered onto the target device.

accumulate (the initial/default value)

If an ancestor container element has a property value of
‘enable-background: new’, then all renderable child
elements of the current container element are rendered both onto the
parent container element's background image canvas and onto the target
device.

Otherwise, there is no current background image canvas, so it is only
necessary to render the renderable elements onto the target device. (No
need to render to the background image canvas.)

If a filter effect specifies either the BackgroundImage or the BackgroundAlpha pseudo input images and no
ancestor container element has a property value of
‘enable-background: new’, then the background image
request is technically in error. Processing will proceed without
interruption (i.e., no error message) and a transparent black image shall
be provided in response to the request.

10.1. Accessing the background image in SVG

This section only applies to the SVG definition of enable-background.

Assume you have an element E in the document and that E has a series of
ancestors A_1 (its immediate parent), A_2, etc. (Note:
A_0 is E.) Each ancestor A_i will have a corresponding
temporary background image offscreen buffer BUF_i. The contents
of the background image available to a ‘filter’
referenced by E is defined as follows:

Find the element A_i with the smallest subscript i
(including A_0 = E) for which the ‘enable-background’ property has the value
‘new’.

Note that if there is no such ancestor element, then the
outermost element A_n will be chosen.

For each A_i (from i=n to 1), initialize BUF_i to
transparent black. Render all children of A_i up to but not
including A_(i-1) into BUF_i. The children are painted,
then filtered, clipped, masked and composited using the various painting,
filtering, clipping, masking and object opacity settings on the given
child. Any filter effects, masking and group opacity that might be set on
A_i do not apply when rendering the children of
A_i into BUF_i.
(Note that for the case of A_0 = E, the graphical contents of E
are not rendered into BUF_1 and thus are not part of the
background image available to E. Instead, the graphical contents of E are
available via the SourceGraphic and SourceAlpha pseudo input images.)

Then, for each A_i (from i=1 to n-1), composite
BUF_i into BUF_(i+1).

The accumulated result (i.e., BUF_n) represents the
background image available to E.

The first set is the reference graphic. The reference graphic consists
of a red rectangle followed by a 50% transparent ‘g’ element.
Inside the ‘g’ is a green circle that partially
overlaps the rectangle and a blue triangle that partially overlaps the
circle. The three objects are then outlined by a rectangle stroked with a
thin blue line. No filters are applied to the reference graphic.

The second set enables background image processing and adds an empty
‘g’
element which invokes the ShiftBGAndBlur filter. This filter takes the
current accumulated background image (i.e., the entire reference graphic)
as input, shifts its offscreen down, blurs it, and then writes the result
to the canvas. Note that the offscreen for the filter is initialized to
transparent black, which allows the already rendered rectangle, circle
and triangle to show through after the filter renders its own result to
the canvas.
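A sketch of what ShiftBGAndBlur might look like (the region, offset and
deviation values are illustrative):

<filter id="ShiftBGAndBlur" filterUnits="userSpaceOnUse"
        x="0" y="0" width="1200" height="400">
  <feOffset in="BackgroundImage" dx="0" dy="125"/>
  <feGaussianBlur stdDeviation="8"/>
</filter>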

The third set enables background image processing and instead invokes
the ShiftBGAndBlur filter on the inner ‘g’ element. The accumulated background at
the time the filter is applied contains only the red rectangle. Because
the children of the inner ‘g’ (i.e., the circle and triangle) are not
part of the inner ‘g’ element's background and because
ShiftBGAndBlur ignores SourceGraphic, the children of the inner ‘g’ do not
appear in the result.

The fourth set enables background image processing and invokes the
ShiftBGAndBlur on the ‘polygon’ element that draws the triangle.
The accumulated background at the time the filter is applied contains the
red rectangle plus the green circle ignoring the effect of the ‘opacity’ property
on the inner ‘g’ element. (Note that the blurred green
circle at the bottom does not let the red rectangle show through on its
left side. This is due to ignoring the effect of the ‘opacity’
property.) Because the triangle itself is not part of the accumulated
background and because ShiftBGAndBlur ignores SourceGraphic, the triangle
does not appear in the result.

The fifth set is the same as the fourth except that filter
ShiftBGAndBlur_WithSourceGraphic is invoked instead of ShiftBGAndBlur.
ShiftBGAndBlur_WithSourceGraphic performs the same effect as
ShiftBGAndBlur, but then renders the SourceGraphic on top of the shifted,
blurred background image. In this case, SourceGraphic is the blue
triangle; thus, the result is the same as in the fourth case except that
the blue triangle now appears.

11. Filter primitives overview

11.1. Overview

This section describes the various filter primitives that can be
assembled to achieve a particular filter effect.

Unless otherwise stated, all image filters operate on premultiplied
RGBA samples. Filters which work more naturally on non-premultiplied data
(‘feColorMatrix’, ‘feCustom’ and
‘feComponentTransfer’) will temporarily undo
and redo premultiplication as specified. All raster effect filtering
operations take 1 to N input RGBA images, additional attributes as
parameters, and produce a single output RGBA image.

The RGBA result from each filter primitive will be clamped into the
allowable ranges for colors and opacity values. Thus, for example, the
result from a given filter
primitive will have any negative color values or opacity values
adjusted up to color/opacity of zero.

Sometimes filter primitives result in
undefined pixels. For example, filter primitive ‘feOffset’ can
shift an image down and to the right, leaving undefined pixels at the top
and left. In these cases, the undefined pixels are set to transparent
black.

11.2. Common attributes

The following attributes are available for most of the filter
primitives:

Assigned name for this filter
primitive. If supplied, then graphics that result from processing
this filter primitive can be
referenced by an ‘in’ attribute on a subsequent filter
primitive within the same ‘filter’
element. If no value is provided, the output will only be available for
re-use as the implicit input into the next filter primitive if that filter primitive provides no
value for its ‘in’ attribute.
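For example, in the following fragment (values illustrative) the first
primitive names its output "blur", the second consumes it explicitly via
its ‘in’ attribute, and the third, providing no ‘in’ attribute,
implicitly consumes the second primitive's unnamed output:

<feGaussianBlur in="SourceAlpha" stdDeviation="3" result="blur"/>
<feOffset in="blur" dx="2" dy="2"/>
<feComposite in2="SourceGraphic" operator="over"/>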

Can <filter-primitive-reference> be defined as
<author-ident> as in CSS Values and Units?

If the value for result appears
multiple times within a given ‘filter’
element, then a reference to that result will use the closest preceding
filter primitive with the
given value for attribute ‘result’. Forward references to results are
not allowed, and will be treated as if no result was specified.

Definitions for the six keywords:

SourceGraphic

This keyword represents the graphics elements that were the
original input into the ‘filter’
element. For raster effects filter primitives, the
graphics elements will be rasterized into an initially clear RGBA
raster in image space. Pixels left untouched by the original graphic
will be left clear. The image is specified to be rendered in linear
RGBA pixels. The alpha channel of this image captures any
anti-aliasing specified by SVG. (Since the raster is linear, the
alpha channel of this image will represent the exact percent coverage
of each pixel.)

SourceAlpha

This keyword represents the graphics elements that were the
original input into the ‘filter’
element. SourceAlpha has all of the same
rules as SourceGraphic except that only
the alpha channel is used. The input image is an RGBA image
consisting of implicitly black color values for the RGB channels, but
whose alpha channel is the same as SourceGraphic.

If this option is used, then some
implementations might need to rasterize the graphics elements in
order to extract the alpha channel.

FillPaint

This keyword represents the target element rendered filled.

Consider whether this should be e.g. the bounding client rect filled
with the current color, or if it makes sense to use the ‘fill’
property for this case too.

Note that text is generally painted filled,
not stroked.

The FillPaint image has conceptually
infinite extent. Frequently this image is opaque everywhere, but it
might not be if the "paint" itself has alpha, as in the case of a
gradient or pattern which itself includes transparent or
semi-transparent parts.

StrokePaint

This keyword represents the target element rendered
stroked.

For SVG, this keyword represents the value of the ‘stroke’ property on
the target element for the filter effect.

Consider whether this should be e.g. the bounding client rect filled
with one of the border colors, or if it makes sense to use the
‘stroke’ property for this case too.

Note that text is generally painted filled,
not stroked.

The StrokePaint image has conceptually
infinite extent. Frequently this image is opaque everywhere, but it
might not be if the "paint" itself has alpha, as in the case of a
gradient or pattern which itself includes transparent or
semi-transparent parts.

Animatable: yes.

11.3. Filter primitive subregion

Define filter primitive subregion for shorthands. E.g. the
output bounds of the previous filter functions, extended by the affected
area of the current filter function.

The filter primitive subregion
acts as a hard clipping rectangle on both the filter primitive's input
image(s) and the filter primitive result.

Consider making it possible to select between clip-input,
clip-output, clip-both or none.

All intermediate offscreens are defined to not exceed the intersection
of the filter primitive subregion
with the filter region. The filter region and any of the filter
primitive subregions are to be set up such that all offscreens are made
big enough to accommodate any pixels which even partly intersect with
either the filter region or the filter
primitive subregions.

The upper left rect shows an ‘feFlood’ with
flood-opacity="75%" so the cross should be visible through the
green rect in the middle.

The lower left rect shows an ‘feMerge’ that
merges SourceGraphic with FillPaint. Since the circle has fill-opacity="0.5"
it will also be transparent so that the cross is visible through the
green rect in the middle.

The upper right rect shows an ‘feBlend’ that
has mode="multiply". Since the circle in this case isn't
transparent the result is totally opaque. The rect should be dark green
and the cross should not be visible through it.

A limiting cone which restricts the region where the light is
projected. No light is projected outside the cone. limitingConeAngle represents the angle in degrees
between the spot light axis (i.e. the axis between the light source and
the point at which it is pointing) and the spot light cone. User agents
should apply a smoothing technique such as anti-aliasing at the boundary
of the cone.
If no value is specified, then no limiting cone will be applied. Animatable: yes.

The second input image to the blending operation. This attribute can
take on the same values as the ‘in’ attribute. Animatable: yes.

For all feBlend modes, the result opacity is computed as follows:

qr = 1 - (1-qa)*(1-qb)

For the compositing formulas below, the following definitions apply:

image A = in
image B = in2
cr = Result color (RGB) - premultiplied
qa = Opacity value at a given pixel for image A
qb = Opacity value at a given pixel for image B
ca = Color (RGB) at a given pixel for image A - premultiplied
cb = Color (RGB) at a given pixel for image B - premultiplied
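For example, if qa = 0.5 and qb = 0.5 at some pixel, then
qr = 1 - (1-0.5)*(1-0.5) = 0.75 at that pixel, regardless of the blending
mode chosen.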

The following table provides the list of available
image blending modes:
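As defined in SVG 1.1 [SVG11], the result color for each mode is:

normal:    cr = ca + cb * (1 - qa)
multiply:  cr = ca * cb + ca * (1 - qb) + cb * (1 - qa)
screen:    cr = ca + cb - ca * cb
darken:    cr = Min[(1 - qa) * cb + ca, (1 - qb) * ca + cb]
lighten:   cr = Max[(1 - qa) * cb + ca, (1 - qb) * ca + cb]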

This filter applies a matrix transformation on the RGBA color and
alpha values of every pixel on the input graphics to produce a result
with a new set of RGBA color and alpha values.

The calculations are performed on non-premultiplied color values. If
the input graphics consists of premultiplied color values, those values
are automatically converted into non-premultiplied color values for this
operation.

These matrices often perform an identity mapping in the alpha channel.
If that is the case, an implementation can avoid the costly undoing and
redoing of the premultiplication for all pixels with A = 1.

Attribute definitions:

type =
"matrix | saturate | hueRotate | luminanceToAlpha"

Indicates the type of matrix operation. The keyword matrix indicates that a full 5x4 matrix of values
will be provided. The other keywords represent convenience shortcuts to
allow commonly used color operations to be performed without specifying
a complete matrix. The lacuna
value for ‘type’ is matrix. Animatable: yes.

values = "<list of numbers>"

If the attribute is not specified, then the default behavior depends on
the value of attribute ‘type’. If type="matrix", then this attribute defaults to
the identity matrix. If type="saturate",
then this attribute defaults to the value 1, which results in the identity matrix. If
type="hueRotate", then this attribute
defaults to the value 0, which results in
the identity matrix. Animatable: yes.

This filter primitive performs component-wise remapping of data as
follows: R' = feFuncR(R), G' = feFuncG(G), B' = feFuncB(B), A' = feFuncA(A)
for every pixel. It allows operations like brightness adjustment,
contrast adjustment, color balance or thresholding.

The calculations are performed on non-premultiplied color values. If
the input graphics consists of premultiplied color values, those values
are automatically converted into non-premultiplied color values for this
operation. (Note that the undoing and redoing of the premultiplication can
be avoided if ‘feFuncA’ is the identity transform and all
alpha values on the source graphic are set to 1.)

The child elements of a ‘feComponentTransfer’ element specify the transfer functions for the four
channels:

‘feFuncR’ — transfer function for the red
component of the input graphic

‘feFuncG’ — transfer function for the green
component of the input graphic

‘feFuncB’ — transfer function for the blue
component of the input graphic

‘feFuncA’ — transfer function for the alpha
component of the input graphic

Indicates the type of component transfer function. The type of
function determines the applicability of the other attributes.

In the following, C is the initial component (e.g., ‘feFuncR’), C' is the remapped
component; both in the closed interval [0,1].

For identity:

C' = C

For table, the function is
defined by linear interpolation between values given in the
attribute ‘tableValues’. The table has
n+1 values (i.e., v_0 to v_n)
specifying the start and end values for n evenly sized
interpolation regions. Interpolations use the following formula:

For a value C < 1 find k such
that:

k/n <= C < (k+1)/n

The result C' is given by:

C' = v_k + (C - k/n)*n * (v_(k+1) - v_k)

If C = 1 then:

C' = v_n.

For discrete, the function is
defined by the step function given in the attribute ‘tableValues’, which provides a list of
n values (i.e., v_0 to v_(n-1)) in
order to identify a step function consisting of n steps.
The step function is defined by the following formula:

For a value C < 1 find k such that:

k/n <= C < (k+1)/n

The result C' is given by:

C' = v_k

If C = 1 then:

C' = v_(n-1).

When type="table", the list of <number>
s v0,v1,...vn, separated by white space and/or a comma,
which define the lookup table. An empty list results in an identity
transfer function. If the attribute is not specified, then the
effect is as if an empty list were provided.Animatable: yes.

This filter performs the combination of the two input images
pixel-wise in image space using one of the Porter-Duff [PORTERDUFF] compositing operations:
over, in, atop, out, xor [SVG-COMPOSITING]. Additionally, a
component-wise arithmetic operation (with the result clamped
between [0..1]) can be applied.

The arithmetic operation is useful for combining the
output from the ‘feDiffuseLighting’ and ‘feSpecularLighting’ filters with texture
data. It is also useful for implementing dissolve. If the
arithmetic operation is chosen, each result pixel is computed
using the following formula:

result = k1*i1*i2 + k2*i1 + k3*i2 + k4

where:

i1 and i2 indicate the corresponding
pixel channel values of the input image, which map to ‘in’ and ‘in2’
respectively

k1, k2, k3 and k4 indicate the values
of the attributes with the same name
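For example, a 50% dissolve between the two inputs can be obtained with
k1 = 0, k2 = 0.5, k3 = 0.5 and k4 = 0, giving:

result = 0.5*i1 + 0.5*i2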

For this filter primitive, the extent of the resulting image might
grow as described in the section that describes the filter primitive subregion.

Attribute definitions:

operator
= "over | in | out | atop | xor | arithmetic"

The compositing operation that is to be performed. All of the
operator types except arithmetic match the corresponding operation
as described in [PORTERDUFF]. The
arithmetic operator is described
above. The lacuna
value for ‘operator’ is over. Animatable: yes.

‘feConvolveMatrix’ applies a matrix convolution filter effect. A
convolution combines pixels in the input image with neighboring pixels
to produce a resulting image. A wide variety of imaging operations can
be achieved through convolutions, including blurring, edge detection,
sharpening, embossing and beveling.

A matrix convolution is based on an n-by-m matrix (the convolution
kernel) which describes how a given pixel value in the input image is
combined with its neighboring pixel values to produce a resulting
pixel value. Each result pixel is determined by applying the kernel
matrix to the corresponding source pixel and its neighboring pixels.
The basic convolution formula which is applied to each color value for
a given pixel is:

where "orderX" and "orderY" represent the X and Y values for the ‘order’ attribute, "targetX" represents the
value of the ‘targetX’ attribute, "targetY" represents
the value of the ‘targetY’ attribute, "kernelMatrix"
represents the value of the ‘kernelMatrix’ attribute, "divisor"
represents the value of the ‘divisor’ attribute, and "bias" represents
the value of the ‘bias’ attribute.

Note in the above formulas that the values in the kernel matrix are
applied such that the kernel matrix is rotated 180 degrees relative to
the source and destination images in order to match convolution theory
as described in many computer graphics textbooks.

To illustrate, suppose you have an input image which is 5 pixels by
5 pixels, whose color values for one of the color channels are as
follows:

Let's focus on the color value at the second row and second column
of the image (source pixel value is 120). Assuming the simplest case
(where the input image's pixel grid aligns perfectly with the kernel's
pixel grid) and assuming default values for attributes ‘divisor’, ‘targetX’ and ‘targetY’, then the resulting color value
will be:

Because they operate on pixels, matrix convolutions are inherently
resolution-dependent. To make ‘feConvolveMatrix’ produce
resolution-independent results, an explicit value should be provided
for either the ‘filterRes’ attribute on the ‘filter’ element and/or attribute ‘kernelUnitLength’.
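As a concrete sketch, a common 3x3 sharpening kernel expressed with this
primitive (the kernel values are illustrative; the divisor defaults to
the sum of the values, here 1):

<feConvolveMatrix in="SourceGraphic" order="3"
                  kernelMatrix="0 -1 0  -1 5 -1  0 -1 0"/>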

‘kernelUnitLength’, in combination with the
other attributes, defines an implicit pixel grid in the filter effects
coordinate system (i.e., the coordinate system established by the ‘primitiveUnits’ attribute). If the pixel
grid established by ‘kernelUnitLength’ is not scaled to match
the pixel grid established by attribute ‘filterRes’ (implicitly or explicitly),
then the input image will be temporarily rescaled to match its pixels
with ‘kernelUnitLength’. The convolution happens
on the resampled image. After applying the convolution, the image is
resampled back to the original resolution.

When the image must be resampled to match the coordinate system
defined by ‘kernelUnitLength’ prior to convolution, or
resampled to match the device coordinate system after convolution, it
is recommended that high quality viewers make use of appropriate
interpolation techniques, for example bilinear or bicubic. Depending
on the speed of the available interpolents, this choice may be
affected by the ‘image-rendering’ property setting. Note
that implementations might choose approaches that minimize or
eliminate resampling when not necessary to produce proper results,
such as when the document is zoomed out such that ‘kernelUnitLength’ is considerably smaller
than a device pixel.

Indicates the number of cells in each dimension for ‘kernelMatrix’. The values provided must
be <integer>s greater than zero. Values that are not integers will be
truncated, i.e. rounded to the closest integer value towards zero. The first
number, <orderX>, indicates the number of columns in the
matrix. The second number, <orderY>, indicates the number of
rows in the matrix. If <orderY> is not provided, it defaults
to <orderX>.
It is recommended that only small values (e.g., 3) be used; higher
values may result in very high CPU overhead and usually do not
produce results that justify the impact on performance.
The lacuna
value for ‘order’ is 3. Animatable: yes.

kernelMatrix = "<list of
numbers>"

The list of <number>s that make up the kernel matrix for the
convolution. Values are separated by space characters and/or a comma.
The number of entries in the list must equal <orderX> times
<orderY>. Animatable: yes.

After applying the kernelMatrix to
the input image to yield a number, that number is divided by ‘divisor’ to yield the final destination
color value. A divisor that is the sum of all the matrix values
tends to have an evening effect on the overall color intensity of
the result. If the specified divisor is zero then the default value
will be used instead. The lacuna
value is the sum of all values in kernelMatrix, with the
exception that if the sum is zero, then the divisor is set to 1. Animatable: yes.

After applying the kernelMatrix to
the input image to yield a number and applying the ‘divisor’, the ‘bias’ attribute is added to each
component. One application of ‘bias’ is when it is desirable to have
.5 gray value be the zero response of
the filter. The bias property shifts the range of the filter. This
allows representation of values that would otherwise be clamped to 0
or 1.
The lacuna
value for ‘bias’ is 0. Animatable: yes.

Determines the positioning in X of the convolution matrix
relative to a given target pixel in the input image. The leftmost
column of the matrix is column number zero. The value must be such
that: 0 <= targetX < orderX. By default, the convolution
matrix is centered in X over each pixel of the input image (i.e.,
targetX = floor ( orderX / 2 )). Animatable: yes.

Determines the positioning in Y of the convolution matrix
relative to a given target pixel in the input image. The topmost row
of the matrix is row number zero. The value must be such that: 0
<= targetY < orderY. By default, the convolution matrix is
centered in Y over each pixel of the input image (i.e., targetY =
floor ( orderY / 2 )). Animatable: yes.

edgeMode = "duplicate |
wrap | none"

Determines how to extend the input image as necessary with color
values so that the matrix operations can be applied when the kernel
is positioned at or near the edge of the input image.

"duplicate" indicates that the input image is extended along each
of its borders as necessary by duplicating the color values at the
given edge of the input image.

kernelUnitLength = "<number-optional-number>"

The first number is the <dx> value. The second number is
the <dy> value. If the <dy> value is not specified, it
defaults to the same value as <dx>. Indicates the intended
distance in current filter units (i.e., units as determined by the
value of attribute ‘primitiveUnits’) between successive
columns and rows, respectively, in the ‘kernelMatrix’. By specifying value(s)
for ‘kernelUnitLength’, the kernel becomes
defined in a scalable, abstract coordinate system. If ‘kernelUnitLength’ is not specified, the
default value is one pixel in the offscreen bitmap, which is a
pixel-based coordinate system, and thus potentially not scalable.
For some level of consistency across display media and user agents,
it is necessary that a value be provided for at least one of ‘filterRes’ and ‘kernelUnitLength’. In some
implementations, the most consistent results and the fastest
performance will be achieved if the pixel grid of the temporary
offscreen images aligns with the pixel grid of the kernel.
If a negative or zero value is specified the default value will be
used instead. Animatable: yes.

preserveAlpha = "false |
true"

A value of false indicates that the
convolution will apply to all channels, including the alpha channel.
In this case the ALPHA_(X,Y) of the convolution formula for a
given pixel is:

A value of true indicates that the
convolution will only apply to the color channels. In this case, the
filter will temporarily unpremultiply the color component values,
apply the kernel, and then re-premultiply at the end. In this case
the ALPHA_(X,Y) of the convolution formula for a
given pixel is:

This filter primitive lights an image using the alpha channel as a
bump map. The resulting image is an RGBA opaque image based on the
light color with alpha = 1.0 everywhere. The lighting calculation
follows the standard diffuse component of the Phong lighting model.
The resulting image depends on the light color, light position and
surface geometry of the input bump map.

The light map produced by this filter primitive can be combined
with a texture image using the multiply term of the arithmetic
operator of the ‘feComposite’ compositing method. Multiple
light sources can be simulated by adding several of these light maps
together before applying them to the texture image.
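A sketch of that combination (attribute values illustrative): the
‘feComposite’ below uses only the k1 (multiply) term, so each result
pixel is light * texture.

<feDiffuseLighting in="SourceAlpha" surfaceScale="2" diffuseConstant="1"
                   lighting-color="white" result="light">
  <feDistantLight azimuth="45" elevation="60"/>
</feDiffuseLighting>
<feComposite in="light" in2="SourceGraphic" operator="arithmetic"
             k1="1" k2="0" k3="0" k4="0"/>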

The formulas below make use of 3x3 filters. Because they operate on
pixels, such filters are inherently resolution-dependent. To make ‘feDiffuseLighting’ produce
resolution-independent results, an explicit value should be provided
for either the ‘filterRes’ attribute on the ‘filter’ element and/or attribute ‘kernelUnitLength’.

‘kernelUnitLength’, in combination with the
other attributes, defines an implicit pixel grid in the filter effects
coordinate system (i.e., the coordinate system established by the ‘primitiveUnits’ attribute). If the pixel
grid established by ‘kernelUnitLength’ is not scaled to match
the pixel grid established by attribute ‘filterRes’ (implicitly or explicitly),
then the input image will be temporarily rescaled to match its pixels
with ‘kernelUnitLength’. The 3x3 filters are
applied to the resampled image. After applying the filter, the image
is resampled back to its original resolution.

When the image must be
resampled, it is recommended that high quality viewers make use of
appropriate interpolation techniques, for example bilinear or
bicubic. Depending on the speed of the available interpolents,
this choice may be affected by the ‘image-rendering’ property setting. Note
that implementations might choose approaches that minimize or
eliminate resampling when not necessary to produce proper results,
such as when the document is zoomed out such that ‘kernelUnitLength’ is considerably smaller
than a device pixel.

For the formulas that follow, the
Norm(A_x, A_y, A_z) function
is defined as:

ED: Consider making the following in MathML

Norm(A_x, A_y, A_z) = sqrt(A_x^2 + A_y^2 + A_z^2)

The resulting RGBA image is computed as follows:

D_r = k_d * N.L * L_r
D_g = k_d * N.L * L_g
D_b = k_d * N.L * L_b
D_a = 1.0

where

k_d = diffuse lighting constant
N = surface normal unit vector, a function of x and y
L = unit vector pointing from surface to light, a function of x and y
in the point and spot light cases
L_r, L_g, L_b = RGB components of light,
a function of x and y in the spot light case

N is a function of x and y and depends on the surface gradient as
follows:

The surface described by the input alpha image I(x,y) is:

Z (x,y) = surfaceScale * I(x,y)

Surface normal is calculated using the
Sobel gradient 3x3 filter. Different filter kernels are used depending
on whether the given pixel is on the interior or an edge. For each
case, the formula is:

In these formulas, the dx and dy values
(e.g., I(x-dx,y-dy)), represent deltas relative to a
given (x,y) position for the purpose of estimating the
slope of the surface at that point. These deltas are determined by the
value (explicit or implicit) of attribute ‘kernelUnitLength’.

Top/left corner:

FACTORx=2/(3*dx)
Kx =
| 0 0 0 |
| 0 -2 2 |
| 0 -1 1 |

FACTORy=2/(3*dy)
Ky =
| 0 0 0 |
| 0 -2 -1 |
| 0 2 1 |

Top row:

FACTORx=1/(3*dx)
Kx =
| 0 0 0 |
| -2 0 2 |
| -1 0 1 |

FACTORy=1/(2*dy)
Ky =
| 0 0 0 |
| -1 -2 -1 |
| 1 2 1 |

Top/right corner:

FACTORx=2/(3*dx)
Kx =
| 0 0 0 |
| -2 2 0 |
| -1 1 0 |

FACTORy=2/(3*dy)
Ky =
| 0 0 0 |
| -1 -2 0 |
| 1 2 0 |

Left column:

FACTORx=1/(2*dx)
Kx =
| 0 -1 1 |
| 0 -2 2 |
| 0 -1 1 |

FACTORy=1/(3*dy)
Ky =
| 0 -2 -1 |
| 0 0 0 |
| 0 2 1 |

Interior pixels:

FACTORx=1/(4*dx)
Kx =
| -1 0 1 |
| -2 0 2 |
| -1 0 1 |

FACTORy=1/(4*dy)
Ky =
| -1 -2 -1 |
| 0 0 0 |
| 1 2 1 |

Right column:

FACTORx=1/(2*dx)
Kx =
| -1 1 0|
| -2 2 0|
| -1 1 0|

FACTORy=1/(3*dy)
Ky =
| -1 -2 0 |
| 0 0 0 |
| 1 2 0 |

Bottom/left corner:

FACTORx=2/(3*dx)
Kx =
| 0 -1 1 |
| 0 -2 2 |
| 0 0 0 |

FACTORy=2/(3*dy)
Ky =
| 0 -2 -1 |
| 0 2 1 |
| 0 0 0 |

Bottom row:

FACTORx=1/(3*dx)
Kx =
| -1 0 1 |
| -2 0 2 |
| 0 0 0 |

FACTORy=1/(2*dy)
Ky =
| -1 -2 -1 |
| 1 2 1 |
| 0 0 0 |

Bottom/right corner:

FACTORx=2/(3*dx)
Kx =
| -1 1 0 |
| -2 2 0 |
| 0 0 0 |

FACTORy=2/(3*dy)
Ky =
| -1 -2 0 |
| 1 2 0 |
| 0 0 0 |

L, the unit vector from the image sample to the light, is
calculated as follows:

kernelUnitLength = "<number-optional-number>"

The first number is the <dx> value. The second number is
the <dy> value. If the <dy> value is not specified, it
defaults to the same value as <dx>. Indicates the intended
distance in current filter units (i.e., units as determined by the
value of attribute ‘primitiveUnits’) for dx and
dy, respectively, in the surface normal calculation
formulas. By specifying value(s) for kernelUnitLength, the kernel becomes defined
in a scalable, abstract coordinate system. If kernelUnitLength is not specified, the
dx and dy values should represent very
small deltas relative to a given (x,y) position, which
might be implemented in some cases as one pixel in the intermediate
image offscreen bitmap, which is a pixel-based coordinate system,
and thus potentially not scalable. For some level of consistency
across display media and user agents, it is necessary that a value
be provided for at least one of ‘filterRes’ and kernelUnitLength. Discussion of intermediate
images is in the Introduction and in
the description of attribute ‘filterRes’.
If a negative or zero value is specified the default value will be
used instead. Animatable: yes.

The displacement map, ‘in2’, defines the inverse of the mapping
performed.
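As defined in SVG 1.1 [SVG11], the mapping is:

P'(x,y) <- P( x + scale * (XC(x,y) - .5), y + scale * (YC(x,y) - .5))

where P(x,y) is the input image pixel, P'(x,y) the destination pixel,
and XC and YC the component values of the channels selected for
displacement along x and y respectively.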

The input image ‘in’ is to remain premultiplied for this filter
primitive. The calculations using the pixel values from ‘in2’ are performed using non-premultiplied
color values. If the image from ‘in2’ consists of premultiplied color values,
those values are automatically converted into non-premultiplied color
values before performing this operation.

This filter can have arbitrary non-localized effect on the input
which might require substantial buffering in the processing pipeline.
However with this formulation, any intermediate buffering needs can be
determined by ‘scale’ which represents the maximum range
of displacement in either x or y.

When applying this filter, the source pixel location will often lie
between several source pixels. In this case it is recommended that
high quality viewers apply an interpolent on the surrounding pixels,
for example bilinear or bicubic, rather than simply selecting the
nearest source pixel. Depending on the speed of the available
interpolents, this choice may be affected by the ‘image-rendering’ property setting.

scale = "<number>"

Displacement scale factor. The amount is expressed in the
coordinate system established by attribute ‘primitiveUnits’ on the ‘filter’ element.
When the value of this attribute is 0,
this operation has no effect on the source image.

The ‘flood-color’ property indicates what color
to use to flood the current filter
primitive subregion. The keyword ‘currentColor’ and ICC colors can be specified in
the same manner as within a <paint> specification for the
‘fill’ and ‘stroke’ properties.
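For example, the semi-transparent green flood described in the rendering
example above could be written as (a sketch; values illustrative):

<feFlood flood-color="green" flood-opacity="0.75"/>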

The Gaussian blur kernel is an approximation of the normalized
convolution:

G(x,y) = H(x)I(y)

where

H(x) = exp(-x^2 / (2*s^2)) / sqrt(2*pi*s^2)

and

I(y) = exp(-y^2 / (2*t^2)) / sqrt(2*pi*t^2)

with ‘s’ being the standard
deviation in the x direction and ‘t’
being the standard deviation in the y direction, as specified by ‘stdDeviation’.

The value of ‘stdDeviation’ can be either one or two
numbers. If two numbers are provided, the first number represents a
standard deviation value along the x-axis of the current coordinate
system and the second value represents a standard deviation in Y. If
one number is provided, then that value is used for both X and Y.

Even if only one value is provided for ‘stdDeviation’, this can be implemented as
a separable convolution.

For larger values of ‘s’ (s >=
2.0), an approximation can be used: Three successive box-blurs build a
piece-wise quadratic convolution kernel, which approximates the
Gaussian kernel to within roughly 3%.

let d = floor(s * 3*sqrt(2*pi)/4 + 0.5)

... if d is odd, use three box-blurs of size ‘d’, centered on the output pixel.

... if d is even, two box-blurs of size ‘d’ (the first one centered on the pixel
boundary between the output pixel and the one to the left, the second
one centered on the pixel boundary between the output pixel and the
one to the right) and one box blur of size ‘d+1’ centered on the output pixel.

The approximation formula also applies correspondingly to ‘t’.
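For example, with s = 2.0: d = floor(2.0 * 3*sqrt(2*pi)/4 + 0.5) =
floor(4.26) = 4. Since d is even, the approximation uses two box-blurs of
size 4 (centered on the left and right pixel boundaries) and one box-blur
of size 5 centered on the output pixel.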

Frequently this operation will take place on alpha-only images,
such as that produced by the built-in input, SourceAlpha. The
implementation may notice this and optimize the single channel case.
If the input has infinite extent and is constant (e.g. FillPaint where
the fill is a solid color), this operation has no effect. If the input
has infinite extent and the filter result is the input to an ‘feTile’,
the filter is evaluated with periodic boundary conditions.

By default,
the subregion interacts as input and output clipping and this sentence
would be irrelevant. However, this changes if the WG decides to allow
a choice between input and output clipping.

What about
other inputs with infinite extents? What is the ‘periodic boundary condition’?

The standard deviation for the blur operation. If two <number>s
are provided, the first number represents a standard deviation
value along the x-axis of the coordinate system established by
attribute ‘primitiveUnits’ on the ‘filter’ element. The second value
represents a standard deviation in Y. If one number is provided,
then that value is used for both X and Y.
A value of zero disables the effect of the given filter primitive
(i.e., the result is the filter input image).
If ‘stdDeviation’ is 0 in only one of X or Y, then the effect is
that the blur is only applied in the direction that has a non-zero
value.
The lacuna
value for ‘stdDeviation’ is 0.Animatable: yes.

This filter primitive refers to a graphic external to this filter
element, which is loaded or rendered into an RGBA raster and becomes
the result of the filter primitive.

This filter primitive can refer to an external image or can be a
reference to another piece of SVG. It produces an image similar to the
built-in image source SourceGraphic except that the graphic
comes from an external source.

If the xlink:href references a
stand-alone image resource such as a JPEG, PNG or SVG file, then the
image resource is rendered according to the behavior of the ‘image’
element; otherwise, the referenced resource is rendered according to
the behavior of the ‘use’ element. In either case, the
current user coordinate system depends on the value of attribute ‘primitiveUnits’ on the ‘filter’ element. The processing of
the preserveAspectRatio attribute on the
‘feImage’ element is identical to that of
the ‘image’ element.

When the
referenced image must be resampled to match the device coordinate
system, it is recommended that high quality viewers make use of
appropriate interpolation techniques, for example bilinear or
bicubic. Depending on the speed of the available interpolants,
this choice may be affected by the ‘image-rendering’ property setting.

Attribute definitions:

xlink:href
= "<IRI>"

An IRI
reference to an image resource or to an element. Animatable: yes.

Example feImage illustrates how
images are placed relative to an object. From left to right:

The default placement of an image. Note that the image is
centered in the filter region and
has the maximum size that will fit in the region consistent with
preserving the aspect ratio.

The image stretched to fit the bounding box of an object.

The image placed using user coordinates. Note that the image is
first centered in a box the size of the filter region and has the maximum
size that will fit in the box consistent with preserving the aspect
ratio. This box is then shifted by the given ‘x’ and ‘y’ values relative to the viewport the
object is in.

This filter primitive composites input image layers on top of each
other using the over operator with Input1
(corresponding to the first ‘feMergeNode’ child element) on the
bottom and the last specified input, InputN (corresponding to
the last ‘feMergeNode’ child element), on
top.

Many effects produce a number of intermediate layers in order to
create the final output image. This filter allows us to collapse those
into a single image. Although this could be done by using n-1
Composite-filters, it is more convenient to have this common operation
available in this form, and offers the implementation some additional
flexibility.

Each ‘feMerge’ element can have any number of
‘feMergeNode’ subelements, each of which
has an in
attribute.

The canonical implementation of feMerge is to render the entire
effect into one RGBA layer, and then render the resulting layer on the
output device. In certain cases (in particular if the output device
itself is a continuous tone device), and since merging is associative,
it might be a sufficient approximation to evaluate the effect one
layer at a time and render each layer individually onto the output
device bottom to top.
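As a rough, non-normative illustration, the layer-at-a-time
evaluation can be sketched in C as a bottom-to-top ‘over’ loop on
premultiplied RGBA pixels; the types and names below are assumptions
of the sketch:

/* Sketch: merging n premultiplied RGBA layers bottom-to-top with the
 * 'over' operator. layers[0] corresponds to the first feMergeNode
 * (the bottom), layers[n-1] to the last (the top). */
typedef struct { float r, g, b, a; } RGBA; /* premultiplied */

static RGBA over(RGBA src, RGBA dst)
{
    RGBA out;
    out.r = src.r + dst.r * (1.0f - src.a);
    out.g = src.g + dst.g * (1.0f - src.a);
    out.b = src.b + dst.b * (1.0f - src.a);
    out.a = src.a + dst.a * (1.0f - src.a);
    return out;
}

static RGBA merge_pixel(const RGBA *layers, int n)
{
    RGBA acc = layers[0];
    for (int i = 1; i < n; i++)
        acc = over(layers[i], acc); /* each layer goes over the result */
    return acc;
}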

If the topmost image input is SourceGraphic and this ‘feMerge’ is the last filter primitive in
the filter, the implementation is encouraged to render the layers up
to that point, and then render the SourceGraphic directly from its vector
description on top.

The example at the start of this chapter
makes use of the feMerge
filter primitive to composite two intermediate filter results
together.

This filter primitive performs "fattening" or "thinning" of
artwork. It is particularly useful for fattening or thinning an alpha
channel.

The dilation (or erosion) kernel is a rectangle with a width of
2*x-radius and a height of 2*y-radius. In dilation,
the output pixel is the individual component-wise maximum of the
corresponding R,G,B,A values in the input image's kernel rectangle. In
erosion, the output pixel is the individual component-wise minimum of
the corresponding R,G,B,A values in the input image's kernel
rectangle.
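A minimal per-pixel sketch in C, assuming a single channel with
pixel values in [0, 1] and illustrative names; an RGBA image applies
the same operation to each channel independently:

#include <math.h>

/* Sketch: dilation (component-wise maximum) or erosion (minimum)
 * over the kernel rectangle around (x, y). */
static float morph(const float *img, int w, int h,
                   int x, int y, int rx, int ry, int dilate)
{
    float best = dilate ? 0.0f : 1.0f;
    for (int j = y - ry; j <= y + ry; j++) {
        for (int i = x - rx; i <= x + rx; i++) {
            if (i < 0 || j < 0 || i >= w || j >= h)
                continue; /* ignore samples outside the image */
            float v = img[j * w + i];
            best = dilate ? fmaxf(best, v) : fminf(best, v);
        }
    }
    return best;
}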

Frequently this operation will take place on alpha-only images,
such as that produced by the built-in input, SourceAlpha. In
that case, the implementation might want to optimize the single
channel case.

If the input has infinite extent and is constant (e.g. FillPaint where the
fill is a solid color), this operation has no effect. If the input has
infinite extent and the filter result is the input to an ‘feTile’, the filter is evaluated with
periodic boundary conditions.

By default,
the subregion interacts as input and output clipping and this sentence
would be irrelevant. However, this changes if the WG decides to allow
a choice between input and output clipping.

What about other
inputs with infinite extents? What is the ‘periodic
boundary condition’?

Because ‘feMorphology’ operates on premultiplied
color values, it will always result in color values less than or equal
to the alpha channel.

Attribute definitions:

operator
= "erode | dilate"

A keyword indicating whether to erode (i.e., thin) or dilate
(fatten) the source graphic. The lacuna value for ‘operator’ is erode. Animatable: yes.

The radius (or radii) for the operation. If two
<number>s are provided, the first number represents an x-radius and the
second value represents a y-radius. If one number is provided, then
that value is used for both X and Y. The values are in the
coordinate system established by attribute ‘primitiveUnits’ on the ‘filter’ element.
A negative or zero value disables the effect of the given filter
primitive (i.e., the result is a transparent black image).
If the attribute is not specified, then the effect is as if a value
of 0 were specified. Animatable: yes.

This filter primitive offsets the input image relative to its
current position in the image space by the specified vector.

This is important for effects like drop shadows.
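Conceptually, result(x, y) = input(x - dx, y - dy). A minimal sketch
in C for the integer-offset, single-channel case (fractional offsets,
discussed next, need interpolation; names and layout are
illustrative):

/* Sketch: feOffset for integer offsets; locations shifted in from
 * outside the input read as transparent black (0). */
static void offset_image(const float *in, float *out,
                         int w, int h, int dx, int dy)
{
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int sx = x - dx, sy = y - dy;
            out[y * w + x] =
                (sx >= 0 && sy >= 0 && sx < w && sy < h)
                    ? in[sy * w + sx] : 0.0f;
        }
    }
}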

When applying this filter, the destination location may be offset
by a fraction of a pixel in device space. In this case a high quality viewer
should make use of appropriate interpolation techniques, for example
bilinear or bicubic. This is especially recommended for dynamic
viewers where this interpolation provides visually smoother movement
of images. For static viewers this is less of a concern. Close
attention should be paid to the ‘image-rendering’ property setting to
determine the author's intent.

The amount to offset the input graphic along the x-axis. The
offset amount is expressed in the coordinate system established by
attribute ‘primitiveUnits’ on the ‘filter’ element.
If the attribute is not specified, then the effect is as if a value
of 0 were specified. Animatable: yes.

The amount to offset the input graphic along the y-axis. The
offset amount is expressed in the coordinate system established by
attribute ‘primitiveUnits’ on the ‘filter’ element.
If the attribute is not specified, then the effect is as if a value
of 0 were specified. Animatable: yes.

The example at the start of this chapter
makes use of the feOffset
filter primitive to offset the drop shadow from the original source
graphic.

This filter primitive lights a source graphic using the alpha
channel as a bump map. The resulting image is an RGBA image based on
the light color. The lighting calculation follows the standard
specular component of the Phong lighting model. The resulting image
depends on the light color, light position and surface geometry of the
input bump map. The result of the lighting calculation is added. The
filter primitive assumes that the viewer is at infinity in the z
direction (i.e., the unit vector in the eye direction is (0,0,1)
everywhere).

This filter primitive produces an image which contains the specular
reflection part of the lighting calculation. Such a map is intended to
be combined with a texture using the add term of the
arithmetic ‘feComposite’ method. Multiple light
sources can be simulated by adding several of these light maps together before
applying the result to the texture image.

Unlike the ‘feDiffuseLighting’, the ‘feSpecularLighting’ filter produces a
non-opaque image. This is due to the fact that the specular result
(Sr,Sg,Sb,Sa) is meant to
be added to the textured image. The alpha channel of the result is the
max of the color components, so that where the specular light is zero,
no additional coverage is added to the image and a fully white
highlight will add opacity.
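A non-normative sketch in C of the per-light specular term, assuming
the standard Phong quantities the text refers to: N is the unit
surface normal derived from the bump map, L the unit vector toward the
light, ks the specular constant and se the specular exponent (both
parameters of this primitive); the names are illustrative:

#include <math.h>

/* Sketch: specular term for one surface point and one light. The eye
 * vector is (0, 0, 1) as stated above, so the halfway vector H is
 * L + (0, 0, 1), normalized. The returned factor scales each light
 * color channel. */
static float specular_term(const float N[3], const float L[3],
                           float ks, float se)
{
    float H[3] = { L[0], L[1], L[2] + 1.0f };
    float len = sqrtf(H[0]*H[0] + H[1]*H[1] + H[2]*H[2]);
    if (len == 0.0f)
        return 0.0f;
    float ndoth = (N[0]*H[0] + N[1]*H[1] + N[2]*H[2]) / len;
    if (ndoth < 0.0f)
        ndoth = 0.0f;
    return ks * powf(ndoth, se);
}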

The first number is the <dx> value. The second number is
the <dy> value. If the <dy> value is not specified, it
defaults to the same value as <dx>. Indicates the intended
distance in current filter units (i.e., units as determined by the
value of attribute ‘primitiveUnits’) for dx and
dy, respectively, in the surface normal calculation
formulas. By specifying value(s) for kernelUnitLength, the kernel becomes defined
in a scalable, abstract coordinate system. If kernelUnitLength is not specified, the
dx and dy values should represent very
small deltas relative to a given (x,y) position, which
might be implemented in some cases as one pixel in the intermediate
image offscreen bitmap, which is a pixel-based coordinate system,
and thus potentially not scalable. For some level of consistency
across display media and user agents, it is necessary that a value
be provided for at least one of filterRes and kernelUnitLength. Discussion of intermediate
images can be found in the Introduction and in
the description of attribute ‘filterRes’.
If a negative or zero value is specified the default value will be
used instead. Animatable: yes.

This filter primitive fills a target rectangle with a repeated,
tiled pattern of an input image. The target rectangle is as large as
the filter primitive subregion
established by the ‘feTile’ element.

Typically, the input image has been defined with its own filter primitive subregion in
order to define a reference tile. ‘feTile’ replicates
the reference tile in both X and Y to completely fill the target
rectangle. The top/left corner of each given tile is at location
(x+i*width,y+j*height), where (x,y)
represents the top/left of the input image's filter primitive subregion,
width and height represent the width and
height of the input image's filter
primitive subregion, and i and j can be
any integer value. In most cases, the input image will have a smaller
filter primitive subregion
than the ‘feTile’ in order to achieve a repeated
pattern effect.
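A rough sketch in C of the coordinate mapping implied above: any
output location folds back into the reference tile by a positive
modulo (the function name is illustrative):

#include <math.h>

/* Sketch: fold output coordinate v into the tile that starts at
 * 'origin' and has the given 'extent'. For an output location (x, y),
 * sample the input at (wrap(x, tx, tw), wrap(y, ty, th)), where
 * (tx, ty, tw, th) is the input's filter primitive subregion. */
static double wrap(double v, double origin, double extent)
{
    double m = fmod(v - origin, extent);
    if (m < 0.0)
        m += extent;
    return origin + m;
}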

Implementers must take
appropriate measures in constructing the tiled image to avoid
artifacts between tiles, particularly in situations where the
user-to-device transform includes shear and/or rotation. Unless care is taken,
interpolation can lead to edge pixels in the tile having opacity
values lower or higher than expected due to the interaction of
painting adjacent tiles which each have partial overlap with
particular pixels.

Consider phasing out this
C algorithm in favor of Simplex noise, which is more HW friendly.

This filter primitive creates an image using the Perlin turbulence
function. It allows the synthesis of artificial textures like clouds
or marble. For a detailed description of the Perlin turbulence
function, see "Texturing and Modeling", Ebert et al, AP Professional,
1994. The resulting image will fill the entire filter primitive subregion for
this filter primitive.

It is possible to create bandwidth-limited noise by synthesizing
only one octave.

The C code below shows the exact algorithm used for this filter
effect. The filter primitive
subregion is to be passed as the arguments fTileX, fTileY,
fTileWidth and fTileHeight.

For fractalSum, you get a turbFunctionResult that is aimed at a
range of -1 to 1 (the actual result might exceed this range in some
cases). To convert to a color value, use the formula colorValue
= ((turbFunctionResult * 255) + 255) / 2, then clamp to the
range 0 to 255.

For turbulence, you get a turbFunctionResult that is aimed at a
range of 0 to 1 (the actual result might exceed this range in some
cases). To convert to a color value, use the formula colorValue
= (turbFunctionResult * 255), then clamp to the range 0 to 255.
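The two conversions, sketched in C (the function names are
illustrative):

/* Sketch: map a turbulence function result to an 8-bit color value,
 * per the formulas above, clamping to the 0..255 range. */
static int clamp255(int c) { return c < 0 ? 0 : (c > 255 ? 255 : c); }

/* fractalSum: turbFunctionResult aimed at the -1..1 range */
static int color_fractal(double r)
{
    return clamp255((int)((r * 255.0 + 255.0) / 2.0));
}

/* turbulence: turbFunctionResult aimed at the 0..1 range */
static int color_turbulence(double r)
{
    return clamp255((int)(r * 255.0));
}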

The following order is used for applying the pseudo random numbers.
An initial seed value is computed based on the ‘seed’ attribute. Then the implementation
computes the lattice points for R, then continues getting additional
pseudo random numbers relative to the last generated pseudo random
number and computes the lattice points for G, and so on for B and A.

The base frequency (frequencies) parameter(s) for the noise
function. If two <number>s
are provided, the first number represents a base frequency in the X
direction and the second value represents a base frequency in the Y
direction. If one number is provided, then that value is used for
both X and Y.

When
the seed number is handed over to the algorithm above it must first
be truncated, i.e. rounded to the closest integer value towards
zero.

Animatable: yes.

stitchTiles = "stitch | noStitch"

If stitchTiles="noStitch", no
attempt is made to achieve smooth transitions at the border of
tiles which contain a turbulence function. Sometimes the result
will show clear discontinuities at the tile borders.
If stitchTiles="stitch", then the
user agent will automatically adjust baseFrequency-x and
baseFrequency-y values such that the ‘feTurbulence’ node's width and height
(i.e., the width and height of the current subregion) contains an
integral number of the Perlin tile width and height for the first
octave. The baseFrequency will be adjusted up or down depending on
which way has the smallest relative (not absolute) change as
follows: Given the frequency, calculate
lowFreq=floor(width*frequency)/width and
hiFreq=ceil(width*frequency)/width. If
frequency/lowFreq < hiFreq/frequency then use lowFreq, else use
hiFreq. While generating turbulence values, generate lattice
vectors as normal for Perlin Noise, except for those lattice points
that lie on the right or bottom edges of the active area (the size
of the resulting tile). In those cases, copy the lattice vector
from the opposite edge of the active area.
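The frequency adjustment just described, sketched in C; this is a
direct transcription of the lowFreq/hiFreq rule above, with a
zero-value guard added as an assumption of the sketch:

#include <math.h>

/* Sketch: adjust baseFrequency so the tile width contains an integral
 * number of Perlin periods, picking the smaller relative change. */
static double stitch_frequency(double frequency, double width)
{
    if (frequency <= 0.0 || width <= 0.0)
        return frequency;
    double lowFreq = floor(width * frequency) / width;
    double hiFreq  = ceil(width * frequency) / width;
    if (lowFreq > 0.0 && frequency / lowFreq < hiFreq / frequency)
        return lowFreq;
    return hiFreq;
}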

This filter creates a drop shadow of the input image. It is a
shorthand filter, and is defined in terms of combinations of other filter primitives. The
expectation is that it can be optimized more easily by
implementations.

The result of a ‘feDropShadow’ filter primitive is
equivalent to a combination of other primitives: the alpha channel of
the input is blurred and offset, flooded with the shadow color, and
the original input is then merged on top of the shadow.

Note that while the definition of the ‘feDropShadow’ filter primitive says that
it can be expanded into an equivalent tree, it is not required to be
implemented like that. The expectation is that user agents can
optimize the handling by not having to perform all the steps separately.

Beyond the DOM interface SVGFEDropShadowElement
there is no way of accessing the internals of the ‘feDropShadow’ filter primitive, meaning
if the
filter primitive is implemented as an equivalent tree then that tree
must not be exposed to the DOM.

The calculations are performed on non-premultiplied color values. If
the input graphics consists of premultiplied color values, those
values are automatically converted into non-premultiplied color values
for this operation.

vertexShader:
<uri>

The shader referenced by <uri> provides the implementation
for the ‘feCustom’ vertex shader. If the shader
cannot be retrieved, or if the shader cannot be loaded or compiled
because it contains erroneous code, the shader is a pass through. Otherwise, the vertex
shader is invoked for all the vertex mesh vertices.

The <mat> values are in column major order.
For example, mat2(1, 2, 3, 4) has [1, 2]
in the first column and [3, 4] in the second one.

There may be different ways
to specify the <param-value> syntax. For example, it might be
better to not have a texture() function and simply a
<uri> for texture parameters. Or it might be better to not have
the mat<n> prefixes for matrices.

The following document
from Mozilla describes how WebGL vertex and fragment shaders can be
defined in <script> elements.

CSS shaders can reference shaders defined in
<script> elements, as shown in the following code
snippet.

The shader referenced by <uri> provides the implementation
for the ‘feCustom’ element fragment shader. If the
shader cannot be retrieved, or if the shader cannot be loaded or
compiled because it contains erroneous code, the shader is a pass through. Otherwise, the fragment
shader is invoked for each of the pixels during the rasterization
phase that follows the vertex shader processing.

mix(<uri> [ <blend-mode> || <alpha-compositing>
]?)

The security model disallows any color
value based conditions for the fragment shader. Authors may use the
‘mix’ function for a basic control of
compositing and color management within a fragment shader. The
processing model for ‘mix’ is provided below.

<uri>

The shader referenced by <uri> provides the
implementation for the ‘feCustom’ element fragment shader. If
the shader cannot be retrieved, or if the shader cannot be loaded
or compiled because it contains erroneous code, the shader is a pass through. Otherwise, the fragment
shader is invoked for each of the pixels during the rasterization
phase that follows the vertex shader processing.

<blend-mode>

Each pixel is blended with the mix color by using one
of the predefined blend modes and its appropriate blend-mode
keyword (See [COMPOSITING]).

<alpha-compositing>

Each pixel is composited with the mix color by using one
of the predefined alpha-compositing operators and its appropriate
alpha-compositing keyword (See [COMPOSITING]).

The lacuna value for ‘fragmentShader’ is ‘mix(<default-fragment-shader> normal
source-atop)’.

One or two positive integers greater than zero indicate the
additional number of vertex lines and columns that will make the
vertex mesh. With the initial value of ‘1
1’ there is a single row and a single column, resulting in
a four-vertices mesh (top-left, top-right, bottom-right,
bottom-left). If the second parameter is not provided, it takes a
value equal to the first. A value of ‘n
m’ results in a vertex mesh that has n
columns and m rows. This results in a total of (n + 1)
* (m + 1) vertices as illustrated in the figure below.

The lacuna value is ‘1 1’.

If one of the passed parameters is zero or negative, the UA must
fall back to the lacuna values.

<box> = "filter-box | border-box | padding-box |
content-box"

The optional <box> parameter defines the box to which the
vertex mesh is stretched.

border-box

The border box as defined in the CSS box model [CSS21] for elements that have
an associated CSS layout box and are not in the
http://www.w3.org/2000/svg namespace. Or the stroke bounding box [SVG2], if the
element does not have an associated CSS layout box and is in the
http://www.w3.org/2000/svg namespace.

padding-box

The padding box as defined in the CSS box model [CSS21] for
elements that have an associated CSS layout box and are not in the
http://www.w3.org/2000/svg namespace. Or the object bounding box [SVG11], if the
element does not have an associated CSS layout box and is in the
http://www.w3.org/2000/svg namespace.

content-box

The content box as defined in the CSS box model [CSS21] for
elements that have an associated CSS layout box and are not in the
http://www.w3.org/2000/svg namespace. Or the object bounding box [SVG11], if the
element does not have an associated CSS layout box and is in the
http://www.w3.org/2000/svg namespace.

The lacuna value is ‘filter-box’.

Are padding-box or content-box
needed? Can border-box be the bounding client rect?

detached | attached

The optional keywords specify whether the mesh triangles are
attached or detached.

detached

If ‘detached’ is specified, the
triangles are detached. The geometry provided to the vertex shader
is made of triangles which do not share adjacent edges' vertices.

attached

If ‘attached’ is specified, the
triangles are attached. The geometry provided to the vertex shader
is made of triangles which share adjacent edges' vertices.

The lacuna value is ‘attached’.

In the following figure, let us consider the top-left "tile" in the
shader mesh. In the detached mode, the vertex shader will receive the
bottom right and top left vertices multiple times, once for each of the
two triangles which make up the tile. Otherwise, the shader will
receive these vertices only once, because they are shared by the
"connected" triangles.

See the discussion on uniforms passed to shaders to understand how
the shader programs can leverage that feature.

Reference to
discussion missing.

The figure illustrates how a ‘vertexMesh’ value of ‘6
6’ adds vertices passed to the vertex shader. The red
vertices are the default ones and the gray vertices result
from the ‘vertexMesh’ value.

The following example applies a vertex shader (‘distort.vs’) to elements with class ‘distorted’. The vertex shader will operate on
a mesh that has 6 lines and 6 columns (because there are 5 additional
lines and 5 additional columns).

The ‘feCustom’ filter primitive in the
following examples requests the resulting texture of the previous
filter primitive ‘feTurbulence’. This texture gets passed
to the shader with the parameter name ‘tex’.

The ‘feCustom’ filter primitive in the
following examples requests the resulting texture of the previous
filter primitive ‘feGaussianBlur’. This primitive has
access to rendered content, therefore the custom filter primitive
must fall back to a pass through.

33. Shader Processing Model

This section illustrates the shader processing model as it applies
to an element whose unfiltered rendering is shown in the figure below.

Element before applying shaders

Prior to the shader processing, the input of the filter primitive
gets rendered into an offscreen image (the source texture). The
offscreen size and position is controlled by the filter primitive subregion.

Source texture created by rendering
the element offscreen and adding filter primitive margins

An ‘feCustom’ element or a
custom() filter function defines a custom filter
primitive. The following figure illustrates its model.

The shader processing model

In the first step, the source texture is mapped onto a
vertex mesh. By default, the vertex mesh has the
position and size of the filter
primitive subregion. In the following figure, the red markers
represent the default vertices in the default vertex mesh. The
diagonal lines illustrate how the vertices define two triangles.

The default vertex mesh and its default vertices

The ‘vertexMesh’ attribute on the ‘feCustom’ element and the
<vertex-mesh> parameter on the custom() filter
function give finer control over the vertex mesh.

A finer, custom vertex mesh

In the second step, the vertex mesh
gets processed through the vertex shader, which produces a set of
transformed vertices. These transformed vertices are the output, as
shown in the figure below.

Transformed vertices after applying the vertex shader

The third step is the rasterization step. The
filter primitive invokes the fragment shader for every pixel location
inside the vertex mesh to perform per-pixel operations and produce the
final pixel color at that location.

Rasterized output after invoking the fragment shader

Note that the fragment shader may be called several times
for what corresponds to the same pixel coordinate on the output, for
example when the vertex mesh folds over itself.

There is no guarantee
which shader gets applied first. Because the depth buffer is used
there's no guarantee that blending will happen.

The mix function provides basic functionalities for
color manipulation like blending and alpha compositing on a fragment
shader. The processing model of mix is illustrated in the
following figure:

fragmentShader: processing model for mix

Each color value passed to the fragment shader gets multiplied with
the color matrix, a predefined fragment shader variable that
represents a 4x4 matrix. The matrix is initialized to an identity
matrix, but can be set by a fragment shader. The resulting color is
the multiplied color.

Note that color matrices usually
operate on 5x4 values, where the last column defines an offset that is
added to the color value. The WG may consider turning the current 4x4
matrix into a 5x4 matrix.

The mix color is a second, predefined fragment shader
variable and represents an ‘RGBA’
<color> value. The mix color can be set by the
fragment shader. If not specified, the default value is transparent
black. The ‘RGB’ channels of the
multiplied color get blended with the ‘RGB’ channels of the mix color with the
multiplied color as destination and the mix color as
source [COMPOSITING]. The applied blend mode depends on the passed
<blend-mode> parameter which is ‘normal’ by default. The result of the blending and
the opacity of the mix color define the blended color.

The multiplied color and the mix color get composited
using the alpha-compositing rules as described in the Compositing and
Blending spec [COMPOSITING]. The applied alpha-compositing mode is
specified by the passed <alpha-compositing> parameter and is
‘source-atop’ by default.
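A non-normative sketch of this pipeline in C, assuming
non-premultiplied RGBA values in [0, 1], the default ‘normal’ blend
mode and ‘source-atop’ compositing; the column-major 4x4 matrix layout
follows the <mat> description above, and all names are illustrative:

typedef struct { float r, g, b, a; } Color;

/* Sketch: multiply a color by the 4x4 color matrix (column-major). */
static Color apply_matrix(const float m[16], Color c)
{
    Color o;
    o.r = m[0]*c.r + m[4]*c.g + m[8]*c.b  + m[12]*c.a;
    o.g = m[1]*c.r + m[5]*c.g + m[9]*c.b  + m[13]*c.a;
    o.b = m[2]*c.r + m[6]*c.g + m[10]*c.b + m[14]*c.a;
    o.a = m[3]*c.r + m[7]*c.g + m[11]*c.b + m[15]*c.a;
    return o;
}

/* Sketch: 'normal' blend of the mix color over the multiplied color,
 * then 'source-atop' compositing, so the result keeps the multiplied
 * color's coverage. */
static Color mix_pixel(const float matrix[16], Color content, Color mix)
{
    Color m = apply_matrix(matrix, content);
    Color out;
    out.r = mix.r * mix.a + m.r * (1.0f - mix.a);
    out.g = mix.g * mix.a + m.g * (1.0f - mix.a);
    out.b = mix.b * mix.a + m.b * (1.0f - mix.a);
    out.a = m.a;
    return out;
}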

The last step, step 4, shows rasterized output. If
the custom filter primitive is the last primitive in the filter chain,
then this output is the filtered rendering of the element. Otherwise,
the output of the primitive becomes the input to the next filter
primitive using it.

One issue with filter
effects is the impact on interactivity. Filters can offset the visual
rendering of content and affect the way users interact with content.
For example, the ‘feOffset’
element moves the element's rendering by a given offset, which biases
the interaction: the end user may click on an element and actually hit
a different one because of the offset. This issue is expected to be
more acute with vertex shaders and the working group should consider a
general solution to this issue that works for both predefined filter
effects and custom ones.

34. The filter CSS <image> value

The filter() function produces a CSS <image> value. It has
the following syntax:

34.1. filter() syntax

The function takes two parameters. The first is a CSS <image>
value. The second is the value of a ‘filter’ property. The function takes the
input image parameter, applies the filter rules, and returns the
processed image.

35. Security

35.1. Rendered Content Access in custom filter primitives

Since a custom filter primitive applies a processing operation
to input values, it is important that
no protected information leaks from that operation.

If a custom filter primitive does not fulfill these requirements,
the primitive is a pass through.

35.2. Origin Restrictions

Input to a filter effect must not include anything that
would violate same-origin
restrictions. If cross-origin access is required, then the
requested content should be explicitly marked with CORS data.

Content that falls under this restriction must not be
rendered into the input image. For example, a filter effect that is
applied to a cross-origin ‘iframe’
element would receive a completely blank input image.

It might be better to specify that if a
CORS violation is attempted, then the filter is disabled (instead of
running the filter with an empty canvas).

36. RelaxNG Schema for Filter Effects 1.0

The schema for Filter Effects 1.0 is written in RelaxNG
[RelaxNG], a namespace-aware schema language
that uses the datatypes from XML Schema
Part 2 [Schema2]. This allows
namespaces and modularity to be much more naturally expressed than
using DTD syntax. The RelaxNG schema for Filter Effects 1.0 may be
imported by other RelaxNG schemas, or combined with other schemas in
other languages into a multi-namespace, multi-grammar schema using Namespace-based
Validation Dispatching Language [NVDL].

Unlike a DTD, the schema used for validation is not hardcoded into
the document instance. There is no equivalent to the DOCTYPE
declaration. Simply point your editor or other validation tool to the
IRI of the schema (or your local cached copy, as you prefer).

The RNG is under construction, and only the individual RNG snippets
are available at this time. They have not yet been integrated into a
functional schema. The individual RNG files are available here.

Below are the equivalents for each of the filter functions
expressed in terms of the ‘filter element’
element. The parameters from the function are labelled with brackets
in the following style: [amount]. In the case of parameters that are
percentage values, they are converted to real numbers.

All the parameters specified in the
<shader-params> values (e.g., the feCustom's
param attribute or the custom(<uri>,
<shader-params>) filter function or the
shader property value) will be available as uniforms to
the shader(s) referenced by the ‘shader’ property.

The group may consider applying further restrictions to the GLSL ES
language to make it easier to write vertex and fragment shaders.

The OpenGL ES shading language provides a number of variables that
can be passed to shaders, exchanged between shaders or set by shaders.
In particular, a vertex shader can provide specific data to the
fragment shader in the form of ‘varying’ parameters (parameters that vary per
pixel). The following sections describe particular variables that are
assumed for the vertex and fragment shaders in CSS shaders.

Even though this document recommends the GLSL ES
shading language, there are other possible options to consider, for
example:

The implementation could use the mime type of the url or
<script> element to determine the shading language.

38.2.1. Fragment Shader Differences with GLSL

A normal GLSL program requires that the fragment shader compute a
gl_FragColor value, which is the color value for the
processed fragment (pixel).

Because of the security restrictions,
fragment shaders are not allowed to access rendered content pixel
values directly. However, fragment shaders in this specification have
the option to compute a value that is automatically mixed with the
rendered content values (but without ever providing direct access to
these values to the shader code).

In the context of this specification, fragment shaders have the
option to compute the following values:

gl_FragColor. When the fragment shader parameter is a
direct reference to a source file, that shader should compute a
gl_FragColor. This may be useful, for example, to
compute complex patterns.

css_MixColor. When the fragment shader parameter uses
the mix()
function, then it can compute a mix color value that is
composited with the rendered content as explained in the shaders processing model.

css_ColorMatrix. When the fragment shader parameter
uses the mix()
function, then it can compute a color matrix value that is
multiplied with the rendered content as explained in the shaders processing model.

The following example shows a fragment shader which computes a
simple solid red color:

CSS

#filtered-element {
    filter: custom(url(simple.vs) url(simple.fs));
}

simple.fs

void main()
{
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}

The following example shows a fragment shader which darkens the
rendered content by computing a css_MixColor which is
multiplied with the rendered content.

38.2.2. Vertex attribute variables

The vertex coordinates in the filter region box. Coordinates are
normalized to the [-0.5, 0.5] range along the x, y and z axes.

attribute vec2 a_texCoord;

The vertex's texture coordinate. Coordinates are in the [0, 1]
range on both axes.

attribute vec2 a_meshCoord;

The vertex's coordinate in the mesh box. Coordinates are in the
[0, 1] range on both axes.

attribute vec3 a_triangleCoord;

The x and y values provide the coordinate of the current
‘tile’ in the shader mesh. For
example, (0, 0) for the top right tile in the mesh. The x and y
values are in the [0, mesh columns] and [0, mesh rows] range,
respectively.

The z coordinate is computed according to the following figure.
The z coordinate value is provided for each vertex and
corresponding triangle. For example, for the bottom right vertex
of the top triangle, the z coordinate will be 2. For the bottom
right vertex of the bottom triangle, the z coordinate will be 4.

The a_triangleCoord.z value

38.2.3. Shader uniform variables

The following uniform variables are set to specific values by the user
agent:

uniform mat4 u_projectionMatrix

The current projection matrix to the destination texture's
coordinate space. Note that the ‘model
matrix’, which the ‘transform’ property sets, is not passed to
the shaders. It is applied to the filtered element's
rendering.

uniform vec2 u_textureSize

The input texture's size. Includes the filter margins.

uniform vec4 u_meshBox

The mesh box position and size in the filter box coordinate system. For
example, if the mesh box is the filter box, the value will be
(-0.5, -0.5, 1, 1).

uniform vec2 u_tileSize

The size of the current mesh tile, in the same coordinate space
as the vertices.

uniform vec2 u_meshSize

The size of the current mesh in terms of tiles. The x coordinate
provides the number of columns and the y coordinate provides the
number of rows.

38.2.4. Varyings

When the author provides both a vertex and a fragment shader, there
is no requirement on the varyings passed from the vertex shader to the
fragment shader. If no vertex shader is provided, the fragment shader
can expect the v_texCoord varying. If no fragment shader
is provided, the vertex shader must compute a v_texCoord varying for
the default shaders.

varying vec2 v_texCoord;

The current pixel's texture coordinates (in the content
texture).

38.2.5. Other uniform variables: the CSS shaders parameters

When parameters are passed to the custom() filter
function or the ‘feCustom’ filter primitive, the user agent
passes uniforms of the corresponding name and type to the shaders.

The following table shows the mapping between CSS shader parameters
and uniform types.

38.2.7. Texture access

If shaders access texture values outside the [0, 1]
range on either axis, the returned value is a fully translucent black
pixel.

39. Integration with CSS Animations and CSS Transitions

The CSS ‘filter’ property is animatable.
Interpolation happens between the filter functions only if the ‘filter’ values have
the same number of filter functions, and the same functions appearing
in the same order.

The CSS WG may define different
fading transitions in the future.

39.1. Interpolating filter function parameters

All properties defined as animatable, provided they are one of the
property types listed in CSS3 Transitions [CSS3-TRANSITIONS],
can be animated.

39.2. Interpolating the shader-params component in the custom() function

To interpolate between params values in a
custom() filter function or between ‘feCustom’ ‘params’ attribute values, the user agent
should interpolate between each of the [param-def] values according to its
type. Lists of values need to be of the same length. Matrices need to
be of the same dimension. Arrays need to be of the same size.

Interpolation between shader params only happens if all
the other shader properties are identical: vertex shader, fragment
shader, filter margins and vertex mesh.

Interpolate between the matrix components (applies to mat2, mat3
and mat4).
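A minimal, non-normative sketch of component-wise matrix
interpolation in C (the layout and names are assumptions):

/* Sketch: linear interpolation between two mat4 keyframe values
 * a and b at progress t in [0, 1], component by component. */
static void lerp_mat4(const float a[16], const float b[16],
                      float t, float out[16])
{
    for (int i = 0; i < 16; i++)
        out[i] = a[i] + (b[i] - a[i]) * t;
}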

As with the ‘transform’ property, it is not possible to
animate the different components of the ‘shader-params’ property on different timelines
or with different keyframes. This is a generic issue of animating
properties that have multiple components to them.

40. DOM interfaces

The interfaces below will be made available in an IDL
file for an upcoming draft.

40.1. Interface ImageData

The ImageData interface corresponds to pixel data that can be used as
input to the SVGFilterElement interface.

An array of pixel values that is the bitmap. This array must
always be in the form of width×height×4 integer values. The
pixel data is in left-to-right order, starting from the top-left
corner, and going row by row downwards. Every pixel is represented
by four integer values, red, green, blue and alpha, in that order.
The range of each color component is 0..255. The intent is that
this is compatible with the HTML5 [HTML5]
canvas interfaces, in particular see ImageData.
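A small illustration in C of the indexing this layout implies (the
accessor name is an assumption of the sketch):

/* Sketch: the four components of the pixel at (x, y) start at
 * (y * width + x) * 4 and appear in R, G, B, A order, each 0..255. */
static unsigned char get_component(const unsigned char *data,
                                   int width, int x, int y, int c)
{
    return data[(y * width + x) * 4 + c]; /* c: 0=R, 1=G, 2=B, 3=A */
}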