The alignment between the running blobs depends strongly on the
constraints, i.e., on the size and format of the layer on which they
are running. This causes a problem, since the image layer and the
model layers have different sizes. We have therefore introduced an
attention blob which restricts the
movement of the running blob on the image layer to a region of about
the same size as that of the model layers. Each of the
model layers also has the same attention blob to keep the conditions
for their running blobs similar to that in the image layer. This is
important for the alignment. The attention blob restricts the
region available to the running blob, but the running blob can in turn
shift it into a region where the input is especially strong and favors
activity. The attention blob therefore automatically aligns with the actual face
position (see Figures 6,
7 and Movie 2).
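The restriction mechanism can be illustrated with a toy calculation. In this sketch (all numbers and array layouts are illustrative assumptions, not the paper's parameters), the attention blob is modeled as extra excitation that cancels a constant inhibition only inside its region, so the net drive to the running-blob layer is positive inside and negative outside:

```python
import numpy as np

# Toy illustration (all values are made up): the running-blob layer
# receives a constant inhibition everywhere, and the attention blob
# contributes excitation that cancels it only inside the attention
# region.  The net drive is then positive inside the region and
# negative outside, so the running blob cannot leave it.
n = 64
beta = 0.3                               # constant inhibition
feature_input = np.full(n, 0.1)          # uniform feature input
attention = np.zeros(n)
attention[24:40] = beta                  # attention blob compensates inhibition
net_drive = feature_input - beta + attention
inside = net_drive[24:40].mean()         # 0.1: activity favored here
outside = np.delete(net_drive, np.s_[24:40]).mean()   # -0.2: suppressed
```

The point of the sketch is only the sign pattern of the net drive: wherever the attention blob's excitation is absent, the constant inhibition wins and the running blob cannot form.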
The attention blob layer is initialized with a primitive segmentation
cue, in this case the norm of the respective jets
[19],
since the norm
indicates the presence of textures of high contrast. The
corresponding equations are (cf.
Equations 1 and 5):

The equations show that the attention blob is generated by the
same dynamics as was discussed in the Blob Formation section for
the formation of the running blob, but without delayed
self-inhibition. Since the attention blob is to be larger than the
running blob, its global-inhibition parameter has to be smaller than
that of the running blob. The attention blob restricts the
region for the running blob via an excitatory coupling term, which
compensates the constant inhibition within the attention blob's
region. The attention blob, on the other hand, receives
excitatory input from the running
blob. By this means the running blob can slowly shift the attention
blob into its favored region. The dynamics of the attention blob has
to be slower than that of the running blob, which is controlled by a
small time-scale factor. The attention layer is initialized with the
norm of the jets, scaled by an initialization-strength parameter.
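The dynamics just described can be sketched numerically. The following 1-D toy version is not the paper's actual equations: the parameter names (`beta_h`, `beta_a`, `kappa_ah`, `kappa_ha`, `eps_a`, `rho`), the Gaussian kernels, and the clipped-linear nonlinearity are all illustrative assumptions. It keeps the three properties stated in the text: the attention layer uses a smaller global-inhibition parameter (larger blob), evolves on a slower time scale, and is initialized from a stand-in for the jet norms.

```python
import numpy as np

def sigma(x):
    """Output nonlinearity (a simple clipped-linear stand-in)."""
    return np.clip(x, 0.0, 1.0)

def kernel(n, width):
    """Circular Gaussian excitation kernel, peaked at index 0."""
    d = np.arange(n)
    d = np.minimum(d, n - d)              # wrap-around distance
    g = np.exp(-d**2 / (2.0 * width**2))
    return g / g.sum()

def conv(g, x):
    """Circular convolution via FFT."""
    return np.real(np.fft.ifft(np.fft.fft(g) * np.fft.fft(x)))

def simulate(n=64, steps=400, dt=0.2,
             beta_h=0.25, beta_a=0.10,    # beta_a < beta_h: larger attention blob
             kappa_ah=0.2, kappa_ha=0.3,  # mutual excitatory coupling
             eps_a=0.1, rho=0.5):         # slower attention layer; init strength
    jet_norm = np.zeros(n)
    jet_norm[20:40] = 1.0                 # stand-in for the jet-norm cue
    rng = np.random.default_rng(0)
    h = rng.uniform(0.0, 0.1, n)          # running-blob layer
    a = rho * jet_norm                    # attention layer, cue-initialized
    g_h, g_a = kernel(n, 2.0), kernel(n, 6.0)
    for _ in range(steps):
        dh = -h + conv(g_h, sigma(h)) - beta_h * sigma(h).mean() + kappa_ah * sigma(a)
        da = -a + conv(g_a, sigma(a)) - beta_a * sigma(a).mean() + kappa_ha * sigma(h)
        h += dt * dh
        a += dt * eps_a * da              # attention blob evolves more slowly
    return h, a

h, a = simulate()
```

Running this, the running blob settles inside the cued region, and the attention layer's activity stays concentrated there, mirroring the mutual shifting described above.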

Figure 6: Schematic of the attention
blob's function. The attention blob restricts the region in which
the running blob can move. The attention blob, on the other hand,
receives input from the running blob. That input will be strong in
regions where the blobs in both layers cooperate and weak where they
do not (see Figure 4). Due to this
interaction the attention blob slowly moves to the correct region
indicated by the square made of dashed lines. The attention blob in
the model layer is required to keep the conditions for the running
blobs symmetrical.

Figure 7: Function of the attention blob,
using an extreme example of an initial attention blob manually
misplaced for demonstration. Around t=150 the two running blobs ran
synchronously for a while, and the attention blob developed a long tail.
The blobs then lost alignment again. From t=500 on, the running
blobs remained synchronous, and eventually the attention blob aligned
with the correct face position, indicated by a square made of dashed
lines. The attention blob moves slowly compared to the small running
blob, as it is not driven by self-inhibition (cf. Movie 2). Without an attention blob the
two running blobs may synchronize sooner, but the alignment will
never become stable (see Movie 3).

Movie 2: Attention
blob dynamics as in Figure 7. We see here
the running blob (black), the delayed self-inhibition (red), and the
attention blob (blue) on the image and the model layer (336 kB).

Movie 3: Blob
dynamics as in Movie 2, but without
attention blobs, demonstrating that alignment does not become stable (196
kB).