Signed languages have multiple articulators including the head, body,
and hands, but these broad categories of articulators have smaller components
that can behave independently in creating prosodic structures. For
example, eye gaze has been shown to mark prominence of specific ideas
in ASL discourse (Mather, 1989; Mather & Winston, 1995). Functions
of eye gaze have been categorized into three types in a study of ASL narrative
structure: gaze at audience, gaze of character, gaze at hands (Bahan
& Supalla, 1995). Eye gaze coupled with head tilt expresses agreement by
referencing the same spatial locations as manual marking; eye gaze marks
the object and head tilt marks the subject (Bahan, Kegl, MacLaughlin, &
Neidle, 1995). Studies of eye gaze by English speakers and non-native
signers show that English speakers do not change their gaze to “imagine”
objects in space; rather, they continue to look directly at the addressee,
whereas non-native signers use eye gaze randomly or “overgeneralize”
where the gaze falls (Thompson & Emmorey, 2004).

Changes in eye gaze are not the only eye behavior that can mark
intonational phrase boundaries during the production of signed languages.
The eyes can perform several types of movements because the muscles
that control them can act independently.
Another area of study in signed languages has been eyeblinks. Baker and
Padden (1978) brought eyeblinks to the attention of signed language
researchers by suggesting their connection to conditional sentences. It has
also been suggested that eyeblinks in signed languages have similar functions
to breathing in spoken languages because both are physical actions
using articulators distinct from the main language production mechanism;
in addition, eyeblinks and breaths occur at intonational phrase boundaries
(Nespor & Sandler, 1999). Wilbur (1994) suggested that there are
two types of eyeblinks with linguistic purposes: inhibited involuntary
eyeblinks, which can mark intonational phrase boundaries, and voluntary
eyeblinks, which can mark emphasis or the final sign in a chunk of
information.

Research also indicates that different regions of the face pattern with
different levels of syntactic structure. For example, the nonmanual
markers performed by the upper part of the face and head occur
with higher syntactic constituents (clauses, sentences), even if such
constituents contain only a single sign (Wilbur, 2000). A head thrust typically
occurs on the last sign of the first clause in conditionals (Liddell, 1986).
Eyebrow raising and lowering have been claimed to signal rhetorical questions,
yes-no questions, and conditionals in ASL (Coulter, 1979; McIntire,
1980). In Sign Language of the Netherlands (NGT), the position of the
eyebrows and the whole head are involved in distinguishing sentence
types, such as yes-no questions versus wh-questions (Coerts, 1992). The
lower portion of the face has been shown to provide adverbial and adjectival
information. Movements of the mouth, tongue, and cheeks are associated
with specific lexical items or phrases (Liddell, 1978, 1980).

As in spoken language, research has shown that lengthening can
function prosodically in ASL. Holding or lengthening
of signs has been analyzed by Perlmutter (1992) as application of
the Mora-Insertion rule in ASL. Miller (1996) followed with a similar
study of lengthening in Langue des Signes Québécoise (Sign Language of
Quebec). Sandler (1999c) discussed lengthening in Israeli Sign Language
and claimed that lengthening of movement occurs at the right edge of a
phonological phrase.

Signed languages also utilize the entire body as an articulator. The
movement of the torso in space serves as a prosodic marker. Syntactically,
torso leans have been analyzed as linking units of meaning in discourse:
including or excluding related information, marking contrastive focus,
and affirming larger chunks of discourse (Wilbur & Patschke, 1998).

If the human capacity for language is viewed as a specialized behavior,
the rhythmic patterning that pervades biological systems can be
applied to language as an organizing principle of phonological structure.
Nespor and Sandler, for example, describe head positioning as a “rhythmic
cue” (1999, p. 165) in signed languages, although they do not specify
which particular constituent is being cued. This proposal was strengthened
by Boyes-Braem’s (1999) study that described the occurrence of
temporal balancing in Swiss German Sign Language. This behavior, similar
to the balancing of unit size in Gee and Grosjean’s (1983) study of
speech, suggests that signers attempt to create equitable temporal partitions
in their utterances. That is, the length of a spoken or signed utterance
is determined in part not by syntactic structure but by a tendency
to divide the utterance into equal parts using prosodic structure.

Increasingly, typological information on signed languages around the
world is becoming available. Examination of grammatical patterns in
multiple signed languages shows similar paths of development. A recent
report on negation strategies of various signed languages finds that
nonmanual negation is created by the use of head movements and facial
expressions in many languages (Zeshan, 2004). A survey of 17 signed
languages showed that raised eyebrows, a common nonmanual gesture
used in signed languages around the world, developed from gesture,
acquired new meaning, and grammaticized, thus becoming a linguistic
element (MacFarlane, 1998).