Computational linguistic approaches to sign languages could benefit from investigating how complexity influences structure. We investigate whether morphological complexity has an effect on the order of Verb (V) and Object (O) in Swedish Sign Language (SSL), on the basis of elicited data from five Deaf signers. We find a significant difference in the distribution of the orderings OV vs. VO, based on an analysis of morphological weight. While morphologically heavy verbs exhibit a general preference for OV, humanness seems to affect the ordering in the opposite direction, with [+human] Objects pushing towards a preference for VO.

This paper investigates the domain of name signs (i.e., signs used as personal names) in the Swedish Sign Language (SSL) community. The data are based on responses from an online questionnaire, in which Deaf, hard of hearing, and hearing participants answered questions about the nature of their name signs. The collected questionnaire data comprise 737 name signs, distributed across five main types and 24 subtypes of name signs, following the categorization of previous work on SSL. Signs are grouped according to sociolinguistic variables such as age, gender, and identity (e.g., Deaf or hearing), as well as the relationship between name giver and named (e.g., family or friends). The results show that name signs are assigned at different ages between the groups, such that children of Deaf parents are named earlier than other groups, and that Deaf and hard of hearing individuals are normally named during their school years. It is found that the distribution of name sign types is significantly different between females and males, with females more often having signs denoting physical appearance, whereas males have signs related to personality/behavior. Furthermore, it is shown that the distribution of sign types has changed over time, with appearance signs losing ground to personality/behavior signs – most clearly for Deaf females. Finally, there is a marginally significant difference in the distribution of sign types based on whether or not the name giver was Deaf. The study is the first to investigate name signs and naming customs in the SSL community statistically – synchronically and diachronically – and one of the few to do so for any sign language.
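The gender comparison reported above is a contingency-table test. As an illustration only, a minimal 2×2 chi-square can be computed in pure Python; the counts below are invented and are not the study's data:

```python
# Sketch: chi-square test on a 2x2 contingency table of name sign
# types by gender. The counts are hypothetical, for illustration only.
import math

# rows: female, male; cols: appearance signs, personality/behavior signs
table = [[60, 40], [35, 55]]

row = [sum(r) for r in table]
col = [sum(c) for c in zip(*table)]
total = sum(row)

# Pearson chi-square: sum over cells of (observed - expected)^2 / expected
chi2 = sum(
    (table[i][j] - row[i] * col[j] / total) ** 2 / (row[i] * col[j] / total)
    for i in range(2) for j in range(2)
)
# For df = 1, the survival function of chi-square is erfc(sqrt(x/2))
p = math.erfc(math.sqrt(chi2 / 2))
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```

With these invented counts the distributions differ significantly (p < 0.01); a real analysis would of course use the observed questionnaire counts.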

In this paper, we investigate frequency and duration of signs and parts of speech in Swedish Sign Language (SSL) using the SSL Corpus. The duration of signs is correlated with frequency, with high-frequency items having shorter duration than low-frequency items. Similarly, function words (e.g. pronouns) have shorter duration than content words (e.g. nouns). In compounds, forms annotated as reduced display shorter duration. Fingerspelling duration correlates with word length of corresponding Swedish words, and frequency and word length play a role in the lexicalization of fingerspellings. The sign distribution in the SSL Corpus shows a great deal of cross-linguistic similarity with other sign languages in terms of which signs appear as high-frequency items, and which categories of signs are distributed across text types (e.g. conversation vs. narrative). We find a correlation between an increase in age and longer mean sign duration, but see no significant difference in sign duration between genders.

Sign languages make use of paired articulators (the two hands); manual signs may therefore be either one- or two-handed. Although two-handedness has previously been regarded as a purely formal feature, studies have argued that morphologically two-handed forms are associated with some types of inflectional plurality. Moreover, recent studies across sign languages have demonstrated that even lexically two-handed signs share certain semantic properties. In this study, we investigate lexically plural concepts in ten different sign languages, distributed across five sign language families, and demonstrate that such concepts are preferentially represented with two-handed forms across all the languages in our sample. We argue that this is because the signed modality, with its paired articulators, enables the languages to iconically represent conceptually plural meanings.

This paper deals with the possibility of conducting syntactic segmentation of the Swedish Sign Language Corpus (SSLC) on the basis of visual cues from both manual and nonmanual signals. The SSLC currently features segmentation on the lexical level only, which is why a linguistically valid segmentation at e.g. the clausal level would be very useful for corpus-based studies on the grammatical structure of Swedish Sign Language (SSL). An experiment was carried out in which seven Deaf signers of SSL each segmented two short texts (one narrative and one dialogue) using ELAN, based on the visual cues they perceived as boundaries. This was later compared to the linguistic analysis done by a language expert (also a Deaf signer of SSL), who segmented the same texts into what were considered syntactic clausal units. Furthermore, these segmentation procedures were compared to the segmentation done for the Swedish translations also found in the SSLC. The results show that though the visual and syntactic segmentations overlap in many cases, especially when a number of cues coincide, the visual segmentation is not consistent enough to be used as a means of segmenting syntactic units in the SSLC.

This paper describes on-going work on extending the annotation of the Swedish Sign Language Corpus (SSLC) with a level of syntactic structure. The basic annotation of the SSLC in ELAN consists of six tiers: four for sign glosses (two tiers for each signer; one for each of a signer's hands), and two for written Swedish translations (one for each signer). In an additional step by Östling et al. (2015), all glosses of the corpus have been further annotated for parts of speech. Building on the previous steps, we are now developing annotation of clause structure for the corpus, based on meaning and form. We define a clause as a unit in which a predicate asserts something about one or more elements (the arguments). The predicate can be verbal (possibly serial) or nominal. In addition to predicates and their arguments, criteria for delineating clauses include non-manual features such as body posture, head movement and eye gaze. The goal of this work is to arrive at two additional annotation tier types in the SSLC: one in which the sign language texts are segmented into clauses, and another in which the individual signs are annotated for their argument types.

In this paper, we describe a method for mapping the phonological feature location of Swedish Sign Language (SSL) signs to the meanings in the Swedish semantic dictionary SALDO. By doing so, we observe clear differences in the distribution of meanings associated with different locations on the body. The prominence of certain locations for specific meanings clearly points to iconic mappings between form and meaning in the lexicon of SSL, highlighting modality-specific properties of the visual modality.

In this paper, we discuss the possibilities for mining lexical variation data across (potential) lects in Swedish Sign Language (SSL). The data come from the SSL Corpus (SSLC), a continuously expanding corpus of SSL, whose latest release contains 43 307 annotated sign tokens, distributed over 42 signers and 75 time-aligned video and annotation files. After extracting the raw data from the SSLC annotation files, we created a database for investigating lexical distribution/variation across three possible lects, by merging the raw data with an external metadata file containing information about the age, gender, and regional background of each of the 42 signers in the corpus. We go on to present a first version of an easy-to-use graphical user interface (GUI) that can be used as a tool for investigating lexical variation across different lects, and demonstrate a few interesting findings. This tool makes it easy for researchers and non-researchers alike to have the corpus frequencies of individual signs visualized in an instant, and it can easily be updated with future expansions of the SSLC.
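The merge of annotation tokens with signer metadata described above can be sketched as follows. The token pairs, signer IDs, and region labels are hypothetical and do not reflect the actual SSLC release format:

```python
# Sketch: joining tokens extracted from annotation files with signer
# metadata to get per-region frequencies for a given sign.
# All data below are invented for illustration.
from collections import Counter

# (gloss, signer_id) pairs extracted from annotation files
tokens = [("DEAF", "S01"), ("DEAF", "S02"), ("DEAF", "S03"), ("SCHOOL", "S01")]

# signer_id -> region, from an external metadata file
metadata = {"S01": "Stockholm", "S02": "Skåne", "S03": "Stockholm"}

def regional_counts(sign):
    """Count tokens of `sign` per region of the signer who produced them."""
    return Counter(metadata[s] for g, s in tokens if g == sign)

print(regional_counts("DEAF"))  # Counter({'Stockholm': 2, 'Skåne': 1})
```

The same join against age or gender columns yields the other two lect dimensions; a GUI would simply visualize these counters per sign.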

Although the significance of gender and disability issues has gradually increased in global society during the past three decades, there are only a few studies concerning the deaf community and sport. This article examines the level of Deaf and hard of hearing women's participation in sports and the factors behind their continued underrepresentation. The WomenSport International Task Force on Deaf and Hard of Hearing Girls and Women in Sport conducted a worldwide survey to determine and assess the needs of deaf and hard of hearing girls and women in sport. A snapshot of the results and issues, as well as future aspirations, is provided.

In this paper, we present a comparative study of mouth actions in three European sign languages: British Sign Language (BSL), Nederlandse Gebarentaal (Sign Language of the Netherlands, NGT), and Swedish Sign Language (SSL). We propose a typology for, and report the frequency distribution of, the different types of mouth actions observed. In accordance with previous studies, we find the three languages remarkably similar — both in the types of mouth actions they use, and in how these mouth actions are distributed. We then describe how mouth actions can extend over more than one manual sign. This spreading of mouth actions is the primary focus of this paper. Based on an analysis of comparable narrative material in the three languages, we demonstrate that the direction as well as the source and goal of spreading may be language-specific.

This article describes how new technological possibilities allow sign language researchers to share and publish video data and transcriptions online. Both linguistic and technological aspects of creating and publishing a sign language corpus are discussed, and standards are proposed for both metadata and transcription categories specific to sign language data. In addition, ethical aspects of publishing video data of signers online are considered, and suggestions are offered for future corpus projects and software tools.

In Sweden, deaf pupils were traditionally placed in segregated deaf schools. However, during the last decade, the number of children attending mainstream schools after receiving cochlear implants (CIs) has increased dramatically, resulting in lower attendance at deaf schools. Despite the significance of this trend, there exists little knowledge regarding the everyday lives of these pupils in mainstream settings. This paper examines how pupils with CIs interact with school staff and other pupils in classroom settings and how different technologies (e.g. hearing aids and microphones) are used there. Furthermore, it aims to identify opportunities and limitations regarding the pupils' participation in communication and teaching. The paper builds upon data from an ethnographic study in which fieldwork was conducted in two mainstream Swedish classrooms, each of which included one pupil with CIs. Interaction in these classrooms was documented through participant observations, video recordings and field notes. The analysis shows that audiologically oriented and communicative-link technologies play major roles in everyday interaction, both facilitating and limiting the participation of pupils with CIs in different ways, and that it is mostly the school staff who determine how and when these technologies are used. The results also indicate that the pupils are largely responsible for their own participation. Overall, the paper provides a glimpse of one way of educating children with CIs in Sweden, namely in mainstream schools, with a focus on what actually happens in the technologically framed interaction in these classrooms.

The number of pupils with cochlear implants (CIs) in mainstream schools in Sweden has increased sharply. This study focuses on communicative strategies in mainstream classrooms that include pupils with CIs. The empirical ethnographic data come from two mainstream classrooms in Sweden where pupils and adults use a range of technologies and strategies, (co)creating opportunities for communication and learning in everyday classroom life. The analyses indicate that pupils with CIs are responsible for their own communicative participation in mainstream classrooms (when they cannot make sense of, or do not hear, oral talk), while their right to choose or regulate communication channels is not uncommonly curtailed by the adults. Different technologies play an important role in mainstream classrooms with pupils with CIs, but at the same time these technologies sometimes create barriers to participation. Technologies cannot therefore be seen as a panacea for pupils with CIs in mainstream educational settings.

Different technologies are commonly used in mainstream classrooms to teach pupils who wear surgically implanted cochlear hearing aids. We focus on these technologies, their application, how pupils react to them, and how they affect mainstream classrooms in Sweden. Our findings indicate that language ideologies play out in specific ways in such technified environments. The hegemonic position wielded by adults with regard to technology use has specific implications for pupils with cochlear implants.

Although once placed solely in deaf schools, a growing number of deaf students in Sweden are now enrolling in mainstream schools. In order to maintain a functional educational environment for these students, municipalities are required to provide a variety of supporting resources, e.g. technological equipment and specialized personnel. However, the functions of these resources and how they relate to deaf students' learning are currently unknown. Thus, the present study examines public school resources, including the function of a profession called hörselpedagog (HP, a kind of pedagogue responsible for hard of hearing students). In particular, the HPs' perspectives on the functioning and learning of deaf students in public schools were examined. Data were collected via (i) two questionnaires, one quantitative (n = 290) and one qualitative (n = 26), and (ii) in-depth interviews (n = 9). The data show that the resources provided to deaf children, and their efficacy, vary greatly across the country, which holds implications for the language situations and learning of deaf students.

In this paper we apply the methodology presented in Kimmelman (2016) and investigate the transitivity prominence of verbs in Finnish Sign Language (FinSL) and Swedish Sign Language (SSL). Specifically, we ask how similar or different FinSL and SSL verbs are in terms of their transitivity prominence, and how the transitivity prominence of FinSL and SSL verbs compares with that of verbs in other languages. The term transitivity prominence refers to the relative frequency with which a verb occurs with an object. Haspelmath (2015) has shown that in spoken languages, verbs form a ranked continuum between those that are highly transitivity prominent and those that occur with no object at all. Recently, Kimmelman (2016) has argued that Haspelmath's ranking applies also to the verbs of Russian Sign Language (RSL).

Our investigation is based on annotated corpus data comprising narratives, conversations and presentations. For FinSL, we use material from 20 signers (2h 40min, 18 446 sign tokens) and for SSL from 28 signers (1h 54min, 15 186 sign tokens). From these data, we identified 18 verb lexemes which all have enough tokens and which are all comparable between the languages. In FinSL, the total number of verb tokens is 745 and in SSL the corresponding number is 579. All the verbs were annotated for overt direct and indirect objects and for overt clausal complements. The annotation work was carried out by different annotators following common guidelines.

Concerning the results, our data suggest that there are clear similarities in which verbs rank highest (e.g. GIVE, TAKE) and which rank lowest (e.g. HAPPY, COLD) in terms of their transitivity prominence in FinSL and SSL. On the basis of Haspelmath (2015) and Kimmelman (2016), these are the same verbs that are ranked highest and lowest also in spoken languages and in RSL (Table 1). However, the data also show that certain verbs (e.g. SEARCH, TALK, PLAY) may differ considerably in the position they occupy in the ranking.
Although some of these differences can be assumed to be true differences between the languages, we suspect that some may, despite our best efforts, be traceable back to issues relating to the type of data as well as to the way the samples were formed and objects annotated. In our presentation, we will present the results of our comparative study and discuss the data and methodology-related issues in more detail.
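The transitivity-prominence measure defined above (the relative frequency with which a verb occurs with an overt object) is straightforward to compute from annotated tokens. The token list below is invented for illustration and is not the FinSL or SSL data:

```python
# Sketch: computing transitivity prominence per verb lexeme as the
# share of its tokens that carry an overt object, then ranking verbs.
# Tokens are hypothetical.
from collections import defaultdict

# (verb_lexeme, has_overt_object) per corpus token
tokens = [
    ("GIVE", True), ("GIVE", True), ("GIVE", True), ("GIVE", False),
    ("HAPPY", False), ("HAPPY", False), ("HAPPY", True),
]

counts = defaultdict(lambda: [0, 0])  # lexeme -> [with_object, total]
for verb, has_obj in tokens:
    counts[verb][0] += has_obj
    counts[verb][1] += 1

prominence = {v: w / n for v, (w, n) in counts.items()}
ranking = sorted(prominence, key=prominence.get, reverse=True)
print(prominence)  # {'GIVE': 0.75, 'HAPPY': 0.333...}
print(ranking)     # ['GIVE', 'HAPPY']
```

Running this per language and comparing the resulting rankings (e.g. by rank correlation) is one way to operationalize the cross-linguistic comparison the abstract describes.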

In this paper we investigate a hypothesis, derived from the intuitions of native signers, that there is a rhythmic difference between two historically related sign languages, Finnish Sign Language (FinSL) and Swedish Sign Language (SSL). We define the notion of rhythm as 'the organization of units in time' and presume that the rhythmic feel of a language is determined by the phonetic properties and events that are used in the marking of the areas and borders of temporally ordered units such as signs and sentences (Patel & Daniele 2003; Patel 2006). In previous studies (Boyes Braem 1999; Sandler 2012), it has been suggested that the markers of rhythmic sequences in signed language are, for example, temporal duration, punctual indices (e.g. head nods), and articulatory contours. Accordingly, we approach our hypothesis with three main research questions: (i) Are the signing speed and sign duration different in FinSL and SSL, (ii) Are head nods aligned differently in terms of syntactic units in FinSL and SSL, and (iii) Is the motion of the head different in terms of its articulatory contour in FinSL and SSL sentences? The study is based on narratives collected with identical tasks in both languages (5 Snowman and Frog, where are you? stories per language). The total amount of video material is one hour (30+30 minutes) and it includes signing from twenty (10+10) signers. All of the material has been annotated for signs, sentences and nods. The material also includes 3D numerical data on the head motion of signers (the yaw, pitch, and roll angles). The 3D data has been obtained with computer-vision technology implemented in SLMotion software (Karppa et al. 2014). Concerning question (i), we have not so far found any significant differences in the signing speed and sign duration of the two languages.
With a pilot sample of 4+4 signers and 1100 signs per language, we have determined the average signing speed to be two signs per second in both languages, and the average duration of (the core of) the sign to be 0.27 seconds in SSL and 0.29 seconds in FinSL. Concerning (ii), the average number of nods per story was higher in FinSL than in SSL but both languages tended to align nods with syntactic boundaries: of the total number of nods, 81% in FinSL and 77% in SSL occurred on a syntactic boundary, and generally also at the end of the sentence (Figure 1). Concerning question (iii), our initial tests with Snowman revealed that, for example, the amplitude of the tilting-like (roll) motion of the head decreased similarly toward the end of sentences in both languages (Figure 2) but FinSL signers employed this particular type of motion more often in the marking of syntactic junctures than SSL signers (Figure 3). The preliminary results indicate some differences between FinSL and SSL. In our presentation we will present the final results and discuss them in detail with respect to our initial hypothesis.

This paper investigates, with the help of computer-vision technology, the similarities and differences in the rhythm of the movements of the head in sentences in Finnish Sign Language (FinSL) and Swedish Sign Language (SSL). The results show that the movement of the head in the two languages is often very similar: in both languages, the instances when the movement of the head changes direction were distributed similarly with regard to clause boundaries, and the contours of the roll (tilting-like) motion of the head during the sentences were similar. Concerning differences, direction changes were found to be used more effectively in the marking of clause boundaries in FinSL, and in SSL the head moved nearly twice as fast as in FinSL. However, the small amount of data means that the results can be considered only preliminary. The paper identifies the roll angle of the head as a domain for further work on head-related rhythm.

Traditionally in sign language research, the issue of whether a lexical sign is articulated with one hand or two has been treated as a strictly phonological matter. We argue that accounting for two-handed signs also requires considering meaning as a motivating factor. We report results from a Swadesh list comparison, an analysis of semantic patterns among two-handed signs, and a picture-naming task. Comparing four unrelated languages, we demonstrate that the two hands are recruited to encode various relationship types in sign language lexicons. We develop the general principle that inherently "plural" concepts are straightforwardly mapped onto our paired human hands, resulting in systematic use of the two hands across sign languages. In our analysis, "plurality" subsumes four primary relationship types — interaction, location, dimension, and composition — and we predict that signs with meanings that encompass these relationships — such as 'meet', 'empty', 'large', or 'machine' — will preferentially be two-handed in any sign language.

This study identifies a central factor that gives rise to the different word orders found in the world’s languages. In the last decade, a new window on this long-standing question has been provided by data from young sign languages and invented gesture systems. Previous work has assumed that word order in both invented gesture systems and young sign languages is driven by the need to encode the semantic/syntactic roles of the verb’s arguments. Based on the responses of six groups of participants: three groups of hearing participants who invented a gestural system on the spot, and three groups of signers of relatively young sign languages, we identify a major factor in determining word order in the production of utterances in novel and young communication systems, not suggested by previous accounts, namely the salience of the arguments in terms of their human/animacy properties: human arguments are introduced before inanimate arguments (‘human first’). This conclusion is based on the difference in word order patterns found between responses to depicted simple events that vary as to whether both subject and object are human or whether the subject is human and the object inanimate. We argue that these differential patterns can be accounted for uniformly by the ‘human first’ principle. Our analysis accounts for the prevalence of SOV order in clauses with an inanimate object in all groups (replicating results of previous separate studies of deaf signers and hearing gesturers) and the prevalence of both SOV and OSV in clauses with a human object elicited from the three groups of participants who have the least interference from another linguistic system (nonliterate deaf signers who have had little or no exposure to another language). It also provides an explanation for the basic status of SOV order suggested by other studies, as well as the scarcity of the OSV order in languages of the world, despite its appearance in novel communication systems.
The broadest implication of this study is that the basic cognitive distinction between humans and inanimate entities is a crucial factor in setting the wheels of word ordering in motion.