Automatic Detection and Correction of Repairs in Human-Computer Dialog*


Elizabeth Shriberg†, John Bear, John Dowding
SRI International
Menlo Park, California

ABSTRACT

We have analyzed 607 sentences of spontaneous human-computer speech data containing repairs (drawn from a corpus of 10,718). We present here criteria and techniques for automatically detecting the presence of a repair, locating it, and making the appropriate correction. The criteria involve integration of knowledge from several sources: pattern matching, syntactic and semantic analysis, and acoustics.

1. INTRODUCTION

Spontaneous spoken language often includes speech that is not intended by the speaker to be part of the content of the utterance. This speech must be detected and deleted in order to correctly identify the intended meaning. This broad class of disfluencies encompasses a number of phenomena, including word fragments, interjections, filled pauses, restarts, and repairs. We are analyzing the repairs in a large subset (over ten thousand sentences) of spontaneous speech data collected for the DARPA spoken language program. We have categorized these disfluencies as to type and frequency, and are investigating methods for their automatic detection and correction. Here we report promising results on detection and correction of repairs by combining pattern matching, syntactic and semantic analysis, and acoustics.

The problem of disfluent speech for language understanding systems has been noted but has received limited attention. Hindle [5] attempts to delimit and correct repairs in spontaneous human-human dialog, based on transcripts containing an "edit signal," or external and reliable marker at the "expunction point," or point of interruption. Carbonell and Hayes [4] briefly describe recovery strategies for broken-off and restarted utterances in textual input. Ward [13] addresses repairs in spontaneous speech, but does not attempt to identify or correct them. Our approach is most similar to that of Hindle. It differs, however, in that we make no assumption about the existence of an explicit edit signal. As a reliable edit signal has yet to be found, we take it as our problem to find the site of the repair automatically. It is the case, however, that cues to repair exist over a range of syllables. Research in speech production has shown that repairs tend to be marked prosodically [8], and there is perceptual evidence from work using lowpass-filtered speech that human listeners can detect the occurrence of a repair in the absence of segmental information [9].

In the sections that follow, we describe in detail our corpus of spontaneous speech data and present an analysis of the repair phenomena observed. In addition, we describe ways in which pattern matching, syntactic and semantic analysis, and acoustic analysis can be helpful in detecting and correcting these repairs. We use pattern matching to determine an initial set of possible repairs; we then apply information from syntactic, semantic, and acoustic analyses to distinguish actual repairs from false positives.

*This research was supported by the Defense Advanced Research Projects Agency under Contract ONR N C-0085 with the Office of Naval Research. It was also supported by a Grant, NSF IRI, from the National Science Foundation. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency of the U.S. Government, or of the National Science Foundation.

†Elizabeth Shriberg is also affiliated with the Department of Psychology at the University of California at Berkeley.

2. THE CORPUS

The data we are analyzing were collected at six sites¹ as part of DARPA's Spoken Language Systems project. The corpus contains digitized waveforms and transcriptions of a large number of sessions in which subjects made air travel plans using a computer.
In the majority of sessions, data were collected in a Wizard of Oz setting, in which subjects were led to believe they were talking to a computer, but in which a human actually interpreted and responded to queries. In a small portion of the sessions, data were collected using SRI's Spoken Language System [12], in which no human intervention was involved. Relevant to the current paper is the fact that although the speech was spontaneous, it was somewhat planned (subjects pressed a button to begin speaking to the system), and the transcribers who produced lexical transcriptions of the sessions were instructed to mark words they inferred were verbally deleted by the speaker with special symbols. For further description of the corpus, see MADCOW [10].

¹The sites were: AT&T, Bolt Beranek and Newman, Carnegie Mellon University, Massachusetts Institute of Technology, SRI International, and Texas Instruments, Inc.

3. CHARACTERISTICS AND DISTRIBUTION OF REPAIRS

Of the ten thousand sentences in our corpus, 607 contained repairs. We found that of sentences longer than nine words, 10% contained repairs. While this is lower than rates reported elsewhere for human-human dialog (Levelt [7] reports a rate of 34%), it is still large enough to be significant. And, as system developers move toward more closely modeling human-human interaction, the percentage is likely to rise.

3.1 Notation

In order to classify these repairs, and to facilitate communication among the authors, it was necessary for us to develop a notational system that would: (1) be relatively simple, (2) capture sufficient detail, and (3) describe the majority of repairs observed. The notation is described fully in [2]. The basic aspects of the notation include marking the interruption point, its extent, and relevant correspondences between words in the region. To mark the site of a repair, corresponding to Hindle's "edit signal" [5], we use a vertical bar (|). To express the notion that words on one side of the repair correspond to words on the other, we use a combination of a letter plus a numerical index. The letter M indicates that two words match exactly. R indicates that the second of the two words was intended by the speaker to replace the first. The two words must be similar, either of the same lexical category, or morphological variants of the same base form (including contraction pairs like I/I'd).
Any other word within a repair is notated with X. A hyphen affixed to a symbol indicates a word fragment. In addition, certain cue words, such as "sorry" or "oops" (marked with CR), as well as filled pauses (CF), are also labeled if they occur immediately before the site of a repair.

Table 1: Examples of Notation

3.2 Distribution

While only 607 sentences contained deletions, some sentences contained more than one, for a total of 646 deletions. Table 2 gives the breakdown of deletions by length, where length is defined as the number of consecutive deleted words or word fragments. Most of the deletions were fairly short; one- and two-word deletions accounted for 82% of the data.

Table 2: Distribution of Repairs by Length

We categorized the length 1 and length 2 repairs according to their transcriptions. The results are summarized in Table 3. For the purpose of simplicity, we have in this table combined cases involving fragments (which always occurred as the second word) with their associated full-word patterns. The overall rate of fragments for the length 2 repairs was 34%.

4. SIMPLE PATTERN MATCHING

We analyzed a subset of the 607 sentences containing repairs and concluded that certain simple pattern-matching techniques could successfully detect a number of them.
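One such technique, deleting the first copy of an immediately repeated word sequence, can be sketched as follows. This is our toy reconstruction, not the SRI program; the function names `find_repeats` and `correct` are invented for illustration.

```python
# Toy detector for the "sequence of identical words" pattern: find a
# word sequence that is immediately repeated and propose deleting the
# first copy (the reparandum).

def find_repeats(words, max_len=5):
    """Return (start, length) pairs for immediately repeated word
    sequences, longest matches first."""
    for n in range(max_len, 0, -1):
        hits = [(i, n) for i in range(len(words) - 2 * n + 1)
                if words[i:i + n] == words[i + n:i + 2 * n]]
        if hits:
            return hits
    return []

def correct(words):
    """Delete the first copy of the longest immediate repeat, if any."""
    hits = find_repeats(words)
    if not hits:
        return words
    i, n = hits[0]
    return words[:i] + words[i + n:]

sent = "i would like a i would like a flight to boston".split()
print(" ".join(correct(sent)))  # i would like a flight to boston
```

Note that such string-level matching overgenerates: a fluent flight number like "five one one" also matches the single-word repeat pattern. False positives of this kind are exactly what the syntactic, semantic, and acoustic knowledge sources are meant to filter out.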

Table 3: Distribution of Repairs by Type

    Type            Pattern                   Frequency
    Length 1 Repairs
      Fragments     -, R1-, X-                   61%
      Repeats       M1 | M1                      16%
      Insertions    M1 | X1 ... Xi M1             7%
      Replacement   R1 | R1                       9%
      Other         X | X                         5%
    Length 2 Repairs
      Repeats       M1 M2 | M1 M2                28%
      Replace 2nd   M1 R1 | M1 R1                27%
      Insertions    M1 M2 | M1 X1 ... Xi M2      19%
      Replace 1st   R1 M1 | R1 M1                10%
      Other         ... | ...                    17%

The pattern matching component reported on here looks for the following kinds of subsequences:

- Simple syntactic anomalies, such as "a the" or "to from".
- Sequences of identical words, such as "<I> <would> <like> <a> <book> I would like a flight..."
- Matching single words surrounding a cue word like "sorry," for example "from" in this case: "I would like to see the flights <from> <philadelphia> <i'm> <sorry> from denver to philadelphia."

Of the 406 sentences with nontrivial repairs in our data (more editing necessary than deleting fragments and filled pauses), the program successfully corrected 177. It found 132 additional sentences with repairs but made the wrong correction. There were 97 sentences that contained repairs which it did not find. In addition, out of the 10,517-sentence corpus (the full 10,718 sentences minus the 201 containing only trivial repairs), it incorrectly hypothesized that an additional 191 contained repairs. Thus of 10,517 sentences of varying lengths, it pulled out 500 as possibly containing a repair and missed 97 sentences actually containing a repair. Of the 500 that it proposed as containing a repair, 62% actually did and 38% did not. Of the 62% that had repairs, it made the appropriate correction for 57%. These numbers show that although pattern matching is useful in identifying possible repairs, it is less successful at making appropriate corrections. This problem stems largely from the overlap of related patterns. Many sentences contain a subsequence of words that match not one but several patterns.
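The cue-word case above can likewise be sketched in a few lines. This is an illustration under stated assumptions: the cue list and the `cue_word_repair` function are our own invented names, and the search window of four words is an arbitrary choice, not a value from the original system.

```python
# Toy version of the cue-word pattern: when a cue phrase such as
# "i'm sorry" appears, look for a word occurring both shortly before
# and immediately after it, and delete the span from the first
# occurrence through the cue phrase.

CUE_PHRASES = [["i'm", "sorry"], ["sorry"], ["oops"], ["no"]]

def cue_word_repair(words, window=4):
    for i in range(len(words)):
        for cue in CUE_PHRASES:
            if words[i:i + len(cue)] != cue:
                continue
            after = i + len(cue)
            if after >= len(words):
                continue
            # Is the word right after the cue also present shortly
            # before it?  If so, delete the intervening material.
            for j in range(max(0, i - window), i):
                if words[j] == words[after]:
                    return words[:j] + words[after:]
    return words

sent = ("i would like to see the flights from philadelphia "
        "i'm sorry from denver to philadelphia").split()
print(" ".join(cue_word_repair(sent)))
# i would like to see the flights from denver to philadelphia
```

The anchoring on a matched word keeps this heuristic from firing on every lexical use of the cue: in "i want to leave boston no later than one p m," the word after "no" does not recur before it, so the sentence is left untouched.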
For example, the phrase "FLIGHT <word> FLIGHT" matches three different patterns:

    show the FLIGHT earliest FLIGHT        M1 | X1 M1
    show the FLIGHT time FLIGHT date       M1 R1 | M1 R1
    show the delta FLIGHT united FLIGHT    R1 M1 | R1 M1

Each of these sentences is a false positive for the other two patterns. Despite these problems of overlap, pattern matching is useful in reducing the set of candidate sentences to be processed for repairs. Instead of applying detailed and possibly time-intensive analysis techniques to 10,000 sentences, we can increase efficiency by limiting ourselves to the 500 sentences selected by the pattern matcher, which has (at least on one measure) a 75% recall rate. The repair sites hypothesized by the pattern matcher constitute useful input for further processing based on other sources of information.

5. NATURAL LANGUAGE CONSTRAINTS

Here we describe experiments conducted to measure the effectiveness of a natural language processing system in distinguishing repairs from false positives. A false positive is a repair pattern that incorrectly matches a sentence or part of a sentence. We conducted the experiments using the syntactic and semantic components of the Gemini natural language processing system. Gemini is an extensive reimplementation of the Core Language Engine [1]. It includes modular syntactic and semantic components, integrated into an efficient all-paths bottom-up parser [11]. Gemini was trained on a 2,200-sentence subset of the full 10,718-sentence corpus (only those annotated as class A or D). Since this subset excluded the unanswerable (class X) sentences, Gemini's coverage on the full corpus is only an estimated 70% for syntax, and 50% for semantics.² Nonetheless, the results reported here are promising, and should improve as syntactic and semantic coverage increase.
We tested Gemini on a subset of the data that the pattern matcher returned as likely to contain a repair. We excluded all sentences that contained fragments, resulting in a dataset of 335 sentences, of which 179 contained repairs and 176 contained false positives. The approach was as follows: for each sentence, parsing was attempted. If parsing succeeded, the sentence was marked as a false positive. If parsing did not succeed, then pattern matching was used to detect possible repairs, and the edits associated with the repairs were made. Parsing was then reattempted. If parsing succeeded at this point, the sentence was marked as a repair. Otherwise, it was marked NO OPINION.

Since multiple repairs and false positives can occur in the same sentence, the pattern matching process is constrained to prefer fewer repairs to more repairs, and shorter repairs to longer repairs. This is done to favor an analysis that deletes the fewest words from a sentence. It is often the case that more drastic repairs would result in a syntactically and semantically well-formed sentence, but not the sentence that the speaker intended. For instance, the sentence "show me <flights> daily flights to boston" could be repaired by deleting the words "flights daily", and would then yield a grammatical sentence, but in this case the speaker intended to delete only "flights."

Table 4 shows the results of these experiments. We ran them two ways: once using syntactic constraints alone and again using both syntactic and semantic constraints.

Table 4: Syntax and Semantics Results

    Syntax Only              Actual Repair   Actual False Positive
      Marked Repair            68 (96%)          56 (30%)
      Marked False Positive     3 (4%)          131 (70%)

    Syntax and Semantics     Actual Repair   Actual False Positive
      Marked Repair            64 (85%)          23 (20%)
      Marked False Positive    11 (15%)          90 (80%)

As can be seen, Gemini is quite accurate at detecting a repair, although somewhat less accurate at detecting a false positive. Furthermore, in cases where Gemini detected a repair, it produced the intended correction in 62 out of 68 cases for syntax alone, and in 60 out of 64 cases using combined syntax and semantics.

²Gemini's syntactic coverage of the 2,200-sentence dataset it was trained on (the set of annotated and answerable MADCOW queries) is approximately 91%, while its semantic coverage is approximately 77%. On a fair test of the February 1992 test set, Gemini's syntactic coverage was 87% and semantic coverage was 71%.
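The parse/edit/re-parse procedure described above can be sketched as follows. The parser here is a trivial stand-in, not Gemini; `judge` and the toy grammar are our hypothetical names, and candidate edits are assumed to arrive from the pattern matcher as sets of word indices to delete.

```python
# Sketch of the filtering strategy: a sentence that parses as-is is a
# false positive; otherwise apply candidate edits (fewest deleted
# words first) and re-parse.

def judge(sentence, candidate_edits, parses):
    """Return ('false positive' | 'repair' | 'no opinion', sentence),
    following the parse / edit / re-parse procedure."""
    if parses(sentence):
        return "false positive", sentence
    # Prefer analyses that delete the fewest words.
    for edit in sorted(candidate_edits, key=len):
        repaired = [w for i, w in enumerate(sentence) if i not in edit]
        if parses(repaired):
            return "repair", repaired
    return "no opinion", sentence

# Toy "grammar": a sentence with two occurrences of "flights" is
# ill-formed; one occurrence is fine.
toy = lambda ws: ws.count("flights") < 2

verdict, fixed = judge("show me flights flights to boston".split(),
                       [{2}], toy)
print(verdict, "->", " ".join(fixed))
```

Sorting edits by size mirrors the paper's preference for fewer and shorter repairs, so that a grammatical but unintended over-deletion is not chosen when a smaller edit also yields a parse.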
In both cases, a large number of sentences (29% for syntax, 50% for semantics) received a NO OPINION evaluation. The NO OPINION cases were evenly split between repairs and false positives in both tests. The main points to be noted from Table 4 are that with syntax alone, the system is quite accurate in detecting repairs, and with syntax and semantics working together, it is accurate at detecting false positives. However, since the coverage of syntax and semantics will always be lower than the coverage of syntax alone, we cannot compare these rates directly.

6. ACOUSTICS

A third source of information that can be helpful in detecting repairs is acoustics. While acoustics alone cannot tackle the problem of locating repairs, since any prosodic patterns found in repairs will be found in fluent speech, acoustic information can be quite effective when combined with other sources of information, particularly pattern matching.

Our approach in studying the ways in which acoustics might be helpful was to begin by looking at two patterns conducive to acoustic measurement and comparison. First, we focused on patterns in which there is only one matched word, and in which the two occurrences of that word are either adjacent or separated by only one word. Matched words allow for comparisons of word duration; proximity helps avoid variability due to global intonation contours not associated with the patterns themselves. We present here analyses for the M1 | M1 ("flights for <one> one person") and M1 | X M1 ("<flight> earliest flight") repairs, and their associated false positives ("u s air five one one," "a flight on flight number five one one," respectively). Second, we have done a preliminary analysis of repairs in which a word such as "no" or "well" was present as an editing expression [6] at the point of interruption ("...flights <between> <boston> <and> <dallas> <no> between oakland and boston").
False positives for these cases are instances in which the cue word functions in its usual lexical sense ("I want to leave boston no later than one p m."). Hirschberg and Litman [3] have shown that cue words that function differently can be distinguished perceptually by listeners on the basis of prosody. Thus, we sought to determine whether acoustic analysis could help in deciding, when such words were present, whether or not they marked the interruption point of a repair. In both analyses, a number of features were measured to allow for comparisons between the words of interest.
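The two basic measurements used below, the silent gap between the matched words and a word's mean F0 over voiced frames, can be illustrated as follows. This is our sketch, not the original analysis (which used hand-labeled word boundaries and Entropic Waves pitch tracks); the frame representation and the 0.5 voicing threshold are assumptions, since the paper's threshold value is not preserved in this transcription.

```python
# Illustration of the two acoustic measures: inter-word gap duration
# and mean F0 over voiced frames.  Frames are (f0_hz, prob_voicing)
# pairs sampled every 10 ms.

def mean_f0(frames, voicing_threshold=0.5):
    """Average F0 over frames judged voiced (threshold is an assumption)."""
    voiced = [f0 for f0, pv in frames if pv > voicing_threshold]
    return sum(voiced) / len(voiced) if voiced else 0.0

def gap_ms(first_word_offset_s, second_word_onset_s):
    """Silent interval between the two matched words, in milliseconds."""
    return (second_word_onset_s - first_word_offset_s) * 1000.0

# A long gap between the matched words suggests a repair rather than a
# fluent repetition (repairs averaged 380 ms vs. 42 ms for false
# positives in the data reported below).
print(round(gap_ms(1.20, 1.58)))                     # 380
print(mean_f0([(210, 0.9), (200, 0.8), (0, 0.1)]))   # 205.0
```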

Word onsets and offsets were labeled by inspection of waveforms and parameter files (pitch tracks and spectrograms) obtained using the Entropic Waves software package. Files with questionable pitch tracks were excluded from the analysis. An average F0 value for words of interest was determined by simply averaging, within a labeled word, all 10-ms frame values having a probability of voicing above a fixed threshold.

In examining the M1 | M1 repair pattern, we found that the strongest distinguishing cue between the repairs (N = 20) and the false positives (N = 20) was the interval between the offset of the first word and the onset of the second. False positives had a mean gap of 42 ms (s.d. = 55.8) as opposed to 380 ms (s.d. = 200.4) for repairs. A second difference found between the two groups was that, in the case of repairs, there was a statistically reliable reduction in duration for the second occurrence of the matched word, with a mean difference of 53.4 ms. However, because false positives showed no reliable difference for word duration, this was a much less useful predictor than gap duration. F0 of the matched words was not helpful in separating repairs from false positives; both groups showed a highly significant correlation for, and no significant difference between, the mean F0 of the matched words.

A different set of features was found to be useful in distinguishing repairs from false positives for the M1 | X M1 pattern. These features are shown in Table 5. Cell values are percentages of repairs or false positives that possessed the characteristics indicated in the columns. Despite the small data set, some suggestive trends emerge.

Table 5: Acoustic Characteristics of M1 | X M1 Repairs (repairs N = 12, false positives N = 24)

For example, for cases in which there was a pause (defined for purposes of this analysis as a silence of greater than 200 ms) on only one side of the inserted word, the pause was never after the insertion (X) for the repairs, and rarely before the X for the false positives. Note that values do not add up to 100% because cases of no pauses, or pauses on both sides, are not included in the table. A second distinguishing characteristic was the F0 value of X. For repairs, the inserted word was nearly always higher in F0 than the preceding word; for false positives, this increase in F0 was rarely observed.

Table 6 shows the results of combining the acoustic constraints in Table 5. As can be seen, although acoustic features may be helpful individually, certain combinations of features widen the gap between observed rates of repairs and false positives possessing the relevant set of features.

Table 6: Combining Acoustic Characteristics of M1 | X M1 Repairs

Finally, in a preliminary study of the cue words "no" and "well," we compared 9 examples of these words at the site of a repair to 15 examples of the same words occurring in fluent speech. We found that these groups were quite distinguishable on the basis of simple prosodic features. Table 7 shows the percentage of repairs versus false positives characterized by a clear rise or fall in F0, lexical stress, and continuity of the speech immediately preceding and following the editing expression ("continuous" means there is no silent pause on either side of the cue word). As can be seen, at least for this limited data set, cue words marking repairs were quite distinguishable from those same words found in fluent strings on the basis of simple prosodic features.

Table 7: Acoustic Characteristics of Cue Words

Although one cannot draw conclusions from such limited data sets, such results are nevertheless interesting. They illustrate that acoustics can indeed play a role in distinguishing repairs from false positives, but only if each pattern is examined individually, to determine which features to use, and how to combine them. Analysis of additional patterns and access to a larger database of repairs will help us better determine the ways in which acoustics can play a role in detection of repairs.

7. CONCLUSION

In summary, disfluencies occur at high enough rates in human-computer dialog to merit consideration. In contrast to earlier approaches, we have made it our goal to detect and correct repairs automatically, without assuming an explicit edit signal. Without such an edit signal, however, repairs are easily confused both with false positives and with other repairs. Preliminary results show that pattern matching is effective at detecting repairs without excessive overgeneration. Our syntax-only approach is quite accurate at detecting repairs and correcting them. Acoustics is a third source of information that can be tapped to provide corroborating evidence about a hypothesis, given the output of a pattern matcher. While none of these knowledge sources by itself is sufficient, we propose that by combining them, and possibly others, we can greatly enhance our ability to detect and correct repairs. As a next step, we intend to explore additional aspects of the syntax and semantics of repairs, analyze further acoustic patterns, and examine corpora with higher rates of disfluencies.

ACKNOWLEDGMENTS

We would like to thank Patti Price for her helpful comments on earlier drafts, as well as for her participation in the development of the notational system used. We would also like to thank Robin Lickley for his helpful feedback on the acoustics section.

REFERENCES

1. Alshawi, H., Carter, D., van Eijck, J., Moore, R. C., Moran, D. B., Pereira, F., Pulman, S., and A. Smith (1988) Research Programme in Natural Language Processing: July 1988 Annual Report, SRI International Tech Note, Cambridge, England.
2. Bear, J., Dowding, J., Price, P., and E. E. Shriberg (1992) "Labeling Conventions for Notating Grammatical Repairs in Speech," unpublished manuscript, to appear as an SRI Tech Note.
3. Hirschberg, J. and D. Litman (1987) "Now Let's Talk About Now: Identifying Cue Phrases Intonationally," Proceedings of the ACL.
4. Carbonell, J. and P. Hayes (1983) "Recovery Strategies for Parsing Extragrammatical Language," American Journal of Computational Linguistics, Vol. 9, Numbers 3-4.
5. Hindle, D. (1983) "Deterministic Parsing of Syntactic Non-fluencies," Proceedings of the ACL.
6. Hockett, C. (1967) "Where the Tongue Slips, There Slip I," in To Honor Roman Jakobson: Vol. 2, The Hague: Mouton.
7. Levelt, W. (1983) "Monitoring and Self-repair in Speech," Cognition, Vol. 14.
8. Levelt, W., and A. Cutler (1983) "Prosodic Marking in Speech Repair," Journal of Semantics, Vol. 2.
9. Lickley, R., R. Shillcock, and E. Bard (1991) "Processing Disfluent Speech: How and when are disfluencies found?" Proceedings of the Second European Conference on Speech Communication and Technology, Vol. 3.
10. MADCOW (1992) "Multi-site Data Collection for a Spoken Language Corpus," Proceedings of the DARPA Speech and Natural Language Workshop, February 23-26, 1992.
11. Moore, R. and J. Dowding (1991) "Efficient Bottom-up Parsing," Proceedings of the DARPA Speech and Natural Language Workshop, February 19-22, 1991.
12. Shriberg, E., Wade, E., and P. Price (1992) "Human-Machine Problem Solving Using Spoken Language Systems (SLS): Factors Affecting Performance and User Satisfaction," Proceedings of the DARPA Speech and Natural Language Workshop, February 23-26, 1992.
13. Ward, W. (1991) "Evaluation of the CMU ATIS System," Proceedings of the DARPA Speech and Natural Language Workshop, February 19-22, 1991.
