Abstract

Emotion analysis (EA) is a rapidly developing area in computational linguistics. Most EA systems use a very limited number of emotion classes and assign them to discrete, predefined text units. The question we address is whether the set of emotion categories can be enriched and whether the units to which the categories are assigned can be defined more flexibly. Six untrained participants annotated a corpus of eight texts, using fifteen emotion categories and no predetermined annotation units. The inter-annotator agreement rates were remarkably high for this difficult task: 0.55 (moderate) on average, reaching 0.82 (almost perfect) for some annotator pairs. The intended EA system is aimed primarily at the emotion enhancement of human–computer interaction in virtual reality. The system is meant to be a bridge between unprocessed input text and auditory and visual information: generated speech, animation of facial expressions, and body language. The first steps towards integrating text-based information annotated for emotion categories with the simulation of human emotional perception of texts in storytelling scenarios for virtual reality have already been taken. We have created a virtual character whose facial and body animation is driven by annotations in the text.
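
The agreement figures quoted above use the verbal bands ("moderate", "almost perfect") of the Landis–Koch scale, which suggests a kappa-style chance-corrected statistic, although the abstract does not name the exact coefficient. Below is a minimal sketch of Cohen's kappa for one annotator pair, assuming the free-form annotations have first been aligned to common units and mapped to labels from the fifteen-category inventory; the function name and the example labels are illustrative, not taken from the paper.

```python
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators over aligned units."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of units with identical labels.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement, from each annotator's label distribution.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical labels from a fifteen-category emotion inventory.
ann1 = ["joy", "fear", "neutral", "anger", "joy", "surprise"]
ann2 = ["joy", "fear", "neutral", "sadness", "joy", "surprise"]
print(f"kappa = {cohen_kappa(ann1, ann2):.2f}")  # ~0.79
```

On the Landis–Koch scale, values of 0.41–0.60 count as moderate and 0.81–1.00 as almost perfect, matching the interpretations given for 0.55 and 0.82 above.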