Abstract

A major challenge in automated text analysis is that different words are used for related concepts. Analyzing text at the surface level treats related concepts (e.g., actors, actions, targets, and victims) as distinct objects, potentially missing common narrative patterns. Shallow parsers reveal the semantic roles of words, yielding subject-verb-object triplets. We developed a novel algorithm that extracts information from these triplets by clustering them into generalized concepts, using syntactic criteria based on common contexts and corpus-based statistical semantic criteria based on "contextual synonyms". We show that the generalized-concept representation of text (1) overcomes surface-level differences (which arise when different keywords are used for related concepts) without drift, (2) leads to a higher-level semantic-network representation of related stories, and (3) when used as features, yields a significant 36% boost in performance on the story detection task.
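The clustering idea described above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the example triplets are invented, and only one of the paper's criteria (the syntactic one, merging subjects that share a verb-object context) is shown; the corpus-statistical "contextual synonym" criterion is omitted.

```python
# Illustrative sketch: grouping subject-verb-object triplets into
# "generalized concepts" by shared syntactic context. The triplets
# below are hypothetical examples, not data from the paper.
from collections import defaultdict

triplets = [
    ("militants", "attacked", "village"),
    ("insurgents", "attacked", "village"),
    ("rebels", "seized", "town"),
    ("insurgents", "seized", "town"),
]

# Syntactic criterion: subjects that occur with the same (verb, object)
# context are candidates for merging into one generalized concept.
context_to_subjects = defaultdict(set)
for subj, verb, obj in triplets:
    context_to_subjects[(verb, obj)].add(subj)

# Merge subject sets that overlap, so concepts linked through a shared
# member (here "insurgents") end up in the same cluster.
clusters = []
for subjects in context_to_subjects.values():
    for cluster in clusters:
        if cluster & subjects:
            cluster |= subjects
            break
    else:
        clusters.append(set(subjects))

print(clusters)  # one generalized "attacker" concept
```

The same grouping would be applied to the verb and object slots; the paper additionally constrains merges with corpus statistics to avoid the concept drift mentioned in the abstract.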

Original language

English (US)

Title of host publication

Proceedings of the 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, ASONAM 2015

publisher = "Association for Computing Machinery, Inc",

}

TY - GEN

T1 - Story detection using generalized concepts and relations

AU - Ceran, Betul

AU - Kedia, Nitesh

AU - Corman, Steven

AU - Davulcu, Hasan

PY - 2015/8/25

Y1 - 2015/8/25
