Winnie – I am already bored with her being papped every day leaving practice. If she is this desperate for attention, she needs a wardrobe malfunction, stat. Like a titty falling out of a wife beater or something.

****WARNING: THIS IS A LONG RESPONSE TO A QUESTION @LOWKEY ASKED ME. PLEASE SCROLL DOWN IF YOU ARE NOT INTERESTED****

A few days ago, after the Great CDaN Bee Swarm of March 2014, people were discussing which user names they believed were operated by the same author. I made some offhand remark that in a lot of cases, grammatical, syntactical and thematic patterns could potentially be triangulated by the types of programs (CAQDAS) used by qualitative researchers studying rhetorical/literary works, sociolinguistics, narratives, and other forms of textual analyses. @LowKey asked me for info/sources and I said I would follow up, because I had to ask some experts first. So here it goes.

I have polled a dozen or so grad students, postdocs and professors from neuropsychology, Comp Lit/English, criminology/law, computer science and various social sciences. A few study online communities and trolling behaviors specifically.

Overall, this research has not been done yet on a big scale, but it IS in the process of being developed. This research is relatively new, as you would imagine, and it has been relatively confined to the private sector and government/law enforcement agencies thus far. Academia is just starting to catch on. But fuck yes, it is theoretically possible and this vein of inquiry is based on a lot of assumptions that have been empirically supported across various literatures. Forensic linguistics is a whole damn field on its own, and it is gaining a lot of traction recently. Here are links to the very few sources on troll detection methods out there:

Academics have only produced a limited amount of research on trolls and troll detection methodologies, such as these studies, which, as you can see, have all only JUST come out:

Another interesting thing to note is that a lot of the research is skewed specifically towards cyber-bullying, rather than mere trolling. It is my understanding that similar programs, like BULLY TRACER, have also been developed.

A similar project was also recently conducted in the UK by researchers like CAMBRIA ET AL., 2010, whose work relies on sentic computing programs to conduct textual analyses.

Some private sector advances have also been spearheaded recently by corporations like XBOX. Social media sites are also working on these types of detection programs.

In sum, YES, it is absolutely possible to use textual analysis software to assess the likelihood that multiple posts are being written by the same author, but these studies are new and relatively limited in their scope. Setting up a project like this would totally be feasible, and I hope that one of the people I talked to actually sets it up in the future. I would LOVE to see this shit happen. But they better give last author credit or at least send a thanks to @LadyH and @LowKey. LOL.

Based on what I have gathered, here is what we think future researchers should try. One could code online posts and run them through qualitative textual analysis programs like Dedoose, which identify patterns and themes across texts. This formulation is based on the assumption that even if a troll assumes multiple identities that talk about different topics or attempt to mimic different dialects, they will still have some sort of underlying writing pattern that may now be detectable thanks to technological advancements. Thus, there are a few important things you would specifically be trying to triangulate hits on:

*Grammatical, syntactical and thematic patterns
*Word combinations/patterns and certain types of slang words
*Certain spelling/grammatical errors (i.e. this has been common in criminal cases relying on forensic linguistics, like the Zodiac, who infamously misspelled "eggs" as "aigs")
*The contexts in which trolls appear, support certain posters, or troll certain posters, because authors tend to get irritated by the same users or topics, which will draw them out
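For the curious, here is a minimal sketch of what "triangulating" on spelling quirks and word patterns can look like in practice. This is NOT how Dedoose or any of the programs above actually work internally; it is just a toy illustration of one standard authorship-attribution baseline (character trigram profiles compared with cosine similarity), with made-up example posts:

```python
from collections import Counter
import math

def trigram_profile(text):
    """Character trigram frequency profile. Character n-grams capture
    spelling quirks, punctuation habits and word endings, not just vocabulary."""
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine_similarity(p, q):
    """Cosine similarity between two trigram profiles (1.0 = identical profile)."""
    shared = set(p) & set(q)
    dot = sum(p[g] * q[g] for g in shared)
    norm = (math.sqrt(sum(v * v for v in p.values()))
            * math.sqrt(sum(v * v for v in q.values())))
    return dot / norm if norm else 0.0

# Two posts sharing the same quirks ("seen" for "saw", "aigs", "lol")
# versus one post in a completely different register.
post_a = "I seen him at the store, aigs and all, lol"
post_b = "She seen it too, bought aigs again lol"
post_c = "The quarterly report indicates substantial revenue growth."

sim_ab = cosine_similarity(trigram_profile(post_a), trigram_profile(post_b))
sim_ac = cosine_similarity(trigram_profile(post_a), trigram_profile(post_c))
print(sim_ab > sim_ac)  # the quirk-sharing pair scores more similar
```

A real system would obviously use many more features (the grammatical, thematic and contextual patterns listed above) and a learned model rather than a single similarity score, but the underlying idea is the same: stylistic fingerprints leak through even when the surface identity changes.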

But equally important to studying whether this kind of technology can confirm that multiple identities belong to one author is that these analyses include some sort of assessment of the reliability coefficients of this design. In other words, a program may be able to predict the likelihood that two texts were written by the same author, but how accurate is this prediction empirically? The other biggest thing we would have to consider is that this type of analysis would be very difficult, if not impossible, to do without a pretty sophisticated computer coding system that learns as it goes. I do worry that programs like Dedoose may not be as effective at analyzing certain types of texts. However, effective, sophisticated and adaptive programs not only exist, but they have held up legally in courts. They are used extensively by law enforcement agencies monitoring the communications of criminal organizations and suspected terrorists, but unfortunately, they are not available for academic or public use. I would love to know what the fuck those evil geniuses have come up with.

But more generally, this line of research will bring about a lot of valid criticisms that CDaNers like @Count pointed out. (1) It first needs to be specified what we are defining as a troll, and what kinds of trolls you are trying to run content analysis software on. There are lots of different types with lots of different motives. (2) It's not just psych profiles, as alleged by researchers like BUCKELS ET AL., 2014, which center on alleged dark traits, or writing styles that will matter for trolling; it might be more important to have trolls define their use of internet space, because that would affect how they write. Assuming that trolling is necessarily linked to some sort of psychological disposition already presupposes that trolls are trolls, whereas this kind of perspective assumes that we don't know what's going on until we investigate it. These considerations were outlined to me by a grad student who is trying to set up a dissertation on how individual understandings of technology and the use of a certain internet space are linked with concepts of real and assumed online identities.

Sorry for the crazy length. When someone asks me a question, I try to be as thorough as possible, and I was really intrigued by looking into this. I am sorry that Heisenberg blew up this page y'all, and thanks to @LowKey for the food for thought. It was a fun little project to look into these past few days.

@laura, just remember the speech they give on every Criminal Minds, white male, loner, etc 😀 though I think the interwebz has given rise to more female perps simply because it isn't physical but can kill/destroy.

Hahahaha naw, in all seriousness though, the Buckels study found that computer users who engage in trolling (defined in a malicious sense) typically tend to score higher on the so-called Dark Tetrad traits. Trolls scored higher on a number of the personality traits examined: Machiavellianism, psychopathy, narcissism, extraversion, disagreeableness and sadism. On the other hand, I believe that trolling behaviors are also linked to lower self-esteem ratings. However, this line of research is also new and needs to be elaborated on more in the future.

Thanks to those that bothered to read and give feedback. I'm glad some found all that useful.

DISCLAIMER

CRAZY DAYS AND NIGHTS IS A GOSSIP SITE. THE SITE PUBLISHES RUMORS, CONJECTURE, AND FICTION. IN ADDITION TO ACCURATELY REPORTED INFORMATION, CERTAIN SITUATIONS, CHARACTERS AND EVENTS PORTRAYED IN THE BLOG ARE EITHER PRODUCTS OF THE AUTHOR'S IMAGINATION OR ARE USED FICTITIOUSLY. INFORMATION ON THIS SITE MAY CONTAIN ERRORS OR INACCURACIES; THE BLOG'S PROPRIETOR DOES NOT MAKE WARRANTY AS TO THE CORRECTNESS OR RELIABILITY OF THE SITE'S CONTENT. LINKS TO CONTENT ON AND QUOTATION OF MATERIAL FROM OTHER SITES ARE NOT THE RESPONSIBILITY OF CRAZY DAYS AND NIGHTS.