I’d like to point you to a Call for Papers for a workshop I’m involved in organizing at Hypertext 2010 in Toronto this June. I’m really excited about the focus of this event, and I’m looking forward to lots of interesting discussions and presentations (check out the invited talks and panelists!).

The workshop will be opened by an invited talk given by Ed Chi (Palo Alto Research Center). The talk will be followed by a number of peer-reviewed research and position paper presentations and a discussion panel including Barry Wellman (University of Toronto), Marti Hearst (University of California, Berkeley) and Ed Chi (Palo Alto Research Center).

Workshop’s Objectives and Goals:

The goal of this workshop is to focus researchers’ attention on the increasingly important role of modeling social media. The workshop aims to attract and discuss a wide range of modeling perspectives (e.g. justificative, explanatory, descriptive, formative and predictive models) and approaches (statistical modeling, conceptual modeling, temporal modeling, etc.). We want to bring together researchers and practitioners with diverse backgrounds interested in 1) exploring different perspectives on and approaches to modeling complex social media phenomena and systems, 2) the different purposes and applications that models of social media can serve, 3) issues of integrating and validating social media models and 4) new modeling techniques for social media. The workshop aims to start a dialogue reflecting on these issues.

Topics:

Topics may include, but are not limited to:

+ new modeling techniques and approaches for social media
+ models of propagation and influence in Twitter, blogs and social tagging systems
+ models of expertise and trust in Twitter, wikis, newsgroups and question answering systems
+ modeling of social phenomena and emergent social behavior
+ agent-based models of social media
+ models of emergent social media properties
+ models of user motivation, intent and goals in social media
+ cooperation and collaboration models
+ software-engineering and requirements models for social media
+ adapting and adaptive hypertext models for social media
+ modeling social media users and their motivations and goals
+ architectural and framework models
+ user modeling and behavioural models
+ modeling the evolution and dynamics of social media

UPDATE March 17 2010: More results can be found in the following publication: M. Strohmaier, C. Koerner, R. Kern, Why do Users Tag? Detecting Users’ Motivation for Tagging in Social Tagging Systems, 4th International AAAI Conference on Weblogs and Social Media (ICWSM2010), Washington, DC, USA, May 23-26, 2010. (Download pdf)

One question that is interesting in this context is: “What do the tag clouds of Categorizers and Describers actually look like – and what can we learn from them?“.

Categorizers vs. Describers: Our previous work suggests how the tag clouds of Categorizers and Describers should look in theory: Categorizers would rather use general terms for tagging – terms that make useful labels for categories based on their model of the world. Describers, on the other hand, would use terms that are specific to a resource, or concepts that can be found directly within a resource, based on the resource’s characteristics. That’s the theory.

Example of an Extreme Categorizer: Among 445 Delicious users, the following screenshot shows the tag cloud of the single user who scored highest on our “Categorization” measure (the most extreme Categorizer in our dataset).

An example tag cloud of an "Extreme Categorizer" (based on ~1900 bookmarks)

The results are quite intriguing: The above user clearly uses very general terms to annotate his resources, and introduces an elaborate taxonomy to categorize them. While some parts of his vocabulary are fine-grained (e.g. “fashion” with the corresponding sub-categories “fashion_blog” and “fashion_brand”), others are less elaborate (e.g. “games”, “health”, etc.). The user also produced a controlled vocabulary and stuck to it over the course of ~1900 bookmarks, which I think can be seen as another indication of this user’s inclination to use tags for categorization purposes. The fact that a combination of our measures for tagging motivation (Conditional Tag Entropy and Orphaned Tags) produced this interesting example of an extreme Categorizer provides some evidence for the plausibility of these measures. I think that’s great news.

Example of an Extreme Describer: The next screenshot shows an excerpt of a tag cloud of the user that scored highest on the “Description” measure (the most extreme Describer in our dataset).

An example tag cloud of an "Extreme Describer" (excerpt, based on ~1700 bookmarks)

It is interesting to note that this tag cloud is an excerpt; the user’s original tag cloud is roughly twice this size. The user clearly introduces a large set of tags, and uses many different variations of the same or similar concepts, with little regard for terminological or conceptual differences (e.g. exce, excel, Excel_Functions, Excel2007, Exceler, excelets, ExcelPoster, Excl, excxel). Again, the fact that our measures for tagging motivation singled out this particular user as an extreme example of a Describer can be seen as an indicator of the measures’ plausibility in principle.

However, what is also apparent from this example is that even in the case of this extreme Describer, some categories seem to be present in his tag vocabulary (e.g. “ebooks”, “fun”, etc.). This suggests that a binary approach to understanding tagging motivation (a user is EITHER a Categorizer OR a Describer) is implausible.

Open Questions: Overall, these two users with diametrically opposed motivations for tagging raise a number of interesting questions worth studying: What are the characteristics, utility and properties of tags produced by Categorizers and Describers? How do these different types of tagging motivation influence the resulting folksonomies? And how do they influence quality attributes of algorithms (e.g. search, ranking) and applications (e.g. tag recommendation) that process folksonomical data? We are looking into some of these questions in our current research.


On the “social web” or “web2.0”, where user participation is entirely voluntary, user motivation has been identified as a key factor in the mechanisms contributing to the success of tagging systems. Web researchers have been trying to identify the reasons why tagging systems work for a couple of years now, as evidenced, for example, by the organization of a panel at CHI 2006 and a number of conferences and workshops on this topic.

Recent research on tagging motivation suggests that it is a rather complex construct. However, there seems to be emerging consensus that a distinction between at least two categories of tagging motivation appears useful: Categorization vs. Description. (Update May 30 2009: I was able to trace back the earliest mention of this distinction to a blog post by Tom Coates from 2005).

Categorization vs. Description

Categorization: Users who are motivated by Categorization engage in tagging because they want to construct and maintain a navigational aid to the resources (URLs, photos, etc) being tagged. This typically implies a limited set of tags (or categories) that is rather stable. Resources are assigned to tags whenever they share some common characteristic important to the mental model of the user (e.g. ‘family photos’, ‘trip to Vienna’ or ‘favorite list of URLs’). Because the tags assigned are very close to the mental models of users, they can act as suitable facilitators for navigation and browsing.

Description: On the other hand, users who are motivated by Description engage in tagging because they want to accurately and precisely describe the resources being tagged. This typically implies an open set of tags, with a rather dynamic and unlimited tag vocabulary. The goal of tagging is to identify those tags that match the resource best. Because the tags assigned are very close to the content of the resources, they can act as suitable facilitators for description and searching.

Related Research: This basic distinction can be identified in the work of a number of researchers who have made similar distinctions: Xu et al. 2006 (“Context-based” vs. “Content-based”), Golder and Huberman 2006 (“Refining Categories” vs. “Identifying what it is/is about”), Marlow et al. 2006 (“Future retrieval” vs. “Contribution and Sharing”), Ames and Naaman 2007 (“Organization” vs. “Communication”) and Heckner et al. 2008 (“Personal Information Management” vs. “Sharing”). These examples, to name just a few, all represent recent research aiming to demystify and conceptualize the reasons why users participate in tagging systems.

Why should we care?

“In the wild“, user behavior on social tagging systems is often a combination of both. So why is this distinction interesting? I believe that this distinction is interesting because it has a number of important implications, including but not limited to:

Tag Recommender Systems: A “Categorizer” will more likely reject tags recommended from a larger user population, because she is primarily interested in constructing and maintaining “her” taxonomy, using “her” individual tag vocabulary.

Search: Tags produced by “Describers” are more likely to be helpful for search and retrieval because they focus on the content of resources, whereas tags produced by “Categorizers” focus on their mental model. Tags by Categorizers are thus more subjective, whereas tags by Describers are more objective.

Knowledge Acquisition: Folksonomies, i.e. the conceptual structures that can be inferred from the tripartite graph of tagging systems, are likely to be influenced by the “mixture” or dominance of Categorizers and Describers in their system. A tagging system primarily populated by Categorizers is likely to give rise to a completely different set of possible folksonomies than a tagging system primarily populated by Describers. More importantly, it is plausible to assume that even within a given tagging system, tagging motivation varies among users.

This brings me to a small research project I am currently working on: Assuming that a) this distinction in user motivation exists in real-world tagging systems and b) it has important implications, it would be interesting to measure and detect the degree to which users are Categorizers or Describers. Due to the latent nature of “tagging motivation”, past research has mostly focused on questionnaire- or sample-based studies of motivation, asking users how they themselves interpret their tagging behavior. While this early work has provided fundamental insights into tagging motivation and contributed significantly to theory building, as a research community we currently lack robust metrics and automatic methods to detect tagging motivation in tagging systems without direct user interaction.

Detecting Tagging Motivation

I think there are several approaches to detecting whether users are Categorizers or Describers without the need to ask them directly. One approach would focus on analyzing the semantics of tags, using WordNet and other knowledge bases to determine the meaning of tags and infer user motivation. This would require parsing text and performing linguistic analysis, which I believe is difficult in the presence of typos, named entities, combined tags (“toread”) and other issues. Another approach would focus on comparing the tag vocabulary of users to the tag vocabulary of “the crowd”. Users who share a large portion of their tag vocabulary with the crowd might be Describers, whereas users with highly individual vocabularies might be Categorizers. Again there are problems: In tagging systems that accommodate users with different language backgrounds, this approach might detect user motivation based on false premises.

So what would be a more robust way of detecting user motivation? I am currently interested in developing a model that would be agnostic to language, semantics or social context, focusing solely on statistical properties of individual tagging histories. This way, a determination of user motivation could be made without linguistic analysis or acquiring complete folksonomies from tagging systems, based on a single user’s tagging log. Let me explain what I mean. I hypothesize that the following statistical properties of a user’s tagging history allow us to conduct interesting analyses:

Tag Vocabulary size over time: Over time, an ideal Categorizer’s tag vocabulary would reach a plateau, because only a limited number of categories are of interest to him. An ideal Describer is not limited in terms of her tagging vocabulary. This should be easy to observe.
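As a rough sketch of how this could be measured (the function and the toy tagging logs below are hypothetical, for illustration only, not from our dataset), vocabulary growth can be computed directly from a chronological tagging log:

```python
def vocabulary_growth(tag_log):
    """Size of the distinct-tag vocabulary after each bookmark.

    tag_log: one list of tags per bookmark, in chronological order.
    A Categorizer's curve should flatten out; a Describer's keeps rising.
    """
    seen = set()
    growth = []
    for tags in tag_log:
        seen.update(tags)
        growth.append(len(seen))
    return growth

# Toy logs (hypothetical, for illustration only):
categorizer_log = [["work"], ["work"], ["family"], ["work"], ["family"]]
describer_log = [["python"], ["numpy"], ["pandas"], ["scipy"], ["jupyter"]]

print(vocabulary_growth(categorizer_log))  # [1, 1, 2, 2, 2] -- plateaus
print(vocabulary_growth(describer_log))    # [1, 2, 3, 4, 5] -- keeps growing
```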

Tag Entropy over time: A Categorizer has an incentive to maintain high entropy (or “information value”) in his tag cloud: tags need to be as discriminative as possible for him to use them as a navigational aid, otherwise tags would be of little use in browsing. A Describer has no interest in maintaining high entropy.
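A minimal sketch of this idea, using plain Shannon entropy over a user’s tag frequencies (a simplification, not a full measure of tagging motivation; data and function name are hypothetical):

```python
import math
from collections import Counter

def tag_entropy(tag_assignments):
    """Shannon entropy (in bits) of a user's tag frequency distribution.

    tag_assignments: flat list of every tag the user has assigned so far.
    Evenly used, discriminative tags yield high entropy; a few dominant
    catch-all tags yield low entropy.
    """
    counts = Counter(tag_assignments)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

even = ["a", "a", "b", "b", "c", "c"]    # three equally used tags
skewed = ["a", "a", "a", "a", "a", "b"]  # one dominant tag

print(tag_entropy(even))    # log2(3), about 1.585 bits
print(tag_entropy(skewed))  # about 0.65 bits
```

Tracking this value after each new bookmark would give the entropy-over-time curve described above.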

Percentage of Tag Orphans over time: Categorizers have an interest in a low ratio of Tag Orphans (tags that are used only once) in their set of tags, because many orphans would inhibit the usage of their tag set for browsing. Describers naturally produce lots of orphans when trying to find the most descriptive and complete set of tags for their resources.
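The orphan ratio is straightforward to compute; here is a minimal sketch (hypothetical function name, toy data):

```python
from collections import Counter

def orphan_ratio(tag_assignments):
    """Fraction of a user's distinct tags that are used exactly once."""
    counts = Counter(tag_assignments)
    orphans = sum(1 for count in counts.values() if count == 1)
    return orphans / len(counts)

# "a" is reused, "b" and "c" are orphans -> 2 of 3 distinct tags
print(orphan_ratio(["a", "a", "b", "c"]))  # about 0.667
```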

Tag Overlap: While a Describer would be perfectly fine assigning two or more synonymous tags to the same resource (he might not know which term he will use when searching for this resource at a later point), a Categorizer has no interest in creating two categories that contain the exact same set of resources. This would again inhibit the usage of tags for browsing, a Categorizer’s main motivation for tagging.
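One way tag overlap could be operationalized (a sketch under my own assumptions: Jaccard overlap between the resource sets of tag pairs; function name and data are hypothetical):

```python
from collections import defaultdict
from itertools import combinations

def max_tag_overlap(bookmarks):
    """Highest Jaccard overlap between the resource sets of any two tags.

    bookmarks: iterable of (resource_id, tags) pairs for one user.
    A value of 1.0 means two tags annotate exactly the same resources,
    which is redundant for a Categorizer but unremarkable for a
    Describer assigning synonyms.
    """
    resources_per_tag = defaultdict(set)
    for resource, tags in bookmarks:
        for tag in tags:
            resources_per_tag[tag].add(resource)
    best = 0.0
    for t1, t2 in combinations(resources_per_tag, 2):
        a, b = resources_per_tag[t1], resources_per_tag[t2]
        best = max(best, len(a & b) / len(a | b))
    return best

# Toy Describer-style log: two synonymous tags on the same resources
bookmarks = [("r1", ["excel", "spreadsheet"]),
             ("r2", ["excel", "spreadsheet"]),
             ("r3", ["python"])]
print(max_tag_overlap(bookmarks))  # 1.0
```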

Preliminary Investigations

I have done some preliminary investigations to explore whether these statistical properties of users’ tagging history can actually serve as indicators of tagging motivation. Here are my preliminary results:

Growth of tag vocabulary in different tagging systems

The diagram above shows the growth of the tag vocabulary of different taggers. The uppermost red line represents the tagging behavior of an almost “ideal” Describer, in this case tags produced by the ESP game, which constitute valid descriptions of the resources they are assigned to. The lowermost green line represents the tagging behavior of an almost “ideal” Categorizer: tags (in this case, a number of photo sets) produced by a Flickr user who categorized photos into a limited set of categories (> 100 sets). All other lines represent the tagging behavior of real users on different tagging platforms (BibSonomy, Delicious, Flickr tags). It is worth noting that all other data lies between these two extremes.

In the following, I will discuss the suitability of the tag entropy of single users (as opposed to the work by Chi and Mytkowicz 2008, which focuses on large sets of users) as an indicator for detecting tagging motivation:

Change of tag entropy over time

In this diagram, we can see that while our “ideal” Categorizer and our “ideal” Describer almost mark the extremes, some users “outdo” them (e.g. “u5 bibsonomy bookmarks” has even lower entropy than the tags acquired from the “ideal” Describer, the “ESP game”). Entropy thus seems to be, to some extent, a useful indicator of tagging motivation.

Next, I’ll discuss data comparing the rate of tag orphans in different datasets:

Rate of Tag Orphans over time

As in the previous diagram, the extreme behaviors provide good (but not optimal) upper and lower bounds for real tagging behavior. While the “ideal” Categorizer (Flickr sets, green line at the bottom) has a very small number of tag orphans, the “ideal” Describer (ESP game data, red line at the top) has a much higher tag orphan rate.

If we can identify the functions of extreme user motivation (“ideal” Categorizers and Describers), and position real user motivation between those extremes, we might be able to come up with scores indicative of user motivation in tagging systems – e.g. a user might be 80% Categorizer and 20% Describer. Such a model could help explore the implications of the different user motivations outlined above. Together with students (in particular Christian Körner, Hans-Peter Grahsl and Roman Kern), I am working on constructing and validating such a model, which we aim to submit to a conference this year.
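The scoring idea could be sketched as linear interpolation of a measure between the two observed extremes (a hypothetical scheme for illustration, not the model we are building; the numbers below are made up):

```python
def categorizer_score(value, describer_extreme, categorizer_extreme):
    """Map a user's measure onto a 0..1 Categorizer score by linear
    interpolation between the observed extremes, clipped to [0, 1]."""
    if categorizer_extreme == describer_extreme:
        return 0.5  # degenerate case: extremes coincide
    frac = (value - describer_extreme) / (categorizer_extreme - describer_extreme)
    return min(1.0, max(0.0, frac))

# Made-up orphan ratios: ideal Describer at 0.9, ideal Categorizer at 0.1.
# A user at 0.26 would then score as 80% Categorizer (20% Describer).
print(categorizer_score(0.26, 0.9, 0.1))  # about 0.8
```

In practice one would combine several such per-measure scores (vocabulary growth, entropy, orphan ratio, overlap) rather than rely on a single one.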

UPDATE March 15 2010: More results can be found in the following publication: M. Strohmaier, C. Koerner, R. Kern, Why do Users Tag? Detecting Users’ Motivation for Tagging in Social Tagging Systems, 4th International AAAI Conference on Weblogs and Social Media (ICWSM2010), Washington, DC, USA, May 23-26, 2010. (Download pdf)


About me

Markus Strohmaier, Full Professor of Web Science at the Faculty of Computer Science, University of Koblenz-Landau (Germany), and Scientific Director at GESIS – the Leibniz Institute for the Social Sciences (Germany).

My research focuses on the World Wide Web; my interests include social computation, agents, online production systems and crowdsourcing.