Archive for the ‘evolutionary computation’ Category

This weekend I had a chance to play with PostRank data to see what it reveals about user engagement patterns across the major social media sites. Since this might be interesting to people studying human-based computation, I decided to share my preliminary results here.

I used the PostRank metrics API to retrieve data for a set of URLs. It provides counts of individual user interactions with a single web page from all the social sites PostRank monitors. The metrics update in real time as new user activity occurs and reflect the amount of user engagement the page has accumulated so far. If you haven't used PostRank metrics before, the easiest way to start is their new Google Reader extension, which is pretty nice.

Different social media sites implement different human-based computation techniques, so their activity metrics are, in general, not comparable to each other. We can compare the same metric for different web pages, but that doesn't tell us much about the site/algorithm that computed the metric. One way to analyse this data is to look at pairwise correlations between the metrics across multiple sites. A pairwise correlation may be indicative of some interaction among the metrics. It can be an overlap in the user base (e.g. a user shares the same links to both diigo and delicious), common interests among users of different sites (users of each site share to their respective sites independently because of similar preferences), or some other factors.

I took a sample of 2169 URLs pulled from about 200 feeds in my Google Reader. Those feeds cover a pretty diverse set of topics, including science, engineering, entrepreneurship, business, management, psychology, legal, photography, music, humor, lifestyle, etc. I pulled the PostRank metrics for each of those URLs into a user engagement matrix. Each row of the matrix represents one URL, and each column holds the values of a single engagement metric (e.g. number of posts on twitter) across all 2169 URLs. I computed the Pearson correlation between every pair of columns. This resulted in the matrix visualized below:
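For readers who want to reproduce this step, here is a minimal sketch of the correlation computation; the metric names and counts below are invented stand-ins for the real 2169-row matrix.

```python
# Sketch of the analysis, assuming the metrics have already been fetched
# into a dict mapping metric name -> per-URL counts (hypothetical data).
import numpy as np

metrics = {                      # toy stand-in for the 2169-URL matrix
    "twitter_posts":   [5, 0, 12, 3, 7],
    "delicious_saves": [2, 1, 10, 2, 5],
    "diigo_saves":     [1, 1,  8, 1, 4],
}

names = list(metrics)
X = np.array([metrics[n] for n in names], dtype=float)  # one row per metric

# Pearson correlation between every pair of metrics: np.corrcoef treats
# each row of X as one variable, so C[i, j] is the correlation between
# metric i and metric j across all URLs.
C = np.corrcoef(X)
print(names)
print(np.round(C, 2))
```

The resulting matrix is symmetric with ones on the diagonal, which is what the visualization below shows.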

We can see that the Hacker News score and the Hacker News comments correlate highly with each other (a correlation of 0.9 suggests that one is nearly proportional to the other). However, very high correlations between different sites (orange spots in the matrix) are less expected. A likely reason for very high correlations is the availability of tools that allow users to export their activity on one site into another. This might be responsible for the 0.8 correlation between magnolia and delicious and the 0.6 correlation between diigo and delicious. Such import/export ability is enabled by the APIs, so we can expect the sum of correlations in each row to be indicative of the quality and usage of a site's APIs for data portability. Here are the top 10 social sites according to this metric; the top three are hardly surprising:
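The row-sum ranking itself is a one-liner once the correlation matrix is available; this sketch uses made-up correlation values, not the real ones.

```python
# Hedged sketch: rank sites by the sum of their cross-site correlations
# (the data-portability proxy described above). Values are illustrative.
import numpy as np

names = ["magnolia", "delicious", "diigo"]
C = np.array([[1.0, 0.8, 0.3],
              [0.8, 1.0, 0.6],
              [0.3, 0.6, 1.0]])

# Sum each row, excluding the self-correlation of 1.0 on the diagonal.
scores = C.sum(axis=1) - 1.0
ranking = sorted(zip(names, scores), key=lambda p: -p[1])
print(ranking)  # delicious ranks first: it correlates with both other sites
```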

Finally, in order to better visualize the relationships among these sites/metrics, I used MDS (multi-dimensional scaling), a technique often used to map multi-dimensional points onto a plane in such a way that the distances between them on the plane best approximate the distances in the original multi-dimensional space. In this case, I used 1 - correlation as the input dissimilarity for MDS. This way, sites showing similar user engagement patterns end up close to each other.
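For illustration, here is a small classical-MDS sketch in plain numpy using 1 - correlation as the dissimilarity; the correlation matrix is an invented toy example, and any off-the-shelf MDS implementation would do the same job.

```python
# Classical MDS on a 1-correlation dissimilarity matrix (toy data).
import numpy as np

names = ["delicious", "diigo", "twitter"]
C = np.array([[1.0, 0.6, 0.2],
              [0.6, 1.0, 0.1],
              [0.2, 0.1, 1.0]])
D = 1.0 - C                       # similar sites -> small distance

# Double-center the squared dissimilarities, then take the top-2
# eigenvectors scaled by the square roots of their eigenvalues.
n = len(D)
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
w, V = np.linalg.eigh(B)          # eigenvalues in ascending order
idx = np.argsort(w)[::-1][:2]
coords = V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

for name, (x, y) in zip(names, coords):
    print(f"{name}: ({x:.3f}, {y:.3f})")
```

On the resulting plane, delicious and diigo land close together while twitter sits far from both, mirroring the dissimilarities.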

One use of this map could be finding alternative sites to explore that have a like-minded community of people. For example, if you are using delicious to share your bookmarks, you might consider exploring its nearest neighbors: diigo, tumblr, magnolia, and hatena.

Unfortunately, not every social media site allows access to its user engagement data via activity streams. I hope more sites do this in the near future, so this map can become more complete. The landscape of social media sites is changing fast and many new sites appear. Some of these new sites might not be getting the attention they deserve, and this kind of data-driven social media mapping may help users find the sites that offer them the best experience.

A knowledge market is a distributed social mechanism that helps people identify and satisfy their demand for knowledge. It can facilitate locating existing knowledge resources, similarly to what search engines do (the name social search refers to this). It can also stimulate the creation of new knowledge resources to satisfy the demand (something search engines can't do). The goal of this post is to compare several free knowledge markets created by 3form, Naver, Yahoo, Mail.ru, and Google to identify their common elements and differences. All these sites organize the collaborative problem solving activity of a large number of participants, providing means and incentives to contribute the participants' intelligent abilities to the distributed problem solving process. MIT Tech Review published an attempt at a comparison of Q&A sites by Wade Rush. Unfortunately, that comparison was of low quality and too superficial to be useful (see the readers' comments). Here is my attempt at such a comparison. Its focus is on free knowledge markets, i.e. those that don't charge fees for participation and allow participants to build on top of the knowledge resources contributed by others.

Background

The Free Knowledge Exchange project was launched in summer 1998 in Russia, and its international version became available at 3form.com in February 1999. The project allows any participant to submit problems and brings those problems to the attention of other people (the 3form community) to collect hints and solutions. The credit assignment system of the website tracks the contribution of each individual participant to solving problems. It rewards the actions of the participants as well as the quality of contributed content. In exchange for contributing to solving the problems of others, the website returns to the participant a proportional share of the collective attention directed towards solving the participant's own problems. 3form uses the method known as human-based genetic algorithm (published in 2000). Naver Knowledge iN is a Korean free knowledge market service opened in 2002 by NHN Corp. The site is based on the same idea as 3form, though it implements it somewhat differently. This service made Naver the biggest internet destination in Korea and was a major factor allowing Naver to beat Google and Yahoo in the Korean search market. It took a couple of years for Yahoo and Google to learn their Korean lesson. Yahoo launched Yahoo Answers in December 2005; it is now the biggest free knowledge market worldwide. Mail.ru is a Russian knowledge market inspired by Yahoo Answers and currently the biggest such service in Russia. Google Q&A is the newest service, being tested in Russia and China by Google. Google's service is likely to be inspired by prior work, though I am not aware of Google acknowledging this.

Prior to 3form, two ways of collective problem solving were available on the internet. On one hand, there were free knowledge sharing forums such as Usenet and IRC, where users could ask technical questions and get help from volunteer experts whose participation was neither accounted for nor rewarded in any way. On the other hand, there were expert advice services designed around a fee-based Q&A model, where questions are answered for a fee by a limited number of paid experts. Experts Exchange (EE) made a step towards becoming a free service by allowing anyone to answer questions. It introduced "answer points" to identify experts among its volunteer answerers. The answer points were awarded based on user evaluation of the answers: the author of the problem could allocate the total amount of answer points among all the people who contributed useful ideas toward a solution. Despite being innovative on the expert side, the service remained fee-based on the user side (even though users were getting some credit in "question points" independent of their contribution, allowing them to ask a limited number of questions). In other words, while the question points had monetary value, answer points had no such value (in particular, they couldn't be counted towards question points). Once the limited amount of question points was exhausted, users had to start buying question points to continue using the system.

Korean Naver played a key role in popularizing the concept of a knowledge market. Naver, however, was not the first knowledge market in Korea. DBDiC offered an analogous service as early as October 2000. DBDiC presumably developed its technique independently from 3form, but shares a similar architecture, including the general structure and credit assignment system. There are two key differences between the DBDiC technique and that of 3form: (1) the identity of the author of a solution biases the evaluation of the solution, i.e. the high status of an author can lead to accepting an inferior answer as the best despite the presence of a better answer contributed by a person with lower status; (2) answers positioned earlier in the list are more likely to be read, chosen, and evaluated, i.e. a great answer later in the list can easily be overlooked. These differences resulted in subjective and positional biases in the solution evaluation system (see also my previous post, Bugs of collective intelligence: why the best ideas aren't selected?). I could speculate that if the DBDiC designers had been more familiar with the 3form service, they could have avoided those undesirable biases, which later propagated into every subsequently created knowledge market platform.

Free knowledge markets are now abundant, with numerous implementations. Wade Rush lists six recently created services. A ReadWriteWeb post lists 29 knowledge market services. Neither of these lists is complete, and it may not be feasible to create a complete list, as new similar services appear almost every day. However, most new services are similar to one of those reviewed here and are likely to be inspired by them.

Incentive systems

Knowledge markets differ from knowledge sharing websites by implementing knowledge evaluation and incentive systems that encourage participants to help each other. The incentive system of a typical free knowledge market is based on rewarding the actions of its participants as well as rewarding the quality of their contributions. The measure of quality is normally based on user evaluation. An alternative would be computational evaluation, e.g. one based on the frequency of occurrence of different answers collected independently (see Luis von Ahn's work exploring this model in specialized applications like image labeling). In a typical knowledge market, frequency counting is problematic due to the much bigger search space.
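To make the contrast concrete, here is a toy sketch of frequency-based computational evaluation in the image-labeling style; the labels are invented. It works because independent contributors converge on the same short labels, which is exactly what fails for open-ended answers with a huge search space.

```python
# Frequency-based evaluation: independently collected answers are
# normalized and counted, and agreement serves as the quality signal.
from collections import Counter

answers = ["Dog", "dog ", "puppy", "dog", "cat"]  # toy image labels
counts = Counter(a.strip().lower() for a in answers)
best, freq = counts.most_common(1)[0]
print(best, freq)  # -> dog 3
```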

The action rewards encourage particular actions, reinforcing participant behavior that is beneficial for the system as a whole; a rewarded action can be as simple as visiting the website. The following table summarizes the action rewards offered by different knowledge markets:

Action                              | 3form | Naver | Yahoo | Mail.ru  | Google
------------------------------------|-------|-------|-------|----------|-------
Join                                | 1     | 100   | 100   | 100      |
Visit                               | 3     | 1     | 1     | 5        |
Submit question anonymously         | 0     | -20   | N/A   | N/A      | N/A
Submit question pseudonymously      | N/A   | 1     | -5    | -5       | -B
Submit expert question              | N/A   | -50*E | N/A   | N/A      | N/A
Submit answer                       | 0.01  | 2     | 2     | f(K)     | 2
Select best answer to your question |       |       | 3     | 3        |
Evaluate answer                     |       | 1     | 1     | [S>=250] | 1
Evaluate question                   | 1     | 1     |       |          |

In this table, S refers to the current score of the participant. For example, Mail.ru will not reward new participants for evaluations until they reach a score of 250 points. This seems to be an effective way to guard the system from abuse. With this system in place, it becomes hard for someone to manipulate the values of the answers on Mail.ru by creating multiple accounts and submitting votes from them. It is no longer enough to create multiple accounts; it is also necessary to earn 250 points of credit for each, which protects the system from bots better than any CAPTCHA would. Mail.ru also rewards non-peer-reviewed answers differently, depending on the prior performance of their author. For this purpose, the Mail.ru designers introduced the "Energy Conversion Coefficient", which is simply the share of best answers in the total number of answers the person has contributed. This seems to be a good incentive to provide quality answers, and so far this is a unique feature of Mail.ru Answers. B stands for bonus, a number from 1 to 100 set by the author of a question. The bonus can reflect the difficulty or importance of a problem for its author, and supposedly a high bonus will motivate people to pay more attention to the question. E is the number of experts in a Naver expert question; it can be 1 or 2 (not available in other services).
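To illustrate how these two mechanisms compose, here is a toy sketch; this is my own reading of the description above, not Mail.ru's actual code, and the way the coefficient scales the answer reward is an assumption.

```python
# Toy model of two Mail.ru-style rules: the 250-point threshold before
# evaluations earn points, and an "energy conversion coefficient" (share
# of best answers) scaling the answer reward. Illustrative only.

def evaluation_reward(score):
    """Evaluations earn 1 point only once the evaluator has 250 points."""
    return 1 if score >= 250 else 0

def answer_reward(base, best_answers, total_answers):
    """Scale the base reward by the author's share of best answers."""
    ecc = best_answers / total_answers if total_answers else 0.0
    return base * ecc

print(evaluation_reward(100))   # 0: new accounts can't farm votes
print(evaluation_reward(300))   # 1
print(answer_reward(2, 5, 10))  # 1.0: half of past answers were best
```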

The quality evaluation rewards are summarized below:

Evaluation      | 3form    | Naver | Yahoo | Mail.ru | Google
----------------|----------|-------|-------|---------|-------
Question reward | 0.02*R   | 1     |       |         |
Answer reward   | R*log(E) |       |       |         |
Best answer     | 0.03*A   | 10    | 10    | 10      | B

Features

Common features comparison:

Feature                                   | 3form         | Naver           | Yahoo         | Mail.ru | Google
------------------------------------------|---------------|-----------------|---------------|---------|-------
Question bookmarking                      | Y             | Y               | Y             | Y       | Y
Question evaluation                       | Y             | Y               | N             | N       |
Answering your own question               | Y             | N               | N             | Y       | Y
Submitting multiple answers               | Y             | N               | N             | N       | Y
Search questions asked by others          | N             | Y               | Y             | Y       | Y
Search returns how many answers?          | N/A           | 1               | All           | All     | All
How many answers can be selected as best? | 1             | 2               | 1             | 1       | 1
Question open (days)                      | until removed | 2-15, default 5 | 4 answ/1 vote | 5       | 5
Innovation/Selection                      | conc          | seq             | seq           | seq     | seq
Social networking                         | N             | Y               | Y             | Y       | Y

Yahoo Answers explicitly forbids you to answer your own question: "You can't answer your own question." It also forbids submitting another answer if one is already submitted. 3form and Google allow both. In fact, people can use the Google system as a discussion forum and post comments and additions to previous answers (this requires keeping answers in the order they were received, i.e. it creates a temporal selection bias towards earlier answers).

Unique features

3form doesn't have the subjective and temporal evaluation biases present in other systems. This improves the chances that the best answers will be selected. The amount of attention each problem receives is proportional to the amount of attention its author (and other people interested in this problem) paid to solving the problems of others.

Naver's registration involves cell phone verification. If you want to register, you have to provide your cell phone number to Naver and then input a verification code sent to your phone. This makes sockpuppetry (the practice of establishing several accounts to influence the system) much harder than the simple email verification used in other services. Owners of several cell phone numbers can still have multiple accounts. Another specific thing about Naver is that most of the questions ask for information rather than knowledge. Maybe this partly explains why our question about the first online social network was too unusual for Naver, so it didn't receive any answers.

Mail.ru uses cell phone messaging to auction off a limited number of featured questions. Users compete for the limited space by sending multiple text messages to the Mail.ru number from their cell phones (and paying the messaging fees). Questions from the people who sent the largest number of messages are featured on the front page. In addition to this, Mail.ru has a button, "Send thanks to the answerer." If this button is pressed, a message pops up suggesting that you send a text message from your cell phone to the Mail.ru number; each thank-you costs $1 at Mail.ru. Mail.ru allows users to search questions; however, this search couldn't find anything when I entered keywords from my question. I assume it takes time for new questions to become searchable.

Google has an extended question exposure statistic: the number of times the question was shown.

A small empirical test

The purpose of this test was not to determine "the best" Q&A site, as in the MIT Tech Review comparison. My purpose is to give some information on what kind of results you can expect from using free knowledge market services. I needed this test mainly to see how the websites work, and I plan to do more extensive testing later.

A recent article at inc.com, How to kill a great idea, claims right from the beginning that "Jonathan Abrams created the first online social network". Wikipedia suggests that at least Classmates.com and SixDegrees.com were created earlier than Friendster, and both definitely fall into the online social network category. This suggested that the question is non-trivial and might be a good one to post to the free knowledge market sites for the purpose of a small empirical test. I was interested in whether the participants of these sites would be able to suggest a site earlier than Classmates.com or explain why Classmates.com shouldn't be counted as a social network site. This question also asks for a problem reformulation ("What are the features of an online social network, in the first place?") and requires some research. At the same time, it is possible to verify the answers by tracing their references. So here is the question I posted to the 3form, Naver, Yahoo, Mail.ru, and Google services:

What was the first online social network?

I am interested to know what was the name of the first online social network, who implemented it, and when. Thank you for your answers!

3form Free Knowledge Exchange:

Wikipedia suggests that it was Classmates.com (1995). Maybe email (with address books) can be thought of as the first online social network. Email has existed since 1972, though I am not sure when the first email address books were implemented.

A social network is a social structure made of connected individuals. This definition suggests that the Internet itself is the first online social network.

“social networking on the PC based internet preceed the internet on PCs itself (having existed on “walled garden” BBS systems of that time such as Compuserve, AOL, Genie and Prodigy before they connected to the mostly then university and government used internet) being roughly 16-17 years of age as a mass market proposition (obviously early social networking existed on mainframes” link

Naver Knowledge-iN:

Yahoo Answers:

I believe it was AOL. Ok, probably not, but that was the first commercially available one.

Of course it depends on how you define the term “social networking site”.

The worldwide distributed discussion system known as usenet (but whose proper name is netnews) was developed in 1979 by Steve Bellovin, Tom Truscott, and Jim Ellis. I personally believe netnews was the first social networking site; it was certainly the first that relied on the Internet, although it also used a transport called UUCP (unix-to-unix copy).

Mail.ru Answers (translated from Russian):

IMHO it is LiveJournal

Google.ru Answers (translated from Russian):

Cites a Russian article, "The first social network of the Internet": "Who was the first? Different sources mention different social networks, but the historical record gives an unambiguous answer: the first social network appeared on the internet in 1995. The website of the social network Classmates.com was opened to users in 1995 by Randy Conrad, the owner of Classmates Online, Inc. The website helped registered users find and keep connections with friends and contacts with whom they had relationships throughout their life: preschool, school, college, work, and military service. Now Classmates.com has more than 40 million registered users from the US and Canada."

In addition to the previous answer. … The term "social network" was introduced by sociologist James Barnes from the Manchester School in his work "Classes and …" (This long answer lists the major social networks in the US and Russia, citing a journal article, "Social Networks in the Internet", from ComputerPress.)

(The third answer lists networks currently popular in Italy, Latvia, and Estonia and seems irrelevant to the question.)

Yahoo was the fastest to provide answers. Google was also very fast and provided the largest number of answers in the first day (3). Two answers from Yahoo and two answers from Google were received less than 15 minutes after the question was posted. The remaining answers came within one day, with no answers in subsequent days, despite the fact that a question stays open for answering for 4 days in Yahoo Answers and 5 days in Google Answers, and has no time limit in 3form (the question is kept as long as at least one of the participants is interested in keeping it). I can speculate that Yahoo and Google use the recency of a question as a criterion to determine its salience/exposure; this would direct most of the answerers' attention towards recent questions and get new and easy questions answered faster, at the expense of older and more difficult questions. If a problem is not solved within one day, my experience is that it is unlikely to be solved in the following days in the Yahoo/Google services unless it is reposted. In this situation, multiple reposts will be needed to answer a difficult question, and on each repost the problem solving process starts from scratch without benefitting from the earlier solutions (you might post a link to the old thread in the question to compensate for this). These services seem to be a good way to answer simple questions quickly, but they seem unsuitable for solving problems that are somewhat more difficult. 3form, by contrast, seems better suited to more difficult problems that are unlikely to be answered in one day. Of course, more experimentation and research is necessary to arrive at reliable generalizations. This post is just a first step in that direction.

Conclusions

As suggested by their name, "Answers" services are most appropriate for questions that are easy to answer, especially when the answer is needed instantly. They should be your second choice after searching Wikipedia or the web. If a question is not answered within one day, it is unlikely to be answered at all; a good idea is to post it again (maybe at a different site). Most of the answers at these sites arrive within minutes of posting. If your problem requires some time, research, and/or creativity, you might have better chances at 3form, which gives people more time to find solutions. 3form is also preferable when (1) you have an ill-defined problem that needs reformulation and/or assumptions, (2) many other people might be interested in the same problem, or (3) you lack the expertise to select the best solution out of many (the other services have strong biases in solution evaluation that often prevent them from selecting the best solutions). The Korean knowledge markets, represented here by Naver Knowledge iN, offer the richest set of features. The Russian Mail.ru has the most intricate incentive system. In our small test they were not particularly helpful, but their distinctive features seem quite useful.

In summary, if you need to find certain knowledge in English, I would recommend the following sequence of steps: (1) Wikipedia search, (2) Google web search, (3) Yahoo Answers, (4) 3form. Each following step requires significantly more time than the previous one. This might change if Google makes its new Q&A service available in English.

Acknowledgments: This text benefitted greatly from the help of Hwanjo Yu and Sang-Chul Lee in collecting information on Korean knowledge markets that is not available in English.

Google’s free knowledge market service initially was only available in Russia (see my short review of it in a post Google Answers is reborn in Russia). Now China has this service as well. Haochi Chen from Googlified has more details on this. I am going to post a detailed comparison of five knowledge markets soon, including ones of Naver, Yahoo, and Google.

Google has been actively exploring human-based computation (HBC) recently. HBC is a class of hybrid techniques where a computational process performs its function by outsourcing certain steps to a large number of human participants. HBC is a ten-year-old concept that has become pervasive on the Internet, but it is still perceived by many as new or even revolutionary. Academic research in HBC is still in its initial stages, despite many internet projects and companies exploring these techniques widely. While HBC was developed in the context of evolutionary computation and Artificial Intelligence, it is often perceived as conflicting with the goal and the very term of AI, as HBC often exploits natural intelligence (both the creativity and the judgment of humans) in the loop of a computational learning algorithm. The goal of AI is most often understood as creating a machine intelligence that is competitive with that of humans. From the HBC perspective, artificial and natural intelligence don't have to be competitors; instead, they work best together in symbiosis. HBC is also somewhat outside the traditional focus of Human-Computer Interaction (HCI) research, even though it is perfectly compatible with the literal meaning of the HCI term. It reverses the traditional assignment of roles between computer and human: normally a person asks a computer to perform a certain task and receives the result, while in HBC it is often the other way around. As a result, some traditional concepts and terminology used in the AI and HCI fields may create difficulties when thinking about HBC.

Probably for the reason described above, Google was somewhat late to explore this field. It preferred pure AI and data mining techniques to hybrid human-computer intelligence. Right from its inception, Google used human judgment expressed in the link structure of the web as input data for its algorithms. That is, however, different from outsourcing algorithmic functions to humans, which is the main feature of HBC. Matt Cutts, a search quality engineer at Google, said: "People think of Google as pure algorithms, we've recently begun trying to communicate the fact that we're not averse to using some manual intervention. … Google does reserve the right to use humans in a scalable way" (read the full Infoworld article here). Google introduced voting buttons into its toolbar to collect user evaluations of web pages and help remove spam from the search results. However, Google wasn't fully exploring the potential of HBC until very recently. This is changing quickly now, as Google begins to understand the potential of the technique and is willing to test various ways to let humans not only evaluate but also contribute and modify existing content. This kind of testing mostly happens outside of the US. A possible reason may be that Google perceives these as high-risk projects: the experimental features Google offers in the US seem much more conservative.

In my last post, I described the Google Questions and Answers service being tested by Google Russia (I am going to review it in more detail in one of my next posts and compare it more systematically with other similar services). More recently, Google UK has been testing HBC as a way to improve the ranking and coverage of its search results. Mike Grehan noticed that Google UK now allows some users to add URLs to a set of relevant search results: "Know of a better page for digital cameras? Suggest one!" (my thanks for finding this post go to Haochi Chen from Googlified). Members of 3form will find this new Google interface very familiar, as it is nearly the same interface that 3form has used to evolve solutions to problems for about ten years now, except that Google doesn't provide an easy way to choose the most relevant option among those already displayed (submitting one of the already displayed URLs into the suggestion box will probably work, though it is not as convenient as selecting one).

Several bloggers referred to the new feature as the beginnings of Google's social tagging/bookmarking, a response to recent projects attempting to build open-source social search engines that allow people to edit search results, like Wikia Search and Mahalo. Google's new feature can indeed turn Google search into a social tagging/bookmarking tool, as the query is essentially a set of tags for the contributed URL (if any), so the information Google receives through this service is essentially the same as what users contribute to del.icio.us or any similar service.

Today I came across the HBS working paper "The Value of Openness in Scientific Problem Solving" by Karim Lakhani, Lars Jeppesen, Peter Lohse and Jill Panetta (link to the 58-page PDF is here). The paper studies InnoCentive, a knowledge market similar to 3form that corporations use to solve research problems left unsolved by their corporate R&D labs.

InnoCentive was founded by Eli Lilly & Company in 2001 and shares a significant similarity with 3form in how it organizes the distributed problem solving process, except that it does not broadcast the solutions it receives, keeping them private to the corporation that posted the respective problem. As a result, the innovation process at InnoCentive, while distributed, is not open: the solvers can't modify or recombine the solutions proposed earlier or learn from them, as they do at 3form. However, the working paper shows that sharing the problems by itself has many advantages over the traditional corporate practice of keeping them closed.

We show that disclosure of problem information to a large group of outside solvers is an effective means of solving scientific problems. The approach solved one-third of a sample of problems that large and well-known R & D-intensive firms had been unsuccessful in solving internally.

There are many interesting observations in this paper that might be relevant to 3form as well and are likely to be interesting to the members of 3form community.

Problem-solving success was found to be associated with the ability to attract specialized solvers with range of diverse scientific interests. Furthermore, successful solvers solved problems at the boundary or outside of their fields of expertise, indicating a transfer of knowledge from one field to others.

Here are the results I found the most interesting:

the diversity of interests across solvers correlated positively with solvability, however, the diversity of interests per solver had a negative correlation

the further the problem was from the solvers’ field of expertise, the more likely they were to solve it; there was a 10% increase in the probability of being a winner if the problem was completely outside their field of expertise

the number of submissions is not a significant factor of solvability

very few solvers are repeated winners

The authors of the HBS paper draw an analogy to local and global search to explain the effectiveness of problem broadcasting. They suggest that each solver performs a local search, implying that broadcasting the problem to outsiders makes the search global ("broadcast search" in the authors' terminology). Indeed, if solvers don't have access to the solutions of other solvers (the case at InnoCentive), all they can do is a local search (hillclimbing). From the computational perspective, the InnoCentive problem solving process is analogous to hillclimbing with random restarts: each new solver performs a local search and returns a locally optimal solution; finally, the best of those locally optimal solutions determines the winner.
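The analogy can be made concrete with a toy simulation: each "solver" runs an independent local search from its own starting point (solvers can't see each other's solutions), and the best local optimum across solvers wins. The landscape and starting points below are, of course, invented.

```python
# Toy rendering of "hillclimbing with random restart" as broadcast search.

def hillclimb(f, x):
    """Steepest-ascent local search over integer neighbors x-1 and x+1."""
    while True:
        step = max((x - 1, x + 1), key=f)
        if f(step) <= f(x):
            return x          # local optimum reached
        x = step

def broadcast_search(f, starts):
    """Each starting point plays the role of one independent solver."""
    local_optima = [hillclimb(f, s) for s in starts]
    return max(local_optima, key=f)

# Rugged landscape: local peaks wherever x % 7 == 1, global peaks at 43/57.
f = lambda x: -abs(x - 50) + 10 * (x % 7 == 1)
best = broadcast_search(f, starts=[3, 17, 28, 44, 61, 90])
print(best, f(best))  # -> 43 3
```

A single solver usually gets trapped on a nearby local peak; pooling the independent local optima is what finds the global one.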

I made a curious observation today: the psychological concept of personality may be useful in characterizing social websites. For example, a website can be introvertive or extravertive. As in psychology, these are not absolute categories, but rather an indication of a bias toward one end or the other.

An introvertive social website draws the attention of its users towards its local content, while an extravertive social website draws the attention of its users outward, towards content present on other sites of the web. Two examples to illustrate these are 3form and StumbleUpon, respectively. Both implement essentially the same technique, human-based evolutionary computation. This technique allows people to contribute items to a database, draw random samples from the population of items, and evaluate the sampled items. The software computes a fitness function from those evaluations and uses it in later sampling. However, 3form and SU use this technique in remarkably different ways. 3form samples content from its own database and provides an easy way to socially bookmark/evaluate/comment on it. However, it is less easy to bookmark or comment on an external resource: you have to cut and paste its link into a web form, and not many people bother to do it. This makes the 3form community rather introspective, focused on content found locally rather than resources found elsewhere. StumbleUpon, by contrast, samples from a database containing primarily external resources found elsewhere. It naturally directs user attention outward, to the web beyond SU. SU makes it very easy to bookmark and evaluate any external resource with a single click. The same is not true, however, for the local resources found at SU's own site. When I started using SU, I initially thought that, unlike most blogs, SU blogs don't support commenting. Then I found that it is possible to comment on a post, but it is not as easy or intuitive as commenting on external resources. You first need to find the permalink to the post you want to comment on (shown as the date of the post), click on it to open the post in a single window, and then you can use the normal SU buttons to evaluate and comment on it. Not many people take the effort to go this way, so most posts on SU blogs remain without comments.
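The shared technique can be sketched in a few lines; the item names, ratings, and smoothing rule below are all illustrative, not the actual 3form or SU algorithms.

```python
# Minimal loop of human-based evolutionary computation: items accumulate
# user ratings, a fitness value is derived from them, and later samples
# are biased toward fitter items.
import random

random.seed(1)
items = {"item_a": [], "item_b": [], "item_c": []}  # item -> list of ratings

def fitness(ratings, prior=1.0):
    # Smoothed mean rating, so unrated items still get sampled sometimes.
    return (prior + sum(ratings)) / (1 + len(ratings))

def sample(items, k=2):
    # Fitness-proportional sampling: humans then rate what they are shown,
    # closing the loop.
    names = list(items)
    weights = [fitness(items[n]) for n in names]
    return random.choices(names, weights=weights, k=k)

items["item_a"] += [1, 1, 1]   # users liked item_a
items["item_b"] += [0, 0]      # users disliked item_b
print(sample(items, k=5))      # item_a is drawn most often on average
```

Whether the item database holds local contributions (3form) or external URLs (SU) is exactly the introvertive/extravertive distinction made above.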

It might be a pure coincidence, but it is nevertheless interesting that the personality of a website in this case reflects the personality of its architect. My MBTI profile is INTJ (introvertive), while StumbleUpon’s chief architect and CEO Garrett Camp is an ENTP (extravertive).

What about other websites?

Wikipedia has always been mildly introvertive. It has always been easier to link to an internal page than to create an external link, and Wikipedia culture discourages the creation of external links. Recently, Wikipedia has become more clearly introvertive by making you solve a CAPTCHA when you try to contribute a link to an external resource or even fix a broken link. This will undoubtedly decrease the number of external references in Wikipedia.

Del.icio.us and most social bookmarking tools are extravertive: their primary purpose is to direct attention to other sites. I am quite curious whether their creators are extraverts as well.

Digg seems to be pretty balanced in this respect. Its many CAPTCHAs demand high effort from any user, but commenting on an internal post and submitting a new story with an external reference involve about the same amount of effort.

Jimmy Wales, the founder of Wikipedia, suggests in his recent talks that Wikipedia is not a technological innovation, but a purely social one:

When Wikipedia was started in 2001, all of its technology and software elements had been around since 1995. Its innovation was entirely social - free licensing of content, neutral point of view, and total openness to participants, especially new ones. The core engine of Wikipedia, as a result, is “a community of thoughtful users, a few hundred volunteers who know each other and work to guarantee the quality and integrity of the work.”

In his view, Wikipedia is not an emergent phenomenon of the wisdom of crowds, where thousands of independent individuals each contribute a bit of their knowledge, but rather a relatively well connected small community, much like any traditional organization, e.g. the one that created Encyclopedia Britannica. Even taking into account that he is the founder of Wikipedia, I am quite skeptical about this explanation. In my opinion, it is insufficient to explain the phenomenon of Wikipedia, and it also disagrees with my own experience as a Wikipedia contributor. I started contributing in 2003 and registered in 2004, yet I don’t know other Wikipedians personally and have rarely thought of Wikipedia as a social network, even though it definitely can support one. Reading Aaron Swartz’s post Who Writes Wikipedia made me even more skeptical.

I know it is quite natural for entrepreneurs to focus on organizational aspects, because that is what they deal with most of the time, just as it is common for technologists to focus mainly on technology. I am not arguing that Jimmy Wales’s point of view is wrong, but I am suggesting that it might be incomplete. I believe we don’t need to choose between the emergent-phenomenon and the core-community points of view. They are not mutually exclusive, so Wikipedia can be (and, in my opinion, is) an example of both.

Jimmy suggests that the Wikipedia technology and software had been around since 1995. I didn’t find any support for this. If the technology was there in 1995, why did it take so long for large wiki-based collaborative projects to appear? I did some quick research into the history of wiki technology, and it suggests that Wikipedia had no chance to succeed using the technology that existed in 1995. The elements that enabled large participatory organizations like Wikipedia were added to wiki software six years later, at approximately the same time the Wikipedia project was launched.

Early wikis lacked two important features: revision history and support for concurrent editing. Both are crucial to the success of any mass collaboration project built on a wiki.

I first discovered wiki quite late, in the summer of 2002. I quickly grasped the potential of this simple and brilliant collaboration tool by Ward Cunningham: a site with web pages that anyone can edit with very low effort. I saw it as a web extension of CVS, the revision control system that allows programmers to collaborate on the same codebase concurrently.

However, as I started to explore the potential of wiki, I found that the implementation I was using had a serious limitation. Everyone could edit a page, unless it was currently being edited by someone else. If I wanted to edit a page someone else was editing at that moment, a warning message appeared saying the page was locked. The lock was advisory, meaning I could still go ahead and edit, disregarding the message; in that case, however, either my work or other people’s would be lost. Waiting for the lock to be released quickly becomes annoying as more people start collaborating. My conclusion then was that the twiki software wasn’t ready to support collaboration among large groups of people. I searched for an implementation without this limitation but didn’t find one at the time. I even wrote a note in my TODO list to write wiki software that used CVS instead of RCS, so that it could support concurrent editing (RCS and CVS are both revision control systems, but CVS is newer and allows lock-less concurrent editing). Later, however, I found software that did provide means of concurrent editing: MediaWiki, the first wiki I saw that really could support mass collaboration.
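The difference between advisory locking and CVS-style lock-less editing comes down to when conflicts are handled. A toy sketch of the lock-less ("optimistic") approach, not actual wiki code: instead of locking the page, the software records which revision an edit was based on and rejects a save only when someone else has saved in the meantime.

```python
class Page:
    """Toy model of lock-less concurrent editing: a save succeeds
    only if it was based on the latest revision, so two editors
    never silently overwrite each other's work."""

    def __init__(self, text=""):
        self.revisions = [text]

    def current(self):
        # Editors fetch the text together with its revision number.
        return len(self.revisions) - 1, self.revisions[-1]

    def save(self, base_rev, new_text):
        # Reject the save if someone saved after base_rev; the editor
        # must then merge and retry, as in CVS.
        if base_rev != len(self.revisions) - 1:
            return False  # edit conflict detected, nothing is lost
        self.revisions.append(new_text)
        return True

page = Page("original")
rev_a, _ = page.current()
rev_b, _ = page.current()
print(page.save(rev_a, "Alice's version"))  # True: based on latest
print(page.save(rev_b, "Bob's version"))    # False: conflict flagged
```

With an advisory lock, Bob's save would have gone through and destroyed Alice's work; here it is rejected and Bob is asked to reconcile, which is what makes unsupervised mass collaboration workable.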

Another feature crucial to the success of Wikipedia is revision history, which provides a mechanism for reverting unhelpful changes. It was not present in the original wikis; in fact, according to Landmark Changes to the Wiki, it was added in 2002. Prior to this, another mechanism (Edit Copy) was used, providing a single editable backup copy of every page. Edit Copy was clearly insufficient to protect content from vandalism, as it is too easy for vandals to edit both the working and the backup copy of a page. However, according to the Internet Archive, Wikipedia already had revision history on August 8, 2001 (see View other revisions). At that time Wikipedia used the UseModWiki software written by Clifford Adams. Again according to the archive, UseModWiki got its revision history somewhere between December 9, 2000 and February 1, 2001, which nearly coincides with the launch of the Wikipedia project (January 15, 2001).
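The reason full revision history defeats vandalism where a single Edit Copy does not can be shown in a tiny sketch (a toy illustration; real wiki software stores far more metadata per revision):

```python
class History:
    """Toy revision history: every edit is appended, never destroyed,
    so undoing vandalism is just one more append."""

    def __init__(self, text):
        self.revisions = [text]

    def edit(self, new_text):
        self.revisions.append(new_text)

    def revert_to(self, rev):
        # Reverting deletes nothing: it re-appends an old revision,
        # so even the vandalism remains on record.
        self.edit(self.revisions[rev])

h = History("good article")
h.edit("VANDALIZED")
h.revert_to(0)
print(h.revisions[-1])  # "good article", with all three revisions kept
```

Under Edit Copy, a vandal who overwrote both the page and its one backup destroyed the content for good; with an append-only history, any past revision stays one click away.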

Jimmy Wales might be right in suggesting that Wikipedia was a social rather than a technological innovation, but the technology he refers to was not there in 1995. The features that made Wikipedia possible were added to UseModWiki at approximately the same time Wikipedia was launched and began to use it. That might be a lucky coincidence for Wikipedia, or those might have been new UseModWiki features requested by the founders of Wikipedia. Maybe some of them can comment on this post.

[Google] is starting to ask job applicants to fill out an elaborate online survey that explores their attitudes, behavior, personality and biographical details going back to high school.

The questions range from the age when applicants first got excited about computers to whether they have ever tutored or ever established a nonprofit organization.

The answers are fed into a series of formulas created by Google’s mathematicians that calculate a score, from zero to 100, meant to predict how well a person will fit into its chaotic and competitive culture.

I didn’t apply for a Google job, but I had some experience with their selection methods last summer. Google takes a proactive approach to hiring; in particular, they actively contact new Ph.D.s and invite them to phone interviews. Google recruiters found my resume on the web, and I agreed to participate in three phone interviews, each about 30 minutes long. There were sessions of multiple-choice questions and a problem-solving session in which I was asked to program an algorithmic solution on a piece of paper and dictate the result back to the interviewer. I found that recruiting techniques were not one of Google’s strong areas, and the approaches were far from innovative. I was puzzled that a company like Google couldn’t create a simple web application to administer those multiple-choice questions, or outsource the whole thing to a company that does it better (e.g. Brainbench). Now I see that Google is beginning to entertain the same thoughts, so maybe something will change:

“As we get bigger, we find it harder and harder to find enough people,” said Laszlo Bock, Google’s vice president for people operations. “With traditional hiring methods, we were worried we will overlook some of the best candidates.”

Last month, Haochi Chen and Christian Binderskagnaes (googlified.com) discovered Google Online Assessments, which might be a new Google tool to assess people’s skills: “The purpose of this website is still something of a secret, but it’s going to be great, whatever it is.”

We will see how great the new Google algorithmic approach to skill assessment turns out to be. It will certainly be more efficient, saving employees’ time and phone bills. But can it also be more effective? I don’t know the answer to this question. Multiple-choice questions still have a fundamental limitation: they don’t allow participants to manifest their creativity, because they leave no room for a creative solution. They test only the ability to judge.

You are creating a society within a society where you weed out undesirables using a simple algorithm. The problem is … whether creativity and innovation can rise out of homogeneity, even the type of homogeneity that Google is practicing.

Human innovation has evolutionary dynamics: cycles of change and selection. My research suggests that innovation and creativity are manifestations of an underlying evolutionary process, in which diversity is crucial, being one of the main prerequisites for evolution. This is also supported by experimental results suggesting that diverse teams of ordinary individuals outperform homogeneous teams of elite individuals (Hong, 2004). So, from an evolutionary point of view, the loss of diversity is quite dangerous. Google shares this problem with many top universities.
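To make the loss-of-diversity concern concrete, one simple way to quantify diversity in an evolutionary setting is the mean pairwise Hamming distance across a population of candidate solutions. This is a generic sketch, not drawn from any specific study:

```python
from itertools import combinations

def diversity(population):
    """Mean pairwise Hamming distance between candidate solutions
    encoded as bit strings; 0.0 means a fully homogeneous population
    with nothing for selection to work on."""
    pairs = list(combinations(population, 2))
    if not pairs:
        return 0.0
    total = sum(sum(a != b for a, b in zip(x, y)) for x, y in pairs)
    return total / len(pairs)

homogeneous = ["1100", "1100", "1100"]
diverse = ["1100", "0011", "1010"]
print(diversity(homogeneous))  # 0.0
print(diversity(diverse))      # 8/3, about 2.67
```

A hiring filter that drives this number toward zero leaves the "population" with no variation to select among, which is the evolutionary danger described above.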

From another point of view (my research on social synthesis), diversity is just one way to increase the chances of achieving the complementarity of resources needed for synergetic exchange. For example, if the backgrounds of two people are too similar, they may have few misunderstandings, but they don’t have much chance to benefit from mutual learning. On the other hand, if their interests are complementary, they have a great opportunity to learn from each other, provided they can overcome their misunderstandings.

See also my previous post suggesting another approach to employee selection, one that makes it possible to identify creative solutions and creative people.

A panel on social search at SES Chicago yesterday tried to define social search more precisely. Chris Sherman suggested the following definition: social search consists of “wayfinding tools informed by human judgment.” Further discussion of this definition can be found here and here.

As an evolutionary computation researcher, I see a striking resemblance between this new definition and the definition of an interactive genetic algorithm. An interactive genetic algorithm (IGA) is defined as a genetic algorithm informed by human judgement; a genetic algorithm itself is a search procedure inspired by the Darwinian model of evolution. From here you can see the tight connection between social search as defined above and IGA. The similarities don’t end there, however: the situation with defining social search mirrors the earlier one with defining IGA. The definition of IGA was too narrow to encompass other kinds of interaction in addition to human judgement. One important part that was missing is the use of human creativity, in addition to human judgement or even without it. The new class of algorithms was therefore called human-based genetic algorithms (HBGA), even though they are even more heavily interactive than IGAs. Had the definition of IGA not limited human input to judgement, there would have been no need to create a new term.
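A minimal IGA can be sketched as an ordinary genetic algorithm whose fitness function is replaced by a human-judgement callback. In this sketch the judge is stubbed out with an automatic scorer so it runs unattended; in a real IGA, `judge` would ask a person to rate each candidate. All names and parameters here are illustrative:

```python
import random

def iga(judge, length=8, pop_size=6, generations=20):
    """Interactive genetic algorithm: a plain GA whose fitness
    function `judge` stands in for a human rating each candidate."""
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection by (human) judgement: keep the top-rated half.
        parents = sorted(pop, key=judge, reverse=True)[: pop_size // 2]
        pop = parents[:]
        while len(pop) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]        # one-point crossover
            if random.random() < 0.1:        # occasional mutation
                i = random.randrange(length)
                child[i] = 1 - child[i]
            pop.append(child)
    return max(pop, key=judge)

# Stand-in for a human judge: prefers bit strings with more ones.
best = iga(judge=sum)
print(best)
```

The only difference from a conventional GA is where the fitness numbers come from; an HBGA goes further by also letting humans perform the creative steps (contributing and recombining candidates), not just the judging.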

Similar things are now happening with social search: some of the examples discussed are not well captured by the proposed definition. For example, Yahoo Answers already uses both human judgement and human creativity and is a typical HBGA. This is why I would like to propose a different definition of social search:

Social search is a search algorithm where some functions are outsourced to humans.

This definition positions social search in the abstraction hierarchy between human-based computation and human-based evolutionary computation. On one hand, human-based computation (“algorithms outsourcing some functions to humans”) may be used for purposes other than search. On the other hand, there may be ways of doing social search other than those using evolutionary models.