3D Virtual Worlds and the Metaverse – Current Status and Future Possibilities; Dionisio, Burns, Gilbert: “In the past three decades considerable progress has been made in moving from text-based multi-user virtual environments to the technical implementation of advanced virtual worlds that previously existed only in the literary imagination. …[P]rogressive capabilities enable them to serve as elaborate contexts for work, socialization, creativity, and play and to increasingly operate more like digital cultures than as games. Virtual world development now faces a major new challenge: how to move from a set of sophisticated, but completely independent, immersive environments to a massive, integrated network of 3D virtual worlds or Metaverse and establish a parallel context for human interaction and culture. …[C]entral elements of a fully-realized Metaverse: realism (enabling users to feel fully immersed in an alternative realm), ubiquity (establishing access to the system via all existing digital devices and maintaining the user’s virtual identity throughout all transitions within the system), interoperability (allowing 3D objects to be created and moved anywhere and users to have seamless, uninterrupted movement throughout the system) and scalability (permitting concurrent, efficient use of the system by massive numbers of users). … The first aspect of ubiquity – ubiquitous availability and access to virtual worlds – rides on the crest of developments in ubiquitous computing in general. As ubicomp has progressed, access to virtual worlds has begun to move beyond a stationary ‘desktop PC rig,’ expanding now into laptops, tablets, mobile devices, and augmented reality. … The second aspect – ubiquitous identity, or manifest persona – has emerged as multiple avenues of digital expression (blogs, social networks, photo/video hosting, etc.) have become increasingly widespread.
… As long as virtual world developments move alongside general ubicomp developments, moving in and out of the Metaverse may become as convenient and fluid as browsing the World Wide Web is today. … It is also possible that virtual worlds may take a leadership position in this regard, as virtual world artifacts may be more closely linked to one’s digital persona due to the immersive environment… Interoperability in virtual worlds currently exists as a loosely connected collection of information, format, and data standards, most of which focus on the transfer of 3D models/objects across virtual world environments. … [V]irtual world interoperability is not solely limited to 3D object transfer: true interoperability also involves communication protocol, locator, identity, and currency standards… The wide-ranging requirements and scope of digital assets involved in virtual worlds have the potential of making the Metaverse the ‘killer app’ that finally leads the charge toward seamless interoperability. … Scalability. Virtual world technologies are currently in an initial stage of departure from highly centralized system architectures… Going forward, an integrative phase is needed where the multiple independent research threads are brought together in complementary and cohesive ways to form a whole that is greater than the sum of its parts. … Progress in virtual world scalability implies progress in the scalability of many other types of multiuser, multitiered systems. … There are several factors that promote optimism that a fully developed Metaverse can be achieved, as well as a number of significant constraints to realizing this goal. … [A] new generation accustomed to graphically rich, 3D digital environments [both virtual worlds and immersive games offered online and through consoles such as PlayStation, Xbox 360, and Wii] is rapidly coming of age and will likely fuel continued development in all immersive digital platforms including advanced virtual worlds.
… Along with forces that are propelling the development of the Metaverse forward, there are two significant barriers that may inhibit the pace or extent of this progress. The first pertains to current limits in computational methods related to virtual worlds. … In addition to conceptual and computational challenges, the development of the Metaverse may be constrained by significant economic and political barriers. Currently virtual worlds are dominated by proprietary platforms such as Second Life, Cyworld, Utherverse, IMVU, and World of Warcraft, or government-controlled worlds such as the China-based Hipihi. …[J]ust as the old walled gardens of AOL, CompuServe, and Prodigy were instrumental in expanding Internet usage early on, but ultimately became an inhibitory force in the development of the World Wide Web, these proprietary and state-based virtual world platforms have sparked initial growth but now risk constraining innovation and advancement. … [T]he advancement of a fully-realized Metaverse would likely be maximized by harnessing the same process of collective effort and mass innovation that was instrumental in the creation and expansion of the Web.”
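The interoperability requirements named above (3D object transfer plus locator, identity, and currency standards) can be made concrete with a minimal sketch of a portable-asset record that could cross world boundaries. This is purely illustrative: the record fields and names are assumptions, not part of any existing virtual-world standard.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class PortableAsset:
    """Hypothetical record for moving a 3D object between virtual worlds."""
    asset_id: str          # globally unique locator, e.g. a URI (locator standard)
    owner_identity: str    # portable identity reference, not a world-local account
    mesh_format: str       # agreed-upon 3D format, e.g. 'gltf' (object standard)
    mesh_url: str          # where the geometry itself is hosted
    currency_value: float  # value in a shared unit of account (currency standard)

    def to_wire(self) -> str:
        """Serialize for transfer across world boundaries."""
        return json.dumps(asdict(self))

    @classmethod
    def from_wire(cls, payload: str) -> "PortableAsset":
        """Reconstruct the asset on the receiving world's side."""
        return cls(**json.loads(payload))

# Round trip: the receiving world rebuilds an identical asset record.
asset = PortableAsset("urn:asset:42", "urn:id:alice", "gltf",
                      "https://example.org/a.glb", 3.5)
assert PortableAsset.from_wire(asset.to_wire()) == asset
```

The point of the sketch is that "true interoperability" is mostly an agreement problem: once the worlds share a wire format and stable identifiers for objects, owners, and value, the transfer itself is routine serialization.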

SEL: “Does Google favor its own sites in search results, as many critics have claimed? Not necessarily. New research suggests that claims that Google is ‘biased’ are overblown, and that Google’s primary competitor, Microsoft’s Bing, may actually be serving Microsoft-related results ‘far more’ often than Google links to its own services in search results. – In an analysis of a large, random sample of search queries, the study from Josh Wright, Professor of Law and Economics at George Mason University, found that Bing generally favors Microsoft content more frequently, and far more prominently, than Google favors its own content. According to the findings, Google references its own content in its first results position in just 6.7% of queries, while Bing provides search result links to Microsoft content more than twice as often (14.3%). … The findings of the new study are in stark contrast with a study on search engine ‘bias’ released earlier this year. That study, conducted by Harvard professor Ben Edelman concluded that ‘by comparing results across multiple search engines, we provide prima facie evidence of bias; especially in light of the anomalous click-through rates we describe above, we can only conclude that Google intentionally places its results first.’ … So, what conclusions to draw? Wright says that ‘analysis finds that own-content bias is a relatively infrequent phenomenon’ – meaning that although Microsoft appears to favor its own sites more often than Google, it’s not really a major issue, at least in terms of ‘bias’ or ‘fairness’ of search results that the engines present. Reasonable conclusion: Google [and Bing, though less so] really are trying to deliver the best results possible, regardless of whether they come from their own services [local search, product search, etc] or not. 
… But just because a company has grown into a dominant position doesn’t mean they’re doing wrong, or that governments should intervene and force changes that may or may not be “beneficial” to users or customers.”

Edelman/Lockwood: “By comparing results between leading search engines, we identify patterns in their algorithmic search listings. We find that each search engine favors its own services in that each search engine links to its own services more often than other search engines do so. But some search engines promote their own services significantly more than others. We examine patterns in these differences, and we flag keywords where the problem is particularly widespread. Even excluding ‘rich results’ (whereby search engines feature their own images, videos, maps, etc.), we find that Google’s algorithmic search results link to Google’s own services more than three times as often as other search engines link to Google’s services. For selected keywords, biased results advance search engines’ interests at users’ expense: We demonstrate that lower-ranked listings for other sites sometimes manage to obtain more clicks than Google and Yahoo’s own-site listings, even when Google and Yahoo put their own links first. … Google typically claims that its results are ‘algorithmically-generated’, ‘objective’, and ‘never manipulated.’ Google asks the public to believe that algorithms rule, and that no bias results from its partnerships, growth aspirations, or related services. We are skeptical. For one, the economic incentives for bias are overpowering: Search engines can use biased results to expand into new sectors, to grant instant free traffic to their own new services, and to block competitors and would-be competitors. The incentive for bias is all the stronger because the lack of obvious benchmarks makes most bias difficult to uncover. That said, by comparing results across multiple search engines, we provide prima facie evidence of bias; especially in light of the anomalous click-through rates we describe above, we can only conclude that Google intentionally places its results first.”

ICLE: “A new report released [PDF] by the International Center for Law and Economics and authored by Joshua Wright, Professor of Law and Economics at George Mason University, critiques, replicates, and extends the study, finding Edelman and Lockwood’s claim of Google’s unique bias inaccurate and misleading. Although frequently cited for it, the Edelman and Lockwood study fails to support any claim of consumer harm – or call for antitrust action – arising from Google’s practices. – Prof. Wright’s analysis finds own-content bias is actually an infrequent phenomenon, and Google references its own content more favorably than other search engines far less frequently than does Bing: In the replication of Edelman and Lockwood, Google refers to its own content in its first page of results when its rivals do not for only 7.9% of the queries, whereas Bing does so nearly twice as often (13.2%). – Again using Edelman and Lockwood’s own data, neither Bing nor Google demonstrates much bias when considering Microsoft or Google content, respectively, referred to on the first page of search results. – In our more robust analysis of a large, random sample of search queries we find that Bing generally favors Microsoft content more frequently – and far more prominently – than Google favors its own content. – Google references its own content in its first results position when no other engine does in just 6.7% of queries; Bing does so over twice as often (14.3%). – The results suggest that this so-called bias is an efficient business practice, as economists have long understood, and consistent with competition rather than the foreclosure of competition. One necessary condition of the anticompetitive theories of own-content bias raised by Google’s rivals is that the bias must be sufficient in magnitude to exclude rival search engines from achieving efficient scale. A corollary of this condition is that the bias must actually be directed toward Google’s rivals.
That Google displays less own-content bias than its closest rival, and that such bias is nonetheless relatively infrequent, demonstrates that this condition is not met, suggesting that intervention aimed at ‘debiasing’ would likely harm, rather than help, consumers.”
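The headline numbers in these studies (e.g. 6.7% for Google vs. 14.3% for Bing) are simple proportions over a query sample: the fraction of queries where an engine places its own content in the first result position. A minimal sketch of that computation, using invented toy data rather than the studies' actual query sets:

```python
def own_content_bias_rate(results, own_domains):
    """Fraction of queries whose first result links to the engine's own content.

    results: mapping of query -> ranked list of result domains
    own_domains: set of domains the engine itself operates
    """
    biased = sum(1 for ranking in results.values()
                 if ranking and ranking[0] in own_domains)
    return biased / len(results)

# Toy query sample (invented for illustration, not from either study).
sample = {
    "maps of boston": ["maps.google.com", "mapquest.com"],
    "buy laptop":     ["amazon.com", "shopping.google.com"],
    "weather today":  ["weather.com", "noaa.gov"],
    "email login":    ["mail.google.com", "outlook.com"],
}
rate = own_content_bias_rate(
    sample, {"maps.google.com", "shopping.google.com", "mail.google.com"})
assert rate == 0.5  # 2 of the 4 toy queries put own content first
```

The disagreement between Edelman/Lockwood and Wright is not over this arithmetic but over the comparison baseline: whether an engine's rate should be judged against other engines' treatment of the same content, and whether any rate of this size is large enough to matter competitively.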

Goncalves, Perra, Vespignani: “Modern society’s increasing dependency on online tools for both work and recreation opens up unique opportunities for the study of social interactions. A large survey of online exchanges or conversations on Twitter, collected across six months and involving 1.7 million individuals, is presented here. We test the theoretical cognitive limit on the number of stable social relationships known as Dunbar’s number. We find that users can entertain a maximum of 100-200 stable relationships, in support of Dunbar’s prediction. The ‘economy of attention’ is limited in the online world by cognitive and biological constraints as predicted by Dunbar’s theory. Inspired by this empirical evidence we propose a simple dynamical mechanism, based on finite priority queuing and time resources, that reproduces the observed social behavior. … Social networks have changed the way we communicate. It is now easy to be connected with a huge number of other individuals. In this paper we show that social networks did not change human social capabilities. We analyze a large dataset of Twitter conversations collected across six months involving millions of individuals to test the theoretical cognitive limit on the number of stable social relationships known as Dunbar’s number. We found that even in the online world cognitive and biological constraints hold as predicted by Dunbar’s theory, limiting users’ social activities. We propose a simple model for users’ behavior that includes finite priority queuing and time resources and that reproduces the observed social behavior. This simple model offers a basic explanation of a seemingly complex phenomenon observed in the empirical patterns of Twitter data and offers support to Dunbar’s hypothesis of a biological limit to the number of relationships.”
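The mechanism the authors describe – finite priority queuing under a fixed time budget – can be sketched in a few lines. This is a toy model in the spirit of their proposal, not their actual implementation; the parameters, the fixed per-contact priorities, and the message-arrival process are all invented for illustration.

```python
import random

def simulate_attention(n_contacts, time_budget, steps, seed=0):
    """Toy finite-resource model of online attention.

    Each step, messages from a random subset of contacts join a queue,
    but the agent has time to answer only `time_budget` of them, in
    priority order. Returns the set of contacts who ever got a reply.
    """
    rng = random.Random(seed)
    # Each contact gets a fixed priority (standing in for relationship strength).
    priority = {c: rng.random() for c in range(n_contacts)}
    replied = set()
    for _ in range(steps):
        incoming = rng.sample(range(n_contacts), k=min(50, n_contacts))
        incoming.sort(key=lambda c: priority[c], reverse=True)
        # Finite time: only the highest-priority arrivals get serviced.
        for contact in incoming[:time_budget]:
            replied.add(contact)
    return replied

# Even with 1000 contacts available, the serviced set stays small:
maintained = simulate_attention(n_contacts=1000, time_budget=5, steps=100)
assert len(maintained) < 1000
```

The qualitative point survives the simplification: because replies are rationed by a fixed time budget and allocated by priority, the set of contacts who actually receive sustained attention saturates well below the number of nominal connections, which is the shape of the Dunbar-like ceiling observed in the Twitter data.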

Brooks: “If the thing that makes it real is your capacity to have a theory of mind relationship with a certain number of people, I can still imagine that social media would increase people’s capacities. … If [social media tools] succeed they will slowly break Dunbar’s number. … I would expect that Twitter would have a small number of people with a huge number of connections, but they’re not listening to that many people, they’re just talking to that many people.”