I opened up a copy of the New York Times today, and in an empty space within an article there was a blurb that read

Social networks put individuals at the center of their own media universes

— I am not even sure I understand what that is supposed to mean. Let alone the notion of a plurality of universes, the idea that media are not between people but rather like belly buttons for individuals to discover themselves within … I just find it mind-boggling. Then again, according to the surrounding words in the article next to this message, social media are depicted as breeding grounds for “fake news”, as cesspools for propagating mythical stories, for manipulating large populations of suckers into following this or that social media expert, leader, salesman or whatever.

“Social” is seen as the big mistake, the errant sidetrack from the collapsing foundations of journalism. Four words seem hidden somewhere in between the lines: I told you so. Naive and forlorn like Dorothy in a dizzying whirlwind, individuals end up as victims of lever-pulling hackers, clowns and con-artists. Social media transport hoaxes and fairy tales, yet they are also instruments targeted at novice users, training wheels to guide their first steps in the cyber-landscape. The virtual world is at once a playground for the light-hearted and a wide field of thin ice. Throughout this portrayal, the real world is not embodied in media. Instead, real-world people with real-world addresses exist behind real-world mastheads printed on real-world paper. They carry real-world business cards, not fake virtual URLs.

Real-world buildings, with real-world street addresses, real-world telephones and such media are the physical conduits for real-world relationships. In contrast (so the argument goes), virtual facades evaporate into thin air as soon as a video screen is turned off.

This contrast might be all good and fine, except that it is a lie. None of these things are any more real than the other. Main Street is nothing without the street sign signifying it as such. The reason why we can agree to meet at Main Street is that we both understand it to be Main Street, and this agreement is based on us both understanding how to read street signs. Indeed: we agree on many things, of which such street signs are fine examples. We can also agree on the time of day, to speak the same language, or to answer each other’s questions succinctly and truthfully. Such agreements are crucial for us to help each other reach our goals, whether we hold the same goals in common, or whether each of us is trying to reach our own particular individual goals.

By reaching our goals, we become not only successful, we also become who we are. We actually self-actualize our identities. For example: a writer does not simply exist, he or she becomes a writer by writing. A worker becomes a worker by working. A buyer becomes a buyer by buying, a seller becomes a seller by selling, a consumer becomes a consumer by consuming and a producer becomes a producer by producing. As these last examples show, sometimes we can only self-actualize when other conditions are met, and sometimes these conditions also require the engagement of other people. In this sense, reaching our own goals involves a team effort — as, for example, a sale involves the teamwork of both a buyer and a seller.

Therefore, the real world is not so much a matter of separated individuals as it is the interaction and engagement of individuals with each other in a symbiotic process of self-actualization. We become who we are by interacting with one another. Our goals aren’t distinct and separate, they’re intertwined. We need to think of media as bustling marketplaces for such exchanges to take place, rather than as sterile and inert transport mechanisms. These are not empty tubes simply bridging gaps, they are stages for playing out our roles in real life.

There is a spectre haunting the Web: That spectre is populism.

Let me backtrack a moment. This piece is part of an ongoing series of posts about “rational media” – a concept that is still not completely hard and fast. I have a hunch that the notion of “trust” is going to play a central role… and trust itself is also an extremely complex issue. In many developed societies, trust is at least in part based on socially sanctioned institutions (cf. e.g. “The Social Construction of Reality”) – for example: public education, institutions for higher education, academia, etc. Such institutions permeate all of society – be it a traffic sign at the side of a road, or a crucifix as a central focal element on the altar in a church, or even the shoes people buy and walk around in on a daily basis.

The Web has significantly affected the role many such institutions play in our daily lives. For example: one single web site (i.e. the information resources available at a web location) may be more trusted today than an encyclopedia produced by thousands of writers ever was – whether centuries ago, decades ago, or even just a few years past.

Similarly, another web site may very well be trusted by a majority of the population to answer any and all questions whatsoever – whether of an encyclopedic nature or not. Perhaps such a web site might use algorithms – basically formulas – to arrive at a score for the “information value” of a particular web page (the HTML encoded at one sub-location of a particular web site). A large part of this formula might involve a kind of “voting” performed anonymously – each vote might be no more than a scratch mark presumed to indicate a sign of approval (an “approval rating”) given from disparate, unknown sources. Perhaps a company might develop more advanced methods to help gauge whether a vote is reliable or suspect (for example: one such method is commonly referred to as a “nofollow tag” – a marker indicating that the vote should not be trusted).
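The scoring scheme sketched above can be made concrete with a toy model. This is purely illustrative – the function name, the vote format, and the scoring rule are my own assumptions, not any real search engine’s formula:

```python
# Toy model of an "approval rating" for a web page.
# Each inbound link counts as one anonymous vote; votes marked
# "nofollow" are treated as untrusted and excluded from the tally.

def information_value(votes):
    """votes: list of dicts like {"source": str, "nofollow": bool}"""
    trusted = [v for v in votes if not v.get("nofollow", False)]
    return len(trusted)

votes = [
    {"source": "blog-a", "nofollow": False},
    {"source": "forum-b", "nofollow": True},   # untrusted vote, ignored
    {"source": "site-c", "nofollow": False},
]
print(information_value(votes))  # prints 2
```

Real ranking systems weight votes by the voter’s own reputation and many other signals; the point here is only that, at bottom, the score is a tally of approvals.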

What many such algorithms have in common is that, on a very basic level, they usually rely quite heavily on some sort of voting mechanism. This means they are fundamentally oriented towards populism – the most popular opinion is usually viewed as the most valid point of view. This approach is very much at odds with logic, the scientific method and other methods that have traditionally (for several centuries, at least) been used in academic institutions and similar “research” settings. At their core, such populist algorithms are not “computational” – they rely not on any kind of technological solution to questions, but rather scan and tally up the views of a large number of human (and/or perhaps robotic) “users”. While such populist approaches are heralded as technologically advanced, they are actually – on a fundamental level – very simplistic. While I might employ such methods to decide which color of sugar-coated chocolate to eat, I doubt very much that I, personally, would rely on them to make more important – for example: “medical” – decisions (such as whether or not to undergo surgery). I, personally, would not rely on such populist methods much more than I would rely on chance. As an example of the kind of errors that might arise from employing them, consider the rather simple and straightforward case that some of the people voting could in fact be color-blind.
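The color-blind-voter error can be simulated in a few lines. Again, the numbers are invented purely for illustration – the point is that a majority tally returns the popular answer, not the true one:

```python
from collections import Counter

def majority_vote(ballots):
    """Return the most common answer among the ballots cast."""
    return Counter(ballots).most_common(1)[0][0]

# The object being voted on is actually red, but a majority of
# voters in this (hypothetical) sample confuse red with green.
accurate_voters = ["red"] * 4      # see the color correctly
colorblind_voters = ["green"] * 5  # systematically mistaken
print(majority_vote(accurate_voters + colorblind_voters))  # prints green
```

A systematic bias shared by enough voters flips the result; no amount of tallying can correct it, because the tally has no access to the ground truth.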

Yet that is just the beginning. Many more problems lurk under the surface, beyond the grasp of merely superficial thinkers. Take, for example, the so-called “bandwagon effect” – namely, that many people are prone to fall into a sort of “follow the leader” kind of “groupthink”. Similarly, it is quite plausible that such bandwagon effects could influence not only people’s answers, but also the kinds of questions they feel comfortable asking (see also my previous post). On a more advanced level, complex systems may also be influenced by the elements they comprise. For example: while citation indexes were originally designed on the assumption that citation data would be reliable, over the years it was demonstrated that citations are quite prone to corruption, and that citation analysis is therefore not a reliable method – whatever reliability citation data may once have had, citation fraud eventually corrupted the system.

One widely acclaimed methodology in information science is Dervin’s “Sense-Making Methodology” (SMM). It is very similar to the way people often think about “storytelling”:

Because communication is embodied and learned in impositional and constraining structures (families, communities, cultures, societies), SMM assumes that most spontaneous communication is, in fact, not spontaneous. Rather, spontaneity in communication invites habitual repetition of hegemony and habitus. Interrupting this usual-ness requires a different kind of interviewing — one based on verbs rather than nouns; one that allows for articulation time and conscientizing; one that gives individuals safety for expressing what they “really” think, feel, do, imagine; one that gives informants freedom to be variable, to be sometimes clear and sometimes muddled; sometimes very cognitive, sometimes emotional, sometimes both at the same time.

In Professor Dervin’s approach, information is seen as the act of reducing uncertainty. The opposite — increasing uncertainty — would therefore be interpreted as something like misinformation or disinformation.

Yet what if we — for example — remove the certainty that God exists? What if people believe a myth to be certainly true, and we remove that notion of truth?

According to Dervin’s view, we would have robbed them of the happiness of being confident that their beliefs are true – and by increasing their uncertainty, we would have made them feel less well-informed.