I opened up a copy of the New York Times today, and in an empty space within an article, there was a blurb that read:

Social networks put individuals at the center of their own media universes

— I am not even sure I understand what that is supposed to mean. Never mind the notion of a plurality of universes; the idea that media are not between people but rather like belly buttons for individuals to discover themselves within … I just find it mind-boggling. Then again, in the article surrounding this blurb, social media are depicted as breeding grounds for “fake news”, as cesspools for propagating mythical stories, for manipulating large populations of suckers into following this or that social media expert, leader, salesman or whatever.

“Social” is seen as the big mistake, the errant sidetrack from the collapsing foundations of journalism. Four words seem hidden somewhere between the lines: I told you so. Naive and forlorn like Dorothy in a dizzying whirlwind, individuals end up as victims of lever-pulling hackers, clowns and con-artists. Social media transport hoaxes and fairy tales, yet they are also instruments targeted at novice users, training wheels to guide their first steps in the cyber-landscape. The virtual world is at once a playground for the light-hearted and a wide field of thin ice. Throughout this portrayal, the real world is not embodied in media. Instead, real-world people with real-world addresses exist behind real-world mastheads printed on real-world paper. They carry real-world business cards, not fake virtual URLs.

Real-world buildings, with real-world street addresses, real-world telephones and such media are the physical conduits for real-world relationships. In contrast (so the argument goes), virtual facades evaporate into thin air as soon as a video screen is turned off.

This contrast might be all well and good, except that it is a lie. None of these things is any more real than the others. Main Street is nothing without the street sign signifying it as such. The reason why we can agree to meet at Main Street is that we both understand it to be Main Street, and this agreement is based on us both understanding how to read street signs. Indeed: we agree on many things, of which such street signs are fine examples. We can also agree on the time of day, to speak the same language, or to answer each other’s questions succinctly and truthfully. Such agreements are crucial for us to help each other reach our goals, whether we hold the same goals in common, or whether each of us is trying to reach our own particular individual goals.

By reaching our goals, we become not only successful, we also become who we are. We actually self-actualize our identities. For example: a writer does not simply exist, he or she becomes a writer by writing. A worker becomes a worker by working. A buyer becomes a buyer by buying, a seller becomes a seller by selling, a consumer becomes a consumer by consuming and a producer becomes a producer by producing. As these last examples show, sometimes we can only self-actualize when other conditions are met, and sometimes these conditions also require the engagement of other people. In this sense, reaching our own goals involves a team effort — as, for example, a sale involves the teamwork of both a buyer and a seller.

Therefore, the real world is not so much a matter of separated individuals as it is the interaction and engagement of individuals with each other in a symbiotic process of self-actualization. We become who we are by interacting with one another. Our goals aren’t distinct and separate, they’re intertwined. We need to think of media as bustling marketplaces for such exchanges to take place, rather than as sterile and inert transport mechanisms. These are not empty tubes simply bridging gaps, they are stages for playing out our roles in real life.

Online, websites are accessed exclusively via machine-readable text. Specifically, the character set prescribed by ICANN, IANA, and similar regulatory organizations consists of the 26 characters of the Latin alphabet, the „hyphen“ character and the 10 Arabic numerals (i.e. the digits 0-9). Several years ago, there was a move to accommodate other language character sets (this movement is generally referred to as „Internationalized Domain Names“ [IDN]), but in reality this accommodation is nothing more than an algorithm which translates writing that uses such „international“ symbols into strings from the regular Latin character set, and uses reserved strings (those beginning with the prefix „xn--“) from the enormous set of strings managed by ICANN for such „international“ strings. There is no way to register a string directly using such „international“ characters. Another rarely mentioned tidbit is that this obviously means that the set of IDN strings that can be registered is vastly smaller than the set of strings using only the standardized character set approved for direct registration.
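The translation described above can be seen in action with Python's built-in „idna“ codec, which implements exactly this algorithm (Punycode wrapped in the „xn--“ prefix). This is a minimal sketch; the label „bücher“ is just a well-known illustrative example, not anything from the article:

```python
# A domain label containing a non-ASCII ("international") character.
label = "bücher"

# Encode it into the restricted letters-digits-hyphen character set
# actually stored in the DNS. Python's built-in "idna" codec applies
# the Punycode translation and prepends the reserved "xn--" prefix.
ascii_form = label.encode("idna")
print(ascii_form)               # b'xn--bcher-kva'

# The translation is reversible: decoding recovers the original spelling.
print(ascii_form.decode("idna"))  # bücher
```

Note that the result on the wire is an ordinary string from the approved character set, which is the author's point: the „international“ name exists only as a translation layer on top of it.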

All of that is probably much more than you wanted to know. The „long story short“ is that all domain names are machine readable (note, however, that – as far as I know – no search engine available today on the world-wide-web uses algorithms to translate IDN domain name strings into their intended „international“ character strings). All of the web works exclusively via this approved character set. Even the so-called „dotted decimals“ – the numbers which refer to individual computers (the „servers“) – are written exclusively with Arabic numerals, though in reality they are based on groups of bits: each number represents a „byte“-sized group of 8 bits, in other words a value from a set of 256 possibilities. In the past several years, there has also been a movement to extend the address space to accommodate more computers, from 4 bytes (commonly referred to as IPv4 or „IP version 4“) to 16 bytes (commonly referred to as IPv6 or „IP version 6“), thereby accommodating 2^96 – roughly 8 x 10^28 – times as many computers as before. Note, however, that each computer can accommodate many websites / domains, and the number of domain names available exceeds the number of computers available by many orders of magnitude (coincidentally, the number of domain names available in each top level domain [TLD] is approximately 1 x 10^100 – in the decimal system, that’s a one with one hundred zeros, also known as 1 googol).
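The byte-by-byte structure of „dotted decimal“ addresses, and the size difference between the 4-byte and 16-byte address spaces, can be checked with the standard library's `ipaddress` module. This is a sketch only; the address used is a documentation example, not a specific server:

```python
import ipaddress

# Each of the four numbers in a dotted-decimal IPv4 address is one
# byte (8 bits), i.e. a value between 0 and 255.
addr = ipaddress.ip_address("93.184.216.34")
print(len(addr.packed))   # 4 raw bytes

# IPv4 offers 2**32 possible addresses; IPv6, at 16 bytes (128 bits),
# offers 2**128 - a factor of 2**96 more, not a factor of 65536.
print(2**32)              # 4294967296
print(2**128 // 2**32 == 2**96)  # True
```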

Again: Very much more than you wanted to know.

The English language has a much smaller number of words – a very large and extensive dictionary might have something like 100,000 entries. With variants such as plural forms or conjugated verb forms, that will still probably amount to far fewer than a million possible strings – in other words: about 94 orders of magnitude fewer than the number of strings available as domain names. What is more, most people you might meet on the street probably use only a couple thousand words in their daily use of „common“ language. Beyond that, they will use even fewer when they use the web to search for information (for example: instead of searching for „sofa“ directly, they may very well first search for something more general like „furniture“).
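The rough arithmetic behind this comparison can be sketched in a few lines. The assumptions here are mine, not the author's: 37 permitted symbols (26 letters, 10 digits, the hyphen) and a maximum label length of 63 characters, ignoring minor rules such as a label not beginning or ending with a hyphen:

```python
import math

symbols = 37   # 26 letters + 10 digits + the hyphen
max_len = 63   # maximum length of a single domain label

# Number of decimal digits in 37**63, the approximate count of
# possible labels: log10(37) * 63 is just under 99, i.e. on the
# order of a googol (10**100), as the essay states.
digits = math.log10(symbols) * max_len
print(round(digits))       # 99

# A generous English vocabulary of ~1,000,000 strings is about 10**6,
# i.e. roughly 93-94 orders of magnitude smaller.
print(round(digits) - 6)   # 93
```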

What does „machine readable“ mean? It means a machine can take in data and process it algorithmically to produce a result – you might call the result „information“. For example: There is a hope that machines will someday be able to process strings – or even groups of strings, such as this sentence – and thereby derive („grok“ or „understand“) the meaning. This hope is a dream that has already existed for decades, but the successes so far have been extremely limited. As I wrote over a decade ago (in my first „Wisdom of the Language“ essay), it seems rather clear that languages change faster than machines will ever be able to understand them. Indeed, this is almost tautologically true, because machines (and so-called „artificial intelligence“) require training sets in order to learn (and such training sets from so-called „natural language“ must be expressions from the past – and not even just from the past, but also approved by speakers of the language, i.e. „literate“ people). So-called „pattern recognition“ – a crucial concept in the AI field – is always recognizing patterns which have been previously defined by humans. You cannot train a machine to do anything without a human trainer, who designs a plan (i.e., an algorithmic set of instructions) which flows from human intelligence.

There was a very trendy movement which was quite popular several years ago that led to the view that data might self-organize, that trends might „emerge from the data“ without needing the nuisance of consulting costly humans, and this movement eventually led to what is now commonly hyped as „big data“. All of this hype about „emergence“ is hogwash. If you don’t know what I mean when I say „hogwash“, then please look it up in a dictionary.

Recently, I posted something on Facebook that I said to Vint Cerf 10 years ago. It was revolutionary then. Even more shocking to me today is that it probably still seems revolutionary.

Why? Why do so many people still appear so lacking in literacy skills? Perhaps even more importantly: Why do I remain so optimistic that more and more people will eventually acquire more and more literacy skills after all?

So far, I am sorry to say that I don’t know why. Maybe I simply prefer to have an optimistic outlook.

But I think almost anyone will have to admit that there are clear signs that a change is indeed presently happening here and now. The Occupy Wall Street demonstrations were clear signs that people are no longer willing to be duped and suckered by governments and corporations alike. The only failure Occupy experienced was a lack of power – in the end, the side with more guns, and above all more powerful guns, won.

Is literacy more powerful than weaponry? The Enlightenment preached that the pen was mightier than the sword, but was that perhaps also simply a hoax?

Again: My optimism leads me to continue to believe in the power of literacy. What happened during the Occupy uprising was, after all, not a true test of literacy against weaponry – it was plain and simple stubborn power against stubborn power… and stronger stubborn power won.

The true test of literacy is when people decide „We won’t get fooled again“… and follow through on their own convictions.

This was one reason people stopped using Google and started using social media websites instead. They didn’t realize the new boss was more or less the same as the old boss. Do they realize this now? Time will tell.

What became quite clear during the Occupy uprising was that the government was not on the side of the 99%. This was perhaps a shock to many… but it is not the first time that a government has sided with commercial and industrial interests.

As I recently wrote: Government may indeed have very little or even no interest in promoting the literacy of its people if it believes it may be threatened by a more literate population. In order to win a following, governments and corporations alike employ propaganda and advertising rather than rational argumentation.

Rational media, instead, are built on a foundation of literacy. Still few and far between (mainly because propaganda and advertising were much more widespread throughout the 20th Century), rational media are not normally closely held by private interests. Indeed, because of the distributed nature of the Internet, it is very difficult to maintain monopoly power over rational media (versus, for example, retard media).

The first sign of a literate public is one which is willing and able to abstain from succumbing to monopoly powers. This was true when Martin Luther nailed his 95 theses onto the door of the Castle Church in Wittenberg five centuries ago, and it is equally true for anyone who is willing and able to refrain from using Google or Facebook.

Another sign of a more literate public is one which is willing and able to agree on terminology. This is perhaps easier said than done. Obviously, it is extremely difficult in situations where people speak completely different languages. Yet even when people speak more or less the same language, they may have different opinions about many things, and such differences of opinion may lead to differing terminology, and perhaps also significant misunderstandings.

One way to mitigate this problem of potential misunderstanding is to focus intensely on „common language“ terminology. It is possible to sacrifice precision without sacrificing accuracy, and it is a great feat to be content with a solution which is essentially on the mark despite spilling over into minor side effects.

There are many more aspects of a literate society that deserve to be enumerated, but this post is already quite long. So I will simply save them for another rainy day.