¡Bienvenidos! Welcome! A blog about Tau-Chain, TML and Agoras.

Tau-Chain.info is an unofficial blog that gathers information from various Internet sites about IDNI projects: Tau, Tau-Chain, Tau Meta Language and Agoras. This blog does not belong to, nor is it part of, the official Tau project. The purpose of the blog is to make the project known, in an independent way, to people.

Tau is a revolutionary blockchain platform designed to scale up social consensus and accelerate knowledge creation in a decentralized network. Agoras will be an advanced marketplace for knowledge and computational resources built on this framework.

So what exactly do I mean by formal communication? Well, when we think of how human beings communicate with machines, it is in a formal language. This formal language requires minimized ambiguity for security analysis (how can we analyze code if we cannot effectively interpret it?). The other problem is that machines require, for example, that if... then... else and similar conditional statements be well defined and unambiguous.

In computer science, an ambiguous grammar is a context-free grammar for which there exists a string that can have more than one leftmost derivation or parse tree, while an unambiguous grammar is a context-free grammar for which every valid string has a unique leftmost derivation or parse tree. Many languages admit both ambiguous and unambiguous grammars, while some languages admit only ambiguous grammars.

Specifically, we know that deterministic context-free grammars must be unambiguous, so we know unambiguous grammars exist. It appears the strategy is ambiguity minimization with regard to formal languages (such as computer programming languages).

For computer programming languages, the reference grammar is often ambiguous, due to issues such as the dangling else problem. If present, these ambiguities are generally resolved by adding precedence rules or other context-sensitive parsing rules, so the overall phrase grammar is unambiguous. The set of all parse trees for an ambiguous sentence is called a parse forest.[1]

The parse forest is an important concept to note: the set of all possible parse trees for an ambiguous sentence is called a "parse forest". This concept is key to understanding the strategy of ambiguity minimization. So we can, in practice, minimize ambiguity, and we know for certain that deterministic context-free languages admit an unambiguous grammar. But what does that mean? What are the benefits of unambiguous language in general?
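To make the parse-forest idea concrete, here is a minimal sketch (my own illustration, not from any Tau or TML code) that counts the parse trees of a token string under the classic ambiguous expression grammar E -> E '+' E | E '*' E | num. Every choice of top-level operator yields a distinct tree, so even the short sentence 1+2*3 has a forest of two trees: (1+2)*3 and 1+(2*3).

```python
def count_parse_trees(tokens):
    """Count parse trees of a token list under the ambiguous
    grammar E -> E '+' E | E '*' E | num."""
    if len(tokens) == 1:  # a lone number: exactly one tree
        return 1
    total = 0
    # try every operator position as the root of the tree
    for i, tok in enumerate(tokens):
        if tok in ("+", "*"):
            total += (count_parse_trees(tokens[:i]) *
                      count_parse_trees(tokens[i + 1:]))
    return total

print(count_parse_trees(["1", "+", "2", "*", "3"]))             # → 2
print(count_parse_trees(["1", "+", "2", "*", "3", "+", "4"]))   # → 5
```

The forest grows as the Catalan numbers with each added operator, which is exactly the interpretive explosion an unambiguous (precedence-annotated) grammar avoids by forcing a single tree per sentence.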

A benefit of ambiguity minimization

Simple English is a form of controlled English designed to minimize ambiguity in English. This is important because using Simple English to codify the rules or write the laws puts them in a language where there is less of a computational expense (in brain power) to process and interpret the statements.

Her post highlights the fact that there are different love languages and that we don't all speak the same love language. Ambiguity here is actually not a good thing, but the simple fact is: when someone speaks about love, how do we know they are talking about the same thing? As a result we often seek an agreed-upon or formally defined "love concept" where we all agree it's love. This is not trivial to find, and as a result a topic like love is not easy to discuss in any serious manner. Unambiguous communication (or, to be more precise, communication with minimized ambiguity) would allow Alice to discuss the topic of love with Bob in a way where they both know exactly what the other is referring to in terms of behavioral expectations, emotions/feelings, etc.

If Alice agrees to love Bob, then Bob has no way to determine what Alice means unless he and she agree on a mutually defined concept of love. This highlights how agreement requires very good communication, and how minimizing ambiguity can be beneficial, at least in this example.

More details:

Ambiguity minimization makes sense when you are following a principle of computational kindness. That is, if Alice would like to reduce the computational burden on Bob, she can reduce or minimize the ambiguity of her sentence. This is because, in order for Bob to interpret an ambiguous sentence, Bob must in essence sort all possible interpretations of that sentence from most likely to least likely, and before he can even sort he must first search to find all possible, or at least plausible, interpretations.

This is very computationally expensive for Bob but very cheap for Alice. Alice knows exactly what she means, but Bob has no clue what Alice REALLY means.

A benefit of ambiguity

There are other examples where increasing ambiguity could be beneficial, such as when the communication is less than formal, or to share a stream of consciousness without turning it into a formal communication. Humor, for example, rides on ambiguity, and a good joke may have multiple layers. Art also leverages ambiguity because it's perhaps meant to be interpreted 20 different ways, all to produce a certain desired effect.

Ambiguity allows more meaning to be packed into fewer words. This, in a sense, is a sort of compression scheme. Even if a sentence has multiple possible meanings, the set of meanings is still finite. It's a fixed number of meanings, so theoretically speaking a search can be conducted. In fact this is what a human being does when interpreting natural language, where a sentence can have multiple meanings (they search for all possible interpretations of that sentence). The problem is that this process is computationally expensive, at least for a human being trying to figure out all possible interpretations of a sentence.

Lawyers, when they do their work, work with a specific knowledge base of common legal sentences and common interpretations known in their profession, but the rest of us might see a sentence in lawyer-speak and not really know what it means, because we will not know the common interpretations. This is a big problem, of course, because to form agreements between two parties, both parties need to have a common understanding (a kind of knowledge-symmetric understandability) allowing them both to interpret roughly the same sentence to mean the same thing.

In a recent article of mine [1] I hinted at my strong suspicion that scaling is itself scalable.

''Scaling is a problem. Scaling must be scalable, too. Metascale from here to Eternity.''

No matter how terrific a grower a system is - as per its own internal algorithmic growth-drive rules - it seems inevitable that its growth will drive it into entropic mutualization [2] upon impact with a kind of ... downscaler.

Scaling is everything, yeah. But it is quite intuitive, and supported by too big a body of evidence to ignore, that, paradoxically: the faster a thing grows, the sooner its encounter with an external and bigger downscaling factor comes.

This realization, refracted through the prism of our 'reptilian brain' layer [3] and amplified to gargantuan proportions by our inherent social hierarchicity [4], is the source of the 'Malthusian [5] anxiety' which has led to countless violent deaths throughout human history. Fear is anger [6], so the feeling that there is only so much to go around, and that the catastrophe of 'running out' of something is imminent, is the major source of what makes us bad to each other [7].

There is a plethora of examples of very well mathematically and scientifically grounded doomsayer scenarios, and we must admit that they are all correct as per their internal axiomatics [8], and simultaneously they are all totally wrong for missing the obvious: the factors of externalities [9], the properties and opportunities of the medium which is consumed and/or created by this growth, and which transcend the axiomatics - for growth is always 'growth into'. The fact that doomsday scenarios are so compellingly consistent internally is what makes them such a strong and dangerous ideological weapon of mass destruction [10].

b. the grim visions of the whole of Mankind becoming telephone-switchboard blue-collar workers [13],[4], the number of whom would by now have had to exceed the total world population to achieve the same level of telephonization, or

c. the all-librarians world [14], where it would take more librarians than the whole of mankind to serve the social memory in the paper-and-printed-ink storage-facilities mode ...

d. the Club of Rome [15], as the noisiest modern bird of ill omen, with 'projections' based on the same blind extrapolations as the urban seas of shit, or the 'proofs' of the impossibility to connect or educate or feed all - instigating the mass-destruction fear that ''we are running out of everything and will soon all die'' [16], used as justification for mass atrocities, VS Julian Simon's [17] ''Ultimate Resource'' (1981, 1996) [18]. Cf. my accelerando article [19], and see what precisely is the Factory for the succession of better and better Hanson drives over the last few million years - from the Blade and the Fire to the Tau. It is the same thing whose identification turned Julian Simon from a fanatical Malthusian [20] into a rationally convinced Cornucopian [21] ... the human mind.

e. the predator-prey model [22], whose brutal flaw this pseudo-haiku [23], I guess, depicts best:

''hawk eat chic -> less chic, human eat chic -> more chic''

for failing to posit and account for the positive feedback loop [24] of predator-over-prey dynamics ...

f. The comment of Daryl Oster [25], founder of the other passion of mine - ET3 [26] - on the so-called 'saturation' of the scalables (exemplified in the field of transportation, which, btw, being communication ... our social structures map onto the mobility systems we have at our disposal ...):

''... US transportation growth has focused on automobile/roads (and airline/airport) developments. (And this has been VERY good for the US economy.) The reason is that cars/jets offered far better MARKET VALUE than horse/buggy/train transport did 150 years ago. In the mid 1800s, trains displaced muscle power for travel between cities - because trains offered better market value than ox carts. Trains reached 'market saturation' about 1895 to 1905 (becoming 'unsustainable') - however 'market momentum' produced 20 years of 'overshoot'. Cars/jets were far more sustainable than passenger trains and muscle power, and started to displace trains (and finish off horses). By 1916 the US rail network peaked at 270,000 miles (today less than 130,000 miles is in use).

Just like passenger trains hit market saturation, roads/airports are reaching economic limitations. The time is ripe for a market disruption, and all indicators (past and present) say it will NOT come from, or be supported by, government or academia -- but from private sector innovations that offer a 10x value improvement (like ET3), AND also offer incentives for most (not all) key industries to participate (like ET3). Automated cars, smart highways, and electronic ride sharing are industry responses that will contribute to overshoot of cars/roads for the next 5-10 years.

The main problem I see with the education system is that academic research and publication on transportation is primarily funded by status quo industries like: railroads and rail equipment manufacturers, highway builders, automobile/truck manufacturers, engineering firms, etc. -- all of whom fund research centered on 'improving' the status quo.

Virtually all universities (for the last 1k+ years) are set up to drive incremental improvements that industry demands, and virtually all paradigm shifts are resisted until AFTER they occur and are first adopted by industry. Government is the same (for instance in 1905 passing laws to forbid cars that were disrupting horse traffic; or in 1933 passing laws to limit investment in innovation startups to the wealthy (those successful in the status quo)).''

g. Darwinian algo [27] sqrt(n) VS higher algos - like Metcalfe's n^2 [28]. It is not precise - it is more metaphorical, meant to indicate the direction or scale of scaling rather than rigorous precision - but ... the former, figuratively speaking, takes 100 times more to put up 10 times more, and the latter takes 10 times more to return 100 times more ...

h. Barter vs money. See [29], bottom of page 5, above the bottom-line notes, about the latter:

As a demonstration of how one item out of a scaling barter system emerges as a specialized transactor and accelerator to transcale the barter economy. From within. Endogenously, as always. (Btw, an extremely strong document, with entire books read and internalized behind each tight and contentful sentence!)
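The sqrt(n)-vs-n^2 contrast in item g above can be checked with a few lines of arithmetic (my own illustrative sketch, not from the cited sources): scaling the input by 100 only scales sqrt(n) by 10, while scaling the input by 10 scales n^2 by 100.

```python
import math

# Illustrative arithmetic for item g (my own sketch): a sublinear
# sqrt(n) "Darwinian" return curve vs a Metcalfe-style n^2 network value.
def darwinian(n: float) -> float:
    return math.sqrt(n)

def metcalfe(n: float) -> float:
    return n ** 2

# "takes 100 times more to put up 10 times more": sqrt(100*n) = 10*sqrt(n)
assert darwinian(100 * 4) == 10 * darwinian(4)
# "takes 10 times more to return 100 times more": (10*n)^2 = 100*n^2
assert metcalfe(10 * 4) == 100 * metcalfe(4)
```

The asymmetry between the two curves is the whole point: under the former, effort outruns return; under the latter, return outruns effort.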

i. The heat death of the universe [30] VS the realization that the 2nd law [31] - the conservation law for entropy/information - does not allow that [32]; the asymptoticity [33] of the fundamental limits of nature; the fact that max entropy grows faster than/from/due to the actual entropy growth [34]; that entropy is not disorder [35]; and that, at the end of the day, it is an unbounded, immortal universe [36] ... because it's all a combinatorial explosion [37].

j. The Anthropic principle [38] and the realization that it is extremely hard, if not impossible, to posit a lifeless universe [39] ...

k. The Algoverse - my 'psychedelic' vision [27] of the asymptotic inexorable hierarchy of the Dirac sea [40] of lower algos which take everything for almost nothing - up towards giving almost everything for almost nothing - Bucky Fuller's runaway Ephemeralization [41]. Algorithms are things. Objects. Structure. Homoousic or consubstantial to their input and output. Things taking things and making things outta the former. Including other algos of course! Stronger ones.

l. The Masa Effect [42]. The Master of SoftBank, seeing how machine productivity is on an imminent course to massively overscale the human client base, and his apparent transcaling solution: to upscale the client base with bots and chips - with the same thing which scales supply in such a too-much way. [43]

n. Limits of growth - present at any particular moment and in any finitary setting of rules [8], [9], but nonexistent in the infinity of rules upgradability. Like a cancer cell trapped in a cage of light [46] vs ... photosynthesis.

o. Ray Kurzweil - static vs exponential thinking [47].

p. Craig Venter's [48] Human Genome Project [49], which, when commenced in 1990, was ridiculed as something that would be unbearably expensive and would take centuries to finish - and it did: it cost a fortune unbearable for 1990, and it did take centuries of subjective time, as per the conditions of the initial projections - being completed in the year 2000.

q. Jeff Bezos's vision [50] of a Solar-System-wide Mankind:

''The solar system can easily support a trillion humans. And if we had a trillion humans, we would have a thousand Einsteins and a thousand Mozarts and unlimited, for all practical purposes, resources.''

r. The 'wastefulness' of data centers and crypto-mining colocation facilities [51] ... which is as funny as envying the brain for 'wasting' >25% of the body's energy. (Btw, the tech megatrend is exponentially and relentlessly towards the minimum calculation energy.)

s. The log-scale intuitive measure and the smooth straight-line visualization coming out of this quote, which I fished off the net a long time ago:

"The singularities are happening fairly regularly but at an increasing rate, every 500 to 1000 billion man-years (the total sum of the worldwide population over time). The baby boom of the 1950 is about 200 Billion man-years ago."

Oops! Go back to q. With a population of 1 trillion humans, the 'singularities' would occur once a year?!
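The 'once a year' remark follows from simple division (my own back-of-envelope check, taking the quote's upper figure of 1000 billion man-years per singularity):

```python
# Back-of-envelope check of the quote above (my own arithmetic):
# at ~1000 billion (1e12) man-years per 'singularity', a population
# of 1 trillion humans accumulates that many man-years in one year.
population = 10**12                  # 1 trillion humans
man_years_per_singularity = 10**12   # ~1000 billion man-years

years_between = man_years_per_singularity / population
print(years_between)  # → 1.0
```

With the quote's lower figure of 500 billion man-years, the interval halves to about six months.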

&

t. the Tau [52][53][54] !!

I can continue with these examples ... forever [wink] - excuse me if I've bored you - but I think at least that minimum needed to be shown, and it is enough to grok the big picture.

Scaling is the solution. It is a problem too. Its overcoming is what I dub 'Transcaling' for the purposes of this study.

Size matters. Scaling is the way. But the more general question is how a system handles change! This is fundamental enough to be at the very core of the definition of life and intelligence [55].

Tauchain is all about change handling!

Now, let's knit the 'blockchain' of all these example threads above into a knot, like the Norns do [56]:

We the humans (and soon the whole zoo of our technological imitations and reproductions and transcendences of ourselves [42]).

We as the-I [4] are strong thinkers and creators - immensely more road lies ahead than has been traveled, yes - but still we, as the-I, are the momentary apex in the Effectoring business [45] in the known universe ... AND simultaneously, we as the-We are mediocre to outright dumb.

We are very far from proper scaling together. The Ultimate resource is not coherent and is not ... collimated. Scattered dim lights, but not a powerful bright mind laser. Dispersed fissibles, but not a concentration of critical masses.

We as the-We - paradoxically - persistently find ways to transcale our destinies using the power of the-I, but the-We itself does not entertain scaling well at all [4].

The individual human mind is the unscaled transcaler.

Tau is the upscaler of that transcaler.

I'll introduce herewith another 'poetic' neologism, which occurred to me to depict the scaling properties of a system, after the Scrooge factor of ''Tauchain - Tutor ex Machina'' [57], and it is the:

Spawn [58] factor

- the capacity and ability of a system to grow through, despite, against, across, from and via the changes. Just as a cuboid [59] is about all rectangular things - squares, cubes, tesseracts ... regardless of their dimensionality - the Spawn Factor is to be a generalization of all orders of scaling. A zillion light years from rigor, of course, as I'm at least the same distance from my Leibnizization [60]. For a lawyer to become a mathematician is what it is for a caterpillar to become a butterfly. :) Transcaling.

Tau transcends the infinite regress of orders of: scaling of scaling of scaling ... by being self-referential. Or recursive. [54]

What is the Spawn factor of Tau?

If you let me, I'll illustrate this with a poetic periphrasis of the famous piece of Frank Herbert's [61]:

I will face my change. I will permit it to pass over me and through me. And when it has gone past I will turn the inner eye to see its path. Where the change has gone there will be nothing. Only I will remain.

tau-chain.info is not the official site of Tau-Chain. This website has NO affiliation or relationship with the IDNI company or organization, and its contents are neither approved nor supervised by it. The website is dedicated to collaborating with the community and to helping Tau and Agoras progress in a sustainable way.