Bienvenidos! Welcome! A blog about Tau-Chain, TML and Agoras.

Tau-Chain.info is an unofficial blog that gathers information from various Internet sites about IDNI projects: Tau, Tau-Chain, Tau Meta Language (TML) and Agoras. This blog does not belong to, nor is it part of, the official Tau project. The purpose of the blog is to make the project known to people in an independent way.

Tau is a revolutionary blockchain platform designed to scale up social consensus and accelerate knowledge creation in a decentralized network. Agoras will be an advanced marketplace for knowledge and computational resources built on this framework.

What might a Tau Operating System via a Tau Social Dispersed Computer function like?

We know from tauchain.org that the first iteration of Tau is to be a discussion platform not too dissimilar from Facebook. Of course this would simply be the front end or the "face" of what could, behind the scenes, evolve toward a social dispersed computer complete with a dispersed operating system. The resources have to be managed, and a kernel could provide for this in a manner not dissimilar to what we see with EOS. The Agoras (AGRS) token specifically represents "resources", as it is the tokenization of resources for whichever applications Tauchain will run.

TML provides the basis from which to create the languages necessary to produce a dispersed operating system and computer. Zennet even has an algorithm, which Ohad himself worked on, for the purpose of calculating resource requirements. In theory, all minds will be able to contribute towards the computational resources of Tauchain.

Because of Zennet there may in fact be no limit to the amount of computational resources we could throw at the supercomputer. It will of course depend on resource management, which is where a kernel likely comes into play, because any smart apps built to run on Tau will have to ask for resources. Resource management is one of the core functions of a kernel and of an operating system, which is why I think it is likely that Tauchain will have one. I think the Ethereum route shows problems with scaling, as applications have to compete for resources in a way the network cannot self-manage. CryptoKitties, for example, can leave the whole Ethereum network lagging, and if this is a computer then a nonsense app could disrupt more critical apps.

A prime example of a potential smart app for Tauchain

An example (which may or may not be feasible) is a health and fitness app. The app in theory could allow any user to provide data such as genetic information, blood test results, exercise tracking, blood pressure, blood sugar and anything else. All of this could provide a feedback loop back to the patient on how to improve their health over time based on the knowledge of Tau. As technology gets better the users could add more devices to provide more data for a better feedback loop. As technology evolves, FPGAs could be added to meet the demand for calculations, and storage can be rented as well.

An operating system could give priority to this kind of app by load balancing the resources. How would it know to do this? Tau could learn the morals and legal ramifications, and a consensus could emerge that health-related apps deserve premium access to resources because they can save lives.
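To make that concrete, here is a hand-wavy toy sketch (my own illustration in Python, not anything from the Tau design or codebase) of what "priority by load balancing" could mean: apps ask for resources, and a scheduler splits the available capacity in proportion to consensus-assigned priority weights.

```python
# Toy model only: the app names, weights and allocation rule are illustrative assumptions.

def allocate(capacity: float, requests: dict[str, float], weights: dict[str, float]) -> dict[str, float]:
    """Give each app at most what it asked for, sharing capacity by priority weight."""
    total_weight = sum(weights[app] for app in requests)
    return {app: min(req, capacity * weights[app] / total_weight)
            for app, req in requests.items()}

requests = {"health_app": 40.0, "cryptokitties": 90.0}   # arbitrary resource units
weights  = {"health_app": 3.0,  "cryptokitties": 1.0}    # e.g. consensus says health apps matter more
print(allocate(capacity=100.0, requests=requests, weights=weights))
# -> the health app gets its full 40, while the nonsense app is capped at 25
```

In a real kernel the weights themselves would be the interesting part; in the Tau vision they would come out of the consensus process rather than being hard-coded.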

Just several hours ago, lead developer and founder of the Tauchain project Ohad Asor released his most significant code update yet. This blog post will discuss some of those updates and put them into context. In order to make sense of the current "Tauchain Codebase" I will also discuss a bit about the makeup of the code.

The significant breakthrough - Ohad implements the BDD

First, some might be wondering: what is a BDD? BDD stands for binary decision diagram, a data structure. This data structure, in my opinion, is as significant to Tauchain as the "blockchain" data structure was to Bitcoin. For those who do not have a computer science degree I will elaborate below on what exactly a data structure is before discussing what a BDD is and why it is so significant.

Brief discussion on what a data structure is

In programming, a data structure is a concept which represents a method of organizing data. For example, blockchain is all about how records are stored as blocks. There are other data structures which represent decentralized data management and storage, such as the distributed hash table.

Really good programmers choose the appropriate data structure to meet the requirements of the project. BDD was chosen specifically by Ohad because it provides efficiency boosts in a key area necessary for Tauchain to function as intended. Specifically, we know Tauchain requires partial fixed point (PFP) logic in order to have decidability in PSPACE. We also know Tauchain requires decentralization and efficiency. Efficiency can be understood better in terms of the trade-off between time and space. We do not have unlimited time or space, so we must sacrifice one in order to get more of the other.

Example:

Compression saves space but increases processing time (this is also related to encryption, which also costs processing time).

A lookup table saves time but costs space.

When we look at the codebase we know that Ohad can optimize the code either by sacrificing space, in which case the executable will be bigger but the code runs faster, or by sacrificing time, in which case the executable is smaller to save memory but might run slightly slower. This highlights the essential trade-off between time and space when optimizing code, but of course there is more to it, because algorithms within a codebase have to make similar trade-offs.
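Here is a tiny Python illustration of that trade-off (a generic example, not Tauchain code): a lookup table spends memory to avoid recomputing results, so the same function runs much faster at the cost of space.

```python
import time
from functools import lru_cache

def fib_slow(n):                      # recomputes subproblems: cheap on space, costly on time
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

@lru_cache(maxsize=None)              # lookup table (cache): costly on space, cheap on time
def fib_fast(n):
    return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)

for f in (fib_slow, fib_fast):
    start = time.perf_counter()
    f(30)
    print(f.__name__, f"{time.perf_counter() - start:.4f}s")
```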

Now what exactly is a BDD (binary decision diagram)?

Now that we understand the basics about efficiency and what a data structure is, we can make a bit more sense of what a BDD is. In order to understand why the BDD as a data structure is so important to Tauchain we have to remember that Tauchain is about logic. We can take the most basic example of Socrates:

A predicate takes an entity or entities in the domain of discourse as input while outputs are either True or False. Consider the two sentences "Socrates is a philosopher" and "Plato is a philosopher". In propositional logic, these sentences are viewed as being unrelated and might be denoted, for example, by variables such as p and q. The predicate "is a philosopher" occurs in both sentences, which have a common structure of "a is a philosopher". The variable a is instantiated as "Socrates" in the first sentence and is instantiated as "Plato" in the second sentence. While first-order logic allows for the use of predicates, such as "is a philosopher" in this example, propositional logic does not.[5]

Based on the rules of first-order logic we can have our inputs and receive our outputs. In the most basic example above we can see a bit about how logic works. To elaborate further:

Relationships between predicates can be stated using logical connectives. Consider, for example, the first-order formula "if a is a philosopher, then a is a scholar". This formula is a conditional statement with "a is a philosopher" as its hypothesis and "a is a scholar" as its conclusion. The truth of this formula depends on which object is denoted by a, and on the interpretations of the predicates "is a philosopher" and "is a scholar".

A truth table has one column for each input variable (for example, P and Q), and one final column showing all of the possible results of the logical operation that the table represents (for example, P XOR Q). Each row of the truth table contains one possible configuration of the input variables (for instance, P=true Q=false), and the result of the operation for those values. See the examples below for further clarification. Ludwig Wittgenstein is often credited with inventing the truth table in his Tractatus Logico-Philosophicus,[1] though it appeared at least a year earlier in a paper on propositional logic by Emil Leon Post.[2]

When we are dealing with logic we may find that a truth table helps with visualization.

Now with this knowledge we have the most basic Socrates example:

All men are mortal.

Socrates is a man.

Socrates must be mortal.

This can be represented via a truth table and is called a syllogism. To solve this we simply apply a kind of reasoning called deductive reasoning. This would indicate that if "all men are mortal" is true and if "Socrates is a man" is also true, then "Socrates is mortal" must be true. If we were to say all men are mortal but Socrates is immortal, then Socrates cannot be a man. So if Socrates is a man he must be mortal, or there is what we call a contradiction. Logic is all about avoiding these sorts of contradictions, and binary or boolean logic in particular is about reaching a conclusion which must always be one of two possible values.

If I ask you to play a game which we can guarantee will end with one of exactly two possible outcomes, then we have a good example of a boolean function: 1 or 0, true or false, on or off, a or b.
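To see this concretely, here is a toy Python truth table (my own illustration) for the deduction above: the boolean function (p and (p -> q)) -> q comes out true for every possible input, which is exactly what it means for the reasoning to be free of contradiction.

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    return (not a) or b

print("p     q     | (p and (p -> q)) -> q")
for p, q in product([True, False], repeat=2):
    result = implies(p and implies(p, q), q)
    print(f"{p!s:5} {q!s:5} | {result}")
# every row prints True, i.e. the deduction can never produce a contradiction
```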

Some of you may be familiar with the data structure we call a DAG (directed acyclic graph). For those of you who understand this concept, you can visualize a BDD as being very similar to a propositional DAG.

By David Eppstein [CC0], from Wikimedia Commons

We know from DAGs that there is a finite set of vertices, edges, etc. We may also be able to visualize topological ordering, and if you remember my post on transitive closure you might also remember the visuals on how that can work.
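As a reminder of how that works, here is a small Python sketch (a generic illustration, not TML) of transitive closure computed as a fixed point: keep applying the rule "if a reaches b and b reaches c, then a reaches c" until nothing new can be derived.

```python
def transitive_closure(edges: set[tuple[str, str]]) -> set[tuple[str, str]]:
    closure = set(edges)
    while True:
        new_edges = {(a, d) for (a, b) in closure for (c, d) in closure if b == c}
        if new_edges <= closure:      # fixed point reached: no new facts derivable
            return closure
        closure |= new_edges

print(transitive_closure({("a", "b"), ("b", "c"), ("c", "d")}))
# adds ("a", "c"), ("b", "d"), ("a", "d") -- the least fixed point of the step rule
```

Iterating rules until nothing changes is the same "fixed point" idea that PFP logic generalizes.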

This highlights the fact that a BDD can be used to create a SAT solver. For example:

A DPLL SAT solver employs a systematic backtracking search procedure to explore the (exponentially sized) space of variable assignments looking for satisfying assignments. The basic search procedure was proposed in two seminal papers in the early 1960s (see references below) and is now commonly referred to as the Davis–Putnam–Logemann–Loveland algorithm ("DPLL" or "DLL").[18][19] Theoretically, exponential lower bounds have been proved for the DPLL family of algorithms.

Without getting overwhelmed by technical details the key points are below:

BDD is a data structure of immense importance to the Tauchain project.

BDD enables Tauchain to "come alive" by allowing for even the basic truth table to be applied or the SAT solver to be implemented.

BDD + PFP is what we see in the Github code base. We see that Ohad has implemented BDD for PFP (binary decision diagram for partial fixed point logic).
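For readers who want a feel for the data structure itself, here is a minimal BDD sketch in Python (purely illustrative; it is not Ohad's implementation and is far simpler than what the Tauchain codebase does). Boolean functions are stored as a DAG of (variable, low, high) nodes with sharing, and operations are built by Shannon expansion.

```python
TRUE, FALSE = 1, 0                  # terminal nodes
nodes = {}                          # (var, low, high) -> node id (hash-consing = node sharing)
table = [None, None]                # node id -> (var, low, high); ids 0 and 1 are the terminals

def mk(var, low, high):
    """Return the node for 'if var then high else low', reusing existing nodes."""
    if low == high:                 # redundant test: both branches lead to the same place
        return low
    key = (var, low, high)
    if key not in nodes:
        nodes[key] = len(table)
        table.append(key)
    return nodes[key]

def bdd_and(u, v):
    """Conjunction of two BDDs by recursive Shannon expansion."""
    if u == FALSE or v == FALSE: return FALSE
    if u == TRUE: return v
    if v == TRUE: return u
    uvar, ulo, uhi = table[u]
    vvar, vlo, vhi = table[v]
    var = min(uvar, vvar)           # respect a fixed variable order
    ulo2, uhi2 = (ulo, uhi) if uvar == var else (u, u)
    vlo2, vhi2 = (vlo, vhi) if vvar == var else (v, v)
    return mk(var, bdd_and(ulo2, vlo2), bdd_and(uhi2, vhi2))

# x1 AND x2: variables are just integers fixing the ordering
x1 = mk(1, FALSE, TRUE)
x2 = mk(2, FALSE, TRUE)
result = bdd_and(x1, x2)
print(result, table[result])        # e.g. 4 (1, 0, 3): "if x1 then (node for x2) else FALSE"
```

The point of the sharing is that equal subfunctions are stored exactly once, which is where the efficiency discussed above comes from.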

To read the code for yourself and track the progress of Tauchain development, take a look at the project's Github.

As we can see from the current trend in crypto, there is now a move toward privacy. In my opinion most people underestimate the utility of these cryptographic advances. In this blog post I will highlight a particular advance enabled by these new cryptographic (and hardware) techniques, such as the trusted execution environment, which can be of massive benefit to long term believers in Tauchain.

The problem: anyone can copy the code Ohad writes if it's open source

So we have a problem with Tauchain where all of the code Ohad is writing with regard to TML is open source and on Github. This allows a competitor to simply steal his best ideas and, in a sense, rob the token holders who actually funded the development of the code. This happens very often: we see a new innovation in the crypto space and soon after we see a new ICO or a new group come out of nowhere acting as if they originated the technology. In some cases the new group may even be much more centralized, more secretive, and very well funded.

The solution: secret contracts (private source code and execution)

The trusted execution environment allows for the protection of intellectual property rights at the hardware level, while sMPC (secure multiparty computation) can achieve similar ends at the software level. The idea is that this provides a solution to idea theft, where a community can keep certain critical pieces of code, data, algorithms, or other unique features secret. This creates an entirely new way to monetize knowledge, code, and ideas, which Agoras will be uniquely positioned to leverage.

Guy Zyskind of the Enigma Project provides the definition for what secret contracts are and how they work. The Enigma Project deserves credit for introducing this technology and for identifying a major problem in the cryptospace. Traditionally, on Ethereum and all other current platforms, when you release a DApp your code has to be open source. It is not possible to create a closed or private source decentralized app. In addition, the app has to be executed in the open, so all data running through it is public.

Strategic implementation of private knowledge and source code can allow Tauchain to maintain a dominant position

In most cases the world benefits if knowledge is shared. In fact I'm in favor, most of the time, of sharing as much knowledge as is safe. The problem with algorithms, source code, and certain kinds of knowledge is that sharing them provides a competitive advantage to people who have more financial resources. These individuals can simply look at Github and copy. They can hire programmers to compete with Tauchain and Agoras developers, and as long as the code is open there will be no real reason to buy the Agoras token long term.

What if the Tauchain development team and Agoras developers decide to implement private knowledge bases? What if it becomes possible to run code in a trusted execution environment so that other developers around the world cannot see the code or the algorithms? This would allow Tauchain to build Agoras in such a way that no other project would be capable of duplicating it. This would lock the value backed by the community's brainpower into the Agoras token, making it a true knowledge token which cannot simply be copied with ease by another project.

In fact this is a strategy that developers making apps with Enigma's Secret Contracts are looking into as we speak. This competitive advantage of secrecy will change the landscape of the cryptospace. What does this enable for Agoras? Imagine an encrypted Github which developers can contribute to, but where only those developers can see the code. Imagine that after the code is written no one else can see it, if the code is set to run privately. This would allow developers to code in secret and have the code run on computers without anyone knowing what the code is.

This can open up security vulnerabilities, but Tauchain can defend against these. In particular, it matters what is private and what is public. Some critical aspects can be private while security-critical areas can always be kept public. There may even be ways to prove that the code doesn't behave in a certain way without actually sharing the code (using advanced cryptography). In fact my favored way of implementing this feature would be to time-lock the release of the source code by a number of months or years.

The idea isn't to keep things closed forever or secret forever. Privacy is about access control and about keeping things secret long enough to maintain a competitive advantage. A time delay to unlock the source code, for example, could work. It is even possible to allow the community to use puzzle based time-lock encryption, where they would have to mine to get the source code released early (if there is a serious need or threat). In this way all secret blocks of code could be unlockable, but not for free, and this would make it less likely that the community will seek to unlock it unless there is a genuine reason (beyond just stealing ideas).
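One known building block for this kind of delayed release is the Rivest-Shamir-Wagner time-lock puzzle, which forces a chosen number of sequential squarings before the secret can be recovered. The sketch below is a toy Python illustration of that idea under my own assumptions (tiny primes, a secret encoded as a number); it is not a proposal from the Tau team, and a real deployment would need large, carefully chosen parameters.

```python
import random

def make_puzzle(secret: int, t: int, p: int, q: int):
    """Lock `secret` so that ~t sequential squarings are needed to recover it."""
    n = p * q
    phi = (p - 1) * (q - 1)
    x = random.randrange(2, n)          # public base
    e = pow(2, t, phi)                  # shortcut only the puzzle maker can take
    key = pow(x, e, n)                  # equals x^(2^t) mod n, computed cheaply via phi
    return n, x, t, (secret + key) % n  # puzzle = (n, x, t, ciphertext)

def solve_puzzle(n: int, x: int, t: int, ciphertext: int) -> int:
    """Anyone can solve it, but only by t sequential (hard to parallelize) squarings."""
    y = x % n
    for _ in range(t):
        y = pow(y, 2, n)
    return (ciphertext - y) % n

# Tiny demo with toy primes (insecure, for illustration only)
n, x, t, c = make_puzzle(secret=42, t=1000, p=1000003, q=1000033)
assert solve_puzzle(n, x, t, c) == 42
```

Tuning t sets roughly how long the community would have to grind before the source unlocks early, which matches the "unlockable but not for free" idea above.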

What do you think about these ideas? If you agree or disagree, comment below. Strategic IP (intellectual property) is used by major corporations to give themselves a competitive advantage. The crypto community can do the same thing in ways legal mechanisms can't. In fact it can be done in a fairer and better way, because often the people or companies awarded IP rights aren't the actual inventors. A knowledge economy is fantastic, but if the knowledge is just harvested by big corporations monitoring a wide open network then it's going to be hard to bring value to a knowledge token.

UPDATE: Many people ask where to buy Agoras. The problem is it's not widely available on centralized exchanges. The only exchange I know that has it is Bitshares. So if anyone really wants to buy Agoras (AGRS), the token discussed in this post, feel free to buy it at: https://openledger.io/market/AGRS_BTC

42 million intermediate tokens total. The current price is 0.00010700 BTC, which is around 70 cents. This is the cheapest price I've seen it in a while, because for a long time it was in the $1.30-$1.50 range. This is a very speculative token at this time, so buy at your own risk, as I'm not providing any financial advice. I'm a holder of this token of course and have been for years.

This post's theme had been ripening in my head for a long time. Something like since 2014.

Recently I got some data to put together the stepping stones for turning my mere suspicion into more of a grounded conclusion.

The problem was that it was also growing in width and depth with time, so here is a momentary snapshot or sketch-map of it, which I intend to elaborate on further.

I'll start by shooting two slogan-missiles which constitute a super-compression of lotsa research and which will be revisited soon in separate series of articles.

Trust is Force

''you trust 'em only as much as you can make 'em to...''

&

Money is Mnemonics

Yes, precisely THIS is the core essence and function of ANY monetary system - even the primordial barter one, with its naturally emerging special tokens [1], [2] to mitigate its intrinsic exponential wall [3] of unscalability [4], [5] - to account for, or remember, human activity. That is, money is always work to prove work. Basically we need to remember due to the impossibility of simultaneity of transactions. Which I already went over [33]... and, I beg your pardon: three, not two slogans. The third one is:

Law is Between, Code is Within

I will explain later what I mean [6] and how it ties up with the former two. In a nutshell it is about enforceability as the essential characteristic of all law, and for now I will just hint that the reason why Force (coercion) is deemed to be fundamentally non-decentralizable is the Pauli exclusion principle [7], which is kinda a ''location conservation law'' [8].

You already know [9], [10] my taste for epistemological 'archaeology', which is why I think it is better to carry the story on in chronological order.

Back in 2014 I stumbled upon a series of extremely astute and deeply thought-out articles [11], [12], [13], [14], [15] on the cost of several well known monetary systems in comparison with Bitcoin, which had just grown enough to become visible to the unaided eye.

I remember I discovered these great articles by the obviously great Hass McCook in the wake of the MtGox [16], [17] boom and bust aftershock, when huge anxiety about the 'wastefulness' of Bitcoin mining was reigning over public sentiment. (It happens every time the price nears the production cost.)

My search, which hit those articles, was driven by the quite legitimate question of:

''If crypto is wasteful, then how much does traditional fiat cost us, god damn it?''

Well, the comparison turned out, as I suspected, not at all in favor of the quite recent demetalized, fractalized-centralized, double-entry book-keeping debt mnemonics of the banknote monetary system, nor in favor of the millennia old 'heavy metal' single-entry money where the physical possession of gold/silver denotes your purchasing power...

And it occurred to me that it was not at all just about the costs of mining, refining, casting, ink, printing presses, storage, accounting, counterfeiting countermeasures, ... but the bill to pay also includes all the social infrastructure and capital devoted to making the system work, and keeping it ticking ...

Essentially everything which is known as ... government. All its buildings, all its salaried humans, all their guns, pens, pensions, courts, judges and bailiffs ... everything.

All of that is needed in order for a common Ledger to be built, maintained, broadcast and kept. The difference between government and governance is obvious - the former is the means to an end, the latter is the end. The former is the machine, the latter is the function.

Here is the place to insert three other quick notions which are in the pipeline for revisiting and furnishing with separate articles:

Firstly, Mnemonics is subject to big evolutionary/developmental forces just like anything else in the combinatorial explosion which the universe, nature and society are ...

You noticed above the notion of money emergence kinda coinciding with writing? The Sumerian example.

Writing is a mnemonics amplifier [18]. Just like combustion engines are transportation boosters [19].

The better the memory and memory-sharing system we have at our disposal, the better money we have.

Money is technology [20].

Secondly, any book-keeping - regardless of whether we write by hand on cave walls or papyri, or by blade on a wooden stick, or by the most sophisticated laser-quantum methods on the most sophisticated multi-dimensional crystals [21] - is, yeah, a function of writing. We can go even further and state that illiterate verbal folklore - the only thing we had for millions of years - is a form of verbal writing onto each other's short-term/long-term memories, just as photography and sound recording are.

The important thing to note here, in light of that ''Money is Mnemonics'' spell of mine, is that accountancy systems possess a cardinality of entries [22], [23], [24].

And it seems that the mega-trend is:

''the more entries handled = the better our money is''

The fiat one - monetary and overall - is double-entry based and relies upon the import of trust; blockchain is triple-entry and trust is built in. Blockchain is not 'trustless' but is 'autotrophic' [25] with regard to trust.

The third notion turns us back on track with the main theme of this article. It is that of the mutual entropy [26].

The Ledger, no matter which tech it is built on, has as its purpose to define how individual people's activity has to be limited for the sake of collective cooperation and collaboration.

The Ledger - product of the particular kind of Mnemonics in play - literally SHAPES and MAKES the society.

As kinda Sorites [27] or Holon [28] or Mereonomic [29] ... generator.

NOW, which costs more? Which is the most wasteful of all the known Ledger or Mnemonic or Monetary systems?

Literally a couple of days ago I stumbled upon ''The $29 trillion cost of trust'' from 24 Jul 2018 by Sinclair Davidson, Mikayla Novak and Jason Potts [30], which made this long-in-the-making article finally come out.

Now I have finally put my eyes on some numbers to juggle with. $29 trillion! The ecumenical [31] or midgardic [32] GDP is evaluated at a roughly rounded-up ~$100t p.a. There is lots of well grounded criticism [33] of the ability of the present day fiat financial system to actually manage to encompass and measure it all - but let's take this conditional, good round figure for the global GDP. The total wealth is ~a quarter of a $Quadrillion (giving a total average depreciation / consumption rate of over a third per year).

GDP evaluates the dynamic part. The work.

Almost 1/3rd of all work is devoted to account for or to prove the work!
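Just to spell out the arithmetic behind that "almost 1/3" (my own back-of-the-envelope reading of the figures quoted above):

$$\frac{\$29\ \text{trillion (estimated cost of trust)}}{\$100\ \text{trillion (global GDP p.a.)}} \approx 0.29 \approx \tfrac{1}{3}$$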

Visualize the fiat system as a primitive, primordial, antediluvian or precursor form of PoW [34].

Funny enough, this ~1/3rd global proof-of-work or mnemonic or governance cost strangely coincides with the energy budget of the brain [35] as a fraction of the total energy a human body dissipates to live.

The last two pieces of research argumentation to close the topic are:

''Nothing is Cheaper than Proof of Work'' from 04 Aug 2015 by Paul Sztorc [36]

&

''Bitcoin: A $5.8 Million Valuation Crypto-Currency and A New Era of Human Cooperation'' from 25 Jul 2017 by Mr. Game & Watch [37]

I'm truly impressed by the depth of these two documents. Each sentence is backed by several book volumes of profound research.

Paul Sztorc convincingly demonstrates that PoW is the most efficient protocol for decentralization or 'trustlessness'. It appears that 'PoW is the cheapest' not only within the blockspace [38] but also the cheapest everywhere and everywhen.

Mr. Game and Watch evaluates that if, in the present day ~$100 trillion strong global economy, there were nothing but Bitcoin as a form of money, a single BTC would be worth millions of dollars.

''Banknote waste differs from other types of monetary waste in that it is much harder to perceive, by virtue of the complex nature of banknote creation. In contrast, Bitcoin mining directly consumes electricity, and gold mining obviously requires engineers, machinery, armed guards and so forth. At first glance, it seems incredible that impoverished hunter-gatherers would devote some of their precious time to the manufacture of silly beads and shells and other collectibles. And, it seems wasteful indeed, that we humans use our powerful brains primarily to obsess over what other people think of us. All of these activities are wasteful, in a narrow sense, but in a broader sense they maintain the infrastructure required to promote and sustain cooperation. These are social activities – we engage in them because we are not alone.''

Apparently a monetary system which requires humans in order to function is unscalable. In the pre-Tau era. It is far easier, and unlimited in capacity, to grow our electricity and machinery resources than to replicate humans. [46]

Intuitively, the lower the Cost of Trust, the stronger the society, the bigger and faster-accelerating the growth of the economy, and the higher the affluence and wealth. [18], [39], [40], [41], [19], [42].

If, hypothetically, the Cost of Trust were zero, would the value of the economy be infinite?

The endogenous automation of the production and distribution of trust which the blockchain enables means a many-orders-of-magnitude lowering of the cost of trust, compared with the present hand-driven system. (As an example, what Satoshi himself posited as 'payment channels' [43], the Lightning Network [44] and such, promise hundreds of thousands of times smaller transaction costs, all internal to the trustless environment of the blockchain, without relying upon human work to prove work ...)

In the end, what does Tauchain have in common with all that?

Well, lotsa things. I'm light years if not infinitely far from any generalization and systematization, but here is an improvised list ... of questions:

Tau is a generalization of all p2p network protocols. It is even a generalization of all Taus, by virtue of self-reference and self-definition.

Evolution is an algorithm. A very poor one. For now it is the only means we have to blindly search the phase space of all possible blockchain decentralization protocols [45]. Human ingenuity, amplified by Tau-enabled collective super-intelligence, increases the chances of discovering and harnessing something better than the evolutionary algos, so that even lower-cost trust-making can be evaluated and implemented in a more controlled manner.


What if the Cost of Trust is negative? Is this at all possible? So far accountancy, money-mnemonics, book-keeping, proving work, management and governance split the system into controller and controlled, and the former relies upon the resources of the latter. Recording and enforcing are forms of work, distinct from the 'actual' 'useful' work. Are proof-as-you-work protocols possible? Where the economy just reflects itself as it goes? Self-proving work? Real systemic self-control? [47]

In a recent article of mine [1] I hinted at my strong suspicion that scaling is itself scalable.

''Scaling is a problem. Scaling must be scalable, too. Metascale from here to Eternity.''

No matter what a terrific grower a system is - as per its own internal algorithmic growth-drive rules - it seems inevitable that its growth will take it into entropic mutualization [2] upon impact with a kind of ... downscaler.

Scaling is everything, yeah. But it is quite intuitive, and supported by too big a body of evidence to ignore, that, paradoxically, the faster a thing grows, the sooner its encounter with an external and bigger downscaling factor comes.

This realization, refracted through the prism of our 'reptilian brain' layer [3] and amplified to gargantuan proportions by our inherent social hierarchicity [4], is the source of the 'Malthusian [5] anxiety' which led to countless violent deaths over all of human history. Fear is anger [6], so the emotion that there is only so much to go around, and that the catastrophe of 'running out' of something is imminent, is the major source of what makes us bad to each other [7].

There is a plethora of examples of very well mathematically and scientifically grounded doomsayer scenarios, and we must admit that they are all correct as per their internal axiomatics [8], and simultaneously they are all totally wrong for missing out the obvious - the factors of externalities [9], the properties and opportunities of the medium which is consumed and/or created by this growth, and which transcend the axiomatics. For growth is always 'growth into'. The fact that doomsday scenarios are so compellingly consistent internally is what makes them such a strong and dangerous ideological weapon of mass destruction [10].

b. the grim visions of the whole of Mankind becoming telephone switchboard blue collar workers [13], [4], the number of which should have exceeded the total world population by now in order to achieve the same level of telephonization, or

c. the all-librarians world [14], where it would take more librarians than the whole of mankind to serve the social memory in the paper & printed ink storage facilities mode ...

d. the Club of Rome [15], as the noisiest modern bird of ill omen, with 'projections' based on the same blind extrapolations as the urban seas of shit or the 'proofs' of the impossibility to connect or educate or feed all - instigating the mass destruction fear that ''we run out of everything and will soon all die'' [16], used as justification for mass atrocities, VS Julian Simon's [17] ''Ultimate Resource'' (1981, 1996) [18]. Cf. my accelerando article [19] and see what precisely is the Factory for the succession of better and better Hanson drives for the last few millions of years - from the Blade and the Fire to the Tau - it is the same thing whose identification turned Julian Simon from a fanatical Malthusian [20] into a rationally convinced Cornucopian [21] ... the human mind.

e. the predator-prey model [22], whose brutal flaws I guess this pseudo-haiku [23] depicts best:

''hawk eat chick -> less chick, human eat chick -> more chick''

for missing out on positing, and failing to account for, the positive feedback loop [24] of predator-over-prey dynamics ...

f. The comment of Daryl Oster [25], founder of the other passion of mine - ET3 [26] - on the so-called 'saturation' of the scalables (exemplified in the field of transportation which, btw, being communication ... our social structures map onto the mobility systems we have at our disposal ...):

''... US transportation growth has focused on automobile/roads (and airline/airport) developments. (And this has been VERY good for the US economy.) The reason is that cars/jets offered far better MARKET VALUE than horse/buggy/train transport did 150 years ago. In the mid 1800s, trains displaced muscle power for travel between cities - because trains offered better market value than ox carts. Trains reached 'market saturation' about 1895 to 1905 (becoming 'unsustainable') - however 'market momentum' produced 20 years of 'overshoot'. Cars/jets were far more sustainable than passenger trains and muscle power, and started to displace trains (and finish off horses). By 1916 the US rail network peaked at 270,000 miles (today less than 130,000 miles is in use). Just like passenger trains hit market saturation, roads/airports are reaching economic limitations. The time is ripe for a market disruption, and all indicators (past and present) say it will NOT come from, or be supported by, government or academia -- but from private sector innovations that offer a 10x value improvement (like ET3), AND also offer incentives for most (not all) key industries to participate (like ET3). Automated cars, smart highways, and electronic ride sharing are industry responses that will contribute to overshoot of cars/roads for the next 5-10 years. The main problem I see with the education system is that academic research and publication on transportation is primarily funded by status quo industries like: railroads and rail equipment manufacturers, highway builders, automobile/truck manufacturers, engineering firms, etc. -- all of whom fund research centered on 'improving' the status quo. Virtually all universities (for the last 1k years+) are set up to drive incremental improvements that industry demands, and virtually all paradigm shifts are resisted until AFTER they occur and are first adopted by industry. Government is the same (for instance in 1905 passing laws to forbid cars that were disrupting horse traffic; or in 1933 passing laws to limit investment in innovation startups to the wealthy (those successful in the status quo)).''

g. Darwinian algo [27] sqrt(n) VS higher algos - like Metcalfe n^2 [28]. This is not precise, it is more metaphorical, meant to indicate the direction or scale of scaling rather than rigorous precision, but ... the former, figuratively speaking, takes 100 times more to put up 10 times more, and the latter takes 10 times more to return 100 times more...

h. Barter vs money. See [29], bottom of page 5 above the bottom-line notes, about the latter:

It is a demonstration of how one item out of a scaling barter system emerges as a specialized transactor and accelerator to transcale the barter economy. From within. Endogenously, as always. (Btw, an extremely strong document, where there are entire books read and internalized behind each tight and contentful sentence!)

i. The heat death of the universe [30] VS the realization that the 2nd law [31] - the conservation law for entropy/information - does not allow that [32]; the asymptoticity [33] of the fundamental limits of nature; the fact that max entropy grows faster than/from/due to the actual entropy growth [34]; that entropy is not disorder [35]; and that at the end of the day it is an unbounded, immortal universe [36] ... cause it's all a combinatorial explosion [37].

j. The Anthropic principle [38] and the realization that it is extremely hard if not impossible to posit a lifeless universe [39] ...

k. The Algoverse - my 'psychedelic' vision [27] of the asymptotic inexorable hierarchy of the Dirac sea [40] of lower algos which take everything for almost nothing - up towards giving almost everything for almost nothing - Bucky Fuller's runaway Ephemeralization [41]. Algorithms are things. Objects. Structure. Homoousic or consubstantial to their input and output. Things taking things and making things outta the former. Including other algos of course! Stronger ones.

l. The Masa Effect [42]. The Master of SoftBank, seeing how machine productivity is on an imminent course to massively overscale the human client base, and his apparent transcaling solution: to upscale the client base with bots and chips, with the same thing which scales supply in such a too-much way. [43]

n. Limits of growth - present in any particular moment and in any finitary setting of rules [8], [9] but nonexistent in the infinity of rules upgradability. Like a cancer cell trapped in a cage of light [46] vs ... photosynthesis.

o. Ray Kurzweil - static vs exponential thinking [47].

p. Craig Venter's [48] Human Genome project [49] which, when it commenced in 1990, was ridiculed as something that would be unbearably expensive and would take centuries to finish - and it did: it cost a fortune that was unbearable for 1990, and it did take centuries of subjective time as per the initial projection's conditions, being completed in the year 2000.

q. Jeff Bezos' vision [50] of a Solar-System-wide Mankind:

''The solar system can easily support a trillion humans. And if we had a trillion humans, we would have a thousand Einsteins and a thousand Mozarts and unlimited, for all practical purposes, resources.''

r. The 'wastefulness' of data centers and crypto mining colocation facilities [51] ... which is as funny as envying the brain for 'wasting' >25% of the body's energy. (Btw, the tech megatrend is exponentially and relentlessly towards the minimum calculation energy.)

s. The log-scale intuitive measure and smooth straight-line visualization coming out of this quote, which I fished off the net a long time ago:

"The singularities are happening fairly regularly but at an increasing rate, every 500 to 1000 billion man-years (the total sum of the worldwide population over time). The baby boom of the 1950 is about 200 Billion man-years ago."

Oops! Go back to q. With a population of 1 trillion humans, the 'singularities' would occur once a year?!

&

t. the Tau [52][53][54] !!

I can continue with these examples ... forever [wink] - excuse me if I've bored you - but I think at least that minimum needed to be shown, and it is enough to grok the big picture.

Scaling is the solution. It is a problem too. Its overcoming is what I dub 'Transcaling' for the purposes of this study.

Size matters. Scaling is the way. But more general still is how a system handles change! This is so fundamental as to be at the very core of the definition of life and intelligence [55].

Tauchain is all about change handling!

Now, let's knit the 'blockchain' of all these example threads above into a knot, like the Norns do [56]:

We the humans (and soon the whole zoo of our technological imitations and reproductions and transcendences of ourselves [42]).

We, as the-I [4], are strong thinkers and creators - immensely more road lies ahead than has been traveled, yes - but yet we, as the-I, are the momentary apex of the Effectoring business [45] in the Known universe ... AND simultaneously, we as the-We are mediocre to outright dumb.

We are very far from proper scaling together. The Ultimate resource is not coherent and is not ... collimated. Scattered dim lights, but not a powerful bright mind laser. Dispersed fissibles, but not a concentration of critical masses.

We as the-We - paradoxically - persistently find ways to transcale our destinies using the power of the-I, but the-We itself does not handle scaling well at all [4].

The individual human mind is the unscaled transcaler.

Tau is the upscaler of that transcaler.

I'll introduce herewith another 'poetic' neologism, which occurred to me as a way to depict the scaling properties of a system, after the Scrooge factor of ''Tauchain - Tutor ex Machina'' [57], and it is the:

Spawn [58] factor

- the capacity and ability of a system to grow through, despite, against, across, from and via changes. Just as a cuboid [59] is about all rectangular things - squares, cubes, tesseracts ... regardless of their dimensionality - the Spawn Factor is meant to be a generalization of all orders of scaling. A zillion light years from rigor, of course, as I'm at least the same distance from my Leibnizization [60]. For a lawyer to become a mathematician is what it is for a caterpillar to become a butterfly. :) Transcaling.

Tau transcends the infinite regress of orders of: scaling of scaling of scaling ... by being self-referential. Or recursive. [54]

What is the Spawn factor of Tau?

If you'll let me, I'll illustrate this by a poetic periphrasis of the famous piece of Frank Herbert's [61]:

I will face my change. I will permit it to pass over me and through me. And when it has gone past I will turn the inner eye to see its path. Where the change has gone there will be nothing. Only I will remain.

Let's build a universe [1], [2]. I realize this blog post is the most 'psychedelic' up to now and for a long time to come, but some 'poetry' never really hurts ...

We discussed already the worldmaker effectoring [3].

It is a quite ancient but also exponentially growing business ... in all possible forms of science [4], faith [5] and art [6]. This modeling [7] usually serves to play out what's possible and what's impossible. A Gedankenexperiment [8], yeah, but isn't all thought [9] merely algorithmic [10] and mere action [11]?

Usually the posited universes are made of variations and combinations of substance/matter, structure/form and action/process rules. Still, the algorithmic component is always the essential ingredient. Yes, the Laws of Physics are a full-fledged, literal algo [12], too. I have these conjectures that it is impossible to think out, make or discover (which is one and the same thing) a lifeless universe [13], and that substance-structure-action are inextricable, but these are separate topics for some other time [14].

Let's put together our toy-universe [15] out of nothing but pure algorithm. I've never seen such a construct, although the Orbis Tertius [14] is enormous and I bet this vision has occurred gazillions of times in zillions of minds.

It is like an ocean. The primary coin-toss algo which outputs 0s and 1s [16] makes the water. We don't know (yet) if there are even deeper and more fundamental numerical bases [17] for running algos. Most probably the answer is yes, by analogy with the Dirac Sea [18] - the deeps being made of simpler and weaker algos. The most elementary coin-toss thing makes up the ... probabilistics, perhaps the primordial form of logic. The laws of physics (and of machine learning [19] and of the Darwinian evo algo [20] ...) tell the rule-set how to stitch together lotsa coin-toss outputs. A hint of inspiration for that: David Deutsch's Constructor theories [21]. The laws of physics as an entropy [22] limitation on the allowed elementary algo cumulative output. For information being a verb, not a noun - isn't it? A very interesting philosophical perspective on the algorithm as randomness constrictor [23] arises...

So, if the Algoverse ocean water is made of elementary coin-toss molecules, being ''liquid'' is just another phase or aggregate state [24].

There is a deep duality [25] between probabilistics and logic. Just like the zoo of dualities discovered at an accelerating pace by mathematical physics in the last decades [26]. Probability/statistics we can now build from logic [27]; the reverse ... - well, nobody has cracked it yet. Not even Kolmogorov. But I bet we will. Most probably the breakthrough will be Ohad Asor name-labeled... To find the know-how to do it the other way around: do logic with probability/statistics. The statistical-algorithmic way - not the SAT [28], brute force, alchemist [29] way as with NN/ML [19], [30] and other known beasts. This will be nothing less than a full merger of maths/logic/philosophy/thought... and physics. Literally!

Excuse me for the haiku [31] simplification. It is deliberate, due to a realization of my grok constraints. :) Regard it as sharing a poetic impression.

Is there a deeper and weaker algo than the digital one - the radix-2, deterministic, unitary one? Intuition says ''yes, of course!'' Like these radix-1 Half-coins [32] of negative and other non-unitary probabilities ... which take two tosses to yield a bit... and there must be a transfinity [33] of lower ones, also transfinities of higher and sideways ones ... which is almost as counter-intuitive as Dirac's bottomless night of negative energy [18], but I bet also just as useful. (Let's not even touch numeral bases of Pi, i, e ... etc.), and let's stick to strictly binary 'water' for our oceanic toy-universe for the sake of sanity.

The next important notion of the Algoverse oceanic model is Algorithmic strength [34] - the weakest algo would be the one which takes an infinity of tosses to get a full bit. The strongest?

Algorithmic ephemeralization [35] - essentially to do more with less. Or faster - the Speed Prior [36] ... which is just another way to say 'more'.

Some algos are too strong - QM, M-Theory - they return way too many bits per 'toss'. Their VC dimension [37] converges to infinity. Exponential walls [38] in all directions. Not exactly what Freeman Dyson had in mind [39]... In our ''mockup'' they could be depicted as too hot, changing the phase of the elementary algo 'water'.

But because we are all for the peaceful use of algorithmic energy, we reject those up here, too - together with the non-unitary statistics down there.

Last piece of the picture - the Algoverse ocean is habitable and inhabited!

By higher algos as life-forms - stronger, but not so strong as to turn the 'water' into roaring steam or plasma.

Examples: calculi [40], geometries, algebras ... software [41]. The genetic inter-algo connection would be that calculus came from the heads of Leibniz and Newton and numerous unknown others, but it is the blind watchmaker [42] of evolution which put those heads together ... (I disagree with Dawkins only in that evolution and design are both algorithms - alternatives, but not in opposition.)

Thus entropically [43] and combinatorially [44] algos kinda-sorta come from one another - the stronger from the weaker.

The stronger ones are the life-forms living in that ocean. Cause randomness [45] permeates everything, doesn't it?

Not so far-fetched a metaphor, given the fact that any Effector-ing [3] has a totally algorithmic nature and essence. I wonder:

How much higher a 'life form' is Tauchain in the Algoverse ocean? Is it a mere life form or ... life, a new organizing principle to reform the whole system?

Masa. Masayoshi Son [1]. The master of SoftBank [2]. A Japanese national of Korean background [3] - a really great achievement in this context! The individual with, I suspect, the biggest buying power in all of human spacetime combined. In the world and in history.

Masa's business record is formidable. He's not just a serial and parallel multi-billionaire but a multi-billionaire-breeder [4] - for example he's THE Jack Ma backer, i.e. THE Alibaba maker. And many more ...

He's buying pieces of Google [5]! $32b cash for ARM [6], an undisclosed number of $b in cash for Boston Dynamics [7]. Et cetera. And Masa definitely knows what he's doing with these bits and pieces. What mosaic he's building with those chunks.

Masa has a vision. An yuuuge vision. Masa has a Vision Fund [8]. So, visions fully backed. Backing is what distinguishes a vision from a fantasy. The SoftBank Vision Fund's current minimum check size is $100m, by the organization's own rules.

With >$100b of shopping-spree cash in pocket (and we're talking cash, not lower liquidity assets) and an yuge vision, the already yuge Vision Fund is set to get even yuuuger. [9] Cause - you know - trillions are the new billions (and it is not 'just inflation' but absolute, sheer power - productivity beats inflation [10]).

On a pragmatic level it is as simple as it is ingenious [16] - machine productivity and production grow so immense that inevitably, and soon, their output/supply exceeds the cumulative human demand. The machines run out of market!

Solution? As obvious as Frederik Pohl's The Midas Plague (1954) [17] - machines doing business with machines [18] (from about minute 09:00 of the vid onwards). Many orders of magnitude more machine-machine collaboration than all the possible machine-human, human-machine or human-human ones. Trillions and trillions of transhuman chips and bots doing business with each other.

And Masa doesn't just advocate or evangelize this vision behind his Vision - he does it. Now.

In the narrow-minded aspect it is just a matter of (a little) time before Masa notices my precious Tau [19] and ET3 [20] (which, I told you, I see as 1, not 2 - explanations to be delivered in future posts).

From wide-minded perspective ... Well...

Do you see what I see?

Chatbots porting into Tau.

Masa's chips or bots are in a Moore's law [14] state of inevitability, i.e. doomed to cross the human-scale barrier and to rush even further ahead. To even crack the human natural language code barrier and do everything a human can do, and more. (On the human-machine-Tau-machine-human sandwiching architecture for direct use of the few-megayears-thin natural language wealth, and even the few-gigayears-deep non-verbal communication capital - some other time in some other posts.)

In my previous post [21] I explained my understanding of the ingenuity of Ohad's approach towards the Moravec-hardness problem of the human condition [22] - the realization that it is a waste and a side-track to follow dehumanizing pathways of creating biomimetic cybernetic homunculi to mitigate the limited organic human specifications; instead, we use those specifications - Tau is the way for the problem to become the solution. We utilitify all the processing and algorithmic capital accumulated over billennia into what we call human.

Is the Tau way into a divergence course with the Masa way? No! Absolutely not.

To make chips or bots of >x100 and >>x100 Einstein intellect is a huge collaborative effort. With machines alone it would take a few billion man-years to get there. Humans are needed - to serve as the fulcrum of the effort-amplifying lever [23].

Tau, with its human-machine-human network topology, makes collaboration - for the first time ever - really a P2P [24] thing, with a social diameter [25] of 1 or even <1 for each and every participant [26], no matter whether human or machine.

- Tau is Masa vision accelerator.

&

- Tau is the geodesic Agora [27] of all intellects imaginable, no matter 'natural' or 'artificial'.

NOTE: Ohad most probably will disagree with this vision of visions on visions of mine, but I dared to dare already anyways. Sorry, bro. It is, of course, not an official Tau Team position.

A foreword by Isaac Asimov (then only 36 years old)! A recommendation by the legendary mathematician and cyberneticist Norbert Wiener (then 62 years old)! ... A true jewel! The book is described as:

A review of "the last ten years' progress in the development of self-governing machines," describing "the principles that make the most complex automatic machines possible, as well as the fundamentals of their construction."

The nineteen fifties!! Midway between the first digital computer, made by my half-compatriot John Atanasoff [2], and the internet [3]. Almost a human generation span between the former, the book, and the latter event. An epoch so deep in the past that even television, air travel, rockets and nukes ... were young then.

The same Kondratieff [4] wave phase, btw, which hints towards the historical rhyming of socially important intellectual interests. (On how K-waves imprint on the humanity growth curve - in a series of other posts to come.)

I must admit here that I've never put my hands and eyes on this book. But it is stamped into my mind and memory by Stanislaw Lem [5] - one of the greatest philosophers of the XXth century, working under the disguise of a Sci-Fi writer for being caught on the wrong side of the Iron Curtain.

''Summa Technologiae'' (1964) [6] is a monumental work of Lem's, where most of the issues discussed sound more contemporary nowadays than they did more than half a century ago when it was written, and for many things we are still in its deep past ...

... Lem reports and discusses the following from the aforementioned Pierre de Latil book:

''As a starting point will serve a graphic chart classifying effectors, i.e., systems capable of acting, which Pierre de Latil included in his book Artificial Thinking [P. de Latil: Sztuczne mys´lenie. Warsaw 1958]. He distinguishes three main classes of effectors. To the first, the deterministic effectors, belong simple (like a hammer) and complex devices (adding machine, classical machines) as well as devices coupled to the environment (but without feedback) - e.g. automatic fire alarm. The second class, organized effectors, includes systems with feedback: machines with built-in determinism of action (automatic regulators, e.g., steam engine), machines with variable goals of action (externally conditioned, e.g., electronic brains) and self-programming machines (system capable of self-organization). To the latter group belong the animals and humans. One more degree of freedom can be found in systems which are capable, in order to achieve their goals, to change themselves (de Latil calls this the freedom of the "who", meaning that, while the organization and material of his body "is given" to man, systems of that higher type can - being restricted only with respect to the choice of the building material - radically reconstruct the organization of their own system: as an example may serve a living species during biological evolution). A hypothetical effector of an even higher degree also possesses the freedom of choice of the building material from which "it creates itself". De Latil suggests for such an effector with highest freedom - the mechanism of self-creation of cosmic matter according to Hoyle's theory. It is easy to see that a far less hypothetical and easily verifiable system of that kind is the technological evolution. It displays all the features of a system with feedback, programmed "from within", i.e., self-organizing, additionally equipped with freedom with respect to total self-reconstruction (like a living, evolving species) as well as with respect to the choice of the building material (since a technology has at its disposal everything the universe contains).

​I gave only a short summary of the classification of systems with increasing number of degrees of freedom of action as suggested by de Latil, removing from it some highly contestable details of the division. Before we go on to further considerations, it is probably not inappropriate to remark that the presented classification is not complete. One could imagine systems with yet another degree of freedom: for the choice of materials in the universe is necessarily limited by the "catalogue of goods" which are at disposal. However, a system is conceivable which, not satisfied with the range of available things, creates materials "beyond the catalogue", not yet existing in the universe. The theosophist might be inclined to take God for such a "self-organizing system with maximum freedom"; this hypothesis, however, is not indispensable to us, because we can assume, even on the basis of our modest contemporary knowledge, that the creation of "parts beyond the catalogue" (e.g., certain subatomic particles which "normally" do not occur in the universe) is possible. Why? Because the universe does not produce every possible material structure, and it is well known that it does not create, e.g. in stars, nor somewhere else, typewriters; nevertheless the "potential" for such devices is there - and the same, one can imagine, is true for phenomena concerning states of matter and energy in the supporting space-time which cannot be realized by the universe (at least not in the current phase of its existence).''

A longish quote, but every word in it is worth it. When I read this as a kid back in the 1980s ... there immediately came to my mind the next, seventh, logically higher effector class: the worldmaker!!

The degrees of freedom of all the previous six according to the classical taxonomy of de Latil are confined by the rule-set, the local laws of physics.

They are prisoners of a universe. Like birds incapable of reconfiguring their cage into a roomier and cozier one.

If we regard the laws of nature as code or algorithm, my 7th-level effector will be capable of drafting and implementing itself onto newer and stronger algorithmic foundations. (Note the seamlessness between computation and robotics in the Latil/Lem categorization construct - quite logical indeed, having in mind that software is a state of hardware, and that matter-form-action are inextricable from each other, but on this in a series of other times and posts ...). Without bond?

To zoom out is useful. It puts the event networks of our spacetime in perspective. Including what the great Jorge Luis Borges was calling the Orbis Tertius [1]:

''ORBIS TERTIUS. "Tertius" (Latin = third) is an allusion to: World 3: the world of the products of the human mind, defined by Karl Popper.''

Poetically stated, ''retrodiction studies'' [2], [3], [4] enables us to get a glimpse on the "clear, cold lines of eternity".

Back in the 20th century Prof. Robin Hanson put together this extremely insightful and strong document [5]:

Long-Term Growth As A Sequence of Exponential Modes,

First Version 9/11/98, Revised 5/03/00, Substantially renewed 12/00

Economy grows. [see: Footnote]. Unstoppable.

Hanson's unprecedented contribution was to provide us with systematic orientation tool on how and why economy grows.

It accelerates. See:

Mode        Doubling    Date Began    Doubles   Doubles   Transition
Grows       Time (DT)   To Dominate   of DT     of WP     CES Power
----------  ----------  ------------  --------  --------  ----------
Brain size  34M yrs     550M B.C.     ?         "16"      ?
Hunters     224K yrs    2000K B.C.    7.3       8.9       ?
Farmers     909 yrs     4856 B.C.     7.9       7.6       2.4
Industry    6.3 yrs     2020 A.D.     7.2       >9.2      0.094

The model identifies the past economy accelerators as:

- Neural networks, evolving toward a doubling of brain size roughly every 30 megayears (hinting that a human level of intelligence was an inevitability within +/-30 million years of the Now, by virtue of the good old 'coin-toss' Darwinian algorithm alone).

- Humans as the top-of-the-food-chain predator since around 2,000,000 B.C. (the human mastery of the Fire and the Blade perhaps to blame), compressing the doubling time by over two orders of magnitude, down to a quarter of a million years.

- Food production and ecosystem manipulation (or rather the collimation of farming, horse domestication and writing into accelerator components), leading to fewer than 40 human generations per economy doubling.

- Everything we know as division of labor, specialization, systematized Sci-Tech ... industry: the centralized ways of producing and controlling knowledge, leading to another hundreds-fold compression, down to a mere ~decade of economy doubling time.

My observation about networks in general is a rather obvious one when you think about it: our social structures map to our communication structures. As intuitive as it is to understand, this observation provides great insight into where the technology of computer assisted communication will take us in the years ahead.

Connectivity specs as indicator and driver.

Now, when we leave the past and use these models to gaze into the future, the really interesting stuff comes out.

Aside from explaining the overall trajectory of the economy detected by Brad DeLong in his equally monumental paper [6], the nucleus of meaning in Robin Hanson's paper is:

Typically, the economy is dominated by one particular mode of economic growth, which produces a constant growth rate. While there are often economic processes which grow exponentially at a rate much faster than that of the economy as a whole, such processes almost always slow down as they become limited by the size of the total economy. Very rarely, however, a faster process reforms the economy so fundamentally that overall economic growth rates accelerate to track this new process. The economy might then be thought of as composed of an old sector and a new sector, a new sector which continues to grow at its same speed even when it comes to dominate the economy.

Visualize a Petri dish, and the sugar in it, being expanded in size and quantity by the accelerating growth of the bacterial culture it hosts.

In the CES model (which this author prefers) if the next number of doubles of DT were the same as one of the last three DT doubles, the next doubling time would be ... 1.3, 2.1, or 2.3 weeks. This suggests a remarkably precise estimate of an amazingly fast growth rate. ... it seems hard to escape the conclusion that the world economy will likely see a very dramatic change within the next century, to a new economic growth mode with a doubling time perhaps as short as two weeks.
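To make that arithmetic concrete, here is a minimal sketch (mine, not from Hanson's paper): take the industry-mode doubling time of 6.3 years from the table above, compress it by the same number of "doubles of DT" as each of the last three transitions, and convert to weeks. The results land near the 1.3, 2.1 and 2.3 weeks quoted above; the small differences come from the rounding of the table entries.

```python
import math

# Illustrative back-of-the-envelope only; figures are the rounded table
# entries above, so results differ slightly from Hanson's own numbers.
WEEKS_PER_YEAR = 365.25 / 7          # ~52.18
industry_dt_years = 6.3              # doubling time of the current mode

# If the next transition compresses DT by the same factor 2**k as a
# past one, the next doubling time is simply DT / 2**k.
for doubles_of_dt in (7.3, 7.9, 7.2):
    next_dt_weeks = industry_dt_years / 2 ** doubles_of_dt * WEEKS_PER_YEAR
    print(f"{doubles_of_dt} doubles of DT -> next DT ~ {next_dt_weeks:.1f} weeks")

# The "Doubles of DT" column itself is just log2 of the ratio of
# consecutive doubling times:
dts = [("Brain size", 34e6), ("Hunters", 224e3), ("Farmers", 909), ("Industry", 6.3)]
for (prev, dt_prev), (cur, dt_cur) in zip(dts, dts[1:]):
    print(f"{prev} -> {cur}: {math.log2(dt_prev / dt_cur):.1f} doubles of DT")
```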

An economy accelerator avalanche is roaring down the slope of time towards us. A brand new Hanson Engine is about to leave the assembly line.

Tau, is that you?

FOOTNOTE: To wrap up the above statements in the flesh of the deep thesaurus of content on which they lie would conservatively consume hundreds of pages, even if only briefed. I promise to come back to these subtopic meaning expansions (referring back to here) with a series of posts in the months to come, tying them up with the notions of: the economy as a network, the network as a computer, what exactly it processes and outputs, the economy (like the universe or life) being an endogenously driven, positive-feedback, self-amplifying, non-equilibrium, entropic, combinatorially explosive system, wealth as economy-complexity growth in relation to GDP size and the intimate dollars-joules connection in energy intensity, the physical and economic limits of growth, self-reinforcing predator-prey models, knowledge as synonymous with skill, economic cycles upon the DeLong curve ... to name a few. Readers' questions and comments will of course help a lot with prioritizing the subtopics, and will boost understanding (including mine). Thank you in advance!

NOTE: I currently have the pleasure and honor to be part of the Tau Team, but this post contains ONLY my personal views.

What do I mean by the concept of a "liquid platform"? This is merely a re-articulation of the concept of self-amendment and self-definition; in other words, it is very much like an autopoietic design. Bruce Lee once said to "be like water", and the reason is that water can adapt to any environment it is placed in by taking the form of the container it is put into.

So by the liquid paradigm I mean that the core feature of true next-generation platform design is going to be maximum adaptability.

Feedback loops and the virtuous cycle

How can we have a platform which promotes continuous self-improvement? If you have a platform with no hard-coded "self", then even the design of the platform is under constant negotiation and creation. This is key because it means Tauchain will be able to adapt quicker than all other competing platforms: quicker than Tezos, because Tezos merely provides self-amendment but lacks the virtuous cycle, the meta-language, etc.

The Tau Meta Language allows for self definition at the level of languages. This means even the communication mechanism between humans and machines can be updated continuously. This continuous updating is the key design breakthrough of Tauchain because it means Tauchain will always be state of the art in any area. Think of a platform like Wikipedia where anyone can update any part of it in real time continuously so that every part of it is always the state of the art.

Starting at languages, a feedback loop can be created between humans and intelligent machines. Humans must make decisions on how to design Tau. These design decisions benefit from the virtuous cycle because the feedback loop between humans and machines allows the decision-making ability itself to be upgraded. This could even allow humans to transcend traditional human capabilities by relying on intelligent machines to assist in design: better assistance means better future designs, which means better decision making, which means better future designs, and so on. This is the "virtuous cycle", a feedback loop running from humans to machines to humans to machines and onward. The humans improve the quality of the machines by feeding in knowledge and new algorithms, just enough for the machines to become intelligent enough to help the humans help the machines even more efficiently in the next iteration of Tauchain, over and over again.

Humans and machines will seek more good and less bad for the formal specification of Tau itself. Good and bad designs will be defined collaboratively by the human participants by way of intelligent discussion. As discussion scales, bigger crowds mean more human minds involved, which means improved design, which leads eventually to a better and perhaps wiser Tau, which of course leads to wiser, even more intelligent discussions, which can lead to an improved formal specification and a better Tau. So that is a loop. It is also a loop between improving Tau, improving society, improving Tau, improving society.

Ohad Asor has kept his word and released the first iteration of TML for peer review on Github. This is exciting news and many people have been watching Tauchain for a very long time waiting for this. In this blog post I will try my best to analyze some portions of the code but be aware that this is a task which could take multiple blog posts and explanation from Ohad himself.

What is TML?

TML is the core component which will make Tauchain possible. TML is known as the Tau Meta Language. This technology is what will allow humans and machines to effectively communicate back and forth in an organized manner. Many people may be familiar with a compiler, but fewer are familiar with what is known as a compiler-compiler or parser generator. The approach taken by TML resembles that of a parser generator. To be specific, we have what is called partial evaluation.

This quote provides a brief description of what this means when applied to TML:

A particularly interesting example of the use of partial evaluation, first described in the 1970s by Yoshihiko Futamura,[1] is when prog is an interpreter for a programming language.

This is something Ohad will have to elaborate on, because the theory is complicated and hard to explain. My own understanding is that TML will have the unique properties of self-interpretation and self-definition. So the two concepts to study in order to analyze the source code thoroughly are partial evaluation (particularly as Futamura describes it) and self-interpretation.

My understanding of this code is that it allows you to do partial evaluation, which I will describe as a way of translating where the interpreter and the source code are both inputs. If my understanding is correct, then we can see in the code the structure for taking in both inputs. The structure uses p and q, which are familiar to me from the logic used for proofs, and we see the partial evaluation at "p.pe(q);". Partial evaluation, to my current understanding, means fixing some variables in the given code prior to execution, exploiting the fact that certain variables are not likely to change and can be fixed. Specifically, TML utilizes the concept of partial fixed-point evaluation, which in theory has been shown to allow for the self-definition property necessary to make Tauchain work as intended.
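To ground the idea, here is a toy sketch of partial evaluation in miniature (my own illustration, not TML's mechanism): a tiny interpreter takes a program and an input; fixing the program ahead of time yields a residual function of the remaining input only. A real partial evaluator would unfold the interpreter and emit specialized code rather than just closing over the static argument, and TML's partial fixed-point evaluation over logic programs is considerably more involved.

```python
# A toy illustration of partial evaluation (my own sketch; TML's actual
# mechanism, partial fixed-point evaluation over logic programs, is far
# more involved). The program below is the "static" input that gets
# fixed ahead of time; the argument x stays "dynamic".

def interpret(program, x):
    """A tiny interpreter: program is a list of (op, constant) pairs applied to x."""
    for op, k in program:
        if op == "add":
            x = x + k
        elif op == "mul":
            x = x * k
    return x

def specialize(program):
    """Partial evaluation in miniature: fix `program` now and return a
    residual function of the remaining dynamic input only."""
    def residual(x):
        return interpret(program, x)
    return residual

prog = [("mul", 3), ("add", 1)]   # the source code we fix before execution

f = specialize(prog)        # residual program: behaves like f(x) = 3*x + 1
print(f(10))                # 31
print(interpret(prog, 10))  # 31 -- same answer, computed the general way
```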

DLP stands for disjunctive logic program, which has interesting implications for knowledge representation. This post was made to introduce the concept of partial evaluation, which is critical for making sense of this codebase. There are other pieces of the codebase which I have not reviewed, dealing with the parser, and disjunctive logic programming is something I am not yet familiar with. This code however gives programmers and researchers a place to start for deeper understanding of, and contribution to, Tau.
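For readers equally new to the term, here is a small self-contained illustration (mine, not from the TML codebase) of what the "disjunctive" part buys: because a rule head may be a disjunction, the tiny program below has two minimal models rather than one, which is exactly the kind of "several alternative worlds" structure that makes DLP interesting for knowledge representation.

```python
# A minimal sketch (mine, not from the TML codebase) of disjunctive
# logic programming.
# Program: person.                    (a fact)
#          male | female :- person.   (a disjunctive rule)
from itertools import combinations

atoms = ["person", "male", "female"]

def satisfies(m):
    if "person" not in m:                          # the fact must hold
        return False
    if "person" in m and not ({"male", "female"} & m):
        return False                               # the disjunctive rule must hold
    return True

models = [frozenset(c)
          for r in range(len(atoms) + 1)
          for c in combinations(atoms, r)
          if satisfies(frozenset(c))]

# keep only minimal models: no other satisfying model is a proper subset
minimal = [m for m in models if not any(o < m for o in models)]
print([sorted(m) for m in minimal])   # [['male', 'person'], ['female', 'person']]
```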

It’s turning out to be another exciting and productive week! The Omni Foundation is privileged to provide a platform to so many innovative projects. Omni allows for the issuance of Smart Properties and assets on the Bitcoin blockchain, and we would like to share some of the leading projects using the platform and protocol.

Tau-chain is a fully decentralized P2P network that is a generalization of many centralized and decentralized P2P networks, including the Blockchain. Its interpretations, uses, and consequences go far beyond being a P2P network only, and include software development, legal, gaming, mathematics and sciences, logic, crypto-economies, social networks, rule-making, democracy and votes, software repositories (like a decentralized GitHub + Appstore/Google Play), decentralized storage, software approval and verification, even “doing your homework in History or Math” in some sense (a stronger sense than search engines), and many more aspects.

Tau Chain uses the language of Ontologies to unify the languages of:

- Knowledge
- Rules
- Logic
- Computer Programs
- Network Protocols

The Software Client

a) Stores the ontology of local rules

b) Determines the actions using a REASONER

Reasoners

a) Are tools that infer new rules or conclusions based on old ones

b) This is done intelligently, using pure logical reasoning, and proofs of the results are supplied (a toy sketch follows below)
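As a flavor of what a) and b) mean in practice, here is a toy forward-chaining reasoner (purely illustrative; the actual Tau client's reasoner, rule languages and proof format are not specified here). Facts are subject-predicate-object triples, one hard-coded rule makes subClassOf transitive, and each derived fact carries the two facts it was inferred from as a rudimentary proof.

```python
# A toy forward-chaining reasoner (purely illustrative; not the actual
# Tau client's reasoner). Facts are (subject, predicate, object) triples;
# one hard-coded rule makes subClassOf transitive; every derived fact
# keeps the pair of facts it was inferred from, as a rudimentary proof.

facts = {
    ("Dog", "subClassOf", "Mammal"),
    ("Mammal", "subClassOf", "Animal"),
}
proofs = {}

changed = True
while changed:                        # apply the rule until a fixed point
    changed = False
    for (a, p1, b) in list(facts):
        for (c, p2, d) in list(facts):
            if p1 == p2 == "subClassOf" and b == c:
                new = (a, "subClassOf", d)
                if new not in facts:
                    facts.add(new)
                    proofs[new] = ((a, p1, b), (c, p2, d))
                    changed = True

print(("Dog", "subClassOf", "Animal") in facts)  # True
print(proofs[("Dog", "subClassOf", "Animal")])   # the two supporting facts
```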

A TAU CHAIN NODE is therefore an intelligent agent able to communicate with other agents in the very same language it is written in, which is human-readable.

TAU CHAIN NODE can communicate with other languages, like HTTP, once implemented in RDF.


The rules of the network are determined by its users. Conversely, many independent “universes” can be created over τ-chain, which may or may not share τ’s timestamping. They can also reference each other, allowing code reuse: recall that rules are code, since Tau Chain has a unified language. Moreover, it provides the ability to implement decidable computer programs with RDF, namely DTLC languages rather than Turing-complete ones.

The implications

The arrow of time is brought into the network using the blockchain algorithm: roughly speaking, items go into a Merkle tree that is then signed by a miner.
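A bare-bones sketch of that "items into a Merkle tree" step (my own illustration; the real network's tree layout, hashing choices and signing scheme are not spelled out here): items are hashed, pairs of hashes are hashed together level by level until a single root remains, and a miner signing that one root time-stamps every item beneath it at once.

```python
# A bare-bones Merkle-root sketch (my own illustration; not the actual
# network's format). Leaves are item hashes; pairs of hashes are hashed
# together until one root remains, which a miner would then sign.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(items):
    level = [h(item) for item in items]
    while len(level) > 1:
        if len(level) % 2:             # duplicate the last hash if the level is odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

items = [b"rule-1", b"rule-2", b"some-signed-statement"]
print(merkle_root(items).hex())        # the root a miner would sign
```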

The network will also function as RDF-speaking distributed storage, namely a Kademlia DHT, letting hashes of items be time-stamped by a mechanism which is up to the rules of those languages, to be described later on.
