The TAO — Tritium Development Update #1

This blog begins a series of posts recording the development of the TAO. As many already know, the name stands for Tritium, Amine, and Obsidian, representing major versions 3.0, 4.0, and 5.0 respectively.

Some on-topic but slightly tangential reasoning behind the names of the TAO:

Tritium is the third isotope of hydrogen (hydrogen-3).

Amine is the base of all amino acids, and a molecule consisting of four parts. Catching my drift now?

Obsidian is a volcanic glass, used as a surgical instrument in prehistory, that can be made sharper than steel. It ranges in the 5s on the Mohs scale.

Correlations anyone?

Anyhow, my reason for these regular blogs is to give the community better information about what has been done since the last post. I'm going to set the standard here by dumping the output of a "git pull origin master", with the head commit set back exactly to the date of the last blog. This will, you know, let the code speak for itself.

Lots of file dumps; collectively it's an addition of roughly 39,000 lines of code. I'll explain why there's so much new code, and what's happening under the hood as we speak. Bear with me if you're not very technical, or skip to the end, since I'm going to get fairly detailed.

Let’s start with the Network Layer:

Network:

The network, as you all most likely know, is powered by the Lower Level Protocol. The Lower Level Protocol (LLP from here forward) is a polymorphic template protocol. Yes, that's a mouthful, but it's just another way for us coder people to say, "I build this template once, and I can use it for anything." This is good code design, since you end up with fewer endlessly repeating sequences of the same code with slight alterations. In this case, you drop in a new packet type, write a new ProcessPacket method, and boom, you have a new protocol.
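To make that concrete, here is a minimal sketch of what a polymorphic template protocol can look like. This is illustrative only: the class and type names (BaseConnection, PingPacket, and so on) are my own assumptions, not the actual LLP interface.

```cpp
#include <cstdint>
#include <vector>

// Sketch of a polymorphic template protocol: the server machinery is written
// once against a packet type, and each protocol supplies its own packet
// structure and ProcessPacket logic. Names here are hypothetical.
template<typename PacketType>
class BaseConnection
{
public:
    virtual ~BaseConnection() = default;

    // Each derived protocol overrides this to give meaning to raw packets.
    virtual bool ProcessPacket(const PacketType& packet) = 0;
};

// Dropping in a new protocol is just a new packet type plus ProcessPacket.
struct PingPacket
{
    std::vector<uint8_t> vData;
};

class PingConnection : public BaseConnection<PingPacket>
{
public:
    bool ProcessPacket(const PingPacket& packet) override
    {
        // Toy handling: a real protocol would parse the payload and respond.
        return !packet.vData.empty();
    }
};
```

The point is that the template layer never changes; only the packet type and the handler do.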

So what makes this special? Most servers on the internet have a very hard time scaling. They usually follow the blocking one-thread-per-connection model, use socket selects, or operate on a single-threaded asynchronous model like Node.js and Boost.Asio. This is important to understand, because asynchronous models have been touted as far more scalable. That's true, but it shouldn't be mistaken for the only way they can be implemented. When you hear people say "Node.js scalable elastic computing front end," they mean asynchronous sockets: Node.js uses Google's V8 JavaScript engine and allows sockets and JavaScript on the server side (JavaScript usually runs on the client side).

Anyhow, let's continue. The LLP is also a multi-threaded asynchronous socket packet handler… this means that anyone wanting to work with the LLP can create literally any protocol they want, and will always have a trusty, well-tested server back end. The LLP is now completely free of any Boost dependencies, part of the new coding standards in this project. The reason for removing Boost is that it is quite a heavy framework, developed so C++98 veterans could do nice simple things the STL couldn't. The C++11/14 standards pulled a lot of the Boost necessities into the STL, which means all the Boost code can be removed (phew).
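As an illustration of the kind of thing the C++11 STL now covers, here is a small multi-threaded worker pool built entirely on std::thread, std::mutex, and std::condition_variable, where C++98 code would have reached for the Boost equivalents. This is a generic sketch of the pattern, not the LLP's actual threading code.

```cpp
#include <atomic>
#include <condition_variable>
#include <cstddef>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// A simple thread pool using only the C++11 STL. Illustrative of the
// boost-to-STL migration described above, not the real LLP internals.
class ThreadPool
{
    std::vector<std::thread> vThreads;
    std::queue<std::function<void()>> qTasks;
    std::mutex MUTEX;
    std::condition_variable CONDITION;
    bool fStop = false;

public:
    explicit ThreadPool(std::size_t nThreads)
    {
        for(std::size_t i = 0; i < nThreads; ++i)
            vThreads.emplace_back([this]
            {
                while(true)
                {
                    std::function<void()> task;
                    {
                        std::unique_lock<std::mutex> lock(MUTEX);
                        CONDITION.wait(lock, [this]{ return fStop || !qTasks.empty(); });

                        // Drain remaining tasks before shutting down.
                        if(fStop && qTasks.empty())
                            return;

                        task = std::move(qTasks.front());
                        qTasks.pop();
                    }
                    task();
                }
            });
    }

    void Submit(std::function<void()> task)
    {
        {
            std::lock_guard<std::mutex> lock(MUTEX);
            qTasks.push(std::move(task));
        }
        CONDITION.notify_one();
    }

    ~ThreadPool()
    {
        {
            // Set the stop flag under the mutex to avoid a lost wakeup.
            std::lock_guard<std::mutex> lock(MUTEX);
            fStop = true;
        }
        CONDITION.notify_all();
        for(auto& t : vThreads)
            t.join();
    }
};
```

Every primitive used here was a Boost-only facility before C++11; now it ships with the standard library.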

So enough on that, let’s see some results from my latest stress test:

This is a single computer, 1,000 connections, and 144,485 requests per second. This performance is well within the requirements for VISA-level scalability (roughly 20k transactions per second at peak). It is very important that these base layers not become bottlenecks, so the LLP has received a lot of attention over the course of this week.

Ledger:

Now we start to get to some fun stuff: the Ledger! As you can see from the "git pull origin master" above, there are a lot of files. Each one, as you will notice, is organized very neatly by function and put in its proper namespace. The Ledger code consists of block validation, transaction validation, and basic lower-level ledger data scripts that handle register reads and writes. Most of the legacy code is implemented and operating in order to allow backwards compatibility with the legacy UTXO sets, while allowing a transition period into the more efficient Tritium transaction.

There are two block structures in the ledger layer: Legacy and Tritium. Legacy blocks push the heavy block data with transaction data coupled in, under a 2 MB limit every 50 seconds; the newer, lighter Tritium blocks retain only the transaction hash, which means each transaction's footprint in a block drops to 64 bytes. Since each transaction is broadcast over the LLP to the entire network, and the LLP operates efficiently, most transaction processing can be done while waiting for the block. Transactions locked into the memory pool can be cached, verified, and easily connected when the block arrives, requiring little processing power, and blocks can contain up to 32,768 transactions every 50 seconds. This amounts to a maximum rate of about 655 transactions per second, assuming a data rate of roughly 42 KB/s.
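The arithmetic behind those figures can be checked directly from the numbers quoted above (32,768 transactions per 50-second block, 64 bytes per transaction in-block):

```cpp
#include <cstdint>

// Figures quoted in the text above.
constexpr uint64_t nMaxTransactions = 32768; // per block
constexpr uint64_t nBlockTime       = 50;    // seconds
constexpr uint64_t nHashSize        = 64;    // bytes per transaction in-block

// Maximum sustained rate: 32768 / 50 = 655 transactions per second (floored).
constexpr uint64_t nMaxRate = nMaxTransactions / nBlockTime;

// Block data rate needed to sustain it: 655 * 64 = 41,920 bytes/s, ~42 KB/s.
constexpr uint64_t nDataRate = nMaxRate * nHashSize;
```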

This feature is a stepping stone towards a 3DC simulating a single L1 state channel.

Register:

Now we start to get above the ledger abstraction and into new territory. Registers sit on top of the ledger, with their identification tied to it. The ledger scripts are the interface that allows registers to be defined and permissions on each to be granted. The two types of registers that exist as of now are state registers and object registers.

A state register is the raw bytes of data that represent the state of whatever object one defines. An object register contains more type-safe data sets. What this means is that an Account will be an object register storing the state of your balance. It looks similar to this:

std::vector<uint8_t> vchIdentifier;

uint64_t nBalance;

The account's address is actually the register's address, so if a user commits a DEBIT op to that register address, the owner of the receiving address is required to issue the CREDIT for the corresponding balance. The vchIdentifier holds the unique type of the account, which in other words means the ability to create tokens that transfer with the same efficiency as NXS. This is just the beginning; I'll get more technical on the scope of these layers and how they weave together in the blogs following this one.
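A hypothetical sketch of the DEBIT/CREDIT flow on such an account register might look like the following. The field names follow the fragment above; the Account struct, the function signatures, and the validation rules are my own illustration, not the actual register code.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical account object register, following the fragment above:
// vchIdentifier names the token type, nBalance holds the account state.
struct Account
{
    std::vector<uint8_t> vchIdentifier;
    uint64_t nBalance = 0;
};

// A DEBIT removes balance from the sending register...
bool Debit(Account& account, uint64_t nAmount)
{
    if(account.nBalance < nAmount)
        return false; // insufficient balance

    account.nBalance -= nAmount;
    return true;
}

// ...and only the matching CREDIT, issued against the receiving register,
// applies the balance. Token types must agree for the credit to be valid.
bool Credit(Account& account, const Account& source, uint64_t nAmount)
{
    if(account.vchIdentifier != source.vchIdentifier)
        return false; // mismatched token type

    account.nBalance += nAmount;
    return true;
}
```

Note how the identifier check is what makes arbitrary tokens transfer with the same machinery as NXS: the same two operations work for any token type.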

Operation:

The operation layer gives context to the register layer. To the ledger, a transaction is just raw bytes, like a packet is to a network router; it isn't until you get higher up the stack that context is given to that data. This is one of the methods developed to scale efficiently, and as you can see, every layer needs to be well thought through, well interfaced, and connected together seamlessly. If the lower layers create bottlenecks, it affects all the layers above them.

These operations represent basic byte-code logic sequences that execute to change the states of object registers, verified by other nodes. This list will continue to grow; for now, the simpler the operations with the highest functionality, the more efficient and scalable the layer.
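A toy sketch of what such a byte-code sequence might look like follows. The opcode names and values are assumptions for illustration; the real Nexus opcode set may differ.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical operation codes: single bytes that give context to
// otherwise-raw transaction data at the operation layer.
enum : uint8_t
{
    OP_WRITE  = 0x01, // write a new state into a register
    OP_DEBIT  = 0x02, // remove balance from an account register
    OP_CREDIT = 0x03  // claim a debit into an account register
};

// A toy executor: inspects the leading opcode of a byte stream. Real
// execution would deserialize operands and verify register ownership.
bool Execute(const std::vector<uint8_t>& vchOperations)
{
    if(vchOperations.empty())
        return false;

    switch(vchOperations[0])
    {
        case OP_WRITE:
        case OP_DEBIT:
        case OP_CREDIT:
            return true;  // recognized operation

        default:
            return false; // unknown opcode: reject the sequence
    }
}
```

Because every node runs the same dispatch over the same bytes, all nodes agree on the resulting register states.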

I'll go into more detail in my next blog post on exactly how these weave together to make for very interesting use cases with Nexus Contracts.

API:

The API, ah yes, what many people have been waiting for. It operates on an HTTP-JSON server that allows anyone to create contracts without needing to learn all the lower-level code that drives the entire engine. The message format will be broken into many different industry-specific APIs, powered by an HTTP-LLP server.

As you can see from the LLP test results above, the API will be able to handle quite a lot of load even operating on a single node. And since the network is peer-to-peer, an API instance will be running on any node that decides to deploy it as a service.

I will go into more details on the specifications of the API and message format in the next blog post.

Until Next Time:

I hope this was an informative blog update, communicating the developments of the TAO code base in a more technical manner. I will continue to share benchmarks of tests simulating high-throughput transaction environments, and more load tests of each layer as they are woven together, to be ready for the public testnet deployment.

I look forward to the working groups this year, to complete our first standardization process and prepare for mainnet deployment.