A lot of discussion about different models for implementing multi-tenancy efficiently in different applications. Sorry, no further details about this will be published, but it was a long and good discussion. It's so important to consider multiple aspects and get the level and implementation of multi-tenancy right so it won't bring secondary problems. In this case we're talking about an especially challenging, complex environment, not the daily CRUD stuff where multi-tenancy is quite simple to implement.
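For contrast, the "simple CRUD" end of the spectrum can be sketched as plain row-level tenancy, where every record carries a tenant id and every query is scoped by it. This is only an illustrative sketch; all the names and data here are made up, not from the actual discussion.

```python
# Hypothetical data set: each row carries a tenant_id column/key.
orders = [
    {"id": 1, "tenant_id": "acme", "total": 100},
    {"id": 2, "tenant_id": "globex", "total": 250},
    {"id": 3, "tenant_id": "acme", "total": 75},
]

def orders_for_tenant(rows, tenant_id):
    # Row-level multi-tenancy: the tenant filter must be applied on
    # every access path; forgetting it even once is a cross-tenant leak.
    return [r for r in rows if r["tenant_id"] == tenant_id]

acme_orders = orders_for_tenant(orders, "acme")
```

The harder models (schema-per-tenant, database-per-tenant, hybrid) trade this simplicity for isolation and scalability, which is exactly where the secondary problems start.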

Even more discussion about peer-to-peer / federated / distributed / decentralized networking (whatever you want to call it), databases, data expiration, integrity and distribution (routing, storage, caching), flow charts and processes. Also a lot of discussion about how a proxy, relay, friend or buddy node (call it whatever you like) should work. - Sorry, private conversations; this time I'm not even going to quote myself.

Checked RabbitMQ, ZeroMQ (ØMQ) and snakeMQ. Once again, I'm going to embrace the simplest solution which fulfills the known need, so I selected snakeMQ for my project. When running a cluster / multiprocessing, snakeMQ allows me to easily communicate between processes without really caring whether they run on the same host or on some of the other hosts. Of course I could have simply used Python's native BaseManager from multiprocessing. I've also written a little test application using BaseManager and a few clients.

Also made my first Python application which successfully calls and returns data from Microsoft Windows Dynamic-Link Library (DLL) files using ctypes. There were some minor snags and traps when doing this, but after I got it all straight, it's very easy.
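The basic ctypes pattern is the same everywhere; only the loader class differs. This is a generic sketch, not my actual application: the C runtime library stands in as a portable example, since on Windows you would load your DLL with ctypes.WinDLL (stdcall) or ctypes.CDLL (cdecl) instead.

```python
import ctypes
import ctypes.util

# On Windows: lib = ctypes.WinDLL("your.dll") for stdcall exports,
# or ctypes.CDLL for cdecl. Here libc stands in as a portable example.
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Declaring argtypes/restype explicitly avoids the classic snags:
# wrong integer widths, 64-bit pointers truncated to the default
# c_int return type, and bad argument conversions.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

length = libc.strlen(b"hello world")
```

Most of the "minor snags and traps" come down to skipping those argtypes/restype declarations and letting ctypes guess.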

Projects like Freenet were technically successful but still failed to reach any meaningful user base outside very limited circles.

I'm a tech guy, and I really acknowledge this risk. Do something to see if it technically works; when you've decided that yep, it works, it's done and left there.

It requires business-side and community traction to take it any further. It seems that most developers fail hard at that point. Hacker News and GitHub are full of such stories and projects.

Projects like 7-Zip (7zip) and GnuPG (GPG) are rare occurrences where a small, dedicated team or individual just keeps pushing the stuff for years, even if there's potentially no compensation for it at all. And it's technically very demanding, hard and complex, and certainly gives more than enough challenge, so it's not just playing in the park. I think I've gotta make a donation to both projects again; it's worth it! It would be nice to check what percentage of GitHub projects are still actively supported and maintained after 10 years. I'm personally still maintaining some business-critical software written 15+ years ago for a few customers.

Some thoughts about the Axis Mundi project - I think I missed quite a few things in the Wiki and documentation README, like temporary message storage when nodes aren't available and the most important and hard part: routing, if and when the number of users explodes. I've been thinking deeply and in technical terms about mesh networking and DHT stuff, which all basically comes down to this same issue: how to make things scalable, efficient and still responsive enough without requiring too much bandwidth, computational resources or storage.

My conclusion is that I've suggested a model where mesh would be used when it's viable or when there's no alternative, but the primary routing should go over the Internet and/or some main routing service. Without that kind of shortcut, transferring data via mesh would be very slow and would require so much power and so many routing resources that it would basically kill the network. The worst part of mesh networking with mobile devices is that the network routes are in more or less constant shift, which adds a really considerable amount of route-management work. Maybe at night the network is somewhat stable, but what happens when people head to work and offices? Either the network collapses and performs extremely badly for a while, or it requires a lot of resources to keep the network and routing updated, which costs energy, aka kills your battery.
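For the DHT side, the core routing idea can be sketched in a few lines. This is a minimal Kademlia-style XOR-metric illustration under my own assumptions, not anything from Axis Mundi itself; the node names and helper functions are made up.

```python
import hashlib

def node_id(name: str) -> int:
    # 160-bit ID derived from a name, as in the Kademlia design.
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

def next_hop(target_key: int, peers: list) -> str:
    # Forward toward the known peer whose ID is XOR-closest to the
    # target key; each hop roughly halves the remaining distance,
    # giving O(log n) lookup hops.
    return min(peers, key=lambda name: node_id(name) ^ target_key)

peers = ["alice", "bob", "carol", "dave"]
hop = next_hop(node_id("some-content-key"), peers)
```

The catch on mobile meshes is exactly the routing-table churn: the XOR math is cheap, but keeping the peer lists fresh while everyone's physical links keep shifting is what burns bandwidth and battery.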

A lot of thoughts for one project about how to coordinate distributed data updates where the data is being updated at high frequency. Distributed stuff is very good and easy for caching static things, but when data starts getting volatile, things get way more complex, as has been documented for the large distributed databases that Google and others are running.
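One standard tool for reasoning about that volatile case is the vector clock, which lets replicas tell ordered updates apart from genuinely concurrent (conflicting) ones. A minimal sketch, with made-up replica names:

```python
def merge(a: dict, b: dict) -> dict:
    # Element-wise max combines the causal history of two replicas
    # after they have exchanged updates.
    return {k: max(a.get(k, 0), b.get(k, 0)) for k in set(a) | set(b)}

def concurrent(a: dict, b: dict) -> bool:
    # Neither clock dominates the other: the updates happened
    # independently and need application-level conflict resolution.
    a_ahead = any(a.get(k, 0) > b.get(k, 0) for k in a)
    b_ahead = any(b.get(k, 0) > a.get(k, 0) for k in b)
    return a_ahead and b_ahead

vc1 = {"node-a": 2, "node-b": 1}
vc2 = {"node-a": 1, "node-b": 3}
```

Detecting the conflict is the easy part; deciding how to resolve it (last-write-wins, CRDT-style merge, or pushing it to the application) is where the real design work for fast-changing data sits.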