The coroutine-based parser was really cool though. I had to write an async parser for a Redis save file once, and that was horrible. Having a stateful coroutine-based parser would have made it 1000x easier.
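For reference, the trick I wished I'd had, sketched with Python generators (a toy line parser, nothing to do with lit's actual code):

```python
def line_parser():
    """Stateful push-parser: feed it byte chunks, get back complete lines.

    The generator keeps its buffer alive across .send() calls, so a value
    split across chunk boundaries is handled for free -- no hand-rolled
    state machine, which is exactly what makes async parsing painful.
    """
    buf = b""
    out = []
    while True:
        chunk = yield out            # receive the next chunk, emit parsed lines
        out = []
        buf += chunk
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            out.append(line)

parser = line_parser()
next(parser)                         # prime the generator
print(parser.send(b"hello\nwor"))   # [b'hello']  (b'wor' stays buffered)
print(parser.send(b"ld\n"))         # [b'world']
```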

One is the selectively replicating database. It's very granular and content-addressable, so repeated files are only stored and synced once. (Updating to a newer version of a package you already have will only sync down the changed files.)
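The content addressing works like git's object store: a file's key is a hash of its bytes, so the same blob never gets stored twice. A rough Python sketch (git's real blob-hashing scheme, plus a toy in-memory store for illustration):

```python
import hashlib

def git_blob_hash(content: bytes) -> str:
    # git object ids are sha1 over "blob <size>\0" followed by the raw bytes
    return hashlib.sha1(b"blob %d\x00" % len(content) + content).hexdigest()

# toy content-addressed store: identical files collapse to a single entry
db = {}

def store(content: bytes) -> str:
    key = git_blob_hash(content)
    db.setdefault(key, content)       # no-op if the blob already exists
    return key

k1 = store(b"shared helper module")
k2 = store(b"shared helper module")   # same file shipped by another version
assert k1 == k2 and len(db) == 1      # stored (and synced) exactly once
```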

Publishing to the central database will require a one-time access token from the other service, to prevent people from using it as arbitrary storage and to provide some level of assurance about package contents.

Publishing a package will import the files into your local git db, create a signed tag using your private key, and send the tag to the high-level server to get the token. Once you sync up the files the server doesn't have yet, the package will become available in the general registry.
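The whole publish handshake, sketched in Python with made-up names (`ToyRegistry`, `request_token`, etc. are hypothetical stand-ins, not lit's real API):

```python
import hashlib

class ToyRegistry:
    """In-memory stand-in for the central server (purely illustrative)."""
    def __init__(self):
        self.blobs = {}
        self.tokens = set()
    def request_token(self, tag):
        # the real server would verify the tag signature against the
        # publisher's public key before issuing the one-time token
        token = hashlib.sha1(repr(tag).encode()).hexdigest()
        self.tokens.add(token)
        return token
    def missing(self, keys):
        # selective sync: tell the client which blobs we don't have yet
        return [k for k in keys if k not in self.blobs]
    def upload(self, token, key, data):
        assert token in self.tokens, "publishing requires a valid token"
        self.blobs[key] = data

def publish(files, sign_tag, server):
    """1. import files into the local content-addressed db
       2. sign the resulting tree hash (stand-in for a signed git tag)
       3. send the tag to the server, get a one-time token back
       4. sync up only the objects the server is missing"""
    local = {hashlib.sha1(data).hexdigest(): data for data in files.values()}
    tree = hashlib.sha1("".join(sorted(local)).encode()).hexdigest()
    tag = {"tree": tree, "sig": sign_tag(tree)}
    token = server.request_token(tag)
    for key in server.missing(local.keys()):
        server.upload(token, key, local[key])
    return tree

reg = ToyRegistry()
publish({"a.lua": b"A v1", "b.lua": b"B"}, lambda t: "sig:" + t, reg)
n = len(reg.blobs)                                       # 2 blobs stored
publish({"a.lua": b"A v2", "b.lua": b"B"}, lambda t: "sig:" + t, reg)
assert len(reg.blobs) == n + 1    # only the changed file was uploaded
```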

And when the user runs `lit update` or something, it will unlock the versions, grab the latest versions of the app deps recursively, and then re-lock. The programmer will re-test everything before committing the new locked versions.
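That unlock/resolve/re-lock pass could look something like this (helper names `latest` and `deps_of` are hypothetical, just to show the shape):

```python
def update(app_deps, latest, deps_of):
    """Sketch of an `update` pass: ignore the old lock, walk the app's
    deps recursively picking the newest version of each, and return a
    fresh lock for the programmer to re-test before committing."""
    lock = {}
    def visit(name):
        if name in lock:              # already resolved on this pass
            return
        version = latest(name)        # old lock is discarded entirely
        lock[name] = version
        for dep in deps_of(name, version):
            visit(dep)
    for name in app_deps:
        visit(name)
    return lock                       # the new lockfile contents

# toy registry: (name, version) -> that version's own deps
registry = {("app-dep", "2.0"): ["leaf"], ("leaf", "1.1"): []}
latest = lambda name: {"app-dep": "2.0", "leaf": "1.1"}[name]
deps_of = lambda name, ver: registry[(name, ver)]
print(update(["app-dep"], latest, deps_of))
# {'app-dep': '2.0', 'leaf': '1.1'}
```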

Yeah, you gotta be careful about that. I've had a few projects where I had to fork a dep just so that everything lined up correctly with the dep versions. Kind of annoying. But asking should be enough.

Well, what I mean is: if you connect to the backend db and ask for a, b, c, and a is on node1 while b and c are on node2, does the db I connect to proxy the requests to the storage nodes where the data is located, or does the client have to connect to node1, node2, nodeX, … ?