Many modern filesystems are littered with the reeking remains of attempts at supporting metadata (for example, NTFS), most of which nobody cares about and which just add implementation complexity.

I read this… And went into a cold sweat.

Our Firefox extension is in pretty sorry shape. I figured I’d look up the source code for nsIWebBrowserPersist, which comes so close to what we want to do, but even that turned out to be worse than expected.

I realized that if we just stored plain WARC files in EarthFS, all of the dependencies would be in one file and we wouldn’t need dependency tracking at all. We could do that for everything; people already use archives for manga, so there’s no reason to convert them to URI lists or anything. And with our system of “invisible dependencies,” having content links is no use because those files aren’t visible.

In fact, I was thinking maybe we need a way to garbage collect invisible files that aren’t depended on by anything else. That alone goes to show we’re on the wrong path.

You could say breaking these files out into their own addresses gives the opportunity for de-duplication, but as we’ve learned over and over, de-duplication is a red herring.

And our archive extraction system was about to get even worse, because we couldn’t use simple URI lists. The metadata has to be part of the archive, not part of the individual targets, meaning it’d become some JSON monstrosity.

Having files scoped by purpose is absolutely critical. One of the criticisms of Evernote (?) is that when you start saving a lot of web pages with it, they get all mixed together with your notes. But the solution doesn’t involve hiding web pages or notes.

Likewise, if applications want to store non-user-data in EarthFS, the solution isn’t hidden files, because without meta-files you can’t add any metadata, and even the application can’t find them again.

So I’d like to thank angersock for this series of comments, even though an earlier one was a bit harshly worded.

If you want to pitch a more useful and more abstract version of what you’re describing (“how can we present searching and accessing a metadata forest backed by traditional hierarchical file stores”) then by all means I’ll be friendlier but right now you’re coming across as a crank ignorant of the history of the ideas you’re decrying.

Well, this one is pretty harsh too, but it’s also completely reasonable. Library Transfer Protocol guy sounds like a crank, and so do I. :(

https://github.com/priestc/LibraryDSS that’s an old repo from when I tried to write code for this about a year ago. Most likely if I ever get around to working on it again, I’m going to make a new repo for it.

Oh, too bad, it really is dead.

New plan:

Get rid of dependencies

Figure out our metadata system, paying due consideration to past failures and the fact that there’s probably a very good reason for them

Hopefully change as little as possible; we still wanna ship in two weeks (lol)

BTW after the video on Phil Fish[#] I’m really reconsidering publishing even a portion of my notes. And without them I know EarthFS is DOA. The hero you deserve.

I’ll have to think about it more to see if I can really understand it. The parts that I think I understand already are the concept of “being famous wrong” and the strange online divide between “famous” and obscure.

Also, I like the feedback loop between hating someone for being too famous and being famous for being talked about.

It must’ve been really weird for Notch to watch this video. Suddenly understanding “this is what you look like to normal people” when you thought you were a normal person.

I’m ready to say “nowhere” (or “somewhere besides programming languages”) because of finite depth and diminishing returns. C++ will gradually be cleaned up, C will gradually get slightly faster, and that’s about it.

Wait, what about Rust?

Here’s a hypothesis: people understandably want memory-safe languages because 90+% of their code isn’t performance sensitive. So we say, let’s start with a high level language and put in escape hatches for when you really need performance.

The problem is that even if it fits the most common case, it’s still an abstraction inversion. Once you give up that level of control, you can never get it back, even with the biggest escape hatch in the world.

This is why the long term solution is building up from the bottom, with libraries and language extensions on top of C. And Rust is still too big of a leap.

I was also thinking about how the limitations of SQL are technical in nature[#]. In theory even the best C compiler has trouble optimizing composition. The reason it “works” is that the whole thing is so low level that 1. the optimizations don’t actually matter, and 2. it’s too complicated for humans to do any better at it. A query optimizer can’t glue two statements together to save its life, but a small C program has thousands of lines and hundreds of functions.

It’s almost like someone read Paul Graham’s “blub” essay[#] and thought, “what would it mean to take seriously the idea that blub is the best language?”

Everyone knows that building abstractions has a cost - the cost of building the abstractions themselves, the cost of figuring out the particular abstractions employed in a given project, and the cost of comprehending a language flexible enough to support these abstractions. The hope is that the cost of abstraction is an investment: the time you put in will be rewarded in faster development when you put the abstractions to use. But at some point increased abstraction is going to give diminishing returns.

I think this is true too. “A language without abstraction” isn’t a slight against C.

C maxes[#] the optimizing compiler. C++ maxes the language (or the developer, depending on how you think about it). Maybe this is what abstraction actually looks like: finding new places to extract gains from. Haskell and Java try too, although they aren’t very successful in my book.

Golang maxes… conventions? That’s why the compiler errors if you have an import you don’t need, and (apparently) doesn’t encourage abstraction. Or you could just say it maxes language simplicity, which explains garbage collection too.

It doesn’t seem to be working. Maxing simplicity means that a lot of people won’t even be happy. It’s the whole “everyone needs a different 5% of the features in Microsoft Word.” And Word is probably more popular than Notepad even amongst laymen.

Not a bad analogy AFAICT.

And for all that simplicity you don’t actually get much to show for it. A fast compiler (although supposedly D’s is just as fast). Not sure what else.

Where are the next set of gains going to come from?

I’m ready to say “nowhere” (or “somewhere besides programming languages”) because of finite depth and diminishing returns. C++ will gradually be cleaned up, C will gradually get slightly faster, and that’s about it.

Basically, JITs briefly revived interest in programming language design (I’m aware Go is compiled, but it rode the wave), but in the end they didn’t pan out.

This article seems like Golang’s equivalent of the takedown of Node.js[#] from a while back. I knew enough about Node to recognize the truth in that article, and I know enough about async, fibers and C to recognize the truth in this one.

Go doesn’t let you build abstractions - it offers what it does, and if it’s not enough - tough luck.

This comment in particular made me smile[#]. Go actually seems worse than C because at least C lets you do crazy stuff with pointers and the preprocessor when you feel like it’s necessary, whereas Go comes more from the bondage and discipline school of design (from what I’ve heard).

Minecraft is very heavy on the CPU and would benefit from running on native code rather than a managed bytecode based virtual machine environment.

It isn’t heavy duty graphics, so you don’t need a super strong GPU (it’s still quite heavy), but Minecraft is very CPU and memory intensive. Graphics can be fast regardless of the programming language if you offload things to the GPU, but Minecraft requires a lot of CPU side work to prepare the voxel graphics (i.e. building GPU vertex buffers, etc. from the voxel octree in CPU/main memory).

There are huge potential performance benefits of rewriting it in native code. I don’t think that will happen, though.

Minecraft shocked everybody by becoming a popular browser game written in Java at a time when Java was already considered obsolete. It’s easy to dismiss it as a fluke or say that using C++ would’ve been better, but I think there’s more to programming than what gets proclaimed on internet forums, and I think Notch knew something the rest of us didn’t.

Likewise, I’ve learned a lot of stuff they don’t teach you on Hacker News by building EarthFS (first in JavaScript and then in C).

Incidentally the only reason I saw this comment was because the author’s name was in purple. I checked their profile recently because they had some other comment that seemed very good. So it’s a bit weird to see this one and realize that it probably isn’t true.

We only have a sample size of one, but the reasonable thing to conclude is that Java was perfect for Minecraft.

This also ties into a talk by Jonathan Blow that I listened to again the other day.

There’s an issue here that goes beyond being smart. There’s some kind of issue of operational wisdom. […]

Here’s some guys who made this really cool game that I… don’t know how to… make… And it’s made by a couple programmers… If I think something’s wrong with what they did, but what they did is kind of beyond my capability, maybe I just don’t understand why they did what they did. Why am I saying they suck when all concrete evidence in the universe says that they don’t suck?

This is a pattern I see… Again, talking about the internet, this happens on the internet all the time.

Not saying exDM69 couldn’t program a great Minecraft clone in C++… But there’s something different about programming the original Minecraft.

So in summary, I saw a channel implementation in C which didn’t seem very good but it got me thinking about whether channels would be useful… Now I won’t worry about it.

Node and Go are both failed experiments (which doesn’t make them bad). Now I’m just waiting for Rust to topple over, because I think there are already some signs that it will.

Will EarthFS start showing cracks eventually too? I don’t think so, because I’ve been taking every precaution to avoid fooling myself. That’s where literally 95% of the time has gone. For example, realizing (kind of at the last minute[#]) that we need to track dependencies.

While we can allocate large amounts of memory on 64-bit systems, this relies on overcommitting memory. Overcommit is when you allocate more memory than you physically have and rely on the operating system to make sure that physical memory is allocated when it is needed. However, enabling overcommit carries some risk. Since processes can allocate more memory than the machine has, the OS has to make up the difference somehow if processes start actually using more memory than is available. It can do this by putting sections of memory onto disk, but this adds unpredictable latency, and systems are often run with overcommit turned off for this reason.

I see. That sucks.

Since malloc already “can’t fail” on Linux, I think it would make sense to have overcommit turned on and swap turned off. If you touch a new page and run out of memory, the system kills something.

Stack copying is pretty much impossible in C, plus there’s the extra overhead of constantly checking whether you have enough stack space left.

If C had proper exceptions, you could catch the segfault and unwind, then start over on a bigger stack. Although that would have plenty of problems too.

8KB, which is what Go uses by default, happens to be smaller than what libco will let us create on some platforms.

They say “millions” of Goroutines are common, but that can’t be true, right? A million 8K allocations is going to be slow by itself, plus there’s the overhead of running all of them (switching in Golang can’t be faster than switching in libco, can it?).

Right now our stack size in EarthFS is 48KB (on 32-bit) and we’ve never segfaulted aside from some bugs. I’ve been thinking we could probably cut it down to 32K without issue.

We’re still limited in when we can use fibers, which sucks. It’d be so nice to have two or three fibers per connection sometimes, but it just isn’t worth it.

It seems like the next step should be compiling to state machines, somehow.

It’s not just the allocation—just the cost of switching between stack segments is really expensive. Function calls are 2 ns on most architectures; any overhead can easily make that 5x or 10x slower.

If this is the justification, I think it’s a bad one. In the unlikely event that segmented stacks work with libco, that might actually be worth trying too.

Another idea: if you grow to 3 segments, and later shrink down to 1, then you can replace segments 2 and 3 with a single larger segment, getting rid of one of the jumps. But you have to wait until you’re out of that code before you can combine them, at which point it may no longer matter.

There is nothing precluding a hacker from leveraging the economy to scale up his favourite hack (which I believe pg’s early essays were about) or, as Notch did, just to secure himself a peaceful and secure environment to continue his hacking without having to worry about things like food, health and shelter. But that is a different type of hacker than the ones that are attracted to the startup scene nowadays.

There are imperfections in our collective social consciousness, and it’s good that we can notice them. But it’s a crying shame that we aren’t smart enough to work them out alone either. Really a nail in the coffin for the individual, isn’t it?

Remember this?

No one is smart enough to have their random opinions stand up to the level of attention someone like Notch gets.

I watched the This is Phil Fish video on YouTube and started to realize I didn’t have the connection to my fans I thought I had. I’ve become a symbol. I don’t want to be a symbol, responsible for something huge that I don’t understand, that I don’t want to work on, that keeps coming back to me. I’m not an entrepreneur. I’m not a CEO. I’m a nerdy computer programmer who likes to have opinions on Twitter. […] Thank you for turning Minecraft into what it has become, but there are too many of you, and I can’t be responsible for something this big.

Seems like Notch could do with a degree of focus - if not through strong discipline from within, then from external influences.

I think the ability to work on what you feel like, and creative exploration, is fine, but when there’s a considerable gap since your last qualified success (or a lengthy period since you last actually completed something you were interested in), it should be a warning sign that you need to knuckle down a little.

You have the ability to improve our lives! You must continue to use it for our benefit!

Everything ends up controlled by shitty faceless corporations because we drive all of the good people away. It’s incredible.

Somehow it seems like, in the process of taking the “safe path” to their goals, people often scale down their goals as well. It makes sense if safety is your top priority.

The people who pursue something single-mindedly might attain it without realizing where they ended up. But on the other hand, being careful where you’re going (like I am) might mean you don’t attain anything.

But it’s not a new idea to say that perhaps success and disillusionment go hand in hand.

It’s scary but we need to start giving it some real testing, plus our pull system is going to need a lot of changes and we’re going to have to break backwards compatibility.

I spent maybe too much time polishing up the readme. But it’s good because I gave myself some hard-hitting questions in the FAQ, and came up with some pretty good answers (IMHO). I’ve already come up with another one that I’ll have to add too.

I think I’ve figured out how to handle pull dependencies. It just involves adding a second queue. We don’t actually care what order the dependencies are added in, as long as they are added before the things that depend on them. But I’m going to think about it some more because the whole thing is getting too complex.

As Coyle (again, from The Talent Code) explains about the famously talented and accomplished Brontë sisters in relation to their work as novelists:

“…the myth Barker upends most completely is the assertion that the Brontës were natural-born novelists. The first little books weren’t just amateurish — a given, since their authors were so young— they lacked any signs of incipient genius. Far from original creations, they were bald imitations of magazine articles and books of the day, in which the three sisters and their brother Branwell copied themes of exotic adventure and melodramatic romance, mimicking the voices of famous authors and cribbing characters wholesale.”

I hope everyone knows how stupid I started out.

I hope no one knows how stupid I still am.

Keywords: humor, learning

I suspect many people doubted Notch when he started work on Minecraft. Although by that time he had already been programming for 25 years. People were probably skeptical of the team that made Angry Birds. That may have just been extrapolating from the 51 games that Rovio made before that project became a new standard for mobile gaming. The success of Super Meat Boy was not guaranteed. However Tommy Refenes had been making games for 18 years before that, and Edmund McMillen, Tommy’s collaborator on the game, worked on 14 finished games before Super Meat Boy (including its free Flash precursor, Meat Boy).

BTW, this was humbling.

BTW BTW, the reason I’m reading an article like this now when I should know it already is because I want to see how well I really understand it and because I want to see how true it really is.

Personally I think “edge learning” is somewhat overrated. Where did I recently see “drill and kill”?

[The] criticism of practice (called ‘drill and kill,’ as if this phrase constituted empirical evaluation) is prominent in constructivist writings. Nothing flies more in the face of the last 20 years of research than the assertion that practice is bad. […]

I wish to hell we had decided on something smaller in scope to make instead. We did learn a lot from that big ~3 year project, in my case most of it technical aspects of engineering projects of that scale, but one thing we certainly did not learn was how to make that game any good.

I’ve been asked by aspiring ARG designers how big they should scope, much like this conversation, and I always advise them to be realistic. That is, they should be more realistic than we were. What I was struggling with in my post is that my advice is to do the opposite of what I did, when what I did was great for my career. I don’t recommend this route because I had to put in two years of unpaid work around the clock, sometimes forgetting to eat, to make it happen. I don’t recommend the lifestyle it requires to build something from nothing… and even with everything we poured into it, we had no guarantee anyone would play.

The internet unlikely anecdote generator rears its head again… But this is certainly an interesting comment.

Maybe it’s because the ARG market (I don’t even remember what the A stands for, alternate?) was/is less developed.

(there are some extraordinary exceptions, but for that matter there are extraordinary exceptions of people starting in all kinds of ways that don’t necessarily make those reliable points of entry for others)

I’m still unsure about something this basic.

Somehow it seems like, in the process of taking the “safe path” to their goals, people often scale down their goals as well. It makes sense if safety is your top priority.

I’ve always been able to go all in because I’ve never felt like I had anything to lose. And don’t insult my imagination by saying I never had anything at risk.

The sand in the depths of hell is a magical sand.

Somehow it seems like, in the process of taking the “safe path” to their goals, people often scale down their goals as well. It makes sense if safety is your top priority.