I'm more comfortable animating with code than with a tool like Flash or After Effects, so I generate animations myself with custom code.

I use the RenderTargetBitmap and GifBitmapEncoder classes (plus GIMP afterwards to fix the frame delays and repeat count, and cut down the file size a bit) to translate what's showing in a WPF app into a gif.
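For anyone curious, the capture loop looks roughly like this (a sketch, not the exact code; the element, frame count, and frame-stepping callback are placeholders, and GifBitmapEncoder writes no frame-delay or loop metadata, hence the GIMP pass):

    using System;
    using System.IO;
    using System.Windows;
    using System.Windows.Media;
    using System.Windows.Media.Imaging;

    static class GifCapture
    {
        // Renders 'element' once per frame and encodes the frames as a gif.
        // 'advanceFrame' is whatever steps the on-screen animation to frame i.
        public static void Save(FrameworkElement element, int frameCount,
                                Action<int> advanceFrame, string path)
        {
            var encoder = new GifBitmapEncoder();
            for (int i = 0; i < frameCount; i++)
            {
                advanceFrame(i);
                var bitmap = new RenderTargetBitmap(
                    (int)element.ActualWidth, (int)element.ActualHeight,
                    96, 96, PixelFormats.Pbgra32);
                bitmap.Render(element);
                encoder.Frames.Add(BitmapFrame.Create(bitmap));
            }
            using (var stream = new FileStream(path, FileMode.Create))
                encoder.Save(stream);
        }
    }

This has to run on the UI thread, and the frame delays and repeat count still have to be patched in afterwards, as noted.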

I'm more interested in how his two friends are going to build an MMO that lasts 15 years and has unprecedented AI with "modules of technical brilliance" programmed in their spare time.

If there's one thing you can be pretty sure about, it's that when people start out with claims as grand as theirs while currently having nothing more than a two-post blog and a logo, it ain't gonna happen.

To be fair, their blog post also sets off my, uh, "unnecessary flowery language detector" (aka "Peter Molyneux Detector"). They are aware of this. I was actually pre-warned that I'd dislike the post, since it's more of a vision than a technical spec.

You're getting downvoted for some reason, but you're basically right. If anything, you're underestimating the cost of developing an MMO. WoW cost around $63 million to produce. SWTOR may have cost as much as $200 million.

MMOs are the single most difficult type of game to produce. Which makes it all the more face-palm-worthy every time I see forum posts that say "I'm a beginning game developer and I'm making an MMO!" Sure, buddy. I'm not saying the OP's friends can't do it, since I don't know anything about their credentials or resources, but they've definitely got their work cut out for them.

Exactly. Sure, they might be able to make an "MMO", but as far as something that competes with really anything in the market, even the awful failures, there are specialties they can't come close to tapping with a rag tag crew.

It depends on a lot of factors, really. My current company started as a "rag tag crew"...of highly experienced, driven, and marketable game developers. Because of that, they were able to secure funding and grow a huge amount. So I won't rule out the possibility that the OP's friends will succeed, since I don't know them, but it's a huge undertaking that cannot be taken lightly. I wish them luck.

MMOs are in a different league from other games. Recent history is littered with stories of fantastically skilled and previously successful game developers, with highly experienced teams already in place, who came unstuck when they tried their hand at producing an MMO. Even household names like BioWare, with possibly the largest budget and dev team in gaming history, fell well short of their initial hopes and plans.

Nobody should start to undertake a large project. You start with a small trivial project, and you should never expect it to get large. If you do, you'll just overdesign and generally think it is more important than it likely is at that stage. Or worse, you might be scared away by the sheer size of the work you envision. So start small, and think about the details. Don't think about some big picture and fancy design. If it doesn't solve some fairly immediate need, it's almost certainly over-designed. And don't expect people to jump in and help you. That's not how these things work. You need to get something half-way useful first, and then others will say "hey, that almost works for me", and they'll get involved in the project.

Minecraft is a very successful closed-source game, too. "Almost works for me" doesn't mean "crashes after loading almost all textures"; I guess it should be interpreted as having the core fun part implemented and working (mining blocks and building stuff out of them, in Minecraft's case), after which Notch was able to sell a shit-ton of copies and finance working full-time on various extra content.

I'm sure it applies; the multi-million-dollar budgets afforded by copyright just muddy the waters. Hobbyist-driven games are different: they just have to reach some level of "fun" before the community starts to snowball.

Basically, it means that he wasn't a genius with a plan, he just got lucky. People who do what he did might get lucky, too, or they might not. People who do something different might get lucky, or might not.

But if you worship at the altar of selection bias, you can say "this guy did X and succeeded, I'll ignore the thousands of other people who did the same thing but failed to make the big time, and thus I can assume that everyone I've heard about who did X and succeeded is proof that X is why they succeeded."

Linus is a very very smart man. That doesn't mean he has found the sole valid route to success.

Actually, Freax started off as a terminal emulation program that made use of Andrew Tanenbaum's MINIX file system, sought POSIX compatibility for what little it did, and then in that same initial post cheekily said:

I'd like any feedback on things people like/dislike in minix, as my OS resembles it somewhat... P.S. Yes - it's free of any minix code, and it has a multi-threaded fs.

So why did this git post this in a MINIX forum? Unsurprisingly, Tanenbaum said "LINUX is obsolete":

MINIX is a microkernel-based system. The file system and memory management are separate processes, running outside the kernel. The I/O drivers are also separate processes. LINUX is a monolithic style system. This is a giant step back to the 1970's.

Here's Mr Torvalds' typically polite response:

Your job is being a professor and researcher: That's one hell of a good excuse for some of the brain damages of minix. I can only hope (and assume) that Amoeba doesn't suck like minix does.

At the time Torvalds worked at the University of Helsinki. This is no way to speak to a Professor.

. . . Time for some serious flamefesting!

I am staggered that someone with such poor interpersonal skills could delegate and organise a group of international master hackers without... well, hacking most of them off. How he got GNU to be referred to by the self-aggrandising title 'Linux' I will never comprehend. Don't expect me to take his advice.

First of all, Tanenbaum was the one who started the thread titled "LINUX is obsolete." Right after the section you have included from the first post, he writes:

That is like taking an existing, working C program and rewriting it in
BASIC. To me, writing a monolithic system in 1991 is a truly poor idea.

So I'd say Tanenbaum threw the first punch. Now, yes, you did quote accurately from Linus's post, but let's look at some other parts of Linus's reply:

True, linux is monolithic, and I agree that microkernels are nicer. With
a less argumentative subject, I'd probably have agreed with most of what
you said. From a theoretical (and aesthetical) standpoint linux looses.
If the GNU kernel had been ready last spring, I'd not have bothered to
even start my project: the fact is that it wasn't and still isn't. Linux
wins heavily on points of being available now.

...And, from Tanenbaum's followup:

I still maintain the point that designing a monolithic kernel in 1991 is
a fundamental error. Be thankful you are not my student. You would not
get a high grade for such a design :-)

Writing a new OS only for the
386 in 1991 gets you your second 'F' for this term. But if you do real well
on the final exam, you can still pass the course.

Sorry, but I'd say Linus was not being the cheeky one here. In fact, I'd say he was far more politic here than some of the raging flames he's delivered on linux-kernel. And here is Linus again, a few posts later in the same thread:

And reply I did, with complete abandon, and no thought for good taste
and netiquette. Apologies to ast, and thanks to John Nall for a friendy
"that's not how it's done"-letter. I over-reacted, and am now composing
a (much less acerbic) personal letter to ast. Hope nobody was turned
away from linux due to it being (a) possibly obsolete (I still think
that's not the case, although some of the criticisms are valid) and (b)
written by a hothead :-)

Linus "my first, and hopefully last flamefest" Torvalds

Tanenbaum's initial irritation can probably be ascribed to not welcoming someone infiltrating the newsgroup for his OS in order to announce a rival OS built on Tanenbaum's own file system. It is fair to describe Linux today as obsolete 1970s technology. There are older technologies that are still OK; UNIX is that old. Torvalds' response could have been far more diplomatic and pragmatic, admitting that he wanted to leverage the monolithic GNU. The Hurd has yet to manifest itself, so saying:

Linux wins heavily on points of being available now.

is about the only part of his response that isn't unjustified. Professors are cranky as they deal with upstart students who think they know better all the time, so we shouldn't be too surprised by his response to this git.

Robot A (RA) has his own clock CA and robot B (RB) has his own clock CB. The first task is to exchange information about internal clock synchronization between them. It is simple:

second 0 of RA in CA -> send message to RB with value CA0 (timestamp CA)

second 1 of RA in CA -> send message to RB with value CA1 (timestamp CA)

RB receives message from RA in time CB0 (timestamp CB) with value CA0

RB receives message from RA in time CB1 (timestamp CB) with value CA1

RB calculates (CA1-CA0)/(CB1-CB0), which gives a measurement of how time passes on the other side. The same could be done from CB->CA. Once you have that measurement, the rest is simple: with it you can send messages with a local timestamp that can be properly converted to time on the other side, and properly interpret received timestamps in the next round (timestamp of receipt, timestamp of send), which gives you the absolute latencies of both channels.
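A sketch of that rate step in code (the latency and clock parameters are made-up numbers, just to show that a constant latency and offset cancel out of the ratio):

    using System;

    // Made-up scenario: B's clock runs at a different rate and has some offset.
    double latencyAB = 2.0;    // one-way delay A -> B, constant
    double bRate = 1.5;        // B's clock ticks 1.5s for every true second
    double bOffset = 100.0;    // B's clock reading at true time 0

    // Treat A's clock as true time; A sends at CA0 and CA1, one second apart.
    double ca0 = 0.0, ca1 = 1.0;
    double cb0 = (ca0 + latencyAB) * bRate + bOffset;   // B's clock at first receipt
    double cb1 = (ca1 + latencyAB) * bRate + bOffset;   // B's clock at second receipt

    // The constant latency and the offset cancel; only the rate ratio remains.
    double rate = (ca1 - ca0) / (cb1 - cb0);
    Console.WriteLine($"measured rate = {rate}, actual CA/CB rate = {1.0 / bRate}");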

As you can see, it is simple.

cheers,

Rafal

This is exactly the sort of solution I was coming up with, then disproving, before I realized the problem was impossible. I can't keep track of four interacting variables without a diagram, so I graphed things out to show the proposed protocol gives the same result in an AB=BA=2s case as in an AB=3s, BA=1s case.

The diagram is on imgur. I scaled the computed values by a constant (half the round trip time) so it could be placed on the diagram, but that has no effect on the result.
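For those who'd rather read it than squint at the picture, here's the same check as a tiny sketch (the helper is made up; the numbers are chosen to match the two cases above):

    using System;

    // One round trip: A sends at a=0, B stamps the arrival, B replies at once.
    // 'skew' is B's clock minus A's clock; returns everything either side can see.
    (double sendA, double recvB, double replyA) Observe(double ab, double ba, double skew)
    {
        double sendA = 0.0;                 // A's clock when the message leaves
        double recvB = sendA + ab + skew;   // B's clock when it arrives
        double replyA = sendA + ab + ba;    // A's clock when the reply gets back
        return (sendA, recvB, replyA);
    }

    Console.WriteLine(Observe(2, 2, 0));    // AB=BA=2s, no skew      -> (0, 2, 4)
    Console.WriteLine(Observe(3, 1, -1));   // AB=3s, BA=1s, B behind -> (0, 2, 4)
    // Identical observations, so nothing computed from them can tell the cases apart.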


I wish it were some kind of scriptable graphics package, a bit like dot/graphviz but more dedicated to this kind of timeline graphic (or maybe a set of scripts for the said dot/graphviz package)...

I knew I had heard your name before, and from a Google search I found out why: you were the creator of a WC3 bot in VB.net. This also explains the Warcraft 3 netcode reference in your article.
I was actually working on my own bot until I found out yours existed, and it was already so advanced it made mine pointless. Still, a very educational experience.

Isn't it a valid assumption that clock skew is relatively stable over short periods of time? Otherwise, a clock would not be very useful in the first place... With that assumption, measuring the latencies is fairly trivial, at least as a point measure (it needs only two round trips, which again supports the assumption that clock skew won't change significantly over that time...).

You're right, something is weird with Safari here. The durations of the animations roughly match 50 ms multiplied by the number of frames, but towards the end of each sequence it's skipping some frames and CPU usage goes up. Safari also crashes when I start the profiler.

Chrome and Firefox seem to behave properly. However, it appears that the CPU usage can still get high here depending on what's in the viewport. I could probably get close to 100% with these too, if I had a large enough screen resolution to display the entire page.

You're assuming that two "robots" are sending messages to each other as fast as they can.

Here's the solution to your problem of finding the latency between the two:

Robot A sends a timestamp to Robot B (00:00)

Robot B tells Robot A it saw that timestamp at a particular time (00:05) and asks Robot A to send another timestamp when it thinks Robot B's time is 00:15.

Robot A waits until 00:10 and then fires its message to Robot B.

Robot B reads the message and determines immediately whether there is a 1, 2, or 3 second delay by comparing the timestamp versus its internal clock.

Robot B then implicitly knows (via the test experiment, would need to do similar handshaking over the second line in the real world) the delay of both lines and sends that information to Robot A.

Either Robot B or Robot A, now knowing the latency of the to/from channels, can assume control of "real" time and synchronize the clocks (ordering the other Robot to change its time to now()+path delay)

In order to convince me, you'll have to draw out what happens in the case where AB=BA=2s with no clock skew AND the case where AB=3s, BA=1s, and B's clock is one second behind A's. (I've already drawn it out for several other people, to show how their idea doesn't work.)

That made the graph much harder to understand, though. Also, the animation is just too fast to really comprehend what was going on. It would have been much better if you put up the interesting frames and let the users switch between them manually to compare at their own pace.

Personally I thought it was genius. It gives the brain something to chew on. Like watching the gears move in a watch. If we could learn to visualize more kinds of math problems when learning math, I'm sure we'd be engaging more of our brains and learn more. Combining those with the analogies or stories or puzzles, as you did, is great. Good work.

The graph shows A sending a message at t_a=0 (containing 'I think it is t=0'). B receives that message at t_b=2 and replies ('Well I think it is t=2'). A receives the reply at t_a=4.

Since all of the send/receive times do not change as the clock skew is changed (the left/right movement of B), A can't possibly be computing the clock skew from the invariant values t_a=0, t_b=2 and t_a2=4.

Ah, I sent t_b instead of t_b - t_a. Luckily, A can compute t_b - t_a once the value of t_b arrives.

It's unfortunate that you have a hard time understanding the diagrams. They're what really made the "unsolvability" 'click' for me. I realized I could pretend the messages were elastics connecting the two time lines and thus drag the skew all over the place without changing their send/arrive times.

There's nothing here that's hugely interesting, but yes, it's at least accurate. It's impossible to measure the one-way latency between two nodes on a network unless both have a reference to a trusted third party (preferably one not on the internet, for example GPS). Only round-trip latency is measurable or perceptible. Usually internet latency is fairly symmetrical (meaning you can assume that the lag one way is pretty close to half of the round trip), but it's not hard to find places where that doesn't apply; a busy ADSL link, for instance.
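That symmetric-path assumption is exactly what the standard NTP-style estimate bakes in. A sketch with made-up timestamps:

    using System;

    // Four timestamps from one request/reply exchange (made-up values):
    double t1 = 10.000;   // client clock when the request is sent
    double t2 = 10.180;   // server clock when the request arrives
    double t3 = 10.181;   // server clock when the reply is sent
    double t4 = 10.061;   // client clock when the reply arrives

    double roundTrip = (t4 - t1) - (t3 - t2);        // time actually spent on the wire
    double offset = ((t2 - t1) + (t3 - t4)) / 2.0;   // server minus client, IF paths are symmetric
    double oneWay = roundTrip / 2.0;                 // the symmetric-path guess

    Console.WriteLine($"round trip {roundTrip:F3}s, assumed one-way {oneWay:F3}s, offset {offset:F3}s");
    // On an asymmetric link (busy ADSL, satellite), the one-way guess and the
    // offset estimate are both off by half of the asymmetry.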

I believe consumer satellite uplinks (at least those from about five years ago) create terribly asymmetric latency. Using the satellite uplink, latency was on the order of a second. With a dial-up uplink, latency was negligible again, but with the bandwidth provided by the satellite downlink.

This only works in theory (i.e. not) or over relatively high-latency links. Even on an "only Gb" LAN, the residual skew between NTP-synced machines is way higher than the latencies you'd be trying to measure. My understanding is that the only practical way to do this is PTP, preferably with high-precision timestamps in the NIC.

On a small (one switch), lightly-loaded LAN, it's possible to keep two systems within around 100μs of each other 95% of the time using NTP. PTP is pretty awesome though, especially the prospect of switchgear that can automatically account for switching latency and jitter.

The first robot sends two packets one second apart.
The second robot measures the time between the first and second packet arriving.
The second robot sends two packets one second apart including the measured latency on the 1st link.

I'm still having trouble following. This seems like it would be trivial if you have synchronized clocks. If it's provably impossible to determine this, it seems it must also be impossible to synchronize the clocks?

I think you're on the right track, but you need to send the message back around. Then A can compare the delta in time between the sendings and B's perceived delta between the arrivals. The latencies drop out, and A knows the clock skew. Once he knows the clock skew, he can correct the arrival times to his time system and measure the latency.

This seems like it should be possible to me. Assuming the latencies are constant, you could pass around a packet that each party time stamps. Later, they do it again. Initiator looks at the difference in time between sending the two packets and compares it to the receiver's perceived difference between their arrivals. That gives him the clock skew. Once he has that, the rest is easy.

Did you actually draw out what will happen? In the case where AB=BA=2s with no clock skew AND the case where AB=3s, BA=1s, and B's clock is one second behind A's clock? Put in any known rate of clock drift you want, the cases are indistinguishable.

I think your mistake is assuming a "missing millisecond of clock drift" can come from an extra one second of latency in one direction but not from an extra clock skew of one second.
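Drawn out in code rather than a diagram, here's a sketch of the two-timestamped-packets idea from the comments above, run in both of those cases:

    using System;

    // A sends two stamped packets; B stamps the arrivals and reports them back.
    // 'skew' is B's clock minus A's clock; no drift in either case.
    void Run(double ab, double ba, double skew)
    {
        double send0 = 0.0, send1 = 10.0;    // A's send timestamps
        double arr0 = send0 + ab + skew;     // B's clock at the first arrival
        double arr1 = send1 + ab + skew;     // B's clock at the second arrival
        double back = send1 + ab + ba;       // A's clock when B's report arrives

        Console.WriteLine($"sends ({send0}, {send1}), reported arrivals ({arr0}, {arr1}), report back at {back}");
    }

    Run(2, 2, 0);     // AB=BA=2s, no skew
    Run(3, 1, -1);    // AB=3s, BA=1s, B's clock one second behind
    // Both lines print the same numbers, so A ends up with the same deltas,
    // the same "skew", and the same latency estimate in both cases.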

Alright, I came up with a solution that I think works before reading about how it's provably impossible, and I cannot see how it fails. Can anyone see the flaw in this protocol?

First, I can narrow the offset between the two clocks to one of 3 values. To do this, A sends a message to B timestamped with the send time. B assumes a 1 second delay and sets his clock (BC) to be synchronous with A's (AC) under that assumption.

Now, because the actual delay from A->B could be 1, 2, or 3 seconds, BC is now equal to AC + 0, 1, or 2.

This is likely where my solution diverges from Strilanc's attempts because once the possible delays between A and B are narrowed, it becomes a game of elimination.

B sends a message back to A timestamped according to his newly updated clock. A checks the apparent delay by AC. If the apparent delay is 3s, their clocks must be synchronized, since if BC = AC + 1, the B->A trip would need a delay of 4s to produce that apparent delay, and if BC = AC + 2, it would need a delay of 5s (the apparent delay is the true delay minus how far B is ahead of A); both are impossible, since each one-way delay is at most 3s.

If the perceived delay were 2s, their clocks would also be synchronized: if BC = AC + 1, B->A would be 3s, which would mean A->B was 1s, but if that were true their clocks would be in sync, as that was the assumed delay. If BC = AC + 2, B->A would have to be 4s long, which is impossible.

If the perceived delay were 1s, BC would need to equal AC + 1, because if BC = AC + 0, B->A would have a 1s delay, requiring A->B to have a 3s delay and meaning that BC = AC + 2. And if BC = AC + 2, the line would need to have a 3s delay, which would mean BC = AC + 0 because A->B would have a 1s delay.

The perceived delay could never be 0s: if B->A were 1s, BC = AC + 2 and the perceived delay would be -1s; if B->A were 2s, BC = AC + 1 and the perceived delay would be 1s; and if B->A were 3s, BC = AC + 0 and the perceived delay would be 3s.

There could never be a perceived delay of -1s as if BC = AC + 0 the line would have to have a delay of -1s, and if BC = AC + 1 the line would need a delay of 0s, but if BC = AC + 2 the line would have a delay of 2s, which would imply that BC = AC + 1 due to the syncing process.

EDIT.
No... this doesn't work. We could still converge and agree on the wrong model if our start times aren't synchronized. I could think it's (1/3) and my friend thinks it's (3/1) while actually it's (3/1) for me.
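For the record, a quick sketch that runs the sync-then-reply step above for each possible split of the 4-second round trip:

    using System;

    // True one-way delays ab (A->B) and ba (B->A). B receives A's stamp, assumes a
    // 1s delay, resets its clock, and replies immediately with its new clock value.
    void Check(double ab, double ba)
    {
        double aStamp = 0.0;                        // A's clock at send
        double bReplyStamp = aStamp + 1.0;          // B's freshly reset clock when it replies
        double aClockAtReply = aStamp + ab + ba;    // A's clock when the reply arrives
        double apparent = aClockAtReply - bReplyStamp;

        Console.WriteLine($"AB={ab}s, BA={ba}s -> apparent delay {apparent}s");
    }

    Check(1, 3);
    Check(2, 2);
    Check(3, 1);
    // Every case prints an apparent delay of 3s: the reply only reveals the round
    // trip, not how it splits, so A would reach the same conclusion in all three.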