Today, we are excited to announce Quick Boot for the Android Emulator. With Quick Boot, you can launch the Android Emulator in under 6 seconds. Quick Boot works by snapshotting an emulator session so you can reload in seconds. Quick Boot was first released with Android Studio 3.0 in the canary update channel and we are excited to release the feature as a stable update today.

It's 1,000,000 times more useful. A modern PC will process more data in one second than a C64 did during its whole lifetime. Let's face it - old computers sucked, just as today's state-of-the-art PCs will suck 20 years from now. It's nothing more than a bunch of plastic and wires.


This actually adds credence to cybergorf's point. Given how much better hardware has gotten, one would expect modern software to perform many times better than it does. In terms of wall-clock time, real and significant hardware gains have largely been offset by software inefficiencies.

Sure, we're tempted to say Android is 1000 times more complex, but in all seriousness the entire kernel should load in the blink of an eye given how fast flash storage is, and by past standards it would be inexcusable for an app loader to take so long to load itself on such fast hardware. The truth of the matter is that the software industry has left optimization on the back burner, arguing that hardware improvements make software optimization irrelevant. This is a very common justification in the field of software development, and if that's the industry's consensus, then so be it. But we shouldn't lie to ourselves and pretend that modern inefficiencies are intrinsically due to additional complexity; we must recognize that the art of software optimization has gotten lost along the way.

A. Kernel boot time has nothing to do with storage performance and everything to do with the complexity of what you call a "PC". While the C64 had a very limited set of devices that were initialized from ROM, a modern kernel needs to support thousands upon thousands of CPU types, chipsets, devices, etc., most of them with unbelievably complex initialization sequences. If you don't believe me, look at the driver source code of modern GPUs or 25/40/100 GbE network devices.

B. The amount of optimization that goes into the kernel these days is a million miles ahead of the type-a-few-thousand-lines-of-ASM-and-shove-them-into-a-ROM approach that was used to design the C64.


In all honesty, most of the complexity is bloat. Even Linus Torvalds has acknowledged that Linux has a bloat problem. Android is tuned for very specific hardware. If the kernel on your Android phone does "support thousands upon thousands of CPU types, chipsets, devices, etc.", then whoever built it did a pretty bad job of trimming it down to only the hardware present on the device. Know what I mean? The build is supposed to be optimized for that specific hardware.



The OP in this sub thread was comparing C64 to a PC. He wasn't talking about Android, nor was I.

Arguably not true by performance per clock...

Actually, the number of rows/sec PostgreSQL-on-Linux can store or index is light-years ahead of any database that existed in the C64 or even the early PC days. Even if you compute odd metrics such as rows/sec per MHz or rows/sec per disk RPM, the performance difference is simply staggering.
Same goes for networking (packets/sec per MHz) and graphics (triangles/sec per MHz).

Sure, but it doesn't explain why the overhead is so much greater.

Sure it does.
Back in the C64 days, or even in the XT days, a graphics card was nothing more than a dual-ported memory chip.
Today a graphics card is a huge SMP processor that's expected to push billions of vertices and handle complex requests simultaneously. How can you possibly expect to keep programming such a beast by calling 'int 10h'?

Networking is no different. How can you possibly compare a C64 modem that was barely capable of pushing 1200 bps through the simplest of interfaces (a serial port) to a multi-lane-PCI-E 100 GbE network device that includes smart buffering, packet filtering, and load balancing?

IMHO it's true of most code.

Being a systems developer, I can't say that I care much for user-facing code.

I don't even know where to begin with this.
As an adolescent, I used to crack software (protections) for fun. I started off on the Commodore Amiga with 68k assembler. In order to crack, one had to disassemble (parts of) the software and put in a fix. Let's just say that the code the compilers created was far from optimal. Even if an entire routine was missing, I could usually 'create some space' simply by optimizing the crappy compiler output.
I also cracked a bit of x86 software. Eventually I stopped, as I got bored.
Today's code is so bloated that often you can't even tell earth from water. Today's software does with 30 MB what we would have done in less than 2 KB of assembler. Draw your own conclusions...

I feel the need to share an anecdote. A while ago, I was developing an application for an embedded device in C++. I needed 'biased' random generator functions, of the kind introduced with the C++11 random number distributions. Unfortunately, the compiler in use didn't support C++11 yet. As the test device had plenty of flash, I decided to pull in the random machinery via Boost... Soon I was surprised to find that my application had grown more than 10x in size and was now taking up more than half of the flash memory. An investigation revealed that the binary now even included internationalization routines! I'm guessing the damn I18N had to be initialized as well; this, on a device with only a few LEDs.
The compiler (and even more so the linker) was already set to eliminate unused code/functions.
Luckily, a compiler supporting C++11 was ready a few days later. Now the random stuff takes up only a few KB (though arguably still more than needed).

Have you ever wasted time looking at the source code of Mozilla's Firefox? I went crazy after half an hour. There are tons of functions doing (more or less) the same thing. Obviously, they lost track of things. This is not only problematic in terms of code bloat and program efficiency, but also a security problem. You find an error in one function and you can almost be certain that there are more functions with the same error under a different name... what a mess! :-P
Bottom line: complexity is just an excuse for not polishing the building blocks. Today's software "foundations" are often already rotten at the core.