Posted by Unknown Lamer on Tuesday August 12, 2014 @03:47AM from the order-out-of dept.

MojoKid (1002251) writes: Ever since Nvidia unveiled its 64-bit Project Denver CPU at CES last year, there's been discussion over what the core might be and what kind of performance it would offer. Physically, the chip is huge, more than 2x the size of the Cortex-A15 that powers the 32-bit version of Tegra K1. Now we know a bit more about the core, and it's like nothing you'd expect. It is, however, somewhat similar to the designs we've seen in the past from the vanished CPU manufacturer Transmeta. When it designed Project Denver, Nvidia chose to step away from the out-of-order execution engine that typifies virtually all high-end ARM and x86 processors. In an OoOE design, the CPU itself is responsible for deciding which code should be executed at any given cycle. OoOE chips tend to be much faster than their in-order counterparts, but the additional silicon burns power and takes up die area. What Nvidia has developed is an in-order architecture that relies on a dynamic optimization program (running on one of the two CPUs) to calculate and optimize the most efficient way to execute code. This data is then stored inside a special 128MB buffer of main memory. The advantage of decoding and storing the most optimized execution method is that the chip doesn't have to decode the data again; it can simply grab that information from memory. Furthermore, this kind of approach may pay dividends on tablets, where users tend to use a small subset of applications. Once Denver sees you run Facebook or Candy Crush a few times, it's got the code optimized and waiting. There's no need to keep decoding it for execution over and over.
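The decode-once-and-cache idea in the summary can be sketched as a toy translation cache. All names, cycle costs, and the stand-in "optimization" step here are invented for illustration; this is not Nvidia's actual mechanism:

```python
# Toy sketch of a "decode once, reuse many times" translation cache,
# loosely modeled on the summary above (not Nvidia's actual design).
DECODE_COST = 50   # pretend cycles to decode+optimize a code block
REUSE_COST = 1     # pretend cycles to fetch the cached translation

translation_cache = {}  # block address -> optimized "micro-op" sequence

def run_block(addr, raw_instructions):
    """Return (translated code, cycle cost) for executing one code block."""
    if addr in translation_cache:
        return translation_cache[addr], REUSE_COST
    optimized = tuple(reversed(raw_instructions))  # stand-in "optimization"
    translation_cache[addr] = optimized
    return optimized, DECODE_COST

# First run pays the decode cost; later runs are cheap cache hits.
_, first = run_block(0x1000, ["ld", "add", "st"])
_, second = run_block(0x1000, ["ld", "add", "st"])
print(first, second)  # 50 1
```

The payoff is exactly the one the summary claims: once the hot blocks of an app have been seen a few times, every later execution skips the expensive decode/optimize step.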

Let's see if I have this right: with an OoOE CPU, the processor itself decides what order to execute the instructions in, so you get a faster overall speed.

With the Project Denver CPU, it's an in-order processor, but it uses software at runtime to decide what order to process the code in and stores that info in a special buffer; yet that software is itself run by the CPU in the first place to make the OoOE decisions.

So in case of JVM, you'd think it's flaky for the JIT to happen on the same CPU as the one that is executing the code?

Bear in mind that nowadays, CPUs no longer need to be designed to run even closed-source, boxed operating systems with top performance. The bootloader and kernel can be custom-compiled for the very specific CPU version and won't *necessarily* need the helper.

I think you're missing an important part. The ordering optimizations are run on a separate CPU/core. This CPU/core can be shut down, saving power, or used to execute other threads, increasing speed. Remember, Transmeta only made mobile/power-saving processors because they could save power by not running the OoOE engine the whole time. This is a great approach to saving power: do the optimization once and execute several times. One problem is that when the code paths change drastically, you need to have some way to detect that and re-optimize.

More to the point, if the advantage of switching to in-order is having less silicon (and therefore a smaller power draw), isn't that completely undone by having a whole second CPU in there that makes it twice as large as its predecessor?

A more effective use of the second core, when it's not doing optimization, would be to switch it to running application load. With today's highly multithreaded designs there are other, far more effective solutions than out-of-order execution anyway... like Intel's Hyper-Threading (simultaneous multithreading - SMT).

In-order processors are a better choice as long as your program is well optimized. Optimizing for in-order processors is difficult, and not something that is going to be done for 99% of programs. It's also very difficult to do statically.

NVIDIA has chosen to let the optimization be done by software at runtime. That's an interesting idea that will surely perform very well in benchmarks, if not in real life.

This is an area where post-compile optimization can shine. By watching actual execution with live data, the post-compile optimizer can build branch-choice stats to tune against, based on actual operation rather than static analysis at compile time. HP's Dynamo project, IIRC, was based around this idea: it would recompile binaries for the same architecture it ran on after observing them running a few times. I believe the claim was an average 10% improvement in perf over just compiler-optimized binaries.
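A crude sketch of the kind of branch-bias profiling a Dynamo-style optimizer might collect from live runs (the function names, addresses, and the threshold idea are assumptions for illustration, not details from the HP project):

```python
# Rough sketch of profile-guided branch-bias collection, in the spirit
# of the post-compile optimizer described above (illustrative only).
from collections import defaultdict

taken_counts = defaultdict(lambda: [0, 0])  # branch pc -> [not-taken, taken]

def observe(pc, taken):
    """Record one dynamic outcome of the branch at address pc."""
    taken_counts[pc][1 if taken else 0] += 1

def bias(pc):
    """Fraction of the time this branch was taken during live runs."""
    not_taken, taken = taken_counts[pc]
    return taken / (not_taken + taken)

# Simulate watching a branch that is almost always taken with live data.
for _ in range(95):
    observe(0x400, True)
for _ in range(5):
    observe(0x400, False)

# A recompiler could lay the taken path out as the fall-through path
# whenever the observed bias crosses some chosen threshold.
print(bias(0x400))  # 0.95
```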

It's hard drive manufacturers that insisted on decimal instead of binary prefixes, not RAM makers. In fact, it's fairly difficult to make RAM to decimal prefixes.

At any rate, RAM specs usually don't count OS overhead anyway, so this just makes the kernel heavier in a sense. Hell, this is probably being done with a proprietary kernel blob (as we know, Nvidia loves their magic blobs that give you amazing performance only on their hardware...)

It's not, obviously - they're both the same size, at 2^31 bytes. Where you run into problems is if you want to create a 2 000 000 000 byte stick of RAM. And then you have a bit of a problem: addressing.

In hardware, exactly 2^N memory locations are addressable with N bits. You therefore need only ensure that your number of address lines corresponds to your number of memory locations in order to ensure consistency. If, however, 7% of possible addresses are invalid, you need to insert logic somewhere to make sure those addresses are never used.
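The arithmetic behind this is easy to check with a short script: a 2^31-byte (2 GiB) part fits its 31 address lines exactly, while a 2 000 000 000-byte part still needs 31 lines but leaves roughly 7% of the address space pointing at nothing:

```python
# Quick arithmetic behind the addressing point above: N address lines
# naturally span exactly 2**N locations, so a decimal-sized part
# wastes part of its address space.
def address_lines(capacity_bytes):
    """Smallest N such that 2**N covers the capacity."""
    n = 0
    while (1 << n) < capacity_bytes:
        n += 1
    return n

binary_stick = 2**31           # 2 GiB: fills its address space exactly
decimal_stick = 2_000_000_000  # needs 31 lines but leaves holes

n = address_lines(decimal_stick)
invalid = (1 << n) - decimal_stick
print(n, invalid, round(invalid / (1 << n), 3))  # 31 147483648 0.069
```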

Although I know only a little about CPU design, this sounds like one of the most revolutionary design changes in many years. The question in my mind is how well it will work. The CPU can use information at runtime that a static analyser running on a separate core might not have ahead of time, most obviously branch prediction information. OoO CPUs can speculatively execute multiple branches at once and then discard the version that didn't happen, and they can re-order code depending on what it's actually doing, including things like self-modifying code and code that's generated on the fly by JITCs. On the other hand, if the external optimiser CPU can do a good job, it stands to reason that the resulting CPU should be faster and use way less power. Very interesting research, even if it doesn't pan out.

Well, you could look at what the Hotspot JVM does which is probably a closer analogy, and it works very well.

But then if you are using a JVM that recompiles code on the fly (or Apple's latest JavaScript engine which actually has one interpreter and three different compilers, depending on how much code is used), the CPU then has to recompile the code again! Unlikely to be a good idea.

There's a different problem. When you have loops, usually you have dependencies between the instructions within a loop, but no dependencies between the iterations. OoO execution handles this brilliantly.

If you have a loop where each iteration has 30 cycles latency and 5 cycles throughput, the OoO engine will just keep executing instructions from six iterations in parallel. Producing code that does this without OoO execution is a nightmare.
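The arithmetic in this example is easy to verify - the number of iterations the OoO window needs to keep in flight is just the latency divided by the per-iteration throughput:

```python
# The arithmetic behind the example above: with a 30-cycle dependency
# latency and 5-cycle per-iteration throughput, an OoO window keeps
# latency/throughput iterations in flight to fully hide the latency.
import math

def iterations_in_flight(latency_cycles, throughput_cycles):
    return math.ceil(latency_cycles / throughput_cycles)

print(iterations_in_flight(30, 5))  # 6
```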

Loop unrolling is hardly a nightmare, it's one of the simplest optimizations and can easily be automatized.

Loop unrolling is hardly a nightmare, it's one of the simplest optimizations and can easily be automatized.

Good luck. We are not talking about loop unrolling. We are talking about interleaving instructions from successive iterations. That was what Itanium expected compilers to do, and we all know how that ended.
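The distinction is easier to see with a toy example. This is a deliberately crude sketch (real compilers do modulo scheduling with resource and dependence constraints), but it shows why interleaving is not the same transformation as unrolling:

```python
# Toy contrast between plain unrolling and iteration interleaving
# (a crude software-pipelining sketch; illustrative only).
ops = ["load", "mul", "store"]  # one iteration's dependent chain

def unrolled(iterations):
    # Unrolling just repeats the chain; dependent ops stay adjacent,
    # so each mul still waits right behind its load.
    return [f"{op}[{i}]" for i in range(iterations) for op in ops]

def interleaved(iterations):
    # Interleaving issues the same stage of several iterations together,
    # so independent ops from other iterations separate the dependent ones.
    return [f"{op}[{i}]" for op in ops for i in range(iterations)]

print(unrolled(2))     # ['load[0]', 'mul[0]', 'store[0]', 'load[1]', ...]
print(interleaved(2))  # ['load[0]', 'load[1]', 'mul[0]', 'mul[1]', ...]
```

An OoO engine produces the interleaved order dynamically; Itanium-style designs expected the compiler to emit it statically, registers and all.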

Out-of-order execution can only do one thing, actually: cope with varying latency of operations. For most normal instructions, a LIW/VLIW/explicitly scheduled processor (yes, there are some that aren't a *LIW type) can in most cases do better than the dynamic scheduler. Where OoO execution really shines is hiding L1 cache misses and in some cases even L2 cache misses, where statically scheduled code has a hard time adapting to hit/miss patterns. The standard technique for statically scheduled architectures is to move loads up as far as possible so that L1 misses can at least partially be hidden by executing independent code, often using specialized non-faulting load instructions that can fail "softly" and be handled by special code paths. The problem with doing things like that is that fine-grained handling isn't really possible due to code explosion.
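The load-hoisting technique described here can be sketched naively. This toy pass assumes away all the dependence and aliasing analysis that makes the real problem hard - it simply assumes no instruction it moves a load over depends on, or is depended on by, that load:

```python
# Minimal sketch of "hoist loads as far up as possible" on a toy
# instruction list (illustrative only; real schedulers must prove
# the reordering safe via dependence analysis).
def hoist_loads(instrs):
    """Move loads ahead of independent non-load work (very naive)."""
    loads = [i for i in instrs if i.startswith("load")]
    rest = [i for i in instrs if not i.startswith("load")]
    return loads + rest

before = ["add r1", "add r2", "load r3", "use r3"]
after = hoist_loads(before)
print(after)  # ['load r3', 'add r1', 'add r2', 'use r3']
```

After hoisting, an L1 miss on `load r3` overlaps with the two adds instead of stalling right before `use r3`.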

But it is fully possible to do partial OoO execution just for memory operations and maybe that's what Nvidia is doing. Maybe not.

Out-of-order execution can only do one thing, actually: cope with varying latency of operations.

It also covers up the sometimes bad instruction ordering of compilers, which has predictably led to compilers being even worse at it. No point modeling which execution units are free when the OoO pipeline reduces all the important inner-loop stuff to the latency of the longest dependency chain after just a couple of iterations...

I'm hearing a lot of reference to the Mill, but I'm very much unclear on whether they can actually do what they claim to do. The CPU isn't implemented and we don't have real-world data on it, as far as I understand. History is littered with revolutionary-seeming designs which in practice turned out to have very marginal or non-existent gains due to more-frequent-than-expected edge cases and the like.

The feeling I get from the Mill is that it sounds a little bit too much like the monolithic/microkernel debate.

There is a lot of information available on the Mill architecture [millcomputing.com] at this point, and very little reason to doubt its feasibility. Essentially all of the parts have been demonstrated in existing architectures, and the genius is in how they are combined in such a simple and elegant manner. Implementation issues aside, the idea is rock solid, and has too much potential to ignore. Perhaps the layman can not appreciate it, but the architecture has a profound ability to simplify and secure the entire stack of software.

It's certainly different, but not revolutionary. I worked on a core that did this 15 years ago (not Transmeta); it's a hard problem, and we didn't make it to market. Transmeta floundered. What I think they're doing here is instruction rescheduling in software, something that's usually done by lengthening the pipe in an OoO machine. It means they can do tighter/faster branches, and they can pack instructions in memory, aligned appropriately, to feed the various functional units more easily. My guess from reading this article is that it probably has an LIW mode where they turn off the interlocks when running scheduled code.

Of course, all this could be done by a good compiler scheduler (actually it could be done better with a compiler that knows how many of each functional unit type are present during the code-generation phase). The resulting code would likely suck on other CPUs but would still be portable.

Then again, if they're aiming at the Android market, maybe what's going on is that they've hacked their own JVM and it's doing JIT on the metal.

I really wonder about this, too. Perhaps they determined that the common case of a read is one which they can statically re-order far enough ahead of the dependent instructions for it to run without a stall but that doesn't sound like it should work too well, in general. Then again, I am not sure what these idioms look like on ARM64.

The bigger question I have is why they didn't go all out with their own VLIW-based exposure of the hardware's capabilities. As I recall from the Transmeta days, their problem was...

I think the entire point of having 7 micro-ops in flight at any point in time, combined w/ the large L1 caches and 128MB micro-op instruction cache, is designed to mitigate this, in much the same fashion the sheer number of warps (blocks of threads) in PTX mitigates in-order execution of threads and branch divergence.

Based on their technical press release, AArch64/ARMv8 instructions come in, and at some point the decoder decides it has enough to begin optimization into the native ISA of the underlying chip, at which point it likely generates micro-ops for the code that presumably place loads appropriately early s.t. stalls should be non-existent or minimal once approaching a branch. By the looks of their insanely large L1 I-cache (128KB), this core will be reading quite a large chunk of code ahead of itself (consuming entire branches, and pushing past them I assume - to pre-fetch and run any post-branch code it can while waiting for loads) to aid in this process.

The classic case w/ in-order designs is of course the one where the optimization process can't possibly do anything in between a load and a dependent branch - either due to lack of registers to do anything else, lack of execution pipes to do anything else, or there literally being nothing else to do (predictably) until the load or branch has taken place. Depending on the memory controller and DDR latency, you're typically looking at 7-12 cycles on your typical phone/tablet SoC for a DDR block load into L2 cache, and into a register. This seems like it may be clocked higher than a Cortex A15 though, so let's assume it'll be even worse on Denver.

This is where their 'aggressive HW prefetcher' comes into play, I assume. Combined w/ its 128KiB I-cache prefetching and analysis/optimization engine, Denver has a relatively big (64KiB) L1 D-cache as well (for comparison, the Cortex A15 - which is also a large ARM core - has a 32KiB L1 D-cache per core). I would fully expect a large part of that cache is dedicated to filling idle memory-controller activity with speculative loads, taking educated stabs in the dark at what loads are coming up in the code, in the hope of getting some right and mitigating in-order branching/loading issues further.

It looks to me like they've taken the practical experience of their GPGPU work over the years and applied it to a larger, more complex CPU core to try and achieve above-par single-core performance - but instead of going for massively parallel super-scalar SIMT (which clearly doesn't map to a single thread of execution), they've gone for 7-way MIMT and a big analysis engine (logic and caches) to try and turn single-threaded code into partially super-scalar code.

This is indeed radically different from typical OoO designs, in that those designs waste their extra pipelines running code that ultimately doesn't need to be executed to mitigate branching performance issues (by running all branches, when only one of their results matters) - whereas Denver decided "hey, let's take the branch hit - but spend EVERY ONE of our pipelines executing code that matters - because in real-world scenarios, we know there's a low degree of parallelism which we can run super-scalar, and we know with a bit more knowledge, we can predict and mitigate the branching issues anyway!"

Hats off, I hope it works well for them - but only time will tell how it works in the real world.

Fingers crossed - this is exactly the kind of out-of-the-box thinking we need to spark more hardware innovation. Imagine this does work well: how are AMD/ARM/IBM/Intel going to respond when their single-core performance is sub-par? We saw the ping-pong of performance battles between AMD and Intel in previous years; Intel has dominated for the last 5 years or so, unchallenged - and has ultimately stagnated in the past 3 years.

If their cache lines are 64 bytes, then it's quite possible that successive instructions (based on execution timestamp) are in the same cache line. Remember that this has to improve execution speed most of the time, and not decrease it. As for data caches, I'm not sure - a good prefetcher will help a lot here. This has the possibility of slowing down execution... I wonder how often, and for how long, the execution of a thread can continue when there's a data cache miss...
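Assuming the common 64-byte line size and fixed 4-byte AArch64 instructions (assumptions, not published Denver specs), the locality point is easy to illustrate:

```python
# Small illustration of the cache-line locality point above; the
# sizes here are common defaults, assumed rather than Denver specs.
LINE_BYTES = 64
INSTR_BYTES = 4  # fixed-width AArch64 instructions

def same_line(addr_a, addr_b):
    """True if both byte addresses fall in the same cache line."""
    return addr_a // LINE_BYTES == addr_b // LINE_BYTES

# Sixteen consecutive 4-byte instructions share one 64-byte line,
# so reordered-but-nearby instructions usually stay within it.
print(LINE_BYTES // INSTR_BYTES)  # 16
print(same_line(0x1000, 0x103C))  # True  (last instruction of the line)
print(same_line(0x1000, 0x1040))  # False (first instruction of the next)
```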

Recovering the slowdown in subsequent steps = use the full width of seven microops to run significantly faster than a typical out-of-order ARM design. As for continuing when there's a data cache miss, I was referring to out-of-order designs, which might - or might not - be stalled in a couple more instructions because of dependencies on not-yet-processed data (which is loaded from memory).

There might be some "hints" for the microprocessor about which data to cache - if so, those could be added in the generated microcode some time before they're really needed, increasing the chance of having them available in cache and/or reducing wait time. Of course, I don't know for sure, but you could read a value into a register and then zero the register. This might be optimized out of the microprocessor's run (so it won't consume energy to load and then zero the register), but still go through the data fetch engine, so it would effectively act as a prefetch.

It's not the core instruction set that's the problem with booting alternate OSes; as long as you stick to the base architecture you'll be fine. It's the lack of standardization when it comes to boot firmware and device configuration that's the problem.
The server ARM initiative at least is standardizing on UEFI and ACPI. Hate them or love them, making ARM hardware more similar to Intel Architecture hardware will likely make it easier to support both.

A buffer in main memory, software that optimizes the most-used code... It looks like an OS job to me, something that could be implemented in the Linux kernel and benefit all CPUs, provided that you have the appropriate driver.

According to the paper, it looks like the biggest novelty is... DRM. The optimizer code will be encrypted and will run in its own memory block, hidden from the OS. It will also make use of some special profiling instructions, which could just as well be made accessible to the OS. Maybe they will, but they say nothing about it.

According to the paper, it looks like the biggest novelty is... DRM. The optimizer code will be encrypted and will run in its own memory block, hidden from the OS.

DRM is already fully supported in ARM processors. See TrustZone [arm.com], which provides a separate "secure virtual CPU" with on-chip RAM not accessible to the "normal" CPU and the ability to get the MMU to mark pages as "secure", which makes them inaccessible to the normal CPU. Peripherals can also have secure and non-secure modes, and their secure modes are accessible only to TrustZone. A separate OS and set of apps run in TrustZone. One DRM application of this is to have secure-mode code that decrypts encrypted video...

Nope. The standard OoO mechanism is one of pushing - that is, pushing operations to execute from the scheduler to the execution units. The execution units are dumb and only consume data and operation information, producing a set of results. In most OoO designs the number of operations actually capable of flowing through the execution units is less than the theoretical width, limited either by the scheduler or the retirement logic.

A VLIW can scale to greater actual execution throughput; however, it is hard to make...

I think NVidia tied their hands by retaining the ARM architecture. I suspect the result will be a "worst of both worlds" processor that doesn't use less power or provide better performance than competitors.

In order execution, exposed pipelines, and software scheduling are not new ideas. They sound great in theory, but never seem to work out in practice. These architectures are unbeatable for certain tasks (e.g. DSP), but success as general purpose processors has been elusive. History is littered with the corpses of dead architectures that attempted (and failed) to tame the beast.

Personally, I'm very excited about the Mill [millcomputing.com] architecture. If anybody can tame the beast, it will be these guys.

Looking at Shield Tablet reviews, the K1 certainly appears to have the processing power, but actually putting it to use takes a heavy toll on the battery, with the SoC alone drawing over 6W under full load: in AnandTech's review, battery life drops from 4.3h to 2.2h when they disable the 30fps cap in GFXBench.

The K1's processing power looks nice in theory but once combined with its power cost, it does not sound that good anymore.

The idea of offloading instruction scheduling to the compiler is not new. This was very much in mind when Intel designed Itanium, although it was an important concept for in-order processors long before that. For most instruction sequences, latencies are predictable, so you can order instructions to improve throughput (reduce stalls). So it seems like a good idea to let the compiler do the work once and save on hardware. Except for one major monkey wrench:

Memory load instructions

Cache misses and therefore access latencies are effectively unpredictable. Sure, if you have a workload with a high cache hit rate, you can make assumptions about the L1D load latency and schedule instructions accordingly. That works okay. Until you have a workload with a lot of cache misses. Then in-order designs fall on their faces. Why? Because a load miss is often followed by many instructions that are not dependent on the load, but only an out-of-order processor can continue on ahead and actually execute some instructions while the load is being serviced. Moreover, OOO designs can queue up multiple load misses, overlapping their stall time, and they can get many more instructions already decoded and waiting in instruction queues, shortening their effective latency when they finally do start executing. Also, OOO processors can schedule dynamically around dynamic instruction sequences (i.e. flow control making the exact sequence of instructions unknown at compile time).

One Sun engineer talking about Rock described modern software workloads as races between long memory stalls. Depending on the memory footprint, a workload could spend more than half its time waiting on what is otherwise a low-probability event. The processors blast through hundreds of instructions where the code has a high cache hit rate, and then they encounter a last-level cache miss and stall out completely for hundreds of cycles (generally not on the load itself but the first instruction dependent on the load, which always comes up pretty soon after). This pattern repeats over and over again, and the only way to deal with that is to hide as much of that stall as possible.

With an OOO design, an L1 miss/L2 hit can be effectively and dynamically hidden by the instruction window. L2 (or in any case the last level) misses are hundreds of cycles, but an OOO design can continue to fetch and execute instructions during that memory stall, hiding a lot of (although not all of) that stall. Although it's good for optimizing poorly-ordered sequences of predictable instructions, OOO is more than anything else a solution to the variable memory latency problem. In modern systems, memory latencies are variable and very high, making OOO a massive win on throughput.
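A back-of-envelope model makes the stall-hiding argument concrete. All cycle counts here are made up for illustration, and the model ignores window-size limits; it only shows why overlapping misses with independent work and with each other wins:

```python
# Toy model of the argument above: an in-order core serializes two
# last-level misses, while an OoO core with a big enough window issues
# both early and runs independent work underneath them (numbers are
# illustrative, not measured).
MISS_CYCLES = 200
INDEPENDENT_WORK = 80  # cycles of work not dependent on either load

def in_order_cycles(misses):
    # Stalls at each missing load before reaching the independent work.
    return misses * MISS_CYCLES + INDEPENDENT_WORK

def out_of_order_cycles(misses):
    # Both misses overlap each other and the independent work.
    return max(MISS_CYCLES, INDEPENDENT_WORK) if misses else INDEPENDENT_WORK

print(in_order_cycles(2), out_of_order_cycles(2))  # 480 200
```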

Now, think about idle power and its impact on energy usage. When an in-order CPU stalls on memory, it's still burning power while waiting, while an OOO processor is still getting work done. As the idle proportion of total power increases, the usefulness of the extra die area for OOO increases, because, especially for interactive workloads, there is more frequent opportunity for the CPU to get its job done a lot sooner and then go into a low-power low-leakage state.

So, back to the topic at hand: What they propose is basically static scheduling (by the compiler), except JIT. Very little information useful to instruction scheduling is going to be available JUST BEFORE time that is not available much earlier. What you'll basically get is some weak statistical information about which loads are more likely to stall than others, so that you can resequence instructions dependent on loads that are expected to stall. As a result, you may get a small improvement in throughput. What you don't get is the ability to handle unexpected stalls, overlapped stalls, or the ability to run ahead and execute only SOME of the instructions that follow the load. Those things are really what gives OOO its advantage.
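The "weak statistical information" described here might be used by a toy resequencer like the following. The load names and miss rates are invented, and this is a sketch of the general idea, not of Denver's actual optimizer:

```python
# Sketch of resequencing loads by profiled stall likelihood: issue the
# likely-stalling loads first so more independent work can run before
# their results are needed (toy model, invented numbers).
profiled_miss_rate = {"load_a": 0.02, "load_b": 0.60, "load_c": 0.10}

def schedule_loads(loads):
    """Order loads so the most stall-prone ones are issued earliest."""
    return sorted(loads, key=lambda l: profiled_miss_rate[l], reverse=True)

print(schedule_loads(["load_a", "load_b", "load_c"]))
# ['load_b', 'load_c', 'load_a']
```

Note what this cannot do, per the argument above: it bakes in one static order, so an unexpected miss on `load_a` still stalls everything behind it.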

This is a good post (the point about hiding memory latency in particular), but you should still wait to judge the new chip until benchmarks are posted.

If you have ever worked on a design team for a high performance modern CPU, you should know that high level classifications like OOO vs In-Order never tell the whole story, and most real designs are hybrids of multiple high level approaches.

One critical piece of information which is available JUST BEFORE time and not much earlier is which precise CPU/rest of device the code is running on! I don't buy that an OOO processor can do as good a job optimizing in real time as a JIT compiler that has 100x the time to do its work. If a processor has cache prefetch/test instructions, these can be inserted "hundreds of cycles" before memory is actually used. OOO can work around a single stall, but how about a loop that accesses 128K of RAM, with...

Prefetches issued hundreds of cycles ahead of time have to be highly speculative and therefore are likely to pull in data you don't need while missing out on some data you do need. If you can improve the cache statistics this way, you can improve performance, and if you avoid a lot of LLC misses, then you can massively improve performance. But cache pollution is as big a problem as misses, because it causes conflict and capacity misses that you'd otherwise like to avoid.
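A tiny LRU-cache model shows the pollution trade-off: speculative prefetches of useless lines can turn a working set that fits the cache perfectly into one that misses on every access. The cache size and access patterns here are contrived for illustration:

```python
# Tiny LRU-cache model of the pollution trade-off described above
# (all sizes and access patterns are made up for illustration).
from collections import OrderedDict

class TinyCache:
    def __init__(self, lines):
        self.lines = lines
        self.data = OrderedDict()
        self.misses = 0

    def touch(self, line):
        if line in self.data:
            self.data.move_to_end(line)  # refresh LRU position
        else:
            self.misses += 1
            self.data[line] = True
            if len(self.data) > self.lines:
                self.data.popitem(last=False)  # evict LRU line

hot = [0, 1, 2, 3] * 10  # working set that just fits a 4-line cache

polite = TinyCache(4)
for line in hot:
    polite.touch(line)

greedy = TinyCache(4)
for line in hot:
    greedy.touch(line)
    greedy.touch(100 + line)  # speculative prefetch of useless lines

print(polite.misses, greedy.misses)  # 4 80
```

Without pollution, only the four cold misses occur; with useless prefetches evicting the hot lines, every single access misses.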

So it seems like a good idea to let the compiler do the work once and save on hardware. Except for one major monkey wrench: Memory load instructions

That's not the only monkey wrench. Compilers simply aren't good enough in general, and there is little evidence that they could be made good enough on a consistent basis, because architectures keep evolving and very few compilers actually model specific architecture pipelines...

This is why Intel now designs their architectures to execute what compilers produce well, rather than the other way around. Intel would not have 5 asymmetric execution units with lots of functionality overlap in its latest CPUs if compilers didn't frequently produce code that requires it...

Which leads to compiler writers spending the majority of their effort on big-picture optimizations, because Intel et al. are dealing with the low-level scheduling issues for them... the circle is complete... it's self-sustaining.

I can only assume that Nvidia's engineers are aware of all this, since it is pretty basic stuff when it comes to CPU design really, and that TFA is simply too low on detail to explain what they are really doing.

Maybe it is some kind of hybrid where they still have some OOO capability, just reduced and compensated for by the optimization they talk about. It can't be as simple as TFA makes out, because as you say that wouldn't work.

I think your generalization of static scheduling performs poorly on a Mill. :) The Mill architecture [millcomputing.com] uses techniques which essentially eliminate stalls even with static scheduling, at least to about the same extent that an OOO can. Obviously, there will be cases where it will stall on main memory, but those are unavoidable on either. See the Memory talk in particular for how the Mill achieves this, and other improvements possible over OOO. The entire series of videos is fascinating if you have time, but long.

I've heard of Mill. I also tried reading about it and got bored part way through. I wonder why Mill hasn't gotten much traction. It also bugs me that it comes up on regular Google but not Google Scholar. If they want to get traction with this architecture, they're going to have to start publishing in peer-reviewed venues.

I looked at the Mill memory system. The main clever bit is to be able to issue loads in advance, but have the data returned correspond to the time the instruction retires, not when it's issued. This avoids aliasing problems. Still, you can't always know your address way far in advance, and Mill still has challenges with hoisting loads over flow control.

One might expect that, but the Mill is exceptionally flexible when it comes to flow control. It can speculate through branches and have loads in flight through function calls. The speculation capabilities are far more powerful, and there are a lot of functional units to throw at it. There will be corner cases where an OOO might do slightly better, but in general the scales should be tipped in the other direction. If anything, the instruction window on conventional hardware is more limiting.

Peer-reviewed venues don't reject things that are too novel on principle. They reject them on the basis of poor experimental evidence. I think someone's BS'ing you about the lack of novelty claim, but the lack of hard numbers makes sense.

Perhaps the best thing to do would be to synthesize Mill and some other processor (e.g. OpenRISC) for FPGA and then run a bunch of benchmarks. Along with logic area and energy usage, that would be more than good enough to get into ISCA, MICRO, or HPCA.

Why not have all applications ship in LLVM intermediate format and then have on-device firmware translate them according to the exact instruction set and performance characteristics of the CPU? By the time code is compiled to the ARM instruction set, too much information is lost to do fundamental optimization, like vectorizing loops if applicable operations are supported.
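A toy version of the ship-IR-and-lower-on-device idea: the same abstract loop is lowered differently depending on the target's vector width. The "IR", instruction names, and widths here are all invented for illustration; real LLVM lowering is vastly more involved:

```python
# Toy sketch of on-device lowering: one abstract "sum n elements" op
# is specialized to the target's vector width (all names invented).
def lower_sum_loop(n, vector_width):
    """Emit a pretend instruction list for summing n elements."""
    if vector_width > 1:
        body = n // vector_width   # full vector iterations
        tail = n % vector_width    # scalar remainder
        return [f"vadd.{vector_width}"] * body + ["add"] * tail
    return ["add"] * n

# A 4-wide SIMD target needs a quarter of the instructions.
print(len(lower_sum_loop(100, 4)))  # 25
print(len(lower_sum_loop(100, 1)))  # 100
```

This is exactly the information (the target's vector capability) that is gone once the code has already been fixed to a baseline scalar instruction set.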

Suppose for a moment that you are building a new processor for mobile devices.

The mobile device makers - Apple, Google, and Microsoft - all have "App Stores". Side loading is possible to varying degrees, but in no case is it supported or a targeted business scenario.

These big 3 all provide their own SDKs. They specify the compilers, the libraries, etc.

Many of the posts in this thread talk about how critical it will be for the compilers to produce code well suited for this processor...

Arguably, due to the app development toolchain and software delivery monoculture attached to each of the mobile platforms, it is probably easier than ever to improve compilers and transparently update the apps being held in app-store catalogs to improve their performance for specific mobile processors.

It's not the wild west any more; with tighter constraints around the target environment, more specific optimizations become plausible.

No one needs to do anything for software to run on these at all. Nvidia would be developing a kernel module or something that would JIT existing software into their optimized in-order pipeline, then cache the result. The out-of-order architectures all do this too - in hardware (which uses more power, maybe, but also executes more quickly and theoretically gets into sleep mode more often).

There's no need for anyone to generate special code for these CPUs, but it is interesting that a common perception is that...

Once Denver sees you run Facebook or Candy Crush a few times, it's got the code optimized and waiting.

I am so fortunate to live in such an advanced age of graphics processors, that let me run the equivalent of a web browser application and a 2D tetris game. What progress! We truly live in an age of enlightenment!

Does this architecture require us to load the "NVidia processor driver" which comes with 100 megabytes of code specializations for every game shipped?
That is, after all, why their graphics drivers perform so well - they patch all shaders on top-end games...