
MojoKid writes "ARM debuted its new 64-bit microarchitecture today and announced the upcoming launch of a new set of Cortex processors, due in 2014. The two new chip architectures, dubbed the Cortex-A53 and Cortex-A57, are the most advanced CPUs the British company has ever built, and are integral to AMD's plans to drive dense server applications beginning in 2014. The new ARMv8 architecture adds 64-bit memory addressing, increases the number of general purpose registers to 31, and increases the size of the vector registers for NEON/SIMD operations. The Cortex-A57 and A53 are both aimed at the mobile market. Partners that've already signed on to build ARMv8-based hardware include Samsung, AMD, Broadcom, Calxeda, and STMicro."
The 64-bit ARM ISA is pretty interesting: it's more of a wholesale overhaul than a set of additions to the 32-bit ISA.

The first drafts of the ARMv8 architecture became available to a few ARM partners about 4-5 years ago. They've since been working closely with these partners to produce their chips before releasing their own design. The aim was to have third-party silicon ready to ship before anyone started shipping ARM-designed parts to encourage more competition.

ARM intentionally delayed releasing their own designs to give the first-mover advantage to the partners that design their own cores. In the first half of next year, there should be three almost totally independent[1] implementations of the ARMv8 architecture, with the Cortex A50 series appearing later in the year. This is part of ARM's plan to be more directly competitive with the likes of Intel. Intel is a couple of orders of magnitude bigger than ARM, and can afford to have half a dozen teams designing chips for different market segments, including some that never make it to production because that market segment didn't exist by the time the chip was ready. ARM basically has one design, plus a seriously cut-down variant. By encouraging other implementations, they get to have chips designed for everything from ultra-low-power embedded systems (e.g. the Cortex-M0, which ARM licenses for about one cent per chip), through smartphone and tablet processors, up to server chips. ARM will produce designs for some of these, and their design is quite modular, so it's relatively easy for SoC makers with the slightly more expensive licenses to tweak it a bit more to fit their use case, and companies like nVidia, TSMC and AMD will fill in the gaps.

The fact that ARM is now releasing their own designs for licensing means that their partners are very close to releasing shipping silicon. We've seen a few pre-production chips from a couple of vendors, but it's nice to see that they're about to hit the market.

[1] ARM engineers consulted on the designs, so there may be some common elements.

It helps if you read what I wrote, rather than pasting random bits out of context. ARM Cortex-A50 chips are likely to appear in 2014 (maybe earlier, but probably not much earlier). ARM partners such as TSMC, nVidia, and Qualcomm, who have been developing their own ARMv8 implementations for the last 3 or so years, are due to release production silicon in the first half of next year; they're shipping samples to their partners now.

Look at the article in detail. Isn't it funny how the A-15 is the super-miracle chip that is going to stick it to Intel in the server world! Oh wait.. now that super-miracle chip is the 64-bit ARM Miracle Chip (TM) and the A-15 has been relegated to smartphones instead of taking over the server world.

Fortunately, Intel is completely incapable of making any improvements to its chips whatsoever, so ARM's victory in 2014 is assured.

LOL Wut? ARM has YET to hit even the IPC of a Pentium 4, how many years ago was that chip released? They aren't even close to the Core series, and in case you ain't noticed, companies like Nvidia, that have sunk BIG bux into ARM, are having to pile on the cores to get decent performance, which of course blows the power budget to shit.

The simple facts are 1.- ARM doesn't scale and 2.- It's a hell of a lot easier for Intel, who already has insane levels of IPC, to scale down and have low power chips than it is

It hasn't yet. Perhaps it will. So far, it targets a different space than amd64.

and 2.- Its a hell of a lot easier for Intel, who already has insane levels of IPC, to scale down and have low power chips than it is for ARM to scale up and not blow their power budgets

But so far, this strategy has not permitted Intel to achieve as low a TDP, nor as much IPC per watt, as ARM. So you can say that it's easier, but so far ARM has not scaled up as far as Intel, nor has Intel scaled down as far as ARM. When Intel delivers a TDP as low as ARM with the same performance, wake me up, and I'll care. Until then, ARM is working fine, and many people are pretty happy with existing ARM-based tablets, as evinced

Its a hell of a lot easier for Intel, who already has insane levels of IPC, to scale down and have low power chips than it is for ARM to scale up and not blow their power budgets

[Citation required]

Intel have insane levels of IPC because they use insane amounts of power. IPC and power are correlated, yeah? It takes power to run all those parallel circuits that can look ahead in the instruction stream and provide high IPC. The laws of physics work the same way for Intel as they do for ARM. So, why is it e

Concur. ARM is fucked, it's funny people don't see that. They will move back to basically providing only ultra low cost and ultra low power chips. They will see their high end aspirations kicked to the ground, and will be pushed lower and lower in the tablet and mobile phone market over the next 2 years.

ARM's core business is processor cores — CPU layouts to laymen — that other companies can take and add their own extra bits to before manufacturing. It's called System-on-Chip (SOC) and it's an area where Intel doesn't have much of a grasp precisely because it involves giving your design to other companies, letting them modify it by adding stuff, and then manufacture it themselves (or through a third-party). Now, it's hardly surprising that the high-end SOC guys want a larger addressing mode; the forces which pushed desktops also push mobile devices and the like.

What amuses me is the push to e.g. servers. Gee, can I get a server with 1000 cores that's slower in almost every operation than a 16 core Xeon server? Oh, and can you make it useless for virtualizing servers or doing anything but light load trivially parallelizable tasks?

I know a few scientists with tasks to do that are embarrassingly parallel and with far more data than you can shake a whole bushel of sticks at. Being able to stuff even more cores into a rack (where power and cooling are usually the main constraints) is going to be of great interest to them. Whether it will beat out GPUs is the real question though. I expect it will for some workloads (ones with more complex conditional processing in the individual units of processing) but not others. And Intel remains the fastest in situations where raw single-threaded power is required, which frankly is a lot of code.

Supercomputing doesn't operate under the same constraints that desktop or normal server computing does. Supercomputer makers try to pack as much computing power in as small a space as possible (because delays effectively due to the speed of light are quite a significant problem otherwise). All too often, the main challenge with a supercomputer is stopping it from cooking itself and setting fire to the building...

But which would be MORE valuable to your HPC friends, a thousand cores that individually are weaker than a Pentium 2, or 250 Magny Cours or Core i7 chips that can blow through twice as many instructions per second as those 1000 chips?

As we all know TINSTAAFL and there is ALWAYS a trade off, and in the case of ARM it's frankly lousy IPC. So far many companies have sunk serious R&D into ARM and they haven't solved the problems, so it's doubtful that the arch is gonna hit even P4 levels of IPC anytime soon. F

I think ARM is very safe in mobile devices. It has low power consumption, and most ARM chipsets have video acceleration, so they can still play HD video. It's licensable too, which is handy for SoCs. Also the cores are tiny.

Moving to Atom would give phones more CPU power, but I'm not convinced there is much need for it. I've got a Samsung Galaxy S2 and it's not like there is anything I do that is CPU bound. Smartphones have poor battery life already though, and a move to x86 is going to make that worse.

Now Intel are talking about licensing Atom, but I think they face an uphill battle. ARM's mix of low thermal power/low CPU power compared to x86 and small licensable cores aimed at TSMC is basically ideal for people like Samsung, Qualcomm and so on. In fact Qualcomm have spent a lot of money developing their own microarchitecture for ARM - the Snapdragon and Krait cores. If they moved to x86 they would not be able to do that. NVidia are obviously graphics focussed. So it's hard to see the ARM SoC vendors switching to Intel.

Of course Intel is safe in servers and laptops because there you do need x86 compatibility and more horsepower even at the cost of a higher power consumption.

I agree that ARM is safe in mobile and tablets. That doesn't mean that Intel won't get a slice of the market, but I don't think that Intel will ever have the flexibility and price advantages of ARM.

Moving to Atom would give phones more CPU power

That's debatable as well, because Intel only offers a single-core, hyperthreaded Atom at 1.3GHz (with some turbo features), that only performs well in Javascript benchmarks. Quad-core A9s and dual-core A15s are more than competitive, and definitely include far better graphics these days - something that people wi

Your Atom specs are false. The Motorola Razr i (okay, not available in the US because it doesn't have LTE) is shipping with a dual-threaded 2.0 GHz Atom core. The next chips due for the holiday sales season will be dual-core, hyperthreaded, and also spec'ed for 2.0 GHz with LTE. The graphics isn't quite there yet, but the gradual move away from third-party vendors will be a step in the right direction.

and for a bit more $ and Watts you can get a much faster x86/x64 chip.

Or, in other words, for somewhat less battery life, you can get a computer that performs faster (notice that price doesn't even enter the equation). That may seem like a deal for you, but it is not for lots and lots of people.

The netbook and server markets are already owned by Intel; ARM can't lose there (they can only gain market share). Weight and battery life are two of the most important features of a tablet (only comparable to screen si

For most Linux hosting, I don't think anyone cares a damn about x86 compatibility. As long as there's a Debian and CentOS/RHEL support for the architecture, and a decent Java implementation, nobody gives a damn, I don't think. All the open source stuff can be compiled pretty much for anything you want, and the world-facing web "stuff" typically uses php anyway, so as far as the user is concerned it can run on PDP-11 architecture with paging.

Except for the new mini-Pad, of course. Apple is pushing whatever people will give them money for. If people will give them more money for more speed, they'll take it. But is the market really demanding it, or has Apple simply discovered that the people who are willing to give them money for their inferior product in the first place are willing to give them even more of it for more and shinier?

It's interesting: I'm typing this on a netbook. That's got an Atom N570, a 1.6GHz dual-core, in-order, hyperthreaded CPU. My phone has dual-core Cortex A9s, which are 1.2GHz, out-of-order and single-issue.

If you'd said five years ago that ARM would go out-of-order and Intel would go in-order, I'd have thought it was absurd. Then again, you're comparing the (then) slowest Atom with the (then) fastest ARM.

Unfortunately there's no result for an N570, but judging by the other results, doubling up the number of cores should make it a bit faster than the Exynos 4210. Still, it's probably quite close. Which is remarkable actually - the Exynos uses slow mobile SDRAM and the Atom uses DDR2.

The fanboys can waste mod point but truth is truth and Good Lord Man is that page old and out of date! I don't see what you are trying to "prove" with a test that 1.-Nobody with ANY common sense at all is gonna actually be doing in real life on ANY of these chips, and 2.- A test that has got to be at LEAST 2 years old in the results, which is like a decade on the mobile front! They only have the single core Atoms, they only have the AMD E350, which isn't even sold anymore,....what good is this "test" and wh

DDR-2? DDR-2! Welcome to the Pleistocene. I am constantly amazed at what a piece of shit the Atom design is. 45 nm and just now making it to 32 nm? Snort. It should be 22 nm like the Ivy Bridge. At least ValleyView is FINALLY getting real graphics, not that third party toy crap.

Atom as a piece of Intel's range (not as a specific design anything like the garbage it is now) has HUGE potential to improve.

I've always thought that a lot of the problem with ARM systems is that they typically use mobile SDRAM, which is low power, but is also clocked slowly and has a rather narrow bus. So if you paired an ARM with the same memory you get in an Atom system, you'd see better figures. I think that is part of what has happened with the Chromebooks.

There are a total of 8 x 256MB DDR3L devices (2GB total) that surround the Exynos 5 Dual SoC (4 on each side of the PCB). Each device is 8-bits wide, all connecting up to the 64-bit wide DDR3L memory controller. The DRAM is clocked at a 1600MHz data rate, resulting in 12.8GB/s of memory bandwidth to the chip. The Exynos 5 Dual integrates two ARM Cortex A15 CPU cores as well as an ARM Mali-T604 GPU.

Of course they are. Their immense popularity in mobile phones and tablets, and their presence in both the DS and 3DS, is such a sign of failure. Attracting enough attention from Microsoft to port Windows to ARM is also a sign of massive failure.

What amazes me is how people can be blind to this, when it should have been obvious when they needed "helper chips" like the Broadcoms to do 1080P decode, something that a P4 from 8 years ago could do with relative ease.

The ARM arch was just never designed to scale to the kinds of levels we need in today's devices, and you are seeing that as they try to ramp up: the power budgets quickly go to shit. This is why you aren't seeing 3GHz ARM chips like you did X86 during the MHz wars, the power budget to

Next gen Intel x86 has a lower idle power draw and better performance/watt than current ARM cpus. I don't know what new ARM cpus may be out next year, but that means Intel has closed the gap on cellphones.

The A15 supports LPAE, so you can have a 40-bit physical address space with a 32-bit virtual address space. This lets you have up to 1TB of physical RAM in your tablet (which might be interesting if you wanted to memory map the flash directly), as long as no single application uses more than 4GB of address space. Given that on my 64-bit laptop, the process with the most virtual address space at the moment is using 1.2GB, I think that's probably a safe assumption for a few years...

But there's no reason why you couldn't have multiple processes taking up more than 4GB in total - the kernel would just map their address space when they were running and unmap it when they weren't.

Actually, it's simpler than this with LPAE. ARM has a tagged TLB, and the way LPAE is implemented you don't even need to do the unmapping. LPAE just extends the page table format, so the translation is from a 32-bit input to a 40-bit output instead of from a 32-bit input to a 32-bit output.

I will take the opposing view, which I call "reality". Intel is a full process level ahead and 14nm is coming in a little over a year. They are dominant in manufacturing, that helps a _lot_. Remind me what the A6X was manufactured at? Oh, yeah - 32nm. Maybe late next year we'll see 20nm ARM chips...maybe.

Your ignorance is fascinating. Intel are the biggest single manufacturer, yes, but ARM sells to nearly every other manufacturer. It's a different business model. Let's emphasize that for you: ARM are fab-less; they don't manufacture themselves. A consequence of this is that the process scales for ARM are usually a lot larger, and that's because the non-Intel manufacturers are a generation or two behind. (Intel spends a lot on staying ahead on that front.) But that's OK; the other manufacturers are usually u

Are Intel licensing that to other manufacturers for SOC use? No? Then it's not a real player; cutting the component count at the overall device level is more important than speed to phone makers, as it's cheaper for them like that.

Intel sells full-fledged SoCs equivalent to Samsung's Exynos line, etc. with Atom cores. Sure ARM licenses cores, but they aren't even a fabless chip company; they're a processor IP company. They sell Nvidia, Samsung, MediaTek etc. the right to use ARM IP in their SoCs, which are then fabricated by TSMC, GlobalFoundries etc. Of course Intel isn't going to license you their cores. They want to sell you an SoC they designed, not their IP. That's like saying Qualcomm isn't a real player in the SoC busin

I don't really understand the dynamics of how x86 will move into the smartphone arena. I can't imagine companies like Samsung, who manufacture their own ARM-based SOCs, would pay a premium for an Intel CPU that would cut massively into their profits, right? How does Intel get their foot in the door? Who's going to use Intel CPUs in their smartphones and tablets? And why is everyone so quick to write off ARM, a company whose processor designs are actively competing in mobile, and give the nod to Inte

Well, ARM designs the IPs that will go into those products... and they are ready to start selling the IP. It takes a couple of years to build SOCs around them, and then to build the devices.

I've been wondering about just how much lead time they gave their partners prior to this announcement. Given the rate at which AMD is burning cash and credibility, I doubt they can afford a lead-time that's too long.

More likely, there was some development going on in parallel between ARM and their partners. If I had to guess, AMD started the move to ARM about the time they began discussions on purchasing Seamicro, and soon after lost a bunch of senior executives and engineers (at least some of who probabl

What would "IP" mean anyway, given that it is utterly impossible to ever control every single human being and computer you sent a copy to

We're talking about CPU designs. If you copy it illegally, then you have to spend a billion or so dollars on a fab. Or spend a few million to buy some time on someone else's (and hope that they don't check with ARM that you're actually a licensee). Enforcing copyright is pretty much impossible on mass-market goods, but when you're talking about something that has a target market of maybe 100 companies (if you're wildly optimistic) then it's not exactly hard to keep track of them.

That depends. If you market it as an ARM core without passing their compatibility test suite, then they'd be very unhappy. Similarly, if you violated any of their patents. But read the rest of my post: the point is that the number of potential customers is small. We actually have an ARMv7 implementation that runs in an FPGA that ARM knows about. They're quite happy with it, because it gives our students experience working with their architecture. There's a fairly widely distributed design from Edinbur

Apple & Samsung can sell us non-modifiable devices with locked-down hardware; apparently this is supposed to make Linux take over

The vast majority of Samsung ARM devices are modifiable & do not have locked-down hardware. Apple on the other hand does, but I have no idea why you think Apple's locked down devices are going to help Linux take over (wtf have you been smoking?).

Who knows if this will be successful or not, but a world with AMD is a world with one more innovator bringing fresh, new ideas to the table and trying things that the members of a smaller oligopoly wouldn't.

How is it innovation to take something that sucks and xerox it 64 times? AMD's chips consume more power and perform fewer computational operations compared to Intel chips. They've been behind the curve and falling farther for a while, and it has everything to do with poor management. AMD is not an argument for competition driving innovation. Pick a better example.

AMD are not very good engineers. But at the absolute, very least, they provide value chips which, though significantly behind the curve, still force Intel to keep their prices fairly honest. Yes, AMD is significantly behind the curve, but at least they're trying new things and hopefully something will stick.

And on an irrelevant side note, let's not forget that Intel was wildly successful with their antitrust actions. They must have made 10x in profit what they got fined in the lawsuit....

Dunno, considering the budget they are working with I'd say getting performance even in the same ballpark as Intel is pretty impressive, especially once you factor in that they are a full process node behind at the fabs. Their multi-threaded top-end speed is the same or faster than Intel, it's only in IPC that they are still behind. Their performance/$ is tied or better.

I suppose AMD can't do much about their situation with the fabs, and it would be interesting to see what Bulldozer derivatives can do when they get down to the ~22nm process. But I think IPC is more important than you're giving it credit for. Taking laptops for example, it's unfortunate that we don't have any options that rival anything i5 or above. Trinity is decent on the low-end, ie. the i3/Celeron range, but imagine if they tried to put Bulldozer on a laptop - you'd have to settle for either Celeron-com

Man, why do you people keep referring to companies as multiple individuals? Why can't you just refer to the company as a single entity (which it is) and refer to the people who work for it as individuals? Why are the British so poor with English?

AMD used to employ some excellent engineers, who designed the K7. Then they let them go for short-term financial gain. Now they're gone.

English is called English because it's from England. By definition, what English people speak is correct English (the hint is in the name), and what the US speak is incorrect. Otherwise it would be called American, not English.

Unfortunately, that's how we refer to singular individuals when speaking in that tense too, because of one of the many weaknesses in the English language. If it wasn't so damned powerful, I'd have given up and learned something else by now.

You could have used "it" to refer to AMD (if it's a singular entity rather than a group of people, then by the same logic it's inanimate). Grammar is a matter of convention, and the convention is that companies are grammatically plural. You might as well ask why French has feminine tables.

Grammar is a matter of convention, and the convention is that companies are grammatically plural

Well, that's the convention in UK English, and as far as I can tell all of its non-American derivatives. Perhaps this says something about our attitude towards corporate personhood, but when I refer to a company as a single entity I refer to actions taken under their trademark, I don't think of it as a single thing except insofar as it's under a single legal document.

As much as I'd like to see AMD stay around, I don't think AMD will be important either way for the success of these chips.

They probably will be successful, as the natural successor of 32 bit ARM chips today, in a similar way as x86 chips have evolved from 32 to 64 bit. And thus find their way into tablets, netbooks, and -why not- low power (or densely packed) servers too. With or without AMD packing those chips in there.

I always wonder, why change the ABI so often? After all, the instruction set is only the interface between the C compiler and the underlying VLIW CPU engine. That's why the first 64-bit processors were actually slower in 64-bit than in 32 bits, and even today they aren't that much faster in 64-bit mode. I suspect it's all a patent game; that's why CPU designers keep modifying the ABI. Their patents are expiring all the time.

First, don't conflate the ABI and the ISA. The ABI, the Application Binary Interface, describes things like calling conventions, the sizes of fundamental types, the layout of C++ classes, vtables, RTTI, and so on. It is usually defined on a per-platform (OS-architecture pair) basis. This changes quite infrequently because changing it would break all existing binaries.

The ISA, Instruction Set Architecture, defines the set of operations that a CPU can execute and their encodings. These change quite frequently, but usually in a backwards-compatible way. For example, the latest AMD or Intel chips can still run early versions of MS DOS (although the BIOS and other hardware may be incompatible). ARM maintains backwards compatibility for userspace (unprivileged mode) code. You could run applications written for the ARM2 on a modern Cortex A15 if you had any. ARM does not, however, maintain compatibility for privileged mode operations between architectures. This means that kernels needed porting from ARMv5 to ARMv6, a little bit of porting from ARMv6 to ARMv7 and a fair bit more from ARMv7 to ARMv8. This means that they can fundamentally change the low-level parts of the system (for example, how it does virtual memory) but without breaking application compatibility. You may need a new kernel for the new CPU, but all of the rest of your code will keep working.

Backwards compatible changes happen very frequently. For example, Intel adds new SSE instructions with every CPU generation, ARM added NEON and so on. This is because each new generation adds more transistors, and you may as well use them for something. Once you've identified a set of operations that are commonly used, it's generally a good use of spare silicon to add hardware for them. This is increasingly common now because of the dark silicon problem: as transistor densities increase, you have a smaller proportion of the die area that can actually be used at a time if you want to keep within your heat dissipation limits. This means that it's increasingly sensible to add more obscure hardware (e.g. ARMv8 adds AES instructions) because it's a big power saving when it is used and it's not costing anything when it isn't (you couldn't use the transistors for anything that needs to be constantly powered, or your CPU would catch fire).

The problem for Transmeta was that no manufacturers understood, or could market, a low-power laptop or tablet. According to popular magazines at the time (the sort with large Intel ad budgets), it was 40% slower than a Pentium 4 and therefore a complete and utterly miserable failure that you should avoid like the plague.

The fact that it drew only 3 watts at max load was somehow glossed over.

According to them, ARM Cortex A57 core is a tweaked ARM Cortex A15 core with 64 bit support. And ARM Cortex A53 core is a tweaked ARM Cortex A7 core with 64 bit support. It is possible to mix A57 and A53 cores in the same die to improve efficiency.

What I would like to see is this kind of approach in the x86 world. Imagine having an AMD processor with two fast cores (Piledriver's successor, Steamroller) for heavy processing and two lower cores for longer battery life (Bobcat's successor Jaguar).

What I would like to see is this kind of approach in the x86 world. Imagine having an AMD processor with two fast cores (Piledriver's successor, Steamroller) for heavy processing and two lower cores for longer battery life (Bobcat's successor Jaguar).

I'll bet you a dollar it would be more efficient to spend the silicon creating more separate functional units that do the same work as the existing functional units, but which could be switched off when unneeded. You'd still have half the CPU, but you wouldn't have any cores that just sat around fondling themselves most of the time.

The problem is the static leakage of transistors. It increases as the node width decreases, and for a given node you have two choices to generate a transistor: either high-speed and high-leakage, or low-speed and low-leakage. Even with DVFS enabled, you will have better power results if you use the CPU with the slow transistors than the one with the fast transistors. Hence the switching between two types of cores with different optimizations but executing the same code.

A Fortune 500 datacenter can easily cost up to a million dollars a year in electricity! I/O, not CPU performance, is the bottleneck in most servers, so the slower ARM won't matter that much. Also, a kick-ass GPU can improve supercomputing a lot more than a tweaked-out Xeon, if AMD can pull it off with decent graphics for scientific workloads.

The main selling point of ARM (even more important than power efficiency) that kept them alive all this time is price.

No.

There are plenty of cheap embedded cores. AFAICT ARM actually command a bit of a premium.

And also, the point was about price/performance. Basically as soon as the hardware gets exotic in some way, the price/performance usually goes down. At which point, you may as well go with commodity hardware unless you have very specialised needs.

ARM64's ISA is radically different from ARM32. All of the things that make ARM "ARM" are gone, such as conditional execution, having the program counter as a general-purpose register, and more. Not only that, the binary encoding is totally different. The binary encoding for ARM64 is a total confusing mess compared to ARM32. I wouldn't say that ARM64 was a well designed ISA.

Other processors made much cleaner transitions between 32 and 64-bit such as MIPS, Power/Power PC and Sparc. Even i386 and x86-64 are much closer than ARM32 vs ARM64.

ARM64's ISA is radically different from ARM32. All of the things that make ARM "ARM" are gone, such as conditional execution, having the program counter as a general-purpose register, and more. Not only that, the binary encoding is totally different. The binary encoding for ARM64 is a total confusing mess compared to ARM32. I wouldn't say that ARM64 was a well designed ISA.

The binary encodings are a mess, yes, due mostly to the urge to adapt and produce some consistency with the AArch32 instructions. The ARM ABI has seriously evolved and the encoding possibilities are quite... nasty now if you look at ARMv7.

Thankfully, the assembler takes care of that for us.

Conditional execution is nice, but it really interferes with modern architectures. The ARMv8 core is a fully speculative, out-of-order with register renaming implementation. Conditional execution breaks this as the processor has to track more state since any combinations of instructions in the stream could have any combination of conditional execution.

Ditto the PC - it was nice to be able to jump by simply writing to the PC, but man does it complicate internal design if any instruction can arbitrarily move the PC to any register value. In the end, the few uses of conditional execution, and of the ability to move anything to the PC without using a branch or return style instruction, were probably so limited there was no point.

Oh, and there are 31 registers - X0 through X30. The 32nd register is special depending on the instruction - for ADD and SUB, "X31" means the stack pointer. For most other instructions, it means the zero register (reads as zero), something borrowed from MIPS, and allowing interesting register-only instruction forms to be used when the immediate value is zero. It does result in oddball uses though, like
SUB SP, X0, #0   ; SP = X0 - 0: "X31" as the destination here means the stack pointer

If you're a system level programmer, AArch64 is MUCH nicer (no more damned coprocessors). I know, I've done a fair bit of it.

I'd wager that there isn't a conditional-move uOP when the x86 cmov instruction is decoded. In fact, even on the original P6 architecture there wasn't a major speed improvement from using cmov, and cmov performance varies considerably from processor to processor.

Conditional instructions are also available in AArch64. But instead of the condition affecting whether the instruction is executed or not, as in AArch32, it affects the instruction's behaviour. For example, the following AArch64 instruction can do what cmov does.

CSEL Wd, Wn, Wm, cond   ; Wd = if cond then Wn else Wm

It's more efficient than the AArch32 case, where you have to use two instructions to achieve the same result (a conditional MOV followed by a MOV with the inverted condition).

All of the things that make ARM "ARM" are gone, such as conditional execution, having the program counter as a general-purpose register, and more

The advantage of conditional instructions is that you can eliminate branches. The conditional instructions are always executed, but they're only retired if the predicates held. ARMv8 still has predicated select instructions, so you can implement exactly the same functionality, just do an unconditional instruction and then select the result based on the condition. The only case when this doesn't work is for loads and stores, and having predicated loads and stores massively complicates pipeline stage interactions anyway, so isn't such a clear win (you get better code density and fewer branches, but at the cost of a much more complex pipeline).

They also have the same set of conditional branches as ARMv7, but because the PC is not a GPR, branch prediction becomes a lot easier. With ARMv7, any instruction can potentially be a branch, and you need to know that the destination operand is the pc before you know whether it's a branch. This is great for software: you can do indirect branches with a load instruction, for example. Load with the pc as the target is insanely powerful and fun when writing ARM assembly, but it's a massive pain for branch prediction. This didn't matter on ARM6, because there was no branch predictor (and the pipeline was sufficiently short that it didn't matter), but it's a big problem on a Cortex A8 or newer.

Now, the branch predictor only needs to get involved if the instruction has one of a small set of opcodes. This simplifies the interface between the decode unit and the branch predictor a lot. For example, it's easy to differentiate branches with a fixed offset from ones with a register target (which may go through completely different branch prediction mechanisms), just by the opcode. With ARMv7, an add with the pc as the destination takes two operands, a register and a flexible second operand, which may be a register, a register with the value shifted, or an immediate. If both registers are zero, then this is a fixed-destination branch. If one register is the pc, then it's a relative branch. Because pretty much any ARMv7 instruction can be a branch, the branch predictor's interface to the decoder has two big disadvantages: it's very complex (not good for power), and it often doesn't get some of the information that it needs until a cycle later than one working just on branch and jump encodings would.

Load and store multiple are gone as well, but they're replaced with load and store pair. These give slightly lower instruction density, but they have the advantage that they complete in a more predictable amount of time, once again simplifying the pipeline, which reduces power consumption and increases the upper bound on clock frequency (which is related to the complexity of each pipeline stage).

They've also done quite a neat trick with the stack pointer. Register 31 reads as zero, like the hard-wired zero register on most RISC architectures, but when used as the base address for a load or store it becomes the stack pointer in ARMv8, so they effectively get stack-relative addressing without having to introduce any extra opcodes (e.g. push and pop on x86) or make the stack pointer a GPR.

ARMv8 also adds a very rich set of memory barriers, which map very cleanly onto the C[++]11 memory ordering model. This reduces bus traffic for cache coherency, which is a big win for power efficiency in multithreaded code, because it means it's easy to do the exact minimum of synchronisation that the algorithm requires.

As an assembly programmer, I much prefer ARMv7, but as a compiler writer ARMv8 is a clear win. I spend a lot more time writing compilers than I spend writing assembly (and most people spend a lot more time using compilers than writing assembly). All of the things that they've removed are things that are hard to generate from a compiler (and hard to implement efficiently in silicon) and all of the things that they've added are useful for compilers. It's the first architecture I've seen where it looks like the architecture people actually talked to the compiler people before designing it.

Might AMD just license the instruction set and not the hardware design? Could they then bolt an ARM instruction decoder onto their core right next to the x86 decoder and run code for either architecture on mostly the same hardware?

I work at a tech company, and almost everyone I know there owns an APU-based machine - generally for HTPC uses, or so they say. Yes, it is true that the fastest chips are made by Intel, but when you look at the cost of a typical (not high-end) machine, AMD is hard to beat, especially when the graphics in an APU will work fine for you.

"Yes, it is true that the fastest chips are made by Intel, but when you look at the cost of a typical (not high-end) machine, AMD is hard to beat, especially when the graphics in an APU will work fine for you."

Interesting... care to give an example? In most cases, I feel I can spec out an Intel based machine for the same price (also using the IGP) that is fast enough for HTPC use and runs cooler and quieter while using half the power... unless you actually need the extra GPU power or the additional cores AMD likes to throw in at the same price point, what's the point in going with AMD? A Sandy Bridge Celeron/Pentium is more efficient and provides enough processing power for any HTPC I've seen.

I'd actually like to start buying AMD again in order to give them some support, but Intel's where the efficiency bang-for-the-buck is at right now... this may be different for those of you who don't pay your own power bill *cough*mom's_basement*cough*:p

I'm sure Intel would luuuuve to start offering chips at 2X the price of the equivalent AMD one again; if only iPads, phablets, web apps, the console-led stagnation of game requirements, cloud computing, and Windows XP being almost good enough weren't killing all the demand at higher price points.

Yep. Intel needs them to appear to have competition so various governments' antitrust investigative units will keep their hands off Intel's business practices.

OTOH, it's (to me) obviously a long shot that AMD can survive long enough to see, and perhaps help, ARM do to Intel what Intel did to Sun, IBM and other high-end chipmakers. Perhaps they (AMD) can find funding to last the time it will take for ARM to defeat x86-64.

Or perhaps it won't take very long at all, considering I could replace my ancient deskt

It helps to have AMD around, but it doesn't prevent antitrust issues. Even if AMD has around 20% of the market share in consumer desktops, Intel can still be dragged over the coals for anti-competitive behaviours. Any company's ability to use anti-competitive measures pretty much depends on it being dominant, in either the market in question or another that would grant it a significant advantage. e.g. John's Washing Machines using their market dominance in washing machines to break into the toaster market by

Amen. And we'd still be running IE 6 with an annual coat of paint slapped on it. Seen the same bullshit in state protected monopolies and cartels, such as telecoms and banking. Airlines as well before low cost carriers came in and changed the game.

AMD though have a tough fight. Intel's manufacturing alone puts them way ahead of the game. Doesn't mean though that AMD can't grab a segment.

Itanium was just a recompile. The problem was that the resulting code was then typically very slow, because Itanium is a complete bitch as a compiler target. In contrast, ARMv8 is a beautiful architecture to target. To give you some idea of how easy it is, the ARMv8 back end for LLVM was written entirely by one guy in under a year and already performs well (although there's still room for optimisation). LLVM, GCC and ICC all still suck at producing good code for Itanium, and they have had hundreds of man years of effort thrown at them.