Unlike the A7 and iOS, Android and ARM won't be able to go 64-bit overnight.

Yesterday, Ars Reviews Editor Ron Amadeo told me he had observed some interesting activity in the GitHub repository for Android. Several commits made over the last couple of weeks have made extensive mention of "AArch64," the architecture used by 64-bit ARM SoCs.

While the vast majority of the Android Open Source Project (AOSP) code isn't released to the community until after Google announces a new Android version, the GitHub repository is described as "a read-only mirror" of common AOSP repositories. This isn't the first time the open source community has contributed 64-bit code to Android, nor does this necessarily mean that we'll see a 64-bit ARM build of Android running on shipping hardware any time soon. However, at least a few of these commits have Google employee names on them, suggesting that work is being done in earnest.

Intel has already announced a 64-bit version of Android 4.4 that will run on its own Bay Trail Atom processors, but Intel-powered Android devices remain a relatively small niche for now. Android and its developers won't be able to fully embrace 64-bit until the ARM ecosystem does, and even once the hardware support is there it will take some time for the software support to follow. The GitHub commits make this as good a time as any to talk about ARM and Android's path to 64-bit support and how it mirrors the move from 32-bit to 64-bit that happened in PCs a decade or so ago.

Why 64-bit?

64-bit is hardly an essential feature for your phone today, but that was also the case when the first Athlon 64 CPUs dropped back in 2003. The then-current Windows XP would run happily in a single gigabyte of RAM or even less, and the Athlon 64 found success less because of its 64-bit-ness and more because it was also good at running 32-bit code.

In the long run, though, 64-bit hardware gets you a couple of things that will prove to be beneficial. The most obvious is full support for more than 4GB of RAM, a limit that Android devices may well be running up against in the next year or two. Most high-end Android phones ship with 2GB of RAM today, though newer models like the Galaxy Note 3 are beginning to creep up to 3GB. Some chips already partially support this via ARM's Large Physical Address Extension, or LPAE, but full 64-bit support will be required for apps to take advantage of more RAM.
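As a rough illustration of the ceiling in question (a generic sketch with a hypothetical helper name, not anything from AOSP): the width of a pointer caps how many bytes a single process can even name, independent of how much physical RAM the device has.

```c
#include <stdint.h>

/* Bytes of flat virtual address space a pointer of the given width can
 * name: 2^(8 * pointer_bytes). For 4-byte (32-bit) pointers that works
 * out to 4GiB, the per-process ceiling discussed above; LPAE raises the
 * physical limit, but each process still sees at most this much. */
uint64_t max_address_space_bytes(unsigned pointer_bytes)
{
    if (pointer_bytes >= 8)          /* 2^64 won't fit in a uint64_t */
        return UINT64_MAX;
    return 1ULL << (8u * pointer_bytes);
}
```

With 4-byte pointers this returns 4,294,967,296 bytes, which is exactly the 4GB figure the article keeps coming back to.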

Many applications don't need and won't benefit from having this much memory, but that's not all that 64-bit ARM brings to the table. The second benefit, which we saw firsthand in our review of the iPhone 5S, is that the ARMv8 instruction set is inherently more efficient than the 32-bit ARMv7 instruction set it replaces. John Poole of Geekbench summed it up to us best:

"[ARM has] cleaned up the architecture, they've removed a lot of the cruft that's built up over the years," Poole told Ars back in September. "They've also gone ahead and updated with things like more registers and better SIMD instructions, all while keeping the instruction encoding size exactly the same. There's roughly a comparable number of instructions… With the extra floating point registers and the extra SIMD instructions, especially if you've got numerically intensive code that can vectorize well… you're going to see a great increase in performance. I think in some cases it went up 200 percent."

64-bit code running on Apple’s A7 is capable of running around 33 percent faster than 32-bit code running on the same processor, according to our CPU benchmarks. As mobile SoCs begin to run up against performance-limiting thermal and power walls, that’s the kind of performance increase that’s difficult to ignore.
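For a concrete sense of what "code that can vectorize well" looks like, here is a minimal sketch (not taken from any benchmark mentioned above): a dependency-free loop that compilers can map onto NEON SIMD instructions, and which benefits from AArch64's larger register file.

```c
#include <stddef.h>

/* A classic vectorizable loop (a "saxpy"): each iteration is
 * independent, so the compiler can process several floats per SIMD
 * instruction. The extra vector and general-purpose registers in
 * ARMv8 give the optimizer more room when unrolling loops like this. */
void saxpy(float a, const float *x, float *y, size_t n)
{
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```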

Waiting for hardware

So 64-bit ARM hardware has some desirable features, even if it's not really a must-have feature in the here-and-now. It's just going to be a while before we have 64-bit ARM chips from anyone other than Apple.

These will only begin to trickle out in the second half of this year, when products based on ARM’s 64-bit Cortex A53 and A57 architectures are slated for release. They won’t start showing up in high-end Android phones and tablets until 2015, though—so far the 64-bit chips we know about are the server-targeted Opterons from AMD and the midrange Snapdragon 410 from Qualcomm.

The high-end Snapdragon 805 that Qualcomm will start shipping in the second half of this year is still based on a 32-bit Krait architecture, and that chip will probably end up in most of this fall's fastest devices (barring a drastic change in Qualcomm's roadmap or its OEM partnerships). The 64-bit flavor of Nvidia's new Tegra K1 is scheduled to ship in the latter part of this year, but Nvidia's chips don't have the reach that Qualcomm's do.

Hardware needs software

64-bit capable hardware doesn’t mean a whole lot without 64-bit software, and at the very least, the ARM vendors’ tardiness here gives Google plenty of time to get 64-bit ARM support baked into Android and Google’s own apps. It’s entirely possible that the first high-end smartphones and tablets of 2015 will introduce 64-bit ARM hardware and a 64-bit ARM version of Android simultaneously.

Even assuming that these 64-bit phones ship with a 64-bit version of Android on day one, the software and hardware fragmentation native to Google’s Android ecosystem means that it will take some time to reach critical mass. As we’ve already seen in the long and protracted move from pre-4.0 versions of Android to post-4.0 versions of Android, app developers go where the users are. Even now, going to the Google Play store and searching for any given app will still return apps that look like they were designed to fit in on Gingerbread, not on one of the five versions of Android that Google has shipped since then. It’s going to take just as long for developers to justify building 64-bit versions of their applications.

Update: As some of you have pointed out, the just-in-time (JIT) compiling Google uses with the Dalvik VM and the ahead-of-time (AOT) compiling used with ART can compile apps to be natively 64-bit automatically, though there are other features of 64-bit ARM CPUs that might require more effort from developers.

Even in the Apple ecosystem, where 64-bit phones and tablets have existed for a few months now, common apps are still more likely to be 32-bit than 64-bit. Hooking an iPhone 5S up to Xcode's Activity Monitor and launching a few common applications makes that clear—even the ones that have already been redesigned for iOS 7 are still usually 32-bit.

Most major apps have been updated for iOS 7, but they're still 32-bit ARM apps, not 64-bit ARM64 ones.

Andrew Cunningham

Assuming that our projections are accurate, Android’s 64-bit transition will begin in earnest right around the time that Apple completes its own. We can expect Android’s transition to 64-bit to follow the same general path that Windows did a decade ago. AMD struck first in 2003 with a 64-bit CPU that also ran 32-bit instructions. By the time Intel's Core 2 Duo hit in 2006, most new desktops and laptops were shipping with chips that could execute 64-bit code. After Windows 7 came out in 2009, new computers gradually began shipping with 64-bit Windows installed by default rather than 32-bit Windows (Microsoft still offers 32-bit Windows, which may or may not change with Windows 9).

Smartphones and tablets will likely follow a similar path on a more compressed timeline. We can expect the first chips to trickle out late this year, the very first 64-bit-capable Android phones to begin hitting shelves late this year and early next year, and 64-bit Android to become a high-end mainstay in late 2015 or early 2016. Surprise announcements could easily accelerate this schedule, but it's going to be a while before 64-bit Android becomes the norm.

I'm not sure how Android handles things, but for Windows the whole memory thing was ridiculously complicated to explain to people. Yes, with 32-bit addressing and the standard block size you end up with 4GB of addressable space. In any modern operating system each application should get its own address space, and the memory manager takes care of mapping that address space to actual locations, either in memory or swapped out someplace. For Windows XP this was split, with 2GB usable by the application (by default) and 2GB for the OS. In the OS portion you had ranges for memory-mapped devices, like the portion of your video card's memory that was mapped to someplace the OS could access, or other devices like network cards, hard drives, etc. Windows also used its portion to track things and for passing information to and from the application. Yes, there was a switch for Windows XP (/3GB) that changed this to 3GB for the app and 1GB for the OS, but this could cause the OS to run out of space and cause problems like network card drivers failing and losing network connectivity, or graphics drivers crashing, so it was problematic at best. In Vista and later you could also fine-tune that split to something like 2.5/1.5. On top of that, the 32-bit app still had to be compiled with a specific flag that said it knew how to handle more than 2GB of memory, or that is all it would be able to see. A couple of random MS docs have some information about this.

How nasty of a hack PAE is really depends on what you are looking for. If you're letting your system have access to more than 4GB of RAM so that you can have multiple applications using 2GB of RAM simultaneously, it really shouldn't be too bad, as the memory manager is already hiding the true physical location from the app anyway. For a single application that needs to use more than 2/3/4GB of memory, PAE is horrible to deal with, and going to a 64-bit system is much better.

In the long run, though, 64-bit hardware gets you a couple of things that will prove to be beneficial. The most obvious is full support for more than 4GB of RAM,

This is not true. It gets you support for more than 4GB of address space per process. This is not the same thing as 4GB of system RAM.

Quote:

a limit that Android devices may well be running up against in the next year or two.

No, it'll be quite a few years before individual phone apps need that kind of address space.

Quote:

Most high-end Android phones ship with 2GB of RAM today, though newer models like the Galaxy Note 3 are beginning to creep up to 3GB. Some chips already partially support this via ARM's Large Physical Address Extension, or LPAE (PDF), but full 64-bit support will be required for apps to take advantage of more RAM.

Yes and no. If you want a single app to be able to use more than several GB of RAM, then yes. But if you have a phone with 4 or 6GB of RAM, you are probably fine with 32-bit mode, because you won't want to give most of your RAM to just one process. It'll probably be a while before we see apps that need that kind of memory on a phone.

Quote:

Even assuming that these 64-bit phones ship with a 64-bit version of Android on day one, the software and hardware fragmentation native to Google’s Android ecosystem means that it will take some time to reach critical mass. As we’ve already seen in the long and protracted move from pre-4.0 versions of Android to post-4.0 versions of Android, app developers go where the users are. Even now, going to the Google Play store and searching for any given app will still return apps that look like they were designed to fit in on Gingerbread, not on one of the five versions of Android that Google has shipped since then. It’s going to take just as long for developers to justify building 64-bit versions of their applications.

What are you talking about? Android apps are built in Java. They become 64 bit when you run them in a 64 bit runtime. Generally you won't have to do anything.

Yes, I also don't agree with the statement that it will take long to see 64-bit apps. Android is VM based, so on 64-bit all bytecode is JITted to 64-bit. And since native C/C++ code on Android can easily be compiled for ARM, ARMv7, MIPS, and x86 with just a simple statement, you can expect that most apps will also add ARM64. I don't know how iOS apps work, but Android APKs can include multiple architectures in them for NDK code.

For real optimisations that need manual coding to take advantage of (SIMD?), I can imagine it will indeed take a while.

There's really going to be no reason any non-game Android app would need 64-bit, it's still incredibly rare to have a desktop app take up more than 4GB RAM. The increased system RAM for background tasks and caching is where the benefits will come in.

This is not true. It gets you support for more than 4GB of address space per process. This is not the same thing as 4GB of system RAM.

Are you sure? Isn't 32-bit Windows limited to 3.x GB for the whole OS? Only with 64-bit Windows do you have >3GB available for the whole OS. I would guess that also goes for Linux. So it's both per process and for the whole OS.

This is not true. It gets you support for more than 4GB of address space per process. This is not the same thing as 4GB of system RAM.

Are you sure? Isn't Windows 32bit limited to 3.x GB for the whole OS?

Depends on the version. Windows Vista/7/8 are. XP wasn't until SP2 or 3. Eventually MS was forced to disable addressing more than 4GB of physical address space because so few device drivers actually supported it.

It's really surprising that all those top line iOS apps are still running 32 bit. I have a bunch of smallish apps in the store, and when I needed to bring it to 64-bit so as to use the face recognition library, it took around 20 minutes to tidy up my use of 'int'. It did double my executable's size though.
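The "tidy up my use of 'int'" work the commenter describes usually amounts to fixing assumptions like this one (a generic C sketch, not the commenter's actual code):

```c
#include <stdint.h>

/* On 32-bit iOS, int and pointers were both 4 bytes, so code could
 * stash a pointer in an int and get away with it. On arm64, pointers
 * are 8 bytes and that cast silently drops the top half. intptr_t is
 * the integer type guaranteed to match pointer width on both. */
_Static_assert(sizeof(intptr_t) == sizeof(void *),
               "intptr_t always matches the pointer size");

intptr_t stash_pointer(void *p)
{
    /* was: return (int)p;  -- truncates on a 64-bit build */
    return (intptr_t)p;
}
```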

This is not true. It gets you support for more than 4GB of address space per process. This is not the same thing as 4GB of system RAM.

Are you sure? Isn't Windows 32bit limited to 3.x GB for the whole OS?

Depends on the version. Windows Vista/7/8 are. XP wasn't until SP2 or 3. Eventually MS was forced to disable addressing more than 4GB of physical address space because so few device drivers actually supported it.

Are you talking about LPAE? Without it, a 32-bit OS is indeed limited to 4GB of address space. LPAE is a cheat, a workaround. I enabled it in an XP box years ago, and it made it flaky and unstable. I disabled it, bought 64-bit Vista, repartitioned, and went dual boot. (Vista 64-bit was flaky too, but that's a different story.) I lived with just over 3GB in XP and the full 4GB in Vista.

Developers don't need to make 64-bit versions of Android apps, except NDK apps. If Dalvik/ART is 64-bit, all Java apps will automatically be 64-bit.

This. If anything, any given Android manufacturer can make the switch faster than Apple could in terms of how common apps support 64-bit, since most non-games are just using the Dalvik VM. Whether or not any other developers are upgrading to 64-bit will be mostly irrelevant, since none of them will be using 64-bit chips. Them upgrading (or not) will have made no difference to whether, say, Samsung can take advantage of a 64-bit ARM chip in their phones. The only major difference will be long-term for game developers.

This is not true. It gets you support for more than 4GB of address space per process. This is not the same thing as 4GB of system RAM.

Are you sure? Isn't Windows 32bit limited to 3.x GB for the whole OS?

Depends on the version. Windows Vista/7/8 are. XP wasn't until SP2 or 3. Eventually MS was forced to disable addressing more than 4GB of physical address space because so few device drivers actually supported it.

Until XP SP1, actually. And it worked fine if you had BIOS support for it and the drivers for all your stuff were written according to MS spec. Most were, but notably Nvidia's weren't, and caused BSODs when you tried it. After trying to get Nvidia to fix their broken garbage for a while, MS eventually gave up and disabled the feature, pointing everyone to 64-bit Windows (which was coming out soon at that point) instead.

I'm not sure how Android handles things, but for Windows the whole memory thing was ridiculously complicated to explain to people. Yes, with 32-bit addressing and the standard block size you end up with 4GB of addressable space. In any modern operating system each application should get its own address space, and the memory manager takes care of mapping that address space to actual locations, either in memory or swapped out someplace. For Windows XP this was split, with 2GB usable by the application (by default) and 2GB for the OS. In the OS portion you had ranges for memory-mapped devices, like the portion of your video card's memory that was mapped to someplace the OS could access, or other devices like network cards, hard drives, etc. Windows also used its portion to track things and for passing information to and from the application. Yes, there was a switch for Windows XP (/3GB) that changed this to 3GB for the app and 1GB for the OS, but this could cause the OS to run out of space and cause problems like network card drivers failing and losing network connectivity, or graphics drivers crashing, so it was problematic at best. In Vista and later you could also fine-tune that split to something like 2.5/1.5. On top of that, the 32-bit app still had to be compiled with a specific flag that said it knew how to handle more than 2GB of memory, or that is all it would be able to see. A couple of random MS docs have some information about this.

How nasty of a hack PAE is really depends on what you are looking for. If you're letting your system have access to more than 4GB of RAM so that you can have multiple applications using 2GB of RAM simultaneously, it really shouldn't be too bad, as the memory manager is already hiding the true physical location from the app anyway. For a single application that needs to use more than 2/3/4GB of memory, PAE is horrible to deal with, as you have to start actually tracking the pages and deal with a whole bunch of other headaches, if the OS even allows you to do that. Going to a 64-bit system is much better if a single app needs more memory.

Most of this is from memory, due to supporting Photoshop back when 64-bit was first coming out and Photoshop made the transition from 32-bit to 64-bit.

Yes, I also don't agree with the statement that it will take long to see 64-bit apps. Android is VM based, so on 64-bit all bytecode is JITted to 64-bit. And since native C/C++ code on Android can easily be compiled for ARM, ARMv7, MIPS, and x86 with just a simple statement, you can expect that most apps will also add ARM64. I don't know how iOS apps work, but Android APKs can include multiple architectures in them for NDK code.

For real optimisations that need manual coding to take advantage of (SIMD?), I can imagine it will indeed take a while.

There's really going to be no reason any non-game Android app would need 64-bit, it's still incredibly rare to have a desktop app take up more than 4GB RAM. The increased system RAM for background tasks and caching is where the benefits will come in.

I'm not sure why you are getting down voted. Maybe the dram lobbyist is trawling the forum.

The kind of programs I run on a phone aren't memory hogs. I do that on a desktop, and they can run for days. About the only use I would ever have for more dram on a phone is virtualization.

You could also use the extra dram for multiple users, which in the case of a phone would be having a work and personal account on the same device at the same time. Something like a next generation of BlackBerry Balance.

There's really going to be no reason any non-game Android app would need 64-bit, it's still incredibly rare to have a desktop app take up more than 4GB RAM. The increased system RAM for background tasks and caching is where the benefits will come in.

I'm not sure why you are getting down voted. Maybe the dram lobbyist is trawling the forum.

The kind of programs I run on a phone aren't memory hogs. I do that on a desktop, and they can run for days. About the only use I would ever have for more dram on a phone is virtualization.

You could also use the extra dram for multiple users, which in the case of a phone would be having a work and personal account on the same device at the same time. Something like a next generation of BlackBerry Balance.

I know I personally down-voted him because he is once again reiterating the demonstrably false idea that the only benefit to this 64-bit transition is an increase in memory. Very few people care about the increase in memory, and it is not going to be the driving force behind the transition any time soon.

Linus Torvalds, who ought to know better than you or me, begs to differ:

I think you are confusing PAE (what that page is referring to) and LPAE (what you said above).

More generally, I don't disagree with his argument about address space, but I think it's silly to blindly apply it to mobile devices, which have very different use cases than servers and desktops. This doesn't magically make it easier to port everything to a new arch, though (because it's not). Nor does it make LPAE a hack (again, because it's not). When you look at how it would work, LPAE is a useful mechanism.

I down-voted you because that improved-register story is borderline bunk. I've played with all sorts of optimization flags during compilations of C or C++ and it hardly makes a difference.

You are wrong on this specific point. gcc has a lot of trouble with ARMv7's constrained register options. (More than) doubling the available registers for most code is actually a pretty big deal, both in terms of performance and power efficiency. I have personally rewritten a decent amount of C code into ARM asm specifically to work around register allocation problems on gcc/ARM/Android. It is not "bunk".

I still don't really understand what Andrew thinks is going on with Android though. His article seems to imply that there is some kind of long transition to 64 bit going on in the Android world. He even compares it to Windows and iOS which is just plain weird.

Android is arch independent. Windows is not. iOS is not. People are already running Android on 64-bit x86 machines with 8GB of RAM. It works fine, and uses the extra registers on x86-64 too. Why wouldn't it? Android is arch independent. It's nothing like Windows or even iOS.

What is actually happening in those AOSP and github commits is that the underlying dalvik/art frameworks are being updated to support objects that actually span a 32 bit boundary. Right now a lot of stuff just assumes that you won't have a 4GB buffer/image/whatever. So while you can have as much RAM as you want, you may not actually be able to do things like allocate some objects larger than 2/4GB.

This may seem weird, but it's actually pretty normal for non-native frameworks. .NET in 64-bit mode on Windows has lots of similar restrictions (e.g. arrays can't be more than 2^32 entries long) that are just now being relaxed in .NET 4.5+. Generally these things don't matter because even in 64-bit mode normal programs don't allocate billions of entries in a single array, single images that span gigabytes, etc.
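The restriction being described reduces to a one-line check (an illustrative sketch, not actual Dalvik/ART or .NET code): a runtime that stores lengths in a signed 32-bit field simply cannot describe a larger object, regardless of how much RAM the process has.

```c
#include <stdbool.h>
#include <stdint.h>

/* True if a collection of this many elements can be described by a
 * signed 32-bit length field, the kind of internal assumption those
 * framework commits have to relax. INT32_MAX is 2,147,483,647, so
 * even a process with plenty of RAM hits this wall on one object. */
bool fits_i32_length(uint64_t element_count)
{
    return element_count <= (uint64_t)INT32_MAX;
}
```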

There's really going to be no reason any non-game Android app would need 64-bit, it's still incredibly rare to have a desktop app take up more than 4GB RAM. The increased system RAM for background tasks and caching is where the benefits will come in.

I'm not sure why you are getting down voted. Maybe the dram lobbyist is trawling the forum.

The kind of programs I run on a phone aren't memory hogs. I do that on a desktop, and they can run for days. About the only use I would ever have for more dram on a phone is virtualization.

You could also use the extra dram for multiple users, which in the case of a phone would be having a work and personal account on the same device at the same time. Something like a next generation of BlackBerry Balance.

I know I personally down-voted him because he is once again reiterating the demonstrably false idea that the only benefit to this 64-bit transition is an increase in memory. Very few people care about the increase in memory, and it is not going to be the driving force behind the transition any time soon.

I down-voted you because that improved-register story is borderline bunk. I've played with all sorts of optimization flags during compilations of C or C++ and it hardly makes a difference.

The only 32 bit code I run is for embedded linux, and I don't anticipate going to 64 bits soon since 500M is plenty.

Now I can see tablets needing 64 bits, which may be one of the driving forces. Remember, Google owns the phone market and will soon own the tablet market. Resistance is futile.

Not only are you wrong for the reason redleader laid out, but I was speaking to the overall architecture improvements. You seem to feel that the >4GB limit and increased register size are the only things that changed in the new architecture.

There's really going to be no reason any non-game Android app would need 64-bit, it's still incredibly rare to have a desktop app take up more than 4GB RAM. The increased system RAM for background tasks and caching is where the benefits will come in.

There's an unintended, or maybe intended, benefit of a large memory area: it could be used to randomize code location as a measure against certain hacks.

[...] And dammit, in this age and date when almost everybody has a gigabyte of RAM in any new machine, anybody who still thinks that "not that many people need 64-bits" is simply not aware of what he's speaking of. [...]

It's really surprising that all those top line iOS apps are still running 32 bit. I have a bunch of smallish apps in the store, and when I needed to bring it to 64-bit so as to use the face recognition library, it took around 20 minutes to tidy up my use of 'int'. It did double my executable's size though.

There are actually a fair number that support 64 bit. My AutoCad 360, for example, moved to 64 bit quickly, and has a major increase in performance.

But most apps, top line or not, don't need 64 bits. The advantages lie elsewhere overall.

I still don't really understand what Andrew thinks is going on with Android though. His article seems to imply that there is some kind of long transition to 64 bit going on in the Android world. He even compares it to Windows and iOS which is just plain weird.

Android is arch independent. Windows is not. iOS is not. People are already running Android on 64-bit x86 machines with 8GB of RAM. It works fine, and uses the extra registers on x86-64 too. Why wouldn't it? Android is arch independent. It's nothing like Windows or even iOS.

I'll just note here that iOS does run on Intel processors, in the form of the iOS Simulator. Runs pretty well, too. All of the apps in Xcode are compiled down to the Intel version to run in the simulator, and Apple compiles them down to ARM code for release.

Or something like that. But I'm pretty sure iOS itself is platform agnostic for users.

It's nice to get an architectural cleanup, but will it be possible to continue running applications that don't need large address spaces as 32 bit? Will there be a 32 bit Dalvik still available? I ask because if you don't need to address more than 4 GB in an application, doubling pointer size can greatly expand your program's memory footprint for no gain. As an extreme example, I had an in-memory graph database that tracked many small objects and it went from ~1500 MB to ~2900 MB resident size for a typical use case when it was compiled for 64 bit Linux vs. 32 bit. It didn't even run any faster as native 64 bit, perhaps due to the extra memory traffic. I preferred to build the application 32 bit even though it would be deployed under 64 bit OS.

Especially on a mobile device, I'm concerned that forcing everything 64 bit will make everything more RAM-hungry, which requires more RAM in the device even if most apps don't need large address spaces, which also means increased power consumption for RAM refresh.
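The footprint growth the commenter measured comes mostly from pointer fields doubling in size. A sketch of the effect, and of a common mitigation (32-bit indices into a node array instead of raw pointers, similar in spirit to the x32 idea raised elsewhere in this thread; the struct names are made up for illustration):

```c
#include <stdint.h>

/* A binary-tree node holding two links and a key. With raw pointers,
 * each link is 4 bytes on a 32-bit build but 8 bytes (plus alignment
 * padding) on a 64-bit one. Replacing the links with 32-bit indices
 * into a node array keeps the footprint at 12 bytes on either build. */
struct node_ptr   { struct node_ptr *left, *right; uint32_t key; };
struct node_index { uint32_t left, right, key; };
```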

Or something like that. But I'm pretty sure iOS itself is platform agnostic for users.

Not really, although I can see why you'd think that from Andrew's article. iOS apps still have to be ported to 64 bit and then recompiled from source. You most likely won't have to do that for Android unless you're using the NDK which is uncommon. If you don't have the source and the developer doesn't release a 64 bit binary, you are out of luck.

On the Android side, most APKs will simply become 64 bit automatically. No recompilation, no source, no redownload, nothing.

There's really going to be no reason any non-game Android app would need 64-bit, it's still incredibly rare to have a desktop app take up more than 4GB RAM. The increased system RAM for background tasks and caching is where the benefits will come in.

I'm not sure why you are getting down voted. Maybe the dram lobbyist is trawling the forum.

The kind of programs I run on a phone aren't memory hogs. I do that on a desktop, and they can run for days. About the only use I would ever have for more dram on a phone is virtualization.

You could also use the extra dram for multiple users, which in the case of a phone would be having a work and personal account on the same device at the same time. Something like a next generation of BlackBerry Balance.

I know I personally down-voted him because he is once again reiterating the demonstrably false idea that the only benefit to this 64-bit transition is an increase in memory. Very few people care about the increase in memory, and it is not going to be the driving force behind the transition any time soon.

I down-voted you because that improved-register story is borderline bunk. I've played with all sorts of optimization flags during compilations of C or C++ and it hardly makes a difference.

The only 32 bit code I run is for embedded linux, and I don't anticipate going to 64 bits soon since 500M is plenty.

Now I can see tablets needing 64 bits, which may be one of the driving forces. Remember, Google owns the phone market and will soon own the tablet market. Resistance is futile.

But is this thread about "moving to 64 bit" or "moving to AArch64"?

If the former, then I agree: 64-bit-wide pointers start to be critical when individual processes require lots of address space (not just RAM; mmap'd files count here too, for instance). In the meantime, ARM's long-format page table helps for systems as a whole with >4GB of RAM (at the cost, as Linus points out, of it being awkward for the kernel to be unable to map "all of RAM" at once, but that's possible, if fiddly and unclean, to work around if needed).

If the motivation is the second point - then removing cruft from the ISA is the main benefit for now. More registers, no more "every/most instruction is predicated" which I understand is a pain to implement from a CPU pipeline perspective.

So no one seems to be "needing" 64 bits, or even *needing* AArch64, anytime soon. But there are good things that come with the new ISA. It's just that they happen to be bolted to a 64-bit address space, which is a separate issue.

Also - better to have a 64bit-ready codebase for when the time comes that it is needed, than to ignore it now because 'no one needs it' and then be caught with your pants down when people do.

I wonder if we'll end up with something like the x32 ABI that x86 has: Stick to 32bit pointers and calling convention, but feel free to use all the new registers and instructions

I still don't really understand what Andrew thinks is going on with Android though. His article seems to imply that there is some kind of long transition to 64 bit going on in the Android world. He even compares it to Windows and iOS which is just plain weird.

Android is arch independent. Windows is not. iOS is not. People are already running Android on 64-bit x86 machines with 8GB of RAM. It works fine, and uses the extra registers on x86-64 too. Why wouldn't it? Android is arch independent. It's nothing like Windows or even iOS.

What is actually happening in those AOSP and GitHub commits is that the underlying Dalvik/ART frameworks are being updated to support objects that actually span a 32-bit boundary. Right now a lot of stuff just assumes that you won't have a 4GB buffer/image/whatever. So while you can have as much RAM as you want, you may not actually be able to do things like allocate objects larger than 2/4GB.

Don't these commits imply that there's more work to be done than you're suggesting? Running on 64-bit is one thing, but obviously there's plenty of work to be done to ensure everything runs well, can take advantage of the 64-bit/ARMv8 hardware features, etc.

The Windows and iOS comparisons are more about comparing approaches/ecosystems than comparing technology. Apple can put out a new phone and say "hey! We're 64-bit!" because it's vertically integrated. Windows and PCs had a longer road because you had to get buy-in from lots and lots of different players to get to the point where 64-bit was widely used. That's all I mean by comparing Android to Windows: first you need ARMv8 chips, then 64-bit kernels and drivers and so on, then you need OEMs to use that stuff in their products, and then apps that take advantage of it. Does that make more sense?

I don't know how much Android will need 64-bit computing or what benefits it will provide generally. For iOS, I would want to recompile for 64-bit pretty quickly, if only because Apple put a ton of work into speeding up 64-bit code, like using tagged pointers for memory management (seriously, retain and release calls are much faster, and those happen all the time; ARC just automates them). Apple played with the calling conventions too, I think, and is definitely using the extra registers that 64-bit ARM provides.

I don't know if Android can do specific optimizations like that, the kind that give such a speedup to code.

One more thing: how did Apple get a 64-bit chip out a year ahead of everyone else? From this article it sounds like the first 64-bit chips from Qualcomm and co. won't come until 2015, an 18-month lag. How did they fall so far behind? It's not like Apple is going to rest on its laurels.

The basic ARM specs were out there, the chip design was available. Only Apple decided to use it. Now that they have, everyone else wants to catch up.

Linus Torvalds, who ought to know better than you or I, begs to differ:

I think you are confusing PAE (what that page is referring to) and LPAE (what you said above).

More generally, I don't disagree with his argument about physical address extensions, but I think it's silly to blindly apply it to mobile devices, which have very different use cases than servers and desktops. This doesn't magically make it easier to port everything to a new arch, though (because it's not). Nor does it make LPAE a hack (again, because it's not). When you look at how it would work, LPAE is a useful mechanism.

I think the points I and PeterB made in that thread are applicable here.

I understand the difference (LPAE/LPAS is ARM's implementation of the same basic idea whose x86 implementation is called simply PAE), but I don't think it matters. Android is based on the Linux kernel. The lead developer has written a long text (and a bunch more if you read on in the thread) explaining why physical address extensions are a bad idea, written from first-hand experience. Said kernel has been available in 64-bit versions for some time. Why on earth should Google bother with an LPAE version of the kernel instead of going straight to a 64-bit version? Bonus question: can you cite an example where PAE (x86, that is) has worked well as a long-term solution? We know from that text that it doesn't work well on Linux. MS disabled it entirely on Windows. Apple didn't bother implementing it in the first place. Not exactly ringing endorsements.

Obviously porting an OS to a new platform is a lot of work, but ARM64 is not that different from what they had to do for MIPS or x86.

The Windows and iOS comparisons are more about comparing approaches than comparing technology. Apple can put out a new phone and say "hey! We're 64-bit!" because it's vertically integrated. Windows and PCs had a longer road because you had to get buy-in from lots and lots of different players to get to the point where 64-bit was widely used. That's all I mean by comparing Android to Windows: first you need ARMv8 chips, then 64-bit kernels and drivers and so on, and then apps that take advantage of that stuff. Does that make more sense?

Yeah, but what long road ahead? iOS and Windows had to have a transition because they were shipping native code. Android isn't, to any real degree. There isn't really a transition at all in the general sense that you're thinking of.

I think the analogy to .NET is most apt. Do you remember how the switch to 64-bit went?