Samsung’s Exynos 5 Octa: Checking out the chip inside the Galaxy S 4

This new chip has eight CPU cores, but that's only part of the story.

Update: A previous version of this article included benchmarks from a 1.9GHz Snapdragon SoC that were labeled as being from the Exynos 5 Octa SoC. The text and charts have been corrected to reflect this.

Samsung officially unveiled its new flagship Galaxy S 4 smartphone on Thursday after weeks of speculation, leaks, and strange ad campaigns. The company's presentation was focused mostly on the software side of the equation, with all of the hardware information rattled off in just a few minutes at the beginning of the presentation.

Despite the fact that the S 4 looks a lot like its predecessor, there's quite a bit of new hardware under the hood. Today, we wanted to take a quick look at the chip that powers the international versions of the phone, Samsung's new Exynos 5 Octa system-on-a-chip (SoC). We should note: the US versions of the S 4 likely won't include this chip, but if precedent tells us anything, we will eventually get our hands on it, possibly in the form of a future Samsung tablet.

It's a given that this chip will be faster than the Exynos 4 Quad that powered the international Galaxy S III, but the new chip's architecture also brings a few interesting things to the table. Let's take a look.

Eight cores (technically)

The Exynos 5 Octa attempts to balance performance and battery life using eight CPU cores—four high-powered Cortex-A15s and four slower Cortex-A7s.

ARM

Most mentions of the Exynos 5 Octa simply say that it has eight CPU cores. This isn't untrue—the chip actually does have eight distinct CPU cores—but not all of these cores are created equal.

The biggest issue in designing a chip for a smartphone or tablet is balancing performance and power consumption, and most modern chips attempt to do both—the chips can use multiple cores and higher clock speeds when higher performance is called for, but will typically disable cores and lower clock speeds during light or idle use. The Octa attempts to solve this problem using a CPU configuration that ARM calls "big.LITTLE."

Big.LITTLE pairs two distinct CPU cores, one larger and faster (in this case, a Cortex-A15 running at 1.2GHz) and one that is smaller and more power-efficient (a Cortex-A7 running at 1.6GHz). These two cores support the same instruction sets and can execute all of the same code, so speed and power consumption are the main differences between them. Lighter tasks like Web browsing and e-mail checking will be executed on the power-saving Cortex-A7 cores, while more computationally intensive tasks like gaming will be sent over to the Cortex-A15s.
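The placement policy described above can be sketched in a few lines. This is a purely illustrative model; the threshold and the per-task load figures are invented for the example, not Samsung's actual scheduling heuristics:

```python
# Illustrative sketch of big.LITTLE task placement: light work stays on the
# power-saving Cortex-A7 cluster, heavy work migrates to the Cortex-A15s.
# The threshold and load numbers are invented, not Samsung's real policy.

LOAD_THRESHOLD = 0.6  # fraction of capacity above which we migrate

def pick_cluster(cpu_load):
    """Return the cluster that should run a task with the given load (0.0-1.0)."""
    return "Cortex-A15" if cpu_load > LOAD_THRESHOLD else "Cortex-A7"

tasks = {"e-mail check": 0.1, "Web browsing": 0.35, "3D game": 0.9}
for name, load in tasks.items():
    print(f"{name}: {pick_cluster(load)}")
```

The key property is that the decision is invisible to applications: code runs identically on either cluster, only speed and power draw change.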

The core switching is controlled by a firmware layer that sits in between the software and the chip itself. Operating systems can be tweaked to better support big.LITTLE's particular arrangement of cores, but any OS that supports power state switching for CPUs (any mainstream operating system from the last decade or so) can take advantage of big.LITTLE without any additional changes.
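The OS-level power-state switching that big.LITTLE piggybacks on looks roughly like a DVFS (dynamic voltage and frequency scaling) governor. Here is a minimal, hypothetical "ondemand"-style sketch; the frequency table is illustrative, not the Octa's real P-states:

```python
# Hypothetical "ondemand"-style DVFS governor sketch: the kind of OS-level
# power-state switching big.LITTLE builds on. The frequency table is
# illustrative, not the Exynos 5 Octa's actual P-states.

FREQS_MHZ = [200, 500, 800, 1200, 1600]  # available P-states, low to high

def next_freq(utilization):
    """Jump straight to the top on heavy load; otherwise pick the lowest
    frequency that still covers the observed utilization (0.0-1.0)."""
    if utilization > 0.8:
        return FREQS_MHZ[-1]
    for f in FREQS_MHZ:
        if utilization <= f / FREQS_MHZ[-1]:
            return f
    return FREQS_MHZ[-1]

print(next_freq(0.1), next_freq(0.5), next_freq(0.9))  # steps up with load
```

Because big.LITTLE's firmware presents cluster switches through this same frequency-scaling interface, an OS that already does the above needs no changes.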

Different versions of this idea have existed in shipping products for some time now—the most prominent example is probably Nvidia's Tegra chips, which include a single "companion" or "shadow" core that kicks in for light use so that the more power-hungry main CPU cores can switch off. Big.LITTLE simply takes it further, pairing each high-end CPU core with a slower one. Since the Exynos 5 Octa is the first big.LITTLE chip to ship, we don't have much real-world evidence that one approach is superior to the other, but in both cases the concept is similar.

There are two different implementations of big.LITTLE that hardware makers can use: one in which the Cortex-A7 and A15 cores can be active at the same time (called "big.LITTLE MP" in ARM's documentation) and one in which a Cortex-A7 core powers down when its corresponding A15 core powers up and vice versa. By all appearances, the Octa uses the latter implementation.

Samsung's demo video for the chip has some CPU usage examples toward the end, and as long as the examples used here are representative of how the chip actually works, the A7 and A15 cores can't both be used at the same time—the chip has eight cores, but only four of them can be active at any one time. The upshot of this is that the Exynos 5 Octa's maximum performance will be consistent with a quad-core Cortex-A15 chip like Nvidia's Tegra 4.
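A toy model of the cluster-migration behavior described above, assuming (as the demo suggests) that the two clusters are mutually exclusive:

```python
# Toy model of the cluster-migration flavor of big.LITTLE the Octa appears
# to use: the A7 and A15 clusters are mutually exclusive, so at most four
# of the eight cores are powered at any one time. Purely illustrative.

class ClusterSwitcher:
    def __init__(self):
        self.active = "A7"  # start on the power-saving cluster

    def update(self, demand_high):
        # powering up one cluster implies powering down the other
        self.active = "A15" if demand_high else "A7"

    def powered_cores(self):
        return [f"{self.active}-{i}" for i in range(4)]

soc = ClusterSwitcher()
soc.update(demand_high=True)
print(soc.powered_cores())  # four A15 cores; the four A7s are powered down
```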

Samsung's Exynos 5 Octa demo video. The CPU usage examples in question start at about the one-minute mark.

Benchmarks aside, the relatively low clock speed of the Octa in the S 4 is a byproduct of using a chip like the A15 in a smartphone. In a tablet with more room to dissipate heat and a larger battery to compensate for the resultant bump in power usage, there should be plenty of room to ramp the clock speed up a bit. Given that the Exynos 4 Quad from the international Galaxy S III later made its way into both the Galaxy Note 10.1 and the Galaxy Note 8.0, it's a safe bet that we'll see some version of the Octa make its way into future Samsung tablets.

An Apple-esque GPU

Imagination Technologies' diagram for the PowerVR SGX544MP3 that powers the Octa.

There's (thankfully) much less to say about the GPU in the Exynos 5 Octa. It's a triple-core Imagination Technologies PowerVR SGX 544MP3, which is a bit of a departure for Samsung—its past Exynos chips (including the Exynos 5 Dual) have largely depended on ARM's Mali GPUs, but the Octa is using something much more similar to the GPUs that Apple uses in its A-series SoCs.

The 544MP3 is in fact very similar to the triple-core 543MP3, with the only real difference being API support (the 544MP3 supports DirectX 9 and OpenGL 2.1, suggesting that this chip may find its way into Windows phones and tablets going forward). Again, we weren't able to run any specific GPU benchmarks, but depending on the GPU's clock speed we should be looking at performance in the same ballpark as the Apple A6, albeit in a phone with a much higher-resolution screen.

Just from this chip, it's not clear whether going with Imagination's graphics technology over ARM's is a new direction for Samsung or a one-off decision for the Octa in particular, but in any case it's a big win for Imagination. Its graphics technology isn't quite as widespread as Qualcomm's Adreno GPUs (at least not in the United States), but having its technology included in so many high-profile, top-selling phones has got to be good for its bottom line.

Sorry, North Americans, but this probably doesn't apply to you (yet)

The Exynos 5 Octa is an interesting chip, but those of us in North America probably aren't going to get to see it, at least not in the Galaxy S 4. As is often the case, the American version of the phone will instead use a 1.9GHz quad-core Snapdragon 600 SoC. This is a fairly common practice for most of the major smartphone players, since Qualcomm's LTE modems are the most mature in the market at the moment, but with companies like Intel and Nvidia catching up, we should hopefully start seeing a little more variety later this year and into next.

We weren't able to run our full suite of benchmarks on the Snapdragon-equipped S 4 we briefly handled, but we did get a couple of quick browser benchmarks that we can use to compare the S 4 to both the US version of the Galaxy S III and the Tegra 4 reference tablet we spent some time with at Mobile World Congress a couple of weeks back. The Tegra 4 is a quad-core Cortex-A15 chip, so at the same clock speed it should deliver performance at least similar to the Exynos 5 Octa's.

Note that these Tegra 4 numbers come from Nvidia's Tegra 4 reference tablet, which is technically pre-release hardware. All tests were performed using the built-in Android browser; the Nvidia tablet and the Galaxy S 4 were both running Android 4.2, while the Galaxy S III was running Android 4.1.

Despite running at the same clock speed, the Tegra 4 numbers show off the advantages of using four Cortex-A15 CPU cores. The Exynos 5 Octa's cores will be clocked significantly lower, but I'd be interested to see how the Octa in the international version of the phone compares to the Snapdragon 600 in the US version.
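For rough intuition: performance on clock-bound workloads scales close to linearly with frequency for the same core design, so a back-of-envelope estimate of a lower-clocked quad-A15 looks like this (the benchmark score is hypothetical, purely for illustration):

```python
# Back-of-envelope scaling: for the same Cortex-A15 microarchitecture,
# performance on clock-bound workloads scales roughly with frequency.
# The benchmark score here is made up, purely for illustration.

def scale_score(score, from_ghz, to_ghz):
    return score * (to_ghz / from_ghz)

# a hypothetical score of 1000 at a Tegra 4-like 1.9GHz, rescaled to 1.2GHz
print(round(scale_score(1000, 1.9, 1.2)))
```

Real results will diverge from this linear estimate wherever memory bandwidth, thermal throttling, or software overhead dominates.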

Compared to the dual-core 1.5GHz Snapdragon S4 in the Galaxy S III, though, the S 4 acquits itself reasonably well—both the CPU and the GPU will be substantially upgraded over last year's model. This will almost certainly be the model of the phone that we get for our review, and rest assured that we'll be benchmarking it more thoroughly when the time comes.

As for the Exynos 5 Octa, as we mentioned earlier it's a fair bet that this will make it to American shores in the form of a future tablet from the company, or possibly even something like the ARM Chromebook (which uses the Exynos 5 Dual). When we get our hands on a device that uses the chip, we'll be paying special attention to battery life under various workloads to see whether big.LITTLE lives up to its promises—as companies continue trying to push the performance envelope without destroying their devices' battery lives, chips like the Octa are only going to become more common.

Can someone explain why there are different processors delivered to different countries? At first glance it makes little to no sense.

In an acronym, LTE.

Radio functions are increasingly often included into System on Chips now, to further streamline power requirements. The U.S.' use and spectrum for LTE is unique and therefore requires separate hardware; in this case, Snapdragon's 600 SoC was chosen along with Qualcomm's LTE radio.

This is roughly what Samsung did with the Galaxy S III as well.

The last three paragraphs of this article should preface every article in the U.S. about the Galaxy S IV: instead we will be inundated by useless hardware comparisons and hipster angst regarding the power conservation versus the performance of the Exynos big.LITTLE implementation.

So what happens if you take the American version of the S3/S4 abroad? Are you limited to 3G or slower?

Andrew Cunningham wrote: Big.LITTLE pairs two distinct CPU cores, one larger and faster (in this case, a Cortex-A15 running at 1.2GHz) and one that is smaller and more power-efficient (a Cortex-A7 running at 1.6GHz).

Ignoring for a moment the question of what smartphone software or usage pattern really benefits from 4 cores of any kind, can the big.LITTLE architecture possibly be an efficient way to handle the range of tasks? After all, my car doesn't have a V8 under the hood and the added expense and complication of a lawnmower engine in the trunk. Given that normal cores can be throttled or shut down as required, I'm skeptical.

Andrew Cunningham wrote: Big.LITTLE pairs two distinct CPU cores, one larger and faster (in this case, a Cortex-A15 running at 1.2GHz) and one that is smaller and more power-efficient (a Cortex-A7 running at 1.6GHz).

The speeds are backwards.

Look at the benchmarks in the article. In particular, the Exynos Octa versus the 1.9 GHz Tegra 4. They both have Cortex-A15 cores, somewhat similar memory architectures. The vast majority of the difference between the two will be driven by clock rate.

Can someone explain why there are different processors delivered to different countries? At first glance it makes little to no sense.

In an acronym, LTE.

Radio functions are increasingly often included into System on Chips now, to further streamline power requirements. The U.S.' use and spectrum for LTE is unique and therefore requires separate hardware; in this case, Snapdragon's 600 SoC was chosen along with Qualcomm's LTE radio.

The Exynos 4 Quad that was in the International version of the S3 also came to the US in the Galaxy Note 2, so it stands to reason that the Note 3 will come with the Octa in tow worldwide...which leads me to the question: Why is the Octa not worldwide for the S4? The Exynos 4 Quad in the Note 2 seems to demonstrate that it can be done. Just seems like an odd step back by Samsung given that they've worked out this problem before.

Andrew Cunningham wrote: Big.LITTLE pairs two distinct CPU cores, one larger and faster (in this case, a Cortex-A15 running at 1.2GHz) and one that is smaller and more power-efficient (a Cortex-A7 running at 1.6GHz).

The speeds are backwards.

Nope. Everyone else is reporting the same. The A15s run at a lower frequency than the A7s.

Can someone explain why there are different processors delivered to different countries? At first glance it makes little to no sense.

In an acronym, LTE.

Radio functions are increasingly often included into System on Chips now, to further streamline power requirements. The U.S.' use and spectrum for LTE is unique and therefore requires separate hardware; in this case, Snapdragon's 600 SoC was chosen along with Qualcomm's LTE radio.

Andrew Cunningham wrote: Big.LITTLE pairs two distinct CPU cores, one larger and faster (in this case, a Cortex-A15 running at 1.2GHz) and one that is smaller and more power-efficient (a Cortex-A7 running at 1.6GHz).

The speeds are backwards.

Nope. Everyone else is reporting the same. The A15s run at a lower frequency than the A7s.

The A15 was really designed more as a tablet CPU, since tablets have huge batteries and can dissipate the heat A15s put out over a larger surface area, but they can be shoehorned into phones if you keep the clock frequency low.

Can someone explain why there are different processors delivered to different countries? At first glance it makes little to no sense.

In an acronym, LTE.

Radio functions are increasingly often included into System on Chips now, to further streamline power requirements. The U.S.' use and spectrum for LTE is unique and therefore requires separate hardware; in this case, Snapdragon's 600 SoC was chosen along with Qualcomm's LTE radio.

Ignoring for a moment the question of what smartphone software or usage pattern really benefits from 4 cores of any kind, can the big.LITTLE architecture possibly be an efficient way to handle the range of tasks? After all, my car doesn't have a V8 under the hood and the added expense and complication of a lawnmower engine in the trunk. Given that normal cores can be throttled or shut down as required, I'm skeptical.

The issue isn't whether the car has a lawnmower engine for light driving, the issue is that this chip has 4 lawnmower engines for light driving.

Also, I would like a future article to address precisely how the cores are utilized. For example, can two A15s be active with two A7s? Can all cores be shut down except for one A7? How does the chip know when to switch from an A7 to an A15, and based on what formula does it choose a mix of cores? There are a lot of permutations available.

Andrew Cunningham wrote: Big.LITTLE pairs two distinct CPU cores, one larger and faster (in this case, a Cortex-A15 running at 1.2GHz) and one that is smaller and more power-efficient (a Cortex-A7 running at 1.6GHz).

The speeds are backwards.

Nope. Everyone else is reporting the same. The A15s run at a lower frequency than the A7s.

When first announced, I distinctly remember the A15s running at 1.8GHz and the A7s running at 1.2GHz. My guess is that they ran into power issues, as the smaller Exynos 5250 can draw way more power than a smartphone can provide at peak (almost 7W by AnandTech's testing), and that was with only 2 A15 cores at 1.7GHz.

Ignoring for a moment the question of what smartphone software or usage pattern really benefits from 4 cores of any kind, can the big.LITTLE architecture possibly be an efficient way to handle the range of tasks? After all, my car doesn't have a V8 under the hood and the added expense and complication of a lawnmower engine in the trunk. Given that normal cores can be throttled or shut down as required, I'm skeptical.

You may not have a separate lawnmower engine in your vehicle, but many people find that having an additional electric motor + batteries makes things more fuel efficient. Hybrids probably make a better analogy for multiple batteries rather than multiple core clusters...but I hate car analogies in the first place.

Having the additional A7 cluster doesn't add much expense, and can supposedly yield up to a 70% increase in battery life. (Obviously, the CPU isn't the only drain on the handset's battery, so we can't expect an overall increase that big.) DVFS can still be used and scaled for either cluster. This solution just adds additional flexibility.

If you're going to call the Exynos 5 Octa an 8-core then you should call the Tegra 4 (and 3) a 5-core. However, I think it's more accurate to call the Exynos 5 Octa and the Tegra 4 both quad cores since neither can run more than 4 cores at a time.

If you're going to call the Exynos 5 Octa an 8-core then you should call the Tegra 4 (and 3) a 5-core. However, I think it's more accurate to call the Exynos 5 Octa and the Tegra 4 both quad cores since neither can run more than 4 cores at a time.

big.LITTLE MP mode allows all eight to be used at once. That will not be the mode implemented on the S4, but the Octa itself isn't inherently limited. (See ARM's whitepaper on the architecture.)

Ignoring for a moment the question of what smartphone software or usage pattern really benefits from 4 cores of any kind, can the big.LITTLE architecture possibly be an efficient way to handle the range of tasks? After all, my car doesn't have a V8 under the hood and the added expense and complication of a lawnmower engine in the trunk. Given that normal cores can be throttled or shut down as required, I'm skeptical.

The issue isn't whether the car has a lawnmower engine for light driving, the issue is that this chip has 4 lawnmower engines for light driving.

Also, I would like a future article to address precisely how the cores are utilized. For example, can two A15s be active with two A7s? Can all cores be shut down except for one A7? How does the chip know when to switch from an A7 to an A15, and based on what formula does it choose a mix of cores? There are a lot of permutations available.

EXACTLY. big.LITTLE seems like a bizarre way to solve the problem, insanely expensive in silicon. To put it differently, the problem it seems to be solving is an assumption that OSs are frozen in how they will treat the chips and cannot be changed. IF this were true, then the only way to make a lower power system would be to have each OS visible core consist of the big and the LITTLE part, with OS-transparent switching between them. But this seems like a totally flawed assumption. I imagine (given Tegra) that Android can perfectly well handle switching between 2 or 4 active cores and just 1. iOS can obviously be rewritten to support this if/when Apple wants this functionality. And if Windows and Blackberry are unwilling to update their OSs, honestly who cares?

All in all it seems like a really strange direction for ARM, one which I don't understand. It's really hard to imagine a realistic scenario where having 4 active low-power cores is a better match to the compute/energy parameters of a problem than a single low-power core. Is it purely to get around patents?

About cell phones in general, what is the point of talking up the number of cores the respective mobile CPU has? It isn't remotely akin to discussing the number of cores in an x86 CPU, for instance, and always leads inevitably to the same old ill-conceived x86 vs. ARM daydreams--that state-of-the-art ARM is on a performance par with state-of-the-art x86. It's not even close.

And in terms of what a cell phone is supposed to look like, just how many variations can you imagine for a smartphone? I mean, the face of the thing has got to be all touchscreen, doesn't it? Form follows function perfectly in this case. And as long as Samsung keeps putting the word "Samsung" on the face of its cell phones, it certainly isn't possible to assume that anyone will confuse Samsung and Apple products (Never mind the myriad of differences that manifest on screen after turning the phones on.) Smartphone cell phone exterior design cannot by its nature be radically different among cell phones, just as automobile tires and steering wheels (and many other things) cannot be radically different among competing automobile manufacturers--form has often got to follow function by necessity.

Ignoring for a moment the question of what smartphone software or usage pattern really benefits from 4 cores of any kind, can the big.LITTLE architecture possibly be an efficient way to handle the range of tasks? After all, my car doesn't have a V8 under the hood and the added expense and complication of a lawnmower engine in the trunk. Given that normal cores can be throttled or shut down as required, I'm skeptical.

The issue isn't whether the car has a lawnmower engine for light driving, the issue is that this chip has 4 lawnmower engines for light driving.

Also, I would like a future article to address precisely how the cores are utilized. For example, can two A15s be active with two A7s? Can all cores be shut down except for one A7? How does the chip know when to switch from an A7 to an A15, and based on what formula does it choose a mix of cores? There are a lot of permutations available.

EXACTLY. big.LITTLE seems like a bizarre way to solve the problem, insanely expensive in silicon. To put it differently, the problem it seems to be solving is an assumption that OSs are frozen in how they will treat the chips and cannot be changed. IF this were true, then the only way to make a lower power system would be to have each OS visible core consist of the big and the LITTLE part, with OS-transparent switching between them. But this seems like a totally flawed assumption. I imagine (given Tegra) that Android can perfectly well handle switching between 2 or 4 active cores and just 1. iOS can obviously be rewritten to support this if/when Apple wants this functionality. And if Windows and Blackberry are unwilling to update their OSs, honestly who cares?

All in all it seems like a really strange direction for ARM, one which I don't understand. It's really hard to imagine a realistic scenario where having 4 active low-power cores is a better match to the compute/energy parameters of a problem than a single low-power core. Is it purely to get around patents?

Depending on the OS, the ability to use many cores simultaneously itself is in doubt.

Blackberry 10 and iOS both run native code and are both based on an OS that has always supported spreading threads for native code across multiple CPU cores. (Symmetric Multiprocessing) Blackberry 10 is based on QNX and iOS is based on Unix.

Things get trickier when you move to Android and Windows Phone. Both of those OSes run apps on top of virtual machines and those virtual machines were designed to be multithreaded, but only on a single CPU core.

Both OSes give coders an escape hatch for games where you can ignore the virtual machine and write directly to the hardware, but that's not what the vast majority of apps for those platforms do.

Google added some support for SMP in Android 4, but as Intel discovered while porting Android to x86, it's got a long way to go:

Quote:

"If you are in a non-power constrained case, I think multiple cores make a lot of sense because you can run the cores full out, you can actually heavily load them and/or if the operating system has a good thread scheduler.

A lot of stuff we are dealing with, thread scheduling and thread affinity, isn't there yet and on top of that, largely when the operating system goes to do a single task, a lot of other stuff stops. So as we move to multiple cores, we're actually putting a lot of investment into software to fix the scheduler and fix the threading so if we do multi-core products it actually takes advantage of it."

You may or may not recall that dual core Android devices were marketed and sold to the public long before Android had any support for the second core. Now we're seeing 8 cores marketed on an OS whose support for multiple cores is still dodgy.

You'll see benchmarks that look good, but those are written to ignore the virtual machine like the games do. Then you see performance numbers for regular apps written to run on the virtual machine and article authors scratching their heads about why that app didn't see the expected performance increase.

I'm glad this story was run; everyone calling it an octa-core was irritating, since only 4 cores will ever work at once and 4 of them are low-power, energy-saving cores.

The loss of the GPU is probably the bigger deal since the CPU isn't faster than what we can get here, the Adreno 320 is good but it still doesn't catch up to PowerVR, and this one would be clocked even higher than the iPhone has it.

I wonder how much the 4 low power cores really help out, over the 1 low power core in the Tegra 3 and 4. I guess they mean you can do more low power tasks at once, but I wonder if 4 are really necessary or if it's mostly for the marketing that comes with "Octa". For many tasks you'd likely want to get it done as fast as possible so it can get back to idle and shut down those A15 cores, I wonder how many use cases will make good use of the four low power ones.

I think people are going to be surprised by how well a Krait 600 running at 1.9GHz will perform in comparison.

What I'm interested in is how well the A7 cores save power. These cores should be more efficient than even the low-power core in Tegra 3. Tegra 3's low-power core is just an A9 built on a different process and clocked down (one of the reasons why the S4 Pro versions of the same phone tend to have better battery life and equal performance). The A7 is designed specifically to be even more efficient than the A9.

I'm also thinking that even Anand has made a mistake about the clock frequencies. I'm pretty sure it'll be 1.6GHz A15 and 1.2GHz A7 but we'll see.

....Depending on the OS, the ability to use many cores simultaneously itself is in doubt.

Blackberry 10 and iOS both run native code and are both based on an OS that has always supported spreading threads for native code across multiple CPU cores. (Symmetric Multiprocessing) Blackberry 10 is based on QNX and iOS is based on Unix.

Things get trickier when you move to Android and Windows Phone. Both of those OSes run apps on top of virtual machines and those virtual machines were designed to be multithreaded, but only on a single CPU core.

Both OSes give coders an escape hatch for games where you can ignore the virtual machine and write directly to the hardware, but that's not what the vast majority of apps for those platforms do.

Google added some support for SMP in Android 4, but as Intel discovered while porting Android to x86, it's got a long way to go:

You may or may not recall that dual core Android devices were marketed and sold to the public long before Android had any support for the second core. Now we're seeing 8 cores marketed on an OS whose support for multiple cores is still dodgy.

You'll see benchmarks that look good, but those are written to ignore the virtual machine like the games do. Then you see performance numbers for regular apps written to run on the virtual machine and article authors scratching their heads about why that app didn't see the expected performance increase.

Windows Phone 8 shares the same NT kernel as Windows 8 and shares a subset of the new WinRT API. The NT Kernel can use up to 64 cores, if needed. Please excuse my ignorance, but what virtual machine is Windows Phone 8 using? My understanding is that everything is ultimately compiled to native code regardless if you use C++, C#, or JavaScript to write your apps.

Microsoft had two competing technologies that were suitable as the basis of a modern API—COM and .NET—but each had drawbacks. .NET has rich metadata, safe programming languages, and fits neatly with many conventions of modern programming languages (such as their use of interfaces and object-oriented inheritance). On the other hand, .NET uses a complex runtime with a virtual machine, rather than native code, which potentially exacts a performance penalty, and is somewhat awkward to use and integrate with existing native C and C++ programs.

COM is weaker in many regards—less descriptive metadata, no built-in notion of inheritance, unsafe programming languages, and in most ways far more awkward to use than .NET. But COM does have an important advantage: it has no virtual machine, being native code from the ground up. COM is also the technology used by many of the big, old Windows programs, including the all-important Office.

Within Microsoft, there are also certain political considerations at play. Internal opinion about .NET is divided. Many teams use the technology to good effect and regard it as important. The Windows division ("WinDiv"), however, has a different view. The many developmental difficulties that occurred during the (essentially abandoned) development of Windows Longhorn were attributed, at least in part, to the use of .NET code. The team also believes that native C++ development is what most developers want. This has tended to lead to an avoidance of the use of .NET even when it's an appropriate or desirable technology.

When the Windows team created WinRT, indications are that these non-technical concerns weighed at least as heavily as any technical reasons. As a result of this distaste for .NET, combined with COM's native nature and extensive use in major pre-existing Windows applications, the decision was made: WinRT is built on COM.

Andrew Cunningham / Andrew has a B.A. in Classics from Kenyon College and has over five years of experience in IT. His work has appeared on Charge Shot!!! and AnandTech, and he records a weekly book podcast called Overdue.