Tegra 4i will take the fight to Qualcomm's Snapdragon chips later this year.

When Nvidia announced its new Tegra 4 system-on-a-chip at CES last month, there was still a piece of the puzzle missing. Tegra 4, with its four ARM Cortex-A15 CPU cores and 72 GPU cores, is unquestionably a high-performance chip relative to its competition, but with higher performance comes higher power consumption. Tegra 4 can also be combined with Nvidia's new i500 LTE modem, but the modem is a separate chip rather than one that's integrated directly into the SoC.

This sort of chip will serve both tablets and high-end smartphones well enough—both the Nexus 4 and LG's Optimus G use a similar combination of Snapdragon processors and modem chips—and Nvidia's slides are careful to point out that these are the devices that Tegra 4 is intended for. However, such a chip works less well for phones that want decent performance but also need to prioritize battery life. For those phones, a single SoC with an integrated modem is generally the preferable choice—the BlackBerry Z10 and most Windows phones go this route.

We've known all along that Nvidia planned to introduce another chip, codenamed Project Grey, to serve this market, but so far we haven't heard much else about it. That changes today: ahead of next week's Mobile World Congress, Nvidia has announced details for Project Grey, now named Tegra 4i and set to launch later this year. The chip looks like a much better fit for smartphones than Tegra 3 ever was—let's look at the details.

The CPU

Tegra 4i is a 28nm SoC with all of the important smartphone parts—CPU, GPU, and LTE modem—integrated into a single 12mm by 12mm die.


Let's start with the CPU, the five large golden squares in the die shot above. As in other Tegra SoCs, Tegra 4i uses four CPU cores for most of its processing, but also includes a fifth power-saving "companion" core that is engaged only when your device is idle or asleep. The companion core cannot be enabled when any of the other four CPU cores are being used, and vice-versa.
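For the sake of illustration, that switching policy amounts to something like the sketch below. The threshold value, polling loop, and function names are all invented for this example; they are not Nvidia's actual governor logic.

```python
# Illustrative sketch of 4+1 cluster switching; NOT Nvidia's actual governor.
# The threshold value and helper names are invented for this example.

LOW_LOAD_THRESHOLD = 0.10  # assumed: below ~10% load, the companion core suffices

def choose_cluster(cpu_load: float) -> str:
    """Pick which cluster gets power; the two can never run at the same time."""
    if cpu_load < LOW_LOAD_THRESHOLD:
        return "companion"  # single low-power core for idle/background work
    return "main"           # one to four Cortex-A9 r4 cores for active work

# Example: foreground activity migrates work to the main cluster, and idling
# at the lock screen drops back to the companion core.
for load in (0.02, 0.45, 0.90, 0.05):
    print(f"load={load:.2f} -> {choose_cluster(load)} cluster")
```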

Unique to Tegra 4i is the architecture of the CPU cores—where Tegra 3 uses ARM's Cortex-A9 architecture and Tegra 4 uses Cortex-A15, Tegra 4i uses something called Cortex-A9 R4. We don't know a lot about exactly how this architecture will perform relative to other Cortex-A9 chips, but it's more likely to be a small jump than a generational leap—Tegra 4i's CPU cores will almost certainly be quicker than Tegra 3's cores at the same clock speed, but both will fall short of the cores used in the full Tegra 4.

To help Cortex-A9 fight against newer architectures from the likes of Qualcomm, Nvidia is ramping up the clock speeds relative to Tegra 3—Tegra 4i will have single-core CPU speeds of up to 2.3GHz, much higher than the 1.7GHz maximum in (non-overclocked) Tegra 3 SoCs. The clock speed when two to four of the CPU cores are engaged will undoubtedly be lower, but Nvidia isn't publicizing that number at this point (in Tegra 3, the difference was usually only 100MHz, so it may not even be a large drop).

Tegra 4i's CPU cores aren't going to set performance records, and that will be doubly true in late 2013 after Qualcomm's 600- and 800-series Snapdragon chips have had some time to proliferate. The 4i will improve significantly on Tegra 3, though, putting the SoC's CPU performance firmly in "good enough" territory.

The GPU

Tegra 4i's GPU performance should get a more impressive generational bump over Tegra 3—it uses the same custom Nvidia GPU cores as Tegra 4, but cuts their number down to 60 from 72. Tegra 3 had just 12 GPU cores, so Nvidia strongly suggests that the jump to 60 should result in five-times-higher performance.

Do keep in mind, though, that the number of cores is only going to be one part of the graphics performance equation—the chip's memory interface will also be important here, and it's a safe bet that Tegra 4i will have a bit less memory bandwidth than Tegra 4 proper. Compare it to Apple's SoCs—the Apple A5X for the first Retina iPad added another pair of Imagination Technologies GPU cores, but to keep that high-resolution screen running smoothly it also used a 128-bit, quad-channel memory controller, up from the dual-channel controller in the older A5. We don't know the details of Nvidia's memory controllers yet, but expect something similar to happen here.
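To see why the memory interface matters so much, here's a back-of-the-envelope peak-bandwidth calculation. The transfer rate is an assumed LPDDR2-style figure used purely for illustration—it is not a published Tegra 4i (or Apple) spec.

```python
# Back-of-the-envelope peak bandwidth: bus width (bytes) x effective transfer rate.
# The 800 MT/s figure is an assumed LPDDR2-style rate, used only for illustration.

def peak_bandwidth_gb_s(bus_width_bits: int, transfers_per_second: float) -> float:
    """Theoretical peak memory bandwidth in GB/s."""
    return (bus_width_bits / 8) * transfers_per_second / 1e9

RATE = 800e6  # assumed effective transfer rate, 800 MT/s

narrow = peak_bandwidth_gb_s(64, RATE)   # A5-style 64-bit (dual-channel) interface
wide = peak_bandwidth_gb_s(128, RATE)    # A5X-style 128-bit (quad-channel) interface

print(f"64-bit interface:  {narrow:.1f} GB/s")   # 6.4 GB/s
print(f"128-bit interface: {wide:.1f} GB/s")     # 12.8 GB/s: double the width, double the peak
```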

The clock speed of the GPU cores is also an unknown, but will probably be a bit lower in Tegra 4i relative to Tegra 4 proper. The takeaway is that Tegra 4i's GPU will be much faster than Tegra 3's (and a much larger bump than the CPU side of the chip gets), but in both number of GPU cores and memory interface it will still be a fair bit behind Tegra 4 proper.

The modem

Compare the separate i500 modem below the Tegra 4 chip to the fuchsia cores that represent the i500 modem in the Tegra 4i—they're the same modem, and they share the same capabilities.


Finally, we're at what may be the most important part for Nvidia's smartphone strategy—the modem. So far, Qualcomm has essentially cornered the mid-to-high-end smartphone market in the US, not just because of its generally good performance but also because most of its chips feature an integrated LTE modem. This is one reason why phones released in the US often use Snapdragon SoCs despite using different SoCs in their international versions, and as Nvidia's first chip with an integrated LTE modem the Tegra 4i should help the company win some of that business.

There is no difference in features between the modem integrated into the Tegra 4i and the separate version offered for use with the Tegra 4—a look at the die shots for both chips shows the same number of execution resources, and both modems share the i500 model number. Thus, our earlier observations about the modem in the Tegra 4 apply here:

Current cellular chips use fixed-function parts to enable certain technologies (3G, 4G, and so on). Nvidia's software-programmable approach allows each of the i500's eight multi-purpose processors to perform different functions as needed: the same silicon can provide support for multiple wireless technologies, which greatly cuts down on the amount of silicon needed to provide phones and tablets with the different connectivity options they need. This modem is the first fruit of Nvidia's 2011 purchase of Icera Semiconductor.
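The concept is easier to see in a toy sketch: one pool of generic processors runs whichever signal-processing pipeline the active radio technology needs. The pipeline names and stages below are invented for illustration and say nothing about Icera's actual firmware.

```python
# Toy illustration of a software-defined baseband: the same generic processors
# run whichever pipeline the active radio technology requires. All names and
# stages below are invented for illustration; this is not Icera's firmware.

PIPELINES = {
    "LTE":   ["ofdm_demodulate", "mimo_equalize", "turbo_decode"],
    "HSPA+": ["despread", "rake_combine", "turbo_decode"],
    "GSM":   ["gmsk_demodulate", "viterbi_decode"],
}

def process_frame(radio_tech: str) -> list:
    """Run the stages for the selected technology on general-purpose processors.

    A fixed-function modem would need dedicated silicon for each pipeline;
    here one set of programmable processors handles all of them in software.
    """
    return [f"ran {stage}" for stage in PIPELINES[radio_tech]]

print(process_frame("LTE"))
print(process_frame("GSM"))  # same hardware, different software pipeline
```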

And the rest

Nvidia will be using its "Phoenix" reference phone to show Tegra 4i off to its partners.


Moving beyond the hard specifications for a moment, Tegra 4i also includes a few of the other multimedia features advertised with Tegra 4: the always-on HDR camera (now named the Nvidia Chimera Computational Photography Architecture) and the video-encoding/-decoding engine are the two most important.

It will probably be a while before we see Tegra 4i in any shipping, consumer-ready products, but Nvidia does have some hardware that it's sampling to its partners to show off the new chip: its reference phone is codenamed "Phoenix" (do you see what they did there?), and the picture above is basically all we know about it. Nvidia will be showing off Tegra 4i (and, one assumes, the Phoenix reference phone) at Mobile World Congress next week—we'll be meeting with Nvidia, and we'll update you with any additional details we can get.

Reader Comments

Based on the adoption rate of Tegra 1-3, I don't expect to see this very often. An article on SemiAccurate basically said that Nvidia lied to OEMs about performance on the previous Tegras, which had the effect of burning them out, so they all jumped to Qualcomm. If true, I would take what they stated here with a grain of salt. http://semiaccurate.com/2013/02/18/nvid ... es-at-ces/

I have a couple of Tegra chips in my portable electronics; they're nifty and all, but still nothing worth showing off to others, and even that little bit of support will end. Nvidia can play around in other markets and hold out on actually releasing its new tech in the PC market to try and maximize profits along with AMD (where's the competition?!).

Well, I'm not going to support it; it's kind of obvious they are just holding back and making their Tegra stuff look better by not moving their GeForce stuff forward and not having to worry about AMD significantly outperforming them.

Based on the adoption rate of Tegra 1-3, I don't expect to see this very often. An article on SemiAccurate basically said that Nvidia lied to OEMs about performance on the previous Tegras, which had the effect of burning them out, so they all jumped to Qualcomm. If true, I would take what they stated here with a grain of salt. http://semiaccurate.com/2013/02/18/nvid ... es-at-ces/

This! Tegra 3 had pretty bad performance, especially after all the hype it was getting.

Also, the CPU here is a quad-core A9, so it's pretty old and power-hungry. The "battery saver core" doesn't help much here. And the GPU is smaller. Meh...

Maybe I missed it in something previous, but what's the idea behind the "always on HDR camera?"

Just to cherry-pick, the iPhone's HDR function (which combines a regular shot, an overexposed shot, and an underexposed shot to improve the lighting in photos) needs to be turned on manually and introduces some extra delay while the extra photos are taken and combined (it also introduces a little extra blurriness, especially in low light, since the photos aren't 100% identical). Both Tegra 4 and 4i are designed to allow HDR to happen in basically real time, bringing the benefits of HDR without the downsides (at least in theory).

The wording there does make it sound like the camera is on all the time, not the HDR. Lemme fix that. :-)
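For what it's worth, the merge step itself is conceptually simple. Here's a deliberately naive sketch of blending under-, normal-, and over-exposed frames with exposure-dependent weights; it only illustrates the idea and has nothing to do with Nvidia's actual Chimera pipeline.

```python
# Naive multi-exposure HDR merge, purely to illustrate the concept.
# This is not Nvidia's Chimera pipeline; the weighting scheme is a toy.
import numpy as np

def merge_exposures(under: np.ndarray, normal: np.ndarray, over: np.ndarray) -> np.ndarray:
    """Blend three 0-1 float images, favoring well-exposed pixels from each frame."""
    stack = np.stack([under, normal, over])
    # Weight each pixel by how close it is to mid-gray (0.5): blown-out or
    # crushed pixels contribute less to the final image.
    weights = np.clip(1.0 - np.abs(stack - 0.5) * 2.0, 1e-3, None)
    return (stack * weights).sum(axis=0) / weights.sum(axis=0)

# Example with tiny synthetic grayscale "images" (values in 0-1):
under  = np.array([[0.05, 0.20], [0.10, 0.30]])
normal = np.array([[0.40, 0.60], [0.55, 0.95]])
over   = np.array([[0.80, 0.98], [0.90, 1.00]])
print(merge_exposures(under, normal, over))
```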

I've had the G2X (T2 ... I think) and currently have an Asus TF300 (T3).

With both of those products, the performance has been wholly underwhelming. For me, Nvidia is gonna have to do a whole lot of convincing to get me to buy another one of their mobile chipsets (and to be clear, I LOVED my nForce2 chipset).

Based on the adoption rate of Tegra 1-3, I don't expect to see this very often. An article on SemiAccurate basically said that Nvidia lied to OEMs about performance on the previous Tegras, which had the effect of burning them out, so they all jumped to Qualcomm.

Since TI left this market, Qualcomm is the only alternative to Nvidia, unless you want to source your chips from a direct competitor in the Android OEM space (Samsung). So I suspect Nvidia will sell a lot of these just because it's one of the only two realistic options left, and it's the only one using ARM reference designs.

Based on the adoption rate of Tegra 1-3, I don't expect to see this very often. An article on SemiAccurate basically said that Nvidia lied to OEMs about performance on the previous Tegras, which had the effect of burning them out, so they all jumped to Qualcomm. If true, I would take what they stated here with a grain of salt. http://semiaccurate.com/2013/02/18/nvid ... es-at-ces/

That's Charlie Demerjian. His "reporting" is actually less accurate than Fox News, especially when it comes to nVidia.

He also reported that Apple was going to dump Intel chips for their MacBooks back in 2011.

That being said, nVidia needs to blow the house down with its newer SoCs. It has introduced some amazing gear, but by the time actual products hit the market, the competition is showing off the new gear that was just around the corner. (Example: Tegra 3 was shown off at CES 2011, but the Transformer Prime with the Tegra 3 wasn't available for purchase until 10 months later, about a month before Qualcomm showed off Krait... and the GPUs in both the Tegra 3 and Krait were still outperformed by the Apple A-series, even though the A-series was more expensive to produce.)

If I were to give nVidia some advice, it would be to create a performance SoC that ignores production costs just to showcase some superiority. I know they're capable of it.

That so-called 'die shot' that NVIDIA uses in its presentation is just a mock-up: a relatively simple way for them to illustrate what the chip contains. I bet you that a real die shot of Tegra 4(i) will look quite a lot different.

All I know is I'm glad the manufacturers were forced to use Q's S4 chips here in the States instead of that Nvidia garbage. When a dual-core chip hands all 4 of your asses to you while using less battery, you didn't do your job.

I place the blame on the manufacturers too, since they wanted to say "Four Cores!" and a worse phone was the result of that.

Nice photoshops from Nvidia there. I can understand that they might not have decent die images to show off, but scaling the Wayne floorplan (if that was ever a real die shot) down and superimposing a badly scaled i500 modem is quite poor.

But given that it is probably artistic license, this is a move in the right direction for Nvidia. The integrated modem will be appealing for cost-reduced devices, and the clock speed of the ARM A9 cores is a great achievement.

Ouch. I knew the Cortex-A15 was really power-hungry, and the A7 (ARM's other new core) left something to be desired in terms of performance, but I didn't know whether they were bad enough to not even bother with for phones. This puts NVIDIA at a distinct disadvantage, especially as basically the only high-end direct ARM core licensee left besides Samsung.

No wonder everyone is just going for Qualcomm. You can either do that or go for something uncompetitive. Good news for their Snapdragon chips, then. At least for ARM, both Qualcomm and Apple are still instruction-set licensees.

Your article annoys me because of its lack of a rigorous convention for naming things. You keep talking about "cores" for the Nvidia Tegra and then use the term "cores" again for the Imagination Technologies-based GPUs used in Apple's recent SoCs. The problem is, first, that in both cases "core" refers to completely different things, and second, that you don't seem to realize you are getting it wrong. Nvidia uses the term "core" for purely marketing reasons, while in fact "core" in the Tegra case means only SIMD units. However, in the case of the PowerVR SGX 554MP4 used in the iPad 4, for example, each of the four cores is actually a full GPU core with its own given number of SIMDs.

The difference is of course significant, and using the term "core" without first defining what it means in your discussion makes your whole description totally meaningless and your competence to explain hardware specifications to your readers quite questionable.