
190 Comments

I'm just curious why, when every single Intel-provided slide uses the name "Bay Trail" as two words, you write "Baytrail" as a single word. There's no ambiguity; Intel means "Bay Trail" as two words.

You should email grammar/spelling corrections to the author. When you put them in the comments, they get orphaned after the fix is made; the comments don't get cleaned up to match the cleaned-up article.

One new series of Android tablets launching this week is powered by Intel's new Z2580 Clover Trail+ processor, which offers impressive performance for mid-range devices.

Ramos Technology is one of the better-known China-based tablet manufacturers and has teamed up with Intel to introduce the I-Series, with 8", 9" and 10" Android models (starting at $199) offering very competitive pricing and solid features, including high-resolution displays.

Intel's new processor with Hyper-Threading technology runs four threads simultaneously and scores extremely well in benchmark testing compared to other mainstream quad-core tablets.

The i9 is the first of the series available this week: an 8.9-inch model featuring a 1920x1200 display with Samsung's advanced PLS technology.

One of the first sources in the U.S. to feature the new Ramos I-Series, with complete details, is TabletSprint.

That's all you bring in is $5600? Wow dude, you need to stop sitting at home, playing music, and spending money you don't have on overpriced Apple computers, and get out there and find a real job, bro...

eMMC... *sigh*. Really, the only problem I had with Clover Trail tabs was the eMMC, which was terribly slow at everything. I thought they were adding SATA II support. Wouldn't running an external USB 3.0 drive be faster? Lol

It's the tablet-optimized version that'll compete with ARM-based devices... though it'll have access to faster LPDDR3 RAM (Clover Trail was stuck with LPDDR2) and faster eMMC 4.5 instead of the older eMMC 4.41 drives. Meaning drive performance can be up to double, as they introduced a connection with nearly twice the bandwidth plus enhancements like cache memory...

You can already see some demonstrations of Bay Trail vs. Clover Trail that show it doing things like loading a game noticeably faster...

Bay Trail systems will come with eMMC 4.51 parts that run in HS200 mode. Overall performance almost doubles compared to the previous-generation eMMC devices on Clover Trail, which ran in DDR50 mode.
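As a rough sanity check on that claim, the interface-level arithmetic does work out to about 2x (a sketch only; real-world throughput also depends on the NAND and the controller):

```python
# Theoretical eMMC interface bandwidth for an 8-bit bus:
# mega-transfers/s = clock (SDR) or 2 x clock (DDR), 1 byte per transfer.
def emmc_bandwidth_mb_s(clock_mhz, bus_bits=8, ddr=False):
    mtransfers_per_s = clock_mhz * (2 if ddr else 1)
    return mtransfers_per_s * (bus_bits // 8)

ddr50 = emmc_bandwidth_mb_s(50, ddr=True)   # eMMC 4.41 DDR50 mode
hs200 = emmc_bandwidth_mb_s(200)            # eMMC 4.5/4.51 HS200 mode
print(ddr50, hs200)  # 100 200 -> roughly double at the interface level
```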

In graphics the S800 is ahead, but on the CPU side Bay Trail dominates. So when you look at the SoC as a whole, it's too close to call. The difference-maker could be energy consumption; Bay Trail should consume less.

Really? Have you seen the battery benchmarks Anandtech has done on the S800 in a phone form factor? If not, go read through some of them. The S800 SoC doubles, sometimes triples, the battery life of the S4 Pro and S600 SoCs. And that's in a phone! Add in the extra thermal headroom of a tablet, along with a tablet's larger battery, and I fail to see how Bay Trail can compete.

The SD800 is limited in a phone, not running at its full possible performance, and so it uses a lot less power than it would in a tablet! Not to mention the smaller screen, typically around 720p on a phone versus 1080p on a tablet, also tends to reduce the phone's power consumption...

Really, Clover Trail was already shown by Anandtech to be more energy efficient than many Cortex-A9 ARM SoCs like the Tegra 3, while Bay Trail improves that power efficiency by about 5x!

Even while providing over twice the performance of Clover Trail, with double the number of cores, Bay Trail still uses about the same or less power than Clover Trail.

The Nokia Sirius 10" tablet was going to be released with an SD800 and only claimed up to 10 hours of runtime! Bay Trail 10" tablets like the upcoming Asus T100, on the other hand, claim up to 11 hours, albeit with a lower-resolution screen!

And Anandtech showed extremely good performance per watt for Bay Trail in this review...

So at the very least they're very competitive on power efficiency, and it's not an issue as you suggested!

Well, you know, even if that were true, the point is that it's made by Intel, not Qualcomm. Intel doesn't get money from Snapdragon sales, hence its interest in making its own CPU.

In terms of CPU performance, it looks to me like Intel has really delivered here, but graphics performance is still holding them back. Still, it's hard to say Snapdragon smokes the new Atom. At worst they trade wins depending on whether it's CPU or GPU.

Six months? Both Toshiba and Dell have already announced 8-inch Bay Trail tablets that will be released by the end of October this year!

Asus also announced the T100, a 2-in-1 tablet with a keyboard dock... And you can't run the full range of software on an ARM SoC, but Intel can run any flavor of Windows, Android, Linux, or even OS X... They're also scaling up to higher-end versions with even more performance that'll be low-cost alternatives to Haswell...

Same impression I got. For Intel to spend another 2 years retooling Atom to this stage, yet be outgunned by both the S800 and Tegra 4, is really a POOR showing for the king of CPUs! Many have said to just lay Atom to rest and continue with the Haswell core "slash and burn" to cut down on power consumption, and to replace the pathetic HD media GPU with a variant of Imagination's Rogue 6, just as Apple has done. The fact that Intel kept wanting to retain x86 compatibility shows its constraints in trying to out-do ARM at this low-power game. Even on a 22nm process the results are way LESS than a passing grade (I would give a 45% mark, which is still a FAIL). Marginal, but a fail no doubt! This chip should have been out 3 years ago, and then things might not have looked this bad. I guess Intel going with Quark might spur something useful, but I doubt it, as the ARM Cortex-M3 is a mighty little thing, just as hard to crack as its Cortex brothers... Nice try, Intel, but still a fail. Sorry.

If you read the results carefully, you can clearly see this Atom's CPU muscle is stronger than the S800's. As for Tegra 4 in Shield, please remember it's a fanned device (meaning the power budget is >5W, a lot higher than this Bay Trail device), which makes the performance comparison unfair. Even then, some reviews find that Bay Trail trades blows with Shield. To me, it shows a significant perf/W advantage for the Silvermont core against both Krait and the A15.

This tested Bay Trail platform was an Intel prototype, so comparing it to a shipping product is a bit skewed. Also, Tegra 4 can be passively cooled; it just needs active cooling to keep its performance stable in the Shield form factor. Note that Tegra 4 has a better GPU than Bay Trail (higher performance, higher power), which might be adding to the overall power draw.

I agree on the point that it's an Intel prototype, so to be fair we need to wait till next month to really try a shipping Bay Trail-T product. But as for Tegra 4, my sources say that even with a CPU-only heavy load (to remove the GPU effect), its power dissipation is still too high, which makes Tegra 4 pretty weak in terms of perf/W. I'm sure we'll get a clear picture next month as they come out.

I had a terrible experience with the unstable Atom Z2760-equipped Samsung XE500T. Although there are good sides, it's a nightmare dealing with all the driver bugs and battery drain. I even tried clean-installing my own copy of Windows on it. Eventually I had to go back to an older driver for a couple of things and turn off the sound, and now it shows less battery drain than before. I don't even know if Connected Standby is working properly; it never really updates anything. It really sucks with Intel's buggy drivers. I would wait and watch for a couple of months to see how these work out, but I wouldn't expect Intel to get any better at driver support.

Were any power consumption numbers observed while running 3D workloads? I'd imagine they would be lower than the multi-threaded CPU benchmarks, but it'd be a nice data point to have rather than guessing.

"Tablet SoC" = not efficient enough to be put in smartphones. My rule says if it's not a "smartphone chip", then it's not a "mobile chip". Wake me up when Intel actually launches a smartphone chip that's used in tablets, too.

They'll get away with this because tablets have larger batteries, so they think we won't notice it's less efficient.

Medfield and Clover Trail+ Atoms are already being used in phones and tablets... The direct upgrade is called Merrifield and will launch early next year, but aside from being coupled with an LTE modem, it's still based on the same Silvermont architecture as this Bay Trail model!

And Bay Trail is even more energy efficient than the previous Clover Trail Atom! The Z3770 provides roughly twice the performance while using no more power than the previous Z2760 Clover Trail, which was a 3W-TDP-rated SoC...

So battery life would only be worse if they coupled it with energy-hogging parts like a 2560x1600 screen, for example... The backlighting alone would increase power consumption by up to 30%...

Asus had issues getting good enough battery life for its Tegra 4-updated Transformer Infinity model as well, because it used that high-resolution screen, for example...

This is truly incredible. I did not think Silvermont would outperform Kabini, especially given that Kabini has a much, much higher TDP.

It's amazing: AMD's Jaguar cores have been utterly invalidated. Kabini still has quite the graphics advantage, but with a power draw several times higher than Bay Trail's. This lead will only be exacerbated when Intel's 14nm Airmont drops next year.

This is truly unbelievable. The question of who will win the ARM war has been answered.

It took 5 years of pain, but it seems the wait was worth it.

I prefer a Kabini tab all the way. Where did you see such outrageous outperformance? The CPU cores are better in some tests, but the graphics get pounded by AMD; it's not even funny. Kabini would give me all the CPU power I need from a tablet, and I prefer more GPU power. I don't really care about power draw; I own a Surface Pro, so anything would be an improvement.

Bay Trail cost Intel less to make than the previous ATOM, thanks in part to the move to the 22nm FAB... which is much better developed now than when Intel introduced it with Ivy Bridge. So they're pushing for even lower pricing, which at best you might find Temash competing with, but not Kabini...

There are already two design wins for 8" tablets that look like they'll be priced barely over $300 and run full Windows 8.1... And they can go cheaper if they release with Android instead... Never mind the lower-end versions of Bay Trail: the Z3770 is a quad-core, but they can go down to dual- and even single-core models...

"Bay Trail cost Intel less to make than the previous ATOM, thanks in part to the move to the 22nm FAB... "

Dream on... the previous Atom was 32nm and Bay Trail is 22nm tri-gate. In which universe is Bay Trail going to be cheaper? Maybe if Bay Trail's area were a third that of the 32nm Atom, which was, what, around 120mm^2? Using tri-gate and double patterning is not cheap!

Secondly, people have mentioned that power consumption should be better than the Snapdragon 800's. Really? Snapdragon 800 goes into phones, which means its power consumption is lower than a tablet-only chip like Bay Trail's. Had Bay Trail been that super-duper good on power consumption, rest assured they would have put it in a smartphone...

Maybe if we had some non-JavaScript benchmarks (the new Geekbench 3.0, anyone?) we would have seen where the Bay Trail chip stands relative to the best of the ARM camp on the CPU front. As it is, we know Bay Trail is soundly beaten in the GPU department, and the general Anandtech fluff on CPU benchmarks makes it difficult to say who is where on CPU... Maybe the scammed AnTuTu benchmark would also have helped Intel; wonder why Anand didn't put it in...

Had Bay Trail been that super-duper good on power consumption, rest assured they would have put it in a smartphone.

They are going to put it into a smartphone with Merrifield. As I understand it, the reason Bay Trail is targeted at tablets while Merrifield is targeted at phones is mostly the integration of a modem into Merrifield; phone makers tend to shy away from multi-chip solutions if they have other options, given the space premium in phones. I'm sure lower-clocked parts and/or dual-core offerings will be available (just like its ARM counterparts), but the architecture is still the same 22nm Silvermont found in Bay Trail.

No dream, just reality! Intel's 22nm FAB is on its second generation; they got costs down, and the Atom is a far smaller and cheaper chip to produce than Intel's Core series. So yes, costs are down... Really, look at the pricing of the devices announced at IDF! They're all much cheaper than when the Clover Trail-based devices first came out!

And Silvermont is going into phones; it'll replace the present Medfield and Clover Trail+ early next year when they release Merrifield!

Well, there are two different things here.

1) Whether it costs less vs. Clover Trail, or whether the pricing is just different: Clover Trail was on the 32nm fab, which is 4-6 years old by now, whereas 22nm tri-gate is only 2 years old. Apart from the age of the fab, double patterning matters: it means you have double the number of steps compared to single patterning (in the same stages). Even for relatively old fabs, double patterning will increase costs. Tri-gate also adds cost, not least in testing. I would bet my last penny that Bay Trail is more expensive to manufacture than Clover Trail, but Intel is pricing it lower (meaning lower margin) to avoid repeating past mistakes. Pricing Bay Trail at $100+ would mean missing the market altogether, no matter how good the chip is.

2) Cost versus the Core series is irrelevant. The final cost depends primarily on the area of the chip, but Intel hasn't given out any details. WHY, might I ask? Given the amount of graphics power and the number of cores (4 vs. 2 in Clover Trail), I would think Bay Trail's area might be closer to 100mm^2. So yes, a 177mm^2 Core i7 is much larger, but it sells for $450 retail. Big difference...

By the time Intel is ready to put these in a smartphone, Qualcomm and the others will have products on 20nm (planar) with at least ~30% gains in CPU and GPU (maybe more in GPU, with HSA capability), and Intel will fall behind again.

Sorry, but you lose the cost bet. For one thing, you're forgetting that the 22nm FAB reduces the amount of material needed for each chip, and they can put more units on each wafer! Mass-produced, the cost of things like tri-gate gets absorbed and negated... The cost increase is also smaller now than when they first introduced the 22nm FAB. This isn't new technology anymore, and the FAB yields are high enough to reduce costs now!
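For what it's worth, the "more units on each wafer" part of this back-and-forth is easy to sanity-check with the standard dies-per-wafer approximation. The die sizes below are hypothetical guesses taken from this thread, since Intel published neither figure:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    # First-order estimate: usable wafer area divided by die area,
    # minus a correction for partial dies lost at the wafer edge.
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

print(dies_per_wafer(120))  # ~120mm^2 guess for 32nm Clover Trail: 528
print(dies_per_wafer(100))  # ~100mm^2 guess for 22nm Bay Trail: 640
```

Of course, yield, wafer cost, and the extra patterning steps can easily swamp that difference, which is exactly what the two commenters are arguing about.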

The simple fact of the matter is we're already seeing pricing announced for Bay Trail-based devices, and they're starting well below what Clover Trail devices were introduced at!

The Asus T100 has a starting price of only $349, and that includes the keyboard dock! PCMag even ran a benchmark (something to do with matching names and addresses using an algorithm) that took Clover Trail over 30 minutes, but the Z3740 in the T100 finished it in just over 15!

So the lower pricing and higher performance are a reality!

And no, Intel will already be putting these in smartphones by the time Qualcomm and the others get 20nm into actual products! Merrifield is getting an early-2014 release, and besides, by the time they do push out 20nm products, Intel will already be getting ready to push out its 14nm Airmont update, which has already been put on an accelerated rollout schedule!

Really, it's understandable to be pessimistic. Intel took over 5 years to get serious with both mobile and the Atom, but believe it or not, they're finally serious about it...

This is just silly. Intel already put almost twice as many transistors in BT-T as in CT+, so the die size should be similar, which invalidates your whole point about it being cheaper. 32nm was back then a much older tech than 22nm is now: 32nm began shipping in Q4 2009 and CT came some 3 years later, while assuming BT-T ships this year, there's only a 2-year gap since Intel's first 22nm offerings. So neither die size nor maturity is on BT-T's side, and the sheer cost of moving to 22nm will be the deciding factor. I wouldn't be surprised if the silicon costs twice as much to make and the total cost is 30% higher.

Pricing has NOTHING to do with cost itself. BT-T is priced lower to remain competitive, that's all. Case in point: the Haswell 4C-GT2 (177mm^2, 4702M @ 2.2GHz) is more expensive than the Haswell 2C-GT3 (181mm^2, 4600M @ 2.9GHz), and that's with the same process, TDP, and time to market. And a higher frequency / bigger die has a higher defect rate. So there goes your cost theory.

Also, keep in mind NVIDIA claims that per-transistor cost stops improving beyond the 28nm node. The general trend holds true for Intel, even more so in the CT vs. BT-T case, since 22nm is still young, whereas when CT was announced, 32nm HKMG was widely used by every fab in the world.

Nonsense. First, 22nm was started longer than 2 years ago. It only began producing shipping products 2 years ago, but they were working on the technology much longer than that, and it doesn't take long to perfect, which they have to do before they seriously move on to the next FAB advancement; they're already getting ready for that as well!

Really, Intel is way ahead on many of these technologies. For example, Intel had been using HKMG at 45nm, long before ARM manufacturers caught up at 32nm, and tri-gate is already a well-developed technology for Intel. Also, Bay Trail is a much simpler chip than the Core processors, so it doesn't require as high precision to make as the far more complex Core parts, and thus it's a lot easier to get good yields!

And again, my point is that the prices of actual devices coming out with Bay Trail are lower than the previous Clover Trail's... So regardless of what you think, Bay Trail is in fact cheaper than Clover Trail!

Not to mention Intel stated months ago that Bay Trail would be cheaper than Clover Trail! Actual product pricing shows they kept their word!

I don't think the performance is all that surprising. Intel has been working with a major process advantage, and their CPU architecture is simply better at this point as well.

I saw some articles elsewhere on the 'net where the writer said they didn't think Bay Trail would use less power than Kabini, which was just stupid. If Bay Trail used as much power while on a much better process node, it would be ridiculous.

The multi-threaded benchmarks are completely uninteresting: how many cores get slapped on a die is a marketing consideration, not a technical one. The interesting benchmarks are the single-threaded ones. If we believe what Apple says, the combination of the ARMv8 ISA, higher frequency, and the usual "more transistors, so smarter microarchitecture" gets Apple (and presumably the other high-end ARM designs) to rather better than this single-threaded performance, and, I would guess, at rather lower power.

The thing with Bay Trail is that it has turbo where Kabini does not, meaning it's probably spending most of its time around 2GHz or so, which is a 33% faster clock, or about 25-28% faster at a perceptible level. It got me wondering when Cinebench put them equal in IPC but everything else said Bay Trail was faster; it makes me think Cinebench really hammered the cores while everything else wasn't as punishing. That's the only way I can account for the performance boost everywhere else. I'm not saying it won't be perceptibly faster, especially with lower power consumption; what I am saying is that its graphics are absolute garbage. I mean, it still loses to Brazos... Either way, for its power usage it's pretty damned good, but again it's an immature 28nm process vs. a very mature 22nm one. I'm definitely curious to see what Kabini looks like after a refresh.

Don't forget the real base speed of these is 1.33-1.5GHz, not whatever they're using for turbo. But turbo is what the benchmarks are using; you won't get that performance all the time on your device.
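Using the round numbers quoted in this sub-thread (actual base and burst clocks vary by SKU), the gap works out to:

```python
# Approximate figures from the comments above, not official specs.
base_ghz, burst_ghz = 1.5, 2.0
headroom = burst_ghz / base_ghz - 1
print(f"burst is {headroom:.0%} above base")  # burst is 33% above base
```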

Uhh, his point is COMPLETELY valid. The actual usage model of these devices is very much short sprints, for which high throughput is ideal, followed by long stretches of doing nothing. Yes, this doesn't match a particular class of games, and doesn't match whatever weird, inappropriate workload (like Cinebench) you want to run on your phone, but it matches how MOST people use their phones (and tablets).

This is the one area where, IMHO, Intel has a clear advantage over ARM. But I suspect it is very much a temporary advantage. Apple, for example, obviously spent all their A7 design time getting 64-bit in place and picking the remaining low-hanging microarchitectural fruit. But I expect that for the next chip, this sort of ability, to turbo up to high frequencies for short periods of time, will be their primary focus.

No x64? Guess I won't be getting one of these right now, then. I'd rather run Ubuntu on it than Android, since the latter is better suited to smartphones. Ubuntu is based on my beloved Debian, which I've been using since the stone age of Linux. =)

Quote:"Although the core architecture is 64-bit in design, there will be no OS support for 64-bit Bay Trail at launch. Windows 8.1 with Connected Standby appears to still be 32-bit only, and obviously Android is 32-bit only at this point as well."

Yeah, they said Cedar Trail was supposed to have 64-bit support too, but then they pulled the rug at the last moment. These Bay Trail-T CPUs are going to be so short-lived; don't expect much from Intel support after about January (if that long). They'll dedicate the resources to Airmont-T.

Cedar Trail technically did, but because it used an Imagination GPU, the driver support never materialized, especially after Intel decided to give up on the netbook-range Atom and re-purposed it for the mobile market. Some of the D-series Cedar Trail models were even discontinued in their year of release.

Bay Trail, though, uses the Silvermont architecture, which is fully 64-bit, and uses Intel's own graphics based on a scaled-down version of Ivy Bridge's HD 4000... and Linux support for Intel's driver was added back in April, btw!

The only thing is Intel isn't pushing the 64-bit advantage for mobile devices. But Bay Trail is also going into laptops and desktops/servers, specifically the Bay Trail-M and -D series that'll be sold under the Celeron and Pentium brand names, and that's where they'll push the 64-bit advantage. Since it's the same architecture, you should be able to get drivers for all Silvermont-based devices...

Clearly, the answer is no. For most tablets and phones, it's not nearly better enough.

Convincing existing ARM vendors to move to Intel will not be an easy task; there will be huge costs involved. To drive such a move, Intel would need far better performance or a far better price point, but probably both. It seems unlikely that Intel has either. The performance is only minimally better than ARM's current offerings, and while we don't know pricing, given that Intel has the highest margins in the industry and ARM among the lowest, one must assume that Intel's new silicon won't be price competitive with ARM.

So where will these chips shine? The only place they seem likely to find a home is in full (non-RT) x86 Windows 8 tablets and phones. Windows is the only large mobile player that requires x86; for the rest of the market, x86 is a liability.

Intel may mark this a success if it kills off Windows RT. In that singular goal, these chips seem likely to succeed. But such a victory will do little to make Intel-powered tablets competitive with Android. Full x86 Windows boxes will carry the full duo of Wintel taxes, so it's hard to see these products ever being price competitive with Android, not to mention the dearth of mobile apps on the Windows platform.

Intel and Microsoft have allowed ARM and Android too large a head start. Now their lead may be nearly insurmountable.

Samsung already used the old Atom in their Galaxy Tab 3 10.1, and they make their own ARM-licensed cores in-house. It's not going to take much to get these vendors to switch. If the performance is there and the price is competitive, plenty will make the switch. These OEMs design electronics as their business; it's not going to be a huge difficulty for them to make designs with Atom cores instead of ARM cores. And considering x86 works with both Windows and Android, I don't see why having a broader compatibility base is somehow a negative.

So why have Samsung and Asus released Android devices featuring Intel CPUs? Both Samsung and Asus purchase a lot of expensive Intel chips for their laptops. It would be less than surprising if they were compensated with discounts for having released Intel-powered Android devices.

Another huge problem for x86 Android is software support. Nearly all Android applications are compiled for the ARM instruction set. The hundreds of thousands of existing Android apps *WILL NOT RUN* on an Intel-powered Android device. At best, they need recompilation; at worst, rewriting. Moving hundreds of thousands of ARM-compiled applications to x86 is a heavy lift. Intel has a recompilation service, but it can only do so much.

The bottom line is that Intel is just now, finally, releasing a CPU competitive with ARM. ARM has a massive lead, larger than Intel has ever had in the PC market. To convince manufacturers to relinquish ARM for x86, Intel doesn't just need minimally better technology; it needs far better technology and equal or better pricing.

Right now, Intel's technology is not that much better than ARM's. And their pricing? Unless they decide to sell below cost, they'll likely never beat ARM's pricing.

I agree with much of this, but I think you're a bit off on the Android applications aspect. The vast majority of Android apps are written in Java, so there's no incompatibility with x86 there. For the native apps, recompiling to x86 is somewhat trivial, since Android is a Linux OS. Third, it seems that Intel's ARM-to-x86 ISA translation layer works pretty well.

Yes, and to further correct Dentons' comment on Android compatibility: Intel has a binary translator for Android that converts ARM ISA to x86 on the fly, and it works amazingly well. If x86 gets more traction in Android devices, it will be used less and less as app developers compile for x86 in addition to ARM (which is trivial).

Intel may mark this a success if it kills off Windows RT ... not to mention the dearth of mobile apps on the Windows platform

Wow, you just talked about Intel killing off WinRT and then moved on to a lack of applications for Windows. You can't have it both ways. Either legitimize WinRT as a competitor and bash it for a lack of applications, or (more realistically) dismiss WinRT and accept that Windows has more applications, of higher quality and more fully featured, than any app store. Since when did a fully featured application become inferior to an app? How many (software) things can you do on a tablet that you can't do with a Windows PC, or even OS X? Let's even throw Linux in there for kicks and grins.

given that Intel has the highest margins in the industry and ARM among the lowest, one must assume that Intel's new silicon won't be price competitive with ARM.

Also consider that the biggest advantage of maintaining a process lead is cost. Yes, a new process costs more than an old one, especially when applying new techniques like double patterning and tri-gates, but the bulk of the cost is still in the silicon. The exact same chip fabricated on a smaller process generally means lower cost, due to the ability to fit more chips per wafer. Intel maintains the highest margins in the industry because it also maintains the lowest cost per comparable chip. I'd imagine Intel will give these chips a price tag to match (or slightly exceed) their level of performance compared to the ARM competition.

Unfortunately, as you said, simply being competitive isn't enough to justify a rapid switch-over of an ARM-dominated market. They are going to need to offer something their competitors don't have, or eat significantly into their margins. That said, they will get some design wins simply by being competitive and being Intel. Pickup in the Windows market will help too, as manufacturers could conceivably use the exact same or very similar hardware to power both a Windows and an Android tablet, saving cost. This could fuel a slow long-term takeover, but like you, I don't see a sudden switch.

The 3DMark Extreme bench scores look suspect; I think the graphs are swapped.

Clearly AMD's graphics in Kabini smoke Intel's and many other non-Ivy Bridge GPUs. I wonder how much of a power hit Kabini took to produce that. Meaning, would a GPU of comparable performance to Intel's make the power-to-performance ratio more favorable, at maybe 3.5W max under load? I'm not saying that if Kabini's GPU were cut down to even Bay Trail's level it would beat Intel's power consumption, but I suspect it would be closer.

I'm irritated they don't just call it Atom on desktop and laptop; they're clearly trying to get out from under the netbook Atom stigma. Whatever. Ultimately, I'm still disappointed that Atom doesn't do enough on the GPU side. It still leaves me, as a consumer, having to choose between graphics-intensive and CPU-intensive workloads. And the fact that AMD's Kabini is even close on CPU performance is a weak showing, because Intel has a mature-process-node advantage over virtually everyone now.

More importantly, can someone explain how Anand was able to run the Android GPU bench on x86 Kabini and x86 Ivy Bridge? Since when can x86 processors run ARM code natively? If they can't, did Intel actually let Anand use their pre-beta Android port on the other two platforms?

Yeah, but I was under the impression it uses a VM and a custom build on top of that. Anand was able to run the GPU bench (I assumed this meant a native x86 build), with a 4.2.2 build at that. Isn't Android built for ARM only?

All of Android runs in a VM, on every Android device in the world. Apps can call native routines via JNI, and some apps do contain native *.so libraries (for multiple ISAs), but in the end, Android is a VM: your UI, your system apps, everything runs through Dalvik.

The difference between Atom devices and ARM devices is that Intel has included a binary translator to convert the ARM *.so to x86 code on the fly. If no x86 .so is included in the app, the x86 device will use the ARMv7-A library via the translator.

It is very easy for app developers to compile their libraries natively if they choose. Most apps have NO native libraries in them; they're all Java. When compiling a native app, you just tell the build system which ISAs to compile for. Of the apps that do have native components, most compile for ARM, ARMv7-A, and MIPS; flicking another switch to also compile for x86 is not that difficult.
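That "flicking another switch" step maps onto a single line of NDK build configuration; a minimal sketch for a hypothetical ndk-build project looks like:

```makefile
# jni/Application.mk: build the app's native libraries for x86 as well
# as ARM, so x86 devices don't have to rely on the binary translator.
APP_ABI := armeabi-v7a x86
```

Apps with no native code need nothing at all; the Dalvik bytecode is ISA-neutral.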

It wouldn't be anywhere close. AMD doesn't have Intel's process advantage. If AMD could get power down just by cutting GPU performance, they probably would have done so already. In a tablet, power consumption is pretty damn important.

In fact, even more than these load tests, idle power consumption is probably the key, and that wasn't fully tested at this point.

Obviously the proper comparison here would be with Temash, since that's AMD's tablet SoC. Kabini isn't meant to compete with Bay Trail in power consumption, so it's pretty far from a surprise that it draws more power.

I'm guessing the results would be pretty much the same: Bay Trail beats Temash slightly in CPU benches, but still gets soundly thrashed in GPU benches.

Also, don't the current Kabini SKUs lack AMD's equivalent of turbo boosting? I thought I saw that in the Kabini review article...Reply

The comparison with Kabini is an interesting one. Kabini supports more instruction sets (Bay Trail is Westmere-level, which means it lacks AVX), but it has a single memory channel and lacks any form of turbo. Amusingly though, Kabini is practically on par here in terms of single-threading (per clock) but is being hampered a little in multi-threading if 7-Zip is any indication (memory bandwidth?).

Add in turbo to Kabini and the Bay Trail advantage in single-threading disappears, but power usage is obviously higher. AMD have their work cut out here; Beema (and HSA) can't come soon enough.

I think AMD's yearly cadence will help here - Beema has HSA plus GCN2.0 at the very least on a more mature 28nm process - however AMD will still be on 28nm when Intel goes 14nm. Still, if HSA is the be-all-and-end-all that we've been told it is, they have hope.

The 4600M is embarrassed in CineBench; clock either SoC at 2GHz and they'll equal it for far less power. CineBench is very Intel-friendly, though the same trick in 7-Zip's multi-threaded test would result in the same outcome. Proof indeed that Kaveri is needed.Reply

Kabini has Turbo; only the two dual-core Temash models drop it... and Bay Trail has its own alternative to Turbo Boost, called Burst technology...

And HSA requires industry support... they need to make it a standard, which is why they started the HSA Foundation, but until that happens it's only a novelty like Nvidia's CUDA, and right now it's mainly just ARM backing them...Reply

The only Jaguar-based chip (outside of consoles) with any turbo functionality (note I'm typing "turbo" in lower case as I'm referring to the basic technology and not a specific implementation) is, at this time, the A6-1450, which is a Temash chip. In addition, without the turbo dock, it cannot actually achieve its turbo frequencies.

Consider that ARM is what generally binds most of these together, and their huge dominance of the smartphone and tablet market, and you can easily see that HSA isn't just another "novelty like Nvidia's CUDA".

Sorry, but nothing you just said changes anything I stated! The smartphone market has no real use for HSA or hUMA, and until it does there will be no real push to adopt it.

Problem is, there are more widely accepted standards already available, and what AMD is pushing only really benefits their hardware! Really, HSA is not limited to what AMD is trying to establish... they just want their solution to become the de facto standard, but that's easier said than done!

AMD won't even add discrete GPU support until sometime next year, so you're stuck with limited APUs right now, for example!

So this isn't even a finished standard they're trying to push!

Right now the main interest is in the server market, where custom software is common and it's a lot easier to push a new proprietary solution, and that's the main reason why ARM and its many partners are interested in it...

Problem is for this to really take off it needs to appeal to the general consumers but like CUDA, applications are few and far between right now...

Sure, there's potential, but there's nothing yet to suggest it will succeed any more than CUDA did, and that has been around a lot longer!

Though it does benefit AMD in that they can easily swap out their x86 cores for ARM cores and still offer HSA, etc... So it gives them some much-needed diversity and flexibility as they continue to push their solutions to market...Reply

One thing that would be nice to see in the charts is max turbo clocks as well as nominal clocks where relevant. You list all parts with nominal frequencies, which makes it hard to get a handle on per-clock performance. Off the top of my head, I know that the i7-3517U has a max turbo around 3GHz and that the Pentium 2020M has no turbo, but I had to go look to see that the Trinity part listed has a 3.2GHz turbo.

Of course, that doesn't really get into the discussion of how much time each CPU really spends at those turbo clocks--that is obviously a more nuanced point, which I'd love to see you guys investigate as it changes with chassis/cooling design and probably other vendor-specific settings--but it'd still be helpful as a basis for comparison. And it would matter even more if you ever include ARM chips in those charts, which generally list unsustainable max clocks (and no TDP).Reply

This. The 4600M has some very interesting turbo issues, often dropping below base clocks when any power is sent to the GPU. Also, was this Atom boosting the whole time, or was it running at its base clocks?Reply

What I would like to see is the E-350/E-450/E2-1800 added to the games comparison; the HD 63xx-based parts may be more powerful, but in games the IGP could not get the most out of it because of the crappy CPU performance and lack of memory bandwidth of Brazos.Reply

Another design by Intel with an overpowered CPU and an underwhelming GPU? This thing should have at least 6 EUs to be competitive. And those two extra cores aren't really necessary since you can't expect a tablet to be productive. Why don't they make it 2C+6EU? However, Bay Trail is really going to put the last nail in the coffin of AMD's mobile strategy. AMD's only advantage here is a way faster GPU, but only on 15W/25W Kabini parts. The 3.9W Temash is clocked at 225MHz, which puts it on the same level as the HD6310, and thus the GPU is at best as slow as the 4 EUs that Bay Trail has. AMD now has nothing competitive, even though Kabini/Temash is a massive improvement over Brazos. Reply

I think we need to compare IGP performance in actual games. As I was saying, the HD6310 for example performs quite well in benchmarks, and yet still gets beaten by the HD2000 in some games because the slow CPU and memory bandwidth were not helping at all. Bay Trail Atoms need to be compared to Brazos, Temash and Kabini in games; benchmarks may give the wrong impression here. I think it may be able to beat Brazos in some games.Reply

The performance looks really good. It was about time Intel entered the fray.

That being said, Bay Trail faces a number of obstacles to acceptance in the mobile space.

1) Bay Trail executes the wrong instruction set. Android is the new Windows, and making it work on Bay Trail is critical for the SoC's success. And while porting the Android software stack to x86 is doable, it is not trivial. The user of a Bay Trail device might not be able to download every app from the Market. Why doesn't Intel just make it an ARM SoC?

2) Price. A Tegra 4 or Snapdragon 600 SoC can be had for less than $20. Will Intel compete at this price point? That would be very unlike the Intel we know.

3) Other parts of the SoC: Krait and Cortex cores can be combined with on-chip DSPs, radios, various controllers and other stuff. The appeal of an Intel SoC will be lessened if it requires one, two or three external chips to be added to the design. Qualcomm SoCs are very successful in part because they put so much in a single package, leading to a simpler and less expensive board.Reply

Actually, Google officially supports Android on x86... Intel devices have been getting the latest Android release for the past year!

For Medfield and Clover Trail+ they just added a binary translation layer to ease compatibility with native, ARM-optimized apps... which are mostly games... Google support also means they updated all the SDKs so developers can easily develop for both platforms, and support for Intel devices has improved significantly over the last year.

Just compare early reviews from when Intel Atom-based Android devices first came out a year ago with more recent ones, and you'll see most apps run fine now...

Silvermont also supports virtualization extensions, which can be used to accelerate some binary translation and similar operations for better performance with non-x86-optimized native apps... combined with the better overall performance, that should make for a much more comparable experience... Mind, with Android apps being mostly hardware agnostic, Intel started with over 90% compatibility, and that has only improved as developer support has grown; it will continue to improve if Intel keeps gaining share in the mobile market...

It also helps that they're going to start pushing Android on traditional PC systems, many running Android on Intel processors... so that'll help boost developer interest in x86 support...Reply

Anyone who understands the value of really good single-threaded performance on integer workloads (by far the most important property for a tablet) would be stupid not to select Bay Trail.

Almost every single Android application runs on x86-based Medfield phones, so compatibility with existing Android applications is a non-issue.

It comes down to the price of the device. If Silvermont-based Android tablets end up at roughly the same price, why wouldn't I want the better-performing one (battery life seems to be about the same)?Reply

Intel isn't going to kill ARM. While Intel is finally releasing a product that's competitive with ARM, it's probably five or six years too late. To paraphrase from above, convincing existing ARM vendors to move to Intel will not be an easy task. There will be huge costs involved in any such move.

In order to convert ARM customers, Intel would need either far better performance or far better price points, and probably both. It seems unlikely that Intel has either.

Even with these latest chips, Intel's performance is only minimally better than ARM's current offerings, and while we don't know pricing, given that Intel has the highest margins in the industry and ARM among the lowest, one must assume that Intel's new silicon won't be price competitive with ARM.

Further, this chip's stand-out feature is its X86 compatibility, something neither of the top mobile operating systems requires or desires. ARM also allows mobile manufacturers a large, competitive marketplace from which to purchase CPUs. Were tablet manufacturers to spurn ARM, they'd drop themselves right back into Intel's high-margin arms.

These chips seem specifically designed to kill off Windows RT. In that alone, they'll probably succeed. Gaining real market share from ARM would require robust X86 versions of iOS and Android. While X86 Android does exist, almost entirely due to work by Intel themselves, there are tremendous application issues. Google seems unlikely to do much of the heavy lifting; the development would continue to be almost entirely the responsibility of Intel.

ARM has a massive lead. To keep that lead ARM just needs to stay good enough. ARM can win by staying reasonably competitive with Intel's performance while continuing to destroy them on price. Unless Intel brings both massive performance gains and tremendous price reductions, it won't be worthwhile for the big existing ARM players to lift heaven and earth to support X86.

Intel's only hope is X86 Windows 8 for tablets takes off. Good luck with that. Devices with a full WinTel tax will never be cost competitive with ARM / Android. Reply

Intel already got the Atom SoC to within $10-$20 of an ARM SoC... and Bay Trail will be priced even lower than the present Atom!

And they are already competitive on power efficiency, leaving performance as the last thing they needed to excel at, and Bay Trail looks to gain enough for it to count...

Mind, Intel doesn't have to deal with hardware fragmentation as their own driver support for their own GPU is far better than most of the closed drivers used for the proprietary graphics used by the majority of ARM devices.

And people are more used to using desktop Linux on Intel systems anyway... So don't underestimate their chances too much, they're far from terrible...Reply

See, here is the issue: 10-20 bucks more means the chip is either 50% or 100% more expensive than the competitor, and offers only a small improvement performance-wise. Add in Atom's tainted name, and it's no wonder that OEMs don't want to use Atom as much as, say, Qualcomm chips.Reply

Whether Intel's chips are "only" 50% more expensive or 100% more expensive, why would that motivate a move to Intel? What's the advantage? A 5% performance gain? That's not a motivator, especially considering the considerable advantages of ARM. With ARM, tablet builders can purchase CPUs from a multitude of manufacturers. With X86, that competition would be non-existent.

Were Intel to have a 50% or 100% performance advantage, some manufacturers might be enticed, at least for their higher end products, but that's not where we are. As things stand now, only those manufacturers needing X86 compatibility are likely to bite. The only need for X86 is Windows. So other than the handful of Android tablets that Intel pays to have built, all of these chips will go into Windows 8 machines. Likely 99% + of shipping volume.

This is a chip designed to prolong the WinTel duopoly. That is this chip's best case scenario. It's going to have almost no real-world uptake in Android and absolutely no uptake on iOS. Its intrusion into the mobile ecosystem will, at best be 5% to 10%.

To clarify, I was referring to this specific chip, not X86 in general. Were I being overly precise, I'd have written "The only significant market need for this particular, mobile-specific, low-power X86 chip is Windows."

It's hard to see any of the other markets you've mentioned needing, wanting, or using these chips. That's a massive problem for Intel, as right now, the tablet market is almost entirely iOS and Android. For iOS and Android, X86 compatibility is at worst a significant liability, at best, a completely unneeded expense.

It seems unlikely Intel will see much uptake with Apple or Android manufacturers. How else can Intel gain traction in the Tablet market if not with Windows? And that, as we know, requires the Microsoft tax, pricing Intel well out of the current tablet market.

Intel's in a tough spot. While they have finally released a chip with power / performance parity to ARM's latest, it may be 5 or 6 years too late.Reply

I think the same thing that Intel did to the RISC-based processors in the 1980s, 90s and 2000s will be done to it by ARM. Disruption from below, my friend. And that's what happened: first you say the competition is cheap and low quality (on whatever metrics) and ignore it. Then you say the same thing, but secretly tool up to compete (which takes a few years); in the meantime, the cheap-but-good-enough parts are improving much faster than you are. The reason you take time to compete is that you don't see much money there, and by corporate-style IRR calculations it doesn't make sense to compete. By the time you are ready to compete (which takes many years, 5-6 in the case of Intel), the erstwhile cheap and bad competition has become cheap and competitive, and finally cheap and good, then better. Then you die.

Intel dead in 3-4 years??? I hope ARM chip manufacturers take the Intel threat more seriously than you do. Intel have the resources and now (finally) the mobile focus to one day have a monopoly... for the sake of us consumers I hope ARM chips can remain competitive in the long run. Intel have caught up a lot with Bay Trail, and this will make the next generation of chips very interesting. We may end up with a situation similar to Intel/AMD in high-TDP chips... Intel for high-performance mobile/tablets and ARM chips for the low end.Reply

I know that this is slightly off topic, but I was under the false impression that Connected Standby would be coming to 8.1 even on the architecture. Are there any plans whatsoever to bring that feature?

It's not a huge deal, but it'd be nice for Skype calls or even for Windows 8.1's new "Alarms" app to have that support. It actually makes the idea of upgrading to a Haswell tablet from my Surface a bit less appealing.Reply

Connected Standby is supported and has been supported; it's only new for Intel's higher-end processors, and Haswell will bring that support to the Core line... The Atom SoCs already had it... It just requires OS, system firmware, and hardware support to work...

If anything is off, like bad wifi drivers, then it won't work correctly...Reply

Well, first of all, that was supposed to say "I was under the false impression that Connected Standby would be coming to 8.1 even on the x64 architecture."

Secondly, what I'm referring to is the quote from the article: "Although the core architecture is 64-bit in design, there will be no OS support for 64-bit Bay Trail at launch. Windows 8.1 with Connected Standby appears to still be 32-bit only"

I'm completely aware that it depends on every part of the chain and that support for it is now available in Haswell, but that doesn't do much if I can't use all of my 4 or even 8GB of RAM with that fancy new chip.Reply

I also think that Bay Trail's poor GPU performance is deliberate, so that the Bay Trail SoC doesn't cannibalize Core/Haswell sales. A $50 Bay Trail with HD 4000 would simply eat up the sales of Intel's more expensive parts.Reply

It's a TDP/thermal thing. I'm guessing the J2000 and N3000 series will have more EUs. CPU performance on this is still waaaay behind anything in the Core iX series, so it's not really any competition for anything except low-end AMD notebooks and stuff that is using SB-based Celerons.Reply

I used to be able to keep up with all the code names, I'm starting to lose it now...

"Bay Trail's overall 3DMark Ice Storm score (720p) is about on par with Brazos rather than being a competitor for Kabini. Bay Trail's HD Graphics core is based on Ivy Bridge and it's a cut down implementation at that."

"3DMark's Physics test is basically a multithreaded CPU benchmark, which allows the Z3770 to pull ahead of the A4-5000."

"If we isolate graphics alone however, the Z3770 once again falls behind Brazos."Reply

Where are all the people that kept claiming it was impossible for Silvermont to have better CPU performance than Kabini in previous articles? It's either as fast or faster with significantly less power draw than the AMD part. GPU performance is a bit of a downer, but right in line with what I expected based on a cut-down HD 4000. Kabini's GPU kicks its ass, but it's not like either of them is fast enough for me to play real PC games on, so it's immaterial to me for now. I'd like this in an 8" or 10" with a 2560x1440 display, a reasonably fast IO solution, and Windows 8.1 for ~$500.Reply

"Intel's way ahead of AMD", by 30% is good but not great. I would say by 60% would be great but it is not the case. AMD is also on generation older process node, so it is not that bad for them as their gpu makes up the difference in some ways.Reply

I was going to be excited about Silvermont and all was well until I saw the GPU charts and changed my mind. Absolutely miserable performance.

Come on, Intel. At this rate you would have been better off using IT GPUs. Care to explain using only 25% of HD 4000's EUs? I don't think heat is an excuse here, and power consumption shouldn't be either. Reply

This shatters AMD's tablet aspirations. If the Z3770 performs around the A4-5000, then AMD's low power APU, with 2 cores and running at 1GHz, using more power, is pretty much a no go. From the few benchmarks here it looks like the new Atom's GPU performance will be around that of the 225MHz AMD GPU.

All in all, while I'd love even better GPU performance, I think I'll be happy with a Z3770 tablet, and in CPU performance it will be an upgrade to my E-350 laptop. I'll be waiting to see what Airmont brings next year.Reply

Nope, power consumption would still be too high; AMD has nothing that can really match up in this space. This has been true with Kabini and all prior iterations. This is why there are so few design wins. AMD has a power consumption problem (just as Intel did a few years back). Reply

Jaguar appears to lack the sort of dynamic L2 power gating we will be seeing with Steamroller which would make sense in low-power situations. Also, it's still a dual-issue front end - highlighted as a possible decent boost to performance. We don't actually know this particular detail as regards Bay Trail; it could very well be dual-issue as well.Reply

Not so sure about that; Bobcat sold fairly well. Also, if AMD optimized the A4-5000 for tablet scenarios, then the power consumption would be closer to Atom. Notebookcheck reviewed the A4-5000 in the Acer E1-522 http://www.notebookcheck.net/Review-Acer-Aspire-E1... and their web browsing test reached 3:47 hours on a 37Wh battery [equating to a ~10W draw with laptop components]Reply

A few things: browser performance, or JavaScript performance, is heavily influenced by the VM, and by the version used to test. Not to mention these VMs have had the best tuning on x86 for years. And Safari's JS VM, JavaScriptCore, isn't very optimized for Kraken, as shown at www.arewefastyet.com

And I don't agree much with the conclusion. While others have been citing Anand as a pro-Intel site, I have often disagreed with that. But looking at those results, I don't see how the CPU side is winning anything, at least not against Shield or the upcoming Qualcomm MSM. As I have said, the JavaScript benchmarks are not a reliable way to look at it. And in all other CPU-based performance, Intel is either behind or on par, not winning the benchmarks. So no, the CPU isn't the best performer here.

And on GPU, it concluded with "Intel's HD graphics in Bay Trail appear to be similar in performance to the PowerVR SGX 554MP4 in the iPad 4". Unless you are interpreting the results much more differently than what I read in the graph, please do explain yourself there, because I don't see how it is "similar". There is at least a 20% performance difference.

And if you want to compare with the best of ARM SoC breed coming up in the same time frame ( And actually shipping it )? Wait until Apple announce its A7X coming in the next iPad.

If I may have missed something you explained. Please enlighten me. Otherwise this is a review that is quite pro Intel.Reply

"And if you want to compare with the best of ARM SoC breed coming up in the same time frame ( And actually shipping it )? Wait until Apple announce its A7X coming in the next iPad."Yeah, the A7X is going to look like 50% ahead of Intel HD gpu and will lead the SoC market ingpu performance for the next 2 years AGAIN!.Reply

You know, I have defended Anand before with people claiming such things, but I can somewhat see what some people mean. Thankfully I rarely come here for any of his stuff, just mainly come to the site for the forums these days.Reply

yeah that's what I see happening as well. I'm seriously considering getting a T100 anyways because it's pretty good value for a windows device... but as good as bay trail is, it's going to be slower than the A7X by a large margin. I think intel aimed too low.Reply

I disagree. GPU performance is not the end-all of tablet performance criteria. They can all do 2D fast enough. For everything but games, if it can play video streams smoothly, then it's fast enough. Overall snappiness and responsiveness is more a CPU thing. So, if the power usage is as claimed, then saying that the Intel chip is a contender in the tablet market is more than fair. I don't see how it is bias toward Intel if it is just the simple truth.Reply

"And on GPU, it concluded with " Intel's HD graphics in Bay Trail appear to be similar in performance to the PowerVR SGX 554MP4 in the iPad 4" I mean unless you are interpreting the results much more differently then what i read i the graph. Then please do explain yourself there. Because I dont read how it is "similar". There are like at least 20% of performance difference."

In T-Rex offscreen, it equals the iPad 4, so I would say he is correct, but of course Tegra 4 and Adreno 330 are significantly ahead. Intel have probably realised that most consumers don't know or care what graphics are in their tablet/phone and are sold on cores and MHz.

What people don't seem to realise is how strong the Intel inside argument will be in tablet sales, given they will mostly be bought in computer store alongside laptops. No consumer has any idea who ARM or Qualcomm are.Reply

If I may have missed something you explained. Please enlighten me. Otherwise this is a review that is quite pro Intel.

Or, ya know, he only had a couple of hours with the platform including install time. You might get a more in depth look once devices hit the labs.

And if you want to compare with the best of ARM SoC breed coming up in the same time frame ( And actually shipping it )? Wait until Apple announce its A7X coming in the next iPad.

He only had a couple of hours to test. I'm pretty sure the new iPad wasn't launched in this time frame. When the new iPad is launched, I suspect, based on past experience, that Anand and his associates will test it and compare it to the other products available to them at that time. Throwing in a little tidbit of what is to come based on information you already possess is fine and dandy, but you don't want to mix speculation with your actual performance comparisons.

The only way to compare this to the upcoming A7X in a relevant manner is to wait until it comes out. However, there will always be a newer, potentially better product coming out "soon". Then he really would have to show a bias in choosing which new product to launch the article around. Furthermore, he would deprive his readers of the information he has now.

The data is presented in an easy-to-read fashion so that you can come to your own conclusions, even if they conflict with his. This shows that he assumes that (at least some of) his readers are capable of analyzing data and making informed conclusions based on it. For instance, there has been mention of the soon-to-be-released A7X chip, and based on the details given, we can make an informed hypothesis on where its performance will be. However, lack of other details leaves other aspects of its performance unknown. Rather than speculating about an unknown as if you weren't intelligent enough to come to your own conclusions, Anand has presented you with data that you can draw conclusions from and speculate on your own about its performance relative to the A7X. When it is released, I'm sure you will see an article with hard facts.Reply

After some much deeper thought and research, I think the conclusion is still a little too good, but it is at least good enough for Intel to compete with the best. The problem is Anand didn't highlight enough of the why; merely saying this is good isn't much of a technical review when those numbers say something very different. Reply

Actually, the base frequency of these Bay Trail Atoms being set at 1.3GHz is a telling sign: a sign that even on the 22nm process, Bay Trail cannot seem to run well at 1.6GHz or, as some might expect, 2.0GHz. That it uses "turbo core" tricks to overclock is some form of cheating, but nonetheless useful to achieve the target performance at the expense of power. Sure, all ARM cores power down to close to 200MHz when idle, but their "base frequency" is around 1.6GHz rather than 1.3GHz. So the microarchitecture does not scale well in the perf/watt metric. Nice attempt but a so-so outcome. And do not tell us going to 14nm is going to solve all of this. When the ARM chips get to 20nm, it is game over in this segment for non-ARM.Reply

While Silvermont's single threaded FP performance seemed identical to Jaguar, its single threaded integer performance is much higher in the 7-Zip benchmark.

It seems to me that the multi threaded integer performance in the 7-Zip benchmark is similarly higher (25% and 27% respectively).

The dataset footprint is large enough to require main memory accesses, ...

What are the chances that the difference in memory subsystem is partially responsible? When you have more time with Bay Trail on hand, I'd sure like to see the cache and memory latencies and bandwidths.Reply

My biggest takeaway from this is seeing an Atom beating the Jaguar cores that are appearing in the next console gen, and that worries me for the longevity of the next console cycle. Are we a year away from seeing mainstream smartphones that have faster single-threaded performance than the premier consoles?Reply

CPU performance is stellar, GPU is basically average -- consoles need the GPU ummmphhhhh, I don't see this impacting consoles at all. In addition, the margins on consoles are so low I suspect Intel (if they had a compelling solution) would walk away from the business anyway, much like nVidia did.Reply

I think Intel needs to make a big, big push to get a Bay Trail chip into a Google Nexus product of some kind (possibly to let Google iterate on the OS a bit with regards to x86 CPU's), which was the original reason for Nexus btw.

I also think Intel needs to convince MS with extremely low cost chips to make a Surface with Windows 8 and Bay Trail. Just dump Surface based on RT or knock it down to rock bottom pricing. Like $99 for Surface RT, $199 for Surface Pro Bay Trail, $499 for Surface Pro I3, $699 for Surface Pro I5, and $999 for Surface Pro I7 with Iris 5200.

That's if they insist on keeping RT around rather than just fold its development into Windows Phone, merge the two, and quietly take RT as a tablet OS around back behind the shed where the sun is just coming up over the trees, where the wind is blowing a little, and MS tells RT for tablets to look out through the trees.

Birds would start and fly away suddenly as a gunshot echoes.

Sad, but knowing it was for the best, MS could take the remains of RT to frankenstein-stitch into Windows Phone. It's not like it'd be the first time they rebooted a phone OS AND not the first time they stitched unrelated things together into a new product.Reply

Seems they are weakest in GPU, as usual. I'd like to know why in one test they get clobbered by the Nvidia Shield for physics, but then in another physics test they clobber Nvidia. Same thing happens with graphics. You'd think they'd be consistently better/worse at one or the other.

With Nvidia releasing their first "real" mobile GPU soon (Q2 2014 I believe) Intel is going to have to at least double GPU performance in under a year to stay competitive. I doubt they can do that.

With that said, I really hope third-party OEMs start using Bay Trail's CPU but package it with an Nvidia GPU.

It really is too bad Intel and Nvidia had that falling out over who buys who and is worth what and all that. If they were working together, the best CPU engineers with the best GPU engineers, just think what could have been. Reply

Would it be possible in the future to put each device's SoC in parentheses or something in the charts, ideally with nominal frequencies, core count, etc.? That way it would be possible to evaluate comparable products from other manufacturers that you didn't directly review.

Like Shield (Tegra 4, A15 quad/1.9GHz, 72-core/??MHz...), something like that. Also, even though this is a tablet review, they still use largely the same hardware as smartphones, so it would make comparing actual hardware a lot easier. Even considering something like the Moto X, with the frequency information next to the cores, its performance would make sense relative to the rest of the products on the graph.

As it is now, you can't really draw any conclusions unless you cross-reference the items listed with their specs on Wikipedia or, even worse, random Android forums.

I am just curious: why does AnandTech use the base frequency when Intel is advertising Bay Trail with the boosted frequency? Is the boost disabled? Can you provide system configurations for the Windows 8.1 benchmarking?Reply

I was wondering similarly, but nah: no integrated SATA controllers. Even AMD does not seem interested in the NAS market. To make a low-end to mid-range x86 for NAS would need good tweaking for dual-channel RAM, integrated SATA (4 or 6 ports), integrated USB 3 and a standard 64MB VGA frame buffer. It should be low cost and flexible enough, but Marvell is catching up to the aging Atom solutions, which are no longer lower power... Reply

Hey guys... I was wondering: what if Atom is the next "Core"? I mean, looking back at the P4 > Centrino > Core revolution, what if we are going to see a Core > Atom revolution where Atom will catch up to and surpass Core performance but with much lower power consumption?Reply

On your comment from Sept 2013: "Whether we're talking about Cortex A15 in NVIDIA's Shield or Qualcomm's Krait 400, Silvermont is quicker. It seems safe to say that Intel will have the fastest CPU performance out of any Android tablet platform once Bay Trail ships later this year." It is April 2014, seven months later, and there are no commercial Android tablets using Bay Trail... so much for a good Intel processor...Reply