Anand really needs to convince Brian to get a room. There's some serious latent man(boy)-crush happening throughout the podcasts! Anand's continued giggling and predilection toward anything and everything Brian says is frankly difficult to listen to. This kind of attention, pervasively disregarding the other guests/members of the podcast, is surely not diplomatic or equitable - especially from someone in a role of leadership as Mr. Shimpi is, being the CEO and boss of the 'website'/'business'.

I had the same thought the other day, but while listening came up with the following snag:

While upping performance (with Ivy Bridge, for example) has not been problematic at all, what you cannot physically unlock is binning for low power consumption.

Additionally, there's the possibility that motherboard manufacturers team up and release CPUs that are soldered onto carrier PCBs. Then the motherboard manufacturers would dictate the socket (within the architectural constraints of the BGA package) and make a slim markup on (exchangeable) CPUs. The downside would be lengthened interconnects, but the upside would be more degrees of freedom for competition in the mainboard market.

About halfway through, and already it's an interesting talk guys. Good work as always.

As an aside, can you quickly look into the Podcast naming schema, metadata, and tag usage?

Right now, the Podcast tag is missing from Episodes 5-7 (though a search for "podcast" brings up all of them), and the recent Episodes 9-12 are displaying under a weird "Anand Shimpi1's Album" on my phone (Android, HTC OneX, Google Play Music). Episodes 1-8 went into "Anand Shimpi's Album" as a comparison.

Also, a picture of the AnandTech logo wouldn't hurt either. In addition to aesthetic cleanliness with the rest of one's Album view, those sidelong glances at the screen from people sitting beside a smartphone user on a train or plane lead to curious Google searches and new listeners.

I think we'll see things like Intel's Next Unit of Computing with an optical Thunderbolt connection to a self-housed/powered GPU at 30 Gbit/s. The days of actively cooled CPUs are numbered, but GPUs should stay beefy, especially with the new high-res monitors coming out.
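For scale, here's a quick back-of-the-envelope comparison (my own arithmetic, not from the podcast) of that 30 Gbit/s optical link against the PCIe slots today's beefy GPUs sit in:

```python
# Illustrative arithmetic only: effective bandwidth of a PCIe link,
# accounting for the standard line-encoding overhead of each generation.
def gbit_per_s(lanes, gt_per_s, encoding_efficiency):
    """Effective PCIe link bandwidth in Gbit/s."""
    return lanes * gt_per_s * encoding_efficiency

pcie2_x16 = gbit_per_s(16, 5.0, 8 / 10)     # PCIe 2.0 uses 8b/10b encoding
pcie3_x16 = gbit_per_s(16, 8.0, 128 / 130)  # PCIe 3.0 uses 128b/130b
print(pcie2_x16, round(pcie3_x16, 1))  # 64.0 126.0
```

So a 30 Gbit/s optical link is roughly PCIe 2.0 x4-class bandwidth - usable for an external GPU, but well short of a full x16 slot.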

It's only a matter of time until HP creates a crappy all-in-one printer with an HDMI out and an OS designed from the ground up to sell you ink cartridges.

So I have messed with the Win8 dual-head setup a little bit, and it is really close to awesome... but not quite.

For desktop, it is pure awesome. There is simply no longer any need for extra software to get things to behave nicely. You have hard corners for snapping between screens, applications show up on the superbar on the proper displays, etc.

ModernUI/Metro... not so much. Part of it is the point of ModernUI: it is designed for small, mobile platforms. This even shows up in some apps, which look terrible on big screens because the app designers never thought they would be running at high resolution. Anywho, Metro apps only ever run on a single display, and you get a feel that there are 4 different interfaces:

1) Charms overlay, which rules over all, and I wish it had more features/capability.

2) Start Screen, which is always only ever on one display. (I'd love to see this on something like a Lilliput USB display as an always-on feature... that would be cool. Even better? Let me hook up my phone and use it as the start menu for my PC.)

3) Metro/ModernUI interface, where any single app can only take up 1 display and only 1 instance, with a max of 2 Metro apps on a single display. I would like to see this change so that you can have multiple instances/windows of apps like web browsers. And for those with large screens, I would love to be able to snap apps to both sides of a screen (3 apps per screen), and to have some say in how wide the snap is, because it is too thin for some apps and too wide for others. Metro needs something for multi-display to tell you what the active 'window' is, as things like the Charms Settings menu are context sensitive, and it will not always be clear what you are controlling.

4) Desktop, which is almost a Metro app itself, is the only thing which can span displays. I wish we could see windowed Metro apps run within the desktop (especially things like card games or web browsers), but other than that it is just like Win7, with all of the multi-display mods you wanted from 3rd-party apps already built in.

The problem with all that is, if you start making Metro too flexible and adjustable, it becomes a hassle for the tablet user and you just end up making it more like the regular desktop, which wasn't the point of Metro... Though I do like some of the ideas; I'm sure we'll see Metro improved along those lines in the future, but with a measure of refinement and restraint.

Microsoft doesn't get enough credit for Metro, IMO; it's the first truly useful touch UI overlay capable of any real multitasking, and it's far more forward-looking than iOS or even Android.

#2 is genius though... But forget phones, what we REALLY need is a sort of Metro SmartGlass for tablets! Build an app that just streams the Metro side of things to any tablet regardless of OS; that'd be the first truly useful and mainstream convergence of touch and the desktop. Just put some limits in place for stuff that obviously won't translate across the stream (games), or limit it to Windows tablets; either way, that could make Metro really popular on the desktop.

The only gotcha with that is... I'm sure the desktop is such a niche at this point that they figure it doesn't deserve that kind of development time. They're expecting far more people to be using convertibles over the next couple of years than desktops.

Another great reason why Microsoft should NOT have combined the two UIs and forced the Metro one on desktop users. Now every change they make will be a compromise either for PC users or for tablet users.

Around the 15 min mark, there's discussion of whether the convergence of Atom and Core is going to happen. Atom will only hang around if there are significant power and/or die space (cost) savings for Intel to keep it on their roadmaps. Intel is aggressively moving Atom forward, and it'll be the premier CPU design on 14 nm, so we'll really get to see how low Atom's power consumption can go.

Doing a big-little strategy for Core-Atom doesn't make sense in the x86 space due to the power overhead of x86 decoders. The only design with a reasonable chance of pulling that off would be something similar to Bulldozer where the front end is shared between a big and a little core. A large micro-op/trace cache would also be helpful to cut down power consumption by power gating the decoder.
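To illustrate the big-little idea under discussion, here's a toy sketch (purely illustrative, not any vendor's actual scheduler) of the core-migration decision such a design has to make - when to move work between a low-power little core and a fast big core:

```python
# Hypothetical load-threshold migration policy for a big-little design.
BIG, LITTLE = "big", "little"

def pick_core(utilization, current=LITTLE,
              up_threshold=0.8, down_threshold=0.3):
    """Return which core class should run the task next.

    Two thresholds (hysteresis) avoid ping-ponging between cores,
    since every migration costs energy and latency.
    """
    if current == LITTLE and utilization > up_threshold:
        return BIG          # little core saturated: migrate up
    if current == BIG and utilization < down_threshold:
        return LITTLE       # big core mostly idle: migrate down
    return current          # otherwise stay put

# Example trace: bursty load
trace = [0.1, 0.2, 0.9, 0.95, 0.5, 0.2, 0.1]
core = LITTLE
schedule = []
for u in trace:
    core = pick_core(u, core)
    schedule.append(core)
print(schedule)  # little, little, big, big, big, little, little
```

The catch the comment above points at: on x86, both core classes would pay the decoder power overhead, which is why a shared Bulldozer-style front end (or a micro-op cache that lets the decoder be power-gated) would be the more plausible route.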

Ian pretty much nailed why Broadwell is a mobile-only part around the 18 min mark: DDR4. Migrating to DDR4 will require a new socket, but the initial Haswell parts will be DDR3, and Intel has not built any DDR4 migration plan into socket 1150. Thus, with Broadwell going DDR4, Intel can make that change in the mobile market, where it is expected to be BGA packaging to begin with. The desktop PC market does not want to migrate to a new socket with every generation. There is already irritation from the socket 1156 to 1155 move, and it will happen again with 1155 to 1150. Customers aren't seeing the benefit, as the IO hasn't noticeably changed between the sockets: dual-channel DDR3, 16 general PCI-e lanes, and 4 PCI-e lanes used for DMI. There were some minor changes, but none really merited a new socket. Why couldn't Intel have provided some backward compatibility between sockets like AMD has done in the past? (i.e. an 1155 CPU in a 1156 socket, like AM3 chips working in AM2 sockets on the AMD side)

At the 23 min mark: there is an electrical and power advantage to using BGA over LGA, particularly with regard to noise. One thing worth noting is that DDR4 only allows for one DIMM per channel. Many laptops will be moving to soldered-on memory, as there is no real merit to expansion (rather, direct replacement). This will remove the DIMM slot, which further improves signaling and further reduces power consumption.

The 386 didn't have a BGA version. There was a quad edge connect version for chip carriers and a PGA version. The PGA version could fit into a socket (this was before ZIF came to market), or at times it was soldered down.

At the 31 min mark: check out netkas.org for news about Mac GPU drivers on OS X. You'd be surprised what PC cards you can just take and use inside of a Mac Pro. These cards would also work with a TB -> PCI-E 16x adapter (at 4x bandwidth). Flashing EFI-based firmware isn't necessary on some of them (the Radeon 6870 is a good example, with working kernel extensions for OS X 10.7 and 10.8).

At the 36 min mark: Sharp has a 32" 4096 x 2160 panel that has been in production since May 2012. Unfortunately, I can't find a single display using it.

I'd also like to see a review of the two new 29" displays from Dell and LG that have 2560 x 1080 resolution.
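For context, a quick pixel-density calculation for the panels mentioned (nominal diagonal sizes, my own arithmetic):

```python
# Pixel density: PPI = diagonal resolution in pixels / diagonal size in inches.
import math

def ppi(width_px, height_px, diagonal_in):
    return math.hypot(width_px, height_px) / diagonal_in

print(round(ppi(4096, 2160, 32), 1))   # Sharp 32" 4K panel: 144.7
print(round(ppi(2560, 1080, 29), 1))   # Dell/LG 29" 21:9 displays: 95.8
```

So the Sharp panel packs roughly 50% more pixels per inch than the 29" ultrawides, which is what makes it interesting for desktop use.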

Intel went to Core only partially because of AMD. They were simply hitting a power wall with the NetBurst architecture. Tejas was going to be an IPC reduction while maintaining the same power and clock speeds as Prescott. The design could scale to the high clock speeds, but Intel wasn't willing to ship a 200W+ CPU in the consumer space to get any tangible performance gain over Prescott.

Re-listening to this at work had me catch another tidbit. Around the 29 minute mark, the podcast is discussing the concept of a 'board of boards'. A couple of things to throw out there.

Both Intel and IBM are going to be moving to optical interconnects for high-end servers shortly. Both have shown off prototypes, and IBM had plans to ship a POWER7 box for HPC usage using optical interconnects between chassis to create a single coherent node. Due to costs, I have no idea if this technology will scale down to consumer levels, but it would be a means to implement that board-of-boards idea.

Also, why not have a 'universal' socket for CPU, GPU, and CPU+GPU chips with on-package memory? Essentially taking AMD's HSA idea and scaling it out. Depending on user needs, a quad-socket board could have four CPU chips for legacy server workloads, or one CPU + three GPU chips for gaming, or four CPU+GPU chips for HPC workloads. This would be great for consumers if any company could pull it off. Alas, I can only see AMD doing this realistically, and the effort would be suicidal considering the current state of the company.

The other thing to factor in with on-package memory is that not only will you have differing CPU strengths on different classes of motherboard, but the amount of memory would be another axis. So a low-end motherboard would have a low-end processor and a small amount of memory; compare this to a high-end motherboard with lots of IO, a high-end CPU, and lots of memory.

Another thing worth mentioning is that IO will become a distinguishing feature between desktop and mobile SoCs. I'd expect the desktop versions to come with more SATA ports, more PCI-E lanes, more USB ports, etc., despite using the same die. Packaging will obviously be different. Desktop chips will also be able to run the IO at higher speeds (PCI-E 3.0 on the desktop vs. 2.0 in mobile, USB 3.0 on the desktop vs. USB 2.0 in mobile).

A note about the audio levels: they're really low, at least on the iTunes M4A feed. You should do a little post-production to bump those levels up; I usually have to crank my volume up 4x or so to make it listenable. I think there are some drag-and-drop tools that'll do it for you if you don't want to take the extra time, like Levelator.
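For anyone curious what a tool like Levelator automates, here's a minimal sketch of the simplest version of that gain bump - peak normalization of 16-bit PCM samples (illustrative only; real loudness leveling works on RMS/perceived loudness rather than raw peaks):

```python
# Peak-normalize 16-bit PCM: scale samples so the loudest peak sits just
# below full scale, with clamping to the valid 16-bit range.
import array

def peak_normalize(samples, headroom=0.95):
    """Scale 16-bit samples so the loudest peak hits `headroom` of full scale."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return samples  # silence: nothing to scale
    gain = (32767 * headroom) / peak
    return array.array('h', (int(max(-32768, min(32767, s * gain)))
                             for s in samples))

# A quiet signal peaking at ~1/4 of full scale, like the low podcast levels:
quiet = array.array('h', [0, 4000, 8000, 4000, 0, -4000, -8000, -4000])
loud = peak_normalize(quiet)
print(max(loud))  # 31128, i.e. ~95% of full scale
```

In practice you'd read the samples out of the M4A/WAV file first, but the gain math is the same.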

First, with Haswell, Broadwell, and keeping the LGA1155 parts around, I'm reminded of the situation with Nehalem, where all the low-end processors, Pentiums and Celerons, were still Core 2. I get the impression we'll have Sandy Bridge Pentiums for years to come. I hope that doesn't make LGA1150 parts too expensive, like all the 1156 parts seemed to be.

As far as BGA and sticking the SoC on a board go, see also the EOMA-68 standard. http://elinux.org/Embedded_Open_Modular_Architectu... The idea is that one card can power your desktop, laptop, or smartphone - just not all at the same time. Intel could probably get Broadwell into the 10W-with-cooling power profile if they wanted to.

They can't really converge Atom and Core. From a technical point of view it may make sense, but from a business point of view it won't, which is exactly why they will not do it. If Core gets integrated into Atom, then they would need to keep pricing them at $20 anyway, which would collapse their entire pricing strategy for Core chips.

On the other hand, they don't want to let Atom become as good as a Core i3 while it costs $20, because people may start preferring Atom to Core "too much", and again collapse their business, as Intel can only thrive on high-margin chips.

Intel is stuck here, because you can bet ARM has no problem trying to make their chips more and more powerful to catch up with Intel's Core, while keeping them at $20 per chip. ARM has no conflict of interest here. Intel does, and that's why it's doomed.

And my point was that if Intel doesn't learn to compete with ARM at every level for $20 chips - if they can't learn to adapt their business to $20 chips - then Intel will be gone from the mainstream consumer market, because ARM will chase them out as their chips become more powerful and encroach more and more on Intel's territory at a much lower price.

So Intel either learns to survive with $20 chips, or they'll be chased away from these markets, with their only escape in high-end servers and supercomputers (at least until ARM and Imagination's MIPS make their play there).

Hey guys, so I got a little confused by the current/next Krait versions. I thought that Krait v3 (the Fusion3 platform?) was already on the market, inside devices like the Nexus 4, Droid DNA, or Padfone 2 (per Qualcomm nomenclature, the S4 Pro tier), and that the Krait v2 parts were the ones with the Adreno 225 that debuted early this year (the S4 Plus tier). But you said that v3 will only come in devices next year? What am I misunderstanding here?

The APQ8064 is an S4 Pro part - it has four Krait v2 cores and the Adreno 320. The MSM8960 is an S4 part - it has two Krait v2 cores and the Adreno 225, along with an integrated baseband.

The MSM8960T is an S4 Pro part - it has two Krait v3 cores and the Adreno 320, along with an integrated baseband. The only phone with this chip in it that I'm aware of is the Nokia 920T for China, which is either rumored or confirmed (I forget which) to contain an MSM8960T, giving it a sizeable performance advantage over the US and European Nokia 920, which has an MSM8960.

Personally, I'd love to have a phone with the MSM8960T in it. It seems far preferable to the quad-core part for battery life concerns and everyday usability.

Thanks smartypnt4, I got it. So we can expect to see a four-core Krait v3 in the future, with the Adreno 320 and an integrated baseband? And yes, the Lumia 920 for China Mobile was officially launched today ;)

Anyway, it will be interesting to see which SoC will have the most design wins in 2013. A15 designs to my knowledge do not have integrated basebands (Tegra 4, OMAP5, Exynos 5xxx) but apparently can offer superior raw performance over the S4. On the other hand, the S4 is an integrated solution.

Yes, the logical progression is that a 4-core part will surface eventually.

And if the Krait 300 (v3) really does achieve a 15% increase in IPC through architectural improvements, it will be very, very competitive with the current A15 cores, from what I've seen. That's difficult to say given that there's only one A15 device on the market (the Nexus 10), but I think they'll be very close to each other.
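As a rough sanity check on that 15% figure (the clock speeds here are hypothetical, just to show the arithmetic), relative performance scales roughly as IPC x clock:

```python
# Illustrative scaling estimate: relative performance ~= IPC gain x clock ratio.
def relative_perf(ipc_gain, old_clock_ghz, new_clock_ghz):
    return (1 + ipc_gain) * (new_clock_ghz / old_clock_ghz)

# e.g. +15% IPC plus a hypothetical bump from 1.5 GHz to 1.7 GHz:
print(round(relative_perf(0.15, 1.5, 1.7), 2))  # 1.3
```

So even a modest clock bump on top of the IPC gain compounds to ~30% more single-threaded performance, which is why the comparison with A15 could end up close.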

The reason Qualcomm is having yet another record-breaking year is that they've got an integrated solution with a radio that can be used on any of the US carriers, and in a fair few of the international markets. It'll be interesting to see if NVIDIA's Tegra 4 with LTE, or Intel's next chip that supposedly has LTE, will be able to gain any ground against Qualcomm in the coming year.

Is anything known about the Adreno 320's clocks in the Krait 200 vs. the Krait 300 parts? I'd think that if they managed to increase the clock speed of the CPU cores, they'd similarly be able to bump up the clock on the GPU. But maybe not. I'm not sure whether the clock speed increase comes from changes to the CPU core (pipeline stage shortening, etc.), or from a different or substantially more mature process at TSMC. Thoughts?

They technically only denied going BGA-only across the spectrum. It's important to differentiate because Broadwell is only one part of the spectrum. As always, Intel's response on these matters comes down to:

"We don't comment on rumor and speculation. We remain committed to multiple PC segments including desktop and enthusiast, and will continue to innovate in those markets."