Earlier this year, NVIDIA showed off a roadmap for its Tegra line of mobile system on a chip (SoC) processors. The next generation Tegra 4 mobile chip is codenamed Wayne and will be the successor to the Tegra 3.

Tegra 4 will use a 28nm manufacturing process and feature improvements to the CPU, GPU, and I/O components. Thanks to a leaked slide that appeared on Chiphell, we now have more details on Tegra 4.

The 28nm Tegra 4 SoC will keep the same 4+1 CPU design* as the Tegra 3, but it will use ARM Cortex A15 CPU cores instead of the Cortex A9 cores used in the current generation chips. NVIDIA is also improving the GPU portion, and Tegra 4 will reportedly feature a 72 core GPU based on a new architecture. Unfortunately, we do not have specifics on how that GPU is set up architecturally, but the leaked slide indicates that it will be as much as 6x faster than the GPU in NVIDIA's own Tegra 3. It will allegedly be fast enough to power displays at resolutions from 1080p @ 120Hz up to 4K (refresh rate unknown). Don't expect it to drive games at native 4K resolution, but it should handle a tablet OS at that resolution just fine. Interestingly, NVIDIA has included dedicated hardware to accelerate VP8 and H.264 video at resolutions up to 2560x1440.

Additionally, Tegra 4 will feature support for dual channel DDR3L memory, USB 3.0, and hardware accelerated security options including HDCP, Secure Boot, and DRM, which may make Tegra 4 an attractive option for Windows RT tablets.

The leaked slide has revealed several interesting details on Tegra 4, but it has also raised some questions about the nitty-gritty details. Notably, there is no mention of the dual core variant of Tegra 4, codenamed Grey, that is said to include an integrated Icera 4G LTE cellular modem. Here's hoping more details surface at CES next month!

* NVIDIA's name for a CPU that features four ARM CPU cores and one lower power ARM companion core.

[H]ard|OCP set out to determine how well AMD's and NVIDIA's cards handle the new Call of Duty game. To do so, they took a system built on a GIGABYTE Z77X-UP4-TH motherboard with a Core i7-2600K @ 4.8GHz and 8GB of Corsair RAM, then tested an HD 7970, HD 7950, and HD 7870 as well as a GTX 680, GTX 670, and GTX 660 Ti. There is good news for both graphics companies and gamers: the HD 7870 was the slowest card tested, yet it still managed great performance on maximum settings @ 2560x1600 with 8X MSAA and FXAA. For the absolute best performance, NVIDIA's GTX 680 is your go-to card. Since this is a console port, albeit one that [H] describes as well implemented, don't expect to be blown away by the quality of the graphics.

"Call of Duty: Black Ops II is the first Call of Duty game on PC to support DX11 and new graphical features. Hopefully improvements to the IW Engine will be enough to boost the CoD franchise near the top graphics-wise. We also examine NVIDIA's TXAA technology which combines shader based antialiasing and traditional multisampling AA."

The skeptics were right to question the huge improvements seen when using GPGPUs for heavily parallel computing tasks. The cards do help a lot, but the 100x improvements reported by some companies and universities had more to do with poorly optimized CPU code than with the processing power of GPGPUs. This news comes from someone you might not expect to burst this particular bubble: Sumit Gupta, GM of NVIDIA's Tesla business, who may be trying to head off disappointment from future customers that already have optimized CPU code and won't see the huge improvements reported by academics and other current customers. The Inquirer does point out a balancing benefit: it is much easier to optimize code in CUDA, OpenCL, and other GPGPU languages than it is to write equally well optimized code for multicore CPUs, as the sketch below the quote illustrates.

"Both AMD and Nvidia have been using real-world code examples and projects to promote the performance of their respective GPGPU accelerators for years, but now it seems some of the eye popping figures including speed ups of 100x or 200x were not down to just the computing power of GPGPUs. Sumit Gupta, GM of Nvidia's Tesla business told The INQUIRER that such figures were generally down to starting with unoptimised CPU."

NVIDIA will be celebrating the release of Call of Duty: Black Ops II by launching the first-ever "GeForce GTX Call of Duty Rivalries" competition, which pits top colleges against each other in four-person, last-team-standing Call of Duty: Black Ops II multiplayer matches. Participants in the first round of competition include the storied rivalries of Cal vs. Stanford, USC vs. UCLA, and UNC vs. NC State. Two additional wildcard teams from any accredited college in the United States will also be chosen by the Facebook community. See GeForce.com or NVIDIA's Facebook page for details on how you can walk away with a Maingear gaming rig.

In addition to the contest, NVIDIA also released the GeForce 310.54 beta driver, which brings benefits specific to Black Ops II players, most notably the inclusion of TXAA. Per NVIDIA, the driver also:

"Delivers up to 26% faster performance in Call of Duty: Black Ops 2 and up to 18% faster performance in Assassin's Creed III."

We have seen quite a few driver updates since the release of cards like the HD 7970 GHz Edition and the GTX 680, which inspired [H]ard|OCP to revisit the performance of these cards in several games. Some results were not surprising: the two top cards have historically run neck and neck in performance and price, and that remains true now. There was a definite loser, however: the GTX 660 Ti matches the performance of the HD 7870 while costing about as much as the much faster HD 7950. Check out the full results here.

"With the recent release of new beta drivers from both AMD and NVIDIA, and the upping of clocks by AMD, significant performance gains have been claimed by both parties for current generation video cards. We will investigate with a 6-way roundup comparison to see if we can crown a champion."

Graphics card manufacturer NVIDIA launched a new Tesla K20X accelerator card today that supplants the existing K20 as the top-of-the-line model. The new card cranks up the double and single precision floating point performance, beefs up the memory capacity and bandwidth, and brings some efficiency improvements to the supercomputer space.

While it is not yet clear how many CUDA cores the K20X has, NVIDIA has stated that it uses the GK110 GPU and comes with 6GB of memory at 250 GB/s of bandwidth, a nice improvement over the K20's 5GB at 208 GB/s. Both the new K20X and the K20 are based on the company's Kepler architecture, but NVIDIA has managed to wring more performance out of the K20X: the K20 is rated at 1.17 TFlops peak double precision and 3.52 TFlops peak single precision, while the K20X is rated at 1.31 TFlops and 3.95 TFlops respectively.

The K20X manages to score 1.22 TFlops in DGEMM, which makes it almost three times faster than the previous generation Tesla M2090 accelerator based on the Fermi architecture.
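That DGEMM result is worth a quick back-of-the-envelope check against the rated specs: 1.22 TFlops sustained out of a 1.31 TFlops double precision peak works out to roughly 93% efficiency (1.22 / 1.31 ≈ 0.93), an impressively high fraction of theoretical peak for a real dense matrix multiply workload.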

Aside from pure performance, NVIDIA is also touting efficiency gains with the new K20X accelerator card. When two K20X cards are paired with a 2P Sandy Bridge server, NVIDIA claims 76% efficiency versus 61% for a 2P Sandy Bridge server equipped with two previous generation M2090 accelerator cards. Additionally, NVIDIA claims that its new cards enabled the Titan supercomputer to reach the #1 spot on the Green500 list of the most energy efficient supercomputers with a rating of 2,120.16 MFLOPS/W (million floating point operations per second per watt).

NVIDIA claims to have already shipped 30 PFLOPS worth of GPU accelerated computing power, and interestingly, most of that computing power is housed in the recently unveiled Titan supercomputer. Titan contains 18,688 Tesla K20X (Kepler GK110) GPUs alongside 18,688 16-core AMD Opteron 6274 processors (299,008 CPU cores in total). It consumes about 9 megawatts of power and is rated at a peak of 27 PetaFLOPS, sustaining 17.59 PetaFLOPS in the Linpack benchmark. Further, NVIDIA states that the K20 series offers between 8.2 and 18.1 times more performance than Sandy Bridge processors in several scientific applications.
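As a quick sanity check, those figures line up: 18,688 GPUs at 1.31 TFLOPS of peak double precision each works out to about 24.5 PFLOPS from the K20X cards alone (18,688 × 1.31 ≈ 24,481 TFLOPS), with the Opteron hosts making up the remaining couple of PFLOPS of the 27 PFLOPS peak.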

While the Tesla cards undoubtedly use more power than CPUs individually, you need far fewer accelerator cards than processors to hit the same performance numbers, and that is where NVIDIA's power efficiency numbers come from.
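To put rough numbers on that using NVIDIA's own 8.2x figure from above: matching the application throughput of 100 CPUs would take only about 13 accelerator cards (100 / 8.2 ≈ 12.2). Even assuming each card draws twice the power of a CPU (our assumption for illustration, not a figure from NVIDIA), the accelerated configuration would pull roughly a quarter of the total power for the same work.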

NVIDIA is aiming the accelerator cards at researchers and businesses doing 3D graphics, visual effects, high performance computing, climate modeling, molecular dynamics, earth science, simulations, fluid dynamics, and other such computationally intensive tasks. Using CUDA and the parallel nature of the GPU, the Tesla cards can achieve performance much higher than a CPU-only system can. NVIDIA has also engineered two software technologies, Hyper-Q and Dynamic Parallelism, to keep the GPU accelerators fed with data and to better parallelize workloads, respectively.
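To give a feel for the latter, here is a minimal sketch of our own (based on NVIDIA's documented CUDA Dynamic Parallelism feature, not on any shipped sample code): on GK110-class hardware such as the K20X, a kernel running on the GPU can launch further kernels itself, without a round trip to the CPU for each launch.

```cuda
// Minimal Dynamic Parallelism sketch. Requires a compute capability 3.5 GPU
// (GK110, e.g. Tesla K20/K20X) and relocatable device code:
//   nvcc -arch=sm_35 -rdc=true dynpar.cu -lcudadevrt
#include <cstdio>

__global__ void childKernel(int parentThread)
{
    // Work spawned on the GPU by another kernel, not by the host CPU.
    printf("child thread %d launched by parent thread %d\n",
           threadIdx.x, parentThread);
}

__global__ void parentKernel()
{
    // Dynamic Parallelism: the GPU launches nested kernels on its own, so
    // data-dependent amounts of work never have to bounce back to the CPU.
    childKernel<<<1, 4>>>(threadIdx.x);
}

int main()
{
    parentKernel<<<1, 2>>>();
    cudaDeviceSynchronize();  // wait for the parent and all nested children
    return 0;
}
```

Hyper-Q, by contrast, requires no code changes at all; it gives the GPU multiple hardware work queues so that several CPU processes or streams can feed it work concurrently instead of serializing through a single queue.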

It is interesting to see NVIDIA bring out a new flagship, especially another GK110 card. Systems using the K20 and the new K20X are available now, with cards shipping this week and general availability later this month.

Asus has announced a refresh of its Zenbook lineup of Intel-powered ultrabooks to accompany its new VivoBooks and VivoTabs running Windows 8. The PC OEM is introducing six new laptop SKUs with Ivy Bridge processors and, on several models, dedicated NVIDIA graphics, all available next month. Specifically, the Asus Zenbook UX21A, UX31A, UX32VD, UX42VS, UX52VS, and U500VZ ultrabooks are coming soon with the refresh.

The UX31A Ultrabook with touch display

The new Zenbooks will have Ivy Bridge processors, up to 10GB of memory, and up to NVIDIA GeForce GT 650M graphics. They maintain the aluminum chassis of Asus' previous generation ultrabooks but up the hardware ante. The company has expanded the lineup to include models with 11.6", 13.3", 14", and 15.6" IPS displays, backlit keyboards, and multitouch trackpads. The U500VZ and UX31A can even be outfitted with capacitive touchscreen displays.

The ASUS UX42VS Zenbook

The UX42VS further includes an optical drive, but otherwise the Zenbooks rely on solid state or hybrid hard drives for storage. Interestingly, the UX32VD and U500VZ can even be configured with two 256GB solid state drives in RAID 0 (Ryan's favorite kind of RAID).

The ASUS UX52VS Zenbook

The following chart outlines all the known specifications. Note that several of the ultrabooks are not yet listed on Asus' website, so exact dimensions are unknown for the UX52VS and U500VZ in particular.

| Zenbook | UX21A | UX31A | UX32VD | UX42VS | UX52VS | U500VZ |
|---|---|---|---|---|---|---|
| Dimensions | 299 x 196.8 x 3 ~ 17 mm | 325 x 223 x 3 ~ 18 mm | 325 x 223 x 5.5 ~ 18 mm | 14" (tapers to 6 mm) | ~15" (tapers to 6 mm) | ~15" |
| Weight | 1.1 kg | 1.3 kg | 1.45 kg | 1.5 kg | 2.2 kg | 2 kg |
| Processor | i5-3317U or i7-3517U | i5-3317U or i7-3517U | i5-3317U or i7-3517U | i3, i5, or i7 (Ivy Bridge) | i5 or i7 ULV (Ivy Bridge) | i7 (standard voltage) |
| RAM | 4GB | 8GB* | 6GB* | 6GB | 10GB | 8GB |
| Graphics | HD 4000 | HD 4000 | GT 620M | GT 645M | GT 645M | GT 650M |
| Storage | 256GB SSD | 256GB SSD | 2 x 256GB SSD (RAID 0) | 1TB hybrid hard drive | 1TB hybrid hard drive | 2 x 256GB SSD (RAID 0) |

*onboard + 1 x SODIMM

All of the new Zenbook laptops will be available in November and will come with Windows 8. Pricing will range from $699 to $1999 for the premium model (the U500VZ). Specific pricing details should become available closer to launch.