Microsoft demonstrated today that the company is keeping its flagship product in touch with advances in computer hardware. The company announced a new version of Windows 10 Pro, named Windows 10 Pro for Workstations. This OS is slated to arrive with the Fall Creators Update. With AMD and Intel releasing 32- and 28-core processors, offering as many as 64 cores and 128 threads in a 2P (dual-socket) configuration, the question is how Microsoft will follow the breakneck pace of the CPU and GPU wars. The new Windows 10 Pro represents a highly tuned version of the operating system, focusing on reducing system latency and increasing responsiveness as much as possible.

After several months of anticipation and speculation, NVIDIA finally unveiled the “Ti” version of the GeForce GTX 1080. The “Ti” suffix, first used on the GeForce2 Ti back in 2001, now marks a monstrous new graphics card that outperforms even the much-desired Titan X (Pascal). While the launch itself was not a surprise thanks to a heap of leaks, the release window and price caught many off guard: “next week” and (only) $699. The “ultimate GeForce” GPU, as CEO Jen-Hsun Huang called it, will offer boost clocks of up to 1.6 GHz and a special “OC” clock of 2 GHz, the company said during the launch event in San Francisco. The GeForce GTX

NVIDIA’s strategy for the GeForce / Quadro / Tesla line-up has seen a lot of turnover over the past couple of years. The sequence of “launch as GeForce, downclock as Tesla, optimize and launch as Quadro” changed into “launch as Tesla, optimize as GeForce and be reliable as Quadro”. With Pascal, the story turned out to be almost the same. NVIDIA introduced the GP100 as a Tesla in April 2016, followed by the GP102 chip as the Titan X (no longer branded as GeForce), Quadro P6000 and Tesla P40. At the same time, the GP104/106/107 did not follow the same sequence, with only the GP104 debuting as the Quadro P5000 and Tesla P4. Second day of

At the International Supercomputing Conference (ISC), which takes place this week in Frankfurt, Nvidia finally unveiled the PCIe version of its largest chip, the GP100. This is not the rumored GP102 chip, and it confirms the words of Jen-Hsun Huang, co-founder and CEO of Nvidia Corporation, when he said that the company had ‘taped out all the Pascals’: GP100, GP104 and GP106. The GP100-based Tesla P100 is a fairly long dual-slot card, rivaling the dual-GPU Tesla K80 in length. The board features lower clocks for both the GPU and the HBM2 memory, meaning only the Nvidia NVLink-based daughterboards will feature the GP100 chip at its full performance.

Given that we won’t be seeing any high-end GPU hardware until the first quarter of 2017 (HBM2-powered AMD Vega 10, Nvidia Pascal GP100), the focus for 2016 will be on mainstream cards. The shift from 28nm to 16nm (Nvidia) and 14nm (AMD) forced both companies to adopt a conservative approach and focus on entry-level and mainstream silicon, rather than the “highest of all ends”. While Nvidia did launch its 15-billion-transistor GP100 silicon, i.e. the Tesla P100, at the Nvidia GPU Technology Conference, Jen-Hsun Huang did state that real volume shipments will only start in the first quarter of 2017, roughly the same time

At the 2016 GPU Technology Conference, Nvidia finally unveiled the Pascal GPU architecture. Perhaps the most interesting aspect of the GPU isn’t the set of capabilities the Pascal architecture brings, but rather NVLink, the first non-Intel-driven high-end bandwidth interface since AMD launched HyperTransport in 2001. The NVLink standard launched in 2014, when IBM announced its tie-up with Nvidia to bring the high-speed interconnect to market. The goal of NVLink is to free Nvidia’s future GPU architectures from dependence on PCI Express and achieve maximum bandwidth. If NVLink were replaced with 100% PCIe lanes, the design simply would not be as efficient in terms of lanes needed, and would
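The lane-efficiency argument can be put in rough numbers. A back-of-the-envelope sketch in Python, assuming the published nominal rates for NVLink 1.0 (20 GB/s per link per direction, four links on the Tesla P100) and PCIe 3.0 (8 GT/s per lane with 128b/130b encoding):

```python
# Back-of-the-envelope comparison: NVLink 1.0 aggregate bandwidth vs the
# number of PCIe 3.0 lanes needed to match it (one direction only).
# Figures are nominal spec rates, not measured throughput.

NVLINK_GBPS_PER_LINK = 20.0                 # GB/s per link, one direction
P100_LINKS = 4                              # NVLink links on Tesla P100
PCIE3_GBPS_PER_LANE = 8 * (128 / 130) / 8   # 8 GT/s, 128b/130b -> ~0.985 GB/s

nvlink_total = NVLINK_GBPS_PER_LINK * P100_LINKS        # 80 GB/s per direction
pcie_lanes_needed = nvlink_total / PCIE3_GBPS_PER_LANE  # lanes to match that

print(f"NVLink aggregate: {nvlink_total:.0f} GB/s per direction")
print(f"PCIe 3.0 lanes needed to match: {pcie_lanes_needed:.0f}")
```

Matching four NVLink 1.0 links would take more than eighty PCIe 3.0 lanes per direction, which illustrates why a pure-PCIe design would not be efficient “in terms of lanes needed”.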

VentureBeat reports that Los Angeles-based OTOY managed to reverse engineer Nvidia’s CUDA language to run on chips other than Nvidia’s own GPUs. That means programs written in CUDA can now run on GPUs from Intel, AMD, and ARM. Thus, software built for NVIDIA GPUs will work on a multitude of devices, ranging from AMD-based consoles (PlayStation 4, Xbox One) to an Apple iPad or iPhone. The cloud rendering company launched in January 2009 and has developed a technology that uses “clusters of GPUs” in the cloud to render cinema-quality graphics that are streamed to a client within a web browser. The company also provides

AMD has made a lot of strategic investments in heterogeneous system architecture (HSA). The most logical step forward is building a suite of tools designed to ease the development of high-performance, energy-efficient heterogeneous computing systems. This, it seems, is what the “Boltzmann Initiative” is all about. The “Boltzmann Initiative” leverages HSA’s ability to harness both central processing units (CPU) and AMD FirePro™ graphics processing units (GPU) for maximum compute efficiency through software. The first results of the initiative are featured this week at SC15 and include the Heterogeneous Compute Compiler (HCC); a headless Linux® driver and HSA runtime infrastructure for cluster-class, High Performance Computing (HPC); and the Heterogeneous-compute Interface for Portability (HIP).

At the recently held 2015 Hot Chips conference, Avinash Sodani (KNL Chief Architect, Senior Principal Engineer, Intel) gave a talk on how Intel plans to expand the Xeon Phi product lineup from a server-only, PCIe-card concept into three different packages, which would appeal to workstation and server customers in different fields. At the SC15 conference, which takes place in Austin, TX, Intel finally confirmed the strategy and is coming out with a workstation product that will feature a fully enabled Knights Landing (KNL) many-core processor. In the first half of 2016, the company will ship Intel-built, Intel-branded workstations powered by the self-booting Xeon Phi processor. The processor will be able to boot standard operating systems.

As we approach Computex, with the majority of press and media analysts on planes en route to Taipei, companies such as Intel, Nvidia and AMD are polishing their press releases for the first day of the show. One such product is the GeForce GTX 980 Ti, a product refresh that has little in common with a typical ‘refresh’. The original GTX 980 was based on the GM204 GPU, featuring 2048 CUDA cores attached to 4 or 8 GB of GDDR5 memory. As you might have guessed, the chip used a 256-bit memory bus. When you combine a GPU clock of 1.12 GHz and a GDDR5 clock
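The memory-bandwidth arithmetic behind that bus-width figure is simple to work through. A minimal sketch, assuming the GTX 980’s published 7 Gbps effective GDDR5 data rate per pin:

```python
# GDDR5 peak memory bandwidth = (bus width in bytes) x (effective data rate).
# The GTX 980 pairs a 256-bit bus with GDDR5 running at an effective
# 7 Gbps per pin (1.75 GHz base clock, quad-pumped).

BUS_WIDTH_BITS = 256
EFFECTIVE_GBPS_PER_PIN = 7.0  # GDDR5 effective data rate, Gb/s per pin

bandwidth_gbs = (BUS_WIDTH_BITS / 8) * EFFECTIVE_GBPS_PER_PIN
print(f"Peak memory bandwidth: {bandwidth_gbs:.0f} GB/s")  # 224 GB/s
```

That 224 GB/s figure matches the GTX 980’s official specification sheet, and the same formula applies to any GDDR5-based card once you know its bus width and effective memory clock.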

Early this morning, I received word from Tamas Miklos, author of EVEREST. This popular system benchmark and utility just got a major upgrade, supporting several new and useful tests. In fact, this is the very first benchmark that checks compliance with the OpenGL 3.0 API, but it doesn’t stop there. GPGPU device information has also been added, supporting both the ATI Stream and Nvidia CUDA APIs. Given the speed of development, we might even get a GPU-independent GPGPU benchmark, who knows. Also new is Alert, a sensor-monitoring utility that triggers an audio-visual alert on overheating, voltage drop, overvoltage or cooling-fan failure. This might prove quite

Expanding on its role as a CUDA Center of Excellence, the University of Illinois at Urbana-Champaign is launching a 13-week seminar focused on parallel computing – GPU computing, that is. Parallel@Illinois is the name of the overall GPU computing project, and this seminar was organized by Prof. Sanjay J. Patel and Wen-mei Hwu. Under the not-so-scientific moniker of the Need For Speed Seminar Series, this 13-week course will feature domestic alumni such as Mark Hasegawa-Johnson, Dan Roth, Narendra Ahuja, Stephen Boppart, John C. Hart, Tom Huang and Seth Hutchinson, and guests such as Keith Thulborn (UI Chicago), Sam Blackman (Elemental), Nikola Bozinovic (MotionDSP), Mark Johns (Tapulous) and

Back in May 2008, Nvidia’s Editors Day hosted a presentation by the young guys from Elemental Technologies Inc (ETI). The software they demonstrated was Badaboom, a CUDA-powered video transcoder that demolished Intel’s Core 2 Quad processor when used in conjunction with a GeForce 8800 GTS. Months have passed, and the team worked hard on developing Badaboom in order to be ready for an August release. But their second project, the RapiHD encoder for Premiere Pro CS4, needed some engineering help. So the team pushed back the release of Badaboom and Badaboom Pro until after the launch of CS4. It was a tough call, but with the release of Adobe Creative Suite 4 over