AMD has shared a few updates on the highly anticipated HBM2-based Vega architecture and we finally know the official specs.

A bunch of AMD RTG executives – Raja Koduri, Senior Vice President and Chief Architect; Scott Herkelman, Corporate VP and General Manager of Radeon Gaming; Ogi Brkic, General Manager of Radeon Pro graphics; and Mike Mantor, Vega chief architect – along with a dozen other AMD chaps helped us understand what Vega is all about.

It is now official, there will be three Vega cards. The fastest is the Radeon RX Vega 64 Liquid Cooled Edition and it comes with 64 next-gen compute units, 4096 stream processors and a base clock of 1406 MHz. The boost clock is 1677 MHz and the memory bandwidth is 484 GB/s. The card packs a peak SP performance of 13.7 TFLOPS with peak half precision of 27.5 TFLOPS, and it has 8GB of HBM2 memory. The board power is a quite steep 345W and the card comes with an Asetek-based water cooler out of the box. AMD didn’t want to comment on TDP values.

Next up, the Radeon RX Vega 64 has 64 next-gen compute units, 4096 stream processors and a base clock of 1247 MHz. The boost clock is 1546 MHz and the memory bandwidth is 484 GB/s. The card packs a peak SP performance of 12.66 TFLOPS with peak half precision of 25.3 TFLOPS, and it also has 8GB of HBM2 memory. The board power is 295W, and both of these cards have the same 2048-bit wide memory interface. This card has an illuminated logo, an isothermic vapor chamber, and a slick solid metal construction with a premium 240-grit brushed finish.

The third card is the Radeon RX Vega 56, and it is basically a Vega chip with eight of its 64 compute units disabled. Companies usually do this to get better yields. The card has 3584 stream processors, an 1156MHz base clock, a 1471 MHz boost clock and 410 GB/s of bandwidth. Its peak SP performance stops at 10.5 TFLOPS and peak half precision is at 21 TFLOPS. It also comes with 8GB of HBM2 and has a total board power of 210W.
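The quoted TFLOPS figures follow directly from the shader count and boost clock: each stream processor can execute one fused multiply-add (two FP32 operations) per clock, and packed half precision doubles that. A quick sketch of the arithmetic (the helper name is ours, not AMD's):

```python
# Peak FP32 throughput = stream processors x 2 ops/clock (FMA) x boost clock.
def peak_tflops(stream_processors, boost_mhz):
    return stream_processors * 2 * boost_mhz * 1e6 / 1e12

print(round(peak_tflops(4096, 1677), 1))  # Vega 64 Liquid Cooled: 13.7
print(round(peak_tflops(4096, 1546), 2))  # Vega 64: 12.66
print(round(peak_tflops(3584, 1471), 1))  # Vega 56: 10.5
```

Half precision is simply double these numbers, which matches the 27.5, 25.3 and 21 TFLOPS figures above.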

The Radeon RX Vega 64 will sell for $499, the Radeon RX Vega 56 for $399 while the water-cooled card will sell for $699.

Fudzilla and the rest of the invited media still have to see the actual performance of the cards. We learned a lot about the Vega architecture and positioning, and based on the information we have today the Radeon RX Vega 64 should target the Geforce GTX 1080, while the Radeon RX Vega 56 goes after Geforce GTX 1070 cards. Again, we cannot stress enough that we haven’t seen the actual performance of the card so it is too early for a conclusion.

AMD did tell us that the minimum FPS range should be to Vega 64’s advantage when compared to the Geforce GTX 1080. At 3440x1440 (on a 48 to 100 Hz FreeSync display), Vega 64 can stay in the 53 to 76 FPS range in Ashes of the Singularity, Battlefield 1, Deus Ex: Mankind Divided, Doom, Forza, Gears of War, Hitman, Sniper Elite 4 and Warhammer. The Geforce GTX 1080 should sit in the 45 to 78 FPS range, with a lower minimum and a slightly higher maximum. The R9 Fury X had a 42 to 58 FPS range while the Geforce GTX 980 Ti ranges from 34 to 57 FPS.

The range in 4K 60 Hz gaming stays between 43 FPS and 57 FPS on Vega 64, between 32 and 46 on the Fury X, between 26 and 47 on the Geforce GTX 980 Ti, while the GTX 1080 ranges from 26 to 56.

It is unlikely that even the water-cooled Vega 64 can beat the air-cooled Geforce GTX 1080 Ti. AMD confirmed that the cards will ship in August, but we don’t have a better date than that, just August.

AMD is also offering Radeon packs, which give you $200 off the $936 Samsung C34F791 curved 3440x1440 100 Hz FreeSync monitor, $100 off a Ryzen 7 CPU and motherboard combo, and the Wolfenstein II and Prey games at a $120 value.

This is a way of trying to ensure that the cards don’t simply all end up with miners. Again, even though we’ve seen a lot, we still need to see how good the actual performance is.

During a press event in New York City today, Microsoft Windows boss Terry Myerson took the stage to announce Windows 10 S, a new version of the company’s operating system aimed at undermining the rise of low-cost Chromebooks in American classrooms.

On its surface - get it? - Windows 10 S is a locally installed version of Windows that operates just like existing Home, Pro and Enterprise editions. The “S” stands for “Streamlined, significant performance, security,” according to Myerson. “But I personally like to think of it as the ‘Soul’ of today’s Windows,” he said.

All apps are installed through Windows Store

Microsoft has taken a series of requests from teachers and other education professionals and has essentially designed a version of Windows that restricts which applications students are allowed to install. To prevent unauthorized Win32 programs from being added to school machines, the company allows only Windows Store apps to be installed, so they can be verified for integrity against internal checks. This is important for classrooms where teachers want to control which apps their students can use.

The inability to install outside software also lets Microsoft reduce boot times for low-end classroom laptops to around 15 seconds, giving students quicker access to applications when class begins.

With the addition of Microsoft-verified security comes a major downside: a needed school or work-related application may simply not exist in the Windows Store. The idea is great for grade school students who don’t need access to Adobe Premiere Pro, AutoCAD, 3ds Max Design, MATLAB, and WavePad, but it does mean that their web browsing choices will be locked down to Microsoft Edge and Opera, as Firefox and Chrome are not available in the Windows Store. The same goes for retail versions of Microsoft Office, though it appears the company is now preparing to launch standalone desktop Office apps for the Windows Store.

Windows Store will need some changes

Of course, Microsoft will need to close some known workarounds in its Windows Store if teachers are to be given full control over the classroom PC environment. The company will need to begin removing VPN apps from the Windows Store that let students unblock apps and bypass webpage blocks anonymously, or it will need to advise IT admins that they can configure Group Policies to prevent access to the Windows Store once the required apps are installed.

On the upside, the advantage of having a tight seal on security is that students will have a much harder time installing viruses, malware and trojan horses, or even old school software that drains the battery and causes occasional blue screens.

Preinstalled on all new Windows 10 education PCs starting at $189

Microsoft says that in addition to its new Surface Laptop, Windows 10 S will begin shipping on education PCs that start as low as $189. This will include a free subscription to Microsoft Office 365 for Education with Microsoft Teams. In contrast, Google’s current Chromebook for Education pricing ranges from the Acer C7 Chromebook at $259 all the way up to Lenovo’s ThinkPad X131e at $399, plus an additional $30 for management and support.

The company is also introducing Microsoft Intune for Education, a version of its cloud-based management tool that allows educators to provision student device groups through a single console.

As a bonus, Microsoft will also give school districts a free switch to Windows 10 S from their existing Windows 10 Pro license keys. Conversely, it will also let them upgrade preinstalled Windows 10 S devices to Windows 10 Pro for the low price of $49.

Last but not least, the company is going to let students off the hook by including a free subscription to Minecraft: Education Edition for new Windows 10 education PCs. It is nice enough that they are preventing the installation of viruses and malware through curated Windows Store apps, but Minecraft access is an additional cherry on top.

AMD is apparently delivering on its promise to push developers to release patches that improve performance on its Ryzen CPUs, and Oxide Games' Ashes of the Singularity is the first title to get one.

Thanks to Oxide Games' new patch for Ashes of the Singularity, the Ryzen CPU performance has been improved by up to 30 percent in "Average Frames Per Second All Batches".

According to AMD, as well as the performance analysis from some sites like Tom's Hardware, Legit Reviews and PC Perspective, the performance improvement with the Ashes of the Singularity: Escalation Update puts the Ryzen 7 1800X ahead of Intel's Core i7-7700K, which is its main rival.

The new update for Ashes of the Singularity now scales properly with increasing resolution, as noted by Legit Reviews, and it appears to benefit from higher-frequency DDR4 memory as well.

Hopefully, the patch from Oxide Games for Ashes of the Singularity is just a start and the rest of the developers will follow the same path. Stardock and Oxide Games showed that it can be done and bear in mind that AMD already has a big partnership with Bethesda as well as some other developers.

AMD has also released developer kits and plans to get over 1,000 Ryzen-based systems to developers this year, allowing them to optimize their current and future games for the new CPU architecture. AMD has also released the CodeXL Power Profiler as an open-source tool for GPU debugging and GPU/CPU profiling, which should at least help developers bring certain CPU optimizations.

Western Digital has announced its first entry into the NVMe solid state drive market. Coming out of the gate strong, its first offering is a PCIe Gen3 x4 NVMe-based SSD that it expects to deliver more than three times the sequential read speeds of its current SATA SSD portfolio.

Initially offered in 256GB and 512GB capacities, Western Digital recommends pairing the product with a high-capacity hard drive, or using it as primary storage when building a future-ready PC. The WD Black PCIe SSD boasts sequential read and write speeds of up to 2050MB/s and 800MB/s respectively. Western Digital's internal benchmarking shows that people using the new WD Black PCIe SSD to boot up, load read-intensive games or applications, or shut down a system may see an improvement of more than 10 seconds compared to SATA SSDs.
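To put the "more than 10 seconds" claim in perspective, here is a back-of-the-envelope sketch of sequential transfer times at the quoted speeds; the 10GB workload size and the 550MB/s SATA baseline are our assumptions, not WD's test setup:

```python
def transfer_seconds(size_mb, speed_mb_s):
    # Time to sequentially read size_mb at a sustained speed_mb_s.
    return size_mb / speed_mb_s

game_mb = 10_000  # a hypothetical 10GB read-intensive game load
sata = transfer_seconds(game_mb, 550)    # ~18.2 s on a fast SATA SSD
nvme = transfer_seconds(game_mb, 2050)   # ~4.9 s at the quoted read speed
print(round(sata - nvme, 1))  # ~13.3 seconds saved
```

A purely sequential 10GB load would indeed save a little over 13 seconds, consistent with the claim.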

One thing that is always a concern when it comes to SSDs is quality and support. Western Digital is standing by its industry-leading 1.75M hours MTTF, and the drives have qualified for its WD Functional Integrity Testing (F.I.T.) Lab certification. Just like Western Digital's previously launched SSDs, the WD Black PCIe SSDs include free, downloadable WD SSD Dashboard software, which allows continuous performance and capacity monitoring as well as firmware updates. Both models of the WD Black PCIe SSD come with a five-year limited warranty.
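An MTTF figure is easier to interpret as an annualized failure rate. Under the usual constant-failure-rate (exponential) assumption, which is ours rather than anything WD publishes, 1.75M hours works out to roughly half a percent per drive-year:

```python
import math

def annualized_failure_rate(mttf_hours, hours_per_year=8760):
    # Exponential model: P(failure within one year) = 1 - exp(-t / MTTF).
    return 1 - math.exp(-hours_per_year / mttf_hours)

print(round(annualized_failure_rate(1_750_000) * 100, 2))  # ~0.5 (percent)
```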

Consumers can expect the WD Black PCIe SSDs to be available in 256GB and 512GB capacities in a single-sided M.2 2280 form factor. The manufacturer's suggested retail price (MSRP) will be $109 for the 256GB model and $200 for the 512GB model. The WD Black PCIe SSD will be available worldwide in the first half of 2017.

Western Digital has formally entered the Solid State Drive (SSD) market today with two offerings in the WD Blue and WD Green product lines. The SSDs will broaden the company’s portfolio of existing hard drives for PCs and workstations. Consumers stand to benefit from the combination of reliable storage, 1.75M hours MTTF, reduced power overhead and less heat when compared to HDDs.

WD Blue SSDs are by design optimized for multitasking and resource-heavy applications. At launch they will be available in 250GB, 500GB, and 1TB capacities, and in both 2.5-inch/7mm and M.2 2280 form factors. Being a performance drive, the WD Blue SSD is expected to offer sequential read and write speeds of up to 545MB/s and 525MB/s respectively, and endurance of up to 400 TBW. Western Digital has told us that the suggested retail price will range from $79 for the 250GB model to $299 for the 1TB model.

WD Green SSDs are set to deliver essential-class performance and will be available in 120GB and 240GB capacities, and in both 2.5-inch/7mm and M.2 2280 form factors. The WD Green lineup is designed to deliver sequential read and write speeds of up to 540MB/s and 405MB/s, and endurance of up to 80 TBW. The drives have an expected launch date of later this quarter.

Both drive offerings will include free, downloadable WD SSD Dashboard software, and are protected by a three-year limited warranty.

HTC is committed to updating most of its flagship phones within 90 days of Google's official OS release. Last year's flagship HTC M9 has been updated and we have noticed that most benchmark scores have significantly increased.

After the update was installed, we noticed that the Antutu score jumped from an average of 56000 under Android 5.1 to the whopping 111411 that we are seeing now. Some of our German colleagues saw similar performance increases. When we saw the HTC M9 at Mobile World Congress 2015, we scored 53852, and this figure increased gradually over time. At least until we received the Android 6.0 update.

Geekbench scores also jumped. Before the update we were seeing around 800 in the single-core test and 2430 in the multi-core test. After the Android 6.0 update, which took more than an hour to install, the single-core score jumped to 1245 (a roughly 56 percent increase) and the multi-core score to 3828 – a 58 percent increase.
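Recomputing the gains from the raw Geekbench scores is simple percent-increase arithmetic:

```python
def pct_increase(before, after):
    # Percentage gain going from `before` to `after`.
    return (after - before) / before * 100

print(round(pct_increase(800, 1245)))   # single-core: 56 percent
print(round(pct_increase(2430, 3828)))  # multi-core: 58 percent
```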

The above scores are from May 2015 and February 3 2016 (with Android 6.0 update).

3Dmark GPU tests remained the same, while the Basemark II score increased from 1500 to 1668 – not as dramatic as the other gains.

Android 6.0 has a new compiler, but this is not the kind of performance change you would normally expect from an Android OS upgrade. We are talking to HTC to get to the bottom of it and see what might be the cause of such a dramatic performance increase. Maybe HTC finally uncaged the Snapdragon 810.

The battery life doesn’t seem to be affected and the phone doesn’t get hot, so users can only be happy.

Microsoft's upcoming DirectX 12 API will bring plenty of performance improvements, but one of the most impressive is its ability to mix GPUs, combining Radeon and Geforce graphics cards for some extra performance.

A fresh report from Anandtech.com pairs quite a few DirectX 12 graphics cards and tests their performance in the Ashes of the Singularity DirectX 12 demo. It shows that mixing works and provides some impressive performance gains, higher than we expected.

Currently, alternate-frame rendering using the multi-GPU method works, but it does not replace Nvidia's SLI or AMD's Crossfire configurations, which give a much higher performance gain. Plenty of factors have an impact on performance, the GPU being the primary one, along with which graphics cards are mixed.

The performance gains when mixing different GPU vendors are quite impressive as Anandtech.com managed to mix the Radeon R9 Fury X with Geforce GTX 980 Ti, which provided a better performance gain compared to R9 Fury X and R9 Fury or GTX 980 Ti and Titan X. At 4K/UHD resolution, a mix between the R9 Fury X and GTX 980 Ti was quite a bit faster and the same configuration had a higher average frame rate even at 1440p resolution.

What is even more impressive is that you can mix older graphics cards. Anandtech.com shows you can also add a Geforce GTX 680 to a Radeon HD 7970 to get a much higher average frame rate. While the HD 7970 gives an average of 30 FPS and the GTX 680 pushes an average of 24.5 FPS at 1440p resolution, the combination delivers a decent improvement, ending up with an impressive average of 46.4 FPS.
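A handy way to judge these multi-GPU results is scaling efficiency: the combined frame rate divided by the sum of what each card manages alone. This naive ceiling is our framing, not a metric from Anandtech's article:

```python
def scaling_efficiency(combined_fps, *single_fps):
    # Fraction of the naive "sum of both cards" ceiling actually achieved.
    return combined_fps / sum(single_fps)

# HD 7970 (30 FPS) + GTX 680 (24.5 FPS) combine for 46.4 FPS at 1440p.
print(round(scaling_efficiency(46.4, 30, 24.5), 2))  # ~0.85
```

Roughly 85 percent of the theoretical maximum is a very healthy result for two mismatched cards.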

The ability to mix two graphics cards that were previously unable to work together is an impressive DirectX 12 feature, and the fact that it already works in the Ashes of the Singularity demo is even more impressive. This will certainly make an upgrade easier for the average user, as it is possible to use an older graphics card for an additional performance boost and mix graphics cards that previously could not work together in a multi-GPU configuration.

With the Wii U now out in the US, a number of people have been tearing the unit apart to figure out what makes it tick. The comparisons, however, seem to be centered on the CPU. While Nintendo will not talk about its CPU's clock speed, it is becoming apparent that while this CPU is stronger and more powerful than the one in the Wii, it is simply slower than the CPUs in the Xbox 360 and PlayStation 3.

What we don’t know yet is how much the slower CPU will hamper the development of software for the platform, or if it is possible to dynamically control the CPU clock speed as some have suggested, which could yield better on-demand performance, but we doubt it.

The decision is clearly a money saving move by Nintendo, and so far we have not seen anything that makes us think that the system is hampered by the choice in a very negative way. So far, the CPU appears to be up to the task, but long term evidence suggests it could struggle.

President of Nintendo Satoru Iwata has revealed that the performance gap between the Wii U and the competition's next-gen consoles will be smaller than it was between the Wii, PS3 and Xbox 360.

Iwata said the company hasn't been successful in keeping up the Wii's momentum over the last two years. He attributed the problem to a smaller number of titles, brought about by the launch of the 3DS and Wii U.

As far as sales go, Wii has been beating PS3 and Xbox 360 silly for most of its career, but did not get far from the casual gaming market. This means that although the console sold like hotcakes, it never actually catered to serious gamers.

Microsoft and Sony, on the other hand, raked in plenty of dough from multi-platform titles not available on Nintendo's console. The reason for this was the Wii's lack of horsepower, something which Iwata claims will be fixed on the Wii U. Unfortunately, Iwata says he cannot guarantee that the Wii U will not end up just like the Wii when it comes to multi-platform gaming.

He said the performance gap between Wii U and the competition will not mimic the scenario with the original Wii and its competitors, and will in fact be much thinner. He conceded that the competing consoles may indeed be faster, especially since they will launch in 2013 or 2014. However, he said that future console support for 720p and 1080p will mean Wii U will not lag behind the competition.

Iwata once again talked up the Wii U's GamePad controller, reminding the world of just how practical it can be. He pointed out that game consoles "have long been 'parasites' of TV sets at home", and I bet many a mother would agree with this. So, gamers will now be able to continue games even after that cranky person who pays for their upbringing takes over the picture box.

In related news, the company revealed that its 3DS hasn't had the desired momentum in the States and the EU, and, when we're already stating the obvious here, I'd like to add that the sky is very blue, or at least it seems so most of the time.

So far we've had a few GTX 670 graphics cards in our tests. Throughout the tests, we've seen that overclocked GTX 670 cards can score comparably to GTX 680 and what's even better, many of Nvidia's partners offer factory overclocked GTX 670s running at GTX 680 clocks. One such card is Gainward's GTX 670 Phantom 2GB.

The Phantom's GPU runs at 1006MHz while the memory is at 1527MHz. Note that the reference clocks are 915MHz for the GPU and 1502MHz for memory.

An important difference compared to the GTX 680 is that the GTX 680 comes with eight SMX units and 1536 CUDA cores (each unit containing 192 CUDA cores), while the GTX 670 has seven SMX units and 1344 CUDA cores. Nvidia kept the identical memory system used on its GTX 680 card, meaning four 64-bit memory controllers (256-bit memory interface) and 2GB of GDDR5 memory.
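The core counts follow directly from the SMX count, since every Kepler SMX contains 192 CUDA cores:

```python
CORES_PER_SMX = 192  # CUDA cores in one Kepler SMX unit

def cuda_cores(smx_units):
    return smx_units * CORES_PER_SMX

print(cuda_cores(8))  # GTX 680: 1536
print(cuda_cores(7))  # GTX 670: 1344
```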

We already tested the GTX 670 Phantom, here, and today we'll show you what it's capable of when running in SLI. What we're particularly interested in is the price-performance ratio. We're hoping that two GTX 670 Phantom cards will score similarly to two GTX 680, which would mean that they outscore a single GTX 690. Note that buying two GTX 670 Phantom cards can save up to 250 euro compared to two GTX 680 cards or a single GTX 690.

With the launch of the GTX 670, Gainward launched a new version of its ExperTool, which is now in version II. ExperTool allows for overclocking Kepler-based graphics cards, displaying sensor readouts and doing simple fan RPM control. It looks much better than the previous versions as well. You can find it here. We must say it would've been great if Gainward had thrown in sensor readouts for the second card in an SLI setup as well.

The card comes in a really neat looking box with a handle for carrying.

The Phantom cooler has its own style. The fans are hidden and can only be seen when looking straight at the card.

Three heatpipes are in charge of transferring heat from the cooler's base to the heatsink. Two 8cm fans take care of cooling the heatsink.

The GTX 670 Phantom comes with two dual-link DVIs, and standard HDMI and DisplayPort connectors. The card can run up to four displays simultaneously.

The cooler base is made of aluminum instead of more commonly used copper.

GTX 670 Phantom is about 24.7cm long, which is about the same as the reference GTX 670, but the Phantom cooler will take up three slots while the reference cooler is only two slots wide. This can prove to be a problem if you are planning on 3-way SLI.

Nvidia decided to use a minimalistic PCB, which is only 17.2cm long; the cooler is actually to blame for the GTX 670's 24.7cm length. As you can see from the pictures below, the Phantom's PCB is slightly changed. All memory chips are placed on the GPU side, while with the reference design, odd and even memory chips were placed on opposite sides of the PCB.

Apart from the different distribution of the memory chips, Gainward's PCB looks similar to Nvidia's reference PCB shown in the picture below.

Nvidia GTX 670 2GB

The GTX 670 comes with 2GB of memory across eight memory chips, just like the reference card. Gainward's GTX 670 Phantom uses Hynix memory chips (model No: H5GQ2H24AFR-R0C), which are specified to run at 1500MHz (6000MHz GDDR5 effective).
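From the 256-bit bus and GDDR5's quad-pumped signaling (the effective data rate is four times the command clock), the memory bandwidth is easy to work out; the helper below is a quick sketch of ours using the clocks quoted earlier:

```python
def mem_bandwidth_gb_s(bus_width_bits, command_clock_mhz):
    # GDDR5 is quad-pumped: effective transfer rate = 4 x command clock.
    effective_mt_s = command_clock_mhz * 4
    return bus_width_bits / 8 * effective_mt_s / 1000

print(round(mem_bandwidth_gb_s(256, 1502), 1))  # reference 1502MHz: ~192.3 GB/s
print(round(mem_bandwidth_gb_s(256, 1527), 1))  # Phantom's 1527MHz: ~195.5 GB/s
```

The Phantom's small memory overclock is worth a few extra GB/s over the reference card.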

Gainward GTX 670 Phantom’s cooler has to deal with the factory overclock but it manages to do its job well, at least when it comes to keeping thermals in check. Our test with a single GTX 670 Phantom graphics card revealed temperatures up to 79 degrees Celsius, but SLI drove the first card to 85 degrees Celsius.

When it comes to noise levels, a single GTX 670 Phantom is about the same as the reference GTX 670. Cooling performance and noise are even better once you consider the Phantom's high factory overclock, but we have indeed seen better. Unfortunately, our GTX 670 Phantom SLI setup ran a bit loud after some gaming. The fans weren't too loud, but they are louder than two GTX 680s in SLI.

The fans are quiet in idle mode.

For overclocking we left the fans in AUTO mode, since manual settings didn’t affect overclocking that much. Thermals were good even after our overclock but the fans are really loud.

Power Consumption

With two GTX 670 graphics cards we can save more than 200 euros and get performance similar to GTX 680 SLI. We need to overclock those GTX 670 cards of course, but for those who do not want to deal with overclocking, two GTX 670 Phantom cards are a viable option: the GTX 670 Phantom sports a factory overclocked GPU set at 1006MHz, exactly the same clock used on the GTX 680.

Performance of a single GTX 670 Phantom graphics card is close to that of GTX 680 but not the same, mainly because GTX 680 has 1536 CUDA cores while GTX 670 has 1344. We haven't noticed any significant difference in games except in tessellation heavy tests. Memory subsystems on both cards are the same 256-bit ones and each card has 2GB of GDDR5 memory. As expected, GTX 670 SLI power consumption is a bit lower compared to the GTX 680 SLI.

The performance boost we got with SLI is great. We could play any game at 2560x1600. Additional overclocking is similar to what we scored with a single GTX 670 Phantom card.

The only thing we did not like with GTX 670 Phantom SLI is fan noise. The fans are not too loud but are not comfortable either. Two GTX 680 cards in SLI are a bit quieter compared to the GTX 670 Phantom SLI.

If you value quiet operation and power consumption, the best decision would be to go for GTX 690. 1000 euro buys two GTX 680 cards, or a single GTX 690. At the same time, 760 euro for two GTX 670 Phantom cards sounds like a much more reasonable choice for most of us.

In short we just showed that performance-wise, two factory overclocked GTX 670 Phantom cards can hold their own against the GTX 680 SLI, and they are certainly a more affordable option. Bear in mind though that Phantom cooling is three slots wide.