The QNAP TVS-463 8G is powered by an AMD GX-424CC, part of the Steppe Eagle family of SoCs, which includes a Mullins-based Radeon R5E GPU. There are several models, ranging from the entry level with only 4GB of RAM (expandable to 16GB) to the review model TechPowerUp received, which sits in the middle at 8GB. You can install up to four 2.5" or 3.5" SATA3 disks in a variety of RAID configurations; the NAS ships empty, so you will need to provide your own drives. It is a little expensive at just over $800, which includes the internal PSU and the built-in OS that lets you activate your NAS via the web with a simple command. It has two Gigabit ports with LACP support, and you can even pick up an expansion card to upgrade to 10GbE. Read the full review to get an idea of just how capable this NAS is.

"QNAP has for the first time used an AMD CPU with one of their NAS offerings. The new series is codenamed TVS-x63, and today, we will evaluate the TVS-463, which, as its model number implies, can take up to four HDDs. It is also 10GbE ready through an optional expansion card."

It will not be officially rolled into the mainline kernel until 4.2, but you can currently grab the new code by following the links from Phoronix. This new AMDGPU kernel driver will be used by both the full open-source driver and the Catalyst driver officially provided by AMD, and it provides support not only for the R9 285 but for upcoming families as well. There is still some development to be done; AMD's Alex Deucher told Phoronix that this initial code lacks power management features for Tonga, but that will be addressed shortly.

"At long last the source code to the new AMDGPU driver has been released! This is the new driver needed to support the Radeon R9 285 graphics card along with future GPUs/APUs like Carrizo. Compared to the existing Radeon DRM driver, the new AMDGPU code is needed for AMD's new unified Linux driver strategy whereby the new Catalyst driver will be isolated to being a user-space binary blob with both the full open-source driver and the Catalyst driver using this common AMDGPU kernel driver."

The CEO of AMD is an unexpected, but probably very accurate, source when it comes to knowing the Windows 10 release date. First off, the news broke on a quarterly earnings call. When you make a statement on those, you have a strong legal obligation to be telling the truth according to the knowledge that you have at the time. Also, as a major hardware vendor of CPUs and GPUs, her company would have been notified by Microsoft so that they could plan development of graphics drivers and so forth. It also aligns with the “Summer” announcement made last month by Microsoft.

Of course, this led to a flurry of comments that claim three months will not be enough time to bake a successful product. Others, naturally, claim that Microsoft has been developing software for long enough to know that they can finish their product in three months. Still others shrug and say, “Yeah, you both make sense. I'm going to go play some Grand Theft Auto.”

One aspect that I don't see mentioned enough is that Microsoft has multiple projects and teams on the go, and we only see a fraction of what is being done in our Insider branch. Despite the narrative that Microsoft wishes to avoid another Windows 8 fiasco and wants its users to guide development, the company has alluded to the fact that a major reason for the Insider program is to test its build delivery system. While I am having a bit of a hard time finding the supporting quote, I did find one reference to it being the reason for ISOs being delayed.

"And finally – we heard from you loud and clear you want ISO images of the new builds we release. We felt it was important to listen to that and give you what you want – but there’s a catch. Getting the update & install data from our Preview Builds mechanism is super important for us. It helps us ensure smooth ESD distribution, download, and upgrade success for this program going forward, and also will help us ensure great upgrades for people once we release Windows 10. So we’re going to release the ISOs at the same time as we publish to the Slow ring. That means if you want to be FIRST and FASTEST to get the build, you’ll need to use our Preview Builds mechanisms (either automatic or Check Now in PC Settings to download.) If you must have an ISO you’ll have to be a bit more patient. I hope that you’ll consider that a fair tradeoff."

So what is my point? Basically, it is difficult for us to make assumptions about how baked Windows 10 is from our standpoint. Microsoft is being more open with us than ever about its development methods, but we don't know certain key things. We don't know what final feature set is planned. We don't know how much work has been done on any individual feature since it was merged into a build that we saw. We also don't know how much has been done by third parties. In some cases, a release in three months could equate to, say, six months of work for a specific team since their last contribution was merged. I do think that any major features we see at BUILD will pretty much be the last additions to the OS before it launches, though, unless they have a surprise that will surface at E3 or something.

Also, remember that the things they show us are slanted to what they want feedback about.

Just over three years ago, AMD purchased SeaMicro for $334 million to give it a way to compete in HPC applications against Intel, which had recently bought up QLogic's InfiniBand interconnect technology. The purchase of SeaMicro included their Freedom Fabric technology, which at that time could create servers using Atom or Xeon chips in the same infrastructure. AMD developed compatibility with its existing Opteron chips, and it was thought that this would be a perfect platform on which to launch Seattle, its 64-bit ARM chips. Unfortunately, the poor revenue that AMD has seen means that the SeaMicro server division is being cut so the company can focus on its other products. Lisa Su obviously has more information than we do on the performance of AMD, but it seems counter-intuitive to shut down the only business segment to make positive income; then again, as The Register points out, the $45m it made is down almost 50% from this time last year. AMD will keep the fabric patents, but as of now we do not know if it is looking to sell its server business, license the patents, or follow some other business plan.

"Tattered AMD says it's done with its SeaMicro server division, following a grim quarter that saw the ailing chipmaker weather losses beyond the expectations of even the gloomiest of Wall Street analysts."

Grand Theft Auto V launched today at around midnight, UK time, worldwide. This corresponded to 7PM EDT for those of us in North America. Well, add a little time for Steam to unlock the title and a bit longer for Rockstar to get enough servers online. One thing you did not need to wait for was new video card drivers. Both AMD and NVIDIA have day-one drivers that provide support.

Personally, I ran the game for about a half hour on Windows 10 (Build 10049) with a GeForce GTX 670. Since these drivers are not for the pre-release operating system, I tried running it on 349.90 to see how it performed before upgrading. Surprisingly, it seems to be okay (apart from a tree that was flickering in and out of existence during a cut-scene). I would definitely update my drivers if they were available and supported, but I'm glad that it seems to be playable even on Windows 10.

Over at Techgage, one of the writers recently updated their system. Due to budget constraints, they needed to stay in the $600-700 range all told, which of course indicates an AMD build. They chose the $138 FX-8320E for their processor, along with a pair of GTX 760s, the ASUS M5A99FX Pro R2.0, and 8GB of DDR3-1866, and with storage, power, cooling, and case they managed to keep within their budget. The question that remains is whether it is powerful enough for reasonable gaming duties such as Borderlands 2. Read on to see if the recommendation is to go with AMD or with the i3-4330 and a low-end H97 board.

"Released this past fall, AMD’s FX-8320E processor promises to deliver a lot of processing power for those on a budget. It sports eight cores, and as a Black Edition, its overclocking capabilities are unrestricted. But is that enough to make this the best go-to budget processor, especially for gamers?"

The screen technology itself was impressive: a 2560x1440 resolution, IPS-style implementation and a maximum refresh rate of 120 Hz. (Note: the new marketing material indicates that the panel will have a 144 Hz maximum refresh rate. Maybe there was a hardware change since CES?) During a video interview with ASUS at the time it was labeled as having a minimum refresh rate of 40 Hz which is something we look forward to testing if and when we can get a sample in our labs.

At the time, there was some interesting debate about WHY this wasn't a FreeSync branded monitor. We asked AMD specifically about this monitor's capability to work with capable Radeon GPUs for variable refresh, and they promised there were no lock-outs occurring. We guessed that maybe ASUS' deal with NVIDIA on G-Sync was preventing them from joining the FreeSync display program, but clearly that wasn't the case. Today on Twitter, AMD announced that the MG279Q was officially part of the FreeSync brand.

I am glad to see more products come into the FreeSync monitor market and hopefully we'll have some solid gaming experiences with the ASUS MG279Q to report back on soon!

Process Technology Overview

We have been very spoiled throughout the years. We likely did not realize exactly how spoiled we were until it became very obvious that the rate of process technology advances had hit a virtual brick wall. Every 18 to 24 months, a new, faster, more efficient process node was opened up to fabless semiconductor firms, and we were treated to a new generation of products that would blow our hair back. Now we are at a virtual standstill when it comes to new process nodes from the pure-play foundries.

Few expected the 28 nm node to live nearly as long as it has. Some of the first cracks in the façade actually came from Intel. Their 22 nm Tri-Gate (FinFET) process took a little bit longer to get off the ground than expected. We also noticed some interesting electrical features from the products developed on that process. Intel skewed away from higher clockspeeds and focused on efficiency and architectural improvements rather than staying at generally acceptable TDPs and leapfrogging the competition by clockspeed alone. Overclockers noticed that the newer parts did not reach the same clockspeed heights as previous products such as the 32 nm based Sandy Bridge processors. Whether this decision was intentional from Intel or not is debatable, but my gut feeling here is that they responded to the technical limitations of their 22 nm process. Yields and bins likely dictated the max clockspeeds attained on these new products. So instead of vaulting over AMD’s products, they just slowly started walking away from them.

Samsung is one of the first pure-play foundries to offer a working sub-20 nm FinFET product line. (Photo courtesy of ExtremeTech)

When 28 nm was released, the plan on the books was to transition to 20 nm products based on planar transistors, thereby bypassing the added expense of developing FinFETs. It was widely expected that FinFETs would not be required to address the needs of the market. Sadly, that did not turn out to be the case. There are many other factors as to why 20 nm planar parts are not common, but the limitations of that particular node have made it a relatively niche process that is appropriate for smaller, low-power ASICs (like the latest Apple SoCs). The Apple A8 is rumored to be around 90 mm², which is a far cry from the traditional midrange GPU that runs from 250 mm² to 400+ mm².

The essential difficulty of the 20 nm planar node appears to be a lack of power scaling to match the increased transistor density. TSMC and others have successfully packed more transistors into every square mm as compared to 28 nm, but the electrical characteristics did not scale proportionally well. Yes, there are improvements per transistor, but when designers pack all those transistors into a large design, TDP and voltage issues start to arise. More transistors switching at a given voltage and clock draw more power, which in turn produces more heat. The GPU guys probably looked at this and figured out that while they could achieve a higher transistor density and a wider design, they would have to downclock the entire GPU to hit reasonable TDP levels. Add in yield and binning concerns for the new process, and the advantages of going to 20 nm would be slim to none at the end of the day.
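The trade-off described above can be sketched with the classic dynamic power relation P = N · C · V² · f. The specific transistor counts, capacitance, and voltage figures below are purely illustrative assumptions, not TSMC or foundry data; the point is only to show how a density jump without a matching per-transistor power drop forces a downclock:

```python
# Back-of-the-envelope dynamic (switching) power: P = N * C * V^2 * f.
# All numbers below are illustrative assumptions for a hypothetical GPU.

def chip_power(transistors, cap_per_transistor, voltage, freq_hz):
    """Total switching power in watts for N transistors toggling at freq_hz."""
    return transistors * cap_per_transistor * voltage**2 * freq_hz

# Hypothetical 28 nm GPU: 5 billion transistors at 1.0 V and 1.0 GHz.
p_28nm = chip_power(5e9, 4e-17, 1.00, 1.0e9)

# Hypothetical 20 nm shrink of the same die area: density nearly doubles,
# but per-transistor capacitance and voltage only improve modestly.
p_20nm = chip_power(9.5e9, 3.4e-17, 0.95, 1.0e9)

print(f"28 nm design: {p_28nm:.0f} W")
print(f"20 nm design: {p_20nm:.0f} W")
# The wider 20 nm design draws noticeably more power at the same clock,
# so it would have to be downclocked to land back at the original TDP.
```

Under these made-up numbers the denser 20 nm part comes out roughly 45% hotter at the same clock, which matches the article's argument: the density is there, but the electrical scaling is not.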