The official press deck for Coffee Lake-S was leaked to the public, so Intel gave us the go-ahead to discuss the product line-up in detail (minus benchmarks). While the chips are still manufactured on the same 14nm process as Kaby Lake, Skylake, and Broadwell, there is now more silicon on each of them. The line-up is as follows: Core i3 gets four cores without HyperThreading and without Turbo Boost, Core i5 gets six cores without HyperThreading but with Turbo Boost, and Core i7 gets six cores with both HyperThreading and Turbo Boost.

While the slide deck claims that the CPU still has 16 PCIe 3.0 lanes, the whole platform supports up to 40. They specifically state “up to” over and over again, so I’m not sure whether that means “for Z370 boards” or if there will be some variation between individual boards. Keep in mind that only 16 of those lanes come from the processor itself; the rest are part of the chipset. This is unchanged from Z270.

Moving on, Intel has been branding this as “Intel’s Best Gaming Desktop Processor” throughout the presentation. The reasoning is probably two-fold. First, this is the category of processors that high-end mainstream (but still enthusiast) PC gamers target. Second, gaming, especially at very high frame rates, is an area where AMD’s Ryzen platform has been struggling.

Speaking of performance, the clock rate choices are quite interesting compared to Kaby Lake. In all cases, the base clock takes a little dip from the previous generation, but the Turbo clock, where one exists, gets a little bump. For instance, going from the Core i7-7700k to the Core i7-8700k, your base clock drops from 4.2 GHz to just 3.7 GHz, but the Turbo jumps up from 4.5 GHz to 4.7 GHz. You also have a little more TDP to work with (95W vs 91W) on the 8700k. I’m not sure what this increased variance between low and high clock rates will mean, but it’s interesting to see Intel making some sort of trade-off on the back end.

(Editor's note: the base clock is only going to be a concern when running all cores for a long period of time. I fully expect performance to be higher for CFL-S parts than KBL-S parts in all workloads.)

The last thing that I’ll mention is that, of the two i3s, the two i5s, and the two i7s, one is locked (and lower TDP) and one is unlocked. In other words, Intel has an unlocked solution in all three classifications, even the i3. Even though it doesn’t have a turbo clock setting, you can still overclock it by hand if you desire.

Prices range from $117 to $359 USD, as seen in the slide, above. They launch on October 5th.

Specifications and Architecture

It has been an interesting 2017 for Intel. Though still the dominant market share leader in consumer processors of all shapes and sizes, from DIY PCs to notebooks to servers, it has come under pressure from AMD unlike any it has felt in nearly a decade. It started with the release of AMD Ryzen 7 and a family of processors aimed at the mainstream and enthusiast markets. That was followed by the EPYC processor release, moving in on Intel’s turf in the enterprise markets. And most recently, Ryzen Threadripper took a swing (and hit) at the HEDT (high-end desktop) market that Intel had created and held as its own since the days of the Nehalem-based Core i7-920 CPU.

Between the time Threadripper was announced and when it shipped, Intel made an interesting move. It decided to announce and launch its updated family of HEDT processors, dubbed Skylake-X. Only available in a 10-core model at first, the Core i9-7900X was the fastest processor we had tested in our labs at the time. But it was rather quickly overtaken by the likes of the Threadripper 1950X, which ran with 16 cores and 32 threads of processing. Intel had already revealed that its HEDT lineup would go up to 18-core options, though availability and exact clock speeds remained in hiding until recently.

|                     | i9-7980XE | i9-7960X | i9-7940X | i9-7920X | i9-7900X | i7-7820X | i7-7800X | TR 1950X | TR 1920X | TR 1900X |
|---------------------|-----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|
| Architecture        | Skylake-X | Skylake-X | Skylake-X | Skylake-X | Skylake-X | Skylake-X | Skylake-X | Zen | Zen | Zen |
| Process Tech        | 14nm+ | 14nm+ | 14nm+ | 14nm+ | 14nm+ | 14nm+ | 14nm+ | 14nm | 14nm | 14nm |
| Cores/Threads       | 18/36 | 16/32 | 14/28 | 12/24 | 10/20 | 8/16 | 6/12 | 16/32 | 12/24 | 8/16 |
| Base Clock          | 2.6 GHz | 2.8 GHz | 3.1 GHz | 2.9 GHz | 3.3 GHz | 3.6 GHz | 3.5 GHz | 3.4 GHz | 3.5 GHz | 3.8 GHz |
| Turbo Boost 2.0     | 4.2 GHz | 4.2 GHz | 4.3 GHz | 4.3 GHz | 4.3 GHz | 4.3 GHz | 4.0 GHz | 4.0 GHz | 4.0 GHz | 4.0 GHz |
| Turbo Boost Max 3.0 | 4.4 GHz | 4.4 GHz | 4.4 GHz | 4.4 GHz | 4.5 GHz | 4.5 GHz | N/A | N/A | N/A | N/A |
| Cache               | 24.75MB | 22MB | 19.25MB | 16.5MB | 13.75MB | 11MB | 8.25MB | 40MB | 38MB | ? |
| Memory Support      | DDR4-2666 Quad Channel | DDR4-2666 Quad Channel | DDR4-2666 Quad Channel | DDR4-2666 Quad Channel | DDR4-2666 Quad Channel | DDR4-2666 Quad Channel | DDR4-2666 Quad Channel | DDR4-2666 Quad Channel | DDR4-2666 Quad Channel | DDR4-2666 Quad Channel |
| PCIe Lanes          | 44 | 44 | 44 | 44 | 44 | 28 | 28 | 64 | 64 | 64 |
| TDP                 | 165 watts | 165 watts | 165 watts | 140 watts | 140 watts | 140 watts | 140 watts | 180 watts | 180 watts | 180 watts? |
| Socket              | 2066 | 2066 | 2066 | 2066 | 2066 | 2066 | 2066 | TR4 | TR4 | TR4 |
| Price               | $1999 | $1699 | $1399 | $1199 | $999 | $599 | $389 | $999 | $799 | $549 |

Today we are looking at both the Intel Core i9-7980XE and the Core i9-7960X, 18-core and 16-core processors, respectively. The goal from Intel is clear with this release: retake the crown as the highest performing consumer processor on the market. It will do that, but it does so at $700-1000 over the price of the Threadripper 1950X.

According to the Netherlands-based Hardware.info, while Kaby Lake-based processors will physically fit into the LGA-1151 socket of Z370 motherboards, they will fail to boot. Since their post, Guru3D asked around at various motherboard manufacturers, who claim that Intel will only support 8th Generation processors on that chipset via, again allegedly, a firmware lock-out.

If this is true, then it might be possible for Intel to allow board vendors to release a new BIOS that supports these older processors. Guru3D even goes one step further and suggests that, just maybe, motherboard vendors might have been able to support Coffee Lake on Z270 as well, if Intel would let them. I’m... skeptical about that last part in particular, but, regardless, it looks like you won’t have an upgrade path, even though the socket is identical.

It’s also interesting to consider where Hardware.info’s boot attempt failed: at the GPU initialization step. The prevailing interpretation is that everything up to that point is close enough to Kaby Lake that the BIOS didn’t even think to fail.

My own reading of where the boot failed, however, is to wonder whether there’s something odd about the new graphics setup that made Intel pull support for Z270. Also, Intel usually supports two CPU generations per chipset, so we had no real reason to expect Skylake and Kaby Lake compatibility to carry over, except for the stall in process technology keeping us on 14nm for so long.

Still, if older CPUs are incompatible with Z370 for purely artificial reasons, then that’s kind of pathetic. Maybe I’m odd, but I tend to buy a new motherboard with a new CPU anyway. Even so, the number of people who flash a BIOS with their old CPU before upgrading to a new one can’t be all that high, so it seems petty to nickel-and-dime the few who do, especially at a time when AMD can legitimately call Intel out for it.

The EPYC 7351P, which should sell for roughly $750, was tested against Intel's Xeon Silver 4108, which runs about $440, in various server applications such as GROMACS, OpenSSL, and even a chess benchmark. The tests were done with single-socket EPYCs, the "P" series, which are offered at a significant discount compared to AMD's dual-socket family, and they were benchmarked against Intel's Xeon Silver in both single- and dual-socket configurations. The only time the Xeons' performance came close to the single-socket 7351P was when they were configured in dual-socket systems, and even then AMD's EPYC chip came out on top, often by a significant margin.

Raw performance is not the only advantage AMD offers with EPYC; the feature set also far outstrips the somewhat watered-down Xeon Silver family. The single-socket 7351P offers 128 PCIe lanes while a dual-socket Xeon Silver can only offer 96, and EPYC can handle up to 2TB of DDR4-2666 through its eight-channel memory controller, whereas Intel is limited to 1.5TB of DDR4-2400 in a dual-socket server. The Xeon Silver line also lacks dual AVX-512 units and Omni-Path fabric support.

Intel does have some advantages that come with the maturity of its platform, including superb NVMe hot-swap support as well as QuickAssist, and its higher-end Xeon Gold chips do include the aforementioned features that the Xeon Silver line lacks; however, they are also significantly more expensive than EPYC.

You can expect more tests to appear in the future. STH invested a lot of money in new hardware to test, and since the tests can take days to complete, there will be some delay before they have good data to share. It is looking very positive for AMD's EPYC family: the chips offer an impressive amount of value for the money, and it will be interesting to see how Intel reacts.

The change in process technology continues to have a negative effect on DRAM supplies, and according to the story posted on Electronics Weekly there is no good news in sight. The three major vendors, Samsung, SK Hynix, and Micron, are all slowing production as new fabs are built and existing production lines are upgraded for new process technology such as EUV. This will ensure that prices continue to slowly creep up over the remainder of this year and likely into 2018. Drop by for more information on the challenges each vendor is facing.

"While overall DRAM demand will remain high in 2018, new fabs being planned will not be ready for mass production until 2019 at the earliest."

Overview

When we first saw the product page for the Marseille mCable Gaming Edition, a wave of skepticism washed across the PC Perspective offices. At first blush, an HDMI cable that claims to improve image quality while gaming sounds like the snake oil that "audiophile" companies like AudioQuest have been peddling for years.

However, after looking into some of the more technical details offered by Marseille, their claims seemed more and more plausible. By using a signal processor embedded inside the HDMI connector itself, Marseille appears to be manipulating the video signal to improve quality in ways applicable to gaming. Specifically, their claim of anti-aliasing on all video signals has us interested.

Even from the initial unboxing, there are some unique aspects to the mCable. First, you might notice that the connectors are labeled "Source" and "TV." Since the mCable has a signal processor in it, this distinction, which is normally meaningless, starts to matter a great deal.

Similarly, on the "TV" side, there is a USB cable used to power the signal processing chip. Marseille claims that most modern TVs with USB connections will be able to power the mCable.

While a lot of Marseille's marketing materials are based on upgrading the visual fidelity of console games that don't have adjustable image quality settings, we decided to aim at a market segment we are intimately familiar with: PC gaming. Since we can selectively turn off anti-aliasing in a given game, and PC games usually implement several types of AA, it seemed like the most interesting testing methodology.

New graphics drivers for GeForce cards were published a few days ago. Unfortunately, I became a bit reliant upon GeForce Experience to notify me, and it didn’t this time, so I am a bit late on the draw. The 385.69 update adds “Game Ready” optimizations for a bunch of new games: Project Cars 2, Call of Duty: WWII open beta, Total War: WARHAMMER II, Forza Motorsport 7, EVE: Valkyrie - Warzone, FIFA 18, Raiders of the Broken Planet, and Star Wars Battlefront 2 open beta.

One open issue is that the driver cannot be installed for a GeForce TITAN (which I’m assuming refers to the original, Kepler-based one) on a Threadripper-based motherboard in Windows 10: the OS refuses to boot after the initial install. I’m guessing this has been around for a while, but in case you’re planning on upgrading to Threadripper (or buying a second-hand TITAN) it might be good to know.

If you haven’t received notification to update your drivers yet, poke GeForce Experience to make sure that it’s running and checking. Or, of course, you can download them from NVIDIA’s website.

NVIDIA is adding a third SKU to their SHIELD TV line-up, shaving $20 off the price tag by including just a media remote, rather than the current low-end SKU’s media remote and a gamepad. This makes the line-up: SHIELD (16GB, Remote Only) for $179.00, SHIELD (16GB, Remote + Gamepad) for $199.99, and SHIELD PRO (500GB, Remote + Gamepad) for $299.99.

All SKUs come with MSI levels of uppercase brand names.

This version is for those who are intending to use the device as a 4K media player. If you are not interested in gaming, then that’s $20 in your pocket instead of a controller that you will never use on your shelf. If, however, you want to game in the future, then the first-party SHIELD CONTROLLER is $59.99 USD, so buying the bundle with the gamepad now will save you about $30 (Update, Sept 24th @ 5:45pm: $40... I mathed wrong.) That leaves a little bit to think about, but the choice can now be made.

The new bundle is now available for pre-order, and it ships on October 18th.

The iCX cooler on the card offers nine thermal sensors and multiple MCUs, along with asynchronous fan control, to manage both heat and noise simultaneously. You can choose between black or white models depending on the colour scheme of your PC, and there are customizable RGB colours for the visual alarms present on the card. The full PR is just below.

September 21st, 2017 – The EVGA GeForce GTX 1080 Ti FTW3 ELITE cards are now available with 12GHz GDDR5X memory, giving them 528 GB/s of memory bandwidth! These cards are available with either the ELITE Black or White shroud, and of course come with EVGA’s exclusive iCX technology, giving you 9 thermal sensors, onboard thermal LED indicators and incredible cooling with quiet operation.

Features

Includes EVGA iCX Technology

12GHz GDDR5X Memory

528 GB/s of Memory Bandwidth

Available in ELITE Black and White Colors

Includes EVGA iCX Technology

Featuring a total of 11 global patents (pending and granted), iCX is efficiency perfected.
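That 528 GB/s figure is simple arithmetic: effective per-pin data rate times bus width, divided by eight bits per byte. A quick sketch (the 352-bit bus width is the GTX 1080 Ti's standard spec, not something stated in the PR above):

```python
# Peak memory bandwidth = effective data rate (Gbps per pin) * bus width (bits) / 8
def memory_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Return peak memory bandwidth in GB/s."""
    return data_rate_gbps * bus_width_bits / 8

# EVGA's 12GHz-effective memory on the 1080 Ti's 352-bit bus:
print(memory_bandwidth_gbs(12, 352))  # -> 528.0
```

The same formula recovers the reference card's figure too: 11 Gbps on the same bus gives 484 GB/s.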

AMD's popularity with Ryzen CPUs (and upcoming APUs) has made waves across the industry, and Noctua have jumped in with a pair of low-profile offerings that update previous designs for cramped case interiors.

First up is the new version of the NH-L9a:

"The new NH-L9a-AM4 is an AM4-specific revision of Noctua’s award-winning NH-L9a low-profile CPU cooler. At a height of only 37mm, the NH-L9a is ideal for extremely slim cases and, due to its small footprint, it provides 100% RAM and PCIe compatibility as well as easy access to near-socket connectors, even on tightly packed mini-ITX motherboards."

Next is the new NH-L12S:

"The new S-version of the renowned NH-L12 not only adds AM4 support but also gives more flexibility and improved performance in low-profile mode. Thanks to the new NF-A12x15 PWM slim 120mm fan, the NH-L12S provides even better cooling than the previous model with its 92mm fan. At the same time, the NH-L12S is highly versatile: with the fan installed on top of the fins, the cooler is compatible with RAM modules of up to 45mm in height. With the fan installed underneath the fins, the total height of the cooler is only 70mm, making it suitable for use in many compact cases."

Noctua says that these new coolers are now shipping "and will be available shortly", with an MSRP of $39.90 for the NH-L9a-AM4 and $49 for the NH-L12S.

CNXSoft was granted a look at upcoming Intel NUC models this morning, including the next generation of systems, dubbed Hades Canyon, along with a variety of other Canyons. The most interesting are the top models, powered by Kaby Lake-H and a discrete GPU: the NUCxi7HVK, aka Hades Canyon VR, and the NUCxi7HNK, which is Hades Canyon without VR. Those two models will support up to six displays and offer two Thunderbolt 3 ports and a pair of PCIe SSDs, as well as support for Intel Optane. All of these features could require a slightly larger footprint than we are used to with NUCs, especially considering the discrete graphics. Head on over for more details on the other NUC models you can expect to see in the coming years.

"Intel’s new generation of Gemini Lake and Coffee Lake processors is expected to launch at the end of this year, beginning of next, and this morning I received Intel’s NUC roadmap that gives a good idea of what’s coming in 2018 and 2019."

The newest Radeon Software ReLive 17.9.2 is especially worth grabbing if you have or plan to have more than one Vega based card in your system as it marks the return of Crossfire support. You can pair up Vega64 or Vega56 cards but do make sure they are a matched set. We haven't had time to test the performance results yet but you can be sure we will be working on that in the near future. Below are the results which AMD suggests you can expect in several different games, as well as a look at the other notes associated with this new driver.

Today in China, Intel is holding its Technology and Manufacturing Day. Unlike previous IDF events, this one appears to be far more centered on the manufacturing aspects of Intel's latest process nodes. During the presentations, Intel talked about its latest steps down the process ladder to smaller and smaller geometries, all the while improving performance and power efficiency.

Mark Bohr presenting at Intel Technology and Manufacturing Day in China. (Image courtesy of Intel Corporation)

It really does not seem as though 14nm has been around as long as it has, but the first Intel products based on that node were released in the 2nd half of 2014. Intel has since done further work on the process. Today the company talked about two other processes as well as products being made on these nodes.

The 10nm process has been in development for some time and we will not see products this year. Instead we will see two product cycles based on 14nm+ and 14nm++ parts. Intel did show off a wafer of 10nm Cannon Lake dies. Intel claims that their 10nm process is still around 3 years more advanced than the competition. Other foundry groups have announced and shown off 10nm parts, but overall transistor density and performance does not look to match what Intel has to offer.

We have often talked about the marketing names these nodes are given, and how their actual specifications often do not live up to the name. Intel is not immune to this, but it comes closer to accurately describing these structures than the competition does. Even though this gap exists, competing foundries are improving their processes and offering compelling solutions at decent prices, so fabless semiconductor firms can mostly keep up with Intel.

A new and interesting process is being offered by Intel in the form of 22FFL. This is obviously a larger process node, but it is highly optimized for low-power operation, with far better leakage characteristics than the 22nm FinFET process that Intel used all those years ago. It is aimed at ultra-mobile devices with speeds above 2 GHz. This seems to be a response to other low-power lines like the 22FDX process from GLOBALFOUNDRIES. Intel did not mention potential RF implementations, which is something of great interest to those also looking at 22FDX.

Perhaps the biggest news released today is Intel Custom Foundry announcing an agreement with ARM to develop and implement ARM CPUs on the upcoming 10nm process. This could have a huge impact, depending on how much 10nm line space Intel is willing to sell to ARM's partners, as well as the timelines they are looking at to deliver products. ARM showed off a 10nm test wafer of Cortex-A75 CPUs. The company claims that it was able to design and implement these cores using industry-standard design flows (automated place and route, rather than fully custom) while achieving performance in excess of 3 GHz.

Gus Yeung of ARM holding a wafer of 10nm Cortex-A75 based CPUs produced by Intel. (Image courtesy of Intel Corporation)

Intel continues to move forward and invest a tremendous amount of money in its process technology. It has the ability to continue at this rate far beyond its competitors. Typically the company does a lot of the heavy lifting with the tooling partners, which then trickles down to the other manufacturers. This has allowed Intel to stay so far ahead of the competition, and with the introduction of 14nm+, 14nm++, and 10nm it will keep much of that lead. Now we must wait and see what kind of clock speed and power performance these new nodes deliver, as well as how well, and how quickly, the competition can react.

Epic Games has released a preview build of Unreal Engine 4.18. This basically sets a bar for shipped features, giving them a bit of time to crush bugs before they recommend developers use it for active projects. This version has quite a few big changes, especially in terms of audio and video media.

WebAssembly is now enabled by default for HTML5.

First, we’ll discuss platform support. As you would expect, iOS 11 and Xcode 9 are now supported, and A10 processors can use the same forward renderer that was added to UE4 for desktop VR, as seen in Robo Recall. That’s cool and all, but only for Apple. For the rest of us, WebAssembly (WASM) is now enabled by default for HTML5 projects. WASM is a compact bytecode, typically compiled via LLVM, that can be directly ingested by web browsers. In other words, you can program in C++ and have web browsers execute it, without transpiling to some form of JavaScript. (Speaking of which, ASM.js has been removed from UE4.) The current implementation is still single-threaded, but browser vendors are working on adding multi-threading to WASM.

As for the cool features: Epic is putting a lot of effort into its media framework. This allows for a wider variety of audio and video types (sample rates, sample depths, and so forth) as well as, apparently, more control over timing and playback, including through Blueprints visual scripting (although you could have always made your own Blueprint node anyway). If you’re testing out Unreal Engine 4.18, Epic Games asks that you pay extra attention to this category, reporting any bugs that you find.

Epic has also improved its lighting engine, particularly when using the Skylight lighting object. They also say that Volumetric Lightmaps are now enabled by default. These basically allow dynamic objects to move through a voxel-style grid of lighting values that are baked in the engine, which adds indirect lighting to them without a full run-time GI solution.

The last thing I’ll mention (although there’s a bunch of cool things, including updates to their audio engine and the ability to reference Actors in different levels) is their physics improvements. Their Physics Asset Editor has been reskinned, and the physics engine has been modified. For instance, APEX Destruction has been pulled out of the core engine into a plug-in, and the cloth simulation tools, in the skeletal mesh editor, are no longer experimental.

Unreal Engine 4.18 Preview can be downloaded from the Epic Launcher, but existing projects should be actively developed in 4.17 for a little while longer.

To start with the particular specification that will upset some people, the ASUS XG27VQ is a 1080p monitor; so if life starts at 1440p then feel free to move on. For those still reading, this FreeSync monitor supports refresh rates from 48 to 144Hz and covers 95% of the sRGB gamut. Techgage was impressed with the quality of the display, but when it came to the RGB lighting on the monitor they had some questions: the ROG logo projected from the bottom of the monitor only comes in red, while the glowing circle on the back of the display supports a full gamut of colours that almost no one will ever see. Pop over for the full review.

"Let's cut right to the chase. The Asus ROG Strix XG27VQ is a $350 gaming monitor, 27 inches in size, with a resolution of 1920 x 1080 and a refresh rate of 144 Hz. We're looking at a VA LCD panel here with FreeSync support, sporting an 1800R curvature."

You cannot really talk about the new Skylake-X parts from Intel without bringing up AMD's Threadripper, as that is the i9-7980XE's and i9-7960X's direct competition. From a financial standpoint, AMD is the winner, with a price tag either $700 or $1000 less than Intel's new flagship processors. As Ryan pointed out in his review, for those for whom expense is not a consideration it makes sense to choose Intel's new parts, as they are slightly faster and the Extreme Edition does offer two more cores. For those who look at performance per dollar, the obvious processor of choice is Threadripper; as Ars sums up in their review, AMD offers more PCIe lanes, better heat management, and performance that is extremely close to Intel's best.

"Ultimately, the i9-7960X raises the same question as the i9-7900X: Are you willing to pay for the best performing silicon on the market? Or is Threadripper, which offers most of the performance at a fraction of the price, good enough?"

NVIDIA seems to have scored a fairly large customer lately, as Google has just added Tesla P100 GPUs to their cloud infrastructure. Effective immediately, you can attach up to four of these GPUs to your rented servers on an hourly or monthly basis. According to their pricing calculator, each GPU adds $2.30 per hour to your server’s fee in Oregon and South Carolina, which isn’t a lot if you only use them for short periods of time.
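To get a rough sense of scale, here is a quick cost estimate built on the quoted $2.30/GPU/hour rate; it ignores the base instance price and any sustained-use discounts, so treat it as a back-of-the-envelope sketch:

```python
P100_HOURLY_USD = 2.30  # per attached GPU, Oregon / South Carolina rate quoted above

def gpu_cost(num_gpus: int, hours: float) -> float:
    """Estimated GPU portion of the bill, before discounts and instance fees."""
    return num_gpus * P100_HOURLY_USD * hours

# Four P100s for an eight-hour job:
print(round(gpu_cost(4, 8), 2))  # -> 73.6
```

For a short training run that is pocket change next to buying four P100 boards outright, which is exactly the pitch.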

If you need to use them long-term, though, Google has also announced “sustained use discounts” with this blog post, too.

While NVIDIA has technically launched a successor to the P100, the Volta-based V100, the Pascal-based part is still quite interesting. The main focus of the GPU design, GP100, was bringing FP64 performance up to its theoretical maximum of 1/2 FP32. It also has very high memory bandwidth, due to its HBM 2.0 stacks, which is often a huge bottleneck for GPU-based applications.

For NVIDIA, selling high-end GPUs is obviously good. The enterprise market is lucrative, and it validates their push into the really large die sizes. For Google, it gives a huge reason for interested parties to consider them over just defaulting to Amazon. AWS has GPU instances, but they’re currently limited to Kepler and Maxwell (and they offer FPGA-based acceleration, too). They can always catch up, but they haven’t yet, and that's good for Google.

If you haven't seen the lengths scammers will go to when modifying ATMs to steal your bank info, you should really take a look at these pictures and get in the habit of yanking on the ATM's fascia and keyboard before using them. Unfortunately, as Hack a Day posted last week, the bank is not the only place you have to be cautious; paying at the pump can also expose your details. In this case it is not a fake front you need to worry about. Instead, a small PIC microcontroller is attached to the serial connection between the card reader and the pump's computer, where it can read the unencrypted PIN and card data and store the results in an EEPROM for later collection. The device often has Bluetooth connectivity so that the scammers don't need to drive right up to the pump frequently.

There is an app that might be able to help stop this: an app on Google Play will detect Bluetooth devices broadcasting the standard identifiers the skimmers use and alert you. You can then tweet out the location of the compromised pump to alert others, and hopefully let the station owner and authorities know as well. The app could be improved with automatic reporting and other tools, so check it out and see if you can help improve it, as well as keeping your PIN and account safe when fuelling up.
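The detection trick is conceptually simple: many skimmers are built from off-the-shelf serial Bluetooth modules that are left broadcasting their factory-default names, so a scan can flag those names. A sketch of that filtering idea (the name list and function are illustrative, not the actual app's code; real skimmers can of course rename their modules):

```python
# Factory-default names of common off-the-shelf serial Bluetooth modules,
# often left unchanged in skimmer builds (illustrative list, not exhaustive).
SUSPECT_PREFIXES = ("HC-05", "HC-06", "RNBT", "LINVOR")

def flag_suspect_devices(scan_results):
    """Return advertised device names from a Bluetooth scan that match
    known module defaults, sorted for stable output."""
    return sorted(name for name in scan_results
                  if any(name.upper().startswith(p) for p in SUSPECT_PREFIXES))

# Names as they might appear in a scan near a compromised pump:
print(flag_suspect_devices(["JBL Flip", "HC-05", "linvor"]))  # -> ['HC-05', 'linvor']
```

The real app layers location reporting on top of this kind of match, which is where the suggested automatic-reporting improvements would slot in.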

"It would be nice to think that this work might draw attention to the shocking lack of security in gas pumps that facilitates the skimmers, disrupt the finances of a few villains, and even result in some of them getting a free ride in a police car. We can hope, anyway."

The latest version of CRYENGINE, 5.4, makes several notable improvements. Starting with the most interesting one for our readers: Vulkan has been added at the beta support level. It’s always good to have yet another engine jump in with this graphics API so developers can target it without doing the heavy lifting on their own, and without otherwise limiting their choices.

More interesting, at least from a developer standpoint, is that CRYENGINE is evolving into an entity-component framework. Amazon is doing the same with its Lumberyard fork, and Crytek has now announced that it is doing something similar on its side, too. The idea is that you place relatively blank objects in your level and build them up by adding components, which attach the data and logic that the object needs. This pattern proved popular with the success of Unity, and it can also be quite fast, depending on how the back end handles it.
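The entity-component idea can be sketched in a few lines: an entity starts as a mostly blank container, and data and behaviour arrive as attached components. This is a generic illustration of the pattern, not CRYENGINE's (or Unity's) actual API:

```python
class Entity:
    """A mostly blank object that gains data and behaviour via components."""
    def __init__(self, name):
        self.name = name
        self._components = {}

    def add(self, component):
        # One component instance per component type, keyed by its class.
        self._components[type(component)] = component
        return component

    def get(self, component_type):
        return self._components.get(component_type)

class Transform:
    """Position data; in a real engine this would include rotation/scale."""
    def __init__(self, x=0.0, y=0.0, z=0.0):
        self.x, self.y, self.z = x, y, z

class Health:
    def __init__(self, hp=100):
        self.hp = hp

# Build up a blank entity by attaching only the components it needs:
player = Entity("player")
player.add(Transform(1.0, 0.0, 2.0))
player.add(Health(150))
print(player.get(Health).hp)  # -> 150
```

Systems then iterate over entities holding a given component type, which is also what makes cache-friendly back ends possible when the engine stores components contiguously.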

I also want to highlight their integration of Allegorithmic Substance. With game engines switching to a PBR-based rendering model, tools can make it easier to texture 3D objects by stenciling on materials from a library. That way, you don’t need to think about how gold will behave, just that gold should be here and rusty iron should be over there. All of the major engines are doing it, and Crytek themselves have been using Substance, but now there’s an actual, supported workflow.

CRYENGINE is essentially free to use, including royalty-free. The business model currently revolves around subscriptions for webinars and priority support.

The day after Intel held its Technology and Manufacturing event in China, GLOBALFOUNDRIES kicks off its own version and has made a significant number of announcements concerning upcoming and next-generation process technologies. GF (GLOBALFOUNDRIES) had been the manufacturing arm of AMD until it was spun off as its own entity in 2009. Since then, GF has been open to providing fabless semiconductor firms a viable alternative to TSMC and other foundries. Its current 14nm process is licensed from Samsung, as GF had some significant issues getting its own version of that technology into production. GF looks to be moving past its FinFET process hiccups, and it is offering other, more unique process nodes that will serve upcoming mobile technologies very well.

The big announcement today was the existence of the 12LP process. This is a "12 nm" process that looks to be based off of their previous 14nm work. It is a highly optimized variant that offers around 15% better density and 10% better performance than current 14/16nm processes from competing firms. Some time back GF announced that it would be skipping the 10nm node and going directly to 7nm, but it seems that market forces have pushed them to further optimize 14nm and offer another step. Regular process improvement cadences are important to fabless partners as they lay out their roadmaps for future products.

12LP is also on track to be Automotive Grade 2 certified by Q4 2017, which opens it up to a variety of automotive applications. Self-driving cars are the hot topic these days, and it appears that GF will be working with multiple manufacturers, including Tesla. The process also has an RF component that can be utilized for those designs.

There had been some questions before this about what GF would do between 14nm and its expected 7nm offering. AMD had previously shown a roadmap with the first-generation Zen being offered on 14nm and a rather nebulous-sounding 14nm+ process. We now know that 12LP is the process AMD will leverage for the Zen and Vega refreshes next year. GF is opening up risk production in 1H 2018 for early adopters. This typically means that tuning is still going on with the process, and wafer agreements tend not to hinge on "per good die". Essentially, just as the wording suggests, the monetary risks of production fall more on the partner than on the foundry. I would expect the Zen/Vega refreshes to start rolling out mid-summer 2018 if all goes well with 12LP.

RF is getting a lot of attention these days. In the past I had talked quite a bit about FD-SOI and the slow adoption of that technology. In the 5G world that we are heading to, RF is becoming far more important. Currently GF has their 28FDX and 22FDX processes which utilize FD-SOI (Fully Depleted Silicon On Insulator). 22FDX is a dual purpose node that can handle both low-leakage ASICs as well as RF enabled products (think cell-phone modems). GF has also announced a new RF centric process node called 8SW SOI. This is a 300mm wafer based technology at Fab 10 located in East Fishkill, NY. This was once an IBM fab, but was eventually "given" to GF for a variety of reasons. The East Fishkill campus is also a center for testing and advanced process development.

22FDX is not limited to ASIC and RF production. GF is announcing that it is offering eMRAM (embedded magnetoresistive non-volatile memory) support. GF claims that it can survive a 260C solder reflow and retain data for more than 10 years at 125C. These products were developed through a partnership with Everspin Technologies. 1Gb DDR MRAM chips have been sampled, and 256Mb DDR MRAM chips are currently available through Everspin. This technology is not limited to standalone chips and can be integrated into SoC designs utilizing eFlash and SRAM interface options.

GLOBALFOUNDRIES has had a rocky start since it was spun off from AMD. Due to aggressive financing from multiple sources, it has acquired other pure-play foundries and garnered loyal partners like AMD who have kept revenue flowing. If GF can execute on these new technologies, it will be on far more even footing with TSMC and will attract new customers. GF has the fab space to handle a lot of wafers, and the processes mentioned above could be some of its first truly breakthrough products, differentiating it from the competition.