Overview

Since its introduction in early 2015, the modern iteration of the Dell XPS 13 has been one of the most influential computers in recent history. Emblematic of the return of desirable Windows notebooks to the premium market, the XPS 13 has done what only a few OEMs have managed: inspire knockoffs. The market is now filled with similar designs, including ultrathin bezels (some even copying the compromised webcam placement), at similar price points.

Even though it has been regarded as one of the best PC notebooks for its entire tenure, it was clear for a while that Dell needed to move its flagship notebook forward, and here it is: the redesigned XPS 13 9370 for 2018.

At a quick glance, the 2018 XPS 13 is quite similar to last year's outgoing 9360 model. Apart from the radical new Alpine White and Rose Gold color scheme of our particular review unit, you would be hard-pressed to pick it out in public. However, once you start to dig in, the changes become quite evident.

While the new XPS 13 maintains the same physical footprint as previous iterations, it loses a significant amount of thickness. Still retaining the wedge shape, though a much less exaggerated one, the XPS 13 9370 measures only 0.46" at its thickest point, compared to 0.6" for the previous design. While a fraction of an inch may not seem like much, it amounts to a 23% reduction in thickness, which is noticeable on a highly portable item like a notebook.

Introduction and Features

Introduction

Corsair is a well-respected name in the PC industry and they continue to offer a complete line of products for enthusiasts, gamers, and professionals alike. Today we are taking a detailed look at Corsair’s latest flagship power supply, the AX1600i Digital ATX power supply unit. This is the most technologically advanced power supply we have reviewed to date. Over time, we often grow numb to marketing terms like “most technologically advanced”, “state-of-the-art”, “ultra-stable”, “super-high efficiency”, etc., but in the case of the AX1600i Digital PSU, we have seen these claims come to life before our eyes.

1,600 Watts: 133.3 Amps on the +12V outputs!

The AX1600i Digital power supply is capable of delivering up to 1,600 watts of continuous DC power (133.3 Amps on the +12V rails) and is 80 Plus Titanium certified for super-high efficiency. If that's not impressive enough, the PSU can do it while operating on 115 VAC mains and at an ambient temperature of up to 50°C (internal case temperature). This beast was made for multiple power-hungry graphics cards and overclocked CPUs.
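As a quick sanity check, the headline current rating follows directly from dividing the rated wattage by the rail voltage. A minimal sketch of that arithmetic, using the spec values above:

```python
# Power = voltage x current, so a 1,600 W rating on the +12 V rails
# implies roughly 1600 / 12 = 133.3 A of combined current.
def rail_current(watts: float, volts: float) -> float:
    """Return the current (in amps) a power draw implies at a rail voltage."""
    return watts / volts

amps = rail_current(1600, 12)
print(f"{amps:.1f} A")  # → 133.3 A
```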

The AX1600i is a digital power supply, which provides two distinct advantages. First, it incorporates Digital Signal Processing (DSP) on both the primary and secondary sides, which allows the PSU to deliver extremely tight voltage regulation over a wide range of loads. And second, the AX1600i features the digital Corsair Link, which enables the PSU to be connected to the PC’s motherboard (via USB) for real-time monitoring (efficiency, voltage regulation, and power usage) and control (over-current protection and fan speed profiles).

Quiet operation with a semi-fanless mode (zero-rpm fan mode up to ~40% load) might not be at the top of your feature list when shopping for a 1,600 watt PSU, but the AX1600i is up to the challenge.

It's all fun and games until something something AI.

Microsoft announced the Windows Machine Learning (WinML) API about two weeks ago, but they did so in a sort-of abstract context. This week, alongside the 2018 Game Developers Conference, they are grounding it in a practical application: video games!

Specifically, the API provides the mechanisms for game developers to run inference on the target machine. The trained models it runs against would be in the Open Neural Network Exchange (ONNX) format, backed by Microsoft, Facebook, and Amazon. As the initial announcement suggests, it can be used for any application, not just games, but… you know. If you want to get a technology off the ground, and it requires a high-end GPU, then video game enthusiasts are good lead users. When run in a DirectX application, WinML kernels are queued on the DirectX 12 compute queue.

We’ve discussed the concept before. When you’re rendering a video game, simulating an accurate scenario isn’t your goal – the goal is to look like you are. The direct way of looking like you’re doing something is to do it. The problem is that some effects are too slow (or, sometimes, too complicated) to correctly simulate. In these cases, it might be viable to make a deep-learning AI hallucinate a convincing result, even though no actual simulation took place.

Fluid dynamics, global illumination, and up-scaling are three examples.

The previously mentioned SIGGRAPH demo of fluid simulation without fluid simulation… just a trained AI hallucinating a scene based on input parameters.

Another place where AI could be useful is… well… AI. One way of making game AI is to give it some set of data from the game environment, often including information that a player in its position could not know, and have it run against a branching logic tree. Deep learning, on the other hand, can train on billions of examples of good and bad play and produce results based on input parameters. While the two methods may not sound that different, assembling logic from an abstract good/bad dataset (rather than designing it by hand) somewhat abstracts away the potential for bad assumptions and programmer error. Of course, it shifts that potential for error into the training dataset, but that's a whole other discussion.

The third area that AI could be useful is when you’re creating the game itself.

There’s a lot of grunt and grind work when developing a video game. Licensing prefab solutions (or commissioning someone to do a one-off asset for you) helps ease this burden, but that gets expensive in terms of both time and money. If some of those assets could be created by giving parameters to a deep-learning AI, then those are assets that you would not need to make, allowing you to focus on other assets and how they all fit together.

These are three of the use cases that Microsoft is aiming WinML at.

Sure, these are smooth curves of large details, but the antialiasing pattern looks almost perfect.

For instance, Microsoft is pointing to an NVIDIA demo where they up-sample a photo of a car, once with bilinear filtering and once with a machine learning algorithm (although not WinML-based). The bilinear algorithm behaves exactly as someone who has used Photoshop would expect. The machine learning algorithm, however, was able to identify the objects that the image intended to represent, and it drew the edges that it thought made sense.
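For reference, here is a minimal sketch of what bilinear filtering actually does: blend the four nearest source pixels, weighted by distance. That blending is exactly why it smears hard edges into gray ramps instead of reconstructing them. This is stdlib-only Python, and the tiny grayscale "image" is illustrative, not from NVIDIA's demo:

```python
def bilinear_upsample(img, scale):
    """Up-sample a 2D grayscale image (list of rows) by blending the
    four nearest source pixels, weighted by distance."""
    h, w = len(img), len(img[0])
    out = []
    for oy in range(h * scale):
        # Map the output pixel back into source coordinates.
        sy = min(oy / scale, h - 1)
        y0, y1 = int(sy), min(int(sy) + 1, h - 1)
        fy = sy - y0
        row = []
        for ox in range(w * scale):
            sx = min(ox / scale, w - 1)
            x0, x1 = int(sx), min(int(sx) + 1, w - 1)
            fx = sx - x0
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

# A hard black/white edge gets smeared into an intermediate gray (127.5):
edge = [[0, 255], [0, 255]]
print(bilinear_upsample(edge, 2))
```

The machine learning approach, by contrast, replaces this fixed blend with learned structure, which is why it can draw a plausible edge where bilinear can only average.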

Like their DirectX Raytracing (DXR) announcement, Microsoft plans to have PIX support WinML “on Day 1”. As for partners? They are currently working with Unity Technologies to provide WinML support in Unity’s ML-Agents plug-in. That’s all the game industry partners they have announced at the moment, though. It’ll be interesting to see who jumps in and who doesn’t over the next couple of years.

O Rayly? Ya Rayly. No Ray!

Microsoft has just announced a raytracing extension to DirectX 12, called DirectX Raytracing (DXR), at the 2018 Game Developers Conference in San Francisco.

The goal is not to completely replace rasterization… at least not yet. The API will mostly be used for effects that require supplementary datasets, such as reflections, ambient occlusion, and refraction. Rasterization, the typical way that 3D geometry gets drawn on a 2D display, converts triangle coordinates into screen coordinates, and then a point-in-triangle test runs across every sample. This will likely occur once per AA sample (minus pixels that the triangle can't possibly cover – such as a pixel outside of the triangle's bounding box – but that's just optimization).

For rasterization, each triangle is laid on a 2D grid corresponding to the draw surface.
If any sample is in the triangle, the pixel shader is run.
This example shows the rotated grid MSAA case.
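The point-in-triangle test mentioned above is commonly implemented with edge functions: a sample is inside the triangle if it falls on the same side of all three edges. A minimal sketch, with illustrative coordinates:

```python
def edge(ax, ay, bx, by, px, py):
    """Signed-area test: which side of edge A->B the point P falls on."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def point_in_triangle(tri, px, py):
    """True if sample (px, py) is inside triangle tri = [(x, y), ...]."""
    (ax, ay), (bx, by), (cx, cy) = tri
    d1 = edge(ax, ay, bx, by, px, py)
    d2 = edge(bx, by, cx, cy, px, py)
    d3 = edge(cx, cy, ax, ay, px, py)
    # Inside if all three edge functions agree in sign (handles both windings).
    return (d1 >= 0 and d2 >= 0 and d3 >= 0) or (d1 <= 0 and d2 <= 0 and d3 <= 0)

tri = [(0, 0), (4, 0), (0, 4)]
print(point_in_triangle(tri, 1, 1))   # → True
print(point_in_triangle(tri, 5, 5))   # → False
```

A GPU evaluates something like this for every sample in the triangle's bounding box, which is why rasterization parallelizes so well.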

A program, called a pixel shader, is then run with some set of data that the GPU could gather for every valid pixel in the triangle. This set of data typically includes things like world coordinates, screen coordinates, texture coordinates, nearby vertices, and so forth. This lacks a lot of information, especially about things that are not visible to the camera. The application is free to provide other sources of data for the shader to crawl… but what?

Cubemaps are useful for reflections, but they don’t necessarily match the scene.

This is where DirectX Raytracing comes in. There are quite a few components to it, but it's basically a new pipeline that handles how rays are cast into the environment. After being queued, it starts out with a ray-generation stage, and then, depending on what happens to the ray in the scene, closest-hit, any-hit, and miss shaders run. Ray generation allows the developer to set up how the rays are cast, calling an HLSL intrinsic instruction, TraceRay (which is a clever way of invoking them, by the way). This function takes an origin and a direction, so you can choose to, for example, cast rays only in the direction of lights if your algorithm were to approximate partially occluded soft shadows from a non-point light. (There are better algorithms for that, but it's just the first example that came to mind.) The closest-hit, any-hit, and miss shaders execute at the point where the traced ray ends.

To connect this with current technology, imagine that ray-generation is like a vertex shader in rasterization, where it sets up the triangle to be rasterized, leading to pixel shaders being called.

Even more interesting: the closest-hit, any-hit, and miss shaders can call TraceRay themselves, which enables multi-bounce and other recursive algorithms (see: figure above). The obvious use case might be reflections, which is the headline of the GDC talk, but they want it to be as general as possible, aligning with the evolution of GPUs. Looking at NVIDIA's VXAO implementation, it also seems like a natural fit for a raytracing algorithm.
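To make the flow concrete, here is a toy CPU-side analogy of the pipeline: a generation step casts a ray, which dispatches to a hit routine (which may recursively trace again, as for reflections) or a miss routine. Everything here, the scene, the names, and the shading math, is illustrative; none of it is actual DXR API:

```python
import math

# Toy scene: spheres as (center, radius, reflectivity, base brightness).
SCENE = [((0.0, 0.0, 5.0), 1.0, 0.5, 0.6)]
MAX_DEPTH = 2

def intersect(origin, direction):
    """Return (t, sphere) for the nearest hit, or None (the 'miss' case).
    Direction is assumed to be normalized."""
    best = None
    for sphere in SCENE:
        center, radius, _, _ = sphere
        oc = tuple(o - c for o, c in zip(origin, center))
        b = 2 * sum(o * d for o, d in zip(oc, direction))
        c = sum(o * o for o in oc) - radius * radius
        disc = b * b - 4 * c
        if disc >= 0:
            t = (-b - math.sqrt(disc)) / 2
            if t > 1e-4 and (best is None or t < best[0]):
                best = (t, sphere)
    return best

def trace_ray(origin, direction, depth=0):
    """Analogue of TraceRay: dispatch to a hit or miss 'shader'."""
    hit = intersect(origin, direction)
    if hit is None or depth >= MAX_DEPTH:
        return 0.1  # miss shader: flat sky brightness
    t, (_, _, reflectivity, brightness) = hit
    point = tuple(o + t * d for o, d in zip(origin, direction))
    # Hit shader recursing for a (crudely mirrored) reflection bounce.
    bounce = trace_ray(point, (direction[0], direction[1], -direction[2]), depth + 1)
    return brightness + reflectivity * bounce

print(trace_ray((0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))
```

The key structural point is the recursion: the hit "shader" calls the trace function again, exactly the multi-bounce pattern DXR enables.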

Speaking of data structures, Microsoft also detailed what they call the acceleration structure. Each object is composed of two levels. The top level contains per-object metadata, like its transformation and whatever other data the developer wants to add to it. The bottom level contains the geometry. The briefing states, "essentially vertex and index buffers", so we asked for clarification. DXR requires that triangle geometry be specified as vertex positions in either 32-bit float3 or 16-bit float3 values. There is also a stride property, so developers can tweak data alignment and reuse their rasterization vertex buffer, as long as it's HLSL float3, either 16-bit or 32-bit.
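As a conceptual sketch, the two-level arrangement can be modeled as plain data: a top level of per-instance metadata (a transform plus whatever the developer adds) pointing at bottom-level geometry stored as vertex positions addressed with a stride. All type and field names below are illustrative, not the actual DXR structures:

```python
from dataclasses import dataclass, field

@dataclass
class BottomLevel:
    """Geometry: flat vertex positions, addressed with a stride."""
    vertices: list          # flat [x, y, z, ...] floats
    stride: int = 3         # floats per vertex; >3 lets you reuse a fatter
                            # rasterization vertex buffer (position first)

    def position(self, i):
        base = i * self.stride
        return tuple(self.vertices[base:base + 3])

@dataclass
class TopLevelInstance:
    """Per-object metadata: transform plus whatever the developer adds."""
    geometry: BottomLevel
    transform: tuple        # 3x4 row-major affine transform, flattened
    user_data: dict = field(default_factory=dict)

# One triangle, stored with a stride of 5 (position + 2 unused floats),
# showing how the stride skips past non-position attributes.
blas = BottomLevel(
    vertices=[0, 0, 0, 9, 9,   1, 0, 0, 9, 9,   0, 1, 0, 9, 9],
    stride=5,
)
tlas = [TopLevelInstance(blas, transform=(1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0),
                         user_data={"material": "chrome"})]
print(blas.position(1))  # → (1, 0, 0)
```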

As for the tools to develop this in…

Microsoft announced PIX back in January 2017. This is a debugging and performance analyzer for 64-bit, DirectX 12 applications. Microsoft will upgrade it to support DXR as soon as the API is released (specifically, “Day 1”). This includes the API calls, the raytracing pipeline resources, the acceleration structure, and so forth. As usual, you can expect Microsoft to support their APIs with quite decent – not perfect, but decent – documentation and tools. They do it well, and they want to make sure it’s available when the API is.

Example of DXR via EA's in-development SEED engine.

In short, raytracing is here, but it's not taking over rasterization. It doesn't need to. Microsoft is just giving game developers another, standardized mechanism to gather supplementary data for their games. Several game engines have already announced support for this technology, including the usual suspects of top-tier game technology:

Frostbite (EA/DICE)

SEED (EA)

3DMark (Futuremark)

Unreal Engine 4 (Epic Games)

Unity Engine (Unity Technologies)

They also said, “and several others we can’t disclose yet”, so this list is not even complete. But, yeah, if you have Frostbite, Unreal Engine, and Unity, you have a sizeable market as it is. There is always a question of how much each of these engines will support the technology. Currently, raytracing via DXR is not portable outside of DirectX 12, because it’s literally being announced today, and each of these engines intends to support more than just Windows 10 and Xbox.

Still, we finally have a standard for raytracing, which should drive vendors to optimize in a specific direction. From there, it's just a matter of someone taking the risk to actually use the technology for a cool work of art.

CalDigit Tuff Rugged External Drive

There are a myriad of options when it comes to portable external storage. But if you value durability just as much as portability, those options quickly dry up. Combining a cheap 2.5-inch hard drive with an AmazonBasics enclosure is often just fine for an external storage solution that sits in your climate-controlled office all day, but it's probably not the best choice for field use during your national park photography trip, your scuba diving expedition, or on-site construction management.

For situations like these where the elements become a factor and the chance of an accidental drop skyrockets, it's a good idea to invest in "ruggedized" equipment. Companies like Panasonic and Dell have long offered laptops custom-designed to withstand unusually harsh environments, and accessory makers have followed suit with ruggedized hard drives.

Today we're taking a look at one such ruggedized hard drive, the CalDigit Tuff. Released in 2017, the CalDigit Tuff is a 2.5-inch bus-powered external drive available in both HDD and SSD options. CalDigit loaned us the 2TB HDD model for testing.

Introduction, Specifications and Packaging

Introduction:

When we think of an M.2 SSD, we typically associate it with either a SATA 6Gb/s link or, more recently, a PCIe 3.0 x4 link. The M.2 physical interface was meant to accommodate future methods of connectivity, but it's easy to overlook the ability to step back to something like a PCIe 3.0 x2 link. Why take a seemingly backward step on the interface of an SSD? Several reasons, actually. Halving the number of lanes makes for a simpler SSD controller design, which lowers cost. Power savings are also a factor, as driving a twisted-pair lane at PCIe 3.0 speeds draws measurable current from the host and adds to the heat production of the SSD controller. We recently saw that a PCIe 3.0 x2 SSD can still turn in respectable performance despite its lower-bandwidth interface, but how far can the price come down when pairing that host link with some NAND flash?
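For a rough sense of what halving the link gives up, PCIe 3.0 runs at 8 GT/s per lane with 128b/130b line encoding, so raw bandwidth scales linearly with lane count. A back-of-the-envelope sketch (protocol overhead ignored, so real-world figures land a bit lower):

```python
def pcie3_bandwidth_gbps(lanes: int) -> float:
    """Approximate usable PCIe 3.0 bandwidth in GB/s:
    8 GT/s per lane, 128b/130b line encoding, bits converted to bytes."""
    gt_per_s = 8.0
    encoding = 128 / 130
    return lanes * gt_per_s * encoding / 8

print(f"x2: {pcie3_bandwidth_gbps(2):.2f} GB/s")  # ≈ 1.97 GB/s
print(f"x4: {pcie3_bandwidth_gbps(4):.2f} GB/s")  # ≈ 3.94 GB/s
```

Roughly 2 GB/s is still well beyond SATA's ~550 MB/s ceiling, which is why an x2 drive can undercut x4 parts on price without feeling slow.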

Enter the MyDigitalSSD SBX series. Short for Super Boot eXpress, the aim of these parts is to offer a reasonably performant PCIe NVMe SSD at something closer to SATA SSD pricing.

Specifications:

Physical: M.2 2280 (single sided)

Controller: Phison E8 (PS5008-E8)

Capacities: 128GB, 256GB, 512GB, 1TB

PCIe 3.0 x2, M.2 2280

Sequential: Up to 1.6/1.3 GB/s (R/W)

Random: 240K+ / 180K+ IOPS (R/W)

Weight: 8g

Power: <5W

Packaging:

The MyDigitalDiscount guys keep things extremely simple with their SSD packaging, which is exactly how it should be. It doesn't take much to package and protect an M.2 SSD, and this does the job just fine. They also include a screwdriver and a screw, just in case you run into a laptop that came without one installed.

A Snappy Budget Tablet

Huawei has been gaining steam. Even though they’re not yet a household name in the United States, they’ve been a major player in Eastern markets and have global ambitions. Today we’re looking at the MediaPad M3 Lite, a budget tablet with the kind of snappy performance and solid features that should make entry-level tablet buyers take notice.

The tablet arrives well packed inside a small but sturdy box. I’ve got to say, I love the copper-on-white look they’ve gone with and wish they’d applied it to the tablet itself, which is white and silver. Inside the box are the tablet, a charging brick with USB cable, a SIM eject tool, and a warranty card. It’s a bit sparse, but at this price point that’s perfectly fine.

The tablet looks remarkably similar to the Samsung Galaxy Tab 4, only missing the touch controls on either side of the Home button and shifting the branding to the upper left. This isn’t a bad thing by any means but the resemblance is definitely striking. One notable difference is that the Home button isn’t actually a button at all but a touch sensor that doubles as the fingerprint sensor.

The MediaPad M3 Lite comes in at 7.5mm, or just under 0.3”, thick. Virtually all of the name brand tablets I researched prior to this review are within 0.05” of each other, so Huawei’s offering is in line with what we would expect, if ever so slightly thinner.

This is all of course very scary. It was not all that long ago that we learned of the Spectre/Meltdown threats, which seem to be more dangerous to Intel than to its competitors. Spectre/Meltdown can be exploited by code that compromises a machine without elevated privileges. Parts of Spectre/Meltdown were fixed by firmware updates and OS changes, which had either no measurable effect on performance or incurred hits upwards of 20% to 30% in certain workloads with heavy I/O usage. Intel is planning a hardware fix for these vulnerabilities in new products later this year. Current products have firmware updates available, and Microsoft has already implemented a fix in software. Older CPUs and platforms (back to at least 4th Generation Core) have fixes as well, though they rolled out a bit more slowly. So a new set of exploits targeting the latest AMD processors is something that causes concern among users, CTOs, and investors alike.

CTS-Labs have detailed four major vulnerabilities, named them, and provided fun little logos for each: Ryzenfall, Fallout, Masterkey, and Chimera. The first three affect the CPU directly. Unlike Spectre/Meltdown, these vulnerabilities require elevated administrative privileges to run. They are secondary exploits that require either physical access to the machine or logging on with enhanced admin privileges. Chimera affects the chipset, which was designed by ASMedia, and is installed via a signed driver. In a secured system where the attacker has no administrative access, these exploits are no threat. If a system has been previously compromised or physically accessed (e.g., forcing a firmware update via USB flashback functionality), then these vulnerabilities can be taken advantage of.

In every CPU it makes, AMD includes a "Secure Processor": simply a licensed ARM Cortex-A5 core running AMD's internal secure OS/firmware, the same core family used in ARM's "TrustZone" security product. In theory, someone could compromise a server, install these exploits, and then remove the primary exploit so that on the surface the machine appears to be operating as usual. The attackers would still have low-level access to the machine in question, but it would be much harder to root them out.

When PC monitors made the mainstream transition to widescreen aspect ratios in the mid-2000s, many manufacturers opted for resolutions at a 16:10 ratio. My first widescreen displays were a pair of Dell monitors with a 1920x1200 resolution and, as time and technology marched forward, I moved to larger 2560x1600 monitors.

I grew to rely on and appreciate the extra vertical resolution that 16:10 displays offer, but as the production and development of "widescreen" PC monitors matured, it naturally began to merge with the television industry, which had long since settled on a 16:9 aspect ratio. This led to the introduction of PC displays with native resolutions of 1920x1080 and 2560x1440, keeping things simple for activities such as media playback but robbing consumers of pixels in terms of vertical resolution.

I was well-accustomed to my 16:10 monitors when the 16:9 aspect ratio took over the market, and while I initially thought that the 120 or 160 missing rows of pixels wouldn't be missed, I was unfortunately mistaken. Those seemingly insignificant pixels turned out to make a noticeable difference in terms of on-screen productivity real estate, and my 1080p and 1440p displays have always felt cramped as a result.

I was therefore sad to see that the relatively new ultrawide monitor market continued the trend of limited vertical resolutions. Most ultrawides feature a 21:9 aspect ratio with resolutions of 2560x1080 or 3440x1440. While this gives users extra resolution on the sides, it maintains the same limited height options of those ubiquitous 1080p and 1440p displays. The ultrawide form factor is fantastic for movies and games, but while some find them perfectly acceptable for productivity, I still felt cramped.

Thankfully, a new breed of ultrawide monitors is here to save the day. In the second half of 2017, display manufacturers such as Dell, Acer, and LG launched 38-inch ultrawide monitors with a 3840x1600 resolution. Just like how the early ultrawides "stretched" a 1080p or 1440p monitor, the 38-inch versions do the same for my beloved 2560x1600 displays.

The Acer XR382CQK

I've had the opportunity to test one of these new "taller" displays thanks to a review loan from Acer of the XR382CQK, a curved 37.5-inch behemoth. It shares the same glorious 3840x1600 resolution as others in its class, but it also offers some unique features, including a 75Hz refresh rate, USB-C input, and AMD FreeSync support.

Based on my time with the XR382CQK, my hopes for those extra 160 rows of resolution were fulfilled. The height of the display area felt great for tasks like video editing in Premiere and referencing multiple side-by-side documents and websites, and the gaming experience was just as satisfying. And with its 38-inch size, the display is quite usable at 100 percent scaling.

There's also an unexpected benefit for video content that I hadn't originally considered. I was so focused on regaining that missing vertical resolution that I initially failed to appreciate the jump in horizontal resolution from 3440px to 3840px. This is the same horizontal resolution as the consumer UHD standard, which means that 4K movies in a 21:9 or similar aspect ratio will be viewable in their full size with a 1:1 pixel ratio.
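For the curious, the arithmetic checks out. The panel's native ratio is 3840/1600 = 2.4:1, so a film at the common 2.39:1 "scope" ratio (an illustrative example) essentially fills the screen at a 1:1 pixel ratio:

```python
def frame_height(width_px: int, aspect: float) -> int:
    """Rows needed to show a frame of the given width at an aspect ratio."""
    return round(width_px / aspect)

# A 2.39:1 film at the panel's full 3840px width:
print(frame_height(3840, 2.39))   # → 1607 rows vs. 1600 available
# The panel itself is exactly 2.4:1:
print(frame_height(3840, 2.4))    # → 1600
```

In other words, 2.39:1 content overshoots the panel by only about 7 rows, a sliver compared to the thick letterbox bars such films get on a 16:9 display.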

More Ports!

One of the promises of moving to interfaces like USB 3.1 Gen 2 and Thunderbolt 3 on notebooks is the idea of the "one cable future." For the most part, I think we are starting to see some of those benefits. It's nice that with USB Power Delivery, users aren't tied into buying chargers directly from their notebook manufacturer or hunting down oddball third-party chargers with their exact barrel connector. Additionally, I find it to be a great feature when laptops have USB-C charging ports on opposing sides of the notebook, allowing greater flexibility to plug in a charger without putting additional strain on the cable.

For years, the end game for mobile versatility has been a powerful thin-and-light notebook that you can connect to a dock at home and use as a desktop PC. With more powerful notebook processors like Intel's quad-core 8th generation parts coming out, we are beginning to reach a point where we have the processing power; the next step is having a quality dock into which to plug these notebooks.

While USB-C can support DisplayPort, Power Delivery, and 10 Gbit/s transfer speeds in its highest-end configuration, this is still a bit lacking for power users. Thunderbolt 3, which offers the same display and power delivery capabilities but 40 Gbit/s of data transfer, is a more suitable option.

Today, we are taking a look at the CalDigit Thunderbolt Station 3 Plus, a Thunderbolt 3-enabled device that provides a plethora of connectivity options for your notebook.

Introduction, Specifications and Packaging

Introduction:

Intel has wanted 3D XPoint to go 'mainstream' for some time now. Their last big mainstream part, the X25-M, launched 10 years ago. It was available in relatively small capacities of 80GB and 160GB, but it delivered incredible performance at a time when most other early SSDs were mediocre at best. The X25-M brought NAND flash memory to the masses, and now, 10 years later, we have another vehicle that hopes to bring 3D XPoint to the masses - the Intel Optane SSD 800P:

Originally dubbed 'Brighton Beach', the 800P comes in at capacities smaller than its decade-old counterpart - only 58GB and 118GB. The 'odd' capacities are due to Intel playing it extra safe with additional ECC and some space to hold metadata related to wear leveling. Even though 3D XPoint media has great endurance that runs circles around NAND flash, it can still wear out, and therefore the media must still be managed similarly to NAND. 3D XPoint can be written in place, meaning far less juggling of data while writing, allowing for far greater performance consistency across the board. Consistency and low latency are the strongest traits of Optane, to the point where Intel was bold enough to launch an NVMe part with half of the typical PCIe 3.0 x4 link available in most modern SSDs. For Intel, the 800P is more about being nimble than having straight line speed. Those after higher throughputs will have to opt for the SSD 900P, a device that draws more power and requires a desktop form factor.

Specifications:

Capacities: 58GB, 118GB

PCIe 3.0 x2, M.2 2280

Sequential: Up to 1200/600 MB/s (R/W)

Random: 250K+ / 140K+ IOPS (R/W) (QD4)

Latency (average sequential): 6.75us / 18us (R/W) (TYP)

Power: 3.75W Active, 8mW L1.2 Sleep

Specs are essentially what we would expect from an Optane Memory type device. Capacities of 58GB and 118GB are welcome additions over the prior 16GB and 32GB Optane Memory parts, but the 120GB capacity point is still extremely cramped for those who would typically desire such a high performing / low latency device. We had 120GB SSDs back in 2009, after all, and nowadays we have 20GB Windows installs and 50GB game downloads.

Before moving on, I need to call out Intel on their latency specification here. To put it bluntly, sequential transfer latency is a crap spec. Nobody cares about the latency of a sequential transfer, especially for a product which touts its responsiveness - something governed by *random* access latency. (If the 6.75us figure were a random access time, it would translate to roughly 150,000 QD1 IOPS; the 800P is fast, but it's not *that* fast.) Most storage devices/media will internally 'read ahead' so that sequential latencies at the interface are as low as possible, increasing sequential throughput. Sequential latency is simply the inverse of throughput, meaning any SSD with higher sequential throughput than the 800P will beat it on this particular spec. To drive the point home, consider that an HDD's average sequential latency can beat the random read latency of a top-tier NVMe SSD like the 960 PRO. It's just a bad way to spec a storage device, and it won't do Intel any favors if competing products adopt the same method of rating latency in the future.
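The inverse relationship is trivial to verify: at queue depth 1, IOPS is simply one over the per-operation latency. A quick sketch of the arithmetic behind the 150K figure above:

```python
def qd1_iops(latency_us: float) -> float:
    """At queue depth 1, operations per second = 1 / per-op latency."""
    return 1.0 / (latency_us * 1e-6)

# Intel's 6.75us 'sequential latency' spec, if read as a random access
# time, would imply an implausible ~148K IOPS at QD1.
print(f"{qd1_iops(6.75):,.0f} IOPS")
```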

Packaging:

Our samples came in white/brown box packaging, but I did snag a couple of photos of what should be the retail box this past CES:

Don't Call It SPIR of the Moment

Vulkan 1.0 released a little over two years ago. The announcement, with conformant drivers, conformance tests, tools, and a patch for The Talos Principle, made for a successful launch for the Khronos Group. Of course, games weren’t magically three times faster or anything like that, but it got the API out there; it also redrew the line between game and graphics driver.

The Vulkan 1.1 announcement has three main parts. First, the specifications for both Vulkan 1.1 and SPIR-V 1.3 have been published. We will get into the details of those two standards later. Second, a suite of conformance tests has also been included with this release, which helps prevent an implementation bug from becoming an implied API that software relies upon ad infinitum. Third, several developer tools have been released, mostly by LunarG, into the open-source ecosystem.

The first new Vulkan 1.1 feature is Protected Content. This allows developers to restrict access to rendering resources (DRM). Moving on!

The second is Subgroup Operations. We mentioned that they were added to SPIR-V back in 2016 when Microsoft announced HLSL Shader Model 6.0, and some of the instructions were available as OpenGL extensions. They are now a part of the core Vulkan 1.1 specification. This allows the individual threads of a GPU in a warp or wavefront to work together on specific instructions.

Shader compilers can use these intrinsics to speed up operations such as:

Finding the min/max of a series of numbers

Shuffling and/or copying values between lanes of a group

Adding several numbers together

Multiplying several numbers together

Evaluating whether any, all, or which lanes of a group evaluate true

In other words, shader compilers can do more optimizations, which boosts the speed of several algorithms and should translate to higher performance when shader-limited. It also means that DirectX titles using Shader Model 6.0 should be able to compile into their Vulkan equivalents when using the latter API.
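To see why such intrinsics pay off, here is a CPU-side emulation of a subgroup reduction: cooperating lanes halve the problem each step, finishing in log2(n) combine steps instead of a serial n. The lane/shuffle model is a plain-Python analogy, not actual SPIR-V or GPU code:

```python
def subgroup_reduce(lanes, op):
    """Tree reduction across a power-of-two 'subgroup' of lane values,
    mimicking a shuffle-down reduction: log2(n) steps instead of n."""
    values = list(lanes)
    step = len(values) // 2
    while step:
        for i in range(step):
            # Each active lane combines with the lane 'step' above it,
            # like a subgroup shuffle followed by the combining op.
            values[i] = op(values[i], values[i + step])
        step //= 2
    return values[0]

wave = [3, 1, 4, 1, 5, 9, 2, 6]
print(subgroup_reduce(wave, min))                 # → 1
print(subgroup_reduce(wave, lambda a, b: a + b))  # → 31
```

On real hardware, each "step" is a single cross-lane instruction across the whole warp/wavefront, which is where the speedup comes from.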

This leads us to SPIR-V 1.3. (We’ll circle back to Vulkan later.) SPIR-V is the intermediate shader representation that Vulkan relies upon; it descends from the earlier SPIR, which was based on a subset of LLVM. SPIR-V is the code that is actually delivered to the driver and run on the GPU hardware – Vulkan just deals with how to get this code onto the silicon as efficiently as possible. In a video game, this would be whatever code the developer chose to represent lighting, animation, particle physics, and almost anything else done on the GPU.

The Khronos Group is promoting that SPIR-V shaders can be written in GLSL, OpenCL C, or even HLSL. In other words, developers will not need to rewrite their DirectX shaders to run on Vulkan. This isn’t particularly new – Unity has done this sort of HLSL-to-SPIR-V conversion ever since it added Vulkan support – but it’s good to see it promoted as an official workflow. OpenCL C support will also be useful for developers who want to move existing OpenCL code to Vulkan on platforms where the latter is available but the former rarely is, such as Android.

Speaking of which, that’s exactly what Google, Codeplay, and Adobe are doing. Adobe wrote a lot of OpenCL C code for their Creative Cloud applications, and they want to move it elsewhere. This ended up being a case study for an OpenCL to Vulkan run-time API translation layer and the Clspv OpenCL C to SPIR-V compiler. The latter is open source, and the former might become open source in the future.

Now back to Vulkan.

The other major change with this new version is the absorption of several extensions into the core, 1.1 specification.

The first is Multiview, which allows multiple projections to be rendered at the same time, as seen in the GTX 1080 launch. This can be used for rendering VR, stereoscopic 3D, cube maps, and curved displays without extra draw calls.

The second is device groups, which allows multiple GPUs to work together.

The third allows data to be shared between APIs and even whole applications. The Khronos Group specifically mentions that Steam VR SDK uses this.

The fourth is 16-bit data types. While most GPUs operate on 32-bit values, it might be beneficial to pack data into 16-bit values in memory for algorithms that are limited by bandwidth. It also helps Vulkan be used in non-graphics workloads.
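The bandwidth argument is easy to demonstrate in plain code: packing the same values as IEEE 754 half-precision floats moves half the bytes, at the cost of precision. A stdlib-only Python sketch (the values are illustrative; Python's struct module supports the half format via the 'e' code):

```python
import struct

values = [0.5, -1.25, 3.140625, 100.0]

# Pack the same data as 32-bit floats and as 16-bit halves.
full = struct.pack(f"{len(values)}f", *values)
half = struct.pack(f"{len(values)}e", *values)

print(len(full), len(half))  # 16 vs. 8 bytes: half the memory traffic

# Values that fit the half format's 11-bit mantissa survive the round trip;
# values that don't would come back rounded.
print(struct.unpack(f"{len(values)}e", half))
```

A bandwidth-bound shader reading half as many bytes per element can, in the best case, process roughly twice the elements per unit of memory traffic, which is the whole appeal.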

We already discussed HLSL support, but that’s an extension that’s now core.

The sixth extension is YCbCr support, which is required by several video codecs.

The last thing that I would like to mention is the Public Vulkan Ecosystem Forum. The Khronos Group has regularly mentioned that they want to get the open-source community more involved in reporting issues and collaborating on solutions. In this case, they are working on a forum where both members and non-members will collaborate, as well as the usual GitHub issues tab and so forth.

Introduction and First Impressions

Launching today, Corsair’s new Carbide Series 275R case is a budget-friendly option that still offers plenty of understated style with clean lines and the option of a tempered glass side panel. Corsair sent us a unit to check out, so we have a day-one review to share. How does it compete against recent cases we’ve looked at? Find out here!

The Carbide 275R is a compact mid-tower design that still accommodates standard ATX motherboards, large CPU coolers (up to 170 mm tall), and long graphics cards, and it includes a pair of Corsair’s SP120 fans for intake/exhaust. The price tag? $69.99 for the version with an acrylic side, and $79.99 for the version with a tempered glass side panel (as reviewed). Let’s dive in, beginning with a rundown of the basic specs.

Introduction and First Impressions

HyperX announced the Cloud Flight at CES, marking the first wireless headset offering from the gaming division of Kingston. HyperX already enjoys a reputation for quality sound and solid build quality, so we'll see how that translates into a wireless product, one which boasts some pretty incredible battery life (up to 30 hours without LED lighting).

The HyperX Cloud Flight features a closed-cup design that looks like a pair of studio headphones, and in addition to the 2.4 GHz wireless connection it offers the option of a 3.5 mm connection, making it compatible with anything that supports traditional wired audio. The lighting effects are understated and adjustable, and the detachable noise-cancelling mic is certified by TeamSpeak and Discord.

The big questions to answer in this review: how does it sound, how comfortable is it, and how well does the wireless mode work? Let's get started!

Introduction and Features

Introduction

Seasonic’s updated power supply lineup now includes the new PRIME Titanium Fanless model, which can deliver up to 600W. The SSR-600TL is one of the latest members of Seasonic’s popular PRIME series. This new flagship fanless model boasts 80 Plus Titanium certification for the highest level of efficiency.

Sea Sonic Electronics Co., Ltd has been designing and building PC power supplies since 1981 and they are one of the most highly respected manufacturers in the world. Not only do they market power supplies under their own Seasonic name but they are the OEM for numerous big name brands.

The Seasonic PRIME 600W Titanium Fanless power supply features Seasonic’s Micro Tolerance Load Regulation (MTLR) for ultra-stable voltage regulation, comes with fully modular cables and all-Japanese capacitors, and is backed by a 12-year warranty.

“The PRIME 600 Titanium Fanless utilizes fanless technology, which eliminates fan noise completely, ensuring truly quiet operation. The unit not only is rated to be 80PLUS Titanium efficient, but it also has the highest power output on the fanless power supply market at the moment. The power supply is ideal for any situation that demands silence from the equipment. The high quality components inside and the innovative circuit design result in clean and stable power output. The fully modular cables allow for better cable management in the computer case.

Seasonic employs the most efficient manufacturing methods, uses the best materials and works with the most reliable suppliers to produce reliable products. The PRIME Series layout, revolutionary manufacturing solutions and solid design attest to the highest level of ingenuity of Seasonic’s engineers and product developers. Demonstrating confidence in its power supplies, Seasonic stands out in the industry by offering the PRIME Series a generous 12-year manufacturer’s warranty period.”

Overview

Compared to manufacturers like Dell, HP, and ASUS, Razer is a relative newcomer to the notebook market, having shipped their first notebook models only in 2013. Starting first with gaming-focused designs like the Razer Blade and Blade Pro, Razer branched out to a more general notebook audience in 2016 with the launch of the Razer Blade Stealth.

Even though Razer is a primarily gamer-centric brand, the Razer Blade Stealth does not feature a discrete GPU for gaming. Instead, Razer advertises using their Razer Core V2 external Thunderbolt 3 enclosure to add your own full-size GPU, giving users the flexibility of a thin-and-light ultrabook, but with the ability to play games when docked.

Compared to my previous daily driver notebook, the "Space Gray" MacBook Pro, the Razer Blade Stealth shares a lot of industrial design similarities, even down to the "Gunmetal" colorway featured on our review unit. The aluminum unibody construction, large touchpad, hinge design, and more all clearly take inspiration from Apple's notebooks over the years. In fact, I've actually mistaken this notebook for a MacBook Pro in a few quick glances around the office in recent weeks.

As someone who is a fan of the industrial design of the MacBook Pro lineup, but not necessarily Apple's recent hardware choices, these design cues are a good thing. In some ways, the Razer Blade Stealth feels like what we might have seen had Apple continued with their previous Retina MacBook Pro designs instead of moving to the current Touch Bar-sporting iteration.

One of the things that surprised me most when researching the Razer Blade Stealth was just how well equipped the base model is. All models include 16 GB of RAM, a QHD+ touch screen, and at least 256 GB of PCIe NVMe flash storage. However, I would have liked to see a 1080p screen option, with or without touch. For such a small display size, I would rather gain the battery life advantages of the lower resolution.

Overshadowing the Previous Gen

To say that sim racing has had a banner year is perhaps an understatement. We have an amazingly robust ecosystem of titles and hardware that accentuate each other, providing outstanding experiences for those who wish to invest. This past year has seen titles such as Project CARS 2, Forza 7, DiRT 4, and F1 2017 released, as well as stalwarts such as iRacing getting major (and consistent) updates. We have also seen the rise of esports in racing titles, most recently with the F1 series and the WRC games. These have become flashy affairs with big sponsors and some significant prizes.

Racing has always had a niche on the PC, but titles such as Forza on Xbox and Gran Turismo on PlayStation have ruled the roost. The joy of PC racing is the huge array of accessories that can be used with the platform without having to pay expensive licensing fees to the console makers. We have really seen the rise of companies like Thrustmaster and Fanatec over the past decade, providing a lot of focus and support to the PC world.

This past year has seen a pretty impressive lineup of new products addressing racing on both PC and console. One of the first big releases is what I will be covering today. It has been a while since Thrustmaster released the TS-PC wheel set, but it has set itself up to be the product to beat in an increasingly competitive marketplace.

So Long, Battery Stress

Wireless peripherals can be stressful. Sure, we all love being free from the tether, but as time goes on, worries about responsiveness linger in the back of the mind like an unwelcome guest. Logitech is here with an impressive answer: the G613 Wireless Mechanical Gaming Keyboard and the G603 Lightspeed Wireless Gaming Mouse. This pair of peripherals promises an astounding 18 months of battery life with performance that’s competitive with their wire-bound cousins. Did they succeed?

Specifications

G613 Wireless Mechanical Gaming Keyboard

MSRP: $149.99

Key Switch: Romer-G

Durability: 70 million keypresses

Actuation distance: 0.06 in (1.5 mm)

Actuation force: 1.6 oz (45 g)

Total travel distance: 0.12 in (3.0 mm)

Keycaps: ABS, Pad Printed Legends

Battery Life: 18 months

Connectivity: Wireless, Bluetooth

Dimensions: 18.8 x 8.5 inches
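As a quick sanity check on the dual-unit figures in the spec sheet above, the imperial and metric values do agree once rounded. A small illustrative sketch of the conversions (my own arithmetic, not Logitech data):

```python
MM_PER_INCH = 25.4
GRAMS_PER_OUNCE = 28.3495

def mm_to_in(mm: float) -> float:
    """Convert millimetres to inches."""
    return mm / MM_PER_INCH

def g_to_oz(g: float) -> float:
    """Convert grams to ounces (avoirdupois)."""
    return g / GRAMS_PER_OUNCE

print(round(mm_to_in(1.5), 2))  # 0.06 -> actuation distance
print(round(mm_to_in(3.0), 2))  # 0.12 -> total travel distance
print(round(g_to_oz(45), 1))    # 1.6  -> actuation force
```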

G603 LIGHTSPEED Wireless Gaming Mouse

MSRP: $69.99 ($59.97 on Amazon as of this writing)

Sensor: HERO

Resolution: 200 – 12,000 dpi

Max. acceleration: tested at >40 G

Max. speed: tested at >400 IPS

USB data format: 16 bits/axis

USB report rate: HI mode: 1000 Hz (1ms), LO mode: 125 Hz (8 ms)

Bluetooth report rate: 88-133 Hz (7.5-11.25 ms)

Microprocessor: 32-bit ARM

Main buttons: 20 million clicks with precision mechanical button tensioning
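The report rates above translate directly into polling intervals (interval in ms = 1000 / rate in Hz). A quick sketch of that arithmetic, with the computed Bluetooth figures landing close to the spec sheet's rounded 7.5-11.25 ms range:

```python
def interval_ms(rate_hz: float) -> float:
    """Time between reports, in milliseconds, for a given report rate."""
    return 1000.0 / rate_hz

# USB modes from the G603 spec sheet.
print(interval_ms(1000))  # 1.0 ms (HI mode)
print(interval_ms(125))   # 8.0 ms (LO mode)

# Bluetooth: 88-133 Hz.
print(round(interval_ms(133), 2))  # 7.52 ms
print(round(interval_ms(88), 2))   # 11.36 ms
```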

Starting with the G613, we find a full-size keyboard that is both longer and wider than average. This is due to a set of six programmable macro keys (highlighted in blue, G1-G6, assignable in Logitech’s Gaming Software) along the left side. There is also a non-detachable wrist rest along the bottom made of hard plastic.

At 18.8 x 8.5 inches, the overall footprint isn’t much larger than a standard full-size keyboard with a wrist rest, but it’s definitely something to consider if you’re space constrained. I appreciate that Logitech included the wrist rest, but with more comfortable padded options out there, it would have been nice to be able to swap it out.

Recently, it's been painfully clear that GPUs excel at more than just graphics rendering. With the rise of cryptocurrency mining, OpenCL and CUDA performance are as important as ever.

Cryptocurrency mining certainly isn't the only application where having a powerful GPU can help system performance. We set out to see how much of an advantage the Radeon Vega 11 graphics in the Ryzen 5 2400G provided over the significantly less powerful UHD 630 graphics in the Intel i5-8400.

GPGPU Compute

Before we take a look at some real-world examples of where a powerful GPU can be utilized, let's look at the relative power of the Vega 11 graphics on the Ryzen 5 2400G compared to the UHD 630 graphics on the Intel i5-8400.

SiSoft Sandra is a suite of benchmarks covering a wide array of system hardware and functionality, including an extensive range of GPGPU tests, which we are looking at today.

Comparing the raw shader performance of the Ryzen 5 2400G and the Intel i5-8400 provides a clear snapshot of what we are dealing with. In every precision category, the Vega 11 graphics in the AMD part are significantly more powerful than Intel's UHD 630 graphics. This all adds up to a 175% increase in aggregate shader performance for the AMD part over the Intel part.
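One note on the percentage: a 175% increase means the aggregate Vega 11 score is 2.75x the UHD 630 score, not 1.75x. The scores in this sketch are made up purely for illustration; only the ratio matters:

```python
def percent_increase(new: float, old: float) -> float:
    """Percentage increase of `new` over `old`."""
    return (new - old) / old * 100.0

# Hypothetical aggregate shader scores, for illustration only.
uhd630 = 100.0
vega11 = 275.0

print(percent_increase(vega11, uhd630))  # 175.0 -> Vega 11 is 2.75x UHD 630
```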

Now that we've taken a look at the theoretical power of these GPUs, let's see how they perform in real-world applications.

Delivering on the Promise of Thunderbolt 3

Despite the greatly increased adoption of Thunderbolt 3 over the two previous Thunderbolt standards, the market still lacks devices that take advantage of the full 40 Gbps of bandwidth that Thunderbolt 3 offers.

External storage seems like a natural use for the PCI-E 3.0 x4 interface available with the Thunderbolt 3 standard, but storage devices that take advantage of it are few and far between. Most of the devices currently on the market are merely bridges from SATA M.2 drives to Thunderbolt 3, which are limited by the SATA 6 Gb/s interface.
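Back-of-the-envelope math makes the gap between those interfaces obvious. This sketch assumes SATA's 8b/10b and PCIe 3.0's 128b/130b line encodings and ignores Thunderbolt protocol overhead, so these are theoretical ceilings, not real-world figures:

```python
def sata3_max_mb_s() -> float:
    # SATA 6 Gb/s uses 8b/10b encoding: 10 line bits carry 8 data bits.
    return 6e9 * (8 / 10) / 8 / 1e6   # data bits/s -> bytes/s -> MB/s

def pcie3_x4_max_mb_s() -> float:
    # PCIe 3.0: 8 GT/s per lane, 128b/130b encoding, 4 lanes for TB3.
    return 8e9 * (128 / 130) * 4 / 8 / 1e6

print(sata3_max_mb_s())     # 600.0 MB/s ceiling for any SATA bridge
print(pcie3_x4_max_mb_s())  # ~3938 MB/s ceiling for PCIe 3.0 x4
```

Even against that ~3.9 GB/s theoretical ceiling, a claimed 2.3 GB/s read speed fits comfortably, while any SATA-based bridge tops out around 600 MB/s.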

However, this market gap seems poised to close. Today, we are taking a look at the TEKQ Rapide Thunderbolt 3 Portable SSD, which advertises sequential transfer speeds of up to 2.3 GB/s read and 1.3 GB/s write.