Launched earlier this year, AMD’s Ryzen 4000 “Renoir” APUs brought several new features and technologies to the table for AMD. Along with numerous changes to improve the APU’s power efficiency and reduce overall idle power usage, AMD also added an interesting TDP management feature that they call SmartShift. Designed for use in systems containing both an AMD APU and an AMD discrete GPU, SmartShift allows for the TDP budgets of the two processors to be shared and dynamically reallocated, depending on the needs of the workload.

As SmartShift is a platform-level feature that relies upon several aspects of a system, from processor choice to the layout of the cooling system, it is a feature that OEMs have to specifically plan for and build into their designs. So even if a laptop uses all AMD processors, that doesn’t guarantee the laptop has the means to support SmartShift. As a result, only a single laptop has been released so far with SmartShift support, and that’s Dell’s G5 15 SE gaming laptop.

Now, as it turns out, Dell’s laptop will be the only laptop released this year with SmartShift support.

In a comment posted on Twitter relating to an interview given to PCWorld’s The Full Nerd podcast, AMD’s Chief Architect of Gaming Solutions (and Dell alumnus) Frank Azor has confirmed that the G5 15 SE is the only laptop set to be released this year with SmartShift support. According to the gaming frontman, the roughly year-long development cycle for laptops, combined with SmartShift’s technical requirements, meant that vendors needed to plan for SmartShift support early on. Dell, in turn, ended up being the first OEM to jump on the technology, leading to them being the first laptop vendor to release a SmartShift-enabled laptop.

It's a brand new technology and to @dell credit they jumped on it first. I explained reasons why during my interview with @pcworld @Gordonung @BradChacos. No more SmartShift laptops are coming this year but the team is working hard on having more options ASAP for 2021.

Azor’s comment further goes on to confirm that AMD is working to get more SmartShift-enabled laptops on the market in 2021; there just won’t be any additional laptops this year. Which leaves us in an interesting situation where Dell, normally one of AMD's more elusive partners, has what's essentially a de facto exclusive on the tech for 2020.

Collaborations between hardware vendors and esports teams aren't a new thing, but they are becoming an increasingly common trend within the industry. GIGABYTE and G2 Esports have announced the Z490 Aorus Ultra G2 motherboard, a limited edition version of GIGABYTE's Z490 Aorus Ultra model with a few refinements. The new board updates the aesthetics to a mixture of red, silver, and black, and bundles an ESS Sabre ES9280CPRO USB Type-C DAC with the board.

Using the GIGABYTE Z490 Aorus Ultra motherboard as its foundation, the limited edition Z490 Aorus Ultra G2 opts for red aluminium heatsink fins which cool the 12-phase power delivery, with a red, silver, and black aesthetic throughout. It includes two areas of integrated RGB LED lighting: the slash marks on the rear panel cover, and the G2 eye built into the chipset heatsink. There are three full-length PCIe 3.0 slots which operate at x16, x8/x8, and x8/x8/x4, along with three PCIe 3.0 M.2 slots and six SATA ports.

Included in the feature set is an Intel I225 2.5 gigabit Ethernet controller, along with an Intel AX201 Wi-Fi 6 interface. Primarily targeting the mid-range of the Z490 market, the Z490 Aorus Ultra G2 has three USB 3.2 G2 Type-A, one USB 3.2 G2 20 Gbps Type-C, two USB 3.2 G1 Type-A, and four USB 2.0 ports on the rear panel. Also present is a DisplayPort 1.4 video output, as well as five 3.5 mm audio jacks and an S/PDIF optical output which are driven by a Realtek ALC1220-VB HD audio codec.

What separates the Z490 Aorus Ultra G2 from the non-G2 Esports branded model is the accessories bundle. Included is a G2 and GIGABYTE branded ESS Sabre ES9280CPRO USB DAC, which features a USB Type-C connector with a 3.5 mm output, allowing gamers and audiophiles to benefit from higher quality audio. The bundle also includes an engraved aluminium plaque signed by G2's CS:GO prodigy kennyS.

The GIGABYTE Z490 Aorus Ultra G2 is set to be available in the US, UK, Germany, France, Spain, Poland and Russia. However, GIGABYTE hasn't announced pricing, nor do any of the major vendors such as Amazon or Newegg have it listed.

The Biostar Racing Z490GTN Review: $200 for Comet Lake mini-ITX
by Dr. Ian Cutress & Gavin Bonshor

Small form factor boards are always a key talking point for any desktop market. Mini-ITX sales in any given generation usually break down to around 10% of the market, and because these boards end up in lower-cost systems, there tends to be a focus on the cheaper end of the spectrum, even when it comes to the Z series chipset, which is the one with all the bells and whistles. With Intel's new Comet Lake-S processors, ranging from Celeron all the way up to Core i9, and with the new socket for Comet Lake, there will be renewed demand from those looking to build a small form factor Intel system. One of the popular mini-ITX low-cost boards in each generation is from BIOSTAR, and today we're testing the Z490GTN.
Amazon Makes AMD Rome EC2 Instances Available
by Andrei Frumusanu

After many months of waiting, Amazon today has finally made available its new compute-oriented C5a AWS cloud instances, based on AMD's 2nd generation EPYC Rome processors with Zen 2 cores.

Amazon had announced way back in November their intentions to adopt AMD’s newest silicon designs. The new C5a instances scale up to 96 vCPUs (48 physical cores with SMT), and were advertised to clock up to 3.3GHz.

The instance offerings scale from 2 vCPUs with 4GB of RAM, up to 96 vCPUs, with varying bandwidth to elastic block storage and network bandwidth throughput.

The actual CPU being used here is an AMD EPYC 7R32, a custom SKU that’s seemingly only available to Amazon / cloud providers. Due to the nature of cloud instances, we don’t know the exact core count of the part, or whether this is a 64- or 48-core chip.

We quickly fired up an instance to check the CPU topology, and we’re seeing that the chip has two quadrants populated with the full 2 CCDs with four CCXs in total per quadrant, and two quadrants with seemingly only a single CCD populated, with only two CCXs per quadrant.
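On Linux guests, this sort of topology check comes down to grouping vCPUs by which cores report a shared L3 cache slice (each shared-L3 group is one CCX), as exposed under /sys/devices/system/cpu/cpu*/cache. A minimal sketch of that grouping logic, using illustrative sample data in place of a live sysfs read (the 8-vCPU layout below is hypothetical, not taken from a real C5a instance):

```python
# Group vCPUs into CCXs by their shared-L3 sibling lists, as the kernel
# reports them in /sys/devices/system/cpu/cpu*/cache/index3/shared_cpu_list.

def parse_cpu_list(s):
    """Expand a kernel cpu-list string like '0-3,48-51' into a sorted tuple."""
    cpus = []
    for part in s.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.extend(range(int(lo), int(hi) + 1))
        else:
            cpus.append(int(part))
    return tuple(sorted(cpus))

def group_ccxs(shared_l3_by_cpu):
    """Each distinct shared-L3 sibling set corresponds to one CCX."""
    return sorted({parse_cpu_list(v) for v in shared_l3_by_cpu.values()})

# Hypothetical 8-vCPU slice: two CCXs of 2 cores / 4 threads each.
sample = {
    0: "0-1,4-5", 1: "0-1,4-5", 4: "0-1,4-5", 5: "0-1,4-5",
    2: "2-3,6-7", 3: "2-3,6-7", 6: "2-3,6-7", 7: "2-3,6-7",
}
for ccx in group_ccxs(sample):
    print(ccx)
```

On a real instance, the dictionary would be populated by reading the sysfs files for each online CPU instead of the sample strings above.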

I quickly ran some tests, and the CPUs are idling at 1800MHz and boost up to 3300MHz maximum. All-core frequencies (96 threads) can be achieved at up to 3300MHz, but will throttle down to 3200MHz after a few minutes. Compute heavy workloads such as 456.hmmer will run at around 3100MHz all-core.

While it is certainly possible that this is a 64-core chip, Amazon’s offering of 96 vCPU metal instances argues against that. On the other hand, the 96 vCPU configuration’s 192GB of memory wouldn’t immediately match up with the memory channel count of the Rome chip unless the two lesser chip quadrants also each had one memory controller disabled. Either that, or there are simply two further CCDs that can’t be allocated – which makes sense for the virtualised instances, but would be weird for the metal instance offering.
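The memory-channel reasoning can be sanity-checked with quick arithmetic: Rome has 8 memory channels per socket, and standard DIMM capacities come in powers of two, so we can test which channel counts would divide 192 GB evenly into a power-of-two capacity per channel (the channel counts tried here are just for illustration):

```python
# Which channel counts split a 192 GB instance into a power-of-two
# capacity per channel, as standard DIMM sizes would require?

def per_channel_gb(total_gb, channels):
    gb, rem = divmod(total_gb, channels)
    return gb if rem == 0 else None

def is_power_of_two(n):
    return n is not None and n > 0 and (n & (n - 1)) == 0

for channels in (8, 6, 4):
    gb = per_channel_gb(192, channels)
    print(channels, gb, is_power_of_two(gb))
```

Only 6 channels yields a clean 32 GB per channel, which lines up with 8 channels minus one disabled controller in each of the two lesser quadrants.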

The new C5a Rome-based instances are available now in eight sizes in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Sydney), and Asia Pacific (Singapore) regions.

Over the last few years, we’ve seen a lot of new technologies in the mobile market trying to address the problem of gathering depth information with a camera system. There have been various solutions from different companies, ranging from IR dot-projectors and IR cameras (structured light) and stereoscopic camera systems, to the latest, more modern dedicated time-of-flight sensors. One big issue with these various implementations has been the fact that they all use quite exotic hardware that can significantly increase the bill of materials of a device, as well as influence its industrial design choices.

Airy3D is a smaller, newer company that has to date only been active on the software front, providing various imaging solutions to the market. The company is now ready to transition to a hybrid business model, describing itself as a hardware-enabled software company.

The company’s main claim to fame right now is the “DepthIQ” platform – a hardware-software solution that promises to bring high-quality depth sensing to single cameras at a much lower cost than any other alternative.

At the heart of Airy3D’s innovation is an added piece of hardware to existing sensors in the market, called a transmissive diffraction mask, or TDM. This TDM is an added transmissive layer manufactured on top of the sensor, shaped with a specific profile pattern, that is able to encode the phase and direction of light that is then captured by the sensor.

The TDM in essence creates a diffraction pattern (via the Talbot effect) in the resulting picture, one that differs based on the distance of a captured object. The neat thing that Airy3D is able to do here is employ software algorithms that decode this pattern, transforming the raw 2D image capture into both a 3D depth map and a 2D image with the diffraction pattern compensated out.

Airy3D’s role in the manufacturing chain of a DepthIQ-enabled camera module is designing the TDM grating, which it then licenses out to sensor manufacturers, who in turn integrate it into their sensors during production. In essence, the company would be partnering with any of the big sensor vendors, such as Sony Semiconductor, Samsung LSI, or OmniVision, in order to produce a complete solution.

I was curious whether the company had any limits in terms of the resolution the TDM can be manufactured at, since many of today’s camera sensors employ 0.8µm pixel pitches and we’re even starting to see 0.7µm sensors coming to market. The company sees no issues in scaling the TDM grating down to 0.5µm – so there’s still a ton of leeway for future sensor generations for years to come.

Adding a transmissive layer on top of the sensor naturally doesn’t come for free, and there is a loss in sharpness. The company quotes MTF sharpness reductions of around 3.5%, as well as a reduction in the sensitivity of the sensor due to the TDM, in the range of 3-5% across the spectral range.

Camera samples without, and with the TDM

The company shared with us some samples from a camera system using the same sensor, once without the TDM, and once with the TDM employed. Both pictures use the exact same exposure and ISO settings. In terms of sharpness, I wouldn’t say there are major, immediately noticeable differences, but we do see a darker image with the TDM employed, a result of the reduced quantum efficiency of the sensor.

The software processing is said to be comparatively lightweight compared to other depth-sensor solutions, and can be done on a CPU, GPU, DSP, or even a small FPGA.

The resulting depth discernment the solution is able to achieve from a single image capture is quite astounding – and there’s essentially no limit to the depth-map resolution that can be achieved, as it scales with the sensor resolution.

More complex depth sensing solutions can add anywhere from $15 to $30 to the BOM of a device. Airy3D expects this technology to see its biggest adoption in the low- and mid-range, as the high end is usually able to absorb the cost of other solutions, and is also unlikely to be willing to make any sacrifice in image quality on the main camera sensors. A cheaper device, for example, would be able to offer depth-sensing face unlocking with just a simple front camera sensor, which would represent notable cost savings.

Airy3D says they have customers lined up for the technology, and see a lot of potential for it in the future. It’s an extremely interesting way to achieve depth sensing given it’s a passive hardware solution that integrates into an existing camera sensor.

Following an unexpected uptick in RMA requests, Corsair has initiated a limited recall for some of the manufacturer’s SF series of small form factor PSUs. The SFX power supplies, which were most recently revised in 2018 with the introduction of the SF Platinum series, have quickly become some of the most popular SFX power supplies on the market due to their high quality as well as Corsair’s reputation for support. The latter of which, as it turns out, is getting put to the test, as the company has discovered an issue in a recent run of the PSUs.

As noted by the crew over at Tom’s Hardware, Corsair has posted a notice to its forums alerting users of the recall. According to the company, an investigation of RMA’d PSUs has turned up an issue with PSUs made in the last several months. When in an environment with both “high temperatures, and high humidity”, the PSUs can unexpectedly fail. The fault is apparently a highly variable one – Corsair’s notice reports units failing both out of the box and later on – but thankfully seems to be relatively benign overall, as the problem is on the AC side of the transformer, well before any power is fed to PC components.

Ultimately, while it’s not an issue that Corsair believes will impact every SF series PSU, it’s enough of an issue that the company has initiated a voluntary recall/replacement program for swapping out the affected PSUs. According to the company, the issue is only present in PSUs manufactured within the last several months – from October of 2019 to March of 2020 – with lot codes 194448xx to 201148xx. PSUs manufactured before that window are unaffected, as are PSUs manufactured afterwards. The lot codes can be found on the PSU’s packaging, or if you’re like a certain editor-in-chief who has thrown out the box, on the PSU’s sticker itself.
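Assuming the leading four digits of the lot code encode a two-digit year plus week number (which lines up with 1944 to 2011 spanning October 2019 to March 2020), a quick check of whether a given unit falls in the recalled range might look like this; the example lot codes are hypothetical:

```python
# Check whether an SF-series lot code falls in the recalled range
# 194448xx..201148xx, assuming the first four digits are YYWW
# (two-digit year + week), matching the stated Oct 2019 - Mar 2020 window.

def lot_yyww(lot_code):
    """Extract the year/week prefix of a lot code, e.g. '19514888' -> 1951."""
    return int(lot_code[:4])

def is_recalled(lot_code, first=1944, last=2011):
    return first <= lot_yyww(lot_code) <= last

print(is_recalled("19514888"))  # late 2019, inside the window
print(is_recalled("20154888"))  # week 15 of 2020, after the window
```

Of course, when in doubt, Corsair's own support channels are the authoritative check, not a four-digit comparison.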

Meanwhile, it’s worth noting that as part of the recall program, Corsair is offering to ship out replacement PSUs in advance. And of course, shipping costs in both directions are being picked up by Corsair.

All told, it's extremely rare to see a recall notice put out for power supplies, and particularly high-end units like Corsair's SF series. Which, if nothing else, makes it a notable event.

In one of the coolest collaborations of the year so far, MSI and EK Water Blocks have come together to create a new Z490 motherboard, the MSI MPG Z490 Carbon EK X. It is armed with an EKWB CPU monoblock with integrated RGB LEDs which also cools the power delivery, and it includes a 12+1 VRM design, two PCIe 3.0 x4 M.2 slots, Realtek 2.5 G Ethernet, and an Intel AX201 Wi-Fi 6 interface.

One of the most important aspects to consider when buying Intel's 10th generation Comet Lake processors is cooling. The Core i9-10900K is a hot CPU, even at stock, and with the performance gained via Intel's Thermal Velocity Boost, performance cooling is more important than it has ever been. There are only a handful of Z490 models that include water blocks, and they aren't cheap: the ASRock Z490 Aqua is $1100, while the GIGABYTE Z490 Aorus Xtreme WaterForce is $1299. This model, by contrast, is expected to retail for $400.

Enter the MSI MPG Z490 Carbon EK X, which looks to offer the benefits of custom water cooling on Intel's hot-running 10th generation desktop processors, but at a much more wallet-friendly cost. Its most significant selling point is the EKWB custom monoblock, which cools both the CPU and the 12+1 phase power delivery. The integrated RGB LEDs in the monoblock can be controlled with MSI's Mystic Light software. There are three full-length slots which can operate at x16/x0/x4 or x8/x8/x4, along with two PCIe 3.0 x1 slots.

The four memory slots can support up to DDR4-4800 with a maximum capacity of 128 GB, while two PCIe 3.0 x4 M.2 slots and six SATA ports make up the board's storage capability. The design is based on a carbon-inspired theme, with black carbon patterning across the rear panel cover and sections of the monoblock, as well as on the PCIe armor and chipset heatsink.

There is also a range of connectors, including one USB 3.2 G2 20 Gbps Type-C, four USB 3.2 G2 Type-A, and two USB 2.0 ports on the rear panel. For users looking to use Intel's UHD integrated graphics, MSI has included a DisplayPort 1.4 and HDMI pair of video outputs, while a PS/2 combo port caters to users with legacy peripherals. For networking, the Z490 Carbon EK X uses a Realtek RTL8125B 2.5 G Ethernet controller, while an Intel AX201 Wi-Fi 6 interface adds support for BT 5.1 devices. In regards to internal connectors, MSI includes a single USB 3.2 G2 Type-C header, one USB 3.2 G1 Type-A header which supports two ports, and two USB 2.0 headers which support up to four ports.

The MSI MPG Z490 Carbon EK X has an MSRP of $400, which is very reasonable for all of the board's features, including the custom EKWB monoblock which cools the processor and power delivery components. So far this is the third Z490 model to include a water block by default, and it costs less than a third of what GIGABYTE is charging for its flagship Z490 Aorus Xtreme WaterForce model. It's not as high-end, but the Z490 Carbon EK X offers a much more affordable entry point into custom water cooling, and it looks good too.

The Microsoft Surface Book 3 (15-Inch) Review: A Refreshing Dip Into Ice Lake
by Brett Howse

The PC industry has introduced some remarkably exciting designs over the last five years or so. Some of those designs, such as the thin-bezel laptop, have been adopted by almost all players in the industry. Microsoft has certainly been an innovator in the space as well, and the Surface Pro series has become the baseline for an entire category that did not exist in any volume before its launch. But almost certainly, one of the quirkiest designs was the Surface Book. First launched in 2015, and then succeeded by the Surface Book 2 in 2017, Microsoft is now releasing the third generation of its most powerful notebook computer.
ISCA 2020: Evolution of the Samsung Exynos CPU Microarchitecture
by Andrei Frumusanu

ISCA, the International Symposium on Computer Architecture, is an IEEE conference that we don’t tend to hear from all that often in public. The main reason for this is that most sessions and papers tend to be more academically oriented, and thus generally quite a bit further away from what we see in real products. This year, the conference has changed its format by adding an industry track of sessions, with presentations and papers from various companies in the industry, covering actual commercial products out in the wild.

Amongst the sessions, Samsung’s SARC (Samsung Austin R&D Centre) CPU development team presented a paper titled “Evolution of the Samsung Exynos CPU Architecture”, detailing the team’s efforts over its 8-year existence and laying out some key characteristics of its custom Arm CPU cores, ranging from the Exynos M1 to the most recent Exynos M6.

There are multiple reasons to need a PCIe switch: to expand PCIe connectivity to more devices than the CPU can support, to extend a PCIe fabric across multiple hosts, to provide failover support, or to increase device-to-device communication bandwidth in limited scenarios. With the advent of PCIe 4.0 processors and devices such as graphics cards, SSDs, and FPGAs, an upgrade from the range of PCIe 3.0 switches to PCIe 4.0 was needed. Microchip has recently announced its new Switchtec PAX line of PCIe switches, offering up to 100-lane variants supporting 52 devices and 174 GB/s of switching capability.

For readers not embedded in the enterprise world, you may remember in the past we have had a number of PCIe switches enter the consumer market. Initially we saw devices like the nF200 appear on high-end motherboards like the EVGA SR2, and then the PLX PEX switches on Z77 motherboards allowing 16-lane CPUs to offer 32 lanes of connectivity. Some vendors even went a bit overboard, offering dual switches and up to 22 SATA ports with an add-in LSI Raid controller with four-way SLI connectivity, all through a 16-lane CPU.

Recently, we haven’t seen much consumer use of these big PCIe switches. This is due to a couple of main factors – PLX was acquired by Avago in 2014, in a deal that valued the company at $300m, and seemingly overnight the cost of these switches increased three-fold according to my sources at the time, making them unpalatable for consumer use. The next generation of PEX 9000 switches were, by contrast to the PEX 8000 series we saw in the consumer space, feature-laden with switch-to-switch fabric connectivity and failover support. Avago then purchased Broadcom and renamed itself Broadcom, but the situation is still the same, with the switches focused on the server space, leaving the market ripe for competition. Enter Microchip.

Microchip has been on my radar for a while, and I met with them at Supercomputing 2019. At the time, when asked about PCIe 4.0 switches, I was told ‘soon’. The new Switchtec PAX switches are that line.

There will be six products, varying from 28-lane to 100-lane support, with bifurcation down to x1. These switches operate in an all-to-all capacity, meaning any lane can be configured as upstream or downstream. Thus if a customer wanted a 1-to-99 conversion, despite the potential bottleneck, it would be possible. The new switches support hot-plug per-port, operate low-power SerDes connections, support OCuLink, and can be used with passive, managed, or optical cabling.
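As a rough sanity check on those figures: PCIe 4.0 runs at 16 GT/s per lane with 128b/130b encoding, so raw lane bandwidth works out as below. This is a back-of-the-envelope sketch; real switch throughput is lower after protocol overhead:

```python
# Back-of-the-envelope PCIe 4.0 lane bandwidth:
# 16 GT/s per lane, 128b/130b encoding.
GT_PER_S = 16
ENCODING = 128 / 130  # payload bits per transferred bit

def lane_bandwidth_gbps(lanes=1):
    """Raw unidirectional bandwidth in GB/s for a given lane count."""
    return GT_PER_S * ENCODING / 8 * lanes

print(round(lane_bandwidth_gbps(1), 2))    # ~1.97 GB/s per lane
print(round(lane_bandwidth_gbps(16), 1))   # ~31.5 GB/s for an x16 link
print(round(lane_bandwidth_gbps(100), 0))  # ~197 GB/s across all 100 lanes
```

This also illustrates why the 1-to-99 configuration is bottlenecked: however many downstream lanes exist, a single x1 upstream link caps traffic to the host at roughly 2 GB/s each way.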

Customers for these switches will have access to real-time diagnostics for signaling, as well as fabric management software for the end-point systems. The all-to-all connectivity supports partial chip failure and bypass, along with partial reset features. This makes building a fabric across multiple hosts and devices fairly straightforward, with a variety of topologies supported.

When asked, pricing was not given, which means it will depend on the customer and volume. We can imagine a vendor like Dell or Supermicro, if they haven’t got fixed contracts for Broadcom switches, looking into these solutions for distributed implementations or storage devices. Some of the second/third tier server vendors I spoke to at Computex were only just deploying PEX 9000-series switches, so deployment of Gen 4 switches might be more of a 2021/2022 target.

Those interested in Microchip are advised to contact their local representative.

Users looking for a PCIe switch enabled consumer motherboard should look at Supermicro’s new Z490 motherboards. Both are using PEX 8747 chips to expand the PCIe offering on Intel’s Comet Lake from 16 lanes to 32 lanes.

ASML has announced a significant development in its multi-beam inspection tool line. The new eScan1000 moves from a single-beam scanning process to a nine-beam scanning process, which ASML claims increases the throughput of such tools by up to 600% for in-line defect inspection applications. The tool is suitable for all major process nodes in current production, as well as 5nm and beyond.

All of the world’s major superpowers have a vested interest in building their own custom silicon processors. Doing so allows a nation to wean itself off of US-based processors, guarantee there are no backdoors it didn’t put there, and, if needed, add its own. As we have seen with China, custom chip designs, x86-based joint ventures, or Arm derivatives seem to be the order of the day. So in comes Russia, with its custom Elbrus VLIW design that seems to have its roots in SPARC.

One of the interesting elements of this profession is seeing how processor companies have changed their attitudes towards marketing their products over the past couple of decades. After years of bland boxes and sub-standard coolers, there have been recent efforts to produce something eye-catching to users casually browsing shelves, especially in an effort to draw attention to the high-end products. While the packaging ultimately has little-to-no value after unboxing the product, beyond perhaps serving as the background in a gaming stream, it does mark a change in attitudes, especially when product packaging can accelerate the hype around a product.

One of Intel's recent product packaging efforts was the dodecahedral packaging for its halo desktop product, the Core i9-9900K. While AMD has focused its special packaging on high-end desktop, Intel, it seems, prefers to aim it at the mainstream desktop product line. This packaging is a transparent blue dodecahedron, with the CPU at the center. No cooler is bundled, and the packaging is large for the processor, but it certainly made the chip stand out.

Intel launched Comet Lake, its 10th generation Core product, a couple of weeks ago, with the flagship Core i9-10900K sitting at the top of the stack. As the Core i9-9900K no longer sits in that top spot, Intel has decided to discontinue the versions of the 9900K in its special packaging. Specifically, retailers have until June 26th to order these processor versions, and the last shipment will be on July 10th. This is a very quick discontinuance procedure; however, the non-special retail version will still be available.

At some point in this market, we are going to get a product with iconic packaging. One could argue over whether the packaging makes the product interesting at all – given how users tend to focus on a specific processor for their build, is spending potentially slightly more for the fancy box ever justified? You may think that this news post is somewhat arbitrary, talking about packaging discontinuance, but it perhaps raises a bigger question in the processor market – does packaging matter? Or the contents – a message from the CEO in a special anniversary edition, or a signature on the heatspreader?

Not only did Intel unveil its Z490 motherboard chipset for its 10th generation desktop processors, but it also announced its more budget-friendly chipsets. Biostar has announced two new micro-ATX H410 models, the H410MHG and the H410MH, aimed at the low-cost and high-volume market. With simple designs and budget-friendly controller sets, both models include Realtek Gigabit networking, Realtek ALC887 HD audio codecs, four SATA ports, and a single PCIe 3.0 x4 M.2 slot.

Biostar H410MHG micro-ATX motherboard

Starting with the higher-specification of the two new H410 models from Biostar, the H410MHG includes TPM support, which adds hardware-based security functionality designed for cryptographic operations. In regards to PCIe, it offers a single full-length PCIe 3.0 x16 slot, two PCIe 3.0 x1 slots, and a single PCI slot. There are four straight-angled SATA ports below the 24-pin ATX motherboard power input, while a single 8-pin 12 V ATX input provides power to the CPU. On the rear panel are two USB 3.2 G1 Type-A and four USB 2.0 ports, with HDMI, DVI-D, and VGA ports allowing users to use Intel's integrated UHD graphics. A COM port, a PS/2 mouse port, and a PS/2 keyboard port are also present for users looking to use legacy peripherals. For cooling, the H410MHG has three 4-pin fan headers: one for a CPU fan, and two for chassis fans.

Biostar H410MH micro-ATX motherboard

The Biostar H410MH has a single full-length PCIe 3.0 x16 slot and two PCIe 3.0 x1 slots, with four straight-angled SATA ports, and offers a slightly lighter rear IO panel. It includes separate PS/2 keyboard and mouse ports, two USB 3.2 G1 Type-A and four USB 2.0 ports, and two video outputs consisting of HDMI and VGA. For cooling, it has just two 4-pin headers, with one dedicated to a CPU fan and the other to a chassis fan.

Biostar H410MHG (top) and H410MH (bottom) rear panels

Shared across both models are the memory and networking support, with a Realtek RTL8111H Gigabit Ethernet controller, and two memory slots supporting up to 64 GB of DDR4-2933 memory. The H410MHG and H410MH also feature a Realtek ALC887 HD audio codec which provides three 3.5 mm audio jacks on the rear panel, as well as a single PCIe 3.0 x4 M.2 slot with support for both NVMe and SATA drives.

Although Biostar hasn't announced pricing or availability for the H410MHG and H410MH, it's likely that they won't be too expensive. Designed for cost-focused users looking for a foundation to leverage the power of Intel's 10th generation processors, both of these micro-ATX H410 models include access to the Biostar VIP Care portal for additional support from Biostar.

Samsung this morning is taking the wraps off of the long-awaited Intel (x86) version of the company’s popular ultraportable, always-connected laptop, the Galaxy Book S. First teased by Samsung late last year, the Intel-based version of the laptop is set to join their existing Qualcomm 8cx-based model, swapping out the Arm SoC for Intel’s latest foray into ultra-mobile processors, Lakefield. The Intel Galaxy Book S will be the first device to ship with Lakefield, putting the new processor to the test in seeing if Intel can match the kind of all-day battery life that the existing Galaxy Book S is known for.

As a quick refresher, for the last couple of years now we’ve been reporting on Intel’s forthcoming die stacked CPU, codenamed Lakefield. Officially called the “Intel Core processor with Intel Hybrid Technology”, Lakefield is a rather audacious chip for a company that’s been criticized for moving too slowly, as it integrates a number of first-time technologies for Intel as they look to deliver a cutting-edge x86-based chip that’s properly suited for the ultra-mobile market.

In terms of architecture, Lakefield is the first hybrid CPU from Intel, combining both the company’s Atom (Tremont) and Core (Sunny Cove) CPU cores on a single die. This sort of big-little strategy has been a big part of Arm’s success in mobile devices, offering separate high-performance and low-power cores to maximize efficiency without giving up on higher performance, and incorporating it into their own designs marks a significant departure from Intel’s existing Atom-only offerings. At the same time, Lakefield is designed to be vastly more efficient than those earlier Atoms, with Intel aiming to get idle power consumption down to a couple of milliwatts – necessary to allow for devices that can be always-connected/always-on, and to break into the ultra-mobile market largely dominated by Qualcomm.

The other novel aspect of Lakefield is its construction. The chip is based on Intel’s Foveros technology, which is the company’s take on 3D die stacking using TSVs. In the case of Lakefield, Intel has split the chip into essentially 3 levels, stacked upon each other: a 14nm-built base I/O die that has features like USB and audio, a 10nm-built compute chiplet that has the CPU and GPU cores, and finally a DRAM layer connected using more traditional package-on-package technology. This strategy not only lets Intel split up the manufacturing of the chip across multiple process nodes – using cutting-edge 10nm for the compute die while using a highly-tuned 14nm node for the base die – but it also minimizes the overall footprint of the chip. Lakefield has a footprint of just 12mm x 12mm (and 1mm tall), making the package smaller than a dime.

This chip, in turn, will be making its debut in Samsung’s Galaxy Book S family. Samsung has been shipping a Qualcomm 8cx version of this laptop since earlier this year, using Qualcomm’s ultra-mobile chip to drive the always-connected laptop. Qualcomm has been a major proponent of always-on laptops, leveraging their years of expertise in modems and low-power operation in general from smartphones to apply it to laptops, and with a rated battery life of up to 25 hours, the current Qualcomm-based version of the Galaxy Book S has certainly lived up to those goals.
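That 25-hour rating on a 42 Wh battery implies a remarkably low average system draw, which is worth working out explicitly; the figures come from the article and Intel's 2 mW idle claim, while the arithmetic itself is just illustrative:

```python
# Average system power implied by a 42 Wh battery lasting 25 hours,
# and how little the claimed 2 mW Lakefield idle contributes to it.
BATTERY_WH = 42
RATED_HOURS = 25

avg_draw_w = BATTERY_WH / RATED_HOURS
print(round(avg_draw_w, 2))  # ~1.68 W average for the whole system

soc_idle_w = 0.002  # Intel's claimed 2 mW SoC idle
print(round(soc_idle_w / avg_draw_w * 100, 2))  # SoC idle share of the budget, %
```

In other words, the whole laptop – display, modem, RAM, and SoC – has to average well under 2 W, which is why shaving SoC idle power down to milliwatts matters so much for this class of device.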

So with Lakefield intended to go head-to-head with the likes of Qualcomm’s 8cx, there’s no better place for Intel to start than the Galaxy Book S. But this also means that they’ll have a tough fight right out of the gate, as they’ll be going up against one of the better Arm-powered laptops on the market.

Meanwhile, taking a look at the specifications, the Intel-based version of the Galaxy Book S is the spitting image of the Qualcomm version. Samsung appears to be using the same chassis here, so the 13.3-inch laptop retains the same dimensions as the current model, as well as the same two USB-C ports. The battery capacities are identical as well at 42 Wh, and I expect that the Intel model is also using the same 1080p LCD. Curiously though, the Intel model does end up being ever so slightly lighter than the Qualcomm model – Samsung puts the former at 950g, 10g lighter than the 960g Qualcomm model.

As for RAM and storage, because RAM is part of the Lakefield package, Samsung is only offering a single 8GB configuration here. Unfortunately Samsung’s spec sheet doesn’t list memory frequencies, so we’ll have to wait and see what Intel has Lakefield’s memory clocked at. Meanwhile Samsung provides the storage, using 256GB or 512GB of their eUFS flash memory. To my knowledge this is the first x86 laptop to ship with eUFS, reflecting the mobile roots of the devices Intel is targeting with Lakefield. Further storage expansion is then available through a microSD card slot.

One specification that’s notably missing from Samsung’s announcement today is the expected battery life of the Intel-based model, and this is perhaps going to be the most interesting aspect of Lakefield. Intel has worked very hard to get their idle power consumption down to match what Qualcomm has achieved with the 8cx, with the company claiming that Lakefield draws just 2mW at idle. At the same time, however, Lakefield lacks an integrated modem, and as a result Samsung is relying on an unknown Cat 16 external modem here. So in the battle of the Galaxy Books, Qualcomm will have the advantage when it comes to chip count.

As for other wireless connectivity, the new Intel model will ship with a 2x2 Wi-Fi 6 radio, giving it an edge there over the Qualcomm model with Wi-Fi 5. And both models ship with Bluetooth 5.0 support.

Rounding out the package, the Intel-based Galaxy Book S has a 720p webcam, a built-in microphone as well as Dolby Atmos-badged stereo speakers co-designed with AKG. The laptop also has a Windows Hello-compatible fingerprint reader.

Wrapping things up, with Samsung finally unveiling the full specifications of the Intel-based Galaxy Book S, this is a strong sign that the laptop and Intel’s Lakefield processors should be available soon. While Intel has yet to formally launch the processors, Lakefield has been expected as a mid-2020 part for some time now, so its launch should be imminent. Samsung seems to be leaving those precise details to Intel; at least for the moment, the company is not announcing specific availability dates or pricing. Though if it’s at parity with the Qualcomm-based model, then expect the Intel-based Galaxy Book S to start at $999.

]]>https://www.anandtech.com/show/15819/samsung-unveils-intel-galaxy-book-s-intels-lakefield-inbound
Fri, 29 May 2020 11:30:00 EDTtag:www.anandtech.com,15819:newsBest CPUs for Gaming: May/June 2020Dr. Ian CutressSometimes choosing a CPU is hard. So we've got you covered. In our CPU Guides, we give you our pick of some of the best processors available, supplying data from our reviews. Our Best CPUs for Gaming guide targets most of the common system-build price points that typically pair a beefy graphics card with a capable processor, with the best models being suitable for streaming and encoding on the fly.
]]>https://www.anandtech.com/show/9793/best-cpus
Fri, 29 May 2020 11:00:00 EDTtag:www.anandtech.com,9793:newsBest Android Phones: May 2020Andrei Frumusanu

We’ve nearly completed the spring release cycle of devices, which means that most vendors have now released their flagship devices for 2020, introducing brand new phones with the newest technologies to the market. The new device generation significantly mixes up the competitive landscape, and it looks like 2020’s flagship phones are all about high refresh-rate screens as well as new, complex camera setups.

Samsung was amongst the first to release its products in 2020, with the Galaxy S20 series showcasing the company’s new camera generation and trying to one-up the ecosystem with the super high-end Galaxy S20 Ultra. Over the weeks that followed, we saw outstandingly good devices from Xiaomi, Huawei, LG, and in particular OnePlus. The new OnePlus 8 Pro really changed things up for the company, as the new device can no longer be called a “flagship-killer”, but rather an outright flagship – with no compromises in features, but also with a higher price tag.

Yesterday Arm released the new Cortex-A78 and Cortex-X1 CPUs and the new Mali-G78 GPU. Alongside these “key” IPs from the company, we also saw the reveal of the Ethos-N78 NPU, Arm’s second-generation machine learning design.

Over the last few years we’ve seen an explosion of machine learning accelerators in the industry, with a veritable wild west of different IP solutions out there. On the mobile front particularly, there’s been a huge number of custom solutions developed in-house by SoC vendors, including designs from Qualcomm, HiSilicon, MediaTek, and Samsung LSI. For vendors who don’t have the resources to design their own IP, there’s the option of licensing one from an IP vendor such as Arm.

Arm’s “Ethos” machine learning IP is aimed at client-side inferencing workloads. Originally described as “Project Trillium”, the first implementation saw life in the form of the Ethos-N77. It’s been a year since the release of that first generation, and Arm has been working hard on the next iteration of the architecture. Today, we’re covering the “Scylla” architecture that’s being used in the new Ethos-N78.

]]>https://www.anandtech.com/show/15817/arm-announces-ethosn78-npu-bigger-and-more-efficient
Wed, 27 May 2020 10:00:00 EDTtag:www.anandtech.com,15817:newsThe ASRock Z490 Taichi Motherboard Review: Punching LGA1200 Into LifeGavin BonshorIn our first Intel Z490 motherboard review, the ASRock Z490 Taichi takes center stage. With its recognizable Taichi clockwork inspired design, a 12+2 power delivery, three PCIe 3.0 x4 M.2 slots, and a Realtek 2.5 gigabit Ethernet port on the rear panel, it looks to leave its stamp on the Z490 market. The Taichi remains one of ASRock's perennial premium mid-range models.
]]>https://www.anandtech.com/show/15781/the-asrock-z490-taichi-motherboard-review
Wed, 27 May 2020 09:00:00 EDTtag:www.anandtech.com,15781:newsArm's New Cortex-A78 and Cortex-X1 Microarchitectures: An Efficiency and Performance DivergenceAndrei Frumusanu2019 was a great year for Arm. On the mobile side of things one could say it was business as usual, as the company continued to see successes with its Cortex cores, particularly the new Cortex-A77 which we’ve now seen employed in flagship chipsets such as the Snapdragon 865. The bigger news for the company over the past year however hasn’t been in the mobile space, but rather in the server space, where one can today rent Neoverse-N1 CPUs such as Amazon’s impressive Graviton2 chip, with more vendors such as Ampere expected to release their server products soon.

While the Arm server space is truly taking off as we speak, aiming to compete against AMD and Intel, Arm hasn’t reached the pinnacle of the mobile market – at least, not yet. Arm’s mobile Cortex cores have lived in the shadow of Apple’s custom CPU microarchitectures over the past several years, as Apple has seemingly always managed to beat Cortex designs by significant amounts. While there are certainly technical reasons for the differences, a lot of it also came down to business rationale on Arm’s side.

Today for Arm’s 2020 TechDay announcements, the company is not just releasing a single new CPU microarchitecture, but two. The long-expected Cortex-A78 is indeed finally making an appearance, but Arm is also introducing its new Cortex-X1 CPU as the company’s new flagship performance design. The move is not only surprising, but marks an extremely important divergence in Arm’s business model and design methodology, finally addressing some of the company’s years-long product line compromises.

]]>https://www.anandtech.com/show/15813/arm-cortex-a78-cortex-x1-cpu-ip-diverging
Tue, 26 May 2020 09:00:00 EDTtag:www.anandtech.com,15813:newsArm Announces The Mali-G78 GPU: Evolution to 24 CoresAndrei FrumusanuToday as part of Arm’s 2020 TechDay announcements, alongside the release of the brand-new Cortex-A78 and Cortex-X1 CPUs, Arm is also revealing its brand-new Mali-G78 and Mali-G68 GPU IPs.

Last year, Arm had unveiled the new Mali-G77 which was the company’s newest GPU design based on a brand-new compute architecture called Valhall. The design promised major improvements for the company’s GPU IP, shedding some of the disadvantages of past iterations and adapting the architectures to more modern workloads. It was a big change in the design, with implementations seen in chips such as the Samsung Exynos 990 or the MediaTek Dimensity 1000.

The new Mali-G78, in comparison, is more of an iterative update to the microarchitecture, making some key improvements to the scalability of the configuration and the balance of the design across workloads, along with some more radical changes such as a complete redesign of its FMA units.

The latest monitor in Viewsonic's large and varied portfolio comes via the XG270QC, which is part of its gaming-focused Elite series. Available in the US now, the 27-inch Viewsonic Elite XG270QC features a 1500R curved screen with a refresh rate of 165 Hz, and is certified for VESA DisplayHDR 400.

Designed with gaming in mind, the Viewsonic Elite XG270QC comes with many of the features you'd expect from a contemporary gaming display, including a 27-inch 2560x1440 VA panel with a fast refresh rate of 165 Hz, variable refresh support including AMD's FreeSync Premium Pro certification, and VESA DisplayHDR 400 certification. Although officially it has a 3 ms response time, Viewsonic is also quoting a 1 ms MPRT response time, made possible by Viewsonic's PureXP Motion Blur reduction technology. The curve of the panel is rated at 1500R, which Viewsonic claims provides a more immersive gaming experience.

Looking at the dimensions, it's 24.1 inches wide with a 4-inch depth. It has an adjustable height of between 18.97 and 23.59 inches, with a net weight of 7.5 kg with the stand installed. For users looking to attach it to a monitor arm or wall mount, it supports VESA 100 x 100 mm mounting on the rear and weighs 4.9 kg without the stand installed. The XG270QC has a black glossy finish and includes a single DisplayPort 1.4 input, two HDMI 2.0 inputs, a 3.5 mm audio output, and, for security, a Kensington lock slot. Provided with the Elite XG270QC is Viewsonic's Elite Display Controller software, which connects to the monitor via a supplied Type-A cable and allows users to adjust the integrated RGB LED lighting. It is certified to work with Thermaltake's RGB Plus and Razer's popular Chroma RGB ecosystems.

Touching on some of the finer details of the 27-inch panel, it has a 178-degree viewing angle and offers VESA Adaptive-Sync support. It features AMD FreeSync Premium Pro certification, which is AMD's own classification system for grading monitors, ensuring among other things a wide enough refresh rate for Low Framerate Compensation support, as well as low-latency HDR support. In terms of color reproduction, Viewsonic is claiming 16.7 million colours, with a 3,000 to 1 static contrast ratio and 120 million to 1 dynamic contrast ratio. For power, Viewsonic states that in Eco mode, it's optimized for 45 W, while it has a 55 W typical consumption rate, with a maximum of up to 59 W.

Viewsonic has said that the Elite XG270QC is available to purchase in the US for a price around the $460 mark. Users in the EU, AU, and other regions around the world will, however, need to wait until June.

This week NVIDIA announced their earnings for the first quarter of their 2021 fiscal year. The current fiscal year is an especially important one for NVIDIA on both a business level and a product level, as the company closes the Mellanox deal, all the while opening up shipments of their new datacenter-class A100 accelerators. Especially coming off of last year’s crypto hangover, NVIDIA has started their new fiscal year with the good times rolling on.

NVIDIA Q1 FY2021 Financial Results (GAAP)

                    Q1'FY2021   Q4'FY2020   Q1'FY2020   Q/Q     Y/Y
Revenue             $3080M      $3105M      $2220M      -1%     +39%
Gross Margin        65.1%       64.9%       58.4%       +0.2%   +6.8%
Operating Income    $1028M      $990M       $358M       -1%     +116%
Net Income          $917M       $950M       $394M       -4%     +106%
EPS                 $1.47       $1.53       $0.64       -5%     +105%

For Q1’FY21, NVIDIA booked $3.08B in revenue. Compared to the year-ago quarter, this is a jump in revenue of 39%, making for a very strong first quarter that was only a hair under Q4, which is commonly a very strong quarter for NVIDIA. Those sizable revenues, in turn, are reflected in NVIDIA’s profits: the company booked $917M in net income for the quarter, more than double Q1’FY20. In fact it’s the second-best Q1 ever for the company; only Q1’FY19 was better, which was in the middle of the crypto boom.

What was a record, however, was NVIDIA’s gross margin. For the quarter NVIDIA booked a GAAP gross margin of 65.1%, edging out the previous quarter and beating even Q1’FY19. As NVIDIA’s revenues have shifted increasingly towards higher-margin products like accelerators, it’s helped the already profitable NVIDIA to extend that profitability even further.

NVIDIA Quarterly Revenue Comparison (GAAP)
($ in millions)

                             Q1'FY2021   Q4'FY2020   Q1'FY2020   Q/Q     Y/Y
Gaming                       $1339       $1491       $1055       -10%    +27%
Professional Visualization   $307        $331        $266        -7%     +15%
Datacenter                   $1141       $968        $634        +18%    +80%
Automotive                   $155        $163        $166        -5%     -7%
OEM & IP                     $138        $152        $99         -9%     +39%

Breaking down NVIDIA’s revenue by platform, while there are no great surprises per se, the company has reached some milestones that are strong indicators of where things are going. Starting with NVIDIA’s datacenter revenue, that segment of the business has set a record for revenue for a second consecutive quarter, with $1.141B in revenue. This marks the first time NVIDIA’s datacenter business has booked more than $1B in revenue in a single quarter, and NVIDIA doesn’t expect it to be the last.
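As a quick sanity check, the Q/Q and Y/Y percentages can be recomputed from the reported dollar figures (an illustrative snippet; `pct_change` is our own helper, and the dollar values are the ones from the table above):

```python
# Recompute NVIDIA's segment revenue changes from the reported dollar figures.
def pct_change(new, old):
    """Percentage change from old to new, rounded to the nearest whole percent."""
    return round((new - old) / old * 100)

# Segment revenue in $ millions: (Q1'FY2021, Q4'FY2020, Q1'FY2020)
segments = {
    "Gaming":                     (1339, 1491, 1055),
    "Professional Visualization": ( 307,  331,  266),
    "Datacenter":                 (1141,  968,  634),
    "Automotive":                 ( 155,  163,  166),
    "OEM & IP":                   ( 138,  152,   99),
}

for name, (q1_fy21, q4_fy20, q1_fy20) in segments.items():
    qq = pct_change(q1_fy21, q4_fy20)  # quarter-over-quarter
    yy = pct_change(q1_fy21, q1_fy20)  # year-over-year
    print(f"{name}: Q/Q {qq:+d}%, Y/Y {yy:+d}%")
```

Running this reproduces the rounded percentages in the segment table, e.g. datacenter at +18% Q/Q and +80% Y/Y.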

While the picture will get muddled a bit next quarter as Mellanox revenue is folded into this mix, the big picture is that datacenter accelerator sales are strong, and set to grow. NVIDIA’s Ampere-based A100 accelerators began shipping for revenue in Q1, helping to boost the numbers there, while Q2 will be the first full quarter of sales. According to NVIDIA, they’re already seeing broad demand for datacenter products, with the major hyperscalers quickly picking up A100s. Overall, NVIDIA’s Volta-generation accelerators were extremely successful for the company, almost but not quite growing the datacenter business to one billion dollars per quarter, and the company is eager to repeat and extend that success with Ampere.

Meanwhile, NVIDIA’s largest business, gaming, was also strong for the quarter, with the company booking $1.339B in revenue. While down seasonally as usual, NVIDIA is reporting that they have weathered the current pandemic similarly to other chipmakers, with soft sales in some areas being counterbalanced by greater demand for chips for home computers as workers shift to working from home.

Interestingly, there’s a very real chance that this could be one of the last quarters where gaming is NVIDIA’s biggest revenue generator. Along with folding Mellanox into the company – and into the datacenter segment – NVIDIA’s datacenter business as a whole has been growing at a much greater clip than gaming. NVIDIA has made it very clear that they’re pushing for a more diversified revenue stream than their traditional gaming roots, and if the datacenter business grows much more, they may just get there this year. Though it will be interesting to see what the eventual launch of Ampere-based gaming products does for gaming revenue, as NVIDIA’s current numbers also reflect the fact that they’re nearing the end of the Turing generation of products.

Bringing up third place was NVIDIA’s professional visualization platform, which saw $307M in revenue. As with gaming sales, the company is seeing a boost in sales due to work from home equipment purchases. This comes on top of the day-to-day demand for workstation laptops, which NVIDIA has been increasingly invested in.

Meanwhile NVIDIA’s automotive business ended up being something of a laggard for Q1’FY21. The segment booked $155M in revenue, which is down 7% from the year-ago quarter. NVIDIA’s automotive business moves at a much different pace than its GPU businesses – in part because it’s not set to really take off until self-driving cars become a retail reality – so the business tends to ebb and flow.

Finally, NVIDIA booked $138M in OEM & IP revenue for Q1’FY21. While this platform is small potatoes compared to gaming and datacenter, on a percentage basis it’s actually another big jump for NVIDIA; the segment grew 39% over the year-ago quarter. According to NVIDIA, the main driving factor here was increased entry-level GPU sales for OEM systems.

Wrapping things up, looking ahead to Q2 of FY2021, NVIDIA’s current predictions call for another strong quarter. Having closed the Mellanox deal, Mellanox’s earnings will be folded into NVIDIA’s numbers starting in Q2, helping to push the company to what should be record revenue. Meanwhile on the product side of matters, Q2 will be the first full quarter of A100 accelerator shipments, which should help NVIDIA further grow their datacenter business.

Following last week’s virtual GTC keynote and the announcement of their Ampere architecture, this week NVIDIA has been holding the back-half of their conference schedule. As with the real event, the company has been posting numerous sessions on everything NVIDIA, from Ampere to CUDA to remote desktop. But perhaps the most interesting talk – and certainly the most amusing – is coming from NVIDIA’s research group.

Tasked with developing future technologies and finding new uses for current technologies, today the group is announcing that they have taught a neural network Pac-Man.

And no, I don’t mean how to play Pac-Man. I mean how to be the game of Pac-Man.

The reveal, timed to coincide with the 40th anniversary of the ghost-munching game, is coming out of NVIDIA’s research into Generative Adversarial Networks (GANs). At a very high level, GANs are a type of neural network where two neural networks are trained against each other – typically one learning how to do a task and the other learning how to spot the first doing that task – with the end goal being that the competition between the networks can help make the two networks better by forcing them to improve to win. In terms of practical applications, GANs have most famously been used in research projects to create programs that can create realistic-looking images of real-world items, upscale existing images, and other image synthesis/manipulation tasks.
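To make the adversarial setup concrete, here is a minimal toy sketch of a GAN training loop (our own illustration, not NVIDIA’s GameGAN): a two-parameter “generator” learns to mimic a 1-D Gaussian by fooling a logistic-regression “discriminator”, with the two trained against each other using analytically computed gradients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Real data distribution the generator must learn to imitate: N(3, 0.5)
REAL_MU, REAL_SIGMA = 3.0, 0.5

# Generator: G(z) = a*z + b, with noise z ~ N(0, 1)
a, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w*x + c), outputs probability "x is real"
w, c = 0.1, 0.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, steps, batch = 0.05, 4000, 64
for _ in range(steps):
    # --- Discriminator step: push D(real) -> 1 and D(fake) -> 0 ---
    x_real = rng.normal(REAL_MU, REAL_SIGMA, batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    # Gradient of binary cross-entropy w.r.t. the logit is (D - label)
    g_real = d_real - 1.0   # real samples labeled 1
    g_fake = d_fake         # fake samples labeled 0
    w -= lr * np.mean(g_real * x_real + g_fake * x_fake)
    c -= lr * np.mean(g_real + g_fake)

    # --- Generator step: push D(fake) -> 1 (non-saturating GAN loss) ---
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    g_logit = d_fake - 1.0  # fakes labeled 1 from the generator's viewpoint
    a -= lr * np.mean(g_logit * w * z)
    b -= lr * np.mean(g_logit * w)

print(f"generator output mean ~ {b:.2f} (real data mean {REAL_MU})")
```

In a real GAN both players are deep networks, but the competitive dynamic is the same: the discriminator’s gradient pushes generated samples toward regions it currently scores as “real”, which is what drives the generator’s parameters toward the data distribution.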

For Pac-Man, however, the researchers behind the fittingly named GameGAN project took things one step further, focusing on creating a GAN that can be taught how to emulate/generate a video game. This includes not only recreating the look of a game, but perhaps most importantly, the rules of a game as well. In essence, GameGAN is intended to learn how a game works by watching it, not unlike a human would.

For their first project, the GameGAN researchers settled on Pac-Man, which is as good a starting point as any. The 1980 game has relatively simple rules and graphics, and crucially for the training process, a complete game can be played in a short amount of time. As a result, over 50K “episodes” of training, the researchers taught a GAN how to be Pac-Man solely by having the neural network watch the game being played.

And most impressive of all, the crazy thing actually works.

In a video released by NVIDIA, the company is briefly showing off the Pac-Man-trained GameGAN in action. While the resulting game isn’t a pixel-perfect recreation of Pac-Man – notably, GameGAN’s simulated resolution is lower – the game nonetheless looks and functions like the arcade version of Pac-Man. And it’s not just for looks, either: the GameGAN version of Pac-Man accepts player input, just like the real game. In fact, while it’s not ready for public consumption quite yet, NVIDIA has already said that they want to release a publicly playable version this summer, so that everyone can see it in action.

Fittingly for a gaming-related research project, the training and development of GameGAN was equally silly at times. Because the network needed to consume thousands upon thousands of gameplay sessions – and NVIDIA presumably doesn’t want to pay its staff to play Pac-Man all day – the researchers relied on a Pac-Man-playing bot to automatically play the game. As a result, the AI that is GameGAN has essentially been trained in Pac-Man by another AI. And this is not without repercussions – in their presentation, the researchers noted that because the Pac-Man bot was so good at the game, GameGAN has developed a tendency to avoid killing Pac-Man, as if it were part of the rules. Which, if nothing else, is a lot more comforting than finding out that our soon-to-be AI overlords are playing favorites.

All told, training the GameGAN for Pac-Man took a quad GV100 setup four days, over which time it monitored 50,000 gameplay sessions. To put the amount of hardware used in perspective, 4 GV100 GPUs add up to 84.4 billion transistors, almost 10 million times as many transistors as are found in the original arcade game’s Z80 CPU. So while teaching a GAN how to be Pac-Man is incredibly impressive, it is, perhaps, not an especially efficient way to execute the game.
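That comparison is easy to check with back-of-the-envelope arithmetic (assuming the commonly cited figure of roughly 8,500 transistors for the original Z80, and GV100’s 21.1 billion transistors per GPU):

```python
# Back-of-the-envelope check on the transistor comparison above.
GV100_TRANSISTORS = 21_100_000_000  # per GPU
Z80_TRANSISTORS = 8_500             # commonly cited count for the original Z80

quad_gv100 = 4 * GV100_TRANSISTORS
ratio = quad_gv100 / Z80_TRANSISTORS
print(f"4x GV100 = {quad_gv100 / 1e9:.1f}B transistors, "
      f"~{ratio / 1e6:.1f} million times a Z80")
```

Which works out to 84.4 billion transistors and a ratio just shy of 10 million to one.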

Meanwhile, figuring out how to teach a neural network to be Pac-Man does have some practical goals to it as well. According to the research group, one big focus right now is using this concept to more quickly train simulators, which traditionally have to be carefully constructed by humans in order to capture all of the possible interactions. If a neural network can instead learn how something behaves by watching what’s happening and what inputs are being made, this could conceivably make creating simulators far faster and easier. Interestingly, the entire concept leads to something of a self-feedback loop, as the idea is to use those simulators to then train other neural networks how to perform a task, such as NVIDIA’s favorite goal of self-driving cars.

Ultimately, whether it leads to real-world payoffs or not, there’s something amusingly human about a neural network learning a game by observing – even (and especially) if it doesn’t always learn the desired lesson.

]]>https://www.anandtech.com/show/15814/gaming-ais-nvidia-teaches-a-neural-network-to-recreate-pacman
Fri, 22 May 2020 10:30:00 EDTtag:www.anandtech.com,15814:newsAvantek's Arm Workstation: Ampere eMAG 8180 32-core Arm64 ReviewAndrei FrumusanuArm desktop systems are quite a rarity. In fact, the lack of appropriate hardware for developers to actually start working in earnest on more optimised Arm software has been quite an issue for the general Arm software ecosystem.

However, if you actually wanted a private local and physical system, you’d mostly be relegated to small low-performing single-board computers which most of the time had patchy software support. It’s only been in the last year or two where Arm-based laptops with Qualcomm Snapdragon chips have suddenly become a viable developer platform thanks to WSL on Windows.

For somebody who wants a bit more power, and in particular is looking to make use of peripherals – actively using large amounts of storage or PCIe connectivity – there are options such as Avantek’s eMag Workstation system.

Today marks the next stage of the AMD AM4 B550 motherboard rollout: numerous vendors have started listing their models ahead of an expected launch on June 16th. One big feature that B550 adds over B450 is PCIe 4.0 support, which will be a big uplift for the new motherboards. The models vendors are unveiling today are more price-conscious when directly compared to the premium X570 models.

The B550 chipset has been touted for many months, with much speculation on feature set, compatibility, and which AMD Ryzen processors users will opt to go for when pairing up a new board. One prevalent issue which AMD has addressed recently is that its impending Zen 3 and Ryzen 4000 processors will now be supported on B450 and X470, albeit without the benefits of PCIe 4.0 and its increased bandwidth. With this in mind, one of the main advantages of the new B550 chipset is that it will openly support PCIe 4.0 and Zen 3, which gives users more affordable options when it comes to selecting a new PC, whether it is a budget gaming system or an AMD Ryzen 3950X 16-core laden powerhouse.

AMD recently announced its Ryzen 3 3000 series processors, the Ryzen 3 3300X and Ryzen 3 3100 which we reviewed. Users looking to buy a new motherboard for AMD's more affordable Ryzen 5 and Ryzen 3 processors may not want to spend the big bucks some vendors are asking for some of its X570 models. Enter B550, and with over 50 models across all the prominent vendors to choose from, it's likely a user will be able to find one that not only matches their style requirements but matches what they need from a feature set as well.

AMD X570, B550 and B450 Chipset Comparison

Feature                          X570         B550        B450
PCIe Interface from CPU          4.0          4.0         3.0
PCIe Interface from Chipset      4.0          3.0         2.0
Max PCH PCIe Lanes               24           24          24
USB 3.1 Gen2                     8            2           0
Max USB 3.1 (Gen2/Gen1)          8/4          2/6         0/6
DDR4 Support                     3200         ?           2667
Max SATA Ports                   8            6           6
PCIe GPU Config                  x16          x16         x16
                                 x8/x8        x8/x8       x16/+x4
                                 x8/x8+x8*    x16/+x4
Memory Channels (Dual)           2/2          2/2         2/2
Integrated 802.11ac Wi-Fi MAC    N            N           N
Chipset TDP                      11W          ?W          4.8W
Overclocking Support             Y            Y           Y
XFR2/PB2 Support                 Y            Y           Y

The biggest benefit going to B550 from B450 is official support for PCIe 4.0 devices within the full-length PCIe slot which is driven by the processor. This means that the top full-length slot will run at PCIe 4.0 x16, with some models allowing for x8/x8 from a second full-length slot with official support for NVIDIA SLI. From the CPU support pages that we've seen announced from vendors, B550 will only be compatible with AMD's Ryzen 3000 series processors. For users looking to use PCIe 4.0 storage devices, B550 models include a single PCIe 4.0 x4 M.2 slot, with any additional M.2 slots coming via PCIe 3.0 lanes from within the chipset itself.

Another benefit is that the B550 chipset includes support for up to two USB 3.1 G2 ports, which B450 didn't offer. B550 also retains the same USB 3.1 G1 capability as B450, with up to six ports available from the chipset. Users can also install up to six SATA ports from the chipset, with the onus on vendors if they want to use additional re-drivers or SATA controllers to push the numbers further at the cost of PCH lanes. The more premium B550 models leverage Realtek's ALC1220 audio codec, with the ALC1200 featured on MSI's MAG B550 Tomahawk.

The GIGABYTE B550 Aorus Master motherboard

Motherboard vendors across the world have been putting up public listings of their B550 product stacks online. Notable models from GIGABYTE include the B550 Aorus Pro AC, which provides one PCIe 4.0 x4 M.2 slot, an additional PCIe 3.0 x4 slot driven from the chipset, and an Intel Wi-Fi 3168 wireless interface (Wi-Fi 5). ASUS has also announced its stack, with the top model, the ROG Strix B550-E Gaming, offering an Intel I225-V 2.5 G Ethernet controller and three full-length slots running at x16, or x8/x8 for NVIDIA SLI multi-graphics configurations, with the third full-length slot operating at PCIe 3.0 x4.

MSI has also publicly revealed its B550 stack, with the popular Tomahawk series returning via the MAG B550 Tomahawk, which supports up to DDR4-4866 memory and offers a full-length PCIe 4.0 x16 slot, a secondary full-length PCIe 3.0 x4 slot, and two PCIe 3.0 x1 slots. Users looking for small form factor models will find at least three mini-ITX models, with the most prominent coming from ASUS via the ROG Strix B550-I with Intel's I225-V 2.5 G Ethernet onboard. There are also plenty of micro-ATX models, with five from ASUS alone expected at launch, although some models may be region dependent.

In regards to pricing on AMD's B550 models, only ASUS has announced any so far, with prices starting at $134 for the ASUS Prime B550M-A, all the way up to $279 for the ASUS ROG Strix B550-E Gaming, which has Intel's AX200 Wi-Fi 6 wireless interface, an Intel I225-V 2.5 G Ethernet controller, and a SupremeFX S1220A HD audio codec. We expect more pricing to be available in the coming days and weeks.

]]>https://www.anandtech.com/show/15810/amds-b550-motherboards-start-apppearing-online
Thu, 21 May 2020 13:00:00 EDTtag:www.anandtech.com,15810:newsHot Chips 32 (2020) Schedule Announced: Tiger Lake, Xe, POWER10, Xbox Series X, TPUv3, Jim Keller KeynoteDr. Ian CutressI’ve said it a million times and I’ll say it again – the best industry conference I go to every year is Hot Chips. The event has grown over the years, to around 1700 people in 2019 if I remember correctly, but it involves two days of presentations about the latest hardware that has hit the market. This includes new and upcoming parts that change the industry we work in, including deep dives into some of the most important silicon at play in the market today. There are also extensive keynote presentations from the most prominent members of the industry that give insights into how these people (and the companies) work, but also where the future is going.

This week the lid was lifted on the provisional Hot Chips 2020 schedule. With COVID-19 in mind, this year will also be the first time the conference will be offered online-only for attendees. Hot Chips 2020 is scheduled for August 16th to August 18th.

]]>https://www.anandtech.com/show/15806/hot-chips-32-2020-schedule-tiger-lake-xe-power10-xbox-series-x-tpuv3-jim-keller
Thu, 21 May 2020 08:00:00 EDTtag:www.anandtech.com,15806:newsIntel Acquires Rivet Networks: Killer Networking is all in for Team BlueDr. Ian Cutress

News hot off the wire is that Rivet Networks, the company behind the Killer range of accelerated networking products and analysis tools, is being acquired by Intel. The two companies have been working very closely of late, using a unified silicon strategy for the latest gigabit Ethernet networking silicon as well as Wi-Fi 6 add-in cards and CNVi CRF modules for laptops. This acquisition will give Intel an element of Ethernet traffic monitoring and optimization its portfolio has not had before, but it will be interesting to see how Intel handles the acquisition compared to when Qualcomm Atheros acquired the Killer brand some years ago.

]]>https://www.anandtech.com/show/15809/intel-acquires-rivet-networks-killer-networking-is-all-in-for-team-blue
Wed, 20 May 2020 16:00:00 EDTtag:www.anandtech.com,15809:newsThe Intel Comet Lake Core i9-10900K, i7-10700K, i5-10600K CPU Review: Skylake We Go Again Dr. Ian CutressThe first thing that comes to mind with Intel’s newest line of 10th Generation desktop processors is one of ‘14nm Skylake, again?’. It is hard not to ignore the elephant in the room – these new processors are minor iterative updates on Intel’s 2015 processor line, moving up from four cores to ten cores and some extra frequency, some extra security measures, a modestly updated iGPU, but by and large it is still the same architecture. At a time when Intel has some strong competition, Comet Lake is the holding pattern until Intel can bring its newer architectures to the desktop market, but can it be competitive?
]]>https://www.anandtech.com/show/15785/the-intel-comet-lake-review-skylake-we-go-again
Wed, 20 May 2020 09:00:00 EDTtag:www.anandtech.com,15785:newsAMD to Support Zen 3 and Ryzen 4000 CPUs on B450 and X470 MotherboardsDr. Ian CutressIn a surprising twist, AMD has today announced that it intends to enable Ryzen 4000 and Zen 3 support on its older B450 and X470 Motherboards. This is going to be a ‘promise now, figure out the details later’ arrangement, but this should enable most (if not all) users running 400 series AMD motherboards to upgrade to the Zen 3 processors set to be unveiled later this year.
]]>https://www.anandtech.com/show/15807/amd-to-support-zen-3-and-ryzen-4000-cpus-on-b450-and-x470-motherboards
Tue, 19 May 2020 10:00:00 EDTtag:www.anandtech.com,15807:news