Zotac made a wee splash on the market years ago with their ITX and Nano products, and more recently they have been shrinking GPUs. They continue to carve out a small piece of the market, recently releasing the Zotac RTX 2070 Mini with a smaller price as well as a smaller stature. There is more to this card than just the trim job: the GPU is a non-A TU106, which shares the same design as the A variant but offers limited overclocking potential, if any at all. That also lowers the price, which is a welcome feature.

The gang over at Bjorn3D tested it in a number of games and also checked whether there is a way to bump the clocks up in their full review.

"One of the first aftermarket partner model 2070’s has rolled in. It is from our friends at Zotac, the RTX 2070 Mini. This model does not appear to be a Overclock model on the surface and might even be a non A GPU due to its less diminutive price point."

Adding these 1440p results was planned from the beginning, but time constraints made testing at three resolutions before getting on a plane for CES impossible (though in retrospect UHD should have been the one excluded from part one, and in future I'll approach it that way). Regardless, we now have those 1440p results to share, having concluded testing using the same list of games and synthetic benchmarks we saw in the previous installment.

On to the benchmarks!

PC Perspective GPU Test Platform

Processor: Intel Core i7-8700K
Motherboard: ASUS ROG STRIX Z370-H Gaming
Memory: Corsair Vengeance LED 16GB (8GBx2) DDR4-3000
Storage: Samsung 850 EVO 1TB
Power Supply: CORSAIR RM1000x 1000W
Operating System: Windows 10 64-bit (Version 1803)
Drivers: AMD 18.50; NVIDIA 417.54, 417.71 (OC results)

We will begin with Unigine Superposition, which was run with the high preset settings.

Here we see the RTX 2060 with slightly higher performance than the GTX 1070 Ti, right in the middle of GTX 1070 and GTX 1080 performance levels. As expected so far.

At first glance, the MSI RTX 2060 Gaming Z is very similar to the Founders Edition, with only its 1830MHz boost clock justifying an MSRP that is $35 higher. Once TechPowerUp got into testing, they found that the custom cooler helps the card maintain peak speed far more effectively than the FE. This also let them hit an impressive manual overclock of a 2055MHz boost clock and 1990MHz on the memory. Check out the results as well as a complete teardown of the card in the review.

"MSI's GeForce RTX 2060 Gaming Z is the best RTX 2060 custom-design we've reviewed so far. It comes with idle-fan-stop and a large triple-slot cooler that runs cooler than the Founders Edition. Noise levels are excellent, too, it's the quietest RTX 2060 card to date."

Citing word from a board partner, VideoCardz.com has published a rumor about an upcoming NVIDIA GeForce GPU based on Turing, but without ray tracing support. While such a product seems inevitable as we move further down the chain into midrange and mainstream graphics options, where ray tracing makes less sense from a performance standpoint, the name accompanying the report is harder to fathom: GTX 1660 Ti.

"The GeForce GTX 1660 Ti is to become NVIDIA’s first Turing-based card under GTX brand. Essentially, this card lacks ray tracing features of RTX series, which should (theoretically) result in a lower price. New SKU features TU116 graphics processor and 1536 CUDA cores. This means that GTX 1660 Ti will undoubtedly be slower than RTX 2060." - VideoCardz.com

Beyond the TU116 GPU and 1536 CUDA cores, VideoCardz goes on to state that their sources also claim that this new GTX card will still make use of GDDR6 memory on the same 192-bit bus as the RTX 2060. As to the name, while it may seem odd not to adopt the same 2000-series branding for all Turing cards, the potential for confusion with RTX vs GTX branding might be the reason - if indeed cards in a 1600-series make it to market.
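The 192-bit bus figure implies a peak memory bandwidth we can sanity-check with simple arithmetic. The 14 Gbps per-pin data rate below is the RTX 2060's, assumed here only for illustration since the rumor does not specify one:

```python
# Peak memory bandwidth = (bus width in bytes) x (per-pin data rate).
# The 14 Gbps GDDR6 data rate is assumed (it is what the RTX 2060 uses);
# the rumor only confirms the 192-bit bus.
bus_width_bits = 192
data_rate_gbps = 14  # gigabits per second, per pin (assumed)

bandwidth_gbs = bus_width_bits / 8 * data_rate_gbps  # bytes/s, in GB/s
print(bandwidth_gbs)  # 336.0
```

If the card does ship with the same memory configuration, it would match the RTX 2060's 336 GB/s of bandwidth despite the trimmed shader count.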

If you are running an Ubuntu system with an older GPU and are curious about upgrading but unsure if it is worth it, Phoronix has a great review for you. Whether you are gaming with OpenGL and Vulkan or curious about the changes in OpenCL/CUDA compute performance, they have you covered. They even delve into the power efficiency numbers so you can spec out the operating costs of a large deployment, if you happen to have the budget to consider buying RTX 2060s in bulk.

"In this article is a side-by-side performance comparison of the GeForce RTX 2060 up against the GTX 1060 Pascal, GTX 960 Maxwell, and GTX 760 Kepler graphics cards."

One of NVIDIA's biggest and most surprising CES announcements was the introduction of support for "G-SYNC Compatible Monitors," allowing the company's G-SYNC-capable Pascal and Turing-based graphics cards to work with FreeSync and other non-G-SYNC variable refresh rate displays. NVIDIA is initially certifying 12 FreeSync monitors but will allow users of any VRR display to manually enable G-SYNC and determine for themselves if the quality of the experience is acceptable.

Those eager to try the feature can now do so via NVIDIA's latest driver, version 417.71, which is rolling out worldwide right now. As of the date of this article's publication, users in the United States who visit NVIDIA's driver download page are still seeing the previous driver (417.35), but direct download links are already up and running.

Users with a certified G-SYNC compatible monitor will have G-SYNC automatically enabled via the NVIDIA Control Panel when the driver is updated and the display is connected, the same process as connecting an official G-SYNC display. Those with a variable refresh rate display that is not certified must manually open the NVIDIA Control Panel and enable G-SYNC.

NVIDIA notes, however, that enabling the feature on displays that don't meet the company's performance capabilities may lead to a range of issues, from blurring and stuttering to flickering and blanking. The good news is that the type and severity of the issues will vary by display, so users can determine for themselves if the potential problems are acceptable.

Update: Users over at the NVIDIA subreddit have created a public Google Sheet to track their reports and experiences with various FreeSync monitors. Check it out to see how others are faring with your preferred monitor.

Update 2: Our friends over at Wccftech have published a short video demonstrating how to enable G-SYNC on non-G-SYNC VRR monitors:

3DMark recently released an inexpensive update to their benchmarking suite to let you test your ray tracing performance; you can grab Port Royal for a few dollars from Steam. With limited time since release and only a small pool of GPUs that can properly run it, the benchmark has not yet made it into most review suites. Bjorn3D took the time to install it on a decent system and tested the performance of the Titan and the five RTX cards available on the market.

"3DMark is finally updated with its newest benchmark designed specifically to test real time ray tracing performance. The benchmark we are looking at today is Port Royal, it is the first really good repeatable benchmark I have seen available that tests new real time ray tracing features."

After announcing the Radeon VII this week at CES, AMD has quietly released its own internal benchmarks showing how the upcoming card potentially compares to the Radeon RX Vega 64, AMD's current flagship desktop GPU released in August 2017.

The internal benchmarks, compiled by AMD Performance Labs earlier this month, were released as a footnote in AMD's official Radeon VII press release and first noticed by HardOCP. AMD tested 25 games and 4 media creation applications, with the Radeon VII averaging around a 29 percent improvement in games and 36 percent improvement in professional apps.

AMD's test platform for its gaming Radeon VII benchmarks was an Intel Core i7-7700K with 16GB of DDR4 memory clocked at 3000MHz running Windows 10 with AMD Driver version 18.50. CPU frequencies and exact Windows 10 version were not disclosed. AMD states that all games were run at "4K max settings" with reported frame rate results based on the average of three separate runs each.

For games, the Radeon VII benchmarks show a wide performance delta compared to RX Vega 64, from as little as 7.5 percent in Hitman 2 to as much as 68.4 percent for Fallout 76. Below is a chart created by PC Perspective from AMD's data of the frame rate results from all 25 games.
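Percentage deltas like these come straight out of the average frame rates; a quick sketch of the arithmetic, using hypothetical placeholder FPS numbers rather than AMD's actual data:

```python
# How a percent improvement is derived from average frame rates.
# The FPS values in the example are hypothetical placeholders,
# not figures from AMD's benchmarks.
def percent_improvement(new_fps: float, old_fps: float) -> float:
    """Percent gain of new_fps over old_fps."""
    return (new_fps - old_fps) / old_fps * 100.0

# e.g. a card averaging 86 FPS vs. a predecessor at 64 FPS:
print(round(percent_improvement(86.0, 64.0), 1))  # 34.4
```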

In terms of media creation applications, AMD changed its testing platform to the Ryzen 7 2700X, also paired with 16GB of DDR4 at 3000MHz. Again, exact processor frequencies and other details were not disclosed. The results reveal between a 27% and 62% improvement:

It is important to reiterate that the data presented in the above charts is from AMD's own internal testing, and should therefore be viewed skeptically until third party Radeon VII benchmarks are available. However, these benchmarks do provide an interesting first look at potential Radeon VII performance compared to its predecessor.

Radeon VII is scheduled to launch February 7, 2019 with an MSRP of $699. In addition to the reference design showcased at CES, AMD has confirmed that third party Radeon VII boards will be available from the company's GPU partners.

Part of this refresh has already been announced, of course, as Papermaster noted, “we’re really excited to start on the high end” (speaking about the Radeon VII) and he concluded with the promise that “you’ll see the announcements over the course of the year as we refresh across our Radeon roadmap”. It was not mentioned whether the refreshed lineup will include 7nm parts derived from the Radeon VII shown at CES, but it seems reasonable to assume that we haven’t seen the last of Vega 2 in 2019.

AMD is still mid-keynote but that's no reason not to start filling you in on what we know, especially since the CES gang just got a free copy of The Division 2 so we may not see them for a while.

The new Radeon VII looks similar to the Vega series but offers improved performance, especially at 4K resolutions. According to AMD's internal benchmarks you will see a noticeable improvement over the Vega series in a number of games.

The new card is not just for gaming; AMD also showed a slide covering the gains you can expect in a variety of creative software.

As far as the specifications go, we know the card will feature 60 CUs (3840 Stream Processors) and an impressive 16GB of HBM2 memory, with a GPU clock of up to 1.8GHz. It will require a pair of 8-pin PCIe power connectors to drive all of that.
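Those numbers hang together on standard GCN arithmetic: 64 stream processors per CU, and two FP32 operations per SP per clock (a fused multiply-add). A back-of-envelope check, assuming the card can actually sustain the announced 1.8GHz:

```python
# Sanity check of the announced Radeon VII specs.
# Assumes GCN's usual 64 stream processors per CU and
# 2 FP32 ops per SP per clock (FMA); 1.8GHz is the announced peak clock.
cus = 60
sp_per_cu = 64
clock_ghz = 1.8

stream_processors = cus * sp_per_cu
fp32_tflops = stream_processors * 2 * clock_ghz / 1000  # GFLOPS -> TFLOPS

print(stream_processors)      # 3840
print(round(fp32_tflops, 2))  # 13.82
```

That works out to roughly 13.8 TFLOPS of peak FP32 throughput at the quoted clock, real-world performance depending on how long the card holds that speed.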

The card will be available on Feb 7th for an MSRP of $699, with a free copy of The Division 2 for as long as supplies last; you can also enjoy that deal on select Ryzen chips. That places it under the cost of NVIDIA's top GPUs but significantly above the new RTX 2060, and we still have to see where it sits in the benchmark charts!

3rd-Gen Ryzen CPUs Coming

The new third generation Ryzen uses AMD's chiplet design, with a smaller core die and a large IO chip. Code-named Matisse, the 7nm Zen 2 desktop parts are not yet ready for release, with final clock speeds not announced. (AnandTech was able to go a little deeper into the matter before the announcement, and they offer some analysis of the feasibility of adding another chiplet to the package and meeting the 16-core count some expected based on the rumors we saw prior to this event. Ed.)

Dr. Su did not share much information about the new chip on stage, though we know it may pull less power than a Core i9, at least in Cinebench. Owners of AM4 boards can rest assured that upgrading to the new chips will be as easy as a BIOS update, as the socket will indeed remain the same.