HWBOT Articles

Today we find the GPU Flashback Archive delving into the not so distant past to focus on the NVIDIA 900 series of graphics cards, the first to use NVIDIA’s new Maxwell architecture, which had already seen the light of day in mobile GPU solutions, an indication of the direction the company was taking at the time. Let’s take a look at the cards that were launched as part of the 900 series, the improvements and changes that Maxwell brought, and some of the more memorable scores that have been posted on HWBOT.

The first question one may well have regarding the NVIDIA 900 series is simple - what happened to the 800 series? To answer the question fully, you must first look at the direction NVIDIA was moving in at the time: a push to expand its product offerings in order to compete in the quickly expanding mobile SoC market. The sudden ubiquity of Android-based smartphones around the globe was fuelled in part by the development of mobile SoCs from Qualcomm, Samsung, Mediatek, Marvell, Allwinner and others. The traditional feature phone was quickly being replaced by smartphones that now required improved multi-core CPU performance, HD display support and, importantly from NVIDIA’s perspective, decent enough graphics processing to actually play 3D games. Intel and NVIDIA were two companies with plenty of R&D and marketing budget who sought to enter a new market to help bolster revenues during an inevitable slowdown of desktop PC sales, a traditional cash cow for both.

The GPU Flashback Archive series continues today with a recap of the NVIDIA GeForce 700 series, a refresh that heralded part two of the Kepler family of GPUs. We can also remember it as the time when NVIDIA launched its first ever GTX Titan card and, with it, a new pricing and retail strategy for truly high-end graphics card products. Let’s take a look at the new Kepler architecture GPUs, the cards that were popular with HWBOT members and some of the more memorable scores that have been posted since launch.

The 2011-2013 period saw NVIDIA implement a more regular cadence for its high-end product launches and refreshes: a new GPU architecture every two years, with a new product line arriving each year. In practice this meant two product lines per architecture, with an improved version offered the second time out. This is what we saw with Fermi, an architecture whose potential was fully realized at the second attempt. With the GeForce 700 series, which arrived in earnest in May 2013 with both the GeForce GTX 780 and GTX 770, we have something different. The new cards arrived using a much bigger version of the Kepler architecture than what we saw on the NVIDIA 600 series.

The GPU Flashback Archive arrives today at the NVIDIA 600 series that debuted in the spring of 2012. The new range of cards showcased a new graphics architecture design and the beginning of what we might describe as the Kepler era. Let’s take a peek at the changes that the new design heralded, as well as a close-up view of the GeForce GTX 680, historically the most popular 6-series card with HWBOT members. Before we look at some notable scores that were made with the GeForce GTX 680, let’s first kick off with an overview of the innovations that arrived with the new Kepler architecture.

If we cast our minds back to 2012 we can recall an era when NVIDIA and AMD were virtually neck and neck, with successive graphics card launches from each company swinging the performance crown from side to side. The arrival of Kepler in many ways represents the beginning of the end of the competitive duopoly that is clearly absent today. Kepler helped NVIDIA push ahead of AMD in terms of graphics processor design, creating a performance lead which AMD still finds insurmountable, despite the arrival of its latest Vega-based cards. Let’s take a look at Kepler in a little detail.

This week the GPU Flashback Archive sets its sights on the GeForce 500 series from NVIDIA. Arriving in late 2010, the 500 series was the second round of graphics cards based on the Fermi architecture, which had limped over the line in the previous generation, ostensibly due to fabrication and yield issues. The new flagship GTX 580 arrived with a more polished take on the Fermi design that helped NVIDIA combat the threat from AMD and its popular Radeon 5000 and 6000 series cards. As ever, let’s take a look at the new GPU, the new flagship card and a few of the outstanding scores that have been submitted to HWBOT.

To say that the NVIDIA 400 series graphics cards launch was less than smooth would be a total understatement. The GF100 Fermi architecture GPU in fact arrived six months late with a significant number of cores hacked off. Blame was laid at the door of fabricator TSMC and a 40nm manufacturing process that clearly hadn’t been optimally adapted for NVIDIA’s Fermi, a monster chip boasting 3 billion transistors and a 529mm² die. While cards such as the GTX 480 had actually done well to make NVIDIA competitive in performance terms, the GTX 580 and its GF110 GPU were rather quickly shoved out the door just eight months later as a revised and improved version of the original.

This week in our GPU Flashback Archive series we cast our minds back to a very popular and well-loved graphics card series, the GeForce 400 series. NVIDIA launched the GeForce 400 series in March 2010, armed with a new Fermi architecture that it hoped would help it compete with the successful AMD Radeon 5000 series. Let’s look at the new features that Fermi offered, the cards that were popular and the scores that were submitted to HWBOT in this era.

Compared to previous product launches from NVIDIA, the GeForce 400 series launch did not go as smoothly as hoped. September 2009 saw AMD come out with its Radeon 5000 series, which made a solid case against NVIDIA’s 200 series offerings. It would be January before NVIDIA really started wooing the tech media with tales of its forthcoming Fermi architecture lineup, March 2010 before the tech media actually got their hands on the new cards, and several weeks after that before enthusiasts could actually buy one. This was not the typical NVIDIA launch. Reasons for the delay certainly seemed to lie with fabrication at TSMC, which was not providing the yields expected on its new 40nm process. This was a problem that particularly hurt NVIDIA because the new Fermi GPU, the GF100, was very large. When the GeForce 400 series finally arrived in the form of the GeForce GTX 480 and GTX 470, by most calculations the cards were six months late.

HWBOT news

With 2019 drawing near, we are currently holding a poll over at the forums to decide which benchmarks should remain eligible for points in 2019 and which should not. The consensus is that fewer benchmarks should carry global points, but which exactly do you want to see removed or added? Cast your vote in the forums!

Head over to overclock.net if you want to participate in the annual Freezer’ Burn overclocking competition. $2,400 in prizes, a very active community, split ambient and extreme cooling stages - they've got it all!

This competition was designed with the goal that everyone can participate and be competitive. Whether you are an experienced subzero overclocker or just starting out and this is all new to you, you can learn and participate in a fair and even environment. There are separate categories for ambient-cooled and extreme-cooled computers. You will only be allowed to participate in one category, but you can change categories at any time.
We learned a lot from the last Freezer’ Burn competition and decided to make one very fundamental change this time around. The biggest and most important change is that we will not limit the CPU or core count in this competition. Simply put, every CPU ever made is allowed. You might be saying to yourself, "Hey, that's not fair, since I only have a 4-core CPU and others might have 32-core CPUs." Well, we have an interesting solution to this problem, which is described below in the 2D section. To take things a step further, we have chosen only 3D benchmark programs where the CPU does not influence the score. No matter what CPU you have, you can compete in a fair environment!

The GALAX GOC 2018 Qualification ended today, and the submissions made are being scrutinized over the next 24 hours. Some submissions were rejected incorrectly due to a mix-up with the GPUPI version restrictions, and might be allowed to enter the competition if they were initially submitted - but rejected by the engine - before the competition deadline.

Hold on tight, we will have the final ranking validated tomorrow at 19h CEST.

Intel just lifted the veil on the performance figures for its latest Z390 platform and the forthcoming Intel Core i9 9900K processor, an 8-core/16-thread Coffee Lake architecture chip that so far has broken 10 World Records and 11 Global First Places in the 8-core CPU category. The new Intel Z390 platform has also broken the DDR4 memory frequency record, pushing it to 5566MHz.

As the GOC 2018 Online Qualification Contest just started this week, GALAX has provided some tools to get the most out of your 2080 Ti. They might give you the edge you need to earn a ticket to the finals, but use them at your own risk!

Let us know in the comments if you would like to receive the mod tools and tweaking tips.

GALAX has just confirmed the dates, prizes and stages of the online worldwide qualification phase of the GOC 2018. The online qualification contest is the prelude to the GALAX GOC 2018 Grand Final, which will take place in December. The qualification phase is open to all overclockers and spans five stages, which include both 2D and 3D challenges.