Monday, December 21, 2015

Star Wars: Battlefront is one of the best-optimized games on the market right now, at least when looking purely at graphics versus framerate output across different GPUs. The game has amazing shading, lighting, and post-processing effects that appear to leverage a form of physically based rendering (PBR) to achieve a more realistic aesthetic without hammering polygon counts and draw calls.

All of that initial testing was done on an X99 platform, so the game's fluidity across a range of CPUs remained unexplored. Gamers Nexus decided to benchmark Battlefront with an Intel lineup and several of AMD's FX CPUs, also including one APU and dGPU combination. A CPU that is absent from this article means one of two things: either Gamers Nexus didn't have it, or it was in use for another benchmark, and this accounts for many CPUs.

The Pentium G3258 seemed to work well during the game's beta phase, but official support appears to have been dropped when Battlefront launched. Because the dual-core CPU is now technically below minimum specifications, users must resort to .dll injection utilities to get dual-core operation working in Battlefront. Without injecting the correct .dll files, Battlefront launches to a black screen with a locked 200FPS output, giving zero video to the gamer. The game looks as though it has crashed or frozen, but an Alt+F4 or Alt+Tab shows that Windows is still fully operational.

In order to force G3258 support, they injected a dual-core .dll through a process detailed in the original article (linked at the bottom). At the time, they were not sure whether this risked bans from anti-cheat software. The injection makes for a stable game that supports dual-core CPUs, even though it is obviously a sub-optimal test environment that could introduce variance or non-linear comparisons, since the test environment changed for this one CPU. Without the fix, however, they could not test the G3258 at all. That makes the .dll injection effectively a game requirement for the G3258, which in turn makes it a real-world use scenario.

The latest AMD Catalyst drivers (15.11.1), which include the Battlefront patch, were used during testing, as were NVIDIA's 358.91 drivers. Game settings were configured to the "Medium", "High", and "Ultra" presets at 1080p, 1440p, and 4K resolutions.

Each scenario was tested identically for a total of 30 seconds, then repeated three times for parity.

They decided to use "Survival" mode on a set course, chosen for its replicability and reliability during testing. The benchmark, which used Tatooine as its bench course, is not comparable to the previous Battlefront benchmark.

Average FPS, 1% low, and 0.1% low frametimes were measured. They did not report maximum or minimum FPS results because they considered those numbers pure outliers. Instead, they took the average of the lowest 1% of results to show noticeable drops, and the average of the lowest 0.1% of results to show severe spikes.
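The 1% and 0.1% low metrics described above can be sketched in a few lines of Python. This is an illustrative reconstruction of the described method, not Gamers Nexus's actual tooling: sort the per-frame FPS samples and average the bottom slices.

```python
def summarize_fps(samples):
    """Compute average FPS plus 1% and 0.1% lows from FPS samples.

    The 'lows' are the mean of the worst 1% (or 0.1%) of samples,
    which captures noticeable drops and severe spikes without
    reporting raw min/max outliers.
    """
    s = sorted(samples)            # worst frames first
    avg = sum(s) / len(s)
    n1 = max(1, len(s) // 100)     # bottom 1% (at least one sample)
    n01 = max(1, len(s) // 1000)   # bottom 0.1%
    low_1 = sum(s[:n1]) / n1
    low_01 = sum(s[:n01]) / n01
    return avg, low_1, low_01
```

For a 30-second pass captured at high framerates, this yields a few thousand samples, which is enough for the 0.1% slice to contain more than a single frame.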

Their test discipline mandates disabling power management functions, including AMD's APM and Intel's C-States. This keeps results consistent between passes and eliminates the variable clock throttling that could skew results and make them harder to measure.

They used the same DDR3 memory for every platform, configured to DDR3-2133 XMP profiles. This ensured that memory did not skew test results between platforms.

Windows deployment was clean between configurations. They used a custom-built local PXE server to ghost images onto drives before testing began. Each image was constructed for a specific CPU architecture, with chipset drivers installed post-image; video drivers were not included on the images. One SSD per benchmark ensured complete isolation of driver and OS/image deployment, so there were no conflicts between CPU or platform changes. Skipping this step risks BSOD failures.

There was one methodological limitation: they used a single GTX 980 Ti rather than multiple GPUs, so settings that began to bottleneck the GPU reduced the visibility of CPU limitations. This didn't happen until the higher settings, and a clean hierarchy of CPUs still emerged.

The 1080p/low preset was tested solely to create a visible delta between the CPUs, in the hope of eliminating the GPU as a potential bottleneck. This showed the point at which the CPU bottlenecks the GPU, an extremely important factor to consider when purchasing components. The game is limited to 200FPS by default, but they were able to unlock it with the console command gametime.maxvariablefps 0.

The i3-4130, the next-slowest CPU on the bench, pushed a 174FPS average with 128FPS 1% lows. AMD's first CPU on the bench, the FX-8370, averaged 165FPS, a 5.3% loss against the i3-4130. The FX-8370 nonetheless retained higher 1% low and 0.1% low framerates than the i3-4130.
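The percentage deltas quoted here follow the usual loss-relative-to-baseline convention, sketched below. Note that recomputing from the rounded averages printed above (174 vs. 165) gives roughly 5.2%, so the original article's 5.3% presumably comes from unrounded data.

```python
def percent_loss(baseline_fps, test_fps):
    """Percent loss of test_fps relative to baseline_fps."""
    return (baseline_fps - test_fps) / baseline_fps * 100

# Rounded published averages: i3-4130 at 174FPS vs. FX-8370 at 165FPS.
delta = percent_loss(174, 165)  # roughly 5.2%
```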

Moving on to the FX-8320E, they started to approach a performance region where the differences would show themselves at higher settings. The FX-8320E, a lower-power CPU, drives 132.7FPS to the display when paired with the 980 Ti, with a bigger gap to its 1% lows than the FX-8370 showed. The A10-7870K APU posted lower framerates than the FX-8320E across all of their tests.