Microsoft May Boost Xbox One’s GPU Performance by 8%–10% – Report

Microsoft Corp. plans to release a patch for the Xbox One game console that will boost the performance of the system’s graphics processing unit by 8%–10%. The update will remove the compulsory reservation of GPU horsepower for processing Kinect’s video data.

Not only does the Xbox One’s GPU have about 33% lower computational power than the PlayStation 4’s, it also reserves 2% of its performance for processing Kinect’s audio data and 8% for processing Kinect’s video data. As a result, 10% of the GPU sits idle even when a game does not use the motion sensor. Consequently, some games (Tomb Raider 2013, for example) render at 60 frames per second on the PlayStation 4, whereas the Xbox One can only hit 30 frames per second.
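The arithmetic behind the reported 8%–10% figure can be sketched as follows. This is a back-of-the-envelope illustration only, assuming the two reserved shares simply add up:

```python
# Back-of-the-envelope sketch of the reported 8%-10% gain (assumption:
# Kinect's audio and video reservations combine additively).
audio_reserve = 0.02   # share reserved for Kinect audio (stays reserved)
video_reserve = 0.08   # share reserved for Kinect video (freed by the patch)

usable_before = 1.00 - audio_reserve - video_reserve   # 0.90 of the GPU
usable_after = 1.00 - audio_reserve                    # 0.98 once the video share is freed

relative_gain = usable_after / usable_before - 1       # ~8.9% more GPU for games
print(f"before: {usable_before:.0%}, after: {usable_after:.0%}, gain: {relative_gain:.1%}")
```

Freeing the 8-point video reservation lifts the share available to games from 90% to 98%, a relative improvement of roughly 8.9%, which lands inside the reported 8%–10% range.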

Quite naturally, game developers need to learn how to use the Xbox One’s 32 MB ESRAM efficiently to speed up rendering of graphics-intensive video games, but this will take time. To boost performance sooner, Microsoft is working on a patch that will make the 8% reservation of GPU horsepower for Kinect optional, reports the HotHardware web-site. As a result, games that do not use Kinect will be able to apply the freed-up resources to higher frame rates and/or better graphics quality. While 8% is not a lot, more tweaks to improve the Xbox One’s GPU utilization can be expected over time.

It is worth mentioning that less than a quarter after the launch, Microsoft is rebalancing the Xbox One platform towards higher performance in video games. The new Kinect 2 sensor remains an important part of the whole Xbox One project, but it is obvious that right now the software giant needs to concentrate on improving the key part of the platform: the graphics processing unit.


Yeah, I think they are going to need a bit more performance tweaking before they get to the point where they can run 60 fps in all games, because I do not think that 10% more performance is enough of an increase to hit 60 fps.


2.

If they are boosting performance by loosening the Kinect reservation, it means they have had complaints that it interferes with performance.
That is something they should have noticed during the testing period, instead of rushing the console to market.


3.

Both M$ and Sony had better start ordering more powerful processors from AMD or others, because there is only a short window remaining before AMD and Nvidia begin to merge the CPU with the discrete GPU and produce an entire gaming console on a PCI card. Maxwell is Nvidia's reply to the AMD dedicated gaming console APUs of the Xbone and PS4! Just as the API wars are beginning between Mantle, DX*.*, and OpenGL, there will be a war to get CPUs up close and connected to the GPU, low-latency-wise, sharing the same fat data bus.

And what is the best way to reduce latency between the CPU and a discrete GPU? Merge the CPU and the GPU onto the discrete graphics card, making the discrete GPU into a complete APU with its own OS, GDDR5 memory, large on-die RAM (to boost the on-card gaming OS and engine), and a CPU/GPU combo sharing the same on-die memory controller and a unified memory address space. The beginnings of this are starting: Nvidia's Maxwell may start out with a single Denver ARM ISA based core, but the competition will force Nvidia to add more Denver cores to Maxwell to compete with AMD, as AMD begins to rework its gaming console APUs into more powerful discrete PCI-based complete gaming platforms on a card. This is where gaming is going to evolve, driven by the need for lower latency between CPU and GPU. That need is currently the driving force behind the API improvements of Mantle, but the real solution to the latency issue is to merge the CPU with the GPU, so that discrete GPUs evolve into complete PCI-based gaming APUs that are consoles/computers unto themselves, all on a PCI card.


It would be really dumb to make discrete PCI card machines. Not only would you have to pay out the butt for such a compact item but you would have to purchase a motherboard, proc, and ram just to be able to run the machine. It's not like you can just have a bare PCI express slot with nothing holding it.

On GPU latency, the best way to address that is to continue improving the PCIe standard. Beyond that, incremental hardware improvements like hUMA, plus new APIs on the software side, are what they should push for.


People pay out the nose, ears and butt for graphics cards, and Maxwell is getting a Denver ARM ISA based core added to the GPU. PCI requires encoding/encapsulating and decoding/de-encapsulating of data, and this will always carry overhead and introduce latency! Getting a CPU into close proximity to the GPU, with the CPU/GPU sharing a large on-die RAM, a fat data bus, and the GDDR5 memory (on the PCI card!) is the way to go! UMA (hUMA is just a fancy AMD marketing term) stands for Unified Memory Access, and it saves having to move huge amounts of data between non-unified memory address spaces (one 64-bit pointer takes much less time to transfer than a whole butt-load of data does the old non-unified way). Discrete GPUs already have 90% of what it takes to be a full general-purpose computer: a memory controller, GDDR5 memory, a data bus, and other on-die control blocks. So adding a general-purpose computer to the vector processor (GPU) is just a matter of adding another on-die functional block of logic, in this case a CPU, and both the CPU and GPU can share the GPU's on-die memory controller; hell, most memory controllers are almost CPUs in their own right! So how much space would an on-die CPU take up on the discrete GPU's die? Not very much: AMD crams 8 cores onto its console APUs, and one of them has a large on-die RAM, so the CPU/GPU combo (APU, or whatever name the marketing monkeys can think up) is not going to take up more than a few square millimeters of die. If you do not think a discrete GPU has a de facto motherboard on that PCI card, then how does it use the GDDR5 memory and the fat GPU-style data bus?
It is the same thing; it is just referred to as a daughter card with its own memory and address bus, and daughters do become mothers: just look at the big-iron servers and HPC/supercomputers, with thousands of motherboards each hosting independent PCI-based computer systems, 8 or more PCI slots per motherboard. If the CPU and the GPU are both on-die, they can communicate over the internal on-die data bus, and if the CPU/GPU has a large on-die RAM, then most of the time they will not have to deal with PCI-based transfers at all, as the on-die RAM is attached directly to the high-speed internal bus. If any code does not already reside in the large on-die RAM, well, the memory controller will do its job, and if the gaming engine and vital OS functions reside in the on-die RAM (with proper caching functionality they will), then games will run a whole butt-load faster on a PCI-based gaming platform.
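The pointer-versus-copy argument above can be illustrated abstractly. The following is a toy Python stand-in, not real GPU code: it only demonstrates the difference between handing over a reference to shared memory and duplicating the whole payload.

```python
# Toy illustration of the unified-memory argument: in a shared address
# space only a reference changes hands, while the non-unified model
# duplicates the entire payload. (Pure Python stand-in, not GPU code.)
SIZE = 1_000_000
buf = bytearray(SIZE)            # stand-in for a large game asset

# Non-unified model: the "GPU" works on its own private copy.
gpu_copy = bytes(buf)            # the entire payload is duplicated

# Unified model: the "GPU" addresses the very same memory.
gpu_view = memoryview(buf)       # only a reference changes hands

buf[0] = 42                      # the "CPU" updates the shared buffer
assert gpu_view[0] == 42         # the shared view sees the update...
assert gpu_copy[0] == 0          # ...but the private copy has gone stale
```

Besides avoiding the transfer cost, the sketch shows the coherence side of the argument: the copied buffer silently goes stale the moment the other side writes, which is exactly what a unified address space avoids.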


"It would be really dumb to make discrete PCI card machines. Not only would you have to pay out the butt for such a compact item but you would have to purchase a motherboard, proc, and ram just to be able to run the machine. It's not like you can just have a bare PCI express slot with nothing holding it."

Where in the name of [insert favorite deity here, or other] did you get the idea that I meant just the PCI-based gaming system, without a motherboard to hold up the slot? The motherboard is there to host the general-purpose OS; the complete gaming system on a PCI card has its own gaming-optimized OS distro (Steam OS, or other) and the game/gaming engine loaded at the time, so it does not need the assistance of the motherboard CPU or whatever once the PCI-based gaming system is booted up and loads its gaming OS and game. The motherboard CPU and OS are just there to assist in boot-up, or to pass the game/gaming engine to the gaming OS on the PCI card; in fact, the motherboard OS does not have to do any work other than monitoring the system. The PCI-based gaming platform is quite capable of doing its own disk I/O and OS booting by itself, as any computer can, via the bus-mastering and DMA circuitry that have been part of the motherboard standards for years.

Just a question, evernessince: have you ever looked at the chips on a discrete GPU? Do you not see the memory on the PCI card, the GDDR5 memory ICs, the data and address bus traces, and the GPU (vector processor) with its on-die memory controller (Nvidia since Fermi, AMD with its APUs/discrete GPUs)? And just because current discrete GPUs do not have branch prediction units and such, you think they are not processors in their own right for vector computing tasks (gaming graphics), and that they do not require memory controllers and the like for their PCI-based graphics computing, when in fact the only difference between a CPU and a GPU, or between a motherboard and a PCI daughter-card computing platform, may be as little as a branch prediction unit and a few other bits of logic added to the GPU. GPUs are computers, just not general-purpose computers, and discrete GPU-only cards are still computing platforms (vector computers).

Most people own a desktop or two, and a laptop, and having a PCI gaming platform (AMD gaming APU based, or an Nvidia Denver ARM ISA APU equivalent) on a PCI card would make any old desktop a gaming console, or two (for desktops with more PCI slots). Who would not like that? And for those with no desktop, get a Steambox; they are mostly complete desktops in a small form factor (for some SKUs), or larger for others.