GeForce GTX 750 Ti May Be Maxwell... and February?

Well, this is somewhat unexpected (and possibly wrong). Maxwell, NVIDIA's new architecture to replace Kepler, is said to appear in February in the form of a GeForce GTX 750 Ti. The rumors, which sound iffy to me, claim that this core will be produced at TSMC on a 28nm fabrication process and later transition to the foundry's 20nm lines.

As if the 700-series family tree was not diverse enough.

Maxwell may be much closer than expected.

The Swedish site SweClockers has been contacted by "sources" claiming that NVIDIA has already alerted partners to prepare for a graphics card launch. Very little information is given beyond that; they do not even have a suggested GM1## architecture code. They simply claim that partners should expect a new video card on the 18th of February (what type of launch that will be is also unclear).

This also raises the question of why the mid-range card would come before the high end. If the 28nm rumor is true, it could simply be that NVIDIA did not want to wait for TSMC to be able to fabricate its high-end part when it already had a version of the architecture that could be produced now. It could be as simple as that.

The GeForce GTX 750 Ti is rumored to arrive in February to replace the GTX 650 Ti Boost.

Seems a bit odd that they would launch their new architecture with a very mid-range graphics card. I would have expected something along the lines of an 880, but of course that would make the just-released 780 Ti pretty irrelevant.

There wouldn't be a point. Nvidia's profit margins on their current, ultra-refined GK110/GK104 dies are sufficient to keep them around for a while. Where Nvidia needs to capture some market share is in the "sweet spot" of GPUs, where most of their sales and gross profit are made.

Sticking with the 28nm process makes sense; they had enough problems with it a couple of years ago, so I imagine 20nm may have at least a shortage problem for a while. Reusing parts of the 7xx family also makes some sense. Naming it 7xx does not: it breaks the naming convention they have been establishing. I can't think of a precedent right off.

The only thing that comes to mind is that there may be an issue with the stability of the architecture when they push it to full production. A mid-range part could provide a testing ground for stability issues that would only surface at higher power thresholds.

There is also the possibility that they are simply ready to put the architecture to market at this power threshold and want to provide the product for their customers.

Either way, the positive note to take from this is that a new architecture is being brought to market, and that provides the opportunity for development.

Maybe that SKU, or the midrange generally, is a big money maker for them and they need the sales, as the high end may not produce enough revenue! Also, no mention of the Denver ARM core. AMD's gaming APUs must have greatly impressed Nvidia, what with having CPU cores placed in such a low-latency spot, on die right next to the GPU, and possibly sharing the wide GPU data bus and GDDR5 memory. I can see that having one or more CPU cores right next to the GPU (without having to communicate through any latency-adding PCIe encoding and decoding protocols), sharing the same on-die memory controller, perhaps a large on-die RAM, along with GPU-style wide-bus access to GDDR5 memory, with the possibility of a discrete-card-based GPU/CPU combo (call it a gaming APU, or whatever) hosting a gaming OS and engine, may just be the future end result of the merging of CPU with GPU.

Can You see the future?

Hey mister, you got CPU all over my GPU! And you, you got GPU all over my CPU! Oh man, that sure games great!

It makes perfect sense to start with a mid-range product, especially on a significant die shrink: it means higher yields and greater margins. Nvidia will be able to maximize the number of usable dies out of a single wafer, as they will be able to "disable" malfunctioning SMX units without ditching the entire die.
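The yield argument above can be illustrated with a toy calculation. This is a rough sketch under my own assumptions, not anything from the article: it uses a simple Poisson yield model, made-up defect densities and die counts, and a made-up salvage fraction for dies that can be sold as cut-down SKUs after a bad SMX is fused off. The point it demonstrates is only the direction of the effect: smaller dies yield a higher fraction of good parts per wafer.

```python
import math

def yield_rate(defect_density, die_area_cm2):
    # Poisson yield model: probability a die has zero defects,
    # given an average defect density (defects per cm^2).
    return math.exp(-defect_density * die_area_cm2)

def usable_dies(dies_per_wafer, defect_density, die_area_cm2,
                salvage_fraction=0.5):
    # Fully working dies plus a fraction of defective dies that can be
    # salvaged by disabling the damaged unit (binned as a cut-down SKU).
    # salvage_fraction is a made-up illustrative number.
    good = dies_per_wafer * yield_rate(defect_density, die_area_cm2)
    defective = dies_per_wafer - good
    return good + defective * salvage_fraction

# Hypothetical comparison: a ~550 mm^2 high-end die vs. a ~150 mm^2
# mid-range die at the same defect density (0.2 defects/cm^2, invented).
big_usable = usable_dies(dies_per_wafer=100, defect_density=0.2, die_area_cm2=5.5)
mid_usable = usable_dies(dies_per_wafer=380, defect_density=0.2, die_area_cm2=1.5)

big_fraction = big_usable / 100   # usable fraction, big die
mid_fraction = mid_usable / 380   # usable fraction, mid-range die
```

With these invented numbers, the mid-range die turns a noticeably larger fraction of its wafer into sellable parts, which is the margin advantage the comment describes.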