NVIDIA's next-generation GPU architecture, codenamed "Maxwell," will debut this February with the unexpectedly positioned GeForce GTX 750; the card launches on February 18, to be specific. Maxwell introduces a host of new features for NVIDIA, beginning with Unified Virtual Memory, which lets the GPU and CPU share the same memory space. The feature is already exposed in current CUDA releases, but Maxwell could be designed to reduce the overhead involved in making it work. The next big feature is that Maxwell GPUs will embed a 64-bit ARM CPU core based on NVIDIA's "Project Denver." This core will let the GPU reduce its dependency on the system's main processor in certain GPGPU scenarios, and bypassing the CPU in those cases could work to improve performance.

Getting back to the GeForce GTX 750 Ti, NVIDIA's aim is simple: to see how "Maxwell" performs on the existing, proven 28 nanometer silicon fab process before scaling it up to future 20 nm nodes with bigger chips. Given its name, we expect it to be positioned between the GTX 760 and the GTX 660 in terms of gaming performance, but we won't be surprised if it falls into an entirely different league in GPGPU. There are no specifications at hand.

You mean older than a rumour? The TPU GPU database entry at least has a PCI device ID backing up the claim that it is based on Kepler (GK106), while the Maxwell-related rumours have nothing to back them up; not even any public beta ForceWare release includes an .INF file that lists any Maxwell GPUs.

and if it turns out to be relatively much better in compute, coin miners are going to be all over it ... aaand it's gone


Interesting thinking; you might have something. A rushed 28 nm product that hashes close to, say, a 280X, but much cheaper and more efficient, would stand the current situation on its ear. It would be a good move for NVIDIA: basically switch all the underutilized GK106 and GK104 wafer starts to this and hope they can maintain a light sprinkle on what's still a parched desert. But does such thinking account for the fact that NVIDIA would've needed to know, even before November's "gold rush," that there could be such a surge in demand for compute MH/s, and schedule the wafer starts to anticipate it? I mean, it's been maybe 6-8 weeks since the frenzy started; that's not enough time to build even a drop of what will be sucked up in hours. This might not be a paper launch, but it might as well be, because yes, they'll be... gone! And then NVIDIA is in the same boat as AMD, which they really, really want to be, even if it doesn't advance their "gaming initiatives." I will say the MSRP will tell us how good a miner it is; if it's expensive, like $250, we'll have an inkling of what's to come.

If it sits between the GTX 660 and GTX 760 in pricing and turns out to be relatively much better in compute, coin miners are going to be all over it ... aaand it's gone


A GTX 780 Ti, the top-end card, produces ~430 KH/s, and an AMD 7870 produces about ~425 KH/s. I would be surprised if this card was able to mine better than a 780 Ti, honestly. It may be Maxwell, but it is mid-range.

A GTX 780 Ti, the top-end card, produces ~430 KH/s, and an AMD 7870 produces about ~425 KH/s. I would be surprised if this card was able to mine better than a 780 Ti, honestly. It may be Maxwell, but it is mid-range.


That is assuming that Maxwell doesn't bring any architectural changes. The whole mining performance gap comes down to certain operations that GCN does in a single cycle but a Kepler CUDA core needs several cycles for. It would be foolish of NVIDIA not to address these kinds of optimizations in a new GPU architecture.
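For context, the operation usually blamed for that gap is the 32-bit rotate at the heart of scrypt's Salsa20/8 mixing function: GCN executes it in a single instruction, while Kepler has to emulate it with two shifts and an OR. A minimal Python sketch of the operation (function names are mine, not from any miner's source):

```python
def rotl32(x, n):
    """Rotate a 32-bit word left by n bits.

    GCN has a single-cycle instruction for this; a Kepler CUDA core
    emulates it with two shifts and an OR, costing extra cycles.
    """
    x &= 0xFFFFFFFF
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF


def salsa_mix_step(a, b, c, n):
    """One add-rotate-xor step of the Salsa20/8 round used by scrypt."""
    return a ^ rotl32((b + c) & 0xFFFFFFFF, n)
```

Since scrypt hashing is essentially millions of these add-rotate-xor steps, a per-rotate cycle difference multiplies directly into the KH/s gap people see between the two architectures.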

I would be surprised if this card was able to mine better than a 780 Ti, honestly. It may be Maxwell, but it is mid-range.


Isn't it more an equation of MH/s vs. efficiency vs. cost? It's also a question of CPUs and mobos: you don't want that equipment to give less hashrate when loaded up. There's a premium for CrossFire mobos, if you can even find them, while I figure there are still plenty of 4-slot SLI-capable mobos that cost less; that makes the equivalent system more attractive, since you can use one of the many cheap Intel CPUs that are still abundant. Go look for a low-watt Sempron; they're gone too!

A GTX 780 Ti, the top-end card, produces ~430 KH/s, and an AMD 7870 produces about ~425 KH/s. I would be surprised if this card was able to mine better than a 780 Ti, honestly. It may be Maxwell, but it is mid-range.


The days when a Titan mined 350~400 KH/s and a 780 Ti mined 400+ KH/s are long in the past; with Cudaminer my Titans easily mine 512~535 KH/s each:

The GT430 is used to drive my display while the Titans mine

And yes, I know this is still a far cry from my 290X's performance (900+ KH/s) at a much lower price, but with each consecutive release Cudaminer increases the mining efficiency of CUDA-based cards, so don't dismiss Maxwell so easily. Chances are NVIDIA is going to catch up to AMD when it comes to mining efficiency, and Maxwell could well be the right architecture to do it.

And yes, I know this is still a far cry from my 290X's performance (900+ KH/s) at a much lower price, but with each consecutive release Cudaminer increases the mining efficiency of CUDA-based cards, so don't dismiss Maxwell so easily. Chances are NVIDIA is going to catch up to AMD when it comes to mining efficiency, and Maxwell could well be the right architecture to do it.


And remember, the whole point of Maxwell is that it includes a 64-bit CPU on die to handle some of the workload that would normally be offloaded to the main system CPU. This is supposed to improve CUDA performance quite a bit (how much remains to be seen, of course).

Yes. Since it is a mid-range card, even with better hardware I personally would not expect it to surpass NVIDIA's current top end in compute. I brought up the 7870 as an example because I think that is the card this one would compete with.

@15th Warlock, thanks for that. I did not know Cudaminer had made such gains, as I am already invested in AMD hardware.

If this card can put out ~450 KH/s for ~$225 at ~160 W, it would be a good alternative to AMD, comparable to the 7870. If it can do any better than that on any of those three fronts, this thing will be quite a special card.
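Putting those three fronts into one quick calculation (the 750 Ti numbers are the guesses above; the 7870 price and wattage are my own placeholder assumptions, not figures from this thread):

```python
def mining_metrics(khs, price_usd, watts):
    """Return (KH/s per dollar, KH/s per watt) for a card."""
    return khs / price_usd, khs / watts

# Hypothetical 750 Ti, using the guessed ~450 KH/s / ~$225 / ~160 W:
per_dollar, per_watt = mining_metrics(450, 225, 160)
print(per_dollar, per_watt)  # 2.0 KH/s per dollar, ~2.8 KH/s per watt

# 7870 at ~425 KH/s; the $250 / 175 W are assumed for illustration only:
print(mining_metrics(425, 250, 175))
```

On those (partly assumed) numbers the two cards land close together, which is why any improvement on even one of the three fronts would tip the comparison.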

If this card can put out ~450 KH/s for ~$225 at ~160 W, it would be a good alternative to AMD, comparable to the 7870. If it can do any better than that on any of those three fronts, this thing will be quite a special card.


Exactly. Then factor in the ease and abundance of low-cost Intel CPUs and 4-slot PCIe mobos, and the cost of putting together a quad-GPU setup for less. Let's say that whole setup hashes somewhere in the same range as a tri-280X build, and does it with less power while being easier to source parts for and less expensive to throw together. Although I suppose the cards aren't actually running in SLI or CrossFire, so you just need a cheap mobo with four slots.

Basically the performance gap is a product of AMD’s focus on integer compute performance, and Nvidia’s relative lack of interest in that aspect of GPU performance. To be clear this is not a software issue, but rather an architectural design trade-off that Nvidia made to de-emphasize integer compute in order to meet their other design goals.

Exactly. Then factor in the ease and abundance of low-cost Intel CPUs and 4-slot PCIe mobos, and the cost of putting together a quad-GPU setup for less. Let's say that whole setup hashes somewhere in the same range as a tri-280X build, and does it with less power while being easier to source parts for and less expensive to throw together. Although I suppose the cards aren't actually running in SLI or CrossFire, so you just need a cheap mobo with four slots.


Would be some nice cost savings... haha

Considering I've just bought 4 280Xs to mine with and my setup runs probably around $2,200 or so... 4 750 Tis would be a nice bit of savings if they are priced at $250-300...

Slightly chuckle-worthy dreaming going on: so one enabled 64-bit ARM core is now going to allow highly parallel algorithms to run better than AMD's arch?
I doubt it; it's aimed at gamers, so like the GK104, compute will be compromised, or NVIDIA are daft.