As I am going to do a new PC build later this year, I'll be interested in these results. I have seen several "gamer" oriented reviews that also tested "compute" capabilities, and I was not impressed by the compute results. All compute tests except for one were slower than previous gen cards. So, I still hold out hope that the GTX 680 will perform better than the previous gen cards.

As a summary of what I have read:

Power consumption is down quite a bit - TDP is around 195W under load.
Games run faster.

It's not much, I know, but that pretty much sums it up. ;)
____________

Just got my 680 in. Unfortunately I have to keep it on Windows for gaming, and I would like to know whether or not I should attach it to the project yet? Don't want to be returning a whole bunch of invalid results or errors.

I'm not sure what the situation is. Gianni indicated that he might release a Beta app, and the server status shows 17 beta tasks waiting to be sent. It's been like that for a few days. These might be for Linux only though?

You need the latest drivers, and as far as I am aware, the GTX680 will not run normal or long tasks. So I suggest you attach to the project, configure a profile to only run Betas, and see if any come your way.
____________FAQ's

It appears that they are Linux only. If I wasn't running out of drive space, I would give this rig the dual-boot, since I now know how to configure it. Don't feel like using USB, since I'm running WCG currently. Might go pick up a larger SSD this weekend in order to accommodate this.

First off let me apologize for the "tone" of my written voice, but after spending 6 hours last night trying to install Ubuntu I can say I HATE the "disgruntled" GRUB. The Windows 7 install refused to play nice with it. Kept getting a "grub-efi failed to install on /target" error, as well as MANY others. Even went to the trouble of disconnecting my Windows SATA connection, but still kept getting the same error on a fresh drive. Due to the fact that it is Easter weekend (and the habit of wanting betas a la WCG), I have decided to uninstall Windows 7 in order to accomplish my goal. Since this is mainly a crunching rig (0 files stored internally, I keep everything on external encrypted HDDs), and the one game I play was ruined by a recent "patch", having Windows does nothing for me ATM. Should have it uninstalled shortly, and hopefully with it gone maybe GRUB will not be so grumpy (and neither will I).

I'm not sure what the situation is. Gianni indicated that he might release a Beta app, and the server status shows 17 beta tasks waiting to be sent. It's been like that for a few days. These might be for Linux only though?

You need the latest drivers, and as far as I am aware, the GTX680 will not run normal or long tasks. So I suggest you attach to the project, configure a profile to only run Betas, and see if any come your way.

In order to crunch anything at all - beta or otherwise - you need both a supply of tasks and an application to run them with.

The standard BOINC applications page still seems to work, even if like me you can't find a link from the redesigned front page. No sign of a Beta application yet for either platform, which may take some time pressure off the OS (re-)installs.

Literally getting ready to de-install before you posted that... Is that because those beta WUs can't be sent to anyone on the designated platforms unless they have a 680, meaning they don't want them going to people who have the other apps but without the proper GPU to run them?

EDIT

I wouldn't think they would even bother loading betas unless they were ready to go out - why bother loading them if you're still testing in house? I would ASSUME it may not be listed in apps for the reasons stated above. Even though it is odd that nothing is listed, maybe that's just because the app doesn't matter, since this beta is related to hardware?

The beta WUs are from before; they don't go out because there is no beta app yet.

1) We will upload a NEW application for Linux, faster for any Fermi card, and it will work on a GTX680
2) It will be compiled with CUDA 4.2
3) Some days later the same app will be provided for Windows
4) Later there will be an app optimized for the GTX680, for Linux and Windows

Note that we are testing a new app, a new CUDA and a new architecture. Expect some problems and some time. Within 10 days we should have 1 and 2. Some variations on the plan are also possible; we might put out a new CUDA 3.1 app, for instance.

Any progress?
Could you please share some information about the performance of the GTX 680 perhaps?
I'm afraid that the CPU-intensive GPUGrid tasks will suffer a much greater performance penalty on a GTX 680 than on a GTX 580 (and on the other CC2.0 GPUs). Maybe an Ivy Bridge CPU overclocked to 5GHz could compensate for this penalty.

Besides that, Zoltan, from what I can tell the CPU is basically what's crippling the 680 across the board on every project.

However, I've been steadily re-reading several lines from Anandtech's in-depth review of the card itself:

1) Note however that NVIDIA has dropped the shader clock with Kepler, opting instead to double the number of CUDA cores to achieve the same effect, so while 1536 CUDA cores is a big number it's really only twice the number of cores of GF114 as far as performance is concerned.

So if I am correct - and it's 12:22 am, so give me a break if I'm wrong - since we use the shader clock, what this means is that if you were to double the cores of the 580 to 1024 you would be operating at 772 MHz (set ROPs and everything aside, as crazy as that sounds). You know, I can't figure this math out, but I will say, as posted earlier, the PrimeGrid sieve ran 25% faster on the 680 (240s vs 300s). I just keep looking at the fact that that's roughly the same difference as between the 680's 1005 MHz clock and the 580's 772. Don't really know where I was going with this, or how I was going to get there, but is that why the sieve increased by 25%? And there's also the 20% decrease in TDP. Compared to the 580, it has 1/3 more cores (1536 vs 1024), but 1/3 fewer ROPs.

Again, sorry for the confused typing, it's late, but that 25% increase in clock just kept staring at me. My bet goes to the 25% increase until the optimized app comes out to take more advantage of CC3.0.

The Kepler-optimized application is 25% faster than a GTX580 regardless of the processor for a typical WU. I don't see why the CPU should have any different impact compared to Fermi.

gdf

Any progress?
Could you please share some information about the performance of the GTX 680 perhaps?
I'm afraid that the CPU-intensive GPUGrid tasks will suffer a much greater performance penalty on a GTX 680 than on a GTX 580 (and on the other CC2.0 GPUs). Maybe an Ivy Bridge CPU overclocked to 5GHz could compensate for this penalty.

Compared to the 580, it has 1/3 more cores (1536 vs 1024), but 1/3 fewer ROPs.

A GTX 580 has 512 CUDA cores and a GTX 680 has 1536.

CUDA is different from OpenCL. On several OpenCL projects high CPU requirement appears to be the norm.

I would expect a small improvement when using PCIE3 with one GPU. If you have 2 GTX680's in a PCIE2 system that drops from PCIE2 x16 to PCIE2 x8, then the difference would be much more noticeable compared to a board supporting two PCIE3 x16 lanes. If you're going to get 3 or 4 PCIE3-capable GPUs then it would be wise to build a system that properly supports PCIE3. The difference would be around 35% of one card, on a PCIE3 x16, x16, x8 system compared to a PCIE2 x8, x8, x4 system. For one card it's not really worth the investment.

If we are talking 25% faster @ 20% less power, then in terms of performance per Watt the GTX680 is ~50% better than a GTX580. However that doesn't consider the rest of the system.
Of the 300W a GTX680 system might use, for example, ~146W is down to the GPU. Similarly, for a GTX580 it would be ~183W. The difference is ~37W. So the overall system would use ~11% less power. If the card can do ~25% more work then the overall system improvement is ~39% in terms of performance per Watt.
Add a second or third card to a new 22nm CPU system and factor in the PCIE improvements, and the new system's performance per Watt would be more significant, perhaps up to ~60% more efficient.
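The system-level arithmetic above can be sketched quickly. This is a back-of-the-envelope check only; the wattages and the 25% speed-up are the estimates from this post, not measurements:

```python
# Back-of-the-envelope check of the system-level figures above.
# All wattages are the post's estimates, not measurements.
gpu_680_w = 146.0                       # estimated GPU draw, GTX 680 under load
gpu_580_w = 183.0                       # estimated GPU draw, GTX 580 under load
system_680_w = 300.0                    # whole system with a GTX 680
system_580_w = system_680_w - gpu_680_w + gpu_580_w   # same rig, GTX 580 instead

power_saving = 1 - system_680_w / system_580_w        # ~11% less power
speedup = 1.25                                        # assumed ~25% more work
ppw_gain = speedup * system_580_w / system_680_w - 1  # ~40% better perf/W

print(f"GTX 580 system draw: {system_580_w:.0f} W")
print(f"system power saving: {power_saving:.0%}")
print(f"perf/W improvement:  {ppw_gain:.0%}")
```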
____________FAQ's

The Kepler-optimized application is 25% faster than a GTX580 regardless of the processor for a typical WU.

It sounds promising and a little disappointing at the same time (as expected).

I don't see why the CPU should have any different impact compared to Fermi.

Because there is already a 25-30% variation in GPU usage between different types of workunits on my GTX 580. For example, NATHAN_CB1 runs at 99% GPU usage while NATHAN_FAX4 runs at only 71-72%. I wonder how well the GPUGrid client could feed a GPU with as many CUDA cores as the GTX 680 has, when it can feed a GTX 580 to run at only 71-72% (and the GPU usage drops as I raise the GPU clock, so the performance is CPU and/or PCIe limited). To be more specific, I'm interested in the GPU usage of a NATHAN_CB1 and a NATHAN_FAX4 on a GTX 680 (and on a GTX 580 with the new client).

I brought the core count up to 1024 instead of 512 because I kept trying to figure out the math for what the improvement was going to be. Meaning, if I doubled the core count, I could do away with the shader clock, as they did in Kepler (I know Kepler was quadrupled, but in terms of performance it was just doubled). The math SEEMED to work out OK. So I was working with 1024 cores at a core clock of 772, which meant 1/3 more cores on the 680 than the 580 (adjusted for the doubled shader frequency). This led to a difference in shader clock of 23.2% faster for Kepler (772/1005). Which meant (to me and my zero engineering knowledge) a benefit of 56.6% (increase in the number of cores x increase in adjusted frequency). However, since there are 1/3 fewer ROPs, that got me down to 23.4% (but if I'm not mistaken, the ROP frequency is calculated off the core clock, and I learned this after adjusting for the 570, 480 and 470; once I learned the ROP frequency I quit trying).

What's weird is that this math kept LOOKING correct the further I went. There was roughly a 45% increase compared to a 570 (as shown on sieve tasks); against a 480 my math showed an increase of roughly 35%, but compared to a 470 it jumped to 61%.

Again, not an engineer, just someone who had the day off. It strikes me as odd that it seemed to work, though. Adding ROPs in may have been the mistake; I honestly don't even know how important they are for what we do. Since they're correlated with pixels (again, out of my league :) ), it could be like high memory bandwidth and not mean as much to us. The 25% and 45% increases were the ones that kept my math skills going, b/c that was what was seen on PPS sieve tasks.

Ah, coincidences..... ;) Oh, and I have been looking for a mobo that supports 3.0 @ x16, x16, but I think I've only found one that did and it was like $300. However I refuse to get one that doesn't, merely b/c I want everything at 100% (even if the extra bandwidth isn't used).

One more thing. I'm assuming Zoltan meant, as he already explained in relation to GPUGrid WUs, that like the Einstein apps, have we hit a "wall" where the CPU matters more than the GPU once you reach a certain point? As per his description some tasks are dependent on a fast CPU; someone in another forum is failing tasks because he has a 470 or a 480 (can't remember) in a Xeon @ 2.5GHz, which is currently causing him issues.

Oh, and it's not whether or not they'll finish, it's about whether or not the CPU will bottleneck the GPU. I reference Einstein b/c, as mentioned, anything above a 560Ti 448 will finish the task in roughly the same GPU time, and what makes the difference in how fast you finish the WU is how fast your CPU is. This can SEVERELY cripple performance.

The newest R300-series driver doesn't have the sleep bug, but it is a beta. CUDA 4.2 support came with 295, so it's either the beta or wait till the WHQL is released. The beta version is 301.24. Or, if possible in your situation, you can tell Windows to never turn off the display. This prevents the sleep bug, and you can do whatever you want.

Just a friendly reminder about what you're getting with anything less than a 680/670: the 660Ti will be based off of the 550Ti's board. Depending on each user's power requirements, I would HIGHLY recommend waiting for results from said boards, or would recommend the 500 series. Since a 660Ti will most likely have half the cores and a 15% decrease in clock compared to a 580, this could severely cripple the other 600-series cards as far as crunching is concerned. Meaning, a 560Ti 448 and above will, IMO (I can't stress this enough), probably be able to beat a 660Ti when it's released. Again, IMHO. This is as far as speed is concerned. Performance/watt may be a different story, but a 660Ti will be based off of a 550Ti's specs (keep that in mind).

Profile has been changed to accept betas only for that rig. Again, 50%!!!!!!!!

Sorry Einstein, but your apps have NOTHING on this jump in performance!! And that doesn't even account for performance/watt. My EVGA step up position in queue better increase faster!!! My 570 is still at #501!!!

Compared to the current production application running on a gtx580, the new app is 17% faster on the same GTX580 and 50% faster on a Gtx680.

I don't think that means the GTX680 is 50% faster than a GTX580!
I think it means the new app will be 17% faster on a GTX580, and a GTX680 on the new app will be 50% faster than a GTX580 on the present app.
That would make the GTX680 ~28% faster than a GTX580 on the new app.
In terms of performance per Watt that would push it to ~160% compared to the GTX580, or twice the performance per Watt of a GTX480 ;)
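For what it's worth, the bookkeeping in that reading can be written out explicitly. The 17% and 50% figures and the TDPs (195W vs 244W) are taken from the posts above; this is just the arithmetic, not new data:

```python
# The speed-up bookkeeping from the posts above, written out explicitly.
old_app_580 = 1.00   # baseline: current production app on a GTX 580
new_app_580 = 1.17   # new app, same GTX 580 (17% faster)
new_app_680 = 1.50   # new app on a GTX 680, vs the old-app 580 baseline

gain_680_vs_580 = new_app_680 / new_app_580 - 1   # ~28%, both on the new app

# Performance per Watt, using the cards' TDPs (195 W vs 244 W) as a rough proxy:
tdp_680, tdp_580 = 195.0, 244.0
ppw_ratio = (new_app_680 / new_app_580) * (tdp_580 / tdp_680)   # ~1.60

print(f"GTX 680 vs GTX 580 (new app on both): {gain_680_vs_580:.0%}")
print(f"relative perf/W: {ppw_ratio:.0%}")
```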
____________FAQ's

Lol. I knew that... Either way, a lot faster than my 570 that's currently attached! And 160% more efficient is amazing. Again, great work guys!!! Still need to find a mobo for Ivy that supports 3.0 at 2 x16.

Also I have one more question for those in the know. If I run a GTX680 on a PCIE2 motherboard, will it take a performance hit on that 150% figure? Could this be tested if you have time, GDF? I know it's not a high priority, but it may help people like me who don't have a next-gen motherboard make an informed decision.

All I can say on the note about the performance hit is that I'm going to THINK that it won't. PCIe 3.0 allows for 16 GB/s in each direction; for what we do, this is A LOT of bandwidth. From the results that I've seen, which are based on games, the performance increase seems to be only 5-7%. If this is the case, I would ASSUME that there wouldn't be that big of a performance hit.

The only reason that I want a PCIe 3 mobo which can run 2 cards at x16 each is because, one, I play games (well, one game), and two, it's just a mental thing for me (meaning running at full capacity) even if it's not noticed. I also don't plan on building another rig for some time, and I would like this one to be top notch ;).

It will MOST LIKELY only make a difference to those who run either a) huge monitors or b) multiple monitors using NVIDIA Surround, which I plan on doing with a 3+1 monitor setup.

Think of it like this: even the biggest tasks for GPUGrid only use a little over a GB of memory, if I'm not mistaken, so the need for 16GB/s is way overpowered, I would imagine. I'll let you know how my 680 runs once the beta is out (it's on a PCIe 2.0 mobo currently).
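For reference, the bandwidth figures being thrown around come straight from the PCIe specs: PCIe 2.0 runs at 5 GT/s per lane with 8b/10b encoding (~500 MB/s usable per direction), and PCIe 3.0 at 8 GT/s with 128b/130b encoding (~985 MB/s). A quick sketch of what that means per link width:

```python
# Usable per-direction bandwidth of a PCIe link, from the per-lane spec rates.
def link_bandwidth_gb_s(gen: int, lanes: int) -> float:
    per_lane = {
        2: 5.0 * 8 / 10 / 8,      # 5 GT/s, 8b/10b encoding   -> 0.5 GB/s per lane
        3: 8.0 * 128 / 130 / 8,   # 8 GT/s, 128b/130b encoding -> ~0.985 GB/s
    }[gen]
    return per_lane * lanes

for gen, lanes in [(2, 8), (2, 16), (3, 8), (3, 16)]:
    print(f"PCIe {gen}.0 x{lanes}: {link_bandwidth_gb_s(gen, lanes):5.1f} GB/s per direction")
```

So the "16 GB/s" figure is a PCIe 3.0 x16 link (~15.8 GB/s in practice), double the 8 GB/s of PCIe 2.0 x16.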

This post, in this thread, discusses speculatively PCIE3 vs PCIE2.
Basically, for a single card it's probably not worth the investment, for two cards it depends on what you want from the system, and for 3 or 4 it's worth it.
As you are looking to get a new system it may be worth it. Obviously we won't know for sure until someone posts actual results for both PCIE2 and 3 setups and multiple cards.
____________FAQ's

Further, IB-E won't be released until MAYBE Q3-Q4 - probably towards Christmas would be my guess - and won't really offer any benefit besides a die shrink.

I guess this explains why I was having a hard time finding a PCI 3.0 2x16 mobo. Wow, my idea of 100% GPU functionality just increased the price by about another $250. Hmmmm...

Oh, and I found this on Anandtech (though it's for an AMD GPU):

Simply enabling PCIe 3.0 on our EVGA X79 SLI motherboard (EVGA provided us with a BIOS that allowed us to toggle PCIe 3.0 mode on/off) resulted in a 9% increase in performance on the Radeon HD 7970. This tells us two things: 1) You can indeed get PCIe 3.0 working on SNB-E/X79, at least with a Radeon HD 7970, and 2) PCIe 3.0 will likely be useful for GPU compute applications, although not so much for gaming anytime soon.

It doesn't list what they ran or any specs, though.

EDIT: Well, it appears the 3820 can OC to 4.3, which would be enough for what I need. Wouldn't mind having a 6-core though; 4 extra threads for WUs would be nice but not mandatory. At $250 at Microcenter, quite a nice deal.

EDIT: Well, it appears the 3820 can OC to 4.3, which would be enough for what I need. Wouldn't mind having a 6-core though; 4 extra threads for WUs would be nice but not mandatory. At $250 at Microcenter, quite a nice deal.

I've been looking at the 3820 myself. In my opinion, that is the only SB-E to get. Techspot got the 3820 up to 4.625 GHz, and at that speed it performs pretty much as well as a 3960X at 4.4 GHz. To me, it's a no-brainer: a $1000 3960X, a $600 3930K, or a $250 3820 that performs as well as the $1K chip. According to the Microcenter web site, that price is in-store only.

Where SB-E will really excel is in applications that are memory intensive, such as FEA and solid modelling - a conclusion I came to as a result of the Techspot review, which tested the 3820 in a real-world SolidWorks usage scenario.

Anyway, IB is releasing on Monday, and it might be worth the wait. Personally, I do not think IB will beat SB-E in memory-intensive applications; however, I'll be looking very closely at the IB reviews.
____________

CUDA 4.2 comes in the drivers, which support cards as far back as the GeForce 6 series. Of course GeForce 6 and 7 are not capable of contributing to GPUGrid. So the question might be, will GeForce 8 series cards still be able to contribute?
I think these and other CC1.1 cards are overdue for retirement from this project, and I suspect that CUDA 4.2 tasks that run on CC1.1 cards will perform worse than they do now, increasing the probability of retirement. While CC1.1 cards will perform less well, Fermi and Kepler cards will perform significantly better.

There isn't much info on CUDA 4.2, but CUDA 4.1 requires 286.19 on Windows and 285.05.33 on Linux. I think support arrived with the non-recommended 295.x drivers; on one of my GTX470's (295) BOINC says it supports CUDA 4.2, the other (Linux, 280.13) says 4.0.

I would expect the high end GTX200 series cards (CC1.3) will still be supported by GPUGrid, but I don't know what the performance would be and it's not my decision. I would also expect support for CC1.1 cards to be dropped, but we will have to wait and see.
____________FAQ's

Not entirely sure why you posted that individual's user ID. The app page still says nothing new is out for beta testing. Hoping this means they finally got their Linux drivers working properly and are finally testing in house.

Maybe tomorrow?

EDIT: Tried to grab some on Windows, and still none available. Someone is definitely grabbing and returning results though.

I think it means the new app will be 17% faster on a GTX580, and a GTX680 on the new app will be 50% faster than a GTX580 on the present app.
That would make the GTX680 ~28% faster than a GTX580 on the new app.

new app on gtx 580 115 ns/day
new app on gtx 680 150 ns/day

Actually this is the worst news we could have regarding the GTX 680's shader utilization. My bad feeling about the GTX 680 has come true.
150/115 = 1.3043, so there is around a 30.4% performance improvement over the GTX 580. But this improvement comes only from the higher GPU clock of the GTX 680, because the clock speed of the GTX 680 is 30.3% higher than the GTX 580's (1006MHz/772MHz = 1.3031).
All in all, only 1/3 of the GTX 680's shaders (the same number as the GTX 580 has) can be utilized by the GPUGrid client at the moment.
It would be nice to know what is limiting the performance. As far as I know, the GPU architecture is to blame, so the second bad news is that the shader utilization will not improve in the future.
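The two ratios behind that worry are easy to verify (the throughput and clock figures are the ones reported in this thread):

```python
# The two ratios behind the shader-utilization worry above.
ns_day_680, ns_day_580 = 150.0, 115.0   # reported throughput with the new app
clk_680, clk_580 = 1006.0, 772.0        # core clocks, MHz

speedup = ns_day_680 / ns_day_580 - 1   # ~30.4% measured improvement
clock_gain = clk_680 / clk_580 - 1      # ~30.3% higher core clock

print(f"measured speed-up:    {speedup:.1%}")
print(f"core clock advantage: {clock_gain:.1%}")
# The two being nearly identical is what suggests the extra shaders sit idle.
```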

I would like to know what limits performance as well, but the shader clock speed is actually lower. Remember you have to double the core clock to get the shader clock, so the 680 = 1.1 GHz on boost, while the 580 stock is 772*2 for the shader clock. It's also more efficient. Running a 3820 @ 4.3 on WCG, and Einstein with the GPU at 80% utilization; this system currently only uses 300 W.

Actually this is the worst news we could have regarding the GTX 680's shader utilization. My bad feeling about the GTX 680 has come true.
150/115 = 1.3043, so there is around a 30.4% performance improvement over the GTX 580. But this improvement comes only from the higher GPU clock of the GTX 680, because the clock speed of the GTX 680 is 30.3% higher than the GTX 580's (1006MHz/772MHz = 1.3031).
All in all, only 1/3 of the GTX 680's shaders (the same number as the GTX 580 has) can be utilized by the GPUGrid client at the moment.
It would be nice to know what is limiting the performance. As far as I know, the GPU architecture is to blame, so the second bad news is that the shader utilization will not improve in the future.

I think you are wrong. You are looking at it totally the wrong way, concentrating on the negatives rather than the positives.

1. The card is purposefully designed not to excel at compute applications. This is a design goal for NVidia. They designed it to play games, NOT crunch numbers. 95% of people buy these cards to play games. The fact that there is any improvement at all over the 5xx series cards in GPUGRID is a TOTAL BONUS for us - and I think testament to the hard work of the GPUGRID developers and testers rather than anything NVidia have done.

2. It looks like we are going to get a 30.4% performance increase at GPUGRID and at the same time a 47% drop in power usage (and thus a drop in heat and noise) on a card that is purposefully designed to be awful at scientific computing. And you are not happy with that?

I think you should count your lucky stars we are seeing any improvements at all, let alone MASSIVE improvements in crunching per watt.

I think you should count your lucky stars we are seeing any improvements at all, let alone MASSIVE improvements in crunching per watt.

For $1000 a card, I would expect to see a very significant increase, bordering on, if not actually, massive - no luck about it. The power reduction comes with the territory for 28nm, so that's out of the equation. What is left on the compute side is a 30% improvement achieved by the 30% improvement in the GPU clocks.

From a compute angle, is it worth dropping £1000 on a card that - essentially - has only increased its clocks compared to the 580? I very much doubt it. In any case NVidia's supply of 28nm is barely adequate at best, so a high-priced 690 goes along with that, and it's likely to stay that way for a good while until 28nm supply improves.

There is little doubt that they have produced a winner for gaming; it's a beast for sure, and is going to "win" this round. I doubt though that there will be many gamers, even the hard-core "I just want the fastest" players, who will drop the money for this. $1000 is a step too far, and I believe will over time result in a real push-back on price - it's way too much when the mid-range cards will nail any game going, let alone in SLI.

Fingers crossed the project team can pull the cat out of the bag as far as GPUGRID is concerned - but it's not looking great at present - at least not for $1000 it isn't.

The only "issue" I have with the new series is that it will be on boost 100% of the time, with no way to change it. The card uses 1.175V and runs at 1105 MHz in boost (specific to each card). With the amount of stress we put these things through, and with Maxwell not out till 2014, I actually paid EVGA $25 to extend the 3-year warranty to 5. Plan on having these at LEAST till 2015, since I will have both cards be 600 series - bought one and stepped up a 570. Whenever Maxwell or the 7xx series comes out I'll buy more, but these will be in one system or another for quite some time. Even though temps at 80% utilization are 48-50, I'm not taking any chances with that high a voltage 24/7/365.

EDIT: Why does everyone keep saying the clock is faster? The core and shader clock are the same. Since we use the shader clock, it's actually slower at 1.1 GHz compared to what, 1.5 GHz on the 580. And the 680 is $500; the 690 is $1000.

EDIT AGAIN: If you already own say 5 580's or whatever, AND live in a place with high electricity costs, considering used cards can still get roughly $250, you MAY actually be able to recover the costs in electricity alone, let alone the increased throughput. AGAIN, the SHADER CLOCK is ~29% SLOWER, not faster: 1.1 GHz shader on the 680 vs 1.544 GHz on the 580 (core x2). CORE clock is irrelevant to us. Am I missing something here?

With the Fermi series the shaders were twice as fast as the GPU core. I guess people presume this is still the case with Kepler.

It's possible that some software will turn up that enables you to turn turbo off, though I expect many would want it to stay on.
Can the voltage not be lowered using MSI Afterburner or similar?
1.175V seems way too high to me; my GTX470 @ 680MHz is sitting at 1.025V (73°C at 98% GPU load).

I think the scientific research methods would need to change in order to increase utilization of the shaders. I'm not sure that is feasible, or worthwhile.
While it would demonstrate adaptability by the group, it might not increase scientific accuracy, or might require so much effort that it proves to be too much of a distraction. It might not even work, or could be counterproductive. Still, given that these cards are going to be the mainstream GPUs for the next couple of years, a methodology rethink might be worth investigating.

Not having a Kepler or tasks for one, I could only speculate on where the calculations are taking place. It might be the case that a bit more is now done on the GPU core and somewhat less on the shaders.

Anyway, it's up to the developers and researchers to get as much out of the card as they can. It's certainly in their interests.
____________FAQ's

Why does everyone keep saying the clock is faster? The core and shader clock are the same. Since we use the shader clock, it's actually slower at 1.1 GHz compared to what, 1.5 GHz on the 580.
....
AGAIN, the SHADER CLOCK is ~29% SLOWER, not faster: 1.1 GHz shader on the 680 vs 1.544 GHz on the 580 (core x2). CORE clock is irrelevant to us. Am I missing something here?

As a consequence of the architectural changes (improvements, let's say), the new shaders in the Kepler chip can do the same amount of work as the shaders in Fermi at double the core clock. That's why Kepler can be more power efficient than Fermi (and because of the 28nm lithography, of course).
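That rule of thumb can be put into numbers: count a Fermi shader at its hot clock (2x core) and a Kepler shader at the core clock, and compare cores x clock. This is a naive theoretical estimate using the cards' published specs, ignoring architecture and memory effects:

```python
# Naive theoretical shader throughput: cores x effective shader clock.
# Fermi shaders were "hot-clocked" at 2x the core clock; Kepler dropped the
# hot clock and tripled the core count instead.
def shader_throughput(cores, shader_clock_mhz):
    return cores * shader_clock_mhz     # arbitrary units (core-MHz)

gtx580 = shader_throughput(cores=512,  shader_clock_mhz=2 * 772)   # 1544 MHz hot clock
gtx680 = shader_throughput(cores=1536, shader_clock_mhz=1006)      # no hot clock

print(f"GTX 680 / GTX 580 theoretical: {gtx680 / gtx580:.2f}x")
```

On paper that comes to nearly 2x, so the ~1.3x observed here is consistent with the thread's complaint that much of the extra shader capacity goes unused at GPUGrid for now.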

No, the voltage monitor does not affect this card whatsoever. Some say you can limit it by lowering the power target, but since we put a different kind of load on the chip, at 80% utilization mine is on boost with only a 60% power load. I've tried offsetting down to the base clock (-110) but the voltage was still 1.175.

It bothers me a lot too. I mean, my temps are around 50, but as I said before, this is why I paid EVGA another $25 to extend the warranty to 5 years. If it does eventually bust, it wouldn't be my fault.

Just skimming this, I'm getting a lot of mixed signals. I read that there's a 50% increase on the 680, and also that the coding for the 680 almost isn't worth it. While I know it's just come out, should I be waiting for a 600 or not?

It's built as a gamer's card, not a compute card, and that's the big change from previous NVidia iterations, where comparable gaming and compute performance increases were previously almost a given - not on this one, nor - it seems likely - on the 690. The card also has abysmal to appalling double-precision capability, and whilst that's not required here, it does cut off some BOINC projects.

If it's gaming, it's almost a no-brainer if you are prepared to suck up the high price; it's a gaming winner for sure.

If it's compute usage, there hangs the question mark. It seems unlikely that it will perform well in a comparative sense to older offerings given the asking price, and the fact that the architecture does not lend itself to compute applications. The project team have been beavering away to see what they can come up with. The 580 was built on 40nm, the 680 is built on 28nm, but early indications only show a 50% increase over the 580 - that, like for like, given the 40nm to 28nm switch, indicates the design change and the concentration on gaming, not compute.

Don't take it all as doom and gloom, but approach 680/690 compute with healthy caution until real-world testing comes out, so your expectations can be tested and the real-world result compared with what you want.

Not a straight answer, because it's new territory - an NVidia card built for gaming that appears to "ignore" compute. Personally I am waiting to see the project team's results, because if these guys can't get it to deliver compute to a decent level that's commensurate with the asking price and the change from 40nm to 28nm, no one can. I suggest you wait for the test and development results from the project team, then decide.

If it's faster here then it's a good card for here.
For each GPU project different cards perform differently. AMD chose to keep their excellent level of FP64 in their top (enthusiast) cards (HD 7970 and 7950), but dropped FP64 to really poor levels in their mid and range cards (HD 7870, 7850, 7770 and 7750; all 1/16th).

It's not actually a new thing from NVidia; the CC2.1 cards reduced their FP64 compared to the CC2.0 cards (trimmed the fat), making for relatively good & affordable gaming cards, and they were popular.
I consider the GTX680 more of an update of these CC2.1 cards than of the CC2.0 cards. We know there will be a full-fat card along at some stage. It made sense to concentrate on the gaming cards - that's where the money's at. Also, NVidia have some catching up to do in order to compete with AMD's big FP64 cards.
NVidia's strategy is working well.

By the way, the GTX690 offers excellent performance per Watt compared to the GTX680, which offers great performance to begin with. The GTX690 should be ~18% more efficient.
____________FAQ's

Well, a 50% increase in compute speed sounds good to me, especially since NVidia had (not sure if they still do) a 620 driver link on their site, as someone here noted. But if it comes down to it, I guess a new 570 probably won't be a bad deal.

If it's faster here then it's a good card for here.
For each GPU project different cards perform differently. AMD chose to keep their excellent level of FP64 in their top (enthusiast) cards (HD 7970 and 7950), but dropped FP64 to really poor levels in their mid and range cards (HD 7870, 7850, 7770 and 7750; all 1/16th).

It's not actually a new thing from NVidia; the CC2.1 cards reduced their FP64 compared to the CC2.0 cards (trimmed the fat), making for relatively good & affordable gaming cards, and they were popular.
I consider the GTX680 more of an update of these CC2.1 cards than of the CC2.0 cards. We know there will be a full-fat card along at some stage. It made sense to concentrate on the gaming cards - that's where the money's at. Also, NVidia have some catching up to do in order to compete with AMD's big FP64 cards.
NVidia's strategy is working well.

By the way, the GTX690 offers excellent performance per Watt compared to the GTX680, which offers great performance to begin with. The GTX690 should be ~18% more efficient.

Unfortunately, both Nvidia and AMD are now locking out reasonable BOINC upgrades for users like me who are limited by how much extra heating the computer room can stand, and therefore cannot handle the power requirements of any of the new high-end cards.

I posted a question on NVIDIA's forums regarding GPU Boost, the high voltage given to the card (1.175 V), and my concerns about this running 24/7, asking (pleading) that we be allowed to turn Boost off.

An Admin's response:

Hi 5pot,

I can understand about being concerned for the wellbeing of your hardware, but in this case it is unwarranted. :) Previous GPUs used fixed clocks and voltages and these were fully guaranteed and warrantied. GPU Boost has the same guarantee and warranty, to the terms of your GPU manufacturer's warranty. :thumbup: The graphics clock speed and voltage set by GPU Boost is determined by real-time monitoring of the GPU core and it won't create a situation that is harmful for your GPU.

Hi Robert,
At present there is nothing below a GTX680, but there will be.
GeForce GT 630 and GT 640 cards will come from NVidia in the next few months.
Although I don't know how they will perform, I expect these GK107 cards will work here. These will be 50/75W cards, but when running tasks should only use ~75% of that (38/56W).

It's probably best to avoid the GF114 and GF116 GF600 cards for now (40nm). These are just re-branded GF500 cards (with Fermi rather than Kepler designs).

We should also see a GTX670, GTX660 Ti, GTX660 and probably a GTX650 Ti (or similar) within a few months. I think the GTX670 is expected ~ 10th May.

My guess is that a GTX670 would have a TDP of ~170W/175W and therefore actually use ~130W. There is likely to be at least one card with a TDP of no more than 150W (only one 6-pin PCIE power connector required). Such a card would actually use ~112W when running tasks.
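The ~75% rule of thumb used above (actual draw while crunching ≈ 0.75 × TDP) is easy to sketch; the card names and TDP figures below are just the estimates from this post, not official specs:

```python
# Rough estimate of actual power draw while crunching, using the ~75% of
# TDP rule of thumb from this thread (an assumption, not an NVidia figure).
def estimated_draw_watts(tdp_watts, load_factor=0.75):
    """Return the estimated sustained draw for a card with the given TDP."""
    return tdp_watts * load_factor

# Speculative TDPs discussed above (illustrative values only).
cards = {
    "GT 630": 50,
    "GT 640": 75,
    "GTX 670 (guess)": 175,
    "single 6-pin card": 150,
}

for name, tdp in cards.items():
    print(f"{name}: TDP {tdp}W -> ~{estimated_draw_watts(tdp):.0f}W while crunching")
```

This reproduces the figures quoted above: a 150W-TDP card works out to ~112W, and the 50/75W cards to ~38/56W.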

I think these might actually be favorable compared to their CC2.1 GF500 predecessors, but we will have to wait and see.

Unfortunately, both Nvidia and AMD are now locking out reasonable BOINC upgrades for users like me who are limited by how much extra heating the computer room can stand, and therefore cannot handle the power requirements of any of the new high-end cards.

However, a GT 645 is also listed now, and short enough that I might find some brand that will fit in my computer that now has a GTS 450. I may have to look at that one some more, while waiting for the GPUGRID software to be updated enough to tell whether the results make it worth upgrading.

And that's only the suggested specs - OEMs are free to do whatever they want with clock rates.

I see what you mean about RAM sizes.

However, a GT 645 is also listed now, and short enough that I might find some brand that will fit in my computer that now has a GTS 450.

if you want a rebranded GTX560se..

I see nothing about it that says Fermi or Kepler. But if that's correct, I'll probably wait longer before replacing the GTS 450, but check if one of the Kepler GT 640 versions are a good replacement for the GT 440 in my other desktop.

These have already been released as OEM cards. Doesn't mean you can get them yet, and I would still expect retail versions to turn up, but exactly when I don’t know.
Anything that is PCIE2 probably has a 40nm Fermi design. Anything PCIE3 should be Kepler.
GeForce GT 600 OEM list :
GT 645 (GF114, Not Kepler, 40nm, 288 shaders) – should work as an entry level/mid-range card for GPUGrid
GT 630 (GK107, Kepler, 28nm, 384 shaders) – should work as an entry level card for GPUGrid
GT 620 (GF119, Not Kepler, 40nm, 48 shaders) – too slow for GPUGrid
605 (GF119, Not Kepler, 40nm, 48 shaders) – too slow for GPUGrid
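For anyone scripting their upgrade decision, the OEM list above boils down to a small lookup table. This just re-encodes the post's information; the suitability flags are the poster's opinions, not benchmark results:

```python
# GeForce GT 600 OEM cards from the list above (thread info, not verified specs).
# Fields: (chip, architecture, process_nm, shaders, suitable_for_gpugrid)
GT600_OEM = {
    "GT 645": ("GF114", "Fermi",  40, 288, True),   # entry/mid-range card here
    "GT 630": ("GK107", "Kepler", 28, 384, True),   # entry level card here
    "GT 620": ("GF119", "Fermi",  40, 48,  False),  # too slow for GPUGrid
    "605":    ("GF119", "Fermi",  40, 48,  False),  # too slow for GPUGrid
}

def is_kepler(card):
    """True if the listed card uses a Kepler (28nm) design rather than Fermi."""
    return GT600_OEM[card][1] == "Kepler"
```

Note how only one card in the OEM line-up is actually Kepler, which matches the PCIE2/PCIE3 rule of thumb mentioned in the post.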

Probably that's the clue we will have.

Only one thing left on the bright side: the low-TDP Kepler version of the GT 640 will most likely even show up fanless.

Just skimming this I'm getting a lot of mixed signals. I read that there's a 50% increase on the 680, and also that the coding on the 680 almost isn't worth it. While I know it's just come out, should I be waiting for a 600 or not?

This 50% increase is actually around 30%.
The answer depends on what you prefer.
The GTX 680 and especially the GTX 690 are expensive cards, and they will stay expensive at least until Xmas. However, considering the running costs, they could be worth the investment in the long term.
My personal opinion is that nVidia won't release the BigKepler as a GeForce card, so there is no point in waiting for a better cruncher card from nVidia this time. In a few months we'll see if I was right in this matter. Even if nVidia releases the BigKepler as a GeForce card, its price will be between that of the GTX 680 and the 690.
On the other hand, there will be a lot of cheap Fermi based (CC2.0) cards, either second-hand ones or some "brand new" from a stuck stockpile, so one could buy approximately 30% less computing power at half (or maybe less than half) the price.
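The trade-off described above (roughly 30% less throughput at about half the price) is easy to put into numbers. The ratios below come from this post's rough estimates, normalised to the GTX 680; they are illustrative, not real market data:

```python
# Compare upfront price-per-performance using the rough ratios from the post.
# All numbers are illustrative placeholders, not real prices or benchmarks.
kepler_price, kepler_perf = 1.0, 1.0   # GTX 680 as the baseline
fermi_price, fermi_perf = 0.5, 0.7     # ~half the price, ~30% less performance

kepler_value = kepler_perf / kepler_price   # performance per unit of money
fermi_value = fermi_perf / fermi_price

print(f"Fermi gives {fermi_value / kepler_value:.2f}x the upfront performance per dollar")
# Running costs (power draw over months of 24/7 crunching) would shift
# the comparison back toward Kepler over time, as the post notes.
```

On these placeholder numbers the cheap Fermi card wins on upfront value, which is exactly why the running-cost argument matters for a 24/7 cruncher.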

Until the GF600 app gets released there's not much point buying any GF600.

Upgrading to a GF500, on the cheap, seems reasonable (and I've seen a few at reasonable prices), but I expect that when the GTX 670 turns up (launches next week, supposedly) we will see a lot of price drops.

The GTX690 didn't really change anything; firstly there are none, and secondly a $999 card is way beyond most people, so it doesn't affect the prices of other cards. In fact the only thing it really competes against is the GTX680.
I suppose a few people with HD 6990s and GTX 590s might upgrade, but not many, and not when they can't get any.

I have a feeling 'full-fat' Kepler might have a fat price tag too. I'm thinking that the Quadro line-up will expand to include amateur video editors as well as professionals. The old Quadros were too pricey and most just used the GeForce Fermi cards, but now that the GF600 has put all its eggs in the gaming basket, there is nothing for video editors. The design of the GTX690 suggests as much. The Teslas might also change, possibly becoming more university-friendly.
____________FAQ's

Unfortunately, both Nvidia and AMD are now locking out reasonable BOINC upgrades for users like me who are limited by how much extra heating the computer room can stand, and therefore cannot handle the power requirements of any of the new high-end cards.

The solution is easy, don't vent the hot exhaust from your GPU into the room. Two ways to do that:

1) Get a fan you can mount in the window. If the window is square/rectangular then get a fan with a square/rectangular body as opposed to a round body. Mount the fan in the window then put the computer on a stand high enough to allow the air that blows out of the video card to blow directly into the fan intake. Plug the open space not occupied by the fan with whatever cheap plastic material you can find in a building supply store, a painted piece of 1/4" plywood, kitchen counter covering (arborite) or whatever.

2) I got tired of all the fan noise so I attached a shelf outside the window and put both machines out there. An awning over my window keeps the rain off, but you don't have to have an awning; there are other ways to keep the rain off. Sometimes the wind blows snow into the cases in the winter but it just sits there until spring thaw. Sometimes I need to pop a DVD in the tray so I just open the window. I don't use DVDs much anymore so it's not a problem. I screwed both cases to the shelf so they can't be stolen. It never gets much colder than -30 C here and that doesn't seem to bother them. Now I'm finally back to peaceful computing, the way it was before computers needed cooling fans.

CUDA4.2 comes in the drivers which support cards as far back as the GeForce 6 series. Of course GeForce 6 and 7 are not capable of contributing to GPUGrid. So the question might be, will GeForce 8 series cards still be able to contribute?

At this point, I run the short queue tasks on my 8800 GT. It simply cannot complete long queue tasks in a reasonable time. If tasks in the short queue start taking longer than 24 hours to complete, I will probably retire it from this project.

That said, if CUDA4.2 will bring significant performance improvements to fermi, I'll be looking forward to it.

As to the discussion of what card to buy, I found a new GTX 580 for $370 after rebate. Until I complete my new system, which should be in the next two weeks or so, I have been and will be running it in the machine where the 8800 GT was. It is about 2.5x faster than my GTX 460 on GPUGrid tasks.

As I see it, there are great deals on 580s out there considering that about a year ago, these were the "top end" cards in the $500+ range.

How is DP performance on 670s? Given DP performance on 680s, I would expect that DP performance on the 670 would be worse than the 680.

I know power consumption is not optimal on the 580 compared to the 600 series in most "gamer" reviews I have seen; however, I chose the 580 since I run a project that requires DP capability. For projects that require DP, I would not be surprised if the 580 is actually more power-efficient than any of the 600 series, as the 680's DP benchmarks are a fraction of the 580's. On the project I run, Milkyway, I am seeing a similar 2.5-3x performance gain with the GTX 580 over my GTX 460.

Unfortunately, anyone considering a GPU has many factors to consider and that only makes the task of choosing a GPU harder and more confusing.

For a GPU dedicated to GPUGrid, a 600 series card may be an optimal choice; however, for anyone running projects that require DP capability, 600 series may be disappointing at best.
____________

I might have suggested the release of a Windows app and worry about the Linux app when new drivers turn up, if it wasn't for the fact that NVidia are not supporting WinXP for their GeForce 600 cards.

What's the performance like on Win7?
Is there still a ~15% loss compared to Linux?

It looks like the GTX670 will be released on the 10th May. Hopefully supply will be able to meet demand and prices of both the GTX680 and GTX670 will be competitive. If it turns up at ~£320 the GTX670 is likely to be a card that attracts more crunchers than the GTX680 (presently ~£410), but all this depends on performance. I expect it will perform close to a GTX580, but use less power (~65 or 70% of a GTX580).

I'm in no rush to buy either, seeing as an app hasn't been released for either Linux or Windows, and the performance is somewhat speculative.
____________FAQ's

I'm glad you guys were able to get it out for Linux. I know it's been hard with the driver issues. Is there a timeframe for a Windows beta app yet? I've got another 680 on the way, and a 670 being purchased soon. Would love to be able to bring them over here.

The failed workunits are 'ordinary' long tasks, which use the old application - no wonder they're failing on your GTX 680.
You should set up your profile to accept only beta work for a separate 'location', and assign the host with your GTX 680 to this 'location'.