I’ve been GPUGrid-ing since 2009. Over the years I’ve invested about €2k in PCs and €1K in GPUs because I believe in this project.

During the past 10 days I’ve become disillusioned. For whatever reason, the scientists limited the number of WUs available. My contribution has dropped, dramatically, from its usual 1M credits per day.

Today, she who must be obeyed informed me that our monthly electric bill will increase by €60 in the coming year. I have little doubt that most of that is running two PCs, 24/7, for GPUGrid.

I’ve been GPUGrid-ing since 2009. Over the years I’ve invested about €2k in PCs and €1K in GPUs because I believe in this project.

I do believe in this project too.

During the past 10 days I’ve become disillusioned.

That's quite strange, as I have the feeling that during these past 10 days we've been contributing to the most promising research in GPUGrid's history.
It's actually been going on since December 2014.

For whatever reason, the scientists limited the number of WUs available.

That's the wrong perspective: during the past 10 days there were always over 2,000 long workunits in progress.
So I really admire the staff's enthusiasm (mostly Gerard's, I think), as the current method needs continuous babysitting of the workunits.

My contribution has dropped, dramatically, from its usual 1M credits per day.

My contribution has fluctuated more than usual, so I've raised my cache setting to the running time of one long workunit.
Since then it's been almost back to normal.
Perhaps your backup project kicked in, if you have one.
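For anyone wanting to copy that trick: BOINC's cache is set in days, so the runtime of one long workunit has to be converted. A minimal sketch (the 8-hour runtime below is an assumed example, not a project figure):

```python
def runtime_to_cache_days(wu_hours):
    """Convert a workunit runtime in hours into a value for BOINC's
    day-based 'store at least X days of work' setting."""
    return wu_hours / 24

# Assuming a long workunit takes about 8 hours on your GPU:
print(runtime_to_cache_days(8))  # about 0.33 days
```

Rounding up a little gives some headroom against fluctuating availability.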

Today, she who must be obeyed informed me that our monthly electric bill will increase by €60 in the coming year. I have little doubt that most of that is running two PCs, 24/7, for GPUGrid.

That's true. If you're crunching on your CPUs (an 8-core AMD, and a 4-core + HT i7-920), each of them can consume almost as much as a GTX 980.
But there's a small contradiction: if there's a shortage of workunits, that actually decreases your electricity bill.

Your second conclusion holds under any circumstances, so I don't think it was really a factor in your decision.
I'm sorry to see you go; please don't take my comments personally.
I've shared my point of view on the events of the past 10 days because I hope it can help prevent others from becoming this disillusioned.

I agree with 99.98% of what Retwari has said; maybe you have something else going on?

I am sorry for your loss (and GPUGRID surely is too). Excuses at this point may be worthless, but I'll take the opportunity to describe the situation in the lab over the last few weeks.

Lots of software changes have been happening in the last three weeks (essentially all of the analysis/simulation-building programs have been recoded almost from scratch from Matlab to Python), and most of the lab has been dedicated to this endeavour. This left me as the only net contributor to GPUGRID and, as I explained elsewhere (https://www.gpugrid.net/forum_thread.php?id=3570), the strategy I took is to send the exact number of workunits needed to keep the whole network busy while allowing a fast turnover following our adaptive scheme (in which each simulation generates a new one after analysis of the results of the previous ones). This is the cause of the slight drop in WU availability.

Please let me know if this is a real inconvenience for the majority of you, and I will change this strategy to a more user-friendly (though, I think, less efficient) one.

Tomba, I hope you can take a chance to reconsider your decision. Users are the most important part of GPUGRID (besides the science), because nothing could have been done without contributors like you. I'd like to remind everyone that contributing is not an obligation and must always be done according to our personal means. I think donation in general (not only to GPUGRID) should be measured not in absolute terms but in relative terms. Therefore, we would be very happy if you could keep on contributing without it interfering with your household budget.

Gerard, please rethink and change the strategy. Suggestions have been made: you could simply have a priority queue and a secondary queue to ensure a constant flow of work, assuming drive space isn't as much of a problem these days.

The 'optimise towards turnover efficiency' strategy has the detrimental effect of putting crunchers off, thus reducing the participant number. In the short run this isn't noticeable, unless someone speaks up, but it's a long term factor. If people don't get work when they want it, they go elsewhere or just stop crunching.
____________FAQ's

Can't wait to have my own house, move to a state with cheaper electricity, and go nuts with electronics!
You can crunch for only 12 hours a day if you want (though right now it's getting cold and I'm thinking of adding more GPUs as heaters!).
If work units are limited, then I would go with single high-end GPU configurations so you can still game pretty well.
____________

Gerard, please rethink and change the strategy. If people don't get work when they want it, they go elsewhere or just stop crunching.

In the two months since I stopped crunching for GPUGrid I've kept a close eye on developments. Developments? None. Nothing has changed. I see many "no work" posts.

So - my secondary rig, which crunched 24/7 with a 770, I shipped to my daughter, with the 780Ti from my primary rig. I'm sure her family will now enjoy top-of-the-line gaming. My primary rig now has just the 750Ti.

I've spent, for retired me, serious cash on hardware for GPUGrid; around $3K. And I'm left with a redundant 770, 2x redundant 660s and a primary PC that has just one of its four GPU slots populated.

AND... running two PCs 24/7 has consumed much electricity. I was happy to do that but so often they were on but doing nothing for GPUGrid.

In the two months since I stopped crunching for GPUGrid I've kept a close eye on developments. Developments? None. Nothing has changed. I see many "no work" posts.

The reason for those "no work" problems is bad driver installations.
If you take a look at the GPUGrid CPD chart, you can see the "developments" since you left (+25% credits awarded).
There are 3,244 tasks in progress and 201 ready to send (and effectively more than that, because new steps are generated from the results).

AND... running two PCs 24/7 has consumed much electricity. I was happy to do that but so often they were on but doing nothing for GPUGrid.

Everybody appreciates your contribution.

The party's over...

It's over for you; that was your decision and we've accepted it.
But if you were to ruin the fun we're having at the party, which still goes on without you, that would be really unmannerly of you.

Please don't leave us - we need all contributions. I am also retired and, like you, have spent, and continue to spend, significant resources on Internet-based research. I limit my contribution to suit my budget and am happy to make whatever input I can to GPUGrid, WCG, malaria, etc., etc.

Every contribution helps and if there are no WUs available, wait until the work is ready for processing.

Tomba, you could have a rig running part time, say overnight, if your electricity company has an off-peak rate. Sure, it's not as productive, but you're still contributing what you can and your electric bill isn't as bad as before. Even if they don't have an off-peak rate, you could run the machines part-time to minimise the cost.
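To put some rough numbers on the off-peak idea (every figure below is an assumption for illustration: measure your rig's wall draw with a wattmeter and check your own tariff):

```python
# Rough monthly electricity cost for a crunching rig.
# All figures are illustrative assumptions, not measurements.

def monthly_cost(watts, hours_per_day, rate_per_kwh, days=30):
    """Energy cost in currency units for one month of crunching."""
    kwh = watts / 1000 * hours_per_day * days
    return kwh * rate_per_kwh

RIG_WATTS = 350        # assumed wall draw of one GPU rig under load
PEAK_RATE = 0.25       # assumed daytime rate, EUR per kWh
OFF_PEAK_RATE = 0.15   # assumed overnight rate, EUR per kWh

full_time = monthly_cost(RIG_WATTS, 24, PEAK_RATE)
overnight = monthly_cost(RIG_WATTS, 8, OFF_PEAK_RATE)

print(f"24/7 at peak rate:  EUR {full_time:.2f}/month")
print(f"8h/night off-peak:  EUR {overnight:.2f}/month")
```

With these assumed numbers the overnight-only schedule costs roughly a fifth of running 24/7, while still delivering a third of the compute.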

As for the lack of work units: I've been away from GPUgrid for most of our summer and autumn due to the heat. It's winter down here now (Sydney, Australia) and things have finally cooled down, so I have fired up my two GPUgrid machines with a GTX 970 in each. Getting work units doesn't seem to be a problem. I am running a minimal cache on them (0.05 days minimum plus an additional 0.15 days). They download another work unit when they get to about 90% done.

I'll be dropping off again next week as I replace the two GPUgrid crunchers (but not the GTX 970s).
____________BOINC blog

I can see how an electricity bill of €180 would cause someone to stop crunching. Factoring in hardware costs that's a huge amount of money.

There are ways to minimize the cost of crunching though.

1. Buy used hardware. You can get 40-50% off the initial cost.
2. Use efficient hardware. Get a wattmeter and make sure it's really efficient :)

I suspect that a good portion of Tomba's electricity costs came from the AMD FX-8350 and i7-920 CPUs (also a Pentium 4???). Changing those to 14 nm Intel chips could have shaved 150 W or more off your power consumption.

3. Sell older hardware before it becomes obsolete... the 770 is/was still worth good money.

4. Crunch only in the winter, so you're using the waste heat. Whatever you spend on crunching is then effectively part of your heating bill. Also, crunching only 3-4 months a year will make your annual cost much more bearable.

I have a GTX 750, a GTX 1060 and 3x 12-14 core Xeons crunching, and the heat is just enough that I can turn off the gas heating and still get a comfortable 22 °C in the house.

Crunching this way, electricity costs aren't an issue at all. Even the 12-core Xeons cost $120 from eBay, so hardware costs can be kept down as well if you look for good deals.
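As a back-of-the-envelope illustration of tips 1 and 4 combined (all prices, wattages and rates below are assumptions, not quotes):

```python
# Toy first-year cost comparison: new hardware running year-round
# versus used hardware running only through the four heating months.
# Every number here is an illustrative assumption.

RATE = 0.25        # assumed EUR per kWh
RIG_WATTS = 400    # assumed wall draw under load

def yearly_kwh(watts, months, hours_per_day=24):
    """Energy used over the given number of 30-day months."""
    return watts / 1000 * hours_per_day * 30 * months

new_hw, used_hw = 1000, 550   # assumed prices (used at ~45% off)

year_round = new_hw + yearly_kwh(RIG_WATTS, 12) * RATE
winter_only = used_hw + yearly_kwh(RIG_WATTS, 4) * RATE

print(f"new rig, 12 months:  EUR {year_round:.0f} first year")
print(f"used rig, 4 months:  EUR {winter_only:.0f} first year")
```

And in the winter-only case, part of that electricity would have been spent on heating anyway, so the effective cost is lower still.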

Please don't leave us - we need all contributions.
+1 :) Stay with us Tomba

Yes, please. We all appreciate very much what you have done so far. And the new Pascal GPU generation is really a good argument to upgrade, keep on crunching and do good.

For example, the new GTX 1070 is an extremely powerful card which clearly surpasses the GTX 780 Ti in GFLOPS yet pulls 100 W(!) less power. For gaming it even outperforms the 980 Ti. The 1070 has a good price-performance ratio, as the cheapest KFA card is available from 400 USD. In view of the benchmarks, that is a real bargain.

Alternatively, you may want to consider the GTX 1060 3GB, which provides an even better price-performance ratio in terms of crunching. But please note that the 1060 no longer supports SLI, in case you want to buy two and play games now and again.

I suspect that a good portion of Tomba's electricity costs came from the AMD FX-8350 and i7-920 CPUs (also a Pentium 4???). Changing those to 14 nm Intel chips could have shaved 150 W or more off your power consumption.

Yes, you may want to get an affordable second-hand Ivy Bridge (3rd gen) or Haswell (4th gen) system to run your GPUs. That will significantly reduce your energy consumption. For example, a cheap i5-3450 is 20% faster than the i7-920 (in both single- and multi-core benchmarks) but needs only HALF the power.
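The efficiency argument in this sub-thread boils down to GFLOPS per watt. A quick sketch using approximate single-precision GFLOPS and TDP figures (ballpark numbers from public spec sheets; exact values vary by board and boost clock):

```python
# Rough performance-per-watt comparison.
# (approx. FP32 GFLOPS, TDP in watts) -- treat as ballpark figures.
cards = {
    "GTX 780 Ti": (5000, 250),
    "GTX 1070":   (6500, 150),
    "GTX 1060":   (3900, 120),
}

for name, (gflops, tdp) in cards.items():
    print(f"{name}: {gflops / tdp:.1f} GFLOPS per watt")
```

Even with generous rounding, the Pascal cards come out roughly twice as efficient as the 780 Ti, which is why an upgrade pays for itself partly through the electricity bill.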
____________I would love to see HCF1 protein folding and interaction simulations to help my little boy... someday.