1. We'll be remitting payment to SETI at Home on Monday (Jan 07, 2013) in the amount of $3485.00 USD. This is thanks in part to our generous donors as well as one very generous donor who contributed $2000 USD.

2. Some of the green star awards still aren't going through. I'll remind Matt to have a look and find out what broke in the automated system this go-around.

How To Donate
Please visit http://www.gpuug.org/catalog. From there, you will be able to donate to our current projects using Paypal, check or money order.

What We Do With Your Money
As we are a 100% volunteer-funded and -supported organization, we are able to allocate 100% of all donated funds (minus PayPal fees) toward projects which directly benefit SETI at Home.

501(c)(3) Recognized Entity
The GPU Users Group Inc. is a recognized 501(c)(3) organization in the United States. This means that your donations may be tax deductible on your federal income taxes. Our EIN for tax purposes is 45-2969708.

Sponsors
Thanks to our corporate sponsor NKOL http://www.nkol.net/, we are able to build items such as servers at a steep discount, allowing us to stretch our donors' dollars.

Direct SETI Donations
We now offer the ability to donate directly to SETI at Home via the Paypal service. If you would like to donate directly to SETI at Home please visit http://www.gpuug.org/donatecash.

A) Replacement Bruno Server: This server will be a new, dedicated Upload server for the SETI at Home project. This server will also run backend functions that will benefit from being close to the result files, e.g. the validators.

B) Replacement Vader Server: This server will be a dedicated download server as well as dedicated to running various ancillary backend functions. Any excess capacity will be used to run post/pre NTPCKR/RFI functions (e.g. RFI studies, big array/image plotting with IDL, etc.).

These two servers will have a huge impact on the project's ability to serve and collect data from the processing volunteers, which we can all appreciate in this time of troubled server-closet woes.

Here are the specs for our two new servers. The specs for both servers are identical.

While it's true that both of these servers meet the Lab-provided specs, it's also true that they substantially surpass those specs in terms of horsepower and longevity. The hope here is that these servers perform better than expected, for longer than expected.

All this splitting talk is forgetting one thing: it still would need to be split. A "tape" isn't just 107+ seconds of data. The "tape" may be several hours of data, and that data includes parameters that presently aren't being sent to us but that the splitters use in splitting the data. Aren't the "tapes" presently 2TB hard drives? Do you want to wait for that to download before you can do the next work unit? Are you willing to have work units that make CPDN work units look small?
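To put the size argument in perspective, here's a back-of-envelope sketch. The 10 Mbit/s residential link speed is an assumption for illustration, not a figure from the thread:

```python
# Rough estimate: time for a volunteer to download an entire 2 TB "tape"
# over an assumed 10 Mbit/s residential link (illustrative numbers only).

TAPE_BYTES = 2 * 10**12         # a 2 TB "tape", as discussed above
LINK_BITS_PER_SEC = 10 * 10**6  # hypothetical 10 Mbit/s downlink

seconds = TAPE_BYTES * 8 / LINK_BITS_PER_SEC
days = seconds / 86400
print(f"~{days:.1f} days to fetch one tape")  # roughly two and a half weeks
```

Even at several times that assumed link speed, pulling a whole tape client-side would take days, which is the point being made about server-side splitting.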

2TB for most everything (GBT/Arecibo), 3TB for AP reob, and a few 1TB drives tossed into the mix.

I take your point, Gary: we long-term hard-liners are OK, but there are those who want the "feel-good factor" which Matt managed to give them.

I think that maybe a fortnightly "Seti Newsletter" bringing together the latest news could be a useful idea that would give many people an update on what is happening with the project, and hopefully make them feel that they haven't been forgotten. But it would need regular input from people like Richard Haselgrove etc, the Lab guys, and the GPU Users Group on donations raised, kit bought, and so on. Could that input be regularly guaranteed?

I'd be willing to volunteer to produce such a Newsletter, provided I got enough regular input to make it worthwhile.

Chris, there's the problem: you won't get enough input. The lab staff simply won't have the time to give you what you want, and people like Richard can only give so much info.

We really need to find a way to bring in more staff to help with the workload. Our GPU fundraiser group needs to start finding fresh sources of new funds. We have a local billionaire named Thomas Golisano; I'm thinking of trying to use some connections I have to him.

To be fair, we're essentially a two-man operation. I've begged for volunteers to write grants or letters, make calls, and generally help us however they can, but so far I've gotten next to nothing in the way of volunteers.

If I had two people who could spend maybe 4 hours a week contacting potential donors, I could do some good. One day, I hope, I'll have them.

Before we started, the infrastructure of the lab was in fairly poor condition. For evidence of this, take a look at the first server donated, Synergy, and now look at how many tasks that one machine is running. Add in Paddy and George, and together these 3 servers have taken over the duties of, I believe, 8 now-retired servers.

Since then our donors have upgraded everything from the server closet (Synergy, GeorgeM, PaddyM, a new switch, more RAM, a filled JBOD) to the lab itself (workstations, desktop setups, UPSes) to the basics (120 and counting transport drives, plus protective cases). We're even upgrading how SETI collects the data you process, with our compute nodes, Brocade switches, docks and so on. Heck, our donors have even contributed a large fistful of cash.

______________________________

While I like to point to the above and rant and rave about how awesome our donors are and what they've done, the issue remains: if it doesn't show up on Jim Donor's computer in some visible way, it doesn't seem to matter. This issue is frustrating to me; however, it's completely understandable given that the scientific community is largely focused on tangible results.

______________________________

The issue we're facing is a bit understandable. Consider that we're the largest BOINC project currently running. We chew through immense amounts of data thanks to ever-increasing technology. Compare a 560 Ti to a 690, for example, and realize that advancement represents about a year's time of development.

As a result of the above, coupled with our need to upgrade infrastructure, we run into problems like the ones we've been having.

______________________________

For my end of the chain, we're going to continue to work through our donors to upgrade the project's infrastructure, in hopes that we can avoid these issues in the future. One of my primary goals is smoothing out the system while at the same time increasing the amount of data we're processing (yay, more science!).

In short, try to be understanding. We have X resources while Y (the amount of data users can process per unit time) is ever increasing. There are several logjams in our way, namely a lack of staff, a lack of proper bandwidth, and needed infrastructure upgrades. We (GPUUG) are working on fixing all of the above, but we need time, money, and volunteers who want to lend us a hand.

------------------------------

Sorry for the very long-winded response, but I hope it gives a few folks something to think about in light of the issues we've been having.

I just reported over 1,300 tasks with a max per report of 250 (in other words, six Scheduler contacts) without a hang, a timeout, or a wait.
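That contact count is just ceiling division over the report cap; a minimal sketch:

```python
import math

# Six scheduler contacts: 1,300 finished tasks reported at most 250 at a time.
tasks = 1300
max_per_report = 250

contacts = math.ceil(tasks / max_per_report)
print(contacts)  # -> 6
```

Anything from 1,251 through 1,500 tasks would still take exactly six contacts at that cap.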

Richard, I think you hit the nail on the head. 96GB of RAM isn't enough to keep Synergy from flogging the disks when it's running all of those processes.

We might be fixing that shortly. The Lab has Synergy loaded down heavy here of late.

Ah. Then can you stress to them, please - and with some force - that there is no need to gallop through splitting the tapes for AP so fast. In the short term, like before the next fresh tape appears in the queue, they could experiment with disabling the AP splitters on Synergy, and see how Lando gets on on its own. I did suggest that myself a week ago, but they chose not to act on it.

New big project coming as soon as we tie up things on the GPUUG end. More in the coming days, stay tuned!

I can only hope it sorts out the Scheduler issue. Then of course we'll need to get the bandwidth problem sorted. Then of course we'll find another bottleneck...

I agree. It seems maybe we are throwing hardware at UC Berkeley's stubbornness about a bandwidth problem.

I will donate what I can, but I'd like to hear some news on what's going on too.

Bandwidth is something we're trying to work on (GPUUG - Lab - UCB), though the process is quite Sisyphean. I wish I could throw donor money at the problem and solve it, but sadly that's out of my hands unless we do something creative (and expensive).

I'll ask Eric and/or Jeff to let us know what's up in a tech news post. Keep in mind that Dan's in China working on a very sexy SETI project and Matt's entertaining the world, so the lab's quite understaffed and slammed with these issues we've been fighting.

There was mention in a previous post by Slavac about getting a hardware load balancer, so that may help things a bit.

It might help with the network traffic to some extent, but the big issue for the last few weeks has been the Scheduler getting tied up in knots.

Agreed it won't help the scheduler, but should help the download server "balancing".

@Slavac, any news on the load balancer mentioned previously? Haven't seen a fund-raiser for it specifically.

Right now the plan is to wait on the load balancer after much discussion with the lab. The current plan (in order of likely priority, which is subject to change):

1. Compute Node for the GBT. This will be a part of the high-speed spectrometer.
1. A new upload server and download server. (Tossup as to priority with the above. We're still chewing this one over with the lab.)
2. Media converters for the lab. These will necessarily be funded, installed, and tested before we reevaluate the load balancer. The hope here is that the converters increase the much-needed internal bandwidth of the project.

My plan right now is to attempt to get the balancer donated by the manufacturer rather than fundraising for it, but again this is dependent on sorting out the media converters above. We'll likely slot the converters into priority 1 status once the specs from SSL come in.

______

As far as news on what's wrong with the project currently I'm sadly in the dark.

Also I should note we had another 3TB transport drive donated today which I'll get ordered this week.

Lastly I hope to start a new project this week, likely one of the above large ones. I'll follow that up with a new thread with the relevant data, specs and so on.

Certainly nothing wrong with that either, Uli.
LOL...Cash is king, as they say.
The GPUUG route does target items that the boyz in da lab have spec'd out and designated as priority.
And the GPUUG helps handle the purchase and shipping of such items, rather than the lab having to go through Berk channels to cut purchase orders and such.
Really more efficient for the project.

Both direct donations to SETI and via the GPUUG are equally important. We just focus on specific hardware that may be needed for A or B which frees up direct cash donations for other avenues.

Marvin was fitted with a new RAID 10 using 8 of our donated 2TB Seagate drives.

According to Eric these drives were 'really clean, no reallocated sectors on any of them.' This makes us think Seagate has gotten past its first-of-the-year QC issues and is now shipping proper hardware again.

It is still Sunday in Berkeley. Wait until Monday afternoon their time (PDT/UTC-7), then if you have had no response, ask again.

Thanks, I'll do that. Yet I thought this was all working automatically, without any manual actions by the staff!?

Your donation requires some processing by the UC Berkeley Donations Office. Once they process the donation, you should get the email acknowledgement, and the Seti@Home staff will be notified. Your green star shows up shortly thereafter (and I see it has arrived!).

If you're donating through the GPUUG, it may take some time before the donations get processed. Right now the automatic process we're using to award the green stars is broken on the SETI side. Sadly we probably can't get this fixed until Matt returns to the office in 5 weeks or so.

Good news: Your money gets to SETI
Bad news: No stars for 5 weeks or so via GPUUG donations.

1. A check for $2000 from our SETI Direct donations was delivered to S@H on Thursday. I'm always proud to be able to cut such nice checks to the organization, thanks to our donors.

2. We had a member donate a 3TB hard drive to the project today. Newegg was running a special, so instead of our listed price the drives are on special for $119. As a result, I used some of our general funds to purchase 2 instead of the one paid-for drive, which will give SETI two 3TB drives to use for AP reob data.

Ah, that could be it. I have noticed it is pulling a lot less GPU tasks. I think the fan filter is due for a cleaning. Thank you!

The best way is to build a desktop machine with a 1000W+ PSU and a motherboard that can handle 2-3 GPUs. Then populate the machine with those 2-3 GPUs (480s, 560s and higher are the most popular).

I'll catch flak for this, but your CPU doesn't really matter that much in the grand scheme of things when you're dealing with GPU computing. Both AMD and Intel make some very powerful CPUs that are quite affordable.

It all boils down to how many numbers your PC can crunch in a set amount of time.
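As a rough illustration of why the GPUs dominate, here's a sketch comparing peak throughput for a hypothetical 3-card build. Every GFLOPS figure below is an assumed ballpark for illustration, not an official benchmark:

```python
# Illustrative comparison of where crunching throughput comes from in a
# GPU-heavy build. All GFLOPS numbers are assumed ballparks, not benchmarks.
assumed_peak_gflops = {
    "host CPU (quad-core)": 100,
    "GTX 560 Ti": 1260,
    "GTX 690": 5620,
}

# A hypothetical box with three 560 Ti cards: total is the sum of its cards.
gpu_total = assumed_peak_gflops["GTX 560 Ti"] * 3
cpu_share = assumed_peak_gflops["host CPU (quad-core)"] / (
    gpu_total + assumed_peak_gflops["host CPU (quad-core)"]
)
print(f"Three 560 Ti cards: ~{gpu_total} GFLOPS peak")
print(f"CPU's share of total peak: ~{cpu_share:.0%}")
```

Under these assumed numbers the CPU contributes only a few percent of the machine's peak, which is the point: spend the budget on the cards.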