Hello,
In the BOINC client's 'Transfers' tab there is a [Retry Now] button. As the download speeds for SETI are slow and indeed stop altogether, I've been pressing this to complete the download. Now I'm running LHC as well. WUs from this project are taken only by the dual-core CPU (no OpenCL for LHC), whereas I've made SETI take only GPU WUs. So the set-up looks like this in the 'Tasks' tab:

(Despite having two physical GPU cards, only one is scheduled if both CPU cores are in use.)
OK. When I accelerate the download like this, on download completion the task is whipped up by the spare GPU and 'finishes' in < 5 seconds -

No. Because of the band it is in, i.e. B3_P1, it is probably a 100% blanked task, so it is not worth running. You've not reported it yet, so we can't be 100% sure of that, but a lot of tasks (though not all) with either B3_P1 or B6_P0 end like this.

I see. Thanks. And I can confirm all is well with BOTH GPUs on SETI and the old dual-core simulating 4 cores (cc_config.xml) with LHC - although when both GPUs are running, one of the simulated cores is given over to serving them, whereas with only one GPU loaded the set-up perseveres in a 4xCPU + 1xGPU configuration. For the moment, anyhow. I'm not yet sure whether simulating 4 CPUs produces any benefit... We'll see.

You know, it occurs to me that these radar-affected WUs might be issued in a size-reduced form. It sure would make DLs faster!

Minor Digression - If anyone could possibly shed some light on this...

Why can't the WUs subjected to 'blanking' be smaller? I mean, why not?
What is the issue with downloading from SETI@home? Is it just a simple lack of bandwidth?

-That last one might cause a few groans from our regular readers, but I've never seen a simple statement explaining it.

Don't worry about the circa 5 second WUs. You will get them every now and then. I had a bunch of them last week. If anything, they are slightly annoying given the current situation re download speeds and times.

It takes processing power to work out if the WU is only noise. It takes your machine 5 secs to decide, I'm guessing it would take a server at the lab a similar amount of time. As 10-15 WUs are produced per second, they don't have the spare CPU power to "pre-process" everything.

End result: they get sent out, and the machines in the field sort them out. Not ideal, but there isn't a simple way around this.
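To put numbers on that argument: the 10-15 WUs per second figure is from the post above, and I'm assuming a server would spend roughly the same ~5 seconds per WU that our machines do. A back-of-envelope sketch:

```python
# Rough arithmetic, not official figures.
check_time_s = 5                      # assumed per-WU screening cost (seconds)
wu_rate_low, wu_rate_high = 10, 15    # WUs produced per second (from the post)

# Sustained CPU cores needed to pre-screen every WU before sending it out.
cores_low = check_time_s * wu_rate_low
cores_high = check_time_s * wu_rate_high
print(f"Cores needed just for pre-screening: {cores_low}-{cores_high}")
```

So pre-screening everything would tie up something like 50-75 dedicated cores around the clock, which the project doesn't have spare.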

Ianab
It may only take 5 seconds to sort out at our end, but in the current climate it can take 10-20 minutes or more to get the WU, and if you get more than one at once, as I did last week, it becomes very annoying.

To the Head Shed
Can we lift the limits from 100 to 150 to push out some of the demand from the lower-order machines?

I'm missing something - precisely where, in the pipe from Arecibo to my front room, is this 'noise' applied to the signals we process? And is that assumption correct - that the signal data is modified, like someone going through a sensitive document with a black marker pen?

Raise your flags! Just kidding.

***

Tangentially-
In Message 1329935 I ranted about some configuration nonsense -
I have stopped emulation of 4 cores on my Opteron 185. I'm just letting them run native for a better, more adaptable system. Well, for my set-up it is.
Tally Ho!

I'm missing something - precisely where, in the pipe from Arecibo to my front room, is this 'noise' applied to the signals we process? And is that assumption correct - that the signal data is modified, like someone going through a sensitive document with a black marker pen?
...

Two places (for Astropulse).

The first method of replacing data affected by RFI is done at Berkeley. When they get a 2 TB disk full of data from Arecibo it is broken up into ~50.2 GB files. Before those files are made available for splitting there's another process which reads the data and checks whether multiple channels have patterns matching those produced by the RADARs in Puerto Rico. A 15th channel gets the signal indicating good or bad data. Then when either an mb_splitter or ap_splitter is reading in data to split, the bad sections are replaced with pseudo-random data which has been shaped to match the frequency distribution of the good data.
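As a rough illustration of that splice-in step - not the actual splitter code; the function, the noise shaping, and the level matching are all my own sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def blank_bad_sections(data, bad_mask):
    """Replace radar-flagged samples with pseudo-random data shaped to
    roughly match the frequency distribution of the good data.
    Illustrative sketch only - not the real mb/ap splitter code."""
    good = data[~bad_mask]
    # Estimate the magnitude spectrum of the good data (zero-padded to full length).
    target_mag = np.abs(np.fft.rfft(good, n=len(data)))
    # Start from white Gaussian noise and impose that spectrum on it.
    noise_fft = np.fft.rfft(rng.standard_normal(len(data)))
    shaped_fft = noise_fft / np.maximum(np.abs(noise_fft), 1e-12) * target_mag
    shaped = np.fft.irfft(shaped_fft, n=len(data))
    # Match the RMS level of the good data, then splice in over the bad sections.
    shaped *= good.std() / max(shaped.std(), 1e-12)
    out = data.copy()
    out[bad_mask] = shaped[bad_mask]
    return out
```

Here `bad_mask` stands in for the good/bad flags carried on that 15th channel; the replacement noise looks statistically like the surrounding signal rather than like a flat gap.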

The second method is within the Astropulse applications. The RFI detection method is different, based on the level in the DC bin of an FFT (probably detecting partial overload of the receiver IMO). The replacement data is like that used server-side.
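A crude sketch of that kind of DC-bin test - the chunk length, threshold, and median baseline here are made-up illustration values, not Astropulse's actual parameters:

```python
import numpy as np

def dc_bin_rfi_flags(samples, fft_len=256, threshold=5.0):
    """Flag FFT-length chunks whose DC bin is anomalously large,
    a stand-in for the Astropulse client-side test described above.
    fft_len and threshold are illustration values, not AP's real ones."""
    n_chunks = len(samples) // fft_len
    chunks = samples[:n_chunks * fft_len].reshape(n_chunks, fft_len)
    # The DC bin of an FFT is just the sum of the samples in the chunk.
    dc = np.abs(chunks.sum(axis=1))
    # Flag chunks whose DC level stands well above the typical level.
    return dc > threshold * np.median(dc)
```

Chunks flagged True would then have their data replaced - along with, as noted below, some samples before and after the detection point.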

For each detection of RFI, it makes sense to replace some data both before and after the detection point. The server-side replacement range is less than that in the Astropulse application. Another obvious difference is that MB processing is narrow band, so the AP detection method wouldn't make sense.

There are obvious possibilities to do things more efficiently, and at least one is being worked on. Code is being developed to not split and distribute data if it's too bad. If more than 0.28% of volunteers would donate cash to the project, it might be possible to get more staff so development of such improvements would go faster.