Thanks again. I had already come across that link, but the information I saw there is out of my league, to be honest ;). But I'll nose around further and see where it may lead me.

Regarding the servers of the SETI project: are we to assume, then, that the system (the software on the SETI project servers) lacks some kind of safety mechanism preventing these problems? I mean, the more VHARs they send out, the more traffic they generate, because "us crunchers" chew right through them at very high speed and ask for lots of new units in return, eventually choking up traffic for everybody. It seems the system is unable to maintain some kind of balance between the 3 main types of units being sent out to the crunchers. That, of course, would mean it would have to be able to select data from different tapes, better yet from multiple tapes holding different types of recording (as you've specified earlier), to maintain a balanced mixture of units being sent out. If I understand it correctly, the data from the different tapes that have been split and are being sent out right now are all 'basketweave' mode recordings, leading inevitably to the current problems (a shorties storm).
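To illustrate the balancing idea above: this is a hypothetical sketch (not the actual SETI/BOINC feeder code) of how a server could interleave work split from tapes of different recording modes, so no single type dominates what gets sent out. The type names and queue contents are made up for the example.

```python
# Hypothetical sketch: round-robin across tapes of different recording
# modes so the outgoing work mix stays balanced, instead of draining
# one tape (e.g. all-'basketweave') before moving to the next.
from collections import deque
from itertools import cycle

def balanced_feed(tapes):
    """Interleave workunits drawn from several per-type queues.

    `tapes` maps a type name ('vlar', 'midrange', 'vhar') to a list of
    workunits split from tapes of that recording mode.
    """
    queues = {name: deque(wus) for name, wus in tapes.items()}
    order = cycle(queues)           # visit the types in a fixed rotation
    sent = []
    remaining = sum(len(q) for q in queues.values())
    while remaining:
        name = next(order)
        if queues[name]:            # skip types that have run dry
            sent.append(queues[name].popleft())
            remaining -= 1
    return sent

mix = balanced_feed({
    "vlar": ["v1", "v2"],
    "midrange": ["m1", "m2"],
    "vhar": ["h1", "h2", "h3", "h4"],
})
# The first sends alternate types rather than being all 'vhar',
# even though 'vhar' units outnumber the others.
```

With this kind of rotation a surplus of one recording mode only shows up once the other queues are exhausted, rather than from the very first request.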

Well, we're back up after the weekly outage, but the Scheduler appears to be struggling already.
Most requests result in "Couldn't connect to server" messages & the uploads are pretty hit or miss at the moment as well.
Maybe Bruno & Synergy have got more than they can handle?
Grant
Darwin NT

well,
There is a hundred megabits of bandwidth going somewhere doing something,
One thing i do know about it is it aint comeing here an doing anything.
Looks like Einstein will be heating the house tonight :¬)

If you werent english i would ask you to translate....... to english

By your command !!!

Yes, things haven't come back right for the last 3 outages now, so it seems that something is being overlooked somewhere.

Or all the shorties presently in the system have found yet another limitation.

We had, what- a couple of months? where there were barely any shorties at all, and still quite a few VLARs being sent out. Now we've got shorties, lots & lots of shorties. So one of my cards, instead of doing 3 WUs every 15-20 min, is now doing 3 every 4-5 min. Over 3 times the throughput.
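The arithmetic behind that "over 3 times" figure checks out; a quick sketch using the numbers quoted above:

```python
# Throughput comparison from the figures above: 3 WUs per 15-20 min
# before the shorties arrived, versus 3 WUs per 4-5 min now.
def wus_per_hour(wus, minutes):
    return wus * 60 / minutes

before = (wus_per_hour(3, 20), wus_per_hour(3, 15))   # 9 to 12 WU/hr
after  = (wus_per_hour(3, 5),  wus_per_hour(3, 4))    # 36 to 45 WU/hr

ratio_low  = after[0] / before[1]   # most conservative case: 36/12 = 3.0x
ratio_high = after[1] / before[0]   # most generous case:     45/9  = 5.0x
```

So the card is requesting work somewhere between 3 and 5 times as often as before, which is exactly the kind of extra scheduler load the thread is describing.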
Could be the systems are hitting their limit of RAM, or they need more HDDs for more I/O.

Whatever it is, the problems- inability to upload & frequent "Project has no tasks available", "No tasks sent" or "Scheduler reached timeout" messages- all seem to have started around the time the shorties started coming through en masse.

EDIT- one system finally managed to upload enough WUs to request more- and now the Scheduler is just timing out on the request. By the time it can request more work again, there'll be too many uploads backed up for it to be able to do so.
Grant
Darwin NT

What is the explanation behind a "shorties storm"? They don't seem to originate from the same tapes, yet almost everything being sent out is VHAR units. Is this a server problem? I'm curious to read some info on this ;) Work flows slowly at best right now, but that's better than nothing at all of course.

Kind regards.

In simple terms: SETI gets its data for free, by taking its own copy of the data being recorded during the course of astronomical observations at the Arecibo radio telescope.

Different groups of radio astronomers are allocated observing time on the telescope, according to an observatory schedule which can be searched online if you're really interested. Each separate group of observers has control of the telescope during their assigned time slot, and control its movement and observing patterns.

Some astronomers are interested in long, steady, deep-space observations of, near enough, point sources. The focal point of the telescope remains steady in relation to the sky - the recordings have a low 'angle range' between the beginning and end of the 109 seconds we study in each workunit. Those sessions create the 'VLAR' tasks when we get to crunch the recordings.

Other observing teams are more interested in fast surveys of large parts of the sky. They use the observatory's radio antenna in what is known as a 'basketweave' mode, with the telescope nodding from side to side while the earth turns under the sky. That leads to the high angle range tasks - we know them as 'shorties', because it's not worth doing such intense analysis when potential signal sources remain in the field of view for such a short time.

And in between, there are observations - or even recordings taken during telescope maintenance - where the antenna is not being actively steered at all, but simply receiving whatever happens to be coming from the sky patch directly overhead as the earth turns. That gives us the normal, mid-AR tasks which form our staple diet.
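The three flavours described above are distinguished purely by the angle range (AR) recorded in each workunit. As a toy illustration, here is a classifier sketch; the cutoff values are assumptions for the example (roughly the figures quoted around the forums), not authoritative project constants.

```python
# Toy classifier for the three task flavours described above, keyed on
# the workunit's angle range (AR). The thresholds below are illustrative
# assumptions, not official project values.
VLAR_MAX = 0.13   # below this: long, steady deep-space 'VLAR' tasks
VHAR_MIN = 1.127  # above this: fast 'basketweave' survey shorties

def classify(angle_range):
    if angle_range < VLAR_MAX:
        return "VLAR"
    if angle_range > VHAR_MIN:
        return "shorty (VHAR)"
    return "mid-AR"

classify(0.01)   # 'VLAR'          - telescope tracking a point source
classify(0.42)   # 'mid-AR'        - antenna drifting with the sky
classify(2.5)    # 'shorty (VHAR)' - telescope nodding in basketweave mode
```

Since shorties complete so much faster than the other two flavours, a tape mix skewed toward high-AR recordings multiplies the rate of scheduler requests, which is the "storm" effect the thread is chewing on.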

Hi Richard,

Thank you for this information. I have been crunching SETI work for 12 1/5 years and I did not know this. I thought it had to do with the angle the antenna was pointing at relative to the sky, and that this caused some issues with not picking up the same amount of information, degraded for some reason or other.

The download servers have been trading off for a bit - we are now currently settled on using vader and georgem as the download server pair. As well, I just moved from apache to nginx on those servers. I think it's working well, but if any of you notice weird behavior let me know!

"Sometimes it is the people no one imagines anything of who do the things that no one can imagine."

Things are certainly borked upload-wise- prior to the outage & since whatever was done on Sunday there had been a steady stream of work being returned- 100,000 results per hour.
Apart from a very brief burst it has dropped down to 54,000/hr & is still declining. Not because there isn't work to be returned, but because it can't be returned.

And on those very rare occasions where I can ask for work, the usual response from the Scheduler at the moment is to not give any, or just time out.

EDIT- with caches continuing to shrink, if the upload backlog ever fully clears, the Scheduler is going to be hammered even more than it is now. And it's not coping now.
And this is happening even with new AP work disabled & no work available to go out. When AP starts up again it's going to be even worse.
Grant
Darwin NT

I'd expect clive to mean that Einstein@Home workunits will be running tonight, and the heat from that will be warming the house.

It's now down to less than 45,000/hr, and i notice that the splitters are unable to get much above 25/s & so the ready to send buffer has actually dropped down to 1,700, and continues to fall.
So even if people eventually do upload all of their completed work, and a Scheduler request finally does go through- there won't be any work left to allocate.

Yes, and this is very curious. Very, very curious. Unless Matt has isolated himself completely, he must be aware that the project is not functioning correctly and hasn't for some time now.

I thought about offering to send him a "cruncher" to use at home (something basic with a GPU) so he could have a first-hand user experience. But then I supposed that if he has no trusted confidant here who can get his attention, a cruncher on the shelf at home would probably have the same success wrestling him away from whatever else he is doing.

In the BOOK Jurassic Park, Crichton wrote in a reason nobody on the island was aware of the number of loose and rampaging dinosaurs. Being unaware that they could breed, the monitoring system "listened for" 6 of this, 8 of that, 3 of the other. Once that number was accounted for, the system quit counting. (It stopped looking, so it never picked up on the fact that there were 18 of this, 12 of that, and 46 of the other.)
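That counting bug is easy to sketch. The species names and numbers below are just illustrative, loosely following the analogy above: a survey that stops as soon as it has tallied the expected count of each kind will report a larger-than-expected population as "all accounted for".

```python
# Sketch of the Jurassic Park counting flaw described above: the census
# stops counting once it has seen the expected number of each species,
# so any animals beyond the expected tally are invisible to it.
def flawed_census(sightings, expected):
    counts = {species: 0 for species in expected}
    for species in sightings:
        if counts.get(species, 0) < expected.get(species, 0):
            counts[species] += 1
        # sightings beyond the expected tally are silently ignored
        if counts == expected:
            break   # "everything accounted for" -- stop looking
    return counts

expected = {"raptor": 8, "dilophosaur": 6}
actual   = ["raptor"] * 12 + ["dilophosaur"] * 46

flawed_census(actual, expected)   # reports exactly 8 and 6, never 12 and 46
```

The fix in the book, as here, is trivial: count everything and compare against expectations afterwards, instead of using the expectation as the stopping condition.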

I wonder if Matt (et al) aren't falling victim to the same sort of thing: The 100Mb pipe is full and there are uploads, downloads, and results created and received; therefore everything must be working as well as it can (or some inadequately-deductive equivalent of that).

It is exactly as you say: "...and seems to be totally unaware..."

Our usual grousing about the servers' speed and inadequate bandwidth may have made him deaf to our complaints.

I think he has dozens, if not hundreds, of people who would be thrilled by an opportunity to make him acutely aware of what is happening, if we only had the means.