For whatever reasons, the splitters seem to be unable to work fast enough to maintain a 'ready to send' cache. So at times, there is little work going out.
Thus, the 'feathering' seen on the Cricket graph.

Yes, but look at the 'in progress' graph on the server graphs page. The last 96 hours graph shows a steady increase with no fluctuations, and the slope seems as steep as when the pipe has been maxed.

Is the pipe able to handle this load more easily than being maxed out, given all the network problems?

If everyone is getting some work, does it matter if we faster users take a little longer to fill our caches to our preferred levels?

I won't complain. At least when some work is assigned, the downloads seem to be going through straight off.
Cats..... what more does one need?

Sticking to technical news to at least some degree: I notice that over the past 2 days the Cricket graph has been showing an unusual pattern, with a moment of 'downtime' at regular intervals, say every 2 hours. What's causing this?

For whatever reasons, the splitters seem to be unable to work fast enough to maintain a 'ready to send' cache. So at times, there is little work going out.
Thus, the 'feathering' seen on the Cricket graph.

Maybe that's by design, something they did to make the ageing Bruno's life a bit less stressful.

I think it's more of a sign of some other issue.
The master DB queries per second graph shows there's been a lot of activity there since the last outage. In the past, if the number of queries per second goes over around 600, the splitters get bogged down. And that has pretty much been the case since the last outage: it's been averaging over 800.
Grant
Darwin NT

...
The master DB queries per second graph shows there's been a lot of activity there since the last outage. In the past, if the number of queries per second goes over around 600, the splitters get bogged down. And that has pretty much been the case since the last outage: it's been averaging over 800.

With the replica BOINC database off, all queries have to be handled by the master. Carolyn can handle a few thousand per second, but perhaps there's enough extra delay on each query to affect other processes.
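To see why a small per-query delay could matter at this volume, here's a rough back-of-the-envelope sketch. All of the figures (queries per workunit, base query time, added delay) are illustrative assumptions, not measured values from the project's servers:

```python
# Illustrative sketch: how extra per-query latency on the master DB
# could slow a splitter. Every number here is an assumption.

QUERIES_PER_WORKUNIT = 20   # assumed DB round-trips to emit one workunit
BASE_QUERY_MS = 2.0         # assumed query time on a lightly loaded DB
EXTRA_DELAY_MS = 1.5        # assumed added delay when QPS is high (>600)

def workunits_per_second(query_ms: float) -> float:
    """Workunits one splitter can emit per second, if DB time dominates."""
    return 1000.0 / (query_ms * QUERIES_PER_WORKUNIT)

before = workunits_per_second(BASE_QUERY_MS)
after = workunits_per_second(BASE_QUERY_MS + EXTRA_DELAY_MS)

print(f"before: {before:.1f} WU/s, after: {after:.1f} WU/s "
      f"({(1 - after / before) * 100:.0f}% drop)")
```

The point is only that when every workunit costs many DB round-trips, a per-query delay too small to notice in isolation compounds into a large drop in splitter throughput.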