The UC data center switched over to a new firewall this morning. Since then, packets into and out of the data center have been suffering drops. The Data Center staff is debugging the problem; we'll probably be dropping packets until it's resolved. @SETIEric

Thank you for the update, Eric.
At least we know they are aware of the problem, and I am sure it will be resolved in due course.
"Learn from yesterday. Live for today. Hope for tomorrow." Albert Einstein
"With cats." kittyman

Oh, the humanity! And all the SETI participants screaming around here. I tell you; it – I can't even talk to people, their friends are on here!
Ah! It's... it... it's a... ah! I... I can't talk, ladies and gentlemen.

Description: UPDATE: Monday, April 30, 2018 3:38pm – The firewalls have been stable since 11:40am this morning. Users may need to check/restart services that could have hung during the outage.
IST staff are still working on the root cause of this outage. This evening after business hours at 10:00pm, the network team will troubleshoot further to fully restore network services.

But uploads are taking lots and lots of retries to get them to go through, at 2-4 kB/s when they eventually do upload.
Edit: now it's down to 1-2 kB/s.

The actual upload speed is OK. The speed reported in BOINC Manager is averaged over all the stops, starts, and retries. Some uploads get stuck at the 16K point and have to restart from the beginning.

Description: UPDATE: Tuesday, May 1, 2018 1:59am – The firewall has been stable since 12:42am, services appear to be restored. The vendor will continue monitoring.

Monday, April 30, 2018 8:51pm – This evening at 7:06pm the data center firewalls reloaded on their own. The vendor is currently working to restore service.

Monday, April 30, 2018 3:38pm – The firewalls have been stable since 11:40am this morning. Users may need to check/restart services that could have hung during the outage.

IST staff are still working on the root cause of this outage. This evening after business hours at 10:00pm, the network team will troubleshoot further to fully restore network services.

Monday, April 30, 2018 2:20pm – This continues to be a sporadic ongoing issue and the network team is working to resolve the problem. There is no ETA at this time.

Monday, April 30, 2018 10:17am – IST staff are aware of instability in the Palo Alto firewall in the Earl Warren Data Center and are troubleshooting to determine the cause and work toward a resolution.

Monday, April 30, 2018 9:17am – The Service Desk continues to receive reports of network issues affecting many services including CAS, VPN, and connectivity to other applications hosted on campus. The network team is working to correct the issue as quickly as possible. All workloads hosted in our environment are up and running and should respond normally as soon as the issue is resolved.

The Service Desk is receiving calls about intermittent network issues.

IST staff is working quickly to identify the source of the problem and to restore services as quickly as possible.

But uploads are taking lots and lots of retries to get them to go through, at 2-4 kB/s when they eventually do upload.
Edit: now it's down to 1-2 kB/s.

The actual upload speed is OK. The speed reported in BOINC Manager is averaged over all the stops, starts, and retries. Some uploads get stuck at the 16K point and have to restart from the beginning.

Ok, so the longer it takes to time out, and the more retries it requires to go through, the lower the reported speed.
Grant
Darwin NT
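A quick back-of-the-envelope sketch of that averaging effect. All the numbers here are assumed for illustration (payload size, timeout, and retry count are not measurements from this outage):

```python
# Hypothetical numbers for illustration -- not measurements.
payload_kb = 80           # size of one upload
link_speed_kbps = 20      # actual transfer rate once a try succeeds
timeout_s = 60            # wall time burned by each failed attempt
failed_tries = 5          # retries before one attempt gets through

transfer_s = payload_kb / link_speed_kbps        # 4 s of real transfer
total_s = failed_tries * timeout_s + transfer_s  # 304 s of wall time

reported_kbps = payload_kb / total_s
print(f"{reported_kbps:.2f} kB/s")               # prints 0.26 kB/s
```

Only 4 seconds of the 304 are actual transfer, so a perfectly healthy 20 kB/s link shows up in the manager as a fraction of a kB/s.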

I once solved issues with network messaging software called JGroups by watching the UDP receive buffers with a Linux command. We had a UDP receive buffer that was overflowing and dropping UDP packets, so we increased concurrency (threads) so that the receive queue would drain faster and not overflow. That fixed it. Computers can have concurrency limits or buffer limits.
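For reference, a sketch of how those counters can be watched on Linux. The counter and sysctl names are standard kernel interfaces, but the buffer value at the end is an example only, not a recommendation:

```shell
# Kernel-wide UDP statistics. The "RcvbufErrors" column counts
# datagrams dropped because a socket's receive buffer was full --
# the overflow described above. Columns match the "Udp:" header line.
awk '/^Udp:/ {print}' /proc/net/snmp

# System-wide default and cap for SO_RCVBUF. Raising these (and the
# buffer the application requests) is the alternative to adding
# reader threads:
sysctl net.core.rmem_default net.core.rmem_max
# sudo sysctl -w net.core.rmem_max=8388608   # example value only
```

If RcvbufErrors keeps climbing while the application is running, the receivers are not draining the socket fast enough, whatever the cause.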

Description: UPDATE: Monday, May 7, 2018 11:51am – The vendor is still investigating the equipment failure. Release of the configuration freeze is scheduled for Monday May 14th at 12:00 noon to help ensure no disruption during finals.

So come midday on the 14th, things may get ugly again if they haven't sorted out what went wrong last time.

Copyright 2006 The Regents of the University of California.

It's been a while since they gave that page a good going-over.
Grant
Darwin NT