Monitoring -
Today we have confirmed the source of the request spike causing this incident.
We don't expect a recurrence, and we have moved the incident state to "Monitoring" while we work with the source of these requests to remove its negative impact.
Jul 11, 08:19 BST

Update -
During the last occurrences we narrowed down the cause of the request spike to our API (api.serverdensity.io), ruling out other sources such as the user-facing app or incoming device payloads.
Today we were able to prevent the daily 07:00 UTC occurrence by blocking a set of suspect API calls. This has narrowed the issue's scope even further, putting us closer to a solution. Today's impact was a 4-minute unavailability (06:58 - 07:02) on that set of API calls.
Jul 9, 08:15 BST

Update -
We have kept this incident open because the event occurs only at 07:00 UTC each day, which prevents us from continuously verifying possible fixes. We are continuing to work on it.
We'll update this again tomorrow after 07:00 UTC.
Jul 8, 08:37 BST

Update -
Between 07:00 and 07:11 UTC we had a recurrence of this incident. The impact was mitigated immediately, but we are still following up on the root cause of this data request spike.
Jul 7, 08:24 BST

Identified -
We have identified a reduction in our device payload processing capacity caused by an abnormal data request spike. This may appear on some devices as missing metrics data. Alerting is not affected.
We've adjusted capacity while we identify and resolve the request spike.
Jul 6, 08:49 BST