Slow operations
Incident Report for TrustSource
Resolved
We used today's incident to assess the situation and uncover potential issues. For example, we were able to identify a queue with unreasonably long retention times and an incorrect visibility configuration. In combination with a consuming function that blocked more database resources than it required, this led to a pile-up of requests and a shortage of read tickets, which further slowed down other requests.
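For readers unfamiliar with this failure mode, here is a minimal sketch, not our actual code or configuration: it assumes an SQS-style queue and placeholder names, and shows how a visibility timeout shorter than the consumer's processing time causes messages to reappear and be processed again, multiplying the load on the database behind the consumer.

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.example.com/queue"  # placeholder, not a real queue

def consume():
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,
        VisibilityTimeout=30,   # too short: message becomes visible again
                                # while handling is still running
        WaitTimeSeconds=20,
    )
    for msg in resp.get("Messages", []):
        handle(msg)             # long-running handler holding DB read capacity
        sqs.delete_message(
            QueueUrl=QUEUE_URL,
            ReceiptHandle=msg["ReceiptHandle"],
        )

In such a setup, every redelivered message occupies another read ticket in the database, so a single slow consumer can starve unrelated requests.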
We will recap today's events in a postmortem.
In the meantime, all services have been fully operational again for a while now.
Posted Sep 20, 2022 - 22:09 CEST
Monitoring
Since 15:54 CEST, some instances have been experiencing extremely long latencies. We are aware of the issue and are investigating.
PLEASE NOTE: API processing is not impacted by the latencies; some UI interactions are. In the meantime, we have identified where the load originates and have throttled some functions to reduce the workload it creates. The root cause of the massive load is not yet clear.
Posted Sep 20, 2022 - 16:04 CEST
This incident affected: TrustSource Services (Core Service).