API latency and error rate remain stable and within normal limits. We are resolving this incident but will continue to keep careful watch for any signs of recurrence.
The experimentation pipeline will remain delayed for approximately another hour while it processes the backlog.
May 18, 22:43 PDT
As of 9:15pm PT, we have corrected the root cause of the incident, and access to the LaunchDarkly website has returned to normal.
We identified a database change that significantly increased load on our main datastore, slowing application responses and making the web application slow or unavailable for most users. The experimentation data pipeline is currently behind schedule, but it is catching up and no data has been lost. During this time, the flag delivery network also experienced increased latency in propagating flag updates, but flag evaluations continued unaffected.
After identifying the problematic update, we reverted the change and saw system health restored.
We will conduct a post-mortem and can provide more details on request. If you have further questions, please contact firstname.lastname@example.org or your account team.
May 18, 21:45 PDT
We are still experiencing high API latency, leading to increased errors in both the API and the web application. We are continuing to work to restore normal service.
May 18, 20:40 PDT
We continue to see high latency and timeouts across much of the API. The LaunchDarkly web application is currently unavailable. SDK initializations and flag evaluations are exhibiting higher latency and an increased error rate, but remain within operational bounds. We are continuing our work to mitigate the issue.
May 18, 20:01 PDT
We are continuing to investigate this issue.
May 18, 19:11 PDT
We are continuing to see very high latency and timeouts across much of the API and website, and are continuing to investigate.
May 18, 19:08 PDT
We are currently investigating increased latency and timeout-related error rates in our REST API.
May 18, 18:33 PDT