So it turns out the cause was indeed a rogue change they couldn't roll back, as we had been speculating.
Weird that whatever this issue is didn't show up in their test environment before they deployed to Production. I wonder why that is.
If this is how they do their routine updates, they've had an extremely lucky run so far. Inadequate understanding of what the update would or could do, inadequate testing prior to deployment, no rollback capability, no disaster recovery plan. Yeah nah, you can't get that lucky for that long. Maybe they've cut the budget or sacked the people who knew what they were doing? Let's hope they learn from this.