Incident History
Resolved [28/11/2024 19:54]
All circuits remain stable.
Completed [25/11/2024 11:16]
The work is complete.
Completed [21/10/2024 21:42]
Completed [22/10/2024 11:32]
Completed [17/10/2024 10:46]
The work is now complete and the server is back online.
Resolved [21/10/2024 21:03]
Resolved [16/10/2024 07:09]
Resolved [21/10/2024 21:03]
Completed [30/09/2024 22:01]
Works completed successfully.
Completed [12/08/2024 14:29]
Completed [24/06/2024 22:59]
Works have been completed successfully with no interruption to service.
Resolved [03/06/2024 19:12]
Resolved [28/05/2024 14:09]
The control panel has now been unlocked. Apologies for the disruption caused.
Resolved [28/05/2024 14:04]
The issue with core switches at the data centre has been identified and corrected. The data centre staff are continuing to look into the issue to prevent a repeat. We apologise for the disruption this has caused to hosting clients this afternoon.
We are checking through things with the control panel before releasing access again.
Completed [21/05/2024 20:17]
This work completed at 20:17 BST.
Resolved [21/05/2024 10:31]
The control panel restriction has now been lifted and access can be granted again. Apologies for the disruption caused.
Resolved [21/05/2024 14:11]
The issues impacting the broadband checker and ordering processes have been resolved. We apologise for the disruption this has caused.
Completed [17/05/2024 10:46]
The work is complete.
Resolved [21/05/2024 09:13]
Completed [09/05/2024 09:35]
Resolved [09/05/2024 23:19]
Completed [09/05/2024 09:35]
Resolved [30/04/2024 17:37]
The fibre breaks have been corrected and we are seeing normal paths and performance restored. Apologies for the disruption caused.
Completed [18/04/2024 21:29]
The upgrades are now complete.
Resolved [10/04/2024 13:54]
Resolved [08/04/2024 12:42]
Completed [10/04/2024 09:25]
Completed [20/03/2024 11:35]
Completed [13/03/2024 20:16]
This work completed at 20:15.
Resolved [08/03/2024 09:05]
The issues with the CIX mail platform were fully resolved yesterday evening. Email should be working as expected now. We apologise for the disruption.
Resolved [29/02/2024 17:16]
== Reason for Outage Summary ==
During routine configuration updates, unforeseen repercussions occurred in an unrelated segment of our network, leading to disconnections for approximately 30% of our BT Wholesale based broadband connections. Leased lines were not impacted. Upon identification, the configuration changes were promptly reverted, initiating service restoration. However, due to the inherent nature of PPP connections, some customer devices experienced delays in reconnecting, resulting in a number of lingering stale sessions.
== Response and Mitigation ==
The incident has been attributed to a potential bug and has been escalated to our vendor's Technical Assistance Center (TAC) for thorough investigation. Following the restoration process, the service has stabilised, and we have no expectations of a recurrence.
Completed [26/02/2024 12:04]
The work is complete.
Completed [31/01/2024 10:58]
The work is complete.
Resolved [17/01/2024 10:57]
Virgin have yet to provide an official RFO. So far they have not been able to explain the outage experienced, which affected ourselves, other ISPs, and their own retail broadband operations.
Resolved [11/12/2023 12:08]
Service has now been restored.
Resolved [29/11/2023 16:29]
Completed [21/11/2023 22:21]
Completed [21/11/2023 20:45]
Completed [20/11/2023 13:05]
Completed [06/11/2023 23:52]
Work has now been completed.
Resolved [24/10/2023 10:49]
The incident has now been resolved.
Completed [03/10/2023 11:00]
The work is now complete.
Completed [12/09/2023 20:06]
Completed [12/09/2023 20:06]
Completed [07/09/2023 11:17]
The work is now complete.
Completed [06/09/2023 15:09]
This was postponed.
Completed [03/09/2023 08:26]
Completed [01/09/2023 12:07]
The work is now complete.
Resolved [05/09/2023 13:46]
As previously reported, the ultimate cause of the outage was a crash of an active switch in a virtual switch chassis at our Telehouse North PoP following the replacement of the failed standby switch. This is a procedure that we have carried out many times in the past; it has always been a hitless operation and is indeed documented as such. Following post-mortem analysis involving vendor TAC, it has been concluded that the supervisor on the active switch must have entered a partially failed state when it switched over from standby to active after the switch failure the previous week. Had this been visible to us in any way, we would have scheduled the replacement work in an out of hours maintenance window. In light of this incident we will of course plan to carry out replacements of this nature out of hours should we see any switch failures in these systems going forward.
This particular switch chassis had an uptime of just over six and a half years prior to the outage last week. Despite this solid stability we are now planning to move away from these virtual switch systems as part of our planned network upgrades. This will see our network transition to a more modern and efficient spine-leaf architecture where the failure of a single device will have limited to no impact to service. These upgrades will see significant investment and will be rolled out to all PoPs within the next 1-2 years.
All maintenance work at our THN PoP is now complete and its previous stability is being observed. Please accept our apologies again for the downtime witnessed.
Resolved [24/08/2023 16:04]
== Root Cause ==
On Friday 11th August at 14:40, monitoring systems detected a significant issue with traffic routing via the Data Centre's DDoS mitigation solution, triggering a Major Incident response. Core network devices in DC5 and THN2 London data centres were failing to handle traffic as expected. The service disruption was caused by a routing problem within the DC5 London Data Centre. Under normal operating conditions traffic would have routed via an additional resilient London Data Centre. However, a failure by a third-party supplier meant that the route to the resilient Data Centre was unavailable for the full duration of the incident.
The Data Centre encountered a significant issue pertaining to the routing of traffic by the Data Centre's DDoS mitigation solution. This was a complex issue resulting in a lengthy investigation process across multiple appliances in the DC5 data centre. The Data Centre's investigations confirmed the issue was in the network layer, and the necessary amendments were therefore made, leading to service restoration.
Customers may have experienced disruptions in DNS services for domain names hosted with our network. Our DNS servers, namely ns1.interdns.co.uk and ns2.interdns.co.uk, are typically hosted in separate data centres within London, each on distinct IP ranges. These servers are designed to ensure uninterrupted DNS service, but as a result of this incident spanning both centres, services were impacted.
== Next Steps ==
The data centre has undertaken internal reviews. The root cause was analysed and their technical teams defined a detailed action plan, which includes an immediate review of appliance configuration, software upgrades, resiliency validation and process improvements.
We have undertaken a strategic initiative to enhance our DNS infrastructure. Our plan includes expanding our presence into additional data centres and establishing two entirely independent network setups. These measures are intended to safeguard against any future occurrences of similar disruptions, ensuring the continued reliability of our DNS services.
We apologise for the disruption and inconvenience this has caused you and your customers and appreciate your patience and understanding during this time.
Completed [26/07/2023 11:40]
The work is now complete.
Completed [30/06/2023 11:43]
The work is now complete.
Completed [07/06/2023 22:09]
The work completed and service was restored just before 10PM.
Resolved [07/06/2023 12:27]
Resolved [25/05/2023 02:38]
The issue was associated with planned works notifications from Virgin (C01390678). The list of associated circuits was not exhaustive, hence the confusion. The planned works are now complete and no further disruption is expected.
Resolved [12/05/2023 16:31]
Control panel service has now been restored. Apologies for the disruption this afternoon.
Resolved [03/05/2023 17:11]
The issue has now been resolved. Apologies for the disruption caused.
Completed [21/04/2023 06:30]
The work is now complete.
Completed [19/04/2023 09:46]
The work is complete.
Completed [19/04/2023 17:00]
Resolved [13/04/2023 17:51]
All affected circuits have now been restored. Refer to the incident in the control panel for further details from the carrier regarding the cause.
Completed [06/03/2023 10:50]
The work is now complete.
Completed [20/01/2023 21:43]
The planned works were successful. Total downtime was contained to less than 4 minutes.
Resolved [20/01/2023 20:59]
Resolved [22/12/2022 16:52]
Virgin Media confirmed they replaced a faulty transmission card to restore all services.
Resolved [21/12/2022 10:47]
In summary, here are details of the issue observed on Friday afternoon / evening:
- It was observed that there was a significant and unexpected memory leak on core equipment in our Telehouse West (THW) core.
- It was determined that the best course of action was to carry out a controlled reload out of hours.
- We began slowly culling broadband sessions terminating at THW and steering them to other PoPs in preparation.
- A short time later, memory on the THW core was exhausted, the BGP process terminated and all broadband sessions on LNSs at the PoP disconnected.
- All broadband circuits that were operating via THW were automatically steered to other PoPs in our network.
- At this point we had no choice but to carry out an emergency reload of the core.
- Leased lines operating from THW were impacted throughout.
- The reload of the core took 30 minutes to complete; however, a secondary issue was identified with the hardware of one of the switches.
- Half of the leased lines were restored, whilst on-site hands moved the affected NNIs from the failed switch to the other. This involved configuration changes.
- Circuits were impacted for between 1 hour and 4 hours at worst. The majority of circuits were back up around the 1 to 2 hour point.
- We do not plan to move the NNIs again at this stage, to ensure that there is no further disruption.
- Owing to fulfilment issues, the replacement hardware is now expected to arrive today, but to avoid any further risk, installation will be postponed until the New Year.
- We have raised the memory leak issue with Cisco TAC.
We apologise for the disruption this would have caused.
Resolved [21/12/2022 21:26]
Completed [08/12/2022 14:18]
Resolved [08/12/2022 14:17]
Resolved [19/10/2022 12:38]
The issue has been traced to a line card rebooting on a switch at our Telehouse West PoP. This resulted in some carrier NNIs, and subsequently the Ethernet circuits terminating on them, going offline briefly whilst the card rebooted. Diagnostics are not showing any issues following the event, but we have raised it with the hardware vendor's TAC for further investigation. Apologies for the disruption this may have caused.
Resolved [27/10/2022 09:25]
Resolved [22/09/2022 16:12]
Completed [21/08/2022 11:10]
Completed [11/08/2022 10:35]
The work is now complete.
Resolved [05/08/2022 11:01]
Resolved [05/06/2022 23:00]
The carrier has corrected their issue. Normal service has been witnessed since.
Completed [01/06/2022 08:30]
The work is now complete.
Resolved [22/05/2022 07:35]
The issue has now been resolved. Full details will be supplied via the control panel incident for the impacted circuits.
Resolved [14/04/2022 11:03]
Resolved [14/04/2022 11:03]
Resolved [14/03/2022 10:21]
The issue remains with our NOC team and Cisco.
Resolved [14/03/2022 10:21]
Completed [14/02/2022 14:06]
Resolved [11/02/2022 14:25]
Control panel access has been restored.
Resolved [15/02/2022 23:09]
Issue resolved.
Resolved [15/02/2022 23:09]
Issue resolved.
Resolved [01/02/2022 17:29]
We are now able to access TalkTalk services without issue. Apologies for the disruption this may have caused.
Resolved [29/01/2022 14:47]
We are now seeing service restored to the remaining NNI. The majority of associated circuits are showing as up; however, if you have any issues please reboot the NTU and any associated supplied routers before raising a fault. We apologise for this prolonged and unexpected outage today.
Completed [29/01/2022 14:58]
Resolved [16/12/2021 23:38]
Virgin have identified and corrected a fibre break as of 23:10. This was part of a wider major service outage. Service has now been restored to the impacted circuits. Full details of the issue have been relayed as part of the incident available within the control panel. We apologise for the disruption caused.
Resolved [18/09/2021 19:40]
Virgin have confirmed the issue was a faulty attenuator at Telehouse West, which was replaced to resolve the fault. Circuits were restored at approximately 13:08.
Resolved [17/11/2021 15:38]
Resolved [13/09/2021 10:13]
The issue has been identified and resolved. Apologies for any disruption caused.
Resolved [13/09/2021 10:06]
Completed [18/08/2021 12:22]
A new significant bug has been found impacting Ubuntu servers:
https://ubuntu.com/security/notices/USN-5039-1
We're patching all Linux Dedicated and VPS servers which have our control panel installed over the coming days. This is going to be a fairly simple task; however, a REBOOT will be required, so please don't be concerned should you notice your server go down. We are trying to ensure patches are rolled out as quickly as possible, so apologies if this impacts working hours.
If you don't have a managed server with the control panel enabled, we encourage you to apply the necessary patch ASAP.
If you have any further questions please get in touch with our support team quoting the server number.
Completed [13/08/2021 15:27]
Completed [24/07/2021 07:58]
The work completed and all services and VMs have been successfully restarted.
Completed [22/07/2021 09:28]
Completed [05/07/2021 16:54]
Resolved [02/07/2021 13:44]
Whilst investigating a degraded performance issue on a dark fibre at our LD8 PoP, a third party engineer inadvertently disconnected another dark fibre that connects LD8 to a third location. This subsequently resulted in LD8 becoming isolated from the rest of the network for a short period, between 00:06:02 and 00:09:56.
As previously reported, during this time leased line circuits terminating at LD8 would have experienced a loss of connectivity. Broadband circuits were impacted further due to a large number of subscriber sessions that were terminating at LD8 disconnecting.
Whilst the majority of the affected broadband subscribers regained a session at another PoP relatively quickly, sessions that were steered to a particular aggregation router on the network failed to start. Our engineers investigated and discovered that the router was experiencing a fault condition and took it out of service. At this point the vast majority of remaining subscribers regained their sessions.
Apologies for the disruption this may have caused.
Resolved [14/06/2021 10:00]
At 12:32:23 on 13/06/21 a supervisor in a core switch at our THW PoP experienced an inexplicable reboot. Shortly afterwards at 12:32:40 a hot standby supervisor took over the active role and restored the overall connectivity to the PoP.
The original active supervisor that rebooted was back in service as a hot standby by 12:41:52. By 12:54:47 it had brought all its line cards online following a full and successful diagnostics run. All connectivity was restored to the site by this point.
Non-resilient leased line circuits that terminate on NNIs directly connected to the rebooted supervisor would have experienced an outage between 12:32:23 and 12:54:47.
All other non-resilient leased line circuits as well as any broadband circuits that were terminating at THW would have seen a loss of connectivity between 12:32:23 and 12:32:40.
We have raised this to the vendor's TAC for further investigation. The device is currently stable and not showing any signs of issues. As such we do not deem the site to be at further risk at this time.
Apologies for the disruption this may have caused.
Resolved [18/05/2021 17:01]
The carrier has resolved the issue and the majority of affected circuits are online. A power cycle of the router may be required to force a reconnection.
Resolved [11/05/2021 09:33]
The issue was traced to a peering device that we've now taken offline and full service has been restored. Apologies for the disruption that this may have caused some users this morning.
Resolved [10/05/2021 17:54]
The issue has been resolved. The cause remains under investigation. Apologies for any disruption to service you may have experienced.
Resolved [10/05/2021 12:58]
Resolved [30/04/2021 09:21]
The cause was linked to a denial of service attack. We apologise for the disruption experienced.
Completed [22/03/2021 14:15]
As part of our ongoing efforts to improve our web hosting platform, we are pleased to inform you that we will be upgrading our Shared Windows Server 21 from Windows 2012 to Windows 2019, bringing with it performance improvements, security enhancements, automatic malware scanning of web sites and the introduction of the http/2 protocol for sites with SSL certificates.
This work will take place on Friday 26th March 2021, beginning at 7PM (19:00 GMT), and should take no longer than four hours to complete. During this time, access to all web sites on Server 21 will be unavailable.
Completed [26/03/2021 11:29]
Completed [22/03/2021 14:15]
Completed [15/03/2021 15:07]
Resolved [06/03/2021 22:04]
Resolved [28/06/2021 11:14]
Resolved [17/02/2021 17:46]
CityFibre have confirmed that all affected services have been restored and a full investigation is underway. We apologise to those customers affected by this issue.
Resolved [12/02/2021 17:24]
We believe the reboot has resolved the issue.
Completed [10/02/2021 09:50]
Completed [12/02/2021 14:43]
Resolved [10/02/2021 09:46]
Resolved [02/02/2021 13:19]
Completed [10/02/2021 09:46]
Resolved [02/03/2021 12:23]
Completed [30/12/2020 01:25]
Resolved [12/01/2021 15:07]
Resolved [18/12/2020 16:21]
We are now receiving responses from the various affected systems. Confirmation of a resolution hasn't been announced by Openreach, so services should be considered at risk.
Resolved [15/12/2020 13:21]
The issue has been resolved. Control panel users can see further details - https://control.interdns.co.uk/notification.aspx?id=13739839
Resolved [16/12/2020 11:25]
We believe yesterday's networking issues for the hosting data centre are now fully resolved. We have waited for the dust to settle before giving the all clear. Apologies again for any disruption this has caused. Once we receive a full explanation into the cause, we will provide this as soon as possible.
Resolved [14/12/2020 05:35]
Resolved [02/12/2020 09:39]
This issue has now been resolved and diagnostics are working again.
Resolved [12/01/2021 15:03]
Completed [03/11/2020 22:46]
This work completed.
Resolved [30/10/2020 06:33]
Service was resumed at approximately 02:15. We apologise for this unexpected outage.
Resolved [19/10/2020 13:50]
Completed [03/11/2020 22:46]
Resolved [14/10/2020 11:15]
Apologies for the session drops this morning. The cause was linked to additional interconnects being patched into one of our London PoPs. This caused an issue with one of our broadband LNSs, which dropped sessions; these were then able to reconnect. It would have impacted any circuits routed via that LNS across TalkTalk and BT Wholesale.
This was unexpected behaviour and should not have occurred. We will continue to monitor and will raise this with the manufacturer as a suspected bug.
Completed [15/10/2020 12:05]
Resolved [26/09/2020 08:18]
The fault was tracked down to a power failure within a Virgin Media rack. All circuits are operational.
Resolved [15/09/2020 16:20]
We have seen near-normal levels of sessions restored through the afternoon. Anyone unable to reconnect should be able to do so with a power cycle. If this doesn't address it, try powering down for an hour and then reconnecting. If you still cannot connect, you may require assistance from our support team.
We will not terminate sessions to force a reconnection back to Telehouse North; they will naturally spread out as sessions drop of their own accord.
We are reviewing this outage internally, but ultimately the cause lay with the carrier.
Resolved [01/09/2020 10:16]
This issue has now been resolved.
Resolved [31/08/2020 22:47]
The root cause was a failure of TalkTalk-maintained hardware affecting one of our Ethernet NNIs and 6,000 other B2B clients. TalkTalk's fault incident resolution states:
<-- snip -->
NOC monitoring identified an FPC10 (Flexible PIC Concentrator) failure at NGE001.LOH. This caused a total loss of service to approx. 6k B2B circuits from approx. 12:47 (31/08). The Core Network Ops team were engaged and their investigations found that the FPC10 had failed and could not be restored remotely. To restore service as of approx. 17:23 a field engineer attended site and replaced the faulty FTP10 with support from the core network ops team. This incident will now be closed with any further root cause analysis being completed via the problem management process.
<-- snip -->
Apologies for the disruption caused this afternoon.
Resolved [30/08/2020 21:25]
The issue was resolved around 16:10. CenturyLink responded via Twitter to say:
<-- snip -->
We are able to confirm that all services impacted by today’s IP outage have been restored. We understand how important these services are to our customers, and we sincerely apologize for the impact this outage caused.
<-- snip -->
Although we and their other global customers withdrew routes and shut down peering sessions, they continued to announce them to their peers regardless. This caused black-holing of any inbound traffic routed via CenturyLink. All affected customers were left powerless, and it was a case of having to wait for them to resolve the issue.
Thankfully less than 10% of our overall traffic routes in via CenturyLink's network, so the impact was minimal. We know of only a small handful of destinations that were unreachable during their outage. Apologies if your access was disrupted.
Resolved [21/08/2020 17:29]
A faulty network card has been replaced and service restored. Apologies for the delay in resolution; it wasn't obvious that the card may have been at fault.
Resolved [04/10/2020 13:46]
Resolved [04/10/2020 13:45]
Completed [18/08/2020 05:05]
Resolved [23/07/2020 17:25]
Following a small fire at one of our Newcastle Upon Tyne exchanges earlier today, Openreach have now restored power to all services. All Broadband and Ethernet services should now be up and working.
Completed [17/07/2020 11:47]
Completed [10/07/2020 22:53]
Completed [06/07/2020 10:37]
Resolved [04/07/2020 21:44]
Resolved [04/07/2020 21:50]
Completed [13/06/2020 10:36]
This work completed successfully.
Resolved [19/06/2020 13:51]
Resolved [13/06/2020 06:29]
Resolved [13/06/2020 06:33]
Resolved [13/06/2020 06:33]
Resolved [13/06/2020 06:33]
Completed [05/05/2020 10:36]
Resolved [22/04/2020 15:07]
Resolved [26/03/2020 16:42]
A network routing issue has been resolved. All services are working as expected now.
Resolved [26/03/2020 10:57]
A network routing issue has been resolved. All hosting and email services are functioning as expected now.
Resolved [13/06/2020 06:34]
Resolved [19/02/2020 13:10]
The issue was related to LINX (the London Internet Exchange) and has now been resolved; it would potentially have affected several Internet providers in the UK. We are awaiting a full RFO from them to confirm the cause.
Resolved [08/10/2019 15:26]
The cause has been located and service has now stabilised. If a connection hasn't returned, please power cycle the router to force a reconnection attempt. Apologies for the disruption witnessed.
Completed [30/08/2019 00:10]
The upgrade was successful and cleared the fault condition as suspected. We have been monitoring for the past hour and have not seen any further instability.
Resolved [29/08/2019 17:42]
We are seeing services restored now. If any connections remain offline please reboot the routers. The root cause is under investigation.
Resolved [25/07/2019 15:42]
The issue with the VPS platform has been resolved; it was due to a denial of service attack against the platform. We have worked closely with our mitigation service and transit provider to ensure this will not happen again.
We have also investigated with them why this incident was not detected in the manner it should have been; this was due to a configuration issue at their end. Assurances have been provided that this has been rectified. In addition, we are looking more closely at the network level to see what further protection could be put in place to prevent this from occurring again.
Our apologies for the disruption this has caused.
Resolved [19/07/2019 11:25]
The maintenance is complete now and services are back up and running.
Resolved [10/07/2019 12:40]
The attack has now been isolated and brought under control, so normal service should now be seen. We apologise for the disruption witnessed this morning. We are continuing to monitor the platform carefully.
Resolved [10/07/2019 09:10]
Resolved [01/06/2019 15:30]
We believe the issue with the VPS platform to be resolved. We will be monitoring the service closely over the next few hours to ensure all continues to be well.
Resolved [30/05/2019 12:29]
The fault was resolved with all circuits restored by 13:35.
Full notes from Virgin Media Business surrounding the handling of this fault can be found here:
https://cdn.interdns.co.uk/downloads/support-downloads/RFO_Virgin_29_05_2019.pdf
We apologise again for the prolonged outage which affected working hours.
Resolved [17/04/2019 17:19]
BT have confirmed that a line card needed to be reloaded in order to resolve the issue. We consider services to no longer be at risk.
Please let support@icuk.net know if you have any further concerns.
Resolved [15/04/2019 15:26]
The problem was localised to 1 rack of servers following a PDU failure.
Resolved [05/04/2019 13:02]
The affected circuits appear to have been restored. We have received no further communication from Virgin, so please consider service to be at risk.
Resolved [24/06/2019 09:09]
Resolved [25/02/2019 12:23]
TalkTalk's systems appear to be operational again. However, please consider them to be at risk as we have not received any communication from them to confirm that everything is back to normal.
Resolved [14/02/2019 11:22]
The issue has been resolved. The cause was linked to a connectivity issue between the servers in the cluster.
Resolved [18/01/2019 10:11]
The issue with routing has been resolved; we apologise for any inconvenience caused this morning.
If you continue to have any problems please contact the support desk with specific examples.
Completed [14/12/2018 01:31]
The maintenance is now complete.
Completed [05/12/2018 10:29]
The maintenance is now complete.
Resolved [15/11/2018 17:33]
Completed [26/10/2018 01:58]
The server rebuild and data restoration is now complete.
Resolved [16/10/2018 08:59]
Virgin have supplied the following reason for outage:
"In relation to the issue identified in the London area regarding loss of service. This issue was fully restored at 20:43 yesterday evening when a faulty DC output breaker was discovered at our Hayes hubsite and services were moved away from it onto a different output breaker. All services have been stable since that time."
Completed [10/10/2018 23:57]
Resolved [09/10/2018 12:05]
All services are back working now.
Completed [02/10/2018 10:19]
The maintenance is now complete.
Resolved [02/10/2018 09:47]
Resolved [25/09/2018 12:11]
This issue is fully resolved now.
Resolved [20/09/2018 17:37]
The FTP proxy service is now back online.
Resolved [13/09/2018 17:24]
This issue is fully resolved now.
Completed [12/09/2018 15:03]
VPS host VPS1 requires an emergency reboot. We apologise for any disruption caused.
Resolved [09/09/2018 22:02]
Earlier this evening, we experienced an issue with our core SQL cluster which prevented access to our webmail interfaces and our Control Panel. This was resolved around 21:30 BST.
Resolved [09/09/2018 22:03]
We believe all issues were resolved successfully on Thursday morning.
Completed [09/09/2018 22:03]
Work completed around 2AM BST.
Resolved [03/09/2018 12:50]
Service has been restored. Apologies for the disruption caused. Engineers are working to ensure that this doesn't repeat itself.
Resolved [23/08/2018 09:30]
MySQL has been restarted and the issue has been resolved.
Resolved [22/08/2018 13:22]
The issue has been resolved. Mail is starting to flow again, although there may be a slight bottleneck for the platform to process. Apologies for the disruption witnessed.
Completed [22/08/2018 13:24]
Resolved [05/07/2018 16:54]
This issue appears to be resolved now, although we are monitoring the situation carefully.
Reason For Outage - TalkTalk identified an issue with a third party peering provider, Iomart, who incorrectly advertised a subnet. This was a highly unusual event, and hopefully one we never see repeated.