February 9, 2021 By Andrea Sayles 3 min read

Business continuity and disaster recovery (BCDR) plans need to keep pace with increasing business demands and growth in physical and compute infrastructures. For a hyperconnected digital business, even a small disruptive event can ripple through the entire organization. Today, most businesses have BCDR plans. But do these plans deliver operational resilience at the moment of truth?

With the world becoming increasingly uncertain and risks proliferating, IT leaders must look beyond crisis management to achieve operational resilience. A review of an organization’s operational resilience posture should prioritize the following four areas.

Data center resilience

Many organizations still have aging data center facilities that aren’t well aligned with current business and technology demands. Cloud migration – and even cloud repatriation – often takes place in multiple stages and is seldom planned or governed in an integrated manner. As complexity increases, blind spots emerge that are easy to overlook.

For example, power distribution and cooling systems in data centers are often weak links in BCDR plans. In many legacy data centers, cooling systems are connected only to diesel generators, not to the uninterruptible power supply (UPS). This puts compute infrastructure at risk of overheating should a generator fail. As the business grows and compute infrastructures change, power equipment and capacities can fall out of alignment, exposing the business to significant risk.

Modern businesses need next-generation data centers that are fail-safe, responsive and workload-aware, while complying with industry standards, regulations and green energy norms.

Integrated business continuity strategy

Closely aligned with a data center strategy should be a holistic BCDR strategy that considers all types of risks (system failure, natural disaster, human error or cyberattack) and outage scenarios, and provides plans for mitigation with minimal or no impact to the business. The strategy must also consider organization and culture, business processes, technology, standards and regulations. And no strategy or plan can be effective unless it is tested regularly. Well-planned data center design, integrated system testing and regular functional tests of the BCDR plan can help IT managers detect equipment faults and vulnerabilities in near real time.

Recoverability and reliability

Business continuity best practices suggest that backup sites be built at physically different locations, in different seismic zones. Cloud-based data protection and recovery allows organizations to back up and store critical data and applications off-site, so they are protected from local disruptions. However, managing backup and storage – as well as disaster and cyber recovery – for a hybrid environment with hundreds or thousands of applications isn’t easy. Many organizations simply don’t have the resources, skills or expertise to do so on their own.

Recovery at scale within minutes or seconds of an outage in such complex environments can only be achieved with an orchestrated recovery platform – one that also allows frequent tests to establish recovery reliability. While manual tests are slow, error-prone and dependent on the availability of skilled staff, an orchestrated recovery platform can help eliminate human error and improve recoverability and recovery reliability.

Rapid response and recovery

While many organizations have robust BCDR plans, the need for planned production downtime inhibits their test schedules. Some still use manual runbooks to perform failovers and failbacks, which demands significant training and experience. By automating the runbook, tests and failover/failback processes, organizations can conduct regular disaster and cyber recovery drills that keep the runbooks current and the execution smooth during real disasters, as the sketch below illustrates.
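The Python sketch below is a minimal illustration of a scripted failover drill. The helper functions (check_replica_health, promote_replica, repoint_dns) are hypothetical stand-ins for whatever APIs an organization’s orchestration tooling actually exposes; the point is that each runbook step becomes repeatable code that can run on a schedule and be timed against a recovery time objective (RTO).

```python
# Minimal sketch of an automated failover drill.
# check_replica_health, promote_replica and repoint_dns are hypothetical
# placeholders for the calls a real orchestration platform would provide.

import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("dr-drill")

def check_replica_health(site: str) -> bool:
    """Placeholder: probe the standby site and report readiness."""
    log.info("Probing standby site %s", site)
    return True  # assume healthy for this sketch

def promote_replica(site: str) -> None:
    """Placeholder: promote the standby database/application tier."""
    log.info("Promoting replica at %s", site)

def repoint_dns(site: str) -> None:
    """Placeholder: switch traffic over to the recovered site."""
    log.info("Repointing DNS to %s", site)

def run_failover_drill(standby_site: str, rto_seconds: int = 300) -> bool:
    """Execute one scripted drill and verify it beats the target RTO."""
    start = time.monotonic()
    if not check_replica_health(standby_site):
        log.error("Standby %s unhealthy; aborting drill", standby_site)
        return False
    promote_replica(standby_site)
    repoint_dns(standby_site)
    elapsed = time.monotonic() - start
    log.info("Drill finished in %.1f s (target %d s)", elapsed, rto_seconds)
    return elapsed <= rto_seconds

if __name__ == "__main__":
    print("Drill passed" if run_failover_drill("dr-site-east") else "Drill failed")
```

Because every drill produces a timed, logged run, drift in the runbook shows up as a slow or failed drill rather than as a surprise during a real disaster.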

It’s not enough to have data backups or IT infrastructure components available in real time. Organizations need the ability to quickly recover the critical applications and data that support business operations. The rise in cyberattacks puts a premium on the integrity of data replicated in real time, because the backup data itself can also be corrupted.
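A common safeguard, sketched below in Python, is to record a cryptographic checksum for each backup artifact when it is created and to verify those checksums before relying on the copy for recovery. The manifest layout here (a JSON map of file name to SHA-256 digest at a hypothetical path) is an illustrative example, not a specific product’s format.

```python
# Minimal sketch of backup integrity verification with SHA-256 checksums.
# The manifest format and the backups/manifest.json path are hypothetical.

import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file so large backups don't have to fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backups(manifest_file: Path) -> bool:
    """Compare each artifact's digest against the value recorded at backup time."""
    manifest = json.loads(manifest_file.read_text())
    all_ok = True
    for name, expected in manifest.items():
        actual = sha256_of(manifest_file.parent / name)
        if actual != expected:
            print(f"CORRUPT: {name}")
            all_ok = False
    return all_ok

if __name__ == "__main__":
    print("OK" if verify_backups(Path("backups/manifest.json")) else "FAILED")
```

Run as part of every recovery drill, a check like this surfaces silent corruption in the backup set before that copy is the only one left.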

As the stakes get higher, achieving operational resilience is a business imperative. Many organizations have witnessed devastating outages over the past few years, some of which could have been avoided. The cost of ignoring these risks will only grow in a post-COVID, hyper-digitized era.

To learn more about how a small disruptive event can have a ripple effect across your company, and what you can do to prevent it, explore the Moment of Truth.
