We've been talking about business continuity for years. No, make that decades. While a standard for business continuity planning has only existed since 2006, organisations have been defining what happens in the case of a serious issue - fire, theft, flood or otherwise - since the last millennium.
Surely then, the topic is already dealt with! Do we really need to talk about it any more? The answer is yes - not because organisations are bad at it (quite the contrary), but because the way it is done costs an awful lot more than it needs to.
Consider a retail customer I spoke to recently. We picked a core application at random, which had a database size of 13GB. However, taking into account RAID, disk replication, cross-site duplication and then off-site data protection, the amount of physical disk space being used in the name of business continuity was 840GB. That's nearly 65 times the size of the data itself, for a simple application with a relatively small footprint.
The problem isn't simply inefficient use of disk space, nor particularly bad architecture decisions. Rather, it often comes from different parts of the IT organisation working in silos. In this scenario, for example, resilience was implemented three times - once by the applications team, once by the infrastructure team and once by the business continuity team. Each added its own data protection techniques and included excess 'headroom' to ensure the application never ran out of space.
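To make the compounding concrete, here is a minimal sketch of how per-layer protection multipliers stack up. Every factor below is an assumption made for illustration - only the 13GB and 840GB end points are real, not the per-layer breakdown - but plausible values land in the same region:

```python
# Minimal sketch: how per-layer data-protection multipliers compound.
# Every factor below is an assumption made for illustration - the real
# per-layer breakdown behind the 840GB figure was not published.

DB_SIZE_GB = 13

layers = {
    "RAID mirroring":           2.0,  # assumed RAID 1: every block stored twice
    "local disk replication":   2.0,  # assumed synchronous second array
    "cross-site duplication":   2.0,  # assumed full copy at a secondary site
    "off-site data protection": 4.0,  # assumed several retained backup copies
    "capacity headroom":        2.0,  # assumed free-space buffer per team
}

total_multiplier = 1.0
for name, factor in layers.items():
    total_multiplier *= factor
    print(f"{name:<26} x{factor:<4} running total: "
          f"{DB_SIZE_GB * total_multiplier:6.0f} GB")

print(f"\n{DB_SIZE_GB} GB of data occupies "
      f"{DB_SIZE_GB * total_multiplier:.0f} GB of physical disk "
      f"({total_multiplier:.0f}x)")
```

The point is not the exact factors but that each team's multiplier applies on top of everyone else's: five independent doublings-or-more are enough to turn a modest database into a very large storage bill.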
How can this be changed? Business continuity has traditionally focused on ensuring the availability of equipment, which results in a '2N+1' conversation - take whatever you have, double it and add one. Today, we are looking at having more of a service availability conversation, one which ensures end-users have access to their IT services and can get on with their jobs.
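The difference between the two conversations can be shown with a little availability arithmetic. The figures below are entirely hypothetical - the node count, per-node availability and service-level target are all assumed for the sake of the sketch:

```python
# A hypothetical sizing comparison. Node count, per-node availability
# and the service-level target are all assumed for this sketch.
from math import comb

def service_availability(a: float, total: int, needed: int) -> float:
    """P(at least `needed` of `total` independent nodes are up),
    where each node is up with probability `a`."""
    return sum(comb(total, k) * a**k * (1 - a) ** (total - k)
               for k in range(needed, total + 1))

N = 2            # nodes needed to carry the workload (assumed)
a = 0.99         # assumed availability of a single node
target = 0.9999  # assumed service-level target ("four nines")

# Equipment conversation: take whatever you have, double it, add one.
print(f"2N+1 sizing: {2 * N + 1} nodes, "
      f"availability {service_availability(a, 2 * N + 1, N):.7%}")

# Service conversation: add nodes only until the target is met.
total = N
while service_availability(a, total, N) < target:
    total += 1
print(f"Target-led sizing: {total} nodes, "
      f"availability {service_availability(a, total, N):.7%}")
```

Under these assumed figures the target is met with four nodes, not the five that the 2N+1 rule would buy - which is the essence of sizing for the service rather than for the equipment.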
Rather than allowing individual groups to make their own resilience decisions, efficient service delivery requires an overview of all the links in the chain. A truly business-critical environment requires capabilities that were previously treated in isolation - security, archiving, data protection, storage management and indeed high availability - to be considered as functions of the environment as a whole.
As part of Symantec's journey, we are reviewing our capabilities against the need to deliver integrated service availability for our customers. Our three-point plan involves first simplifying and integrating the portfolio, then building a layer of service insight so that customers can see where their data is stored. On top of this, we are looking to deliver cross-platform orchestration so that whole environments can be managed as dynamically and efficiently as possible.
This work is ongoing - we will keep you informed of progress as we flesh out the Symantec Business Continuity Platform. The bottom line is that 13GB needs to mean 13GB. And with Symantec, it will.