Five Imperatives for Extreme Data Protection in Virtualized Environments - Page 2
#3: Focus on simplicity and speed for recovery
Numerous user implementations have revealed that server virtualization introduces new recovery challenges. Recovery complications arise when backups are performed at the physical VM host level (obscuring and prolonging granular restores) or through a proxy (necessitating multi-step recovery).
It is important to consider the availability of a searchable backup catalog when evaluating VM backup tools. Users of traditional, file-based backup often assume that the searchable catalog they are used to is available in any backup tool. But with VMs this is not always the case. Systems that do full VM image backups or use snapshot-based backups often are not able to catalog the data, meaning there is no easy way to find a file. Some provide partial insight, allowing users to manually browse a directory tree, but not allowing a search.
It is also important to understand how the tool handles file history. A common recovery use case is the need to retrieve a file that has been corrupted, when the exact time of corruption is not known. This requires examining several versions of the file. A well-designed recovery tool will accept both a file name and a date range and locate every instance of the file stored in the backup repository. While this may seem a minor point, it can make the difference between an easy five-minute recovery and a frustrating hour or two spent hunting for files.
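The name-plus-date-range lookup described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual catalog API: it assumes a hypothetical catalog held as a flat list of per-version records and filters it by file name and backup time, returning matches newest first so an administrator can step back through versions to find the last good copy.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

# Hypothetical catalog record: one entry per file version captured by a backup run.
@dataclass
class CatalogEntry:
    path: str              # full path of the file inside the backup
    backup_time: datetime  # when this version was captured
    size_bytes: int

def find_versions(catalog: List[CatalogEntry], name: str,
                  start: datetime, end: datetime) -> List[CatalogEntry]:
    """Return every backed-up version of `name` captured within
    [start, end], newest first."""
    hits = [e for e in catalog
            if e.path.endswith(name) and start <= e.backup_time <= end]
    return sorted(hits, key=lambda e: e.backup_time, reverse=True)
```

With an index over the catalog this search stays fast even for large repositories; the point is simply that the tool, not the administrator, should do the enumeration.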
Fast and simple recovery, at either a granular or virtual machine level, can be achieved if point-in-time server backup images on the target disks are always fully "hydrated" and ready to be used for multiple purposes. In fact, with a data protection model that follows this practice, immediate recovery to a virtual machine, cloning to a virtual machine, and even quickly migrating from a physical to a virtual machine are all done the same way – by simply transferring a server backup image onto a physical VM host server.
#4: Minimize secondary storage requirements
Traditional backup results in multiple copies of the entire IT environment on secondary storage. Explosive data growth has made those copies larger than ever, and the need for extreme backup performance to accommodate more data has necessitated the move from tape backup to more expensive disk backup. The result is that secondary disk data reduction has become an unwanted necessity.
Deduplication of redundant data can be performed at the source or at the target, and in isolation each approach has drawbacks. In either case, each new data stream must be compared against an ever-growing history of previously stored data. Source-side deduplication can degrade performance on backup clients because the data must be scanned for changes, although it does reduce the amount of data sent over the wire. Target-side deduplication does nothing to change the behavior of the backup client or limit the data sent, though it does significantly reduce the amount of disk resources required.
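The target-side mechanics can be illustrated with a toy sketch (this is an assumption-laden simplification, not how any commercial appliance works): incoming backup streams are split into fixed-size chunks, each unique chunk is stored once under its SHA-256 digest, and a per-stream "recipe" of digests allows the original data to be rehydrated on restore.

```python
import hashlib

def dedup_store(streams, chunk_size=4096):
    """Toy target-side deduplication: store each unique fixed-size
    chunk once, keyed by its SHA-256 digest, plus a digest recipe
    per stream for later rehydration."""
    store = {}    # digest -> chunk bytes (unique chunks only)
    recipes = []  # one ordered list of digests per incoming stream
    for data in streams:
        recipe = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            store.setdefault(digest, chunk)  # repeat chunks cost no extra disk
            recipe.append(digest)
        recipes.append(recipe)
    return store, recipes

def rehydrate(store, recipe):
    """Reassemble a stream from its recipe of chunk digests."""
    return b"".join(store[d] for d in recipe)
```

Real systems typically use variable-size, content-defined chunking to survive insertions that shift data, but the trade-off the text describes is visible even here: the comparison work (hashing and index lookups) lands on whichever side performs the chunking.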
A hybrid approach combining efficient data protection software with target-side deduplication can help organizations achieve the full benefits of enterprise deduplication without losing the other benefits.
#5: Strive for administrative ease-of-use
Very few users have a 100% virtualized environment. Consequently, a data protection solution that behaves the same in virtual and physical environments is desirable.
A data protection solution in which a backup agent is installed on each VM can help ease the transition from physical to virtual. Concerns about backup agents needing to be added to every new virtual machine are overstated because each VM needs to be provisioned anyway – with an operating system and other commonly deployed applications and software. New virtual machines cloned from a base system will already include the data protection agent.
When evaluating solutions, it is vital to consider the entire backup lifecycle, from end to end. For example, if some data sets need to be archived to tape, a deduplication device may not allow easy transfer of data to archive media. This might then require an entire secondary set of backup jobs to pull data off the device and transfer it to tape, greatly increasing management overhead. This kind of “surprise” is not something organizations want to discover after they have paid for and deployed a solution.
Ease of use can also be realized with features such as unified platform support, embedded archiving, and centralized scheduling, reporting, and maintenance – all from a single pane of glass.
A holistic view of virtualization
To maximize the value of a virtualization investment, planning at all levels is required. Data protection is a key component of a comprehensive physical-to-virtual (P2V) or virtual-to-virtual (V2V) migration plan.
The five imperatives recommended here can significantly improve organizations' long-term ROI in performance and hardware efficiency and accelerate the benefits of virtualization. To complete this holistic vision, organizations must demand easy-to-use data protection solutions that rate highly on all five imperatives. Decision makers who follow these best practices can avoid the common data protection pitfalls that plague many server virtualization initiatives.
Syncsort is exhibiting at 360°IT, the IT Infrastructure Event held 22nd – 23rd September 2010, at Earl's Court, London. The event provides an essential road map of technologies for the management and development of a flexible, secure and dynamic IT infrastructure. For further information please visit www.360itevent.com