Posts Tagged ‘DR’
The feature set that gives customers storage savings is described in a 42-minute informative video, Hyper-V and NetApp storage – Overview. I have summarized it in the 5-minute post below.
Enterprise System Storage Portfolio
The Enterprise product portfolio consists of the FAS series and V-Series storage systems. These systems have a unified storage architecture based on the Data ONTAP OS, which runs across all storage arrays. Data ONTAP provides a single management interface and supports protocols such as FC-SAN, FCoE-SAN, IP-SAN (iSCSI), and NAS (NFS, CIFS). The V-Series controllers also offer multi-vendor array support, i.e., they can provide the same features on disk arrays manufactured by NetApp’s competitors.
- Block-level de-duplication, or de-dupe, retains exactly one instance of each unique disk block. When applied to live production systems, it can reduce data by up to 95% for full backups, especially when there are identical VM images created from the same template, and by 25%–55% for most data sets.
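The arithmetic behind block-level de-dupe can be sketched with standard Unix tools. This is only an illustration, not how Data ONTAP implements it; the image file is a stand-in and the 4 KiB block size is my assumption (chosen to mirror WAFL's block size):

```shell
# estimate dedup savings by counting unique 4 KiB blocks in an image;
# /tmp/demo.img is a stand-in -- point IMG at a real .vhd to measure one
IMG=/tmp/demo.img
head -c $((8 * 4096)) /dev/zero > "$IMG"   # demo: 8 identical all-zero blocks

total=$(( $(stat -c%s "$IMG") / 4096 ))                            # logical blocks
unique=$(split -b 4096 --filter='md5sum' "$IMG" | sort -u | wc -l) # distinct blocks
echo "total=$total unique=$unique saved=$(( 100 * (total - unique) / total ))%"
```

For the all-zero demo image every block hashes the same, so only one unique block remains; on a real VHD the ratio depends on how much content the blocks share.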
- Snapshot copies of a VM are lightweight because they share disk blocks with the parent and do not require as much space as a full copy. If a disk block is updated after a snapshot is taken, e.g., when a configuration parameter is customized for an application or a patch is applied, the Write Anywhere File Layout (WAFL) file system writes the updated block to a new location on disk, leaving the original block, and the snapshot that references it, intact. Snapshot copies therefore impose negligible storage performance impact on running VMs.
- Thin provisioning allows users to define storage pools (FlexVol volumes) for which storage is allocated dynamically from the storage array on demand. A FlexVol volume can be enabled at any time while the storage system is in operation.
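A rough analogy from everyday filesystems: a sparse file advertises its full logical size while consuming blocks only as data is written, which is the same promise thin provisioning makes at the array level (the file path here is made up):

```shell
# create a file with a 1 GiB logical size but (almost) no allocated blocks
truncate -s 1G /tmp/thin.img
du -k --apparent-size /tmp/thin.img   # logical size: 1048576 KiB
du -k /tmp/thin.img                   # actual allocation: ~0 KiB until written
```

The gap between apparent and allocated size is exactly the capacity a thin-provisioned pool lets you defer buying.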
- Thin replication between disks provides data protection. Differential backups and mirroring over the IP network work at the block level, copying only the changed blocks; the blocks are compressed before being sent over the wire. This enables virtual restores of full, point-in-time data at granular levels.
- Double-parity RAID, called RAID-DP, provides superior fault tolerance and roughly 46% savings vs. mirrored data (RAID 10). You can think of it as a variant of RAID 6 (RAID 5 plus a second parity disk). RAID-DP can lose any two disks in the RAID stripe without losing data. It offers availability comparable to RAID 1 and allows lower-cost/higher-capacity SATA disks to be used for applications. The industry practice it displaces is RAID 1 for important data and RAID 5 for other data.
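A back-of-the-envelope check on the ~46% figure (the 24-disk data set size is my assumption; actual savings depend on RAID group size):

```shell
# disks needed to hold D disks' worth of data:
#   RAID 10 mirrors everything      -> 2 * D disks
#   RAID-DP adds two parity disks   -> D + 2 disks
D=24
r10=$(( 2 * D ))
rdp=$(( D + 2 ))
echo "RAID10=$r10 RAID-DP=$rdp saved=$(( 100 * (r10 - rdp) / r10 ))%"
```

With 24 disks of data this works out to 48 vs. 26 spindles, about a 45% saving, which lands close to the quoted figure.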
- Virtual clones (FlexClone). You can clone a volume, a LUN, or individual files. Savings = size of the original data set minus the blocks subsequently changed in the clone. This eases dev and test cycles. Typical use cases: building a tree of clones (clones of clones), cloning a sysprep‘ed VHD, DR testing, and VDI.
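Filesystem reflinks give a rough local analogy to FlexClone's block sharing. The paths below are invented, and `cp --reflink=auto` quietly falls back to a full copy on filesystems without reflink support (it only shares blocks on e.g. btrfs or XFS):

```shell
truncate -s 100M /tmp/gold.vhd                    # a stand-in "golden" image
cp --reflink=auto /tmp/gold.vhd /tmp/clone.vhd    # shares blocks where supported
# savings = size of the original minus blocks later rewritten in the clone
ls -l /tmp/gold.vhd /tmp/clone.vhd
```

As with FlexClone, the clone initially consumes (almost) nothing beyond metadata; it grows only as its blocks diverge from the original.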
There are several other videos on the same site that show the setup for the storage arrays. They are worth watching to get an idea of what is involved in getting all the machinery working to leverage the above features. It involves many steps and seems quite complex. (The hallmark of an “Enterprise-class” product? 😉 ) The SEs have done a great job of making it seem simple. Hats off to them!
50% Storage Savings Guarantee
- Engage NetApp to plan your virtualization storage needs
- Implement the best practices NetApp recommends
- Leverage features like de-duplication, thin provisioning, RAID-DP (double-parity RAID), and NetApp Snapshot copies
If you don’t end up using 50% less storage, you can get the required additional capacity at no additional cost.
I learned about this in the 42-minute informative video on Hyper-V and NetApp storage – Overview.
How to replicate Hyper-V VHDs for DR?
The author is looking for a block-level transfer tool like rsync on Windows. A respondent suggested using the Volume Shadow Copy Service (VSS). However, VSS needs space for shadow-copy data, which becomes an issue if you have large VMs to transfer and are short of space.
scp is a secure remote-copy tool that, like rsync, can copy files, including VHD images, across the LAN or the Internet; unlike rsync, though, it always resends whole files rather than just the changed blocks.
Microsoft recommends that customers implement a backup or disaster recovery solution within their WAN using the DFS Replication service of the Distributed File System (DFS) in Windows Server 2003 R2 or later. This solution performs well over the WAN if WAN acceleration is in use. If it is not, customers should rely on Remote Differential Compression (RDC), available since Windows Server 2003 R2, which optimizes file replication over limited-bandwidth networks.
Read about the use of rsync for VMDK and VHD backup and disaster recovery.
I use FTP to transfer large VM images of my code to a remote development team based in India, and rsync for copying and backing up code, configuration, and data from EC2. I searched the web for best practices that have evolved for speeding up large VM transfers. It seems there are none today, unless you are transferring VMs on your company’s WAN and it uses WAN acceleration to improve the transfer rate. However, I have found two models for using rsync with VMDKs and VHDs. Here’s a sample of use cases:
rsync is used for backing up large VMs to a remote store or for disaster recovery
Read about Backup and Disaster Recovery for Windows VMs