shareVM – Share insights about using VMs

Simplify the use of virtualization in everyday life

Posts Tagged ‘vmdk’

Best Practice: Defrag VMDK, VHD, VirtualBox Virtual Disk


Wikipedia describes defragmentation as

a process that reduces the amount of fragmentation in file systems. It does this by physically organizing the contents of the disk to store the pieces of each file close together and contiguously. It also attempts to create larger regions of free space using compaction to impede the return of fragmentation.

Generically, the defragmentation of a Windows guest within a virtual disk running on a Windows host (Windows on Windows) requires a three-step process:

  1. Defragment the guest
  2. Defragment the virtual disk
  3. Defragment the host

On a Linux host or guest, the ext3 and ext4 file systems are more resistant to fragmentation in the first place, so defragmentation is rarely needed.

Windows on Windows

You should perform the following steps whether you are using a Microsoft VHD, VirtualBox VDI or VMware VMDK virtual disk:

  1. On a Windows guest OS, run the Windows Disk Defragmenter to defragment the files within the volumes stored inside the virtual disk.
  2. Next, power down the virtual machine and defragment the virtual disk using the Sysinternals contig tool, as shown in the sketch after this list. Defragmenting the virtual disk simply reorganizes the blocks so that used blocks move towards lower-numbered sectors and unused blocks move towards higher-numbered sectors.
  3. Run the Windows Disk Defragmenter to achieve an overall defragmentation of all files on the host including the virtual disk.
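
As a concrete sketch of these three steps, assume a Windows guest whose system drive is C: and a virtual disk stored at C:\VMs\myVM\myVirtualDisk.vmdk on the host; both paths are illustrative, and contig is Mark Russinovich’s Sysinternals tool:

    REM Step 1, inside the guest: defragment the guest file system
    defrag C:

    REM Step 2, on the host, with the VM powered off: defragment the virtual disk file
    contig -v C:\VMs\myVM\myVirtualDisk.vmdk

    REM Step 3, on the Windows host: defragment the host file system
    defrag C: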

VMware VMDK specific

The following steps can be used generically for a VMware VMDK, whether for Windows on Windows or any other supported platform. vmware-vdiskmanager is a standalone tool for defragmenting a growable VMware Workstation, VMware Fusion or VMware Server vmdk while it is offline. Note that you cannot defragment:

  • Preallocated virtual disks
  • Physical hard drives
  • Virtual disks that are associated with snapshots.

The recommended steps for defragmenting a vmdk are:

  1. On a Windows guest OS, run the Windows Disk Defragmenter to defragment the files within the volumes stored inside the VMDK.
  2. Next, power down the virtual machine and defragment the vmdk using the command vmware-vdiskmanager -d myVirtualDisk.vmdk (see the sketch after this list). Defragmenting the vmdk simply reorganizes the blocks so that used blocks move towards lower-numbered sectors and unused blocks move towards higher-numbered sectors.
  3. If the host OS is also Windows, run the Windows Disk Defragmenter to achieve an overall defragmentation of all files on the host including the VMDK.
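
A hypothetical session on a Windows host might look like the following; the disk path is illustrative, and the optional shrink pass with -k is my assumption rather than part of the steps above:

    REM With the VM powered off, defragment the growable vmdk
    vmware-vdiskmanager -d "C:\VMs\myVM\myVirtualDisk.vmdk"

    REM Optionally shrink the vmdk afterwards to reclaim unused space
    vmware-vdiskmanager -k "C:\VMs\myVM\myVirtualDisk.vmdk"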

Should you de-fragment Virtual Disks?


The Windows defragmentation tool, or a commercial alternative, needs 5-15% of free disk space to be effective; sometimes more if you have some very large files (like video or database files). Below is the layout of the C: drive of my virtual machine. The red segments you see are the fragmented files.

If a file has one large segment, the defrag tool has to move this segment to a free area and copy the rest of the segments with it to make the file contiguous. If there is no place to copy the large extent of a file, it won’t get defragmented.

The best way to defragment is to get an empty disk and copy all the files onto it, so the more free disk space you have, the better these tools will perform.

Also, how you think about defragmentation of a virtual disk is very different from how you think about defragmentation in the physical world. Take the disk above: it is a virtual disk, 2 GB max extent, sparse.

The disk was full, so I extended it (with fatVM) and then defragmented one file (you can do that with Mark Russinovich’s Contig tool, http://technet.microsoft.com/en-us/sysinternals/bb897428.aspx). You can see that the files are contiguous (blue) in the extended portion. The original disk clearly requires defragmentation, but without extending it, we would not have been able to make the key database file contiguous.

It makes one ask whether you really need the traditional way of defragmenting the virtual disk. It is much faster to extend the disk and/or attach a separate disk, simply copy over all the files, and replace the original disk with the new extended disk.

Another advantage of doing this is that, besides being much faster than defragging, it can improve the performance of the virtual machine considerably. You can also take the files in a virtual machine which are static (don’t change) and make the new base disk for the C: drive a flat file instead of a sparse disk, since a sparse disk is not really saving you anything once it gets full. If you have a parent which is flat and a child which is sparse, you get the best of both worlds.

In my limited experience, instead of defragging, do the following (see the sketch below):

  • create a new flat disk, copy all the files from C: to the new disk
  • make the new disk your c: drive
  • create a clone of the base disk (which by definition is sparse)
  • extend the sparse disk

Your virtual machine’s performance will be significantly improved.
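
A minimal command-line sketch of this approach on VMware Workstation, with hypothetical disk names and sizes (-t 2 creates a preallocated, single-file flat disk; exact syntax may vary by product version):

    REM On the host: create a new preallocated (flat) disk to become the new C: drive
    vmware-vdiskmanager -c -s 40GB -a lsilogic -t 2 "C:\VMs\myVM\flatBase.vmdk"

    REM Inside the guest, after copying the files over: make an individual
    REM hot file contiguous with Sysinternals contig
    contig -v C:\data\keyDatabase.mdf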

Why do Windows C drives get full in virtual disks?


A real-life experience posted by a member in the VMware vCenter Server Communities yesterday (Feb 8, 2010):

I have installed vCenter with SQL 2005 Express; now my vCenter server C:\ is almost full.
Is it possible to move my vCenter database to another drive?

The solution recommended by an expert is:

You can install a new server with more space and migrate the data as follows:
link to KB post
But you can also use tools like gparted or Dell ExtPart to increase your space.

While this recommendation is consistent with the perceived state of the art, it does have the following impact:

It is not going to affect the running VMs or ESX, but you/VSC may see a disconnect for a while.

Another member recommends a different approach:

A different approach would be to extend the C: drive.
We have recently released a tool (fatVM) to make this easy (or easier).
It creates the extended VM in a new directory (with the original as parent), does not touch the original files, and is able to extend most VMs in a couple of minutes.
Here is the link: http://www.gudgud.com/fatvm

A third member is contemplating a similar move:

I have a 4-host ESX 3.5U4 system. My vCenter is pointing to an external SQL server. I am about to upgrade to vSphere and want to have SQL running on the vCenter server itself – most likely using SQL Express. I have the same concern about space.

You must have noticed the pattern that is emerging: your C: drive can get full when you are running a database system, or a log aggregation server, within a VM that has a pre-allocated disk and the size of the data is growing. As a best practice, review your apps for potential data growth before pre-allocating the size of the VM’s disk.
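
One simple way to watch for this from inside a Windows guest is to track free space on each drive periodically; a minimal sketch using the built-in wmic command, with scheduling and alert thresholds left to the reader:

    REM Report capacity and free space for all local drives
    wmic logicaldisk get caption,freespace,size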

EMC FAST (Fully Automated Storage Tiering) for storage savings


Chuck Hollis, VP Global Marketing CTO at EMC, describes FAST over three blog posts. The technology was in beta use with several customers in 2009.

The premise

When you analyze the vast majority of application I/O profiles, you’ll realize that a small amount of data is responsible for the majority of I/Os; almost all the rest is infrequently accessed.

The principle

Watch how the data is being accessed, and dynamically place the most popular, frequently accessed data (usually the small amount) on flash drives, and the vast majority of infrequently accessed data on big, slow SATA drives.

The storage savings solution

  • FAST – Place the right information on the right media based on frequency of access.
  • Thin – Thin (virtual) provisioning allocates physical storage when it is actually used, rather than when it is provisioned.
  • Small – Compression, single-instancing and data de-duplication technologies eliminate information redundancies.
  • Green – A significant amount of enterprise information is used *very* infrequently; so infrequently, in fact, that the disk drives can be spun down, or at the least made semi-idle.
  • Gone – Policy-based lifecycle management: archiving and deletion, plus federation to the cloud through private and public cloud integration. As an option, the information can be shopped out to a specialized service provider.

 

… and life goes on!

One thing hasn’t changed, though. The information beast continues to grow.

Written by paule1s

December 11, 2009 at 9:29 am

Thin Provisioning – when to use, benefits and challenges


There are excellent posts by two prominent authors that provide a lot of insight into the nuances of using thick or thin provisioning for VMs: Thin Provisioning Part 1 – The Basics and Thin Provisioning Part 2 – Going Beyond by Vaughn Stewart of NetApp, and Thin on Thin – where should you do Thin Provisioning by Chad Sakac of EMC.

Synopsis:
Escalating storage costs are stalling the deployment of virtualized data centers, so it is becoming increasingly important for customers to leverage storage technology developed by VMware and its storage partners, NetApp and EMC, to reduce storage costs.

vmdk formats:

    vmdk format           VMFS blocks       Disk array blocks   Disk array blocks
                          pre-allocated     pre-allocated       zeroed
    Thin                  No                No                  No
    Thick (Non-zeroed)    Yes               No                  No
    Eager zeroed thick    Yes               Yes                 Yes

Recommendations:
Use Thin on Thin (thin vmdk’s and thin provisioning on the storage array) for the best storage utilization, because both layers allocate storage capacity from the datastore and the storage array only on demand.
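
For example, on an ESX host vmkfstools can create a vmdk in either format; the datastore path and size below are illustrative:

    # Create a thin vmdk (blocks allocated on demand)
    vmkfstools -c 40g -d thin /vmfs/volumes/datastore1/myVM/myVM.vmdk

    # Create an eager zeroed thick vmdk (all blocks pre-allocated and zeroed up front)
    vmkfstools -c 40g -d eagerzeroedthick /vmfs/volumes/datastore1/myVM/myVM_ft.vmdk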

Stewart:

The goal of thin provisioning is datastore oversubscription. The challenge is that the datastore, and all of its components (VMFS, LUNs, etc.), are static in terms of storage capacity. While the capacity of a datastore can be increased on the fly, this process is not automated or policy driven. Should an oversubscribed datastore encounter an out-of-space condition, all of the running VMs will become unavailable to the end user. In these scenarios the VMs don’t ‘crash’, they ‘pause’; however, applications running inside the VMs may fail if the out-of-space condition isn’t addressed in a relatively short period of time. For example, Oracle databases will remain active for 180 seconds; after that time has elapsed, the database will fail.

Sakac:

If you DO use Thin on Thin, use VMware or 3rd-party usage reports in conjunction with array-level reports, and set thresholds with notification and automated action at both the VMware layer and the array level (if your array supports that). Why? Thin provisioning needs to be carefully managed for “out of space” conditions, since you are oversubscribing an asset which has no backdoor (unlike how VMware oversubscribes guest memory, which can use VM swap if needed). When you use Thin on Thin, this can be very efficient, but it can “accelerate” the transition to oversubscription.

Sakac:

The eagerzeroedthick virtual disk format is required for VMware Fault Tolerant VMs on VMFS (if they are thin, conversion occurs automatically as the VMware Fault Tolerance feature is enabled). It also continues to be mandatory for Microsoft clusters (refer to the KB article) and is recommended for the highest-I/O-workload virtual machines, where the slight latency and additional I/O created by the “zeroing” that occurs as part and parcel of virtual machine I/O to new blocks is unacceptable.

vmdk growth:

Stewart:

A VMDK can grow beyond the capacity of the data which it is storing. The reason for this phenomenon is that deleted data is retained in the GOS (guest OS) file system. When data is deleted, the actual process merely removes the entry from the active file system table and marks the blocks as available to be overwritten. The data still resides in the file system, and thus in the virtual disk. This is why you can purchase undelete tools like WinUndelete.

Don’t run defrag within a thin provisioned VM

Stewart:

The defragmentation process results in rewriting all of the data within a VMDK. This operation can cause a considerable expansion in the size of the virtual disk, costing you your storage savings.

How to recover storage

Stewart:

The first phase is to zero out the ‘free’ blocks within the GOS file system. This can be accomplished by using the ‘shrink disk’ feature within VMware Tools or with tools like sdelete from Microsoft. The second phase is to use Storage VMotion to migrate the VMDK to a new datastore.

You should note that this process is manual; however, Mike Laverick has posted a guide which includes how to automate some of the components in this process. Duncan Epping has also covered automating parts of this process.
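
A sketch of the first phase inside the Windows guest, using Microsoft’s sdelete; the drive letter is illustrative, and recent sdelete versions use -z to zero free space (older releases used -c, so check sdelete’s usage text first):

    REM Zero out the free blocks in the guest file system
    sdelete -z c:

The Storage VMotion phase that follows is driven from vCenter rather than from inside the guest.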

NetApp features for virtualization storage savings


The feature set that gives customers storage savings is described in an informative 42-minute video, Hyper-V and NetApp storage – Overview. I have summarized it in the 5-minute post below.

Enterprise System Storage Portfolio

The enterprise product portfolio consists of the FAS series and V-Series storage systems. These systems have a unified storage architecture based on Data ONTAP, the OS running across all storage arrays. Data ONTAP provides a single application interface and supports protocols such as FC SAN, FCoE SAN, IP SAN (iSCSI), and NAS (NFS, CIFS). The V-Series controllers also offer multi-vendor array support, i.e., they can offer the same features on disk arrays manufactured by NetApp’s competitors.

Features

  • Block-level de-duplication, or de-dupe, retains exactly one instance of each unique disk block. When applied to live production systems, it can reduce data by 95% for full backups, especially when there are identical VM images created from the same template, and by as much as 25%-55% for most data sets (see the sketch after this list).
  • Snapshot copies of a VM are lightweight because they share disk blocks with the parent and do not require as much space as the parent. If a disk block is updated after a snapshot, e.g., when a configuration parameter is customized for an application or a patch is applied, the Write Anywhere File Layout (WAFL) file system writes the updated block to a new location on disk, leaving the original block and its referrers intact. Snapshot copies therefore impose negligible storage performance impact on running VMs.
  • Thin provisioning allows users to define storage pools (FlexVol) for which storage allocation is done dynamically, from the storage array, on demand. FlexVol can be enabled at any time while the storage system is in operation.
  • Thin replication between disks provides data protection. Differential backups and mirroring over the IP network work at the block level, copying only the changed blocks; compressed blocks are sent over the wire. It enables virtual restores of full, point-in-time data at granular levels.
  • Double-parity RAID, called RAID-DP, provides superior fault tolerance and a 46% saving vs. mirrored data or RAID 10. You can think of it as RAID 6 (RAID 5 + 1 double-parity disk). RAID-DP can lose any two disks in the RAID stripe without losing any data. It offers availability equivalent to RAID 1 and allows lower-cost/higher-capacity SATA disks for applications. The industry-standard best practice is to use RAID 1 for important data and RAID 5 for other data.
  • Virtual clones (FlexClones). You can clone a volume, a LUN, or individual files. Savings = size of the original data set minus blocks subsequently changed in the clone. This eases dev and test cycles. Typical use cases: build a tree of clones (clones of clones), clone a sysprep‘ed vhd, DR testing, VDI.
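
For a flavor of what is involved, here is a minimal sketch in Data ONTAP 7-mode syntax; the volume, snapshot and clone names are hypothetical, and the commands differ in newer ONTAP releases:

    # Enable block-level deduplication on a volume, then scan the existing data
    sis on /vol/vmvol
    sis start -s /vol/vmvol

    # Take a lightweight snapshot copy before patching a VM
    snap create vmvol pre_patch

    # Create a FlexClone volume backed by that snapshot
    vol clone create vmvol_dev -b vmvol pre_patch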

There are several other videos on the same site that show the setup for the storage arrays. They are worth watching to get an idea of what is involved in getting all the machinery working so you can leverage the features above. It involves many steps and seems quite complex. (The hallmark of an “Enterprise-class” product? 😉 ) The SEs have done a great job of making it seem simple. Hats off to them!

NetApp promises to reduce your virtualization storage needs by 50%


50% Storage Savings Guarantee

NetApp’s Virtualization Guarantee Program promises that you will install 50% less storage for virtualization than if you buy from their competition, provided you

  • Engage them to plan your virtualization storage needs
  • Implement the best practices they recommend
  • Leverage features like de-duplication, thin provisioning, RAID-DP (double-parity RAID), and NetApp Snapshot copies

If you don’t use 50% less storage, you can get the required additional capacity at no additional cost.

I learned about this in the informative 42-minute video Hyper-V and NetApp storage – Overview.

Written by paule1s

November 29, 2009 at 11:20 am