Archive for the ‘free up disk space’ Category
Defragmentation is a process that reduces the amount of fragmentation in file systems. It does this by physically organizing the contents of the disk to store the pieces of each file close together and contiguously. It also attempts to create larger regions of free space using compaction to impede the return of fragmentation.
Generically, the defragmentation of a Windows guest within a virtual disk running on a Windows host (Windows on Windows) requires a three-step process:
- Defragment the guest
- Defragment the virtual disk
- Defragment the host
On a Linux host or guest, this is less of a concern: the ext3 and ext4 file systems are much more resistant to fragmentation in the first place.
Windows on Windows
You should perform the following steps whether you are using a Microsoft VHD, VirtualBox VDI or VMware VMDK virtual disk (a command-line sketch follows the steps):
- On a Windows guest OS, run the Windows Disk Defragmenter to defragment the files within the volumes stored inside the virtual disk.
- Next, power down the virtual machine and defragment the virtual disk using contig. Defragmenting the virtual disk simply reorganizes the blocks so that used blocks move towards lower-numbered sectors and unused blocks move towards higher-numbered sectors.
- Run the Windows Disk Defragmenter to achieve an overall defragmentation of all files on the host including the virtual disk.
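A minimal sketch of the three passes, assuming the virtual disk file lives at C:\VMs\myVM.vhd (the path is illustrative) and Sysinternals contig is on the host's path:

```powershell
# step 1 - inside the Windows guest: defragment the guest volumes
defrag C:

# step 2 - on the host, with the VM powered off: make the virtual disk file
# itself contiguous with contig
contig -v C:\VMs\myVM.vhd

# step 3 - on the host: defragment the host volume as a whole
defrag C:
```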
VMware VMDK specific
The following steps can be used generically for VMware VMDKs, for Windows on Windows or any other supported platform. vmware-vdiskmanager is a standalone tool for defragmenting a growable VMware Workstation, VMware Fusion or VMware Server vmdk while it is offline. Note that you cannot defragment:
- Preallocated virtual disks
- Physical hard drives
- Virtual disks that are associated with snapshots.
The recommended steps for defragmenting a vmdk are (a sketch follows the steps):
- On a Windows guest OS, run the Windows Disk Defragmenter to defragment the files within the volumes stored inside the VMDK.
- Next, power down the virtual machine and defragment the vmdk using the command
vmware-vdiskmanager -d myVirtualDisk.vmdk. Defragmenting the vmdk simply reorganizes the blocks so that used blocks move towards lower-numbered sectors and unused blocks move towards higher-numbered sectors.
- If the host OS is also Windows, run the Windows Disk Defragmenter to achieve an overall defragmentation of all files on the host including the VMDK.
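Putting the sequence together, a hedged sketch; the disk name comes from the example above, and the optional shrink switch only applies to growable disks:

```powershell
# step 1 - inside the Windows guest: defragment the guest volumes
defrag C:

# step 2 - on the host, with the VM powered off: defragment the vmdk
vmware-vdiskmanager -d myVirtualDisk.vmdk
# optionally reclaim zeroed free space as well
# vmware-vdiskmanager -k myVirtualDisk.vmdk

# step 3 - if the host is Windows: defragment the host volume
defrag C:
```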
The Windows defragmentation tool, or a commercial alternative, needs 5-15% of free disk space to be effective, and sometimes more if you have some very large files (like video or database files). Below is the layout of the C: drive of my virtual machine. The red segments you see are the fragmented files.
If you have a file with one large segment, then for the defrag to be effective it has to move this segment to a free area and copy the rest of the segments along with it to make the file contiguous. If there is no place to copy the large extent of a file, it won’t get defragmented.
The best way to defragment is to get an empty disk and copy all the files onto it; so the more free disk space you have, the better these tools will perform.
Also, how you think about de-fragmentation in a virtual disk is very different from how you think about it in the physical world. Take the disk above: it is a virtual disk, 2GB Max Extent Sparse.
The disk was full and then I extended the disk (with fatVM) and then defragmented one file (you can do that with Mark Russinovich’s Contig Tool http://technet.microsoft.com/en-us/sysinternals/bb897428.aspx). You can see that the files are contiguous (blue) in the extended portion. The original disk clearly requires defragmentation, but without extending it, we would not have been able to get the key database file to be contiguous.
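For a single file, a quick sketch of what that looks like with Contig (the path is illustrative):

```powershell
# report how fragmented a single file is (-a = analyze only)
contig -a C:\data\key.db

# defragment just that file in place
contig -v C:\data\key.db
```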
It makes one ask whether you really need the traditional way of defragmenting the virtual disk. It is much faster to extend the disk and/or attach a separate disk, simply copy over all the files, and replace the original disk with the new extended disk.
Another advantage of this approach is that, besides being much faster than defragging, it can improve the performance of the virtual machine considerably. You can also take the files in the virtual machine which are static (don’t change) and put them on a new base disk for the C: drive that is a flat file instead of a sparse disk, since a sparse disk is not really saving you anything once it gets full. If you have a parent disk which is flat and a child which is sparse, you get the best of both worlds.
In my limited experience, instead of defragging, do the following (see the sketch after the list):
- create a new flat disk, copy all the files from C: to the new disk
- make the new disk your c: drive
- create a clone of the base disk (which by definition is sparse)
- extend the sparse disk
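A minimal command-line sketch of that workflow, assuming VMware Workstation’s vmware-vdiskmanager; the sizes and names (newFlat.vmdk, mySparseDisk.vmdk, drive E:) are illustrative, and a boot volume needs extra steps beyond a plain file copy:

```powershell
# create a new preallocated (flat) 40 GB disk; -t 2 = preallocated single file
vmware-vdiskmanager -c -s 40GB -a lsilogic -t 2 newFlat.vmdk

# attach newFlat.vmdk to the VM, then copy everything across inside the guest
robocopy C:\ E:\ /MIR /COPYALL /XJ

# extend a sparse (growable) disk where more room is needed
vmware-vdiskmanager -x 60GB mySparseDisk.vmdk
```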
Your virtual machine’s performance will be significantly improved.
When you analyze the vast majority of application I/O profiles, you’ll realize that a small amount of data is responsible for the majority of I/Os; almost all of the rest is infrequently accessed.
Watch how the data is being accessed, and dynamically cache the most popular, frequently accessed data (usually the small amount) on flash drives, leaving the vast majority of infrequently accessed data on big, slow SATA drives.
The storage savings solution
| Principle | Description |
| --- | --- |
| FAST | Place the right information on the right media based on frequency of access. |
| Thin | Thin (virtual) provisioning allocates physical storage when it is actually used, rather than when it is provisioned. |
| Small | Compression, single-instancing and data deduplication technologies eliminate information redundancies. |
| Green | A significant amount of enterprise information is used *very* infrequently. So infrequently, in fact, that the disk drives can be spun down, or at the least be made semi-idle. |
| Gone | Policy-based lifecycle management – archiving and deletion, federation to the cloud through private and public cloud integration. |
The information can be shipped off to a specialized service provider as an option.
… and life goes on!
One thing hasn’t changed, though: the information beast continues to grow.
I work for a relatively small, but growing, research non-profit. When last I measured it, our data use was growing at a compound rate of about 8% each month; in other words, we double our storage use every nine months or so. (As we’re in the midst of a P2V project where direct-attached storage is moving to our NetApps, we’re actually growing faster than that now, but that’s a temporary bump.) We already have multi-terabyte volumes – so, you do the math… the 16TB aggregate limit (of the 2020) is a real problem for sites like us.
It’s also worth noting that a 16TB aggregate is not a 16TB file system available to a server. 750GB SATA drives become right-sized 621GB drives. Then, for RAID-DP, subtract two disks out of each RAID group. Next, there’s the 10% WAFL overhead. And don’t forget to translate from marketing GB to real GB (or GB to GiB, if you will). So that maximum-size 26-disk aggregate made up of 750GB drives winds up as 11.4TB. And – of course – don’t forget your snap reserves after that.
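One plausible reconstruction of that arithmetic: 26 disks split into two RAID-DP groups gives up 4 parity disks, leaving 22 data disks; 22 × 621GB is roughly 13,662GB; taking off the 10% WAFL overhead leaves about 12,300GB; and converting from marketing gigabytes to binary ones brings that down to the neighborhood of the 11.4TB figure above, before any snap reserves are taken out.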
As you mention, backups could be a challenge for large volumes; here’s how we solve it: The 2020 in question was purchased as a SnapVault secondary. Backups go from our primary 3040s to it, and then go via NDMP to tape for off-site/DR purposes. The secondary tier gives us the extended backup window we need to get the data to tape and meet our DR requirements. (I actually think this is a pretty common setup in this day and age.)
Of course, I’m not naive enough to think we can grow by adding drive shelves indefinitely (just added another one last Friday…). My personal opinion is that we’ll ultimately move to an HSM system, especially since much of the storage is used for instrument data (mass spec, microscopy, etc.) that is often difficult for researchers to categorize immediately as to its value. The thought is to let the HSM algorithms find the appropriate tier for the data automatically.
The EMC Celerra Deduplication is substantially different in concept, implementation and benefits from the block-level deduplication offered by NetApp, Data Domain and others in their products. To understand the differences, let us first look at a comparison of data reduction technologies:
Data reduction technologies
| Technology | Typical Space Savings | Resource footprint |
| --- | --- | --- |
| Fixed-block deduplication | 20% | High |
- File-level deduplication provides relatively modest space savings.
- Fixed-block deduplication provides better space savings, but consumes more CPU to calculate hashes for each block of data, and more memory to hold the indices used to determine if a given hash has been seen before.
- Variable-block deduplication provides slightly better space savings, but the difference is not significant when applied to file system data. It is most effective when applied to data sets that contain repeated but block-misaligned data, such as backup data in backup-to-disk or virtual tape library (VTL) environments.
- Compression differs from file-level or block-level deduplication in the granularity at which it applies; it can be described as infinitely variable, bit-level, intra-object deduplication. It offers the greatest space savings of all the techniques listed for typical NAS data, and its resource footprint is relatively modest: it is comparatively CPU-intensive but requires very little memory.
The storage space savings realized by compression are far greater than those offered by the other techniques, and its resource requirements are quite modest by comparison. However, compression has a potential performance “penalty” associated with decompressing the data when it is read or modified. In practice this “penalty” can work both ways: reading a compressed file can often be quicker than reading a non-compressed file, because the reduction in the amount of data that must be retrieved from disk more than offsets the additional processing required to decompress it.
Celerra Data Deduplication
Celerra Data Deduplication combines file-level deduplication and compression to provide maximum space savings for file system data based on
- Frequency of file access: files that are not “new” (creation time older than a configuration parameter), or not “hot”, i.e., in active use (access time or modification time older than a configuration parameter)
- File size: It avoids compressing files either if the files are small and the anticipated space savings are minimal, or if the file is large and its decompression could degrade performance and impact file access service levels.
The space reduction process
Celerra Data Deduplication has a flexible policy engine that specifies data for exclusion from processing and decides whether to deduplicate specific files based on their age. When enabled on a file system, Celerra Data Deduplication periodically scans the file system for files that match the policy criteria and then compresses them. The compressed file data is hashed to determine if the file has been identified before. If the compressed file data has not been identified before, it is copied into a hidden portion of the file system. The space that the file data occupied in the user portion of the file system is freed and the file’s internal metadata is updated to reference an existing copy of the data. If the data associated with the file has been identified before, the space it occupies is freed and the internal file metadata is updated. Note that Celerra detects non-compressible files and stores them in their original form. However, these files can still benefit from file-level deduplication.
Celerra Data Deduplication employs SHA-1 (Secure Hash Algorithm) for its file-level deduplication. SHA-1 can take a stream of data up to 2^64 bits in length and produce a 160-bit hash, which is designed to be unique to the original data stream. The likelihood of two different files hashing to the same value is vanishingly small: the best published attack requires on the order of 2^69 hash operations to find a collision. Unlike compression, file-level deduplication can be disabled in Celerra Data Deduplication.
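To get a feel for file-level hashing, here is a small PowerShell sketch (PowerShell 4+ for Get-FileHash; not Celerra’s implementation, and the folder path is illustrative) that groups files by their SHA-1 digest the way a file-level deduplication index might:

```powershell
# group files by SHA-1 digest; any group with more than one member is the
# kind of duplicate that file-level deduplication would store only once
Get-ChildItem C:\share -File |
    Get-FileHash -Algorithm SHA1 |
    Group-Object -Property Hash |
    Where-Object { $_.Count -gt 1 }
```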
Designed to minimize client impact
Celerra Data Deduplication processes the bulk of the data in a file system without affecting the production workload. All deduplication processing is performed as a background asynchronous operation that acts on file data after it is written into the file system. This avoids latency in the client data path, because access to production data is sensitive to latency. By policy, deduplication is performed only for those files that are not in active use. This avoids introducing any performance penalty on the data that clients and users are using to run their business.
Interesting post on The changing role of the IT storage pro by John Webster, who interviewed the CIO of an unnamed storage vendor.
The CIO observed that the consolidation of IT infrastructure driven by server virtualization projects and a future rollout of virtual desktops is forcing a convergence of narrowly focused IT administrative groups. This convergence will cause IT administrators to develop competency in systems and services delivery in the future, rather than remain silo’ed experts in servers, networks, and storage.
Virtualization has brought about the convergence of systems and networks; the convergence of Fibre Channel and Ethernet within the data center changes the nature of the relationships between enterprise IT operational groups as well as the traditional roles of server, networking, and storage groups.
As the virtual operating systems (VMware, MS Hyper-V, etc.) progress, we will see an increased tendency to offer administrators the option of doing both storage and data management at the server rather than the storage level. Backups and data migrations can be done by a VMware administrator for example. Storage capacity can be managed from the virtualized OS management console.
John’s observations tie in with the lessons from the two preceding posts, where we explored NetApp’s virtualization storage features and thin-provisioned thin virtual disks, and learnt that administrators have to understand not just the file system nuances but also the storage features in order to use storage for virtualization effectively.
There are excellent posts by two prominent authors that provide a lot of insight into the nuances of using thick or thin provisioning for VMs: Thin Provisioning Part 1 – The Basics and Thin Provisioning Part 2 – Going Beyond by Vaughn Stewart of NetApp, and Thin on Thin – where should you do Thin Provisioning by Chad Sakac of EMC.
Escalating storage costs are stalling the deployment of virtualized data centers, and it is becoming increasingly important for customers to leverage storage technology developed by VMware and its storage partners, NetApp and EMC, to reduce storage costs.
[Table: virtual disk formats (thin, thick, eager zeroed thick) versus when disk array blocks are allocated]
Use Thin on Thin (thin VMDKs and thin provisioning on the storage array) for the best storage utilization, because it allocates storage capacity from the datastore and the storage array only on demand.
The goal of thin provisioning is datastore oversubscription. The challenge is that the datastore, and all of its components (VMFS, LUNs, etc.), are static in terms of storage capacity. While the capacity of a datastore can be increased on the fly, this process is not automated or policy-driven. Should an oversubscribed datastore encounter an out-of-space condition, all of the running VMs will become unavailable to the end user. In these scenarios the VMs don’t ‘crash’, they ‘pause’; however, applications running inside the VMs may fail if the out-of-space condition isn’t addressed in a relatively short period of time. For example, Oracle databases will remain active for 180 seconds; after that time has elapsed, the database will fail.
If you DO use Thin on Thin, use VMware or third-party usage reports in conjunction with array-level reports, and set thresholds with notification and automated action at both the VMware layer and the array level (if your array supports that). Why? Thin provisioning needs to be carefully managed for “out of space” conditions, since you are oversubscribing an asset which has no backdoor (unlike the way VMware oversubscribes guest memory, which can fall back to the VM swap file if needed). Thin on Thin can be very efficient, but it can also “accelerate” the transition to oversubscription.
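As an example of such a threshold check, a hedged PowerCLI one-liner; the 10% threshold is illustrative, and recent PowerCLI builds expose the FreeSpaceGB/CapacityGB properties (older ones use the ...MB equivalents):

```powershell
# flag datastores that have dipped below 10% free space
Get-Datastore |
    Where-Object { $_.FreeSpaceGB -lt 0.10 * $_.CapacityGB } |
    Select-Object Name, CapacityGB, FreeSpaceGB
```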
The eagerzeroedthick virtual disk format is required for VMware Fault Tolerant VMs on VMFS (if they are thin, conversion occurs automatically as the VMware Fault Tolerance feature is enabled). It also continues to be mandatory for Microsoft clusters (refer to the KB article) and is recommended for the highest-I/O-workload virtual machines, where the slight latency and additional I/O created by the “zeroing” that occurs as part and parcel of virtual machine I/O to new blocks is unacceptable.
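For illustration, a hedged PowerCLI sketch of provisioning a disk in this format; the VM name and size are made up, and the -StorageFormat parameter assumes a reasonably recent PowerCLI:

```powershell
# add a 20 GB disk in eagerzeroedthick format to an existing VM
New-HardDisk -VM (Get-VM "ftCandidate") -CapacityGB 20 -StorageFormat EagerZeroedThick
```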
A VMDK can grow beyond the capacity of the data it is storing. The reason for this phenomenon is the way deleted data is handled in the GOS (guest OS) file system: when data is deleted, the process merely removes the content from the active file system table and marks the blocks as available to be overwritten. The data still resides in the file system, and thus in the virtual disk. This is why you can purchase undelete tools like WinUndelete.
Don’t run defrag within a thin provisioned VM
The defragmentation process results in the rewriting of all of the data within a VMDK. This operation can cause a considerable expansion in the size of the virtual disk, costing you your storage savings.
How to recover storage
The first step is to zero out the ‘free’ blocks within the GOS file system. This can be accomplished by using the ‘shrink disk’ feature within VMware Tools or with tools like sdelete from Microsoft. The second phase in this process is to use Storage VMotion to migrate the VMDK to a new datastore. Note that this process is manual; however, Mike Laverick has posted a guide which includes how to automate some of the components in this process, and Duncan Epping has also covered automating parts of it.
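A minimal sketch of the two phases, assuming Sysinternals sdelete inside the guest and PowerCLI on the management station; the VM and datastore names are illustrative:

```powershell
# phase 1 - inside the guest: zero out the free blocks
sdelete -z C:

# phase 2 - from PowerCLI: Storage VMotion the VM to a fresh datastore
Move-VM -VM (Get-VM "myVM") -Datastore (Get-Datastore "newDS")
```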