shareVM – Share insights about using VMs

Simplify the use of virtualization in everyday life

A sysadmin’s DAS to NetApp NAS migration experience


This is from Andy Leonard’s post:

I work for a relatively small, but growing, research non-profit. When last I measured it, our data use was growing at a compound rate of about 8% each month; in other words, we double our storage use every nine months or so. (As we’re in the midst of a P2V project where direct-attached storage is moving to our NetApps, we’re actually growing faster than that now, but that’s a temporary bump.) We already have multi-terabyte volumes – so, you do the math… the 16TB aggregate limit (of the 2020) is a real problem for sites like us.
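A quick back-of-the-envelope check of that doubling time, using the 8% monthly figure from the post (a minimal sketch, not anything from the original):

```python
import math

monthly_growth = 0.08  # ~8% compound growth per month, as quoted above
doubling_months = math.log(2) / math.log(1 + monthly_growth)
print(f"Doubling time: {doubling_months:.1f} months")  # ~9.0 months
```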

Storage Math

It’s also worth noting that a 16TB aggregate is not a 16TB file system available to a server. 750 GB SATA drives become right-sized 621 GB drives. Then, for RAID-DP, subtract two disks out of each RAID group. Next, there’s the 10% WAFL overhead. And don’t forget to translate from marketing GB to real GB (or GB to GiB, if you will). So that maximum-size 26-disk aggregate made up of 750 GB drives winds up as 11.4 TB. And – of course – don’t forget your snap reserves after that.
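A rough sketch of that chain of deductions, using the figures quoted above. This is not an official NetApp sizing tool: it assumes a single RAID group, and the exact result depends on RAID-group layout, the aggregate snapshot reserve, and the ONTAP version, so it lands in the same neighborhood as the ~11.4 TB quoted rather than matching it exactly.

```python
RIGHTSIZED_GB = 621      # right-sized capacity of a "750 GB" SATA drive, per the post
AGGREGATE_DISKS = 26     # maximum-size aggregate in the example
RAID_DP_PARITY = 2       # parity + double-parity disks per RAID group
WAFL_OVERHEAD = 0.10     # ~10% reserved by WAFL

data_disks = AGGREGATE_DISKS - RAID_DP_PARITY        # assumes one RAID group
raw_gb = data_disks * RIGHTSIZED_GB
after_wafl_gb = raw_gb * (1 - WAFL_OVERHEAD)
usable_tib = after_wafl_gb * 1e9 / 2**40             # marketing GB -> binary TiB

print(f"data disks:        {data_disks}")
print(f"right-sized total: {raw_gb:,.0f} GB")
print(f"after WAFL:        {after_wafl_gb:,.0f} GB")
print(f"usable:            {usable_tib:.1f} TiB (before snap reserves)")  # ~12 TiB here
```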

Backups

As you mention, backups could be a challenge for large volumes; here’s how we solve it: The 2020 in question was purchased as a SnapVault secondary. Backups go from our primary 3040s to it, and then go via NDMP to tape for off-site/DR purposes. The secondary tier gives us the extended backup window we need to get the data to tape and meet our DR requirements. (I actually think this is a pretty common setup in this day and age.)
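To see why the secondary tier buys the needed window, consider how long a multi-terabyte NDMP dump takes to stream to tape. The dataset size and drive throughput below are illustrative assumptions, not figures from the post:

```python
# Hypothetical back-of-the-envelope for the tape window.
dataset_tb = 10        # assumed amount of SnapVault data to dump via NDMP
tape_mb_per_s = 120    # assumed sustained throughput of a single tape drive

hours = dataset_tb * 1e6 / tape_mb_per_s / 3600
print(f"~{hours:.0f} hours to stream {dataset_tb} TB to one drive")  # ~23 hours
```

At that rate a nightly window straight from the primaries is not realistic, which is what the SnapVault secondary decouples.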

Archiving

Of course, I’m not naive enough to think we can grow by adding drive shelves indefinitely (just added another one last Friday…). My personal opinion is that we’ll ultimately move to an HSM system, especially since much of the storage is used for instrument data (mass spec, microscopy, etc.) that is often difficult for researchers to categorize immediately as to its value. The thought is to let the HSM algorithms find the appropriate tier for the data automatically.
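As a minimal illustration of the kind of policy such a system automates, here is a sketch of an age-based demotion pass. The paths and the 180-day cutoff are hypothetical, and real HSM products use far richer heuristics (and typically leave stubs behind rather than moving files outright):

```python
import os
import shutil
import time

PRIMARY = "/primary/instrument-data"   # hypothetical fast tier
ARCHIVE = "/archive/instrument-data"   # hypothetical capacity tier
CUTOFF_DAYS = 180                      # assumed "cold data" threshold

cutoff = time.time() - CUTOFF_DAYS * 86400
for root, _dirs, files in os.walk(PRIMARY):
    for name in files:
        src = os.path.join(root, name)
        if os.path.getatime(src) < cutoff:               # untouched for 180+ days
            dst = os.path.join(ARCHIVE, os.path.relpath(src, PRIMARY))
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.move(src, dst)                        # demote to the cheaper tier
```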

Written by paule1s

December 10, 2009 at 4:42 pm
