shareVM - Share insights about using VMs

Simplify the use of virtualization in everyday life

Posts Tagged ‘virtualization’

how “capitalism” forces virtualization downstream

leave a comment »

Over the last decade we have systematically added a layer of indirection at every interface in the stack. These days we call this virtualization!

On a NetApp Filer we had

> raid disk group > volume

The problem was that you could not expand or shrink Raid Groups on the fly, and you couldn’t move data easily between different Raid Groups. So we got a layer of indirection, or virtualization:

> raid disk group > aggregate > flex volume

Since an aggregate was logical instead of physical, it could be expanded or shrunk without changing the volume, and data could be moved around.
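That decoupling is easy to sketch. Below is a toy Python model (my own illustration, not NetApp's actual design; the names and block counts are invented) of flex volumes drawing blocks from a shared aggregate, so a volume can grow or shrink without touching the raid groups underneath:

```python
# Toy model: volumes allocate from a shared logical pool (the aggregate),
# so resizing a volume never touches the physical raid groups below it.

class Aggregate:
    def __init__(self, total_blocks):
        self.free_blocks = total_blocks
        self.volumes = {}            # volume name -> allocated block count

    def resize(self, volume, new_size):
        current = self.volumes.get(volume, 0)
        delta = new_size - current
        if delta > self.free_blocks:
            raise ValueError("aggregate out of space")
        self.free_blocks -= delta    # shrinking returns blocks to the pool
        self.volumes[volume] = new_size

agg = Aggregate(total_blocks=1000)
agg.resize("vol1", 400)   # grow on the fly
agg.resize("vol1", 100)   # shrink; freed blocks go back to the shared pool
print(agg.free_blocks)    # 900
```

The point of the indirection is visible in the interface: `resize` only touches the logical pool, never a physical layout.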

On a USB Disk

If we look inside the disk itself, especially USB flash devices, we went from

cylinder, heads, sector > logical table > device abstraction

Again, this allowed different logical sectors to be rotated across different physical cells, ensuring a single cell was not rewritten more times than its lifetime allows.
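A toy sketch of that remapping (illustrative only; real flash translation layers are far more involved): a logical-to-physical table steers each rewrite of the same logical sector to the least-worn free cell, so wear spreads evenly.

```python
# Toy wear leveling: a mapping table lets the controller redirect each
# rewrite of one logical sector to the least-worn free physical cell.

class WearLeveler:
    def __init__(self, num_cells):
        self.mapping = {}                      # logical sector -> physical cell
        self.erase_counts = [0] * num_cells    # wear per physical cell
        self.free_cells = set(range(num_cells))

    def write(self, logical_sector):
        old = self.mapping.get(logical_sector)
        if old is not None:
            self.free_cells.add(old)           # old cell becomes reusable
        # Pick the least-worn free cell for this write.
        cell = min(self.free_cells, key=lambda c: self.erase_counts[c])
        self.free_cells.remove(cell)
        self.erase_counts[cell] += 1
        self.mapping[logical_sector] = cell
        return cell

wl = WearLeveler(num_cells=4)
cells_used = {wl.write(0) for _ in range(8)}        # rewrite sector 0 eight times
print(sorted(cells_used))                           # [0, 1, 2, 3]
print(max(wl.erase_counts) - min(wl.erase_counts))  # 0: wear stays balanced
```

Without the table, all eight writes would hammer the same physical cell; with it, each cell absorbs exactly two.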

In a SAN

we put a switch between the Raid Groups and the computer. The switch puts a layer of indirection between the blocks and the computer.

You knew all that :-). So what does it have to do with capitalism? My simplistic definition of capitalism is that the system will remove all inefficiencies in a chain, and whoever removes them stands to benefit economically. Or said another way: money finds its way into the right pockets!

So look at the stack today:

chips > motherboard, network, storage, bios > hypervisor > OS > Security, Backup etc > Business, Productivity Apps

Every layer presents an interface to the layer above. Each layer is also owned by different companies in the ecosystem, and each of those companies is under pressure to maximize its revenue. Tasked with this difficult challenge, you look at the layer above, see what is selling, and ask whether you can add it to your own layer. This happens naturally over time: Intel added virtualization support, Phoenix BIOS is adding the hypervisor, operating systems are trying to add backup and security, and the cycle goes on.

Virtualization will always be “innovated” in a higher layer of the stack and commoditized by the lower layers.

The higher layer in the stack finds a lot of new functionality and benefit by making its interface to a lower layer “logical”. It takes this to market, until at some point the lower layer realizes that this is its API and that it should move virtualization into its own layer. The pressure to do this is extreme, and the time frame to monetize it is really small:

  • Imagine the tussle between VMW and the storage vendors. VMW introduces logical disks with cloning, but storage vendors want to offer logical LUNs, volumes, and disk files, as this moves the cloning functionality from the hypervisor to the storage.
  • Imagine: Western Digital or Seagate could create multiple disks (VHD/VMDK files) on a single physical disk and then offer the capabilities to grow, shrink, and move data between them. They could even add networking to the disk controller, so different disks could connect to each other. They can do that once processing power and memory reach a price point where they can be embedded directly into the component or lower layer, which is effectively what happened to computing.
  • VMW introduces a logical network switch; Cisco jumps in with the Nexus 1000V.

For a consumer this is a good thing, but money and value are shifting down the stack across different companies, which have to co-exist in the ecosystem (Cisco, Intel, EMC, VMW) yet guard their innovation from becoming commoditized.

Written by RS

December 13, 2009 at 8:36 am

EMC FAST Fully Automated Storage Tiering for storage savings

leave a comment »

Chuck Hollis, VP Global Marketing and CTO at EMC, describes FAST over three blog posts. The technology has been in beta use with several customers during 2009.

The premise

When you analyze the vast majority of application I/O profiles, you’ll realize that a small amount of data is responsible for the majority of I/Os, while almost all of the remaining data is infrequently accessed.

The principle

Watch how the data is being accessed, and dynamically place the most popular, frequently accessed data (usually the small amount) on flash drives, and the vast majority of infrequently accessed data on big, slow SATA drives.
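The principle is easy to sketch in Python (my own illustration of the idea, not EMC's implementation; the function and block names are invented): count accesses per block, then keep the hottest blocks on flash and the rest on SATA.

```python
from collections import Counter

def tier_blocks(access_log, flash_capacity):
    """Split blocks into a flash tier and a SATA tier by access frequency."""
    freq = Counter(access_log)
    hot_first = [block for block, _ in freq.most_common()]  # hottest first
    flash = set(hot_first[:flash_capacity])
    sata = set(hot_first[flash_capacity:])
    return flash, sata

# A skewed workload: block "a" dominates the I/O stream.
log = ["a"] * 90 + ["b"] * 5 + ["c"] * 3 + ["d"] * 2
flash, sata = tier_blocks(log, flash_capacity=1)
print(flash)  # {'a'}: one flash slot captures 90% of the I/Os
print(sata)
```

With the typical skew described above, a tiny flash tier absorbs the bulk of the I/O load while the SATA tier holds nearly all of the capacity.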

The storage savings solution

  • FAST: Place the right information on the right media based on frequency of access.
  • Thin: Thin (virtual) provisioning allocates physical storage when it is actually used, rather than when it is provisioned.
  • Small: Compression, single-instancing, and data deduplication technologies eliminate information redundancies.
  • Green: A significant amount of enterprise information is used *very* infrequently. So infrequently, in fact, that the disk drives can be spun down, or at least made semi-idle.
  • Gone: Policy-based lifecycle management: archiving and deletion, plus federation to the cloud through private and public cloud integration. As an option, the information can get shopped out to a specialized service provider.
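The “Thin” and “Small” ideas can be combined in a toy sketch (illustrative only, with invented class and method names): physical blocks are consumed only when written, and identical blocks are stored once by keying them on a content hash.

```python
import hashlib

# Toy thin-provisioned, deduplicating volume: a large logical size is
# promised up front, but physical space is consumed only on write, and
# blocks with identical content share one stored copy.

class ThinDedupVolume:
    def __init__(self, provisioned_blocks):
        self.provisioned_blocks = provisioned_blocks  # logical size promised
        self.block_map = {}    # logical block number -> content hash
        self.store = {}        # content hash -> data (one physical copy each)

    def write(self, block_no, data):
        digest = hashlib.sha256(data).hexdigest()
        self.store.setdefault(digest, data)   # dedupe: store content once
        self.block_map[block_no] = digest

    def physical_blocks_used(self):
        return len(self.store)

vol = ThinDedupVolume(provisioned_blocks=1_000_000)
vol.write(0, b"hello")
vol.write(1, b"hello")    # duplicate content: no new physical block
vol.write(2, b"world")
print(vol.physical_blocks_used())   # 2, despite a million provisioned blocks
```

A real array would reclaim space on overwrite via reference counting; this sketch only shows why duplicate writes and unused provisioned space cost nothing physically.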


… and life goes on!

One thing hasn’t changed, though: the information beast continues to grow.

Written by paule1s

December 11, 2009 at 9:29 am

Cloud storage predictions for 2010

leave a comment »

A detailed post by Sajai Krishnan, CEO of ParaScale, is on David Marshall’s VMBlog. The key ideas are summarized below:

The advent of cloud computing has given rise to several cloud storage vendors.

1) The cloud starts to get described

Vendors will begin to describe concrete features and benefits of their product offerings.

2) Commodity hardware starts to displace proprietary storage

While all storage vendors claim to use commodity hardware, in reality they are all essentially closed solutions qualified on two or three commodity boxes. Customers are locked into stovepipes with little ability to truly benefit from Moore’s law by selecting from the thousands of commodity servers available at any given point and at multiple points of purchase.

3) Server Virtualization will drive Private Cloud Storage adoption in the Enterprises

With server virtualization, organizations are free to take advantage of low-cost commodity hardware and aren’t tied to proprietary linkage of the OS and the hardware platform. The weak link today is the storage infrastructure behind virtualized servers.

4) A storage middle tier will emerge

The strategic importance of a low-cost, self-managing, petabyte scale tier that provides a platform for analysis and integrated applications emerges in organizations with large stores of file data. These organizations that are investing heavily in new tier1 storage and moving aged data to archive will experiment with a middle tier that leverages low cost commodity hardware and provides read/write access. This middle tier will provide opportunity for administrators to automate storage management and optimize for performance and cost, but at a much lower expense. This middle tier will also support large scale analysis while eliminating related data migration and administrative tasks. The emerging middle tier will also provide an integration layer with service provider cloud offerings. The similar architectures enable "cloud bursting," the seamless ability for service providers to offer spillover capacity and compute to enterprises.

5) Opex, not Capex, will emerge as the most important criterion driving storage purchases

Maintenance costs on existing gear will be under heavy review with the emergence of commodity-based hardware storage options.

Written by paule1s

December 10, 2009 at 5:50 pm

Steve Herrod’s Virtualization Predictions for 2010

leave a comment »

Steve Herrod is the CTO and Senior VP of R&D at VMware.

1) Growing modularization of the data center

The EMC/Cisco vBlock bundles, as well as “converged infrastructure” offerings from HP, demonstrate that vendors understand that customers want simplicity, ease of use, and a single support channel. Look for more to come.

2) Private clouds

Industry participants in 2010 will better define the private cloud and help customers implement it. We’ve finally emerged from the hype of cloud computing to tackle the realities of what’s needed and confront the challenges that currently exist, particularly around security. Furthermore, customers are realizing that their IT landscape will include a mix of both private and public cloud offerings (hybrid clouds). Customers want this hybrid world to be more easily managed and to give them the right balances between efficiency, scalability, security, and control. I fully expect that vendors, systems integrators, and certification agencies will rally to meet that demand in 2010.

3) High-level application frameworks

I expect 2010 to be the year when developers and customers recognize the need for even more choice as to where their applications run in the cloud. This will lead to both open cloud standards and the arrival of new PaaS offerings designed with cloud portability in mind.

Read the full post here.

Predictions for 2009 are here.

Written by paule1s

December 7, 2009 at 4:27 pm

Steve Herrod’s self-graded report card for 2009 virtualization predictions

leave a comment »

Steve Herrod is the CTO and Senior VP of R&D at VMware.

Full credit: 60%. Partial credit: 40%.

… the economy definitely forced businesses to do more with less and this caused many transformative investments to be postponed; short-term return on investment became a near single-minded focal point for customers. These are all still valid trends – especially as the economy rebounds — and I do expect more progress in 2010 than what we saw in 2009 for all of them (especially enterprise desktop virtualization now that Microsoft Windows 7 is out).

Read the entire post here.

Original predictions for 2009 are here.

Written by paule1s

December 7, 2009 at 4:15 pm

Posted in survey


Who is the virtualization storage administrator?

leave a comment »

Interesting post on “The changing role of the IT storage pro” by John Webster, who interviewed the CIO of an unnamed storage vendor.

The CIO observed that the consolidation of IT infrastructure driven by server virtualization projects and a future rollout of virtual desktops is forcing a convergence of narrowly focused IT administrative groups. This convergence will cause IT administrators to develop competency in systems and services delivery, rather than remain siloed experts in servers, networks, and storage.

Virtualization has brought about the convergence of systems and networks; the convergence of Fibre Channel and Ethernet within the data center changes the nature of the relationships between enterprise IT operational groups as well as the traditional roles of server, networking, and storage groups.

As the virtual operating systems (VMware, MS Hyper-V, etc.) progress, we will see an increasing tendency to offer administrators the option of doing both storage and data management at the server rather than the storage level. Backups and data migrations, for example, can be done by a VMware administrator, and storage capacity can be managed from the virtualized OS management console.

John’s observations tie in with the lessons from the two preceding posts, where we explored NetApp’s virtualization storage features and thin-provisioned virtual disks, and learned that administrators have to understand not just the file system nuances but also the storage features in order to use storage for virtualization effectively.

Written by paule1s

December 3, 2009 at 11:03 pm

Brian Madden: Terminal Services versus VDI

leave a comment »

These are my notes summarizing Brian Madden’s hour-long Terminal Services versus VDI presentation at VMWorld Europe 2009. I highly recommend watching Brian’s video: he is insightful, lively, and articulate, and you’ll enjoy it while learning from it, as I did.


Both Terminal Services (TS) and Virtual Desktop Infrastructure (VDI) employ server-based computing (SBC) and offer the benefits that are inherent to SBC, namely

  • Central management
  • Central access control
  • High performance
  • Security

Historically, several applications have been found not to work with SBC due to limitations of remote display protocols, although the end-user often views it as an application compatibility issue. Typically, applications that are multimedia or graphics-intensive, write their state to proprietary folders on a local disk drive, require multiple monitors, or need a hardware security dongle are known to break.

Should I use TS or VDI?

If you have to answer this question, you should first identify which applications are SBC-compatible. Once you have identified them, you can decide between TS and VDI.

TS advantages:

  1. Very high user density (by contrast, VMWare VDI supports only 6 to 8 users per core today)
  2. Proven solution/mature technology (80 million users, plenty of user experiences on wikis and training material on the Web; by contrast, VDI is cutting edge, with a learning curve coupled with a dearth of usage-based content)
  3. Automatic “thin” provisioning: all users share a common copy of the OS/app code

VDI advantages:

  1. Live migration for load balancing and supporting mobile users
  2. VMs can be rebooted without rebooting the host
  3. Suspend/resume of VMs is possible (a disconnected TS session continues to consume resources)
  4. Fault tolerance per user (since each user has their own VM)
  5. Competition amongst vendors (Citrix had a monopoly on TS; now VMWare, Microsoft, Citrix, and several other vendors are investing in the future of VDI)

    Brian showcased Atlantis Computing as an innovator that dynamically composes a bootable virtual disk (VHD or VMDK) for a VM.

VDI disadvantages:

  1. Disk space: 10GB per user in the data center (cost per GB in the data center is 5X to 10X its cost on the desktop)
  2. Routine ops: how to run antivirus, backup, and patching against one master image
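The disk-space point is worth a quick back-of-the-envelope calculation using Brian's figures; the desktop price per GB below is my own assumed number, purely for illustration.

```python
# Back-of-the-envelope VDI storage cost for a 1,000-user deployment.
# Assumption (illustrative only): desktop storage at $0.10/GB, with the
# 5x-10x data-center multiplier quoted in the talk.

users = 1000
gb_per_user = 10
desktop_cost_per_gb = 0.10          # assumed, for illustration

total_gb = users * gb_per_user      # 10,000 GB, i.e. 10 TB in the data center
low = total_gb * desktop_cost_per_gb * 5
high = total_gb * desktop_cost_per_gb * 10
print(total_gb, low, high)          # 10000 5000.0 10000.0
```

So the same images that would cost roughly $1,000 of desktop disk at these assumed prices cost $5,000 to $10,000 centrally, which is why the thin provisioning and single-master-image techniques above matter so much for VDI.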

Brian’s predictions for 2010:

  • Improving user density
  • Remote Display Protocol improvements
  • Thin provisioning/ Windows layering
  • Offline VDI/Local hypervisors
  • Local personality/Application management

Brian possesses journalistic flair; his posts are always insightful and thought-provoking. I have become a great fan of his blog.

Written by paule1s

April 16, 2009 at 2:25 pm

Posted in VDI
