shareVM - Share insights about using VMs

Simplify the use of virtualization in everyday life

Posts Tagged ‘cisco’

how “capitalism” forces virtualization downstream


Over the last decade we have systematically added a layer of indirection at every interface in the stack. These days we call this virtualization!

On a NetApp Filer we had

raid disk group > volume

The problem was that you could not expand or shrink RAID groups on the fly, and you could not easily move data between different RAID groups. So we get a layer of indirection, or virtualization:

raid disk group > aggregate > flex volume

Since an aggregate is logical rather than physical, it can be expanded or shrunk without touching the volumes above it, and data can be moved around freely.
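To make the benefit concrete, here is a minimal sketch (plain Python, not NetApp code) of the indirection: flexible volumes draw space from an aggregate, so they can be created and resized without reconfiguring any RAID group. All names and sizes below are made up.

```python
class Aggregate:
    """Logical pool of space built on top of one or more RAID groups."""
    def __init__(self, raid_group_sizes_gb):
        self.capacity = sum(raid_group_sizes_gb)   # e.g. two 500 GB RAID groups
        self.volumes = {}                          # volume name -> allocated GB

    def used(self):
        return sum(self.volumes.values())

    def create_volume(self, name, size_gb):
        if self.used() + size_gb > self.capacity:
            raise ValueError("aggregate is full")
        self.volumes[name] = size_gb

    def resize_volume(self, name, new_size_gb):
        # Grow or shrink on the fly; the underlying RAID groups never change.
        if self.used() - self.volumes[name] + new_size_gb > self.capacity:
            raise ValueError("not enough free space in the aggregate")
        self.volumes[name] = new_size_gb


aggr = Aggregate([500, 500])        # 1 TB logical pool over two RAID groups
aggr.create_volume("vol1", 200)
aggr.resize_volume("vol1", 400)     # expand without touching the RAID layer
```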

On a USB Disk

If we look inside the disk itself, especially USB flash devices, we went from

cylinder, heads, sector > logical table > device abstraction

Again, this allowed logical sectors to be rotated across different physical cells (wear leveling), so that no single cell is rewritten more times than its lifetime allows.
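A toy illustration of that remapping, which assumes nothing about any real controller's firmware: the translation layer keeps a logical-to-physical table and steers each rewrite of the same logical sector to the least-worn cell.

```python
class FlashTranslationLayer:
    """Toy logical-to-physical remapping table with naive wear leveling."""
    def __init__(self, num_cells):
        self.erase_counts = [0] * num_cells   # wear accumulated per physical cell
        self.mapping = {}                     # logical sector -> physical cell

    def write(self, logical_sector, data):
        # Instead of rewriting in place, pick the least-worn cell and remap.
        cell = min(range(len(self.erase_counts)), key=self.erase_counts.__getitem__)
        self.mapping[logical_sector] = cell
        self.erase_counts[cell] += 1
        # ... the actual data would be programmed into `cell` here ...


ftl = FlashTranslationLayer(num_cells=8)
for _ in range(100):
    ftl.write(0, b"same logical sector, different cells")
print(max(ftl.erase_counts) - min(ftl.erase_counts))   # wear stays within 1
```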

In a SAN

we put a switch between the RAID groups and the computer. The switch adds a layer of indirection between the blocks and the computer.
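A rough sketch of that indirection, with made-up names: hosts address LUNs through the fabric, and the array is free to move a LUN's blocks between RAID groups without the host's view ever changing.

```python
# Which LUNs each host is zoned to see through the switch.
fabric_zoning = {"esx-host-1": ["lun-db", "lun-logs"]}

# Which RAID group currently backs each LUN (an array-side detail).
lun_backing = {"lun-db": "raid_group_A", "lun-logs": "raid_group_B"}

def migrate_lun(lun, new_raid_group):
    # Only the array-side mapping changes; zoning and host paths stay put.
    lun_backing[lun] = new_raid_group

migrate_lun("lun-db", "raid_group_C")
print(fabric_zoning["esx-host-1"])    # the host still sees the same LUNs
```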

You knew all that :-). So what does it have to do with capitalism? My simplistic definition of capitalism is that the system will remove all inefficiencies in a chain, and whoever removes them stands to benefit economically. Or, said another way: money finds its way into the right pockets!

So look at the stack today:

chips > motherboard, network, storage, BIOS > hypervisor > OS > security, backup, etc. > business and productivity apps

Every layer presents an interface to the layer above. Each layer is also owned by different companies in the ecosystem, and each of those companies is under pressure to maximize its revenue. Tasked with this difficult challenge, you look at the layer above, see what is selling, and ask whether you can add it to your own layer. This happens naturally over time: Intel added virtualization support, the Phoenix BIOS is adding a hypervisor, operating systems are trying to add backup and security, and the cycle goes on.

Virtualization will always be “innovated” in a higher layer of the stack and commoditized by the lower layers.

The higher layer in the stack finds a lot of new functionality and benefit by making the interface to a lower layer “logical”. It takes this to market until, at some point, the lower layer realizes that this is its API and that it should move the virtualization into its own layer. The pressure to do this is extreme, and the time frame to monetize it is really small:

  • Imagine the tussle between VMW and the storage vendors. VMW introduces logical disks with cloning, but the storage vendors want to offer logical LUNs, volumes, and disk files, as this moves the cloning functionality from the hypervisor to the storage (a copy-on-write sketch of this cloning follows the list).
  • Imagine: Western Digital or Seagate could create multiple disks (vhd/vmdk files) on a single physical disk and then offer the capability to grow, shrink, and move data between them. They could even add networking to the disk controller, so that different disks can connect to each other. They can do that once processing power and memory reach a price point where they can be embedded directly into the component or lower layer, which is effectively what happened to computing.
  • VMW introduces the logical network switch; Cisco jumps in with the Nexus 1000V.
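The cloning tussle is easy to sketch because copy-on-write is the same trick at either layer. The code below is generic Python, not any vendor's implementation, and works whether the "blocks" belong to a hypervisor's disk files or to an array's LUNs and volumes.

```python
class CowClone:
    """Copy-on-write clone of a shared, read-only base image."""
    def __init__(self, parent_blocks):
        self.parent = parent_blocks   # shared base (golden image)
        self.delta = {}               # block index -> private copy

    def read(self, idx):
        return self.delta.get(idx, self.parent[idx])

    def write(self, idx, data):
        self.delta[idx] = data        # the parent is never modified


base = ["block-A", "block-B", "block-C"]          # one golden image
clone1, clone2 = CowClone(base), CowClone(base)   # near-instant, space-efficient
clone1.write(1, "block-B'")
print(clone1.read(1), clone2.read(1))             # block-B' block-B
```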

For a consumer this is a good thing, but money and value are shifting down the stack across different companies, which have to co-exist in the ecosystem (Cisco, Intel, EMC, VMW) yet guard their innovation from becoming commoditized.

Written by RS

December 13, 2009 at 8:36 am

Who is the virtualization storage administrator?


Interesting post on “The changing role of the IT storage pro” by John Webster, who interviewed the CIO of an unnamed storage vendor.

The CIO observed that the consolidation of IT infrastructure driven by server virtualization projects and a future rollout of virtual desktops is forcing a convergence of narrowly focused IT administrative groups. This convergence will cause IT administrators to develop competency in systems and services delivery in the future, rather than remain siloed experts in servers, networks, and storage.

Virtualization has brought about the convergence of systems and networks; the convergence of Fibre Channel and Ethernet within the data center changes the nature of the relationships between enterprise IT operational groups as well as the traditional roles of server, networking, and storage groups.

As the virtual operating systems (VMware, MS Hyper-V, etc.) progress, we will see an increased tendency to offer administrators the option of doing both storage and data management at the server rather than the storage level. Backups and data migrations can be done by a VMware administrator, for example. Storage capacity can be managed from the virtualized OS management console.
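As one concrete illustration of doing a storage task from the hypervisor's side of the house, here is a minimal sketch that reads datastore capacity through the vSphere API using the pyVmomi Python bindings. The vCenter hostname and credentials are placeholders, and the certificate check is disabled purely for a lab setting.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only: skip certificate checks
si = SmartConnect(host="vcenter.example.com",   # placeholder vCenter and credentials
                  user="administrator", pwd="secret", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)  # every datastore vCenter manages
for ds in view.view:
    summary = ds.summary
    print(ds.name,
          summary.capacity // 2**30, "GB total,",
          summary.freeSpace // 2**30, "GB free")
view.DestroyView()
Disconnect(si)
```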

John’s observations tie in with the lessons from the two preceding posts, where we explored NetApp’s virtualization storage features and thin-provisioned virtual disks, and where we learned that administrators have to understand not just the file system nuances but also the storage features in order to use storage for virtualization effectively.

Written by paule1s

December 3, 2009 at 11:03 pm

Virtual Humor (or is it virtual insanity?)


Dr Pinto advised fatvm to check out various exercise options and guess what the Google searches revealed:

Is it funny that this is real, or is this virtual insanity? 😉

VMWorld 2009: Impending Cisco, VMWare, EMC partnerships?


virtualization.info is reporting that a glimpse of the Cisco, VMWare, EMC strategy has emerged in a post on the personal blog of Chad Sakac, Sr. Director VMWare Strategic Alliance at EMC.

Upon reading Chad’s blog, my impression is that a partnership ecosystem seems to be emerging between Cisco, VMWare and EMC for supporting both private clouds within an enterprise and public clouds, a la EC2, through:

  • A deep integration with the VMWare hypervisor, or “VMWare’s cloud operating system”, using standard APIs
  • A broad integration to provide a cross-vendor management fabric that spans the management tools of the respective vendors and enables management of VMs/virtual appliances, the underlying host servers, storage, and the network
  • VM/Virtual appliance portability across the private and public clouds

This management layer will permit control over individual VMs and groups of VMs within this cloud, and will permit applications (virtual appliances) to be deployed using Just Enough Operating Systems (JeOS).

EMC’s internal goals (and perhaps VMWare’s and Cisco’s, too) seem to be:

1) To drive 100% virtualization

Requires a virtualization layer that can literally meet the scaling, performance and availability goals of any x86 workload.

EVERY EMC product is being turned into a Virtual Appliance.

Physical adaptability (i.e., increase/decrease the CPU/memory model) needs to extend into the networking and storage stacks. People will REALLY start to see “purpose-built servers/network/storage” for VMware in 2009.

2) To drive API integration

Streamline the integration of existing management tools and capabilities with VMWare’s management tools and capabilities.

These are about making sure that the virtual world is able to do everything the physical world can do. They make sure that the datacenter CAN be 100% virtualized.

3) To create infrastructure that understands and responds to “VM/Application objects”

The next phase is where things really get blended: where thin provisioning is integrated, where management tasks are integrated, where “VM object awareness” is added, and where networking policy portability really takes off.

vCenter is surely a critical new management point – so expect to see core management capability for EMC storage integrated into vCenter in the very near term. … We’ll leverage existing open APIs to create plug-in extension models. BUT at the same time – we will continue to integrate into the vCenter APIs for integrated views in management frameworks that are “home” to people other than VMware Administrators.
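As a small, hedged illustration of what "integrating into the vCenter APIs" can look like from the outside, the sketch below uses the pyVmomi bindings to list the extensions (plug-ins) a vCenter server has registered; the hostname and credentials are placeholders.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()          # lab only: skip certificate checks
si = SmartConnect(host="vcenter.example.com",   # placeholder vCenter and credentials
                  user="administrator", pwd="secret", sslContext=ctx)
try:
    # ExtensionManager tracks every plug-in registered against this vCenter,
    # which is the hook vendors use to surface their own management views.
    for ext in si.RetrieveContent().extensionManager.extensionList:
        print(ext.key, "-", ext.description.label)
finally:
    Disconnect(si)
```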

Epilogue

Done right, the Private Cloud and Public Cloud can share the applications transparently, and the “Public Cloud” infrastructure layers can “read the same bar-codes”. Clearly the infrastructure needs to be a bit different (management model, federation, multi-tenancy, scale and price points are all different), but they need to be linked.

This ain’t about consolidating servers (though it includes that too!). It **IS** about the next big transformation we all see coming in the IT space we deal in. We’re gearing up and, as leaders in our respective spaces, focusing our resources and driving towards a vision.

If Cisco, VMWare and EMC indeed work together on this, they will be able to dominate this market for years to come. Very cool, Chad! Thanks for sharing your views.

Written by paule1s

February 3, 2009 at 11:59 pm