shareVM - Share insights about using VMs

Simplify the use of virtualization in everyday life


Unidesk Virtual Desktop VDI technology


This is a summary of Brian Madden’s interview with Chris Midgley (Founder and CTO, Unidesk).

Unidesk is a PC Lifecycle Management company planning to provide

  • Virtual Desktop Management
  • Personalization
  • Storage reduction

with no agent on the desktop.

Supports VMware ESX today. Intends to support Citrix XenServer, Microsoft Hyper-V, VMware Workstation, VMware Fusion, and Citrix XenClient, as well as application virtualization technologies such as VMware ThinApp and Microsoft App-V.


CacheCloud is a content delivery network (think Akamai) for pushing out VDI gold images to different data centers, to laptops/desktops in branch offices, or to machines that connect intermittently. The cloud consists of a large number of virtual appliances, called CachePoints, running one per blade or laptop. Each CachePoint stores user personalization locally as well as replicating it out. CachePoint appliances are Linux-based and have virtualized storage that supports

  • thin provisioning
  • replication
  • versioning

Windows and app code is shared; user personalization is unique. This makes AV scanning really fast since there is only one image of the code.

Block-level replication is used for deltas, file-level replication for compositing. Personalization data can be written from several individual CachePoints to a NAS/SAN in the data center, which enables legal discovery of changes to data that was not previously possible.
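The block-level delta replication described above can be sketched in a few lines. This is a toy model assuming fixed 4 KB blocks; the function names are illustrative and do not come from Unidesk:

```python
import hashlib

BLOCK_SIZE = 4096  # assumed block size; real systems vary


def block_hashes(image: bytes):
    """Hash each fixed-size block of a disk image."""
    return [hashlib.sha256(image[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(image), BLOCK_SIZE)]


def delta(old: bytes, new: bytes):
    """Return only the blocks that changed between two image versions."""
    old_h, new_h = block_hashes(old), block_hashes(new)
    changed = {}
    for idx, h in enumerate(new_h):
        if idx >= len(old_h) or old_h[idx] != h:
            changed[idx] = new[idx * BLOCK_SIZE:(idx + 1) * BLOCK_SIZE]
    return changed


def apply_delta(old: bytes, changed: dict):
    """Reconstruct the new image from the old one plus the delta."""
    blocks = [old[i:i + BLOCK_SIZE] for i in range(0, len(old), BLOCK_SIZE)]
    for idx, data in changed.items():
        while idx >= len(blocks):
            blocks.append(b"")
        blocks[idx] = data
    return b"".join(blocks)
```

Only the changed blocks cross the wire, which is why replicating a lightly customized gold image is cheap.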

Composite Virtualization

Composite Virtualization understands the abstraction layers (Windows, apps, and user data) and knows how to merge them together (composite) in real time to create a bootable C: drive and provide a rich desktop experience. It virtualizes each desktop into layers:

  • .exe files, COM objects, and DLLs are apps
  • Registry entries are configuration
  • everything else is data

It will support encryption in the future: shared keys for Windows and app code, personal keys for private data.

The compositing engine sits on top of the device driver and forms the individual layers by merging individual I/O streams with the namespace knowledge it maintains.
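As a rough illustration of the layering idea (all names here are hypothetical; Unidesk's actual engine is not public), compositing can be modeled as a priority overlay in which user data overrides apps, and apps override the base OS layer:

```python
# Minimal sketch of layer compositing: each layer maps a path in the
# virtual C: namespace to its content; later (higher-priority) layers
# override earlier ones. Paths and values are illustrative only.

def composite(*layers):
    """Merge OS, app, and user layers into one namespace, lowest first."""
    merged = {}
    for layer in layers:
        merged.update(layer)  # higher-priority layers win
    return merged


os_layer = {r"C:\Windows\kernel32.dll": "os-v1"}
app_layer = {r"C:\Program Files\app.exe": "app-v2"}
user_layer = {r"C:\Users\alice\doc.txt": "draft",
              r"C:\Program Files\app.exe": "user-patched"}

desktop = composite(os_layer, app_layer, user_layer)
```

Because the OS and app layers are shared read-only across desktops, only the small user layer is unique per desktop, which is where the storage reduction comes from.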

A virtualization storage layer, implemented as an NTFS file system filter driver, provides a high-performance block I/O device that talks to the CacheCloud. It loads early in the boot cycle. Once loaded, it loads a vmdk disk image that contains Just Enough Windows, pre-composited to provide a bootable C: drive. The latter can be served from the CacheCloud.

It snapshots the system automatically by detecting application installs/uninstalls and ActiveX control downloads. An admin can get a timeline view of user-installed software to reconstruct a hosed machine easily from the CacheCloud, recovering system state while retaining user data.


Currently in beta with 22 customers spanning financial institutions, higher education, and government.

Distribution is through a channel strategy, working with top channel providers for VMware, Citrix, and Microsoft. The product can replace WAN acceleration, backup and DR, and persistent personalization products.

Type 1 and Type 2 Client Hypervisors


This post is based on insight gained from two of Brian Madden’s posts: A deeper look at VMware’s upcoming bare-metal client hypervisor and Bare-metal client hypervisors are coming — for real this time

Wikipedia distinguishes between two distinct types of hypervisors

Type 1 Hypervisor

Type 1 (or native, bare-metal) hypervisors are software systems that run directly on the host’s hardware to control the hardware and to monitor guest operating systems. A guest operating system thus runs at another level above the hypervisor. Some examples are VMware ESX, Xen, and Microsoft Hyper-V.

Type 1 hypervisors are appropriate when you want to provide the only OS that is used on a client. When users turn a machine on, they see only a single OS that looks and feels local.

Type 2 Hypervisor

Type 2 (or hosted) hypervisors are software applications running within a conventional operating-system environment. Considering the hypervisor layer as a distinct software layer, guest operating systems thus run at the third level above the hardware. Some examples are VMware Workstation, VMware Fusion, Microsoft MED-V, Windows Virtual PC, VirtualBox, Parallels, and MokaFive.

Type 2 hypervisors are appropriate when you want a user to have access to their own local desktop OS in addition to the centrally-managed corporate VDI OS. This could be for an employee-owned PC scenario, or a situation where contractors need access to their personal apps and data in addition to the company’s apps and data.

Client Hypervisors

Over the past five years, Type 1 hypervisors have dominated the server market, whereas Type 2 hypervisors have been used on clients, i.e., desktops and laptops. Recently, the need for a Type 1 hypervisor that runs locally on a client device, called a client hypervisor, has emerged to support the Virtual Desktop Infrastructure (VDI).


VDI’s promise lies in realizing a significant cost reduction for managing desktops. A client hypervisor is useful because it combines the centralized management of VDI with the performance and flexibility of local computing. It offers several advantages:

  • It provides a Hardware Abstraction Layer so that the same virtual disk image can be used on a variety of different devices.
  • The devices do not need a “base OS” when the client hypervisor is present. The maintenance overhead of patching a “base OS” frequently on each of the devices is greatly reduced.
  • Once a virtual disk image has been provisioned, it runs and the display is driven locally. This frees up the client from the need to support remote display protocols.
  • It decouples the management of the device from the management of Windows and the user; administrators can spend their time focusing on user needs instead of device maintenance.

Type 1 Server and Client Hypervisors

Server hypervisors are designed to make VMs portable and to increase the utilization of physical hardware. Client hypervisors are intended to increase the manageability of the client device and improve security by separating work and personal VMs.

The bottom line is that even though they’re both called “Type 1” or “bare-metal hypervisors,” there are some philosophical differences in how each came to be. (This could help explain why it has taken over five years to extend the Type 1 hypervisor concept from the server to the desktop.)

A comparison of Type 1 server and client hypervisors, dimension by dimension:

Design Goal
  • Server: Host multiple VMs and make each VM seem like a “real” server on the network.
  • Client: The user shouldn’t even know that there is a hypervisor or that they are using a VM.

Virtualization Goal
  • Server: I/O: disk and networking.
  • Client: Native device support that affects user experience, e.g., GPU and graphics capabilities, USB ports and devices, laptop battery and power state, and suspend/hibernate states.

Tuning
  • Server: Maximum simultaneous network, processor, and disk I/O utilization.
  • Client: Graphics, multimedia, and wireless connectivity.

Hardware Support
  • Server: A narrow set of preapproved hardware models.
  • Client: Should (ideally) run on just about anything.

Intrusiveness
  • Server: Controls most if not all of the hardware platform and devices, and provides a near-complete emulated and/or para-virtualized device model to the virtual machines running on top.
  • Client: Should support full device pass-through to a guest VM, as well as dynamic assignment and “switching” of devices between different guests.

Type 1 Client Hypervisor Vendors
In the Type 1 client hypervisor space, there are Neocleus NeoSphere and Virtual Computer NxTop. There are product announcements from both VMware and Citrix; however, neither has a shipping product to date. There is also the Xen Client Initiative, an effort to port the open-source Xen hypervisor to the client.

Editorial Opinion
Today, hypervisors are a commodity. While they are indeed foundational technology, they are “out of sight, out of mind”: most users do not perceive their presence and hence ascribe little value to the technology. Hypervisor developers will be hard pressed to build a lasting public company solely on selling hypervisors.

NetApp features for virtualization storage savings


The feature set that gives customers storage savings is described in a 42-minute informative video, Hyper-V and NetApp storage – Overview. I have summarized it in the 5-minute post below.

Enterprise System Storage Portfolio

The enterprise product portfolio consists of the FAS series and V-Series storage systems. These systems have a unified storage architecture based on Data ONTAP, the OS running across all storage arrays. Data ONTAP provides a single application interface and supports protocols such as FC SAN, FCoE SAN, IP SAN (iSCSI), and NAS (NFS, CIFS). The V-Series controllers also offer multi-vendor array support, i.e., they can offer the same features on disk arrays manufactured by NetApp’s competitors.


  • Block-level de-duplication, or de-dupe, retains exactly one instance of each unique disk block. When applied to live production systems, it can reduce data by up to 95% for full backups, especially when there are identical VM images created from the same template, and by 25%-55% for most data sets.
  • Snapshot copies of a VM are lightweight because they share disk blocks with the parent and do not require as much space as the parent. If a disk block is updated after a snapshot, e.g., when a configuration parameter is customized for an application or a patch is applied, the Write Anywhere File Layout (WAFL) file system writes the updated block to a new location on disk, leaving the original block and its referrers intact. Snapshot copies therefore impose negligible storage performance overhead on running VMs.
  • Thin provisioning allows users to define storage pools (FlexVol volumes) for which storage allocation is done dynamically by the storage array on demand. FlexVol can be enabled at any time while the storage system is in operation.
  • Thin replication between disks provides data protection. Differential backups and mirroring over the IP network work at the block level, copying only the changed blocks; compressed blocks are sent over the wire. It enables virtual restores of full, point-in-time data at granular levels.
  • Double-parity RAID, called RAID-DP, provides superior fault tolerance and a 46% saving vs. mirrored data (RAID 10). You can think of it as a variant of RAID 6 (RAID 4 plus a second parity disk). RAID-DP can lose any two disks in the RAID stripe without losing data. It offers availability equivalent to RAID 1 and allows lower-cost/higher-capacity SATA disks for applications. The industry-standard best practice is to use RAID 1 for important data and RAID 5 for other data.
  • Virtual clones (FlexClone). You can clone a volume, a LUN, or individual files. Savings = size of the original data set minus the blocks subsequently changed in the clone. This eases dev and test cycles. Typical use cases: building a tree of clones (clones of clones), cloning a sysprep‘ed VHD, DR testing, and VDI.
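The block-level de-dupe in the first bullet can be illustrated with a toy model (this is not NetApp's implementation): identical blocks written by VMs cloned from one template are stored once and shared by reference.

```python
import hashlib


class DedupStore:
    """Toy block store: each unique block is kept exactly once.
    A simplified model of block-level de-dupe, not a real array."""

    def __init__(self):
        self.blocks = {}  # sha256 hex digest -> block data

    def write(self, data: bytes) -> str:
        h = hashlib.sha256(data).hexdigest()
        self.blocks.setdefault(h, data)  # store only if unseen
        return h                         # caller keeps the reference


store = DedupStore()
# Two VMs cloned from the same template initially share every block:
vm1 = [store.write(b"template-block-%d" % i) for i in range(100)]
vm2 = [store.write(b"template-block-%d" % i) for i in range(100)]
vm2[0] = store.write(b"customized-config")  # one block diverges
# 200 logical blocks, but only 101 stored physically.
```

The same sharing structure explains why snapshots and FlexClone copies start at near-zero cost and grow only as blocks diverge.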

There are several other videos on the same site that show the setup for the storage arrays. They are worth watching to get an idea of what is involved in getting all the machinery working to leverage the above features. It involves many steps and seems quite complex. (The hallmark of an “Enterprise-class” product? 😉 ) The SEs have done a great job of making it seem simple. Hats off to them!

NetApp promises to reduce your virtualization storage needs by 50%


50% Storage Savings Guarantee

NetApp‘s Virtualization Guarantee Program promises that you will install 50% less storage for virtualization than if you buy from their competition, when you:

  • Engage them for planning your virtualization storage need
  • Implement best practices recommended by them
  • Leverage features such as deduplication, thin provisioning, RAID-DP (double-parity RAID), and NetApp Snapshot copies

If you don’t use 50% less storage, you can get the required additional capacity at no additional cost.

I learned about this in a 42-minute informative video on Hyper-V and NetApp storage – Overview.

Written by paule1s

November 29, 2009 at 11:20 am

Virtual disk (VM) transfers in the cloud


VM Transfer workflow

There are two sets of use cases:

  1. Within a development team
  2. Within IT

Development teams:

Developers carry one to three VMs on their laptops. They often transfer them to other developers and QA engineers in their own team, or to other teams, for integration testing.

IT (regular file transfer, no streaming):

IT receives a VM that is packaged and ready for deployment, either developed by an in-house or contract application development team or purchased from an external vendor.

The VM is transferred to a staging (pre-production) fileshare from which it can be loaded on to one or more test servers.

When the app within the VM passes acceptance tests, it is transferred to a production fileshare, from which it can be loaded on to one or more production servers.

The VM can also be transferred to archival storage.

Written by paule1s

September 9, 2009 at 9:57 pm

Brian Madden: Terminal Services versus VDI


These are my notes summarizing Brian Madden’s one-hour Terminal Services versus VDI presentation at VMworld Europe 2009. I highly recommend watching Brian’s video; he is insightful, lively, and articulate, and you’ll enjoy learning from it as I did.


Both Terminal Services (TS) and Virtual Desktop Infrastructure (VDI) employ server-based computing (SBC) and offer the benefits that are inherent to SBC, namely

  • Central management
  • Central access control
  • High performance
  • Security

Historically, several applications have been found not to work with SBC due to limitations of remote display protocols, although end users often view this as an application compatibility issue. Typically, applications that are multimedia- or graphics-intensive, write their state to proprietary folders on a local disk drive, require multiple monitors, or need a hardware security dongle are known to break.

Should I use TS or VDI?

If you are facing this question, you should first identify which applications are SBC-compatible. Once you have identified them, you can decide between TS and VDI.

TS advantages:

  1. Very high user density (by contrast, VMware VDI supports only 6 to 8 users per core today)
  2. Proven solution/mature technology (80 million users, plenty of user experience on wikis and training material on the Web; by contrast, VDI is cutting-edge, with a learning curve and a dearth of usage-based content)
  3. Automatic “thin” provisioning: All users share a common copy of OS/app code

VDI advantages:

  1. Live migration for load balancing, supporting mobile users
  2. VM’s can be rebooted without rebooting the host
  3. Suspend/resume of VM’s possible (TS disconnect continues to consume resources)
  4. Fault tolerance per user (since each user has their own VM)
  5. Competition amongst vendors (Citrix had a monopoly on TS; now VMware, Microsoft, Citrix, and several other vendors are investing in the future of VDI)

    Brian showcased Atlantis Computing as an innovator that dynamically composes a bootable virtual disk (vhd or vmdk) for booting a VM.

VDI disadvantages:

  1. Disk space: 10GB per user in the data center (cost per GB in the data center is 5X to 10X its cost on the desktop)
  2. Routine ops: how to run AV, backups, and patching against one master image
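The disk-space disadvantage can be made concrete with back-of-the-envelope arithmetic using the 10GB-per-user figure and the 5X-10X multiplier above; the $0.10/GB desktop price is purely an assumed illustration:

```python
# Illustrative cost of per-user VDI storage in the data center.
# The $0.10/GB desktop price is an assumption, not from the post.
users = 1000
gb_per_user = 10
desktop_cost_per_gb = 0.10  # assumed desktop-storage price

# Data-center storage costs 5x to 10x the desktop price per GB:
costs = {m: users * gb_per_user * desktop_cost_per_gb * m
         for m in (5, 10)}

for m, c in costs.items():
    print(f"{m}x multiplier: ${c:,.0f} for {users} users")
```

Even at modest per-GB prices, centralizing 10GB per user multiplies into a real line item, which is why thin provisioning and image layering matter so much for VDI.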

Brian’s predictions for 2010:

  • Improving user density
  • Remote Display Protocol improvements
  • Thin provisioning/ Windows layering
  • Offline VDI/Local hypervisors
  • Local personality/Application management

Brian possesses journalistic flair; his posts are always insightful and thought-provoking. I have become a great fan of his blog.

Written by paule1s

April 16, 2009 at 2:25 pm

Posted in VDI


Windows 7 migration a driver for seeding VDI adoption


Migration to Windows 7 is an impending event, and it will happen: Windows XP was released in 2001 and is already over 7 years old, while new generations of processors (multi-core, 64-bit, Intel VT), chipsets, graphics cards, audio cards, and disk interfaces (e.g., SATA), all developed after XP gained mainstream adoption, are already shipping in commodity computer hardware today.

63% of all desktops/laptops/workstations worldwide use XP and 23% use Vista; the remaining market share is fragmented across other Windows versions, Mac, Linux, and mobile-device OSs. [Net Applications Operating Systems Market Share report.]

XP lost 10% market share between May 2008 and March 2009, while Vista gained just over 8% [Net Applications Top Operating Systems Share Trend report]. I am presuming that 8% of XP users migrated to Vista and the remaining 2% seized the opportunity to migrate to a Mac instead.

The migration from Vista to Windows 7 should be smooth since the latter is an incremental release of the former.  However, the migration from XP to Windows 7 poses some of the same structural challenges outlined in my earlier post.

At the end of the day, end users care about running their applications and expect to continue to do so over the course of routine hardware and OS refresh cycles – the hardware and OS have become a commodity. The challenge for Microsoft, and the enormous market opportunity, is to provide solutions that can permit a seamless migration from XP to Windows 7 such that end users can continue to use all of their existing applications from the same desktop cost-effectively.

While the Windows 7 migration is not a dislocating event by itself, its timing coincides with the business need to move to modern hardware and a modern desktop OS, which encourages corporate customers to look at alternate ways of managing the desktop. The Virtual Desktop Infrastructure (VDI) vendors view it as an opportunity to gain adoption for their VDI offerings: Citrix XenDesktop, Microsoft MED-V, VMware View.

In upcoming posts, I will outline the alternative that Microsoft is offering to smooth the upgrade path from XP to Windows 7.

Written by paule1s

April 9, 2009 at 12:36 pm