Posts Tagged ‘citrix’
Unidesk is a PC Lifecycle Management company planning to provide
- Virtual Desktop Management
- Storage reduction
with no agent on the desktop.
Supports VMware ESX today. Intends to support Citrix XenServer, Microsoft Hyper-V, VMware Workstation, VMware Fusion, and Citrix XenClient, as well as application virtualization technologies such as VMware ThinApp and Microsoft App-V.
CacheCloud is a content delivery network (think Akamai) for pushing out VDI gold images to different data centers, to laptops/desktops in branch offices, or to machines that connect intermittently. The cloud consists of a large number of virtual appliances, called CachePoints, running one per blade or laptop. Each CachePoint stores user personalization locally and replicates it out. CachePoint appliances are Linux-based and have virtualized storage that supports
- thin provisioning
Windows and application code is shared, while user personalization is unique. This makes antivirus (AV) scanning very fast, since there is only one image of the code.
Block-level replication is used for deltas, and file-level replication for compositing. Personalization data can be written from several individual CachePoints to a NAS/SAN in the data center, which enables legal discovery of changes to data, something that was not possible before.
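The block-level delta replication described above can be sketched as follows: instead of shipping whole images, only the blocks that changed since the last sync are sent to the replica. This is a minimal illustration; the block size and function names are assumptions, not Unidesk's actual mechanism.

```python
# Sketch of block-level delta replication: compare two versions of an image
# block by block, ship only the changed blocks, and replay them on a replica.
BLOCK_SIZE = 4096  # illustrative block size

def changed_blocks(old_image: bytes, new_image: bytes):
    """Yield (block_index, block_data) for every block that differs."""
    for offset in range(0, max(len(old_image), len(new_image)), BLOCK_SIZE):
        old = old_image[offset:offset + BLOCK_SIZE]
        new = new_image[offset:offset + BLOCK_SIZE]
        if old != new:
            yield offset // BLOCK_SIZE, new

def apply_deltas(replica: bytearray, deltas) -> bytes:
    """Replay a delta stream onto a replica of the base image."""
    for index, data in deltas:
        start = index * BLOCK_SIZE
        replica[start:start + len(data)] = data
    return bytes(replica)
```

A one-byte change in a 12 KB image produces a single 4 KB delta block rather than a full-image copy, which is the point of replicating at block granularity.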
Composite Virtualization understands the abstract layers (Windows, apps, and user data) and knows how to merge them together (composite) in real time to create a bootable C: device and provide a rich desktop experience. It virtualizes each desktop into layers:
- exes, COM objects, and DLLs are apps
- Registry entries are configuration
- everything else is data
It will support encryption in the future: Shared keys for windows and apps code, personal keys for private data
The compositing engine sits on top of the device driver and forms the individual layers by merging individual IO streams with the namespace knowledge it maintains.
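The compositing idea can be modeled as an ordered stack of namespaces, where a path lookup is resolved against the highest-priority layer that knows it. This is a toy sketch under my own assumptions (layer names and resolution order are illustrative, not Unidesk's design):

```python
# Toy model of compositing: resolve each path against an ordered stack of
# layers. User personalization wins over apps, and apps win over the OS layer.
class CompositeDisk:
    # Highest-priority layer first.
    LAYER_ORDER = ("user", "apps", "os")

    def __init__(self):
        self.layers = {name: {} for name in self.LAYER_ORDER}

    def write(self, layer: str, path: str, data: bytes):
        self.layers[layer][path] = data

    def read(self, path: str) -> bytes:
        # Merge the namespaces: the first layer that knows the path wins.
        for layer in self.LAYER_ORDER:
            if path in self.layers[layer]:
                return self.layers[layer][path]
        raise FileNotFoundError(path)
```

With this resolution order, a shared OS/app layer can back thousands of desktops while each user's writes land in, and are read back from, that user's own layer.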
A storage virtualization layer, implemented as an NTFS file system filter driver, provides a high-performance block IO device that talks to the CacheCloud. It loads early in the boot cycle and then loads a vmdk disk image containing Just Enough Windows, pre-composited to provide a bootable C: drive. The latter can be served from the CacheCloud.
It snapshots the system automatically by detecting application installs/uninstalls and ActiveX control downloads. An admin gets a timeline view of user-installed software and can easily reconstruct a hosed machine from the CacheCloud, recovering system state while retaining the user's data.
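The timeline recovery described above can be sketched as a list of labeled system snapshots, where a rebuild restores the system layers from a chosen point in time but reattaches the live user-data layer. Names and structures here are illustrative assumptions, not the vendor's API:

```python
import copy

# Sketch of timeline-based recovery: snapshot system state on each detected
# install event; rebuild from any snapshot while keeping user data intact.
class Timeline:
    def __init__(self):
        self.snapshots = []  # list of (label, system_state) in event order

    def snapshot(self, label: str, system_state: dict):
        # e.g. triggered when an application install/uninstall is detected
        self.snapshots.append((label, copy.deepcopy(system_state)))

    def rebuild(self, label: str, user_data: dict) -> dict:
        # Recover system state from the chosen point in the timeline,
        # then reattach the user's current data layer untouched.
        for name, state in self.snapshots:
            if name == label:
                return {**state, "user_data": user_data}
        raise KeyError(label)
```

Rolling back to the snapshot taken before a bad install undoes the system change without touching the user's documents, which is what "recover system state while retaining your data" amounts to.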
Currently in Beta with 22 customers spanning Financial Institutions, Higher Ed and the Government.
Distribution is through a channel strategy, working with top channel providers for VMware, Citrix, and Microsoft. The product can replace WAN acceleration, backup/DR, and persistent personalization products.
Phoenix Technologies is offering a Linux-based virtualization platform called HyperSpace, enabled by the HyperCore hypervisor embedded within the BIOS. HyperCore is most likely Xen-based and runs specialized core services side-by-side with Windows on Intel VT CPUs.
Its primary value proposition is that it is a fast-boot environment. The concept is to boot the user into a VM running Linux and show a Mozilla-based browser within the first 10 seconds, while Windows boots in parallel in another VM within the first minute or so. While the Windows boot is in progress, the user can connect (through Linux) to an available wireless network, browse the Internet, and switch between the two virtual machines using the F4 function key.
What do users think?
Here are some interesting reviews,
- Phoenix Technologies HyperSpace instant-on OS review
- Phoenix HyperSpace Dual and Hybrid
- A peek at Phoenix’s HyperSpace fast-boot Linux add-on
- Torture-Testing Phoenix HyperSpace, the Linux-Based Instant-On OS
Some other fast boot environments are:
- DeviceVM Splashtop (They don’t use virtualization today but have filed US Pat. 11772700 on Jul 2, 2007 for virtualizing dual OS boot)
- Asus ExpressGate
- Dell Latitude On
However, currently …
Phoenix was selling HyperSpace Dual (Linux only, no HyperCore) and Hybrid (Linux + HyperCore) in 2009 but they seem to have discontinued the Hybrid product line. Was the adoption poor due to limited hardware support? Or, shudder, was the product not fulfilling a customer need?
Perhaps we will see it once again in the near future: the HyperSpace front page hints that "HyperSpace 2.0 is coming soon".
The technology is cool, but …
Fast boot alone is not a compelling need. There aren't many times in life when users can't wait an additional 30 or so seconds for full access to Windows.
If you look at why Mac users have adopted VMware Fusion for running Windows, you'll realize that there must be a compelling need for users to change their behavior and adopt something new and different. Users in corporate environments switched to Macs because they did not want a Common Operating Environment Windows desktop locked down by IT. Using Fusion, they can continue to use Office (particularly Outlook, and especially the Outlook calendar) to meet the demands of work without missing a beat. Conversely, people who had always used Macs did not want to change their lifestyle when they joined a new company; using Fusion, they were able to assimilate into the corporate routine very quickly.
So the question at hand is, what is the compelling use case for a BIOS-based client hypervisor to gain adoption and market penetration?
What is the killer use case?
Perhaps the killer use case is the one that both HyperSpace and Splashtop are already fulfilling today for NetBooks and Nettops, using non-virtualized Linux to provide a Mozilla or Chrome browser as the primary interface for email, Facebook, Zynga, IM, browsing the Internet and using Microsoft Office compatible apps.
This raises the question: is there a compelling need for a Type 1 BIOS-based client hypervisor?
Dear Reader, What do you think?
This post is based on insight gained from two of Brian Madden’s posts: A deeper look at VMware’s upcoming bare-metal client hypervisor and Bare-metal client hypervisors are coming — for real this time
Type 1 Hypervisor
Type 1 (or native, bare-metal) hypervisors are software systems that run directly on the host's hardware to control the hardware and to monitor guest operating systems. A guest operating system thus runs at a level above the hypervisor. Some examples are VMware ESX, Xen, and Microsoft Hyper-V.
Type 1 hypervisors are appropriate when you want to provide the only OS that is used on a client. When a user turns a machine on, he only sees a single OS that looks and feels local.
Type 2 Hypervisor
Type 2 (or hosted) hypervisors are software applications running within a conventional operating-system environment. Considering the hypervisor layer as a distinct software layer, guest operating systems thus run at the third level above the hardware. Some examples are VMware Workstation, VMware Fusion, MED-V, Windows Virtual PC, VirtualBox, Parallels, MokaFive, etc.
Type 2 hypervisors are appropriate when you want a user to have access to their own local desktop OS in addition to the centrally managed corporate VDI OS. This could be an employee-owned PC scenario, or it could be a situation where you have contractors, etc., who need access to their personal apps and data in addition to the company's apps and data.
Over the past five years, Type 1 hypervisors have been used predominantly in the server market, whereas Type 2 hypervisors have been used on clients, i.e., desktops and laptops. Recently, the need has emerged for a Type 1 hypervisor that runs locally on a client device, called a client hypervisor, to support the Virtual Desktop Infrastructure (VDI).
VDI’s promise lies in realizing a significant cost reduction for managing desktops. A client hypervisor is useful because it combines the centralized management of VDI with the performance and flexibility of local computing. It offers several advantages:
- It provides a Hardware Abstraction Layer so that the same virtual disk image can be used on a variety of different devices.
- The devices do not need a “base OS” when the client hypervisor is present. The maintenance overhead of patching a “base OS” frequently on each of the devices is greatly reduced.
- Once a virtual disk image has been provisioned, it runs and the display is driven locally. This frees up the client from the need to support remote display protocols.
- It decouples the management of the device from the management of Windows and the user; administrators can spend their time focusing on user needs instead of device maintenance.
Type 1 Server and Client Hypervisors
Server hypervisors are designed to make VMs portable and to increase the utilization of physical hardware. Client hypervisors are intended to increase the manageability of the client device and improve security by separating work and personal VMs.
The bottom line is that even though they’re both called “Type 1” or “bare-metal hypervisors,” there are some philosophical differences in how each came to be. (This could help explain why it has taken over five years to extend the Type 1 hypervisor concept from the server to the desktop.)
| Dimension | Type 1 Server Hypervisor | Type 1 Client Hypervisor |
| --- | --- | --- |
| Design Goal | Host multiple VMs and make each VM seem like a "real" server on the network. | The user shouldn't even know that there is a hypervisor or that they are using a VM. |
| Virtualization Goal | I/O: disk and networking | Native device support that affects user experience, e.g., (a) GPU and graphics capabilities, (b) USB ports and devices, (c) laptop battery and power state, (d) suspend/hibernate states |
| Tuning | Maximum simultaneous network, processor, and disk I/O utilization | Graphics, multimedia, and wireless connectivity |
| Hardware Support | Narrow set of preapproved hardware models | Should (ideally) run on just about anything |
| Intrusiveness | Controls most if not all of the hardware platform and devices and provides a near-complete emulated and/or para-virtualized device model to the VMs running on top | (a) Should support full device pass-through to a guest VM; (b) should also support dynamic assignment and "switching" of devices between different guests |
Type 1 Client Hypervisor Vendors
In the Type 1 client hypervisor space, there are Neocleus NeoSphere and Virtual Computer NxTop. There are product announcements from both VMware and Citrix; however, there is no shipping product to date. There is also the Xen Client Initiative, an effort to port the open source Xen hypervisor to the client.
Today, hypervisors are a commodity. While they are indeed foundational technology, they are "out of sight, out of mind": most users do not perceive their presence and hence ascribe little or no value to this technology. Hypervisor developers will be hard pressed to build a lasting public company solely on selling hypervisors.
An interesting post on The changing role of the IT storage pro by John Webster, who interviewed the CIO of an unnamed storage vendor.
The CIO observed that the consolidation of IT infrastructure driven by server virtualization projects and a future rollout of virtual desktops is forcing a convergence of narrowly focused IT administrative groups. This convergence will cause IT administrators to develop competency in systems and services delivery in the future, rather than remain silo’ed experts in servers, networks, and storage.
Virtualization has brought about the convergence of systems and networks; the convergence of Fibre Channel and Ethernet within the data center changes the nature of the relationships between enterprise IT operational groups as well as the traditional roles of server, networking, and storage groups.
As the virtual operating systems (VMware, MS Hyper-V, etc.) progress, we will see an increased tendency to offer administrators the option of doing both storage and data management at the server rather than the storage level. Backups and data migrations can be done by a VMware administrator for example. Storage capacity can be managed from the virtualized OS management console.
John's observations tie in with the lessons from the two preceding posts, where we explored NetApp's virtualization storage features and thin-provisioned virtual disks, and learned that administrators have to understand not just the file system nuances but also the storage features in order to use storage for virtualization effectively.
These are my notes summarizing Brian Madden's hour-long Terminal Services versus VDI presentation at VMworld Europe 2009. I highly recommend watching Brian's video; he is insightful, lively, and articulate, and you'll enjoy learning from it as I did.
Both Terminal Services (TS) and Virtual Desktop Infrastructure (VDI) employ server-based computing (SBC) and offer the benefits that are inherent to SBC, namely
- Central management
- Central access control
- High performance
Historically, several applications have been found not to work with SBC due to limitations of remote display protocols, although the end user often views this as an application compatibility issue. Applications that are multimedia or graphics-intensive, write their state to proprietary folders on a local disk drive, require multiple monitors, or need a hardware security dongle are known to break.
Should I use TS or VDI?
If you have to answer this question, first identify which of your applications are SBC compatible. Once you have identified them, then decide between TS and VDI.
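The triage step above can be sketched as a simple rule over the application traits called out earlier: apps that are multimedia or graphics-heavy, need multiple monitors or a hardware dongle, or write state to local proprietary folders tend to break under SBC. The trait names and rule below are my own illustration, not a vendor-supplied checklist.

```python
# Illustrative SBC-compatibility triage based on the traits known to break
# remote display protocols. Trait names are assumptions for this sketch.
BREAKING_TRAITS = {
    "multimedia",
    "graphics_intensive",
    "local_proprietary_state",
    "multi_monitor",
    "hardware_dongle",
}

def sbc_compatible(app_traits: set) -> bool:
    """An app is a TS/VDI candidate only if none of the known
    display-protocol breakers apply to it."""
    return not (app_traits & BREAKING_TRAITS)
```

Apps that fail this screen stay on local desktops; the TS-versus-VDI decision then applies only to the apps that pass.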
Advantages of TS:
- Very high user density (by contrast, VMware VDI supports only 6 to 8 users per core today)
- Proven solution/mature technology (80 million users and a wealth of user experience on wikis and training material on the Web; by contrast, VDI is cutting-edge, with a learning curve and a dearth of usage-based content)

Advantages of VDI:
- Automatic "thin" provisioning: all users share a common copy of OS/app code
- Live migration for load balancing and for supporting mobile users
- VMs can be rebooted without rebooting the host
- Suspend/resume of VMs is possible (a disconnected TS session continues to consume resources)
- Fault tolerance per user (since each user has their own VM)
- Competition among vendors (Citrix was a monopoly for TS; now VMware, Microsoft, Citrix, and several other vendors are investing in the future of VDI)
Brian showcased Atlantis Computing as an innovator that dynamically composes a bootable virtual disk (vhd or vmdk) for booting a VM.
- Disk space: 10GB per user in the data center (cost per GB in data center is 5X to 10X its cost on the desktop)
- Routine ops: how to run AV, backup, and patching against one master image
Brian’s predictions for 2010:
- Improving user density
- Remote Display Protocol improvements
- Thin provisioning/ Windows layering
- Offline VDI/Local hypervisors
- Local personality/Application management
Brian possesses journalistic flair; his posts are always insightful and thought-provoking. I have become a great fan of his blog.
Migration to Windows 7 is an impending event and it will happen: Windows XP was released in 2001 and is already over seven years old, while new generations of processors (multi-core, 64-bit, Intel VT), chipsets, graphics cards, audio cards, and disk interfaces (e.g., SATA), all developed after XP gained mainstream adoption, are already shipping in commodity computer hardware today.
63% of all desktops/laptops/workstations worldwide use XP and 23% use Vista; the remaining market share is fragmented across other Windows versions, Mac, Linux, and mobile device OSs. [Net Applications Operating Systems Market Share report.]
XP lost 10% market share between May 2008 and March 2009, while Vista gained just over 8% [Net Applications Top Operating Systems Share Trend report]. I presume that 8% of the XP users migrated to Vista and the remaining 2% seized this opportunity to migrate to a Mac instead.
The migration from Vista to Windows 7 should be smooth since the latter is an incremental release of the former. However, the migration from XP to Windows 7 poses some of the same structural challenges outlined in my earlier post.
At the end of the day, end users care about running their applications and expect to continue to do so over the course of routine hardware and OS refresh cycles – the hardware and OS have become a commodity. The challenge for Microsoft, and the enormous market opportunity, is to provide solutions that can permit a seamless migration from XP to Windows 7 such that end users can continue to use all of their existing applications from the same desktop cost-effectively.
While the Windows 7 migration is not a dislocating event by itself, its timing coincides with the business need to move to a modern hardware and desktop OS, which encourages corporate customers to look at alternate ways of managing the desktop. The Virtual Desktop Infrastructure (VDI) vendors are viewing it as an opportunity to gain adoption for their VDI offerings – Citrix XenDesktop, Microsoft MED-V, VMWare View.
In upcoming posts, I will outline the alternative that Microsoft is offering to smooth the upgrade path from XP to Windows 7.