shareVM – Share insights about using VMs

Simplify the use of virtualization in everyday life

Posts Tagged ‘vmware’

Type 1 and Type 2 Client Hypervisors

This post is based on insight gained from two of Brian Madden’s posts: A deeper look at VMware’s upcoming bare-metal client hypervisor and Bare-metal client hypervisors are coming — for real this time

Wikipedia distinguishes between two types of hypervisors:

Type 1 Hypervisor

Type 1 (or native, bare-metal) hypervisors are software systems that run directly on the host’s hardware to control the hardware and to monitor guest operating systems. A guest operating system thus runs at a level above the hypervisor. Some examples are VMware ESX, Xen, and Microsoft Hyper-V.

Type 1 hypervisors are appropriate when you want to provide the only OS that is used on a client. When a user turns a machine on, he only sees a single OS that looks and feels local.

Type 2 Hypervisor

Type 2 (or hosted) hypervisors are software applications running within a conventional operating-system environment. Considering the hypervisor layer as a distinct software layer, guest operating systems thus run at the third level above the hardware. Some examples are VMware Workstation, VMware Fusion, MED-V, Windows Virtual PC, VirtualBox, Parallels, and MokaFive.

Type 2 hypervisors are appropriate when you want a user to have access to their own local desktop OS in addition to the centrally-managed corporate VDI OS. This could be an employee-owned-PC scenario, or it could be a situation where you have contractors, etc., who need access to their personal apps and data in addition to the company’s apps and data.

Client Hypervisors

Over the past five years, Type 1 hypervisors have dominated the server market, whereas Type 2 hypervisors have been used on clients, i.e., desktops and laptops. Recently, the need has emerged for a Type 1 hypervisor that runs locally on a client device, called a client hypervisor, to support the Virtual Desktop Infrastructure (VDI).

Benefits

VDI’s promise lies in realizing a significant cost reduction for managing desktops. A client hypervisor is useful because it combines the centralized management of VDI with the performance and flexibility of local computing. It offers several advantages:

  • It provides a Hardware Abstraction Layer so that the same virtual disk image can be used on a variety of different devices.
  • The devices do not need a “base OS” when the client hypervisor is present. The maintenance overhead of patching a “base OS” frequently on each of the devices is greatly reduced.
  • Once a virtual disk image has been provisioned, it runs and the display is driven locally. This frees up the client from the need to support remote display protocols.
  • It decouples the management of the device from the management of Windows and the user; administrators can spend their time focusing on user needs instead of device maintenance.

Type 1 Server and Client Hypervisors

Server hypervisors are designed to make VMs portable and to increase the utilization of physical hardware. Client hypervisors are intended to increase the manageability of the client device and improve security by separating work and personal VMs.

The bottom line is that even though they’re both called “Type 1” or “bare-metal hypervisors,” there are some philosophical differences in how each came to be. (This could help explain why it has taken over five years to extend the Type 1 hypervisor concept from the server to the desktop.)

Comparison by dimension – Type 1 Server Hypervisor vs. Type 1 Client Hypervisor:

Design Goal
  • Server: Host multiple VMs and make each VM seem like a “real” server on the network.
  • Client: The user shouldn’t even know that there is a hypervisor or that they are using a VM.

Virtualization Goal
  • Server: I/O – disk and networking.
  • Client: Native device support that affects the user experience, e.g., (a) GPU and graphics capabilities, (b) USB ports and devices, (c) laptop battery and power state, (d) suspend/hibernate states.

Tuning
  • Server: Maximum simultaneous network, processor, and disk I/O utilization.
  • Client: Graphics, multimedia, and wireless connectivity.

Hardware Support
  • Server: A narrow set of preapproved hardware models.
  • Client: Should (ideally) run on just about anything.

Intrusiveness
  • Server: Controls most, if not all, of the hardware platform and devices, and provides a near-complete emulated and/or para-virtualized device model to the virtual machines running on top.
  • Client: Should support full device pass-through to a guest VM, as well as dynamic assignment and “switching” of devices between different guests.


Type 1 Client Hypervisor Vendors

In the Type 1 client hypervisor space, there are Neocleus NeoSphere and Virtual Computer NXTop. There are product announcements from both VMware and Citrix; however, there is no shipping product to date from either. There is also the Xen Client Initiative – an effort to port the open-source Xen hypervisor to the client.

Editorial Opinion

Today, hypervisors are a commodity. While they are indeed foundational technology, they are “out of sight, out of mind”: most users do not perceive their presence and hence ascribe little or no value to this technology. Hypervisor developers will be hard-pressed to build a lasting public company solely on selling hypervisors.

Should you de-fragment Virtual Disks?

The Windows de-fragmentation tool, or a commercial alternative, needs 5-15% of free disk space to be effective. It may need more if you have some very large files (like video or database files). Below is the layout of the C: drive of my virtual machine; the red segments are the fragmented files.

If a file has one large segment, the defragmenter has to move that segment to a free area and copy the rest of the segments alongside it to make the file contiguous. If there is no place to copy a file’s largest extent, it won’t get defragmented.

The best way to de-fragment is to get an empty disk and copy all the files onto it, so the more free disk space you have, the better these tools will perform.

Also, de-fragmentation of a virtual disk is very different from de-fragmentation in the physical world. Take the disk above: it is a virtual disk of type “2GB Max Extent Sparse”.

The disk was full, so I extended it (with fatVM) and then defragmented one file (you can do that with Mark Russinovich’s Contig tool: http://technet.microsoft.com/en-us/sysinternals/bb897428.aspx). You can see that the files in the extended portion are contiguous (blue). The original disk clearly requires defragmentation, but without extending it we would not have been able to make the key database file contiguous.
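
As a concrete illustration, here is a minimal sketch of driving Contig from a script inside the Windows guest. It assumes contig.exe is on the PATH and that the analyze/defragment invocations match the Sysinternals documentation linked above; the file path in the usage example is hypothetical.

```python
# Minimal sketch: analyze and defragment a single file inside a Windows
# guest with Sysinternals Contig. Assumes contig.exe is on PATH; verify
# the "-a" (analyze) flag against the Sysinternals page linked above.
import subprocess
import sys

def analyze_and_defrag(path: str) -> None:
    # Report the file's current fragmentation.
    subprocess.run(["contig.exe", "-a", path], check=True)
    # Make the file contiguous (needs enough contiguous free space).
    subprocess.run(["contig.exe", path], check=True)

if __name__ == "__main__":
    # e.g. python defrag_one.py "C:\data\key_database.mdf"  (hypothetical path)
    analyze_and_defrag(sys.argv[1])
```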

This makes one ask whether you really need to defragment a virtual disk the traditional way. It is much faster to extend the disk and/or attach a separate disk, simply copy over all the files, and replace the original disk with the new, extended disk.

Another advantage of doing this is that, besides being much faster than defragging, it can improve the performance of the virtual machine considerably. You can also take the files that are static (don’t change) and put them on a new base disk for the C: drive that is a flat file instead of a sparse disk, since a sparse disk is not really saving you anything once it gets full. If you have a flat parent and a sparse child, you get the best of both worlds.

In my limited experience, instead of defragging, do the following (a sketch of the host-side disk operations appears after the list):

  • Create a new flat disk and copy all the files from C: to the new disk
  • Make the new disk your C: drive
  • Create a clone of the base disk (which by definition is sparse)
  • Extend the sparse disk

Your virtual machine’s performance will be significantly improved.
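
For the host-side disk operations in this recipe, something like the sketch below could work. It assumes VMware’s vmware-vdiskmanager utility is on the PATH, that the flags (-c/-s/-a/-t to create, -x to extend) behave as in the VMware documentation, and the .vmdk file names are hypothetical; treat it as an illustration of the workflow rather than a supported script.

```python
# Host-side sketch of the recipe above, driven through vmware-vdiskmanager.
# Flags and file names are assumptions; check them against your VMware version.
import subprocess

VDISKMGR = "vmware-vdiskmanager"  # assumed to be on PATH

def create_flat_disk(path: str, size: str = "40GB") -> None:
    # -t 2: preallocated ("flat") virtual disk in a single file.
    subprocess.run(
        [VDISKMGR, "-c", "-s", size, "-a", "lsilogic", "-t", "2", path],
        check=True,
    )

def extend_sparse_disk(path: str, new_size: str = "40GB") -> None:
    # -x grows an existing virtual disk; the guest file system still has to
    # be extended afterwards (e.g. with Disk Management inside Windows).
    subprocess.run([VDISKMGR, "-x", new_size, path], check=True)

if __name__ == "__main__":
    create_flat_disk("new-c-drive-flat.vmdk")    # step 1: new flat disk
    extend_sparse_disk("original-c-drive.vmdk")  # step 4: extend the sparse disk
    # Steps 2 and 3 (copying the files, re-pointing the VM's C: drive,
    # cloning the base disk) are done in the guest and in the VM settings.
```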

Why do Windows C drives get full in virtual disks?

A real-life experience posted by a member of the VMware vCenter Server Communities yesterday (Feb 8, 2010):

I have installed VC with SQL 2005 Express; now my vCenter server C:\ is almost full.
Is it possible to move my vCenter database to another drive?

The solution recommended by an expert is:

You can install a new server with more space and migrate the data as follows:
(link to KB post)
But you can also use tools like gparted or dell_expart to increase your space.

While this recommendation is consistent with the perceived state of the art, it does have the following impact:

It is not going to affect the running VMs or ESX, but you/VSC may see a disconnect for a while.

Another member recommends a different approach:

A different approach would be to extend the c-drive.
We have recently released a tool (fatVM) to make this easy (or easier).
It creates the extended VM in a new directory (with the original as parent), does not touch the original files, and is able to extend most VMs in a couple of minutes.
Here is the link: http://www.gudgud.com/fatvm

A third member is contemplating a similar move:

I have a 4-host ESX 3.5U4 system. My vCenter is pointing to an external SQL server. I am about to upgrade to vSphere and want to have SQL running on the vCenter server itself – most likely using SQL Express. I have the same concern about space.

You must have noticed the pattern that is emerging: your C: drive can fill up when you are running a database system or a log-aggregation server inside a VM with a pre-allocated disk and the data keeps growing. As a best practice, review your apps for potential data growth before pre-allocating the size of the VM’s disk.
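
One way to catch this pattern before the drive fills up is a simple free-space check inside the guest. The sketch below is illustrative only; the 15% threshold and the C: drive letter are assumptions, not recommendations from the thread above.

```python
# Minimal free-space check for a Windows guest's system drive.
# The drive letter and 15% threshold are illustrative assumptions.
import shutil

def check_free_space(drive: str = "C:\\", min_free_fraction: float = 0.15) -> bool:
    usage = shutil.disk_usage(drive)
    free_fraction = usage.free / usage.total
    print(f"{drive} {free_fraction:.0%} free "
          f"({usage.free // 2**30} GiB of {usage.total // 2**30} GiB)")
    return free_fraction >= min_free_fraction

if __name__ == "__main__":
    if not check_free_space():
        # Hook this into whatever alerting you already use (email, event log, ...).
        print("Warning: low free space – plan to extend the disk or move the data.")
```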

How “capitalism” forces virtualization downstream

Over the last decade we have systematically added a layer of indirection at every interface in the stack. These days we call this virtualization!

On a NetApp Filer we had

raid disk group > volume

The problem was that you could not expand/shrink RAID groups on the fly, and you couldn’t easily move data between different RAID groups. So we got a layer of indirection, or virtualization:

raid disk group > aggregate > flex volume

Since an aggregate was logical instead of physical, it could be expanded or shrunk without changing the volume, and you could move data around.

On a USB Disk

If we look inside the disk itself, especially USB flash devices, we went from

cylinder, heads, sector > logical table > device abstraction

Again, this allowed different logical sectors to be rotated across different physical cells, to ensure a single cell was not rewritten more times than its lifetime allows.

In a SAN

We put a switch between the RAID groups and the computer. The switch puts a layer of indirection between the blocks and the computer.

You knew all that :-). So what does it have to do with capitalism? My simplistic definition of capitalism is that the system will remove all inefficiencies in a chain, and whoever removes them stands to benefit economically. Or, said another way: money finds its way into the right pockets!

So look at the stack today:

chips > motherboard, network, storage, bios > hypervisor > OS > Security, Backup etc > Business, Productivity Apps

Every layer presents an interface to the layer above. Each layer is also owned by different companies in the eco-system, and each of those companies is under pressure to maximize its revenue. Tasked with this difficult challenge, you look at the layer above, see what is selling, and ask whether you can add it to your layer. It happens naturally over time: Intel added virtualization support, Phoenix BIOS is adding the hypervisor, operating systems are trying to add backup and security … The cycle goes on …

Virtualization will always be “innovated” in a higher layer of the stack and commoditized by the lower layers.

The higher layer in the stack finds a lot of new functionality and benefit by making its interface to a lower layer “logical”. It takes this to market, until at some point the lower layer realizes that this is its API and that it should move virtualization into its own layer. The pressure to do this is extreme, and the time frame to monetize it really small:

  • Imagine the tussle between VMW and the storage vendors. VMW introduces logical disks with cloning, but storage vendors want to offer logical LUNs, volumes, and disk files, as this moves the cloning functionality from the hypervisor to the storage.
  • Imagine: Western Digital or Seagate could create multiple disks (vhd/vmdk files) on a single physical disk and then offer the capabilities to grow, shrink, and move data between them. They could even add networking to the disk controller, so that different disks can connect to each other. They can do that once processing power and memory reach a price point where they can be embedded directly into the component or lower layer, which is effectively what happened to computing.
  • VMW introduces a logical network switch; Cisco jumps in with the Nexus 1000V.

For the consumer this is a good thing, but money and value are shifting down the stack across different companies, which have to co-exist in the eco-system (Cisco, Intel, EMC, VMW) yet guard their innovation from becoming commoditized.

Written by RS

December 13, 2009 at 8:36 am

EMC FAST (Fully Automated Storage Tiering) for storage savings

Chuck Hollis (VP Global Marketing, CTO, EMC) describes FAST over three blog posts. The technology has been in beta use by several customers during 2009.

The premise

When you analyze the vast majority of application I/O profiles, you’ll realize that a small amount of data is responsible for the majority of I/Os, while almost all of the remaining data is infrequently accessed.

The principle

Watch how the data is being accessed, and dynamically place the most popular/frequently accessed data (usually the small amount) on flash drives, and the vast majority of infrequently accessed data on big, slow SATA drives.
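
To make the principle concrete, here is an illustrative sketch of frequency-based placement: count accesses per block and keep the hottest fraction on the flash tier. The 10% flash fraction and the data structures are assumptions made for the example; EMC has not published FAST’s internals in these posts.

```python
# Illustrative sketch of frequency-based tiering: place the most
# frequently accessed blocks on a small flash tier and everything
# else on SATA. The 10% flash fraction is an arbitrary assumption.
from collections import Counter

def plan_tiers(access_log, flash_fraction=0.10):
    """access_log: iterable of block IDs, one entry per observed I/O."""
    counts = Counter(access_log)
    blocks_by_heat = [block for block, _ in counts.most_common()]
    flash_slots = max(1, int(len(blocks_by_heat) * flash_fraction))
    flash = set(blocks_by_heat[:flash_slots])   # hot blocks -> flash
    sata = set(blocks_by_heat[flash_slots:])    # cold blocks -> SATA
    return flash, sata

if __name__ == "__main__":
    # Toy access log: block 7 is hot, the rest are touched once.
    log = [7, 7, 7, 7, 1, 2, 3, 4, 5, 6, 8, 9, 10]
    flash, sata = plan_tiers(log)
    print("flash tier:", sorted(flash))
    print("sata tier :", sorted(sata))
```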

The storage savings solution

  • FAST – Place the right information on the right media based on frequency of access.
  • Thin – Thin (virtual) provisioning: allocate physical storage when it is actually being used, rather than when it is provisioned.
  • Small – Compression, single-instancing, and data de-duplication technologies eliminate information redundancies.
  • Green – A significant amount of enterprise information is used *very* infrequently; so infrequently, in fact, that the disk drives can be spun down, or at least be made semi-idle.
  • Gone – Policy-based lifecycle management: archiving and deletion, and federation to the cloud through private and public cloud integration. The information can be shopped out to a specialized service provider as an option.

… and life goes on!

One thing hasn’t changed, though. The information beast continues to grow.

Written by paule1s

December 11, 2009 at 9:29 am

Top 7 requirements for infrastructure cloud providers in 2010

This is a summary of the post on the VMOps blog.

1) Inexpensive storage

The storage industry is built on the back of NAS and SAN, but for cloud providers the overwhelming preference is for inexpensive local disk, or DAS, solutions. … every cloud provider I talk with expects storage to be independent of the physical host server, to be redundant, and to support HA.

2) Open source hypervisor

Service providers know that if they plan to compete with Amazon, Rackspace, and other cloud providers on price, VMware is not a good option. Perhaps because it is used by Amazon, Xen seems to be the most popular hypervisor for infrastructure clouds among service providers.

3) Integration with Billing and Provisioning Apps

… most hosting companies and MSPs have billing and user-management approaches that they have built up over the years. Every one of the companies I’ve spoken with expects their cloud solution to plug into these existing systems.

4) Image-based pricing to support both Windows and Linux

Most service providers I talk to expect Linux to make up the majority of the images they run in the cloud, but they still need to make sure the cloud will support Windows, along with all of the associated technology necessary to manage licenses.

5) Simplicity of administration by end-users

Plenty of end-users will leverage a cloud’s API to automatically provision and manage virtual machines, but that doesn’t change the need for a simple UI. Most hosting companies have a huge number of end-users who are used to working with control panels, and an infrastructure cloud needs to make life easy for these end-users.

6) Reliability

Over the next few years, many of the large providers of dedicated servers will be offering their customers the option to transition to virtual machines running on a computing cloud. For this to be successful, VMs need to offer better reliability than dedicated machines at a lower cost.

7) Turn-key solution

… service providers today can implement a completely integrated cloud stack on commodity hardware, and receive ongoing maintenance and upgrades over the years. Equally important, service providers can license software on a consumption basis, so upfront investment is negligible.

Incidentally, Mr. VMOps Product Manager, you may wish to provide just 3 more requirements to make this a Top 10 requirements list.

Written by paule1s

December 10, 2009 at 6:58 pm

Steve Herrod’s Virtualization Predictions for 2010

Steve Herrod is the CTO and Senior VP of R&D at VMware.

1) Growing modularization of the data center

The EMC/Cisco vBlock bundles, as well as “converged infrastructure” offerings from HP, demonstrate that vendors understand that customers want simplicity, ease of use, and a single support channel. Look for more to come.

2) Private clouds

Industry participants in 2010 will better define the private cloud and help customers implement it. We’ve finally emerged from the hype of cloud computing to tackle the realities of what’s needed and confront the challenges that currently exist, particularly around security. Furthermore, customers are realizing that their IT landscape will include a mix of both private and public cloud offerings (hybrid clouds). Customers want this hybrid world to be more easily managed and to give them the right balance between efficiency, scalability, security, and control. I fully expect that vendors, systems integrators, and certification agencies will rally to meet that demand in 2010.

3) High-level application frameworks

I expect 2010 to be the year when developers and customers recognize the need to have even more choice as to where their applications run in the cloud. This will lead to both open cloud standards and the arrival of new PaaS offerings designed with cloud portability in mind.

Read the full post here

Predictions for 2009 are here.

Written by paule1s

December 7, 2009 at 4:27 pm