Announcing Open Source Virtualization India meetup group

We are glad to announce the Open Source Virtualization India meetup group!! For a long time we have been answering and discussing virtualization-related queries over email or IRC, so this is an effort to bring virtualizers together under one group.

This is a forum to discuss the various open source Linux virtualization technologies such as Xen, KVM and oVirt, along with their friends libvirt, virt-manager and so on.

If you would like to be part of it, please join www.meetup.com/open-virtualization-india/

We have scheduled three meetups (Bangalore, Pune and Aurangabad) for the near future, though the exact dates are yet to be decided.


Pune – Organizer: Anil Vettathu

Aurangabad – Organizer: Ashutosh Sudhakar Bhakare

Please let me know if you would like to volunteer for any of the meetups in your area.

** Welcome Virtualizers!! **

Guest disk device names and their naming scheme in Xen and KVM…

Many people get confused when they see guest disk device names in different forms. With several hypervisors in use, it is easy to mix them up…

Here is a small description of the disk device names:

 

1) xvda, xvdb, etc.: the disk names seen in Xen paravirtualized guests (Xen only).

2) hda, hdb, etc.: the disk devices when the guest uses emulated IDE disks (both Xen and KVM).

3) vda, vdb, etc.: virtio disks, used in the KVM context.

4) sda, sdb, etc.: SCSI disks used in guests.
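
If you are not sure which scheme a particular guest is using, you can check from inside the guest itself; a minimal sketch (the devices you actually see depend on the guest configuration):

# cat /proc/partitions
# ls -l /dev/disk/by-path/

The first command lists the disks with their xvd*/vd*/hd*/sd* names, and the second shows how each name maps back to the underlying bus.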

 

Hope it helps…

Xen and KVM

Hey guys,

Thought of sharing some bits on the most popular open source virtualization technologies.

Yeah, it is “Xen” and “KVM”.

Mainly I am focusing on the hypervisor loading process and features.

Xen

Xen originated as a research project at the University of Cambridge, led by Ian Pratt, senior lecturer at Cambridge and founder of XenSource, Inc. This company supported the development of the open source project and also sells enterprise versions of the software. The first public release of Xen occurred in 2003. Citrix Systems acquired XenSource, Inc in October 2007 and the Xen project moved to www.xen.org.

Loading Xen Hypervisor

The Xen architecture consists of the hypervisor, Domain 0 (Dom0 for short, a privileged domain or guest) and the unprivileged domains (DomUs, or guests). The Xen hypervisor runs directly on the physical hardware and its functions include scheduling, resource control, memory management and loading the Dom0 kernel. Dom0 runs on top of the hypervisor and acts as a privileged guest. It provides the device drivers for host hardware such as network cards and storage controllers, it starts user space services such as xend and the management tools (xm, virsh, virt-manager etc.), and it performs all other OS-related functions.
The x86 architecture was notorious for its lack of virtualizability. Xen overcame this by modifying the guest OSes to communicate with the hypervisor: whenever an instruction that needs to be virtualization-aware is about to execute, a call is made to the hypervisor, which performs the necessary task on the guest's behalf. Because of this design, it is not possible to run OSes that cannot be modified on CPUs that lack virtualization extensions.

When the host starts, the bootloader (e.g. grub) boots the hypervisor first, and the hypervisor then loads the Dom0 kernel and initrd. DomUs, or guests, are installed later using the management tools provided inside Dom0.

The above figure shows a general view of Xen architecture.

An example grub configuration for Xen

root (hd0,0)
kernel /xen.gz-2.6.18-xxx
module /vmlinuz-2.6.18-xxx.xen ro root=/dev/VolGroup00/LogVol00 rhgb quiet
module /initrd-2.6.18-xxx.img
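
Once the system has booted this way, a quick way to confirm that Dom0 is really running on top of the hypervisor (a small sketch, run as root in Dom0):

# cat /sys/hypervisor/type
# xm list

The first command prints “xen” on a Xen-enabled kernel booted under the hypervisor, and Domain-0 should appear in the domain list printed by the second.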

The complex boot process of Xen makes it hard to troubleshoot issues that arise at boot time, because it is difficult to isolate whether a problem lies in the hypervisor or in the Dom0 kernel layer. Sometimes it is necessary to install and test an unmodified host OS kernel to confirm whether the issue is with the Xen kernel or not. Another drawback is that the Xen kernel does not support all the hardware and features available natively in the Linux kernel. Maintaining and developing the Xen code outside the upstream kernel is an extra burden on developers. Since Xen is mainly used for paravirtualization, the guest operating system has to be modified, and unlike KVM, separate utilities (xm, xentop, etc.) are developed only for Xen. On the security side, two privileged entities have to be secured: the hypervisor and Dom0. But we respect Xen for delivering near-native performance with modified guest kernels, and no special hardware support (virtualization capability) is required to run guests.

Kernel-based Virtual Machines (KVM)

KVM is a hardware-assisted virtualization technology developed by Qumranet, Inc. to offer a desktop virtualization platform on Linux servers, using their proprietary (now open source) SPICE protocol. KVM makes use of advances in processor technology, Intel VT-x and AMD-V, to support virtualization. In 2007, KVM was merged into the upstream 2.6.20 mainline kernel, and in 2008 Qumranet was acquired by Red Hat, Inc.

Loading KVM Hypervisor and its components

KVM is a kernel module for the Linux kernel that turns Linux into a hypervisor when it is loaded. Device emulation is handled by a modified version of QEMU (qemu-kvm).

A guest running on KVM is actually executed in the user space of the host. This makes each guest instance look like a normal process to the underlying host kernel.

The above figure shows a general view of KVM architecture.

Executing the following commands as root will load KVM.

# egrep "svm|vmx" /proc/cpuinfo

If there is output from this command, the CPU is capable of hardware-assisted full virtualization and you can run guests under KVM.

# modprobe kvm

You also have to modprobe kvm-intel or kvm-amd depending on the type of host.

KVM enters the Linux kernel in “module” form and thus adopts all the features of a Linux kernel module. This invites the remark, “I don’t want to reboot the entire system just to pick up my new patch”. KVM uses the following modules to turn the Linux kernel into a hypervisor:

kvm.ko       –> Core KVM module
kvm-intel.ko –> Used on Intel CPUs
kvm-amd.ko   –> Used on AMD CPUs
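
Putting the two checks together, here is a small sketch (run as root) that loads the right vendor module for the host CPU and verifies it:

if grep -qw vmx /proc/cpuinfo; then
    modprobe kvm-intel    # Intel VT-x host
elif grep -qw svm /proc/cpuinfo; then
    modprobe kvm-amd      # AMD-V host
fi
lsmod | grep kvm          # kvm plus kvm_intel or kvm_amd should be listed
ls -l /dev/kvm            # the device node qemu-kvm talks to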

A normal Linux process has two modes of execution: kernel mode and user mode. KVM adds a third one: guest mode. When a guest is executing non-I/O guest code, it runs in guest mode. All the virtual guests running under KVM are just regular Linux processes on the host: you will see one “qemu-kvm” process per running guest, with a thread for each of that guest's virtual CPUs. This brings another advantage: these processes' priorities can be raised or lowered with the normal priority-adjusting commands (nice/renice), you can set CPU affinity, and you can use ps, top, kill and so on. Control groups can also be created and used to limit the resources each guest may consume on the host.
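
For example (a hypothetical session; 4242 stands for whatever PID pgrep reports for your guest):

# pgrep -lf qemu-kvm
# renice +5 -p 4242
# taskset -cp 0,1 4242

The first command lists one qemu-kvm process per running guest, renice lowers that guest's scheduling priority, and taskset pins it to host CPUs 0 and 1.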

Each virtual CPU of a KVM guest is implemented as a Linux thread, and the Linux scheduler is responsible for scheduling a virtual CPU just like any other thread. In KVM, guest physical memory is just a chunk of host virtual memory; that is, the guest's physical memory is emulated within the virtual memory of the “qemu-kvm” process. So it can be swapped, shared, backed by large pages, backed by a disk file, COW'ed, and it is also NUMA aware.
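
You can see this from the host side as well; a minimal sketch:

# ps -C qemu-kvm -o pid,rss,vsz,comm
# grep -i huge /proc/meminfo

The RSS column is the guest memory currently resident on the host, and /proc/meminfo shows whether hugepage backing is available or in use.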

With the “virtio” drivers, KVM competes closely with Xen in delivering near-native performance. virtio provides a common front end for device emulation (network, block, etc.) to standardize the interface. The front-end drivers are implemented in the guest operating system and the back-end drivers are implemented in the hypervisor; each front-end driver has a corresponding back-end driver in the hypervisor.
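
Inside a KVM guest that uses virtio, the front-end drivers show up as ordinary kernel modules; a quick check (a sketch, assuming the guest has virtio block and network devices):

# lsmod | egrep 'virtio_blk|virtio_net'
# ls /sys/bus/virtio/devices/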

Install a fully virtualized guest on an Itanium system

To install a fully virtualized guest on an Itanium Red Hat Enterprise Linux system, first ensure that the “xen-ia64-guest-firmware” package is installed on the host.

After creating the guest, either through virt-manager or using virt-install, the guest will present an EFI shell. In order to begin the installation, enter mount fs0. This makes the files needed to commence the installation available to the guest virtual machine.

Next, enter fs0: to switch to that device, then bootia64.efi to start ELILO. Finally, hit enter at the ELILO prompt to begin the installation of the guest operating system.
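
In the EFI shell the sequence looks roughly like this (the prompts shown are illustrative):

Shell> mount fs0
Shell> fs0:
fs0:\> bootia64.efi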

Below is the step-by-step process I followed; a virt-install equivalent is sketched after the list.

* Log in to the Xen (IA-64) server.


* Run virt-manager

* Create a new virtual machine: click on “New”

* Select a name, e.g. test-fv-on-IA64

* Choose a Virtualization method

* Choose the installation method (network boot (PXE), ISO image, etc.) and select the OS type and variant. Here I selected “ISO image location”, OS type “Linux” and OS variant “Red Hat Enterprise Linux 5”.

* Assign storage space
Select – “Simple File”
Edit the file location and specify something like /var/lib/xen/images/test-fv-on-IA64.img
Select the size – e.g. 4000 MB

* Specify the network: select “Virtual Network”

* Allocate memory and CPU

* Summary

* Now you will be directed to the EFI shell

* Here you need to type the command “mount fs0”

* Then execute the command “fs0:” and check the directory contents with the “dir” command.


* Now type the command “bootia64.efi” and press the Enter key.

* That’s it!!

Now a fully virtualized system is ready for you to play with.
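
For reference, a virt-install equivalent of the wizard steps above would look roughly like this; the memory size and the ISO path are assumptions, since the walkthrough does not record them, so adjust both to your setup:

# virt-install --name test-fv-on-IA64 \
      --ram 1024 \
      --hvm \
      --file /var/lib/xen/images/test-fv-on-IA64.img --file-size 4 \
      --cdrom /path/to/rhel5-ia64.iso \
      --network network:default

After this the guest drops you into the same EFI shell as above, and the mount fs0 / bootia64.efi steps are identical.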

How can I increase a virtual guest’s disk space if it was created on a file image?

This is a very useful method in real life; I have had to increase the disk space of a virtual machine that was created on a file image.

You can try either of the procedures below to increase the space, depending on how the virtual guest’s storage was created. That is, a virtual guest can be installed either on a “fully allocated image” or on a “sparse file”, depending on the option (“Allocate entire disk now?”) you selected when installing it.

If it is a fully allocated file image, follow the procedure below to increase the space. To be on the safer side, make sure the guest in question is powered off. :)

[root@hchiramm kvmguests-images]# du -sh rhel5.4-x86_64-kvm.img
5.9G rhel5.4-x86_64-kvm.img
[root@hchiramm kvmguests-images]# dd if=/dev/zero bs=1M count=2048 >> rhel5.4-x86_64-kvm.img
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 22.8518 s, 94.0 MB/s
[root@hchiramm kvmguests-images]# du -sh rhel5.4-x86_64-kvm.img
7.9G rhel5.4-x86_64-kvm.img
[root@hchiramm kvmguests-images]#

If it is a sparse file (have a look at my previous blog post for more information on sparse files), follow the steps below:


#dd if=/dev/zero of=/kvmguests-images/test-guest.img bs=1M count=0 seek=7168 conv=notrunc

The above command grows the guest storage file to 7 GB: with bs=1M and count=0, seek=7168 extends the file length to 7168 MiB without writing any data, and conv=notrunc keeps the existing contents intact.

It was something less than 7 GB before. 🙂

You will be able to see the new space inside the guest once it is started. Verify it with the “fdisk -l” command.
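
To double-check, a small sketch (the image path is the hypothetical one from the sparse-file example above):

# qemu-img info /kvmguests-images/test-guest.img
# fdisk -l

On the host, qemu-img info should now report a virtual size of 7.0G; inside the guest, fdisk -l shows the extra, still unpartitioned space at the end of the disk.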

More to come 🙂

How to know the actual memory/RAM installed in a Xen host system?

I often hear complaints like “the actual memory (RAM) displayed on the Xen host system is NOT CORRECT”, or questions like “how do I find the actual physical RAM in a Xen host system?”

People run the “free” command on the Xen host system and then complain as above. “free” won’t help you here, because it only shows the physical memory available to Dom0, the privileged guest; it cannot tell you the total RAM in the machine.

But there is another way to get the exact information: run the command “xm info” on your Xen host system and grep for “memory”.

 

Ex: # xm info |grep -i memory
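
A hypothetical example of the output (the figures depend on the host and are reported in MB):

total_memory           : 16382
free_memory            : 1824

total_memory is the physical RAM installed in the machine, while free_memory is what the hypervisor has not yet allocated to any domain.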

Did you try it and see the result? 🙂