Announcing Open Source Virtualization India meetup group

We are glad to announce the Open Source Virtualization India meetup group!! For a long time we have been answering and discussing virtualization-related queries over email or IRC, so this is an effort to bring virtualizers together under one group.

This is a forum to discuss various open source Linux virtualization technologies like Xen, KVM, and oVirt, along with their friends libvirt, virt-manager, etc.

If you would like to be part of it, please join www.meetup.com/open-virtualization-india/

We have scheduled 3 meetups (Bangalore, Pune, Aurangabad) for the near future, though the exact dates are yet to be decided.


Pune – Organizer: Anil Vettathu

Aurangabad – Organizer: Ashutosh Sudhakar Bhakare

Please let me know if you would like to volunteer for any of the meetups in your area.

** Welcome Virtualizers!! **

oVirt – Docker integration !!

Previously Oved had published a blog post about this topic, and today I noticed this presentation from Federico and was surprised to see the current status of the effort!! Things are moving really fast and looking promising!

[gview file=”http://www.ovirt.org/images/d/dd/2014-ovirt-docker-integration.pdf”]

virt-manager is not able to detect the ISO file?


Anyway, it is a common complaint that virt-manager is not able to scan or show the ISO files stored on a system… Even after making sure the permissions and other bits are correct, virt-manager fails to populate the ISO..

First of all, I would like to ask: where is that ISO stored?

If it is ** NOT ** in the path below, can you move the ISO file in question to this location and see if that helps?

/var/lib/libvirt/images
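For example, something like this (the ISO filename is hypothetical; the restorecon step simply restores the SELinux context, which is one of the usual "other bits"):

# mv ~/Downloads/Fedora-19.iso /var/lib/libvirt/images/
# restorecon -v /var/lib/libvirt/images/Fedora-19.iso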

Once you move the ISO file to the above location, the issue should normally go away.. Either way, please let me know..

We can poke at it more… Hope this helps..

/me @ KVM Forum 2013, Edinburgh, UK..

Oh yeah.. It was one of the most wonderful weeks of my life!!

This year I was lucky enough to get the green flag for KVM Forum, and I really enjoyed it. I would say it was an awesome time that allowed me to experience Scottish beauty together with friends – renowned virtualization developers in the open source community.

I met many people with whom I have worked in the past, or I should say, the crew with whom I am working at present.

Day 1 @ KVM forum:

The sessions ran mainly in 3 parallel tracks, so I couldn't get into all of them. Obviously I missed 2 or 3 sessions per day 🙂

I will write more about the sessions I attended in the different tracks; until then, I am placing these videos here as placeholders..

Keynote by Gleb

Gleb had the privilege of presenting the keynote on the opening day, and here it is:

Modern QEMU Device Emulation – Andreas Färber, SUSE

Andreas presented the device model, QOM, with a demo. Device emulation has undergone many changes lately, and it is a recurring topic how exactly new devices should be written (which example to copy and which not) or how to rebase out-of-tree device models against a changing upstream. Andreas took an in-tree device that had not even been qdev'ified yet, turned it into a modern QOM device, and showcased how the management infrastructure surrounding QOM allows one to inspect and manipulate that device once it is in the proper form. In short, it is easy to define a QEMU Object Model device and play with it.

Virgil3D – Virtio-Based 3D-Capable GPU – Dave Airlie, Red Hat

This session was about an attempt to bring the missing 3D rendering capability to guest OSes inside QEMU. The Virgil3D project aims to provide a virtual GPU device that can be used by guest OSes to provide OpenGL or Direct3D capabilities. The host side of the device uses OpenGL on the host to render the command stream from the guest. The command stream is based on the Mesa project's Gallium3D framework, using similar states and shader encoding. Dave gave a demo at the end of the session and was happy to show gaming inside the guest. 🙂

[sound is broken]

VGA Assignment Using VFIO – Alex Williamson, Red Hat

VGA and GPU device assignment is an often requested feature for adding new functionality to virtual machines, whether for gaming or traditional graphics workstation usage. The VFIO userspace driver framework has been extended to include VGA support, enabling QEMU assignment of graphics cards. In this presentation, Alex Williamson gave an overview of the architecture for doing VGA assignment, explained the differences between VGA assignment and traditional PCI device assignment, and provided a current status report for VGA/GPU device assignment in QEMU.

Unfortunately I don't see a video for this session.

Platform Device Passthrough Using VFIO – Stuart Yoder, Freescale Semiconductor

This session was all about platform device passthrough.. Stuart started the session by explaining the Linux driver model and the virtio-pci binding and unbinding process. VFIO provides a framework for securely allowing user space applications to directly access and use I/O devices, including QEMU, which allows passthrough of devices to virtual machines. QEMU and the upstream Linux kernel currently support VFIO for PCI devices. System-on-a-chip processors frequently have I/O devices that are not PCI-based and instead use the platform bus framework in Linux. An increasing number of QEMU/KVM users need to pass through platform devices to virtual machines using VFIO.

This presentation described:

– how VFIO-based passthrough of PCI devices is similar to and different from platform devices
– issues and challenges in solving platform device passthrough
– proposed kernel changes to enable this, such as sysfs_bind_only, a boolean addition to sysfs

[sound is broken]

Gerd on the x86 firmware maze:

He explained bits of SeaBIOS, TianoCore, coreboot, iPXE, UEFI, CSM, ACPI, and fw_cfg, then covered which firmwares exist in the QEMU world, the interaction among them, and the SeaBIOS initialization sequence.. Later he explained how hardware configuration and initialization work in QEMU and which interfaces are used to handle this.

[slides] docs.google.com/file/d/0BzyAwvVlQckecXpCSnBRekN2bDQ/edit

Nested Virtualization presentation by Orit..

Orit explained what nested virtualization is, the L0, L1, and L2 levels in nested virtualization, and the bits around them..

Nested EPT to Make Nested VMX Faster – Gleb Natapov
Gleb started by explaining 'shadow paging' and the reasons it is slow.. With the help of EPT, we can avoid the shadow page table altogether and rely on two-level paging in hardware, which improves performance a lot..

Memory virtualization overhead has a huge impact on virtual machine performance. To reduce the overhead, two-level paging was introduced into the virtualization extensions by all x86 vendors. On Intel it is called Extended Page Table, or EPT. Nested guests running on nested VMX cannot enjoy the benefits of two-level paging, though, and have to rely on the much less efficient shadow paging mechanism, which, combined with the other overheads that nested virtualization incurs, makes nested guests unbearably slow. Nested EPT is a mechanism that allows nested guests to benefit from the EPT extension and greatly reduces the overhead of memory virtualization for nested guests.

[slides ] docs.google.com/file/d/0BzyAwvVlQckedmpobUY1Sm0zNWc/edit

Running Windows 8 on top of Android with KVM – Zhi Wang, Intel

Running a Windows guest on Android!!!

Zhi Wang from Intel discussed how they were able to run a Windows 8 guest efficiently (including virtio drivers) on top of an x86 Android tablet with KVM. With KVM, they enabled hardware-based virtualization simply and efficiently on such small devices, taking advantage of the Linux-based system, including Android. At the same time, they found various challenges, especially with QEMU, mainly because of differences in 1) the user-level infrastructure (display, input, sound, etc.), such as libraries (Bionic, limbo-android, ..), the graphics system, and system calls, and 2) scheduling (e.g. foreground apps are suspended). He concluded his session with the next steps, such as sensor support and Connected Standby for Windows 8.

Slides: docs.google.com/file/d/0Bx_UwXmBKWsyWnhRaFgxNDlfZlU/edit
Demo: docs.google.com/file/d/0Bx_UwXmBKWsyVS01WVBSM3FSM3M/edit

[OSv -> Best Cloud Operating System]

Avi and Glauber presented what OSv is, why it exists, etc. Personally, I had looked into some of the previous presentations and docs about OSv in the recent past (noted here), so this session felt like a bit of a repetition 🙂 .. However, there was a good takeaway from this session by Glauber, i.e. “There are 2 types of C++ programming”.. If you really want to get it, please listen to the video. 🙂

Dinner at a restaurant on Haymarket Street with Omer Frenkel, Oved, Arik, etc..

Day 2 @ KVM forum

The day started with Anthony's QEMU Weather Report, which covered major features and fixes, releases, GSoC projects, and the growing community..

Effective Multi-Threading in QEMU- Paolo Bonzini, Red Hat

Paolo explained the QEMU architecture, past and present, the virtio-blk-dataplane architecture, unlocked memory dispatch, and unlocked MMIO.

Block Layer Status Report – Stefan Hajnoczi, Red Hat & Kevin Wolf, Red Hat

Kevin and Stefan presented the changes that recently came into the block layer and the features they are working on.. These included performance improvements (data deduplication, corruption prevention, COW, internal COW, journalling), followed by drive-specific configuration and data plane. They concluded by mentioning future thoughts like image fleecing, point-in-time snapshots, incremental backups, image syncing, etc. Also, a new command, 'qemu-img map', has been added..

An Introduction to OpenStack and its use of KVM Virtualization – Daniel Berrange, Red Hat

Daniel explained OpenStack and how tightly KVM is coupled with it.. He started by briefly covering the OpenStack components, followed by the main points of integration between KVM and OpenStack, obviously in the Nova compute engine. The talk also outlined the overall OpenStack architecture with a focus on Nova, the capabilities of KVM as used in Nova, how KVM integrates with the OpenStack storage and networking sub-projects, and what developments to expect in future releases of OpenStack.

New Developments and Advanced Features in the Libvirt Management API – Daniel Berrange, Red Hat
Daniel (danpb) spoke on new developments and advanced features in libvirt. In this session he started with libvirt's disk access permission implementation, i.e. 'sanlock' and virtlockd, then granular access control, and also covered bits of sVirt, SELinux, etc. Later he came to cgroups, and finally he concluded his talk with tuning of CPU, memory, and block I/O..


Empowering Data Center Virtualization Using KVM – Livnat Peer, Red Hat

In this session Livnat explained what the oVirt project is (the management of multi-host, multi-tenant virtual data centers, including high availability, VM and storage live migration, storage and network management, system scheduler, and more). It is an integration point for several open source virtualization technologies, including KVM, libvirt, SPICE, oVirt Node, and numerous OpenStack components such as Neutron and Glance. She also explained how it can be used to manage a data center.

One Year Later: And There are Still Things to Improve in Migration! – Juan Quintela

Juan has been busy finding latencies and bottlenecks in live migration and trying to improve them :). He explained the changes that have happened to improve migration on machines with huge amounts of memory/vCPUs. There have also been changes integrating migration over RDMA.

Amit presented an idea for a static checker to avoid live migration compatibility issues.. This is not an implemented solution, but rather a thought on reducing 'issues' w.r.t. live migration failures caused by compatibility features around QEMU..

Debugging Live Migration – Alexander Graf, SUSE
Alexander came up with an interesting session on debugging live migration dynamically.. I really loved this session, especially the slides he prepared, the content, and maybe the way he presented it.

Automatic memory ballooning – Luiz Capitulino

When a Linux host is running out of memory, the kernel will take action to reclaim memory. This action may be detrimental to KVM guests' performance (e.g. swapping). To help avoid this scenario, a KVM guest could automatically return memory to the host when the host is facing memory pressure. By doing so, the guest may itself get into memory pressure, so we also need a way to allow the guest to automatically get memory back. This is what the automatic ballooning project is about. In this talk Luiz dived into the project's implementation and challenges, and discussed current results.

The day ended with a party at the Cargo bar, where most of my time was spent with Hai Huang, Osier Yang, Vinod Chegu, Eduardo Habkost, Bandan Das, Sean Cohen..

From the party, Hai Huang led us (Amos Kong, Fam, Mike Cao, Asias He, Osier Yang) back to the hotel.

Day 3 @ KVM forum

The day started with the OVA update:

However, this day was almost dedicated to oVirt sessions.. Itamar started with the presentation below, where he discussed the oVirt project status.. The oVirt project had just released version 3.3 with many new features and integrations with other open source projects like Foreman, Gluster, OpenStack Glance, and Neutron. Itamar covered the current state of the project, as well as the roadmap and plans going forward.

Doron taught us what the chicken-and-egg problem is.. ah, I forgot to say: chicken & egg in the oVirt context 🙂 .. This talk was about the 'self-hosted engine' feature of oVirt..

oVirt for PowerPC – Leonardo Bianconi, Instituto de Pesquisas Eldorado

So, oVirt is planning to extend to the PowerPC architecture.. In this talk, Leonardo discussed work that adds PowerPC architecture awareness to oVirt, which currently makes various assumptions based on the x86 architecture. Many projects are involved in this task, such as libvirt, QEMU, and KVM.

Rik talked about methods to reduce context switch overhead:

The Future Integration Points for oVirt Storage and Storage Consumption – Sean Cohen; Ayal Baron

This was almost an interactive session, led by Sean and Ayal. They discussed what's new in oVirt 3.3 and what's planned for 3.4 and beyond.

slides: www.slideshare.net/SeanCohen/kvm-forum-2013-future-integration-points-for-ovirt-storage

Using oVirt and Spice for VDI – Frantisek Kobzik

This was a nice presentation about SPICE in oVirt and also about support for other display protocols in oVirt..

The LinuxCon + KVM Forum crowd landed at the National Museum of Scotland this evening, and it was a remarkable time!! I don't remember everyone I talked to that day; however, I was fully engaged in discussions. I spent some time at the casino there; oh, it was not only me, almost half of the crowd was there.. Afterwards, we (the Beijing team + /me) walked back to the hotel amid lots of discussions; even though I don't remember them, Osier or someone else may.

oVirt: How to shut down/stop or start virtual machines (VMs) in an oVirt DC automatically using Python/ovirt-sdk? [Part 1]

Recently I got a request to provide a Python program to shut down all the VMs in an oVirt DC using the Python SDK, and also to start the VMs in the DC. The program below is submitted as a quick solution.. I am sharing it here hoping it will help you.

To shut down all the VMs in an oVirt data center, you can try the program below, called 'shutdown_all_up_vms.py'. You need to configure a few parameters in the program depending on your setup:

The programs can be downloaded from here.

Here is the gist of the program:
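A minimal sketch of the idea, assuming the classic oVirt Python SDK (ovirtsdk, 3.x era). The engine URL, credentials, and data center name are placeholders you must adapt; the exact script is in the GitHub repo linked below:

#!/usr/bin/python
# shutdown_all_up_vms.py : minimal sketch using the classic ovirtsdk API.
# ENGINE_URL, USERNAME, PASSWORD and DC_NAME are placeholders for your setup.
from ovirtsdk.api import API

ENGINE_URL = 'https://engine.example.com/api'
USERNAME = 'admin@internal'
PASSWORD = 'secret'
DC_NAME = 'Default'

api = API(url=ENGINE_URL, username=USERNAME, password=PASSWORD, insecure=True)
try:
    # Walk the VMs of the given data center and shut down the running ones
    for vm in api.vms.list(query='datacenter=%s' % DC_NAME):
        if vm.status.state == 'up':
            print 'Shutting down %s ...' % vm.name
            vm.shutdown()
finally:
    api.disconnect()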

Another program (almost a replica) is available to 'start' the VMs (start_all_down_vms.py) as well.. You can download it from github.com/humblec/python-sdk-ovirt

If you execute either of the above programs, the console output may look like this:

As you can see above, the program logs in more detail to '/tmp/start_vms_dc.log'.

oVirt: Convert physical/virtual systems to virtual using virt-p2v && virt-v2v, then use them in an oVirt DC

Here is the detailed process for virt-v2v and virt-p2v, which can be used in an oVirt setup to migrate your physical and virtual systems to an oVirt data center!!!

The write-up below covers what virt-v2v and virt-p2v are, how they work, and how they can be used for the conversion. It also includes how you can debug issues in this area!!!

I am sure there are many sysadmins who have large collections of virtual machines running on proprietary hypervisors, or physical machines in their data centers, and are wondering how they are going to virtualize them on the truly open source virtualization platform oVirt (ovirt.org).

Well, the answer is the virt-v2v utility. This tool is an absolute killer: it allows you to convert a physical server/desktop to a virtual server/desktop, and it also allows you to convert a virtual server/desktop of another format, for example VMware, KVM, or maybe Xen, into a flavor of KVM/oVirt.

At some point virt-v2v will also be usable for v2c, which means virtual-to-cloud; using it, you would be able to move virtual or physical machines into the cloud.

Before learning this tool, I always had a question: do we really need a special utility for migrating a virtual machine from one platform to another, or for physical-to-virtual server migration? Can't we just copy the bits residing on a physical disk to a virtual disk (e.g. dd + nc)? I even tried that, with partial success, and realized that such a tool really is required, because v2v or p2v is not just copying bits from one disk to another: along with copying the bits, it is very important to inject paravirtualized drivers and modify some other bits to support those drivers, which is hard to do manually. It can be achieved properly only when a special conversion utility like virt-v2v is used.

virt-v2v can currently convert RHEL 4, RHEL 5, RHEL 6, Windows XP, Windows Vista, Windows 7, Windows Server 2003, and Windows Server 2008 virtual machines running on Xen, KVM, and VMware ESX, as well as physical systems.

The following source hypervisors are currently supported by virt-v2v:

Xen
KVM
VMware ESX

Future versions may also support:

Hyper-V
Citrix XenServer

picture1


How to get the virt-v2v utility?

The package name is virt-v2v. It is available in the Fedora base channel; there is no need to subscribe to any special channel to get the tool.
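Installing it is a single command:

# yum install virt-v2v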

The system on which the virt-v2v package is installed is referred to as the virt-v2v conversion server. It can be a virtual or a physical system.

This package installs a list of files on the system; the three below are the important ones among them.

/etc/virt-v2v.conf : The main configuration file of virt-v2v. You can define specific profiles for a conversion method, storage location, output format, and allocation policy.

/usr/bin/virt-p2v-server : The p2v server that receives data from the p2v client.

/usr/bin/virt-v2v : The script which actually performs a conversion.

Now that we have the conversion server ready, let's see how to migrate virtual machines to oVirt from foreign hypervisors.

Virtual to Virtual Migration :

Before converting virtual machines to run on oVirt, you must attach an export storage domain to the oVirt data center being used.

All converted virtual machine images are saved on the 'export storage domain' first, and later you can import them into the desired data storage domain.

picture2

Diagram explaining typical virtual to virtual migration procedure

The following are the prerequisites for virtual-to-virtual migration using the virt-v2v utility.

A) Operating system specific

Linux :

– It is recommended to register the intended virtual machine with RHN or to have access to a local yum repo, because as part of the conversion process virt-v2v may install a new kernel and drivers in the system.

Windows :

– Install the libguestfs-winsupport package on the host running virt-v2v. This package provides support for NTFS, which is used by many Windows systems.

– Install the virtio-win package on the host running virt-v2v. This package provides para-virtualized block and network drivers for Windows guests.

Run the following on the virt-v2v conversion server:

# yum install libguestfs-winsupport virtio-win

B) Foreign hypervisor specific

KVM : Ensure that SSH is enabled and listening on the default port.

VMware :

1) Remove the VMware Tools installed on the system, and make sure SSH access is enabled.
2) Authentication : Connecting to the ESX / ESX(i) server requires authentication. virt-v2v supports password authentication when connecting to ESX / ESX(i); it reads passwords from $HOME/.netrc. An example entry is:

machine esx.example.com login root password s3cr3t

Xen : The system must have access to an RPM package repository, as a new kernel and drivers need to be downloaded, and SSH should be enabled.

Actual procedure : the virt-v2v commands to migrate virtual machines.

VMware to oVirt migration :
1) Create an NFS export domain and attach it to the oVirt data center. Make sure the host acting as the virt-v2v conversion server has access to it.

2) Shut down the virtual machine and uninstall VMware Tools from the guest operating system.

3) Make sure SSH is enabled on the ESX/ESXi host and the authentication details are entered in $HOME/.netrc.

4) Now convert the virtual machine using a command along the lines of the one below.
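A sketch of a typical invocation; the ESX hostname, export path, and guest name are placeholders, and the option set is the virt-v2v 0.8.x one as I remember it:

virt-v2v -ic esx://esx.example.com/?no_verify=1 -o rhev -os export.example.com:/exports --network rhevm vmware_guest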

Here, -ic is the libvirt connection URI of the source hypervisor, -o rhev produces output suitable for oVirt/RHEV, -os points to the export storage domain, and --network maps the guest to the target network.

Optionally, you can create a virt-v2v profile; this is very useful when the migration of a large number of virtual machines is planned.

5) Import the virtual machine from the export storage domain into oVirt.

KVM to oVirt migration :

1) Create an NFS export domain and attach it to the oVirt data center. Make sure the host acting as the virt-v2v conversion server has access to it.

2) Shut down the virtual machine. Make sure SSH is enabled on the host.

3) Convert the virtual machine using a command like the one below:
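Again a sketch with placeholder names, following the same option set as the VMware example:

virt-v2v -ic qemu+ssh://root@kvmhost.example.com/system -o rhev -os export.example.com:/exports --network rhevm kvm_guest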

Here qemu+ssh://root@kvmhost.example.com/system is the KVM host from which the virtual machine needs to be migrated.

4) Import the virtual machine from the export storage domain into oVirt.

Xen to oVirt migration :

1) Don't confuse this with Citrix XenServer or XenDesktop; here Xen refers to kernel-xen.

2) Make sure that the virtual machine which needs to be migrated to oVirt has access to a yum repository, to download a non-Xen kernel and some other bits.

3) Make sure an SSH connection to the Xen Dom0 is possible.

4) Convert the virtual machine using a command like the one below:
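One more sketch with placeholder names, assuming the xen+ssh libvirt URI scheme:

virt-v2v -ic xen+ssh://root@xenhost.example.com -o rhev -os export.example.com:/exports --network rhevm xen_guest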

5) Import the virtual machine from the export storage domain into oVirt.

What to do when things go wrong? How to troubleshoot virt-v2v related issues:

1) Ensure that the required v2v packages are installed on the virt-v2v conversion server, e.g. libguestfs-winsupport and virtio-win when a Windows guest migration is planned.

2) Ensure that SSH is enabled on the source host machine.

3) Make sure that the export storage domain has enough space to accommodate the new virtual machines.

4) Verify that the virt-v2v command syntax being used is correct. The virt-v2v man page has a detailed explanation of each parameter, with examples included.

Everything looks fine but the migration is still failing?

In such a situation, enabling virt-v2v debug logs will be helpful. They can be enabled by prefixing the virt-v2v command with the following environment variables.
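These are the standard libguestfs debug knobs; capturing the output to a file is my own habit here, and the log path is just an example:

LIBGUESTFS_TRACE=1 LIBGUESTFS_DEBUG=1 virt-v2v -ic ... 2>&1 | tee /tmp/virt-v2v-debug.log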

Physical to Virtual Migration

Virt-p2v is comprised of virt-p2v-server, included in the virt-v2v package, and the P2V client. A Fedora virt-p2v image is unfortunately not available for download on any official Fedora site; you will need to build your own. To learn how to build a virt-p2v ISO, refer to this post: website-humblec.rhcloud.com/how-to-build-a-virt-p2v-iso-in-fedora-f18f17/

I have built one and uploaded it here:
docs.google.com/file/d/0B9WWLq_PzVsZWDlETzlPeDJIX3M/edit
It is version 0.9.0. Feel free to download and use it.

p2v.iso is a bootable disk image based on a customized Fedora image. Booting a physical machine from p2v.iso and connecting to a v2v conversion server that has virt-v2v installed allows data from the physical machine to be uploaded to the conversion server and converted for use with oVirt virtualization.

Is it possible to convert any physical computer to virtual using virt-v2v?

That depends on the physical system's configuration. I have tested converting many physical computers and it worked absolutely fine. However, some restrictions apply to P2V. To perform a P2V conversion, your source computer:

1. Must have at least 512 MB of RAM.
2. Cannot have any volumes larger than 2040 GB.
3. Virt-v2v supports P2V conversion for computers based on the x86 or x86_64 architecture. You won't be able to convert computers with Itanium architecture-based operating systems.
4. Computers with their root filesystem on a software RAID md device cannot be converted to virtual machines.
5. The operating system installed on the computer must be supported to run as a guest on oVirt.
6. Must have at least one ethernet connection.

If your physical computer meets the above basic hardware requirements, it will successfully boot the P2V client:

picture3

Diagram explaining typical physical to virtual migration.

Actual Procedure :

Before you use P2V, you must first prepare your conversion server and download and prepare the p2v.iso boot media.

Boot the intended physical system with the virt-p2v ISO > connect to the virt-v2v conversion server > select the virt-v2v profile > start the conversion 🙂

How to prepare the virt-v2v conversion server?

1) Install virt-v2v and related packages on any physical system or virtual machine. You may also use the oVirt engine node as the virt-v2v conversion server.

2) Enable root login over SSH:
Change the 'PermitRootLogin' directive to yes in /etc/ssh/sshd_config and restart the sshd service, for example as shown below.
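A quick sketch of that edit from the shell (check the result before restarting; the sed expression assumes the directive is already present, possibly commented out):

# sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
# service sshd restart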

3) Define a target profile in virt-v2v.conf : in virtual-to-virtual migration, defining a virt-v2v profile is optional, but for physical-to-virtual it is mandatory to define a target profile in virt-v2v.conf.

As root, edit /etc/virt-v2v.conf on the virt-v2v conversion server.

Scroll to the end of the file. Before the final closing tag, add a profile like the following:
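This is a sketch of the documented profile format; the profile name and NFS path are placeholders:

<profile name="myrhev">
  <method>rhev</method>
  <storage format="raw" allocation="preallocated">
    export.example.com:/exports
  </storage>
  <network type="default">
    <network type="network" name="rhevm"/>
  </network>
</profile>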

Where:
– Profile name is an arbitrary, descriptive target profile name.
– Method is the destination hypervisor type (rhev for oVirt).
– Storage format is the output storage format, either raw or qcow2.
– Allocation is the output allocation policy, either preallocated or sparse.
– Network type specifies the network to which a network interface should be connected.

4) Once your virt-v2v conversion server is ready, boot the intended physical machine from the virt-p2v ISO. Because the P2V client is built upon a Red Hat Enterprise Linux 6 live image, the Red Hat Enterprise Linux 6 splash image is displayed while the tool is booting.

Configure networking. Generally the P2V client configures networking automatically using DHCP. If it is unable to configure networking automatically, you will need to configure it manually. IP Address, Gateway, and Prefix are required fields. Enter values that are appropriate for your network, and click Use these network settings.

picture4

5) Once the network is configured, virt-p2v will present a login screen asking for the virt-v2v conversion server hostname and the root password.

picture5

6) After connecting to your conversion server, configure the virtual hardware that will be attached to the converted physical machine, and select which physical hardware should be converted. At least one Fixed Storage device must be converted, including the device containing the operating system installation.

picture6

7) Select a destination profile from the drop-down menu. These reflect the target profiles included in the /etc/virt-v2v.conf file on the conversion server.

8) Enter a name for the virtual machine that will result from the conversion.
The number of CPUs and memory (MB) are automatically detected and filled in. Change them if more CPUs and/or memory are desired on the resulting virtual machine.

9) When Target Properties, Fixed Storage, Removable Media, and Network Interfaces have all been configured as desired, click Convert.

That's all. The conversion may take some time, and once it has completed you will be able to see the virtual machine's name in the export storage domain. Import it!

Troubleshooting physical-to-virtual conversion failures.

Debug logs can be collected by following the steps below:

1. In the virt-p2v client, press Ctrl+Alt+F2 to get to a console and run dmesg.
2. Modify /usr/bin/virt-p2v-server to set the debug level: in its 'Initialize logging' section, uncomment the 2 lines that capture debug information from the conversion process.

The logs are then captured on the conversion server.