KVM live migration process [Part 1]

If you are using KVM to run virtual machines in your environment, you can take advantage of one of its best features: live migration of virtual machines. So first, let's see what KVM live migration actually means.

KVM live migration is the process of moving a virtual machine from one host to another.

Let's see in which situations one would migrate a virtual machine from one host to another:

– When we need to upgrade, add, or remove hardware on a host, we can live migrate its virtual machines to other hosts. This means there is no downtime for any virtual machine during the host upgrade activity.
– Energy saving. For example, consider two hosts (Host-A and Host-B). Host-A runs only one virtual machine, while Host-B already runs six, but Host-B still has enough resources (CPU, RAM, etc.) to accommodate a few more. In such a scenario, we can migrate the virtual machine from Host-A to Host-B and then power off Host-A.
– Geographic migration. We can migrate a VM to a host located in a different geographic location, for example in case of a natural disaster.

Prerequisites for Live Migration:

– The storage used for the virtual machine disks should be shared between both hosts. Supported protocols:

Fibre Channel-based LUNs
iSCSI
FCoE
NFS
GFS2
SCSI RDMA protocols (SCSI RCP): the block export protocol used in Infiniband and 10GbE iWARP adapters



– If a firewall is enabled, the required ports should be opened on both the source and destination hypervisors: 22 (SSH), 80/443, and 49152-49215 (libvirt's default migration port range).
– The FQDN of the destination host should be resolvable from the source host.
– The Fedora version should be the same on both the source and destination hypervisors.
– The libvirtd service should be running on both hosts. A few quick checks for these prerequisites are shown below.
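
A minimal sketch of verifying these prerequisites, assuming a systemd-based Fedora host with firewalld (adjust the commands for your environment):

[terminal]
# systemctl status libvirtd                       <=== should be active on both hosts
# getent hosts abc.xyz.com                        <=== destination FQDN should resolve from the source
# firewall-cmd --permanent --add-port=49152-49215/tcp
# firewall-cmd --reload                           <=== open the libvirt migration port range
[/terminal]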

=======
VM migration can be live (when the VM is in the running state) or offline (when the VM is in the paused state).

Live migration: you migrate the VM from one host to another while it is in the running state; the end user of the VM never notices that it is being migrated. In the background, KVM on the source host starts sending the virtual machine's memory pages to the destination host, while monitoring for changes in pages that have already been sent. If it finds any, it retransmits those pages to the destination host.

Offline migration: the VM is paused/suspended on the source host, the guest's memory is transferred to the destination, and finally the VM is resumed on the destination host.

If the virtual machine modifies memory pages faster than KVM can transfer them to the destination host, the live migration may never finish. In such a scenario, to speed up the migration you can suspend the guest on the source host and perform an offline migration, as sketched below.
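
A minimal sketch of such an offline migration, using the same guest name (test_vm) and destination URI that appear later in this post:

[terminal]
# virsh suspend test_vm
# virsh migrate test_vm qemu+ssh://abc.xyz.com/system              <=== guest is transferred in the paused state
# virsh --connect qemu+ssh://abc.xyz.com/system resume test_vm     <=== resume it on the destination
[/terminal]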

The migration speed of a virtual machine depends heavily on the available network bandwidth in your environment.
– You can check a guest's current migration bandwidth limit with:

[terminal]

# virsh migrate-getspeed test_vm
[/terminal]

– To change the migration speed as per your requirement:

[terminal]
# virsh migrate-setspeed test_vm 100
# virsh migrate --live test_vm qemu+ssh://abc.xyz.com/system
[/terminal]


With the above command we set a bandwidth limit of 100 MiB/s for the guest (test_vm) migration (note that virsh migrate-setspeed takes its value in MiB/s, not Mbps).

[terminal]
2013-10-08 13:31:59.839+0000: 29686: debug : virDomainMigrateSetMaxSpeed:17286 : dom=0x7fee640c7ad0, (VM: name=test_vm, uuid=d8eafa30-ca85-cf64-8648-4685a2adfa14), bandwidth=100, flags=0 <=== From source host (libvirt logs)
[/terminal]

In addition to "--live", virsh migrate has many additional options. For more information, please refer to the man page of virsh:

[terminal]
# man virsh
[/terminal]

In the following example, I have exported an iSCSI LUN from a NetApp filer, which I am going to use as the shared storage for the virtual machine disks:

– Log in to the iSCSI target:

[terminal]
# iscsiadm --mode node --targetname iqn.1992-08.com.netapp:sn.135107447 --portal netapp.example.com:3260 --login

Logging in to [iface: libvirt-iface-05ce9ce1, target: iqn.1992-08.com.netapp:sn.135107447, portal: xx.xx.xx.xx,3260] (multiple)
Starting iscsid: [ OK ]
Login to [iface: libvirt-iface-05ce9ce1, target: iqn.1992-08.com.netapp:sn.135107447, portal: xx.xx.xx.xx,3260] successful.

# fdisk -l /dev/sde

Disk /dev/sde: 31.1 GB, 31142707200 bytes
64 heads, 32 sectors/track, 29700 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
[/terminal]
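
Optionally (an extra step, not part of the original walkthrough), you can mark the iSCSI node for automatic login so the session is restored after a reboot:

[terminal]
# iscsiadm --mode node --targetname iqn.1992-08.com.netapp:sn.135107447 --portal netapp.example.com:3260 --op update --name node.startup --value automatic
[/terminal]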

– Create an LVM-based storage pool with VG test_kvm and PV /dev/sde:


[terminal]
# virsh pool-define-as lvm_storage_pool logical - - /dev/sde test_kvm /dev/test_kvm

Pool lvm_storage_pool defined
[/terminal]

– Build storage pool
[terminal]

# virsh pool-build lvm_storage_pool

Pool lvm_storage_pool built
[/terminal]

– Activate pool
[terminal]

# virsh pool-start lvm_storage_pool

Pool lvm_storage_pool started
[/terminal]

– Check available storage pools:

[terminal]
virsh # pool-list
Name                 State      Autostart
-----------------------------------------
lvm_storage_pool     active     no

virsh # pool-dumpxml lvm_storage_pool
<pool type='logical'>
  <name>lvm_storage_pool</name>
  <uuid>98dafeb0-144a-e8a8-a82e-a5bab2a283f7</uuid>
  <capacity>31138512896</capacity>
  <allocation>7470055424</allocation>
  <available>23668457472</available>
  <source>
    <device path='/dev/sde'/>
    <name>test_kvm</name>
    <format type='lvm2'/>
  </source>
  <target>
    <path>/dev/test_kvm</path>
    <permissions>
      <mode>0755</mode>
      <owner>-1</owner>
      <group>-1</group>
    </permissions>
  </target>
</pool>

[/terminal]
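
Optionally, you can also mark the pool to be started automatically whenever libvirtd starts (the pool-list output above shows Autostart as "no" because this step was not done here):

[terminal]
# virsh pool-autostart lvm_storage_pool

Pool lvm_storage_pool marked as autostarted
[/terminal]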

– Create a new VM:

[terminal]
# virt-install --name test_vm --ram 1024 --disk pool=lvm_storage_pool,size=5 --boot network,hd,menu=on --graphics none --network bridge=br0 --vcpus 1 --os-variant=rhel6
[/terminal]
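
Once the installation kicks off, a quick way to confirm the guest is up (the Id value shown here is just illustrative):

[terminal]
# virsh list
 Id    Name                           State
----------------------------------------------------
 1     test_vm                        running
[/terminal]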

– On the second host, log in to the same iSCSI target and create the same storage pool, as sketched below.
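
A sketch of the commands to repeat on the destination host (the device name /dev/sde may differ there; do not run pool-build again, since the volume group already exists on the shared LUN):

[terminal]
# iscsiadm --mode node --targetname iqn.1992-08.com.netapp:sn.135107447 --portal netapp.example.com:3260 --login
# virsh pool-define-as lvm_storage_pool logical - - /dev/sde test_kvm /dev/test_kvm
# virsh pool-start lvm_storage_pool
[/terminal]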

– Set the migration speed to 1000 MiB/s.

[terminal]
# virsh migrate-setspeed test_vm 1000
# virsh migrate-getspeed test_vm

1000
[/terminal]

– Now, let's try live migrating the VM:

[terminal]
# virsh migrate --live --verbose test_vm qemu+ssh://abc.xyz.com/system
root@abc.xyz.com's password:
Migration: [100 %]
[/terminal]
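
To confirm the guest is now running on the destination host (an extra check on top of the original steps; the Id value is illustrative):

[terminal]
# virsh --connect qemu+ssh://abc.xyz.com/system list
 Id    Name                           State
----------------------------------------------------
 8     test_vm                        running
[/terminal]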

– From the libvirtd logs on the source host:

[terminal]
2013-10-14 10:05:00.975+0000: 2252: debug : virDomainMigratePerform3:6247 : dom=0x7f43e800ed30, (VM: name=test_vm, uuid=a7af86ca-38f8-ba79-5ca6-a692090472ce), xmlin=(null) cookiein=0x7f43e80232f0, cookieinlen=292, cookieout=0x7f4401fb0b30, cookieoutlen=0x7f4401fb0b3c, dconnuri=(null), uri=tcp:abc.xyz.com:49155, flags=121, dname=(null), bandwidth=0
2013-10-14 10:05:00.975+0000: 2252: debug : qemuMigrationPerform:2810 : driver=0x7f43f4012b40, conn=0x7f43dc00d550, vm=0x7f43e4016dc0, xmlin=(null), dconnuri=(null), uri=tcp:dhcp223-129.pnq.redhat.com:49155, cookiein=
<qemu-migration>
  <name>test_vm</name>
  <uuid>a7af86ca-38f8-ba79-5ca6-a692090472ce</uuid>
  <hostname>abc.xyz.com</hostname>
  <hostuuid>815be882-fe51-cb11-977b-fa3797b72df2</hostuuid>

[…]

2013-10-14 10:04:51.902+0000: 2250: debug : virDomainMigrateSetMaxSpeed:16545 : dom=0x7f43dc00b310, (VM: name=test_vm, uuid=a7af86ca-38f8-ba79-5ca6-a692090472ce), bandwidth=1000, flags=0
[/terminal]

Comments on “KVM live migration process [Part 1]”

  1. The post is quite difficult to understand for a newbie. How can I connect one host with the other, and how do I implement shared storage (NFS)?

    • Makrand, somehow I missed this comment.
      >> How can I connect one host with the other, and how do I implement shared storage (NFS)? <<

      In reality, you don't need to connect the two hosts. The only requirement is that the shared storage has to be viewable (accessible) by both hosts.

      Regarding “shared storage (NFS)”: it is as easy as creating an NFS share on any of your NFS servers and making it visible to both hosts. Basically, the VM image will be stored on the NFS shared storage, so that both hosts can see it. A rough sketch is below.
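
      As a rough illustration (nfs.example.com and the paths are made-up names, and the export itself must already be configured on the NFS server), mounting such a share as a libvirt storage pool on both hosts could look like:

      [terminal]
      # virsh pool-define-as nfs_pool netfs nfs.example.com /exports/vm_images - - /var/lib/libvirt/images/nfs_pool
      # virsh pool-build nfs_pool        <=== creates the local mount point
      # virsh pool-start nfs_pool        <=== mounts the NFS share
      [/terminal]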

      Please let me know if you need any further clarification.
