
Sunday, April 27, 2014

Overlay technologies in the data center

Everyone speaks about SDN and the benefits it brings when deploying cloud or enterprise infrastructures. But do we actually have any understanding of what all this SDN is about? If you want to be fluent in the language of virtual networking and network overlays in modern data centers, you need to understand at least the concepts below.
In the remainder of this post we will concentrate solely on the existing overlay technologies. This information was extracted from the Cisco document Cisco Nexus 9000 Series Switches - Data Center Overlay Technologies.

Network-Based Overlay Networks
  1. IEEE 802.1ad Provider Bridging or IEEE 802.1q Tunneling, also known as IEEE 802.1QinQ or simply Q-in-Q (see the iproute2 sketch after this list)
  2. IEEE 802.1ah Provider Backbone Bridges (PBB), or MAC-in-MAC tunnels
  3. Cisco FabricPath, which allows multipath networking at Layer 2
  4. TRILL - the IETF Transparent Interconnection of Lots of Links, a Layer 2 multipathing technology
  5. Shortest-Path Bridging (SPB), defined in IEEE 802.1aq and targeted as a replacement for Spanning Tree Protocol (example info based on Avaya documentation)
  6. Cisco Overlay Transport Virtualization (OTV), a Layer 2-over-Layer 3 "MAC-in-IP" encapsulation technology
  7. The Cisco Locator/Identifier Separation Protocol (LISP), currently defined as a Layer 3 overlay scheme over a Layer 3 network
  8. Multiprotocol Label Switching (MPLS)
  9. Virtual Private LAN Service (VPLS), a Layer 2 tunneling protocol
  10. Virtual Private Routed Network (VPRN), also known as BGP/MPLS or IP-VPN, which provides IP VPN services
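
As an aside, some of these network-based encapsulations can be tried out on a plain Linux box. Below is a minimal Q-in-Q (IEEE 802.1ad) sketch using iproute2; the interface names and VLAN IDs are arbitrary examples, and a reasonably recent kernel and iproute2 are assumed.

    # Create the service (outer, S-tag) VLAN 100 on eth0 using the 802.1ad
    # ethertype, then stack the customer (inner, C-tag) VLAN 200 on top of it.
    ip link add link eth0 name eth0.100 type vlan proto 802.1ad id 100
    ip link add link eth0.100 name eth0.100.200 type vlan proto 802.1Q id 200
    ip link set dev eth0.100 up
    ip link set dev eth0.100.200 up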
Host-Based Overlay Networks
    1. Virtual Extensible LAN (VXLAN), a Layer 2 overlay scheme over a Layer 3 network that uses IP/UDP encapsulation (see the sketch after this list)
    2. Network Virtualization Using Generic Routing Encapsulation (NVGRE), which allows the creation of virtual Layer 2 topologies on top of a physical Layer 3 network
    3. Stateless Transport Tunneling (STT), an overlay encapsulation scheme over Layer 3 networks that uses a TCP-like header
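
As a concrete illustration, a VXLAN segment can also be created with iproute2 on a modern Linux host. This is a minimal sketch: the VNI, device name, multicast group and addresses are arbitrary examples.

    # Create VXLAN VNI 42 over the underlay device eth0, using the
    # IANA-assigned UDP port 4789 and a multicast group for
    # flood-and-learn MAC discovery between VTEPs.
    ip link add vxlan42 type vxlan id 42 dev eth0 dstport 4789 group 239.1.1.1
    ip addr add 10.0.42.1/24 dev vxlan42
    ip link set dev vxlan42 up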

    Monday, January 6, 2014

    Using Qemu on a cloud server to run emulated virtual machines

    We know that there is no support for nested hypervisors on cloud instances: Nested virtualization support on Rackspace public cloud.

    Problem

    How can we use Qemu on a cloud server and start a virtual machine despite the nested virtualization limitation?

    Demonstration and results description

    To overcome this limitation we will use Qemu in its emulated mode. Qemu in this mode doesn't require any special virtualization support in the CPU (HVM, hardware-assisted virtualization).

    The VM image was downloaded from here: http://people.debian.org/~aurel32/qemu/i386/. The default username and password is root.
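
    A minimal invocation could look like the sketch below; the image file name is an assumption based on the naming convention on that download page, and the memory size is arbitrary.

    # Boot the downloaded i386 image in pure emulation mode (TCG); no KVM
    # or other hardware virtualization support is needed on the cloud server.
    # Expose the console over VNC since the server is headless.
    qemu-system-i386 -m 512 -hda debian_squeeze_i386_standard.qcow2 -vnc :1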


    Alternatively we could use VirtualBox, although I'm not yet sure which of the two would work better and give more options for customizing the VMs.

    References

    https://wiki.debian.org/QEMU
    http://www.linux-kvm.org/page/FAQ - included mostly to show what is missing, as the cloud servers don't support KVM
    http://en.wikipedia.org/wiki/QEMU
    http://www.linuxforu.com/2012/05/virtualisation-faceoff-qemu-virtualbox-vmware-player-parallels-workstation/



    Sunday, January 5, 2014

    Nested virtualization support on Rackspace public cloud

    We found the CPU hardware architecture that the public cloud is running on in this blog post: Hypervisor hardware differences on Openstack Rackspace Cloud.

    Problem

    Does the Rackspace public cloud support nested virtualization?

    Results discussion
    • Public Cloud
    Of course, for the cloud to exist, the physical server where the hypervisor runs (Xen or KVM, for example) needs to have in-hardware virtualization support (Intel VT-x or AMD-V). This is the only way to provide high-performance cloud servers.

    But once the cloud server boots up, the cloud virtual CPU no longer exposes the hardware CPU virtualization capabilities. You can verify this with the little script below.

    # look for hardware virtualization CPU flags; no output means none are exposed
    egrep -i 'vmx|svm|ept|vpid|npt|tpr_shadow|flexpriority|vnmi' /proc/cpuinfo
    

    That means you can't use your cloud server to run another, guest hypervisor (also called a nested hypervisor).
    • Private cloud 
    In a private cloud nested virtualization can be enabled. As an example, the IBM developerWorks article in the references describes some of the steps for Linux KVM; a short sketch follows.
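
    On a Linux KVM host the check and the change typically look like this (a sketch for Intel CPUs; on AMD use the kvm_amd module instead):

    # Check whether the kvm_intel module currently allows nested guests
    cat /sys/module/kvm_intel/parameters/nested
    # Reload the module with nesting enabled
    modprobe -r kvm_intel
    modprobe kvm_intel nested=1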
    Another solution

    If your cloud server doesn't offer nested virtualization support you can always use the emulation mode. Qemu supports running VMs that way.


    References

    http://www.ibm.com/developerworks/cloud/library/cl-nestedvirtualization/
    https://www.diigo.com/user/rtomaszewski/nested_virtualization?type=all&snapshot=no&sort=updated
    http://en.wikipedia.org/wiki/X86_virtualization

    Hypervisor hardware differences on Openstack Rackspace Cloud

    You can spin up test cloud servers and extract the CPU flags with the help of the little script below, which uses csplit.
     
    # split /proc/cpuinfo into one file per processor section (xx00, xx01, ...)
    cat /proc/cpuinfo | csplit -z  - '/processor/' '{*}'
    # compare the per-processor sections (assumes exactly two: xx00 and xx01)
    diff xx0*
    # extract the CPU flags of the second processor, one per line, sorted
    grep flags xx01 | cut -d ':' -f 2 | xargs -n1 echo | sort > flags.txt
    

    By comparing the results we can definitely say that:
    • Performance1 and performance2 cloud servers are running on the same hardware.
    • The new performance cloud servers are hosted on Intel CPUs.
    • The standard (next generation) series is hosted on AMD CPUs.
    The cpuinfo output for comparison:
     
    processor       : 1
    vendor_id       : AuthenticAMD
    cpu family      : 16
    model           : 4
    model name      : Quad-Core AMD Opteron(tm) Processor 2374 HE
    stepping        : 2
    microcode       : 0x1000086
    cpu MHz         : 2200.096
    cache size      : 512 KB
    fpu             : yes
    fpu_exception   : yes
    cpuid level     : 5
    wp              : yes
    flags           : fpu de tsc msr pae cx8 cmov pat clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt lm 3dnowext 3dnow rep_good nopl pni cx16 popcnt hypervisor lahf_lm cmp_legacy extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch hw_pstate
    bogomips        : 4400.19
    TLB size        : 1024 4K pages
    clflush size    : 64
    cache_alignment : 64
    address sizes   : 48 bits physical, 48 bits virtual
    power management: ts ttp tm stc 100mhzsteps hwpstate
    

    processor       : 1
    vendor_id       : GenuineIntel
    cpu family      : 6
    model           : 45
    model name      : Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
    stepping        : 7
    microcode       : 0x70d
    cpu MHz         : 2600.068
    cache size      : 20480 KB
    physical id     : 0
    siblings        : 2
    core id         : 0
    cpu cores       : 1
    apicid          : 0
    initial apicid  : 43
    fpu             : yes
    fpu_exception   : yes
    cpuid level     : 13
    wp              : yes
    flags           : fpu de tsc msr pae cx8 sep cmov pat clflush mmx fxsr sse sse2 ss ht syscall nx lm constant_tsc rep_good nopl pni pclmulqdq ssse3 cx16 sse4_1 sse4_2 popcnt tsc_deadline_timer aes hypervisor lahf_lm arat pln pts dtherm
    bogomips        : 5200.13
    clflush size    : 64
    cache_alignment : 64
    address sizes   : 46 bits physical, 48 bits virtual
    power management:
    

    Monday, August 5, 2013

    Problem booting virtual F5 LTM on VirtualBox

    There is no better way to learn about the LTM load balancer than playing with and testing it, even if it is a virtual appliance running LTM version 10.1.

    To get started download the LTM BIGIP-10.1.0.3341.1084.ova software and the base registration key (you will need to register first).

    Problem

    After you import the ova file and boot the BigIp you may be presented with the following error:
     
    Memory for crash kernel (0x0 to 0x0) notwithin permissible range
    

    Resolution

    The VM should be bootable out of the box on a PC with a 64-bit Intel CPU. If you are using a 64-bit AMD system instead, you need to enable the "Enable IO APIC" option under VM properties, System, Motherboard.

    After the change the system will still show the message but the booting process will no longer stop.

    After the system boots, log in as root/default. Find the management IP and, using your OS browser, navigate to https://<management-ip> to activate the trial license.
     
    tmsh list /sys management-ip
    

    References

    https://www.f5.com/trial/
    http://lost-and-found-narihiro.blogspot.co.uk/2011/04/how-to-fly-big-ip-ltm-ve-in-vmware.html
    https://forums.virtualbox.org/viewtopic.php?f=5&t=24988

    Sunday, June 9, 2013

    SR-IOV technology enables low-level network virtualization

    In the virtualization space the SR-IOV technology was introduced around 2008-2010 [1]. The technical details can be found under the links in the reference section, but in plain English the technology allows you to create many virtual devices based on a single physical device. For this to work, the hardware (CPU, north chipset) and operating system need to have SR-IOV support.

    Below is a video demonstrating packet processing for an Intel Ethernet card that supports SR-IOV.


    Interesting slides showing the concept from the video and reference links (a short sysfs sketch follows the list):
    • After the frame enters the physical port on the NIC, the low-level driver/firmware (supporting SR-IOV) distributes the packet (based on header classification, hash value, etc.) to separate virtual queues 
    • Each virtual queue is assigned directly to a virtual device 
    • Once the packet is in a queue it can be delivered to the VM DIRECTLY, without the usual software hypervisor overhead
    • Packets don't have to be copied from the physical port buffer(s) to OS RAM and then from OS RAM to the VM OS buffers. The data can be sent directly from the physical port to the VM OS buffers. That way the hypervisor processing overhead can be minimised.
    • A critical part of the technology is CPU and chipset virtualization support
    • As access to physical RAM needs to be protected between the hypervisor and the VMs, as well as between the VMs themselves, the virtual memory address is translated to the physical location by the north chipset
    • For the DMA request that copies the packets, the address translation between the hypervisor address space and the VM address space is transparent (the north chipset takes care of it)
    • Another view of how the packet is delivered from the physical port to the VM
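
    On a Linux host with an SR-IOV-capable NIC, carving out virtual functions (VFs) is typically a single sysfs write. This is a sketch: the interface name and VF count are examples, and driver support for the sriov_numvfs attribute is assumed.

    # Ask the NIC driver to create 4 virtual functions on eth0
    echo 4 > /sys/class/net/eth0/device/sriov_numvfs
    # The new VFs appear as extra PCI devices that can be passed
    # through directly to VMs
    lspci | grep -i 'virtual function'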
    References
    1. http://www.intel.com/content/dam/doc/application-note/pci-sig-sr-iov-primer-sr-iov-technology-paper.pdf
    2. http://www.intel.com/content/dam/doc/white-paper/pci-sig-single-root-io-virtualization-support-in-virtualization-technology-for-connectivity-paper.pdf
    3. http://communities.intel.com/community/wired/blog/2010/09/07/sr-iov-explained

    Friday, May 17, 2013

    Brocade On-Demand Data Center vision

    As cloud technologies help to evolve and create new sets of products and services within data centers, the data center concept is changing as well. When it comes to resource provisioning there is no more talk about silos, but rather about an agile, on-demand, scalable pool of resources that can be freely used when required.

    With the progressing changes in the industry, many vendors adapt and innovate to meet the increasing demands of the next-generation data center. This On-Demand Data Center link explains Brocade's DC vision. In short, they believe in:
    • x86 server virtualization
    • Openstack (like Quantum), OpenDaylight, SDN
    • OpenFlow
    • virtual network appliances (like vADX, or a vRouter like Vyatta)
    • network hardware that supports virtual as well as physical workloads: the Brocade MLX router and Brocade VCS switch product lines
    References
    1. http://www.brocade.com/downloads/documents/technical_briefs/vcs-technical-architecture-tb.pdf

    Friday, December 28, 2012

    How to emulate a Raspberry Pi computer

    How much money would you have to spend to assemble a simple x86 PC (an Intel/AMD compatible PC)? With the prices on the market it sounds almost impossible to buy all the necessary elements under a $100 budget. But if Intel binary compatibility is not one of your requirements, you can try the cheapest ARM-based computer, called the Raspberry Pi.

    What is Raspberry Pi

    For only about $35 you can buy a complete ARM compatible PC. 

    The Raspberry Pi (short: RPi or RasPi) is an ultra-low-cost ($25-$35) credit-card sized Linux computer.

    The Raspberry Pi measures 85.60mm x 56mm x 21mm, with a little overlap for the SD card. It weighs 45g. 


    Graphics capabilities are roughly equivalent to the original Xbox's level of performance.

    Overall real-world performance is something like an old 300MHz Pentium 2, with somewhat better graphics.

    The device is powered by 5v micro USB.

    Raspberry Pi emulator

    You can use the qemu emulator, which works on Linux and Windows, to run and test almost any Raspberry-compatible distribution. The emulator takes care of abstracting the underlying ARM hardware when the system is turned on. For detailed instructions we can use Google or follow one of these links (a condensed qemu sketch follows the lists):

    • windows qemu 
    http://www.raspberrypi.org/phpBB3/viewtopic.php?f=5&t=5743
    http://sourceforge.net/projects/rpiqemuwindows/

    • Linux qemu
    http://www.smallbulb.net/2012/225-emulating-raspberry-pi
    http://hexample.com/2012/01/10/emulating-raspberry-pi-debian/
    http://xecdesign.com/qemu-emulating-raspberry-pi-the-easy-way/
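
    The Linux recipes linked above generally boil down to a single qemu-system-arm invocation along these lines. This is a sketch: kernel-qemu is a QEMU-friendly kernel build obtained from the guides above, and the Raspbian image name is an example.

    # Emulate an ARM1176 CPU (as used in the Raspberry Pi) on qemu's
    # versatilepb machine model, booting a Raspbian disk image
    qemu-system-arm -M versatilepb -cpu arm1176 -m 256 \
      -kernel kernel-qemu \
      -append "root=/dev/sda2 panic=1" \
      -hda raspbian-wheezy.img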

    Some screenshots from booting the Raspbian “wheezy” image can be seen below.




    References
    http://www.raspberrypi.org/faqs
    http://www.raspberrypi.org/downloads

    Friday, November 23, 2012

    Installing Rackspace Private Cloud (Alamo) on a single virtual server using VMware Workstation

    If you are looking for info about installation on a physical server you may like this article instead: How to install Rackspace Private Cloud (Alamo) on a single physical server

    VMware Workstation

    The Openstack installation within a virtual server is almost identical to the one on a physical server. The main difference is that we have to create a virtual machine that emulates/simulates the CPU extensions needed for the nested hypervisor [1].

    Practically, that means you have to enable the virtualization checkbox in your VM config.


    If you prefer editing a config file, vhv.enable is the variable that you have to set in the *.vmx file (shown in context below):
     
    .encoding = "windows-1252"
    config.version = "8"
    virtualHW.version = "8"
    numvcpus = "4"
    vcpu.hotadd = "TRUE"
    scsi0.present = "TRUE"
    scsi0.virtualDev = "lsilogic"
    memsize = "4096"
    mem.hotadd = "TRUE"
    scsi0:0.present = "TRUE"
    scsi0:0.fileName = "Ubuntu 64-bit.vmdk"
    ide1:0.present = "TRUE"
    ide1:0.fileName = "C:\Users\radoslaw\Downloads\alamo-v2.0.0.iso"
    ide1:0.deviceType = "cdrom-image"
    floppy0.startConnected = "FALSE"
    floppy0.fileName = ""
    floppy0.autodetect = "TRUE"
    ethernet0.present = "TRUE"
    ethernet0.virtualDev = "e1000"
    ethernet0.wakeOnPcktRcv = "FALSE"
    ethernet0.addressType = "static"
    ethernet0.address = "00:50:56:25:71:FB"
    usb.present = "TRUE"
    ehci.present = "TRUE"
    sound.present = "TRUE"
    sound.fileName = "-1"
    sound.autodetect = "TRUE"
    serial0.present = "TRUE"
    serial0.fileType = "thinprint"
    serial1.present = "TRUE"
    serial1.fileType = "file"
    serial1.fileName = "test.py"
    pciBridge0.present = "TRUE"
    pciBridge4.present = "TRUE"
    pciBridge4.virtualDev = "pcieRootPort"
    pciBridge4.functions = "8"
    pciBridge5.present = "TRUE"
    pciBridge5.virtualDev = "pcieRootPort"
    pciBridge5.functions = "8"
    pciBridge6.present = "TRUE"
    pciBridge6.virtualDev = "pcieRootPort"
    pciBridge6.functions = "8"
    pciBridge7.present = "TRUE"
    pciBridge7.virtualDev = "pcieRootPort"
    pciBridge7.functions = "8"
    vmci0.present = "TRUE"
    hpet0.present = "TRUE"
    usb.vbluetooth.startConnected = "TRUE"
    displayName = "Ubuntu 64-bit"
    guestOS = "ubuntu-64"
    nvram = "Ubuntu 64-bit.nvram"
    virtualHW.productCompatibility = "hosted"
    vhv.enable = "TRUE"
    powerType.powerOff = "hard"
    powerType.powerOn = "hard"
    powerType.suspend = "hard"
    powerType.reset = "hard"
    extendedConfigFile = "Ubuntu 64-bit.vmxf"
    vmci0.id = "-1561829832"
    uuid.location = "56 4d 22 7e ee b5 2b 19-3e c1 5e 76 a2 e8 5e 38"
    uuid.bios = "56 4d 22 7e ee b5 2b 19-3e c1 5e 76 a2 e8 5e 38"
    cleanShutdown = "TRUE"
    replay.supported = "FALSE"
    replay.filename = ""
    scsi0:0.redo = ""
    pciBridge0.pciSlotNumber = "17"
    pciBridge4.pciSlotNumber = "21"
    pciBridge5.pciSlotNumber = "22"
    pciBridge6.pciSlotNumber = "23"
    pciBridge7.pciSlotNumber = "24"
    scsi0.pciSlotNumber = "16"
    usb.pciSlotNumber = "32"
    ethernet0.pciSlotNumber = "33"
    sound.pciSlotNumber = "34"
    ehci.pciSlotNumber = "35"
    vmci0.pciSlotNumber = "36"
    usb:1.present = "TRUE"
    vmotion.checkpointFBSize = "37748736"
    usb:1.speed = "2"
    usb:1.deviceType = "hub"
    usb:1.port = "1"
    usb:1.parent = "-1"
    tools.remindInstall = "TRUE"
    usb:0.present = "TRUE"
    usb:0.deviceType = "hid"
    usb:0.port = "0"
    usb:0.parent = "-1"
    

    References
    1. Nested virtualization
      http://www.ibm.com/developerworks/cloud/library/cl-nestedvirtualization/
      http://www.veeam.com/blog/nesting-hyper-v-with-vmware-workstation-8-and-esxi-5.html

      You can't use the Oracle VirtualBox tool, as it doesn't support nested virtualisation: https://www.virtualbox.org/ticket/4032

    2. http://www.rackspace.com/knowledge_center/article/installing-rackspace-private-cloud-vmware-fusion

    Sunday, November 18, 2012

    What does Software Defined Data Center mean

    After the industry created the Software Defined Network (SDN) [1] term, it is time for a new one. The newly emerging IT buzzword is the Software Defined Data Center (SDDC) [2]. It appears that only VMware is marketing this extensively at the moment.

    From a technical point of view it all makes sense: compute resources are already largely virtualized, and virtual storage and virtual networks are following. Looking at VMware's recent acquisition of Nicira [3], the company has many if not all of the necessary products to build such an SDDC.



    Let's see how the market responds to it and whether other vendors start adopting and using the term as well in the near future.

    References
    1. SDN
      http://rtomaszewski.blogspot.co.uk/2012/10/software-defined-network-sdn-as-example.html
      http://rtomaszewski.blogspot.co.uk/2012/10/google-does-use-sdn-in-its-data-centers.html
      http://rtomaszewski.blogspot.co.uk/2012/09/emerging-of-virtual-network-aka-quantum.html
    2. SDDC
      http://www.networkcomputing.com/data-center/the-software-defined-data-center-dissect/240006848
      http://www.vmware.com/solutions/datacenter/software-defined-datacenter/index.html
      http://blogs.vmware.com/console/2012/08/the-software-defined-datacenter-meets-vmworld.html
    3. Openstack, Nicira and VMware
      http://www.infoworld.com/t/data-center/what-the-software-defined-data-center-really-means-199930

    Friday, November 2, 2012

    After server virtualization it is time for the network to be virtualized

    Virtualization has today become a de facto standard in almost every company. What was a revolution in computing about 10 years ago has become a mature product for everyone. But when we look at the history of how it evolved, we can see one component that has remained unmodified: the network, one of the very few unvirtualized technology bastions.

    But today the market is changing. To understand the changes and what all this means, I recommend reading at least these two blog posts:

    VMWARE BUYS NICIRA: A HYPERVISOR VENDOR WOKE UP
    VMware’s Acquisition of Nicira – VMware confirming the hypervisor is dead

    The changes the blogs describe are already happening. A practical example is the Rackspace Cloud Networks product, a hybrid network that was created to leverage the potential of software defined networking that a data center provider can benefit from.


    Further reading & references
    1. http://www.chriscolotti.us/vmware/nicira-nvp/nicira-nvp-virtualized-networking-primer/
    2. http://www.chriscolotti.us/vmware/nicira-nvp/the-nicira-nvp-component-architecture/
    3. http://nicira.com/en/frequently-asked-questions 
    4. http://www.rackspace.com/cloud/

    Saturday, September 29, 2012

    Software defined networks (SDN) with F5 and Microsoft Hyper-V

    There is a lot going on in the network space. As virtualization is changing the server landscape, there is more and more talk about virtualization in the network as well.

    Microsoft and F5 have collaborated, and with the new BigIp software release as well as the new generation of Windows Hyper-V technology they offer a Software Defined Networking (SDN) solution for Windows-based cloud servers.

    Software Defined Networking, Enabled in Windows Server 2012 and System Center 2012 SP1, Virtual Machine Manager

    F5'S Network Virtualization solution optimizes app delivery for Windows Server 2012 Hyper-V;


    MEC 2012--F5 Network Virtualization Solution




    Thursday, December 29, 2011

    How to resize (increase) the main NTFS system partition of your Windows-based virtual machine in VMware Workstation


    Problem summary
    When we create a VM we specify the various hardware components we want to virtualize. One of them is the HDD. After some time you may find out that the hard drive your VM has is too small.

    VMware Workstation allows you to resize the disk (as long as there are no snapshots); see the sketch below.
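
    VMware Workstation ships a command-line tool for growing a virtual disk. This is a sketch: the vmdk file name and target size are examples, and the VM must be powered off with no snapshots.

    # Grow the virtual disk to 40 GB; afterwards the extra space still has
    # to be claimed inside the guest by resizing the NTFS partition
    vmware-vdiskmanager -x 40GB "Windows XP.vmdk"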

    Impact
    You can't install any additional software because there is not enough free space.


    Problem
    Although there are many tools you can use to resize an NTFS file system, many of them have limitations: the demo version doesn't write data to the disk, a commercial license has to be bought, or the tool can't be used on the main partition (diskpart.exe) [2].



    Solution
    The task can be done with the help of [1]. We don't need a license, and it can resize the main NTFS partition.

    Free EaseUS® Partition Master 9.1 Home Edition 

    References
    [1]
    Free EaseUS® Partition Master 9.1 Home Edition
    http://www.partition-tool.com/personal.htm

    [2]

    How to extend a data volume in Windows Server 2003, in Windows XP, in Windows 2000, and in Windows Server 2008
    http://support.microsoft.com/kb/325590



    Monday, October 10, 2011

    XenServer v6 installation problem: UNSUPPORTED_INSTALL_METHOD - other-config:install-repository was not set to an appropriate value, and this is required for the selected distribution type

    It has been a long time since I last played with XenServer.

    I managed to install the XenServer without any problems. The XenCentre installation went flawlessly as well. But when I tried to create a new VM using the provided templates I ran into the problem described below.

    What I did in XenCentre:
    1. Open XenCentre
    2. New VM
    3. Ubuntu Lucid Lynx 10.04 (32-bit)
    4. Then use the default settings until you see the screen with:
    "Select the installation method for the operating system software you want to install on the new VM"

    On this screen you should select the desired installation source. In my example the option "Install from ISO library or DVD drive" was greyed out, but you could still select one of the ISO images from your storage pool (if you had defined one previously).

    Problem:
    If you attempt to start the VM to begin the installation, you will get an error like this:

    UNSUPPORTED_INSTALL_METHOD - other-config:install-repository was not set to an appropriate value, and this is required for the selected distribution type
    

    There is not much about this on the Citrix support/technical knowledge pages, but the solution is below.

    Solution:
    • To create the VM and start the installation from an ISO file saved on the attached storage (NFS or CIFS), you have to use the correct profile: Other install media
    • To use the Ubuntu Lucid Lynx 10.04 (32-bit) profile, you have to specify a URL for the data to be retrieved (for example http://mirror.clarkson.edu/ubuntu/ ); see the xe sketch after this list
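
    Alternatively, the other-config key named in the error can be set directly with the xe CLI. This is a sketch: the VM UUID is a placeholder and the mirror URL is the example above.

    # Point the VM's install repository at an Ubuntu mirror, then start the VM
    xe vm-param-set uuid=<vm-uuid> other-config:install-repository=http://mirror.clarkson.edu/ubuntu/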

    References:
    http://lists.xensource.com/archives/html/xen-users/2011-06/msg00394.html

    Wednesday, November 4, 2009

    Which CPU for XenServer do I need

    To be able to virtualise a Windows OS, Citrix XenServer needs some support on the hardware side. You have to have a CPU which brings the "virtualisation feature", meaning Intel VT or AMD-V.

    To see if your processor is 64 bit, you can run the following command:

    # egrep --color ' lm ' /proc/cpuinfo

    This is how you can check if your CPU has one of these necessary features:

    # egrep --color '^flags.*(vmx|svm)' /proc/cpuinfo

    References:
    How to check if your CPU supports hardware virtualization
    Hyper-V: Will My Computer Run Hyper-V? Detecting Intel VT and AMD-V
    Does your CPU run Intel-VT or AMD-V?
    XenServer 3.2 Hardware Support FAQ