
Friday, December 28, 2012

How to emulate Raspberry Pi computer

How much money would you have to spend to assemble a simple x86 PC (an Intel/AMD compatible PC)? With current market prices it sounds almost impossible to buy all the necessary components on a budget under $100. But if Intel binary compatibility is not a requirement, you can try the cheapest ARM based computer available: the Raspberry Pi.

What is Raspberry Pi

For only about $35 you can buy a complete ARM compatible PC.

The Raspberry Pi (short: RPi or RasPi) is an ultra-low-cost ($25-$35) credit-card sized Linux computer.

The Raspberry Pi measures 85.60mm x 56mm x 21mm, with a little overlap for the SD card. It weighs 45g. 


Graphics capabilities are roughly equivalent to the original Xbox's level of performance.

Overall real-world performance is something like an old 300MHz Pentium 2, but with much better graphics.

The device is powered by a 5V micro USB supply.

Raspberry Pi emulator

You can use the qemu emulator, which works on Linux and Windows, to run and test almost any Raspberry Pi compatible distribution. The emulator takes care of emulating the necessary underlying ARM hardware when the system is booted. For detailed instructions you can use Google or follow one of these links:

  • windows qemu 
http://www.raspberrypi.org/phpBB3/viewtopic.php?f=5&t=5743
http://sourceforge.net/projects/rpiqemuwindows/

  • Linux qemu
http://www.smallbulb.net/2012/225-emulating-raspberry-pi
http://hexample.com/2012/01/10/emulating-raspberry-pi-debian/
http://xecdesign.com/qemu-emulating-raspberry-pi-the-easy-way/
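Under Linux, a minimal sketch of what such an emulation command looks like (the kernel and image file names below are assumptions based on downloads of the time, not taken from the links above):

```shell
# Emulate an ARM11 board that qemu supports and boot a Raspbian image on it.
# "kernel-qemu" is a qemu-patched kernel (the stock Pi kernel will not boot
# on the emulated versatilepb board); the image name is just an example.
qemu-system-arm -M versatilepb -cpu arm1176 -m 256 \
    -kernel kernel-qemu \
    -hda 2012-12-16-wheezy-raspbian.img \
    -append "root=/dev/sda2 rootfstype=ext4 rw"
```

The emulated board is not a real Raspberry Pi; it only shares the ARM11 CPU family, which is enough to boot and test Raspbian userland.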

Some screenshots from booting the Raspbian “wheezy” image can be seen below.




References
  • Others
http://www.raspberrypi.org/faqs
http://www.raspberrypi.org/downloads

Thursday, December 27, 2012

Openstack auto provisioning with Puppet and razor

To build and operate a big Openstack infrastructure you have to be able to deploy and provision many new servers quickly and effectively.

This is not a definitive list, but as a minimum you need to make sure that all your servers have the right OS version, that all dependency packages are installed, and finally that the right Openstack code (projects like nova, cinder, etc.) is deployed. This list is only a very simple example of what you need to think about. In this blog post I want to show an example of how this can be achieved with Puppet and the razor tool.

Problem

How to provision and configure Openstack servers.

Analysis and results description

Openstack+puppet+razor: all details of how to run this can be found here: http://wiki.debian.org/OpenStackRazorHowto

References
  1. http://puppetlabs.com/blog/introducing-razor-a-next-generation-provisioning-solution/
  2. http://wiki.debian.org/OpenStackRazorHowto
  3. http://purevirtual.eu/2012/07/02/how-to-get-started-with-razor-and-puppet-part-1/


Dev vs Ops team and DevOps anti patterns

In this video from PuppetConf, R. Tyler Croy talks about important aspects and rules that sysadmin, engineering and devops teams follow. He gives many examples and tries to explain the principles behind many bad ideas that appear within organisations when it comes to running operations, engineering or dev teams. He calls these bad ideas ops anti-patterns. The full video can be found here: We'll Do It Live - Operations Anti-Pattern. Below are a couple of notes I took while watching it :).
  • ~4.20m; use configuration management; we are in a transition and your infrastructure should be treated no differently than software code
  • ~6.00m; use automatic testing
  • ~7.40m; if your infrastructure is treated as code please do implement a release management process
  • ~8.40m; scripts written by the ops team are a great way to start, but without some good software engineering practices they don't work in the long term
  • ~9.20m; don't use the root user for code deployments into production; rather use well-known, good tools designed for it
  • ~11.40m; the ops team tends to be reactive instead of proactive; this is dangerous because your SPOFs (single points of failure) change as your application matures and gets more complex
  • ~14.30m; dev team may take care of HA in software but there is a lot more that needs to be done on the infrastructure level as well; dedicate resources or highly skilled contractors to work on it as soon as possible
  • ~14.40m; HA is never extensively tested so failures should be expected
  • ~15.30m; when building in cloud you have to assume that everything, I mean everything is going to fail
  • ~16.00m; make sure that your alerting system is not over logging
  • ~18.45m; this sysadmin attitude is wrong: never touch a running system
  • ~19.30m; use continuous integration to minimize long-term risk
  • ~21.40m; isolated silos, like separate dev and ops teams that don't talk to each other, are wrong
  • ~24.30m; share the necessary information about your production environment with developers
  • ~25.00m; strict control is an enemy of creativity
  • ~25.30m; too much IT and ops control will lead to wrong and bad workarounds
  • ~26.50m; silos and poor communication will lead to a waste of resources
  • ~29.00m; don't jump on hyped products/tools only because they are popular on the Internet; when choosing the right tools you have to weigh the risk between:
    • knowing from personal experience the bad design and limitations the old tools have, vs
    • knowing the advantages of the new products only from reading about them
  • ~30.40m; don't invent tools in house before you evaluate potential solutions
  • ~33.50m; don't build your own packaging system; use one that open source projects or vendors offer
  • ~36.40m; don't netboot all your hardware; there is a reason servers have local disks that can be used
  • ~39.20m; don't delete your production data to test your backup
  • ~40.30m; don't trust your vendor
  • ~43.00 don't use multi data center deployments if your application is not ready for it; use other methods to implement disaster recovery if needed
  • ~45.00 have a centralized location for code that is used for production deployments
References
  1. http://www.agileweboperations.com/devops-anti-patterns
  2. http://unethicalblogger.com/2012/10/09/video-well-do-it-live.html

Thursday, December 20, 2012

How to document python code

There are a couple of ways to document python code. In this article I'm not trying to compare them, but rather to show what the final documentation may look like.

Problem

How many tools can we use to generate documentation from python sources?
What does the final documentation look like after it is generated from source code?

Analysis

When searching on Google we quickly find that the most popular ones are (there are more that I'm not listing here):
  • epydoc
  • sphinx 
  • doxygen
Example

These are examples of what the documentation looks like.

Epydoc - It has the style of the classical javadoc documentation introduced by Sun when Java was released.


Sphinx - the layout is very different from Epydoc. It seems to be well liked by the python project itself: all the official python documentation is generated with it.


Demonstration

Epydoc installation

# python --version
Python 2.7.3
# aptitude show python-epydoc
# aptitude install  python-epydoc
# aptitude install apache2

The source code
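The original post showed the source file only as a screenshot, which is missing here. As a stand-in, below is a hypothetical minimal example_epydoc.py using epydoc-style docstring fields; only the names example_epydoc and MyClass come from the epydoc run output, the rest is an assumption:

```python
# example_epydoc.py -- hypothetical reconstruction of the documented module
# (the original source was only shown as an image; only the module and
# class names come from the epydoc output)

class MyClass(object):
    """A demonstration class documented with epydoc fields.

    @ivar name: the name stored by the instance
    @type name: str
    """

    def __init__(self, name):
        """Create a new instance.

        @param name: value to store on the instance
        @type name: str
        """
        self.name = name

    def greet(self):
        """Return a greeting string.

        @return: a greeting that includes the stored name
        @rtype: str
        """
        return "Hello, %s" % self.name
```

Running epydoc against a file shaped like this produces output of the kind shown below.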

Generation of epydoc documentation
# epydoc --verbose --verbose  example_epydoc.py
Building documentation
[  0%] example_epydoc (example_epydoc.py)
Merging parsed & introspected information
[  0%] example_epydoc
Linking imported variables
[  0%] example_epydoc
[ 12%] example_epydoc.MyClass
Indexing documentation
[  0%] example_epydoc
Checking for overridden methods
[ 12%] example_epydoc.MyClass
Parsing docstrings
[  0%] example_epydoc
[ 12%] example_epydoc.MyClass
Inheriting documentation
[ 12%] example_epydoc.MyClass
Sorting & Grouping
[  0%] example_epydoc
[ 12%] example_epydoc.MyClass
Writing HTML docs to 'html'
[  4%] epydoc.css
[  9%] epydoc.js
[ 13%] identifier-index.html
[ 45%] module-tree.html
[ 50%] class-tree.html
[ 54%] help.html
[ 59%] frames.html
[ 63%] toc.html
[ 68%] toc-everything.html
[ 72%] toc-example_epydoc-module.html
[ 77%] example_epydoc-module.html
[ 81%] example_epydoc.MyClass-class.html
[ 86%] example_epydoc-pysrc.html
[ 90%] redirect.html
[ 95%] api-objects.txt
[100%] index.html

Timing summary:
  Building documentation.............     0.2s |=================================================
  Merging parsed & introspected i....     0.0s |
  Linking imported variables.........     0.0s |
  Indexing documentation.............     0.0s |
  Checking for overridden methods....     0.0s |
  Parsing docstrings.................     0.0s |=
  Inheriting documentation...........     0.0s |
  Sorting & Grouping.................     0.0s |
  Writing HTML docs to 'html'........     0.0s |=======

Showing the doc

You have to configure Apache and point it at the generated 'html' directory. Once the page is loaded in the browser the documentation looks like this:
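One minimal way to publish the generated directory (the paths are assumptions; any Apache setup that serves the directory will do):

```shell
# Copy the epydoc output under Apache's default document root
# (/var/www on Debian/Ubuntu of that era) and browse to it.
sudo cp -r html /var/www/epydoc
# then open http://localhost/epydoc/index.html in a browser
```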

References
  1. http://stackoverflow.com/questions/4126421/how-to-document-python-code-epydoc-doxygen-sphinx
  2. http://stackoverflow.com/questions/1125970/python-documentation-generator
  3. http://stackoverflow.com/questions/5334531/python-documentation-standard-for-docstring
  4. http://epydoc.sourceforge.net/
  5. http://sphinx-doc.org/

What do you need to implement virtual network and build hybrid cloud

To build an infrastructure that can host hybrid cloud environments, or to benefit from the flexibility that virtual networking provides, we need both software and hardware components. Below are a couple of links I found when researching this topic.

VMware ESXi/vSphere
Citrix XenServer
Opensource Linux alternatives
Microsoft Hyper-V
Vendors


Monday, December 10, 2012

A simple GIMP snippets for graphic files manipulations

In MS Windows we can use the Paint [1] program to manipulate graphics files. In Linux the most famous and often recommended alternative to it is GIMP.

The problem with GIMP is that, being a very powerful tool, it isn't as intuitive and simple to use as Paint. Below are some tricks I use when working with GIMP.

How to draw a square or border line

Once you create a selection you can draw a line to make it visible. An example is shown below.


You can do it in GIMP by using the menu options: Edit -> Stroke Selection.
More info about this can be found here [2].

How to create a new image from a selected region

With its help we can, for example, extract only the Ubuntu logo from the original picture below.

Original picture:

After extracting and cropping:


You can do it in GIMP by using the menu option: Image -> Crop to Selection.
More info about this can be found here [3].

References
  1. http://www.functionx.com/windows/Lesson03.htm

  2. http://pbs01.wordpress.com/2007/09/30/145/
    http://www.youtube.com/watch?v=RiZXLkB82cI
    http://www.gimp.org/tutorials/Borders_On_Selections/

  3. http://gimp.open-source-solution.org/manual/gimp-image-menu.html
    http://gimp.open-source-solution.org/manual/gimp-image-crop.html
    http://docs.gimp.org/2.6/en/plug-in-autocrop-layer.html

Proprietary AMD graphics drivers for Linux

How do I start Catalyst Control Center (CCC) from bash

I created a demo user to test my X server config and the ATI graphics driver. I ran into a problem: my user didn't have the relevant permissions to run commands with sudo. Every time I tried to launch the Catalyst Control Center I got a popup window asking for a password to perform an administrative task.



Solution
  1. The name of the program is displayed in the popup window: amdcccle
  2. If you didn't notice it, you can find it using these methods:
$ dpkg -l | grep ati | egrep -iv 'configuration|application|automatic|compatible|compatibility|static|foomatic|ating|ation|ative' | grep ati
$ dpkg -l | grep amd

$ dpkg -L fglrx-amdcccle | grep bin
/usr/lib/fglrx/bin
/usr/lib/fglrx/bin/amdcccle
/usr/lib/fglrx/bin/amdxdg-su
/usr/lib/fglrx/bin/amdupdaterandrconfig

How do I verify that my driver is loaded

More checks can be found in [1]. As a simple check you can run:

$ lsmod | grep fglrx
$ dmesg | grep fglrx


Apart from CCC, what command can I use to list, print and change the graphics driver settings

You can use the command 'aticonfig'. Example output can look like this:

$ aticonfig --lsa --odgc
* 0. 06:00.0 ATI Radeon HD 5700 Series

$ aticonfig --odgt
Default Adapter - ATI Radeon HD 5700 Series
                  Sensor 0: Temperature - 37.00 C

$ aticonfig --odgc
Default Adapter - ATI Radeon HD 5700 Series
                            Core (MHz)    Memory (MHz)
           Current Clocks :    850           1200
             Current Peak :    850           1200
  Configurable Peak Range : [600-960]     [1200-1445]
                 GPU load :    0%

References
  1. Unofficial Wiki for the AMD Linux Driver
  2. http://wiki.cchtml.com/index.php/Frequently_Asked_Questions
  3. http://support.amd.com/us/gpudownload/linux/Pages/radeon_linux.aspx

Sunday, December 9, 2012

High availability pattern using anycast IP addresses for cloud and applications

Anycast architecture that helps to create and achieve HA

Applications demand more and more resources to run efficiently. Yet even with the right amount of computational resources like servers, CPU, RAM and storage, for applications to be considered efficient and successful on the market they have to meet many more requirements. It is impossible to list all of them here, as they can depend on internal factors (for example, driven by the application architecture itself) or rely on external factors that may be specific and unique to a customer and an environment.

In this short blog post, however, I would like to discuss the importance of the scalability factor and show one pattern that can be used to build highly available and efficient infrastructure systems.

There are two concepts for implementing scalability: scale up vs scale out. For more information about scale up (or vertical scaling) see [1]. We will concentrate here only on the scale out option. All the pictures below are based on this presentation; the slides can be found here: OpenStack-Design-Summit-HA-Pairs-Are-Not-The-Only-Answer.
  1. To fully benefit from the HA pattern your application architecture should really use the shared-nothing paradigm

  2. That way, if a failure occurs, only an isolated, small part of the computational resources will be impacted.


  3. Next we have to configure our routers and implement the necessary changes to the routing protocol.
  4. The OSPF routing protocol is one example; others can be used in a similar way as well. More info can be found here [4].


    The slides show only a fraction of the configuration. Another good example with configs can be found here: Anycast DNS - Part 4, Using OSPF

  5. Lastly, you have to configure your servers to listen for and accept traffic to our anycast IP.
A best practice is to configure the external IP on the loopback interface, disable the ARP protocol for it and bind our application specifically to this IP.
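An illustrative sketch of that server-side configuration, to be run as root (the anycast IP 192.0.2.10 is an example value, not taken from the slides):

```shell
# 1. Add the anycast/service IP on the loopback interface so the host
#    accepts traffic for it without advertising it via ARP on the LAN.
ip addr add 192.0.2.10/32 dev lo

# 2. Make sure the host never answers ARP for addresses configured on
#    other interfaces (the classic anycast/direct-return sysctls).
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2

# 3. Bind the application explicitly to the anycast IP, e.g. a web
#    server configured to listen on 192.0.2.10:80 only.
```

The routers then learn (e.g. via OSPF) which servers currently announce 192.0.2.10 and route each client to the nearest healthy one.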

References
  1. http://en.wikipedia.org/wiki/Scalability
  2. https://devcentral.f5.com/blogs/us/lots-of-little-virtual-web-applications-scale-out-better-than-scaling-up
  3. Seattle Conference on Scalability: Lessons In Building Scalable Systems
  4. What is “anycast” and how is it helpful?

Friday, November 30, 2012

Introduction into tunneling protocols when deploying cloud network for your cloud infrastructure

Cloud networking is a hot topic for cloud providers and hosting companies. Basically, the concept should enable and allow tenants to create, manage and destroy network topologies on demand for their cloud servers by using an open cloud API. That said, we want to allow a tenant to create an isolated virtual layer 2 network with its own IP subnet.

Before going into further details we have to realize that the problem isn't trivial to solve. The difficulty comes from the fact that the existing network interconnecting the hypervisor hosts is not very flexible or adaptable to change. That physical network architecture was built and tuned to handle all traffic from all cloud VMs across your cloud deployment. This represents both its strength and its limitation, as it is not flexible enough when it comes to configuring and creating many isolated virtual layer 2 or layer 3 networks per single tenant. This is exactly the problem that cloud networking promises to solve.

At the moment there isn't a single standard for how to implement a cloud network. Instead, we have 3 different proposed protocols: VXLAN, NVGRE and STT [1].

All these protocols rely on the fact that the hypervisor hosts are interconnected. The additional features are implemented using tunneling mechanisms. All of them implement L2-in-L3 tunnels by using TCP, UDP or IP datagrams.

A short introduction and more explanation of how this works can be seen in the screenshots below, which were taken from this video: Cloud Tunnels @ Cloud Mafia (slides can be found here: http://ifup.org/slides/cloud-tunnels/).
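To get a feel for one of these encapsulations, this is roughly how a VXLAN tunnel endpoint can be created on a recent Linux kernel (the interface names, the VNI 42 and the multicast group are arbitrary example values):

```shell
# Create a VXLAN interface: L2 frames sent on vxlan42 are wrapped in UDP
# datagrams (port 4789) and delivered to the other tunnel endpoints,
# which are discovered via the multicast group on the underlay (eth0).
ip link add vxlan42 type vxlan id 42 group 239.1.1.1 dev eth0 dstport 4789
ip addr add 10.42.0.1/24 dev vxlan42
ip link set vxlan42 up
```

Each tenant network maps to its own VNI, which is how thousands of isolated virtual L2 segments can share one physical underlay.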






References
  1. VXLAN
  2. NVGRE
  3. STT

Friday, November 23, 2012

Installing Rackspace Private Cloud (Alamo) on a single virtual server using VMware Workstation

If you are looking for info about installation on a physical server you may like this article instead: How to install Rackspace Private Cloud (Alamo) on a single physical server

VMware Workstation

The Openstack installation within a virtual server is almost identical to the one on a physical server. The main difference is that we have to create a virtual machine that emulates the CPU extensions needed for the nested hypervisor [1].

Practically, that means you have to enable this checkbox in your VM config.


If you prefer editing the config file, the variable you have to set in the *.vmx file is vhv.enable = "TRUE", as in the full config below:
 
.encoding = "windows-1252"
config.version = "8"
virtualHW.version = "8"
numvcpus = "4"
vcpu.hotadd = "TRUE"
scsi0.present = "TRUE"
scsi0.virtualDev = "lsilogic"
memsize = "4096"
mem.hotadd = "TRUE"
scsi0:0.present = "TRUE"
scsi0:0.fileName = "Ubuntu 64-bit.vmdk"
ide1:0.present = "TRUE"
ide1:0.fileName = "C:\Users\radoslaw\Downloads\alamo-v2.0.0.iso"
ide1:0.deviceType = "cdrom-image"
floppy0.startConnected = "FALSE"
floppy0.fileName = ""
floppy0.autodetect = "TRUE"
ethernet0.present = "TRUE"
ethernet0.virtualDev = "e1000"
ethernet0.wakeOnPcktRcv = "FALSE"
ethernet0.addressType = "static"
ethernet0.address = "00:50:56:25:71:FB"
usb.present = "TRUE"
ehci.present = "TRUE"
sound.present = "TRUE"
sound.fileName = "-1"
sound.autodetect = "TRUE"
serial0.present = "TRUE"
serial0.fileType = "thinprint"
serial1.present = "TRUE"
serial1.fileType = "file"
serial1.fileName = "test.py"
pciBridge0.present = "TRUE"
pciBridge4.present = "TRUE"
pciBridge4.virtualDev = "pcieRootPort"
pciBridge4.functions = "8"
pciBridge5.present = "TRUE"
pciBridge5.virtualDev = "pcieRootPort"
pciBridge5.functions = "8"
pciBridge6.present = "TRUE"
pciBridge6.virtualDev = "pcieRootPort"
pciBridge6.functions = "8"
pciBridge7.present = "TRUE"
pciBridge7.virtualDev = "pcieRootPort"
pciBridge7.functions = "8"
vmci0.present = "TRUE"
hpet0.present = "TRUE"
usb.vbluetooth.startConnected = "TRUE"
displayName = "Ubuntu 64-bit"
guestOS = "ubuntu-64"
nvram = "Ubuntu 64-bit.nvram"
virtualHW.productCompatibility = "hosted"
vhv.enable = "TRUE"
powerType.powerOff = "hard"
powerType.powerOn = "hard"
powerType.suspend = "hard"
powerType.reset = "hard"
extendedConfigFile = "Ubuntu 64-bit.vmxf"
vmci0.id = "-1561829832"
uuid.location = "56 4d 22 7e ee b5 2b 19-3e c1 5e 76 a2 e8 5e 38"
uuid.bios = "56 4d 22 7e ee b5 2b 19-3e c1 5e 76 a2 e8 5e 38"
cleanShutdown = "TRUE"
replay.supported = "FALSE"
replay.filename = ""
scsi0:0.redo = ""
pciBridge0.pciSlotNumber = "17"
pciBridge4.pciSlotNumber = "21"
pciBridge5.pciSlotNumber = "22"
pciBridge6.pciSlotNumber = "23"
pciBridge7.pciSlotNumber = "24"
scsi0.pciSlotNumber = "16"
usb.pciSlotNumber = "32"
ethernet0.pciSlotNumber = "33"
sound.pciSlotNumber = "34"
ehci.pciSlotNumber = "35"
vmci0.pciSlotNumber = "36"
usb:1.present = "TRUE"
vmotion.checkpointFBSize = "37748736"
usb:1.speed = "2"
usb:1.deviceType = "hub"
usb:1.port = "1"
usb:1.parent = "-1"
tools.remindInstall = "TRUE"
usb:0.present = "TRUE"
usb:0.deviceType = "hid"
usb:0.port = "0"
usb:0.parent = "-1"

References
  1. Nested virtualization
    http://www.ibm.com/developerworks/cloud/library/cl-nestedvirtualization/
    http://www.veeam.com/blog/nesting-hyper-v-with-vmware-workstation-8-and-esxi-5.html

    You can't use the Oracle VirtualBox tool, as it doesn't support nested virtualisation. https://www.virtualbox.org/ticket/4032

  2. http://www.rackspace.com/knowledge_center/article/installing-rackspace-private-cloud-vmware-fusion

Thursday, November 22, 2012

How to install Rackspace Private Cloud (Alamo) on a single physical server

Rackspace has released a new version of its private cloud offering. The new private cloud is based on the latest Folsom Openstack release and includes many of the latest features.

ISO downloading 

The software can be downloaded from http://www.rackspace.com/cloud/private/openstack_software/ after you fill in a simple form. Once it is filled in you receive an email with a direct link to download the ISO image (alamo-v2.0.0.iso). The other important link is to the Getting Started Guide document.

Supported features

In the guide we can find more details about supported and unsupported features.

Supported OpenStack Features:
  • Rackspace supports integration with the other components of OpenStack, as well as features such as floating IP address management, security groups, availability zones, and the python-novaclient command line client
  • Single and dual NIC configurations
  • NFS and ISCSI file storage as backing stores for VM storage
  • VNC Proxy
  • KVM hypervisor
  • Nova Multi Scheduler instead of Filter Scheduler
  • Keystone integrated authentication
  • Glance integrated image service
  • Horizon dashboard
  • Cinder block storage service
  • Swift object storage service
  • Linux and Windows guests to the extent to which they accept handoff from KVM and boot
  • Single metadata server running on each device
  • Cloud management through OpenStack APIs
  • Rackspace also supports the use of Rackspace Cloud Files as a backend for OpenStack
  • Image Storage. For information about OpenStack Object Storage, refer to Rackspace Private Cloud OpenStack Object Storage installation
Unsupported OpenStack Features. The following features are not supported yet.
  • Nova high availability
  • Nova object store
  • Nova volumes
  • NFS and ISCSI file storage via volumes for guest VMs
  • Clustered file system solutions
  • xpvnc
  • Xen and other hypervisors
  • Centralized metadata servers
  • Quantum Software Defined Networking
Installation

To start the installer I first had to convert the ISO into a bootable USB stick (in the guide you will find a link to a tool that can help with this task: www.pendrivelinux.com).

Once done, change the boot sequence in the BIOS, plug in your pendrive and restart the server.

The installation is relatively simple. The couple of questions that may confuse you are about the number of NICs and what subnet you want to use for the cloud servers.

Below are some of my screenshots to demonstrate what it looks like (sorry for the quality).

The installation welcome screen


There will be a couple of questions about the IP address, gateway, netmask, DNS, user and password that you have to answer as well. Random screenshots are shown below.




During the whole installation you can switch to console 4 (ALT+4) and observe various debugging logs as the installation progresses.


At some point phase 1 of 2 will be completed and the system reboots. Once the system comes back we can see different logs showing the chef progress as Openstack is configured.


At the end of phase 2/2, when the installation is completed, you are going to see a screen with a summary of the server resources.


References
  1. http://www.rackspace.com/cloud/private/openstack_software/
  2. http://www.rackspace.com/knowledge_center/article/installing-rackspace-private-cloud-on-physical-hardware

Monday, November 19, 2012

Openstack popularity trends over time since 2010

It is a difficult task to measure the popularity of a product, item, thing or piece of software. But with the help of the Internet we can use specialized search engines that track internet messages and posts around the world for a particular string, keyword or some form of textual identifier.

A simple but powerful example of such a search engine is Google's trend search: http://www.google.com/trends.

Problem

How popular is Openstack?
Who uses Openstack?

Analysis

The below graphs and stats are generated by using this link: http://www.google.com/trends/explore#q=openstack.

Q: Is Openstack getting more and more popular?
Q: How popular is Openstack?

Below is a graph from Google trends (if you can't see it you need to be logged in to a Google account)


Q: In which country is Openstack the most popular?
A: China

Q: Is the USA the most popular country for searches that include Openstack?
A: No


Below is a graph from Google trends (if you can't see it you need to be logged in to a Google account)


Q: Is Europe interested in Openstack?
A: Yes

Q: In what European countries is Openstack the most popular?
A: In 2012: UK, Germany, France, Spain, Sweden and Poland

Q: In what cities is Openstack the most popular?

Below is a graph from Google trends (if you can't see it you need to be logged in to a Google account)

Sunday, November 18, 2012

What does Software Defined Data Center mean

After the industry created the term Software Defined Network (SDN) [1], it is time for a new one. A new emerging IT buzzword is the Software Defined Data Center (SDDC) [2]. It appears that only VMware is marketing this extensively at the moment.

From a technical point of view it all makes sense: compute resources are already largely virtualized, and virtual storage and virtual networks are following. Looking at the recent VMware acquisition of Nicira [3], the company has many if not all of the necessary products to build such an SDDC.



Let's see how the market responds to it and whether other vendors start adopting and using this term as well in the near future.

References
  1. SDN
    http://rtomaszewski.blogspot.co.uk/2012/10/software-defined-network-sdn-as-example.html
    http://rtomaszewski.blogspot.co.uk/2012/10/google-does-use-sdn-in-its-data-centers.html
    http://rtomaszewski.blogspot.co.uk/2012/09/emerging-of-virtual-network-aka-quantum.html

  2. SDDC
    http://www.networkcomputing.com/data-center/the-software-defined-data-center-dissect/240006848
    http://www.vmware.com/solutions/datacenter/software-defined-datacenter/index.html
    http://blogs.vmware.com/console/2012/08/the-software-defined-datacenter-meets-vmworld.html

  3. Openstack, Nicira and VMware
    http://www.infoworld.com/t/data-center/what-the-software-defined-data-center-really-means-199930

What your Dev, Engineering and sales teams can achieve by using Openstack cloud and Quantum network

The Nicira NVP controller can be used together with Quantum, as a plugin, to enable extended virtual networking capabilities in Openstack. With NVP's help, providers, operators and users can start building Software Defined Networks (SDN) to interconnect their virtual machines.

It was predicted, and is seen by many as, a natural evolution and a step forward for Openstack and cloud networking in general.

We are no longer tied to a static, unchangeable set of switches and routers when it comes to building and designing networks. This more flexible networking allows us to build more agile cloud infrastructures that can meet the changing demands of rapidly evolving applications.

As a real business example of cloud flexibility this is a video of how Nicira is using Quantum to solve its internal infrastructure challenges for the Dev, Engineering and Sales teams: Running Quantum on Quantum @ Nicira's Multi-tenant OpenStack.



For those who don't like videos, I have copied some of the key points from the presentation:
  • In a physical world it is difficult to achieve full automation and implement a provisioning process that can build new configurations in only a few minutes
  • It is a challenge for the operational team to deliver results in a rapidly changing environment and keep the same quality of work when your company grows fast
  • There is always a risk of a human error that can potentially bring your production network down
  • Most network are designed to meet only the requirements that are known initially; It is difficult to plan ahead and new changes can be impossible to implement later on
  • Developers need to have a flexibility to test without a risk of bringing a company network down
  • You can easily provision new isolated environments for new projects or new hires
  • For better resource utilization you want to have automation in place that allows you to tear down and spin up whole environments (cloud servers + networking) as easily as possible during a day
  • Not only do compute resources have to be easy to provision, but so does interconnecting and isolating them, to achieve infrastructure resource agility; this is especially important for tiered application topologies where we have web, db and application servers.
  • Quickly building POC for new projects and customers demonstration
  • A flexible network infrastructure allows collaboration between different development teams; breaking the network isolation boundaries in a statically built network may be impossible
  • Within minutes or seconds you can create and build new networks, as well as decommission them
  • You can add new resources and boost capacity when you need it; more flexible handling of cloud bursting; you don't have to oversubscribe resources only to meet peak application requirements
And at the end copies of the interesting slides from the above presentation.






How to extract captions from a youtube video

There are videos on youtube that are 30 minutes and longer. You've watched a video and later on you would like to find only a single moment that interests you. With the help of subtitles and keyword searching we can do this easily.

Problem

How to download the text captions file from a youtube video and search within it.

Solution

There are a couple of solutions that can be found via Google; see the references section. All the automatic methods didn't work for me. Below is the manual method that I used to extract the subtitles (captions) file from a youtube video.

  • Open the video page in the Chrome browser.
  • Enable debugging and request tracking by pressing F12 in Chrome.
  • Enable captions in the video.
  • Navigate to the Network tab and find the last timedtext request (at the bottom).
  • Right click on it and open that file in a new tab; an XML file containing the subtitles with timestamps should open.
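Once the timedtext XML is saved locally, it can be searched with a few lines of Python. A sketch is below; the XML layout mirrors the transcript/text structure of those files, but the sample content itself is made up:

```python
# Search a saved youtube timedtext XML file for a keyword and report
# where (in seconds) the matching captions start.
import xml.etree.ElementTree as ET

SAMPLE = """<transcript>
  <text start="12.3" dur="4.0">welcome to the talk</text>
  <text start="95.0" dur="3.5">now we discuss anycast routing</text>
</transcript>"""

def find_caption(xml_text, keyword):
    """Return (start_seconds, caption) pairs whose caption contains keyword."""
    root = ET.fromstring(xml_text)
    return [(float(t.get("start")), t.text)
            for t in root.iter("text")
            if keyword.lower() in (t.text or "").lower()]

print(find_caption(SAMPLE, "anycast"))
# → [(95.0, 'now we discuss anycast routing')]
```

Seeking to the reported start time in the player then jumps straight to the interesting moment.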

References
  1. http://mvark.blogspot.in/2012/06/how-to-extract-subtitles-from-youtube.html
  2. http://stackoverflow.com/questions/10036796/how-to-extract-subtitles-from-youtube-videos
  3. http://webapps.stackexchange.com/questions/25072/how-do-i-download-subtitles-from-a-youtube-video
  4. http://www.youtube.com/watch?v=drTyNDRnyxs

Thursday, November 15, 2012

What network topologies can I build with Openstack Quantum server

The Folsom Openstack release brings many enhancements to the networking stack in the Openstack cloud. The features have been encapsulated within a new core project called Quantum. But even looking at the documentation, it can be very difficult at the beginning to understand what the differences from the old nova-network solution are and how an example Quantum virtual network can be configured.

Fortunately there is an easy to follow video introduction to Quantum from the last Summit [1]. The slides for it can be found here as well [2].

Below are example slides from the presentation showing how a virtual network can be built and what it can look like in a cloud powered by Openstack. Examples include: a single flat network, multiple flat networks, mixed flat plus private networks, a single provider network, and multiple per-tenant private networks plus a single provider network.






References

  1. OpenStack Networking (Quantum) Project Update 
  2. http://www.slideshare.net/danwent/quantum-grizzly-summit 
  3. More slides about Quantum from Dan Wendlandt at slideshare

Wednesday, November 14, 2012

How many Linux distributions officially provide Openstack support

As Openstack gets more mature, the number of lines of Python code increases and the software becomes more complex. Usual tasks like installation, support and operational maintenance become more complicated and require more advanced skills.

Seeing the opportunities on the market, companies like Rackspace, Red Hat and Ubuntu are rolling out new sales and support processes to address these needs. As evidence, below are a couple of screenshots from the Ubuntu Openstack presentation The promise of the Open Cloud, with further links in the references section.



References

http://www.rackspace.com/cloud/openstack/ 
http://www.redhat.com/about/news/press-archive/2012/8/red-hat-announces-preview-version-of-enterprise-ready-openstack-distribution
http://www.ubuntu.com/cloud/private-cloud/openstack 

Videos from 2012 Openstack Summit in San Diego

Here you can find videos for those of us who couldn't participate in the 2012 Openstack Summit in San Diego:

At the Openstack home page: Openstack summit sessions
At the Openstack channel on youtube: OpenStackFoundation

Tuesday, November 13, 2012

What happens to FTPS data channel if client closes control connection

There are a couple of extensions added to the standard FTP protocol to make it secure. This is important because in the default FTP configuration both the control and the data channel use clear text to exchange commands and transmit data.

Problem

We assume we were able to establish a successful FTPS session between a client and a server. The client has started a new data session, using passive mode, to download a large file from the server or to upload a file.

What happens to the file transfer if the control session is terminated by the client?

Troubleshooting

To verify the scenario we are going to set up a simple test environment like the one in the blog post Does an IPv4-based FTPS server support the EPSV FTP protocol extension [1].

As the curl client does not close the control connection by default (which is correct behavior, discussed at the end of this blog), we are going to use an active method to close an established TCP session, described in How to forcibly kill an established TCP connection in Linux [2].
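killcx works by sniffing one ACK from the live connection to learn the current sequence numbers, then injecting a spoofed TCP RST segment into it. The sketch below shows only the packet construction, not the raw-socket sending (which requires root); the helper names and field choices are my own.

```python
# Forge a bare TCP RST segment for an established connection, the way a
# tool like killcx does once it knows the live sequence number.
import socket
import struct

def tcp_checksum(src_ip, dst_ip, segment):
    """Ones'-complement checksum over the IPv4 pseudo-header plus segment."""
    pseudo = struct.pack("!4s4sBBH",
                         socket.inet_aton(src_ip), socket.inet_aton(dst_ip),
                         0, socket.IPPROTO_TCP, len(segment))
    data = pseudo + segment
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) + data[i + 1]
    while total >> 16:                      # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_rst(src_ip, dst_ip, src_port, dst_port, seq):
    """Build a 20-byte TCP header with only the RST flag set."""
    header = struct.pack("!HHIIBBHHH",
                         src_port, dst_port,
                         seq, 0,            # sequence number, ack number
                         5 << 4,            # data offset: 5 words, no options
                         0x04,              # flags: RST only
                         0, 0, 0)           # window, checksum (filled below), urg
    csum = tcp_checksum(src_ip, dst_ip, header)
    return header[:16] + struct.pack("!H", csum) + header[18:]
```

With the sequence number sniffed off the wire, killcx sends one such segment to each end of the connection, and the kernel on each side tears the session down.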

Test #1: client downloads a large file

Client logs

Logs when the control connection is being closed and reset:

root@clinet:~# netstat -tulpan | grep curl
tcp        0      0 5.79.21.166:45707       5.79.17.48:8000         ESTABLISHED 5546/curl
tcp    64210      0 5.79.21.166:43796       5.79.17.48:8011         ESTABLISHED 5546/curl

root@clinet:~# ./killcx.pl 5.79.17.48:8011
killcx v1.0.3 - (c)2009-2011 Jerome Bruandet - http://killcx.sourceforge.net/

[PARENT] checking connection with [5.79.17.48:8011]
[PARENT] found connection with [5.79.21.166:43796] (ESTABLISHED)
[PARENT] forking child
[CHILD]  interface not defined, will use [eth0]
[CHILD]  setting up filter to sniff ACK on [eth0] for 5 seconds
[CHILD]  hooked ACK from [5.79.21.166:43796]
[CHILD]  found AckNum [1229126485] and SeqNum [3095306962]
[CHILD]  sending spoofed RST to [5.79.21.166:43796] with SeqNum [1229126485]
[CHILD]  sending RST to remote host as well with SeqNum [3095306962]
[CHILD]  all done, sending USR1 signal to parent [5781] and exiting
[PARENT] received child signal, checking results...
         => success : connection has been closed !

These are the client logs from the start of the download until the control session is closed.

root@client:~# curl -v --limit-rate 10K -o file.txt -u rado:pass -k --ftp-ssl ftp://5.79.17.48:8000/c2900-universalk9-mz.SPA.152-1.T.bin
* About to connect() to 5.79.17.48 port 8000 (#0)
*   Trying 5.79.17.48...   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0connected
< 220-FileZilla Server version 0.9.41 beta
< 220-written by Tim Kosse (Tim.Kosse@gmx.de)
< 220 Please visit http://sourceforge.net/projects/filezilla/
> AUTH SSL
< 234 Using authentication type SSL
* successfully set certificate verify locations:
*   CAfile: none
  CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
} [data not shown]
* SSLv3, TLS handshake, Server hello (2):
{ [data not shown]
* SSLv3, TLS handshake, CERT (11):
{ [data not shown]
* SSLv3, TLS handshake, Server finished (14):
{ [data not shown]
* SSLv3, TLS handshake, Client key exchange (16):
} [data not shown]
* SSLv3, TLS change cipher, Client hello (1):
} [data not shown]
* SSLv3, TLS handshake, Finished (20):
} [data not shown]
* SSLv3, TLS change cipher, Client hello (1):
{ [data not shown]
* SSLv3, TLS handshake, Finished (20):
{ [data not shown]
* SSL connection using AES256-SHA
* Server certificate:
*        subject: CN=www; C=11; ST=aaa; L=bbb; O=ddd; OU=aaa; emailAddress=a@a.com
*        start date: 2012-11-08 00:13:54 GMT
*        expire date: 2013-11-08 00:13:54 GMT
*        common name: www (does not match '5.79.17.48')
*        issuer: CN=www; C=11; ST=aaa; L=bbb; O=ddd; OU=aaa; emailAddress=a@a.com
*        SSL certificate verify result: self signed certificate (18), continuing anyway.
> USER rado
< 331 Password required for rado
> PASS pass
< 230 Logged on
> PBSZ 0
< 200 PBSZ=0
> PROT P
< 200 Protection level set to P
> PWD
< 257 "/" is current directory.
* Entry path is '/'
> EPSV
* Connect data stream passively
< 229 Entering Extended Passive Mode (|||8011|)
*   Trying 5.79.17.48... connected
* Connecting to 5.79.17.48 (5.79.17.48) port 8011
> TYPE I
< 200 Type set to I
> SIZE c2900-universalk9-mz.SPA.152-1.T.bin
< 213 77200652
> RETR c2900-universalk9-mz.SPA.152-1.T.bin
< 150 Connection accepted
* Doing the SSL/TLS handshake on the data stream
* successfully set certificate verify locations:
*   CAfile: none
  CApath: /etc/ssl/certs
* SSL re-using session ID
* SSLv3, TLS handshake, Client hello (1):
} [data not shown]
* SSLv3, TLS handshake, Server hello (2):
{ [data not shown]
* SSLv3, TLS change cipher, Client hello (1):
{ [data not shown]
* SSLv3, TLS handshake, Finished (20):
{ [data not shown]
* SSLv3, TLS change cipher, Client hello (1):
} [data not shown]
* SSLv3, TLS handshake, Finished (20):
} [data not shown]
* SSL connection using AES256-SHA
* Server certificate:
*        subject: CN=www; C=11; ST=aaa; L=bbb; O=ddd; OU=aaa; emailAddress=a@a.com
*        start date: 2012-11-08 00:13:54 GMT
*        expire date: 2013-11-08 00:13:54 GMT
*        common name: www (does not match '5.79.17.48')
*        issuer: CN=www; C=11; ST=aaa; L=bbb; O=ddd; OU=aaa; emailAddress=a@a.com
*        SSL certificate verify result: self signed certificate (18), continuing anyway.
* Maxdownload = -1
* Getting file with size: 77200652
{ [data not shown]
  0 73.6M    0  616k    0     0  10095      0  2:07:27  0:01:02  2:06:25  9753* SSL read: error:00000000:lib(0):func(0):reason(0), errno 104
  0 73.6M    0  620k    0     0  10160      0  2:06:38  0:01:02  2:05:36 11170
* Closing connection #0
* SSLv3, TLS alert, Client hello (1):
} [data not shown]
curl: (56) SSL read: error:00000000:lib(0):func(0):reason(0), errno 104

Server logs

As the file download starts, this is logged on the server.


After the client's control connection is terminated, the server logs a '426 Connection closed; transfer aborted.' message.


After about 3-5 seconds the connection clears from the server logs.


Test #2: client uploads a large file

Client logs

The client logs when the control channel is terminated:

root@client:~# netstat -tulpan | grep curl
tcp        0      0 5.79.21.166:43489       5.79.17.48:8016         ESTABLISHED 13177/curl
tcp        0      0 5.79.21.166:45717       5.79.17.48:8000         ESTABLISHED 13177/curl

root@client:~# ./killcx.pl  5.79.17.48:8016 
killcx v1.0.3 - (c)2009-2011 Jerome Bruandet - http://killcx.sourceforge.net/

[PARENT] checking connection with [5.79.17.48:8016]
[PARENT] found connection with [5.79.21.166:43489] (ESTABLISHED)
[PARENT] forking child
[CHILD]  interface not defined, will use [eth0]
[CHILD]  setting up filter to sniff ACK on [eth0] for 5 seconds
[PARENT] sending spoofed SYN to [5.79.21.166:43489] with bogus SeqNum
[CHILD]  hooked ACK from [5.79.21.166:43489]
[CHILD]  found AckNum [781536832] and SeqNum [2094006657]
[CHILD]  sending spoofed RST to [5.79.21.166:43489] with SeqNum [781536832]
[CHILD]  sending RST to remote host as well with SeqNum [2094006657]
[CHILD]  all done, sending USR1 signal to parent [13547] and exiting
[PARENT] received child signal, checking results...
         => success : connection has been closed !

Curl logs from when the upload starts until the control channel is terminated:

root@client:~# curl -v --limit-rate 10K -T file.txt -u rado:pass -k --ftp-ssl ftp://5.79.17.48:8000/
* About to connect() to 5.79.17.48 port 8000 (#0)
*   Trying 5.79.17.48...   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0connected
< 220-FileZilla Server version 0.9.41 beta
< 220-written by Tim Kosse (Tim.Kosse@gmx.de)
< 220 Please visit http://sourceforge.net/projects/filezilla/
> AUTH SSL
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0< 234 Using authentication type SSL
* successfully set certificate verify locations:
*   CAfile: none
  CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
} [data not shown]
* SSLv3, TLS handshake, Server hello (2):
{ [data not shown]
* SSLv3, TLS handshake, CERT (11):
{ [data not shown]
* SSLv3, TLS handshake, Server finished (14):
{ [data not shown]
* SSLv3, TLS handshake, Client key exchange (16):
} [data not shown]
* SSLv3, TLS change cipher, Client hello (1):
} [data not shown]
* SSLv3, TLS handshake, Finished (20):
} [data not shown]
* SSLv3, TLS change cipher, Client hello (1):
{ [data not shown]
* SSLv3, TLS handshake, Finished (20):
{ [data not shown]
* SSL connection using AES256-SHA
* Server certificate:
*        subject: CN=www; C=11; ST=aaa; L=bbb; O=ddd; OU=aaa; emailAddress=a@a.com
*        start date: 2012-11-08 00:13:54 GMT
*        expire date: 2013-11-08 00:13:54 GMT
*        common name: www (does not match '5.79.17.48')
*        issuer: CN=www; C=11; ST=aaa; L=bbb; O=ddd; OU=aaa; emailAddress=a@a.com
*        SSL certificate verify result: self signed certificate (18), continuing anyway.
> USER rado
< 331 Password required for rado
> PASS pass
< 230 Logged on
> PBSZ 0
< 200 PBSZ=0
> PROT P
< 200 Protection level set to P
> PWD
< 257 "/" is current directory.
* Entry path is '/'
> EPSV
* Connect data stream passively
< 229 Entering Extended Passive Mode (|||8016|)
*   Trying 5.79.17.48... connected
* Connecting to 5.79.17.48 (5.79.17.48) port 8016
> TYPE I
< 200 Type set to I
> STOR file.txt
< 150 Connection accepted
* Doing the SSL/TLS handshake on the data stream
* successfully set certificate verify locations:
*   CAfile: none
  CApath: /etc/ssl/certs
* SSL re-using session ID
* SSLv3, TLS handshake, Client hello (1):
} [data not shown]
* SSLv3, TLS handshake, Server hello (2):
{ [data not shown]
* SSLv3, TLS change cipher, Client hello (1):
{ [data not shown]
* SSLv3, TLS handshake, Finished (20):
{ [data not shown]
* SSLv3, TLS change cipher, Client hello (1):
} [data not shown]
* SSLv3, TLS handshake, Finished (20):
} [data not shown]
* SSL connection using AES256-SHA
* Server certificate:
*        subject: CN=www; C=11; ST=aaa; L=bbb; O=ddd; OU=aaa; emailAddress=a@a.com
*        start date: 2012-11-08 00:13:54 GMT
*        expire date: 2013-11-08 00:13:54 GMT
*        common name: www (does not match '5.79.17.48')
*        issuer: CN=www; C=11; ST=aaa; L=bbb; O=ddd; OU=aaa; emailAddress=a@a.com
*        SSL certificate verify result: self signed certificate (18), continuing anyway.
} [data not shown]
  0 73.6M    0     0    0  688k      0  10122  2:07:07  0:01:09  2:05:58  9814* SSL_write() returned SYSCALL, errno = 10422:51:35
  0 73.6M    0     0    0  688k      0  10122  2:07:07  0:01:09  2:05:58  8177
* Closing connection #0
* SSLv3, TLS alert, Client hello (1):
} [data not shown]
curl: (55) SSL_write() returned SYSCALL, errno = 104

Server logs

Server logs when the upload starts, and 1-3 seconds after the control channel is closed.




Results discussion

We can see that every time the client closes the TCP session hosting the control channel, the upload or download process is aborted.

This is expected behavior and is documented in the relevant RFC documents:


http://tools.ietf.org/html/rfc4217
7. Data Connection Behaviour

http://tools.ietf.org/html/rfc959
3.2.  ESTABLISHING DATA CONNECTIONS

The server MUST close the data connection under the following conditions:

         1. The server has completed sending data in a transfer mode
            that requires a close to indicate EOF.

         2. The server receives an ABORT command from the user.

         3. The port specification is changed by a command from the
            user.

         4. The control connection is closed legally or otherwise.

         5. An irrecoverable error condition occurs.


References
  1. http://rtomaszewski.blogspot.co.uk/2012/11/does-ipv4-based-ftps-server-supports.html
  2. http://rtomaszewski.blogspot.co.uk/2012/11/how-to-forcibly-kill-established-tcp.html

Monday, November 12, 2012

Does an IPv4-based FTPS server support the EPSV FTP protocol extension

FTP Extension description

EPSV stands for Extended Passive Mode and is defined in RFC 2428 [1].

According to the RFC specification it is used for:

This paper specifies extensions to FTP that will allow the protocol to work over IPv4 and IPv6. 
...
The EPRT command allows for the specification of an extended address for the data connection. 
... 
The following are sample EPRT commands: 
   EPRT |1|132.235.1.2|6275| 
   EPRT |2|1080::8:800:200C:417A|5282|

In the RFC I couldn't find anything about default values, or about how the server should behave if the client doesn't provide any additional arguments and uses the command in this simple form:

EPSV
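The command and reply formats above can be captured in a few lines of Python. This is a minimal sketch (the helper names are my own, and it assumes the usual `|` delimiter in the 229 reply):

```python
# RFC 2428 formats: build an EPRT command and extract the data port
# from a 229 reply to EPSV.
import re

def build_eprt(ip, port):
    """Build an EPRT command; protocol 1 = IPv4, 2 = IPv6, per RFC 2428."""
    proto = 2 if ":" in ip else 1
    return "EPRT |%d|%s|%d|" % (proto, ip, port)

def parse_epsv_reply(reply):
    """Extract the port from e.g. '229 Entering Extended Passive Mode (|||8007|)'.

    Note the reply carries only a port, no address: the client reuses the
    address of the control connection, which is why an IPv4-only server
    can answer EPSV without any problem.
    """
    match = re.search(r"\(\|\|\|(\d+)\|\)", reply)
    if match is None:
        raise ValueError("not an EPSV 229 reply: %r" % reply)
    return int(match.group(1))
```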

Test configuration

To verify the FTP extension I built a simple test scenario using the Rackspace cloud:

  • Windows 2008 cloud server running the FTPS server; I used FileZilla Server [2]
  • Ubuntu 12.04 LTS Linux base system acting as the client; we used the curl tool to generate FTPS requests

Setting up the cloud servers, the FTPS server, and the client is relatively simple, so we are not going to describe it here. After FileZilla Server was installed, I enabled FTPS and customized the standard configuration a little. The screenshots below show the relevant settings.





Client connection

Below are the client logs when we try to download a file from the FTPS server.

root@client:~# curl -v -o tmp -u user:pass -k --ftp-ssl ftp://<server_ip>:8000/file.txt
* About to connect() to 5.79.17.48 port 8000 (#0)
< 220-FileZilla Server version 0.9.41 beta
< 220-written by Tim Kosse (Tim.Kosse@gmx.de)
< 220 Please visit http://sourceforge.net/projects/filezilla/
> AUTH SSL
< 234 Using authentication type SSL
* successfully set certificate verify locations:
*   CAfile: none
  CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
} [data not shown]
* SSLv3, TLS handshake, Server hello (2):
{ [data not shown]
* SSLv3, TLS handshake, CERT (11):
{ [data not shown]
* SSLv3, TLS handshake, Server finished (14):
{ [data not shown]
* SSLv3, TLS handshake, Client key exchange (16):
} [data not shown]
* SSLv3, TLS change cipher, Client hello (1):
} [data not shown]
* SSLv3, TLS handshake, Finished (20):
} [data not shown]
* SSLv3, TLS change cipher, Client hello (1):
{ [data not shown]
* SSLv3, TLS handshake, Finished (20):
{ [data not shown]
* SSL connection using AES256-SHA
* Server certificate:
*        subject: CN=www; C=11; ST=aaa; L=bbb; O=ddd; OU=aaa; emailAddress=a@a.com
*        start date: 2012-11-08 00:13:54 GMT
*        expire date: 2013-11-08 00:13:54 GMT
*        common name: www (does not match '5.79.17.48')
*        issuer: CN=www; C=11; ST=aaa; L=bbb; O=ddd; OU=aaa; emailAddress=a@a.com
*        SSL certificate verify result: self signed certificate (18), continuing anyway.
> USER user
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0< 331 Password required for user
> PASS pass
< 230 Logged on
> PBSZ 0
< 200 PBSZ=0
> PROT P
< 200 Protection level set to P
> PWD
< 257 "/" is current directory.
* Entry path is '/'
> EPSV
* Connect data stream passively
< 229 Entering Extended Passive Mode (|||8007|)
*   Trying 5.79.17.48... connected
* Connecting to 5.79.17.48 (5.79.17.48) port 8007
> TYPE I
< 200 Type set to I
> SIZE file.txt
< 213 77200652
> RETR file.txt
< 150 Connection accepted
* Doing the SSL/TLS handshake on the data stream
* successfully set certificate verify locations:
*   CAfile: none
  CApath: /etc/ssl/certs
* SSL re-using session ID
* SSLv3, TLS handshake, Client hello (1):
} [data not shown]
* SSLv3, TLS handshake, Server hello (2):
{ [data not shown]
* SSLv3, TLS change cipher, Client hello (1):
{ [data not shown]
* SSLv3, TLS handshake, Finished (20):
{ [data not shown]
* SSLv3, TLS change cipher, Client hello (1):
} [data not shown]
* SSLv3, TLS handshake, Finished (20):
} [data not shown]
* SSL connection using AES256-SHA
* Server certificate:
*        subject: CN=www; C=11; ST=aaa; L=bbb; O=ddd; OU=aaa; emailAddress=a@a.com
*        start date: 2012-11-08 00:13:54 GMT
*        expire date: 2013-11-08 00:13:54 GMT
*        common name: www (does not match '5.79.17.48')
*        issuer: CN=www; C=11; ST=aaa; L=bbb; O=ddd; OU=aaa; emailAddress=a@a.com
*        SSL certificate verify result: self signed certificate (18), continuing anyway.
* Maxdownload = -1
* Getting file with size: 77200652
{ [data not shown]


FileZilla server connection logs

As the client connects and starts the session, these are the logs we can observe on the server.

Creating listen socket on port 8000...
Creating listen socket on port 990...
Server online
(000022)11/12/2012 22:33:35 PM - (not logged in) (5.79.21.166)> Connected, sending welcome message...
(000022)11/12/2012 22:33:35 PM - (not logged in) (5.79.21.166)> 220-FileZilla Server version 0.9.41 beta
(000022)11/12/2012 22:33:35 PM - (not logged in) (5.79.21.166)> 220-written by Tim Kosse (Tim.Kosse@gmx.de)
(000022)11/12/2012 22:33:35 PM - (not logged in) (5.79.21.166)> 220 Please visit http://sourceforge.net/projects/filezilla/
(000022)11/12/2012 22:33:35 PM - (not logged in) (5.79.21.166)> AUTH SSL
(000022)11/12/2012 22:33:35 PM - (not logged in) (5.79.21.166)> 234 Using authentication type SSL
(000022)11/12/2012 22:33:35 PM - (not logged in) (5.79.21.166)> SSL connection established
(000022)11/12/2012 22:33:35 PM - (not logged in) (5.79.21.166)> USER user
(000022)11/12/2012 22:33:35 PM - (not logged in) (5.79.21.166)> 331 Password required for user
(000022)11/12/2012 22:33:35 PM - (not logged in) (5.79.21.166)> PASS ********
(000022)11/12/2012 22:33:35 PM - user (5.79.21.166)> 230 Logged on
(000022)11/12/2012 22:33:35 PM - user (5.79.21.166)> PBSZ 0
(000022)11/12/2012 22:33:35 PM - user (5.79.21.166)> 200 PBSZ=0
(000022)11/12/2012 22:33:35 PM - user (5.79.21.166)> PROT P
(000022)11/12/2012 22:33:35 PM - user (5.79.21.166)> 200 Protection level set to P
(000022)11/12/2012 22:33:35 PM - user (5.79.21.166)> PWD
(000022)11/12/2012 22:33:35 PM - user (5.79.21.166)> 257 "/" is current directory.
(000022)11/12/2012 22:33:35 PM - user (5.79.21.166)> EPSV
(000022)11/12/2012 22:33:35 PM - user (5.79.21.166)> 229 Entering Extended Passive Mode (|||8007|)
(000022)11/12/2012 22:33:35 PM - user (5.79.21.166)> TYPE I
(000022)11/12/2012 22:33:35 PM - user (5.79.21.166)> 200 Type set to I
(000022)11/12/2012 22:33:35 PM - user (5.79.21.166)> SIZE c2900-universalk9-mz.SPA.152-1.T.bin
(000022)11/12/2012 22:33:35 PM - user (5.79.21.166)> 213 77200652
(000022)11/12/2012 22:33:35 PM - user (5.79.21.166)> RETR c2900-universalk9-mz.SPA.152-1.T.bin
(000022)11/12/2012 22:33:35 PM - user (5.79.21.166)> 150 Connection accepted
(000022)11/12/2012 22:33:35 PM - user (5.79.21.166)> SSL connection for data connection established
(000022)11/12/2012 22:34:33 PM - user (5.79.21.166)> 426 Connection closed; transfer aborted.

Summary

We can see that the EPSV extension can be used even on a server that has only IPv4 addresses. This is not a surprise, as the RFC clearly states that both protocols (IPv4 and IPv6) are supported.

What is interesting is that once the server receives an EPSV command sent by the client over IPv4, it assumes this is the default protocol and uses its IPv4 address for the data connection.

References
  1. http://tools.ietf.org/html/rfc2428
  2. http://filezilla-project.org/