But what is important is that, in essence, what we are going to build is a typical multi-tier network with access, distribution and core layers, like the example below:
Once the network is built, it is time to add the cloud network element. This is again a very simplistic view that avoids the technical details.
The industry is still working to establish common ground and consensus on how a cloud network should look and what services it should provide, but in practice (based on a few companies like Nicira or Midokura) it is tightly associated with the Software-Defined Networking (SDN) concept and architecture. The common practice today is to implement the SDN network as an additional overlay on top of the IP fabric infrastructure.
Like every network, a cloud network needs to provide IP connectivity for cloud resources (cloud servers, for example). To achieve this, all hypervisors are often inter-connected using tunneling protocols. This model decouples the cloud network from the physical one and allows more flexibility: all VM traffic is routed within the tunnels. Solving the cloud network problem then comes down to finding a way to route between the hypervisors using these tunnels.
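To make the idea concrete, here is a minimal sketch of inter-connecting two hypervisors with a VXLAN tunnel using the standard Linux `ip` tool. The underlay addresses (10.0.0.1/10.0.0.2), the VNI (42) and the interface names are hypothetical placeholders, not values from any real deployment:

```shell
# On hypervisor A (underlay IP 10.0.0.1, peer at 10.0.0.2 -- both hypothetical).
# Create a VXLAN interface with VNI 42, encapsulating over eth0 on UDP port 4789.
ip link add vxlan0 type vxlan id 42 local 10.0.0.1 remote 10.0.0.2 dstport 4789 dev eth0
ip link set vxlan0 up

# Give the overlay its own addressing, fully decoupled from the underlay.
ip addr add 192.168.100.1/24 dev vxlan0

# On hypervisor B, mirror the setup with local/remote swapped and 192.168.100.2/24.
# VM traffic bridged onto vxlan0 now travels inside the tunnel between the two hosts.
```

A real cloud controller would program many such tunnels (typically via OVS rather than plain `ip`), but the principle is the same: the overlay addresses live entirely inside the tunnel mesh.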
Data flows, connections and tunnels are managed by a cloud controller (a distributed server cluster) that needs to be deployed in our existing physical network. An example of the Nicira NVP controller can be found here: Network Virtualization: a next generation modular platform for the data center virtual network.
As we agreed, VM data will be routed within tunnels. The most popular tunneling protocols are NVGRE, STT and VXLAN (Introduction into tunneling protocols when deploying cloud network for your cloud infrastructure).
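To show what such encapsulation actually adds to each packet, here is a short sketch that builds and parses the 8-byte VXLAN header defined in RFC 7348 (VXLAN is picked here only because it is one of the three protocols just mentioned; the VNI value is arbitrary):

```python
import struct

VXLAN_FLAGS = 0x08  # "I" bit set: the VNI field is valid (RFC 7348)

# Per-packet overhead of VXLAN over IPv4:
# outer Ethernet (14) + outer IPv4 (20) + UDP (8) + VXLAN header (8) = 50 bytes.
VXLAN_OVERHEAD = 14 + 20 + 8 + 8

def encode_vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header that precedes the inner Ethernet frame."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit field")
    # byte 0: flags, bytes 1-3: reserved, bytes 4-6: VNI, byte 7: reserved
    return struct.pack("!I", VXLAN_FLAGS << 24) + struct.pack("!I", vni << 8)

def decode_vni(header: bytes) -> int:
    """Extract the 24-bit VXLAN Network Identifier from a VXLAN header."""
    word = struct.unpack("!I", header[4:8])[0]
    return word >> 8

hdr = encode_vxlan_header(42)
assert len(hdr) == 8
assert decode_vni(hdr) == 42
```

Those 50 bytes of overhead are also why overlay deployments usually raise the underlay MTU (or lower the VM MTU), so that encapsulated frames are not fragmented.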
As tunnels require additional resources, there is an open question of what overhead, resource consumption and performance implications they introduce. This post, The Overhead of Software Tunneling (*), runs a comparison and tries to shed some more light on the topic.
|              | Throughput | Recv side CPU | Send side CPU |
|--------------|------------|---------------|---------------|
| Linux Bridge | 9.3 Gbps   | 85%           | 75%           |
| OVS Bridge   | 9.4 Gbps   | 82%           | 70%           |
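One simple way to read these two rows side by side is to normalize throughput by total CPU consumed. This is just a sketch using the numbers from the table above; the "Gbps per fully-used core" metric is my own shorthand, not one used in the referenced post:

```python
# Throughput and CPU numbers copied from the table above.
results = {
    "Linux Bridge": {"gbps": 9.3, "recv_cpu": 0.85, "send_cpu": 0.75},
    "OVS Bridge":   {"gbps": 9.4, "recv_cpu": 0.82, "send_cpu": 0.70},
}

def gbps_per_core(r: dict) -> float:
    """Rough efficiency metric: throughput divided by total CPU consumed."""
    return r["gbps"] / (r["recv_cpu"] + r["send_cpu"])

for name, r in results.items():
    print(f"{name}: {gbps_per_core(r):.2f} Gbps per fully-used core")
```

By this crude measure the OVS bridge is slightly more efficient than the Linux bridge: marginally more throughput for measurably less CPU on both sides.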
The next table shows the aggregate throughput of two hypervisors with 4 VMs each.
|            | Aggregate throughput | CPU  |
|------------|----------------------|------|
| OVS Bridge | 18.4 Gbps            | 150% |
We can see that not all tunnels are completely transparent when it comes to performance. The GRE tunnel shows a significant degradation in throughput, while the TCP-based STT tunnel performs well. For a complete analysis, explanation and further discussion, I recommend reading the blog post above (*).