What is important is that, in essence, what we are going to build is a typical multi-tier network with access, distribution and core layers, like the example below:
Once the network is built, it is time to add the cloud network element. This is again a very simplistic view that avoids the technical details.
The industry is still working to establish common ground and a consensus on how a cloud network should look and what services it should provide, but in practice (based on a few companies like Nicira or Midokura) it is tightly associated with the Software Defined Networking (SDN) concept and architecture. The common practice today is to implement the SDN network as an additional overlay on top of the IP fabric infrastructure.
Like every network, the cloud network needs to provide IP connectivity for cloud resources (cloud servers, for example). To achieve this, all hypervisors are often inter-connected using tunneling protocols. This model allows us to decouple the cloud network from the physical one and gives us more flexibility: all VM traffic is routed within the tunnels. Solving the cloud network problem then comes down to finding a way to route between the hypervisors over these tunnels.
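To make the idea of a tunnel mesh more concrete, here is a minimal sketch that only prints the plain Linux ip commands needed to build a full mesh of GRE tunnels between three hypervisors. The hypervisor names and IP addresses are invented for illustration; a real deployment would use Open vSwitch and a controller rather than static tunnels.

```python
# Minimal sketch: generate Linux "ip" commands for a full mesh of GRE tunnels
# between hypervisors, so VM traffic can be carried inside the tunnels instead
# of directly over the physical network. The addresses below are made up.

hypervisors = {
    "hv1": "10.0.0.1",
    "hv2": "10.0.0.2",
    "hv3": "10.0.0.3",
}

def tunnel_commands(local_name, local_ip):
    """Return the commands to run on one hypervisor to reach all the others."""
    cmds = []
    for remote_name, remote_ip in hypervisors.items():
        if remote_name == local_name:
            continue
        dev = f"gre-{remote_name}"
        cmds.append(f"ip tunnel add {dev} mode gre local {local_ip} remote {remote_ip}")
        cmds.append(f"ip link set {dev} up")
    return cmds

for name, ip in hypervisors.items():
    print(f"# commands for {name}")
    for cmd in tunnel_commands(name, ip):
        print(cmd)
```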
Data flows, connections and tunnels are managed by a cloud controller (a distributed server cluster) that needs to be deployed in our existing physical network. An example of the Nicira NVP controller can be found here: Network Virtualization: a next generation modular platform for the data center virtual network.
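To illustrate the kind of state such a controller manages, below is a hedged sketch of a VM-to-hypervisor location table and the lookup that decides which tunnel a frame should be encapsulated into. All names and addresses are made up; this is not the NVP API, which programs this state into Open vSwitch flow tables instead.

```python
# Sketch only: the core piece of state a cloud controller keeps is a mapping
# from each virtual machine to the hypervisor (tunnel endpoint) hosting it.
# Names and addresses are invented for illustration.

vm_location = {
    "vm-a": {"hypervisor": "hv1", "tunnel_endpoint": "10.0.0.1"},
    "vm-b": {"hypervisor": "hv2", "tunnel_endpoint": "10.0.0.2"},
    "vm-c": {"hypervisor": "hv3", "tunnel_endpoint": "10.0.0.3"},
}

def forward(dst_vm):
    """Decide which tunnel endpoint a frame destined for dst_vm is sent to."""
    entry = vm_location.get(dst_vm)
    if entry is None:
        # Unknown destination: a real controller would flood or drop here.
        return None
    return entry["tunnel_endpoint"]

print(forward("vm-b"))  # -> 10.0.0.2, i.e. encapsulate and send towards hv2
```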
As agreed, VM data will be routed within tunnels. The most popular tunneling protocols are NVGRE, STT and VXLAN (see: Introduction into tunneling protocols when deploying cloud network for your cloud infrastructure).
As tunnels require additional resources, there is an open question of what overhead, resource consumption and performance implications they introduce. The post The Overhead of Software Tunneling (*) compares them and tries to shed some light on the topic.
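One practical consequence of encapsulation is the reduced MTU seen by the VMs. The sketch below estimates the inner IP MTU for VXLAN and NVGRE on a standard 1500-byte underlay, assuming IPv4 outer headers and no optional fields (STT is left out because its effective per-packet overhead depends on how segments are coalesced).

```python
# Sketch: tunneling adds encapsulation headers, so the MTU seen by a VM is
# smaller than the MTU of the physical (underlay) network. Assumes IPv4 outer
# headers and no optional fields; meant only as an illustration.

UNDERLAY_MTU = 1500  # max outer IP packet size on a standard Ethernet fabric

# Bytes consumed inside the outer IP packet before the inner (VM) IP packet:
overhead = {
    # outer IPv4 (20) + UDP (8) + VXLAN (8) + inner Ethernet (14)
    "VXLAN": 20 + 8 + 8 + 14,
    # outer IPv4 (20) + GRE with key (8) + inner Ethernet (14)
    "NVGRE": 20 + 8 + 14,
}

for proto, bytes_used in overhead.items():
    print(f"{proto}: inner IP MTU <= {UNDERLAY_MTU - bytes_used} "
          f"({bytes_used} bytes of encapsulation)")
```

This shrinking MTU is one reason why overlay deployments often enable jumbo frames on the physical fabric instead of lowering the MTU inside the VMs.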
|              | Throughput | Recv side CPU | Send side CPU |
| Linux Bridge | 9.3 Gbps   | 85%           | 75%           |
| OVS Bridge   | 9.4 Gbps   | 82%           | 70%           |
| OVS-STT      | 9.5 Gbps   | 70%           | 70%           |
| OVS-GRE      | 2.3 Gbps   | 75%           | 97%           |
This next table shows the aggregate throughput of two hypervisors with 4 VMs each.
|            | Throughput | CPU  |
| OVS Bridge | 18.4 Gbps  | 150% |
| OVS-STT    | 18.5 Gbps  | 120% |
| OVS-GRE    | 2.3 Gbps   | 150% |
We can see that not all tunnels are transparent when it comes to performance. The GRE tunnel shows a significant degradation in throughput, while the TCP-based STT tunnel performs well. For a complete analysis, explanation and further discussion I recommend reading the blog post above (*).
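One simple way to read the aggregate table is throughput per unit of CPU. The small calculation below uses only the numbers from the table above; the "Gbps per core" metric is my own illustration, not part of the original benchmark.

```python
# Throughput per unit of CPU, derived from the aggregate table above.
aggregate = {                 # (throughput in Gbps, total CPU in %)
    "OVS Bridge": (18.4, 150),
    "OVS-STT":    (18.5, 120),
    "OVS-GRE":    (2.3, 150),
}

for config, (gbps, cpu) in aggregate.items():
    # Gbps delivered per 100% of a CPU core consumed.
    print(f"{config}: {gbps / (cpu / 100):.1f} Gbps per core")
```

With these numbers STT delivers roughly 15 Gbps per core versus about 1.5 Gbps per core for GRE, which matches the conclusion that GRE is the outlier.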
References
- http://rtomaszewski.blogspot.co.uk/2012/12/what-do-you-need-to-implement-virtual.html
- http://rtomaszewski.blogspot.co.uk/2012/11/what-network-topologies-can-i-build.html
- http://rtomaszewski.blogspot.co.uk/2012/11/after-server-virtualization-there-is.html
- http://rtomaszewski.blogspot.co.uk/2013/05/sdn-system-software-architecture.html
- http://bradhedlund.com/2012/10/06/mind-blowing-l2-l4-network-virtualization-by-midokura-midonet/
- http://blog.ioshints.info/2012/08/midokuras-midonet-layer-2-4-virtual.html