
Sunday, April 27, 2014

Overlay technologies in data center

Everyone speaks about SDN and the benefits it brings when deploying cloud or enterprise infrastructures. But do we actually know or understand what all this SDN talk is about? If you want to be fluent in the language of virtual networking and network overlays in modern data centers, you need to understand at least the following concepts.

In the remainder of this post we will concentrate solely on existing overlay technologies. This information was extracted from the Cisco document: Cisco Nexus 9000 Series Switches - Data Center Overlay Technologies.

Network-Based Overlay Networks
  1. IEEE 802.1ad Provider Bridging or IEEE 802.1Q Tunneling, also known as IEEE 802.1QinQ or simply Q-in-Q (see the frame-layout sketch after this list)
  2. IEEE 802.1ah Provider Backbone Bridges (PBB) or Mac-in-Mac Tunnels
  3. Cisco FabricPath allows multipath networking at Layer 2
  4. TRILL - IETF Transparent Interconnection of Lots of Links is a Layer 2 multipathing technology
  5. Shortest-Path Bridging (SPB) is defined in IEEE 802.1aq and is targeted as a replacement for Spanning Tree Protocol (example info based on Avaya documentation)
  6. Cisco Overlay Transport Virtualization (OTV) is a Layer 2-over-Layer 3 encapsulation "MAC-in-IP" technology
  7. The Cisco Location/Identifier Separation Protocol (LISP) is currently defined as a Layer 3 overlay scheme over a Layer 3 network
  8. Multiprotocol Label Switching (MPLS)
  9. Virtual Private LAN Service (VPLS), a Layer 2 tunneling protocol
  10. Virtual Private Routed Network (VPRN) also known as BGP/MPLS or IP-VPN provides IP VPN services
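
To make entry 1 above a bit more concrete, here is a minimal sketch of the Q-in-Q frame layout (my own illustration based on the IEEE standards, not code from the Cisco document; all names in it are mine). The provider S-TAG (EtherType 0x88A8) is stacked in front of the customer C-TAG (EtherType 0x8100):

import struct

ETH_P_8021AD = 0x88A8  # service (outer) tag EtherType
ETH_P_8021Q = 0x8100   # customer (inner) tag EtherType

def qinq_frame(dst, src, s_vid, c_vid, ethertype, payload):
    # TCI = PCP (3 bits) | DEI (1 bit) | VLAN ID (12 bits); PCP/DEI left at 0.
    return (dst + src
            + struct.pack("!HH", ETH_P_8021AD, s_vid & 0x0FFF)
            + struct.pack("!HH", ETH_P_8021Q, c_vid & 0x0FFF)
            + struct.pack("!H", ethertype)
            + payload)

# Broadcast frame: provider VLAN 100 wrapping customer VLAN 42, IPv4 payload.
frame = qinq_frame(b"\xff" * 6, b"\x02\x00\x00\x00\x00\x01",
                   s_vid=100, c_vid=42, ethertype=0x0800, payload=b"")
print(frame.hex())

The 12-bit VLAN ID in each tag is what lets the provider carry overlapping customer VLAN numbers side by side.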
Host-Based Overlay Networks
  1. Virtual Extensible LAN (VXLAN) is a Layer 2 overlay scheme over a Layer 3 network that uses IP/UDP encapsulation (a minimal encapsulation sketch follows this list)
  2. Network Virtualization Using Generic Routing Encapsulation (NVGRE) allows creation of virtual Layer 2 topologies on top of a physical Layer 3 network
  3. Stateless Transport Tunneling (STT) is an overlay encapsulation scheme over Layer 3 networks that uses a TCP-like header
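
And on the host-based side, here is a minimal sketch of the VXLAN encapsulation from item 1 (my own illustration based on RFC 7348, again not code from the Cisco document). The 8-byte VXLAN header carries a 24-bit VNI and normally rides inside an outer IP/UDP packet:

import struct

VXLAN_UDP_PORT = 4789   # IANA-assigned UDP destination port for VXLAN
FLAG_VNI_VALID = 0x08   # the I flag: the 24-bit VNI field is valid

def vxlan_encap(inner_frame, vni):
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI is a 24-bit value")
    # Header layout: flags(1) | reserved(3) | VNI(3) | reserved(1) = 8 bytes.
    header = struct.pack("!B3s3sB", FLAG_VNI_VALID, b"\x00" * 3,
                         vni.to_bytes(3, "big"), 0)
    return header + inner_frame

inner = bytes(14)  # a zeroed Ethernet header as a stand-in inner frame
segment = vxlan_encap(inner, vni=5000)
print("VXLAN header:", segment[:8].hex(), "-> sent via UDP port", VXLAN_UDP_PORT)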

    Tuesday, April 8, 2014

    How does a switch fabric network work

    A network engineer can list a number of issues you can potentially run into when using the STP protocol in your switched network. Over the years the network industry has created successor protocols like RSTP and MSTP. Both are improvements and offer much better convergence times, responding much quicker to switch topology changes. One of the major disadvantages of networks that rely on STP is that they don't support multipathing: once the network topology converges, there will be blocked paths between switches, elected and managed by STP. These often redundant links can't be used because of the loop risk. A toy example right after this paragraph illustrates the difference.
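
    Here is that toy illustration (my own sketch, not from the Avaya material referenced below): on a ring of four switches, STP must block one link, while an L2 multipath fabric routes on the full graph and can use both equal-cost paths at once:

    from collections import deque

    def all_shortest_paths(adj, src, dst):
        # BFS that remembers every predecessor lying on a shortest path.
        dist, preds = {src: 0}, {src: []}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v], preds[v] = dist[u] + 1, [u]
                    q.append(v)
                elif dist[v] == dist[u] + 1:
                    preds[v].append(u)
        def walk(v):
            return [[v]] if v == src else [p + [v] for u in preds[v] for p in walk(u)]
        return walk(dst)

    # A ring of four switches; STP would block one of these links (say D-C),
    # forcing all A->C traffic onto A-B-C while the A-D-C path sits idle.
    adj = {"A": ["B", "D"], "B": ["A", "C"], "C": ["B", "D"], "D": ["A", "C"]}

    # A TRILL/SPB-style fabric routes on the full topology instead, so both
    # equal-cost paths can carry traffic at the same time:
    print(all_shortest_paths(adj, "A", "C"))  # [['A', 'B', 'C'], ['A', 'D', 'C']]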

    But there are better solutions on the market today for designing Layer 2 Ethernet networks (more scalable, with higher throughput and with active link redundancy, for example). The two most popular are based on the SPB and TRILL protocols. Both of them are used as a foundation in switch fabric products. To better understand them, the pictures below provide a side-by-side comparison, taken from the Avaya document: Compare and Contrast SPB and TRILL.

    Avaya is an SPB promoter, so the comparison is a bit weighted towards SPB, but it nevertheless gives some insight into both protocols.



    References

    http://cciethebeginning.wordpress.com/2008/11/20/differences-between-stp-and-rstp/
    http://etherealmind.com/spb-attention/
    http://en.wikipedia.org/wiki/IEEE_802.1aq
    http://en.wikipedia.org/wiki/TRILL_(computing)
    http://www.avaya.com/uk/resource/assets/whitepapers/SPB-TRILL_Compare_Contrast-DN4634.pdf
    http://nanog.org/meetings/nanog50/presentations/Monday/NANOG50.Talk63.NANOG50_TRILL-SPB-Debate-Roisman.pdf
    http://www.ebrahma.com/2012/06/trill-vs-spb-similarities-differences/
    http://wikibon.org/wiki/v/Network_Fabrics,_L2_Multipath_and_L3

    Tuesday, December 31, 2013

    A successful architecture for a top-of-rack switch for the data center

    TOR switch architecture

    We wrote before about Arista switches and about the Arista EOS architecture (network OS). A company named Pica8 is another example that follows a very similar technological model. They take the best out of Linux and combine it with more hardware (ASIC) dependent software to achieve maximum performance.

    What is interesting about Pica8 is that they take a very liberal approach to the hardware itself. They say they could use any plain switching chip or motherboard blade and turn it into a fully operational switch. The secret is once again the well-designed network OS that Pica8 built.




    In comparison to the Arista CLI (which is very much like Cisco's), Pica8 uses a rather different syntax: http://pica8.org/blogs/?p=399. At first glance it has some similarities to what you type on Juniper boxes :).

    More info about them and products can be found here:
    http://www.networkworld.com/news/2010/102810-pica8-opensource-switching.html?page=1
    http://www.pica8.com/open-switching/1-gbe-10gbe-open-switches.php

    Saturday, May 18, 2013

    How to install Arista EOS on Virtualbox

    The solid EOS architecture that makes Arista switches so powerful also allows for easy testing and experimenting. You don't need to buy any physical switch to get access to the CLI and play with it.

    The links below will give you enough information on how to deploy your EOS-4.10.2-veos.vmdk switch image within VirtualBox or any other hypervisor. All you need is to download 2 files (the EOS.vmdk and Aboot*.iso) and follow the steps; a scripted variant is sketched after the links.

    vEOS and VirtualBox
    VMWare Fusion Virtual Networks
    Building a Virtual Lab with Arista vEOS and VirtualBox
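
    If you prefer to script these steps, the rough sketch below drives VBoxManage from Python. The VM name, memory size and exact file names are my own illustrative assumptions (your downloaded Aboot*.iso will have a version suffix); the linked guides remain the authoritative walk-through:

    #!/usr/bin/env python3
    # Rough sketch: create and boot a vEOS VM via VBoxManage (assumed on PATH).
    import subprocess

    VM = "vEOS-lab"  # illustrative name

    def vbox(*args):
        print("VBoxManage", " ".join(args))
        subprocess.run(["VBoxManage", *args], check=True)

    vbox("createvm", "--name", VM, "--ostype", "Linux26_64", "--register")
    vbox("modifyvm", VM, "--memory", "1024", "--nic1", "nat")
    vbox("storagectl", VM, "--name", "IDE", "--add", "ide")
    # vEOS boots from the Aboot ISO, which then loads EOS from the attached disk.
    vbox("storageattach", VM, "--storagectl", "IDE", "--port", "0", "--device", "0",
         "--type", "dvddrive", "--medium", "Aboot-veos.iso")
    vbox("storageattach", VM, "--storagectl", "IDE", "--port", "1", "--device", "0",
         "--type", "hdd", "--medium", "EOS-4.10.2-veos.vmdk")
    vbox("startvm", VM, "--type", "gui")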

    If everything works fine after you power on your Arista VM switch you should see the following window:


    The most important commands at the beginning (as seen above):
     
    admin # the default user name
    en    # enter privileged (enable) mode; no password is required
    bash  # drop out of the Arista CLI to a Linux bash shell

    As it follows the Cisco CLI behavior, you can play with it by using the Tab and '?' keys to explore the available options.

    References
    1. http://www.aristanetworks.com/en/support/gettingstarted
    2. http://www.aristanetworks.com/en/support/docs/eos
    3. http://www.aristanetworks.com/docs/Manuals/ConfigGuide.pdf

    Friday, May 17, 2013

    Arista is recognized as one of the main data center networking vendors

    As data center architecture is transformed by the evolution driven by technologies like OpenFlow, SDN and cloud virtualization, it is important to know who the main players on the market are. Below is a snapshot from the latest Magic Quadrant for Data Center Network Infrastructure showing the main vendors:

    We can see that Arista is listed as one of the vendors alongside big market giants like Cisco, HP, Juniper, Dell, Brocade and others.

    References

    1. http://www.aristanetworks.com/media/system/pdf/AristaProductQuickReferenceGuide.pdf
    2. http://www.theregister.co.uk/2012/09/19/arista_networks_7150s_switches/
    3. https://eos.aristanetworks.com/home.php

    How to monitor your data center infrastructure

    I've found a very interesting article, NetFlow vs. sFlow for Network Monitoring and Security: The Final Say, that debates the 2 main monitoring solutions for network devices.

    By looking at the protocols themselves we can find broader deployment and adoption in applications as well (for example, a config for the HAProxy load balancer using sFlow to monitor host resources).

    This example sFlow video provides more details about the protocol and how it can be used across a data center to monitor and visualize server as well as network infrastructure.
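
    To get a feel for how lightweight the collector side can be, here is a minimal sketch of an sFlow listener (my own illustration, not from the linked article or video). sFlow v5 datagrams arrive on UDP port 6343 and begin with a 32-bit version field:

    import socket
    import struct

    # Bind to the default sFlow collector port and print basic datagram info.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 6343))
    while True:
        data, (ip, port) = sock.recvfrom(65535)
        version = struct.unpack("!I", data[:4])[0]  # should print 5 for sFlow v5
        print(f"{ip} sent {len(data)} bytes, sFlow version {version}")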

    Brocade On-Demand Data Center vision

    As cloud technologies help to evolve and create new sets of products and services within data centers, the data center concept is changing as well. When it comes to resource provisioning there is no more talk about silos, but rather about an agile, on-demand, scalable pool of resources that can be freely used when required.

    With the progressing changes in the industry, many vendors adapt and innovate to meet the increasing demands of the next-generation data center. This On-Demand Data Center link explains the Brocade DC vision. In short, they believe in:
    • x86 server virtualization
    • OpenStack (including its networking project, Quantum), OpenDaylight, SDN
    • OpenFlow
    • virtual network appliances (like vADX, or a vRouter such as Vyatta)
    • and network hardware that supports virtual as well as physical workloads: the Brocade MLX router and Brocade VCS switch product lines

    References
    1. http://www.brocade.com/downloads/documents/technical_briefs/vcs-technical-architecture-tb.pdf

    Tuesday, February 26, 2013

    How to build a data center

    Working for a global hosting company can be big fun and a challenge every day. There is always something happening and change is constant. But the majority of us visit a data center only rarely, or only at the beginning, I believe. For those who would like to know a little more and understand why it may take a while to build a row in a data center and fill it full of servers and network gear, take a look at the example movies below. They are good fun to watch ;).



    Sunday, November 18, 2012

    What does Software Defined Data Center mean

    After the industry created the Software Defined Network (SDN) [1] term, it is time for a new one. A newly emerging IT buzzword is the Software Defined Data Center (SDDC) [2]. It appears that only VMware is marketing this extensively at the moment.

    From a technical point of view it all makes sense: compute resources are already largely virtualized, and virtual storage and virtual networks are following. Looking at the recent VMware acquisition of Nicira [3], the company has many if not all of the necessary products to build such an SDDC.



    Let's see how the market responds to it and whether other vendors start looking at and using this term as well in the near future.

    References
    1. SDN
       http://rtomaszewski.blogspot.co.uk/2012/10/software-defined-network-sdn-as-example.html
       http://rtomaszewski.blogspot.co.uk/2012/10/google-does-use-sdn-in-its-data-centers.html
       http://rtomaszewski.blogspot.co.uk/2012/09/emerging-of-virtual-network-aka-quantum.html
    2. SDDC
       http://www.networkcomputing.com/data-center/the-software-defined-data-center-dissect/240006848
       http://www.vmware.com/solutions/datacenter/software-defined-datacenter/index.html
       http://blogs.vmware.com/console/2012/08/the-software-defined-datacenter-meets-vmworld.html
    3. Openstack, Nicira and VMware
       http://www.infoworld.com/t/data-center/what-the-software-defined-data-center-really-means-199930

    Thursday, October 18, 2012

    Funny picture from one of Google's data centers

    There has been new footage and video of Google data centers this year. To make it more interesting, they created a mini Google Maps version where you can interactively walk through the data center building and look around ;) During my virtual journey this is what I found:



    References
    1. http://www.engadget.com/2012/10/17/google-inside-data-centers/
    2. http://www.engadget.com/photos/inside-google-s-data-centers/#5366823

    Who owns the most servers

    The cloud is expanding and changing the landscape in data centers. It would be interesting to know who has the most servers in the world, wouldn't it? The links below may not be fully up to date but they still reveal a pretty good overview of the industry. The most interesting part is the difference between Google at #1 and the rest of the world.

    References
    1. http://www.datacenterknowledge.com/archives/2009/05/14/whos-got-the-most-web-servers/
    2. http://gizmodo.com/5517041/googles-insane-number-of-servers-visualized
    3. http://news.netcraft.com/archives/2012/09/10/september-2012-web-server-survey.html

    Tuesday, October 9, 2012

    Google does use SDN in its data centers

    There are a lot of rumors that Google already uses SDN inside its data centers, as well as to route and control the external traffic between them.

    According to the article Going With the Flow: Google’s Secret Switch to the Next Wave of Networking, Google is operating an SDN-enabled network. But can you really believe everything you read? Researching the topic of SDN plus Google further, I found a more skeptical article saying that this may not be the whole truth: Openflow @ Google: Brilliant, but not revolutionary.

    Despite these two contradicting articles, it would be nice to know that SDN can scale and has already proved its usefulness on a large scale. But without that, there is still an open question you have to ask yourself.

    Is SDN really going to transform and shape the traffic within and outside of small and big data centers? Will it really provide more sophisticated ways to control and direct external traffic over WAN links as well as over 10 GbE internal networks? Without any proof of concept (POC), this is all just one more idea that may or may not gain bigger traction in networking.

    To finish this up I would like to refer to the article Balloons, Bags, and SDNs. There is one very useful and, I think, particularly smart conclusion about the whole SDN future:

    Before you say, “it won’t scale,” ask, “compared to what?”

    With that in mind, let's wait and see how the market moves and which expectations will or will not be fulfilled :).