Network throughput constraints - Layer 2 vs Layer 3 limitations

The time for inventing something entirely new is giving way to the time of scale. As the world's population grows, everything from generating resources to feeding everyone has become a much bigger responsibility, and the essentials of communication and transport have to stay up to speed with everything else in order to keep the momentum.

You could once think of an airport runway as something used on rare occasions; now it has to move hundreds of thousands of passengers on hundreds of flights every day. The communication technologies that keep everyone informed and coordinated have to operate at the same mega scale.

There was a time when a mobile phone felt like pure imagination, and when mobile data was first introduced at a few kbps it was probably sufficient for an individual. Now that you may be managing a whole office or business from one phone on the go, you need to be talking about the 5th generation of mobile networks (5G), with 4G already deployed pretty much everywhere.

Where a business once needed a few megabytes per second, these days even domestic users and small businesses keep a bunch of terabyte portable disks around the house, since storage has kept up with people's expectations. Yet the most common disks still sit behind a USB2 interface, which raises a question: if you had to shift your data from one 2TB disk to another similar or larger disk, for one reason or another, how long would it take? Even at the full theoretical throughput of USB2 (480 Mbps, roughly 60 MB/s) it would take you nearly 10 hours to transfer 2TB of data, and common disks usually fall behind even USB2 speed.

What about USB3? Flaming fast, yes, but unless your disk writes at a matching speed you cannot utilise the maximum potential of USB3: the effective rate is always the slower of the interface and the disk.
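To make that bottleneck concrete, here is a minimal back-of-the-envelope sketch; the link and disk speeds are illustrative assumptions, not measurements:

```python
def transfer_hours(data_tb, link_mb_s, disk_mb_s):
    """Hours to move data_tb terabytes; the slower of link and disk wins."""
    effective = min(link_mb_s, disk_mb_s)        # bottleneck in MB/s
    seconds = data_tb * 1_000_000 / effective    # 1 TB = 1,000,000 MB
    return seconds / 3600

# USB2 at its theoretical best (480 Mbps ~ 60 MB/s) with a fast disk:
print(f"USB2, fast disk: {transfer_hours(2, 60, 120):.1f} h")   # ~9.3 h
# USB3 (5 Gbps ~ 625 MB/s) throttled by a ~100 MB/s spinning disk:
print(f"USB3, slow disk: {transfer_hours(2, 625, 100):.1f} h")  # ~5.6 h
```

Whichever side is slower sets the transfer time, which is why a faster interface alone buys you nothing.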

So we start thinking: where is the constraint? Is the bottleneck always going to be the disk? No, it is mainly single domestic disks that are slow, and SSDs are catching up. SSDs are extremely fast and greener disks, if you can afford to pay around six times the cost of a normal disk. You can also make disks go faster by putting them in RAID groups, using a disk array of some description that increases write speed by writing to multiple disks at the same time.
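As a rough illustration of why striping helps, here is an idealised RAID 0 estimate; the per-disk speed is made up, and real arrays lose some of this to controller or parity overhead:

```python
def raid0_write_mb_s(disks, per_disk_mb_s):
    """Idealised RAID 0 write speed: stripes land on all disks in parallel."""
    return disks * per_disk_mb_s

for n in (1, 2, 4, 8):
    print(f"{n} disk(s) @ 150 MB/s -> ~{raid0_write_mb_s(n, 150):.0f} MB/s")
```

Parity levels such as RAID 5 or 6 trade part of that gain back for redundancy.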

So is there no issue left with disks or communication? No, it is a constant battle and it does not end. We talk mega these days. Think back to the first 100MB hard drive you came across, or the 5.25" floppy disks before it. Storing 256GB on a USB pen drive or 2TB in a passport-size USB disk already means carrying a very large volume of data, and it keeps growing. It gets trickier when you have only one copy and a lifetime's worth of data could get damaged or lost. That is when you start talking cloud, to add protection, security, reliability, scalability and portability.

If this is going to keep growing and the speed of growth is turning exponential, what is the next scale and what are the challenges? While the physical constraints ease to facilitate more and more, the logical layer starts to attract more of the focus. Your home-based or business cloud, storing tons and tons of data for dozens, sometimes hundreds or even thousands of company employees, does not only need to be transported; it has to be secured and audited too.

This is where layer 1 & 2 (physical and data link layers) from 7 layers Open Systems Interconnection model or OSI model meets upper layers often referred to network, application and presentation layers. Unlink meeting rooms in offices that sell like hot cakes this meeting of Layer 2 and Layer 3 makes it lot busy as there is everything that can be or should be processed and the more of data takes more of CPU processing and cache time. The more you look into upper layers you find more features you may have missing in your infrastructure and this is what costs the most these days.

So while you can easily find a gigabit-capable network switch for your home under twenty bucks, or a managed enterprise-scale 24-port Catalyst gigabit switch under a grand, you will not find a device that handles the same level of throughput at layers 3-5. Even if the datasheet states it, you cannot be sure, because the more you ask the device to do, the slower it gets.

So let's stay at gigabit for the purpose of this article, as it is the most common speed everywhere. That means an MTU of 1500 bytes, which defines the maximum packet size, since not everything running at gigabit will be able to handle the MTU of 9000 referred to as jumbo frames. Switches such as Cisco's 24-port gigabit Catalyst can handle Layer 2 traffic at gigabit rate on every single interface, supporting up to 24 Gbps in aggregate, and more with larger switches that are not very expensive.
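The MTU matters because every frame costs processing time. Here is a rough sketch of the packet rates a Layer 2/3 device must sustain at gigabit line rate, using the standard 38 bytes of Ethernet framing overhead per frame (the figures are approximations for full-size frames):

```python
LINE_RATE_BPS = 1_000_000_000   # gigabit Ethernet
OVERHEAD = 38                   # preamble + header + FCS + inter-frame gap

def frames_per_second(mtu):
    """Maximum full-size frames per second at line rate for a given MTU."""
    wire_bytes = mtu + OVERHEAD
    return LINE_RATE_BPS / (wire_bytes * 8)

for mtu in (1500, 9000):
    fps = frames_per_second(mtu)
    print(f"MTU {mtu}: ~{fps:,.0f} frames/s, "
          f"~{1_000_000 / fps:.1f} us to process each frame")
```

At MTU 1500 an inline device gets roughly 12 microseconds per packet for all its processing; jumbo frames give it about six times longer, which is exactly why they help once upper-layer work is involved.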

Now you have a lot of traffic on this access switch, but what about filtering and auditing the traffic to and from the distribution or core switches? You want to add a firewall, at a similar cost to the large switch, and you will find a device with gigabit interfaces. What you would not know is that the device's CPU and memory are not capable of handling a total of one gigabit per second. Yes, it is true, and it goes back to the days of Cisco's PIX architecture, which later moved on to the ASA devices. You could put fibre gigabit interfaces in a PIX or copper in an ASA, yet the PIX 525, a common enterprise-grade firewall from Cisco, could only handle 330 Mbps; that is true, and I have seen it myself. And when you start adding more inspection, encrypted VPNs and other clever features, it will perform below even that maximum, which was measured on what vendors call cleartext throughput.


When you go for the bigger firewalls in the ASA 5500 series you will again find constraints, and gaining a sustained single gigabit of throughput will still be a challenge even with these expensive pieces of architecture. At the same time you will want to deploy more and more features, and in some cases you will be forced to use them.
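The pattern is easy to model: every feature you enable takes a bite out of the datasheet's cleartext figure. The multipliers below are purely hypothetical, for illustration only; real numbers come from testing your own traffic mix on your own box:

```python
# Hypothetical per-feature throughput multipliers -- illustration only;
# real figures come from testing your own traffic mix.
FEATURE_COST = {
    "deep_inspection": 0.70,   # application-layer inspection
    "ipsec_vpn":       0.60,   # encryption/decryption on the same CPU
    "logging":         0.90,   # per-connection audit logging
}

def effective_mbps(cleartext_mbps, features):
    """Derate the datasheet's cleartext figure by each enabled feature."""
    rate = cleartext_mbps
    for f in features:
        rate *= FEATURE_COST[f]
    return rate

# A firewall rated 330 Mbps cleartext, with everything switched on:
print(f"~{effective_mbps(330, FEATURE_COST):.0f} Mbps")   # ~125 Mbps
```

The multipliers compound, which is why a box that looks comfortably sized on paper can end up delivering a third of its rated throughput.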

For example, if you are using Microsoft's Office 365 in the cloud, you want to allow some traffic from the cloud addresses. But there are hundreds of IP addresses or ranges in the cloud, spread worldwide, and you cannot simply create that many objects and track their IP addresses, which keep changing. You would want to use the URL instead and have the firewall perform a DNS lookup every time a user requests a transaction in the form of packets, but DNS resolution on the firewall is an optional feature that admins usually do not allow or recommend, as it will start to kill your firewall.
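One common workaround is to take the DNS work off the firewall entirely: Microsoft publishes its Office 365 endpoint list through a web service, so a scheduled script can pull the current ranges and regenerate firewall objects from them. A minimal sketch, assuming the endpoints.office.com web service; the object-group name is invented and the output syntax should be adapted to your firewall platform:

```python
import json
import urllib.request
import uuid

# Microsoft's published endpoint list for the worldwide Office 365 instance.
URL = ("https://endpoints.office.com/endpoints/worldwide"
       f"?clientrequestid={uuid.uuid4()}")

with urllib.request.urlopen(URL) as resp:
    endpoints = json.load(resp)

# Collect every published IP range, then print object lines for the firewall.
prefixes = sorted({ip for e in endpoints for ip in e.get("ips", [])})

print("object-group network O365-CLOUD")    # invented name; adapt the syntax
for prefix in prefixes:
    print(f"  network-object {prefix}")
```

Run it on a schedule, diff the output against the running configuration and push only the changes; the firewall then never has to resolve anything at packet time.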

If you want to do more, such as traffic shaping and QoS, you are even more at the mercy of the upper-layer hardware. Why is that? It is not a made-up logical issue; it has physical roots too. When you want a device to do a lot more than it used to, it needs more processing power, more memory, more storage, and everything has to be faster than before.

So how do you select the correct device and upper-layer features? You must run the analysis and understand your requirements, including security and compliance requirements, before defining exactly what the capacity of your network and solution will be.

This is a critical part of solution design, and your architect MUST gather the requirements and state exactly what throughput the solution will deliver with the given list of features. If it is not stated, you must add a requirement to the request for proposal (RFP) covering the throughput capabilities of the end-to-end transaction: gateways, firewall processing, core/distribution/access switches, storage network, backup network and VPN devices. Once this is defined and agreed, a good designer is expected to state the throughput or capacity limitation of every interface and every device; if not, you must ask for the figures to be documented against every interface and every Layer 3 device.
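That documented list also makes the end-to-end figure easy to sanity-check, because a transaction can never be faster than the slowest hop on its path. A tiny sketch of that audit, with invented example figures:

```python
# Invented example figures -- replace with the documented numbers from
# your design or the vendors' datasheets.
path_mbps = {
    "access switch":          1000,
    "distribution switch":    1000,
    "firewall (features on)":  330,
    "VPN gateway":             200,
    "storage network":         800,
}

bottleneck = min(path_mbps, key=path_mbps.get)
print(f"End-to-end ceiling: {path_mbps[bottleneck]} Mbps, "
      f"set by the {bottleneck}")
```

If the weakest documented link cannot carry your required load, no amount of gigabit interfaces elsewhere will save the solution.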

