Occasionally I hear complaints from vendors about low WAN throughput at some of my remote sites. Each of those sites has dual resilient 1 Gig uplinks into a 10 Gig MPLS cloud, which in turn has dual 10 Gig links to my data centers.
NPM proves bandwidth is not the issue: traffic is not bottlenecking or choking at the end sites, and there's room for them to send or receive plenty more. Yet sometimes vendors say they're running into throughput problems.
I ask whether their application is especially sensitive to latency. My three regional hub sites are about 300 miles apart, and latency across those 1 Gig MPLS circuits is 11 milliseconds. I warn them that if they need less latency than that, their application/hardware may not be appropriate for our WAN sites.
One of my peers was recently troubleshooting this exact issue and came across an interesting article: How to Calculate TCP throughput for long distance WAN links
It's brief and simple, and it helps explain how a PC or server on a 1 Gig WAN pipe might only be able to transfer a LOT less than 1 Gb/s.
In my case, I learned that the TCP window size combined with 11 ms of WAN latency limits individual transmissions across the 1 Gig WAN pipes to no more than about 52 Mb/s.
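For anyone who wants to run the numbers themselves, here's a quick sketch of the math the article walks through: maximum single-session TCP throughput is the window size divided by the round-trip time, and the window needed to fill a pipe is the bandwidth-delay product. The 64 KB window below is the classic unscaled default, an assumption for illustration rather than a measurement from my network:

```python
# Back-of-the-envelope TCP throughput math. The inputs below are
# illustrative assumptions, not measurements.

def max_tcp_throughput_bps(window_bytes: float, rtt_seconds: float) -> float:
    """Max single-session TCP throughput: window size / round-trip time."""
    return (window_bytes * 8) / rtt_seconds

def required_window_bytes(link_bps: float, rtt_seconds: float) -> float:
    """Bandwidth-delay product: the window needed to keep the pipe full."""
    return (link_bps / 8) * rtt_seconds

RTT = 0.011             # 11 ms round trip across the MPLS cloud
WINDOW = 64 * 1024      # classic 64 KB window (no TCP window scaling)
LINK = 1_000_000_000    # 1 Gb/s WAN uplink

print(f"Max throughput: {max_tcp_throughput_bps(WINDOW, RTT) / 1e6:.1f} Mb/s")
print(f"Window to fill 1 Gb/s: {required_window_bytes(LINK, RTT) / 1024:.0f} KB")
```

With a 64 KB window at 11 ms the formula works out to roughly 48 Mb/s, in the same ballpark as the ~52 Mb/s figure; the exact number shifts with the window size and RTT you plug in. The second calculation shows why the ceiling is so low: filling a 1 Gb/s pipe at 11 ms needs a window of roughly 1.3 MB, far beyond an unscaled 64 KB window.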
Read the linked article and let me know:
- Does this apply to your environment?
- Have you run into this issue and improved throughput by adjusting the TCP window size?
- Are you using WAN accelerators to address the problem?
- If you're using WAN accelerators, what brands & models have you tested, what did you like, and what was the cost per end site and per core site?
- What problems did you discover along the way? Were there things that simply couldn't be addressed & corrected?
- How do you adjust the TCP window size on a PC and on a server? (I've taken a rough stab at this in the sketch after this list.)
- Does the TCP window size need to be adjusted on both the PC and the server, or only on one side or the other?
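On those last two questions, here's my rough understanding, offered as a sketch rather than a definitive answer: the window a host can advertise is bounded by its receive buffer, so the receiving side is usually the one that matters most, though the sender's buffer also has to be large enough to keep that much data in flight. You can tune buffers system-wide (e.g., sysctl settings like net.ipv4.tcp_rmem on Linux, or netsh on Windows) or per-application via socket options, roughly like this on a Linux host:

```python
# Sketch: per-application socket buffer tuning (assumes a Linux host;
# the OS may round or cap the requested values).
import socket

BUF_BYTES = 4 * 1024 * 1024  # ~4 MB, comfortably above the ~1.3 MB BDP above

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# The receive buffer bounds the window this host can advertise, so set it
# on the receiving side before connect()/listen().
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF_BYTES)

# The send buffer limits how much unacknowledged data the sender can keep
# in flight, so the sending side needs headroom too.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF_BYTES)

# Verify what the kernel actually granted (Linux reports double the request
# to account for its own bookkeeping overhead).
print("rcvbuf:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
print("sndbuf:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
```

Modern OSes autotune these buffers, so explicit tuning mostly matters when autotuning is disabled or capped; I'd still like to hear how others have approached it.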