
Throughput in Networking: Why Your “Gigabit” Link Doesn’t Feel Like Gigabit

Writer: Dan LANCaster

Bandwidth is the capacity. Throughput is what you actually get.

When you read “1 Gbps” on a spec sheet, it’s tempting to assume that applications will send and receive data at 1 Gbps. In practice, that almost never happens outside of a clean LAN.


The key concept here is throughput: the actual rate at which useful data is delivered end-to-end. Throughput is affected by latency, packet loss, congestion control, buffers, and a few other very real-world imperfections.

In this post, we’ll unpack:


  • The difference between bandwidth and throughput

  • Why RTT (round-trip time) and packet loss have such a strong impact

  • The role of the Bandwidth-Delay Product (BDP)

  • Why LAN tests often look “perfect” and WAN tests don’t

  • How all of this shows up in real measurements


  1. Bandwidth vs Throughput


Bandwidth: Theoretical Capacity


Bandwidth is the nominal capacity of a link, e.g.:


  • 1 Gbps Ethernet

  • 10 Gbps fiber

  • 1.2 Gbps Wi-Fi


It’s a property of the physical or wireless medium, not of your applications. Think of it as the maximum number of bits per second that could be sent if everything were ideal.


Throughput: What You Really Get


Throughput is the actual end-to-end data rate that an application achieves. It depends on:


  • Transport protocol behavior (usually TCP)

  • RTT (round-trip time)

  • Packet loss and retransmissions

  • Congestion control algorithm (Reno, CUBIC, BBR, etc.)

  • Window sizes and buffer sizes

  • MTU, tunnels, VPNs, NAT

  • CPU, NIC offloads, and OS limits


Analogy: Bandwidth is the width of the highway. Throughput is how many cars actually make it from one city to another per second, after accounting for traffic lights, accidents, and speed limits.


On a quiet, short highway (a LAN), the two are almost the same. On a long, congested route (a WAN), they can be dramatically different.


  2. Why Throughput Depends on RTT and Packet Loss


RTT: How Far Data Has to Travel


TCP relies on acknowledgments (ACKs) to adjust its sending rate. Every time data is sent, the sender waits for ACKs to arrive before it can safely increase the congestion window (cwnd). The time between sending data and receiving the ACK is the round-trip time (RTT).


  • On a LAN, RTT is often below 1 ms. ACKs come back almost instantly, so TCP can ramp up quickly and keep a large amount of data in flight.

  • On a cross-country path, RTT might be 40–80 ms.

  • On a transoceanic path, RTT can easily be 120–180 ms.


The longer the RTT, the slower TCP can grow its sending rate, and the harder it is to keep the pipe full.
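
To get a feel for how strongly RTT slows the ramp-up, here is a rough back-of-the-envelope sketch in Python. It assumes classic slow start (the congestion window roughly doubles every RTT until it covers the bandwidth-delay product) and ignores ssthresh, pacing, and loss, so it illustrates the trend rather than modeling any specific TCP implementation:

    import math

    def rtts_to_fill_pipe(bandwidth_bps, rtt_s, mss_bytes=1460, initial_cwnd_segments=10):
        """Rough number of RTTs slow start needs before the pipe is full.

        Assumes cwnd doubles every RTT from a 10-segment initial window,
        with no loss and no ssthresh cap -- an idealized illustration only.
        """
        bdp_bytes = bandwidth_bps * rtt_s / 8        # data that must be in flight
        bdp_segments = bdp_bytes / mss_bytes
        if bdp_segments <= initial_cwnd_segments:
            return 0
        return math.ceil(math.log2(bdp_segments / initial_cwnd_segments))

    for rtt_ms in (1, 40, 150):
        n = rtts_to_fill_pipe(1e9, rtt_ms / 1000)
        print(f"RTT {rtt_ms:>3} ms: ~{n} RTTs (~{n * rtt_ms} ms) to fill a 1 Gbit/s pipe")

On a LAN the ramp-up is over in a few milliseconds; on a transoceanic path, the same ramp-up takes more than a second.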


Packet Loss: The Silent Throughput Killer


In TCP, packet loss is treated as a sign of congestion. When TCP detects loss, it reduces the congestion window, often sharply. Even small amounts of loss can devastate throughput, especially on fast, long-distance links.


Different congestion control algorithms react differently:


  • TCP Reno / CUBIC: classic loss-based algorithms. Throughput drops steeply even at very small loss rates.

  • TCP BBR: model-based algorithm that estimates bottleneck bandwidth and RTT. It can sustain higher throughput at low loss rates, but performance still degrades when loss becomes significant.


To visualize this, consider the chart below.


Chart: Throughput vs Packet Loss

This diagram shows normalized throughput for three TCP flavors as packet loss increases from almost 0% to 1%. Loss-based algorithms (Reno, CUBIC) fall off rapidly. BBR is more resilient, but not invincible. 
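
One way to put rough numbers on this is the classic Mathis et al. approximation for steady-state, loss-based TCP: throughput ≈ (MSS / RTT) × (C / √p), with C ≈ 1.22. It ignores timeouts and window limits and does not describe BBR, so treat the sketch below as an order-of-magnitude estimate only:

    import math

    def mathis_throughput_mbps(mss_bytes, rtt_s, loss_rate, c=1.22):
        """Mathis approximation for loss-based TCP: (MSS / RTT) * (C / sqrt(p)).

        A rough upper bound for Reno-style flows; ignores timeouts and window limits.
        """
        return (mss_bytes * 8 / rtt_s) * (c / math.sqrt(loss_rate)) / 1e6

    for loss in (0.00001, 0.0001, 0.001, 0.01):           # 0.001% .. 1% loss
        est = mathis_throughput_mbps(1460, 0.040, loss)   # 1460-byte MSS, 40 ms RTT
        print(f"loss {loss:.3%}: ~{est:,.0f} Mbit/s")

Even 0.001% loss caps a Reno-style flow at roughly 100 Mbit/s on a 40 ms path, which is why loss-based algorithms struggle on fast long-distance links.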


If BBR is so good, why isn’t everyone using it?


BBR is dramatically more resilient to packet loss than traditional TCP algorithms, but it also has trade-offs. Because it does not treat loss as a congestion signal, BBR can sometimes be too aggressive on shared networks, taking more than its “fair” share of bandwidth and creating queuing for other flows. Some operators also report periodic jitter or increased latency under mixed workloads. For these reasons, many large-scale environments deploy BBR selectively rather than universally.


Which algorithms do different operating systems use?


  • Windows: Uses a variant of CUBIC by default.

  • macOS: Uses NewReno with CUBIC-like behavior; recent versions incorporate enhancements but remain loss-based.

  • Linux: Default is CUBIC, but Linux also supports BBR, BBR2, Reno, Vegas, and others. Changing the algorithm is a simple system configuration change, which is why Linux environments often appear in BBR benchmarking articles.
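
As a concrete illustration (Linux only), the kernel exposes both the current default and the available algorithms through procfs. The sketch below just reads those files; actually changing the default (for example with sysctl -w net.ipv4.tcp_congestion_control=bbr) requires root, and the available set depends on which kernel modules are loaded:

    # Linux only: show the system-wide default TCP congestion control
    # algorithm and the algorithms the running kernel can offer.
    def read(path):
        with open(path) as f:
            return f.read().strip()

    default = read("/proc/sys/net/ipv4/tcp_congestion_control")
    available = read("/proc/sys/net/ipv4/tcp_available_congestion_control")

    print("default  :", default)      # e.g. "cubic"
    print("available:", available)    # e.g. "reno cubic bbr"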


A fun fact about asymmetric throughput


When you run a throughput test between two machines, TCP congestion control is applied only on the sender, not the receiver. This means each direction operates under whichever algorithm the sending side uses.


Example:


  • Windows → Linux may use CUBIC

  • Linux → Windows may use BBR


If RTT is high or there’s even a tiny amount of loss, BBR will often achieve much higher throughput than CUBIC. So the same physical path can produce very different results in each direction, simply because the operating systems choose different algorithms:


Asymmetric network speed: BBR in one direction, CUBIC in the opposite direction

Engineers sometimes encounter this in WAN testing: “Why am I getting 50 Mbps in one direction and 150 Mbps in the other?” The answer is often different congestion control algorithms, not network asymmetry.
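
On Linux this is even visible at the socket level: the sending application can request a specific algorithm per connection with the TCP_CONGESTION socket option (exposed in Python 3.6+ as socket.TCP_CONGESTION). The sketch below assumes a Linux kernel with the bbr module loaded and uses a placeholder test address; only the data this host sends is affected:

    import socket

    # Placeholder endpoint -- replace with a test server you control.
    HOST, PORT = "192.0.2.10", 5201

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        # Linux only. Unprivileged processes can only pick algorithms listed in
        # /proc/sys/net/ipv4/tcp_allowed_congestion_control.
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"bbr")
        s.connect((HOST, PORT))
        name = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
        print("sending with:", name.rstrip(b"\x00").decode())
    finally:
        s.close()

Because each direction is governed by its own sender, running the same test both ways across the same path can legitimately produce two very different numbers.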


  3. Bandwidth-Delay Product (BDP): How Much Data You Must Keep in Flight


The Bandwidth-Delay Product (BDP) tells you how much unacknowledged data must be “in flight” to fully utilize a link.


BDP = Bandwidth × RTT


If bandwidth is in Mbit/s and RTT in seconds, BDP is in Mbit.


Example

  • Bandwidth: 1 Gbit/s

  • RTT: 40 ms (0.04 s)


BDP = 1000 × 0.04 = 40 Mbit ≈ 5 MB


To actually reach 1 Gbit/s throughput, TCP needs to have about 5 MB of unacknowledged data in flight. If window sizes or buffers are smaller than that, the link will be underutilized regardless of how “fast” it is on paper.


The BDP grows linearly with RTT. Here is how much in-flight data is required on a 1 Gbit/s link at different RTTs:


Chart: BDP vs RTT

As RTT increases, the amount of data that must be in flight becomes surprisingly large. That’s why window scaling, buffer tuning, and congestion control matter so much on long-distance links.
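
The numbers behind that chart are easy to reproduce. Here is a minimal sketch (assuming a 1 Gbit/s link and ignoring protocol overhead) that reprints the 40 ms example from above along with a few other RTTs:

    def bdp_bytes(bandwidth_bps, rtt_s):
        """Bandwidth-Delay Product: data that must be in flight to fill the link."""
        return bandwidth_bps * rtt_s / 8

    for rtt_ms in (1, 40, 100, 150):
        bdp = bdp_bytes(1e9, rtt_ms / 1000)          # 1 Gbit/s link
        print(f"RTT {rtt_ms:>3} ms -> BDP {bdp / 1e6:.2f} MB in flight")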


  4. Throughput vs RTT: Same Bandwidth, Different Results


Even if bandwidth and packet loss stay constant, changing RTT alone can strongly affect throughput for many TCP flows. With higher RTT:


  • It takes longer to recover from a loss event.

  • It takes longer to increase the congestion window.

  • The sender spends more time “waiting” for ACKs.


The simple normalized chart below illustrates the trend: as RTT grows from a few milliseconds to hundreds of milliseconds, achievable throughput (for a typical loss-based flow) decreases.


Chart: Throughput vs RTT

The exact shape of the curve depends on the congestion control algorithm and on window sizes, but the direction is always the same: more RTT makes it harder to reach link capacity. In practice, throughput declines gradually with RTT rather than dropping to zero; going from tens to hundreds of milliseconds can easily cut it by a factor of 2–4, even when bandwidth and loss stay constant.
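
One concrete mechanism behind this trend is window limiting: a sender can never have more than one window of data in flight per RTT, so throughput is capped at roughly window ÷ RTT. The sketch below assumes a fixed 256 KB effective window, a made-up but plausible untuned value rather than the default of any particular OS:

    LINK_MBPS = 1000          # nominal 1 Gbit/s link
    WINDOW_BYTES = 256_000    # assumed fixed effective window (untuned buffers)

    def window_limited_mbps(window_bytes, rtt_s, link_mbps=LINK_MBPS):
        """Throughput ceiling when at most one window of data is in flight per RTT."""
        return min(link_mbps, window_bytes * 8 / rtt_s / 1e6)

    for rtt_ms in (1, 20, 50, 100, 200):
        mbps = window_limited_mbps(WINDOW_BYTES, rtt_ms / 1000)
        print(f"RTT {rtt_ms:>3} ms -> at most ~{mbps:,.0f} Mbit/s")

With the same link and the same window, going from 20 ms to 200 ms of RTT drops the ceiling from roughly 100 Mbit/s to roughly 10 Mbit/s, which is why window scaling and buffer tuning matter so much on long paths.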


Keep in mind that most consumer speed tests behave almost like a LAN: the test server is geographically close, RTT is extremely low, and packet loss is essentially zero. Don’t let these ideal conditions deceive you into thinking that real-world WAN throughput will look the same.


  5. LAN vs WAN: Why the Numbers Look So Different


LAN: Throughput ≈ Bandwidth


In a well-designed LAN:

  • RTT is typically < 1 ms

  • Packet loss is effectively 0%

  • There is minimal shaping or policing

  • MTU is consistent and often supports jumbo frames

As a result:

  • A 1 Gbit/s Ethernet link often delivers about 950 Mbit/s TCP throughput in tests.

  • The small difference between 1000 and 950 Mbit/s is mainly protocol overhead and implementation details.

From an engineer’s perspective, LAN throughput is usually “good enough” to treat as equal to bandwidth.
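
The gap between 1000 and roughly 950 Mbit/s is mostly per-packet overhead, and it can be estimated directly. The sketch below assumes a standard 1500-byte MTU, Ethernet framing with preamble and inter-frame gap, and plain 20-byte IP and TCP headers (no TCP options), and it ignores ACK traffic:

    MTU = 1500                        # IP packet size on standard Ethernet
    ETH_OVERHEAD = 14 + 4 + 8 + 12    # Ethernet header + FCS + preamble/SFD + inter-frame gap
    IP_HEADER = 20
    TCP_HEADER = 20                   # add ~12 bytes if TCP timestamps are in use

    wire_bytes = MTU + ETH_OVERHEAD              # bytes the wire carries per full-size packet
    payload_bytes = MTU - IP_HEADER - TCP_HEADER # application data per packet

    efficiency = payload_bytes / wire_bytes
    print(f"TCP payload efficiency: {efficiency:.1%}")             # ~94.9%
    print(f"Max TCP goodput on 1 Gbit/s: ~{efficiency * 1000:.0f} Mbit/s")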


WAN: Same Bandwidth, Very Different Experience


On a WAN path, conditions are much less friendly:

  • RTT: 20–200+ ms

  • Packet loss: small but non-zero, sometimes bursty

  • ISP policies: shaping, policing, oversubscription

  • Tunnels, VPNs, NAT, and smaller effective MTU values

On a nominally 1 Gbit/s WAN link, you might see:

  • 200–400 Mbit/s with CUBIC under mild loss

  • 400–800 Mbit/s with BBR in a clean but high-latency scenario

  • < 100 Mbit/s if there’s persistent loss, buffer issues, or aggressive shaping

Physically, the line is still 1 Gbit/s. But from the viewpoint of your applications, the effective capacity is much lower. This gap between theoretical and practical performance is exactly what throughput measurements reveal.


  6. How This Shows Up in Real Testing

In practical testing scenarios, engineers typically observe:

  • Gigabit switch, short patch cables:

    TCP tests reach the high 900 Mbit/s range with negligible loss and sub-millisecond RTT. The throughput graph is flat and close to the nominal bandwidth.

  • Data center to remote office over the Internet:

    RTT might be 50–100 ms, and even tiny amounts of loss or shaping cause TCP to back off. Throughput graphs show clear ceilings well below the advertised line rate.

  • Wi-Fi or RF links:

    Radio conditions, interference, and retransmissions translate into apparent packet loss at the transport layer. Sudden drops in throughput correlate with bursts of interference or movement, even when the PHY rate looks high.

Understanding why the numbers behave like this is crucial when planning capacity, troubleshooting complaints, or comparing different networks.

  7. Conclusion: Why Throughput Testing Matters


To summarize:


  • Bandwidth is the maximum possible capacity of a link.

  • Throughput is the real, end-to-end data rate your applications see.

  • RTT, packet loss, congestion control, window sizes, and BDP all work together to determine throughput.

  • On a LAN, conditions are so favorable that throughput is usually very close to bandwidth.

  • On a WAN, even tiny amounts of loss and modest RTT quickly push real throughput far below the theoretical link rate.


This is why dedicated throughput measurement tools such as Tessabyte are so important. They help you see the network as your applications see it, not as the ISP brochure describes it. Armed with that understanding, you can choose better paths, tune protocols, and design networks that deliver the performance your users actually need.


