Understanding UDP Metrics in Network Speed Tests
- Dan Lancaster

- Oct 14
- 6 min read
Updated: Nov 3

The two major transport protocols of the IP family, TCP and UDP, are often misunderstood. For those who are just beginning their journey into the world of protocols, here’s a concise outline: TCP is like a postal service, with each letter tracked, acknowledged, and delivered in order. UDP is like a cargo plane air-dropping boxes; depending on the air currents, some boxes may be lost forever in the jungle or end up in the lake, and some may reach the ground earlier than the ones dropped later.
Thus, TCP is good for things like downloading a web page or sharing a file in your LAN. You don’t really want the page to have a missing paragraph, right? UDP is good for all kinds of voice and video streaming, where a few missing packets are not a big deal, just like in a movie with 24 frames per second: you won’t notice if one of the frames isn’t there. UDP is also the king when speed is crucial, because if you’re talking with your friend across the ocean on WhatsApp, the last thing you want is a one-second delay every time some packets are lost en route.
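If you prefer code to metaphors, here’s roughly how that difference looks at the socket level. This is a self-contained localhost sketch in Python, not anything Tessabyte-specific: UDP hands a datagram to the network stack and moves on, while TCP sets up a connection and keeps retransmitting until the data is acknowledged.

```python
import socket

# --- UDP: connectionless, fire-and-forget datagrams (the air-dropped boxes) ---
udp_receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_receiver.bind(("127.0.0.1", 0))                     # let the OS pick a free port
udp_port = udp_receiver.getsockname()[1]

udp_sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_sender.sendto(b"box #1", ("127.0.0.1", udp_port))   # returns immediately; no ACK, no retry
print("UDP got:", udp_receiver.recvfrom(2048)[0])

# --- TCP: connection-oriented, acknowledged, in-order delivery (the postal service) ---
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
tcp_port = listener.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", tcp_port))                 # three-way handshake happens here
conn, _ = listener.accept()
client.sendall(b"letter #1")                            # retransmitted until acknowledged
print("TCP got:", conn.recv(2048))
```

On localhost nothing gets lost, of course; the point is only that the UDP sender never waits for, or even learns about, delivery.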
A Practical Example
Now, with the theoretical part behind us, let’s talk about practice. The questions we’re going to answer are: How well will our voice calls, streaming, and other UDP-based protocols perform across the link we’re testing? How do we specifically test UDP metrics? Which metrics matter most? The answers are not as obvious as they might seem.
Let’s say we have a small office that uses a cloud-based VoIP system. Our internet connection is provided by the local ISP, and our “business broadband” plan includes 600 Mbps downlink and 100 Mbps uplink. We’ve tested the connection using an internet speed test (for example, Speedtest) and, indeed, we see throughput slightly below 600/100 Mbps. Then we deployed Tessabyte Server in the cloud, tested TCP, and also received good results. Now we’re ready to run UDP tests. We select “UDP Only” as the protocol in Tessabyte Client, enter the hostname of our server, and run the test:

Woah, woah, woah… The throughput values are in line with expectations, but what on earth is going on with packet loss in both directions? What do these numbers tell us? Let’s unpack this.
Uplink: The client successfully sent, and the server successfully received, about 100 Mbps of UDP payload. However, the client tried to send much more. The loss is 67%, so the server actually received only 33% of the payload the client attempted to send. How much did the client try to send? 100 / 0.33 ≈ 300 Mbps. Sanity check: 300 Mbps sent at 67% loss means 300 × 0.67 ≈ 200 Mbps lost out of 300.
Downlink: The server successfully sent, and the client successfully received, about 600 Mbps of UDP payload. As in the uplink case, the server tried to send much more. The loss is 55%, so the client received only 45% of the payload the server attempted to send. How much did the server try to send? 600 / 0.45 ≈ 1,330 Mbps. Sanity check: 1,330 Mbps sent at 55% loss means 1,330 × 0.55 ≈ 730 Mbps lost out of 1,330.
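If you want to double-check that arithmetic yourself, a tiny helper does the trick (the function name is mine, not something from Tessabyte):

```python
def offered_rate(received_mbps: float, loss_pct: float) -> float:
    """Back out how much the sender tried to push, given what arrived and the loss percentage."""
    return received_mbps / (1 - loss_pct / 100)

# Uplink: ~100 Mbps received at 67% loss -> the client offered ~300 Mbps
print(round(offered_rate(100, 67)))   # ≈ 303
# Downlink: ~600 Mbps received at 55% loss -> the server offered ~1,330 Mbps
print(round(offered_rate(600, 55)))   # ≈ 1333
```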
Why the Loss?
Remember the air-drop example above? The aircraft dropped more boxes than the folks on the ground could catch, and that was exactly the idea. At this point, we’ve measured the maximum link throughput. We’ve pushed as many UDP packets into the link as possible, and Tessabyte told us how much the receiving side could get. For that, we had to fully saturate the link. You know what? I have an even better visual metaphor for you.

You get the idea. You’re “shooting” UDP packets at maximum speed, but the network link often doesn’t have enough bandwidth to handle that rate. What happens then? Splashing and flooding, just like in the image above (for clarity, we’re talking only about UDP here; TCP is a different story). You inevitably lose some packets; they simply can’t fit into the narrower pipe and spill over.
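By the way, how does a tool even know how many packets spilled over? A common approach (just a sketch here, not a claim about Tessabyte’s internals) is to number every datagram on the sending side and count the gaps on the receiving side:

```python
import socket
import struct
import time

def blast(dest: tuple, seconds: float = 1.0, payload_size: int = 1200) -> int:
    """Send sequence-numbered UDP datagrams as fast as the OS accepts them.
    Returns how many datagrams were handed to the network stack."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    filler = b"x" * (payload_size - 4)
    seq, deadline = 0, time.monotonic() + seconds
    while time.monotonic() < deadline:
        sock.sendto(struct.pack("!I", seq) + filler, dest)   # 4-byte sequence number + padding
        seq += 1
    return seq

def loss_percent(sock: socket.socket, sent: int) -> float:
    """Drain a bound UDP socket, count unique sequence numbers, and return the loss in percent."""
    sock.settimeout(0.5)
    received = set()
    try:
        while True:
            data, _ = sock.recvfrom(2048)
            received.add(struct.unpack("!I", data[:4])[0])
    except socket.timeout:
        pass
    return 100 * (1 - len(received) / sent)
```

Point blast() at a machine where a socket is bound and loss_percent() is draining it, and the gap between the sequence numbers sent and received is your loss.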
Let me take a minute to explain the specific numbers in our example, i.e., 200 Mbps lost out of 300 Mbps uplink. Tessabyte Client was running on a laptop connected to the office LAN over Wi-Fi. When an application tells the operating system to send UDP packets as fast as it can, the operating system considers the available bandwidth to the nearest network node. In this case, the OS logic is essentially: “Okay, it looks like I can push 300 Mbps of UDP packets from this laptop to the access point it’s connected to, considering the current Wi-Fi negotiated speed.”
About 300 Mbps of UDP data then successfully reaches the access point. The access point forwards the data to the next LAN node—a router—over a 2.5-Gbps link, so there is no bottleneck there. The router then sends that UDP traffic to the ISP, and since the ISP limits the uplink bandwidth to 100 Mbps, that’s where the bottleneck is and that’s where most packets are lost.
In the downlink direction, 730 Mbps were lost out of 1,330 Mbps. Our cloud server could push 1,330 Mbps of UDP data into the cloud network infrastructure, because this cloud plan includes 1,500 Mbps of public bandwidth (slightly below 1,500 Mbps because of protocol overhead; bandwidth ≠ throughput). The network infrastructure delivered those 1,330 Mbps of UDP data to our ISP, and since the ISP limits the downlink bandwidth to 600 Mbps, that’s where the bottleneck occurs in the downlink direction.
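A quick aside on that “bandwidth ≠ throughput” remark: every UDP payload byte travels wrapped in IP and Ethernet framing, so even a perfectly clean link delivers somewhat less payload than its advertised rate. Here’s a back-of-the-envelope estimate, assuming a standard 1,500-byte MTU and metering at the Ethernet layer; your cloud provider may count traffic differently, and shaping or burst limits usually shave off a bit more:

```python
# Full-size IPv4/UDP packet on Ethernet:
#   1,500-byte IP packet = 20 B IP header + 8 B UDP header + 1,472 B payload
#   on the wire it also carries 14 B Ethernet header + 4 B FCS + 20 B preamble/inter-frame gap
link_mbps   = 1500           # advertised public bandwidth of the cloud server
payload     = 1472           # UDP payload bytes per full-size packet
on_the_wire = 1500 + 38      # bytes actually occupying the link per packet

print(round(link_mbps * payload / on_the_wire))   # ≈ 1436 Mbps of payload, i.e. below 1,500
```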
Now, a quick mental experiment: if the ISP’s bandwidth limit had been, say, 5 Gbps, and if my laptop had been connected to the office LAN with a 2.5-Gbps Ethernet adapter rather than over Wi-Fi, then the bottleneck would have been somewhere else. Where? Probably at the cloud server’s network link, which is limited to 1,500 Mbps of public bandwidth. So something slightly below 1,500 Mbps uplink and downlink would have been our new UDP throughput limit.
Anyway, should you be worried about UDP loss? Read on.
Is UDP Loss Bad?
Generally, no. It’s perfectly normal when you intentionally saturate a network link while running tools like Tessabyte or conducting an internet speed test, Wi-Fi speed test, or good old LAN speed test. The crucial part is this: as soon as you de-saturate the link, UDP loss should drop below a reasonable level (around 5–10%). Using the water-pipe metaphor again, once you reduce pressure in the upper pipe, the splashing and flooding stop.

De-saturate, a.k.a. Rate Limiting
You decrease pressure in a water pipe by turning a valve. You de-saturate a network link by enabling the “Limit rate” option and setting a numeric value. You’re basically telling Tessabyte: “Don’t even think of sending more than xxx Mbps over this connection.”
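Under the hood, rate limiting is simply pacing: spread the packets out in time so the average rate stays under the cap. Tessabyte does this for you when you enable “Limit rate”; the toy sketch below (my own code, not Tessabyte’s implementation) shows the idea, though real tools typically pace in bursts rather than sleeping after every single packet:

```python
import socket
import time

def paced_send(dest: tuple, rate_mbps: float, seconds: float = 5.0, payload_size: int = 1200) -> None:
    """Send UDP datagrams to dest no faster than rate_mbps by spacing them out in time."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    packet = b"x" * payload_size
    interval = payload_size * 8 / (rate_mbps * 1_000_000)   # seconds between packets
    next_send = time.monotonic()
    deadline = next_send + seconds
    while time.monotonic() < deadline:
        sock.sendto(packet, dest)
        next_send += interval
        delay = next_send - time.monotonic()
        if delay > 0:
            time.sleep(delay)                               # wait for the next send slot

# e.g. paced_send(("192.0.2.10", 9000), rate_mbps=90)       # stay safely under a 100 Mbps uplink
```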

In our example above, we learned that the link becomes saturated at 100 Mbps. Therefore, let’s set a limit below that value and see what happens:

What a drastic change! UDP loss is almost gone, the chart is stable, and jitter values look healthy. And that brings us to the final question in this post.
How Much UDP Bandwidth is Enough?
So, how much UDP bandwidth do you really need? Not as much as you might think. Voice codecs like Opus or G.711 are incredibly efficient; they use only about 30 to 80 kbps per call, both ways. Even if you double that to play it safe, you’re still talking a few hundred kilobits per second per user, not megabits.
Video calls are the real bandwidth hogs. A typical SD video chat runs around 1 Mbps, 720p needs about 1.5–2 Mbps, and 1080p can easily hit 2.5–3 Mbps per participant. These numbers aren’t exact, since apps like Zoom or Teams constantly adjust bitrate, but they’re close enough for planning.
Here’s a simple rule of thumb that works surprisingly well:
Total_UDP_Mbps = (N_voice × 0.2) + (N_video × 3)
N_voice and N_video are just the counts of concurrent voice and video users.
Say your office usually has 10 people on video calls and 20 on voice. That’s (10 × 3) + (20 × 0.2) = 34 Mbps total UDP traffic. Now you know what to do: set Tessabyte’s “Limit rate” somewhere around 35–40 Mbps and see how stable the results look. If your link handles that with minimal jitter and near-zero loss, you’re in great shape, and you’ve just confirmed that your network can handle the real-world load without breaking a sweat.
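If you’d rather keep that rule of thumb in a script than in your head, it’s a one-liner (the function name is mine):

```python
def total_udp_mbps(n_voice: int, n_video: int) -> float:
    """Rule of thumb: ~0.2 Mbps per concurrent voice call, ~3 Mbps per concurrent video call."""
    return n_voice * 0.2 + n_video * 3

print(total_udp_mbps(n_voice=20, n_video=10))   # 34.0 -> try a "Limit rate" of about 35-40 Mbps
```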
A Word to the Wise
Professionals already know this, but it’s worth repeating: never test UDP metrics over WAN if you’re limited to IPv4, unless both the Tessabyte Client and Tessabyte Server have public IPv4 addresses. If one of them is behind a NAT (for example, the server has a public address like 24.4.4.4, while the client sits on a private 192.168.0.5), you’ll see 100% UDP loss in the downlink direction because packets from the server can’t easily traverse NATs. If both computers are within the same LAN segment and use non-public IPv4 addresses (for example, when testing Wi-Fi speed), then it’s fine. Better yet, switch to IPv6, where NATs are normally not needed.
So... UDP loss isn’t a mystery; it’s a metric. Measure it with Tessabyte.


