#bandwidth #server-performance

iperf3 Tutorial: Test Network Speed on Linux & Windows

10 min read - May 7, 2026

Table of contents
  • iperf3 Tutorial: Measure Network Performance on Linux and Windows
  • Installing iperf3
  • Setting Up the Server
  • Running Client Tests
  • Advanced Tests
  • Tuning and Troubleshooting

Install iperf3, run bandwidth tests, and tune TCP buffers for accurate results on Linux and Windows. Covers UDP, bidirectional, and 10GbE+ testing.

iperf3 Tutorial: Measure Network Performance on Linux and Windows

iperf3 is a command-line tool for measuring network bandwidth, jitter, and packet loss between two machines. It uses a client-server model: one machine listens, the other sends traffic, and you get precise throughput numbers. This guide covers installation, basic and advanced tests, and how to tune your system for accurate results on high-speed links.

Installing iperf3

Debian / Ubuntu

sudo apt update
sudo apt install iperf3

Confirm the install with iperf3 --version. Install it on both the server and client machines.

Fedora / CentOS / Rocky / Alma

On Fedora 22+, CentOS 8+, Rocky Linux, or AlmaLinux:

sudo dnf install iperf3

On CentOS 7, use yum instead. If the package isn't found, enable the EPEL repository first:

sudo yum install epel-release
sudo yum install iperf3

If your firewall is active, open port 5201:

sudo firewall-cmd --add-port=5201/tcp --permanent
sudo firewall-cmd --reload

Windows

Download the standalone executable from iperf.fr or the ar51an/iperf3-win-builds GitHub repo. Extract it to a folder like C:\iperf3, then verify:

cd C:\iperf3
iperf3.exe -v

To run iperf3 from any directory, add the folder to your System PATH via System Properties > Advanced > Environment Variables. You'll also need to create an inbound firewall rule allowing TCP on port 5201 in Windows Defender Firewall.
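
One way to add that rule from the command line is with PowerShell's New-NetFirewallRule, run from an elevated prompt (the display name here is arbitrary):

New-NetFirewallRule -DisplayName "iperf3" -Direction Inbound -Protocol TCP -LocalPort 5201 -Action Allow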

Setting Up the Server

Start the server with:

iperf3 -s

This listens on TCP port 5201 by default. To run it in the background with logging:

iperf3 -s -D --logfile /var/log/iperf3.log

Verify it's running with ss -tulpn | grep 5201.

If port 5201 is blocked on your network, use -p to pick a different port. To bind to a specific interface, use -B:

iperf3 -s -B 192.168.1.10

For one-off tests, iperf3 -s -1 handles a single client connection and then exits. On high-bandwidth links (40 Gbps+), run multiple server instances on different ports to work around single-threaded CPU limits.
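
For example, if 5201 is blocked somewhere on the path, pick another port on both ends (5301 here is arbitrary):

iperf3 -s -p 5301
iperf3 -c 192.168.1.10 -p 5301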

Make sure your firewall allows traffic on the chosen port. On Ubuntu/Debian with UFW:

sudo ufw allow 5201/tcp
sudo ufw allow 5201/udp   # if testing UDP

Running Client Tests

Basic TCP test

iperf3 -c 192.168.1.10

This measures upload bandwidth over TCP for 10 seconds. Extend the duration with -t:

iperf3 -c 192.168.1.10 -t 30

On 10 Gbps or 25 Gbps links, a single TCP stream often tops out at 3–5 Gbps due to single-core CPU limits. Use parallel streams to saturate the link:

iperf3 -c 192.168.1.10 -P 8

Reading the results

Each interval line shows Transfer (data sent) and Bitrate (throughput). For TCP, also watch:

  • Retr (retransmissions). High numbers mean packet loss or congestion.
  • Cwnd (congestion window). If it's low or stuck, buffer or window size limits are capping throughput.

On a clean 1 Gbps link, expect around 940 Mbps after protocol overhead. The test ends with sender and receiver summary lines. On a stable network, these should match closely.

For UDP tests (-u flag), the output adds jitter (packet arrival variance) and lost/total datagrams. Jitter under 1 ms and 0% loss is ideal for real-time traffic like VoIP.

Useful flags

Flag       Purpose
-c <IP>    Connect to server
-p <port>  Use a specific port (default: 5201)
-t <sec>   Test duration in seconds (default: 10)
-i <sec>   Report interval
-P <num>   Parallel streams
-u         UDP mode
-b <n>M    Target bandwidth (UDP; defaults to 1 Mbps if omitted)
-R         Reverse mode (server sends, client receives)
-w <n>K    TCP window / socket buffer size
-J         JSON output
-Z         Zerocopy (reduces CPU on fast links)
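
The flags combine freely. For example, a 30-second, four-stream test saved as JSON for later analysis (the filename is arbitrary):

iperf3 -c 192.168.1.10 -t 30 -P 4 -J > results.json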

Advanced Tests

Bidirectional testing

The --bidir flag (iperf3 3.7+) tests upload and download simultaneously:

iperf3 -c 192.168.1.10 --bidir

Both connections originate from the client, so this works through NAT without opening extra ports. If bidirectional results are much lower than one-way tests, your router or cable modem may be struggling with full-duplex traffic.

Reverse mode

The -R flag flips the data flow so the server sends and the client receives. This measures download speed without swapping roles:

iperf3 -c 192.168.1.10 -t 30 -i 5 -R

Big differences between forward and reverse results point to asymmetric paths, congestion, or buffer misconfigurations.

UDP testing

UDP tests reveal jitter and packet loss, which TCP hides behind retransmissions. Always set a target bandwidth with -b, since iperf3 defaults to 1 Mbps for UDP:

iperf3 -c 192.168.1.10 -u -b 1G

To simulate VoIP traffic (100 calls, 200-byte packets):

iperf3 -c 192.168.1.10 -u -b 8M -l 200

Quality benchmarks: jitter under 5 ms is good for VoIP, over 30 ms causes audible problems. Packet loss above 0.1% degrades real-time media noticeably.

Tuning and Troubleshooting

Common problems

Only getting 100 Mbps on a gigabit link? Check your physical interface speed with ethtool eth0. Auto-negotiation sometimes fails and drops the link to a lower speed.

MSS shows 536 bytes on Ethernet? Path MTU Discovery is probably disabled. The default MSS for a 1,500-byte MTU is 1,460 bytes; a 536-byte MSS wastes bandwidth on header overhead. Check the negotiated MSS with ss while a test is running.
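
For example, while a test against 192.168.1.10 is running, filter for that connection and look for the mss: field:

ss -ti dst 192.168.1.10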

CPU maxing out on fast links? Use -Z (zerocopy) to reduce CPU load. For 40 Gbps+, run multiple server instances on different ports and spread them across CPU cores.
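
A minimal sketch of that approach, assuming at least two free cores (core numbers and ports are arbitrary); run one client per port and add the results together:

taskset -c 0 iperf3 -s -p 5201 -D
taskset -c 1 iperf3 -s -p 5202 -D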

Inconsistent results? Use -O 3 to omit the first few seconds while the TCP congestion window ramps up. Leave 30 seconds between test runs to clear network buffers.
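
For example, to skip the first 3 seconds of slow start and then measure for 30 seconds:

iperf3 -c 192.168.1.10 -O 3 -t 30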

Single stream much slower than parallel streams combined? If one stream gets 200 Mbps but eight streams combined hit 1.6 Gbps, the TCP window or OS buffers are capping the single stream. Tune the buffers below.

TCP buffer tuning

Start by calculating the Bandwidth-Delay Product: bandwidth × RTT. A 10 Gbps link with 50 ms RTT gives a BDP of 62.5 MB. Set your maximum buffer to at least 2× the BDP.
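
If you don't know the RTT, measure it first; the arithmetic is then direct: 10 Gbps × 0.05 s = 500 Mbit, or 62.5 MB. This assumes the server responds to ICMP:

ping -c 10 192.168.1.10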

Add these to /etc/sysctl.d/99-tcp-tuning.conf and apply with sudo sysctl --system:

Parameter                        Recommended (1–10 Gbps)
net.core.rmem_max                134217728 (128 MB)
net.core.wmem_max                134217728 (128 MB)
net.ipv4.tcp_rmem                4096 131072 134217728
net.ipv4.tcp_wmem                4096 131072 134217728
net.core.default_qdisc           fq
net.ipv4.tcp_congestion_control  bbr
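
Written out as a drop-in file, the table above comes to:

# /etc/sysctl.d/99-tcp-tuning.conf
net.core.rmem_max = 134217728
net.core.wmem_max = 134217728
net.ipv4.tcp_rmem = 4096 131072 134217728
net.ipv4.tcp_wmem = 4096 131072 134217728
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr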

Keep net.ipv4.tcp_moderate_rcvbuf set to 1 so the kernel auto-tunes within these ranges. Enable net.ipv4.tcp_window_scaling (set to 1) for TCP windows larger than 64 KB.

You can also switch from the default CUBIC congestion algorithm to Google's BBR. On high-latency links with some packet loss, BBR consistently delivers higher throughput than CUBIC.
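
Before switching, confirm BBR is available on your kernel; on some distributions the tcp_bbr module must be loaded first:

sysctl net.ipv4.tcp_available_congestion_control
sudo modprobe tcp_bbr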

Use the -w flag in iperf3 to test specific buffer sizes, but note this can't exceed the kernel's rmem_max or wmem_max. Start with 8 MB for gigabit links, 512 KB for 100 Mbps.
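
For example, to test a gigabit link with an 8 MB window:

iperf3 -c 192.168.1.10 -w 8M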

If you're provisioning dedicated servers and want to validate network performance, run iperf3 baseline tests right after setup and after any network changes to catch regressions early.
