I am wondering why iperf reports much better performance over TCP than over UDP. This question is very similar to this one.
UDP should be faster than TCP because it has no acknowledgements or congestion control, so I am looking for an explanation.
$ iperf -u -c 127.0.0.1 -b10G
------------------------------------------------------------
Client connecting to 127.0.0.1, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 208 KByte (default)
------------------------------------------------------------
[ 3] local 127.0.0.1 port 52064 connected with 127.0.0.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 962 MBytes 807 Mbits/sec
[ 3] Sent 686377 datagrams
[ 3] Server Report:
[ 3] 0.0-10.0 sec 960 MBytes 805 Mbits/sec 0.004 ms 1662/686376 (0.24%)
[ 3] 0.0-10.0 sec 1 datagrams received out-of-order
$ iperf -c 127.0.0.1
------------------------------------------------------------
Client connecting to 127.0.0.1, TCP port 5001
TCP window size: 2.50 MByte (default)
------------------------------------------------------------
[ 3] local 127.0.0.1 port 60712 connected with 127.0.0.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 31.1 GBytes 26.7 Gbits/sec
The default UDP datagram length is 1470 bytes, so you probably need to increase it with the -l parameter. For 26 Gbit/s I'd try something like 50000 for -l and adjust up or down from there.
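The effect of the datagram size can be illustrated with a short Python sketch (my own illustration, not part of iperf): pushing the same number of bytes over loopback UDP with 1470-byte versus 50000-byte payloads takes far fewer send() system calls, and per-packet syscall overhead is where much of the UDP cost goes.

```python
import socket
import time

def send_bytes(total, payload_len):
    """Send `total` bytes over loopback UDP in `payload_len`-sized datagrams,
    returning the number of send() calls and the elapsed time."""
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 0))                 # throwaway receiver port
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 1 << 20)
    tx.connect(rx.getsockname())
    chunk = b"x" * payload_len
    calls, sent = 0, 0
    start = time.perf_counter()
    while sent < total:
        tx.send(chunk)
        sent += payload_len
        calls += 1
    elapsed = time.perf_counter() - start
    tx.close()
    rx.close()
    return calls, elapsed

small_calls, small_t = send_bytes(10_000_000, 1470)   # iperf's default size
large_calls, large_t = send_bytes(10_000_000, 50000)  # the suggested -l value
print(f"1470 B payloads: {small_calls} send calls in {small_t:.3f}s")
print(f"50000 B payloads: {large_calls} send calls in {large_t:.3f}s")
```

With 1470-byte payloads the sender makes roughly 34x more system calls for the same data, which is why a larger -l helps the UDP numbers.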
You also probably need to add a space in '-b10G' so that iperf knows 10G is the value for the -b parameter. Also, I believe the capital G means gigaBYTES. Your maximum achievable bandwidth in the TCP test is 26 gigaBITS, which isn't anywhere close to 10 GB. I would set your -b parameter value to 26g, with a lower-case g.
I suspect you're using the old iperf version 2.0.5, which has known performance problems with UDP. I'd suggest upgrading to 2.0.10.
iperf -v will show the version.
Note 1: The primary issue in 2.0.5 behind this problem is mutex contention between the client thread and the reporter thread. The shared memory between these two threads was increased to address it.
Note 2: There are other performance-related enhancements in 2.0.10.
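The Note 1 fix can be illustrated with a toy sketch (my own analogy in Python, not iperf's actual C code): a sender thread hands per-packet records to a reporter thread through a bounded shared buffer. With a one-slot buffer the sender stalls on the reporter constantly; with a larger buffer it can run ahead, which is essentially what enlarging the shared memory achieved.

```python
import queue
import threading
import time

def run(buffer_slots, records=500):
    """Count how often the sender finds the shared buffer full."""
    buf = queue.Queue(maxsize=buffer_slots)
    stalls = 0

    def reporter():
        # Deliberately slow consumer, like a reporter thread
        # formatting interval statistics.
        for _ in range(records):
            buf.get()
            time.sleep(0.0005)

    t = threading.Thread(target=reporter)
    t.start()
    for i in range(records):
        try:
            buf.put_nowait(i)
        except queue.Full:
            stalls += 1      # sender had to wait on the reporter
            buf.put(i)       # fall back to a blocking put
    t.join()
    return stalls

print("1-slot buffer stalls:", run(1))
print("1000-slot buffer stalls:", run(1000))
```

The 1000-slot buffer never blocks the sender for this workload, while the 1-slot buffer forces it to wait for the reporter on nearly every record.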
Bob