When using UDP, you really need to look at the output on the server side. UDP has no congestion control, so the client just sends a constant-bitrate stream (1 Mbit/s by default; change it with -b) regardless of what the network can actually deliver. It's the server side where iperf reports what arrived: the received bitrate, along with jitter and packet loss.
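For reference, a minimal UDP run with classic iperf (iperf2 syntax; the 1 Mbit/s rate is just an example, and 10.50.15.19 is the server address from the sample output below) looks like:

    iperf -s -u -i 1                # server: UDP mode, report every second
    iperf -c 10.50.15.19 -u -b 1M   # client: 1 Mbit/s UDP stream, 10 s by default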
What's the reason for using UDP? It's mostly useful for specialized cases, like checking jitter with various packet sizes. For plain throughput testing, TCP mode is much more useful (though you may have to raise the window size if there's a large delay between the client and the server, since the default window caps throughput on high-latency paths).
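As a rough sketch of both cases, again in iperf2 syntax (the 200-byte datagram size and 1 MB window are illustrative values, not tuned recommendations):

    # UDP: check jitter with small datagrams (-l sets the datagram size)
    iperf -c 10.50.15.19 -u -b 1M -l 200

    # TCP: raise the window on both ends for a high-latency path
    iperf -s -w 1M                    # server
    iperf -c 10.50.15.19 -w 1M        # client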
Here's some sample server-side output in UDP mode:
    [  3] local 10.50.15.19 port 5001 connected with 10.50.200.226 port 53516
    [ ID] Interval       Transfer     Bandwidth        Jitter   Lost/Total Datagrams
    [  3]  0.0-10.0 sec  1.25 MBytes  1.05 Mbits/sec   1.326 ms    0/  893 (0%)
    [  4] local 10.50.15.19 port 5001 connected with 10.50.200.226 port 57697
    [  4]  0.0-10.0 sec  1.25 MBytes  1.05 Mbits/sec   2.775 ms    1/  892 (0.11%)
    [  4]  0.0-10.0 sec  2 datagrams received out-of-order