I'm running some tests to measure the performance of UDP. I've got a server
application running under Windows which reads UDP datagrams and a client
application running under Linux that sends datagrams. It seems that the more
datagrams that arrive at the server's network interface, the fewer datagrams
are read by the application layer. I guess it has something to do with
datagrams being dropped due to receive-queue overflow, but I don't know how
to prove this, much less how to overcome it. Following are the figures I got.
Datagrams at the network layer:       10000   6000   4500
Datagrams at the application layer:    1100   4000   3800
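One way to check the queue-overflow hypothesis is to look at the kernel's own UDP drop counters on each end. A sketch for the Linux side (the Windows server has an analogous counter):

```shell
# On Linux, per-protocol UDP counters live in /proc/net/snmp.
# "InErrors" and "RcvbufErrors" count datagrams the kernel dropped
# before the application could read them; a receive-queue overflow
# shows up there. (netstat -su prints the same data.)
grep '^Udp:' /proc/net/snmp
# On the Windows server, "netstat -s" prints a comparable
# "Receive Errors" counter under its UDP section.
```

If the network-layer count minus the application-layer count roughly matches the growth of these error counters during a test run, that is strong evidence the datagrams are being dropped at the socket receive queue rather than on the wire.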
Any hint as to why this happens will be more than welcome.
This archive was generated by hypermail 2b29 : Sat Aug 31 2002 - 22:00:01 EST