Signals lost when using asynchronous IO

From: Mikael Vidstedt (mikeee@stacken.kth.se)
Date: Wed Jul 12 2000 - 04:08:53 EST


I'm trying to use non-blocking asynchronous IO, setting a user-defined
real-time signal to be sent when IO is completed on a socket. This works
fine most of the time, and I get the specified signal when new data is
available. However, sometimes signals appear to be lost. Debugging
indicates that this happens when the remote end closes the socket. The
problem only occurs when there are many sockets active, all using
asynchronous IO, and many signals are being generated. And, yes, I do check
for SIGIO as well, but none is delivered (i.e. the real-time signal queue is
not overflowing).
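
The SIGIO check matters because the kernel falls back to a plain SIGIO when
it cannot queue the real-time signal; the recovery amounts to roughly the
sketch below (the fd table, sizes and names are illustrative only, not the
actual code):

#define _GNU_SOURCE
#include <poll.h>
#include <signal.h>

#define MAX_FDS 1024

static struct pollfd watched_fds[MAX_FDS];  /* illustrative table of active sockets */
static int watched_count;
static volatile sig_atomic_t queue_overflowed;

/* When the kernel cannot queue the real-time signal (queue full) it
   sends a plain SIGIO instead; just note the overflow here. */
static void sigio_handler(int sig)
{
        (void)sig;
        queue_overflowed = 1;
}

/* Called from the main loop whenever queue_overflowed is set: poll
   every watched socket once to pick up events whose signals were
   dropped. */
static void rescan_all(void)
{
        queue_overflowed = 0;
        if (poll(watched_fds, watched_count, 0) > 0) {
                /* service each fd with revents & (POLLIN | POLLHUP) here */
        }
}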

The sockets are created with (AF_INET, SOCK_STREAM), and O_NONBLOCK|O_ASYNC
is set on them using fcntl(). The user-defined signal is the first
real-time signal (value 32).
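
For reference, the setup described above amounts to roughly the sketch
below; the handler body, the error handling and the choice of SIGRTMIN are
illustrative assumptions, not the exact code:

#define _GNU_SOURCE
#include <fcntl.h>
#include <signal.h>
#include <unistd.h>

#define RT_IO_SIG SIGRTMIN        /* first real-time signal */

static void io_handler(int sig, siginfo_t *info, void *ctx)
{
        /* With F_SETSIG in effect, info->si_fd identifies the socket
           and info->si_band carries the event (POLLIN, POLLHUP, ...). */
        (void)sig; (void)info; (void)ctx;
        /* real work (read(), accept(), ...) would be queued from here */
}

static int setup_async(int fd)
{
        struct sigaction sa;

        sa.sa_sigaction = io_handler;
        sa.sa_flags = SA_SIGINFO;
        sigemptyset(&sa.sa_mask);
        if (sigaction(RT_IO_SIG, &sa, NULL) < 0)
                return -1;

        /* deliver signals for this fd to us, and use the real-time
           signal instead of SIGIO */
        if (fcntl(fd, F_SETOWN, getpid()) < 0)
                return -1;
        if (fcntl(fd, F_SETSIG, RT_IO_SIG) < 0)
                return -1;

        /* enable non-blocking, signal-driven IO on the socket */
        return fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_NONBLOCK | O_ASYNC);
}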

Is this a known problem, and if so, is there a work-around?

System: i386 (Pentium II)/Red Hat Linux release 6.0 (Hedwig)
Kernel: 2.2.16

Thank you,
Mikael



