Depending on your architecture (i.e., if requests come in via TCP file
descriptors), you can use select(). select() takes a timeout argument.
Before each call to select(), walk your set of pending timeouts, find
the minimum remaining time, and use that as the select() timeout. That
way select() wakes up no later than the earliest deadline, and you
control what you do and when. After select() returns, you can then
determine which timeouts have occurred and act on them appropriately.
I use this, for example, to detect when a function should be polled
while also waiting on event-driven I/O from a TCP socket. I also use it
to detect when a heartbeat message should have been received but was not
within the allotted time, in which case the socket is closed and
reconnected.
Tom Sanders wrote:
>I'm writing an application server which receives
>requests from other applications. For each request
>received, I want to start a timer so that I can fail
>the application request if it could not be completed
>in max specified time.
>Which Linux timer facility can be used for this?
>I have checked out the alarm() and signal() system calls,
>but these calls don't take an argument, so it's not
>possible to associate the application request with the
>Thanks in advance,
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to firstname.lastname@example.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
This archive was generated by hypermail 2b29 : Thu Jan 23 2003 - 22:00:30 EST