Not ones that leave the wait queues alone until they've confirmed that
you're going to block!
Under low load, you will typically block. Under high load, you will rarely
block. A good implementation becomes more efficient under load.
> > block, so all it should require is a single linear traverse through the
> > list of file descriptors, tinkering with one byte each to indicate status.
> > No memory should need to be allocated, since it's all provided by the
> > user. No wait queues should need to be touched.
>
> You have to touch all the wait queues in case you want to sleep.
Why?
> So it becomes
> 16384 iterations of sock->ops->poll() doing a wait queue add and then
> 16384 tear downs.
Well, that's not good. Wouldn't it be smarter to touch the wait queues only once
you've confirmed that you need to block? That way, in the no-block case
(which is the only one that matters under load), your implementation will be
extremely fast.
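Roughly what I have in mind, as a standalone userspace model rather than real
kernel code (the fd_state struct and the register_wait()/unregister_wait()
helpers are made-up stand-ins for sock->ops->poll() and the wait-queue
add/remove calls):

/* Model of the "check first, register only if we must sleep" approach. */
#include <stdio.h>

struct fd_state {
	int ready;	/* what sock->ops->poll() would report */
	int on_waitq;	/* nonzero once a wait-queue entry has been added */
};

static void register_wait(struct fd_state *f)   { f->on_waitq = 1; }
static void unregister_wait(struct fd_state *f) { f->on_waitq = 0; }

/* Returns the number of ready descriptors.  Wait queues are touched only
 * when nothing is ready and the caller is actually prepared to block. */
static int poll_sketch(struct fd_state *fds, int nfds, int may_block)
{
	int i, nready = 0;

	/* Pass 1: one linear traverse, one status flag per descriptor,
	 * no allocation, no wait-queue work. */
	for (i = 0; i < nfds; i++)
		if (fds[i].ready)
			nready++;

	if (nready || !may_block)
		return nready;		/* the common case under load */

	/* Pass 2: nothing ready, so now pay for the wait-queue adds,
	 * sleep, and tear them down again after wakeup. */
	for (i = 0; i < nfds; i++)
		register_wait(&fds[i]);
	/* ... sleep here until woken, then rescan readiness ... */
	for (i = 0; i < nfds; i++)
		unregister_wait(&fds[i]);
	return 0;
}

int main(void)
{
	struct fd_state fds[4] = { {0, 0}, {1, 0}, {0, 0}, {1, 0} };

	printf("%d ready, no wait queues touched\n", poll_sketch(fds, 4, 1));
	return 0;
}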
> There has been experimental work on caching the tables to avoid some of
> the overhead, but for large-scale use, signal-based I/O with rt signal
> returns is going to scale better, as you pay one setup per handle rather
> than one setup per handle per event.
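For reference, the one-setup-per-handle path you're describing looks to me
like the F_SETSIG/O_ASYNC route; a minimal sketch of the setup side, with
error checking left out and stdin standing in for a real socket:

/* One-time, per-handle setup for queued rt-signal I/O, as opposed to
 * re-registering every descriptor on every poll() call.  Error handling
 * and the overflow fallback (plain SIGIO when the rt queue fills) are
 * omitted. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void setup_rtsig_io(int fd, int rtsig)
{
	fcntl(fd, F_SETOWN, getpid());		/* deliver signals to us       */
	fcntl(fd, F_SETSIG, rtsig);		/* queue an rt signal + siginfo */
	fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_ASYNC | O_NONBLOCK);
}

int main(void)
{
	int rtsig = SIGRTMIN + 1;
	sigset_t set;
	siginfo_t info;

	sigemptyset(&set);
	sigaddset(&set, rtsig);
	sigprocmask(SIG_BLOCK, &set, NULL);	/* collect with sigwaitinfo()  */

	setup_rtsig_io(STDIN_FILENO, rtsig);	/* the setup cost, paid once   */

	for (;;) {
		if (sigwaitinfo(&set, &info) < 0)
			continue;
		printf("event on fd %d, band %ld\n",
		       info.si_fd, (long)info.si_band);
	}
}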
The problem is, a single call to poll can tell you that 10,000 sockets
are ready for I/O. It's much more expensive to deliver 10,000 signals. Of
course, this assumes a poll implementation that doesn't assume you are going
to block.
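To make the comparison concrete, here's a toy of the batching side (NFDS
shrunk from 10,000 to 3, and stdin standing in for real sockets): one system
call reports every ready descriptor, and the caller just walks revents.

/* One poll() call reports all ready descriptors in a single pass over
 * revents; compare with one queued signal delivered per ready socket. */
#include <poll.h>
#include <stdio.h>

#define NFDS 3			/* stands in for the 10,000-socket case */

int main(void)
{
	struct pollfd pfd[NFDS];
	int i, n;

	for (i = 0; i < NFDS; i++) {
		pfd[i].fd = 0;		/* stdin; any valid fd will do here */
		pfd[i].events = POLLIN;
	}

	n = poll(pfd, NFDS, 0);		/* one syscall, however many are ready */
	printf("%d descriptors ready\n", n);
	for (i = 0; i < NFDS; i++)
		if (pfd[i].revents & POLLIN)
			printf("slot %d is readable\n", i);
	return 0;
}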
Sounds to me like a Linux problem being blamed on an API.
DS