Exporting a lot of data to other processes?

From: Ph. Marek
Date: Thu Oct 25 2007 - 02:05:50 EST


Hello everybody,

I've been pondering a question for some time now, and would like to ask
here for better ideas. It's not entirely about the kernel - although the
kernel surely has some impact, too.


I've got a daemon that wants to export information to other processes.
As a model for exporting that data, sysfs and procfs seem nice - a simple
"cat" gives you the needed data.

Now, to do something like that from userspace, I'd have to do one of the
following:

-) use FUSE
   - feels slow (many context switches)
   - a lot of overhead for such a common task (yet another daemon)
-) use named pipes in some directory structure, and keep them open in
   the daemon, waiting until they can be written to
   - many open filehandles
   - doesn't feel really usable for bigger (>1000 entries) structures
-) use some ramfs/shmfs or similar, and periodically overwrite the data
   (a sketch follows right below)
   - the data is never quite current
   - runtime overhead (processor load), even when nobody is reading
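
To make the third option concrete, here's roughly what I have in mind
(paths made up; "uptime" just stands in for the real data):

#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    for (;;) {
        /* Write the new contents to a temporary file first ... */
        FILE *f = fopen("/dev/shm/mydaemon.tmp", "w");   /* made-up path */
        if (!f)
            return 1;
        fprintf(f, "uptime=%ld\n", (long)time(NULL));
        fclose(f);

        /* ... then rename() it over the exported name, so a concurrent
         * "cat" never sees a half-written file. */
        rename("/dev/shm/mydaemon.tmp", "/dev/shm/mydaemon.status");

        sleep(1);    /* readers see data up to one interval old */
    }
}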

Now, the open/close events wouldn't be interesting; only the read() and
(possibly) write() events would have to be relayed to the daemon - which
is not how FUSE works, IIUC: it relays everything (see the sketch below).
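
Just so we're talking about the same thing, this is roughly what the FUSE
variant would look like with the high-level libfuse API (FUSE 2.x; the
file name and contents are made up). Every read() does reach the daemon,
but so do the lookup/getattr/open/release requests around it:

#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <time.h>

/* Render the current contents of /status; called from getattr *and*
 * read, because the size reported by getattr has to match what read()
 * later delivers. */
static int render(char *buf, size_t size)
{
    return snprintf(buf, size, "uptime=%ld\n", (long)time(NULL));
}

static int my_getattr(const char *path, struct stat *st)
{
    char tmp[64];

    memset(st, 0, sizeof(*st));
    if (strcmp(path, "/") == 0) {
        st->st_mode = S_IFDIR | 0755;
        st->st_nlink = 2;
    } else if (strcmp(path, "/status") == 0) {
        st->st_mode = S_IFREG | 0444;
        st->st_nlink = 1;
        st->st_size = render(tmp, sizeof(tmp));
    } else {
        return -ENOENT;
    }
    return 0;
}

static int my_readdir(const char *path, void *buf, fuse_fill_dir_t filler,
                      off_t offset, struct fuse_file_info *fi)
{
    if (strcmp(path, "/") != 0)
        return -ENOENT;
    filler(buf, ".", NULL, 0);
    filler(buf, "..", NULL, 0);
    filler(buf, "status", NULL, 0);
    return 0;
}

/* Every read() on /status is relayed here - the data is always current,
 * but each access costs kernel round trips. */
static int my_read(const char *path, char *buf, size_t size, off_t off,
                   struct fuse_file_info *fi)
{
    char data[64];
    int len;

    if (strcmp(path, "/status") != 0)
        return -ENOENT;
    len = render(data, sizeof(data));
    if (off >= len)
        return 0;
    if (off + (off_t)size > (off_t)len)
        size = len - off;
    memcpy(buf, data + off, size);
    return size;
}

static struct fuse_operations my_ops = {
    .getattr = my_getattr,
    .readdir = my_readdir,
    .read    = my_read,
};

int main(int argc, char *argv[])
{
    return fuse_main(argc, argv, &my_ops, NULL);
}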

Is there some better way? For small structures the pipes seem to be the
best approach ... just wait for a reader, hand it the data, and be done.
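
For completeness, here's roughly the pipe pattern I mean - one FIFO per
exported item, with the daemon parked in open() until a reader turns up
(path made up):

#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <sys/stat.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/tmp/mydaemon/status";    /* made-up path */

    signal(SIGPIPE, SIG_IGN);    /* the reader may vanish mid-write */
    mkdir("/tmp/mydaemon", 0755);
    mkfifo(path, 0644);

    for (;;) {
        char buf[64];
        int fd, len;

        /* Blocks until a reader opens the FIFO, e.g.
         * "cat /tmp/mydaemon/status" ... */
        fd = open(path, O_WRONLY);
        if (fd < 0)
            return 1;

        /* ... so the contents are generated only when asked for. */
        len = snprintf(buf, sizeof(buf), "uptime=%ld\n", (long)time(NULL));
        write(fd, buf, len);
        close(fd);    /* the reader sees EOF */
    }
}

The catch is that every FIFO needs its own blocked open() (or its own
thread), which is exactly where the "many open filehandles" problem
above comes from.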


Thank you in advance for any ideas.


Regards,

Phil