> Can someone explain why core dumping can't be done in userspace?
...
> There must be a good reason Unix and Linux don't do this ... but I
> haven't thought of it yet. Anyone care to enlighten me?
The problem, I believe, is that once a process has reached the point
where it has been delivered a core dumping signal (of which there are
more than just SIGSEGV, of course), you can't rely on anything about
its internal state.
So, in particular, it could have unmapped all its writeable memory, or
have overwritten it with zeroes, or ...
Therefore it's not at all clear that there's enough of the process left
to guarantee that code running inside it could still write a core file.
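
To make that concrete, here's a contrived sketch (my own illustration,
not from the original discussion) of a process that destroys exactly the
state a core file is meant to capture before it faults. An in-process
SIGSEGV handler can still run, here only because it sits on an alternate
signal stack, but the interesting mapping is already gone by then:

    /* Illustration only: by the time a SIGSEGV handler runs, the process
     * may already have destroyed the very state a core file should record. */
    #include <signal.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static void on_segv(int sig)
    {
        (void)sig;
        /* Only async-signal-safe calls here; the heap or libc state may be
         * exactly what got trashed, so malloc()/stdio can't be trusted. */
        static const char msg[] = "handler ran, but what is left to dump?\n";
        write(STDERR_FILENO, msg, sizeof msg - 1);
        _exit(1);
    }

    int main(void)
    {
        /* Alternate stack: the handler survives even if the normal stack
         * is unusable.  64 KiB is an arbitrary, comfortably large size. */
        static char altstack[64 * 1024];
        stack_t ss = { .ss_sp = altstack, .ss_size = sizeof altstack };
        sigaltstack(&ss, NULL);

        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_segv;
        sa.sa_flags = SA_ONSTACK;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGSEGV, &sa, NULL);

        /* The process shoots itself in the foot before faulting: the
         * mapping a debugger would most like to see is unmapped first. */
        size_t len = 1 << 20;
        char *region = mmap(NULL, len, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (region == MAP_FAILED)
            return 1;
        memset(region, 0xAA, len);
        munmap(region, len);

        region[42] = 'x';   /* faults: nothing useful left to record */
        return 0;
    }

Any scheme that relies on the failing process dumping itself has to cope
with cases like this, which is exactly the problem.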
If you really wanted to write core files from user space, the way to do
it would be to have a separate daemon process, well known to the kernel,
whose sole job was to dump the core of failing processes when requested
to do so by the kernel. FWIW I believe this is what HURD does.
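
As a rough sketch of what such a kernel-invoked dumper might look like
(my illustration; it assumes the kernel pipes the core image to the
helper's stdin and passes the dying PID as argv[1], which is roughly how
the core_pattern "|/path/to/helper %p" mechanism in later Linux kernels
ended up working):

    /* Minimal sketch of a kernel-invoked core dump helper.  Assumes the
     * core image arrives on stdin and the PID as argv[1]; the output
     * directory is arbitrary and just for illustration. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <pid>\n", argv[0]);
            return 1;
        }

        char path[64];
        snprintf(path, sizeof path, "/var/crash/core.%s", argv[1]);

        int out = open(path, O_WRONLY | O_CREAT | O_EXCL, 0600);
        if (out < 0) {
            perror("open");
            return 1;
        }

        /* Copy the core image from stdin to the target file.  The dying
         * process's memory is never touched from user space here; the
         * kernel is the one reading it out and feeding it to us. */
        char buf[65536];
        ssize_t n;
        while ((n = read(STDIN_FILENO, buf, sizeof buf)) > 0) {
            ssize_t off = 0;
            while (off < n) {
                ssize_t w = write(out, buf + off, n - off);
                if (w < 0) {
                    perror("write");
                    close(out);
                    return 1;
                }
                off += w;
            }
        }
        close(out);
        return 0;
    }

The point is that the helper never has to trust the dying process's own
code, stack, or heap; it just copies bytes the kernel hands it.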
This would be rather a micro-kernelish approach, but since core dumping
is both rare and expensive anyway, any performance hit from doing it
this way would be irrelevant.
Such an approach might be nice (the kernel would get smaller, and
embedded folks could simply leave out the core dumping daemon), but it's
a much bigger change than anyone would want to make now for 2.4.
It's also hard to summon much enthusiasm to do it, since it's deep
into the "If it's not broken don't fix it" area.
-- Jim
James Cownie <jcownie@etnus.com>
Etnus, LLC. +44 117 9071438
http://www.etnus.com