On 8 Aug 2002, Paul Larson wrote:
> The original issue that I had with all of this is the fact that if the
> current algorithm can't find an available pid, it just sits there
> churning forever and hangs the machine. My original patch was really
> just a very basic fix for that (see the 2.4 tree). This makes it far
> less likely that we'll max out, but if we do, aren't we just going to
> have the same trouble all over again?
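
The failure mode described above is an allocation loop with no exit
condition. A minimal, self-contained sketch of the kind of bounded scan
such a fix implies (hypothetical names and table size, not the actual
2.4 patch) could look like this:

#include <stdbool.h>
#include <stdio.h>

#define PID_MAX        32768   /* hypothetical pid table size */
#define RESERVED_PIDS  300     /* low pids kept for system daemons */

static unsigned char pid_map[PID_MAX / 8];   /* toy in-use bitmap */
static int last_pid = RESERVED_PIDS;

static bool pid_in_use(int pid)
{
	return pid_map[pid / 8] & (1 << (pid % 8));
}

/* Scan at most one full pass; fail instead of looping forever. */
static int get_pid_bounded(void)
{
	int pid = last_pid;
	int scanned;

	for (scanned = 0; scanned < PID_MAX; scanned++) {
		if (++pid >= PID_MAX)
			pid = RESERVED_PIDS;   /* wrap, skip reserved pids */
		if (!pid_in_use(pid)) {
			pid_map[pid / 8] |= (unsigned char)(1 << (pid % 8));
			last_pid = pid;
			return pid;
		}
	}
	return -1;   /* table full: fail the fork, don't hang the machine */
}

int main(void)
{
	/* Allocate a few pids to show the bounded scan in action. */
	for (int i = 0; i < 5; i++)
		printf("allocated pid %d\n", get_pid_bounded());
	return 0;
}
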
You'll need at least 330 million tasks to run out.
At a minimum kernel memory allocation of about 8 kB per
task, that's about 2600 GB of kernel data structures.
I'm not sure we'll ever hit that limit; not because
we won't have a few TB of kernel data space at some
point in the future, but because 330 million tasks is
a lot more than we'd want to manage with just a few
CPUs ;)
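
For reference, the estimate above is just 330 million tasks times
roughly 8 kB of kernel memory each; a trivial program reproduces it
(both figures are the ones quoted above, not measured):

#include <stdio.h>

int main(void)
{
	unsigned long long tasks    = 330ULL * 1000 * 1000;  /* ~330 million pids */
	unsigned long long per_task = 8ULL * 1024;           /* ~8 kB kernel data per task */
	unsigned long long total    = tasks * per_task;

	/* Prints roughly 2700 GB, the same ballpark as the ~2600 GB above. */
	printf("%llu bytes, about %llu GB of kernel data structures\n",
	       total, total / (1000ULL * 1000 * 1000));
	return 0;
}
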
kind regards,
Rik
-- 
http://www.linuxsymposium.org/2002/
"You're one of those condescending OLS attendants"
"Here's a nickle kid. Go buy yourself a real t-shirt"

http://www.surriel.com/		http://distro.conectiva.com/