I'm intrigued.
> 1) Is there a maximum number of TASKS that I can have? I know
> that this number is tunable in "tasks.h", but are there any other limits
> that I should be aware of? (Can I just define it as 10,000?)
It's tunable. On some platforms (e.g. x86) it has a hardware limit. There
shouldn't be a software one, but I'm not an Alpha geek.
> 2) Is there a maximum number of OPEN_FILES that I can have?
> Once again, I know that this number is tunable in "fs.h" and "limits.h",
> and that /proc/sys/fs/file-max exists, but are there any other limits
> that I should be aware of? (I heard some rumblings about issues with
> select...) (I have also heard of a 3000fds patch... Where does this
> exist? )
There is a per-process limit in 2.0 related to the stack size; there is a
patch to work around this. If you can't find it, email me and I'll send it
to you off list.
> 3) On Digital Unix, we eventually run into a limit on the number of
> processes because of memory. (Each process has a certain footprint, and each
> new one takes a small piece of memory from the overall available.) What is
> the footprint of a Linux process, and can it be reduced?
One page for the kernel stack, some memory for the page tables, and then one
page for the process info and sundries (like the fd table), rising as you
raise the max files-per-process limit. I'd expect the numbers to be fairly
similar to the values Digital Unix takes, actually.
Alan
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/