The "oom-thresh" value maps to the max expected memory consumption for that process. As long as a process uses less memory than the specified threshold, then it is immune to the oom-killer.
You've just introduced a deadlock. What happens if no process is over
its predicted memory threshold and the kernel itself needs more resources?
On an embedded platform this allows the designer to engineer the system
and protect critical apps based on their expected memory consumption.
If one of those apps goes crazy and starts chewing additional memory,
it becomes vulnerable to the oom-killer while the other apps remain
protected.
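To make the proposed policy concrete, here is a toy userspace sketch of
the selection rule. The struct, field names, and numbers are made up for
illustration; the real check would live in the kernel's OOM path:

#include <stdio.h>

/* Hypothetical per-process record: current usage vs. declared oom-thresh. */
struct proc {
    const char *name;
    unsigned long usage_kb;   /* current memory consumption */
    unsigned long thresh_kb;  /* engineered maximum ("oom-thresh") */
};

/* A process only becomes an oom-killer candidate once it reaches its
 * declared threshold; below it, the process is immune. */
static int oom_candidate(const struct proc *p)
{
    return p->usage_kb >= p->thresh_kb;
}

int main(void)
{
    struct proc procs[] = {
        { "watchdog",   2048, 4096 },  /* under budget: protected  */
        { "media-app", 16384, 8192 },  /* gone crazy:  vulnerable  */
    };

    for (unsigned i = 0; i < sizeof(procs) / sizeof(procs[0]); i++)
        printf("%-10s %s\n", procs[i].name,
               oom_candidate(&procs[i]) ? "oom candidate" : "immune");
    return 0;
}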
That is why we have no-overcommit support. Now there is an argument for
a meaningful rlimit-as to go with it, and together I think they do what
you really need.
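For completeness, a minimal sketch of the rlimit-as half of that
combination. The 64 MiB budget is an arbitrary illustrative figure;
strict no-overcommit accounting is enabled separately, system-wide,
with vm.overcommit_memory=2:

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    /* Cap this process's address space at an engineered budget so
     * allocations past it fail with ENOMEM instead of waking the
     * oom-killer. 64 MiB is just an example number. */
    struct rlimit rl = {
        .rlim_cur = 64UL << 20,   /* soft limit: 64 MiB */
        .rlim_max = 64UL << 20,   /* hard limit: 64 MiB */
    };

    if (setrlimit(RLIMIT_AS, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }

    /* ... exec or run the memory-budgeted workload here ... */
    return 0;
}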