On Tue, 12 Sep 2000, Martin Dalecki wrote:
> Second: The concept of time can give you very, very nasty
> behaviour in edge cases. [integer arithmetic]
Point taken.
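
The classic integer-arithmetic trap with time is tick-counter
wraparound: a plain ">" comparison on a wrapped counter gives the
wrong answer. The kernel's time_after() dodges this by comparing
through signed subtraction. A minimal sketch of both, with
fixed-width types standing in for jiffies (the ticks_after name
is mine, not kernel code):

#include <stdio.h>
#include <stdint.h>

/* modeled on the kernel's time_after(): wrap-safe comparison */
#define ticks_after(a, b)  ((int32_t)((b) - (a)) < 0)

int main(void)
{
	uint32_t now = 0xfffffff0u;	/* counter about to wrap */
	uint32_t deadline = now + 32;	/* wraps around to 16 */

	/* naive compare: claims the deadline already passed (wrong) */
	printf("naive  now >= deadline: %d\n", now >= deadline);

	/* wrap-safe compare: deadline is still 32 ticks away (right) */
	printf("safe   after(now, deadline): %d\n",
	       ticks_after(now, deadline));
	return 0;
}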
> Third: All you try to improve is the boundary case between an
> entirely overloaded system and a system which has a huge reserve
> to get the task done. I don't think you can find any
> "improvement" which will not just improve some cases and badly
> hurt other, only slightly different, cases. That's basically the
> same problem as with the choice of paging strategy to follow.
> (However, we have some kind of "common sense" in respect of
> this, despite the fact that Linux ignores it...)
Please don't ignore my VM work ;)
http://www.surriel.com/patches/
> Fourth: The most common solution for such boundary cases is
> some notion of cost optimization, like the nice value of a
> process or page age for example, or alternatively some kind of
> choice between entirely different strategies (remember the term
> strategy routine...) - all of them are just *relative* measures,
> not absolute time constraints.
Indeed, we'll need to work with relative measures to
make sure both throughput and latency are OK. Some kind
of (very simple) self-tuning system is probably best here.
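
To make "very simple self-tuning" concrete, a throwaway sketch:
grow a batching window additively while observed latency stays
under a target, back off multiplicatively when it doesn't. All
names and thresholds below are invented for illustration; none
of this is existing kernel code.

#include <stdio.h>

#define WINDOW_MIN	1
#define WINDOW_MAX	64
#define LATENCY_TARGET	10	/* relative units, not wall-clock */

static int window = 8;		/* current batching window */

static void tune_window(int observed_latency)
{
	if (observed_latency > LATENCY_TARGET) {
		/* latency suffering: back off quickly */
		window /= 2;
		if (window < WINDOW_MIN)
			window = WINDOW_MIN;
	} else if (window < WINDOW_MAX) {
		/* latency fine: creep up for throughput */
		window++;
	}
}

int main(void)
{
	int samples[] = { 4, 6, 12, 15, 7, 5, 20, 3 };
	unsigned int i;

	for (i = 0; i < sizeof(samples) / sizeof(samples[0]); i++) {
		tune_window(samples[i]);
		printf("latency %2d -> window %d\n", samples[i], window);
	}
	return 0;
}

The additive-increase / multiplicative-decrease shape is the same
trick TCP uses for its congestion window; it converges without
any absolute time constants, which is the point.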
regards,
Rik
-- "What you're running that piece of shit Gnome?!?!" -- Miguel de Icaza, UKUUG 2000http://www.conectiva.com/ http://www.surriel.com/