2) Virtual interrupts have a relatively high overhead compared with native interrupts. So, in vmitime, we wanted to be able to lower the timer interrupt rate at runtime, even though HZ is a compile-time constant (and set to something high, like 1000 Hz). While we could hack this in by using evt->min_delta_ns, it wouldn't really work, since process time accounting would be wrong. Instead, we should allow the tick_sched_timer in cases (c) and (d) to have a runtime-configurable period, and then scale the time value accordingly before passing it to account_system_time; a rough sketch follows below. This is probably something the Xen folks will want as well, since I think Xen itself only gets a 100 Hz hard timer, so it can implement at best a one-shot virtual timer with 100 Hz resolution. Any objections to us doing something like this?
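Roughly, the scaling we have in mind looks like the following. This is only a sketch of the arithmetic, modeled as compilable userspace code: tick_period_ns and vtick_account() are made-up names, not existing kernel interfaces, and account_system_ticks() is a stand-in for the real account_system_time() path.

/*
 * Sketch only. Models a runtime-configurable tick period on top of a
 * compile-time HZ, scaling each expiry back into HZ-sized ticks so
 * that process time accounting stays correct.
 */
#include <stdint.h>
#include <stdio.h>

#define HZ              1000                    /* compile-time tick rate */
#define NSEC_PER_SEC    1000000000ULL
#define NSEC_PER_TICK   (NSEC_PER_SEC / HZ)     /* 1 ms at HZ=1000 */

/*
 * Runtime-configurable period: a hypervisor guest could raise this to,
 * say, 10 ms while HZ stays at 1000. (Invented variable name.)
 */
static uint64_t tick_period_ns = 10 * NSEC_PER_TICK;

/* Stand-in for account_system_time(): takes time in HZ-sized ticks. */
static void account_system_ticks(uint64_t ticks)
{
	printf("accounting %llu ticks\n", (unsigned long long)ticks);
}

/*
 * Called once per (virtual) timer expiry: scale the configured period
 * into HZ ticks before handing it to the accounting code.
 */
static void vtick_account(void)
{
	account_system_ticks(tick_period_ns / NSEC_PER_TICK);
}

int main(void)
{
	vtick_account();	/* accounts 10 ticks per 10 ms expiry */
	return 0;
}

The point is that one expiry of a 10 ms virtual tick gets accounted as ten HZ=1000 ticks, so accounting stays correct while the interrupt rate drops tenfold.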
Yes. It's gross hackery.
1) We want to clean up the tick assumptions _all_ over the place, and
this is going to be really hard work.
2) As I said above, the time accounting for virtualization needs to be
fixed in a generic way.
I'm not going to accept some weird hackery for virtualization, which is
of exactly ZERO value for the kernel itself. Quite the contrary: it will
make the cleanup harder and introduce another hard-to-remove thing,
which will in the worst case last forever.