Re: [linux-pm] [GIT PULL] One more power management fix for 2.6.37
From: Linus Torvalds
Date: Tue Nov 09 2010 - 19:50:25 EST
On Tue, Nov 9, 2010 at 3:39 PM, Rafael J. Wysocki <rjw@xxxxxxx> wrote:
>
> So, what about the patch below?
I think it looks saner, but I also think that it would be saner yet to
have a separate list entirely, and *not* do this crazy "move things
back and forth between 'dpm_list'" thing.
So I would seriously suggest that the design should aim for each
suspend event to move things between two lists, and as devices go
through the suspend phases, they'd move to/from lists that also
indicate which suspend level they are at.
So why not introduce a new list called "dpm_list_suspended", and in
"dpm_suspend_noirq()" you move devices one at a time from the
"dpm_list" to the "dmp_list_suspended".
And then in "dpm_resume_noirq()" you move them back.
Wouldn't that be nice?
(Optimally, you'd have separate lists for "active", "suspended", and
"irq-suspended")
But regardless, your patch does seem to at least match what we
currently do in the regular suspend/resume code (i.e. the case where
interrupts are not yet disabled). So I don't mind it. I just think that it
would be cleaner to not take things off one list only to put them back
on the same list again.
In particular, _if_ device add events can happen concurrently with
this, I don't understand how that would maintain the depth-first order
of the list. In contrast, if you do it with separate lists, then you
know that if a device is on the "suspended" list, then it by
definition has to be "after" all devices that are on the non-suspended
list, since you cannot have a non-suspended device that depends on a
suspended one.
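To make the concurrent-add case concrete in terms of the sketch above
(again only an illustration; the real device_pm_add() in
drivers/base/power/main.c appends to dpm_list under dpm_list_mtx):

/* A concurrently registered device is simply appended to the tail of
 * the still-active dpm_list.  Everything on dpm_list_suspended stays
 * "after" it by construction, so the depth-first ordering is never
 * violated. */
static void device_pm_add_sketch(struct device *dev)
{
	list_add_tail(&dev->entry, &dpm_list);
}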
So having separate lists with state should also be very sensible from
a device topology standpoint - while doing that "list_splice()" back
on the same list is _not_ at all obviously correct from a topological
standpoint (I'm not sure I'm explaining this well).
Linus