Re: [SUSPECTED SPAM] Re: [linux-pm] Proposal for a new algorithm for reading & writing a hibernation image.

From: Maxim Levitsky
Date: Fri Jun 04 2010 - 21:16:22 EST



> If the memory it writes to isn't protected, there'll be no recursive
> page fault and no problem, right? I'm imagining this page fault handler
> will only set a flag to record that the page needs to be atomically
> copied, copy the original contents to a page previously prepared for the
> purpose, remove the write protection for the page and allow the write to
> continue. That should be okay, right?
I think so, although I have no experience yet to comment on such things.
Despite that, I think you might run out of 'pages previously prepared for
the purpose'.
However, you can adopt a retry mechanism, like you do today in TuxOnIce:
just abort the suspend and do it again.
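
To make that concrete, a rough sketch of what such a write-fault handler
could look like is below. This is not against any real tree:
toi_wp_fault(), toi_mark_needs_atomic_copy() and toi_unprotect_page() are
made-up names, the hook into the arch fault path is hand-waved, and only
copy_page(), the atomics and the error codes are real kernel primitives.

#include <linux/mm.h>
#include <linux/errno.h>
#include <asm/atomic.h>		/* <linux/atomic.h> in later kernels */
#include <asm/page.h>

#define NR_SPARE_PAGES	256	/* arbitrary size of the reserve pool */

static void *spare_pool[NR_SPARE_PAGES];  /* filled before write-out starts */
static atomic_t spare_used = ATOMIC_INIT(0);

/* Hypothetical helpers; both would be arch specific in practice. */
static void toi_mark_needs_atomic_copy(unsigned long address);
static void toi_unprotect_page(unsigned long address);

/* Called for a write fault on a page we write-protected ourselves. */
static int toi_wp_fault(unsigned long address)
{
	int idx = atomic_inc_return(&spare_used) - 1;

	if (idx >= NR_SPARE_PAGES)
		return -ENOMEM;	/* pool exhausted: abort and retry the suspend */

	/* Preserve the original contents before the write goes through. */
	copy_page(spare_pool[idx], (void *)(address & PAGE_MASK));

	/* Remember that this page now needs to be copied atomically. */
	toi_mark_needs_atomic_copy(address);

	/* Drop the write protection so the faulting write can be restarted. */
	toi_unprotect_page(address);
	return 0;
}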

>
> > Of course the sucky part will be how to edit the page tables.
> > You might need to write your own code to do so to be sure.
> > And this has to be arch specific.
>
> Yeah. I wondered whether the code that's already used for creating page
> tables for the atomic restore could be reused, at least in part.
This is very dangerous.
The code might work now, and tomorrow somebody will add code that does
memory writes.
The point is that you must be sure that there are no recursive faults,
or somehow deal with them (which is probably too dangerous to even think
about).
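
For what it is worth, on x86 you would not necessarily have to hand-roll
the page table edits: the existing page attribute helpers could do the
protection. An illustration only (protect_kernel_range() and
unprotect_kernel_range() are made-up wrappers), with the caveat spelled
out in the comment:

#include <linux/mm.h>
#include <asm/cacheflush.h>	/* set_memory_ro()/set_memory_rw() on x86;
				   declared in <asm/set_memory.h> in much
				   later kernels */

/*
 * Make a range of kernel pages fault on write; the fault handler lifts
 * the protection again.  Caveat: set_memory_ro() may have to split large
 * pages, i.e. it writes to (and may allocate) page table pages, which is
 * exactly the kind of hidden memory write that can cause the recursive
 * faults discussed above.
 */
static int protect_kernel_range(unsigned long start, int nr_pages)
{
	return set_memory_ro(start, nr_pages);
}

static int unprotect_kernel_range(unsigned long start, int nr_pages)
{
	return set_memory_rw(start, nr_pages);
}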


>
> > Since userspace is frozen, you can be sure that faults can only be
> > caused by access to WO memory or kernel bugs.
>
> Userspace helpers or uswsusp shouldn't be forgotten.
This is especially bad, because a fault in userspace will mean swapping,
and you won't get away with a custom page fault handler for that.
You could either make sure before suspend that all relevant userspace is
not swapped out, or forget about userspace, because it is a minor thing
compared to the speed increase from writing the full memory image.
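
The first option is at least cheap on the userspace side: a uswsusp-style
helper can simply pin its own memory before it starts driving the image
write, so touching its pages never goes through the swap path. A minimal
sketch (the /dev/snapshot handling itself is left out):

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

int main(void)
{
	/* Lock everything mapped now and everything mapped later on. */
	if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
		perror("mlockall");
		return EXIT_FAILURE;
	}

	/* ... open /dev/snapshot and drive the image write as usual ... */
	return 0;
}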


Best regards,
Maxim Levitsky

