On Fri, Nov 14, 2014 at 05:53:19AM +0100, Juergen Gross wrote:
On 11/13/2014 08:56 PM, Konrad Rzeszutek Wilk wrote:
+ mfn_save = virt_to_mfn(buf);
+
+ while (xen_remap_mfn != INVALID_P2M_ENTRY) {
So the 'list' is constructed by going forward - that is from low-numbered
PFNs to higher numbered ones. But the 'xen_remap_mfn' is going the
other way - from the highest PFN to the lowest PFN.
Won't that mean we will restore the chunks of memory in the wrong
order? That is, we will still restore them chunk by chunk, but the
chunks will be in descending order instead of ascending?
No, the information where to put each chunk is contained in the chunk
data. I can add a comment explaining this.
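
To make that concrete, here is a minimal stand-alone sketch (toy sizes and
hypothetical names, not the patch code) of why the chunk order does not
matter: each chunk records the PFN it belongs at, so replaying the chunks in
reverse still puts every MFN at the right PFN.

#include <stdio.h>

#define NR_PAGES   12
#define CHUNK_MFNS 5	/* stand-in for the 509 of the real code */

struct chunk {
	unsigned long target_pfn;	/* first PFN this chunk belongs at */
	unsigned long size;		/* valid entries in mfns[] */
	unsigned long mfns[CHUNK_MFNS];
};

int main(void)
{
	unsigned long p2m[NR_PAGES];
	struct chunk chunks[3];
	unsigned long pfn = 0, n = 0, j;
	long i;

	/* "Save": walk the PFNs forward, filling chunks low to high. */
	while (pfn < NR_PAGES) {
		struct chunk *c = &chunks[n++];

		c->target_pfn = pfn;
		c->size = 0;
		while (pfn < NR_PAGES && c->size < CHUNK_MFNS)
			c->mfns[c->size++] = 1000 + pfn++;	/* fake MFNs */
	}

	/* "Restore": walk the chunks in the opposite order. */
	for (i = n - 1; i >= 0; i--) {
		struct chunk *c = &chunks[i];

		for (j = 0; j < c->size; j++)
			p2m[c->target_pfn + j] = c->mfns[j];
	}

	/* Every PFN still ends up with its own MFN (1000 + pfn). */
	for (pfn = 0; pfn < NR_PAGES; pfn++)
		printf("pfn %2lu -> mfn %lu\n", pfn, p2m[pfn]);
	return 0;
}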
Right, the MFNs within a "chunk" are going to be restored in the right order.
I was thinking that the "chunks" (each a set of MFNs) will be restored in
the opposite order from the one in which they were written.
And oddly enough the "chunks" are done in 512-3 = 509 MFNs at once?
More don't fit on a single page due to the other info needed. So: yes.
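
For reference, the arithmetic: a 4 KiB page holds 512 unsigned longs on
x86-64, and three of them go to the chunk header (link to the next chunk,
target PFN, count), leaving 509 slots for MFNs. Roughly - field names are
from memory, so treat this as a sketch rather than a quote of the patch:

#define P2M_PER_PAGE	(4096 / sizeof(unsigned long))	/* 512 on x86-64 */
#define REMAP_SIZE	(P2M_PER_PAGE - 3)		/* 509 MFN slots */

struct xen_remap_buf {
	unsigned long next_area_mfn;	/* MFN of the next chunk page */
	unsigned long target_pfn;	/* first PFN the chunk remaps */
	unsigned long size;		/* number of entries in mfns[] */
	unsigned long mfns[REMAP_SIZE];
};	/* (3 + 509) * 8 = 4096 bytes, i.e. exactly one page */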
But you could use two pages - one for the structure and the other
for the list of MFNs. That would fix the problem of having only
509 MFNs being contiguous per chunk when restoring.
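
A hypothetical two-page layout (not code from the patch, just to illustrate
the suggestion) could look like this, with the header on one page and the
MFN array on a second page, so a chunk could cover a full 512 MFNs:

#define MFNS_PER_PAGE	(4096 / sizeof(unsigned long))	/* 512 */

struct remap_hdr {			/* page 1: the chunk header */
	unsigned long next_hdr_mfn;	/* MFN of the next header page */
	unsigned long mfn_list_mfn;	/* MFN of this chunk's MFN page */
	unsigned long target_pfn;	/* first PFN the chunk remaps */
	unsigned long size;		/* up to MFNS_PER_PAGE entries */
};

struct remap_mfns {			/* page 2: the MFN list itself */
	unsigned long mfns[MFNS_PER_PAGE];
};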
Anyhow, the point I am worried about is that we do not restore the
MFNs in the same order. We do it in "chunk"-sized pieces, which is OK (so 509 MFNs
at a time) - but the order in which we traverse the chunks on restore is the opposite of
the save order. Say we have 4MB of contiguous MFNs (1024 pages), so two (err, three)
chunks. The first one we iterate over on save covers 0->508, the second 509->1017, the
last 1018->1023. When we restore (remap) we start with the last 'chunk',
so we end up restoring them in the order: 1018->1023, 509->1017, 0->508.
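
Presumably the reversal comes from how the list is built: each newly saved
chunk is (as far as I can tell) linked in front of the current head, so
'xen_remap_mfn' ends up pointing at the newest chunk and the restore walk
visits the chunks newest-first. A toy sketch of that (hypothetical names, a
pointer standing in for the next-chunk MFN):

struct chunk_page {
	struct chunk_page *next;	/* plays the role of next_area_mfn */
	/* ... target_pfn, size, mfns[] ... */
};

static struct chunk_page *remap_head;	/* plays the role of xen_remap_mfn */

static void save_chunk(struct chunk_page *c)
{
	c->next = remap_head;	/* old head becomes our successor */
	remap_head = c;		/* the newest chunk is the new head */
}

/* Saving chunks A (lowest PFNs), then B, then C (highest PFNs) yields the
 * list C -> B -> A, so the restore loop naturally visits C, B, A. */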