+ mfn_save = virt_to_mfn(buf);
+
+ while (xen_remap_mfn != INVALID_P2M_ENTRY) {
So the 'list' is constructed by going forward - that is from low-numbered
PFNs to higher numbered ones. But the 'xen_remap_mfn' is going the
other way - from the highest PFN to the lowest PFN.
Won't that mean we will restore the chunks of memory in the wrong
order? That is, we will still restore them a chunk at a time, but the
chunks will be in descending order instead of ascending?
No, the information about where to put each chunk is contained in the
chunk data. I can add a comment explaining this.
Right, the MFNs in a "chunk" are going to be restored in the right order.
I was thinking that the "chunks" (so the sets of MFNs) will be restored in
the opposite order from the one they were written in.
And, oddly enough, the "chunks" are handled 512 - 3 = 509 MFNs at a time?
+ /* Map the remap information */
+ set_pte_mfn(buf, xen_remap_mfn, PAGE_KERNEL);
+
+ BUG_ON(xen_remap_mfn != xen_remap_buf.mfns[0]);
+
+ free = 0;
+ pfn = xen_remap_buf.target_pfn;
+ for (i = 0; i < xen_remap_buf.size; i++) {
+ mfn = xen_remap_buf.mfns[i];
+ if (!released && xen_update_mem_tables(pfn, mfn)) {
+ remapped++;
If 'xen_update_mem_tables' fails, then from the next iteration (so i+1)
onward we will keep freeing pages instead of trying to remap. Is that
intentional? Could we try to remap?
Hmm, I'm not sure this is worth the effort. What could lead to failure
here? I suspect we could even just BUG() on failure. What do you think?
I was hoping that this question would lead to making this loop a bit
simpler, as you would have to split some of the code in the loop
out into functions, and keep 'remapped' and 'released' reset on
every iteration.
However, if it makes the code more complex - then please
forget my question.