Responding :-)
Please do. Also you would have to rebase all this code on the latest Linus's
tree as there are some changes there.
@@ -797,10 +794,9 @@ unsigned long __init set_phys_range_identity(unsigned long pfn_s,
if (!__set_phys_to_machine(pfn, IDENTITY_FRAME(pfn)))
break;
- if (!WARN((pfn - pfn_s) != (pfn_e - pfn_s),
+ WARN((pfn - pfn_s) != (pfn_e - pfn_s),
"Identity mapping failed. We are %ld short of 1-1 mappings!\n",
- (pfn_e - pfn_s) - (pfn - pfn_s)))
- printk(KERN_DEBUG "1-1 mapping on %lx->%lx\n", pfn_s, pfn);
+ (pfn_e - pfn_s) - (pfn - pfn_s));
The problem is because we now call set_phys_range_identity() on small
blocks, so the number of these messages printed is large. I can split it out
to a separate patch if you'd like.

I would leave that as is. If you really want to modify it, spin another
patch for it.
return pfn - pfn_s;
}
diff --git a/arch/x86/xen/p2m.h b/arch/x86/xen/p2m.h
new file mode 100644
index 0000000..9ce4d51
--- /dev/null
+++ b/arch/x86/xen/p2m.h
@@ -0,0 +1,15 @@
+#ifndef _XEN_P2M_H
+#define _XEN_P2M_H
+
+#define P2M_PER_PAGE (PAGE_SIZE / sizeof(unsigned long))
+#define P2M_MID_PER_PAGE (PAGE_SIZE / sizeof(unsigned long *))
+#define P2M_TOP_PER_PAGE (PAGE_SIZE / sizeof(unsigned long **))
+
+#define MAX_P2M_PFN (P2M_TOP_PER_PAGE * P2M_MID_PER_PAGE * P2M_PER_PAGE)
+
+#define MAX_REMAP_RANGES 10
Could you mention why 10 please? Is that based on a worst case scenario? As
in could you write in the comment an example of a worst case E820 scenario
and when would 10 be more than enough to cover it?

10 is simply what I thought a reasonable max could ever be for the number of
remapped ranges. A range here is defined as the combination of a contiguous
e820 I/O range and a contiguous e820 RAM range where the underlying I/O
memory will be remapped. I'm not really excited about this but I don't have
a better solution. Any ideas? I believe the original implementation has a
similar issue that it appears to ignore.
+ mod = remap_pfn % P2M_PER_PAGE;
+ remap_start_pfn_align =
+ mod ? (remap_pfn - mod + P2M_PER_PAGE) : remap_pfn;
+ mod = (start_pfn + size) % P2M_PER_PAGE;
+ ident_end_pfn_align = start_pfn + size - mod;
+ mod = (remap_pfn + size) % P2M_PER_PAGE;
+ remap_end_pfn_align = remap_pfn + size - mod;
Should you do a check to make sure that 'ident_[start|end]_pfn_align' don't
overlap with 'remap_[start|end]_pfn_align'?
+
+ /* Iterate over each p2m leaf node in each range */
+ for (ident_pfn_iter = ident_start_pfn_align, remap_pfn_iter = remap_start_pfn_align;
+ ident_pfn_iter < ident_end_pfn_align && remap_pfn_iter < remap_end_pfn_align;
+ ident_pfn_iter += P2M_PER_PAGE, remap_pfn_iter += P2M_PER_PAGE) {
+ /* Check we aren't past the end */
+ BUG_ON(ident_pfn_iter + P2M_PER_PAGE > start_pfn + size);
+ BUG_ON(remap_pfn_iter + P2M_PER_PAGE > remap_pfn + size);
Ah, here you check for it. But wouldn't it be easier to adjust the
remap_[start|end]_pfn_align above to check for this and make changes?
Right. The ident_* and remap_* address ranges will never overlap each other
unless the e820 map is broken. I'm treating both ranges as independent and
iterating over each separately at the same time. Each range will have
different boundary conditions potentially. Does that make sense? Am I
understanding the comment correctly?

I was thinking that if the E820 is broken then try our best (and stop
trying to remap) and continue. The E820 is guaranteed to be sane (no
overlapping regions), but the sizes could be bogus (zero size for example).
+
+ /* Save p2m mappings */
+ for (i = 0; i < P2M_PER_PAGE; i++)
+ xen_remap_buf[i] = pfn_to_mfn(ident_pfn_iter + i);
+
+ /* Set identity map which also frees a p2m leaf */

Perhaps change 'also frees' to 'MUST free' and then have (debug code) to
double-check that after calling the early_set_phys_identity?

for (i = 0; i < P2M_PER_PAGE; i++) {
	unsigned int pfn = ident_pfn_iter + i;
	BUG_ON(pfn_to_mfn(pfn) != INVALID_P2M_ENTRY);
}
?

Under what conditions is this not true? The goal of processing these in
blocks is to guarantee that a leaf is freed.

Naturally. I added a debug check. It must check for the identity page and
not INVALID_P2M_ENTRY however.
+ ident_cnt += set_phys_range_identity(ident_pfn_iter,
+ ident_pfn_iter + P2M_PER_PAGE);
+
+ /* Now remap memory */
+ for (i = 0; i < P2M_PER_PAGE; i++) {
+ unsigned long mfn = xen_remap_buf[i];
+ struct mmu_update update = {
+ .ptr = (mfn << PAGE_SHIFT) |
+ MMU_MACHPHYS_UPDATE,
+ .val = remap_pfn_iter + i
+ };
+
+ /* Update p2m, will use the page freed above */

What page freed above?

See above comment.

Perhaps replace 'use the page freed above' with 'use the P2M leaf freed
above'.

ACK.