On Thu, Oct 05, 2023 at 01:41:38PM -0500, Kalra, Ashish wrote:
+static void unshare_all_memory(bool unmap)
+{
+ unsigned long addr, end;
+ long found = 0, shared;
+
+ /*
+ * Walk the direct mapping and convert all shared memory back to private.
+ */
+
+ addr = PAGE_OFFSET;
+ end = PAGE_OFFSET + get_max_mapped();
+
+ while (addr < end) {
+ unsigned long size;
+ unsigned int level;
+ pte_t *pte;
+
+ pte = lookup_address(addr, &level);
IIRC, you were earlier walking the direct mapping using
walk_page_range_novma(). Any particular reason to use lookup_address()
instead?
walk_page_range_novma() wants the mmap lock to be taken, but that is tricky
as we run here from atomic context in the crash case.
I considered using trylock to bypass the limitation, but it is a hack.
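For reference, the trylock variant would look roughly like the below.
Completely untested sketch; unshare_ops is a stand-in for a hypothetical
mm_walk_ops that does the actual conversion:

#include <linux/mmap_lock.h>
#include <linux/pagewalk.h>

/* Hypothetical ops: .pte_entry would flip each shared PTE back to private. */
static const struct mm_walk_ops unshare_ops;

static void unshare_all_memory_trylock(void)
{
	/*
	 * walk_page_range_novma() asserts that the mmap lock is held, so
	 * in the atomic crash path the best we can do is try to take it
	 * and skip the conversion entirely if it is contended.
	 */
	if (!mmap_write_trylock(&init_mm))
		return;

	walk_page_range_novma(&init_mm, PAGE_OFFSET,
			      PAGE_OFFSET + get_max_mapped(),
			      &unshare_ops, init_mm.pgd, NULL);

	mmap_write_unlock(&init_mm);
}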
+ size = page_level_size(level);
+
+ if (pte && pte_decrypted(*pte)) {
Additionally, we need to add a pte_none() check here to handle physical
memory holes in the direct mapping.
lookup_address() returns NULL for none entries.
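If the extra check turns out to be needed after all, it could be folded
into a small helper, something like this untested sketch (reusing
pte_decrypted() from the patch):

static bool pte_needs_unshare(pte_t *pte)
{
	/* A failed lookup or a none PTE is a hole in the direct mapping. */
	if (!pte || pte_none(*pte))
		return false;

	return pte_decrypted(*pte);
}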
We need to modify the PTE to set the C-bit to 1, which will require
cpa_flush(), though we can probably add something lighter to do the
flushing.
+ int pages = size / PAGE_SIZE;
+
+ /*
+ * Touching memory with shared bit set triggers implicit
+ * conversion to shared.
+ *
+ * Make sure nobody touches the shared range from
+ * now on.
+ *
+ * Bypass unmapping for crash scenario. Unmapping
+ * requires sleepable context, but in crash case kernel
+ * hits the code path with interrupts disabled.
In case of SNP, we will need to temporarily enable interrupts during this
unsharing, as we invoke set_memory_encrypted(), which then hits a BUG_ON()
in cpa_flush() if interrupts are disabled.
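What I have in mind is something like the below; untested illustration,
and snp_unshare_range() is just a made-up wrapper name:

static int snp_unshare_range(unsigned long addr, int pages)
{
	int ret;

	/*
	 * Briefly re-enable interrupts so that cpa_flush() does not hit
	 * its BUG_ON(irqs_disabled()).  This should be tolerable in the
	 * crash path since all other CPUs have already been stopped.
	 */
	local_irq_enable();
	ret = set_memory_encrypted(addr, pages);
	local_irq_disable();

	return ret;
}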
Do you really need the full set_memory_encrypted()? Can't you do something
lighter?
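For example, something along these lines (completely untested sketch):
flip the C-bit directly on the PTE we already looked up and flush only the
local TLB, which should be enough once the other CPUs are stopped.
Whatever SNP-specific page state change is needed on top of the PTE update
would still have to be issued separately.

static void pte_set_private(pte_t *pte)
{
	/* Re-create the PTE with the encryption (C) bit set. */
	pgprot_t prot = pgprot_encrypted(pte_pgprot(*pte));

	set_pte(pte, pfn_pte(pte_pfn(*pte), prot));

	/* Flush the local TLB only; no IPIs in the crash path. */
	__flush_tlb_all();
}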