RE: [PATCH 1/3] drivers:pnp Add support for descendants claiming memory address space
From: Jake Oshins
Date: Thu Mar 19 2015 - 15:22:04 EST
> -----Original Message-----
> From: Rafael J. Wysocki [mailto:rjw@xxxxxxxxxxxxx]
> Sent: Tuesday, March 10, 2015 5:34 PM
> To: Jake Oshins; olaf@xxxxxxxxx
> Cc: Rafael J. Wysocki; gregkh@xxxxxxxxxxxxxxxxxxx; KY Srinivasan; linux-
> kernel@xxxxxxxxxxxxxxx; apw@xxxxxxxxxxxxx; vkuznets@xxxxxxxxxx; Linux
> ACPI; Linux PCI; Bjorn Helgaas
> Subject: Re: [PATCH 1/3] drivers:pnp Add support for descendants claiming
> memory address space
> It seems to me then that what you really want is a null protocol for PNP
> which simply doesn't do anything. I don't see any justification for the
> "descendant_protocol" name. It's just a null one.
> In that case you should slightly modify the PNP bus type to be able to
> use a null protocol without defining the stub ->get, ->put and ->disable
> methods that just do nothing and return 0.
> Then, you can define the null protocol without any methods in
> drivers/pnp/core.c and use it in your code without adding the "descendant"
> name.
> Of course, that comes with a price which is that every device using the
> null protocol will have that protocol's abstract device as its parent.
> I suppose that this is not a problem?
> > The problem comes in if there are PCI devices in the same region. There's
> > no easy way to figure out whether the claim conflicts with the PCI devices,
> > as the PCI device's claims are made through the pnp layer.
> Well, please look at __pci_request_region() then and tell me where it uses
> the PNP layer.
I've been thinking a lot (and poking around in the code, trying things) in response to what you wrote, and in particular in response to the two parts quoted above. Having a null protocol where each of the devices has the same abstract parent doesn't serve my needs, because it won't guarantee that the ranges claimed fall within the _CRS of the grandparent or great-grandparent node. And, in fact, I don't think that my proposed patch is actually accomplishing that deterministically either, at the moment.
Your response ultimately convinced me to look at things differently, and I realized that I wasn't getting as much from this approach as I thought I was. I'd like to withdraw this patch series; I can come up with an alternative solution that exists entirely within the Hyper-V-related drivers.
Thanks again for your time and patience,