Re: [PATCH] PCI/ASPM: Enable ASPM for links under VMD domain

From: hch@xxxxxxxxxxxxx
Date: Thu Aug 27 2020 - 12:50:33 EST


On Thu, Aug 27, 2020 at 04:45:53PM +0000, Derrick, Jonathan wrote:
> Just a few benefits and there are other users with unique use cases:
> 1. Passthrough of the endpoint to OSes which don't natively support
> hotplug can enable hotplug for that OS using the guest VMD driver

Or they could just write a hotplug driver, which would be more useful
than writing a VMD driver.

> 2. Some hypervisors have a limit on the number of devices that can be
> passed through. VMD endpoint is a single device that expands to many.

Or you just fix the hypervisor. Never mind that this is so much
less likely than wanting to pass an individual device or VF to a guest,
which VMD makes impossible (at least without tons of hacks specifically
for it).

> 3. Expansion of possible bus numbers beyond 256 by using other
> segments.

Which we can trivially do with PCI domains.
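
The domain (segment) number is already part of every device's address
in the core, so nothing VMD-specific is needed to go past 256 buses.
Just as a sketch (the helper below is made up for illustration, not
part of this patch), a driver can print the fully qualified location
with plain existing APIs:

	#include <linux/kernel.h>
	#include <linux/pci.h>

	/* Print the segment-qualified address, e.g. 10000:00:00.0 */
	static void print_pci_location(struct pci_dev *pdev)
	{
		pr_info("%04x:%02x:%02x.%d\n",
			pci_domain_nr(pdev->bus),   /* PCI domain (segment) */
			pdev->bus->number,          /* bus within that domain */
			PCI_SLOT(pdev->devfn),      /* device */
			PCI_FUNC(pdev->devfn));     /* function */
	}

Each extra segment gets its own 256 bus numbers, which is exactly the
expansion claimed above.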

> 4. Custom RAID LED patterns driven by ledctl

Which you can also do in any other vendor-specific way.

>
> I'm not trying to market this. Just pointing out that this isn't
> "bringing zero actual benefits" to many users.

Which of those is a benefit to a Linux user? Seriously, I really don't
care if Intel wants to sell VMD as a value-add to those that have
a perceived or, in rare cases, even a real need. Just let Linux opt out
of it instead of needing special quirks all over.