Re: Question on handling managed IRQs when hotplugging CPUs

From: John Garry
Date: Tue Feb 05 2019 - 10:28:03 EST


On 05/02/2019 15:15, Hannes Reinecke wrote:
> On 2/5/19 4:09 PM, John Garry wrote:
> > On 05/02/2019 14:52, Keith Busch wrote:
> > > On Tue, Feb 05, 2019 at 05:24:11AM -0800, John Garry wrote:
> > > > On 04/02/2019 07:12, Hannes Reinecke wrote:

> > > > Hi Hannes,


> > > > > So, as the user then has to wait for the system to declare 'ready
> > > > > for CPU remove', why can't we just disable the SQ and wait for all
> > > > > I/O to complete?
> > > > > We can make it more fine-grained by just waiting on all outstanding
> > > > > I/O on that SQ to complete, but waiting for all I/O should be good
> > > > > as an initial try.
> > > > > With that we wouldn't need to fiddle with driver internals, and
> > > > > could make it pretty generic.

> > > > I don't fully understand this idea - specifically, at which layer
> > > > would we be waiting for all the IO to complete?

> > > Whichever layer dispatched the IO to a CPU-specific context should
> > > be the one to wait for its completion. That should be blk-mq for most
> > > block drivers.

> > For SCSI devices, unfortunately not all IO sent to the HW originates
> > from blk-mq or any other single entity.

> No, not as such.
> But each IO sent to the HW requires a unique identification (i.e. a valid
> tag). And as the tag space is managed by blk-mq (minus management
> commands, but I'm working on that currently) we can easily figure out if
> the device is busy by checking for an empty tag map.

That sounds like a reasonable starting solution.

Thanks,
John


> Should be doable for most modern HBAs.
>
> Cheers,
>
> Hannes