Re: IRQ affinity problem from hisi_sas_v3_hw

From: chenxiang (M)
Date: Sun Mar 05 2023 - 21:54:00 EST


Hi,


On 2023/2/27 16:38, liuchao (CR) wrote:
Hi All,
I tested Linux 5.10 and found that hisi_sas_v3_hw uses managed IRQ affinity (IRQD_AFFINITY_MANAGED).
The machine has 96 CPUs across four NUMA nodes.

hisi_sas_v3_hw has 16 queues, and the affinity mask of each queue contains 6 CPUs:

q0: 0 - 5
q1: 6 - 11
...
q15: 90 - 95
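
For context, this kind of managed affinity is normally handed out by the PCI/IRQ core rather than the driver picking masks by hand. Below is a minimal, hypothetical sketch (MY_NR_HW_QUEUES and my_init_irqs are placeholders, not hisi_sas code) of how a multi-queue PCI driver asks for managed vectors so the core spreads them over the online CPUs:

#include <linux/pci.h>
#include <linux/interrupt.h>

#define MY_NR_HW_QUEUES	16	/* completion queues, one IRQ each (placeholder) */

static int my_init_irqs(struct pci_dev *pdev)
{
	/* Vectors listed here are excluded from the spread (e.g. phy/
	 * channel/fatal interrupts that precede the queue IRQs). */
	struct irq_affinity affd = {
		.pre_vectors = 0,
	};
	int nvec;

	/* PCI_IRQ_AFFINITY makes the core spread the queue vectors
	 * evenly across the online CPUs/NUMA nodes and mark them as
	 * managed, so user space cannot change their masks. */
	nvec = pci_alloc_irq_vectors_affinity(pdev, MY_NR_HW_QUEUES,
					      MY_NR_HW_QUEUES,
					      PCI_IRQ_MSI | PCI_IRQ_AFFINITY,
					      &affd);
	if (nvec < 0)
		return nvec;

	/* The resulting per-vector masks are what shows up as
	 * q0: 0-5, q1: 6-11, ... on a 96-CPU, 16-queue machine. */
	return 0;
}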

When I take all CPUs in a queue's affinity mask offline, for example CPUs 6-11:

echo 0 > /sys/devices/system/cpu/cpu6/online
echo 0 > /sys/devices/system/cpu/cpu7/online
...
echo 0 > /sys/devices/system/cpu/cpu11/online

the I/O hangs and the following errors are reported in dmesg:

[344908.820022] sas: ata5: end_device-6:0: cmd error handler
[344908.820049] sas: ata5: end_device-6:0: dev error handler
[344908.820058] sas: ata6: end_device-6:1: dev error handler
[344908.820071] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
[344908.820080] sas: ata7: end_device-6:2: dev error handler
[344908.820085] sas: ata8: end_device-6:3: dev error handler
[344908.820091] sas: ata9: end_device-6:4: dev error handler
[344908.820095] sas: ata10: end_device-6:5: dev error handler
[344908.820097] ata5.00: failed command: WRITE DMA EXT
[344908.820111] ata5.00: cmd 35/00:08:18:20:ae/00:00:6a:00:00/e0 tag 19 dma 4096 out
res 40/00:00:47:40:9a/00:00:6c:00:00/e0 Emask 0x4 (timeout)
[344908.820117] ata5.00: status: { DRDY }
[344908.820126] ata5: hard resetting link
[344908.821819] hisi_sas_v3_hw 0000:b4:02.0: phydown: phy0 phy_state=0x3e
[344908.821824] hisi_sas_v3_hw 0000:b4:02.0: ignore flutter phy0 down
[344908.983853] hisi_sas_v3_hw 0000:b4:02.0: phyup: phy0 link_rate=10(sata)
[344908.983887] sas: sas_form_port: phy0 belongs to port0 already(1)!
[344909.145280] ata5.00: configured for UDMA/33
[344909.145308] sd 6:0:0:0: [sda] tag#814 kworker/u193:7: flush retry cmd
[344909.145324] sd 6:0:0:0: [sda] tag#814 Inserting command 000000005d29b45d into mlqueue
[344909.145341] ata5: EH complete
[344909.145358] sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 1 tries: 1

Is this situation normal, or should the driver fix this problem?

Have you checked whether the same issue occurs on the latest mainline code?
I remember Lei Ming sent a patchset to solve this issue (https://lore.kernel.org/linux-block/b98f055f-6f38-a47c-965d-b6bcf4f5563f@xxxxxxxxxx/T/),
so you can check whether it has been merged into the code.
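
For reference, the idea behind that series (as I understand it) is to drain a hardware queue before the last CPU in its managed-IRQ mask goes offline, so no request is left waiting for a completion interrupt that can no longer fire. A rough, hypothetical sketch of that approach follows; struct my_hctx and its fields are illustrative placeholders, not the actual blk-mq code:

#include <linux/cpuhotplug.h>
#include <linux/cpumask.h>
#include <linux/atomic.h>
#include <linux/delay.h>
#include <linux/list.h>

/* Hypothetical per-queue context; the real blk-mq structures differ. */
struct my_hctx {
	struct cpumask	cpumask;	/* CPUs mapped to this hw queue */
	atomic_t	inflight;	/* submitted but not yet completed */
	bool		inactive;	/* new submissions go elsewhere */
	struct hlist_node cpuhp_node;	/* for cpuhp_setup_state_multi() */
};

/*
 * Teardown callback registered for each queue (e.g. via
 * cpuhp_setup_state_multi()), run while the outgoing CPU can still
 * take the queue's managed interrupt.
 */
static int my_hctx_notify_offline(unsigned int cpu, struct hlist_node *node)
{
	struct my_hctx *hctx = hlist_entry(node, struct my_hctx, cpuhp_node);
	unsigned int other;

	/* If another CPU in this queue's mask stays online, the managed
	 * IRQ survives and nothing needs draining.  @cpu may still look
	 * online at this point, so exclude it explicitly. */
	for_each_cpu_and(other, &hctx->cpumask, cpu_online_mask)
		if (other != cpu)
			return 0;

	/* Last CPU of the queue: stop accepting new requests ... */
	WRITE_ONCE(hctx->inactive, true);

	/* ... and wait for in-flight ones while completions can still be
	 * delivered, so no request is stranded once the IRQ is shut down. */
	while (atomic_read(&hctx->inflight))
		msleep(5);

	return 0;
}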



