[PATCH 0/5] ibmvfc: make ibmvfc support FPIN messages
From: Dave Marquardt via B4 Relay
Date: Wed Apr 08 2026 - 13:10:52 EST
This patch series adds FPIN (fabric performance impact notification)
support to the ibmvfc (IBM Virtual Fibre Channel) driver. This comes
in three flavors:
- basic, to recognize existing FPIN messages from the virtual I/O
server (VIOS) (patch 1)
- full, supporting additional information and using its own
asynchronous sub-queue and interrupt (patches 2-4)
- extended, supporting FC-LS-5 (patch 5)
Full and extended FPIN support requires a new asynchronous sub-queue
with its own interrupt, which in turn requires ibmvfc to support:
- a new VFC_NOOP command, which the driver recognizes and
ignores (patch 2)
- fabric login, to log in to the fabric separately through messages
exchanged with the VIOS rather than through the existing NPIV
login (patch 3)
All three modes convert an incoming FPIN message from VIOS to an FC
extended link service message, with basic and full FPIN support using
default values for information not provided by the VIOS FPIN message
but expected in the FC ELS message. The resulting FC ELS message is
passed to fc_host_fpin_rcv(), which updates statistics and sends the
information upstream over netlink multicast, where it may be caught by
listeners such as the DM multipath daemon multipathd.
Signed-off-by: Dave Marquardt <davemarq@xxxxxxxxxxxxx>
---
Dave Marquardt (5):
ibmvfc: add basic FPIN support
ibmvfc: Add NOOP command support
ibmvfc: make ibmvfc login to fabric
ibmvfc: use async sub-queue for FPIN messages
ibmvfc: handle extended FPIN events
drivers/scsi/Kconfig | 10 +
drivers/scsi/ibmvscsi/Makefile | 1 +
drivers/scsi/ibmvscsi/ibmvfc.c | 668 +++++++++++++++++++++++++++++++++--
drivers/scsi/ibmvscsi/ibmvfc.h | 102 +++++-
drivers/scsi/ibmvscsi/ibmvfc_kunit.c | 219 ++++++++++++
5 files changed, 961 insertions(+), 39 deletions(-)
---
base-commit: 927722dcfe0a5294433bb087387cc52a46cbf675
change-id: 20260407-ibmvfc-fpin-support-b9b575cd2da1
Best regards,
--
Dave Marquardt <davemarq@xxxxxxxxxxxxx>