[PATCH rdma-next v3 0/4] Query GID table API

From: Leon Romanovsky
Date: Wed Sep 23 2020 - 12:50:23 EST

From: Leon Romanovsky <leonro@xxxxxxxxxx>

Changelog:
v3:
 * Returned the port validity check, because we use the port number
 to determine the protocol type.
 * Removed the check that the interface is up from
 rdma_read_gid_attr_ndev_rcu(); with that check in place, this API behaved
 differently from sysfs by not showing GIDs for interfaces in the DOWN
 state. We will send a followup patch in the next cycle to nullify the GID
 on the netdev_unregister event (not critical, but better to have).
v2: https://lore.kernel.org/lkml/20200922082641.2149549-1-leon@xxxxxxxxxx
* Embedded RoCE protocol type into rdma_read_gid_attr_ndev_rcu
v1: https://lore.kernel.org/lkml/20200914111129.343651-1-leon@xxxxxxxxxx
* Moved gid_type logic to cma_set_default_gid_type - Patch #2
* Changed signature of rdma_query_gid_table - Patch #3
* Changed i to be unsigned - Patch #3
* Fixed multiplication overflow - Patch #4
v0: https://lore.kernel.org/lkml/20200910142204.1309061-1-leon@xxxxxxxxxx


From Avihai,

When an application does not use RDMA CM and uses multiple RDMA devices
with one or more RoCE ports, finding the right GID table entry is a long
process.

For example, with two dual-port RoCE devices in a system, when IP
failover is used between two RoCE ports, searching for a suitable GID
entry for a given source IP that matches the netdevice and the requested
RoCEv1/v2 type requires iterating over all 4 ports * 256-entry GID tables.

Even when the search stops at the first matching entry for the given
criteria, if that entry is on the 4th port, it requires reading
3 ports * 256 entries * 3 files (GID, netdev, type) = 2304 files.

The GID table needs to be consulted on every QP creation during IP
failover to the other netdevice of an RDMA device.

In an alternative approach, a GID cache may be maintained and updated when
a GID change event is reported by the kernel. However, this comes with two
limitations:
(a) A thread must be maintained per application process instance to listen
for events and update the cache.
(b) Without such a thread, the GID table must be queried on every cache
miss. Even then, if multiple processes are used, a GID cache needs to be
maintained per process. With a large number of processes, this method
doesn't scale.

Hence, this series introduces an API to query the complete GID table of
an RDMA device, returning all valid GID table entries.

This is done through a single ioctl, reducing the 2304 read, 2304 open and
2304 close system calls to a total of 2 calls (one for each device).

While at it, we also introduce an API to query an individual GID entry
over the ioctl interface, which provides all GID attribute information.


Avihai Horon (4):
RDMA/core: Change rdma_get_gid_attr returned error code
RDMA/core: Modify enum ib_gid_type and enum rdma_network_type
RDMA/core: Introduce new GID table query API
RDMA/uverbs: Expose the new GID query API to user space

drivers/infiniband/core/cache.c | 79 ++++++-
drivers/infiniband/core/cma.c | 4 +
drivers/infiniband/core/cma_configfs.c | 9 +-
drivers/infiniband/core/sysfs.c | 3 +-
.../infiniband/core/uverbs_std_types_device.c | 196 +++++++++++++++++-
drivers/infiniband/core/verbs.c | 2 +-
drivers/infiniband/hw/mlx5/cq.c | 2 +-
drivers/infiniband/hw/mlx5/main.c | 4 +-
drivers/infiniband/hw/qedr/verbs.c | 4 +-
include/rdma/ib_cache.h | 3 +
include/rdma/ib_verbs.h | 19 +-
include/uapi/rdma/ib_user_ioctl_cmds.h | 16 ++
include/uapi/rdma/ib_user_ioctl_verbs.h | 14 ++
13 files changed, 332 insertions(+), 23 deletions(-)