Re: [PATCH V3 2/5] misc: mlx5ctl: Add mlx5ctl misc driver

From: Saeed Mahameed
Date: Thu Dec 07 2023 - 13:54:08 EST


On 07 Dec 10:41, Aron Silverton wrote:
On Tue, Dec 05, 2023 at 08:48:55PM -0800, Jakub Kicinski wrote:
On Tue, 5 Dec 2023 11:11:00 -0600 Aron Silverton wrote:
> 1. As mentioned already, we recently faced a complex problem with RDMA
> in KVM and were getting nowhere trying to debug using the usual methods.
> Mellanox support was able to use this debug interface to see what was
> happening on the PCI bus and prove that the issue was caused by
> corrupted PCIe transactions. This finally put the investigation on the
> correct path. The debug interface was used consistently and extensively
> to test theories about what was happening in the system and, ultimately,
> allowed the problem to be solved.

You hit on an important point, and what is also my experience working
at Meta. I may have even mentioned it in this thread already.
If there is a serious issue with a complex device, there are two ways
you can get support - dump all you can and send the dump to the vendor
or get on a live debugging session with their engineers. Users' ability
to debug those devices is practically non-existent. The idea that we
need access to FW internals is predicated on the assumption that we
have an ability to make sense of those internals.

Once you're on a support call with the vendor - just load a custom
kernel, module, whatever, it's already extremely expensive manual labor.

> 2. We've faced RDMA issues related to lost EQ doorbells, requiring
> complex debug, and ultimately root-caused as a defective CPU. Without
> interactive access to the device allowing us to test theories like,
> "what if we manually restart the EQ", we could not have proven this
> definitively.

I'm not familiar with the RDMA debugging capabilities. Perhaps there
are some gaps there. The more proprietary the implementation, the
harder it is to debug. An answer to that would be "try to keep as much
as possible open", and interfaces which let closed user space talk to
closed FW take us in the opposite direction.

FWIW, good netdevice drivers have a selftest which tests IRQ generation
and EQ handling. I think that'd cover the case you're describing?
IDK if mlx5 has them, but if it doesn't, it's definitely worth adding.
And I recommend running those on suspicious machines (ethtool -t;
devlink has some selftests, too).
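
For reference, ethtool -t is a thin wrapper around the kernel's
ETHTOOL_TEST ioctl. A minimal sketch of driving it directly, using the
standard SIOCETHTOOL uapi with error handling trimmed:

        /* nictest.c - run a NIC's offline self-test, roughly what
         * "ethtool -t <ifname> offline" does. */
        #include <linux/ethtool.h>
        #include <linux/sockios.h>
        #include <net/if.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <sys/ioctl.h>
        #include <sys/socket.h>

        int main(int argc, char **argv)
        {
                if (argc != 2)
                        return fprintf(stderr, "usage: %s <ifname>\n", argv[0]), 1;

                int fd = socket(AF_INET, SOCK_DGRAM, 0);
                struct ifreq ifr = {0};
                strncpy(ifr.ifr_name, argv[1], IFNAMSIZ - 1);

                /* 1. Ask the driver how many self-test results it reports. */
                __u8 buf[sizeof(struct ethtool_sset_info) + sizeof(__u32)] = {0};
                struct ethtool_sset_info *sset = (void *)buf;
                sset->cmd = ETHTOOL_GSSET_INFO;
                sset->sset_mask = 1ULL << ETH_SS_TEST;
                ifr.ifr_data = (char *)sset;
                if (ioctl(fd, SIOCETHTOOL, &ifr) < 0 ||
                    !(sset->sset_mask & (1ULL << ETH_SS_TEST)))
                        return fprintf(stderr, "%s: no self-test support\n", argv[1]), 1;
                __u32 n = sset->data[0];

                /* 2. Run the offline tests; per-test results land in data[]. */
                struct ethtool_test *test = calloc(1, sizeof(*test) + n * sizeof(__u64));
                test->cmd = ETHTOOL_TEST;
                test->flags = ETH_TEST_FL_OFFLINE; /* link goes down during the run */
                test->len = n;
                ifr.ifr_data = (char *)test;
                if (ioctl(fd, SIOCETHTOOL, &ifr) < 0)
                        return perror("ETHTOOL_TEST"), 1;

                printf("%s: self-test %s (%u results)\n", argv[1],
                       test->flags & ETH_TEST_FL_FAILED ? "FAILED" : "passed", n);
                return 0;
        }

Note ETH_TEST_FL_OFFLINE: the richer interrupt/loopback tests only run
offline, so schedule them when the link can bounce.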

Essentially, that's a warning light, and it doesn't solve the
underlying problem. We still need experts (e.g., vendors) to
investigate with their toolsets when and where the problem occurs.

I offered this as an example of one issue we solved. I cannot predict
what kinds of issues will pop up in the future, and writing a self-test
in advance for every possible situation is impossible.


> Firstly, we believe in working upstream and all of the advantages
> that brings to all the distros as well as to us and our customers.
>
> Secondly, our cloud business offers many types of machine instances,
> some with bare metal/vfio mlx5 devices, that require customer driven
> debug and we want our customers to have the freedom to choose which OS
> they want to use.

I understand that having everything packaged and shipped together makes
life easier.

I think it is a requirement. We operate with Secure Boot. The kernel is
locked down. We don't have debugfs access, even if it were sufficient,
and we cannot compile and load modules. Even without Secure Boot, there
may not be a build environment available.

We really need the module ready to go when the debug calls for it - no
building, no reboots, no months-long attempts to reproduce in some lab -
just immediate availability of the debug interface on the affected
machine.


If the point of the kernel at this stage of its evolution is to collect
incompatible bits of vendor software, make sure they build cleanly and
ship them to distros - someone should tell me, and I will relent.

I'm not sure I follow you... The mlx5ctl driver seems very compatible
with the mlx5 device driver. I may be misunderstanding.


mlx5ctl is 100% compatible with the open mlx5 ConnectX spec [1] and
supports any mlx5-driven stack, not only netdev. It can expose millions
of objects and device states interactively; debugfs would explode if we
even tried to accommodate some of those objects or states, not to
mention that maintaining a stable debugfs output for such a huge data
set is impossible. The mlx5ctl interface instead speaks the clear and
open ConnectX language, which is the whole point of the driver.

ConnectX is a highly programmable device for the end user, and we have
a very open and accommodating policy: an advanced user who can read the
open spec [1] also has the ability to self-debug their own
RDMA/DPU/FPGA apps or similar use cases.
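
To make that concrete, here is a minimal sketch of such self-debug: it
builds a QUERY_HCA_CAP command mailbox (opcode 0x100, big-endian
header, per the spec [1]) and fires it through an RPC ioctl. The device
node name and the MLX5CTL_IOCTL_CMDRPC struct/ioctl number below are
illustrative assumptions mirroring this series' uapi, not verbatim
copies of it:

        /* query_caps.c - hypothetical mlx5ctl user: QUERY_HCA_CAP via RPC ioctl */
        #include <arpa/inet.h>          /* htons/ntohl */
        #include <fcntl.h>
        #include <linux/ioctl.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/ioctl.h>
        #include <unistd.h>

        struct mlx5ctl_cmdrpc {         /* assumed shape of the uapi struct */
                uint64_t in;            /* user pointer to command inbox  */
                uint64_t out;           /* user pointer to command outbox */
                uint32_t inlen;
                uint32_t outlen;
        };
        #define MLX5CTL_IOCTL_MAGIC  0x5c   /* assumed */
        #define MLX5CTL_IOCTL_CMDRPC \
                _IOWR(MLX5CTL_IOCTL_MAGIC, 0x1, struct mlx5ctl_cmdrpc)

        int main(void)
        {
                /* PRM command header: 16-bit big-endian opcode at byte 0,
                 * op_mod at bytes 6-7. QUERY_HCA_CAP = 0x100, op_mod 0x1 =
                 * current general device caps. */
                uint8_t in[16] = {0}, out[4096] = {0};
                uint16_t opcode = htons(0x100), op_mod = htons(0x1);
                memcpy(in + 0, &opcode, 2);
                memcpy(in + 6, &op_mod, 2);

                int fd = open("/dev/mlx5ctl-mlx5_core.ctl.0", O_RDWR); /* placeholder node */
                if (fd < 0)
                        return perror("open"), 1;

                struct mlx5ctl_cmdrpc rpc = {
                        .in = (uintptr_t)in,   .inlen = sizeof(in),
                        .out = (uintptr_t)out, .outlen = sizeof(out),
                };
                if (ioctl(fd, MLX5CTL_IOCTL_CMDRPC, &rpc) < 0)
                        return perror("rpc"), 1;

                /* Output header per the spec: status in byte 0, syndrome in
                 * dword 1; the capability layout follows. */
                uint32_t synd;
                memcpy(&synd, out + 4, 4);
                printf("status 0x%02x syndrome 0x%08x\n", out[0], ntohl(synd));
                close(fd);
                return 0;
        }

The point being: every opcode, field, and syndrome here is documented
in [1], so the output is as debuggable by the end user as it is by us.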

Also, I would like to repeat: this does not touch netdev. netdev's
policies do not apply to the greater kernel or to RDMA, and we have use
cases with pure InfiniBand/DPU/FPGA cards that have no netdev at all,
other cases with pure virtio instances, and much more.

[1] https://network.nvidia.com/files/doc-2020/ethernet-adapters-programming-manual.pdf