On Wed, Aug 08, 2018 at 05:23:12PM +0300, Tal Gilboa wrote:
On 8/8/2018 9:08 AM, Leon Romanovsky wrote:
On Mon, Aug 06, 2018 at 06:25:42PM -0500, Alexandru Gagniuc wrote:
Reporting sub-optimal PCIe bandwidth is now done by the PCI core.
Signed-off-by: Alexandru Gagniuc <mr.nuke.me@xxxxxxxxx>
---
drivers/net/ethernet/mellanox/mlx5/core/main.c | 4 ----
1 file changed, 4 deletions(-)
Thanks,
Reviewed-by: Leon Romanovsky <leonro@xxxxxxxxxxxx>
Alex,
I loaded the mlx5 driver with and without this series. The report in dmesg
is now missing. From what I understood, the status should be reported at
least once, even if everything is in order.
That is not what this series does: it removes the print entirely when the
fabric can deliver more bandwidth than the card is capable of. We need this
functionality to stay.
I'm not sure you need this information in the driver's dmesg output; it
should most probably be something globally visible and accessible per PCI
device.
net-next (dmesg output for 07:00.0):
[270498.625351] mlx5_core 0000:07:00.0: firmware version: 14.22.4020
[270498.632130] mlx5_core 0000:07:00.0: 63.008 Gb/s available PCIe bandwidth (8 GT/s x8 link)
[270499.169533] (0000:07:00.0): E-Switch: Total vports 9, per vport: max uc(1024) max mc(16384)
[270499.182358] mlx5_core 0000:07:00.0: Port module event: module 0, Cable plugged
net-next + patches (dmesg output for 07:00.0):
[ 331.608472] mlx5_core 0000:07:00.0: firmware version: 14.22.4020
[  332.564938] (0000:07:00.0): E-Switch: Total vports 9, per vport: max uc(1024) max mc(16384)
[  332.616271] mlx5_core 0000:07:00.0: Port module event: module 0, Cable plugged