[PATCH] RDMA: rxe: validate pad and ICRC before payload_size() in rxe_rcv
From: hkbinbin
Date: Wed Apr 01 2026 - 08:27:41 EST
rxe_rcv() currently checks only that the incoming packet is at least
header_size(pkt) bytes long before payload_size() is used.
However, payload_size() subtracts both the attacker-controlled BTH pad
field and RXE_ICRC_SIZE from pkt->paylen:
    payload_size = pkt->paylen - offset[RXE_PAYLOAD] - bth_pad(pkt)
                   - RXE_ICRC_SIZE
This means a short packet can still make payload_size() underflow even
if it carries enough bytes for the fixed headers. Nor is requiring
header_size(pkt) + RXE_ICRC_SIZE sufficient: a packet with a forged
non-zero BTH pad can still drive payload_size() negative, handing an
underflowed (wrapped) value to later receive-path users.
Fix this by validating pkt->paylen against the full minimum length
required by payload_size(): header_size(pkt) + bth_pad(pkt) +
RXE_ICRC_SIZE.
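A userspace sketch may make the unsigned wraparound concrete. The sizes
below are hypothetical stand-ins, not the kernel's: header_size(pkt) and
the RXE_PAYLOAD offset are folded into one made-up HDR_SIZE, and
payload_size() here only mirrors the subtraction, not the real helper:

```c
#include <stddef.h>

/*
 * Illustrative sketch only -- HDR_SIZE is a made-up stand-in for
 * header_size(pkt) / offset[RXE_PAYLOAD]; RXE_ICRC_SIZE is 4 bytes
 * in the driver.
 */
#define HDR_SIZE  40u
#define ICRC_SIZE  4u

/* Mirrors the subtraction in payload_size(): wraps if paylen is short. */
size_t payload_size(size_t paylen, size_t pad)
{
	return paylen - HDR_SIZE - pad - ICRC_SIZE;
}

/* Old check: only guarantees room for the headers. */
int old_check(size_t paylen)
{
	return paylen >= HDR_SIZE;
}

/* Fixed check: also reserves room for the pad and the ICRC trailer. */
int new_check(size_t paylen, size_t pad)
{
	return paylen >= HDR_SIZE + pad + ICRC_SIZE;
}
```

A 42-byte packet with a forged pad of 3 passes the old check (42 >= 40)
but fails the new one (42 < 47); with only the old check in place,
payload_size() wraps to a value far larger than the packet itself.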
Fixes: 8700e3e7c485 ("Soft RoCE driver")
Cc: stable@xxxxxxxxxxxxxxx
Signed-off-by: hkbinbin <hkbinbinbin@xxxxxxxxx>
---
drivers/infiniband/sw/rxe/rxe_recv.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c
index 5861e4244049..f79214738c2b 100644
--- a/drivers/infiniband/sw/rxe/rxe_recv.c
+++ b/drivers/infiniband/sw/rxe/rxe_recv.c
@@ -330,7 +330,8 @@ void rxe_rcv(struct sk_buff *skb)
pkt->qp = NULL;
pkt->mask |= rxe_opcode[pkt->opcode].mask;
- if (unlikely(skb->len < header_size(pkt)))
+ if (unlikely(pkt->paylen < header_size(pkt) + bth_pad(pkt) +
+ RXE_ICRC_SIZE))
goto drop;
err = hdr_check(pkt);
--
2.49.0