Re: Re: Re: [PATCH,net-next] tcp: Add TCP ROCCET congestion control module.
From: Lukas Prause
Date: Wed Apr 15 2026 - 10:19:06 EST
Thank you for your quick reply!
>>> Please reference figures in the paper and mention specific concrete
>>> numerical examples of latency reductions to quantify these statements.
>> Figures 5 and 6 show the performance of ROCCET in stationary and mobile
>> scenarios (https://arxiv.org/pdf/2510.25281). In the analyzed scenario,
>> we have observed a lower sRTT with ROCCET than with BBRv3 and CUBIC. The
>> observed throughput was marginally lower than that of BBRv3, but still
>> on a similar level. A detailed quantitative evaluation can be found in
>> the paper in sections VI and VII.
> In https://arxiv.org/pdf/2510.25281 zooming into the Figure 6 sRTT
> box-and-whisker-plot seems to show that BBRv3 actually has a lower
> median sRTT value than ROCCET. So that statement seems misleading?
>
> I would recommend using numerical examples in the commit message to
> quantify the gains from ROCCET and avoid potential issues from visual
> interpretation of graphs.
Thanks for pointing this out. We created new figures that include the
numerical values for Figures 5 [1] and 6 [2]. Figure 5, i.e., our
stationary measurements, shows that ROCCET achieves lower sRTTs while
maintaining throughput similar to BBRv3. In Figure 6, i.e., our mobile
measurements, ROCCET and BBRv3 perform similarly overall. We will
adjust the statement accordingly.
[1] https://seafile.cloud.uni-hannover.de/f/cc39263dad6b45ca9952/
[2] https://seafile.cloud.uni-hannover.de/f/9556ec768c084fe2ae40/
>>> Can you please elaborate on this statement here? AFAICT from figures 7
>>> and 8 in https://arxiv.org/pdf/2510.25281 it seems ROCCET is
>>> essentially starved by CUBIC when sharing a bottleneck with CUBIC when
>>> the bottleneck has 2*BDP or more of buffering. AFAICT it sounds like
>>> ROCCET does have "fairness issues when sharing a link with TCP CUBIC"?
>> Our main use case is a connection where the bottleneck link is in the
>> cellular network, where the bottleneck queue is typically not shared
>> between flows. "Fairness" between flows is being implemented by the base
>> station's scheduler. In this scenario, ROCCET achieves its objective to
>> not "bloat" its own queue.
>>
>> We have performed additional fairness experiments in non-cellular
>> networks (figures 7 and 8). Here we show that even when used in other
>> types of networks, ROCCET does not cause harm (see
>> https://dl.acm.org/doi/10.1145/3365609.3365855) to other congestion
>> control.
> I do not see you objecting to my statement, "it seems ROCCET is
> essentially starved by CUBIC when sharing a bottleneck with CUBIC when
> the bottleneck has 2*BDP or more of buffering." So I guess you agree.
>
> IMHO it's important to keep in mind that a congestion control that
> starves in the presence of CUBIC may have limited deployment. This is
> a key reason why Vegas was never deployed at scale.
We see the main use case for deploying ROCCET in cellular networks, but
we agree that in other types of networks it might be starved by other
congestion control algorithms. We argue that this distinguishes ROCCET
from Vegas: there is a specific environment in which its deployment can
be advantageous.
>>> Please specify what side effect or side effects ROCCET is claiming to
>>> solve (presumably bufferbloat?).
>> The side effect we observe in cellular networks is that, in particular,
>> for loss-based congestion control, the cwnd often gets 'frozen' at a
>> size that is too large for the BDP of the current link. This effect is
>> caused by the TCP cwnd validation, which at some point stops increasing
>> the cwnd because it assumes that the sender is application-limited.
>> However, this often leads to a cwnd size that is too large for the link,
>> but too small to cause a congestion event by overfilling the buffer. The
>> result is a standing queue that causes permanently high RTTs. Figure 2
>> in the paper (https://arxiv.org/pdf/2510.25281) shows the described
>> behaviour for a single TCP CUBIC flow.
> OK, so that sounds like you are describing the standard bufferbloat
> problem. So you could replace the phrase "solves an unwanted side
> effects of CUBIC’s implementation" in your comment with something
> like: "avoids the bufferbloat problems inherent in CUBIC."
With this statement, we wanted to describe the specific mechanism in the
TCP CUBIC implementation that can lead to bufferbloat, particularly in
cellular networks. But you are right, the result of this mechanism is
still the standard bufferbloat problem.
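To make the frozen-cwnd effect concrete, here is a toy steady-state
model (our own illustration, not code from the patch or the paper): a
single flow whose cwnd is frozen above the BDP but below BDP plus
buffer keeps a standing queue in the bottleneck, so the sRTT stays
permanently elevated even though no loss ever occurs. All numbers
below are made up for illustration.

```python
def steady_state_rtt(cwnd_pkts, bandwidth_pps, base_rtt_s, buffer_pkts):
    """Return (srtt_seconds, loss) for a single flow whose cwnd is
    frozen at cwnd_pkts on a bottleneck with the given bandwidth
    (packets/s), base RTT and buffer size. Assumes the flow is the
    only user of the bottleneck and always keeps cwnd_pkts in flight."""
    bdp_pkts = bandwidth_pps * base_rtt_s
    queue_pkts = max(0.0, cwnd_pkts - bdp_pkts)  # standing queue
    if queue_pkts > buffer_pkts:
        # Buffer overflows -> a congestion event would occur.
        return base_rtt_s + buffer_pkts / bandwidth_pps, True
    # Queue fits in the buffer: no loss, but RTT is inflated by the
    # queuing delay of the standing queue.
    return base_rtt_s + queue_pkts / bandwidth_pps, False

# Example: ~10 Mbit/s with 1500-byte packets (~833 pkt/s), 50 ms base
# RTT -> BDP ~ 42 packets; buffer of roughly 2*BDP = 84 packets.
# A cwnd frozen at 80 packets is too large for the BDP but too small
# to overfill the buffer:
srtt, loss = steady_state_rtt(cwnd_pkts=80, bandwidth_pps=833,
                              base_rtt_s=0.050, buffer_pkts=84)
# srtt ends up near double the base RTT, with no loss signal that
# would ever shrink the cwnd.
```

In this example the sRTT settles around 96 ms instead of 50 ms, which
is the "standing queue that causes permanently high RTTs" described
above.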
Thanks,
Lukas