On Mon, May 22, 2023 at 03:01:22PM +0800, Abel Wu wrote:
> Now with the preivous patch, __sk_mem_raise_allocated() considers
nit: s/preivous/previous/
> the memory pressure of both global and the socket's memcg on a func-
> wide level, making the condition of memcg's pressure in question
> redundant.
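
(For anyone skimming the thread: the distinction the changelog leans on is
roughly the following. This is a hand-written sketch of the two helpers in
include/net/sock.h as I remember them, not necessarily the exact code in
this series.)

	/* Sketch only: global (protocol-wide) pressure, ignoring memcg. */
	static inline bool sk_under_global_memory_pressure(const struct sock *sk)
	{
		return sk->sk_prot->memory_pressure &&
			!!*sk->sk_prot->memory_pressure;
	}

	/* Sketch only: global pressure OR the socket's memcg under pressure. */
	static inline bool sk_under_memory_pressure(const struct sock *sk)
	{
		if (!sk->sk_prot->memory_pressure)
			return false;

		if (mem_cgroup_sockets_enabled && sk->sk_memcg &&
		    mem_cgroup_under_socket_pressure(sk->sk_memcg))
			return true;

		return !!*sk->sk_prot->memory_pressure;
	}

So the memcg half of the old check is the part the previous patch already
folded into __sk_mem_raise_allocated() itself, which is why dropping it at
this call site is argued to be redundant rather than a behaviour change.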
> 
> Signed-off-by: Abel Wu <wuyun.abel@xxxxxxxxxxxxx>
> ---
>  net/core/sock.c | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/net/core/sock.c b/net/core/sock.c
> index 7641d64293af..baccbb58a11a 100644
> --- a/net/core/sock.c
> +++ b/net/core/sock.c
> @@ -3029,9 +3029,14 @@ int __sk_mem_raise_allocated(struct sock *sk, int size, int amt, int kind)
>  	if (sk_has_memory_pressure(sk)) {
>  		u64 alloc;
>  
> -		if (!sk_under_memory_pressure(sk))
> +		if (!sk_under_global_memory_pressure(sk))
>  			return 1;
>  		alloc = sk_sockets_allocated_read_positive(sk);
> +		/*
> +		 * If under global pressure, allow the sockets that are below
> +		 * average memory usage to raise, trying to be fair among all
> +		 * the sockets under global constrains.
> +		 */
nit:
/* Multi-line comments in networking code
 * look like this.
 */
>  		if (sk_prot_mem_limits(sk, 2) > alloc *
>  		    sk_mem_pages(sk->sk_wmem_queued +
>  				 atomic_read(&sk->sk_rmem_alloc) +
> --
> 2.37.3
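
To make the fairness comment concrete: the quoted test

	sk_prot_mem_limits(sk, 2) > alloc * sk_mem_pages(...)

is the same as asking whether this socket's usage is below the per-socket
average share of the hard limit. A stand-alone toy of just that arithmetic
(plain user-space C with made-up numbers, not kernel code):

	#include <stdbool.h>
	#include <stdio.h>

	/* Models "hard limit > nr_sockets * this socket's pages", i.e.
	 * "this socket's pages < hard limit / nr_sockets" (below average).
	 */
	static bool may_raise(unsigned long limit_pages,
			      unsigned long nr_sockets,
			      unsigned long usage_pages)
	{
		return limit_pages > nr_sockets * usage_pages;
	}

	int main(void)
	{
		/* Hypothetical: 1000-page hard limit, 10 sockets allocated. */
		printf("%d\n", may_raise(1000, 10, 50));	/* 1: below average, allowed */
		printf("%d\n", may_raise(1000, 10, 150));	/* 0: above average, denied */
		return 0;
	}

So under global pressure a light user can still grow its allocation while a
heavy one is pushed back, which is the fairness the new comment describes.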