From: Shaibal Dutta <shaibal.dutta@xxxxxxxxxxxx>

Garbage collector work does not have to be bound to the CPU that scheduled
it. By moving the work to the power-efficient workqueue, the selection of
the CPU that executes the work is left to the scheduler. This extends idle
residency times and conserves power.

This takes effect when workqueue power efficiency is enabled, i.e. when
CONFIG_WQ_POWER_EFFICIENT_DEFAULT is set or the workqueue.power_efficient
boot parameter is passed.
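
For reference, the pattern applied here is the following (a minimal,
self-contained sketch; the handler name my_gc_handler is hypothetical,
not code from this patch):

	#include <linux/jiffies.h>
	#include <linux/workqueue.h>

	static void my_gc_handler(struct work_struct *work);
	static DECLARE_DELAYED_WORK(my_gc_work, my_gc_handler);

	static void my_gc_handler(struct work_struct *work)
	{
		/* ... periodic reclaim ... */

		/*
		 * schedule_delayed_work(&my_gc_work, HZ) would queue on
		 * system_wq, which is bound to per-CPU worker pools, so
		 * the work runs on the CPU that queued it. Queueing on
		 * system_power_efficient_wq instead leaves the choice of
		 * CPU to the scheduler when power-efficient mode is on.
		 */
		queue_delayed_work(system_power_efficient_wq,
				   &my_gc_work, HZ);
	}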
Cc: "David S. Miller" <davem@xxxxxxxxxxxxx>
Cc: Alexey Kuznetsov <kuznet@xxxxxxxxxxxxx>
Cc: James Morris <jmorris@xxxxxxxxx>
Cc: Hideaki YOSHIFUJI <yoshfuji@xxxxxxxxxxxxxx>
Cc: Patrick McHardy <kaber@xxxxxxxxx>
Signed-off-by: Shaibal Dutta <shaibal.dutta@xxxxxxxxxxxx>
[zoran.markovic@xxxxxxxxxx: Rebased to latest kernel version. Added
commit message.]
Signed-off-by: Zoran Markovic <zoran.markovic@xxxxxxxxxx>
---
 net/ipv4/inetpeer.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/net/ipv4/inetpeer.c b/net/ipv4/inetpeer.c
index 48f4244..87155aa 100644
--- a/net/ipv4/inetpeer.c
+++ b/net/ipv4/inetpeer.c
@@ -161,7 +161,8 @@ static void inetpeer_gc_worker(struct work_struct *work)
 	list_splice(&list, &gc_list);
 	spin_unlock_bh(&gc_lock);
-	schedule_delayed_work(&gc_work, gc_delay);
+	queue_delayed_work(system_power_efficient_wq,
+			   &gc_work, gc_delay);
@@ -576,7 +577,8 @@ static void inetpeer_inval_rcu(struct rcu_head *head)
 	list_add_tail(&p->gc_list, &gc_list);
 	spin_unlock_bh(&gc_lock);
-	schedule_delayed_work(&gc_work, gc_delay);
+	queue_delayed_work(system_power_efficient_wq,
+			   &gc_work, gc_delay);
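
For context, the workqueue core treats WQ_POWER_EFFICIENT as a request to
drop the per-CPU binding. Paraphrased from kernel/workqueue.c (exact code
varies by kernel version; shown only to illustrate the fallback):

	/* in alloc_workqueue(): promote to unbound when enabled */
	if ((flags & WQ_POWER_EFFICIENT) && wq_power_efficient)
		flags |= WQ_UNBOUND;

When power-efficient mode is disabled, the flag is a no-op and
system_power_efficient_wq behaves like system_wq, so this patch does not
change behaviour on such configurations.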