Re: [PATCH v2 20/23] sched: psi: implement bpf_psi struct ops
From: Roman Gushchin
Date: Tue Oct 28 2025 - 15:54:26 EST
Tejun Heo <tj@xxxxxxxxxx> writes:
> Hello,
>
> On Tue, Oct 28, 2025 at 11:29:31AM -0700, Roman Gushchin wrote:
>> > Here, too, I wonder whether it's necessary to build a hard-coded
>> > infrastructure to hook into PSI's triggers. psi_avgs_work() is what triggers
>> > these events and it's not that hot. Wouldn't a fexit attachment to that
>> > function that reads the updated values be enough? We can also easily add a
>> > TP there if a more structured access is desirable.
>>
>> Idk, it would require re-implementing parts of the kernel PSI trigger code
>> in BPF, without clear benefits.
>>
>> Handling PSI in BPF might be quite useful outside of the OOM handling,
>> e.g. it can be used for scheduling decisions, networking throttling,
>> memory tiering, etc. So maybe I'm biased (and I obviously am here), but
>> I'm not too concerned about adding infrastructure which won't be used.
>>
>> But I understand your point. I personally feel that, despite its added
>> complexity, the infrastructure makes writing and maintaining BPF PSI
>> programs simpler, but I'm open to other opinions here.
>
> Yeah, I mean, I'm not necessarily against adding infrastructure if the need
> is justified - ie. it enables new things which aren't reasonably feasible
> otherwise. However, it's also a good idea to start small, iterate and build
> up. It's always easier to add new things than to remove stuff which is
> already out there. Wouldn't it make more sense to add the minimum mechanism,
> see how things develop and add what's identified as missing in the
> process?
Ok, let me try the TP approach and see how it looks.
If I don't see any significant downsides, I'll drop the BPF PSI triggers
infrastructure.
Thanks!