Re: [PATCH HID 06/12] HID: bpf: add HID-BPF hooks for hid_hw_output_report
From: Benjamin Tissoires
Date: Fri Jun 21 2024 - 12:08:53 EST
On Jun 21 2024, Alexei Starovoitov wrote:
> On Fri, Jun 21, 2024 at 1:56 AM Benjamin Tissoires <bentiss@xxxxxxxxxx> wrote:
> >
> > Same story as hid_hw_raw_requests:
> >
> > This allows a bpf program to intercept and prevent or change the
> > behavior of hid_hw_output_report().
> >
> > The intent is to solve a couple of use cases:
> > - firewalling a HID device: a firewall can monitor who opens the hidraw
> > nodes and then allow or deny write operations on that hidraw node.
> > - change the behavior of a device and emulate a new HID feature request
> >
> > The hook is allowed to run as sleepable so it can itself call
> > hid_hw_output_report(), which allows "converting" one feature request
> > into another or even issuing the request on a different HID device on
> > the same physical device.
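
(For illustration, a filter using this hook could look roughly like the
sketch below; the report ID and the program/ops names are made up, only
struct hid_bpf_ops, hid_bpf_get_data() and the hook prototype come from
this series, and the includes mirror what the HID selftests use:)

	#include "vmlinux.h"
	#include <bpf/bpf_helpers.h>
	#include <bpf/bpf_tracing.h>
	#include "hid_bpf_helpers.h"	/* kfunc declarations (selftests) */

	SEC("struct_ops.s/hid_hw_output_report")
	int BPF_PROG(filter_output_report, struct hid_bpf_ctx *hctx, __u64 source)
	{
		/* get read/write access to the first 2 bytes of the report */
		__u8 *data = hid_bpf_get_data(hctx, 0 /* offset */, 2 /* size */);

		if (!data)
			return 0; /* do not interfere if the data is unavailable */

		/* e.g. refuse output reports with (made up) report ID 0x01 */
		if (data[0] == 0x01)
			return -1; /* non-zero: the real report is not sent */

		return 0; /* 0: let hid_hw_output_report() proceed normally */
	}

	SEC(".struct_ops.link")
	struct hid_bpf_ops output_filter = {
		/* .hid_id is filled in by userspace before loading */
		.hid_hw_output_report = (void *)filter_output_report,
	};

	char _license[] SEC("license") = "GPL";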
> >
> > Signed-off-by: Benjamin Tissoires <bentiss@xxxxxxxxxx>
> >
> > ---
> >
> > Here checkpatch complains about:
> > WARNING: use of RCU tasks trace is incorrect outside BPF or core RCU code
> >
> > However, we are jumping into BPF code, so I think this is correct, but
> > I'd like to have the opinion of the BPF folks.
> > ---
> > drivers/hid/bpf/hid_bpf_dispatch.c | 37 ++++++++++++++++++++++++++++++++----
> > drivers/hid/bpf/hid_bpf_struct_ops.c | 1 +
> > drivers/hid/hid-core.c | 10 ++++++++--
> > drivers/hid/hidraw.c | 2 +-
> > include/linux/hid.h | 3 ++-
> > include/linux/hid_bpf.h | 24 ++++++++++++++++++++++-
> > 6 files changed, 68 insertions(+), 9 deletions(-)
> >
> > diff --git a/drivers/hid/bpf/hid_bpf_dispatch.c b/drivers/hid/bpf/hid_bpf_dispatch.c
> > index 8d6e08b7c42f..2a29a0625a3b 100644
> > --- a/drivers/hid/bpf/hid_bpf_dispatch.c
> > +++ b/drivers/hid/bpf/hid_bpf_dispatch.c
> > @@ -111,6 +111,38 @@ int dispatch_hid_bpf_raw_requests(struct hid_device *hdev,
> > }
> > EXPORT_SYMBOL_GPL(dispatch_hid_bpf_raw_requests);
> >
> > +int dispatch_hid_bpf_output_report(struct hid_device *hdev,
> > + __u8 *buf, u32 size, __u64 source,
> > + bool from_bpf)
> > +{
> > + struct hid_bpf_ctx_kern ctx_kern = {
> > + .ctx = {
> > + .hid = hdev,
> > + .allocated_size = size,
> > + .size = size,
> > + },
> > + .data = buf,
> > + .from_bpf = from_bpf,
> > + };
> > + struct hid_bpf_ops *e;
> > + int ret;
> > +
> > + rcu_read_lock_trace();
> > + list_for_each_entry_rcu(e, &hdev->bpf.prog_list, list) {
> > + if (e->hid_hw_output_report) {
> > + ret = e->hid_hw_output_report(&ctx_kern.ctx, source);
> > + if (ret)
> > + goto out;
> > + }
> > + }
> > + ret = 0;
> > +
> > +out:
> > + rcu_read_unlock_trace();
>
> same question.
Re: "What is this for?":
e->hid_hw_output_report might sleep, so using a plain rcu_read_lock()
introduces warnings.
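
(i.e. the plain-RCU variant of the hunk above, simplified, would be the
following, and calling a sleepable struct_ops program in there trips the
"sleeping in RCU read-side critical section" checks:)

	rcu_read_lock();
	list_for_each_entry_rcu(e, &hdev->bpf.prog_list, list) {
		if (e->hid_hw_output_report)
			/* sleepable bpf program called inside a non-sleepable
			 * RCU read-side critical section -> might_sleep() splat
			 */
			ret = e->hid_hw_output_report(&ctx_kern.ctx, source);
	}
	rcu_read_unlock();
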
> What protects prog_list ?
I currently have a mutex in "struct hid_bpf" (prog_list_lock).
I tried taking the lock instead of calling rcu_read_lock_trace(), but
while in e->hid_hw_output_report we can call hid_bpf_hw_output_report
exactly once, which leads to a deadlock as we are re-entering
dispatch_hid_bpf_output_report() (the same applies to hid_raw_request).
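
Roughly, the problematic chain with the mutex would be (the intermediate
function names are from this series, the exact path is from memory):

	hid_hw_output_report()
	  dispatch_hid_bpf_output_report()
	    mutex_lock(&hdev->bpf.prog_list_lock)
	    e->hid_hw_output_report()             /* sleepable bpf program */
	      hid_bpf_hw_output_report()          /* kfunc used by the program */
	        hid_hw_output_report()            /* from_bpf == true this time */
	          dispatch_hid_bpf_output_report()
	            mutex_lock(&hdev->bpf.prog_list_lock)   /* deadlock */
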
> list_for_each_entry_rcu() should be used within RCU CS
> if elements of that list are freed via call_rcu().
> rcu_read_lock_trace() looks wrong here.
I'm not sure whether I could use nested mutexes or whether I should use
some other locking mechanism (or not take the lock when we are coming
from bpf, but then I would need to keep track of who actually called
what).
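
(The "don't take the lock when coming from bpf" idea would be something
like the sketch below, reusing the from_bpf parameter already passed to
dispatch_hid_bpf_output_report(); I'm not claiming this is safe against
concurrent updates of prog_list:)

	if (!from_bpf)
		mutex_lock(&hdev->bpf.prog_list_lock);

	list_for_each_entry(e, &hdev->bpf.prog_list, list) {
		if (e->hid_hw_output_report) {
			ret = e->hid_hw_output_report(&ctx_kern.ctx, source);
			if (ret)
				break;
		}
	}

	if (!from_bpf)
		mutex_unlock(&hdev->bpf.prog_list_lock);
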
Anyway, thanks for having a look at it :)
Cheers,
Benjamin