On Fri, Jan 29, 2021 at 1:49 PM Daniel Borkmann <daniel@xxxxxxxxxxxxx> wrote:
On 1/29/21 11:57 AM, Daniel Borkmann wrote:
On 1/27/21 10:01 PM, Andrii Nakryiko wrote:
On Tue, Jan 26, 2021 at 10:36 AM Florent Revest <revest@xxxxxxxxxxxx> wrote:
This needs a new helper that:
- can work in a sleepable context (using sock_gen_cookie)
- takes a struct sock pointer and checks that it's not NULL
Signed-off-by: Florent Revest <revest@xxxxxxxxxxxx>
Acked-by: KP Singh <kpsingh@xxxxxxxxxx>
---
include/linux/bpf.h | 1 +
include/uapi/linux/bpf.h | 8 ++++++++
kernel/trace/bpf_trace.c | 2 ++
net/core/filter.c | 12 ++++++++++++
tools/include/uapi/linux/bpf.h | 8 ++++++++
5 files changed, 31 insertions(+)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 1aac2af12fed..26219465e1f7 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1874,6 +1874,7 @@ extern const struct bpf_func_proto bpf_per_cpu_ptr_proto;
extern const struct bpf_func_proto bpf_this_cpu_ptr_proto;
extern const struct bpf_func_proto bpf_ktime_get_coarse_ns_proto;
extern const struct bpf_func_proto bpf_sock_from_file_proto;
+extern const struct bpf_func_proto bpf_get_socket_ptr_cookie_proto;
const struct bpf_func_proto *bpf_tracing_func_proto(
enum bpf_func_id func_id, const struct bpf_prog *prog);
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 0b735c2729b2..5855c398d685 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -1673,6 +1673,14 @@ union bpf_attr {
* Return
* A 8-byte long unique number.
*
+ * u64 bpf_get_socket_cookie(void *sk)
should the type be `struct sock *` then?
Checking libbpf's generated bpf_helper_defs.h it generates:
/*
* bpf_get_socket_cookie
*
* If the **struct sk_buff** pointed by *skb* has a known socket,
* retrieve the cookie (generated by the kernel) of this socket.
* If no cookie has been set yet, generate a new cookie. Once
* generated, the socket cookie remains stable for the life of the
* socket. This helper can be useful for monitoring per socket
* networking traffic statistics as it provides a global socket
* identifier that can be assumed unique.
*
* Returns
* A 8-byte long non-decreasing number on success, or 0 if the
* socket field is missing inside *skb*.
*/
static __u64 (*bpf_get_socket_cookie)(void *ctx) = (void *) 46;
So in terms of the helper comment, it's picking up the description from
the `u64 bpf_get_socket_cookie(struct sk_buff *skb)` signature. With that
in mind it would likely make sense to add the actual `struct sock *` type
to the comment to make it clearer here.
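For illustration, a minimal sketch of how a tracing prog ends up calling
the helper with a socket pointer once the series is applied; the fentry
attach point (tcp_connect) and the prog itself are only assumptions for
the example, not something taken from the series:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

SEC("fentry/tcp_connect")
int BPF_PROG(trace_tcp_connect, struct sock *sk)
{
	/* The generated prototype says (void *ctx), but what is actually
	 * passed here is a BTF-typed struct sock pointer, which is why
	 * naming the struct sock * type in the doc comment would help. */
	__u64 cookie = bpf_get_socket_cookie(sk);

	bpf_printk("cookie=%llu", cookie);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";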
One thought that still came to mind when looking over the series again:
do we need to blacklist certain functions from bpf_get_socket_cookie()
under tracing, e.g. when attaching to, say, fexit? For example, if
sk_prot_free() were temporarily uninlined/exported for testing and
bpf_get_socket_cookie() were invoked from a prog upon fexit, where the
sock was already handed back to the allocator, I presume there's a risk
of memory corruption, no?
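To make the scenario concrete, a purely illustrative sketch of the kind
of prog a denylist would have to reject (sk_free is only a stand-in for
the class of functions being discussed here, and whether it is
attachable at all depends on inlining):

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

SEC("fexit/sk_free")
int BPF_PROG(after_sk_free, struct sock *sk)
{
	/* If the last reference was dropped, sk may already point at freed
	 * memory here; bpf_get_socket_cookie() reading it, and
	 * sock_gen_cookie() potentially writing the cookie into it, is the
	 * memory corruption risk described above. */
	__u64 cookie = bpf_get_socket_cookie(sk);

	bpf_printk("cookie=%llu", cookie);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";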
Mh, this is interesting. I can try to add a deny list in v7, but I'm not
sure whether I'll be able to catch them all. I'm assuming that
__sk_destruct, sk_destruct, __sk_free and sk_free would be other
problematic functions, but potentially there would be more.
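A rough sketch (not from the series) of how such a deny list could be
wired up, reusing the mechanism bpf_d_path() already has for its
allowlist: a BTF ID set plus the func proto's .allowed() callback. The
helper body follows the commit message (sock_gen_cookie() plus a NULL
check); the proto fields and the contents of the set are assumptions
based on this thread and on similar helpers, and would need a proper
audit:

/* net/core/filter.c (sketch) */
#include <linux/btf_ids.h>

BPF_CALL_1(bpf_get_socket_ptr_cookie, struct sock *, sk)
{
	/* Works in sleepable context via sock_gen_cookie() and tolerates
	 * a NULL socket pointer, as described in the commit message. */
	return sk ? sock_gen_cookie(sk) : 0;
}

/* Only the functions named in this thread; certainly incomplete. */
BTF_SET_START(bpf_get_socket_cookie_deny)
BTF_ID(func, __sk_destruct)
BTF_ID(func, sk_destruct)
BTF_ID(func, __sk_free)
BTF_ID(func, sk_free)
BTF_SET_END(bpf_get_socket_cookie_deny)

static bool bpf_get_socket_ptr_cookie_allowed(const struct bpf_prog *prog)
{
	/* Reject attach points where the sock may already have been freed. */
	return !btf_id_set_contains(&bpf_get_socket_cookie_deny,
				    prog->aux->attach_btf_id);
}

const struct bpf_func_proto bpf_get_socket_ptr_cookie_proto = {
	.func		= bpf_get_socket_ptr_cookie,
	.gpl_only	= false,
	.ret_type	= RET_INTEGER,
	.arg1_type	= ARG_PTR_TO_BTF_ID_SOCK_COMMON,
	.allowed	= bpf_get_socket_ptr_cookie_allowed,
};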