[PATCH RT 7/8] tracing: make preempt_lazy and migrate_disable counter smaller

From: zanussi
Date: Mon Mar 09 2020 - 15:48:37 EST


From: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>

v4.14.172-rt78-rc1 stable review patch.
If anyone has any objections, please let me know.

-----------


[ Upstream commit dd430bf5ecb40f9a89679c85868826475d71de54 ]

The migrate_disable counter should not exceed 255, so it is enough to
store it in an 8-bit field.
With this change we can move the `preempt_lazy_count' member into the
gap so the whole struct shrinks by 4 bytes to 12 bytes in total.
Remove the `padding' field; it is no longer needed.
Update the tracing fields in trace_define_common_fields() (it was
missing the preempt_lazy_count field).
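
For reference, a minimal userspace sketch of the layout change (the
structs below only mirror the RT `trace_entry'; the `_old'/`_new' names
are made up for illustration, and the 16 -> 12 byte figures assume the
usual natural alignment with a 4-byte int):

/* Stand-alone illustration of the size win, not kernel code. */
#include <stdio.h>

struct trace_entry_old {		/* before this patch */
	unsigned short	type;
	unsigned char	flags;
	unsigned char	preempt_count;
	int		pid;
	unsigned short	migrate_disable;
	unsigned short	padding;
	unsigned char	preempt_lazy_count;	/* + 3 bytes tail padding */
};

struct trace_entry_new {		/* after this patch */
	unsigned short	type;
	unsigned char	flags;
	unsigned char	preempt_count;
	int		pid;
	unsigned char	migrate_disable;	/* shrunk to 8 bits ... */
	unsigned char	preempt_lazy_count;	/* ... so it fills the gap */
};

int main(void)
{
	printf("old: %zu bytes\n", sizeof(struct trace_entry_old));	/* 16 */
	printf("new: %zu bytes\n", sizeof(struct trace_entry_new));	/* 12 */
	return 0;
}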

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
Signed-off-by: Tom Zanussi <zanussi@xxxxxxxxxx>
---
 include/linux/trace_events.h | 3 +--
 kernel/trace/trace_events.c  | 4 ++--
 2 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
index edd1e42e8a2f7..01e9ab3107531 100644
--- a/include/linux/trace_events.h
+++ b/include/linux/trace_events.h
@@ -62,8 +62,7 @@ struct trace_entry {
 	unsigned char		flags;
 	unsigned char		preempt_count;
 	int			pid;
-	unsigned short		migrate_disable;
-	unsigned short		padding;
+	unsigned char		migrate_disable;
 	unsigned char		preempt_lazy_count;
 };

diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index 60e371451ec31..edd43841c94ad 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -187,8 +187,8 @@ static int trace_define_common_fields(void)
 	__common_field(unsigned char, flags);
 	__common_field(unsigned char, preempt_count);
 	__common_field(int, pid);
-	__common_field(unsigned short, migrate_disable);
-	__common_field(unsigned short, padding);
+	__common_field(unsigned char, migrate_disable);
+	__common_field(unsigned char, preempt_lazy_count);
 
 	return ret;
 }
--
2.14.1