[PATCH v2] perf symbols: Cannot disassemble some routines when debuginfo present
From: Eric Saint-Etienne
Date: Wed Nov 28 2018 - 05:43:39 EST
When the kernel is compiled with -ffunction-sections and perf uses the
kernel debuginfo, perf fails the very first symbol lookup and ends up with
a hex offset inside [kernel.vmlinux]. This is due to how perf loads the
maps: only .text gets loaded by map_groups__find() into al->map, so
al->map's address range initially encompasses the whole kernel image.
map__load() then loads many function maps by splitting al->map, which
shrinks al->map's range drastically. The target address is now very
likely in one of those newly created function maps, so we need to look
up the map again to find the new map.
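
To illustrate the problem (this is not perf code, only a toy model with
made-up structures and addresses), consider a lookup table of address
ranges where the first, all-encompassing range gets split by a later load:

/* Toy illustration only -- not perf code. A "map" is just [start, end). */
#include <inttypes.h>
#include <stdio.h>

struct toy_map { uint64_t start, end; };

static struct toy_map maps[8] = { { 0x1000, 0x9000 } }; /* one big ".text" */
static int nr_maps = 1;

static struct toy_map *toy_find(uint64_t addr)
{
	int i;

	for (i = 0; i < nr_maps; i++)
		if (addr >= maps[i].start && addr < maps[i].end)
			return &maps[i];
	return NULL;
}

/*
 * Mimics the effect of map__load() with -ffunction-sections: the big map
 * is shrunk and per-function maps are appended, without leaving gaps.
 */
static void toy_load(void)
{
	maps[0].end = 0x2000;                                 /* ".text" shrinks */
	maps[nr_maps++] = (struct toy_map){ 0x2000, 0x5000 }; /* "func_a" */
	maps[nr_maps++] = (struct toy_map){ 0x5000, 0x9000 }; /* "func_b" */
}

int main(void)
{
	uint64_t addr = 0x6000;
	struct toy_map *map = toy_find(addr); /* matches the big map */

	toy_load();                           /* the big map shrinks */

	/* The map we kept no longer covers addr; a second lookup is needed. */
	printf("still inside the old map? %s\n",
	       addr >= map->start && addr < map->end ? "yes" : "no");
	printf("re-lookup finds [%#" PRIx64 ", %#" PRIx64 ")\n",
	       toy_find(addr)->start, toy_find(addr)->end);
	return 0;
}

Running this prints "no" for the first question: the map returned by the
first lookup has become stale, which is exactly what happens to al->map.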
I'm not sure whether this issue is specific to the kernel, but it does
occur with the kernel DSO. When the kernel debuginfo is not used, perf
falls back to kallsyms and the first lookup works.
The split of the .text section happens in dso_process_kernel_symbol(),
where we call map_groups__find_by_name() to find an existing map. With
-ffunction-sections and a symbol belonging to a new (function) map, such
a map doesn't exist yet, so we end up creating one and adjusting the
existing maps accordingly, because adjust_kernel_syms is set there.
This patch makes sure that the event address we're looking up is indeed
within the map we've found; otherwise we look up the map again. At most
one extra lookup is required to find the proper map, if it exists.
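
Expressed against the toy structures from the example above (again purely
illustrative, not perf code), the fix boils down to a bounds check, at most
one extra lookup, and a NULL guard before computing the map-relative
address:

/* Sketch of the fix, reusing toy_find()/toy_load() from the example above. */
static uint64_t toy_resolve(uint64_t addr)
{
	struct toy_map *map = toy_find(addr); /* first lookup: the big map */

	if (map == NULL) /* perf guarantees al->map here; be safe in the toy */
		return addr;

	toy_load();                           /* may shrink/split that map */

	/*
	 * The added check: if addr fell outside the map we first found,
	 * look the map up again. At most one extra lookup is needed
	 * because the split introduces no gaps.
	 */
	if (addr < map->start || addr >= map->end)
		map = toy_find(addr);

	/* Guard the dereference in case no map was found after all. */
	if (map != NULL)
		return addr - map->start;     /* stands in for map_ip() */
	return addr;
}

The real change below does the same against al->map: a range check followed
by a second map_groups__find(), and a NULL check before calling map_ip().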
Signed-off-by: Eric Saint-Etienne <eric.saint.etienne@xxxxxxxxxx>
Reviewed-by: Darren Kenny <darren.kenny@xxxxxxxxxx>
---
tools/perf/util/event.c | 49 +++++++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 47 insertions(+), 2 deletions(-)
diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c
index e9c108a..be333a9 100644
--- a/tools/perf/util/event.c
+++ b/tools/perf/util/event.c
@@ -1569,9 +1569,54 @@ struct map *thread__find_map(struct thread *thread, u8 cpumode, u64 addr,
* Kernel maps might be changed when loading symbols so loading
* must be done prior to using kernel maps.
*/
- if (load_map)
+ if (load_map) {
+ /*
+ * Note when using -ffunction-sections on the kernel:
+ *
+ * Only .text got loaded into al->map at this point.
+ * Consequently al->map's address range encompasses
+ * the whole kernel image.
+ *
+ * map__load() will split this map into many function
+ * maps by shrinking al->map accordingly.
+ *
+ * The split happens in dso_process_kernel_symbol()
+ * where we call map_groups__find_by_name() to find an
+ * existing map, but with -ffunction-sections and a
+ * symbol belonging to a new (function) map, such a map
+ * doesn't exist yet, so we end up creating one and
+ * adjusting the existing maps accordingly because
+ * adjust_kernel_syms is set there.
+ */
+
map__load(al->map);
- al->addr = al->map->map_ip(al->map, al->addr);
+
+ /*
+ * Note when using -ffunction-sections on the kernel:
+ *
+ * Very likely the target address will now be in one of
+ * the newly created function maps but al->map still
+ * points to .text, which has been drastically shrunk by
+ * the split done in map__load().
+ */
+ if (al->addr < al->map->start ||
+ al->addr >= al->map->end)
+ al->map = map_groups__find(mg, al->addr);
+
+ /*
+ * map_groups__find() should always find a map because
+ * the target address was initially found in .text, which
+ * got split *without introducing gaps* by map__load().
+ */
+ }
+
+ /*
+ * In case the second map_groups__find() call didn't find a
+ * suitable map (it always should, but better safe than
+ * sorry), make sure al->map is valid before dereferencing it.
+ */
+ if (al->map != NULL)
+ al->addr = al->map->map_ip(al->map, al->addr);
}
return al->map;
--
1.8.3.1