On Tue 20-04-21 09:32:14, Christian König wrote:
> Am 20.04.21 um 09:04 schrieb Michal Hocko:
> > On Mon 19-04-21 18:37:13, Christian König wrote:
> > > Am 19.04.21 um 18:11 schrieb Michal Hocko:
[...]
> > > > What I am trying to bring up with NUMA side is that the same problem can
> > > > happen on per-node basis. Let's say that some user consumes unexpectedly
> > > > large amount of dma-buf on a certain node. This can lead to observable
> > > > performance impact on anybody on allocating from that node and even
> > > > worse cause an OOM for node bound consumers. How do I find out that it
> > > > was dma-buf that has caused the problem?
> > > Yes, that is the direction my thinking goes as well, but also even further.
> > > 
> > > See DMA-buf is also used to share device local memory between processes as
> > > well. In other words VRAM on graphics hardware.
> > > 
> > > On my test system here I have 32GB of system memory and 16GB of VRAM. I can
> > > use DMA-buf to allocate that 16GB of VRAM quite easily which then shows up
> > > under /proc/meminfo as used memory.
> > 
> > This is something that would be really interesting in the changelog. I
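
[Editorial aside: the system-wide numbers being discussed can be checked with a few lines of script. The sketch below uses the real /proc/meminfo field names (MemTotal, MemAvailable), but the sample text and values are invented for illustration rather than taken from the test system in the mail.]

```python
# Sketch: parse /proc/meminfo-style text and report the apparent "used"
# memory (MemTotal - MemAvailable). On a live system one would read the
# real file with open("/proc/meminfo").read() instead of SAMPLE.

def parse_meminfo(text):
    """Return a dict mapping field name -> size in kB."""
    fields = {}
    for line in text.splitlines():
        if not line.strip():
            continue
        name, _, rest = line.partition(":")
        # Values are reported in kB, e.g. "MemTotal:  32768000 kB".
        fields[name.strip()] = int(rest.split()[0])
    return fields

# Invented sample data, roughly matching a 32GB machine.
SAMPLE = """\
MemTotal:       32768000 kB
MemFree:         4096000 kB
MemAvailable:    8192000 kB
"""

if __name__ == "__main__":
    info = parse_meminfo(SAMPLE)
    used_kb = info["MemTotal"] - info["MemAvailable"]
    print(f"apparent used memory: {used_kb // 1024} MiB")
```

A counter inflated by device-local buffers would show up here as a large jump in apparent usage, which is exactly the confusion discussed below.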
> > mean the expected and extreme memory consumption of this memory. Ideally
> > with some hints on what to do when the number is really high (e.g. mount
> > debugfs and have a look here and there to check whether this is just too
> > many users or an unexpected pattern to be reported).
> > 
> > > But that isn't really system memory at all, it's just allocated device
> > > memory.
> > 
> > OK, that was not really clear to me. So this is not really accounted to
> > MemTotal? If that is really the case then reporting it into the oom
> > report is completely pointless and I am not even sure /proc/meminfo is
> > the right interface either. It would just add more confusion I am
> > afraid.
> > 
> > See where I am heading?
> 
> Yeah, totally. Thanks for pointing this out.
> 
> Suggestions how to handle that?

As I've pointed out in previous reply we do have an API to account per
node memory but now that you have brought up that this is not something
we account as a regular memory then this doesn't really fit into that
model. But maybe I am just confused.
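
[Editorial aside on the per-node point: the kernel exports node-local counters under /sys/devices/system/node/node<N>/meminfo, which is one place an administrator could look when a single node is under memory pressure. The sketch below assumes that file's real line format ("Node 0 MemTotal: ... kB") but uses invented sample data; note that, as the thread concludes, device-local dma-buf memory would not appear in these counters at all.]

```python
import re

# Sketch: summarize per-node used memory from node-meminfo style text.
# Real files live at /sys/devices/system/node/node<N>/meminfo; the
# sample data below is made up for illustration.

NODE_LINE = re.compile(r"Node\s+(\d+)\s+(\w+):\s+(\d+)\s+kB")

def per_node(text):
    """Return {node: {field: kB}} parsed from node-meminfo style text."""
    nodes = {}
    for node, field, kb in NODE_LINE.findall(text):
        nodes.setdefault(int(node), {})[field] = int(kb)
    return nodes

# Invented sample: two 16GB nodes, one much busier than the other.
SAMPLE = """\
Node 0 MemTotal:       16384000 kB
Node 0 MemFree:         1024000 kB
Node 1 MemTotal:       16384000 kB
Node 1 MemFree:         9216000 kB
"""

if __name__ == "__main__":
    for node, fields in sorted(per_node(SAMPLE).items()):
        used = fields["MemTotal"] - fields["MemFree"]
        print(f"node {node}: {used // 1024} MiB used")
```

An imbalance like this tells you which node is under pressure, but not which consumer caused it, which is the attribution problem raised above.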