Re: [PATCH v5] dma-buf: Add DmaBufTotal counter in meminfo
From: Peter.Enderborg
Date: Tue Apr 20 2021 - 07:38:31 EST
On 4/20/21 1:14 PM, Daniel Vetter wrote:
> On Tue, Apr 20, 2021 at 09:26:00AM +0000, Peter.Enderborg@xxxxxxxx wrote:
>> On 4/20/21 10:58 AM, Daniel Vetter wrote:
>>> On Sat, Apr 17, 2021 at 06:38:35PM +0200, Peter Enderborg wrote:
>>>> This adds a total used dma-buf memory. Details
>>>> can be found in debugfs, however it is not for everyone
>>>> and not always available. dma-buf are indirect allocated by
>>>> userspace. So with this value we can monitor and detect
>>>> userspace applications that have problems.
>>>>
>>>> Signed-off-by: Peter Enderborg <peter.enderborg@xxxxxxxx>
>>> So there have been tons of discussions around how to track dma-buf and
>>> why, and I really need to understand the use-case here first, I think. proc
>>> uapi is as much forever as anything else, and depending on what you're doing
>>> this doesn't make any sense at all:
>>>
>>> - on most Linux systems dma-buf are only instantiated for shared buffers.
>>> So there this gives you a fairly meaningless number, not anything
>>> reflecting gpu memory usage at all.
>>>
>>> - on Android all buffers are allocated through dma-buf afaik. But there
>>> we've recently had some discussions about how exactly we should track
>>> all this, and the conclusion was that most of this should be solved by
>>> cgroups long term. So if this is for Android, then I don't think adding
>>> random quick stop-gaps to upstream is a good idea (there's a pretty
>>> long list of patches that have come up on this).
>>>
>>> So what is this for?
>> For the overview. dma-buf today only has debugfs for this info, and
>> debugfs is not allowed by Google on Android. So this aggregates the
>> information so we can see what is going on in the system.
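>>
>> With this patch the aggregate shows up as a single line in /proc/meminfo,
>> something like this (value illustrative):
>>
>> DmaBufTotal:        2048 kB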
>>
>> And the standard LKML response to that is "SHOW ME THE CODE".
> Yes. Except this extends to how exactly this is supposed to be used in
> userspace and acted upon.
>
>> When the top memcg has aggregated information on dma-buf, it is maybe
>> a better source for meminfo. But then it also implies that dma-buf requires memcg.
>>
>> And I don't see any problem with replacing this with something better once it is ready.
> The thing is, this is uapi. Once it's merged we cannot, ever, replace it.
> It must be kept around forever, or a very close approximation thereof. So
> merging this with the justification that we can fix or replace it later on
> isn't going to happen.
It is intended to be relevant for as long as dma-buf exists. This is a proper
metric. If a newer implementation does not get the same result, it is
not doing it right and is not better. If a memcg counter or a global_zone
counter does the same thing, it can replace the suggested method.
But I don't think they will: a dma-buf does not have to be mapped to a process,
and in the case of vram it is not covered by the current global_zone counters.
All of them would be very nice to have in some form. But it won't change what the
correct value of "Total" is.
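
To illustrate the intended consumer: a minimal userspace sketch (illustrative
only, not part of the patch) that polls the proposed counter could look like
this:

#include <stdio.h>

/*
 * Minimal sketch of a monitor for the proposed DmaBufTotal field.
 * The field name and kB unit follow the patch below; error handling
 * is kept short for brevity.
 */
int main(void)
{
	FILE *f = fopen("/proc/meminfo", "r");
	char line[256];
	unsigned long kb = 0;

	if (!f)
		return 1;
	while (fgets(line, sizeof(line), f)) {
		if (sscanf(line, "DmaBufTotal: %lu kB", &kb) == 1)
			break;
	}
	fclose(f);
	printf("dma-buf total: %lu kB\n", kb);
	return 0;
}

Anything above a chosen threshold can then be flagged, with no debugfs
access needed.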
> -Daniel
>
>>> -Daniel
>>>
>>>> ---
>>>> drivers/dma-buf/dma-buf.c | 12 ++++++++++++
>>>> fs/proc/meminfo.c | 5 ++++-
>>>> include/linux/dma-buf.h | 1 +
>>>> 3 files changed, 17 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
>>>> index f264b70c383e..4dc37cd4293b 100644
>>>> --- a/drivers/dma-buf/dma-buf.c
>>>> +++ b/drivers/dma-buf/dma-buf.c
>>>> @@ -37,6 +37,7 @@ struct dma_buf_list {
>>>> };
>>>>
>>>> static struct dma_buf_list db_list;
>>>> +static atomic_long_t dma_buf_global_allocated;
>>>>
>>>> static char *dmabuffs_dname(struct dentry *dentry, char *buffer, int buflen)
>>>> {
>>>> @@ -79,6 +80,7 @@ static void dma_buf_release(struct dentry *dentry)
>>>> if (dmabuf->resv == (struct dma_resv *)&dmabuf[1])
>>>> dma_resv_fini(dmabuf->resv);
>>>>
>>>> + atomic_long_sub(dmabuf->size, &dma_buf_global_allocated);
>>>> module_put(dmabuf->owner);
>>>> kfree(dmabuf->name);
>>>> kfree(dmabuf);
>>>> @@ -586,6 +588,7 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info)
>>>> mutex_lock(&db_list.lock);
>>>> list_add(&dmabuf->list_node, &db_list.head);
>>>> mutex_unlock(&db_list.lock);
>>>> + atomic_long_add(dmabuf->size, &dma_buf_global_allocated);
>>>>
>>>> return dmabuf;
>>>>
>>>> @@ -1346,6 +1349,15 @@ void dma_buf_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
>>>> }
>>>> EXPORT_SYMBOL_GPL(dma_buf_vunmap);
>>>>
>>>> +/**
>>>> + * dma_buf_allocated_pages - Return the number of pages
>>>> + * currently allocated for dma-buf
>>>> + */
>>>> +long dma_buf_allocated_pages(void)
>>>> +{
>>>> + return atomic_long_read(&dma_buf_global_allocated) >> PAGE_SHIFT;
>>>> +}
>>>> +
>>>> #ifdef CONFIG_DEBUG_FS
>>>> static int dma_buf_debug_show(struct seq_file *s, void *unused)
>>>> {
>>>> diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
>>>> index 6fa761c9cc78..ccc7c40c8db7 100644
>>>> --- a/fs/proc/meminfo.c
>>>> +++ b/fs/proc/meminfo.c
>>>> @@ -16,6 +16,7 @@
>>>> #ifdef CONFIG_CMA
>>>> #include <linux/cma.h>
>>>> #endif
>>>> +#include <linux/dma-buf.h>
>>>> #include <asm/page.h>
>>>> #include "internal.h"
>>>>
>>>> @@ -145,7 +146,9 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
>>>> show_val_kb(m, "CmaFree: ",
>>>> global_zone_page_state(NR_FREE_CMA_PAGES));
>>>> #endif
>>>> -
>>>> +#ifdef CONFIG_DMA_SHARED_BUFFER
>>>> + show_val_kb(m, "DmaBufTotal: ", dma_buf_allocated_pages());
>>>> +#endif
>>>> hugetlb_report_meminfo(m);
>>>>
>>>> arch_report_meminfo(m);
>>>> diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
>>>> index efdc56b9d95f..5b05816bd2cd 100644
>>>> --- a/include/linux/dma-buf.h
>>>> +++ b/include/linux/dma-buf.h
>>>> @@ -507,4 +507,5 @@ int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *,
>>>> unsigned long);
>>>> int dma_buf_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map);
>>>> void dma_buf_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map);
>>>> +long dma_buf_allocated_pages(void);
>>>> #endif /* __DMA_BUF_H__ */
>>>> --
>>>> 2.17.1
>>>>
>> https://urldefense.com/v3/__https://lists.freedesktop.org/mailman/listinfo/dri-devel__;!!JmoZiZGBv3RvKRSx!vXvDg6I4V__QdL2fA08Rc5v6rjDzxOIQz6kwyMMLUK3_g4z7qZTg1H98BDDTxZeZjI4$