On Tue, Aug 13, 2024, at 10:45, Rohit Agarwal wrote, replying to Arnd Bergmann:
> What is the call chain you see in the kernel messages? Is it
> always the same?

Yes, the call stack is the same every time. This is the call stack:
<4>[ 2019.101352] dump_stack_lvl+0x69/0xa0
<4>[ 2019.101359] warn_alloc+0x10d/0x180
<4>[ 2019.101363] __alloc_pages_slowpath+0xe3d/0xe80
<4>[ 2019.101366] __alloc_pages+0x22f/0x2b0
<4>[ 2019.101369] __kmalloc_large_node+0x9d/0x120
<4>[ 2019.101373] ? mei_cl_alloc_cb+0x34/0xa0
<4>[ 2019.101377] ? mei_cl_alloc_cb+0x74/0xa0
<4>[ 2019.101379] __kmalloc+0x86/0x130
<4>[ 2019.101382] mei_cl_alloc_cb+0x74/0xa0
<4>[ 2019.101385] mei_cl_enqueue_ctrl_wr_cb+0x38/0x90

On 19/08/24 6:45 PM, Arnd Bergmann wrote:
Ok, so this might be a result of mei_cl_enqueue_ctrl_wr_cb() doing

	/* for RX always allocate at least client's mtu */
	if (length)
		length = max_t(size_t, length, mei_cl_mtu(cl));

which was added in 3030dc056459 ("mei: add wrapper for queuing
control commands."). All the callers seem to be passing a short
"length" of just a few bytes, but this would always extend it to
cl->me_cl->props.max_msg_length in mei_cl_mtu().
Not sure where that part is set.
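
One way to avoid the high-order allocation without touching the sizing
logic at all might be to let large buffers fall back to vmalloc(). A
minimal sketch, assuming the allocation in mei_cl_alloc_cb() looks
roughly like the upstream kmalloc() call (field names are approximate,
this is untested, and the matching free would have to become kvfree()):

	/*
	 * Sketch only: kvmalloc() behaves like kmalloc() for small
	 * sizes but falls back to vmalloc() for sizes that would need
	 * a high-order page block, which avoids the warn_alloc splat.
	 */
	cb->buf.data = kvmalloc(roundup(length, MEI_SLOT_SIZE), GFP_KERNEL);
	if (!cb->buf.data) {
		mei_io_cb_free(cb);
		return NULL;
	}
	cb->buf.size = length;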

On Wed, Aug 21, 2024, at 05:20, Rohit Agarwal wrote:
It's allocating the maximum length for the receive buffer so it can
accommodate any response.
Looks like this part could be optimized with a preallocated buffer pool.
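
A rough sketch of that idea: allocate one MTU-sized receive buffer per
client up front and reuse it for control commands. Everything below
(struct mei_cl_rx_buf, the rx_buf field, mei_cl_rx_buf_init()) is
invented for illustration and does not exist in the driver:

	/* Hypothetical per-client RX buffer, allocated once at connect
	 * time instead of once per control command. */
	struct mei_cl_rx_buf {
		void	*data;	/* mei_cl_mtu(cl) bytes */
		size_t	size;
	};

	static int mei_cl_rx_buf_init(struct mei_cl *cl)
	{
		cl->rx_buf.data = kvmalloc(mei_cl_mtu(cl), GFP_KERNEL);
		if (!cl->rx_buf.data)
			return -ENOMEM;
		cl->rx_buf.size = mei_cl_mtu(cl);
		return 0;
	}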

Arnd Bergmann replied:
I understand that it's always trying to allocate the maximum; the question is
whether there is ever a need to set the maximum to more than a page.
Preallocating a buffer at probe time would also address the issue, but if it's
possible to just make that buffer smaller, preallocation wouldn't be needed.
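
For illustration, the suggested cap might look like the variant below of
the snippet quoted earlier; this is untested and only safe if no control
response can actually exceed PAGE_SIZE, which is exactly the open
question:

	/* for RX allocate at least the client's mtu, capped at one page */
	if (length)
		length = max_t(size_t, length,
			       min_t(size_t, mei_cl_mtu(cl), PAGE_SIZE));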

Is the 64KB buffer size part of the Chrome-specific interface as well, or is
that part of the upstream kernel implementation?

Tomas replied:
> A gentle reminder.

The upstream solution is for newer graphics cards, and the overall
implementation is different.

I'm trying to collect more information myself; it's summer vacation time,
so it will take a few days.

Thanks
Tomas