Re: [RFC PATCH 01/19] x86,fs/resctrl: Add support for Global Bandwidth Enforcement (GLBE)
From: Babu Moger
Date: Wed Feb 11 2026 - 16:18:46 EST
Hi Reinette,
On 2/11/26 10:54, Reinette Chatre wrote:
Hi Babu,

On 2/10/26 5:07 PM, Moger, Babu wrote:

Hi Reinette,

On 2/9/2026 12:44 PM, Reinette Chatre wrote:

Hi Babu,

On 1/21/26 1:12 PM, Babu Moger wrote:

On AMD systems, the existing MBA feature allows the user to set a bandwidth
limit for each QOS domain. However, multiple QOS domains share system
memory bandwidth as a resource. In order to ensure that system memory
bandwidth is not over-utilized, the user must statically partition the
available system bandwidth between the active QOS domains. This typically

How do you define "active" QoS Domain?

Some domains may not have any CPUs associated with that CLOSID. By "active" I am
referring to domains that have CPUs assigned to the CLOSID.

To confirm, is this then specific to assigning CPUs to resource groups via
the cpus/cpus_list files? This refers to how a user needs to partition
available bandwidth so I am still trying to understand the message here since
users still need to do this even when CPUs are not assigned to resource
groups.

It is not specific to CPU assignment. It applies to task assignment also.
For example, we have 4 domains:
# cat schemata
MB:0=8192;1=8192;2=8192;3=8192
If this group has CPUs assigned only in the first two domains, then the group has only two active domains and we will only update those two domains. The MB values in the other domains do not matter.
# echo "MB:0=8;1=8" > schemata
# cat schemata
MB:0=8;1=8;2=8192;3=8192
The combined bandwidth can go up to 16 (8+8) units. Each unit is 1/8 GB.
With GMBA, we can set a limit at the higher (global) level, and the total bandwidth will not exceed the GMBA limit.
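The arithmetic above can be sketched as follows. This is a hypothetical illustration, not resctrl code; the function names are invented, and the 1/8 GB MB unit and 1 GB GMB unit sizes are taken from this discussion:

```python
# Hypothetical sketch of how per-domain MBA limits add up and how a GMBA
# ceiling caps the total. Illustrative only, not resctrl code.

MB_UNIT_GB = 1 / 8   # MB (MBA) unit: 1/8 GB, per this discussion
GMB_UNIT_GB = 1      # GMB (GMBA) unit: 1 GB, per this discussion

def combined_mb_gb(mb_limits):
    """Combined bandwidth allowed by per-domain MBA limits, in GB."""
    return sum(mb_limits) * MB_UNIT_GB

def total_with_gmb_gb(mb_limits, gmb_limit):
    """Total bandwidth once the GMBA ceiling is also enforced, in GB."""
    return min(combined_mb_gb(mb_limits), gmb_limit * GMB_UNIT_GB)

# Two active domains with MB:0=8;1=8 -> 16 units = 2 GB combined.
print(combined_mb_gb([8, 8]))          # 2.0
# A GMBA ceiling of 1 (= 1 GB) caps the total below the MBA sum.
print(total_with_gmb_gb([8, 8], 1))    # 1.0
```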
results in system memory being under-utilized since not all QOS domains are
using their full bandwidth allocation.

AMD PQoS Global Bandwidth Enforcement (GLBE) provides a mechanism
for software to specify bandwidth limits for groups of threads that span
multiple QoS Domains. This collection of QOS domains is referred to as the GLBE
control domain. The GLBE ceiling sets a maximum limit on memory bandwidth
in the GLBE control domain. Bandwidth is shared by all threads in a Class of
Service (COS) across every QoS domain managed by the GLBE control domain.

How does this bandwidth allocation limit impact existing MBA? For example, if a
system has two domains (A and B) that user space separately sets MBA
allocations for while also placing both domains within a "GLBE control domain"
with a different allocation, do the individual MBA allocations still matter?

Yes. Both ceilings are enforced at their respective levels.
The MBA ceiling is applied at the QoS domain level.
The GLBE ceiling is applied at the GLBE control domain level.
If the MBA ceiling exceeds the GLBE ceiling, the effective MBA limit will be capped by the GLBE ceiling.

It sounds as though MBA and GMBA/GLBE operate within the same parameters wrt
the limits but in examples in this series they have different limits. For example,
in the documentation patch [1] there is this:
# cat schemata
GMB:0=2048;1=2048;2=2048;3=2048
MB:0=4096;1=4096;2=4096;3=4096
L3:0=ffff;1=ffff;2=ffff;3=ffff
followed up with what it will look like in new generation [2]:
GMB:0=4096;1=4096;2=4096;3=4096
MB:0=8192;1=8192;2=8192;3=8192
L3:0=ffff;1=ffff;2=ffff;3=ffff
In both examples the per-domain MB ceiling is higher than the global GMB ceiling. With
the above showing defaults, and your statement "If the MBA ceiling exceeds the GLBE ceiling,
the effective MBA limit will be capped by the GLBE ceiling" - does this mean that the
MB ceiling can never be higher than the GMB ceiling as shown in the examples?
That is correct. One more piece of information here: the MB unit is 1/8 GB and the GMB unit is 1 GB. I have added that to the documentation in patch 4.
The GMB limit defaults to the maximum value 4096 (bit 12 set) when a new group is created, meaning the GMB limit does not apply by default.
When setting the limits, the same value should be used in all the domains in a GMB control domain. Having a different value in each domain results in unexpected behavior.
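A minimal sketch of these two rules; the names and checks are hypothetical, while the 4096 default with bit 12 set and the same-value requirement come from the explanation above:

```python
# Hypothetical sketch of the GMB rules described above. Illustrative only;
# not the resctrl implementation.

GMB_UNLIMITED_BIT = 1 << 12   # bit 12 set -> GMB limit does not apply
GMB_DEFAULT = 4096            # default for a new group: bit 12 set

def gmb_is_unlimited(value):
    """A value with bit 12 set means the GMB limit is not enforced."""
    return bool(value & GMB_UNLIMITED_BIT)

def check_gmb_write(per_domain_values):
    """All domains in a GMB control domain must be given the same value."""
    if len(set(per_domain_values)) != 1:
        raise ValueError("GMB must be set to the same value in all domains")
    return per_domain_values[0]

print(gmb_is_unlimited(GMB_DEFAULT))   # True: new groups are unlimited
print(check_gmb_write([8, 8, 8, 8]))   # 8
```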
Another question, when setting aside possible differences between MB and GMB.
I am trying to understand how user may expect to interact with these interfaces ...
Consider the starting state example as below where the MB and GMB ceilings are the
same:
# cat schemata
GMB:0=2048;1=2048;2=2048;3=2048
MB:0=2048;1=2048;2=2048;3=2048
Would something like below be accurate? Specifically, showing how the GMB limit impacts the
MB limit:
# echo "GMB:0=8;2=8" > schemata
# cat schemata
GMB:0=8;1=2048;2=8;3=2048
MB:0=8;1=2048;2=8;3=2048
Yes, that is correct. It will cap the MB setting to 8. Note that we are setting aside the unit differences to keep it simple.
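For reference, a schemata write like the one above can be thought of as a per-domain update. This is a hypothetical sketch of parsing such a line; resctrl's real parser is more involved:

```python
# Hypothetical sketch of parsing a schemata line such as "GMB:0=8;2=8"
# into per-domain values. Illustrative only; not resctrl's actual parser.

def parse_schemata_line(line):
    """Split 'RES:dom=val;dom=val' into (resource, {domain: value})."""
    resource, _, rest = line.partition(":")
    limits = {}
    for entry in rest.split(";"):
        domain, _, value = entry.partition("=")
        limits[int(domain)] = int(value)
    return resource, limits

# Only domains 0 and 2 are updated; the others keep their current values.
print(parse_schemata_line("GMB:0=8;2=8"))   # ('GMB', {0: 8, 2: 8})
```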
... and then when user space resets GMB the MB can reset like ...
# echo "GMB:0=2048;2=2048" > schemata
# cat schemata
GMB:0=2048;1=2048;2=2048;3=2048
MB:0=2048;1=2048;2=2048;3=2048
If I understand correctly, this will only apply if the MB limit was never set, so
another scenario may be to keep a previous MB setting after a GMB change:
# cat schemata
GMB:0=2048;1=2048;2=2048;3=2048
MB:0=8;1=2048;2=8;3=2048
# echo "GMB:0=8;2=8" > schemata
# cat schemata
GMB:0=8;1=2048;2=8;3=2048
MB:0=8;1=2048;2=8;3=2048
# echo "GMB:0=2048;2=2048" > schemata
# cat schemata
GMB:0=2048;1=2048;2=2048;3=2048
MB:0=8;1=2048;2=8;3=2048
What would be the most intuitive way for users to interact with the interfaces?
I see that you are trying to show the effective behavior above.
Please keep in mind that MB and GMB units differ. I recommend showing only the values the user has explicitly configured, rather than the effective settings, as displaying both may cause confusion.
We also need to track the previous settings so we can revert to the earlier value when needed. The best approach is to document this behavior clearly.
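The recommendation above (show only the user-configured values and derive the effective limit) could look roughly like this. A hypothetical sketch with invented names; it ignores the MB/GMB unit difference and uses 2048 as the "unlimited" GMB value from the examples above:

```python
# Hypothetical sketch of the suggested interface behavior: keep (and show)
# only the user-configured MB values and derive the effective limit, so
# lifting the GMB ceiling naturally reverts to the earlier MB setting.
# Illustrative only; units are ignored for simplicity.

class DomainLimits:
    GMB_DEFAULT = 2048   # "unlimited" GMB value used in the examples above

    def __init__(self, num_domains, mb_default):
        self.gmb = [self.GMB_DEFAULT] * num_domains
        self.mb = [mb_default] * num_domains   # what the user configured

    def effective_mb(self, domain):
        """The MB limit is capped by the GMB ceiling."""
        return min(self.mb[domain], self.gmb[domain])

d = DomainLimits(num_domains=4, mb_default=2048)
d.gmb[2] = 8                            # echo a GMB limit for domain 2
print(d.effective_mb(2))                # 8: MB is capped by GMB
d.gmb[2] = DomainLimits.GMB_DEFAULT     # reset GMB for domain 2
print(d.effective_mb(2))                # 2048: reverts to configured MB
```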
From the description it sounds as though there is a new "memory bandwidth
ceiling/limit" that seems to imply that MBA allocations are limited by
GMBA allocations while the proposed user interface presents them as independent.
If there is indeed some dependency here ... while MBA and GMBA CLOSID are
enumerated separately, under which scenario will GMBA and MBA support different
CLOSID?

No. There is no such scenario. When enumerating the features, the number of CLOSID
supported by each is enumerated separately. That means GMBA and MBA may support a
different number of CLOSID.

I can see the following scenarios where MBA and GMBA can operate independently:

1. If the GMBA limit is set to 'unlimited', then MBA functions as an independent CLOS.
2. If the MBA limit is set to 'unlimited', then GMBA functions as an independent CLOS.

I hope this clarifies your question.

My question is: "under which scenario will GMBA and MBA support different CLOSID?"
Because of a possible difference in number of CLOSIDs it seems the feature supports possible
scenarios where some resource groups can support global AND per-domain limits while other
resource groups can just support global or just support per-domain limits. Is this correct?

The system can support up to 16 CLOSIDs. All of them support all the features: LLC, MB,
GMB, SMBA. Yes, we have separate enumeration for each feature. Are you suggesting to
change it?

As I mentioned in [1] from user space perspective "memory bandwidth"
can be seen as a single "resource" that can be allocated differently based on
the various schemata associated with that resource. This currently has a
dependency on the various schemata supporting the same number of CLOSID which
may be something that we can reconsider?

The new approach is not final so please provide feedback to help improve it so
that the features you are enabling can be supported well.

After reviewing the new proposal again, I am still unsure how all the pieces will fit
together. MBA and GMBA share the same scope and have inter-dependencies. Without the
full implementation details, it is difficult for me to provide meaningful feedback on
the new approach.

Yes, I am trying. I noticed that the proposal appears to affect how the schemata
information is displayed (in the info directory). It seems to introduce additional
resource information. I don't see any harm in displaying it if it benefits certain
architectures.
Thanks
Babu
Reinette
[1] https://lore.kernel.org/lkml/d58f70592a4ce89e744e7378e49d5a36be3fd05e.1769029977.git.babu.moger@xxxxxxx/
[2] https://lore.kernel.org/lkml/e0c79c53-489d-47bf-89b9-f1bb709316c6@xxxxxxx/