Re: [PATCH 3/3] gpu: nova-core: fix wrong use of barriers in GSP code

From: Joel Fernandes

Date: Thu Apr 02 2026 - 17:59:45 EST


Btw, the nouveau@ list has issues at the moment, so I suggest CCing the NVIDIA
nova folks directly. Adding some ++ and dropping nouveau@. In case my reply
didn't make it, it is below.

On 4/2/2026 5:56 PM, Joel Fernandes wrote:
> Hi Gary,
>
> On 4/2/2026 11:24 AM, Gary Guo wrote:
>> From: Gary Guo <gary@xxxxxxxxxxx>
>>
>> Currently, the GSP->CPU messaging path misses a read barrier before the
>> data read. The barrier after the read is changed to a DMA barrier (with
>> the desired release ordering), replacing the existing (Rust) SeqCst SMP
>> barrier; the barrier is also moved to the beginning of the function,
>> because it is needed to synchronize the data with the ring-buffer
>> pointer. The RMW operation itself needs neither a barrier nor atomicity,
>> as the CPU pointers are updated by the CPU only.
>>
>> In the CPU->GSP messaging path, the current code misses a write barrier
>> after the data write and before the update of the CPU write pointer. No
>> barrier is needed before the data write thanks to a control dependency;
>> this fact is now documented explicitly. The control dependency could be
>> replaced with an acquire barrier if needed.
>>
>> Signed-off-by: Gary Guo <gary@xxxxxxxxxxx>
>> ---
>> drivers/gpu/nova-core/gsp/cmdq.rs | 19 +++++++++++++++++++
>> drivers/gpu/nova-core/gsp/fw.rs | 12 ------------
>> 2 files changed, 19 insertions(+), 12 deletions(-)
>>
>> diff --git a/drivers/gpu/nova-core/gsp/cmdq.rs b/drivers/gpu/nova-core/gsp/cmdq.rs
>> index 2224896ccc89..7e4315b13984 100644
>> --- a/drivers/gpu/nova-core/gsp/cmdq.rs
>> +++ b/drivers/gpu/nova-core/gsp/cmdq.rs
>> @@ -19,6 +19,12 @@
>> prelude::*,
>> sync::{
>> aref::ARef,
>> + barrier::{
>> + dma_mb,
>> + Read,
>> + Release,
>> + Write, //
>> + },
>> Mutex, //
>> },
>> time::Delta,
>> @@ -258,6 +264,9 @@ fn new(dev: &device::Device<device::Bound>) -> Result<Self> {
>> let tx = self.cpu_write_ptr() as usize;
>> let rx = self.gsp_read_ptr() as usize;
>>
>> + // ORDERING: control dependency provides necessary LOAD->STORE ordering.
>> + // `dma_mb(Acquire)` may be used here if we don't want to rely on control dependency.
>
> Just checking: does a control dependency on the CPU side really apply to the
> ordering for IO (i.e. what the device perceives)? IOW, the loads and stores
> might be ordered on the CPU side, but the device might see these operations
> out of order. If that is the case, perhaps the control dependency comment is
> misleading.
>
>
>> +
>> // SAFETY:
>> // - We will only access the driver-owned part of the shared memory.
>> // - Per the safety statement of the function, no concurrent access will be performed.
>> @@ -311,6 +320,9 @@ fn driver_write_area_size(&self) -> usize {
>> let tx = self.gsp_write_ptr() as usize;
>> let rx = self.cpu_read_ptr() as usize;
>>
>> + // ORDERING: Ensure data load is ordered after load of GSP write pointer.
>> + dma_mb(Read);
>> +
>
> I suggest taking it on a case-by-case basis and splitting the patch for each
> case, for easier review. There are many patterns AFAICS: load-store,
> store-store, etc.
>
> I do acknowledge the issue you found here, though. Thanks,
>
> --
> Joel Fernandes
>
>
>
>> // SAFETY:
>> // - We will only access the driver-owned part of the shared memory.
>> // - Per the safety statement of the function, no concurrent access will be performed.
>> @@ -408,6 +420,10 @@ fn cpu_read_ptr(&self) -> u32 {
>>
>> // Informs the GSP that it can send `elem_count` new pages into the message queue.
>> fn advance_cpu_read_ptr(&mut self, elem_count: u32) {
>> + // ORDERING: Ensure all reads of the message data are ordered before
>> + // the read pointer update becomes visible to the GSP.
>> + dma_mb(Release);
>> +
>> super::fw::gsp_mem::advance_cpu_read_ptr(&self.0, elem_count)
>> }
>>
>> @@ -422,6 +438,9 @@ fn cpu_write_ptr(&self) -> u32 {
>>
>> // Informs the GSP that it can process `elem_count` new pages from the command queue.
>> fn advance_cpu_write_ptr(&mut self, elem_count: u32) {
>> + // ORDERING: Ensure all command data is visible before updating the ring buffer pointer.
>> + dma_mb(Write);
>> +
>> super::fw::gsp_mem::advance_cpu_write_ptr(&self.0, elem_count)
>> }
>> }
>> diff --git a/drivers/gpu/nova-core/gsp/fw.rs b/drivers/gpu/nova-core/gsp/fw.rs
>> index 0c8a74f0e8ac..62c2cf1b030c 100644
>> --- a/drivers/gpu/nova-core/gsp/fw.rs
>> +++ b/drivers/gpu/nova-core/gsp/fw.rs
>> @@ -42,11 +42,6 @@
>>
>> // TODO: Replace with `IoView` projections once available.
>> pub(super) mod gsp_mem {
>> - use core::sync::atomic::{
>> - fence,
>> - Ordering, //
>> - };
>> -
>> use kernel::{
>> dma::Coherent,
>> dma_read,
>> @@ -72,10 +67,6 @@ pub(in crate::gsp) fn cpu_read_ptr(qs: &Coherent<GspMem>) -> u32 {
>>
>> pub(in crate::gsp) fn advance_cpu_read_ptr(qs: &Coherent<GspMem>, count: u32) {
>> let rptr = cpu_read_ptr(qs).wrapping_add(count) % MSGQ_NUM_PAGES;
>> -
>> - // Ensure read pointer is properly ordered.
>> - fence(Ordering::SeqCst);
>> -
>> dma_write!(qs, .cpuq.rx.0.readPtr, rptr);
>> }
>>
>> @@ -87,9 +78,6 @@ pub(in crate::gsp) fn advance_cpu_write_ptr(qs: &Coherent<GspMem>, count: u32) {
>> let wptr = cpu_write_ptr(qs).wrapping_add(count) % MSGQ_NUM_PAGES;
>>
>> dma_write!(qs, .cpuq.tx.0.writePtr, wptr);
>> -
>> - // Ensure all command data is visible before triggering the GSP read.
>> - fence(Ordering::SeqCst);
>> }
>> }
>>
>

--
Joel Fernandes