Re: [PATCH v10 09/28] gpu: nova-core: Hopper/Blackwell: new location for PCI config mirror
From: John Hubbard
Date: Fri Apr 17 2026 - 21:47:07 EST
On 4/17/26 7:23 AM, Alexandre Courbot wrote:
> On Sat Apr 11, 2026 at 11:49 AM JST, John Hubbard wrote:
...
>> @@ -24,7 +30,10 @@ pub(crate) struct GspSetSystemInfo {
>> impl GspSetSystemInfo {
>> /// Returns an in-place initializer for the `GspSetSystemInfo` command.
>> #[allow(non_snake_case)]
>> - pub(crate) fn init<'a>(dev: &'a pci::Device<device::Bound>) -> impl Init<Self, Error> + 'a {
>> + pub(crate) fn init<'a>(
>> + dev: &'a pci::Device<device::Bound>,
>> + chipset: Chipset,
>> + ) -> impl Init<Self, Error> + 'a {
>> type InnerGspSystemInfo = bindings::GspSystemInfo;
>> let init_inner = try_init!(InnerGspSystemInfo {
>> gpuPhysAddr: dev.resource_start(0)?,
>> @@ -35,7 +44,14 @@ pub(crate) fn init<'a>(dev: &'a pci::Device<device::Bound>) -> impl Init<Self, E
>> // Using TASK_SIZE in r535_gsp_rpc_set_system_info() seems wrong because
>> // TASK_SIZE is per-task. That's probably a design issue in GSP-RM though.
>> maxUserVa: (1 << 47) - 4096,
>> - pciConfigMirrorBase: 0x088000,
>> + // Hopper, Blackwell, and later moved the PCI config mirror window to 0x092000.
>> + // Older architectures continue to use the legacy window at 0x088000.
>> + pciConfigMirrorBase: match chipset.arch() {
>> + Architecture::Turing | Architecture::Ampere | Architecture::Ada => 0x088000,
>> + Architecture::Hopper
>> + | Architecture::BlackwellGB10x
>> + | Architecture::BlackwellGB20x => 0x092000,
>> + },
>
> Mmm, similarly to the previous patch, I would prefer to have this behind
> a HAL, but I am not quite sure which one would fit. Any idea?
This really bothered me, because I distinctly recall very recently putting
this behind a HAL! And now I see that I did that in one of the "much later
this year: future firmware directions" branches. Ha.
I went with gpu/hal.rs for this. Yes, it could get large, but let's see
how it goes. This is the first item in there. I'll apply it to this series.
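For illustration, a minimal standalone sketch of what such a HAL could look
like, with the inline `match` on chipset architecture moved behind a trait.
The names here (`Hal`, `pci_config_mirror_base`, `hal_for`, the two HAL
structs) are hypothetical and not the actual nova-core API; the constants
and architecture split are taken from the quoted diff above:

```rust
// Hypothetical sketch: pciConfigMirrorBase behind a per-architecture HAL
// (gpu/hal.rs), rather than an inline match in GspSetSystemInfo::init().

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum Architecture {
    Turing,
    Ampere,
    Ada,
    Hopper,
    BlackwellGB10x,
    BlackwellGB20x,
}

/// Per-architecture hardware abstraction layer (name is illustrative).
trait Hal {
    /// Base offset of the PCI config space mirror window in BAR0.
    fn pci_config_mirror_base(&self) -> u64;
}

struct PreHopperHal;
struct HopperPlusHal;

impl Hal for PreHopperHal {
    fn pci_config_mirror_base(&self) -> u64 {
        0x088000 // legacy window: Turing, Ampere, Ada
    }
}

impl Hal for HopperPlusHal {
    fn pci_config_mirror_base(&self) -> u64 {
        0x092000 // new window: Hopper and Blackwell
    }
}

/// Select the HAL implementation for a given architecture.
fn hal_for(arch: Architecture) -> &'static dyn Hal {
    match arch {
        Architecture::Turing | Architecture::Ampere | Architecture::Ada => &PreHopperHal,
        Architecture::Hopper
        | Architecture::BlackwellGB10x
        | Architecture::BlackwellGB20x => &HopperPlusHal,
    }
}

fn main() {
    // The caller (e.g. GspSetSystemInfo::init()) would then just ask the HAL,
    // keeping the architecture switch out of the command-building code.
    assert_eq!(hal_for(Architecture::Ada).pci_config_mirror_base(), 0x088000);
    assert_eq!(hal_for(Architecture::Hopper).pci_config_mirror_base(), 0x092000);
}
```

With this shape, future per-architecture quirks become additional trait
methods rather than more inline matches scattered through the GSP code.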
thanks,
--
John Hubbard