Re: [RFC v3 00/27] lib: Rust implementation of SPDM

From: Dan Williams

Date: Mon Mar 09 2026 - 19:11:30 EST


Jason Gunthorpe wrote:
[..]
> > Whether anyone actually implements root ports via standard DOE flows or
> > everyone does this a custom way at the host is an open question.
>
> I'm expecting Linux will be able to setup Link IDE, either through a
> platform TSM as you say, or through someone plugging in the IDE
> registers into some Linux drivers.. I certainly don't want to close
> that door by bad uAPI design.

Right now there is no extra uAPI for IDE. It is an implicit detail of
the given TSM whether the "connect" operation additionally establishes
IDE. Whether or not "connect" established selective-stream IDE with
the device is conveyed by the arrival of "stream" links in sysfs, see:

Documentation/ABI/testing/sysfs-devices-pci-host-bridge
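As a rough illustration (a hypothetical helper, not part of the series), userspace could detect whether "connect" established selective-stream IDE by looking for "stream" links under the host bridge's sysfs directory. The exact link naming and layout are assumptions here; the authoritative description is the ABI document above:

```python
import os

def find_stream_links(bridge_dir):
    """Return names of 'stream*' symlinks under a PCI host bridge sysfs
    directory.  An empty result means "connect" did not establish
    selective-stream IDE (or the TSM does not report it this way).

    ASSUMPTION: entries named 'stream*' appearing as symlinks, per the
    sketch in Documentation/ABI/testing/sysfs-devices-pci-host-bridge.
    """
    try:
        entries = os.listdir(bridge_dir)
    except FileNotFoundError:
        return []
    return sorted(e for e in entries
                  if e.startswith("stream")
                  and os.path.islink(os.path.join(bridge_dir, e)))
```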

You also asked:

> Yeah, and I don't really know the details, just have some general idea
> how attestation and PCI link encryption should work in broad strokes.
>
> But I know people who do, so if we can get a series that clearly lays
> out the proposed kernel flow I can possibly get someone to compare
> it..

tl;dr: can you point them at http://lore.kernel.org/20260303000207.1836586-1-dan.j.williams@xxxxxxxxx

A couple of notes: the host kernel is unable to establish IDE without a
platform TSM on all but Intel platforms (that I know of). At a minimum,
this is why I think native SPDM should behave as a TSM driver. Platform
TSM involvement for IDE is the predominant architecture in the
ecosystem.

As for link encryption and attestation, it is all rooted in the launch
attestation of the VM. Once you trust that the TSM that claims to be
present is valid, then you trust all of that TSM's ABIs to enforce
confidentiality and integrity.

Now, a TSM is free to decide, "I do not need PCI link encryption because
I have a priori knowledge that $device has a connection to the system
that meets confidentiality + integrity expectations". So link encryption
is present for discrete devices, but maybe not integrated devices.

Assuming VM launch attestation gets you trust in the guest TSM driver
responses, then the attestation flow to the kernel is mostly just
marshaling blobs and digests:

1/ Host collects a fresh copy of device measurements with a
guest-provided nonce (response emitted by PCI/TSM netlink, nonce
received via guest-to-host communication, see AF_VSOCK comment in 2/).

2/ Host marshals the cert chain, measurements (signed transcript with
the nonce from 1/), and interface report blob to the guest via an
untrusted channel. I am currently thinking to just use a common
transport like AF_VSOCK to get those blobs into the guest rather than
have each implementation reinvent that blob-transfer wheel.

3/ Guest needs to validate that the blobs are indeed the ones the TSM
expects. Each TSM has a private message protocol to request digests of
the blob contents for this purpose.
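The guest side of 2/ and 3/ could be sketched as below. The length-prefixed wire format, blob names, and digest algorithm are all made-up assumptions for illustration; AF_VSOCK is just the untrusted transport mentioned above, and any other pipe would work the same way:

```python
import hashlib
import struct

def marshal_blobs(blobs):
    """Frame (name, payload) pairs for an untrusted channel (e.g.
    AF_VSOCK).  Wire format is a made-up example: u32 name length,
    name bytes, u32 payload length, payload bytes."""
    out = b""
    for name, payload in blobs:
        n = name.encode()
        out += struct.pack("<I", len(n)) + n
        out += struct.pack("<I", len(payload)) + payload
    return out

def unmarshal_blobs(wire):
    """Inverse of marshal_blobs(): recover the (name, payload) list."""
    blobs, off = [], 0
    while off < len(wire):
        (nlen,) = struct.unpack_from("<I", wire, off); off += 4
        name = wire[off:off + nlen].decode(); off += nlen
        (plen,) = struct.unpack_from("<I", wire, off); off += 4
        blobs.append((name, wire[off:off + plen])); off += plen
    return blobs

def blobs_match_tsm_digests(blobs, tsm_digests):
    """Step 3/: the blobs arrived over an untrusted channel, so the
    guest asks its trusted TSM (via the TSM's private message protocol)
    for the expected digests and compares.  sha384 is an assumption;
    the digest algorithm is per-TSM."""
    for name, payload in blobs:
        if hashlib.sha384(payload).hexdigest() != tsm_digests.get(name):
            return False
    return True
```

The point of the digest check is that the transport carries only opaque, integrity-unprotected bytes; trust comes entirely from comparing against digests obtained over the trusted TSM channel.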

At no point is the guest offered explicit PCI link encryption details,
nor the host for that matter. I think some TSMs might include the
key-exchange steps in the SPDM transcript. However, that happens within
an SPDM secure session, so the host cannot otherwise observe it. SPDM
does support mutual authentication, so the device could in theory
challenge whether it is talking to a device-approved TSM.

The open question I arrived at while typing this up: if a common
transport is used to get the blobs into guest userspace, that userspace
still needs to push the "interface report" blob into the guest kernel.
The kernel needs that to determine how to map private vs. shared MMIO. I
still think I prefer that over each implementation having its own set of
implementation-specific message-passing ioctls() to do the same.