Re: [PATCH v9 00/19] DCD: Add support for Dynamic Capacity Devices (DCD)
From: Fan Ni
Date: Mon Apr 14 2025 - 22:51:52 EST
On Mon, Apr 14, 2025 at 09:37:02PM -0500, Ira Weiny wrote:
> Fan Ni wrote:
> > On Sun, Apr 13, 2025 at 05:52:08PM -0500, Ira Weiny wrote:
> > > A git tree of this series can be found here:
> > >
> > > https://github.com/weiny2/linux-kernel/tree/dcd-v6-2025-04-13
> > >
> > > This is now based on 6.15-rc2.
> > >
> > > Because solid requirements from prospective DCD users have stagnated, I
> > > do not plan to rev this work in Q2 of 2025, and possibly beyond.
> > >
> > > It is anticipated that this will support at least the initial
> > > implementation of DCD devices, if and when they appear in the ecosystem.
> > > The patch set should be reviewed with the limited set of functionality in
> > > mind. Additional functionality can be added as devices support them.
> > >
> > > Individuals or companies wishing to bring DCD devices to market are
> > > strongly encouraged to review this set with their customer use cases
> > > in mind.
> >
> > Hi Ira,
> > thanks for sending it out.
> >
> > I have not had a chance to review the code or test it extensively yet.
> >
> > I tried to test one specific case and hit an issue.
> >
> > I tried to add a DC extent to the device's extent list at VM launch
> > by hacking QEMU as below:
> >
> > diff --git a/hw/mem/cxl_type3.c b/hw/mem/cxl_type3.c
> > index 87fa308495..4049fc8dd9 100644
> > --- a/hw/mem/cxl_type3.c
> > +++ b/hw/mem/cxl_type3.c
> > @@ -826,6 +826,11 @@ static bool cxl_create_dc_regions(CXLType3Dev *ct3d, Error **errp)
> > QTAILQ_INIT(&ct3d->dc.extents);
> > QTAILQ_INIT(&ct3d->dc.extents_pending);
> >
> > + cxl_insert_extent_to_extent_list(&ct3d->dc.extents, 0,
> > + CXL_CAPACITY_MULTIPLIER, NULL, 0);
> > + ct3d->dc.total_extent_count = 1;
> > + ct3_set_region_block_backed(ct3d, 0, CXL_CAPACITY_MULTIPLIER);
> > +
> > return true;
> > }
> >
> >
> > Then, after the VM was launched, I tried to create a DC region with the
> > command: cxl create-region -m mem0 -d decoder0.0 -s 1G -t
> > dynamic_ram_a.
> >
> > That works fine. As you can see below, the region is created and the
> > extent shows up correctly.
> >
> > root@debian:~# cxl list -r region0 -N
> > [
> > {
> > "region":"region0",
> > "resource":79725330432,
> > "size":1073741824,
> > "interleave_ways":1,
> > "interleave_granularity":256,
> > "decode_state":"commit",
> > "extents":[
> > {
> > "offset":0,
> > "length":268435456,
> > "uuid":"00000000-0000-0000-0000-000000000000"
> > }
> > ]
> > }
> > ]
> >
> >
> > However, after that, when I tried to create a dax device as below, it
> > failed:
> >
> > root@debian:~# daxctl create-device -r region0 -v
> > libdaxctl: __dax_regions_init: no dax regions found via: /sys/class/dax
> > error creating devices: No such device or address
> > created 0 devices
> > root@debian:~#
> >
> > root@debian:~# ls /sys/class/dax
> > ls: cannot access '/sys/class/dax': No such file or directory
>
> Have you updated daxctl along with cxl-cli?
>
> I was confused by this lack of /sys/class/dax and checked with Vishal. He
> says this is legacy.
>
> I have /sys/bus/dax and that works fine for me with the latest daxctl
> built from the ndctl code I sent out:
>
> https://github.com/weiny2/ndctl/tree/dcd-region3-2025-04-13
>
> Could you build and use the executables from that version?
>
> Ira
That is already my setup. See below:
root@debian:~# cxl list -r region0 -N
[
{
"region":"region0",
"resource":79725330432,
"size":2147483648,
"interleave_ways":1,
"interleave_granularity":256,
"decode_state":"commit",
"extents":[
{
"offset":0,
"length":268435456,
"uuid":"00000000-0000-0000-0000-000000000000"
}
]
}
]
root@debian:~# cd ndctl/
root@debian:~/ndctl# git branch
* dcd-region3-2025-04-13
root@debian:~/ndctl# ./build/daxctl/daxctl create-device -r region0 -v
libdaxctl: __dax_regions_init: no dax regions found via: /sys/class/dax
error creating devices: No such device or address
created 0 devices
root@debian:~/ndctl# cat .git/config
[core]
repositoryformatversion = 0
filemode = true
bare = false
logallrefupdates = true
[remote "origin"]
url = https://github.com/weiny2/ndctl.git
fetch = +refs/heads/dcd-region3-2025-04-13:refs/remotes/origin/dcd-region3-2025-04-13
[branch "dcd-region3-2025-04-13"]
remote = origin
merge = refs/heads/dcd-region3-2025-04-13
Fan
>
> >
> > The dmesg output shows that really_probe() returns early because
> > resources are present before probe, as below:
> >
> > [ 1745.505068] cxl_core:devm_cxl_add_dax_region:3251: cxl_region region0: region0: register dax_region0
> > [ 1745.506063] cxl_pci:__cxl_pci_mbox_send_cmd:263: cxl_pci 0000:0d:00.0: Sending command: 0x4801
> > [ 1745.506953] cxl_pci:cxl_pci_mbox_wait_for_doorbell:74: cxl_pci 0000:0d:00.0: Doorbell wait took 0ms
> > [ 1745.507911] cxl_core:__cxl_process_extent_list:1802: cxl_pci 0000:0d:00.0: Got extent list 0-0 of 1 generation Num:0
> > [ 1745.508958] cxl_core:__cxl_process_extent_list:1815: cxl_pci 0000:0d:00.0: Processing extent 0/1
> > [ 1745.509843] cxl_core:cxl_validate_extent:975: cxl_pci 0000:0d:00.0: DC extent DPA [range 0x0000000000000000-0x000000000fffffff] (DCR:[range 0x0000000000000000-0x000000007fffffff])(00000000-0000-0000-0000-000000000000)
> > [ 1745.511748] cxl_core:__cxl_dpa_to_region:2869: cxl decoder2.0: dpa:0x0 mapped in region:region0
> > [ 1745.512626] cxl_core:cxl_add_extent:460: cxl decoder2.0: Checking ED ([mem 0x00000000-0x3fffffff flags 0x80000200]) for extent [range 0x0000000000000000-0x000000000fffffff]
> > [ 1745.514143] cxl_core:cxl_add_extent:492: cxl decoder2.0: Add extent [range 0x0000000000000000-0x000000000fffffff] (00000000-0000-0000-0000-000000000000)
> > [ 1745.515485] cxl_core:online_region_extent:176: extent0.0: region extent HPA [range 0x0000000000000000-0x000000000fffffff]
> > [ 1745.516576] cxl_core:cxlr_notify_extent:285: cxl dax_region0: Trying notify: type 0 HPA [range 0x0000000000000000-0x000000000fffffff]
> > [ 1745.517768] cxl_core:cxl_bus_probe:2087: cxl_region region0: probe: 0
> > [ 1745.524984] cxl dax_region0: Resources present before probing
> >
> >
> > BTW, I hit the same issue with the previous version as well.
> >
> > Fan
>
> [snip]