Re: [RFC PATCH 1/9] ntsync: Introduce the ntsync driver and character device.
From: Elizabeth Figura
Date: Wed Jan 24 2024 - 22:42:28 EST
On Wednesday, 24 January 2024 16:56:23 CST Elizabeth Figura wrote:
> On Wednesday, 24 January 2024 15:26:15 CST Andy Lutomirski wrote:
>
> > On Tue, Jan 23, 2024 at 4:59 PM Elizabeth Figura
> > <zfigura@xxxxxxxxxxxxxxx> wrote:
> >
> > >
> > >
> > > ntsync uses a misc device as the simplest and least intrusive uAPI
> > > interface.
> >
> > >
> > >
> > > Each file description on the device represents an isolated NT instance,
> > > intended to correspond to a single NT virtual machine.
> >
> >
> > If I understand this text right, and if I understood the code right,
> > you're saying that each open instance of the device represents an
> > entire universe of NT synchronization objects, and no security or
> > isolation is possible between those objects. For single-process use,
> > this seems fine. But fork() will be a bit odd (although NT doesn't
> > really believe in fork, so maybe this is fine).
> >
> > Except that NT has *named* semaphores and such. And I'm pretty sure
> > I've written GUI programs that use named synchronization objects (IIRC
> > they were events, and this was a *very* common pattern, regularly
> > discussed in MSDN, usenet, etc) to detect whether another instance of
> > the program is running. And this all works on real Windows because
> > sessions have sufficiently separated namespaces, and the security all
> > works out about as any other security on Windows, etc. But
> > implementing *that* on top of this
> > file-description-plus-integer-equals-object will be fundamentally
> > quite subject to one buggy program completely clobbering someone
> > else's state.
> >
> > Would it make sense and scale appropriately for an NT synchronization
> > *object* to be a Linux open file description? Then SCM_RIGHTS could
> > pass them around, an RPC server could manage *named* objects, and
> > they'd generally work just like other "Object Manager" objects like,
> > say, files.
>
>
> It's a sensible concern. I think when I discussed this with Alexandre
> Julliard (the Wine maintainer, CC'd), the conclusion was that this wasn't
> something we were concerned about.
>
> While the current model *does* allow processes to arbitrarily mess
> with each other, accidentally or not, I think we're less concerned
> about that than we would be about implementing a whole scheduler in
> user space.
>
> For one, you can't corrupt the wineserver state this way. (The
> wineserver is a dedicated process that handles many of the things a
> kernel normally would; it sometimes needs to set or reset events, or
> perform NTSYNC_IOC_KILL_MUTEX, but it never relies on ntsync object
> state.) By contrast, implementing a scheduler in user space would
> involve the wineserver taking locks, and hence other processes could
> deadlock it.
>
> For two, it's probably a lot harder to mess with that internal state
> accidentally.
>
> [There is also a potential problem where some broken applications
> create a million (literally) sync objects. Making these into files runs
> into NOFILE. We did specifically push distributions and systemd to
> increase those limits because an older solution *did* use eventfds and
> *did* run into those limits. Since that push was successful I don't
> know if this is *actually* a concern anymore, but avoiding files is
> probably not a bad thing either.]
Of course, looking at it from a kernel maintainer's perspective, it wouldn't
be insane to do this anyway. If we at some point do start to care about cross-
process isolation in this way, or if another NT emulator wants to use this
interface and does care about cross-process isolation, it'll be necessary. At
least it'd make sense to make them separate files even if we don't implement
granular permission handling just yet.
The main question is whether NOFILE is a realistic concern, and what other
problems might arise from making these heavier objects. Besides memory usage
I can't think of any, but of course I don't have much knowledge of this area.
Alternatively, maybe there's another more lightweight way to store per-process
data?