I hope you'll not develop an emotional block to devfs based on your
reaction to the "cheerleaders". If you didn't want a big debate on the
list, you could have always emailed me privately and discussed it.
When I started talking about devfs late last year, I got a fairly
minimal response from the kernel developers. HPA just pointed at the
news archives and said the issue was already discussed and rejected. I
read those archives and saw that there were more issues than had been
discussed. Of the dissenters, no-one was willing to actually sit down
and discuss all the issues with me. I find it quite easy to have a
discussion and stick to the technical arguments and avoid personal
attacks.
I'm sure it's much easier if you go to Linux Expo and can talk
face-to-face and discuss and explain your ideas, give counter
arguments and so on. Not everyone has that luxury.
> 1) devfs doesn't have to be mounted onto /dev if you don't like. The
> essential thing is that devfs provides a unique, logical
> namespace. You can always mount devfs elsewhere and make symlinks
> to it if you don't like the names
>
> We already have a unique, logical namespace; it's called minor and major
> device numbers. I know you (and others) don't like them, but many of
> the arguments against them are strawman arguments --- such as assuming
> that you will create all possible device files in /dev, whether or not
> the devices exist, and then complaining about the speed problem. Or by
> dismissing the reality that the dcache really does make the speed lookup
> problem pretty much irrelevant. (Yet in the last 18 hours, I can't
> count how many times I've just hit 'd' to messages which made the same
> flawed arguments over and over again.)
I do think that major & minor numbers are conceptually a dirty hack. To
me they look like a quick and dirty way of supporting devices in an
era when there was only one FS. The solution to device files might
have been very different if Unix had had a VFS right from the start.
Who knows.
Certainly the dcache does speed up future accesses
enormously. It's also true that having millions of inodes in /dev is
simply unworkable. At the very least a solution like scsidev is
required.
However, scsidev only solves the issue of SCSI devices. USB devices
are another case looking for a solution. I think the USB case is even
harder, because there you have to deal with hot-plugging. While
hot-plugging is also possible with SCSI, it's less common there, so
perhaps it's more likely to be ignored for SCSI devices. As I
understand it, hot-plugging is intrinsic to the design of USB, so you
can't ignore it.
> 2) which hacks are these? You mean using tar to save and restore the
> permissions? Would you prefer a C programme (something I'm
> contemplating doing)
>
> Precisely. In Unix we have a very well developed abstraction for saving
> this kind of state: permissions, user/group ownership, modtimes, etc.
> It's called a filesystem. Tar is an unmitigated hack; using a C program
> helps hide the fact that what you're doing is a hack, but it's still a
> hack.
OK, this is separate from the devfs concept. I've already stated
several times that I could add persistence to devfs. I could either
write things to a block device or peek through to the mounted-over
inodes.
Would such a change make you feel better?
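(For what it's worth, the userspace side -- the C programme I mentioned
above in place of tar -- needn't be more than something like the rough,
non-recursive sketch below. None of this code exists yet; the names and
the one-line-per-entry file format are made up on the spot purely for
illustration.)

/* Rough sketch only: trivial save/restore of /dev ownerships and modes.
 * Non-recursive; the file format is just illustrative.
 */
#include <stdio.h>
#include <string.h>
#include <dirent.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>

static void save_perms(const char *dir, FILE *out)
{
    DIR *d = opendir(dir);
    struct dirent *de;
    char path[1024];
    struct stat st;

    if (!d)
        return;
    while ((de = readdir(d)) != NULL) {
        if (strcmp(de->d_name, ".") == 0 || strcmp(de->d_name, "..") == 0)
            continue;
        snprintf(path, sizeof(path), "%s/%s", dir, de->d_name);
        if (lstat(path, &st) < 0)
            continue;
        /* one line per entry: path mode uid gid */
        fprintf(out, "%s %o %d %d\n", path,
                (unsigned int) (st.st_mode & 07777),
                (int) st.st_uid, (int) st.st_gid);
    }
    closedir(d);
}

static void restore_perms(FILE *in)
{
    char path[1024];
    unsigned int mode;
    int uid, gid;

    while (fscanf(in, "%1023s %o %d %d", path, &mode, &uid, &gid) == 4) {
        chmod(path, mode);
        chown(path, (uid_t) uid, (gid_t) gid);
    }
}

int main(int argc, char **argv)
{
    if (argc == 2 && strcmp(argv[1], "save") == 0)
        save_perms("/dev", stdout);     /* e.g. devperms save > somefile */
    else if (argc == 2 && strcmp(argv[1], "restore") == 0)
        restore_perms(stdin);           /* e.g. devperms restore < somefile */
    else
        fprintf(stderr, "usage: %s save|restore\n", argv[0]);
    return 0;
}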
> What about the problem of when we move to 16 bit majors and the major
> table is dropped and we go to searching a list when we open a device
> node? How do you suggest we solve that?
>
> As far as searching a list when we open a major number, again this is an
> extremely flawed and weak argument. First of all, the vast majority of
> systems out there will only have less than 16 major devices. A typical
> system has less than 10 major devices. (cat /proc/devices and see!) So
> searching the list is simply not a problem. If searching the list were
> an issue, there are plenty of ways of solving this problem internal to
> the kernel, without needing to make any user-visible changes --- such
> as using a hash table.
>
> We use hash tables for searching the inode cache --- you're not going to
> tell me that inode caches are bad just because a stupid implementation
> would have to sequentially search the entire list, are you?!? :-) This
> is what I call a strawman argument, and many of the devfs cheerleaders
> have been using such strawmen to argue their case.
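For what it's worth, yes, a hashed major table along these lines would
make the search itself cheap enough (this is just a sketch to show the
idea, not the kernel's actual data structures):

/* Sketch of a hashed major -> driver lookup, only to show that the
 * lookup cost is a non-issue.  Not the kernel's real structures.
 */
#include <stddef.h>

#define MAJOR_HASH_SIZE 64          /* power of two, purely illustrative */

struct driver_entry {
    unsigned int major;
    const void *fops;               /* stand-in for the driver's file operations */
    struct driver_entry *next;      /* chain for majors sharing a bucket */
};

static struct driver_entry *major_hash[MAJOR_HASH_SIZE];

static unsigned int major_hashfn(unsigned int major)
{
    return major & (MAJOR_HASH_SIZE - 1);
}

/* Expected O(1): walk one short chain instead of a global list */
const void *lookup_driver(unsigned int major)
{
    struct driver_entry *e;

    for (e = major_hash[major_hashfn(major)]; e != NULL; e = e->next)
        if (e->major == major)
            return e->fops;
    return NULL;
}

void register_driver(struct driver_entry *e)
{
    unsigned int h = major_hashfn(e->major);

    e->next = major_hash[h];
    major_hash[h] = e;
}

The lookup speed was never my main objection anyway.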
I think that the extra layer between device nodes and device drivers
is an ugly hack. I see the extra level of indirection as unnecessary,
and as adding some (small, but avoidable) performance overhead. I also
see it as a conceptual and administrative overhead: we now have device
information kept in two places, in the source of each driver and in
devices.txt, and it has to be synchronised manually. Devfs avoids that
entirely by keeping it in one place: in the driver.
These are not "killer arguments" for it, they're just some (small)
reasons in a long list. As I've said in the FAQ, IMHO the totality of
these reasons does show that devfs is a good idea.
Yes, devfs is conceptually quite different from the existing scheme,
but it doesn't follow that it is ugly. I think that there is indeed an
elegance to it.
Regards,
Richard....