On Apr 6, 2005, at 12:11 AM, Kyle Moffett wrote:

Please don't remove Linux-Kernel from the CC; I think this is an
important discussion.
As I see it, there are a number of issues:
- Use of double underscores invades the compiler's namespace (except in those cases
where kernel definitions end up as the basis for definitions in /usr/include/*, i.e.
those that actually are part of the C implementation for Linux).
- Some type that does not conflict with the compiler's namespace, to replace the variety
of definitions for e.g. 32-bit unsigned integers we have now.
- Removal of anything prefixed with a double underscore from non-C-implementation
files.
Personally, I don't care what you feel like requiring for purely
in-kernel interfaces, but __{s,u}{8,16,32,64} must stay to avoid
namespace collisions with glibc in the kernel include files as used
by userspace.
Aye, but as I have pointed out several times, these types should be restricted
to those files, and *only* those files, which eventually end up in the compiler's
includes. In every other place, they invite exactly the trouble they are intended
to avoid.
So in every place except those files, they may actually cause a namespace conflict, or
a bug because some newer version does not support __foobar or has changed its
semantics. Since using any __foobar type implies relying on compiler internals,
which may change without prior notice, it is ipso facto undesirable.
This is kinda arguing semantics, but:
A particular set of software (linux+libc+gcc), running in a particular
translation environment (userspace) under particular control options
(Signals, nice values, etc), that performs translation of programs for
(emulating missing instructions), and supports execution of functions
(syscalls) in, a particular execution environment (also userspace).
Ok. And where exactly are linux and libc when compiling code for an
Atmel ATmega32 (40 pin DIL) using gcc?
The 'set of software' does
*not* include any OS. Not Windows, not Linux, not MacOSX, since the
whole thing might be directed at a lowly microcontroller, which DOES
NOT HAVE ANY OPERATING SYSTEM WHATSOEVER.
Nevertheless, gcc works fine.
Without the kernel userspace wouldn't have anything, because anything
syscall-related (which is basically everything) involves the kernel.
Sure. The same goes for every other program. However, it would be pretty
stupid to say the kernel is an integral part of (say) the Gimp. More so, since
the Gimp and GCC run on completely different architectures as well.
By the same token, Linux would be part of XFree86, despite the fact that XFree86 does not
require Linux to run.
Heck, the kernel and its ABI is _more_ a part of the implementation
than glibc is! I can write an assembly program that doesn't link to
or use libc, but without using syscalls I can do nothing whatsoever.
I can write entire applications using gcc without even thinking of using
any 'syscall' or any other part of linux/bsd/whatever. Still... it's gcc.
<Wishful Thinking>
It would be nice if Linux became totally independent of any compiler, or at least that
the coupling between them, and the amount of assembly needed, were kept to a minimum.
It would be nice if linux defined and documented its own platform-specific types
somewhere in the arch directory, using a consistent (across platforms) naming scheme,
and used those types consistently throughout the kernel, drivers, daemons and other
associated code.
</Wishful Thinking>
<Nightmare>
Your scenario above. Never-ending streams of compatibility issues, gcc drifting
further and further from the ISO C standard, more and more developers depending
on non-standard interfaces, and linux growing ever more dependent on support for features
ABC and XYZ being implemented consistently across platforms, so that if I want to use
gcc to compile for an AVR, I'm stuck with a shitload of linux issues, kept "for backward
compatibility".
</Nightmare>
Nope. The syscall interface is employed by the library, no more,
no less. The C standard does not include *any* platform-specific
stuff.
Which is why it reserves __ for use by the implementation so it can
play wherever it wants.
The C implementation, which still does not include the kernel. At most
a few header files, which are used as a basis for standard types by the C
implementation, but no more. Any double underscore in a .c file is a blatant
error. Most uses in .h files are, too.
Fine. I assume it does. But #include <linux/fb.h> does not make the
framebuffer (nor linux, for that matter) part of the C implementation. Of
the two files mentioned above, only stdlib.h is.
I want it to get the correct types; I don't want it to clash with or require the
libc types (my old sources might redefine some stdint.h names, and I don't want it
to clash with my user-defined types).
Redefining stdint types is (for this reason) a Bad Idea.
Anything you like. 'kernel_' or simply 'k_' would be appropriate,
as long as you do not invade the compiler's namespace. It is separated
and uglified for a purpose.
But the _entire_ non-underscore namespace is reserved for anything user
programs want to do with it.
The above prefix was an alternative to using a double-underscore prefix. Using *no*
prefix should not conflict with the compiler, excepting, of course, the types required by
the standard.
When a program
compiled as ppc32 gets run on my ppc64 box, the kernel understands
that anything pushed onto the stack as arguments is 32-bit, and must
use specifically sized types to handle that properly.
And thus you end up using a 32-bit interface between a 64-bit OS and a 64-bit
application? Or two separate syscall interfaces?
Neither option seems very desirable. What about pointers, which are
32-bit on one platform and 64-bit on the other? IOW, I'm not sure "backward
compatibility" is the thing to strive for. We all know what it did to Intel processors,
and if it means having to jam data from a 64-bit app to a 64-bit OS through a 32-bit
syscall interface, it stinks.
Especially since most packages only need to be recompiled for the new situation, and
source (commonly) is available.