Re: kill -9 <pid of X>

Jon M. Taylor (taylorj@ecs.csus.edu)
Mon, 17 Aug 1998 18:34:36 -0700 (PDT)


On Mon, 17 Aug 1998, Gregory Maxwell wrote:

> On 17 Aug 1998, Jes Sorensen wrote:
>
> > As for space I do care - imagine having to do Linux installation
> > floppies and having to put 50 video drivers on there which includes
> > acceleration code just to get a simple console display ... its bad
> > enough with the amount of drivers as it is already.
>
> I don't think the KGIcon people want that.. And I know I don't.

An SVGA super-driver would be the best bet here.

> What I would see is a small set of BASIC drivers (like vesafb) that get
> you going. If you want/need better performance you can get an optimized
> driver and load it at runtime..

Better to put it all in one driver, if you are doing unaccelerated
dumb framebuffering. It might be a somewhat chunky driver, but it will be
the only driver you need.

> Yesterday I got KGIcon working.. It took a while because the ggi package
> didn't want to compile.. :)
>
> Here's my experience (I have a Matrox II card):
>
> I set up a stock 2.1.116-2 kernel with framebuffer support and compilation
> fixes. I installed this.
>
> I then compiled libggi and kgicon (which took me hours to get to compile)
>
> KGIcon was compiled with support for my Video, Ramdac, clockchip, and
> monitor. I made NO kernel modifications.
>
> I booted up using vesafb (though I could have just used text mode).

Kewl! I haven't tried that yet. Glad to see it works.

> Then I ran a little script that loaded kgicon.o and then ran a little
> utility that moves the console onto the kgi frame buffer.
>
> My console video was instantly TONS faster. (yes, vesafb is slow!)

If you have a PPro/PII, try using an MTRR. It hauls ass. I am
looking at a way to set up MTRRs automatically when kgicon.o initializes.
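
Until then you can do it by hand. Here is a minimal sketch of the
userspace side, using the /proc/mtrr ioctl interface from the 2.1.x mtrr
code (the base and size here are made up - substitute your card's
framebuffer address and aperture size):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <asm/mtrr.h>

int main(void)
{
        struct mtrr_sentry sentry;
        int fd = open("/proc/mtrr", O_WRONLY);

        if (fd < 0) {
                perror("/proc/mtrr");
                return 1;
        }
        sentry.base = 0xf8000000;       /* assumed fb base - card-specific */
        sentry.size = 0x400000;         /* assumed 4 MB aperture */
        sentry.type = MTRR_TYPE_WRCOMB; /* write-combining */
        if (ioctl(fd, MTRRIOC_ADD_ENTRY, &sentry) < 0)
                perror("MTRRIOC_ADD_ENTRY");
        close(fd);
        return 0;
}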

> GGI apps would run either way, but faster with the accelerated driver.
>
> My computer crashed with lots of oopsen after a little while; it doesn't
> look like the KGI code is SMP-safe. :)

Maybe not. kgicon drivers also seem to trigger that console race
more often than the stock kernel drivers do. Run lynx after you've
inserted a kgicon driver module - it'll oops before you've looked at many
pages.

> > Most cards do X fine in user space as it is now and it works pretty
> > well. Having the kernel work as an arbiter telling the library and/or
> > processes whether or not they can use acceleration will solve the
> > problem.
>
> I can hardly call 'running as root with direct hardware access' userspace.
> Perhaps we need a name for things like that.. Dangerspace? :)

That's exactly what it is. It is a no-man's land where the rules
of both userspace AND the kernel have to some degree been lifted. It is
nice to have such a mode, but it shouldn't be used for ongoing tasks.
Capabilities are one part of the solution, proper drivers the other.

> Actually, we are missing out on the features of newer X86 video cards
> because of this. My Matrox has the ability to do scatter-gather DMA and to
> use IRQs, but we can't really do this from userspace.

Imagine how much more could be done on the Amiga's custom chipsets
if there were a KGI driver for OCS/ECS/AGA! All the cycle-counting that
was needed to get the most out of the hardware can't be done reliably in
userspace. Also, all those weird elements of the hardware (copper, DMA,
playfields, sprites, blitter, etc.) could be safely wrapped in ioctls,
contextualized and made available to userspace. You could write LibGGI
driver libraries to use the exported functions. You could work them into
the console itself with GII Console classes.
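
Purely as an illustration, this is roughly what I mean by wrapping
the blitter in ioctls. Every name and number below is invented - this is
not real KGI code, just a sketch of the idea:

/* Hypothetical KGI ioctl interface to the Amiga blitter. */
#include <sys/ioctl.h>

struct kgi_blit {
        unsigned long src, dst;         /* offsets into chip RAM */
        unsigned short width, height;   /* blit size */
        unsigned char minterm;          /* blitter logic function */
};

#define KGIIOC_BLIT     _IOW('K', 0x10, struct kgi_blit)
#define KGIIOC_WAITBLIT _IO('K', 0x11)

/* Userspace queues one blit and waits for it; the driver owns the
 * blitter interrupt and keeps contexts from clobbering each other:
 *
 *      ioctl(fd, KGIIOC_BLIT, &blit);
 *      ioctl(fd, KGIIOC_WAITBLIT);
 */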

> Furthermore, X is prone to messing up the computer BAD. It's both faster
> and safer to have video support in the kernel. I'm not advocating putting
> all of X in the kernel.. Just the basics, and I feel that the basic frame
> buffer driver should be separate from an accelerated driver.

I maintain that having one SVGA super-driver and a bunch of other
hardware-specific drivers is the way to go. That is one part of XFree86
that I like.

> GGI supports talking to dumb drivers, smart drivers, or acting as its own
> smart driver by banging the hardware directly.

That last isn't really true. LibGGI always renders to a target.
In a sense, the targets are really drivers, and LibGGI is the function API
and dynamic library system. That's why the target-oriented system is so
powerful - it can use any type of display system as a driver.
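
To make that concrete, here is a minimal LibGGI sketch (the exact
call spellings in the libggi of the day may differ a little; treat this
as a sketch against the documented API, not gospel):

#include <unistd.h>
#include <ggi/ggi.h>

int main(void)
{
        ggi_visual_t vis;

        ggiInit();
        /* NULL means "default target": the GGI_DISPLAY environment
         * variable decides whether this is fbdev, an X window, or
         * something else entirely. The app doesn't care. */
        vis = ggiOpen(NULL);
        if (vis == NULL)
                return 1;
        ggiSetSimpleMode(vis, 640, 480, GGI_AUTO, GT_AUTO);
        ggiDrawBox(vis, 100, 100, 200, 150);
        ggiFlush(vis);
        sleep(3);
        ggiClose(vis);
        ggiExit();
        return 0;
}

The same binary renders through whatever target is loaded, which is
the whole point.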

> You can happily use the
> first (by just loading the unaccelerated driver, a 'safe' approach)

Right. This is another way to get rid of acceleration if you
don't want it. Modular drivers let you do that sort of thing.

> or the
> third (not safe, and on some hardware not as fast as #2).. While I load an
> accelerated module into the kernel and use method #2 (and achieve the
> balance of safety and speed I want)..

I want different balances at different times. When I want an
embedded game system, I want maximal speed and minimal safety. When I
want my secretary to run X, I want speed but stability comes first. On a
distro install, I want safety and simplicity so I get rid of acceleration
altogether. GGI/KGI let you pick your preferences.

> You obviously approve of method #3, so why not keep an open mind about #2.
> #2 doesn't need to be in the kernel itself. Those accelerated drivers
> would be maintained by outside people (like XFree86 is). All they need is
> for the proper interfaces to stay in the kernel.

Just don't compile the acceleration into the KGI driver. Keeping
a lot of kernel source around for unneeded options is a time-honored Linux
tradition |-<.
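
In code terms I just mean the usual conditional-compilation idiom.
CONFIG_KGI_ACCEL and the function here are made up for illustration,
but the pattern is the standard one:

/* Hypothetical: nothing named CONFIG_KGI_ACCEL exists today. */
#ifdef CONFIG_KGI_ACCEL
static void mga_fillrect(int x, int y, int w, int h, unsigned long color)
{
        /* program the drawing engine registers, set up DMA, etc. */
}
#define MGA_FILLRECT mga_fillrect
#else
#define MGA_FILLRECT NULL       /* fall back to dumb framebuffer drawing */
#endif

If you don't ask for the acceleration, it simply never gets built.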

> > linker> No, he's saying that people who don't wish to use accelerated
> > linker> drivers with libGGI can never have acceleration on the current
> > linker> FBCON driver. You could write your own libggi target that
> > linker> banged the hardware and got acceleration.
> >
> > Same thing, no difference.
>
> No, that's not the case. There are three main ways of doing video:
>
> 1. Dumb frame buffers in the kernel (we have this, it's slow, it's safe)

And not all hardware is supported.

> 2. Userland banging the hardware (we have this, it's fairly fast, it's unsafe)
> 2a. same but with an arbitrator.. (a little safer)
> 3. Accelerated frame buffer in the kernel (kgicon, it's safe, it's fast,
> and you don't like it :))..
>
> GGI supports all of these. So I can write an APP using GGI (like an X
> server or a game) and when I use this app it will use #3. When you use it
> it will use #2 (like Xfree) and when someone with a Neomagic uses it it
> will use vesafb and #1..
>
> So there is no reason for you to object to the existence of #3 drivers. If
> those are not available it will fall back to #2 or #1 depending on how much
> you care about safety and what's available. You can happily pretend that #3
> drivers don't exist and never go download them.
>
> > linker> It should go into the kernel, that's the only place it can be
> > linker> multiplexed, used fully (irqs,dmas, atomic operations), and
> > linker> used safely.
> >
> > And be slow.
>
> No, when done correctly it's fast. You can't use DMA in userspace right
> now, so with cards that do that it has the potential of being MUCH faster.
> Furthermore, future cards will make more use of these features: AGP adds
> a lot of stuff that can't be done from userspace.

Really. The ability to hit the hardware directly from your API
might gain you a little speed, but it isn't worth the speed you lose by
not being able to use the features of the card fully.

> > So far X has developed parallel to the kernel and people have been
> > able to run new X servers with older kernels - using something like
> > vesafb will allow this to work in the future, ie. you can continue to
> > use your known to be stable kernel to set the video modes and get the
> > new accelerated X server when it becomes available. Afaik X servers
> > often first become available in a non-accelerated version and later
> > comes the smart and improved version.
>
> That's fine, they can still use an X server that bangs the hardware.
>
> The only thing that needs to be up to date in the kernel is the basic video
> support to get the computer to boot up.

They can use the old vgacon system or skip using fbcon altogether
if they want.

> > linker> Ok fine, I'll take your bet and double it. What PROPER
> > linker> hardware can I get for X86 that can do FULL ACCELERATION
> > linker> safely from userspace? It would need to have support for
> > linker> hardware context switches and be able to get stellar
> > linker> performance without the use of DMAs or IRQs. AFAIK there is
> > linker> NONE, and if there is, it's not common..
> >
> > Someone mentioned the Matrox cards, but admittedly I don't know PC
> > graphics cards that well. Anyway what I am opposed to is the idea that
> > because some PC hardware is broken, we degrade the performance for
> > everybody by putting it in the kernel as default (I don't expect
> > anybody to seriously want two parallel lines of graphics driver
> > development).
>
> Nope, the Matrox cards have some of the normal PC brokenness.

Most cards do.

> Furthermore,
> to get full performance you need DMA support.
>
> How about we just have video drivers that can be compiled so that they are
> kernel modules, or compiled so that they work from userspace. Don't laugh,
> that's what the GGI people are up to..

Just so. That's what suidkgi is. It is a userspace wrapper
around normal KGI drivers. It is a lot like the userspace server that
Linus claimed no one was working on....

> I'm not suggesting accelerated drivers should be the default, I don't even
> want them shipped with the kernel.

Why not? A lot of other stuff you don't want is shipped with the
kernel. The only reason I can see is if the drivers are binary-only due to
NDA problems.

> > linker> But again, you've wanted us to include various strange network
> > linker> drivers in the kernel which never ought to be included in the
> > linker> kernel. :)
> >
> > Those you can just decide not to compile in (I assume you are referring
> > to the HIPPI stuff) - on the other hand if I want to build something
> > generic I need to put in a ton of graphics drivers and the
> > introduction of new PC graphics cards on the market seems to be going
> > a lot faster than anybody can keep up with.
>
> Then you can just not go download the accelerated drivers.

If I have to keep downloading all those damn SCSI drivers when all
I use is IDE, people who don't want acceleration can damn well download
the acceleration driver code. Sorry.

> > linker> Come on, this isn't a microkernel here. It's monolithic. We
> > linker> include HARDWARE drivers in the kernel..
> >
> > Just because its a hardware driver it doesn't necessarily need to go
> > in the kernel.
>
> Then let's remove all the hardware from the kernel. It's possible. :)
>
> Let's see.. Here's some du output:
> 8496 scsi
> 3324 video
> 8992 net
> 5102 char
> 2948 sound
>
> Wow, that would shrink the tree by a lot. :)
>
> You are saying video shouldn't go in the kernel because there are too many
> cards? I imagine that if you combine scsi, sound and network you have A LOT
> more new devices per year than video.

Look, the kernel source tree is too big and needs to be
modularized. That is a fact. But as long as it isn't, video drivers
should get the same rights as everything else.

> > I thought the EvStack concept was killed on linux-kernel earlier as
> > multiheading could be done a lot simpler.
>
> Multiheading can be done easier. However, multiuser can't.. I'd love to be
> able to get a nice dual 300 with 256 megs of RAM and share it with my
> girlfriend (whose computer is next to mine); this would save me the bother
> of having two separate computers for us but allow us to use it at the same
> time.. Think how this would go over in a computer lab..

This is the JINI philosophy. Computers, especially on a network,
are just a bunch of resources. How you use those resources should not be
determined by the OS. With GGI Console, you have sources and sinks of
messages, networked together. This lets you hook IO devices together in
arbitrary ways and combinations, with as much or as little logic in
between as you want.

> A single dual PII-300 with two mice/kbds/screens costs less than two
> single 266s with their own stuff. Furthermore, when user A is idle user B
> gets more processing. Furthermore, computers today have more CPU than most
> people need. When they do need CPU it's only in short little bursts, like
> when they open up apps and such.. 90% of the time the computer is IDLE..
> So putting 4 people on a single fairly fast computer is a great way to
> save money, and they would only notice when they all load different things
> at once..

The future is a bunch of plastic boxes with USB ports, all of
which can be hooked together. A console system that works in that
environment must be able to handle it all, in any combination, and be
able to handle hot-plugging of devices.

Imagine this scenario: someone hooks together a USB set-top box,
two USB joysticks, four sets of USB digital headphones, three USB
flat-panel LCD displays, three USB keyboards and one braille reader.
All of this needs to be used to create a little workgroup for four
people, one of whom is blind. In addition, there's a USB ADSL link to
another workgroup around the world.

These people want to be able to plug everything together,
configure the binding of IO devices together into consoles, and start
working. The braille reader needs to be both a display and a
keyboard/mouse for the blind guy. Both joysticks will be used to test
the simulator the programmers are working on, and all three sighted guys
would like to be able to use the same joysticks without having to do more
than toggle a software switch as to which console the joysticks are bound
to. Oh, and the guys from the other workgroup would like to be able to
use their joysticks remotely as well, with the same restrictions.
Finally, the LCD displays may sometimes be used two or three to a person.
That too should be a matter of changing a few software settings.

To say that the current Linux console system would have a
difficult time in this environment would be like saying that the Titanic
had a spot of trouble. It is too bound to the old Unix TTY concept, which
itself is built around character displays and keyboards and command lines.
Consoles are a lot more complex and diverse than they were when Unix was
invented. Nowadays, it is more correct to say that a console is the sum
total of the IO hardware in use by one person at one time. The user is
the deciding factor.

To get this kind of flexibility, you need to use an abstract
message-passing interface. The devices are not important; what matters is
the messages they send and how they treat received messages. If that
braille reader/writer talks the same messages that the LCD panels and the
keyboards do, the system doesn't need to care about the true nature of the
device. Starting to sound like JINI again. GGI Console will provide such
a system. All the old traditional console features of Unix will be 100%
preserved for those that want them, but encapsulated within the larger
messaging system.
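
To make the idea concrete, here is a purely hypothetical C sketch.
None of these types or names are real GGI Console code; they just
illustrate the source/sink model:

/* Invented for illustration - not actual GGI Console structures. */
struct con_msg {
        int type;               /* key, pointer, pixel rect, braille cell... */
        int len;
        unsigned char data[64];
};

struct con_node {
        const char *name;
        int (*emit)(struct con_node *self, struct con_msg *msg);
        struct con_node *next;  /* where this node's output is routed */
};

/* A braille reader and an LCD panel both just implement emit(); the
 * router in between neither knows nor cares which is which. */
static int con_route(struct con_node *from, struct con_msg *msg)
{
        return from->next ? from->next->emit(from->next, msg) : 0;
}

Rebinding a device to a different console is then nothing more than
repointing a node's next pointer.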

Unix is based on files and streams of data flowing from place to
place through those files. GGI Console is based on the same idea. Just
as shell commands can be strung together with piping and redirection to
create larger meta-commands, so too can GGI Console classes be strung
together with switching and routing of messages to create larger
meta-classes. Modularity, minimalism and flexibility are core Unix
principles. GGI Console follows them.

Jon

---
'Cloning and the reprogramming of DNA is the first serious step in 
becoming one with God.'
	- Scientist G. Richard Seed

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.altern.org/andrebalsa/doc/lkml-faq.html