Re: preempting I/O (audio latency + making X more responsive)

David Olofson (audiality@swipnet.se)
Fri, 20 Aug 1999 23:06:43 +0200


Benno Senoner wrote:
>
> > Source code would be good, because it saves time even if we choose
> > a slightly different way to implement it. My plan was to give the disk process
> > a higher priority, so that the disk process is always suspended by read(), write()
> > or by waiting for new tasks. How do you think this differs from your
> > suggestion?
>
> David Olofson is actually designing a killer audio engine,
> but I plan to implement some demo code to show vendors the
> TRUE audio capabilities of Linux.

...which would also mean we have something to play with, so we can test
our ideas, and perhaps avoid some serious design mistakes. :-)

> My plan is to develop a demo app,
(...)
> Qt
(...)
> I think I will use about 3 threads.
> One for pure audio I/O, and one for MIDI I/O, since it seems that I'm unable to
> open the MIDI device in non-blocking mode. Maybe I could use select()
> to see if data is available, but since I must read 1 byte at a time
> to avoid blocking, I'd have to do multiple select() iterations per DSP cycle,
> and I don't like to issue that many syscalls in the same cycle.
> The third thread is the disk I/O thread, which will run at the lowest priority,
> but still higher than normal tasks, to avoid interference.
> Anyone got better ideas ?
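
On the MIDI side: one select() per DSP cycle should be enough, if you read
everything that's waiting in one go instead of one byte at a time. Just a
rough sketch of what I mean (assuming an OSS style raw /dev/midi device;
the names and sizes are mine):

#include <sys/time.h>
#include <sys/types.h>
#include <unistd.h>

/* Call once per DSP cycle. Returns the number of MIDI bytes read. */
int poll_midi(int midi_fd, unsigned char *buf, int bufsize)
{
        fd_set readable;
        struct timeval timeout = { 0, 0 };      /* poll; never block */

        FD_ZERO(&readable);
        FD_SET(midi_fd, &readable);

        if (select(midi_fd + 1, &readable, NULL, NULL, &timeout) <= 0)
                return 0;                       /* nothing there (or error) */

        /*
         * select() said there's at least one byte queued, so a single
         * read() returns whatever is available (up to bufsize) without
         * blocking - two syscalls per cycle, no matter how much data.
         */
        return read(midi_fd, buf, bufsize);
}

That way the DSP cycle never touches the device unless there's actually
data waiting, and you don't need non-blocking mode at all.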

Other than that, sounds nice to me, but there's one thing that worries me
a bit: Qt. I know it's a nice toolkit, but I have two problems with it in
this case:

1) Many Linux distributions don't install it by default, which means
trouble for the ones we really need around here: The hardware and
software developers. Keep in mind that the ones behind the most
successful products are forced to develop for Windoze and MacOS, and many
of them haven't got much Linux experience.

2) The Licence... Ok, QPL is nice, but what about small companies that
want to release applications and plugins for Audiality/Linux, ALSA, or
whatever becomes the next audio standard? This is not really a technical or
licence issue, but one of keeping old FUD from getting in our way.

My suggestion: wxWindows or some other multi-target toolkit, and compile
2 or 3 versions. That would make it easier for "the Windoze tech guy who's
supposed to take a look at this new Linux development".

Or is it enough to just carefully point out that we're demonstrating
*Linux*, and not any complete multimedia API/engine or whatever? I'm
afraid we still have to fight old FUD whenever we act outside the
Linux community, so this might be worth considering.
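
As for the three threads: SCHED_FIFO with three different priority levels
should give you exactly the "disk above normal tasks, audio above
everything" layout. A minimal sketch (this needs root, it assumes
LinuxThreads/POSIX priority scheduling, and the actual numbers are just
picked out of the air):

#include <pthread.h>
#include <sched.h>

/* Put the calling thread into the real time FIFO class at 'prio'. */
static int set_rt_priority(int prio)
{
        struct sched_param p;

        p.sched_priority = prio;
        return pthread_setschedparam(pthread_self(), SCHED_FIFO, &p);
}

/*
 * audio thread:  set_rt_priority(50);  - highest; runs the DSP cycle
 * MIDI thread:   set_rt_priority(40);
 * disk thread:   set_rt_priority(10);  - still above all SCHED_OTHER tasks
 * GUI etc:       plain SCHED_OTHER     - don't touch it
 */

Even at the lowest real time priority the disk thread will starve
everything in the SCHED_OTHER class whenever it's runnable, so it has to
spend most of its time blocked in read()/write() - but that's the whole
point of the design anyway.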

> > But doesn't loading large buffers block disk access for the other audio streams?
> > What about fragmenting the large buffers so that fragments of different
> > buffers are interleaved when loading/writing? Using small buffers might
> > increase the load on the audio engine, so fragmented large buffers
> > might be a good compromise.
>
> Sorry, I meant the sum of all buffers = large; of course you may buffer about
> 1-2 sec per track, and then when you do multitrack playback you read
> one buffer at a time in an interleaved fashion.

That's the only way to do it, unless you're happy with using at most 10%
of the possible transfer rate... Disk caching and read-ahead are screwed
up by this kind of operation, so the simple rule is

buffers_per_second = bytes_per_second_per_stream / buffer_size;
seeks_per_second = number_of_tracks * buffers_per_second;

and the drive is completely *dead* (spending all of its time seeking) when

seeks_per_second * average_seek_time == 1

Play around with this a bit, and you'll see that you need quite big buffers
even for the fastest drives...
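
For example (the figures are assumptions, just to get the order of
magnitude right):

seek_time = 0.010;                      /* 10 ms average seek            */
stream_rate = 44100 * 2 * 2;            /* 16 bit stereo = 176400 B/s    */
tracks = 32;

max_seeks_per_second = 1 / seek_time;   /* ~100 buffer refills/s, total  */
min_buffer_size = tracks * stream_rate / max_seeks_per_second;
                                        /* ~56 kB per track - and that   */
                                        /* leaves NO time for the actual */
                                        /* transfers!                    */

In practice you want to stay well below that limit, since seeking is pure
overhead; the drive still has to spend most of its time reading data.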

(...Nemesystech's Gigasampler ( http://www.nemesystech.com )...)

Use the rule above. Nemesystech do, and that's why Gigasampler 1) works
(well, reliability is another issue...) and 2) requires MASSES of RAM.

Probably obvious: keep in mind that you must keep all start point buffers
in RAM all the time; that data can't be stored in the streaming buffers.
The start point buffers should be handled as parts of the loaded
instruments (multitimbrality), while the streaming buffers belong to the
voices (polyphony).
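
In code terms, I'm thinking of something along these lines (just a sketch
of the data layout; the names and sizes are invented on the spot):

#define START_BUFFER_BYTES   65536   /* preloaded attack, always in RAM   */
#define STREAM_BUFFER_BYTES  65536   /* refilled from disk while playing  */

/* One per sample; part of the loaded instrument (multitimbrality). */
struct sample_info {
        char          *filename;
        long           length;                  /* total bytes on disk    */
        unsigned char  start[START_BUFFER_BYTES];
};

/* One per playing voice (polyphony); only borrows the sample's start. */
struct voice {
        struct sample_info *sample;
        long           disk_pos;                /* next byte to stream in */
        unsigned char  stream[STREAM_BUFFER_BYTES];
};

The disk thread only ever touches the stream[] buffers; the start[]
buffers are filled once, when the instrument is loaded, and stay put.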

(...)
> I read that the QLinux folks are developing a bandwidth-allocating
> disk scheduling algorithm, so that you could request a
> guaranteed 2MB/sec for your app, without the risk of an "overbooked" disk.
> Hopefully it will become usable soon.
> But in the meantime you only have to worry about very high disk I/O in the
> background IF you use more than 70% of the theoretical disk bandwidth in
> your audio-recording app.

70% on a 7200 rpm EIDE disk is A LOT of tracks...
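
Back-of-envelope, assuming such a drive sustains somewhere around 10 MB/s
(which is roughly what I'd expect; treat that number as a guess):

disk_rate = 0.7 * 10e6;                 /* 70% of ~10 MB/s               */
track_rate = 44100 * 2 * 2;             /* 16 bit stereo = 176400 B/s    */
tracks = disk_rate / track_rate;        /* ~40 stereo tracks, or close   */
                                        /* to 80 mono ones               */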

(...)
> shows me that the higher priority (nice -20) "dd" writes the data to the disk
> about 2-2.5 times faster than the "dd" running at normal priority.
> Therefore if you plan to run other I/O-intensive tasks, just leave some 30-40%
> headroom when doing audio-from/to-disk stuff
> (this means if the disk is theoretically able to deliver 80-track playback,
> just don't use more than 50-60),
> or use a separate disk for audio, or use the Linux software RAID code and make a
> RAID0 array with up to n-fold disk bandwidth.
> IMHO a powerful PII box with 3-4 UDMA EIDE disks
> (like my nice IBM 16GB :-) ), combined with the soft-RAID features of Linux,
> could deliver so many tracks that it would satisfy even the most demanding
> audio-recording folks, even David. :-)

Oh, I can live with some 100 tracks or so... ;-)

But people who are going to use huge multilayer sampled pianos and that
kind of thing probably can't get enough tracks. Step on the pedal, play
a long, big sliding arpeggio, and see almost ANY sampler or synth start
cutting notes... 128 voices is standard, and for a really good piano
sound, you need at least two voices per key for crossfading through the
layers to follow the velocity. That's 176 voices (88 keys x 2) for the
full keyboard.

No problem wasting tracks and voices, that is! :-)

> To do some comparison on the Windoze side: Gigasampler will not deliver many
> from-disk-streaming voices if you run a harddisk recording app in the background.

If you really expect some real sampler performance, you should probably
at least have a separate disk for the soft sampler anyway, perhaps even
a separate computer. Well, with Windoze, you most likely *have* to do it
that way, but we're still dealing with the same disks and CPUs, so don't
expect great improvements in areas other than latency and reliability.
(And that's by all means good enough, IMO! :-)

//David

PS. Look at the cc line... Time to start a new list, perhaps? :-) Or is
there any Linux multimedia list (not directly related to a project) that
actually has some activity, and is followed by the active hackers? I
guess there will soon be a need for a general Linux high end multimedia
discussion forum. And perhaps a site with an FAQ...
