> Musicians are perfectly accustomed to working with a "raw" mix in
> their cans, and hearing the far-superior results in the mixdown, and
> this is a natural extension of that process. It also tallies better
> with the real process in the studio, with temporary effects used in
> the cans, but the final effects chosen during the lengthy mixing
> process after all sound has been recorded.
What about interactive playback, when a key is responding to velocity?
That's the core of the issue, and it certainly isn't the 5ms-delay BS you
cited earlier.
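To put a number on it: the key-to-sound latency of an interactive instrument is bounded below by the hardware buffer size, not by any disk read-ahead scheme. A minimal sketch of that arithmetic (the helper name is illustrative, not any real API):

```c
#include <assert.h>

/* Output latency in milliseconds for a given hardware buffer size
 * (in frames) and sample rate (in Hz).  Until the whole buffer has
 * been filled and handed to the device, the note cannot sound, so
 * this is a hard floor on interactive response time. */
static double buffer_latency_ms(unsigned frames, unsigned rate_hz)
{
    return 1000.0 * (double)frames / (double)rate_hz;
}
```

At 44.1kHz, a 5ms response means a buffer of only about 220 frames; a comfortable 4096-frame read-ahead buffer gives over 90ms, which is hopeless for playing live.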
> When it came to playback after the original recording was done, an
> asynchronous process or thread could be used to read ahead of the
> playback and "render" the raw audio into processed sound buffers which
> would play back just as the "actual" playback off disk happened. This
> would mean that you could provide nice big buffers for your
> third-party plugins, and wouldn't need realtime for them at all.
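The render-ahead scheme quoted above is a classic bounded producer/consumer. A minimal sketch, assuming a fixed ring of pre-sized buffers (all names and sizes here are illustrative): a renderer thread runs the plugin chain without realtime constraints and publishes finished buffers, while the playback side consumes them.

```c
#include <pthread.h>
#include <string.h>

#define NBUF   8       /* buffers of render-ahead headroom */
#define FRAMES 256     /* frames per buffer (illustrative)  */

struct ring {
    float buf[NBUF][FRAMES];
    int head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t not_full, not_empty;
};

static void ring_init(struct ring *r)
{
    memset(r, 0, sizeof *r);
    pthread_mutex_init(&r->lock, NULL);
    pthread_cond_init(&r->not_full, NULL);
    pthread_cond_init(&r->not_empty, NULL);
}

/* Renderer side: wait for a free slot, publish one processed buffer.
 * This thread may run the plugin chain as slowly as it likes, as
 * long as it stays ahead of playback on average. */
static void ring_push(struct ring *r, const float *cooked)
{
    pthread_mutex_lock(&r->lock);
    while (r->count == NBUF)
        pthread_cond_wait(&r->not_full, &r->lock);
    memcpy(r->buf[r->head], cooked, sizeof r->buf[0]);
    r->head = (r->head + 1) % NBUF;
    r->count++;
    pthread_cond_signal(&r->not_empty);
    pthread_mutex_unlock(&r->lock);
}

/* Playback side: wait for a rendered buffer and hand it to the audio
 * device.  NBUF buffers of headroom absorb an occasional slow plugin
 * without a dropout. */
static void ring_pop(struct ring *r, float *out)
{
    pthread_mutex_lock(&r->lock);
    while (r->count == 0)
        pthread_cond_wait(&r->not_empty, &r->lock);
    memcpy(out, r->buf[r->tail], sizeof r->buf[0]);
    r->tail = (r->tail + 1) % NBUF;
    r->count--;
    pthread_cond_signal(&r->not_full);
    pthread_mutex_unlock(&r->lock);
}
```

Note that this only works when the playback position is known in advance; it says nothing about the interactive case below.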
Audio programmers aren't *dipshits*; of course they know this stuff,
along with neural-network noise reduction, adaptive signal processing,
complex math, etc...
> The case of making an instrument (e.g. a synth) or an effects processor
> for live use (e.g. a guitar effects unit) is pretty different from this,
> and clearly (in my mind anyway) is a hard realtime project.
What you've outlined above is the core of the issue. It can be done
using either soft or hard realtime facilities. It's realtime [soft/hard]
DSP hell, and fudging around the problem with typical Unix time-sharing
concepts misses the point[s] completely.
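For the soft-realtime flavour, Linux already provides the standard POSIX facilities: put the audio thread under SCHED_FIFO and lock its memory so page faults can't blow the deadline. A minimal sketch (the priority value is illustrative; this needs root or CAP_SYS_NICE):

```c
#include <sched.h>
#include <sys/mman.h>

/* Best-effort switch of the calling process to SCHED_FIFO and lock
 * all current and future pages into RAM.  Returns 0 on success, -1
 * on failure (typically: insufficient privilege). */
static int enter_soft_realtime(int prio)
{
    struct sched_param sp;

    sp.sched_priority = prio;
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
        return -1;
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
        return -1;
    return 0;
}
```

Even with SCHED_FIFO, worst-case scheduling latency on a stock kernel is what actually bounds how small the audio buffers can get, which is exactly the hard-vs-soft realtime distinction at stake here.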
All the previous stuff about buffering is largely useless to this
discussion. Everybody, including their little sister[s], knows about
buffering data N seconds ahead of the audio hardware output.
> Sean Hunter
bill
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/
This archive was generated by hypermail 2b29 : Fri Jul 07 2000 - 21:00:10 EST