Posted by: xml
audio devices - 16/09/1999 04:55
I tried cat sample.au > /dev/audio and also compiled mpg123 and ran that
(which uses /dev/dsp, I think). Neither of them produced any intelligible
output. Does audio programming involve non-standard (non-Linux) APIs? I'd
like to compile a text-to-speech program, but if I have to convert every
audio app to the empeg's conventions it could be a bit tricky. Should they
work? If not, would it be possible for you to provide standard /dev/audio
and /dev/dsp support at some point?
Paul
Posted by: altman
Re: audio devices - 16/09/1999 09:16
/dev/audio is vaguely standard, except that it *requires* fills of DMA buffer size (4608 bytes) and is locked at 44.1kHz stereo. The buffer fill bit is a bug, but it's likely to stay locked at that rate - we can't actually change the rate (the player does interpolation & flash stuff for lower bitrates).
The mixer calls are pretty standard I believe, though there are lots of other bits for controlling the DSP - many of which we simply can't document as we're under NDA with the docs on the DSP.
Hugo
Posted by: xml
Re: audio devices - 16/09/1999 09:25
> /dev/audio is vaguely standard, except that it *requires* fills of DMA buffer
> size (4608 bytes) and is locked at 44.1kHz stereo.
Ok, I'm sure I can work around that.
> The mixer calls are pretty standard I believe, though there are lots of
> other bits for controlling the DSP - many of which we simply
> can't document as we're under NDA with the docs on the DSP.
Surely that can't be right? Anybody can write a binary device driver for
Linux under NDA and still document the ioctls used to control the
driver?
Paul
Posted by: altman
Re: audio devices - 16/09/1999 11:51
The ioctls to use it are fine - it's more that if users want to twiddle settings in the DSP themselves, we can't help - e.g., we can't give out the algorithms used to create some of the tables, or explain some of the arcane methods in there.
Of course, you could ask us nicely to give you a different table or whatever... we're allowed to do that :)
Hugo