I'm writing a Windows application using WASAPI for audio, following the "Rendering a Stream" and "Capturing a Stream" examples at http://ift.tt/1Jaye9C. As a start, I'm capturing microphone input and playing it back through a headset.
As a sanity check, I'm saving the captured frames to a file. I can play those frames in Linux using aplay with the FLOAT_LE format, and the sound is fine.
But the two endpoints report different mix formats. Calling GetMixFormat on each device shows that the capture device uses one channel at a 48000 Hz sample rate, while the render device uses two channels at 44100 Hz.
So I don't think I can simply copy the captured frames into the render device's buffer without converting them somehow; when I do a straight copy, the playback sounds completely wrong.
Does WASAPI offer a convenient way to do such transcoding (I couldn't find one)? Any other Windows API for that?