I am in the early stages of developing a software synthesizer in Java, and I have found that managing buffers gets complicated quickly, especially when components have multiple inputs and outputs.
My synthesizer consists of components (e.g. oscillators, filters, mixers) that generate, process, and pass audio between them as arrays of floats representing audio samples. For example, when an oscillator outputs samples to multiple components (e.g. to two different filters), how are the buffers between those components usually arranged? Would the pattern involve the oscillator having separate output buffers for each component it's sending audio to?
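For reference, here is a minimal sketch of one arrangement I'm considering, where the oscillator keeps a single output buffer and each downstream filter copies from it before processing. All class and method names are placeholders I've made up, not from any real library:

```java
// Sketch of the fan-out arrangement: one source buffer, multiple readers.
// Class/method names are hypothetical placeholders.
public class BufferFanOutSketch {

    static final int BUFFER_SIZE = 512;

    // A component that produces one block of samples into its own output buffer.
    interface Source {
        float[] render();            // returns the component's internal output buffer
    }

    // A component that consumes a block of samples.
    interface Sink {
        void process(float[] input); // reads, but never writes, the shared buffer
    }

    // The oscillator owns a single output buffer and fills it once per block.
    static class Oscillator implements Source {
        private final float[] out = new float[BUFFER_SIZE];
        private double phase;

        public float[] render() {
            for (int i = 0; i < out.length; i++) {
                out[i] = (float) Math.sin(phase);
                phase += 2 * Math.PI * 440.0 / 44100.0; // fixed 440 Hz at 44.1 kHz
            }
            return out;
        }
    }

    // Each filter copies the shared input into its own buffer before modifying it.
    static class SimpleFilter implements Sink {
        private final float[] out = new float[BUFFER_SIZE];

        public void process(float[] input) {
            System.arraycopy(input, 0, out, 0, BUFFER_SIZE);
            // ... apply filtering in place on 'out' ...
        }
    }

    public static void main(String[] args) {
        Oscillator osc = new Oscillator();
        SimpleFilter filterA = new SimpleFilter();
        SimpleFilter filterB = new SimpleFilter();

        // One render per block; both filters read the same output buffer.
        float[] block = osc.render();
        filterA.process(block);
        filterB.process(block);
    }
}
```

My uncertainty is whether this "one shared read-only buffer per output" approach is typical, or whether the oscillator should instead maintain a separate output buffer per connected component.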
If anyone could enlighten me a little more as to how buffers are typically arranged in complex audio applications, it'd be much appreciated!