Monday, January 26, 2015

Realtime/embedded music computation

I have a school project where I have to do some music computation in realtime on an embedded system.

I have two things to do:



  • from a piece of music saved on the system, I need to determine where in that piece the music currently being played is.

  • from notes saved on the system, I need to detect when they are played live. For example, a guitarist in a band plays a sequence of notes and saves it on the system; then, when the whole band plays live, the system has to detect when the guitarist plays that sequence of notes.


I need some advice.


The first is about the hardware to use. I thought about the Raspberry Pi together with an audio input board (ADC-DAC Pi, ADC Pi, or the Wolfson audio card). But I saw there is the Banana Pi too, which seems more powerful and, what's more, has an on-board microphone.

So my questions are:



  • Should I use the Raspberry Pi or the Banana Pi for what I want?

  • If I take the Banana Pi, is the on-board microphone of good enough quality? And if not, can the other boards be used with the Banana Pi?


The second piece of advice I need is about the language and library to use.

For my project I'll use Fujishima's method, where the first step is to "transform an input sound to a Discrete Fourier Transform (DFT) spectrum". So I need to manipulate the spectrum of the sound.

In his paper he used the Common Lisp Music/Realtime environment, but I saw that this library was translated to C as sndlib, and I also saw the libsndfile library (also for C).

So my questions are :



  • Is there a more suitable language than C (I don't think so)?

  • Are these libraries good enough (low-level, optimized, ...)? Or are there other libraries?

  • Is there a tutorial for the "best" library?


I hope I was understandable and didn't make too many language mistakes.

Thanks for any help.

Phantom

