Sunday, February 1, 2015

How do I record audio in JUCE so that the file header has no 'JUNK' subchunk?

I am trying to develop an application using the JUCE library that can record audio or open an audio file. The audio file is then passed into the openSMILE program to have its feature values extracted. All audio files are in WAVE format, and the application will ultimately be built for the iPhone platform.


I have developed the part of the application that records audio and opens an audio file from the file directory. I am able to pass some audio files into openSMILE to have their feature values extracted, but not others. None of the files recorded by the JUCE application itself can be passed in.


The error produced for the files that fail is as follows:



smilePcm: Riff: 46464952
Format: 45564157
Subchunk1ID: 4b4e554a
Subchunk2ID: 0
AudioFormat: 0
Subchunk1Size: 34
smilePcm: bogus wave/riff header or file in wrong format ('Audio/Audio Recording.wav')! (maybe you are trying to read a 32-bit wave file which is not yet supported (new header type...)?)
(ERROR) [1] in cWaveSource: failed reading wave header from file 'Audio/Audio Recording.wav'! Maybe this is not a WAVE file?
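For what it's worth, the numbers in that log already reveal the problem: each field is a four-byte ASCII tag printed as a little-endian hex integer. Decoding them (values taken exactly as logged) shows that openSMILE found 'JUNK' where it expected the 'fmt ' subchunk:

```python
import struct

# Field values exactly as printed by smilePcm, decoded as
# little-endian four-byte ASCII tags.
decoded = {name: struct.pack('<I', value).decode('ascii')
           for name, value in [('Riff', 0x46464952),
                               ('Format', 0x45564157),
                               ('Subchunk1ID', 0x4b4e554a)]}
print(decoded)  # {'Riff': 'RIFF', 'Format': 'WAVE', 'Subchunk1ID': 'JUNK'}
```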


To find the cause of the error, I extracted the wave header information of the passable and non-passable audio files using RiffPad.


For the audio files that could be passed into the openSMILE program, the wave file header information is as follows:


Audio 1



RIFF-WAVE - (len= 180260, off= 12)
fmt - (len=16, off=20)
data - (len=180224, off=44)


Audio 2



RIFF-WAVE - (len= 19236, off= 12)
fmt - (len=16, off=20)
data - (len=19200, off=44)


And the non-passable ones are as follows:


Audio 3 <---Recorded from my JUCE application



RIFF-WAVE - (len= 128096, off= 12)
JUNK - (len=52, off=20)
fmt - (len=16, off=80)
data - (len=128000, off=104)


Audio 4 <---A random audio file that also can't be passed into openSMILE



RIFF-WAVE - (len= 21289308, off= 12)
fmt - (len=40, off=20)
fact - (len = 4, off=68)
data - (len=21289248, off=80)


I am guessing (correct me if I am wrong) that the error would go away if I could remove the JUNK subchunk from the recorded wave file, i.e. Audio 3, so that its headers match those of the passable audio files.


I thought of two possibilities that might resolve this issue:




  1. Record the JUCE audio with a header format similar to that of the passable files (the most straightforward and preferred method, if workable)




  2. Convert the audio file after recording so that the headers match (I have read that libsndfile or the Audio Compression Manager (ACM) might work, but I am not sure whether they are usable on all the platforms JUCE can build for, e.g. iPhone)




For the first option: is there any way I can record audio in the same 'right' format as the passable audio files?


For the second option: could I use a library that builds cross-platform, or somehow take the data chunk out of the recorded audio and prepend a header in the 'right' format? (From what I have read, the JUNK chunk exists to hold optional information and can be skipped if it is not required. I presume that removing it would not be a problem, as long as I adjust the total length in the RIFF-WAVE header.)
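The second approach needs nothing beyond standard RIFF chunk walking, so it is cross-platform by construction. Below is a minimal Python sketch (the name `strip_junk_chunk` is made up for illustration) that copies every chunk except 'JUNK' and recomputes the RIFF length, exactly as described above:

```python
import struct

def strip_junk_chunk(src_path, dst_path):
    """Rewrite a RIFF/WAVE file without its 'JUNK' chunk (hypothetical helper)."""
    with open(src_path, 'rb') as f:
        riff, _, wave = struct.unpack('<4sI4s', f.read(12))
        assert riff == b'RIFF' and wave == b'WAVE', 'not a RIFF/WAVE file'
        chunks = []
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            cid, size = struct.unpack('<4sI', header)
            data = f.read(size + (size & 1))  # chunks are word-aligned
            if cid != b'JUNK':
                chunks.append((cid, size, data))
    body = b''.join(struct.pack('<4sI', cid, size) + data
                    for cid, size, data in chunks)
    with open(dst_path, 'wb') as f:
        # RIFF length = 4 bytes for 'WAVE' + all remaining chunk bytes
        f.write(struct.pack('<4sI4s', b'RIFF', 4 + len(body), b'WAVE'))
        f.write(body)
```

Note the word-alignment handling: a RIFF chunk with an odd size is padded to an even byte, and the pad byte is not counted in the chunk's size field.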


Are any of the methods above possible, and if so, how should I carry them out?


Thanks!


Playing media in the background controlled by an ongoing notification

I am attempting to make an app that receives a URL via an intent and then plays the audio in the background, providing play/pause functionality through an 'ongoing' notification.


I have looked at similar examples like the following: How to put media controller button on notification bar? and Music player implementation in android.


The two above achieve things similar to what I need, but I need a combination of the two. To play the audio in the background I believe I need a Service, but I am unsure how to create the notification and send play/pause/stop requests from it back to the Service.


How can I make a simple media player that receives a URL, plays it in the background, and leaves a way to play/pause and stop it via a notification?


Thank you, Daniel


PyAudio with MultiProcessing

Sorry if this has been asked before; I checked but couldn't find an answer to this problem. I am trying to play a sound with PyAudio while using multiprocessing, so I can acquire input concurrently (ultimately from an NI board, but just the keyboard for now). I tried the multiprocessing module and ended up with the code below (the gensin function returns two numpy arrays: a time vector and a 'sin vector'). I'm new to both the multiprocessing and PyAudio modules, so any help would be very much appreciated :)



import multiprocessing
import numpy as np
import pyaudio

def play_sound(frequency, duration, sampRate):
    # generate the sin wave
    t, wave = gensin(frequency, duration, sampRate)

    # open the audio device
    p = pyaudio.PyAudio()

    # create a stream to play
    stream = p.open(format=pyaudio.paFloat32,
                    channels=1,
                    rate=sampRate,
                    output=True)

    stream.write(wave.astype(np.float32).tostring())
    p.close(stream)

frequency = 1200
duration = 0.5
sampRate = 64000

p1 = multiprocessing.Process(target=play_sound, name='audioOut',
                             args=(frequency, duration, sampRate))


When I then issue the command



p1.run()


it plays fine, but run() executes the target in the current process, so I don't get any concurrency that way.


But when I try



if __name__ == '__main__':
p1.start()
p1.join()


I get the following error:


Process play sounds:
Traceback (most recent call last):
  File "/Applications/anaconda/http://ift.tt/1z1lgaZ", line 258, in _bootstrap
    self.run()
  File "/Applications/anaconda/http://ift.tt/1z1lgaZ", line 114, in run
    self._target(*self._args, **self._kwargs)
  File "", line 13, in play_sound
    output = True)
  File "/Applications/anaconda/lib/python2.7/site-packages/pyaudio.py", line 747, in open
    stream = Stream(self, *args, **kwargs)
  File "/Applications/anaconda/lib/python2.7/site-packages/pyaudio.py", line 442, in __init__
    self._stream = pa.open(**arguments)
IOError: [Errno Internal PortAudio error] -9986


I'm running Yosemite on a 2013 MacBook Pro. The code is executed in IPython, but it doesn't work as a plain Python script either; I get the same PortAudio error number. I've tried billiard instead of multiprocessing and that didn't change anything. Any advice would be super helpful. Thanks :)


I don't understand how to save my new audio file on iPhone and got the error '-[__NSArrayM objectAtIndex:]: index 0 beyond bounds for empty array'

I created an audio-trimming function.


I select a file from the iPod library and want to store a selected time range in a new file so I can draw it as a waveform.


This is how the new file is created in the trim function:



NSString *path;
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
path = [[paths objectAtIndex:0] stringByAppendingPathComponent:@"cropedFile.m4a"];

NSURL *fileInput = self.globalURL;
NSURL *fileOutput = [[NSURL alloc] initFileURLWithPath:path];

NSLog(@"Output destination: %@", fileOutput);

if(!fileInput || !fileOutput) return NO;

[[NSFileManager defaultManager] removeItemAtURL:fileOutput error:nil];
AVAsset *asset = [AVAsset assetWithURL:fileInput];
NSLog(@"asset: %@", asset);
AVAssetExportSession *exportSession = [AVAssetExportSession exportSessionWithAsset:asset presetName:AVAssetExportPresetAppleM4A];

if(exportSession == nil) return NO;
NSLog(@"%@",exportSession);

CMTime startTime = CMTimeMake((int)(floor(beginCropMarker * 100)),100);
CMTime stopTime = CMTimeMake((int)(floor(endCropMarker * 100)), 100);
CMTimeRange exportTimeRange = CMTimeRangeFromTimeToTime(startTime, stopTime);

exportSession.outputURL = fileOutput;
exportSession.outputFileType = AVFileTypeAppleM4A;
exportSession.timeRange = exportTimeRange;

[exportSession exportAsynchronouslyWithCompletionHandler:^{
    if (AVAssetExportSessionStatusCompleted == exportSession.status)
        NSLog(@"Crop completed");
    else if (AVAssetExportSessionStatusFailed == exportSession.status)
        NSLog(@"Crop failed");
}];


When I run this, an error occurs in the waveform-drawing function at this line:



AVAssetTrack * songTrack = [songAsset.tracks objectAtIndex:0];


with message


Terminating app due to uncaught exception 'NSRangeException', reason: '*** -[__NSArrayM objectAtIndex:]: index 0 beyond bounds for empty array'


I don't know how to fix it. Please help me.


Play audio upon each button click

My project has two audio files, 'dog.mp3' and 'cat.mp3'. The code below lets me play either sound when the corresponding list view item is clicked. How can I modify the code to achieve the following?



  • When 'cat.mp3' is playing, if I tap the list view item for 'dog.mp3', I want the 'cat.mp3' file to stop playing and the 'dog.mp3' audio to play.

  • When 'dog.mp3' is playing, if I tap the list view item for 'cat.mp3', I want the 'dog.mp3' file to stop playing and the 'cat.mp3' audio to play.


All help will be appreciated.



mainList.setOnItemClickListener(new AdapterView.OnItemClickListener() {
    @Override
    public void onItemClick(AdapterView<?> adapterView, View view, int position, long id) {
        if (position == 0) {
            mp = MediaPlayer.create(Test.this, R.raw.cat);
            mp.start();
        }

        if (position == 1) {
            mp = MediaPlayer.create(Test.this, R.raw.dog);
            mp.start();
        }
    }
});

How to get url path of file selected using MPMediaPickerController using SWIFT?

I am a beginner in coding.


Does anyone know a solution to this question:


How to get url path of file selected using MPMediaPickerController using SWIFT?


Can I play blob object( I mean audio file ) in mobile device?

I'm trying to make a web page where, after I record my voice, I can listen to the recording. It works on desktop, but it doesn't work on an Android device (Chrome on Android). (Actually, I don't have any iOS device, so I'm not sure whether it works on iOS.)


A dynamically created Audio object (a blob) doesn't play on the mobile device. Can I fix this? Please give me some suggestions.