Wednesday, 18 February 2015

Detecting audio track playing from background app in iOS Swift

I'm creating a mood-tracking app that, among other things, should use information about the songs the user listens to. Specifically, I'm interested in extracting just the titles that are otherwise visible on the lock screen when a track is playing. I've searched the web and have had no luck finding a solution for accessing this data using Swift.


Can anyone help?


How to intercept and read audio data as it is being output on a Raspberry Pi

Specifically, I want to write a program that reads the audio data being output to the analog jack on a Raspberry Pi running Pi MusicBox, for an LED visualizer. Previously I had a Processing sketch that could run on an average computer, but I am less experienced with ARM. I've considered writing a Java program to do this, but I haven't been able to find any good documentation.


Any help much appreciated, thanks!
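

If you go the Java route, one possible sketch is to read the stream back through javax.sound, under the assumption that the Pi has an ALSA loopback device (snd-aloop) configured so that whatever Pi MusicBox plays is also exposed as a capture device; the device selection and audio format below are assumptions, not something Pi MusicBox provides out of the box.


import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.TargetDataLine;

public class LoopbackLevelMeter {
    public static void main(String[] args) throws Exception {
        // 44.1 kHz, 16-bit, stereo, signed, little-endian
        AudioFormat fmt = new AudioFormat(44100f, 16, 2, true, false);
        DataLine.Info info = new DataLine.Info(TargetDataLine.class, fmt);

        // Assumes the ALSA loopback capture device is the default mixer;
        // otherwise pick the right one from AudioSystem.getMixerInfo().
        TargetDataLine line = (TargetDataLine) AudioSystem.getLine(info);
        line.open(fmt);
        line.start();

        byte[] buf = new byte[4096];
        while (true) {
            int n = line.read(buf, 0, buf.length);
            long sum = 0;
            for (int i = 0; i < n; i += 2) {
                // Assemble a little-endian 16-bit sample
                int sample = (buf[i + 1] << 8) | (buf[i] & 0xFF);
                sum += (long) sample * sample;
            }
            double rms = Math.sqrt(sum / (double) (n / 2));
            System.out.println("RMS level: " + rms); // drive the LEDs from this value
        }
    }
}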


Tuesday, 17 February 2015

How do I stop playing a sound effect with SpriteKit and Swift?

I'm writing a storybook app for my niece and I have a question about SpriteKit. I'm trying to set it up so that different types of audio play:



  1. Background music that loops (AVFoundation)

  2. Narration that plays when on a new page, or when you press the narrate button to replay the narration (SKAction)


My problem is that the narrations will play on top of each other if the user changes the page or presses the replay narration button, so it ends up sounding like two people talking over each other.


How can I stop all narrations that are playing when a new narration is triggered?


I can't find any relevant help on the internet. I've seen some posts saying to use AVFoundation, but from my (admittedly limited) understanding, that seems more suited to the background music and can only have one track playing.


Am I misinterpreting the documentation? Can someone help me answer this problem?



import SpriteKit
import AVFoundation

class Page1: SKScene {

    // MARK: Touch handling

    override func touchesBegan(touches: NSSet, withEvent event: UIEvent) {
        /* Called when a touch begins */
        for touch: AnyObject in touches {
            let location = touch.locationInNode(self)
            println("\(location)")

            // Checks if someone touched the menu button
            if menuButton.containsPoint(location) {
                println("Play story with sound!")
                settings.setInProgress()
                goToPauseMenu()
            }

            if soundButton.containsPoint(location) {
                println("Play story with sound!")
                settings.setInProgress()
                runAction(playNar)
            }

            // Checks if someone touched the forward button
            if pageForward.containsPoint(location) {
                println("Next Page!")
                settings.setInProgress()
                nextPage()
            }
        }
    }

Android media player : Stop buffering of audio stream on pause()

I am using MediaPlayer in my application; the whole application depends on it. I am streaming music from a server, and the streaming works perfectly. However, I have to do some optimization for better app performance and load balancing: I need to stop buffering while the user has the music player paused.


Currently, when I pause the player, it keeps buffering. I need to stop that to save stream bandwidth; it will save the user's data usage, and the server will also be spared some work.


Please show me a way to stop buffering while the music player is paused.


The buffer is only released when the music player is stopped.


My code is below



private MediaPlayer mMediaPlayer;
public String path = "mp3url";

mMediaPlayer = new MediaPlayer();
mMediaPlayer.setDataSource(path);
mMediaPlayer.prepare();
mMediaPlayer.start();

mMediaPlayer.setOnBufferingUpdateListener(new OnBufferingUpdateListener() {

    @Override
    public void onBufferingUpdate(MediaPlayer mp, int percent) {
        // Shows that buffering does not stop after pausing the MediaPlayer
        Log.d("LOG", "Buffering update ||| " + percent);
    }
});
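

MediaPlayer itself has no switch for suspending buffering while paused, so one common workaround (sketched below; not taken from any official sample, and the class, method and field names are hypothetical) is to remember the playback position, release the player on pause, and rebuild it on resume.


import java.io.IOException;
import android.media.MediaPlayer;

public class StreamController {
    private MediaPlayer mMediaPlayer;
    private final String path = "mp3url";
    private int resumePosition;

    public void pauseStream() {
        if (mMediaPlayer != null) {
            resumePosition = mMediaPlayer.getCurrentPosition();
            mMediaPlayer.reset();    // stops playback and buffering
            mMediaPlayer.release();  // drops the network connection
            mMediaPlayer = null;
        }
    }

    public void resumeStream() throws IOException {
        mMediaPlayer = new MediaPlayer();
        mMediaPlayer.setDataSource(path);
        mMediaPlayer.setOnPreparedListener(new MediaPlayer.OnPreparedListener() {
            @Override
            public void onPrepared(MediaPlayer mp) {
                mp.seekTo(resumePosition); // jump back to where the user paused
                mp.start();
            }
        });
        mMediaPlayer.prepareAsync();     // buffering starts again only on resume
    }
}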

Can't get jquery on click code to work

I'm having a problem with a piece of code which plays a short MP3/OGG file when a Font Awesome volume icon is clicked. The HTML works OK; the problem is with the JS code.



<p id="pathVar">/templates/beez_20/audio/dialogs/buy_flowers/</p>
<div id="dialog">
  <div id="title_block">Buying flowers</div>
  <div id="dlg_container">
    <div id="audio_player">{audio}Buying flowers|dialogs/buying_flowers/buying_flowers.mp3{/audio}</div>
    <p class="dlg_content eng_dlg"><span class="dlg_text" id="bf01">Shopkeeper: Good afternoon, how can I help you?</span>&#xa0;<span class="fa fa-volume-up fa-volume-up-dlg"></span></p>
    <p class="dlg_content eng_dlg"><span class="dlg_text">สวัสดีตอนบ่าย,มีอะไรให้ฉันช่วยไหม?</span></p>
    ...
  </div>
</div>


js code



jQuery.noConflict();
jQuery(document).ready(function() {
    jQuery("div#dlg_container").on("click", function (evnt) {
        var elementId = evnt.target.id,
            pathVar = document.getElementById("pathVar").innerHTML,
            oggVar = pathVar + elementId + ".ogg",
            mp3Var = pathVar + elementId + ".mp3",
            audioElement = document.createElement("audio");
        audioElement.src = Modernizr.audio.ogg ? oggVar : mp3Var;
        audioElement.load();
        audioElement.play();
    });
});


Firebug shows that the elementId variable is empty, whereas it should contain, in the example above, "bf01". I can't see why this is the case, as similar code elsewhere works. I guess I'm missing something obvious here. Thanks in advance for any help.


Android, AOA2, USB Isochronous Audio Streaming

Using the AOA v2 protocol, an Android device can output its audio stream to an accessory connected over USB. But is it possible for the accessory to send its audio stream to the Android device, so that the Android device acts as a USB speaker?


I'm actually planning to write a USB speaker driver using the AOA protocol, but I've got stuck here: I can make the device initialize in AOA mode, but I can't get the endpoints for the audio interface. So I'm leaning towards believing that audio input to an Android device isn't possible using AOA. Does anyone have any experience with this?


How store iOS audio recordings on Parse?

Newbie iOS coder here, apologies if the answer is really simple.


So I set up my audio recording in viewDidLoad



// Set the audio file
NSArray *pathComponents = [NSArray arrayWithObjects:
[NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject],
@"MyAudioMemo.m4a",
nil];
NSURL *outputFileURL = [NSURL fileURLWithPathComponents:pathComponents];

// Setup audio session
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:nil];

// Define the recorder setting
NSMutableDictionary *recordSetting = [[NSMutableDictionary alloc] init];

[recordSetting setValue:[NSNumber numberWithInt:kAudioFormatMPEG4AAC] forKey:AVFormatIDKey];
[recordSetting setValue:[NSNumber numberWithFloat:44100.0] forKey:AVSampleRateKey];
[recordSetting setValue:[NSNumber numberWithInt: 2] forKey:AVNumberOfChannelsKey];

// Initiate and prepare the recorder
recorder = [[AVAudioRecorder alloc] initWithURL:outputFileURL settings:recordSetting error:NULL];
recorder.delegate = self;
recorder.meteringEnabled = YES;
[recorder prepareToRecord];


I have a bar button that records new audio files:



// Stop the audio player before recording
if (player.playing) {
    [player stop];
}

if (!recorder.recording) {
    AVAudioSession *session = [AVAudioSession sharedInstance];
    [session setActive:YES error:nil];

    // Start recording
    [recorder record];

} else {
    // Pause recording
    [recorder pause];
}

self.navigationItem.rightBarButtonItem = [[UIBarButtonItem alloc] initWithTitle:@"Stop" style:UIBarButtonItemStylePlain target:self
                                                                         action:@selector(stopTapped)];


Then the Start button becomes a Stop button:



[recorder stop];

AVAudioSession *audioSession = [AVAudioSession sharedInstance];
[audioSession setActive:NO error:nil];
self.navigationItem.rightBarButtonItem = [[UIBarButtonItem alloc] initWithTitle:@"New" style:UIBarButtonItemStylePlain target:self
action:@selector(actionNew)];


How can I add this as a PFFile and save it to Parse? I've read through a lot of the Parse documentation but still don't really get the hang of it. Any help is much appreciated.


pauses during song playback in Raspberry Pi

I'm programming a playlist player in Python using pyspotify, running on a Raspberry Pi. Everything is great, except the songs have pauses during playback. I've tried everything: updating, checking the ALSA driver and the code, overclocking the Raspberry Pi, changing the output between analog/HDMI, prefetching the song, but I still get pauses.


What am I doing wrong?



def player(session):

    end_of_track = threading.Event()
    global volume_effect

    track = tracks[0]

    def on_end_of_track(self):
        end_of_track.set()

    session.on(spotify.SessionEvent.END_OF_TRACK, on_end_of_track)
    logger.info('End Track Event On')

    session.player.prefetch(track)
    session.player.load(track)
    session.player.play()

    logger.info('Playing: %r and %r', track.name, track.duration)

    tracks.rotate(1)

    logger.info('Next Song: %r', tracks[0].name)

    if volume_effect == False:
        for v in range(volume_min, volume_max, 5):
            mixer.setvolume(int(v))
            time.sleep(2)

        volume_effect = True

    try:
        while not end_of_track.wait(track.duration / 1000):
            pass
    except KeyboardInterrupt:
        pass
    else:
        end_of_track.clear()
        session.player.unload()
        player(session)

How can I reliably evaluate performance of Android app involving Audio recording and analysis

I am working on my master's thesis project, titled "Speaker Detection and Conversation Analysis on Mobile Devices", which involves audio recording, feature extraction and analysis on Android phones. So I am developing an Android app to test the hypothesis and to evaluate the feasibility of doing this on Android in offline mode.


Currently I am done with the audio recording and feature extraction part. I have three different implementations using different toolkits, and I have to evaluate the performance of each one to decide which to choose and take forward for the next steps.


Battery usage: my initial strategy is to record for at least one hour, first with the phone idle and then during normal use, while dumping the battery level at the start and end of the experiment. That will give me an idea of how much my app contributes to battery consumption, which is our main concern. Does that look reasonable? The official documentation states that constantly monitoring the battery level itself adds to consumption.


CPU consumption: I can also use http://ift.tt/1FcBZKx to evaluate the performance of the overall app or of specific resource-intensive chunks of the application, such as the FFT module.


Can I include any other parameter that is better suited to an app involving audio processing?
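

For the battery part, a lightweight way to take the start/end snapshots without continuous monitoring is to read the sticky ACTION_BATTERY_CHANGED broadcast; the following is only a minimal sketch, and the class and method names are made up for illustration.


import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;
import android.os.BatteryManager;

public class BatterySnapshot {
    // Returns the battery level as a percentage, read from the sticky
    // ACTION_BATTERY_CHANGED broadcast (no receiver has to stay registered).
    public static float batteryPercent(Context context) {
        IntentFilter filter = new IntentFilter(Intent.ACTION_BATTERY_CHANGED);
        Intent status = context.registerReceiver(null, filter);
        int level = status.getIntExtra(BatteryManager.EXTRA_LEVEL, -1);
        int scale = status.getIntExtra(BatteryManager.EXTRA_SCALE, -1);
        return 100f * level / scale;
    }
}


Calling this once at the start and once at the end of the one-hour run and taking the difference avoids the overhead of continuous monitoring that the documentation warns about.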


javascript AudioRecorder play sound from buffer

I am using cwilso/AudioRecorder http://ift.tt/1s0kdFP Demo: http://ift.tt/10MzWti


I can't figure out how to play the sound once recording is finished. Can somebody help please?


Using javax.sound to add background music

I'm not experienced in this topic, so please bear with me. I am trying to add background music to my game using the javax.sound classes. I have tried many methods described on the internet, but none of them work. Please give me a simple class capable of playing background music. (I am using Eclipse.)
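

A minimal sketch of such a class, assuming the track is a PCM-encoded WAV (javax.sound does not decode MP3 on its own) and that the file path is up to you:


import java.io.File;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.Clip;

public class BackgroundMusic {
    private Clip clip;

    public void start(String path) throws Exception {
        // Open the WAV file and hand it to a Clip, which keeps the data in memory
        AudioInputStream in = AudioSystem.getAudioInputStream(new File(path));
        clip = AudioSystem.getClip();
        clip.open(in);
        clip.loop(Clip.LOOP_CONTINUOUSLY); // keep looping until stop() is called
    }

    public void stop() {
        if (clip != null) {
            clip.stop();
            clip.close();
        }
    }
}


Usage would be something like new BackgroundMusic().start("assets/theme.wav") from the game's startup code (the path is hypothetical), keeping the instance around so stop() can be called later.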


Can't get audio file to play

I can't seem to figure out why my audio file won't play. The audio file is a WAV file. The error I am getting is javax.sound.sampled.UnsupportedAudioFileException.



public class MusicProgress {
    public static void main(String[] args) {
        JFrame b = new JFrame();
        FileDialog fd = new FileDialog(b, "Pick a file: ", FileDialog.LOAD);
        fd.setVisible(true);
        final File file = new File(fd.getDirectory() + fd.getFile());
        //URI directory = new URI (fd.getDirectory() + fd.getFile());
        try {
            AudioInputStream inputStream = AudioSystem.getAudioInputStream(file);
            AudioFormat audioFormat = inputStream.getFormat();
            Clip clip = AudioSystem.getClip();
            clip.open(inputStream);
            clip.start();
        } catch (LineUnavailableException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        } catch (UnsupportedAudioFileException e) {
            e.printStackTrace();
        }
    }
}

IOS 8: Real Time Sound Processing and Sound Pitching - OpenAL or other framework

I'm trying to build an app which plays a sequence of tones in a loop. Currently I use OpenAL, and my experience with the framework has been positive, since I can also pitch-shift the sound.


Here's the scenario:



  1. load a short sound (3 seconds) from a CAF file

  2. play that sound in a loop and perform a sound shift also.


This works well, provided that the beat rate isn't too high, i.e. more than 10 milliseconds per tone.


However, my NSTimer (which drives the tone sequence) should be configurable, and as soon as the beat rate increases (i.e. less than 10 ms per tone), the sound is no longer played back correctly; some tones are even dropped in a seemingly random way.


It seems that real-time sound processing becomes an issue here. I'm still a novice in iOS programming, but I believe Apple sets limits on time consumption and/or semaphores.


Now my questions:



  1. OpenAL is written in C; so far I haven't understood the whole code and philosophy behind the framework. Is there a way to resolve the problem described above by making some modifications, i.e. setting flags/values or overriding certain methods?

  2. If not, do you know another iOS sound framework more appropriate for this kind of real-time sound processing?


Many thanks in advance! I know this is quite an unusual and difficult problem; maybe someone here has solved a similar one? Just to emphasize: pitch shifting must be supported!


Audio recording and playback option on HTML5

I am trying to develop a browser application that records audio at 16 kHz and has an option to play it back at 16 kHz, using HTML5.


I want it to be compatible with iOS and Android browsers, primarily iOS. What would be a good API to use to achieve this?


Write numpy array to wave file in buffers using wave (not scipy.io.wavfile) module

This caused me a day's worth of headache, but since I've figured it out I wanted to post it somewhere in case it's helpful.


I am using Python's wave module to write data to a wave file. I'm NOT using scipy.io.wavfile because the data can be a huge vector (hours of audio at 16 kHz) that I don't want to / can't load into memory all at once. My understanding is that scipy.io.wavfile only gives you a full-file interface, while wave allows you to read and write in buffers. I'd love to be corrected on that if I'm wrong.


The problem I was running into comes down to how to convert the float data into bytes for the wave.writeframes function. My data were not being written in the correct order. This is because I was using the numpy.getbuffer() function to convert the data into bytes, which does not respect the orientation of the data:



x0 = np.array([[0,1],[2,3],[4,5]], dtype='int8')
x1 = np.array([[0,2,4],[1,3,5]], dtype='int8').transpose()
if np.array_equal(x0, x1):
    print "Data are equal"
else:
    print "Data are not equal"
b0 = np.getbuffer(x0)
b1 = np.getbuffer(x1)


result:



Data are equal

In [453]: [b for b in b0]
Out[453]: ['\x00', '\x01', '\x02', '\x03', '\x04', '\x05']

In [454]: [b for b in b1]
Out[454]: ['\x00', '\x02', '\x04', '\x01', '\x03', '\x05']


I assume the order of bytes is determined by the initial allocation in memory, as numpy.transpose() does not rewrite data but just returns a view. However since this fact is buried by the interface to numpy arrays, debugging this before knowing that this was the issue was a doozy.


A solution is to use numpy's tostring() function:



s0 = x0.tostring()
s1 = x1.tostring()
In [455]: s0
Out[455]: '\x00\x01\x02\x03\x04\x05'

In [456]: s1
Out[456]: '\x00\x01\x02\x03\x04\x05'


This is probably obvious to anyone who saw the tostring() function first, but somehow my search did not dig up any good documentation on how to format an entire numpy array for wave-file writing, other than to use scipy.io.wavfile. So here it is. Just for completeness (note that "features" is originally n_channels x n_samples, which is why I had this data-order issue to begin with):



outfile = wave.open(output_file, mode='w')
outfile.setnchannels(features.shape[0])
outfile.setframerate(fs)
outfile.setsampwidth(2)
bytes = (features*(2**15-1)).astype('i2').transpose().tostring()
outfile.writeframes(bytes)
outfile.close()

How to detect high volume and play something

I have searched and didn't find an answer. I have an audio recorder, and I want to edit the audio: play a "STOP" voice prompt when I hear a high volume. How can I detect a high volume, add a sound to that audio, and get the result back as an AudioRecorder recording?
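

If this is on Android, a rough sketch of the detection half is shown below: it reads raw samples from AudioRecord on a worker thread and compares the peak against a threshold. The threshold value, the class and method names, and the idea of playing a res/raw prompt afterwards are all assumptions; mixing the prompt into the recording itself is a separate step not covered here.


import android.content.Context;
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaPlayer;
import android.media.MediaRecorder;

public class LoudnessWatcher {
    private static final int RATE = 44100;
    private static final int THRESHOLD = 20000; // out of 32767, tune by experiment

    public static void watch(Context context, int stopPromptResId) {
        int bufSize = AudioRecord.getMinBufferSize(RATE,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioRecord rec = new AudioRecord(MediaRecorder.AudioSource.MIC,
                RATE, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufSize);
        rec.startRecording();

        short[] buf = new short[bufSize / 2];
        boolean loudHeard = false;
        while (!loudHeard) {
            int n = rec.read(buf, 0, buf.length);
            for (int i = 0; i < n; i++) {
                if (Math.abs(buf[i]) > THRESHOLD) { // peak above threshold = "high volume"
                    loudHeard = true;
                    break;
                }
            }
        }
        rec.stop();
        rec.release();

        // Play the "STOP" prompt, e.g. a file placed in res/raw.
        MediaPlayer.create(context, stopPromptResId).start();
    }
}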


HTML5/JavaScript library for audio record/play on iPhone device

I'm searching for a web-based solution to record 16 kHz voice audio and play it back on iPhone 5 and above. Existing solutions like recorder.js only work on Chrome and Firefox.


Not allowing audio file to be played if its already playing

I have this code in my project -


-(IBAction)Bpm70:(id)sender{



CFBundleRef mainBundle = CFBundleGetMainBundle();
CFURLRef soundFileURLRef;
soundFileURLRef = CFBundleCopyResourceURL(mainBundle, (CFStringRef) @"70bpm", CFSTR("mp3"), NULL);
UInt32 soundID;
AudioServicesCreateSystemSoundID(soundFileURLRef, &soundID);
AudioServicesPlaySystemSound(soundID);

Timer = [NSTimer scheduledTimerWithTimeInterval:0.1 target:self selector:@selector(TimerCount) userInfo:nil repeats:YES];


}




  • The code starts playing a metronome audio file when the button is pressed; it also starts an NSTimer object.




  • Once the button is pressed, I don't want the button to be pressable again (or the audio file to play again) until the metronome audio file has finished playing.




Any help would be greatly appreciated; I'm still relatively new to programming. Thanks.


how to play mp3 using javafx?

I am trying to play an MP3 file, but I am getting an error: "Cannot instantiate the type Media". I have no idea how to fix this error. I need the code to play the MP3 file, and I also need to get the length of the song in milliseconds. Here is my code:



import java.awt.FileDialog;
import java.io.File;
import java.io.IOException;
import java.net.URL;

import javafx.embed.swing.JFXPanel;
import javafx.scene.media.MediaPlayer;

import javax.print.attribute.standard.Media;
import javax.sound.sampled.UnsupportedAudioFileException;
import javax.swing.JFrame;

public class MusicProgress {
    static URL url = null;

    public static void main(String[] args) {
        JFrame b = new JFrame();
        final JFXPanel fxPanel = new JFXPanel();
        FileDialog fd = new FileDialog(b, "Pick a file: ", FileDialog.LOAD);
        fd.setVisible(true);
        //final File file = new File(fd.getDirectory() + fd.getFile());
        try {
            final Media medias = new Media(fd.getDirectory() + fd.getFile());
            MediaPlayer mediaPlayer = new MediaPlayer(medias);
        } catch (UnsupportedAudioFileException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
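

For reference, the Media class that can be instantiated is javafx.scene.media.Media; the javax.print.attribute.standard.Media imported above is an unrelated printing attribute class that cannot be constructed directly. Media also expects a URI string rather than a bare file path. A minimal sketch along the lines of the code above (the file name is hypothetical):


import java.io.File;
import javafx.embed.swing.JFXPanel;
import javafx.scene.media.Media;
import javafx.scene.media.MediaPlayer;

public class PlayMp3 {
    public static void main(String[] args) {
        new JFXPanel(); // initialises the JavaFX runtime from a Swing program

        // Media expects a URI string, not a raw file path.
        String uri = new File("song.mp3").toURI().toString();
        final Media media = new Media(uri);
        final MediaPlayer player = new MediaPlayer(media);

        player.setOnReady(new Runnable() {
            @Override
            public void run() {
                // The duration is only known once the media is ready.
                System.out.println("Length in ms: " + media.getDuration().toMillis());
                player.play();
            }
        });
    }
}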

PULSEAUDIO does not list my ALSA capture device as a capture source

First of all, hello everybody!


I have recently written an ALSA driver for my audio capture card, and I am having problems with PulseAudio recognising it as a capture source.


Here are some facts; hopefully one of you has experienced this in the past and will be able to help:


(1) The capture card has 2 audio inputs (stereo): one HDMI and one analog. I can capture sound from both of them when using ALSA directly.


(2) When I use the PULSEAUDIO command to list audio capture sources:



$ pactl list | grep -A2 'Source #' | grep 'Name: ' | cut -d" " -f2


The output shows the following, which are on-board devices and do not use my ALSA driver:



alsa_output.pci-0000_00_1b.0.analog-stereo.monitor
alsa_input.pci-0000_00_1b.0.analog-stereo


Therefore, it can be seen from the above that my devices are not listed as capture sources.


(3) When I use:



$ pactl list


The output, among other things, shows 'my' ALSA cards. Below are two example outputs: (a) for an on-board card (which does not use my driver) and (b) for the card which uses my driver:


(a) On-board card:



Card #0
Name: alsa_card.pci-0000_00_1b.0
Driver: module-alsa-card.c
Owner Module: 4
Properties:
alsa.card = "10"
alsa.card_name = "HDA Intel PCH"
alsa.long_card_name = "HDA Intel PCH at 0xfbf20000 irq 67"
alsa.driver_name = "snd_hda_intel"
device.bus_path = "pci-0000:00:1b.0"
sysfs.path = "/devices/pci0000:00/0000:00:1b.0/sound/card10"
device.bus = "pci"
device.vendor.id = "8086"
device.vendor.name = "Intel Corporation"
device.product.name = "6 Series/C200 Series Chipset Family High Definition Audio Controller"
device.form_factor = "internal"
device.string = "10"
device.description = "Built-in Audio"
module-udev-detect.discovered = "1"
device.icon_name = "audio-card-pci"
Profiles:
output:analog-stereo: Analogue Stereo Output (sinks: 1, sources: 0, priority. 6000)
output:analog-stereo+input:analog-stereo: Analogue Stereo Duplex (sinks: 1, sources: 1, priority. 6060)
output:analog-surround-40: Analogue Surround 4.0 Output (sinks: 1, sources: 0, priority. 700)
output:analog-surround-40+input:analog-stereo: Analogue Surround 4.0 Output + Analogue Stereo Input (sinks: 1, sources: 1, priority. 760)
output:analog-surround-41: Analogue Surround 4.1 Output (sinks: 1, sources: 0, priority. 800)
output:analog-surround-41+input:analog-stereo: Analogue Surround 4.1 Output + Analogue Stereo Input (sinks: 1, sources: 1, priority. 860)
output:analog-surround-50: Analogue Surround 5.0 Output (sinks: 1, sources: 0, priority. 700)
output:analog-surround-50+input:analog-stereo: Analogue Surround 5.0 Output + Analogue Stereo Input (sinks: 1, sources: 1, priority. 760)
output:analog-surround-51: Analogue Surround 5.1 Output (sinks: 1, sources: 0, priority. 800)
output:analog-surround-51+input:analog-stereo: Analogue Surround 5.1 Output + Analogue Stereo Input (sinks: 1, sources: 1, priority. 860)
input:analog-stereo: Analogue Stereo Input (sinks: 0, sources: 1, priority. 60)
off: Off (sinks: 0, sources: 0, priority. 0)
Active Profile: output:analog-stereo+input:analog-stereo
Ports:
analog-output: Analogue Output (priority 9900)
Part of profile(s): output:analog-stereo, output:analog-stereo+input:analog-stereo, output:analog-surround-40, output:analog-surround-40+input:analog-stereo, output:analog-surround-41, output:analog-surround-41+input:analog-stereo, output:analog-surround-50, output:analog-surround-50+input:analog-stereo, output:analog-surround-51, output:analog-surround-51+input:analog-stereo
analog-input-microphone: Microphone (priority 8700)
Part of profile(s): output:analog-stereo+input:analog-stereo, output:analog-surround-40+input:analog-stereo, output:analog-surround-41+input:analog-stereo, output:analog-surround-50+input:analog-stereo, output:analog-surround-51+input:analog-stereo, input:analog-stereo
analog-input-linein: Line In (priority 8100)
Part of profile(s): output:analog-stereo+input:analog-stereo, output:analog-surround-40+input:analog-stereo, output:analog-surround-41+input:analog-stereo, output:analog-surround-50+input:analog-stereo, output:analog-surround-51+input:analog-stereo, input:analog-stereo


(b) One of my cards (I have more than 1 card):



Card #11
Name: alsa_card.1
Driver: module-alsa-card.c
Owner Module: 31
Properties:
alsa.card = "1"
alsa.card_name = "OEM_VISIONRGB_AV"
alsa.long_card_name = "OEM_VISIONRGB_AV Analog DGC dada"
device.bus_path = "/devices/virtual/sound/card1"
sysfs.path = "/devices/virtual/sound/card1"
device.string = "1"
device.description = "OEM_VISIONRGB_AV"
module-udev-detect.discovered = "1"
device.icon_name = "audio-card"
Profiles:
input:analog-stereo: Analogue Stereo Input (sinks: 0, sources: 1, priority. 60)
off: Off (sinks: 0, sources: 0, priority. 0)
Active Profile: input:analog-stereo
Ports:
analog-input: Analogue Input (priority 10000)
Part of profile(s): input:analog-stereo


Notice that the "Name" fields differ considerably in form between my card and the on-board one. I tested that:




  • capturing from alsa_card.pci-0000_00_1b.0 (on_board card) works correctly.




  • it fails to capture from alsa_card.1 (my card).




Please note that my ALSA driver is still missing some elements; for example, a mixer element is not included, and I presume this may be the reason for the lack of cooperation between ALSA and PulseAudio. From your experience, do you know whether a mixer element is necessary for PulseAudio to qualify ALSA devices as capture sources?


I do apologise for this verbose message. Nonetheless, does the above ring a bell?


Thanks a lot for your help and suggestions,


Przemek


Getting the audio output of a computer

With the following code you obtain the audio input (microphone) in Processing (Java):



in = new AudioIn(this, 0);


How do I get the audio OUTPUT of the computer?
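

On the desktop, Java (and therefore Processing) can only capture what the operating system exposes as a capture device; the computer's own output usually shows up as a loopback/monitor source, such as "Stereo Mix" on Windows or "Monitor of ..." under PulseAudio. A small sketch to list the candidates:


import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.Mixer;

public class ListCaptureDevices {
    public static void main(String[] args) {
        // Look for a loopback/monitor entry; that is the device that carries
        // the computer's output, and it can then be selected as the input.
        Mixer.Info[] mixers = AudioSystem.getMixerInfo();
        for (int i = 0; i < mixers.length; i++) {
            System.out.println(i + ": " + mixers[i].getName()
                    + " - " + mixers[i].getDescription());
        }
    }
}


If no monitor/loopback entry shows up, the OS is simply not exposing the output for capture, and it has to be enabled (or a virtual loopback device installed) first.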


Android audio_hw_primary

I have a custom Android device with a rooted Android OS installed on it. Earlier, the sound and mic were working properly, but now neither of them works, and whenever I play any video I get the following errors in the log:



02-17 21:10:37.564: E/audio_hw_primary(2378): cannot open pcm_out driver 0: cannot open device '/dev/snd/pcmC4294967295D0p': No such file or directory
02-17 21:10:37.574: W/audio_hw_primary(2378): card -1, port 0 device 0x2
02-17 21:10:37.574: W/audio_hw_primary(2378): rate 44100, channel 2 period_size 0xc0


Can anyone help me solve this issue?


Thanks


Sound in videos is full of static

I'm trying to play sound from an FFMpegFrameGrabber by getting the Frame and sending the audio samples to a SourceDataLine. Here's what I have so far:


Creating the SourceDataLine:



int channels = _grabber.getAudioChannels();
int format = _grabber.getSampleFormat();
AudioFormat fmt = new AudioFormat(_grabber.getSampleRate(), format, channels, true, true);
_sourceDataLine=(SourceDataLine)AudioSystem.getLine(new DataLine.Info(SourceDataLine.class, fmt));
_sourceDataLine.open(fmt);
_sourceDataLine.start();


Attempting to play sound (images are handled in the else block):



org.bytedeco.javacv.Frame f = _grabber.grabFrame();

if (f.samples != null && f.samples.length > 0)
{
    byte[] bytes = new byte[4096];
    for (Buffer buffer : f.samples)
    {
        FloatBuffer floatBuffer = (FloatBuffer) buffer;
        ByteBuffer byteBuffer = ByteBuffer.allocate(floatBuffer.capacity() * 4);
        byteBuffer.asFloatBuffer().put(floatBuffer);
        byteBuffer.rewind();
        byteBuffer.get(bytes);
        _sourceDataLine.write(bytes, 0, bytes.length);
    }
}


(Note: I tried a few different versions of this and they all have static. The versions I tried included combining the buffers into one large buffer, only trying to play one sample instead of each channel, and changing the audio format to many different permutations.)


The problem is the sound is full of static, and almost completely unintelligible. This is my first time doing any audio programming, so I'm sure I'm doing something completely ridiculous.


I appreciate any help. Thank you.


This very simple code for a sound button builds but doesn't work on iPhone


#import "ViewController.h"

@implementation ViewController

-(IBAction) playSound1; {
CFBundleRef mainBundle = CFBundleGetMainBundle();
CFURLRef soundFileURLRef;
soundFileURLRef = CFBundleCopyResourceURL(mainBundle, (CFStringRef)@"Knave", CFSTR ("caf"), NULL);
UInt32 soundID;
AudioServicesCreateSystemSoundID(soundFileURLRef, &soundID);
AudioServicesPlaySystemSound(soundID);
}


This is my m file code for the IBAction. As I say, it builds, and the button press 'visually' works... but the audio file doesn't play. It's exactly the same as the tutorial I followed, and his worked fine... So what on earth am I doing wrong?


How to play sound _ during _ launch, not after launch in xcode

I've seen suggestions to play sound in:



- (void)viewDidLoad

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions


The latter being in AppDelegate.m.


These two methods play a sound after the launch; I've tested it, and it should be obvious from their names.


I would like to play a sound before, or preferably during, the launch. Is that possible? I've seen at least one popular app that looks like it does this, but it may be faking it by just showing a black screen at startup.


Monday, 16 February 2015

How to add .wav / audio file in C#.Net project and play it?



  • The audio file's location should not depend on a hard-coded path, so that changing the file path does not affect the application.




  • It should load and play without any button.




Xamarin - AudioRecord read not working

OK, my code works on an emulator, but now I am trying to test it on a 7" Voyager running Android 4.4.2. I run this: returnSize = Read(audioData, 0, audioData.Length);


and I get all zeros in audioData... again, this code works on the emulator with no problem.


I also set the required permissions for both RecordAudio and ReorderTasks, and I checked the microphone on the 7" Voyager (Android 4.4.2) and it does work.


So how can I troubleshoot this? Also, returnSize = 2084.


Convert data to .caf audio in ios

Currently I'm calling a web service and getting data back from it. I'm trying to turn that data into an audio file, but it doesn't create anything playable. I can create a file in .caf format, but the audio does not play.



NSString *dataPath = [[[Global sharedGlobals] documentsDirectoryPath] stringByAppendingPathComponent:[NSString stringWithFormat:@"/%@/recent.caf",_incidentID]];
[data writeToFile:dataPath atomically:YES];

uuid_record not recording audio on second record command

I have a setup where I open a connection to freeswitch through the ESL and start exchanging commands.


In one specific scenario I want FreeSWITCH to call me and record a message, so I call a phone number with sofia and park the call:


originate {set some private variables and origination_caller_id_number}sofia/gateway// &park()


During the call I play a few messages


uuid_broadcast playback::


And I listen to events, waiting specifically for DTMF tones so I can take action: play another message or start recording.


To stop a playback and start recording


uuid_break uuid_record start


I also play back the recorded file to the user, using the same playback command.


Now the issue: the first time a message is recorded it works fine and I can listen to it. But after I record a new message on the same call, nothing is recorded in the file. I can download the file and listen to it directly, and there is still no sound. I can see that the file is created and its size is consistent with the length recorded, but even looking at it in Audacity there is no audio in it.


What could be causing this, and does anyone have an idea how to fix it?


Thanks for the help!


MFT Audio fadeout effect

I'm writing a Windows Phone C# application and I need some custom effects, which are implemented as MFTs (Media Foundation Transforms).


I know only a little C++, so I would appreciate advice on how to start implementing an effect, for example an audio fade-out.


JavaScript audio will not play when tab is not in focus

I have a webpage that uses jQuery Countdown, and I have an audio clip that plays when the countdown reaches zero.


Here's the relevant code:



<script>
$(function (){
    $('#Timer1Timer').countdown({until: +(28800), onExpiry: play_single_sound});
    $('#Timer2Timer').countdown({until: +(172800), onExpiry: play_single_sound});
    $('#Timer3Timer').countdown({until: +(10), onExpiry: play_single_sound});
});
function play_single_sound() {
    document.getElementById('audiotag1').play();
}
</script>
<audio id="audiotag1" src="audio/alert.wav" preload="auto"></audio>


The audio plays just fine when I have the page open in its own window and I'm using a separate window to view another page. However, if I have a different tab open and active in the same window, the audio will not play when the countdown completes.


Any sort of assistance or explanation would be wonderful!


Read a WAV file and convert it to an array of amplitudes in Swift

I have followed a very good tutorial on udacity.com to explore the basics of audio applications with Swift. I would like to extend its current functionality, starting by displaying the waveform of the WAV file. For that, I would need to retrieve the amplitude of each sample from the WAV file. How could I proceed in Swift, given that I already have a recorded file?


Thank you!


Generate a sound (not from a file) in Swift / SpriteKit

I'm building a small game prototype, and I'd like to be able to play simple sounds whose length/tone/pitch will vary based on what the user is doing.


This is surprisingly hard to do. Closest resource I found was:


http://ift.tt/1zHlqoU


But this does not actually generate any sound on my device or on the iOS simulator.


Does anyone know of any working code to play ANY procedurally generated audio? Simple Sine Wave would do.


Web Audio API audio editor saving edited clip back onto web server

I am making a drum machine and have implemented a recording function using the Recorder.js library. The problem, as you may expect, is the limited functionality in terms of not being able to edit the recorded clips. So my question is: if I were to implement an audio editor that allows the user to trim a clip, how would I go about saving the edited clip back onto the web server?


Is this even possible using the Web Audio API?


Many Thanks


mozChannels/mozSampleRate is undefined

I am using this within an audio experiment of mine:



audiometa: function(){
    channels = audio.mozChannels;
    rate = audio.mozSampleRate;
    frameBufferLength = audio.mozFrameBufferLength;

    fft = new FFT(frameBufferLength / channels, rate);
},


For some reason, mozChannels, mozSampleRate and mozFrameBufferLength are undefined in the latest version of Firefox. Reading the code, I can't explain to myself why this could happen.


Is there something within the about:config page which I need to turn on? (I have tried it locally and on a web server.)


By the way, I am using this example. http://ift.tt/1L2WFoG


Thanks


Stop sound with button event

I have code for a sound that loops and plays in my GUI, contained in the main class. Main class code:



public class SoundTest {
    public static Clip clip;
    public static Mixer mixer;

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        StartGUI GUI = new StartGUI();
        GUI.setVisible(true);

        Mixer.Info[] mixInfos = AudioSystem.getMixerInfo();
        mixer = AudioSystem.getMixer(mixInfos[0]);

        DataLine.Info dataInfo = new DataLine.Info(Clip.class, null);
        try {
            clip = (Clip) mixer.getLine(dataInfo);
        } catch (LineUnavailableException l) {
            l.printStackTrace();
        }

        try {
            URL soundURL = Main.class.getResource("/soundtest/8-Bit-Noise-1.wav");
            AudioInputStream audioStrim = AudioSystem.getAudioInputStream(soundURL);
            clip.open(audioStrim);
        } catch (LineUnavailableException l) {
            l.printStackTrace();
        } catch (UnsupportedAudioFileException e) {
            e.printStackTrace();
        } catch (IOException i) {
            i.printStackTrace();
        }

        clip.start();
        do {
            System.out.println(clip.isActive());
            try {
                clip.loop(Clip.LOOP_CONTINUOUSLY);
                Thread.sleep(50);
            } catch (InterruptedException ie) {
                ie.printStackTrace();
            }
        } while (clip.isActive());
    }

    public void stop() {
        clip.stop();
    }
}


In my JFrame class I want to make a button event that will stop the sound. I have tried to make a stop() method in the main class and call it from the button, but so far it is not working.


JFrame code:


public class StartGUI extends javax.swing.JFrame {

    SoundTest q;

    /**
     * Creates new form SoundTestGUI
     */
    public StartGUI() {
        initComponents();
    }

    private void SoundBtnActionPerformed(java.awt.event.ActionEvent evt) {
        q.stop();
    }

    /**
     * @param args the command line arguments
     */
    public static void main(String args[]) {
        /* Create and display the form */
        java.awt.EventQueue.invokeLater(new Runnable() {
            public void run() {
                new StartGUI().setVisible(true);
            }
        });
    }
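

One likely issue in the code as shown is that the field q is never assigned, so the button handler calls stop() on null. A sketch of wiring the two classes together is below; it reuses the SoundTest class and the NetBeans-generated initComponents() from the question, so it is a fragment rather than a standalone program.


public class StartGUI extends javax.swing.JFrame {

    private final SoundTest soundTest;

    public StartGUI(SoundTest soundTest) {
        this.soundTest = soundTest;   // keep a real reference instead of a null field
        initComponents();
    }

    private void SoundBtnActionPerformed(java.awt.event.ActionEvent evt) {
        soundTest.stop();             // stops the looping clip
    }
}


The main method would then create the window with new StartGUI(new SoundTest()).setVisible(true); since the Clip is static, any instance can stop it.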

Open a file audio in R.Raw

I have to open a file that is in the /res/raw/ folder, but it seems that Android doesn't recognize the path. Here is my code:



public static void openRec()
{
    // This is the wav file that I have to analyze
    File file = new File("/res/raw/chirp.wav");

    try {
        FileInputStream in = new FileInputStream(file);
        chirp = new byte[(int) file.length()];
        in.read(chirp);

        Log.d("xxx", "" + chirp.length);
        in.close();
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
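

Resources under res/raw are packed inside the APK, so they cannot be opened with java.io.File paths; they have to go through the Resources API. A sketch of the same read using openRawResource follows; the surrounding class and the Context parameter are assumptions for illustration.


import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import android.content.Context;

public class RawReader {
    public static byte[] readChirp(Context context) throws IOException {
        // R.raw.chirp refers to res/raw/chirp.wav inside the APK.
        InputStream in = context.getResources().openRawResource(R.raw.chirp);
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            return out.toByteArray();
        } finally {
            in.close();
        }
    }
}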

Tinyalsa - Tinycap not working

I am trying to record audio using Tinycap (from the Tinyalsa NDK), and I am running into the following issues:



  1. On a Nexus 5 device, capturing seems to start, but the generated WAV file is always invalid.

  2. On a Samsung Galaxy S4 device, Tinycap always hangs until the PCM is released, and then the device crashes and restarts.


I have tried various PCM configurations, but they all result in the same behavior. Am I missing something? Is there something else I need to set up before capturing?


Play multiple audio files at once visual basic

I'm making a virtual piano in Visual Basic (Visual Studio 2013). However, I've found a problem. I've been using "My.Computer.Audio.Play(My.Resources.C, AudioPlayMode.Background)" to play a certain note, but if I then want to play another note it just cuts off the sound of the previous one, when what I want is to play the two notes at the same time.


Isn't there a function that lets an audio file play all the way to the end?


Thanks in advance


Preventing overlapping audio using Objective C

I am only on day 5 of learning to code for iPhones, so please forgive me for my current levels of stupidity.


I have an app that's working well, with short audio tracks being played whenever a button is pressed. So far, so good. However, the audio overlaps if another button is pressed before the previous sound has finished.


I have looked online to find a solution, but I can't get any of them to work.


Here's my .m file code for each button press:



-(IBAction)PlayAudioButton1:(id)sender;{

NSURL *resourceURL = [[NSBundle mainBundle] URLForResource:@"Coward.mp3" withExtension:nil];
AudioServicesCreateSystemSoundID((__bridge CFURLRef)resourceURL, &playSoundID);
AudioServicesPlaySystemSound(playSoundID);


}


OpenSL ES VS MediaRecorder / AudioRecord

In terms of audio recording, is there an advantage to using OpenSL ES instead of the SDK recorders (MediaRecorder / AudioRecord)?

Are there options that can be used to produce better quality or support a wider range of devices?

Can OpenSL ES support more options/sources if the device is rooted?


AVURLAsset tracksWithMediaType:AVMediaTypeVideo Return crash

I am working on merging audio and video. I am getting a crash in



[[AVURLAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0]


because tracksWithMediaType: returns an empty array.


I changed .mp4 to .mov and it still doesn't work.


Following is the code



NSURL *video_url = [NSURL fileURLWithPath:[UserDefaultsClass getVideoFile]];
NSLog(@"video_url:%@",video_url);

videoAsset = [[AVURLAsset alloc]initWithURL:video_url options:nil];
CMTimeRange video_timeRange = CMTimeRangeMake(kCMTimeZero,audioAsset.duration);

AVMutableCompositionTrack *a_compositionVideoTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];


Here I use fileURLWithPath: and it still doesn't work.


In console video_url print : video_url:file:///Users/indecommglobal/Library/Developer/CoreSimulator/Devices/35638854-89D1-429A-A01F-994A34F4E8B3/data/Containers/Data/Application/B5E2DA7E-E8F6-4078-B584-A0335FEEA84F/Documents/RecordedVideo/output03.mov


I'd really appreciate your help. Thanks.


Android - Import audio file as array of double

I have a sound recorded as an M4A file. I need to import this file as an array of doubles so I can calculate an FFT on it. How can I do that?
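

The decoding of the M4A/AAC data itself would typically be done with MediaExtractor and MediaCodec; once you have raw 16-bit PCM, turning it into doubles for an FFT is the easy part. A sketch of just that last step (the class and method names are mine):


public final class PcmUtil {
    // Converts 16-bit little-endian PCM bytes into doubles in [-1, 1).
    public static double[] pcm16ToDoubles(byte[] pcm) {
        double[] out = new double[pcm.length / 2];
        for (int i = 0; i < out.length; i++) {
            int lo = pcm[2 * i] & 0xFF;      // low byte, treated as unsigned
            int hi = pcm[2 * i + 1];         // high byte keeps the sign
            out[i] = ((hi << 8) | lo) / 32768.0;
        }
        return out;
    }
}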


Sunday, 15 February 2015

How to retrieve / get / download indirectly loaded audio from a web page?

I know that media can be loaded into a web page without using conventional HTML tags, using JavaScript and/or Flash. There are a lot of audio and video player plugins available, like JW Player. Usually, this media can be caught and traced using Firebug's Net console; I have used Firebug with Firefox for years. Recently, I came across a web page in which the audio can't be caught using Firebug. Here is the page. Can somebody explain to me how to catch audio from that page, or pages like it?


Playing and Recording audio on/from Bluetooth headset

As I'm new to Android Bluetooth, I started reading this, but I am confused about where to start.


I have an Android application and a Bluetooth headset.


1> I want to connect to the Bluetooth headset and play audio on it.


2> Simultaneously, I also want to record from the Bluetooth headset's mic.


Please point me to the docs or some samples.
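

For the recording half, the usual route on Android is to bring up the headset's SCO link through AudioManager and then record from the microphone source, which is then backed by the headset. The sketch below is only a rough outline: it needs the RECORD_AUDIO, BLUETOOTH and MODIFY_AUDIO_SETTINGS permissions, and a real app should wait for ACTION_SCO_AUDIO_STATE_UPDATED instead of assuming the link is already up.


import android.content.Context;
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioRecord;
import android.media.MediaRecorder;

public class BluetoothScoRecorder {
    public static AudioRecord startScoRecording(Context context) {
        AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
        am.setMode(AudioManager.MODE_IN_COMMUNICATION);
        am.startBluetoothSco();      // asynchronous: listen for SCO state updates
        am.setBluetoothScoOn(true);  // route audio over the SCO link

        int rate = 8000;             // SCO is narrowband
        int bufSize = AudioRecord.getMinBufferSize(rate,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
                rate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufSize);
        recorder.startRecording();
        return recorder;             // read() from this on a worker thread
    }
}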


Detecting BPM in a Windows Phone 8.1 Application

I'm here hoping to get an answer/suggestion/help for this problem. I've been searching the entire weekend for a solution; yes, I found some, but none really cover my needs. So let's get started.


Some background:


I'm in my 3rd year studying computer science, and in my free time I develop things for fun. I came up with a very cool idea and am now halfway through finishing it. It's an app that allows you to play music and mix tracks around... so let's say it's a DJ app.


The problem:


In my app I need to detect an MP3's tempo, or BPM (beats per minute), like any DJ app should do. But I'm building for Windows Phone, and all the third-party libraries are not compatible with WP: FMOD, BASS, NAudio, FFTW...


Researches:


I think this is possible using the native API for Windows Phone, but my humble experience with C++ doesn't allow me to write something as heavy as BPM detection. I also know there are some online web APIs that can calculate BPM easily, but I need to do it locally (for speed's sake). I'm kind of surprised that there is no library for Windows Phone that does this... I tried to use MediaPlayer, which is in XNA, but it doesn't work with WP.


My expectations: I'm asking this question in case anyone has encountered this problem and found a library or something. I know some will argue in the comments, "What have you tried?", but I don't even know what to try. Also, the other topics you might call this a duplicate of are outdated (2010-2011).


Thanks in advance.


Signed 16-bit ALSA PCM data to U8 Conversion on Linux

I'm attempting to convert 16-bit ALSA PCM samples to unsigned 8-bit PCM samples for wireless transmission on Linux. The receiving machine plays the transmitted data successfully and the recorded voice is there and recognizable, but the quality is terrible and noisy. I've tried alsamixer on both ends to tune the stream, but it doesn't get much better. I believe there is something wrong with my conversion of the samples to 8-bit PCM, but it's just a simple shift, so I'm not sure what the error could be. Does anyone have any suggestions, or see anything wrong with my conversion code? Thanks.


Conversion Code:



// This byte array needs to be the packet size we wish to send
QByteArray prepareToSend;
prepareToSend.clear();

// Keep reading from ALSA until we fill one full frame
int frames = 1;
while ( prepareToSend.size() < TARGET_TX_BUFFER_SIZE ) {

    // Create a ByteArray
    QByteArray readBytes;
    readBytes.resize(size);

    // Read with ALSA
    short sample[1]; // Data is signed 16-bit
    int rc = snd_pcm_readi(m_PlaybackHandle, sample, frames);
    if (rc == -EPIPE) {
        /* EPIPE means overrun */
        fprintf(stderr, "Overrun occurred\n");
        snd_pcm_prepare(m_PlaybackHandle);
    } else if (rc < 0) {
        fprintf(stderr, "Error from read: %s\n", snd_strerror(rc));
    } else if (rc != (int)frames) {
        fprintf(stderr, "Short read, read %d frames\n", rc);
    } else {
        // Copy bytes to the prepare-to-send buffer
        //qDebug() << "Bytes for sample buffer: " << sizeof(sample);
        prepareToSend.append((qint16)(sample[0]) >> 8); // signed 16-bit becomes u8
    }
}


ALSA Configuration:



// Setup parameters
int size;
snd_pcm_t *m_PlaybackHandle;
snd_pcm_hw_params_t *m_HwParams;
char *buffer;

qDebug() << "Desire to Transmit Data - Setting up ALSA Now....";

// Error handling
int err;

// Device to Write to
const char *snd_device_in = "hw:1,0";

if ((err = snd_pcm_open (&m_PlaybackHandle, snd_device_in, SND_PCM_STREAM_CAPTURE, 0)) < 0) {
fprintf (stderr, "Cannot open audio device %s (%s)\n",
snd_device_in,
snd_strerror (err));
exit (1);
}

/* Allocate a hardware parameters object. */
snd_pcm_hw_params_alloca(&m_HwParams);

if ((err = snd_pcm_hw_params_malloc (&m_HwParams)) < 0) {
fprintf (stderr, "Cannot allocate hardware parameter structure (%s)\n",
snd_strerror (err));
exit (1);
}

if ((err = snd_pcm_hw_params_any (m_PlaybackHandle, m_HwParams)) < 0) {
fprintf (stderr, "Cannot initialize hardware parameter structure (%s)\n",
snd_strerror (err));
exit (1);
}

if ((err = snd_pcm_hw_params_set_access (m_PlaybackHandle, m_HwParams, SND_PCM_ACCESS_RW_INTERLEAVED)) < 0) {
fprintf (stderr, "Cannot set access type (%s)\n",
snd_strerror (err));
exit (1);
}

if ((err = snd_pcm_hw_params_set_format(m_PlaybackHandle, m_HwParams, SND_PCM_FORMAT_S16)) < 0) { // Has to be 16 bit
fprintf (stderr, "Cannot set sample format (%s)\n",
snd_strerror (err));
exit (1);

}

uint sample_rate = 8000;
if ((err = snd_pcm_hw_params_set_rate (m_PlaybackHandle, m_HwParams, sample_rate, 0)) < 0) { // 8 KHz
fprintf (stderr, "Cannot set sample rate (%s)\n",
snd_strerror (err));
exit (1);
}

if ((err = snd_pcm_hw_params_set_channels (m_PlaybackHandle, m_HwParams, 1)) < 0) { // 1 Channel Mono
fprintf (stderr, "Cannot set channel count (%s)\n",
snd_strerror (err));
exit (1);
}

/*
Frames: samples x channels (i.e: stereo frames are composed of two samples, mono frames are composed of 1 sample,...)
Period: Number of samples tranferred after which the device acknowledges the transfer to the apllication (usually via an interrupt).
*/

/* Submit params to device */
if ((err = snd_pcm_hw_params(m_PlaybackHandle, m_HwParams)) < 0) {
fprintf (stderr, "Cannot set parameters (%s)\n",
snd_strerror (err));
exit (1);
}

/* Free the Struct */
snd_pcm_hw_params_free(m_HwParams);

// Flush handle prepare for record
snd_pcm_drop(m_PlaybackHandle);

if ((err = snd_pcm_prepare (m_PlaybackHandle)) < 0) {
fprintf (stderr, "cannot prepare audio interface for use (%s)\n",
snd_strerror (err));
exit (1);
}

qDebug() << "Done Setting up ALSA....";

// Prepare the device
if ((err = snd_pcm_prepare (m_PlaybackHandle)) < 0) {
fprintf (stderr, "cannot prepare audio interface for use (%s)\n",
snd_strerror (err));
exit (1);
}

c++ - How to play music in thread without destructing a channel?

The general question is: how do I play music correctly in C++? I've read that music should be played in another thread, but channel playback already seems to happen outside my thread. AudioPlayer's destructor is called right after starting playback, so the channel is freed immediately and I can't hear any sound. How do I deal with this problem? I am using the BASS audio library.



void AudioPlayer::play()
{
    endOfStream = false;
    BASS_ChannelStop(channel);
    if (!(channel = BASS_StreamCreateFile(false, filename.c_str(), 0, 0, BASS_SAMPLE_LOOP)) &&
        !(channel = BASS_MusicLoad(false, filename.c_str(), 0, 0, BASS_MUSIC_RAMP | BASS_MUSIC_STOPBACK, 1)))
        std::cout << "Can't play file";
    BASS_ChannelSetSync(channel, BASS_SYNC_END, 0, &streamEndCallback, 0);
    BASS_ChannelPlay(channel, true); // what's going on here? is it another thread already?
    is_playing = true;
}

// main
AudioPlayer player(file_path);
std::thread th(&AudioPlayer::play, player); // th's destructor is called immediately, channel and resources are freed
th.join();

while (true) ; // do some processing in main thread

Audio is not playing on hover over HTML button

I'm trying to get some audio to play on hover. It currently works locally, but when I upload it to the server it doesn't play.


Anyone able to point me in the right direction?


You can see it live here http://ift.tt/1DvPEvK (hover over the musical note icon in the top left corner).


The current JavaScript is:



var audio = $("#bleepbleepsound")[0];
$(".notes").mouseenter(function() {
    audio.play();
});


The HTML is:



<audio id="bleepbleepsound">
  <source src="../audio/TonyTempaBleep1.mp3">
  <source src="../audio/TonyTempaBleep1.ogg">
</audio>


Any help would be greatly appreciated!


Where can I find a database of music for genre analysis?

I've been working on a project to classify music automatically, using the GTZAN collection by George Tzanetakis. It's kind of small, though: only 1000 tracks across 10 genres. Are there any bigger databases available for this kind of thing? For reference, the GTZAN collection is a set of uncompressed audio files, each about 30 seconds long; something like that would be preferred. I've looked at the Million Song Dataset, but that only gives a selected set of data for each song and doesn't provide the audio itself for analysis. I've also checked out the EchoNest API, but couldn't use it for the same reason.


AudioContext HTML5 Player

So I've been playing with the Web Audio API and have the following issue.


I am making a project in which I call an external library's API with Ajax and get audio back (arraybuffer).



  • I send them the text and get audio back.

  • This is not necessarily a GET request (can be POST, etc.)

  • If text is too large, I split it into smaller chunks and send multiple requests


So far so good; now comes the issue of how to play the multiple pieces of audio I got back.


Since users do not care that I have split the text and actually have multiple audio tracks, I somehow need to make it look like a single track or a playlist.


So I have tried to:



  • merge the ArrayBuffers (apparently it does not work like that, and most likely I need ffmpeg or similar tools to do the merging, which is hard to do client-side; there are ffmpeg ports for the browser, but I don't know how reasonable it is to burden a client with that. If that's not the case, maybe you can suggest something here)

  • load them as a playlist, but so far I cannot find a library that accepts multiple AudioBuffers/AudioContexts and/or gives back a playlist.


The easiest solution I see so far is to create my own small library that accepts AudioBuffers/ArrayBuffers and either goes with the playlist approach or plays the 'chunked' audio one piece at a time, with a scrubber that jumps between audio contexts.


Is there a library/easier approach?


Will be thankful for any suggestions :]


AVAudioRecorder not saving recording

I am making a game for iOS. One of the things I need to do is allow the user to make a quick little audio recording. This all works, but the recording is only saved temporarily: when the user closes the app and reopens it, the recording should still be playable, but it isn't; it gets deleted when the app is closed. I don't understand what I am doing wrong. Below is my code:


I set up the AVAudioRecorder in the viewDidLoad method like so:



// Setup audio recorder to save file.
NSArray *pathComponents = [NSArray arrayWithObjects:[NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject], @"MyAudioMemo.m4a", nil];
NSURL *outputFileURL = [NSURL fileURLWithPathComponents:pathComponents];

// Setup audio session.
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:nil];

NSMutableDictionary *recordSetting = [[NSMutableDictionary alloc] init];
[recordSetting setValue:[NSNumber numberWithInt:kAudioFormatMPEG4AAC] forKey:AVFormatIDKey];
[recordSetting setValue:[NSNumber numberWithFloat:44100.0] forKey:AVSampleRateKey];
[recordSetting setValue:[NSNumber numberWithInt:2] forKey:AVNumberOfChannelsKey];

audio_recorder = [[AVAudioRecorder alloc] initWithURL:outputFileURL settings:recordSetting error:nil];
audio_recorder.delegate = self;
audio_recorder.meteringEnabled = YES;
[audio_recorder prepareToRecord];


I have got the AVAudio delegate methods too:



-(void)audio_playerDidFinishPlaying:(AVAudioPlayer *)player successfully:(BOOL)flag {
NSLog(@"Did finish playing: %d", flag);
}

-(void)audio_playerDecodeErrorDidOccur:(AVAudioPlayer *)player error:(NSError *)error {
NSLog(@"Decode Error occurred");
}

-(void)audio_recorderDidFinishRecording:(AVAudioPlayer *)recorder successfully:(BOOL)flag {
NSLog(@"Did finish recording: %d", flag);
}

-(void)audio_recorderEncodeErrorDidOccur:(AVAudioPlayer *)recorder error:(NSError *)error {
NSLog(@"Encode Error occurred");
}


When I want to play, record or stop the audio, I use the following IBActions, which are linked to UIButtons:



-(IBAction)play_audio {

    NSLog(@"Play");

    if (!audio_recorder.recording) {
        audio_player = [[AVAudioPlayer alloc] initWithContentsOfURL:audio_recorder.url error:nil];
        [audio_player setDelegate:self];
        [audio_player play];
    }
}

-(IBAction)record_voice {

    NSLog(@"Record");

    if (!audio_recorder.recording) {
        AVAudioSession *session = [AVAudioSession sharedInstance];
        [session setActive:YES error:nil];

        // Start recording.
        [audio_recorder record];
    }
    else {
        // Pause recording.
        [audio_recorder pause];
    }
}

-(IBAction)stop_audio {

    NSLog(@"Stop");

    [audio_recorder stop];

    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    [audioSession setActive:NO error:nil];
}


If you try my code you will see that it works, but it only seems to save the audio file temporarily.


What am I doing wrong? I thought I had used all the correct AVAudioRecorder methods?


Thanks for your time, Dan.


Music On and Off

At the moment I am making an application with music. I want to make a button that lets you turn the music off, but I don't know the right code. I hope you can help me. Thanks for the help, and sorry for my English.


Saturday, 14 February 2015

Audio Player for ionic

I am learning Ionic and want to embed an audio player. I have found this Plnkr example of a video player:



angular.module('app',[])

.directive('youtubeIframe', ['$timeout', function ($timeout, $sce ) {
    return {
        restrict: 'A',
        link: function (scope, element, attrs) {
            $timeout( function () {
                var temp1 = '<iframe width="400px" height="200px" src="http://ift.tt/Ajy3Fh';
                var temp2 = '?&autoplay=0&autohide=1&fs=1&cc_load_policy=1&loop=0&rel=0&modestbranding=1&&hd=1&playsinline=0&showinfo=0&theme=light" frameborder="1" allowfullscreen></iframe>';
                var finalvar = temp1 + attrs.youtubeIframe + temp2;
                console.log('Finalvar is: ' + finalvar); //just to check if url is ok
                element.prepend( finalvar );
            }, 150);
            // The timeout is to give enough time for the Dom to be built and checked for its structure, so that we can manipulate it.
        }
    };
}])

.controller('VideoCtrl', function($scope) {

    $scope.angularvideos = [
        {
            name: 'Angular on the go: Using Angular to power Mobile Apps',
            youtubeId: 'xOAG7Ab_Oz0',
            publishdate: 'Dec 2013'
        },
        {
            name: 'Crafting the Perfect AngularJS Model and Making it Real Time',
            youtubeId: 'lHbWRFpbma4',
            publishdate: 'April 2014'
        },
        {
            name: 'AngularJS & D3: Directives for Visualizations',
            youtubeId: 'aqHBLS_6gF8',
            publishdate: 'Jan 2014'
        },
        {
            name: 'The Thick Front End',
            youtubeId: 'hv2NEW0uC1o',
            publishdate: 'Nov 2013'
        }
    ];
})


Can someone please point me to a similar example of an audio player within an iframe for a mobile app (Android for the time being, but later iOS as well)?


Thanks & Regards


mciSendString() setaudio volume error 261

I'm using MCI to do some sound-related stuff, and everything works, except I cannot alter the volume. I have the following code:



mciSendStringA("open res/theme.wav type waveaudio alias maintheme", nullptr, 0, nullptr);
MCIERROR error = mciSendStringA("setaudio maintheme volume to 50", nullptr, 0, nullptr);


The error is 261. The program otherwise works fine, but the volume does not change. Any suggestions on what's wrong? (Two pages of Google searching turned up nothing.)


generate sound from 100Hz to 4000Hz using c++

What I have is a basic sine wave generator; I need to modify it so that it sweeps from 100 Hz to 4000 Hz over 5 seconds.



if(!GenerateBegin())
return;

short audio[1];

for(double time=0.; time < 5; time += 1. / m_sampleRate)
{
audio[0] = short(m_amplitude * sin(time * 2 * M_PI * m_freq1));


GenerateWriteFrame(audio);

// The progress control
if(!GenerateProgress(time / 5))
break;
}


// Call to close the generator output
GenerateEnd();


Any help will be appreciated :D
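

The standard way to sweep the pitch is to ramp the frequency and accumulate phase sample by sample, rather than evaluating sin(time * 2 * PI * freq) with a changing freq, which warps the sweep. Here is a minimal sketch of that idea (shown in Swift; the same arithmetic drops straight into the C++ loop above, writing each value out with GenerateWriteFrame as before).


import Foundation

// Sketch: linear chirp from 100 Hz to 4000 Hz over 5 seconds via phase accumulation.
// sampleRate and amplitude are assumed values for illustration.
let sampleRate = 44_100.0
let duration = 5.0
let startFreq = 100.0
let endFreq = 4000.0
let amplitude = 30_000.0

var phase = 0.0
var samples = [Int16]()
for n in 0..<Int(duration * sampleRate) {
    let t = Double(n) / sampleRate
    let freq = startFreq + (endFreq - startFreq) * (t / duration)  // linear ramp
    phase += 2.0 * Double.pi * freq / sampleRate                   // advance phase by this sample's frequency
    samples.append(Int16(amplitude * sin(phase)))
}
// 'samples' now holds the swept sine, one Int16 frame per sample.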


Strange sine wave on output of an alsa linux c program

I'm studying ALSA programming on Ubuntu.


I'm trying to output a sine wave to the line-out of my laptop's sound card and then redirect it to the line-in (microphone) via an audio cable.



soundcard LINE_OUT (speakers) ---- audio cable ----> soundcard LINE_IN (microphone)


I'm using this code



#include <stdio.h>
#include <stdlib.h>
#include <alsa/asoundlib.h>
#include <math.h>

#define SIZEBUF 2048

int main(void)
{
    int i;
    int err;
    double x;
    double cost;
    double frequency = 500;
    unsigned int rate = 44100;
    short buf[SIZEBUF];
    snd_pcm_t *phandle;
    snd_pcm_hw_params_t *hw_params;

    snd_pcm_open(&phandle, "default", SND_PCM_STREAM_PLAYBACK, 0);
    snd_pcm_hw_params_malloc(&hw_params);
    snd_pcm_hw_params_any(phandle, hw_params);
    if ((err = snd_pcm_hw_params_set_access(phandle, hw_params, SND_PCM_ACCESS_RW_INTERLEAVED)) < 0)
    {
        printf("Cannot set access type.\n");
        exit(1);
    }
    snd_pcm_hw_params_set_format(phandle, hw_params, SND_PCM_FORMAT_S16_LE);
    snd_pcm_hw_params_set_rate_near(phandle, hw_params, &rate, 0);
    snd_pcm_hw_params_set_channels(phandle, hw_params, 1);
    snd_pcm_hw_params(phandle, hw_params);
    snd_pcm_hw_params_free(hw_params);
    snd_pcm_prepare(phandle);

    cost = 2.0 * M_PI * frequency / (double)rate;
    printf("cost=%f.\n", cost);
    for (i = 1; i < SIZEBUF; i++)
    {
        x = sin(i * cost);
        buf[i] = (short)(32767 * x + 32768);
    }
    for (i = 0; i < 50; i++)
    {
        snd_pcm_writei(phandle, buf, SIZEBUF);
    }
    snd_pcm_close(phandle);
    exit(0);
}


I'm using Audacity to inspect the captured wave, but it looks strange, as in this image:


[screenshot: the captured waveform in Audacity]


It does not look like a sine wave. Why?


Algorithm suggestion: comparing sound clips

(Not sure if this is the right place for this question)


We are analyzing thousands of sound clips of people talking in an attempt to find patterns in the pitch, syllable rate, etc. in order to come up with a signature database to match new sound bites to emotions.


While I am familiar with some AI algorithms (Bayes, for instance), I'm curious whether anyone has ideas on the types of algorithms we could employ.


Overall concept (assume short 2-5 second .wav clips):



soundClip1 -> 'anger'
soundClip2 -> 'happy'
soundClip3 -> 'sad'
...
emotion = predict(newSoundClip)


Given a new sound clip, we would like to do something similar to Shazam, except returning a probability that the clip represents a particular emotion.


Any suggestions would be appreciated!


Playing an audio file repeatedly with AVAudioEngine

I'm working on an iOS app with Swift and Xcode 6. What I would like to do is play an audio file using an AVAudioEngine, and up to that point everything is OK. But how can I play it without it ever stopping, i.e. so that when it finishes playing it starts again?


This is my code:



/*==================== CONFIGURES THE AVAUDIOENGINE ===========*/
audioEngine.reset() //Resets any previous configuration on AudioEngine

let audioPlayerNode = AVAudioPlayerNode() //The node that will play the actual sound
audioEngine.attachNode(audioPlayerNode) //Attaches the node to the audio engine

audioEngine.connect(audioPlayerNode, to: audioEngine.outputNode, format: nil) //Connects the applause playback node to the sound output
audioPlayerNode.scheduleFile(applause.applauseFile, atTime: nil, completionHandler: nil)

audioEngine.startAndReturnError(nil)
audioPlayerNode.play() //Plays the sound


Before you tell me to use AVAudioPlayer for this: I can't, because later I will have to apply some effects and play three audio files at the same time, also repeatedly.
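

For reference, here is a minimal sketch of one way to loop with AVAudioPlayerNode (written in current Swift syntax, so the method names differ slightly from the Swift 1 code above): read the whole file into an AVAudioPCMBuffer and schedule that buffer with the .loops option, which repeats it until the node is stopped. Another option is to reschedule the file from the completionHandler, but the looping buffer avoids any gap between repeats.


import AVFoundation

// Sketch: loop an audio file indefinitely with AVAudioPlayerNode and a PCM buffer.
// The caller keeps the returned engine and player alive while the sound plays.
func loopFile(at url: URL) throws -> (AVAudioEngine, AVAudioPlayerNode) {
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    engine.attach(player)

    let file = try AVAudioFile(forReading: url)
    engine.connect(player, to: engine.mainMixerNode, format: file.processingFormat)

    // Read the entire file into a buffer sized to its length.
    let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                  frameCapacity: AVAudioFrameCount(file.length))!
    try file.read(into: buffer)

    try engine.start()
    player.scheduleBuffer(buffer, at: nil, options: .loops, completionHandler: nil)
    player.play()
    return (engine, player)
}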


C# equivalent for Java's AudioFormat.isBigEndian and AudioFormat.Encoding.PCM_SIGNED

I am having a hard time trying to port some Java code to C# for my simple project. The Java code makes use of format.isBigEndian and checks whether the audio file data is signed or not. My C# project uses NAudio for handling audio files.


Here is the Java code



public void LoadAudioStream(AudioInputStream inputStream) {
AudioFormat format = inputStream.getFormat();
sampleRate = (int) format.getSampleRate();
bigEndian = format.isBigEndian();
AudioFormat.Encoding encoding = format.getEncoding();
if (encoding.equals(AudioFormat.Encoding.PCM_SIGNED))
dataIsSigned = true;
else if (encoding.equals(AudioFormat.Encoding.PCM_UNSIGNED))
dataIsSigned = false;
}


and the C# code that I am working with..



public void LoadAudioStream(WaveFileReader reader)
{
var format = reader.WaveFormat;
sampleRate = format.SampleRate;
//bigEndian = ??
var encoding = format.Encoding;
if (encoding.Equals( /*????*/))
{
dataIsSigned = true;
}
else if (encoding.Equals( /*?????*/))
{
dataIsSigned = false;
}
}


How can I check whether the audio file data is big-endian or not? And lastly, is there a way to check whether the AudioFormat is PCM signed or unsigned?


C# windows phone - combine audio and video file

I'm building an app where I need to combine audio and video files. I'm able to record the video, but I have to replace the original audio with an audio file I have on the phone. So the question is: is there any third-party library or API to combine audio and video?


Thanks in advance


Can't make HTML5 Audio Tag to work on mobile browsers

I have a web app that uses the HTML5 audio tag, and for some reason, while it works fine on Windows and Mac PCs, it doesn't work on iOS and Android. Here's a relevant snippet of my code:


Javascript:



var audioElement = document.querySelector('#audioplayer');
var source = document.querySelector('#mp3');
source.src = tokObject._url;
audioElement.load();
audioElement.play();


HTML:



<center><audio id="audioplayer" style="width:480px;">
<source id="mp3" src="random-placeholder" type="audio/mp3" />
</audio>
</center>


Cheers and thanks!


How can I use Apple's Core Audio C API to create a simple, real-time I/O stream on OS X?

After spending quite a while traversing the extensive Core Audio docs maze, I'm still unsure of what part of the C API I should be using to create a basic audio sample I/O stream in OS X.


When I say "I/O stream" I mean a low-latency stream that is spawned for a specific audio device (with parameters such as sample rate, number of channels, bit depth, etc.) and receives/requests buffers of interleaved audio samples for the device to play back.


I would really appreciate it if someone could point me towards the header and associated functions that I need to achieve this (perhaps even an example) :) Thanks!


PS: Normally I would use PortAudio to achieve this; however, in this case I'm interested in accessing the Core Audio framework directly in order to assist a friend in creating a purely Rust portable audio platform. Also, I've posted this question to the Apple developer forums but have not yet received a response, so I thought I'd try here. If there is a more suitable exchange/forum to ask at, please let me know.


vendredi 13 février 2015

PortAudio on Raspberry Pi Configuring Recording of 8KHz 8-Bit Mono Sample from USB Microphone

I think I'm configuring PortAudio on the Raspberry Pi correctly to gather data from my USB microphone and package it for XBee wireless transmission. The problem is the following line:



inputParameters.sampleFormat = paUInt8; // paInt16; // paUInt8 SOME REASON CAUSES ALWAYS IDENTICAL BUFFERS;


I want to record in mono at 8 kHz with 8-bit samples for telephony quality, but if I set the stream's sampleFormat to paUInt8, I always get identical buffers in the callback. I check this by calculating the MD5 sum of the QByteArray. If I set the format to anything other than paUInt8, I get different buffers. I have also tried various values for the 'framesPerBuffer' size in the callback, but nothing changes the values in the buffer. This exact behavior must be achievable, because I have captured decent sound at the desired rates/sample types with ALSA and Qt5's QAudioInput (with the same USB microphone). It just seems to be PortAudio that I cannot configure properly. What am I doing wrong? Thanks!


Here is the code:



#include "audiorecorder.h"

// Startup audio capture
AudioRecorder::AudioRecorder()
{

// Debug
qDebug() << "Setting up Audio Recording Now";

// Do the Audio setup
doSetup();

}

// Do the setup
void AudioRecorder::doSetup() {

// Variables for configuration
int err;
int indevice;
const PaDeviceInfo *info;
PaStream *stream;

// Set up PortAudio
err = Pa_Initialize();
if ( err != paNoError ) {
handlePaError(101,err,"Error initializing PortAudio: %s\n");
}

// Learn about default input device and look for usb device
int numDevices = Pa_GetDeviceCount();
int index;
bool foundUSB = false;
int foundUsbIndex = -1;
for ( index = 0; index < numDevices; index++ )
{
deviceInfo = Pa_GetDeviceInfo( index );
printf( "[%d] %s #in=%d #out=%d\n",
index, deviceInfo->name,
deviceInfo->maxInputChannels,
deviceInfo->maxOutputChannels );
QString portName = QString::fromUtf8(deviceInfo->name);
if ( portName.contains("USB") ) {
qDebug() << "Found USB Microphone: " << portName << " At index: " << index;
foundUsbIndex = index;
foundUSB = true;
}
}

// Did we find the USB Microphone?
if ( !foundUSB ) {
qDebug() << "Could not find USB Microphone... Using default device...";
indevice = Pa_GetDefaultInputDevice();
}
else {
qDebug() << "Found USB Microphone!" << foundUsbIndex << Pa_GetDefaultInputDevice();
indevice = foundUsbIndex;
}
if (indevice == paNoDevice) {
handlePaError(200,0,"No default input device\n");
}
info = Pa_GetDeviceInfo(indevice);
if ( !info ) {
handlePaError(201,0,"Can't read audio input device info\n");
}

qDebug() << "Selected device name: " << info->name;
qDebug() << "Selected device input channels: " << info->maxInputChannels;
qDebug() << "Selected device default sample rate: " << info->defaultSampleRate;

// Setup the input params
PaStreamParameters inputParameters;
bzero( &inputParameters, sizeof( inputParameters ) ); //not necessary if you are filling in all the fields
inputParameters.channelCount = 1;
inputParameters.device = Pa_GetDefaultInputDevice();
inputParameters.hostApiSpecificStreamInfo = NULL;
inputParameters.sampleFormat = paUInt8; // paInt16; // paUInt8 SOME REASON CAUSES ALWAYS IDENTICAL BUFFERS;
inputParameters.suggestedLatency = Pa_GetDeviceInfo(indevice)->defaultHighInputLatency ;
inputParameters.hostApiSpecificStreamInfo = NULL; //See you specific host's API docs for info on using this field

// Open Specific input
err = Pa_OpenStream(&stream,&inputParameters,NULL,8000.0,64.0,paNoFlag,paCallback,NULL);
if ( err != paNoError ) {
handlePaError(102,err,"Error opening audio stream: %s\n");
}

// Create a XBee object
m_RouterApi = new XBeeApi();

// Start stream
err = Pa_StartStream(stream);
if (err!=paNoError) {
handlePaError(104,err,"Error starting audio stream: %s\n");
}

Pa_Sleep(10000); // This is how long to collect data in ms

// Stop
err = Pa_StopStream(stream);
if ( err != paNoError ) {
handlePaError(104,err,"Error stopping audio stream: %s\n");
}

// Close down
err = Pa_CloseStream(stream);
if (err!=paNoError) {
handlePaError(105,err,"Error closing audio stream: %s\n");
}

// Done with PortAudio
err = Pa_Terminate();
if (err!=paNoError) {
handlePaError(103,err,"Error terminating PortAudio: %s\n");
}

}

// Audio data comes in through this callback
int AudioRecorder::paCallback(const void *in, void *out, unsigned long framesPerBuffer,
const PaStreamCallbackTimeInfo *timeinfo,
PaStreamCallbackFlags statusFlags,
void *userdata)
{

qDebug() << "********* Got this many 'frames': " << framesPerBuffer;

const char* buffer_ptr = (const char*)in;

//Copy data to user buffer
QByteArray data;
for(int i = 0; i < framesPerBuffer; ++i) {
data.append(buffer_ptr + i);
}

// Always the same if the Format is PaUInt8
qDebug() << QString(QCryptographicHash::hash(data,QCryptographicHash::Md5).toHex());

// This is a singleton which is the current one
// AudioRecorder *currentOne = &Singleton<AudioRecorder>::Instance();
// currentOne->transmitData(data);

// Done
return 0;

}

// Data ready for XBee transport
void AudioRecorder::transmitData(QByteArray data) {

// Start the timer on the first go-around
if ( m_CycleCounter == 0 ) {
m_RateTimer.start();
}

// Create a ByteArray
QByteArray toSendBytes(data.data(), data.size());

// Create packet
QString toSend = "";

for ( int s = 0 ; s < toSendBytes.length() ; s++ ) {

// Create Hex String
QString hexadecimal;
hexadecimal.setNum((quint8)toSendBytes.at(s),16);
QString toAppend = QString("%1").arg(hexadecimal);
QString paddedString = QString(" %1").arg(toAppend.rightJustified(2, '0').toUpper());
toSend.append(paddedString);

}

toSend = m_RouterApi->createPacket(toSend.toUpper());
//toSend = "7E 00 11 20 00 0A 0A 0A 01 26 16 00 00 01 00 48 45 4C 4C 4F 0F"; // TEST MESSAGE
//qDebug() << toSend;
int amount = m_RouterApi->writeToXBee(toSend);

m_CycleCounter++;
m_BytesSent += amount;
if ( m_CycleCounter % 100 == 0 ) {
QString update = QString("Transmitting Now!\nXBee Bytes / Sec\n = %1 / %2\n = %3 B/s").arg((int)m_BytesSent).arg((int)((int)m_RateTimer.elapsed() / (int)1000.0f)).arg((int)((int)m_BytesSent/((int)m_RateTimer.elapsed() / (int)1000.0f)));
qDebug() << "Update: " << update;
emit updateScreen(update);
}

// Clear it out
toSend = "";

}

// Generic error routine
void AudioRecorder::handlePaError(int n,int err,char *s)
{

fprintf(stderr,s,Pa_GetErrorText(err));
Pa_Terminate(); // already had an error so quit checking
exit(n);

}

HTML5 audio player. Cannot change the src element

I am trying to dynamically change the src of my HTML audio player, but for some reason I can't change it and it stays stuck on its placeholder value. This is my HTML element:



<audio controls preload="none" style="width:480px;">
<source id="mp3" src="placeholder" type="audio/mp3" />
</audio>


And this is my Javascript:



var tempMp3 = "http://ift.tt/1EnpKZi";
$("#mp3").attr("src", tempMp3);


Does anybody know what silly mistake I am making here?


Cheers


teamviewer running on hyper-v machine is not detecting audio

I have installed a Hyper-V machine and I can RDP to it; all the drivers seem to work fine. But when I install TeamViewer on this Hyper-V machine, the audio devices become undetectable (no audio). Has anyone faced this issue? Please share your suggestions.


Thank you.


Play multiple audio files in sequence as one

I'm trying to implement an audio player that plays multiple files as if they were a single one. However, the initial buffering wait should only cover the first part; the other files should be loaded in sequence while playback continues.


For example:



  • File1:

    • Part1 - 0:35s

    • Part2 - 0:47s

    • Part3 - 0:07s




File1 should be played as if it were 1:29 long, but we'd only wait (at most) until Part1 is loaded to start playing.


I've had a look at AVAsset, but it doesn't seem to solve the problem. I also thought of implementing it using AVAudioPlayer and doing all the logic myself.


Has anyone had this issue before?


Thanks!
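

For what it's worth, a minimal Swift sketch of one approach (assuming the URLs of the parts are already known): AVQueuePlayer begins playback as soon as the first item is ready and keeps loading the following items while the earlier ones play, which should match the "only wait for Part1" requirement.


import AVFoundation

// Sketch: play Part1, Part2, Part3 back to back as one continuous track.
// partURLs is assumed to hold the URLs of the parts in order.
func playPartsInSequence(_ partURLs: [URL]) -> AVQueuePlayer {
    let items = partURLs.map { AVPlayerItem(url: $0) }
    let queuePlayer = AVQueuePlayer(items: items)
    queuePlayer.play()   // starts once the first item is ready
    return queuePlayer
}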


Where to start with android audio social network?

I want to enable users to upload audio clips to the cloud from their Android devices. I'm new to this kind of thing and need to know what cloud technology to integrate into my Android application to allow users to upload 20-second clips and also listen to clips uploaded by others. I don't know a lot about streaming, so I need a general direction to look in.


Java Sound dramatically slower after JVM 8 update

My application loads a bunch of audio clips at startup. It uses java.applet.Applet.newAudioClip(URL audioFileURL) to load the files, which are in the same folder. I could see that this function is basically a wrapper that returns JavaSoundAudioClip objects.


Until yesterday, I compiled the JAR with JDK 7 and launched it with JVM version 7 update 45. Then I updated the JVM to version 8 update 31.

Now, loading each clip takes ten times longer than before (it was 0.2 seconds each; now it is between 2 and 3 seconds).


Digging deeper while debugging, I found that the methods that slowed down the most are AudioSystem.getAudioInputStream, AudioSystem.isLineSupported, and AudioSystem.getLine.


Audio format shouldn't be involved: I tried both OGG and WAV with the same results.


The settings for both JVMs are the same


Data transmitting from jd2xx to hijack iOS library

I bought the Hijack development kit from Seeedstudio, including the Hijack main board and the programmer daughterboard. After connecting the Hijack main board to the programmer board, I expect that data can be transmitted from the PC client to libHijack on the iPhone/iPad.


Let’s say:

J: jd2xx client on windows 7.

P: programmer board.

H: Hijack main board

L: an iOS library developed for receiving data from the iPhone's audio interface.


The expected data flow: J -> P -> H -> L


While sending some data from the jd2xx client, the red LED D5 lights up (looks good), but there is no response in L (only some noise invoking the callback method).


Did I miss something?

I appreciate any idea or suggestion.


Here is jd2xx code:



JD2XX jd = new JD2XX();
try {
jd.open(0);

jd.setBaudRate(9200);
jd.setDataCharacteristics(8, JD2XX.STOP_BITS_1, JD2XX.PARITY_NONE);
jd.setFlowControl(JD2XX.FLOW_NONE, 0, 0);
jd.setTimeouts(1000, 1000);

String msg = "Hey, FDTI Chip!";
jd.write(msg.getBytes());

jd.close();
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}


Here is iOS code:



- (void)viewDidLoad {
[super viewDidLoad];
self.hiJackMgr = [HiJackMgr new];
[self.hiJackMgr setDelegate:self];
}

- (int)receive:(UInt8)data{
NSString *msg = [NSString stringWithFormat:@"%d", data];
NSLog(@"%@", msg);
dispatch_async(dispatch_get_main_queue(), ^{
[self.msgLabel setText:msg];
});
return data;
}

iOS play .m4a sound file while music is playing in the background [duplicate]






When I play an .m4a sound file while music from the iTunes library is playing, the music stops and then the sound plays. How can I get the sound to play over the music?



NSString *path = [NSString stringWithFormat:@"%@/sound.m4a", [[NSBundle mainBundle] resourcePath]];
NSURL *soundUrl = [NSURL fileURLWithPath:path];

NSError *error;

_avp = [[AVAudioPlayer alloc] initWithContentsOfURL:soundUrl error:&error];

_avp.volume =1.0;

[_avp prepareToPlay];
[_avp play];
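

For reference, a minimal Swift sketch of a common fix (not the asker's code): give the audio session the .ambient category with the .mixWithOthers option before playing, so the app's sound mixes with the Music app's audio instead of interrupting it.


import AVFoundation

// Sketch: let app sounds mix with audio from other apps (e.g. the Music app).
func configureMixedPlayback() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.ambient, mode: .default, options: [.mixWithOthers])
    try session.setActive(true)
}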

Adding an HTML5 audio element instead of the Windows Media Player object

I have Orekaweb, an audio recorder. It runs on Apache Tomcat. It records all calls on a VoIP network and provides easy access to them in a web browser.


The recorded calls are in WAV format, and playback works perfectly in IE. The code is:



<td colspan="2">

<script language="JavaScript">
function play (audioFilename)
{
document.all.player.autoStart = true;
document.all.player.fileName = audioFilename;
}
</script>

<OBJECT ID="player"
CLASSID="CLSID:22d6f312-b0f6-11d0-94ab-0080c74c7e95"
CODEBASE="http://ift.tt/1KSFNTb
en/nsmp2inf.cab#Version=5,1,52,701"
STANDBY="Loading Microsoft Windows Media Player components..."
TYPE="application/x-oleobject"
WIDTH=280 HEIGHT=50 >
<PARAM NAME="fileName" VALUE="">
<PARAM NAME="animationatStart" VALUE="false">
<PARAM NAME="transparentatStart" VALUE="true">
<PARAM NAME="showControls" VALUE="true">
<PARAM NAME="ShowStatusBar" VALUE="true">
<PARAM NAME="ShowDisplay" VALUE="false">
<PARAM NAME="ShowPositionControls" VALUE="true">
<PARAM NAME="ShowTracker" VALUE="true">
<PARAM NAME="CurrentPosition" VALUE="0">
<PARAM NAME="autoStart" VALUE="true">
</OBJECT>


The audio playback function does not work in Google Chrome or other browsers.


So I thought of adding the browser's built-in HTML5 audio player.


The code I added is below:



<td colspan="2">

<script language="JavaScript">
function play (audioFilename)
{
document.all.player.autoStart = true;
document.all.player.fileName = audioFilename;
}
</script>
<audio width="280" height="50" controls="controls" src="audiofileName" >
<object width="280" height="50" type="audio/x-wav" data="audiofileName">
<PARAM NAME="fileName" VALUE="">
<!-- Image as a last resort -->
</object>
</audio>


I see the player, but it's not playing.


Can anyone help me with this?


Stephen Paulraj


Decoding compressed audio byte array in Python

My goal is to process audio data captured from a web stream (internet radio) in Python 3.4. Capturing is done with the urllib package:



import urllib.request

radio_data = urllib.request.urlopen(url)
while SOME_STATEMENT:
    samples = radio_data.read(n_bytes)
    # decoding
    # processing


The 'samples' array contains compressed audio values (bytes), in my case encoded as OGG. Writing the stream to a file works fine, so the data are good. I need to decode the data every time a new frame is captured, to apply some processing in real time without writing to a file. I tried pyglet, but it accepts only a file name as an argument, and I don't want to change the internal code of the library. PyAudio does not support encoded files. There was a solution like PyMedia, but it wasn't ported to Python 3. There is also the GStreamer package, but I only found solutions that work on saved files, not on binary data. I have found some other packages like decoder-1.5XB-Win32, but they work only on files or can't be used with Python 3. Does anyone know a solution for decoding audio data (OGG, MP3, AAC) directly from an array?


Web Audio API 24db Filter

The Web Audio biquad filter has a 12 dB/octave slope. Is it possible to create a 24 dB/octave filter by connecting two of them together?


I have tried connecting two in series, and it certainly creates a much more extreme effect, with the resonance being particularly harsh. I divided the resonance value by 2 to compensate for this.


Is what I have created here a 24 dB/octave filter?


Convert raw bytes to audio in Matlab

Here's the problem: I send a small audio file (~10 KB) from Android to MATLAB through a TCP socket. The MATLAB script receives the file, but Android's OutputStream sends raw bytes. How can I reconstruct the original audio file in MATLAB?


Wrong URL in [audio] WordPress shortcode

I have a problem with WordPress Audio Shortcode. I used it like this:



<?php
echo do_shortcode('[audio mp3="http://ift.tt/1A3N4e2" ogg="http://ift.tt/1MiYzVy"]');
?>


but on the front end, in the HTML code, I got:



<!--[if lt IE 9]><script>document.createElement('audio');</script><![endif]-->
<audio class="wp-audio-shortcode" id="audio-362-1" preload="none" style="width: 100%; visibility: hidden;" controls="controls">
<source type="audio/mpeg" src="http://ift.tt/1A3N4e4" />
<source type="audio/ogg" src="http://ift.tt/1MiYykh" />
<a href="http://ift.tt/1A3N4e2">
http://ift.tt/1A3N4e2
</a>
</audio>


As you can see, the URL to the audio file in the <source/> tag is incorrect (in the <a/> tag the URL is OK). It has a strange "?_=1" appended to the end of the URL, and of course the player does not work; the browser does not recognize the multimedia file.


Can you help me? Do you know how I can fix it?


Regards