Saturday, January 31, 2015

Android: How to set the ringer mode to silent in Lollipop

Prior to Lollipop I was using the code below to mute the ringtone:



// Mute Ringtone

AudioManager amanager = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
amanager.setRingerMode(AudioManager.RINGER_MODE_SILENT);


It is not working anymore on devices running Lollipop. It sets priority mode, but does not silence the phone at all. Any help will be appreciated.
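
For reference, one alternative I am considering (untested, and it assumes that simply silencing the ringer volume is acceptable instead of switching ringer modes) is to zero the ring stream directly:

// Sketch of an alternative: mute the ring stream instead of changing the ringer mode.
// Assumes silencing only the ringer volume is acceptable for this use case.
AudioManager amanager = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
amanager.setStreamVolume(AudioManager.STREAM_RING, 0, 0);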


Reading raw audio values from ADC chip on raspberry pi

I wired up the MCP3008 ADC chip to an electret microphone and to my Pi. I'm reading the input using bit-banging in Python, and I'm getting an integer from 0-1024.


I followed this tutorial to do the bit-banging: http://ift.tt/1znrthW


My question is: how do I take this integer and convert it to something meaningful? Can I somehow write these bytes to a file in Python to get raw audio data that Audacity can play? Right now when I try to write the values they just show up as the integer instead of binary. I'm really new to Python, and I've found this link for converting the raw data, but I'm having trouble generating the raw data first: Python open raw audio data file


I'm not even sure what these values represent. Are they PCM data that I have to do time-related math on?


Why is genymotion emulator not playing sound?

I am working on creating a Flappy Bird clone with the libGDX framework - http://ift.tt/1mIByNU


I managed to launch the application on my Google Nexus 7 Genymotion emulator, but I am not able to hear any of the sound effects that I could hear when launching the desktop version of the game. I tried turning the sound up,




but even that didn't make a difference. Does anyone know what the issue is? I did some research and read here (Genymotion emulator not detecting sound) that a bug in the Windows version of Genymotion (my machine) prevents the emulator from detecting sound, but does that also imply that if it cannot detect sound, it cannot play sound? I wasn't so sure about that inference.


PHP: Is it possible to process sound via PHP?

I don't remember where I saw it recently, but I saw a website that can trim audio files and convert them to other audio formats. I was wondering if it's possible to do this via PHP or in any other way (a DLL?).


Thanks!


How to capture just the audio or video while streaming a video from a URL?

I would like to capture just the audio, and just the video (without audio), while streaming a video from the web in my iOS application.


I have googled it but could not find any similar resource.


Is it possible at all?


Any pointers to some related resources?


Thanks.


Java MIDI API: direct connection between external MIDI devices

I'm trying to connect two external MIDI devices to each other, in order to press a key on one piano and play a sound on the other one. So far I'm able to receive MIDI data and send it to my own receiver, which basically just prints the received information to the console, and I'm able to send a sequence to my external synth. But I can't seem to connect the two. Has someone done something similar who could point me to a piece of code that might help? Thanks, FlyingV
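
For reference, this is the kind of wiring I imagine: a minimal, untested sketch with javax.sound.midi, where the device names "Keyboard" and "Synth" are placeholders for whatever MidiSystem reports for my hardware.

import javax.sound.midi.*;

public class MidiBridge {
    public static void main(String[] args) throws MidiUnavailableException {
        MidiDevice keyboard = null; // the device that sends note events
        MidiDevice synth = null;    // the device that should play them

        for (MidiDevice.Info info : MidiSystem.getMidiDeviceInfo()) {
            MidiDevice device = MidiSystem.getMidiDevice(info);
            // Placeholder names: match against the names printed for your own hardware.
            if (info.getName().contains("Keyboard") && device.getMaxTransmitters() != 0) {
                keyboard = device;
            } else if (info.getName().contains("Synth") && device.getMaxReceivers() != 0) {
                synth = device;
            }
        }

        // (Real code should null-check both devices before opening them.)
        keyboard.open();
        synth.open();

        // Route everything the keyboard transmits straight into the synth's receiver.
        keyboard.getTransmitter().setReceiver(synth.getReceiver());
    }
}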


How to use a codec other than audio/pcm with QAudioRecorder

How do I record audio with Qt 4.8 using a codec other than wave (audio/pcm)? I am only able to record to a .wav file. Here is the core of the code:



QString fileName = "C:/Audio/testRecording"; //.wav removed
audioRecorder->setOutputLocation(QUrl::fromLocalFile(fileName));
audioRecorder->setAudioInput(boxValue(ui->audioDeviceBox).toString());

QAudioEncoderSettings settings;
//settings.setCodec ("audio/pcm");
settings.setCodec ("audio/vorbis");
settings.setSampleRate (44100);
settings.setBitRate (8000);
settings.setChannelCount (1);
settings.setQuality (QMultimedia::EncodingQuality(2));
settings.setEncodingMode (QMultimedia::EncodingMode(3));

//QString container = "audio/x-wav";
QString container = "audio/ogg";

audioRecorder->setEncodingSettings(settings, QVideoEncoderSettings(), container);
audioRecorder->record();


If I switch to "audio/vorbis" and "audio/ogg" it still records to a .wav-file.


How do I install a codec? Where are codecs installed on Windows? What does the application expect when it reads "audio/pcm"?


How to convert a .wav file to a .caf file for use in iOS

I don't know squat about audio or the Terminal. I've used this Terminal command to convert a wav file for use in iOS:



afconvert -v -f 'caff' -d LEI16 -s 1 /users/myUserName/Desktop/hibeep.wav /users/myUserName/Desktop/hibeep.caf


After adding the file to my project, nothing happens when I execute:



NSURL * softURL = [[NSBundle mainBundle] URLForResource: @"hibeep" withExtension: @"caf"];
CFURLRef softSoundURL = (__bridge CFURLRef) softURL;
AudioServicesCreateSystemSoundID(softSoundURL, &_beepSound);
AudioServicesPlaySystemSound (_beepSound);


Yet, when I click on hibeep.caf in the Project Navigator, the sound will play fine.


I have tried this in both the simulator and on an iPad.


Any suggestions?


Thanks


Incorrect playing of audio files in UITableView - Swift, iOS

I am making an app in Swift that plays audio. All audio files are shown in a UITableView; when I tap a row, the appropriate audio file should be played. The problem is that, for example, when I tap the first row it doesn't play the audio, but if I tap the second one, the audio of the first row is played. Image of the tableView: http://ift.tt/1uMx23K The code is below:



func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
let cell: UITableViewCell = UITableViewCell(style: UITableViewCellStyle.Subtitle, reuseIdentifier: "MyTestCell")
var mp3file: AnyObject = mp3sPaths[indexPath.row]
cell.textLabel?.text = mp3file.lastPathComponent
return cell
}

func tableView(tableView: UITableView, didDeselectRowAtIndexPath indexPath: NSIndexPath) {
var mp3file = mp3sPaths[indexPath.row] as NSString
var mp3URL: NSURL = NSURL.fileURLWithPath(mp3file)!
var error: NSError?
audioPlayer = AVAudioPlayer(contentsOfURL: mp3URL, error: &error)
audioPlayer?.stop()
audioPlayer?.prepareToPlay()
audioPlayer?.play()
}

Read video bytes array in Android SurfaceView

I'm currently working on a project at my engineering school which consists of broadcasting videos (with audio) over multicast to Android devices. First, we succeeded in playing an .mp4 file on an Android device using two MediaExtractors, two MediaCodecs, a SurfaceView and an AudioTrack, and it works pretty well.


Second, we want to send an .mp4 from a computer to this Android device. For this, we use Xuggle on the PC and send the audio and video byte arrays over two separate UDP ports. We manage to play the audio using an AudioTrack (with the write() method) on the Android device. But how can we play the bytes sent for the video? Do you have any ideas? Can we use a BitmapFactory and then send it to the SurfaceView?
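
We are considering something along these lines (a rough, untested sketch: receiveNextVideoPacket(), spsPps, running, width and height stand in for our own code, and each UDP packet is assumed to carry one complete H.264 access unit). Would this be the right direction?

private void decodeVideoLoop(Surface surface, int width, int height, byte[] spsPps) throws IOException {
    MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
    format.setByteBuffer("csd-0", ByteBuffer.wrap(spsPps)); // SPS/PPS, assumed to arrive out of band

    MediaCodec decoder = MediaCodec.createDecoderByType("video/avc");
    decoder.configure(format, surface, null, 0); // render decoded frames straight to the SurfaceView's Surface
    decoder.start();

    MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
    long presentationTimeUs = 0;

    while (running) { // 'running' is a placeholder stop flag
        byte[] unit = receiveNextVideoPacket(); // placeholder: one H.264 access unit from the UDP socket
        int inIndex = decoder.dequeueInputBuffer(10000);
        if (inIndex >= 0) {
            ByteBuffer inBuf = decoder.getInputBuffers()[inIndex];
            inBuf.clear();
            inBuf.put(unit);
            decoder.queueInputBuffer(inIndex, 0, unit.length, presentationTimeUs, 0);
            presentationTimeUs += 33333; // assumes ~30 fps; should really follow the sender's timestamps
        }
        int outIndex = decoder.dequeueOutputBuffer(info, 10000);
        if (outIndex >= 0) {
            decoder.releaseOutputBuffer(outIndex, true); // true = render this frame to the Surface
        }
    }
    decoder.stop();
    decoder.release();
}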


Loop audio with JavaScript

I have an audio player that I want to use in "loop mode" so the sound that I play repeats continuously, and I don't understand how to do this. I've uploaded the JS file from the player that I have, so you can see if you can make something of it; I have another two files, but I hope you can find something here since I cannot insert another link for now. Thank you very much!


jquery.jplayer.min.js


JavaScript code doesn't work in Safari (Web Audio API)

I followed this tutorial (http://ift.tt/1CKUxAL) to visualize sound with JavaScript and the Web Audio API.


It works great in Google Chrome but not in Safari! I think it is a problem with these "prefixes" or something.


I would be so happy if someone could tell me what I have to change in my code to get this working in Safari. (Sorry for my bad English.)


Here is my code (note: you have to play an mp3 file to test it):





<!doctype html>
<html>
<head>

<style>
div#mp3_player{ width:500px; height:60px; background:#000; padding:5px; margin:50px auto; }
div#mp3_player > div > audio{ width:500px; background:#000; float:left; }
div#mp3_player > canvas{ width:500px; height:30px; background:#002D3C; float:left; }
</style>

<script>
// Create a new instance of an audio object and adjust some of its properties
var audio = new Audio();
audio.src = 'track1.mp3';
audio.controls = true;
audio.loop = true;
audio.autoplay = true;

// Establish all variables that your Analyser will use
var canvas, ctx, source, context, analyser, fbc_array, bars, bar_x, bar_width, bar_height;

// Initialize the MP3 player after the page loads all of its HTML into the window
window.addEventListener("load", initMp3Player, false);

function initMp3Player(){
document.getElementById('audio_box').appendChild(audio);
context = new webkitAudioContext(); // AudioContext object instance
analyser = context.createAnalyser(); // AnalyserNode method
canvas = document.getElementById('analyser_render');
ctx = canvas.getContext('2d');
// Re-route audio playback into the processing graph of the AudioContext
source = context.createMediaElementSource(audio);
source.connect(analyser);
analyser.connect(context.destination);
frameLooper();
}

// frameLooper() animates any style of graphics you wish to the audio frequency
// Looping at the default frame rate that the browser provides(approx. 60 FPS)

function frameLooper(){
window.webkitRequestAnimationFrame(frameLooper);
fbc_array = new Uint8Array(analyser.frequencyBinCount);
analyser.getByteFrequencyData(fbc_array);
ctx.clearRect(0, 0, canvas.width, canvas.height); // Clear the canvas
ctx.fillStyle = '#00CCFF'; // Color of the bars
bars = 100;
for (var i = 0; i < bars; i++) {
bar_x = i * 3;
bar_width = 2;
bar_height = -(fbc_array[i] / 2);
// fillRect( x, y, width, height ) // Explanation of the parameters below
ctx.fillRect(bar_x, canvas.height, bar_width, bar_height);
}
}
</script>

</head>
<body>


<div id="mp3_player">
<div id="audio_box"></div>
<canvas id="analyser_render"></canvas>
</div>


</body>
</html>



How to disable and enable sound effects in my application?

In my application all my Buttons have their own sound effects. I just want two buttons in my settings to disable and enable sound effects in the whole application. I enable sound effects for each button like this:



MediaPlayer mp = MediaPlayer.create(getApplicationContext(), R.raw.laser);
mp.start();


but how can I disable it? I also tried this code:



setSoundEffectsEnabled(false);


but the point is that I just want to disable and enable sound effects based on the user's selection. I was wondering if anyone could suggest a solution.
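
One approach I am considering (a rough sketch; the "sound_on" preference key and the playEffect() helper are my own invention) is to store the user's choice in SharedPreferences and check it before creating the MediaPlayer:

// The two settings buttons would write the user's choice, e.g.:
//   prefs.edit().putBoolean("sound_on", true).apply();   // enable
//   prefs.edit().putBoolean("sound_on", false).apply();  // disable

// Helper used by every button instead of calling MediaPlayer.create() directly:
private void playEffect(int resId) {
    SharedPreferences prefs = PreferenceManager.getDefaultSharedPreferences(this);
    if (!prefs.getBoolean("sound_on", true)) {
        return; // sound effects disabled by the user
    }
    MediaPlayer mp = MediaPlayer.create(getApplicationContext(), resId);
    mp.setOnCompletionListener(new MediaPlayer.OnCompletionListener() {
        @Override
        public void onCompletion(MediaPlayer player) {
            player.release(); // free the player once the effect has finished
        }
    });
    mp.start();
}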


Friday, January 30, 2015

GSM - EFR Speech Coding

Does anybody know where I can find source code for the EFR speech coding used in the Enhanced Full Rate speech channel in GSM?


I have found GSM FR; however, I couldn't find any code for GSM 06.60 (GSM-EFR).


I have searched SO and the internet a lot, but I couldn't find anything.


Language is not important.


How to programmatically convert m4a audio into an mp4 video with a single image as background?

There are some applications that seem to get the job done, but I'd like to do the conversion programmatically. Is this possible, perhaps with a combination of ffmpeg and some other packages?


How to play sound on website whenever specific tweet is made?

I am working on a project with a personal website. I need to figure out how to have my website play a sound by detecting whenever a tweet with a specific hashtag is used. Is this possible? If so, what tools should I go about using to implement it?


I have what I would describe as novice experience with programming websites to use JavaScript or similar. I would be interested in being pointed towards a good tutorial on how to use JavaScript along with the Twitter API to achieve what I am looking for.


Thanks.


Matlab Record USB Mic


input = dsp.AudioRecorder;
output = dsp.AudioPlayer;

tic;

while (toc < 5)
stream = step(input);
step(output, stream);
plot(stream);
drawnow;
end


I'm creating a MATLAB program and I need to record and play back (in real time) audio input from a USB mic. This is what I have, but it does not work. The data coming in is all 0s.


Write the name attribute of an element with jQuery

I am making a custom HTML5/jQuery audio player and my audio tags are named like name="Song Name". I want to use jQuery to grab that name attribute value and write it into a <p></p> tag. But whenever it runs it just shows [object Object] instead of the name value.


HTML



<p id="song-name"></p>
<audio name="Song 1" src="/song1.mp3" type="audio/mpeg">
<audio name="Song 2" src="/song2.mp3" type="audio/mpeg">
<audio name="Song 3" src="/song3.mp3" type="audio/mpeg">


JQuery



var songName = $("audio[name]")

$("#song-name").text(songName);


JSfiddle


Editing or interacting with USB drivers for the sake of audio latency

This is more a request for information than a how-to for creating a driver. The digital audio recording world is constantly trying to find ways to reduce the latency from the moment a signal hits the USB port to when it comes back out of your speakers or headphones. The more lag between when you hit or strum a note and when you hear it, the harder (or more impossible) it becomes to actually play while listening to yourself.


This article mentions USB buffer/timing/latency settings and how lowering the interrupt timer from 6 to around 2 milliseconds can greatly reduce the overall latency: http://ift.tt/1684fBj. Somehow, they achieve this with their external audio interface. I'm assuming a driver they created lets you mess with your computer's USB drivers.


If I am interpreting this correctly, there must be some way to "overclock" the USB port's timing. I can't find a good set of information on this. Can anyone explain these settings further, or point me in the right direction to achieve this?


Why is only one channel of audio being played in Delphi app?

I have a Delphi app in which I direct the sound from a text-to-speech SAPI component so that one channel drives an LED and the other drives a coil. My point is that normally I do not hear the sound output.


Yesterday on doing some testing with the app, I had headphones on instead of using the LED and Coil arrangement (normally, when testing I just mute the application via the sound mixer), and I noticed that the sound was only coming from the right channel.


I tested the headphones, media player and all worked fine. I confirmed that the L/R channels had a balanced output in the Windows sound properties area.


I used a simple PlaySound API call in the mainform create event and that, too, played just the right channel. I moved the PlaySound call to the top of the DPR file and that, too, played only the right channel right after the start of the app.


I tried creating a simple app in Delphi with this same PlaySound call and that sent sound to both channels just fine.


I tried the app on two other systems, another Win7 system and a Win8.1 system. The sound worked fine on the other Win7 system, but again played only the right channel on the Win8.1 system. I checked with another user and he, too, only got right-channel output.


What is going on here? How can I get the left channel working again in the app? It does not appear to be code related, nor Delphi related; perhaps it is a Windows app-specific registry setting that Windows has made (I certainly didn't!)? Thank you for any suggestions!


Chuck Belanger


JavaScript/PHP: ignore quotes in string

I'm making a music player using PHP and JavaScript. I list the files like this:



<?php

if (isset($_GET["action"])) {
$action = htmlspecialchars($_GET["action"]);

if ($action == "listen") {
function listFolderFiles($dir) {
$ffs = scandir($dir);
echo '<ol>';
foreach($ffs as $ff){
if($ff != '.' && $ff != '..'){
echo '<li><a href="#" onclick="changesong(\'' . $dir . '/' . $ff . '\');">'. $ff . '</a>';
if (is_dir($dir . '/' . $ff)) listFolderFiles($dir . '/' . $ff);
echo '</li>';
}
}
echo '</ol>';
}

listFolderFiles("music");
}
} else {
echo '<a href="?action=listen">> listen</a>';
}
?>


And I change the song like this:



<script>
function changesong(url) {
$("#audioplayer").attr("src", url);
$("#audioplayer").trigger('play');
}
</script>


The problem is that songs with quotes in them won't play (for example Don't Stop Me Now). Is there an easy way to fix this?


Analyze Sound file C++

I am trying to write a Node.js module that will let me take an mp3 and separate the rhythm and the melody. Are there any C++ libraries that will let me do this, or does anyone have any ideas about where to start? There's a Node.js library that will analyze and display the waveform of a sound file, so I'd like it to be like that, except producing two tracks: one for the melody and one for the rhythm.


Playing sound on Android with SoundPool while Recording microphone with AudioRecord

I'm using AudioRecord to record mic data while at the same time playing sounds with SoundPool. My problem is: When I call startRecording() on the AudioRecord, the SoundPool becomes choppy or stops completely. When I call stopRecording() all the cued sounds come in at once. Is there a better library I should use? I've noticed the Buttons in the app are still responsive and are still able to play their default sounds.


SoundPool stops playing sounds even when the thread that called AudioRecord.startRecording() has finished. It doesn't play sounds again until I call AudioRecord.stopRecording(). I'd like to do both at once. Seeing as the sounds I'm playing are short sounds, I thought SoundPool was the correct class to use. If someone knows how the Android UI Button class plays its sounds I could copy that and it should work.


This is in the emulator with the latest Android Studio 1.0.2


Edit: MediaPlayer is worse, it fails after calling AudioRecord.startRecording() and it has much more latency than SoundPool (when not recording at the same time.) I know SoundPool.play() was being called regularly because of Log statements.


Android Sound Record both mic and application sound

I am trying to implement sound recording functionality in my app. What I need is to capture both the sound that comes from the mic and the sound that I am playing within the application, but not through the speakers - somehow internally.


I am thinking the SoundPool could provide me with the solution but I need someone to point me in the right direction.


Looking forward to your answers!


Detect network sample rate of phone call on Android phone via Android SDK

I have been putting together some sample applications for a goofy Android application where I capture audio uplink/downlink data during a normal phone call (ie: recording downlink audio, and then streaming modified audio and sending it out over the network), and I do some pitch shifting and filtering in the middle, so the audio you hear sounds like Alvin and the chipmunks, and your voice also sounds high pitched to the person on the other side.


I've noticed that when I'm in certain coverage zones, the audio uplink/downlink is a 16kHz signal (wideband audio), but when I go into other coverage zones the signal is 8kHz (narrowband audio). I always get the same sample rate from the Android SDK (24kHz, I think), but the maximum frequency content actually present in the signal is limited by carrier limitations.


Is there a way to query some sort of "mode" or something that lets me know the maximum frequency content present in the call via the Android SDK? Maybe there's some sort of AT command or query I can issue to the cellular radio?


I want to do this because I do a discrete Fourier Transform before applying my pitch shift transformation, and if I know the signal is band limited, I would only have to process half as many frequency bins. I know I could do a peak detection or something like that, but it seems that even in the case of 8kHz band limited calls, there is still some noise in the [8kHz,10kHz] band, so rather than determining the band limit via signal analysis, I was hoping there is some metric I could query from the cellular radio or the network itself to have a cut-and-dry answer.


Thanks!


MSVAD Virtual Audio Sample Driver "Inf2Cat Signability test failed" (Windows WDK 8.1)

So I'm working on a virtual audio driver for Windows.


HOST MACHINE: Windows 8.1 w/Windows Driver Kit 8.1


TEST/TARGET MACHINE: Windows 8.1 connected via Network (Ethernet/Wi-Fi).


IDE: Visual Studio 2013 Express


PROJECT: MSVAD (Virtual Audio Driver)


Deployment configuration is for Win7x64.


For reference please see this sample tutorial: http://ift.tt/1HossUw


PROBLEM: See tutorial link above. Under "Build Sample" after "5. Locate the built driver package" the tutorial shows a list of files you should have in the directory. For me I have those files under C:\MSVAD\C++\x64\Win7Debug\package. They are all there except msvad.inf and msvad.cat. However msvad.inf does show up under C:\MSVAD\C++.


When I build the project I get these two errors:


Error 1 error : Driver Deployment Task Failed: Driver Preparation (x64) (possible reboot) C:\Program Files (x86)\Windows Kits\8.1\build\x64\ImportAfter\DriverDeployment.targets 69 9 package (Package\package)


Error 2 error : Driver Deployment Task Failed: Driver Install (x64) (possible reboot) C:\Program Files (x86)\Windows Kits\8.1\build\x64\ImportAfter\DriverDeployment.targets 69 9 package (Package\package)


But then I was able to get the msvad.inf file into the correct directory by going into the project settings (in the Solution Explorer) and adding msvad.inf to be included in the \package directory (still not the msvad.cat file, though), but when I build the project this error showed up:



Inf2Cat Tool Output: ................................ Signability test failed.
Errors: 22.9.7: DriverVer set to incorrect date (must be postdated to 4/21/2009 for newest OS) in \msvad.inf
Warnings: None "


I have been trying hard to figure this out. I'm pretty sure that this has to do with the msvad.cat file. I tried using Inf2Cat.exe under the \bin of the WDK directory but it won't open for me. When I try opening it from CMD in Admin Mode it says access is restricted or something. Even if I got it to open I'm not 100% sure what to do. I am completely stumped.


jquery slider that controls html5 audio volume

I am creating a custom jQuery/HTML5 audio player and I want a slider that can control the volume of the audio element. I have a slider tagged with class="volume_slider", and the songs are tagged with class="audio-player".

For some reason this code will not change the volume. I got the code from here; any ideas on why the volume isn't changing?



$(".volume_slider").slider({
value : 75,
step : 1,
range : 'min',
min : 0,
max : 100,
slide : function(){
var value = $(".volume_slider").slider("value");
$(".audio-player").volume = (value / 100);
}
});

HTML5 - Check if autoplay is allowed

My website plays background music with autoplay. I made it use my custom controls for play and pause. Now I'd like to set the initial state according to what is actually going on: if the music is really about to play, it should show the pause icon; otherwise (e.g. on mobile) the play icon.


I would use audio.paused boolean value, but it's always false before the audio is loaded.


I would use audio.autoplay value, but it's always true for me, even on devices that don't support it.


Is there any clean way to know whether the audio will be played? I would like to keep it in sync with the autoplay attribute, so if I decide to remove it, the state should always show the play icon at the beginning.


How do I query Android system accessibility settings?

Android allows the user to adjust the audio balance on their device (example screenshot at http://ift.tt/1tFZo4v).


I'm creating an app that will be sending unique audio to each channel, so I need to make sure the accessibility balance setting is still at the default middle setting.


I discovered the AccessibilityManager class (http://ift.tt/1edzcEm) which states that it "provides facilities for querying the accessibility state of the system". But I'm at a loss as to how to use it to retrieve the audio balance setting.


Is AccessibilityManager the correct class to use? If so, how can I use it for my needs? If not, which class/method in the Android SDK should I use?


Ogg audio errors in Chrome audio player

I have an audio player that loads its src from a PHP script.


This seems to have stopped working in Chrome with Ogg files at some point. It still works in Firefox, and it also works in Chrome for mp3 and if the same Ogg file is set directly as the src.





<audio id="audio1" controls >
<source src="" >
</audio>

<script>
var myaudio = document.getElementById("audio1");

myaudio.src = "http://localhost/f=filename&m=ogg";
//myaudio.src = "filename.ogg";//this works for the same ogg file

myaudio.addEventListener('loadeddata', function(){console.log('loaded ')}, false);
myaudio.addEventListener('canplay', function(){console.log('canplay ')}, false);
myaudio.addEventListener('canplaythrough', function(){console.log('canplaythrough ')}, false);
myaudio.addEventListener('error', function(){for( p in myaudio.error){console.log('error '+p)}; }, false);


</script>



this is the response header:



HTTP/1.1 200 OK
Date: Fri, 30 Jan 2015 15:33:52 GMT
Server: Apache/2.4.9 (Win32) mod_fcgid/2.3.9
X-Powered-By: PHP/5.4.25
Content-Disposition: attachment; filename=filename.ogg
X-Pad: avoid browser bug
Cache-Control: no-cache
Content-Transfer-Encoding: binary
Access-Control-Allow-Origin: *
Content-length: 13754
Keep-Alive: timeout=5, max=99
Connection: Keep-Alive
Content-Type: audio/ogg


In Chrome I get the following errors logged to the console:



error code
error MEDIA_ERR_ABORTED
error MEDIA_ERR_NETWORK
error MEDIA_ERR_DECODE
error MEDIA_ERR_SRC_NOT_SUPPORTED

ffmpeg: Image + Audio + Text to Video

I'm new to ffmpeg and am trying to create a video (mp4) by combining two images (png) and an audio file (mp3), and overlaying text on it.


I can successfully create a video by combining the images and audio but I struggle to add the text.


Here's my script to overlay one image with another and add the audio:



ffmpeg -loop 1 -i background.png -i overlay.png -filter_complex overlay -i audio.mp3 -shortest -acodec copy -f mov video.mov


How can I add overlay text to the video? The text remains unchanged for the entire video.


Thanks in advance.


Stop all playbacks from own application when activity starts

I am aware that when I want to stop other applications from playing audio, I use audio focus: http://ift.tt/1py0w48


However, I want to stop the audio that has been started by my own application.


I have a few activities: one is the main screen, then the list of songs, each with its own play/pause button. That works fine; when I hit play:



  1. I check if I am already playing something and if so, I stop the player.

  2. I start the song and set playing = songTitle so I will know that I'm already playing something.


However, when I change the activity or even go back from the application to the desktop, the music still plays - that's fine. But when I go back to my application it does not know that I am already playing a song, and now when I click a song, two songs are playing at the same time.


What I want to do:


To make it work I would like to stop all the music I started playing (if any) at the very start of onCreate for my SongsActivity. The problem is, I cannot just create a new MediaPlayer and use its stop() method, because it's not attached to the song that was previously started by another instance of MediaPlayer.


I use the WebView to display the interface:



public class SongsActivity extends ActionBarActivity {

@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);

final String path = getExternalFilesDir(null).toString() + "/";
final WebView webView = (WebView) findViewById(R.id.webview);
webView.getSettings().setJavaScriptEnabled(true);
final MediaPlayer mediaPlayer = MediaPlayer.create(getApplicationContext(), R.raw._first_song);

WebViewClient webViewClient= new WebViewClient(){
String playing = ""; //what we have started lately
@Override
public boolean shouldOverrideUrlLoading(WebView view, String url)
{
if(url.contains("mp3")){
String songTitle = url.replace("file:///android_asset/mp3-", "");
if(mediaPlayer.isPlaying()){
mediaPlayer.stop();
}

if(!playing.equals(songTitle)) {

mediaPlayer.reset();
int resID = getApplicationContext().getResources().getIdentifier("_" + songTitle, "raw", getApplicationContext().getPackageName());

try {
AssetFileDescriptor afd = getApplicationContext().getResources().openRawResourceFd(resID);
if (afd != null) {
mediaPlayer.setDataSource(afd.getFileDescriptor(), afd.getStartOffset(), afd.getLength());
}
afd.close();
} catch (IOException e) {
e.printStackTrace();
}
try {
mediaPlayer.prepare();
} catch (IOException e) {
e.printStackTrace();
}
playing = songTitle;
mediaPlayer.start();
mediaPlayer.setOnCompletionListener(new MediaPlayer.OnCompletionListener(){

@Override
public void onCompletion(MediaPlayer mp) {
webView.reload(); //so the pause button will restart to play state
}
});
}
}
return true;
}

@Override
public void onLoadResource(WebView view, String url){}
};
webView.setWebViewClient(webViewClient);
webView.loadUrl("file:///android_asset/songs.html");
}
...

}
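
The best idea I have so far (just a sketch, not tested) is to keep the MediaPlayer somewhere every Activity can reach, for example a small static holder, and stop it at the top of onCreate(); resID below stands for whatever track is being started:

// Hypothetical holder so every Activity can reach the same player instance.
public class PlayerHolder {
    public static MediaPlayer player;
}

// When starting playback (instead of a local MediaPlayer field):
if (PlayerHolder.player != null && PlayerHolder.player.isPlaying()) {
    PlayerHolder.player.stop();
    PlayerHolder.player.release();
}
PlayerHolder.player = MediaPlayer.create(getApplicationContext(), resID);
PlayerHolder.player.start();

// At the top of SongsActivity.onCreate():
if (PlayerHolder.player != null && PlayerHolder.player.isPlaying()) {
    PlayerHolder.player.stop();
}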

Android support v7 MediaRouter sometimes doesn't work properly

I use the v7 support library MediaRouter for switching routes between the phone's speaker and a Bluetooth device.


And sometimes it behaves strangely. For example, when I turn off Bluetooth, the corresponding route seems to be removed (playback switches to the speaker), but my application doesn't receive any callback about it. Moreover, when I manually get all the routes via MediaRouter.getRoutes(), it returns that Bluetooth route, but when I try to switch to it, it appears to be selected while playback actually still goes through the speaker.


I tried all the flags (CALLBACK_FLAG_FORCE_DISCOVERY, CALLBACK_FLAG_REQUEST_DISCOVERY, etc.) without result. Only a phone reboot helps. Any suggestions?


I used Android 4.2, 4.4.


Error while playing Audio in iOS AudioPlayer

I receive the following error code while playing an audio file which is kept in the Documents folder.


Error Domain=NSOSStatusErrorDomain Code=-43 "The operation couldn’t be completed. (OSStatus error -43.)"


However, the same file plays without any error when it is in the resource folder.


Please help !!


jPlayer make some adjustments

I want to use jPlayer to build a ul with a few li items, each of which has a song to play.


I found a simple example of sorts, demo08; this is the JavaScript code:



$(document).ready(function(){

var stream = {
title: "ABC Jazz",
mp3: "http://ift.tt/1AcrPEd"
},
ready = false;

$("#jquery_jplayer_1").jPlayer({
ready: function (event) {
ready = true;
$(this).jPlayer("setMedia", stream);
},
pause: function() {
$(this).jPlayer("clearMedia");
},
error: function(event) {
if(ready && event.jPlayer.error.type === $.jPlayer.error.URL_NOT_SET) {
// Setup the media stream again and play it.
$(this).jPlayer("setMedia", stream).jPlayer("play");
}
},
swfPath: "../../dist/jplayer",
supplied: "mp3",
preload: "none",
wmode: "window",
useStateClassSkin: false,
autoBlur: false,
keyEnabled: true
});


});


The problem is the following: I have a few li items, each with a different song. How can I set the media in this case, i.e. change this code to get the song from an attribute? I was thinking along those lines but I'm not quite able to implement it.


Can someone help me with this?


Play audio from file to speaker with Media Foundation

I'm attempting to play the audio track from an mp4 file to my speaker. I know Media Foundation is able to decode the audio stream as I can play it with the TopoEdit tool.


In the sample code below I'm not using a media session or topology. I'm attempting to manually connect the media source to the sink writer. The reason I want to do this is that I ultimately intend to be getting the source samples from the network rather than from a file.


The error I get on the pSinkWriter->WriteSample line when running the sample below is MF_E_INVALIDREQUEST (0xC00D36B2). So I suspect there's something I haven't wired up correctly.



#include <stdio.h>
#include <tchar.h>
#include <mfapi.h>
#include <mfplay.h>
#include <mfreadwrite.h>

#pragma comment(lib, "mf.lib")
#pragma comment(lib, "mfplat.lib")
#pragma comment(lib, "mfplay.lib")
#pragma comment(lib, "mfreadwrite.lib")
#pragma comment(lib, "mfuuid.lib")

#define CHECK_HR(hr, msg) if (hr != S_OK) { printf(msg); printf("Error: %.2X.\n", hr); goto done; }

int _tmain(int argc, _TCHAR* argv[])
{
CoInitializeEx(NULL, COINIT_APARTMENTTHREADED | COINIT_DISABLE_OLE1DDE);
MFStartup(MF_VERSION);

IMFSourceResolver *pSourceResolver = NULL;
IUnknown* uSource = NULL;
IMFMediaSource *mediaFileSource = NULL;
IMFSourceReader *pSourceReader = NULL;
IMFMediaType *pAudioOutType = NULL;
IMFMediaType *pFileAudioMediaType = NULL;
MF_OBJECT_TYPE ObjectType = MF_OBJECT_INVALID;
IMFMediaSink *pAudioSink = NULL;
IMFStreamSink *pStreamSink = NULL;
IMFMediaTypeHandler *pMediaTypeHandler = NULL;
IMFMediaType *pMediaType = NULL;
IMFMediaType *pSinkMediaType = NULL;
IMFSinkWriter *pSinkWriter = NULL;

// Set up the reader for the file.
CHECK_HR(MFCreateSourceResolver(&pSourceResolver), "MFCreateSourceResolver failed.\n");

CHECK_HR(pSourceResolver->CreateObjectFromURL(
L"big_buck_bunny.mp4", // URL of the source.
MF_RESOLUTION_MEDIASOURCE, // Create a source object.
NULL, // Optional property store.
&ObjectType, // Receives the created object type.
&uSource // Receives a pointer to the media source.
), "Failed to create media source resolver for file.\n");

CHECK_HR(uSource->QueryInterface(IID_PPV_ARGS(&mediaFileSource)),
"Failed to create media file source.\n");

CHECK_HR(MFCreateSourceReaderFromMediaSource(mediaFileSource, NULL, &pSourceReader),
"Error creating media source reader.\n");

CHECK_HR(pSourceReader->GetCurrentMediaType((DWORD)MF_SOURCE_READER_FIRST_AUDIO_STREAM, &pFileAudioMediaType),
"Error retrieving current media type from first audio stream.\n");

// printf("File Media Type:\n");
// Dump pFileAudioMediaType.

// Set the audio output type on the source reader.
CHECK_HR(MFCreateMediaType(&pAudioOutType), "Failed to create audio output media type.\n");
CHECK_HR(pAudioOutType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Audio), "Failed to set audio output media major type.\n");
CHECK_HR(pAudioOutType->SetGUID(MF_MT_SUBTYPE, MFAudioFormat_Float), "Failed to set audio output audio sub type (Float).\n");

CHECK_HR(pSourceReader->SetCurrentMediaType((DWORD)MF_SOURCE_READER_FIRST_AUDIO_STREAM, NULL, pAudioOutType),
"Error setting reader audio output type.\n");

// printf("Source Reader Output Type:");
// Dump pAudioOutType.

CHECK_HR(MFCreateAudioRenderer(NULL, &pAudioSink), "Failed to create audio sink.\n");

CHECK_HR(pAudioSink->GetStreamSinkByIndex(0, &pStreamSink), "Failed to get audio renderer stream by index.\n");

CHECK_HR(pStreamSink->GetMediaTypeHandler(&pMediaTypeHandler), "Failed to get media type handler.\n");

// My speaker has 3 audio types, of which I got the furthest with the third one.
CHECK_HR(pMediaTypeHandler->GetMediaTypeByIndex(2, &pSinkMediaType), "Failed to get sink media type.\n");

CHECK_HR(pMediaTypeHandler->SetCurrentMediaType(pSinkMediaType), "Failed to set current media type.\n");

// printf("Sink Media Type:\n");
// Dump pSinkMediaType.

CHECK_HR(MFCreateSinkWriterFromMediaSink(pAudioSink, NULL, &pSinkWriter), "Failed to create sink writer from audio sink.\n");

printf("Read audio samples from file and write to speaker.\n");

IMFSample *audioSample = NULL;
DWORD streamIndex, flags;
LONGLONG llAudioTimeStamp;

for (int index = 0; index < 10; index++)
//while (true)
{
// Initial read results in a null pSample??
CHECK_HR(pSourceReader->ReadSample(
MF_SOURCE_READER_FIRST_AUDIO_STREAM,
0, // Flags.
&streamIndex, // Receives the actual stream index.
&flags, // Receives status flags.
&llAudioTimeStamp, // Receives the time stamp.
&audioSample // Receives the sample or NULL.
), "Error reading audio sample.");

if (flags & MF_SOURCE_READERF_ENDOFSTREAM)
{
printf("End of stream.\n");
break;
}
if (flags & MF_SOURCE_READERF_STREAMTICK)
{
printf("Stream tick.\n");
pSinkWriter->SendStreamTick(0, llAudioTimeStamp);
}

if (!audioSample)
{
printf("Null audio sample.\n");
}
else
{
CHECK_HR(audioSample->SetSampleTime(llAudioTimeStamp), "Error setting the audio sample time.\n");

CHECK_HR(pSinkWriter->WriteSample(0, audioSample), "The stream sink writer was not happy with the sample.\n");
}
}

done:

printf("finished.\n");
getchar();

return 0;
}


I've omitted the code that dumps the media types for brevity but their output is shown below. It could well be that I haven't got the media types connected properly.



File Media Type:
Audio: MAJOR_TYPE=Audio, PREFER_WAVEFORMATEX=1, {BFBABE79-7434-4D1C-94F0-72A3B9E17188}=0, {7632F0E6-9538-4D61-ACDA-EA29C8C14456}=0, SUBTYPE={00001610-0000-0010-8000-00AA00389B71}, NUM_CHANNELS=2, SAMPLES_PER_SECOND=22050, BLOCK_ALIGNMENT=1, AVG_BYTES_PER_SECOND=8000, BITS_PER_SAMPLE=16, USER_DATA=<BLOB>, {73D1072D-1870-4174-A063-29FF4FF6C11E}={05589F81-C356-11CE-BF01-00AA0055595A}, ALL_SAMPLES_INDEPENDENT=1, FIXED_SIZE_SAMPLES=1, SAMPLE_SIZE=1, MPEG4_SAMPLE_DESCRIPTION=<BLOB>, MPEG4_CURRENT_SAMPLE_ENTRY=0, AVG_BITRATE=64000,

Source Reader Output Type:
Audio: MAJOR_TYPE=Audio, SUBTYPE=Float,

Sink Media Type:
Audio: MAJOR_TYPE=Audio, SUBTYPE=Float, NUM_CHANNELS=2, SAMPLES_PER_SECOND=48000, BLOCK_ALIGNMENT=8, AVG_BYTES_PER_SECOND=384000, BITS_PER_SAMPLE=32, ALL_SAMPLES_INDEPENDENT=1, CHANNEL_MASK=3,


Any hints as to where I could look next would be appreciated.


AutoHotkey audio device numbering keeps changing

I am running an AutoHotkey script to set/change the default audio device in Windows. It runs very well, unless my USB hub - which my headset is attached to - has no power. Then the device will not appear in the audio device overview, which makes sense. But after the USB hub has power again, the numbering of the devices has changed, as my headset now shows up, for example, at the end of the list. Is there a way to use the name of the audio device instead of the number?


Here is the Autohotkey Script:



;Selects the internal Audio with PageUp
#PgUp up::
SelectAndShowAudioDevice(0,"Headset")
return

;Selects the external Audio with PageDown
#PgDn up::
SelectAndShowAudioDevice(1,"SPDIF-Out")
return

SelectAndShowAudioDevice(deviceNumber, deviceName)
{
error := ActivateAudioDevice(deviceNumber)
if error
TrayTip % "Fehler beim Aktivieren von " . deviceName, % error
else
TrayTip % deviceName . " aktiv", % "Audiowiedergabe erfolgt ueber " . deviceName
}

ActivateAudioDevice(deviceNumber)
{
IfWinNotExist Sound
{
; Open the Sound window
Run % "RunDll32.exe shell32.dll,Control_RunDLL mmsys.cpl,,0"
WinWait Sound,,2
if ErrorLevel
Return "Sound Fenster nicht gefunden"
CloseSoundWindowAtEnd := True
}

ControlSend SysListView321, {HOME} ; Go to the start of the list with Home
ControlSend SysListView321, {DOWN %deviceNumber%} ; Navigate to the audio device
SetControlDelay -1 ; Enable fast mouse clicking
ControlClick Button2 ; Mouse click on 'Set as Default'

if CloseSoundWindowAtEnd
WinClose
}

Android - Cut audio file for 2 seconds in beginning and 1 second in end

I need to cut one audio sample into another that has no data for the starting 2 seconds and the last 1 second. Currently I am trying the following code, but it does not get cut properly.


The header is 44 bytes, 2 seconds is 176400 bytes and 1 second is 88200 bytes.



private void copyWaveFileForAlgo(String inFilename, String outFilename) {
Log.v("copyWaveFile ", "---------copyWaveFile---------");
FileInputStream in = null;
FileOutputStream out = null;
long totalAudioLen = 0;
long totalDataLen = totalAudioLen + 36;
long longSampleRate = RECORDER_SAMPLERATE;
int channels = 1;
long byteRate = RECORDER_BPP * RECORDER_SAMPLERATE * channels / 8;

byte[] tmpdata = new byte[88200];
try {
in = new FileInputStream(inFilename);
out = new FileOutputStream(outFilename);
totalAudioLen = in.getChannel().size();
totalDataLen = totalAudioLen + 36;



int lenth = 0;
Log.v("data ", "----totalDataLen-----"+totalDataLen+"---------"+tmpdata.length);
long dataToCopy = totalDataLen - 88200;
long dataLenthTransfered = 0;
WriteWaveFileHeader(out, totalAudioLen, totalDataLen, longSampleRate, channels, byteRate);
long skippedB = 0;
lenth = in.read(tmpdata);

while (lenth != -1)
{

if (skippedB <176400)
{
skippedB+= in.skip(88200);
Log.v("skippedB ", "----skippedB-----"+skippedB+"---------");
}else
{
if(dataLenthTransfered < dataToCopy)
{Log.v("dataLenth ", "----elseee-----"+dataLenthTransfered+"-----dataToCopy----"+dataToCopy);
out.write(tmpdata);

}
else
{
Log.v("dataLenthTransfered ", "----dataLenthTransfered-----"+dataLenthTransfered+"----dataToCopy-----"+dataToCopy);
}

}
dataLenthTransfered+=lenth;
lenth = in.read(tmpdata);
}
in.close();
out.close();
} catch (FileNotFoundException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
}


Any help is appreciated. Thanks, Vyoma
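
For clarity, this is a stripped-down sketch of the skip-and-copy logic I am aiming for; it reuses WriteWaveFileHeader(), RECORDER_SAMPLERATE and RECORDER_BPP from the code above and assumes a fixed 44-byte header, 176400 bytes for 2 seconds and 88200 bytes for 1 second:

private void trimWav(String inFilename, String outFilename) throws IOException {
    FileInputStream in = new FileInputStream(inFilename);
    FileOutputStream out = new FileOutputStream(outFilename);

    long sourceAudioLen = in.getChannel().size() - 44;   // PCM payload size of the source
    long keepLen = sourceAudioLen - 176400 - 88200;      // drop 2 s at the start, 1 s at the end
    int channels = 1;
    long byteRate = RECORDER_BPP * RECORDER_SAMPLERATE * channels / 8;

    // The header must describe the trimmed payload, not the original one.
    WriteWaveFileHeader(out, keepLen, keepLen + 36, RECORDER_SAMPLERATE, channels, byteRate);

    in.skip(44 + 176400);                                // skip old header + first 2 seconds

    byte[] buf = new byte[8820];
    long copied = 0;
    int n;
    while (copied < keepLen
            && (n = in.read(buf, 0, (int) Math.min(buf.length, keepLen - copied))) != -1) {
        out.write(buf, 0, n);                            // never copy into the final second
        copied += n;
    }
    in.close();
    out.close();
}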


How to get the current time duration in real time using Audio Recorder (Android)

I am using the audio recorder to record voice, but I cannot figure out how to get the current duration in real time.
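
The only approach I can think of so far is to track the elapsed time myself with a Handler while recording; a rough sketch follows (startTime, durationText and the 500 ms tick interval are my own placeholders). Is there a better way?

private final Handler handler = new Handler();
private long startTime;

private final Runnable updateDuration = new Runnable() {
    @Override
    public void run() {
        long elapsedMillis = System.currentTimeMillis() - startTime;
        int seconds = (int) (elapsedMillis / 1000);
        durationText.setText(String.format("%02d:%02d", seconds / 60, seconds % 60)); // placeholder TextView
        handler.postDelayed(this, 500); // refresh twice a second while recording
    }
};

// When recording starts:
startTime = System.currentTimeMillis();
handler.post(updateDuration);

// When recording stops:
handler.removeCallbacks(updateDuration);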


Audio streaming server for mobile app

I have placed some mp3 files on my website's FTP account, which is the simple hosting area of the website, and from the iPhone app I am playing the audio mp3 files, which works well. But I want to create a streaming server for storing 200 songs and stream them from my mobile app. Do I have to set up a streaming server (I don't even know how a streaming server is set up)?


Thursday, January 29, 2015

How to add a playable (such as wav, wmv) header to a PCM data buffer in iOS?

I am trying to add a wav header on top of raw PCM data to make it playable via AVAudioPlayer. But I couldn't find any solution or source code to do that on iOS using Objective-C/Swift. I found this, but it doesn't have a correct answer.


But I found a piece of code here which is in C and also contains some issues. The wav file generated from that code doesn't play properly.


I have given the code I have written so far below.



int NumChannels = AUDIO_CHANNELS_PER_FRAME;
short BitsPerSample = AUDIO_BITS_PER_CHANNEL;
int SamplingRate = AUDIO_SAMPLE_RATE;
int numOfSamples = [[NSData dataWithContentsOfFile:filePath] length];

int ByteRate = NumChannels*BitsPerSample*SamplingRate/8;
short BlockAlign = NumChannels*BitsPerSample/8;
int DataSize = NumChannels*numOfSamples*BitsPerSample/8;
int chunkSize = 16;
int totalSize = 36 + DataSize;
short audioFormat = 1;

if((fout = fopen([wavFilePath cStringUsingEncoding:1], "w")) == NULL)
{
printf("Error opening out file ");
}

fwrite("RIFF", sizeof(char), 4,fout);
fwrite(&totalSize, sizeof(int), 1, fout);
fwrite("WAVE", sizeof(char), 4, fout);
fwrite("fmt ", sizeof(char), 3, fout);
fwrite(&chunkSize, sizeof(int),1,fout);
fwrite(&audioFormat, sizeof(short), 1, fout);
fwrite(&NumChannels, sizeof(short),1,fout);
fwrite(&SamplingRate, sizeof(int), 1, fout);
fwrite(&ByteRate, sizeof(int), 1, fout);
fwrite(&BlockAlign, sizeof(short), 1, fout);
fwrite(&BitsPerSample, sizeof(short), 1, fout);
fwrite("data", sizeof(char), 3, fout);
fwrite(&DataSize, sizeof(int), 1, fout);


The file plays too fast, the sound is distorted and only the first 10 to 20 (or so) seconds play. I think the wav header isn't being generated correctly (because I am able to play the same PCM data/buffer using AudioUnit/AudioQueue). So what am I missing in my code? Any help would be highly appreciated.


Thanks in advance.


Split a video file into separate video and audio files using a single ffmpeg call?

Background: I would like to use MLT melt to render a project, but I'd like that render to result in separate audio and video files. I intend to use melt's avformat "consumer", which uses ffmpeg's libraries, so I'm formulating this question in terms of ffmpeg.


According to Useful FFmpeg Commands For Converting Audio & Video Files (labnol.org), the following is possible:



ffmpeg -i video.mp4 -t 00:00:50 -c copy small-1.mp4 -ss 00:00:50 -codec copy small-2.mp4


... which slices the "merged" audio+video files into two separate "chunk" files, which are also audio+video files, in a single call; that's not what I need.


Then, ffmpeg Documentation (ffmpeg.org), mentions this:



ffmpeg -i INPUT -map_channel 0.0.0 OUTPUT_CH0 -map_channel 0.0.1 OUTPUT_CH1


... which splits the entire duration of the content of two channels of a stereo audio file, into two mono files; that's more like what I need, except I want to split an A+V file into a stereo audio file, and a video file.


So I tried this with elephantsdream_teaser.ogv:



ffmpeg -i /tmp/elephantsdream_teaser.ogv \
-map 0.0 -vcodec copy ele.ogv -map 0.1 -acodec copy ele.ogg


... but this fails with "Number of stream maps must match number of output streams" (even if zero-size ele.ogv and ele.ogg are created).


So my question is - is something like this possible with ffmpeg, and if it is, how can I do it?


Record/Convert AUDIO data to WAV in Real-time

I am new when it comes to audio signal processing.


Currently I have a device connected to my PC that sends me audio data from a mic/playback track. I have already created a host application using the Steinberg ASIO SDK 2.3, which connects to the device and returns raw data in a repeating callback. The signal is 24-bit, and the frequency can be whatever I choose, let's say 44100 Hz, 2 pans, single channel. I also convert this signal to double <-1.0, 1.0> because I am doing some signal processing on it.


What I would like to do now is add recording functionality to my host. For example, on a button click, incoming data is continuously converted to a WAV file, and when I click another button it stops and saves.


I have already read about WAV files, file formats and bitstream formats (RIFF), and have an overall idea of what a WAV file looks like. I also checked a lot of forum threads, Stack Overflow threads and CodeProject posts, and everywhere I find something related to the topic, but I can't get an idea of how to make an ongoing recording in real time. A lot of the code I found is about converting a data array to WAV after modifying it. I would like to do an ongoing conversion and keep appending to/expanding the WAV file until I tell it to stop.


For example could I somehow modify this?



#include <fstream>

template <typename T>
void write(std::ofstream& stream, const T& t) {
stream.write((const char*)&t, sizeof(T));
}

template <typename T>
void writeFormat(std::ofstream& stream) {
write<short>(stream, 1);
}

template <>
void writeFormat<float>(std::ofstream& stream) {
write<short>(stream, 3);
}

template <typename SampleType>
void writeWAVData(
char const* outFile,
SampleType* buf,
size_t bufSize,
int sampleRate,
short channels)
{
std::ofstream stream(outFile, std::ios::binary);
stream.write("RIFF", 4);
write<int>(stream, 36 + bufSize);
stream.write("WAVE", 4);
stream.write("fmt ", 4);
write<int>(stream, 16);
writeFormat<SampleType>(stream); // Format
write<short>(stream, channels); // Channels
write<int>(stream, sampleRate); // Sample Rate
write<int>(stream, sampleRate * channels * sizeof(SampleType)); // Byterate
write<short>(stream, channels * sizeof(SampleType)); // Frame size
write<short>(stream, 8 * sizeof(SampleType)); // Bits per sample
stream.write("data", 4);
stream.write((const char*)&bufSize, 4);
stream.write((const char*)buf, bufSize);
}


And in callback somehow:



writeWAVData("mySound.wav", mySampleBuffer, mySampleBufferSize, 44100, 1);


I am grateful for any hint/link/form of help.


Which are the best web hosting services that allow audio to play along with slideshows?

Which are the best web hosting services that allow audio to play along with slideshows? I am a photographer building my website. My current site, mottvisualsweddings.com, cannot have audio playing along with slideshows.


Why does the APE decoder (Monkey's Audio) process all data at the beginning?

When the APE decoder decodes, it processes all the data at the beginning. Why? This operation costs too much time; it performs millions of dot-product computations. What is the purpose of this?


Downsampling a PCM audio buffer in javascript

I am attempting to downsample the audio I am getting from the AudioContext. I believe it is coming in at 44100, and I want it to be 11025. I thought I could just average every 3 samples and it would play back at the correct rate, but the pitch is too high, as if we were all on helium.


What is the correct way to downsample a Float32Array at 44100 to an Int16Array at 11025 samples?



var context = new Flash.audioContext();
var audioInput = context.createMediaStreamSource(stream);
var recorder = context.createScriptProcessor(null, 1, 1);
recorder.onaudioprocess = onAudio;
audioInput.connect(recorder);
recorder.connect(context.destination);

var onAudio = function (e) {
var left = e.inputBuffer.getChannelData(0);
bStream.write(Flash.convertFloat32ToInt16(left));
}

var convertFloat32ToInt16 = function(buffer) {
var l = buffer.length;
var point = Math.floor(l/3);
var buf = new Int16Array(point);
for (var x = l; x > 0;) {
var average = (buffer[x] + buffer[x-1] + buffer[x-2]) / 3;
buf[point] = average*0x7FFF;
point -= 1;
x -= 3;
}
return buf.buffer;
}

Get a segment of audio from a blob in javascript

I've used this recorder example which records audio and then encodes it into a blob of type "audio/wav" (which can be played by HTML5 audio elements).


If I want to retrieve only the first 5 seconds of this audio as a blob, I presume Blob.slice(start, end) could be used, but I don't know what to specify as the start and end indexes.


The audio sample rate is 44100Hz.


Convert videos with all contained subtitles and audio into mp4 via commandline using handbrake

I'd like to convert my videos with HandBrake from mkv to mp4. But I want all audio tracks and subtitles contained in the mkv to go into the new mp4 container!


I use the HandBrake 0.9.9 GUI because in this version you can predefine the number of audio tracks and subtitles by default under preferences ('Add all remaining' / 'Add all'). Now I'd like to achieve the same via HandBrakeCLI.


Combining an audio file with video file in python

I am writing a program in Python on RaspberryPi(Raspbian), to combine / merge an audio file with video file.


The format of the audio file is WAVE; the format of the video file is h264.


The audio and video were already recorded and created at the same time successfully; I just need to merge them now.


Can you please guide me on how to do that?


C# Disable Webbrowser Sound/Application sound

I would like to disable a WebBrowser control's sound, but I don't think that's possible. I saw that it is possible to disable an application's sound on systems newer than Windows XP; now I just need to know how, and I can't find it!


Current code :



Form.ActiveForm.Hide();
webBrowser1.ScriptErrorsSuppressed = true;
try
{
webBrowser1.Navigate(args[2], null, null, "User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; Trident/6.0; Xbox; Xbox One)");
}
catch (Exception ex)
{
Environment.Exit(0);
}


I don't think there is a webBrowser.noSound kind of setting; also, I used ActiveForm.Hide() to hide the web browser.


How to calculate the audio amplitude in real time (android)

I am working on an Android app where I need to calculate the audio amplitude in real time. As of now I am using MediaPlayer to play the track. Is there a way to calculate its amplitude in real time while playing it.


Here is my code:


int counterPlayer = 0; static double[] drawingBufferForPlayer = new double[100];



private byte[] mBytes;
private byte[] mFFTBytes;
private Visualizer mVisualizer;

public void link(final MediaPlayer player)
{
if(player == null)
{
throw new NullPointerException("Cannot link to null MediaPlayer");
}

// Create the Visualizer object and attach it to our media player.
mVisualizer = new Visualizer(player.getAudioSessionId());
mVisualizer.setCaptureSize(Visualizer.getCaptureSizeRange()[1]);
//mVisualizer.setMeasurementMode(Visualizer.MEASUREMENT_MODE_PEAK_RMS);

// Pass through Visualizer data to VisualizerView
Visualizer.OnDataCaptureListener captureListener = new Visualizer.OnDataCaptureListener()
{
@Override
public void onWaveFormDataCapture(Visualizer visualizer, byte[] bytes,
int samplingRate)
{
updateVisualizer(bytes);
}

@Override
public void onFftDataCapture(Visualizer visualizer, byte[] bytes,
int samplingRate)
{
updateVisualizerFFT(bytes);
}
};

mVisualizer.setDataCaptureListener(captureListener,
Visualizer.getMaxCaptureRate() / 2, true, true);
// Enabled Visualizer and disable when we're done with the stream
mVisualizer.setEnabled(true);
player.setOnCompletionListener(new MediaPlayer.OnCompletionListener()
{
@Override
public void onCompletion(MediaPlayer mediaPlayer)
{
mVisualizer.setEnabled(false);
}
});
}
public void updateVisualizer(byte[] bytes) {

int t = calculateRMSLevel(bytes);
Visualizer.MeasurementPeakRms measurementPeakRms = new Visualizer.MeasurementPeakRms();
int x = mVisualizer.getMeasurementPeakRms(measurementPeakRms);
mBytes = bytes;
}

/**
* Pass FFT data to the visualizer. Typically this will be obtained from the
* Android Visualizer.OnDataCaptureListener call back. See
* {@link android.media.audiofx.Visualizer.OnDataCaptureListener#onFftDataCapture }
* @param bytes
*/
public void updateVisualizerFFT(byte[] bytes) {
int t = calculateRMSLevel(bytes);
mFFTBytes = bytes;
}
public int calculateRMSLevel(byte[] audioData) {
//System.out.println("::::: audioData :::::"+audioData);
double amplitude = 0;
for (int i = 0; i < audioData.length; i++) {
amplitude += Math.abs((double) (audioData[i] / 32768.0));
}
amplitude = amplitude / audioData.length;
//Add this data to buffer for display
if (counterPlayer < 100) {
drawingBufferForPlayer[counterPlayer++] = amplitude;
} else {
for (int k = 0; k < 99; k++) {
drawingBufferForPlayer[k] = drawingBufferForPlayer[k + 1];
}
drawingBufferForPlayer[99] = amplitude;
}

updateBufferDataPlayer(drawingBufferForPlayer);
setDataForPlayer(100,100);

return (int)amplitude;
}
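
As an aside, I suspect calculateRMSLevel() above is not a true RMS and treats the Visualizer's 8-bit unsigned waveform bytes as if they were signed 16-bit samples; something along these lines is what I think it should look like (a sketch based on my reading of the capture format):

// Sketch of a true RMS over the waveform capture. The Visualizer delivers
// 8-bit unsigned mono PCM samples (0..255, centred on 128), not signed 16-bit values.
public double rmsLevel(byte[] waveform) {
    double sumOfSquares = 0;
    for (byte b : waveform) {
        double sample = ((b & 0xFF) - 128) / 128.0;   // map 0..255 to roughly -1.0..1.0
        sumOfSquares += sample * sample;
    }
    return Math.sqrt(sumOfSquares / waveform.length); // 0.0 = silence, ~1.0 = full scale
}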

HTML5 Audio - Not displaying length of track



I'm having some problems regarding HTML5 and audio.

The markup is standard and minimalistic:



<audio controls>
<source src="deliverAudio.cgi?test.mp3">
<em>Meh, your browser does not support HTML5 audio</em>
</audio>


As you can see this is completely minimalistic; the only quirk is that the audio content is delivered via a CGI script.

In fact this works - the audio is played - but the player does not show the complete length of the track; instead it displays the amount already played, and you can only navigate within already-played portions.

This behaviour applies to Firefox 31 and Internet Explorer 11; I have not tested with Chrome, but I highly expect it to be the same.


The CGI is very minimalistic too; it reads the audio file in binary mode and prints it out.

I really don't know where to go from here, because it actually plays but does not display the length.


Any help and useful comments are highly appreciated!

Thanks in advance!


How to give the audio file path in JavaScript for playing on push notification?

I have written the following code in JavaScript to play an audio file on receiving a push notification:


var my_media = new Media("http://ift.tt/1HiGO8W"); my_media.play();


Tried this also:


var my_media = new Media("/android_asset/www/soundw.mp3"); my_media.play();

and tried this also:


var my_media = new Media("soundw.mp3"); my_media.play();


but nothing is working. Please tell me what exact path I should give. How can I play audio files using JavaScript?


Sound Crashing client if user does not have sound

Hello, so I was adding sound to my client, and if the user does not have sound it crashes the client and comes up with this error:



Exception in thread "Thread-3" java.lang.NullPointerException
at Org.Game.Client.playMidi(Client.java:101)
at Org.Game.Client.processOnDemandQueue(Client.java:3702)
at Org.Game.Client.processGameLoop(Client.java:2959)
at Org.Game.RSApplet.run(RSApplet.java:214)
at Org.Game.Client.run(Client.java:5569)
at java.lang.Thread.run(Unknown Source)


Now I was wondering how I can check whether the user has sound and, if they don't, make it bypass that error.



public void playMidi(byte abyte0[]) {
    try {
        boolean quickSong = (prevSong > 0 ? true : false);
        boolean loopMusic = loop;
        if (midiPlayer.playing() && !quickSong) {
            midiPlayer.play(abyte0, loopMusic, midiVolume); //add fading to this one
        } else {
            midiPlayer.play(abyte0, loopMusic, midiVolume);
        }
    } catch (Exception e) { // assumed catch block (missing from the posted snippet)
        e.printStackTrace();
    }
}
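

A minimal sketch of one way to check whether MIDI playback is available at all, assuming the client ultimately sits on top of javax.sound.midi; playMidi could then return early when this check fails or when midiPlayer is null:


import javax.sound.midi.MidiSystem;
import javax.sound.midi.MidiUnavailableException;
import javax.sound.midi.Sequencer;

public final class SoundCheck {
    // Returns true if this machine can open a MIDI sequencer, i.e. MIDI sound is usable.
    public static boolean midiAvailable() {
        try {
            Sequencer sequencer = MidiSystem.getSequencer();
            sequencer.open();
            sequencer.close();
            return true;
        } catch (MidiUnavailableException e) {
            return false;
        }
    }
}


With that, playMidi could start with "if (midiPlayer == null || !SoundCheck.midiAvailable()) return;" so a missing sound device simply skips the music instead of crashing the client.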

set Audio Attributes in SoundPool.Builder class for API 21

I am following an Android Programming video lecture series which was designed before API 21. Hence it tells me to create a SoundPool variable in the following manner:



SoundPool sp = new SoundPool(5, AudioManager.STREAM_MUSIC, 0);
//SoundPool(int maxStreams, int streamType, int srcQuality)


However, I want to use this SoundPool for API 21 as well. So, I am doing this:



if((android.os.Build.VERSION.SDK_INT) == 21){
sp21 = new SoundPool.Builder();
sp21.setMaxStreams(5);
sp = sp21.build();
}
else{
sp = new SoundPool(5, AudioManager.STREAM_MUSIC, 0);
}


sp21 is a variable of Builder type for API 21 and sp is of SoundPool type.


This works very well with my AVD having API 21 and real device having API 19. (Haven't tried with a real device with API 21 but I think it will work well). Now, I want to set the streamType to USAGE_MEDIA in the if-block before sp = sp21.build();. So I type:



sp21.setAudioAttributes(AudioAttributes.USAGE_MEDIA);


But the Lint marks it in red and says:



The method setAudioAttributes(AudioAttributes) in the type SoundPool.Builder is not applicable for the arguments (int)



I know that even if I do not set it to USAGE_MEDIA it will be set to that by default. But I am asking for future reference, in case I have to set it to something else like USAGE_ALARM.


How should I proceed?


Please Help!


I have referred to Audio Attributes, SoundPool, SoundPool.builder and AudioManager.
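

For reference, a minimal sketch of what the API 21 branch could look like: setAudioAttributes expects an AudioAttributes object built with AudioAttributes.Builder rather than an int constant, and the same pattern works for USAGE_ALARM or any other usage.


if (android.os.Build.VERSION.SDK_INT >= 21) {
    AudioAttributes attributes = new AudioAttributes.Builder()
            .setUsage(AudioAttributes.USAGE_MEDIA)            // or USAGE_ALARM, USAGE_GAME, ...
            .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
            .build();
    sp = new SoundPool.Builder()
            .setMaxStreams(5)
            .setAudioAttributes(attributes)
            .build();
} else {
    sp = new SoundPool(5, AudioManager.STREAM_MUSIC, 0);
}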


mercredi 28 janvier 2015

No sound on Android emulator on Yosemite

So I just upgraded my Mac from Mavericks to Yosemite and installed the latest version of Android Studio with new emulator definitions, and now none of my emulators can output sound. They worked fine on Mavericks, but now I can't seem to get them to play sound at all. And it's not just my app: going to YouTube on the device, or Spotify, still doesn't output sound through my Mac. I have tried everything but can't get it to work. I have tried multiple devices and APIs, and none of them work. Any advice?


Manipulate QML Canvas from C++ in Qt5

In my Qt5 application I have some C++ and some QML working in harmony (aka sending signals back and forth).


At this point I want to implement a widget that shows a real-time visualization of a playing audio stream, in the form of the actual waveform, in my QML. So I wonder which alternative ways exist to solve this. Which alternative is the easiest to code, and which has the best performance?


My naive ideas are:



  • Create a Canvas in my QML and then draw directly to this canvas from C++

  • Send actual samples as a buffer to QML and draw them in canvas from js

  • Send actual samples as a buffer to QML and draw them in some other manner

  • Write a custom C++ widget and somehow display that in QML


PS: I already have access to the actual samples to generate the visualization from; however, if you have a clever solution to this as well, then I would be overjoyed!


Thanks


Google Chrome on iOS 8 does not respect the mute rocker. Is there any way to detect / compensate for this in javascript?

I have a website that plays sounds. It works great in mobile safari, playing sound when the phone is in an unmuted state. It does not play sound through the phone speakers when the phone is in a muted state. It plays sound through the headphones even when muted.


In Chrome, it always plays sound. If headphones are plugged in it only plays through the headphones, but if the phone is muted and no headphones are present it still plays sound through the phone speaker.


Is there a way for me to detect / turn off sounds if the phone is muted?


"resetting" an HTML5 audio element

I'm building an audio player, using the default audio controls. The user dynamically selects from a list which tracks to play.


When the <audio> is first created, no 'src' is defined and load() hasn't been called, and when I click the PLAY button, nothing happens.


Here's the problem: If the user clears all the selections, the player shouldn't have anything to play. I can't find a way (beyond allocating a new <audio> object) to "reset" the player.


If the user clicks the <audio>'s PLAY button it starts playing the last thing that was loaded. If I try to set the 'src' attribute to an empty string, the load() method turns it into some kind of URL. Hitting PLAY starts the player, even if its 'src' is garbage (at least in the browsers I've tried).


Some workarounds include



  • Creating my own controls.

  • Allocating a new <audio>. This situation shouldn't arise very often, so I guess it's OK, but I'd rather not do it.

  • Adding an EventListener("play", playHandler) to intercept when the selection list is empty. But then the player has already started.


I'm sure there are other remedies, but now I'm curious whether a 'reset' is possible.


Simple one button html5 audio player

I would really appreciate some help with this. I'm not all that great with code.


I need to create a one button audio player.


The button can be an image or a button with text toggle Play/Pause.


Also, if someone knows how, it would be great if it could use different playlists: one play/pause button, and then three different buttons for different playlists. This isn't a priority, but it would be great.


PS: I am going to use this on a WordPress site, if that matters.


How to play a simple sound in IOS with adobe air on background

Apple no longer allows my app to access the background, and I need to be able to play a simple sound in iOS with Adobe AIR when the app is in the background, without using the "audio" UIBackgroundModes.


If my app includes the "audio" UIBackgroundModes it is able to play a simple sound alert in the background, but if I remove the "audio" mode as Apple requested the app is not able to play a sound anymore.


Does anyone know how to play a simple sound in the background with Adobe AIR?


Realtime Band-Limited Impulse Train Synthesis using SDL mixer

I'm trying to implement an audio synthesizer using this technique:


http://ift.tt/1lsSMNy


I'm doing it in standard C, using SDL2_Mixer library.


This is my BLIT function implementation:



double blit(double angle, double M, double P) {
double x = M * angle / P;
double denom = (M * sin(M_PI * angle / P));
if (denom < 1)
return (M / P) * cos(M_PI * x) / cos(M_PI * x / M);
else {
double numerator = sin(M_PI * x);
return (M / P) * numerator / denom;
}
}


The idea is to combine it to generate a square wave, following the paper's instructions. I set up SDL2_mixer with this configuration:



SDL_AudioSpec *desired, *obtained;
SDL_AudioSpec *hardware_spec;

desired = (SDL_AudioSpec*)malloc(sizeof(SDL_AudioSpec));
obtained = (SDL_AudioSpec*)malloc(sizeof(SDL_AudioSpec));

desired->freq=44100;
desired->format=AUDIO_U8;
desired->channels=1;
desired->samples=2048;
desired->callback=create_rect;
desired->userdata=NULL;


And here's my create_rect function. It creates a bipolar impulse train, then integrates its value to generate a band-limited rect function.



void create_rect(void *userdata, Uint8 *stream, int len) {
    static double angle = 0;
    static double integral = 0;
    int i = 0;
    // This is the freq of my tone
    double f1 = tone_table[current_wave.note];
    // Sample rate
    double fs = 44100;
    // Pulse
    double P = fs / f1;
    int M = 2 * floor(P / 2) + 1;

    double oldbipolar = 0;
    double bipolar = 0;
    for (i = 0; i < len; i++) {
        if (++angle > P)
            angle -= P;
        double angle2 = angle + floor(P / 2);
        if (angle2 > P)
            angle2 -= P;

        bipolar = blit(angle2, M, P) - blit(angle, M, P);

        integral += (bipolar + oldbipolar) * 0.5;
        oldbipolar = bipolar;
        *stream++ = (integral + 0.5) * 127;
    }
}


My problem is: the resulting wave is quite OK, but after a few seconds it starts to make noise. I tried to plot the result; here it is:


(Plots: standard version, more zoomed version, problem area.)


Any idea?


Matlab Psychtoolbox: left to right moving sound using openAL

I'm using Matlab Psychtoolbox with OpenAL to make a pink noise burst move slowly from left to right in virtual space. The sound should move about 1 meter in front of the listener, in a straight line. I tried to modify the OpenAL demo I found (i.e. AudioTunnel3D). I managed to have the sound move from left to right by updating the position on the x axis with a for loop, though I guess there is a better way, e.g. by using all the functions (AL.VELOCITY, AL.DIRECTION) that OpenAL provides.

Plus, the result I got is not satisfactory: the sound gets closer to the left ear, then kind of jumps to the other side and slowly fades away. Right now I'm a bit stuck; can you help me move one step forward, maybe by giving me a hint about how to use the velocity and direction parameters properly? This is my ugly code, thanks a lot in advance for your help! (A good tutorial for dummies would also be good.)



nsources = 1;

% Establish key mapping: ESCape aborts, Space toggles between auto-
% movement of sound source or user mouse controlled movement:
KbName('UnifyKeynames');
space = KbName('space');
esc = KbName('ESCAPE');

% Initialize OpenAL subsystem at debuglevel 2 with the default output device:
InitializeMatlabOpenAL(2);

% Generate one sound buffer:
buffers = alGenBuffers(nsources);

% Query for errors:
alGetString(alGetError)

% Try to load some impressive sound...
sounddir = [PsychtoolboxRoot 'PsychDemos/SoundFiles/'];
soundfiles = dir([sounddir '*.wav']);

alListenerfv(AL.POSITION, [0, 0, 0]);
alListenerfv(AL.VELOCITY, [0, 0, 0]);

if IsOSX
alcASASetListener(ALC.ASA_REVERB_ON, 1);
alcASASetListener(ALC.ASA_REVERB_QUALITY, ALC.ASA_REVERB_QUALITY_Max);
alcASASetListener(ALC.ASA_REVERB_ROOM_TYPE, ALC.ASA_REVERB_ROOM_TYPE_Cathedral);
end

% Create a sound source:
sources = alGenSources(nsources);

perm = randperm(nsources);

%Assign soundname
soundname = [sounddir 'motor_a8.wav'];

% Load it...
[mynoise freq]= wavread(soundname);
mynoise = mynoise(:, 1);

% Convert it...
mynoise = int16(mynoise * 32767);

% Fill our sound buffer with the data from the sound vector. Tell AL that its
% a 16 bpc, mono format, with length(mynoise)*2 bytes total, to be played at
% a sampling rate of freq Hz. The AL will resample this to the native device
% sampling rate and format at buffer load time.
alBufferData( buffers, AL.FORMAT_MONO16, mynoise, length(mynoise)*2 , freq);

% Attach our buffer to it: The source will play the buffers sound data.
%alSourceQueueBuffers(sources(i), 1, buffers(i));

alSourceQueueBuffers(sources, 1, buffers);
%alSourcei(sources, AL.BUFFER, buffers);


% Switch source to looping playback: It will repeat playing the buffer until
% its stopped.
alSourcei(sources, AL.LOOPING, AL.TRUE);

% Set emission volume to 100%, aka a gain of 1.0:
alSourcef(sources, AL.GAIN, 1);

% alSourcef(sources(i), AL.CONE_INNER_ANGLE, 30);
% alSourcef(sources(i), AL.CONE_OUTER_ANGLE, 270);

pos= -5;

alSource3f(sources, AL.POSITION, pos, 0, -5);
alSource3f(sources, AL.DIRECTION, 1, 0, 0);
alSource3f(sources, AL.VELOCITY, 0, 0, 0); % used to simulate a Doppler effect


if IsOSX
% Source emits some sound that gets reverbrated in room:
alcASASetSource(ALC.ASA_REVERB_SEND_LEVEL, sources, 0.0);
end


% Start playback for these sources:
alSourcePlay(sources);




while 1
% Check keyboard:
[isdown dummy, keycode]=KbCheck;
if isdown
if keycode(esc)
break;
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%HERE IS THE LOOP TO MAKE THE SOUND MOVING%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
for i=1:nsources
if pos < 5
pos = pos+0.001;
x = pos;
y = 0;
z = 0;
end
alSource3f(sources, AL.POSITION, x, y, z);


end

end

end % end of the while loop (added so the loop is closed)


% Wait a bit:
WaitSecs(0.1);

% Delete sources:
alDeleteSources(nsources, sources);


% Wait a bit:
WaitSecs(0.1);

% Delete buffer:
alDeleteBuffers(nsources, buffers);

% Wait a bit:
WaitSecs(0.1);


% Shutdown OpenAL:
CloseOpenAL;

% Done. Bye.
return;

JavaScript audio multi track player

I am new to this, and I hope someone can help me finish this JavaScript code by enabling it to pull the image for each of the audio links provided.


It should work just as it does now with play, pause, and next song.


Here is the full code:





</script>


<script type="text/javascript">

function loadPlayer() {
var audioPlayer = new Audio();
audioPlayer.controls="";
audioPlayer.addEventListener('ended',nextSong,false);
audioPlayer.addEventListener('error',errorFallback,true);
document.getElementById("player").appendChild(audioPlayer);
nextSong();
}
function nextSong() {
if(urls[next]!=undefined) {
var audioPlayer = document.getElementsByTagName('audio')[0];
if(audioPlayer!=undefined) {
audioPlayer.src=urls[next];
audioPlayer.load();
audioPlayer.play();
next++;
} else {
loadPlayer();
}
} else {
alert('Error due to end Of Stations list or the Web Browser is not supported. Please use with Google Chrome');
}
}
function errorFallback() {
nextSong();
}
function playPause() {
var audioPlayer = document.getElementsByTagName('audio')[0];
if(audioPlayer!=undefined) {
if (audioPlayer.paused) {
audioPlayer.play();
} else {
audioPlayer.pause();
}
} else {
loadPlayer();
}
}
function pickSong(num) {
next = num;
nextSong();
}


var urls = new Array();

urls[-1] = 'http://ift.tt/1tr9ENU';
urls[-2] = 'http://ift.tt/1Bp6Yye';
urls[-3] = 'http://ift.tt/1Bp6Yye';
urls[-4] = 'http://ift.tt/1Bp6YOu';
var next = 0;

</script>



<!-- player start -->
<a href="#" onclick="playPause()" id="player" title="Play">Play</a>
<a href="#" onclick="playPause()" id="player" title="Stop">Stop</a>
<a href="#" onclick="nextSong()" id="player" title="Next Station">Next Track</a>

<!-- player ends -->

<br>
<br>
<!-- img links start -->

<a href="#" onclick="pickSong(-1)">
<img src="radio/radio almazighia.png" />
</a>
<a href="#" onclick="pickSong(-2)">
<img src="radio/alwatania.png" />
</a>
<a href="#" onclick="pickSong(-3)">
<img src="radio/inter.jpg" />
</a>
<a href="#" onclick="pickSong(-4)">
<img src="radio/france maghrib.jpg" />
</a>

<!-- img links ends -->



Java: Append zeros (silence) to a sound file

I want to implement a metronome app in Java for playing beats in complicated rhythm patterns. There are many kinds of beats (drums and other percussion instruments), which is why using timers and threads may not give precise, optimal performance. I decided to first generate the sound by adding silence intervals for each instrument and then to mix the instruments together for better performance (not sure if this is the best solution, but anyway). Now my problem is adding the silence intervals to each beat.



public AudioStream create(String file, String rhythm) {
    InputStream in = null;
    AudioStream audioStream = null;
    try {
        in = new FileInputStream(path + file);
        audioStream = new AudioStream(in);
    } catch (FileNotFoundException e1) {
        e1.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
    return audioStream;
}


This function should create and return the sample for one instrument. So how can I add milliseconds of silence to the file and then return the final sample?
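

A minimal sketch of one way to append silence, using javax.sound.sampled instead of the sun.audio AudioStream class above. It assumes signed PCM data (where all-zero bytes decode to silence); the class and file names are placeholders:


import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import java.io.ByteArrayInputStream;
import java.io.File;
import java.io.SequenceInputStream;

public class SilencePadder {
    // Appends silenceMillis of silence to a PCM file and writes the result as WAV.
    public static void appendSilence(File in, File out, int silenceMillis) throws Exception {
        AudioInputStream source = AudioSystem.getAudioInputStream(in);
        AudioFormat format = source.getFormat();

        // For signed PCM, zero-valued bytes are silence.
        long silenceFrames = (long) (format.getFrameRate() * silenceMillis / 1000.0);
        byte[] silence = new byte[(int) (silenceFrames * format.getFrameSize())];
        AudioInputStream silenceStream = new AudioInputStream(
                new ByteArrayInputStream(silence), format, silenceFrames);

        // Concatenate the original audio and the generated silence into one stream.
        AudioInputStream combined = new AudioInputStream(
                new SequenceInputStream(source, silenceStream), format,
                source.getFrameLength() + silenceFrames);

        AudioSystem.write(combined, AudioFileFormat.Type.WAVE, out);
    }
}


Calling appendSilence(new File("kick.wav"), new File("kick_padded.wav"), 250) would, under those assumptions, produce the kick sample followed by 250 ms of silence; the padded per-instrument samples could then be mixed together.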


What data is being transferred in Bluetooth audio distribution profile?

I wonder what data specifically is transferred between a device (e.g. a mobile phone) and Bluetooth headphones. This is likely done using one of the audio distribution profiles. I am aiming to minimize anything that could in any way modify the original audio data.


To put it in an example:



  1. I have an MP3 file; the mobile device decompresses the data. I suppose this is not done in a way that would degrade the sound quality.

  2. Magic in the phone and in the transfer.

  3. What is now at the input of the headphones? Do I get the exact output of step 1? Or can there be any disturbances from the phone's bus, as with the regular sound cards built into computers?


Many thanks for your answers.


Monitoring Audio Input on iPhone Without Recording it?

I am trying to write an app in Apple Swift that monitors audio from the microphone and displays the volume level on a VU-meter-style graph. I know how to do it using AVAudioRecorder, but I don't want to record and save the audio; I just want to monitor and observe the volume levels, as I will be monitoring the audio for multiple hours and saving it to the phone would take up tons of space.


Can anybody lead me in the right direction as to how I can do this? Thanks!


I do not have any code to show, as I am just looking for the right direction to go, not debugging help.


Node.js streams audio only when alsa's arecord stops recording

I've been trying to create a Node.js audio streaming script using Socket.io and Node.js microphone library.


The problem is that Socket.io does not pipe the audio to a remote server when SoX/ALSA is still recording the audio.


audio-stream-tx (client):



var io = require('socket.io-client');
var ss = require('socket.io-stream');
var mic = require('microphone');
var fs = require('fs');

var socket = io.connect('ws://localhost:25566');
var stream = ss.createStream();

ss(socket).emit('stream-data', stream);

mic.startCapture();

mic.audioStream.pipe(stream);

process.on('SIGINT', function () {
mic.stopCapture();
console.log('Got SIGINT. Press Control-D to exit.');
});


audio-stream-rx (server):



var io = require('socket.io')();
var ss = require('socket.io-stream');
var fs = require('fs');

var port = 25566;

io.on('connection', function (socket) {
ss(socket).on('stream-data', function(stream, data) {
console.log('Incoming> Receiving data stream.');
stream.pipe(fs.createWriteStream(Date.now() + ".wav", { flags: 'a' }));
});
});

io.listen(port);


The microphone library spawns a SoX or an ALSA command (depending on the platform) to record the microphone's audio. The scripts above work fine, except that the audio data is only piped to the stream once mic.stopCapture() is called.


Is there a workaround to force socket.io-stream to stream audio data the moment mic.startCapture() is called?


Intel XDK: Playing audio that is not in the application folder

I'm making an app with JavaScript and wrapping it with Intel XDK. I read here that the method "requires the sound file to be included within the application file folder".


Is there a way to play a sound by its full path from the web (like http://ift.tt/1v3ggDe)? Should I use an audio library for that?


Can't convert MP4 to MP3 with FFmpeg in Java

I want to extract an audio file (.mp3) from a video file (.3gp) in Android.


Now I'm trying to use FFmpeg in a Java file, but I only get a broken MP3 file: it only produces a short noise. How can I change it into a proper audio file?


Here is the code I tried.



public void convertToMp3 (String audioFilePath, String soundTitle){

try {
//TODO: Grabber
FrameGrabber grabber = new FFmpegFrameGrabber( audioFilePath ); // 3gpfile
grabber.setFormat("mp3");
grabber.setSampleFormat(6);
grabber.setSampleRate(44100);
grabber.setFrameRate(30.0);
grabber.setAudioBitrate(192000);
grabber.setAudioChannels(2);
grabber.start();


//TODO: Recorder
String stragePath = Environment.getExternalStorageDirectory().getAbsolutePath();
FrameRecorder recorder = new FFmpegFrameRecorder(stragePath + "/Music/"+soundTitle+".mp3",grabber.getAudioChannels());
recorder.setSampleFormat(grabber.getSampleFormat());
recorder.setSampleRate(44100);
recorder.setSampleFormat(6);
recorder.setAudioBitrate(128000);
recorder.setAudioChannels(2);
recorder.setFormat("mp3");
recorder.setFrameRate(30.0);
recorder.setAudioCodec(avcodec.AV_CODEC_ID_MP3);
recorder.start();


Frame frame;
while ((frame = grabber.grabFrame()) != null) {
recorder.record(frame);
}
recorder.stop();
grabber.stop();

} catch (FrameGrabber.Exception e) {
e.printStackTrace();
} catch (FrameRecorder.Exception e) {
e.printStackTrace();
}
}


I used these libraries in build.gradle (app):



dependencies {
compile group: 'org.bytedeco', name: 'javacv', version: '0.10'
compile group: 'org.bytedeco', name: 'javacpp', version: '0.10'
compile group: 'org.bytedeco.javacpp-presets', name: 'opencv', version: '2.4.9-0.9', classifier: 'android-arm'
compile group: 'org.bytedeco.javacpp-presets', name: 'ffmpeg', version: '2.3-0.9', classifier: 'android-arm'
}


Do you have any idea? Any advice will be appreciated.


what is the best way to compare two recorded sounds and see if they are close?

I want to know whether there are any techniques available to compare two recorded sounds (different voices) and see if they are close. That means if two different people pronounce the same word, I want to identify that both words are the same. I don't have any idea how to do this. I don't even know whether it is possible or not. Please help me if you know anything about this. Any idea is highly appreciated. Thank you.


C++ - CGI - Audio not working properly



I have a website with an HTML5 audio element whose audio data shall be served via a cgi script.

The markup is rather simple:



<audio controls>
<source type="audio/mpeg" src="audio.cgi?test.mp3">
<em>Me, your browser does not support HTML5 audio</em>
</audio>


The CGI is written in C++ and is pretty simple too. I know it needs optimizing (e.g. reading the whole file into a buffer is really bad), but that's not the point.

This basic version kinda works, meaning the audio is played, but the player does not display the full length and one can only seek through the track in parts that have already been played.


If the audio file is placed in a location accessible via the web-server everything works fine.

The difference between these two methods seems to be that the client issues a partial-content request if the latter method is chosen, and an ordinary 200 if I try to serve the audio data via the CGI all at once.


I wanted to implement partial-content serving in the CGI, but I failed to read out the environment variable for the Range request header, which is needed to serve the requested part of the data.


This leads me to my questions:



  1. Why does the HTML5 player not display the full length of the track if I'm serving the audio data via the cgi script?

  2. Would implementing a partial-content handling solve this issue?

  3. If the partial-content handling is the right approach, how would I access the required environment variables in apache, since I have not found anything about them? Do I need to send a complete HTTP header indicating partial-content is coming, so the client knows he needs to send the required fields?


This is the source of the .cgi:



void serveAudio()
{
    // Tried these, but they were not the right ones:
    //getenv("HTTP_RANGE");
    //getenv("HTTP_CONTENT_RANGE");

    // Read the whole file into memory (not optimal, but keeps the example simple).
    ifstream in(audioFile, ios::binary | ios::ate);
    size_t size = in.tellg();
    char *buffer = new char[size];

    in.seekg(0, ios::beg);
    in.read(buffer, size);

    // Emit the CGI header, then the raw MP3 bytes.
    cout << "Content-Type: audio/mpeg\n\n";
    cout.write(buffer, size);

    delete[] buffer;
}


Any suggestions and helpful comments are appreciated!

Thanks in advance!


mardi 27 janvier 2015

Multiple Outputs using ffmpeg

I need to convert an audio file (.mp3) to another format (.wav) and also to an HTTP stream.


I'm using this to convert to 2 different files at a time. ./ffmpeg -i test.mp3 -c:a mp3 -f tee -map 0:a "1.wav|2.wav"


But I cannot convert to a file and an HTTP stream at the same time. I used this syntax:



ffmpeg -i input.file -c:v libx264 -c:a mp2 -f tee -map 0:v -map 0:a "output.mkv|[f=mpegts]udp://10.0.1.255:1234/"



It's not working.


speed playback for audio streaming in iOS using AVPlayer

I'm doing iOS Swift project and my only problem right now is about adjusting speed playback for audio streaming.


Here is what I've done.


I use AVPlayer for this project



streamingPlayer = AVPlayer(playerItem: audioItem)


To adjust speed



streamingPlayer.rate = slider.value


For the code above, I started testing with my test server, which is a normal link to access the audio file directly, and I found that there is no problem with adjusting the speed rate. Every function works fine.


I got a problem when this project was required to use links with more security. To access the audio, it has to request through .ashx. The problem is that the speed rate can be adjusted to be slower (AVPlayer rate < 1), but the speed rate cannot be made faster (the normal rate is 1, so the problem is that the rate cannot be greater than 1).


I also tried to use the function below but it still didn't help.



audioItem.canPlayFastForward


Another weird problem: after it was tested with many users, some users who have very high-speed internet don't have the speed problem. I have tested on speeds between 15 and 30 Mbps and still got this problem. It's really confusing me. I assume it is a problem with the connection to the server, but I don't want users to need a very high-speed connection just to use this app. What can I do?


Please help me if anyone knows about this issue or has some recommendation. I have looked through the documentation and tried almost every function provided by AVPlayer, but it didn't help.


Here is the example link if you can help: http://ift.tt/1zWpceN (this link will expire soon).


Thank you