Wednesday, December 31, 2014

Seek issue with ffmpeg

I implemented an audio player using ffmpeg to play audio files of all formats on Android. I used the following code to seek within a song.



int64_t seekTime = av_rescale_q(seekValue * AV_TIME_BASE,
                                AV_TIME_BASE_Q,
                                fmt_ctx->streams[seekStreamIndex]->time_base);

int64_t seekStreamDuration = fmt_ctx->streams[seekStreamIndex]->duration;

int flags = AVSEEK_FLAG_BACKWARD;
if (seekTime > 0 && seekTime < seekStreamDuration)
    flags |= AVSEEK_FLAG_ANY;

int ret = av_seek_frame(fmt_ctx, seekStreamIndex, seek_target, flags);
if (ret < 0)
    ret = av_seek_frame(fmt_ctx, seekStreamIndex, seekTime, flags);

avcodec_flush_buffers(dec_ctx);


It works fine for most songs, but some MP3 files get a duration problem. For example, if a song's length is 2 minutes and I seek to some position, the song finally ends at 2 minutes 10 seconds. I get this issue only with MP3 files; without seeking, the same song ends at the exact time. I am using ffmpeg 2.1, and the same code works fine with ffmpeg 0.11.1. Please provide any information about this issue.


decoding AIF url from Parse.com in Chrome using javascript

Parse.com supplies me with a url for a .aif audio file. Obviously, if I use an audio tag, playback doesn't work in Chrome, since .aif isn't supported in Chrome. So I thought about using aurora.js: http://ift.tt/1zzadlv but that bad boy uses XMLHttpRequest, and I'm requesting from a different domain, so that doesn't work because it violates the same-origin policy. What should I do?


Concatenating 2 or more .wav files in vb.net

I updated an answer found at http://ift.tt/1rD5ilR from C# to VB, but I get a strange result. The audio is faster than it should be, and the 3 wav files (5 seconds, 1 second and 6 seconds) amount to a file of just over 6 seconds. All the files have the same WaveFormat (22050 Hz sample rate, mono, 32-bit float).


In this project I am using NAudio.


My code:



Public Shared Sub Concatenate(outputFile As String, sourceFiles As IEnumerable(Of String))
Dim buffer As Byte() = New Byte(1023) {}
Dim waveFileWriter As WaveFileWriter = Nothing
Try
For Each sourceFile As String In sourceFiles
Using reader As New WaveFileReader(sourceFile)
If waveFileWriter Is Nothing Then
' first time in create new Writer
waveFileWriter = New WaveFileWriter(outputFile, reader.WaveFormat)
Else
If Not reader.WaveFormat.Equals(waveFileWriter.WaveFormat) Then
Throw New InvalidOperationException("Can't concatenate WAV Files that don't share the same format")
End If
End If
Dim read As Integer
While (reader.Read(buffer, 0, buffer.Length) > 0)
read = reader.Read(buffer, 0, buffer.Length)
waveFileWriter.Write(buffer, 0, read)
End While
End Using
Next
Finally
If waveFileWriter IsNot Nothing Then
waveFileWriter.Dispose()
End If
End Try
End Sub


The function will be called like this:



Dim m_oEnum As IEnumerable(Of String) = New List(Of String)() From {"c:\1.wav", "c:\2.wav", "c:\3.wav"}
Concatenate("c:\joined.wav", m_oEnum)


Can anyone help me with this? I have a suspicion that it may have something to do with the sample format being 32-bit float.


c# What are these volume changing values/messages

I found this script to change the system sound volume and it works. But what are these constant volume codes called, and where can I find a full list of codes that do other things?



[DllImport("user32.dll")]
static extern IntPtr SendMessage(IntPtr hWnd, int Msg, IntPtr wParam, IntPtr lParam);

// WM_APPCOMMAND window message; the volume constants below are passed in lParam
const int WM_APPCOMMAND = 0x319;

//Volume codes, or messages, or whatever they are called
const int VOLUME_MUTE = 0x80000;
const int VOLUME_DOWN = 0x90000;
const int VOLUME_UP = 0xA0000;

SendMessage(this.Handle, WM_APPCOMMAND, IntPtr.Zero, (IntPtr)VOLUME_UP);

Program not working when playing song

I'm having trouble putting background music in my WinForms C# program. I'm kinda new to programming, so be gentle hahahaha >.<


Basically what I have tried are the following:


1) Using Sound Player



System.Media.SoundPlayer playsong = new System.Media.SoundPlayer();
playsong.Stream = Properties.Resources.song;
playsong.PlayLooping();


This kind of works, but only for a random few seconds. The program would, after a random amount of time, suddenly say "[name of program] has stopped working"


2) Using Sound Player, but with timer


I was guessing that maybe there was some problem with PlayLooping, so I made 2 timers: one to play the song using .play(), and another timer to stop playing with .stop(). Every 30 seconds, timer 1 would play the song, while every 29 seconds timer 2 would stop playing.


This also works for a random few seconds and the program would also say "[name of program] has stopped working".


3) Using Windows Media Player


I did all the COM stuff and wrote the following code:



WMPLib.WindowsMediaPlayer wplayer = new WMPLib.WindowsMediaPlayer();

wplayer.URL = Application.StartupPath + @"/song.mp3";


This works for a very short amount of time before the music stops playing. Program doesn't stop working, just that the music cuts and doesn't play again till I press the button to make the program run the code again.


I have no idea where to start; there are no errors, nothing. Sigh.


It should be noted that I have other code besides the lines above, including in my class files and such, although I really don't think it would affect playing the song.


Help would be much appreciated. If any more information would help, I would be glad to provide them.


Wait for multiple audio files to finish loading then play them in sequence

HTML5 has onloadeddata for calling a method when the audio is loaded, and you can add listeners for a callback when it's finished playing. But how would it work with multiple audio files? How do you trigger a call when all of them are loaded?


What are the correct arguments for requestaudiofocus?

I am new to Android and Java. I have been working with the MediaPlayer and AudioManager examples provided by the Android Developer and other websites.


What I have noticed is that for the call to requestAudioFocus() there seems to be two separate signatures that are used. For example, from the http://ift.tt/1gulC2h site there is:



AudioManager audioManager = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
int result = audioManager.requestAudioFocus(this, AudioManager.STREAM_MUSIC,
AudioManager.AUDIOFOCUS_GAIN);

if (result != AudioManager.AUDIOFOCUS_REQUEST_GRANTED) {
// could not get audio focus.
}


With the following text:


"The first parameter to requestAudioFocus() is an AudioManager.OnAudioFocusChangeListener, whose onAudioFocusChange() method is called whenever there is a change in audio focus. Therefore, you should also implement this interface on your service and activities. For example:" (With the following code:)



class MyService extends Service
implements AudioManager.OnAudioFocusChangeListener {
// ....
public void onAudioFocusChange(int focusChange) {
// Do something based on focus change...
}
}


Then from the site: http://ift.tt/10sElNh there is:



AudioManager am = mContext.getSystemService(Context.AUDIO_SERVICE);
...

// Request audio focus for playback
int result = am.requestAudioFocus(afChangeListener,
// Use the music stream.
AudioManager.STREAM_MUSIC,
// Request permanent focus.
AudioManager.AUDIOFOCUS_GAIN);

if (result == AudioManager.AUDIOFOCUS_REQUEST_GRANTED) {
am.registerMediaButtonEventReceiver(RemoteControlReceiver);
// Start playback.
}


with:



OnAudioFocusChangeListener afChangeListener = new OnAudioFocusChangeListener() {
public void onAudioFocusChange(int focusChange) {
if (focusChange == AUDIOFOCUS_LOSS_TRANSIENT) {
// Pause playback
} else if (focusChange == AudioManager.AUDIOFOCUS_GAIN) {
// Resume playback
} else if (focusChange == AudioManager.AUDIOFOCUS_LOSS) {
am.unregisterMediaButtonEventReceiver(RemoteControlReceiver);
am.abandonAudioFocus(afChangeListener);
// Stop playback
}
}
};


I've seen this dichotomy across numerous sites giving sample code for handling changes in audio focus. My understanding is that "this" provides context of the application's current state. I do not understand why in some cases "this" is the correct parameter while in other cases a handle to a change listener is the correct parameter when calling requestAudioFocus().


In fact the first example I provided states the first parameter should be an AudioManager.OnAudioFocusChangeListener. But "this" is used.


If you could explain why "this" is used instead of an AudioManager.OnAudioFocusChangeListener as the parameter, it would be greatly appreciated. Thanks in advance and Happy New Year! Jim
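

Worth noting: both snippets pass the same type of argument. requestAudioFocus() always takes an AudioManager.OnAudioFocusChangeListener as its first parameter, and "this" only compiles because the enclosing class (MyService in the first example) itself implements that interface; it is not about application state. Below is a minimal sketch of the two equivalent call styles, with hypothetical class names, purely for illustration.


import android.app.Activity;
import android.content.Context;
import android.media.AudioManager;
import android.os.Bundle;

public class FocusActivity extends Activity
        implements AudioManager.OnAudioFocusChangeListener {

    private AudioManager audioManager;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        audioManager = (AudioManager) getSystemService(Context.AUDIO_SERVICE);

        // Variant 1: this Activity is itself the listener, so "this" is valid.
        int r1 = audioManager.requestAudioFocus(this,
                AudioManager.STREAM_MUSIC, AudioManager.AUDIOFOCUS_GAIN);

        // Variant 2: a standalone listener object does the same job; this is the
        // only option when the enclosing class does not implement the interface.
        AudioManager.OnAudioFocusChangeListener standalone =
                new AudioManager.OnAudioFocusChangeListener() {
                    @Override
                    public void onAudioFocusChange(int focusChange) {
                        // react to focus loss/gain for variant 2 here
                    }
                };
        int r2 = audioManager.requestAudioFocus(standalone,
                AudioManager.STREAM_MUSIC, AudioManager.AUDIOFOCUS_GAIN);
    }

    @Override
    public void onAudioFocusChange(int focusChange) {
        // called for variant 1
    }
}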


Sharing Audio IOS

I am making a music app and I was wondering what options there would be for sharing audio on iOS. I have googled around and found out about MMS, but you can't send MMS over wifi, only through a data plan. Are there any other options that would allow sending an audio file through a text message or something to that degree?


How to compare Vector2s? (AndEngine)

I'm working on an app, and I want a sound to play when two objects collide. However, I realized that this will just cause the sound to play a lot, so I want to modify it so the sound only plays when one of the objects is moving above a certain speed. This is my code:



public void beginContact(Contact contact) {
if(contact.getFixtureA().getBody().getUserData() == Box.class &&
contact.getFixtureA().getBody().getLinearVelocity().len() >= new Vector2(1,1).len())
res.boxCollision.play();


The Box class is just one of my classes that I want to trigger the sound. However, this does not work, and I'm not sure how you would make it work. Any ideas? This is not the entire function; it is much longer, so I cut most of it out.
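

One way to express "only when the object is moving above some speed" is to compare the velocity's squared length against a plain float threshold instead of building a new Vector2(1,1) just for its len(). This is only a rough sketch, assuming the Box2D Vector2 API used by AndEngine's physics extension and a made-up MIN_SOUND_SPEED constant.


// Hypothetical threshold: only play the collision sound above this speed.
private static final float MIN_SOUND_SPEED = 1.0f;

@Override
public void beginContact(Contact contact) {
    Object userDataA = contact.getFixtureA().getBody().getUserData();
    // len2() is the squared length, so compare against the squared threshold
    // and avoid a square root on every contact.
    float speedSquared = contact.getFixtureA().getBody().getLinearVelocity().len2();

    if (userDataA == Box.class && speedSquared >= MIN_SOUND_SPEED * MIN_SOUND_SPEED) {
        res.boxCollision.play();
    }
}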


"ViewController.Type does not have a member named 'var

I've been working on an app with Swift on iOS where, when you press a button, it makes a sound (I'm a beginner). This block of code has been giving me problems, though: the code works when I just put "The Wilhelm scream sound effect" in directly, but when I try to use the string constant it gives me the "ViewController.Type does not have a member named scream" error.


I've been stuck on this for a while, so any help would be appreciated.



class ViewController: UIViewController {

required init(coder aDecoder: NSCoder) {
fatalError("init(coder:) has not been implemented")
}


let scream: String = "The Wilhelm scream sound effect"



var pianoSound = NSURL(fileURLWithPath: NSBundle.mainBundle().pathForResource(scream, ofType: "mp3")!)
var audioPlayer = AVAudioPlayer()
}

Simple app for android

I have this simple Android app and I have to have it done by Friday, so I need help, as I don't have internet at home.


The program is simple. It's a timer for a game of cards.


When the app starts, you long-press to begin the routine. The routine consists of playing a 3.40-second mp3 of 4 beeps. When the time reaches 3 seconds (the 4th beep), the screen goes red until the sound ends, and immediately the next sound begins (the next player's turn).


If you tap the screen before 3 seconds, the sound starts over (the next player's round).


This is so that players don't take more than 3 seconds on their moves.


Also, if the screen goes red, touch is inactive until the next sound.


You long-press when you win, and it returns to the beginning with a white screen and no sounds playing.


It looks simple, but I can't program this at the moment because I don't have internet, so I need help.
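

This is not a finished app, only a rough sketch of the timing logic described above, under a few assumptions that are not from the original post: a single full-screen View, a hypothetical res/raw/beeps sound resource, and a fixed 3-second threshold.


import android.app.Activity;
import android.graphics.Color;
import android.media.MediaPlayer;
import android.os.Bundle;
import android.os.Handler;
import android.view.View;

public class CardTimerActivity extends Activity {

    private static final long RED_AT_MS = 3000;   // screen goes red at the 4th beep
    private final Handler handler = new Handler();
    private MediaPlayer player;
    private View root;
    private boolean locked = false;               // true while the screen is red

    private final Runnable goRed = new Runnable() {
        @Override
        public void run() {
            locked = true;                        // ignore taps until the sound ends
            root.setBackgroundColor(Color.RED);
        }
    };

    // Starts (or restarts) one player's turn: white screen, beeps from the top.
    private void startTurn() {
        locked = false;
        root.setBackgroundColor(Color.WHITE);
        handler.removeCallbacks(goRed);
        if (player != null) {
            player.release();
        }
        player = MediaPlayer.create(this, R.raw.beeps);   // hypothetical 3.40 s clip
        player.setOnCompletionListener(new MediaPlayer.OnCompletionListener() {
            @Override
            public void onCompletion(MediaPlayer mp) {
                startTurn();                      // sound ended: next player's turn
            }
        });
        player.start();
        handler.postDelayed(goRed, RED_AT_MS);
    }

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        root = new View(this);
        root.setBackgroundColor(Color.WHITE);
        setContentView(root);

        // Long press begins the routine; a tap before 3 seconds restarts the sound.
        root.setOnLongClickListener(new View.OnLongClickListener() {
            @Override
            public boolean onLongClick(View v) {
                startTurn();
                return true;
            }
        });
        root.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                if (!locked && player != null && player.isPlaying()) {
                    startTurn();
                }
            }
        });
    }
}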


how to make a screen recorder in c#

I'd like to make a screen recorder in C# that uses no PictureBoxes, saves the file as an mp4 or avi file, and records the full screen with sound from the microphone. I would also like it to be simple and not require downloading any other things. Thanks.


Playing MP3s in Java from within the Project

I am working on a Java project that involves playing mp3 files. I want my application to play the files from within the project, so I have the music files stored in a folder called music, which is in a source folder called resources. This is the code I have right now, but when I run it I get a Bitstream errorcode 102. I can't seem to figure out what is wrong. I am using the javazoom library (javazoom.jl.player.Player). Any help would be appreciated.



public void play() {
try {
InputStream stream = MP3.class.getClassLoader()
.getResourceAsStream("/music/LoveStory.mp3");
BufferedInputStream bis = new BufferedInputStream(stream);
player = new Player(bis);
} catch (Exception e) {
System.out.println("Problem playing file " + filename);
System.out.println(e);
}

// run in new thread to play in background
new Thread() {
public void run() {
try {
player.play();
} catch (Exception e) {
System.out.println(e);
}
}
}.start();

}
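

This is not necessarily the cause of the Bitstream 102 error, but one thing worth ruling out: ClassLoader.getResourceAsStream() expects a resource path with no leading slash, while Class.getResourceAsStream() treats a leading slash as the classpath root, so the combination used above may quietly return null. A small sketch of the two usual variants, assuming the folder is packaged on the classpath as /music/LoveStory.mp3:


import java.io.BufferedInputStream;
import java.io.InputStream;

public class ResourcePaths {
    public static InputStream openSong() {
        // Variant 1: ClassLoader paths are already absolute, so no leading slash.
        InputStream viaLoader = ResourcePaths.class.getClassLoader()
                .getResourceAsStream("music/LoveStory.mp3");

        // Variant 2: Class-relative lookup, where a leading slash means the classpath root.
        InputStream viaClass = ResourcePaths.class
                .getResourceAsStream("/music/LoveStory.mp3");

        InputStream chosen = (viaLoader != null) ? viaLoader : viaClass;
        if (chosen == null) {
            throw new IllegalStateException("music/LoveStory.mp3 not found on classpath");
        }
        return new BufferedInputStream(chosen);
    }
}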

Playing audio clips and lag in javascript

I'm having an issue with lag in JavaScript. I have many clips of audio a user can click on. I have a global "audioPlayer" that I start/pause and redefine every time I play/stop something. But suppose I click on and off of the posts many times. Even though the posts stop and start as they should, there's a HUGE lag before the last one plays. The lag is so large that I believe that, instead of playing just the current audio player's audio, it's loading all the previous audio from the posts I started to load and play and then stopped, even though it will only play the most recently clicked post. How should I fix this?



function didFinishPlayingAudio() {
IS_PLAYING_SHOUT=false;
alert("finished audio playing!")
}

var animationDiv;
var CURRENTLY_PLAYING_SHOUT;
var IS_PLAYING_SHOUT;
var audioPlayer;
function playShout(sender) {
console.log(sender.id)
var uniqueIdentifier;
if (sender.nodeName.toLowerCase() === "span") {


uniqueIdentifier=sender.parentNode.id
} else if (sender.nodeName.toLowerCase() === "img") {

uniqueIdentifier=sender.id

}
var shout;
for (i=0;i<shoutsArray.length;i++) {
if (shoutsArray[i].id==uniqueIdentifier) {
shout=shoutsArray[i];
}
}
CURRENTLY_PLAYING_SHOUT=shout;
var shoutAudioUrl=shout.get("audioData").url()

if (!IS_PLAYING_SHOUT) {
playAudioAtUrl(shoutAudioUrl)

} else {
if (shoutAudioUrl==audioPlayer.src) {
// clicked on shout that's already playing
audioPlayer.pause();
audioPlayer=null;
IS_PLAYING_SHOUT=false;
console.log("stopped audio player")
} else {
playAudioAtUrl(shoutAudioUrl)
}
}
}

function playAudioAtUrl(url) {
console.log(url)
if (IS_PLAYING_SHOUT) {
audioPlayer.pause();
audioPlayer=null;
}
console.log("playin dat audio")
audioPlayer = new Audio(url);
audioPlayer.addEventListener("ended", didFinishPlayingAudio);
audioPlayer.id="shoutAudioPlayer"
audioPlayer.play();
IS_PLAYING_SHOUT=true;
}

Replace Audio of a video file

I recorded a video with my application and now I want to change the original video's audio to a different audio file. Is this possible? So I actually want to replace the audio of the video which I just recorded. Thanks in advance!


Playing audio in HTML5 javascript

I understand that audio can be played by using an <audio> tag in HTML with a src attribute and autoplay. But in my case, I'm working in JavaScript and would just like to play audio data using JavaScript. Does it still make sense to create an <audio> tag? I have a method called playAudioAtUrl(url) and I am not sure how to write that in JavaScript without using some kind of outside library.


Ionic + Howler.JS - How to Play sound until user is holding button?

I would like to ask how I can do the following using Ionic and Howler.js (https://github.com/goldfire/howler.js/).


While the user is holding a button, a sound file should be played using Howler.JS; when the user releases the button, the sound should stop.


I tried to find an event for holding a button in the Ionic documentation, but without luck (http://ionicframework.com/docs/api/directive/onTap/).


How can I do it, please?


Thanks for any help.


Javascript - change voice from mp3 to weird voice

My code is here: http://jsfiddle.net/ejkmbtsw/4/


I am trying to change the voice to a really weird / robotic-like voice, or anyway weird but not noisy.


Do you have any clue?


I don't understand what to change in the audio context.


Getting only white noise when playing sound by Audio Track Class

I want to play an mp3 with mono and stereo effects. Currently I am working on the stereo effect. I have read all the documents, but I am getting only white noise. My code is:






public class AudioTest extends Activity
{ byte[] b;
public void onCreate(Bundle savedInstanceState)
{
AndroidAudioDevice device = new AndroidAudioDevice( );
super.onCreate(savedInstanceState);
File f=new File("/sdcard/lepord.mp3");
try {
FileInputStream in=new FileInputStream(f);
int size=in.available();
b=new byte[size];
in.read(b);
in.close();
} catch (Exception e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
while( true )
{
device.writeSamples(b);
try {
Thread.sleep(2000);
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
}



and my AndroidAudioDevice class is:





public class AndroidAudioDevice
{
AudioTrack track;
byte[] buffer = new byte[158616];
@SuppressWarnings("deprecation")
public AndroidAudioDevice( )
{
int minSize =AudioTrack.getMinBufferSize( 44100, AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT );
track = new AudioTrack( AudioManager.STREAM_MUSIC, 44100,
AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
158616, AudioTrack.MODE_STREAM);
Log.e(""," sizewe are using for track buffer is 158616");
track.setStereoVolume(.6f,.6f);
track.play();
}
public void writeSamples(byte[] b) {
// TODO Auto-generated method stub
Log.e("","bytes to be write in track is "+b.length );
fillBuffer(b);
track.write( buffer, 0, b.length );
}
private void fillBuffer( byte[] samples )
{
Log.e("","track buffer length="+buffer.length+" samle length"+samples.length );
if( buffer.length < samples.length )
buffer = new byte[samples.length];
for( int i = 0; i < samples.length; i++ )
buffer[i] = (byte)(samples[i] * Byte.MAX_VALUE);;
}
}



First, I am not getting any sound other than white noise. I just want to play a sound with AudioTrack, and then I will work on the mono and stereo sound effects. Please help me. Thanks in advance.
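

A couple of hedged observations rather than a definite fix: an AudioTrack configured for ENCODING_PCM_16BIT expects raw PCM samples, so the bytes of an .mp3 file would first have to be decoded (for example with MediaCodec, or the file simply played with MediaPlayer instead); and multiplying every byte by Byte.MAX_VALUE in fillBuffer scrambles 16-bit PCM data, since that kind of scaling only makes sense for float samples in the -1..1 range. A minimal sketch of writeSamples under the assumption that the buffer already holds 16-bit PCM:


// Sketch: assumes b already contains interleaved 16-bit PCM at 44100 Hz stereo.
// The PCM bytes are written to the track unchanged, with no per-byte scaling.
public void writeSamples(byte[] b) {
    track.write(b, 0, b.length);
}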

Is there a method to figure out the audio channel layout in Linux?

I'm making a player for Linux and I want to know the audio channel layout (stereo, 5.1ch, etc.) of the user's system (not the channels included in the media file). For now, it's set by the user, but I want to implement auto-detection of the channel layout.


Is there any (de-facto) standard method to accomplish this? If not, can I find a solution for ALSA at least?


TextLayoutCache error in android

My program runs through but won't show my graph. I'm using the android-graphview API to graph an audio file, but when my program finishes reading the audio file and proceeds to graph it, the graph won't show. It only gives me this in LogCat:



12-31 16:55:16.314: D/TextLayoutCache(12579): Enable myanmar Zawgyi converter
12-31 16:55:16.314: D/TextLayoutCache(12579): Enable myanmar Zawgyi converter
12-31 16:55:16.354: D/GC(12579): <tid=12579> OES20 ===> GC Version : GC Ver rls_pxa988_KK44_GC13.20
12-31 16:55:16.404: D/OpenGLRenderer(12579): Enabling debug mode 0
12-31 16:55:16.454: D/TextLayoutCache(12579): Enable myanmar Zawgyi converter
12-31 16:55:16.454: D/TextLayoutCache(12579): Enable myanmar Zawgyi converter
12-31 16:55:16.454: D/TextLayoutCache(12579): Enable myanmar Zawgyi converter
12-31 16:55:16.464: D/TextLayoutCache(12579): Enable myanmar Zawgyi converter


I don't think I have an error in reading the file, but it won't show me the graph. This is my code:



public void graphing() throws IOException
{
System.out.println("in process...");
File f = new File(Environment.getExternalStorageDirectory().getAbsolutePath()+"/sample.wav");

LineGraphSeries<DataPoint> series = new LineGraphSeries<DataPoint>();
graph = (GraphView) findViewById (R.id.graph);
graph.getViewport().setScrollable(true);
graph.getViewport().setScalable(true);
graph.getViewport().setMinX(0);
graph.getViewport().setMaxX(f.length()/2);

FileInputStream fis;
fis = new FileInputStream(f);
BufferedInputStream bis = new BufferedInputStream(fis);

int channelCount = 1;
int[][] samples = new int[channelCount][(int) (f.length()/2)];
int sampleIndex = 0;
long cursor = 0;
byte[] data = new byte[(int) ((f.length()/2)*2)];

int result = fis.read(data);

for(int i = 0; i < data.length;)
{
for(int ch = 0; ch < channelCount; ch++)
{
int low = (int) data[i];
i++;
int high = (int) data[i];
i++;
int sample = get(high, low);
samples[ch][sampleIndex] = sample;
System.out.println(cursor+" "+samples[ch][sampleIndex]);
series.appendData(new DataPoint(cursor, samples[ch][sampleIndex]), true, 10);
cursor++;
}
sampleIndex++;
}
System.out.println("...end");
}

public static int get(int high, int low)
{
return (high << 8) + (low & 0x00ff);
}


BTW, HAPPY NEW YEAR GUYS!


Javascript - change audio file voice with pitch or filters like

The code below takes an mp3 file and plays it. What I would like to do is change the pitch in some way (for example, of the voice) so that it returns a strange/weird voice instead of the normal one.


Is it possible in any way?



var src
, fftSize = 1024
, audio = new Audio()
, ac = new webkitAudioContext()
, comp = ac.createDynamicsCompressor()
, bar = document.querySelector('.bar')
, url = 'http://media.tts-api.com/2aae6c35c94fcfb415dbe95f408b9ce91ee846ed.mp3';

audio.src = url;
comp.ratio.value = 0.95;

audio.addEventListener('canplaythrough', function() {
src = ac.createMediaElementSource(audio);
src.connect(comp);
comp.connect(ac.destination);
audio.play();
}, false);

Tuesday, December 30, 2014

Generate frequency using sox, bad quality

I am now using the SoX library to generate a sound file with different frequencies and waveforms. Sometimes I need to combine more than 20 frequencies together in a single sound file. I use a command like the following:



./sox -e a-law -r 44100 -n output.wav synth 5 sine 3 square 250 sine 300 ...


However, sometimes the output file has a lot of noise. If I remove the -e a-law argument, the output has no sound.



./sox -r 44100 -n output.wav synth 5 sine 3 square 250 sine 300 ...


I am new to using the SoX library. Does anyone know how to generate the frequencies with high-quality output, or any other alternatives for generating them?


Crash when SAFE_RELEASE is called on IMMDeviceEnumerator

I am using a service to detect the connection of a headset. When the microphone is connected or disconnected, I get a notification. The thing is, when I end the service manually, I get a crash in the service while doing SAFE_RELEASE. Here is the code...



NotifyHandsetConnectionStatus::NotifyHandsetConnectionStatus() : _cRef(1), _pEnumerator(NULL){}

NotifyHandsetConnectionStatus::~NotifyHandsetConnectionStatus()
{
// SAFE_RELEASE(_pEnumerator)
if (_pEnumerator)
{
_pEnumerator->Release(); // CRASH
_pEnumerator = NULL;
}
}

void NotifyHandsetConnectionStatus::Init(DWORD threadID)
{
PTTThreadID = threadID;
HRESULT hr = S_OK;

CoInitialize(NULL);

if (_pEnumerator == NULL)
{
// Get enumerator for audio endpoint devices.
hr = CoCreateInstance(__uuidof(MMDeviceEnumerator), NULL, CLSCTX_INPROC_SERVER,
__uuidof(IMMDeviceEnumerator), (void**)&_pEnumerator);
}

if (hr == S_OK)
{
_pEnumerator->RegisterEndpointNotificationCallback(this);
}
}


and here is the class declaration...



class NotifyHandsetConnectionStatus : public IMMNotificationClient
{
LONG _cRef;
IMMDeviceEnumerator *_pEnumerator;
DWORD PTTThreadID;
public:
NotifyHandsetConnectionStatus();
~NotifyHandsetConnectionStatus();

void Init(DWORD threadID);
ULONG STDMETHODCALLTYPE AddRef() override;
virtual ULONG STDMETHODCALLTYPE Release() override;
HRESULT STDMETHODCALLTYPE QueryInterface(REFIID riid, VOID **ppvInterface) override;
HRESULT STDMETHODCALLTYPE OnDeviceStateChanged(LPCWSTR pwstrDeviceId, DWORD dwNewState) override;
HRESULT STDMETHODCALLTYPE OnDeviceAdded(LPCWSTR pwstrDeviceId) override
{
return S_OK;
}
HRESULT STDMETHODCALLTYPE OnDeviceRemoved(LPCWSTR pwstrDeviceId) override
{
return S_OK;
}
HRESULT STDMETHODCALLTYPE OnPropertyValueChanged(LPCWSTR pwstrDeviceId, const PROPERTYKEY key) override
{
return S_OK;
}
HRESULT STDMETHODCALLTYPE OnDefaultDeviceChanged( EDataFlow flow, ERole role, LPCWSTR pwstrDeviceId) override
{
return S_OK;
}
};

Normalized audio in sox: no such file

I'm trying to use this script to batch normalize audio using Sox. I'm having a problem because it appears that it's not creating a tmp file for some reason and then of course there is no normalized audio file either. I'm getting this error for every file:


norm_fade.sh: line 57: /Applications/sox/WantNotSamples/Who Am I-temp.wav: No such file or directory


Normalized File "wav_file" exists at "/Applications/sox/WantNotSamplesNormalize"


rm: /Applications/sox/WantNotSamples/Who Am I-temp.wav: No such file or directory


#!/bin/sh



# Script.sh
#
#
# Created by scacinto on 1/31/13.
#
# For now, only put audio files in the working directory - working on a fix

# This is the directory to the source soundfiles that need to be
# normalized and faded (the first argument on the command line.)
src=$1
# This is the directory to write the normalized and faded files to
# (The second path you must supply on the command line.)
dest=$2
# This is the sox binary directory. Please set this to your sox path.
# As it is now, this assumes that the sox binary is in the same directory
# as the script.
SOX= ./sox

#enable for loops over items with spaces in their name
IFS=$'\n'

# This is the 'for' loop - it will run for each file in your directory.
for original_file in `ls "$src/"`
do
# Get the base filename of the current wav file
base_filename=`basename "$original_file" .wav`

# We need a temp file name to save the intermediate file as
temp_file="${base_filename}-temp.wav"

echo "Creating temp file: \"$temp_file\" in \"$src\""

# And we need the output WAV file
wav_file="${base_filename}-nf.wav"

# Convert all spaces to hyphens in the output file name
wav_file=`echo $wav_file | tr -s " " "-"`

#Print a progress message
echo "Processing: \"$original_file\". Saving as \"$wav_file\" ..."

# We need the length of the audio file
original_file_length=`$SOX $src/"$original_file" 2>&1 -n stat | grep Length | cut -d : -f 2 | cut -f 1`

# Use sox to add perform the fade-in and fade-out
# saving the result as our temp_file. Adjust the 0.1s to your desired fade
# times.
#$SOX $src/"$original_file" $src/"$temp_file" fade t 0.1 $original_file_length 0.1

# If files have readable headers, you can skip the above operation to get the
# file length and just use 0 as below.
#$SOX $src/"$original_file" $src/"$temp_file" fade t 0.5 0 0.5

# normalize and write to the output wave file
$SOX $src/"$temp_file" $dest/"$wav_file" norm -0.5

echo "Normalized File \"wav_file\" exists at \"$dest\""

# Delete that temp file
rm $src/$temp_file

done

Play sound while app is in background

Trying to get a sound to play while the app is in the background. My code plays sound correctly if I start the sound in the following method.



- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {

[self startSilentBackgroundSound]; // works fine

}


If I remove the code above and do it here it does NOT play the sound while the app is in the background.



- (void)applicationDidEnterBackground:(UIApplication *)application {

self.newTaskId = UIBackgroundTaskInvalid;
self.newTaskId = [[UIApplication sharedApplication] beginBackgroundTaskWithExpirationHandler:NULL];
[self startSilentBackgroundSound]; // does NOT work
}


In required background modes I have the following:



App plays audio or streams audio/video using AirPlay
App downloads content from the network
App registers for location updates
App provides Voice over IP services
App downloads content in response to push notifications


If anyone knows of a GitHub project, I would be very thankful!!!


//// CODE IN MY APP DELEGATE ////////



-(void)startSilentBackgroundSound {
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
NSError *error = nil;
NSLog(@"Activating audio session");
if (![audioSession setCategory:AVAudioSessionCategoryPlayback withOptions:AVAudioSessionCategoryOptionDuckOthers error:&error]) {
NSLog(@"Unable to set audio session category: %@", error);
}
BOOL result = [audioSession setActive:YES error:&error];
if (!result) {
NSLog(@"Error activating audio session: %@", error);
}
[[UIApplication sharedApplication] beginReceivingRemoteControlEvents];

[self startAlarmSound];
}

-(void)stopSilentBackgroundSound {
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
NSError *error = nil;
NSLog(@"Deactivating audio session");
BOOL result = [audioSession setActive:NO error:&error];
if (!result) {
NSLog(@"Error deactivating audio session: %@", error);
}

[self stopSounds];
}



-(void)startAlarmSound {
self.silentAudioController = [[SilentAudioController alloc] init];
[self.silentAudioController tryPlayMusic];
}

-(void)stopSounds {
[self.silentAudioController tryStopMusic];
self.silentAudioController = nil;
}


/// HERE IS MY FILE THAT HANDLES ALL THE SOUNDS STUFF



#import "SilentAudioController.h"

@import AVFoundation;

@interface SilentAudioController () <AVAudioPlayerDelegate>

@property (strong, nonatomic) AVAudioSession *audioSession;
@property (strong, nonatomic) AVAudioPlayer *backgroundMusicPlayer;
@property (assign) BOOL backgroundMusicPlaying;
@property (assign) BOOL backgroundMusicInterrupted;
@property (assign) SystemSoundID pewPewSound;

@end

@implementation SilentAudioController

#pragma mark - Public

- (instancetype)init
{
self = [super init];
if (self) {
[self configureAudioSession];
[self configureAudioPlayer];
[self configureSystemSound];
}
return self;
}

- (void)tryPlayMusic {
// If background music or other music is already playing, nothing more to do here
if (self.backgroundMusicPlaying || [self.audioSession isOtherAudioPlaying]) {
return;
}

// Play background music if no other music is playing and we aren't playing already
//Note: prepareToPlay preloads the music file and can help avoid latency. If you don't
//call it, then it is called anyway implicitly as a result of [self.backgroundMusicPlayer play];
//It can be worthwhile to call prepareToPlay as soon as possible so as to avoid needless
//delay when playing a sound later on.
[self.backgroundMusicPlayer prepareToPlay];
[self.backgroundMusicPlayer play];
self.backgroundMusicPlaying = YES;
}

- (void)tryStopMusic {
[self.backgroundMusicPlayer stop];
self.backgroundMusicPlaying = NO;
}

- (void)playSystemSound {
AudioServicesPlaySystemSound(self.pewPewSound);
}

#pragma mark - Private

- (void) configureAudioSession {
// Implicit initialization of audio session
self.audioSession = [AVAudioSession sharedInstance];

// Set category of audio session
// See handy chart on pg. 46 of the Audio Session Programming Guide for what the categories mean
// Not absolutely required in this example, but good to get into the habit of doing
// See pg. 10 of Audio Session Programming Guide for "Why a Default Session Usually Isn't What You Want"

NSError *setCategoryError = nil;
if ([self.audioSession isOtherAudioPlaying]) { // mix sound effects with music already playing
[self.audioSession setCategory:AVAudioSessionCategorySoloAmbient error:&setCategoryError];
self.backgroundMusicPlaying = NO;
} else {
[self.audioSession setCategory:AVAudioSessionCategoryAmbient error:&setCategoryError];
}
if (setCategoryError) {
NSLog(@"Error setting category! %ld", (long)[setCategoryError code]);
}
}

- (void)configureAudioPlayer {
// Create audio player with background music
NSString *backgroundMusicPath = [[NSBundle mainBundle] pathForResource:@"background-music-aac" ofType:@"caf"];
NSURL *backgroundMusicURL = [NSURL fileURLWithPath:backgroundMusicPath];
self.backgroundMusicPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:backgroundMusicURL error:nil];
self.backgroundMusicPlayer.delegate = self; // We need this so we can restart after interruptions
self.backgroundMusicPlayer.numberOfLoops = -1; // Negative number means loop forever
}

- (void)configureSystemSound {
// This is the simplest way to play a sound.
// But note with System Sound services you can only use:
// File Formats (a.k.a. audio containers or extensions): CAF, AIF, WAV
// Data Formats (a.k.a. audio encoding): linear PCM (such as LEI16) or IMA4
// Sounds must be 30 sec or less
// And only one sound plays at a time!
NSString *pewPewPath = [[NSBundle mainBundle] pathForResource:@"pew-pew-lei" ofType:@"caf"];
NSURL *pewPewURL = [NSURL fileURLWithPath:pewPewPath];
AudioServicesCreateSystemSoundID((__bridge CFURLRef)pewPewURL, &_pewPewSound);
}

#pragma mark - AVAudioPlayerDelegate methods

- (void) audioPlayerBeginInterruption: (AVAudioPlayer *) player {
//It is often not necessary to implement this method since by the time
//this method is called, the sound has already stopped. You don't need to
//stop it yourself.
//In this case the backgroundMusicPlaying flag could be used in any
//other portion of the code that needs to know if your music is playing.

self.backgroundMusicInterrupted = YES;
self.backgroundMusicPlaying = NO;
}

- (void) audioPlayerEndInterruption: (AVAudioPlayer *) player withOptions:(NSUInteger) flags{
//Since this method is only called if music was previously interrupted
//you know that the music has stopped playing and can now be resumed.
[self tryPlayMusic];
self.backgroundMusicInterrupted = NO;
}

@end

detect audio data peaks using waveform image (not using web audio)

I was reading G. Skinner's take on using an image to detect volume peaks in order to create a custom visualizer display for audio, and was wondering if anyone has ever tried it using waveform images (like the ones from SoundCloud) to detect the peaks (volume/amplitude/frequency or whatever the term is) from an audio file. I want to use this as an alternative to the Web Audio API, which is a bit more tedious if you ask me and not supported on older devices (Android 4.0).


Is there a way to process the colour data from a waveform image that I can use to simulate such tasks?


Howler.JS + Angular - path for playing sound file

I'm trying to play an mp3 file in a Cordova + Ionic hybrid app.


Sound is stored in:


www/sounds/dubstep/sound.mp3


And I'm trying to play the file from a service placed in /www/scripts/services/global.js using the following code:



var sound = new Howl({
src: ['sounds/dubstep/sound.mp3'],
onend: function() {
console.log('Finished!');
},
onloaderror: function() {
console.log('Error!');
},
});

sound.play();


But it always throws onloaderror.


How should I set the path correctly?


Thanks for any help.


Convert wav to mp3 using Meteor FS Collections on Startup

I'm trying to transcode all wav files into mp3 using Meteor and Meteor FS Collections. My code works when I upload a wav file to the uploader -- that is, it will convert the wav to an mp3 and allow me to play the file. But I'm looking for a Meteor solution that will transcode the file and add it to the DB if the file is a wav and exists in a certain directory. According to Meteor FSCollection it should be possible if the files have already been stored. Here is their example code (GM is for ImageMagick; I've replaced gm with ffmpeg and installed ffmpeg from Atmosphere):



Images.find().forEach(function (fileObj) {
var readStream = fileObj.createReadStream('images');
var writeStream = fileObj.createWriteStream('images');
gm(readStream).swirl(180).stream().pipe(writeStream);
});


I'm using Meteor-CollectionFS [https://github.com/CollectionFS/Meteor-CollectionFS].



if (Meteor.isServer) {
Meteor.startup(function () {
Wavs.find().forEach(function (fileObj) {
var readStream = fileObj.createReadStream('.wavs/mp3');
var writeStream = fileObj.createWriteStream('.wavs/mp3');
this.ffmpeg(readStream).audioCodec('libmp3lame').format('mp3').pipe(writeStream);
Wavs.insert(fileObj, function(err) {
console.log(err);
});
});
});
}


And here is my FS.Collection and FS.Store information. Currently everything resides in one JS file.



Wavs = new FS.Collection("wavs", {
stores: [new FS.Store.FileSystem("wav"),
new FS.Store.FileSystem("mp3",

{
path: '~/wavs/mp3',
beforeWrite: function(fileObj) {
return {
extension: 'mp3',
fileType: 'audio/mp3'
};
},
transformWrite: function(fileObj, readStream, writeStream) {
ffmpeg(readStream).audioCodec('libmp3lame').format('mp3').pipe(writeStream);
}
})]
});


When I try and insert the file into the db on the server side I get this error: MongoError: E11000 duplicate key error index:


Otherwise, if I drop a wav file into the directory and restart the server, nothing happens. I'm new to Meteor, please help. Thank you.


ffmpeg video and audio duration differ

I am converting a jpg picture and an mp3 audio into a video, like so:



ffmpeg -y -loop 1 -i pics\jam_03.jpg -i mp3\jam_03.mp3 -shortest -c:v mpeg4 -b:v 4000k -c:a libmp3lame jam_03.mp4


When concatenating several such videos into one (a narrated presentation) like this:



ffmpeg -i "concat:jam_00.avi|jam_01.avi|jam_02.avi|jam_03.avi|jam_04.avi|jam_05.avi|jam_06.avi|jam_07.avi|jam_08.avi" -c:v copy -c:a copy out.mp4 -y


video and audio progressively get out of sync. Indeed, using ffprobe



ffprobe -show_streams jam_03.avi


different lengths are shown for video and audio, e.g. 5.12 and 5.19 sec. I have tried a lot: x264 and aac codecs instead of the above ones, avi or mp4 container formats, cutting the video using -t 5.12, and also several variants of audio and video synchronisation (-async 1, -async 25, -async 100, -vsync 1, -vsync 2, ...); all yield more or less the same result. My idea is that each video frame is 1/25 sec long and the audio must be a multiple of this. However, 5.12 sec (a multiple of 1/25 sec) shows no success.


Any ideas?


thanks


Error with audio related Java code


AudioPlayer AP = AudioPlayer.player;
AudioStream AS;
AudioData AD;

sun.audio.ContinuousAudioDataStream loop = null;

try{
AS = new AudioStream( new FileInputStream("all_shook_up.wav"));
AD = AS.getData();
loop = new sun.audio.ContinuousAudioDataStream(AD);
}catch(IOException error){}

AP.start(loop);


This code doesn't seem to play the .wav file. No sound is played. What am I doing wrong? (I am aware that I could have declared and initialized my variables on the same line, but the separation has been done for clarity)
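

As a point of comparison rather than a diagnosis, the same file can also be played with the public javax.sound.sampled API instead of the internal sun.audio classes. A minimal sketch that blocks until the clip has finished, assuming all_shook_up.wav is an uncompressed PCM WAV in the working directory:


import java.io.File;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.Clip;

public class PlayWav {
    public static void main(String[] args) throws Exception {
        AudioInputStream in = AudioSystem.getAudioInputStream(new File("all_shook_up.wav"));
        Clip clip = AudioSystem.getClip();
        clip.open(in);
        clip.start();
        // The clip plays on a background thread; wait long enough for it to
        // finish so the JVM does not exit with the sound still queued.
        Thread.sleep(clip.getMicrosecondLength() / 1000);
        clip.close();
    }
}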


How do I edit a recorded wav file format in java?

Hi, I am developing an audio application that allows me to record, play and edit audio in WAVE format. I managed to record and play, but I still have found no way to edit these files once they are already saved or recorded. I tried to do it through a byte array, but I only managed to insert audio at the beginning and end of the file. Here I give you my code. Sorry if I do not write properly; my English is very bad.



public class GrabadorAudio extends javax.swing.JDialog {

private ControladorAudio controladorAudio;
AudioFormat audioFormat;
TargetDataLine targetDataLine;
TargetDataLine targetDataLine2;
int tamañoAudio;
File audioFile = null;
File yourFile=null;
File copia=null;
Timer timer;
Clip clip;
Capture capturar;
int audioPosicion;
boolean controlPausa=false;
boolean controlStop=false;
boolean controlPlay= false;
boolean controlInicio=false;
boolean controlFin= false;
boolean controlGrabar=false;
boolean controlResume=false;
CaptureThread cap;
AudioFileFormat.Type fileType = null;
double durationInSeconds;
byte[] datatec=null;
byte[] datatec2=null;
long frames;
int prueba=0;
AudioInputStream audioInputStream;

private String ext="wav";

String directorio;


public GrabadorAudio(java.awt.Frame parent, boolean modal) {
super(parent, modal);
initComponents();
setLocationRelativeTo(this);
controladorAudio= new ControladorAudio();
Btnguardar.setEnabled(false);
Btnplay.setEnabled(false);
Btninicio.setEnabled(false);
Btnfin.setEnabled(false);
Btnpause.setEnabled(false);
Sliprogreso.setEnabled(false);

timer= new Timer();
capturar= new Capture();
Properties p = System.getProperties();
directorio=p.getProperty("user.home");
fileType = AudioFileFormat.Type.WAVE;

audioFile = new File(directorio+"/.vmaudio/audio1."+ext);
}

@SuppressWarnings("unchecked")

private void initComponents() {

PanelGrabador = new javax.swing.JPanel();
Btnplay = new javax.swing.JButton();
Btngrabar = new javax.swing.JButton();
Btnguardar = new javax.swing.JButton();
Btninicio = new javax.swing.JButton();
Btnfin = new javax.swing.JButton();
Sliprogreso = new javax.swing.JSlider();
Lbltiempo = new javax.swing.JLabel();
Btnsalir = new javax.swing.JButton();
Btnpause = new javax.swing.JButton();

setDefaultCloseOperation(javax.swing.WindowConstants.DISPOSE_ON_CLOSE);

Btnplay.setText("Play/Stop");
Btnplay.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
BtnplayActionPerformed(evt);
}
});

Btngrabar.setText("Rec");
Btngrabar.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
BtngrabarActionPerformed(evt);
}
});

Btnguardar.setText("Guardar");
Btnguardar.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
BtnguardarActionPerformed(evt);
}
});

Btninicio.setText("inicio");
Btninicio.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
BtninicioActionPerformed(evt);
}
});

Btnfin.setText("fin");
Btnfin.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
BtnfinActionPerformed(evt);
}
});

Sliprogreso.setMajorTickSpacing(1);
Sliprogreso.setMaximum(10);
Sliprogreso.setPaintLabels(true);
Sliprogreso.setPaintTicks(true);
Sliprogreso.setToolTipText("");
Sliprogreso.setValue(0);
Sliprogreso.addChangeListener(new javax.swing.event.ChangeListener() {
public void stateChanged(javax.swing.event.ChangeEvent evt) {
SliprogresoStateChanged(evt);
}
});

Btnsalir.setText("Salir");
Btnsalir.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
BtnsalirActionPerformed(evt);
}
});

Btnpause.setText("Pause/Resume");
Btnpause.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
BtnpauseActionPerformed(evt);
}
});

javax.swing.GroupLayout PanelGrabadorLayout = new javax.swing.GroupLayout(PanelGrabador);
PanelGrabador.setLayout(PanelGrabadorLayout);
PanelGrabadorLayout.setHorizontalGroup(
PanelGrabadorLayout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addGroup(javax.swing.GroupLayout.Alignment.TRAILING, PanelGrabadorLayout.createSequentialGroup()
.addGap(110, 110, 110)
.addComponent(Btninicio)
.addGap(105, 105, 105)
.addComponent(Btnfin)
.addPreferredGap(javax.swing.LayoutStyle.ComponentPlacement.RELATED, 114, Short.MAX_VALUE)
.addGroup(PanelGrabadorLayout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addGroup(PanelGrabadorLayout.createSequentialGroup()
.addGap(10, 10, 10)
.addComponent(Btnsalir))
.addComponent(Lbltiempo, javax.swing.GroupLayout.PREFERRED_SIZE, 102, javax.swing.GroupLayout.PREFERRED_SIZE))
.addGap(34, 34, 34))
.addGroup(PanelGrabadorLayout.createSequentialGroup()
.addGroup(PanelGrabadorLayout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addGroup(PanelGrabadorLayout.createSequentialGroup()
.addGap(51, 51, 51)
.addComponent(Btnplay)
.addGap(18, 18, 18)
.addComponent(Btnpause)
.addGap(31, 31, 31)
.addComponent(Btngrabar, javax.swing.GroupLayout.PREFERRED_SIZE, 60, javax.swing.GroupLayout.PREFERRED_SIZE)
.addGap(42, 42, 42)
.addComponent(Btnguardar))
.addGroup(PanelGrabadorLayout.createSequentialGroup()
.addGap(67, 67, 67)
.addComponent(Sliprogreso, javax.swing.GroupLayout.PREFERRED_SIZE, 437, javax.swing.GroupLayout.PREFERRED_SIZE)))
.addContainerGap(javax.swing.GroupLayout.DEFAULT_SIZE, Short.MAX_VALUE))
);
PanelGrabadorLayout.setVerticalGroup(
PanelGrabadorLayout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addGroup(PanelGrabadorLayout.createSequentialGroup()
.addGap(32, 32, 32)
.addGroup(PanelGrabadorLayout.createParallelGroup(javax.swing.GroupLayout.Alignment.BASELINE)
.addComponent(Btnplay)
.addComponent(Btngrabar)
.addComponent(Btnguardar)
.addComponent(Btnpause))
.addPreferredGap(javax.swing.LayoutStyle.ComponentPlacement.RELATED, 62, Short.MAX_VALUE)
.addComponent(Sliprogreso, javax.swing.GroupLayout.PREFERRED_SIZE, javax.swing.GroupLayout.DEFAULT_SIZE, javax.swing.GroupLayout.PREFERRED_SIZE)
.addPreferredGap(javax.swing.LayoutStyle.ComponentPlacement.UNRELATED)
.addComponent(Lbltiempo, javax.swing.GroupLayout.PREFERRED_SIZE, 26, javax.swing.GroupLayout.PREFERRED_SIZE)
.addGroup(PanelGrabadorLayout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addGroup(PanelGrabadorLayout.createSequentialGroup()
.addGap(9, 9, 9)
.addComponent(Btninicio))
.addGroup(PanelGrabadorLayout.createSequentialGroup()
.addPreferredGap(javax.swing.LayoutStyle.ComponentPlacement.RELATED)
.addGroup(PanelGrabadorLayout.createParallelGroup(javax.swing.GroupLayout.Alignment.BASELINE)
.addComponent(Btnsalir)
.addComponent(Btnfin))))
.addGap(71, 71, 71))
);

javax.swing.GroupLayout layout = new javax.swing.GroupLayout(getContentPane());
getContentPane().setLayout(layout);
layout.setHorizontalGroup(
layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addComponent(PanelGrabador, javax.swing.GroupLayout.Alignment.TRAILING, javax.swing.GroupLayout.DEFAULT_SIZE, javax.swing.GroupLayout.DEFAULT_SIZE, Short.MAX_VALUE)
);
layout.setVerticalGroup(
layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addGroup(layout.createSequentialGroup()
.addContainerGap()
.addComponent(PanelGrabador, javax.swing.GroupLayout.PREFERRED_SIZE, javax.swing.GroupLayout.DEFAULT_SIZE, javax.swing.GroupLayout.PREFERRED_SIZE)
.addContainerGap(javax.swing.GroupLayout.DEFAULT_SIZE, Short.MAX_VALUE))
);

pack();
}// </editor-fold>


private void BtnplayActionPerformed(java.awt.event.ActionEvent evt) {
if(controlStop){
Btnplay.setText("Play");
Btnpause.setEnabled(false);
controlPausa=false;
controlStop=false;
capturar.stop();
Btngrabar.setEnabled(true);
Btnplay.setEnabled(true);
Btnfin.setEnabled(true);
Btninicio.setEnabled(true);



}
else{


Btnpause.setEnabled(true);
controlPlay=true;
Sliprogreso.setEnabled(true);
try {
AudioInputStream stream;
AudioFormat format;
DataLine.Info info;

yourFile = new File(directorio+"/.vmaudio/audio1.wav");

stream = AudioSystem.getAudioInputStream(yourFile);


format = stream.getFormat();


frames =stream.getFrameLength();
durationInSeconds = (frames+0.0) / format.getFrameRate();


System.out.println("duracion:"+durationInSeconds);
Sliprogreso.setMaximum((int) durationInSeconds);


info = new DataLine.Info(Clip.class, format);


clip = (Clip) AudioSystem.getLine(info);
clip.open(stream);
clip.start();



TimerTask timerTask=new TimerTask(){
public synchronized void run() {
{
double timeNow=(durationInSeconds*clip.getFramePosition())/frames;
Sliprogreso.setValue((int)Math.round(timeNow));
if((int)Math.round(timeNow)==(int)durationInSeconds){
System.out.println("se cancelo");
this.cancel();}
}

}
};


}



catch (Exception e) {
System.out.println(e);

e.printStackTrace();

}

}

}

private void BtngrabarActionPerformed(java.awt.event.ActionEvent evt) {
// TODO add your handling code here:
controlPausa=true;
controlGrabar=true;
controlStop=true;
Btnpause.setText("Pause");
Btnplay.setText("Stop");
Btnpause.setEnabled(true);
Btnplay.setEnabled(true);


final SwingWorker worker= new SwingWorker() {

@Override
protected Object doInBackground() throws Exception {
try{
capturar.start();
Btngrabar.setEnabled(false);

}catch (Exception e) {
e.printStackTrace();
System.exit(0);
}
throw new UnsupportedOperationException("Not supported yet."); //To change body of generated methods, choose Tools | Templates.
}
@Override
protected void done() {


}

};

worker.addPropertyChangeListener(new PropertyChangeListener() {
public void propertyChange(PropertyChangeEvent pce) {
// progressBar.setValue(progreso); // update the progressBar value
}
});



worker.execute();
}

private void BtninicioActionPerformed(java.awt.event.ActionEvent evt) {
// TODO add your handling code here:
Sliprogreso.setValue(0);
}

private void BtnfinActionPerformed(java.awt.event.ActionEvent evt) {
// TODO add your handling code here:
Sliprogreso.setValue(10);
}

private void SliprogresoStateChanged(javax.swing.event.ChangeEvent evt) {
// TODO add your handling code here:



}

private void BtnsalirActionPerformed(java.awt.event.ActionEvent evt) {
// TODO add your handling code here:

System.exit(0);
}

private void BtnguardarActionPerformed(java.awt.event.ActionEvent evt) {
// TODO add your handling code here:
}

private void BtnpauseActionPerformed(java.awt.event.ActionEvent evt) {
// TODO add your handling code here:
String pause=Btnpause.getText();

if(controlPausa){

controlPausa=false;
controlResume=true;
capturar.line.stop();
Btngrabar.setEnabled(false);
Btnplay.setEnabled(false);
Btnpause.setText("Resume");
System.out.println(" entro al controlpausa");

} else{
if(controlPlay){
controlPlay=false;
Btnpause.setText("Resume");
Btnplay.setEnabled(false);
clip.stop();


}
else{
if (controlResume){

Btnpause.setText("Pause");
Btnplay.setEnabled(true);
controlStop=true;
capturar.line.start();
}
else{
clip.start();
Btnpause.setText("Pause");
controlPlay=true;
}


}





}

}

private AudioFormat getAudioFormat(){

float sampleRate = 4000.0F;

int sampleSizeInBits = 8;

int channels = 1;

boolean signed = true;

boolean bigEndian = false;

return new AudioFormat(sampleRate, sampleSizeInBits, channels, signed,
bigEndian);

}



class CaptureThread extends Thread{
public synchronized void run(){
fileType = AudioFileFormat.Type.WAVE;
audioFile = new File(directorio+"/.vmaudio/audio1."+ext);

try{
targetDataLine.open(audioFormat);
targetDataLine.start();
AudioSystem.write(
new AudioInputStream(targetDataLine),
fileType,
audioFile);


}catch (Exception e){
e.printStackTrace();
}


}



}

class Capture implements Runnable {

TargetDataLine line;

Thread thread;


public void start() {
thread = new Thread(this);
thread.setName("Capture");
thread.start();

}

public void stop() {

thread = null;

}



public void run() {


audioInputStream = null;



AudioFormat format = getAudioFormat();
DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);

if (!AudioSystem.isLineSupported(info)) {
// shutDown("Line matching " + info + " not supported.");
return;
}

// get and open the target data line for capture.

try {

line = (TargetDataLine) AudioSystem.getLine(info);
line.open(format, line.getBufferSize());

} catch (LineUnavailableException ex) {

ex.printStackTrace();
return;

} catch (SecurityException ex) {

ex.printStackTrace();
return;

} catch (Exception ex) {

ex.printStackTrace();
return;
}

// play back the captured audio data
ByteArrayOutputStream out = new ByteArrayOutputStream();
int frameSizeInBytes = format.getFrameSize();
int bufferLengthInFrames = line.getBufferSize() / 8;
int bufferLengthInBytes = bufferLengthInFrames * frameSizeInBytes;
byte[] data = new byte[bufferLengthInBytes];



int numBytesRead;

line.start();

while (thread != null) {
if ((numBytesRead = line.read(data, 0, bufferLengthInBytes)) == -1) {
break;
}
out.write(data, 0, numBytesRead);
}

line.stop();
line.close();
line = null;

// stop and close the output stream
try {
out.flush();
out.close();

} catch (Exception ex) {


ex.printStackTrace();
}

// load bytes into the audio input stream for playback
byte audioBytes[] = out.toByteArray();

try{
System.out.println("tamaño de audiobytes"+audioBytes.length);
if (prueba==0){

//datatec=data;
}
int j=0;
int tamañodatatec=0;
if(datatec==null){
tamañodatatec=0;
}
else{
tamañodatatec=datatec.length;
}

for(int i=0;i<audioBytes.length;i++){

if(prueba==1){
datatec= appendData(datatec, audioBytes[i]);
}

}

if (prueba==0){

datatec=out.toByteArray();
ByteArrayInputStream bais = new ByteArrayInputStream(audioBytes);
audioInputStream = new AudioInputStream(bais, format,
audioBytes.length / frameSizeInBytes);
prueba=1;
}else{

ByteArrayInputStream bais = new ByteArrayInputStream(datatec);
audioInputStream = new AudioInputStream(bais, format,
datatec.length / frameSizeInBytes);
}

}catch (Exception e){
e.printStackTrace();
}



try {

AudioSystem.write(audioInputStream, fileType, audioFile);

} catch (Exception ex) {

ex.printStackTrace();
}

try {

audioInputStream.reset();

} catch (Exception ex) {
ex.printStackTrace();
return;
}


}
} // End class Capture


protected byte[] appendData(byte firstObject,byte[] secondObject){
byte[] byteArray= {firstObject};
return appendData(byteArray,secondObject);

}

protected byte[] appendData(byte[] firstObject,byte secondByte){
byte[] byteArray= {secondByte};
return appendData(firstObject,byteArray);

}

protected byte[] appendData(byte[] firstObject,byte[] secondObject){
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
try {
if (firstObject!=null && firstObject.length!=0){
outputStream.write(firstObject);
}
if (secondObject!=null && secondObject.length!=0){
outputStream.write(secondObject);
}

} catch (IOException e) {
e.printStackTrace();
}
return outputStream.toByteArray();
}
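

On the actual editing question, inserting audio somewhere other than the start or end: as long as both recordings share the same AudioFormat, the same appendData idea works at any frame-aligned offset. Below is a rough standalone sketch; the file names and the insert position are made up for illustration only.


import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.IOException;
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;

public class WavSplicer {

    // Inserts 'insert' into 'base' at 'atSeconds' and writes the result to 'output'.
    // Both inputs are assumed to share the same AudioFormat.
    public static void insertWav(File base, File insert, File output, double atSeconds)
            throws Exception {
        AudioInputStream baseIn = AudioSystem.getAudioInputStream(base);
        AudioInputStream insIn = AudioSystem.getAudioInputStream(insert);
        AudioFormat format = baseIn.getFormat();

        byte[] baseBytes = toBytes(baseIn);
        byte[] insBytes = toBytes(insIn);

        // The split point must land on a frame boundary.
        int frameSize = format.getFrameSize();
        int cut = (int) (atSeconds * format.getFrameRate()) * frameSize;
        cut = Math.min(Math.max(cut, 0), baseBytes.length);

        byte[] joined = new byte[baseBytes.length + insBytes.length];
        System.arraycopy(baseBytes, 0, joined, 0, cut);
        System.arraycopy(insBytes, 0, joined, cut, insBytes.length);
        System.arraycopy(baseBytes, cut, joined, cut + insBytes.length, baseBytes.length - cut);

        AudioInputStream joinedStream = new AudioInputStream(
                new ByteArrayInputStream(joined), format, joined.length / frameSize);
        AudioSystem.write(joinedStream, AudioFileFormat.Type.WAVE, output);
    }

    private static byte[] toBytes(AudioInputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[4096];
        int read;
        while ((read = in.read(buffer)) > 0) {
            out.write(buffer, 0, read);
        }
        return out.toByteArray();
    }
}


It would be called, for example, as insertWav(new File("audio1.wav"), new File("audio2.wav"), new File("joined.wav"), 2.0) to drop the second recording in two seconds into the first.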

Android - How to stop other apps playing audio?

I need to play some audio through the internal speaker.



audioManager.setMode(AudioManager.MODE_IN_CALL);
audioManager.setSpeakerphoneOn(true);


My problem: when I do this, if another app is playing audio, it starts playing through the internal speaker too. How can I prevent it?


I read this topic: How to stop other apps playing music from my current activity? But in my case I want to force any playing audio to stop.
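

There is no supported way to force-kill another app's playback, but requesting audio focus is the standard mechanism for asking other apps to stop: well-behaved players pause or stop on their own when they lose focus. A minimal sketch (the empty listener body is only illustrative):


// Sketch: requesting audio focus is the supported way to ask other apps to stop.
// 'context' stands for whatever Context (Activity, Service, ...) is available.
AudioManager audioManager =
        (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);

AudioManager.OnAudioFocusChangeListener listener =
        new AudioManager.OnAudioFocusChangeListener() {
            @Override
            public void onAudioFocusChange(int focusChange) {
                // Pause or resume this app's own playback on focus changes.
            }
        };

// AUDIOFOCUS_GAIN asks for long-lived, exclusive focus; players that honour
// audio focus will pause or stop their playback.
int result = audioManager.requestAudioFocus(listener,
        AudioManager.STREAM_MUSIC, AudioManager.AUDIOFOCUS_GAIN);

if (result == AudioManager.AUDIOFOCUS_REQUEST_GRANTED) {
    // Safe to start playing through the earpiece here.
}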


Android MediaPlayer getCurrentPosition move up and down?


//audio total 6600ms

MediaPlayer mPlayer = new MediaPlayer();
mPlayer.setAudioStreamType(AudioManager.STREAM_MUSIC);
setVolumeControlStream(AudioManager.STREAM_MUSIC);
try {
mPlayer.setDataSource(mFilePath);
mPlayer.prepare();
mPlayer.start();
mPb_record.setMax(mDuration);
mPb_record.setProgress(0);

// update progress
pbHandler.postDelayed(pbRunnable, 100);

mPlayer.setOnCompletionListener(new OnCompletionListener() {
@Override
public void onCompletion(MediaPlayer mp) {
// TODO ...
}
});
} catch (Exception e) {
e.printStackTrace();
}

Runnable pbRunnable = new Runnable() {
@Override
public void run() {
if (mPlayer != null) {
System.out.println("CurrentPosition()-- " + mPlayer.getCurrentPosition());

mPb_record.setProgress(mPlayer.getCurrentPosition());
pbHandler.postDelayed(this, 100);
}
}
};




6557 > 6504, so the progress bar goes back... Why?




Log





  • 12-30 20:48:56.428: I/System.out(2976): CurrentPosition()---- 6139

  • 12-30 20:48:56.528: I/System.out(2976): CurrentPosition()---- 6243

  • 12-30 20:48:56.668: I/System.out(2976): CurrentPosition()---- 6348

  • 12-30 20:48:56.768: I/System.out(2976): CurrentPosition()---- 6452

  • 12-30 20:48:56.868: I/System.out(2976): CurrentPosition()---- 6557

  • 12-30 20:48:56.968: I/System.out(2976): CurrentPosition()---- 6504 -- why?

  • 12-30 20:48:56.968: I/System.out(2976): CurrentPosition()---- 6504



Cannot load sound later during runtime

Hello, I have created a little class to play sounds in my game. Here it is:


package sk.tuke.oop.game.sounds;



import java.applet.AudioClip;
import java.io.File;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.Clip;
import javax.sound.sampled.FloatControl;

public class Sound implements AudioClip {

private Clip clip;
private FloatControl volume;
private int framePosition;

public Sound(String path) {
loadMusic(path);
}

public void loadMusic(String path) {
if (clip != null)
clip.stop();

clip = null;

if (!path.equals("")) {

File soundFile = null;

try {
soundFile = new File(path);
} catch (Exception e) {
return;
}

try {
AudioInputStream input = AudioSystem.getAudioInputStream(soundFile);
clip = AudioSystem.getClip();
clip.open(input);
} catch (Exception e) {
e.printStackTrace();
clip = null;
}

}

}

public void play() {
if (clip != null) {
stop();
clip.start();
}
}

public void stop() {
if (clip != null) {
clip.stop();
clip.setFramePosition(0);
}
}

public void pause() {
if (clip != null) {
if (clip.isRunning()) {
framePosition = clip.getFramePosition();
clip.stop();
}
}
}

public void unpause() {
if (clip != null) {
if (!clip.isRunning()) {
clip.setFramePosition(framePosition);
clip.start();
}
}
}

public void loop() {
if (clip != null) {
clip.loop(clip.LOOP_CONTINUOUSLY);
}
}

public void setVolume(float vol) {
if (volume.getMinimum()+ vol <= volume.getMaximum()) {
volume.setValue(volume.getMinimum());
volume.setValue(volume.getValue() + vol);
}
}
}


It works fine when all actors are created before the game loop, but when I shoot a bullet and want to play a sound I get:



javax.sound.sampled.LineUnavailableException: line with format PCM_SIGNED 44100.0 Hz, 16 bit, stereo, 4 bytes/frame, little-endian not supported.


Could you help me with that? Thank you.
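
One common cause of a LineUnavailableException when sounds are created during the game loop is that every new Clip opens another mixer line, and the default mixer only supports a limited number of simultaneous lines (or refuses to open new ones in that format once it is busy). A frequently used workaround is to load each sound once, up front, and reuse the Clip. A sketch that reuses the Sound class above, with a hypothetical SoundCache helper:


import java.util.HashMap;
import java.util.Map;

// Sketch: load every sound once before the game loop and reuse the Clip,
// instead of opening a new line each time a bullet is fired.
public class SoundCache {
private final Map<String, Sound> sounds = new HashMap<String, Sound>();

public void preload(String name, String path) {
sounds.put(name, new Sound(path)); // opens the line once, up front
}

public void play(String name) {
Sound s = sounds.get(name);
if (s != null) {
s.play(); // Sound.play() already rewinds the clip via stop()
}
}
}


For example, call preload("shot", "sounds/shot.wav") during game setup and play("shot") from the bullet code; no new lines are requested while the game is running.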


Python PyAudio + mic input - specific frequency filter?

I'm currently working on a radio astronomy project where I need to monitor the amplitude of an audio signal over time.


I've used the simplified Python code suggested by user1405612 here Detect tap with pyaudio from live mic, which takes the mic input and works out the RMS amplitude, and I've added a part to simply log the value to a CSV file. This is working very well, and thanks must go to user1405612 for it!


However, is there a way I can add a simple frequency filter to this code? For example, I am interested in the RMS amplitude at 19.580 kHz (in reality I would want to look at a range of, say, 19.4 kHz to 19.6 kHz).


Is there a way to do this with PyAudio using the code in the link above, by looking at the raw stream data for example, or any other way? I don't want anything complex like graphs or spectrum analysis, just a simple frequency filter. Unfortunately a band-pass filter before the mic input is not possible, so it needs to be done on the computer.


Thanks in advance!
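
The question is about Python, but the filtering idea itself is language-independent: instead of a full FFT, the Goertzel algorithm computes the energy of a single frequency bin directly from the raw samples, which is enough for monitoring a narrow band around 19.58 kHz. A minimal sketch (shown in Java here for illustration; it assumes 16-bit mono samples already converted to floats and a sample rate of at least ~40 kHz so the target frequency is below Nyquist):


// Sketch of the Goertzel algorithm: returns the magnitude of one frequency
// bin, so the amplitude of a narrow band can be tracked per block of samples.
static double goertzelMagnitude(float[] samples, double targetHz, double sampleRate) {
int k = (int) (0.5 + (samples.length * targetHz) / sampleRate);
double omega = (2.0 * Math.PI * k) / samples.length;
double coeff = 2.0 * Math.cos(omega);
double sPrev = 0.0, sPrev2 = 0.0;
for (float sample : samples) {
double s = sample + coeff * sPrev - sPrev2;
sPrev2 = sPrev;
sPrev = s;
}
// Power of the bin; take the square root for a magnitude value.
double power = sPrev2 * sPrev2 + sPrev * sPrev - coeff * sPrev * sPrev2;
return Math.sqrt(Math.max(power, 0.0));
}


Running this over the same blocks that currently feed the RMS calculation gives one amplitude value per block for the 19.4–19.6 kHz region. In Python the equivalent is usually a narrow band-pass from scipy.signal (butter plus lfilter) applied to the samples before the existing RMS step.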


Converting from .opus to .wav

Hello, I am supposed to write a utility that extracts an Opus-encoded audio payload from RTP packets which I read from a pcap dump file. The utility should also be able to decode the extracted payload and convert it to a .wav file. Currently I have written code that extracts the payload from the RTP packets and dumps it into a file "log.opus", but I am stuck at this point. How should I proceed with writing the decoder logic? I am working on the Windows platform and am using the winpcap library and libopus.
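
At a high level the decoding step comes down to feeding each extracted RTP payload to libopus (opus_decode) and concatenating the returned 16-bit PCM frames. Once the PCM exists, producing a .wav file is only a matter of prepending the canonical 44-byte RIFF header; the layout is fixed and can be written in any language. A sketch in Java for illustration (the channel count and sample rate must match what the decoder produced; requires java.io.FileOutputStream, java.nio.ByteBuffer and java.nio.ByteOrder):


// Sketch: wrap raw 16-bit PCM data in a canonical 44-byte WAV (RIFF) header.
static void writeWav(String path, byte[] pcm, int sampleRate, int channels) throws java.io.IOException {
int byteRate = sampleRate * channels * 2;        // 2 bytes per 16-bit sample
ByteBuffer header = ByteBuffer.allocate(44).order(ByteOrder.LITTLE_ENDIAN);
header.put("RIFF".getBytes());                   // chunk id
header.putInt(36 + pcm.length);                  // chunk size
header.put("WAVE".getBytes());                   // format
header.put("fmt ".getBytes());                   // sub-chunk 1 id
header.putInt(16);                               // sub-chunk 1 size (PCM)
header.putShort((short) 1);                      // audio format 1 = PCM
header.putShort((short) channels);
header.putInt(sampleRate);
header.putInt(byteRate);
header.putShort((short) (channels * 2));         // block align
header.putShort((short) 16);                     // bits per sample
header.put("data".getBytes());                   // sub-chunk 2 id
header.putInt(pcm.length);                       // sub-chunk 2 size
try (FileOutputStream out = new FileOutputStream(path)) {
out.write(header.array());
out.write(pcm);
}
}


The sketch only covers the WAV wrapping step; the per-packet decode loop and handling of lost packets still have to be written against the libopus API.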


HTML5 Audio element plays the source file even if there is no sound card installed

I want to check whether a sound card is installed in the system or not using JavaScript.


For that I used the HTML5 Audio element and tried to play an audio file after uninstalling the sound card. In Firefox and IE11 it immediately pauses the file and throws an error (the expected result for me), whereas Chrome tries to play the file without any sound (obvious, as there is no sound card). I need the same behavior as IE and Firefox in Chrome too. Is there any way we can make it work in Chrome?


Or any other solution to detect the sound card using JavaScript?


Detect specific sound in audio

I have a short (~1 second) arbitrary sound file and two devices. At some unknown time, device 1 will play the sound file out of its speaker. Device 2 should then be able to detect that sound. There may be background noise. It's unknown how loud the sound will be played.


This feels like it should be a common solved problem, but searching for answers has left me with nothing.


If anyone has a good solution or could just point me in the right direction I'd be very grateful.
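
This is usually approached as template matching: compute a normalized cross-correlation between the known clip and a sliding window of the microphone signal, and report a detection when the score crosses a threshold. A very rough time-domain sketch, assuming both signals are mono float arrays at the same sample rate:


// Sketch: normalized cross-correlation of a short template against a longer
// recording; the score is in [0, 1] and is independent of playback volume.
static double bestMatchScore(float[] recording, float[] template) {
double templateEnergy = 0.0;
for (float t : template) templateEnergy += t * t;

double best = 0.0;
for (int offset = 0; offset + template.length <= recording.length; offset++) {
double dot = 0.0, windowEnergy = 0.0;
for (int i = 0; i < template.length; i++) {
float r = recording[offset + i];
dot += r * template[i];
windowEnergy += r * r;
}
double denom = Math.sqrt(templateEnergy * windowEnergy);
if (denom > 0) {
best = Math.max(best, Math.abs(dot) / denom);
}
}
return best;
}


In practice raw-sample correlation is slow (it is O(n·m)) and fragile against reverberation and speaker/microphone coloring, so real systems usually correlate spectrogram frames or use an audio-fingerprinting approach instead; the sketch is only meant to show the basic idea.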


lundi 29 décembre 2014

Cross platform WAVE Audio playback in C++

I have written an application which converts an integer into its spoken English equivalent. This code passes filenames to be played by the PlaySoundA function in the Windows API:



vector<string> sounds = NumberConvert(atol(argv[1]));

int i;
for (i=0; i < sounds.size() - 1; i++) {
sounds[i].append(".wav");
PlaySoundA(sounds[i].c_str(), 0, SND_SYNC);
}


Would it be possible to create a wrapper function which would use the PortAudio API in the background, to maintain cross platform compatibility?


Is there a way to make NotificationListenerService get notifications first?

I have a NotificationListenerService that is receiving notifications, but in some cases I want my app to be able to prevent the notification from making the default sound. For example, with SMS messages, I'd like to prevent the SMS notification from making the default noise, but only if I've handled the notification (I'm reading the notifications using TextToSpeech).


I know I can use cancelNotification(), but sometimes the notification gets posted to the notification bar (and thus the notification sound is played) before I can cancel it. Is there a way to set the priority of this listener?


Alternatively, I've created a BroadcastReceiver to listen for SMS_RECEIVED intents, but I can't seem to find a way to prevent JUST SMS notifications from making a sound. I can prevent ALL notifications from making a sound, but that isn't exactly what I want, since I'm not handling all notifications and I don't want the user to miss being notified of an event I haven't handled.


Web Speech Synthesis custom voice download

I was reading here http://updates.html5rocks.com/2014/01/Web-apps-that-talk---Introduction-to-the-Speech-Synthesis-API


It seems that custom voices can be used with the Speech Synthesis API; the question is, where can I find samples?


I have tried searching but nothing comes up.


Maybe I am wrong and I can't install a new voice through the API; perhaps it needs to be installed in the system?


Java audio library or Linux console audio player

I'm creating a Java application which plays some music records. I use the BasicPlayer library, which worked well on Windows but unfortunately doesn't work on Linux (Raspbian, to be more precise). Now what I'm looking for is:


1) A Java library on Linux which allows me to:



  • get total length of song

  • play/resume

  • fast forward/rewind

  • change volume


OR


2) A Linux console audio player which allows the same as above using the console only. I would use something like Runtime.getRuntime().exec to play records through an external player.


I'd appreciate any suggestions.
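
For option 2, a common choice on Raspbian is a console player such as mpg123 (for MP3) or aplay (for WAV), driven from Java with ProcessBuilder. A sketch, assuming mpg123 is installed and that its generic remote-control mode (-R) is available (the player name, flag and commands are an assumption to verify against the installed version, not part of the question):


import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;

// Sketch: drive mpg123 in its remote-control mode so a single child process
// can load, pause and stop tracks via text commands written to its stdin.
public class ConsolePlayer {
private final Process process;
private final Writer commands;

public ConsolePlayer() throws IOException {
process = new ProcessBuilder("mpg123", "-R").start();
commands = new OutputStreamWriter(process.getOutputStream());
}

private void send(String command) throws IOException {
commands.write(command + "\n");
commands.flush();
}

public void play(String path) throws IOException { send("LOAD " + path); }
public void togglePause() throws IOException { send("PAUSE"); }
public void quit() throws IOException { send("QUIT"); }
}


Track length, the current position and volume feedback can be parsed from the status lines the player prints on stdout in this mode, though the exact output format should be checked against the version shipped with Raspbian.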


Referencing and playing a mp3 file from within a Java project

I am currently working on a project that will play certain songs when buttons are clicked. I now have my code working so that it correctly plays the mp3 file; however, it is not the way I want to do it. Right now I am just referencing the mp3 file from my desktop, but I would like to be able to reference it from the project itself. I created a source folder called resources with a folder in it called music that I put the mp3 files in. I'm having trouble figuring out how to correctly reference the mp3 file, though.


This is my current code that plays the song off of my desktop:



import java.io.BufferedInputStream;
import java.io.FileInputStream;

import javazoom.jl.player.Player;

public class MP3 {
private String filename;
private Player player;

// constructor that takes the name of an MP3 file
public MP3(String filename) {
this.filename = filename;
}

public void close() {
if (player != null)
player.close();
}

// play the MP3 file to the sound card
public void play() {
try {
FileInputStream fis = new FileInputStream(filename);
BufferedInputStream bis = new BufferedInputStream(fis);
player = new Player(bis);
} catch (Exception e) {
System.out.println("Problem playing file " + filename);
System.out.println(e);
}

// run in new thread to play in background
new Thread() {
public void run() {
try {
player.play();
} catch (Exception e) {
System.out.println(e);
}
}
}.start();

}

// test client
public static void main(String[] args) {
String filename = "/Users/username/desktop/LoveStory.mp3";
MP3 mp3 = new MP3(filename);
mp3.play();

// when the computation is done, stop playing it
mp3.close();

// play from the beginning
mp3 = new MP3(filename);
mp3.play();

}

}
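
Since the mp3 files live in a source folder named resources with a music sub-folder, they end up on the classpath (for example at /music/LoveStory.mp3 inside the built project or jar), so they can be opened with getResourceAsStream instead of a FileInputStream. A sketch of how the play() method could change (adds an import for java.io.InputStream; the /music/ path assumes the resources folder is configured as a source folder):


// Sketch: open the MP3 from the classpath instead of an absolute file path.
// Requires: import java.io.InputStream;
public void play() {
try {
InputStream is = MP3.class.getResourceAsStream("/music/" + filename);
if (is == null) {
System.out.println("Resource not found on classpath: " + filename);
return;
}
player = new Player(new BufferedInputStream(is));
} catch (Exception e) {
System.out.println("Problem playing file " + filename);
System.out.println(e);
}

// run in new thread to play in background, exactly as in the original
new Thread() {
public void run() {
try {
player.play();
} catch (Exception e) {
System.out.println(e);
}
}
}.start();
}


The test client would then pass just the file name, e.g. new MP3("LoveStory.mp3").play().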

Should an extracted audio sample be contained inside its original source when comparing bytes?

Let's say that I have an audio wav file with the sentence:



+-----------+----------------------------------------+
| meta data | 'Audio recognition sometimes is trick' |.wav
+-----------+----------------------------------------+


Now consider opening this audio in Audacity, then extracting and saving the word 'sometimes' in another file based on its waveform.



+-----------+-------------+
| meta data | 'sometimes' |.wav
+-----------+-------------+


Then I used this Java code to get the audio data only from both files:



//...
Path source = Paths.get("source.wav");
Path sample = Paths.get("sometimes.wav");
int index = compare(transform(source), transform(sample));
System.out.println("Shouldn't I be greater than -1!? " + (index > -1));
//...

private int compare(int[] source, int[] sample) throws IOException {
return Collections.indexOfSubList(Arrays.asList(source), Arrays.asList(sample));
}

private int[] transform(Path audio) throws IOException, UnsupportedAudioFileException {
try (AudioInputStream ais = AudioSystem.getAudioInputStream(
new ByteArrayInputStream(Files.readAllBytes(audio)))) {

AudioFormat format = ais.getFormat();
byte[] audioBytes = new byte[(int) (ais.getFrameLength() * format.getFrameSize())];
ais.read(audioBytes); // fill the buffer with the raw PCM data from the stream
int nlengthInSamples = audioBytes.length / 2;
int[] audioData = new int[nlengthInSamples];
for (int i = 0; i < nlengthInSamples; i++) {
int LSB = audioBytes[2*i]; /* First byte is LSB (low order) */
int MSB = audioBytes[2*i+1]; /* Second byte is MSB (high order) */
audioData[i] = (MSB << 8) | (255 & LSB);
}
return audioData;
}
}


Now comes my question again.


Shouldn't this code be able to find 'sometimes' audio data bytes inside the original audio file considering the extraction mentioned before?


I tried comparing the contents as Strings but had no luck at all:



new String(source).contains(new String(sample));


Can someone point out what I am missing here?
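
Two separate things prevent a match here. First, Arrays.asList(int[]) does not box the values: it produces a List with a single element (the array object itself), so indexOfSubList compares array references and can never find the sample inside the source. Second, even after boxing, an exact sample-for-sample match is only likely if Audacity exported the selection without any resampling, dithering or format change, and if the cut started exactly on an original sample boundary. A sketch of the boxing fix (uses java.util.ArrayList, java.util.List and java.util.Collections):


// Sketch: box the primitive arrays so indexOfSubList compares sample values,
// not array object identity (Arrays.asList(int[]) yields a one-element list).
private int compare(int[] source, int[] sample) {
List<Integer> sourceList = new ArrayList<Integer>(source.length);
for (int s : source) sourceList.add(s);
List<Integer> sampleList = new ArrayList<Integer>(sample.length);
for (int s : sample) sampleList.add(s);
return Collections.indexOfSubList(sourceList, sampleList);
}


If the boxed comparison still returns -1, comparing a short run of samples from both files by eye (or allowing a small per-sample tolerance) will show whether the exported data was altered on the way out of Audacity.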


Error with sound related code


InputStream in;
try{
in = new FileInputStream(new File ("C:\\Users\\Sony\\Desktop\\all_shook_up.wav"));
AudioStream audios = new AudioStream(in);
AudioPlayer.player.start(audios);
}
catch(Exception e){
System.out.println("Wrong.");
}


Whenever I run this program, the output seems to be "Wrong." I have no idea as to what I have done wrong.
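
The catch block swallows the actual exception, so the first step is to print it (e.printStackTrace()) to see whether the problem is the file path, the WAV encoding, or the AudioStream/AudioPlayer classes themselves, which come from the non-public sun.audio package and are not available on every JDK. A sketch of the same playback using the supported javax.sound.sampled API instead (the file path is the one from the question):


import java.io.File;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.Clip;

// Sketch: play a WAV file with the public Java Sound API instead of sun.audio.
public class PlayWav {
public static void main(String[] args) {
try {
File file = new File("C:\\Users\\Sony\\Desktop\\all_shook_up.wav");
AudioInputStream in = AudioSystem.getAudioInputStream(file);
Clip clip = AudioSystem.getClip();
clip.open(in);
clip.start();
// Keep the JVM alive long enough for the clip to finish playing.
Thread.sleep(clip.getMicrosecondLength() / 1000);
} catch (Exception e) {
e.printStackTrace(); // shows the real cause instead of just "Wrong."
}
}
}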


How to do live audio streaming in a Spring web application

I am a newbie to voice-based applications. My requirement is to stream audio, and I don't know where to start. In my project I am using the Spring framework. Can anybody suggest what things I should learn in order to stream audio?


Any help will be greatly appreciated!!!
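
As a first step, "streaming" can simply mean serving the audio bytes over HTTP with the right content type and letting the browser's audio element (or a mobile player) do progressive download. A minimal sketch of a Spring MVC controller that does this (the mapping and file path are placeholders, and the whole file is loaded into memory, which is only reasonable for short clips):


import java.nio.file.Files;
import java.nio.file.Paths;

import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseBody;

// Sketch: serve an audio file over HTTP so a browser <audio> tag (or any
// media player) can fetch and play it by progressive download.
@Controller
public class AudioController {

@RequestMapping(value = "/audio/sample", method = RequestMethod.GET)
@ResponseBody
public ResponseEntity<byte[]> sample() throws java.io.IOException {
byte[] audio = Files.readAllBytes(Paths.get("/path/to/sample.mp3")); // placeholder path
HttpHeaders headers = new HttpHeaders();
headers.set("Content-Type", "audio/mpeg");
return new ResponseEntity<byte[]>(audio, headers, HttpStatus.OK);
}
}


For long recordings or true live streaming, the topics to study afterwards are HTTP range requests, chunked responses, and streaming protocols such as HLS.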


USB Soundcard Device Class Specification document

I'm looking for a specific document.


I was reading the 'USB Audio Device Class Specification for Basic Audio Devices' document (http://www.usb.org/developers/docs/devclass_docs/audio10.pdf). In section 1.1 in that document, it is mentioned that "More complex audio devices, such as USB soundcard devices are not part of this specification".


Can someone point me to a document that does have the specification for USB soundcard devices? Do I need to be a member of the USB-IF to get this information?


Playing a sound based on what part of text is clicked in Android

It's a little hard to explain my question so I'll explain it with an example.


This image is from the Quran Android app on the Play Store. When a user clicks on a verse, it highlights the verse, as can be seen in the example, and also brings up a menu from the bottom to let the user play a sound file corresponding to the text.


Another similar example. Let's say you had this text: "Hello. Thank you for answering my question. See you." If the user clicked on "Hello." part of this text, it would highlight it and play a sound corresponding to hello, let's say hello.mp3.


My question is, how do you go on about doing this? How do they do it in their app? Thank you.
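
A common way to do this on Android is to put the text in a TextView as a SpannableString, attach a ClickableSpan to each verse or word, and start playback from the span's onClick. A rough sketch for the "Hello." example (R.id.verse_text and R.raw.hello are hypothetical resources; the usual android.text, android.text.style, android.text.method, android.media and android.widget imports apply):


// Sketch: make the word "Hello." clickable and play hello.mp3 when tapped.
TextView textView = (TextView) findViewById(R.id.verse_text);
String text = "Hello. Thank you for answering my question. See you.";
SpannableString spannable = new SpannableString(text);

ClickableSpan helloSpan = new ClickableSpan() {
@Override
public void onClick(View widget) {
MediaPlayer player = MediaPlayer.create(widget.getContext(), R.raw.hello);
player.start();
}
};

// Attach the span to the characters of "Hello." (end index is exclusive).
spannable.setSpan(helloSpan, 0, "Hello.".length(), Spanned.SPAN_EXCLUSIVE_EXCLUSIVE);
textView.setText(spannable);
textView.setMovementMethod(LinkMovementMethod.getInstance()); // makes spans clickable


Highlighting the tapped verse can be done the same way by also setting a BackgroundColorSpan over the same character range.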


Linux audio control web alsamixer

Linux audio controls are managed through a command-line interface; it would be useful to access them from outside.


Would it be possible to access the alsamixer controls from a web app, like a Django app? Can someone please point me in the right direction on how to do this?


any advice is really appreciated


My piano game: Mixing several WAV files results in extra white noise

I have been making a piano game which records my performance after playing.


It is powered by AndEngine , jcodec and an AAC Encoder Library


Here is the full source code: https://github.com/VansonLeung/PianoGameAndroid


However, the WAV mixing step produces extra white noise in the output.


Did my algorithm go wrong?


The algorithm is inside: jp.classmethod.sample.audiomixer.AudioMixer.java


Thanks!
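
Without looking at the repository in detail, the most common cause of added noise when mixing WAV data is summing 16-bit samples without clamping: the sum overflows the short range and wraps around, which sounds like crackle or white noise. Mixing from an odd byte offset (so the two streams are misaligned mid-sample) has a similar effect. A sketch of a safe per-sample mix for two little-endian 16-bit PCM buffers:


// Sketch: mix two 16-bit little-endian PCM buffers sample by sample,
// clamping the sum so it cannot wrap around and produce crackle.
static byte[] mixPcm16(byte[] a, byte[] b) {
byte[] out = new byte[Math.min(a.length, b.length) & ~1]; // even length
for (int i = 0; i < out.length; i += 2) {
int sampleA = (short) ((a[i] & 0xFF) | (a[i + 1] << 8));
int sampleB = (short) ((b[i] & 0xFF) | (b[i + 1] << 8));
int mixed = sampleA + sampleB;
if (mixed > Short.MAX_VALUE) mixed = Short.MAX_VALUE;
if (mixed < Short.MIN_VALUE) mixed = Short.MIN_VALUE;
out[i] = (byte) (mixed & 0xFF);
out[i + 1] = (byte) ((mixed >> 8) & 0xFF);
}
return out;
}


If the inputs are whole WAV files rather than raw PCM, make sure the 44-byte headers are skipped before mixing; treating header bytes as samples also shows up as a burst of noise at the start.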


Record audio, append audio files

I have a project in which the app records audio files using AVAudioRecorder. The requirement is that I should be able to pause the recording, play back the recorded file, and then continue recording. I searched and found that we cannot resume a paused recording after playing it back, so I used AVAssetExportSession to append the new audio to the first file. Now the next problem arises: when exporting, only the AVAssetExportPresetAppleM4A preset (M4A) is allowed. I need the files to be small for uploading to the server, but the exported file is large even though I am already recording at very low quality and a low sampling rate. The user may send either the first file or the appended file over the internet. Is there any solution for this problem?



- (BOOL)combineFilesFor:(NSURL*)url
{
NSError *error = nil;
BOOL ok = NO;

NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *soundOneNew = [documentsDirectory stringByAppendingPathComponent:Kcombined];
CMTime nextClipStartTime = kCMTimeZero;
AVMutableComposition *composition = [[AVMutableComposition alloc] init];
AVMutableCompositionTrack *compositionAudioTrack = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];

AVAsset *avAsset = [AVURLAsset URLAssetWithURL:masterUrl options:nil];
NSArray *tracks = [avAsset tracksWithMediaType:AVMediaTypeAudio];
if ([tracks count] == 0)
return NO;
CMTimeRange timeRangeInAsset = CMTimeRangeMake(kCMTimeZero, [avAsset duration]);
AVAssetTrack *clipAudioTrack = [[avAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
ok = [compositionAudioTrack insertTimeRange:timeRangeInAsset ofTrack:clipAudioTrack atTime:nextClipStartTime error:&error];
if (!ok) {
NSLog(@"Current Video Track Error: %@",error);
}
nextClipStartTime = CMTimeAdd(nextClipStartTime, timeRangeInAsset.duration);

AVAsset *avAsset1 = [AVURLAsset URLAssetWithURL:url options:nil];
NSArray *tracks1 = [avAsset1 tracksWithMediaType:AVMediaTypeAudio];
if ([tracks1 count] == 0)
return NO;
CMTimeRange timeRangeInAsset1 = CMTimeRangeMake(kCMTimeZero, [avAsset1 duration]);
AVAssetTrack *clipAudioTrack1 = [[avAsset1 tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
ok = [compositionAudioTrack insertTimeRange:timeRangeInAsset1 ofTrack:clipAudioTrack1 atTime:nextClipStartTime error:&error];
if (!ok)
{
NSLog(@"Current Video Track Error: %@",error);
}

AVAssetExportSession *exportSession = [AVAssetExportSession
exportSessionWithAsset:composition
presetName:AVAssetExportPresetAppleM4A];

if (nil == exportSession)
return NO;
exportSession.outputURL = [NSURL fileURLWithPath:soundOneNew]; // output path

combinedUrl = [NSURL fileURLWithPath:soundOneNew];
exportSession.outputFileType = AVFileTypeAppleM4A; // output file type

[exportSession exportAsynchronouslyWithCompletionHandler:^{

if (AVAssetExportSessionStatusCompleted == exportSession.status) {
NSLog(@"AVAssetExportSessionStatusCompleted");

} else if (AVAssetExportSessionStatusFailed == exportSession.status) {

NSLog(@"AVAssetExportSessionStatusFailed :%@",exportSession.error);
} else {
NSLog(@"Export Session Status: %d", exportSession.status);
}
}];

return YES;
}

How do I get the frame length of audio in Android?


How do I get the frame length, and other details such as the frame size, channels, etc., of an audio file in Android?



I tried using MediaFormat, but I think it describes an audio format rather than extracting the details from a file. Also, is there a way to import the javax.sound library into Android so that, if there is no way of getting the frame length, I can just use the classes in javax.sound?
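
On Android the per-track details usually come from android.media.MediaExtractor and the MediaFormat it returns for each track; the total frame length is not reported directly, but it can be derived from the duration and sample rate. A sketch, assuming the file is readable by the platform extractor (note that KEY_DURATION is not present for every container):


// Sketch: read sample rate, channel count and duration of the first audio
// track, then derive an approximate frame length (frames = duration * rate).
static void printAudioDetails(String path) throws java.io.IOException {
MediaExtractor extractor = new MediaExtractor();
extractor.setDataSource(path);
try {
for (int i = 0; i < extractor.getTrackCount(); i++) {
MediaFormat format = extractor.getTrackFormat(i);
String mime = format.getString(MediaFormat.KEY_MIME);
if (mime != null && mime.startsWith("audio/")) {
int sampleRate = format.getInteger(MediaFormat.KEY_SAMPLE_RATE);
int channels = format.getInteger(MediaFormat.KEY_CHANNEL_COUNT);
long durationUs = format.getLong(MediaFormat.KEY_DURATION);
long frameLength = (long) (sampleRate * (durationUs / 1000000.0));
System.out.println(mime + ": " + sampleRate + " Hz, "
+ channels + " ch, ~" + frameLength + " frames");
break;
}
}
} finally {
extractor.release();
}
}


javax.sound is not part of the Android runtime, so porting AudioSystem over is not practical; for uncompressed PCM the frame size is simply channels multiplied by bytes per sample.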


How to efficiently create array of wave file amplitudes for a WaveGraph?

I am trying to add a wave graph to my Android app that displays the waveform data for the currently playing audio file. I am currently trying to write a method to build an ArrayList with the wave file amplitudes (one amplitude for every 100 milliseconds of audio length), however it takes ages (minutes) to finish running. It is extremely inefficient.


This is the code:



public ArrayList <Integer> buildAudioWaveData(Recording recording){
final Recording finalRecording = recording;
(new Thread(){
@Override
public void run(){
File recFile = finalRecording.getFile();
ArrayList <Integer> dataSeries = new ArrayList<Integer>();

try {
InputStream bis = new BufferedInputStream(new FileInputStream(recFile));
DataInputStream dis = new DataInputStream(bis);

long sampleRate = finalRecording.getSampleRate(new RandomAccessFile(recFile, "rw"));
long samplesPerDatum = sampleRate / 10; // One sample for every 100 ms.
long fileLengthInBytes = recFile.length();
long fileDataRemaining = fileLengthInBytes / 2; // 16 bit wave file = 2 bytes per sample.
int max = 0;

while(fileDataRemaining > 0){
if(fileDataRemaining > samplesPerDatum) {
for (int i = 0; i < samplesPerDatum; i++) {
short temp = dis.readShort();
if (temp > max) {
max = temp;

}
}
Log.i("temp", Integer.toString(max));

dataSeries.add(max);
max = 0;
}
fileDataRemaining -= samplesPerDatum;
}
int x = 0;
}catch(Exception e){

}
}
}).start();


return null;
}


Does anyone know of a more efficient way in which i can generate the array for my graph?


Thanks heaps in advance. Corey B :)
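
Two things dominate the cost in the code above: reading the file two bytes at a time through DataInputStream, and (a correctness issue) readShort() being big-endian while WAV samples are little-endian. Reading the file in large chunks and decoding samples from the buffer is usually orders of magnitude faster. Note also that the method always returns null because the work happens on the new thread after the method has returned, so the result needs to be delivered via a callback or Handler. A sketch of the inner loop (it skips the canonical 44-byte WAV header and assumes 16-bit mono little-endian data, as in the original; requires java.io and java.util.ArrayList imports):


// Sketch: read the WAV data in large chunks and take one peak value per
// 100 ms window; little-endian decoding matches the WAV sample layout.
static ArrayList<Integer> buildWaveData(File recFile, int sampleRate) throws IOException {
ArrayList<Integer> dataSeries = new ArrayList<Integer>();
int samplesPerDatum = sampleRate / 10;            // one value per 100 ms
byte[] buffer = new byte[64 * 1024];
int max = 0, samplesInWindow = 0;

FileInputStream in = new FileInputStream(recFile);
try {
in.skip(44);                                  // canonical WAV header
int read;
while ((read = in.read(buffer)) > 0) {
for (int i = 0; i + 1 < read; i += 2) {
int sample = (short) ((buffer[i] & 0xFF) | (buffer[i + 1] << 8));
int magnitude = Math.abs(sample);
if (magnitude > max) max = magnitude;
if (++samplesInWindow == samplesPerDatum) {
dataSeries.add(max);
max = 0;
samplesInWindow = 0;
}
}
}
} finally {
in.close();
}
return dataSeries;
}


With buffered reads like this, a few minutes of 16-bit audio should be processed in well under a second on a typical device.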