Wednesday 7 August 2013

Taking Control of Your Volume… Control


Have you ever played with an app that has volume control and thought that it really didn't work the way you expected? You start at full volume and move the slider down. At about halfway you start to notice a difference, and then you lose control altogether. You never get it to the exact volume you want. It's either too low or too high.

I have experienced this and it is rather annoying! When I started adding volume controls to the apps I've developed for Musicopoulos, I wanted things to work much better, so I went on a journey of discovery. Like many learning adventures in programming, I discovered something I would never otherwise have come across.

The frustration I was experiencing with volume control in many apps wasn't caused by broken sliders. The logic behind the volume control simply wasn't developed with the ear in mind. Most people implement volume as a linear function. For example, if you have a slider with a range of values between 0.0 and 1.0, you set the volume to whatever position the slider is on.

This is not the way it should be done, but why?

As it turns out, we hear things on a logarithmic scale, not a linear scale. What this means is that our ears are more sensitive to changes at low volumes than they are at high volumes. That's just the way our ears have evolved over time, and it allows us to experience a wide dynamic range of sound levels.

If we apply this to our linear volume control: between 0.5 and 1.0 we don't perceive much of a change, but between 0.0 and 0.5 even small slider movements produce large changes in perceived volume. This means that we need finer control over lower volumes, and less control over higher volumes.

So what is the solution here? We need to define an exponential function for managing our volume. Since our ears process sound logarithmically, using an exponential function to control it will give us the perception of a linear change in volume. Here is some math (using the natural logarithm) to show why:

log(e^x) = x

What is this equation telling us? Think of it like this:
  • x is the volume we set on our slider, let's say 0.5
  • We change this value exponentially and set the volume to e^0.5
  • Our ear processes this volume logarithmically, so we perceive a volume of log(e^0.5)
  • The result of log(e^0.5) is 0.5!

There are a couple of challenges with exponential functions:
  1. An exponential decay function will never reach zero.
  2. Controlling the shape of an exponential curve can be tricky.

Yes, we could deal with these challenges and come up with a perfect solution, but we don't have to. We can make a fair approximation using a simpler equation:

y = x^4

Where
  • x is the selected volume on your slider
  • y is the 'perceived' volume by our ears

As it turns out, this is a good approximation of the ideal exponential curve for our volume control. When implemented in your apps, you will see that you gain fine control over sound levels in the lower range, exactly where you need it.
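As a minimal sketch, here is the mapping in plain C (the function name is my own, not from the sample project). Notice how the bottom half of the slider is squeezed into a small slice of the gain range, which is exactly what gives you the fine control:

```c
/* Map a linear slider position (0.0 to 1.0) to a playback gain
   using the y = x^4 approximation of an exponential curve. */
static float slider_to_gain(float x)
{
    return x * x * x * x;
}
/* The top of the slider still reaches full gain
   (slider_to_gain(1.0f) == 1.0f), but the bottom half of the
   slider only covers gains 0.0 to 0.0625, giving very fine
   control exactly where the ear is most sensitive. */
```

In an OpenAL app, the result would simply be handed to something like alSourcef(source, AL_GAIN, slider_to_gain(sliderValue)).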

The Sample Project


I have put together a very quick sample project that you can use to see just how different each solution is. It's a very simple app: it just loops a music track (a bunch of loops I put together in GarageBand) and lets you control the volume using either a linear or an (almost) exponential calculation.

When you play with the volume sliders, watch how changes on one affect the other. You can clearly see how much more control you have with the (almost) exponential solution.

The project is available for download on GitHub, https://github.com/OhNo789/VolumeControl, and you are free to use it any way you like.

Thursday 1 August 2013

OpenAL on iOS


OpenAL on iOS - The 12 Step Program


So you have a problem. It's ok, you're not alone. I am here to help you! This 12 Step Program for… programming will help you get up and running with OpenAL in your iOS projects.

However, as with all self-help programs, this will only get you started. You will need to continue to work after completing all 12 steps. There are many amazing resources available to you. Here are a few I have used while developing these 12 steps, and I strongly recommend you review them as well.

iOS Audio & OpenAL / Beginning iPhone Games Development (CocoaHeads Silicon Valley August 2011) - http://www.youtube.com/watch?v=6QQAzhwalPI

Background


This is the follow up tutorial to Playing Audio Samples Using OpenAL on iOS (http://ohno789.blogspot.com.au/2013/08/playing-audio-samples-using-openal-on.html).

These are the first tutorials I have ever written, and OpenAL is a fairly complex subject, so why have I started with OpenAL? In my own projects, I need low latency sounds that can be mixed together. That's it! OpenAL is a very powerful tool, and it is used in many games to generate some pretty cool audio effects for moving objects. For me though, effects were not important. I just needed to know that my apps would play a sound at the exact time it was needed, and continue playing when a new sound was initiated. I am the lead developer of the Musicopoulos series of music education apps, which need to play concurrent audio samples at the exact time they are required, and OpenAL gives me the power to do this. If you are looking for a tutorial on how to generate 3D sounds in your game, then look elsewhere. If you too need to play low latency, mixed sounds in your apps, then read ahead.

Review


In the last tutorial, I showed you how to play a sound in OpenAL using the following steps:
  1. Open a device
  2. Create and activate a context
  3. Generate sound sources
  4. Generate data buffers
  5. Open your audio data files
  6. Transfer your audio data to a buffer
  7. Attach a buffer to a sound source
  8. Play the audio sample

However, there were some serious problems with the code. Our goal with OpenAL is to play low latency sounds that can be mixed together and played simultaneously. The sample project did not let us do this. We were loading a single sound into a buffer each time we wanted to play it, and each time we played a new sound the original audio was cut off.

Sounds bad, but it's even worse than that! There are a few other housekeeping chores that we didn't tend to that could lead to some serious problems. Do you remember when Steve Jobs announced the original iPhone? It was a phone, an iPod, and an internet device. What this means from an audio perspective is that all apps, whether native Apple apps or third-party apps, are competing for audio. If you don't manage your audio properly, your app may lose the ability to produce sound.

So let's get into it. Here are the 12 steps you need to follow, not only to play audio samples, but to get the most out of OpenAL:
  1. Set up an Audio Session
  2. Open a device
  3. Create and activate a context
  4. Generate sound sources
  5. Manage a collection of sources
  6. Open your audio data files
  7. Transfer your audio data to a buffer
  8. Generate data buffers
  9. Manage a collection of data buffers
  10. Attach a buffer to a sound source
  11. Play the audio sample
  12. Clean up and shutdown OpenAL

We will be modifying the code from the previous tutorial. This will leave us with a project that is usable, but still a little limited. I will go into the reasons why at the end of the tutorial. You will be able to download a complete project called SimpleMetronome that you should be able to adapt to your own needs.

So where did we leave off in the last tutorial? We had a class called AudioSamplePlayer with a single method called playSound.

AudioSamplePlayer.h

#import <Foundation/Foundation.h>

#import <OpenAL/al.h>
#import <OpenAL/alc.h>
#include <AudioToolbox/AudioToolbox.h>

@interface AudioSamplePlayer : NSObject

- (void) playSound;

@end

AudioSamplePlayer.m

#import "AudioSamplePlayer.h"

@implementation AudioSamplePlayer

static ALCdevice *openALDevice;
static ALCcontext *openALContext;

- (id)init
{
    self = [super init];
    if (self)
    {
        openALDevice = alcOpenDevice(NULL);
        
        openALContext = alcCreateContext(openALDevice, NULL);
        alcMakeContextCurrent(openALContext);
    }
    return self;
}

- (void) playSound
{
    ALuint sourceID;
    alGenSources(1, &sourceID);
    
    NSString *audioFilePath = [[NSBundle mainBundle] pathForResource:@"ting" ofType:@"caf"];
    NSURL *audioFileURL = [NSURL fileURLWithPath:audioFilePath];
    
    AudioFileID afid;
    OSStatus openAudioFileResult = AudioFileOpenURL((__bridge CFURLRef)audioFileURL, kAudioFileReadPermission, 0, &afid);
    
    if (0 != openAudioFileResult)
    {
        NSLog(@"An error occurred when attempting to open the audio file %@: %ld", audioFilePath, openAudioFileResult);
        return;
    }
    
    UInt64 audioDataByteCount = 0;
    UInt32 propertySize = sizeof(audioDataByteCount);
    OSStatus getSizeResult = AudioFileGetProperty(afid, kAudioFilePropertyAudioDataByteCount, &propertySize, &audioDataByteCount);
    
    if (0 != getSizeResult)
    {
        NSLog(@"An error occurred when attempting to determine the size of audio file %@: %ld", audioFilePath, getSizeResult);
    }
    
    UInt32 bytesRead = (UInt32)audioDataByteCount;
    
    void *audioData = malloc(bytesRead);
    
    OSStatus readBytesResult = AudioFileReadBytes(afid, false, 0, &bytesRead, audioData);
    
    if (0 != readBytesResult)
    {
        NSLog(@"An error occurred when attempting to read data from audio file %@: %ld", audioFilePath, readBytesResult);
    }
    
    AudioFileClose(afid);
    
    ALuint outputBuffer;
    alGenBuffers(1, &outputBuffer);
    
    alBufferData(outputBuffer, AL_FORMAT_STEREO16, audioData, bytesRead, 44100);
    
    if (audioData)
    {
        free(audioData);
        audioData = NULL;
    }
    
    alSourcef(sourceID, AL_PITCH, 1.0f);
    alSourcef(sourceID, AL_GAIN, 1.0f);
    
    alSourcei(sourceID, AL_BUFFER, outputBuffer);
    
    alSourcePlay(sourceID);
}

@end

1. Set up the Audio Session


While it is not strictly required in order to play audio using OpenAL, you need to manage your Audio Session to ensure the user experience is predictable. We are competing for audio on iOS devices, so we need to clearly state what we need and when we need it. For example, say you have an app that includes some background music as well as sound effects. If a user is listening to their own music through the iPod app when they launch your app, what should happen? Should the iPod stop playing? Should your app forgo playing the background music and continue to play the user's selection? Let's modify our init method so that we handle our Audio Session appropriately:

- (id)init
{
    self = [super init];
    if (self)
    {
        AudioSessionInitialize(NULL, NULL, AudioInterruptionListenerCallback, NULL);
        
        UInt32 session_category = kAudioSessionCategory_MediaPlayback;
        AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(session_category), &session_category);
        
        AudioSessionSetActive(true);
        
        openALDevice = alcOpenDevice(NULL);
        
        openALContext = alcCreateContext(openALDevice, NULL);
        alcMakeContextCurrent(openALContext);
    }
    return self;
}

We are using AudioSessionInitialize() to initialise our Audio Session. This function takes 4 parameters:

  1. The run loop that the interruption listener callback should be run on. We are passing NULL to indicate that we want to use the main run loop.
  2. The mode for the run loop that the interruption listener function will run on. We are passing NULL to indicate that we want the default mode.
  3. The function we want to call when our Audio Session is interrupted. Our function is called AudioInterruptionListenerCallback.
  4. Data that you would like to be passed to the interruption callback function.

Do you notice a theme here? When you initialise an Audio Session, you need to specify how your app will behave when it is interrupted. In our case, when the app is interrupted, we are going to call the function AudioInterruptionListenerCallback(). Let's have a look at this function and see what we need to take care of:

void AudioInterruptionListenerCallback(void* user_data, UInt32 interruption_state)
{
    if (kAudioSessionBeginInterruption == interruption_state)
    {
        alcMakeContextCurrent(NULL);
    }
    else if (kAudioSessionEndInterruption == interruption_state)
    {
        AudioSessionSetActive(true);
        alcMakeContextCurrent(openALContext);
    }
}

Now, when our app is interrupted we need to handle 2 cases: 

  1. kAudioSessionBeginInterruption - What we do when our app is interrupted
  2. kAudioSessionEndInterruption - What we do when the interruption ends and our app resumes

Let's discuss the Audio Session first. When your app is interrupted, your Audio Session will automatically be deactivated. There is no need to call AudioSessionSetActive(false). However, when the interruption has ended, we need to explicitly state that we want to restore our Audio Session by calling AudioSessionSetActive(true).

We are also managing the OpenAL context within this function. If we think back to the previous tutorial, we described the context as a combination of the person hearing the audio, and the space or air in which the audio travels. Just like our Audio Session, we share the context with other apps. When we are interrupted, we need to give up that context. We do this by calling alcMakeContextCurrent(NULL). When our app is restored, we need to take back the context using the same function, but this time passing in our openALContext variable, alcMakeContextCurrent(openALContext).

2. Open a device and 3. Create and activate a context


In the previous tutorial, we created our device, created our context and made it active. Now that we are managing our context with the Audio Session, we can move on to dealing with sources.

4. Generate sound sources and 5. Manage a collection of sources


In the previous tutorial, we generated a single sound source. This meant that we could only play a single audio sample at any given time. We could not mix our audio samples. With OpenAL on the iPhone, we can generate up to 32 unique sound sources. In order to take advantage of this, we need to manage a collection of sound sources. First, let's define a constant for the maximum number of concurrent sources. Place this at the top of the AudioSamplePlayer.m file, underneath the #import "AudioSamplePlayer.h" statement. Also, declare a static variable that we can use to store our collection of sound sources:

#import "AudioSamplePlayer.h"

#define kMaxConcurrentSources 32

@implementation AudioSamplePlayer


static ALCdevice *openALDevice;
static ALCcontext *openALContext;

static NSMutableArray *audioSampleSources;

In our init method, initialise the audioSampleSources array and generate our sound sources:

- (id)init
{
    self = [super init];
    if (self)
    {
        openALDevice = alcOpenDevice(NULL);
        
        openALContext = alcCreateContext(openALDevice, NULL);
        alcMakeContextCurrent(openALContext);
        
        audioSampleSources = [[NSMutableArray alloc] init];
        
        ALuint sourceID;
        for (int i = 0; i < kMaxConcurrentSources; i++) {
            /* Create a single OpenAL source */
            alGenSources(1, &sourceID);
            /* Add the source to the audioSampleSources array */
            [audioSampleSources addObject:[NSNumber numberWithUnsignedInt:sourceID]];
        }
    }
    return self;
}

After we initialise the audioSampleSources array, we declare a single sourceID of type ALuint (the OpenAL unsigned integer type). We then go through a loop and use alGenSources() to generate our sound sources, storing each sound source ID in our array. Note, in order to store this number in an Objective-C array we must convert it to an NSNumber object. We now have an array of 32 sound sources that can be used to play 32 concurrent sounds.

6. Open your audio data files, 7. Transfer your audio data to a buffer, 8. Generate data buffers and 9. Manage a collection of data buffers


In the last tutorial, we were loading our file into a buffer each time we wanted to play an audio sample. This created a lag in our sound, something we really don't want. To remedy this, we need to preload all of our audio samples into buffers, where they will wait until we are ready to play them. Preloading our audio samples into buffers is how we achieve low latency sounds. Now, instead of doing this in our playSound method, we need to create a new method that takes the name of a sound and loads it into a buffer. We then need to maintain a collection of buffers from which we can select a specific sound to play. You can preload up to 256 buffers, so let's start by defining another constant and declaring a collection for storing our data buffers:

#import "AudioSamplePlayer.h"

#define kMaxConcurrentSources 32
#define kMaxBuffers 256

@implementation AudioSamplePlayer


static ALCdevice *openALDevice;
static ALCcontext *openALContext;

static NSMutableArray *audioSampleSources;
static NSMutableDictionary *audioSampleBuffers;

We are using an NSMutableDictionary to store our buffers so that we can provide a key and get the related buffer ID. Our key will simply be the name of our audio sample. Now, lets create a method that takes the name of an audio sample and loads it into a buffer:

- (void) preloadAudioSample:(NSString *)sampleName
{
    if ([audioSampleBuffers objectForKey:sampleName])
    {
        return;
    }
    
    if ([audioSampleBuffers count] >= kMaxBuffers) {
        NSLog(@"Warning: You are trying to create more than 256 buffers! This is not allowed.");
        return;
    }
    
    NSString *audioFilePath = [[NSBundle mainBundle] pathForResource:sampleName ofType:@"caf"];
    
    AudioFileID afid = [self openAudioFile:audioFilePath];
    
    UInt32 audioFileSizeInBytes = [self getSizeOfAudioComponent:afid];
    
    void *audioData = malloc(audioFileSizeInBytes);
    
    OSStatus readBytesResult = AudioFileReadBytes(afid, false, 0, &audioFileSizeInBytes, audioData);
    
    if (0 != readBytesResult)
    {
        NSLog(@"An error occurred when attempting to read data from audio file %@: %ld", audioFilePath, readBytesResult);
    }
    
    AudioFileClose(afid);
    
    ALuint outputBuffer;
    alGenBuffers(1, &outputBuffer);
    
    alBufferData(outputBuffer, AL_FORMAT_STEREO16, audioData, audioFileSizeInBytes, 44100);
    
    [audioSampleBuffers setObject:[NSNumber numberWithUnsignedInt:outputBuffer] forKey:sampleName];
    
    if (audioData)
    {
        free(audioData);
        audioData = NULL;
    }
}

We discussed how to create buffers in the last tutorial, so here we will concentrate on how to manage a collection of buffers. Our preloadAudioSample method takes a single parameter, the name of an audio sample. Note that the name should be stripped of any file extension. For example, if you want to play an audio sample called laser.caf, we will only be passing 'laser' into this method.

The first thing we do is check that this sample hasn't already been loaded into a buffer; we need to make sure we are not using more memory than we need to! Then we check that we are not trying to create more than 256 buffers. If all goes well, we open the file, read in its audio data and copy it to the buffer, just like we did in the last tutorial.
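Those two guard checks amount to a small capped cache. As a rough model in plain C (the function name and structure are mine, not the tutorial's Objective-C), with the cap checked first:

```c
#include <string.h>

#define MAX_BUFFERS 256

/* Decide whether a sample name needs a new buffer: refuse if the
   cache is full, skip if the name is already buffered. Returns 1
   when the sample should be loaded, 0 otherwise. */
static int should_preload(const char *names[], int count, const char *name)
{
    if (count >= MAX_BUFFERS)
        return 0; /* never exceed the 256-buffer limit */

    for (int i = 0; i < count; i++) {
        if (strcmp(names[i], name) == 0)
            return 0; /* already buffered: don't waste memory */
    }
    return 1;
}
```

The dictionary in the Objective-C version makes the "already buffered?" lookup a single keyed access instead of a scan, but the decision logic is the same.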

Now, when we generate our buffer ID, outputBuffer, we store it in our audioSampleBuffers dictionary using the name of the audio sample as the key. This will make the task of playing our sample much easier.

In addition to adding our buffer to the audioSampleBuffers dictionary, there are also 2 new helper methods, openAudioFile and getSizeOfAudioComponent. While not necessary, this helps make the code a little more presentable and easier to read. Here is the first helper method:

- (AudioFileID) openAudioFile:(NSString *)audioFilePathAsString
{
    NSURL *audioFileURL = [NSURL fileURLWithPath:audioFilePathAsString];
    
    AudioFileID afid;
    OSStatus openAudioFileResult = AudioFileOpenURL((__bridge CFURLRef)audioFileURL, kAudioFileReadPermission, 0, &afid);
    
    if (0 != openAudioFileResult)
    {
        NSLog(@"An error occurred when attempting to open the audio file %@: %ld", audioFilePathAsString, openAudioFileResult);
        
    }
    
    return afid;
}

This method takes a file path as a string, opens the audio sample and returns the AudioFileID.

- (UInt32) getSizeOfAudioComponent:(AudioFileID)afid
{
    UInt64 audioDataSize = 0;
    UInt32 propertySize = sizeof(UInt64);
    
    OSStatus getSizeResult = AudioFileGetProperty(afid, kAudioFilePropertyAudioDataByteCount, &propertySize, &audioDataSize);
    
    if (0 != getSizeResult)
    {
        NSLog(@"An error occurred when attempting to determine the size of audio file.");
    }
    
    return (UInt32)audioDataSize;
}

This method takes an AudioFileID and returns the size of the audio data as a UInt32.

10. Attach a buffer to a sound source and 11. Play the audio sample


So we have now generated our sources and preloaded our sounds into buffers. It's time to attach a buffer to a source and play an audio sample! The old playSound method is no longer sufficient; we need to provide the name of the audio sample we would like to play. Here is the replacement method:

- (void) playAudioSample:(NSString *)sampleName
{
    ALuint source = [self getNextAvailableSource];
    
    alSourcef(source, AL_PITCH, 1.0f);
    alSourcef(source, AL_GAIN, 1.0f);
    
    ALuint outputBuffer = (ALuint)[[audioSampleBuffers objectForKey:sampleName] intValue];
    
    alSourcei(source, AL_BUFFER, outputBuffer);
    
    alSourcePlay(source);
}

The first thing we need to do is get one of the sources from our audioSampleSources array. We are using the helper method, getNextAvailableSource, to do this. Here is the method:

- (ALuint) getNextAvailableSource
{
    ALint sourceState;
    for (NSNumber *sourceID in audioSampleSources) {
        alGetSourcei([sourceID unsignedIntValue], AL_SOURCE_STATE, &sourceState);
        if (sourceState != AL_PLAYING)
        {
            return [sourceID unsignedIntValue];
        }
    }
    
    ALuint sourceID = [[audioSampleSources objectAtIndex:0] unsignedIntValue];
    alSourceStop(sourceID);
    return sourceID;
}

The source IDs are stored in an array. What we need to do is move through that array and locate the first source that isn't currently playing an audio sample. We do this by querying the source for its state, using the function alGetSourcei(), which takes the following parameters:

  1. A source ID (remember, we stored our source IDs as NSNumbers, so we now need to call unsignedIntValue to retrieve the source ID)
  2. What information we are after, in this case we want the AL_SOURCE_STATE.
  3. A pointer to an ALint in which the source state will be stored

If we find a source that is not currently playing an audio sample, we return the source ID. Note, if we find that all the sources are currently being used, we will return the first source ID in the array. This means that any audio sample being played through this source will be cut off.
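The selection logic itself has nothing OpenAL-specific about it, so it can be modeled in plain C over an array of playing flags (a stand-in for querying AL_SOURCE_STATE; the names are mine):

```c
/* Given the playing state of each source (1 = playing, 0 = idle),
   return the index of the first idle source. If every source is
   busy, fall back to index 0; the caller then stops that source
   (cutting off its current sound) and reuses it. */
static int next_available_source(const int playing[], int count)
{
    for (int i = 0; i < count; i++) {
        if (!playing[i])
            return i;
    }
    return 0; /* all busy: sacrifice the first source */
}
```

This "steal a voice when the pool is exhausted" approach is a common trade-off in game audio: a brief cut-off is usually less noticeable than a sound failing to play at all.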

Back to our playAudioSample method, once we have our source, we then set some parameters for that source. In our case, we are setting the pitch and gain to 1.0. This method can easily be modified to take float values for these parameters, which you can see in the sample project.

Next, we retrieve the buffer for our audio sample. The buffer was stored in a dictionary using the audio sample name as the key. We retrieve the buffer ID using this key, convert it to an int value and finally cast it to type ALuint.

Now that we have our source and our buffer, we can attach the buffer to the source using the function alSourcei(). Finally, we play our audio sample using the function alSourcePlay().

12. Clean up and shutdown OpenAL


This is our final step. When we have finished using OpenAL, we need to clean up and shut it down.

- (void) shutdownAudioSamplePlayer
{
    for (NSNumber *sourceValue in audioSampleSources)
    {
        ALuint sourceID = [sourceValue unsignedIntValue];
        alSourceStop(sourceID);
        alDeleteSources(1, &sourceID);
    }
    [audioSampleSources removeAllObjects];
    
    NSArray *bufferIDs = [audioSampleBuffers allValues];
    for (NSNumber *bufferValue in bufferIDs)
    {
        ALuint bufferID = [bufferValue unsignedIntValue];
        alDeleteBuffers(1, &bufferID);
    }
    [audioSampleBuffers removeAllObjects];
    
    alcDestroyContext(openALContext);
    
    alcCloseDevice(openALDevice);
}

First, we loop through our sources, stop each one in case it is playing an audio sample, then delete it. We then remove all the objects from the audioSampleSources array. Next, we locate and delete all the buffers. To do this, we extract an array of all the values in the audioSampleBuffers dictionary (we are not interested in the keys here, as the buffer IDs are in the values), then we move through that array and delete each buffer. We then remove all the objects from the audioSampleBuffers dictionary. Finally, we destroy the context and close the device.

And our 12 Step Program is complete. Hopefully you now have a better understanding of OpenAL and how to use it in your iOS projects. Remember, these 12 steps are just the beginning. You will need to continue to search for new information and try new techniques. It is the only way to stay on top of it.

The Sample Project

The code we have gone through today will work just fine in most cases, but it still has some problems. Our biggest problem is that our AudioSamplePlayer class can be initialised from anywhere in your application. This means you could have more than 1 instance of AudioSamplePlayer. Well, actually, that's not true. If you try to create more than 1 of these objects your app will crash! Why? Remember the beginning of the first tutorial? We can only have a single context and a single device. If you try to create more than one of these your app is doomed!

The solution is to make AudioSamplePlayer a singleton class. That way you can call it from anywhere you choose and there will only ever be a single instance of AudioSamplePlayer. The sample project includes AudioSamplePlayer as a singleton.

The sample project is a very simple metronome. I chose this example because metronomes are all about keeping time. We need to make sure our sounds are being played with low latency. Lag is not acceptable! The project is full of comments, so it should be clear and easy to understand. Feel free to ask any questions if you are not sure about something.

That concludes this tutorial. You can download a sample app on GitHub, https://github.com/OhNo789/SimpleMetronome, and you are free to use it any way you like.

Playing Audio Samples Using OpenAL on iOS


Before we get into this tutorial I need to give you fair warning. Don't use this code in your iOS projects! It is rubbish and you shouldn't use OpenAL this way. The aim of this tutorial is to introduce you to the concepts of OpenAL. In the next tutorial, OpenAL on iOS - The 12 Step Program, you will find some code that you can actually use in your iOS apps.

Ok, with that out of the way, let's learn a little bit about OpenAL.

What is OpenAL?


OpenAL is a cross-platform API for playing audio and was developed for use in gaming. OpenAL has many powerful features that we can take advantage of. OpenAL is a C API, so if you haven't used or seen any code written in C it will look a little foreign to you. Don't let this discourage you! There is some good news to put you at ease: Objective-C is based on C, so any code you write in C is completely acceptable in Objective-C. That means we don't have to do any additional work to make our C code work in an Objective-C environment. It will just look a little different.

Low Latency Playback


With OpenAL we can preload audio samples into buffers and play them at any time we choose. Since the samples can be preloaded, when you ask for a sound to be played it happens very quickly. 

Mixing Simultaneous Sounds


OpenAL allows you to play multiple samples at the same time. Mixing of samples happens automatically. This is very important in gaming. For example, you can have laser blasts, explosions and aircraft sounds all being played at the same time.

Audio Effects


OpenAL allows you to define the location of a sound in space. This is known as positional or 3D audio. It's great for audio effects such as objects moving across the screen.

As objects move away from you, the sound they produce becomes fainter. OpenAL allows you to add distance effects to your objects' sounds. As they move off screen you can reduce the gain of the sound they produce.

OpenAL allows you to alter the pitch of a sound. This is great for creating audio effects such as the Doppler Effect (as objects move towards you they have a higher pitch and the moment they move past you they have a lower pitch).
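For a taste of the math behind that, the classic Doppler formula for a stationary listener gives a pitch multiplier that could be fed to a source's AL_PITCH. A rough sketch in plain C (the function name and constant are my own; OpenAL can also derive Doppler shift for you from source and listener velocities):

```c
/* Doppler pitch multiplier for a source moving at source_speed
   metres per second along the line to a stationary listener
   (positive = approaching, negative = receding). */
static float doppler_pitch(float source_speed)
{
    const float speed_of_sound = 343.0f; /* in air, roughly */
    return speed_of_sound / (speed_of_sound - source_speed);
}
/* An approaching source is pitched up (result > 1.0), a receding
   source is pitched down (result < 1.0), and a stationary source
   is unchanged (result == 1.0). */
```
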

A Bit of Background…


Ok, so why did I start using OpenAL? I am not a game developer. It would be nice, and who knows, maybe one day I will have the privilege, but for now I am developing music apps. You can check some of them out on my site www.musicopoulos.com. As with games, music apps need low latency sounds that can be mixed together. When a user touches a piano key, the sound needs to be played immediately. And if they touch a second key, the second sound should be played over the first. These are my goals for OpenAL. If your goals go beyond this then that's fine, you still have to start with the basics.

So let's get into some code!

If you would like to follow along with this tutorial (which I recommend you do), you will need to:

  • Create a new Xcode project (call this project anything you like)
  • Create a new file (I call mine AudioSamplePlayer) that inherits from NSObject
  • Import the frameworks OpenAL.framework and AudioToolbox.framework
  • In your AudioSamplePlayer.h file (or the header for the file you created), import the following:
    • #import <OpenAL/al.h>
    • #import <OpenAL/alc.h>
    • #include <AudioToolbox/AudioToolbox.h>

Create a new method and call it playSound. Your header and implementation file should look something like this:

#import <Foundation/Foundation.h>

#import <OpenAL/al.h>
#import <OpenAL/alc.h>
#include <AudioToolbox/AudioToolbox.h>

@interface AudioSamplePlayer : NSObject

- (void) playSound;

@end

#import "AudioSamplePlayer.h"

@implementation AudioSamplePlayer

- (void) playSound
{
    
}

@end

Declare 2 static variables in your AudioSamplePlayer.m file. We will get to these in a second.

#import "AudioSamplePlayer.h"

@implementation AudioSamplePlayer

static ALCdevice *openALDevice;

static ALCcontext *openALContext;

- (void) playSound
{
    
}

@end

The first thing we need to do with OpenAL is open a device. A device is a physical thing that you use to process sound. For example a sound card would be a device. There can only be one device, and this is why we declared the static variable ALCdevice *openALDevice.

In your init method, use the function alcOpenDevice() to initialise the device. Note: passing NULL here indicates that we want the default device.

#import "AudioSamplePlayer.h"

@implementation AudioSamplePlayer

static ALCdevice *openALDevice;

static ALCcontext *openALContext;

- (id)init
{
    self = [super init];
    if (self)
    {
        openALDevice = alcOpenDevice(NULL);
    }
    return self;
}

- (void) playSound
{
    
}

@end


Now that we have our device we need to set up the context. The context is the combination of the person hearing the sound and the air in which the sound travels. In other words, in what context is the sound being played?

#import "AudioSamplePlayer.h"

@implementation AudioSamplePlayer

static ALCdevice *openALDevice;

static ALCcontext *openALContext;

- (id)init
{
    self = [super init];
    if (self)
    {
        openALDevice = alcOpenDevice(NULL);
        
        openALContext = alcCreateContext(openALDevice, NULL);
        alcMakeContextCurrent(openALContext);
    }
    return self;
}

- (void) playSound
{
    
}

@end

In these two lines of code we use the function alcCreateContext() to initialise the openALContext variable we declared earlier. This function takes two parameters:

  1. A valid ALCdevice, which we created previously.
  2. A list of attributes. Note, we are not providing any attributes so we are passing in NULL.

We are then using the function alcMakeContextCurrent() to make our context the current context.

We are now ready to create a source. A 'source' is something that produces sound, like a speaker. When we play a sound, we need to specify a source to play it through. We are going to concentrate on the playSound method you created earlier.

- (void) playSound
{
    ALuint sourceID;
    alGenSources(1, &sourceID);
}

So what's going on here? First, we declare a sourceID of type ALuint (OpenAL's unsigned integer type). We then generate a source using the function alGenSources(), passing in a pointer to our declared sourceID. The alGenSources() function takes two parameters:

  1. The number of sources you would like to create.
  2. A pointer to where the generated source reference should be placed.

We are generating a single source and placing the generated source reference into our sourceID variable. You can generate up to 32 sources by passing in a pointer to an array, and this function will populate the array with sound sources. In the next tutorial, we will look at a better way to create and store a collection of source IDs.

Now the next part is a little involved. We are going to locate our audio sample, open the file, determine the size of the audio data, and load that data into a buffer, ready to play.

So, let's get a reference to our audio sample.

- (void) playSound
{
    ALuint sourceID;
    alGenSources(1, &sourceID);

    NSString *audioFilePath = [[NSBundle mainBundle] pathForResource:@"ting" ofType:@"caf"];
    NSURL *audioFileURL = [NSURL fileURLWithPath:audioFilePath];
}

If you have dealt with saving and loading files in Objective-C, this will look familiar. In the first line of code we get the path to a file called 'ting.caf' as an NSString object. We then use that string to create an NSURL.

Now that we have a reference to the audio sample, we can open the file. However, we have to do this in an OpenAL audio friendly way.

- (void) playSound
{
    ALuint sourceID;
    alGenSources(1, &sourceID);

    NSString *audioFilePath = [[NSBundle mainBundle] pathForResource:@"ting" ofType:@"caf"];
    NSURL *audioFileURL = [NSURL fileURLWithPath:audioFilePath];

    AudioFileID afid;
    OSStatus openAudioFileResult = AudioFileOpenURL((__bridge CFURLRef)audioFileURL, kAudioFileReadPermission, 0, &afid);
    
    if (0 != openAudioFileResult)
    {
        NSLog(@"An error occurred when attempting to open the audio file %@: %ld", audioFilePath, (long)openAudioFileResult);
        return;
    }
}

Audio File Services uses an AudioFileID to reference an audio file. The function AudioFileOpenURL() will open the audio sample and place the data into our AudioFileID variable, afid. The AudioFileOpenURL() function takes 4 parameters:

  1. A URL path to the file. Note, in the code above we generated an NSURL. We need to cast this to be of type CFURLRef. In this example, we are performing a __bridge cast as we are using Automatic Reference Counting (ARC).
  2. The type of permissions used to open the file.
  3. A hint for the file type. Note, in our case we are passing 0, which indicates that we are not providing a file type hint.
  4. A pointer to an AudioFileID

AudioFileOpenURL() returns an OSStatus. In the if() statement, we check whether an error occurred while opening the file. If one did, we print a log statement and return (the remaining code will not be executed).

With the audio sample open, we need to determine the size of the audio data. Now, an audio file contains all sorts of information, but we are only interested in the audio data.

- (void) playSound
{
    ALuint sourceID;
    alGenSources(1, &sourceID);

    NSString *audioFilePath = [[NSBundle mainBundle] pathForResource:@"ting" ofType:@"caf"];
    NSURL *audioFileURL = [NSURL fileURLWithPath:audioFilePath];

    AudioFileID afid;
    OSStatus openAudioFileResult = AudioFileOpenURL((__bridge CFURLRef)audioFileURL, kAudioFileReadPermission, 0, &afid);
    
    if (0 != openAudioFileResult)
    {
        NSLog(@"An error occurred when attempting to open the audio file %@: %ld", audioFilePath, (long)openAudioFileResult);
        return;
    }

    UInt64 audioDataByteCount = 0;
    UInt32 propertySize = sizeof(audioDataByteCount);
    OSStatus getSizeResult = AudioFileGetProperty(afid, kAudioFilePropertyAudioDataByteCount, &propertySize, &audioDataByteCount);
    
    if (0 != getSizeResult)
    {
        NSLog(@"An error occurred when attempting to determine the size of audio file %@: %ld", audioFilePath, (long)getSizeResult);
    }
}

So what on earth is happening here? As we said above, an audio file consists of many things, but we only want the audio data. The function AudioFileGetProperty() queries the audio file for the property we are after: in this case, the size of the audio data in bytes. Let's look at the parameters this function takes so we can see how it does this. The parameters are:

  1. The audio file we want to query.
  2. The property we want to read (here, the audio data byte count).
  3. A pointer to the size of the variable that will receive the value.
  4. A pointer to the variable the requested value should be written to.

The audio data byte count is reported as a UInt64. We declare a variable of this type called audioDataByteCount and initialise it to zero. Next, we create a variable of type UInt32 and set it to the size of audioDataByteCount. We then pass this information to AudioFileGetProperty(), which returns an OSStatus. Finally, we check getSizeResult for any errors.

Now that we have the audio data property, we need to read that information.

- (void) playSound
{
    ALuint sourceID;
    alGenSources(1, &sourceID);

    NSString *audioFilePath = [[NSBundle mainBundle] pathForResource:@"ting" ofType:@"caf"];
    NSURL *audioFileURL = [NSURL fileURLWithPath:audioFilePath];

    AudioFileID afid;
    OSStatus openAudioFileResult = AudioFileOpenURL((__bridge CFURLRef)audioFileURL, kAudioFileReadPermission, 0, &afid);
    
    if (0 != openAudioFileResult)
    {
        NSLog(@"An error occurred when attempting to open the audio file %@: %ld", audioFilePath, (long)openAudioFileResult);
        return;
    }

    UInt64 audioDataByteCount = 0;
    UInt32 propertySize = sizeof(audioDataByteCount);
    OSStatus getSizeResult = AudioFileGetProperty(afid, kAudioFilePropertyAudioDataByteCount, &propertySize, &audioDataByteCount);
    
    if (0 != getSizeResult)
    {
        NSLog(@"An error occurred when attempting to determine the size of audio file %@: %ld", audioFilePath, (long)getSizeResult);
    }
    
    UInt32 bytesRead = (UInt32)audioDataByteCount;
    
    void *audioData = malloc(bytesRead);
    
    OSStatus readBytesResult = AudioFileReadBytes(afid, false, 0, &bytesRead, audioData);
    
    if (0 != readBytesResult)
    {
        NSLog(@"An error occurred when attempting to read data from audio file %@: %ld", audioFilePath, (long)readBytesResult);
    }
    
    AudioFileClose(afid);
}

We use the function AudioFileReadBytes() to read in our audio data. This function takes the following parameters:

  1. The audio file we want to read the data from.
  2. A Boolean that determines whether the data should be cached.
  3. The position we want to start reading from.
  4. A pointer to the number of bytes we want to read.
  5. A pointer to a block of memory where the data can be stored.

Now, we already have our AudioFileID, afid; we don't want to cache the data; and we want to start from the beginning of the file. That takes care of the first three parameters. We previously calculated the byte size of the audio data; however, that gave us a variable of type UInt64, and AudioFileReadBytes() requires the byte count as a UInt32. So we create a new variable called bytesRead of type UInt32 and initialise it by casting our audioDataByteCount variable to UInt32.

To store the data, we use malloc() to allocate bytesRead bytes of memory and keep a reference to that memory in our audioData variable.

As with previous functions, AudioFileReadBytes() returns an OSStatus and we are checking for any errors.

Now that we have read in the data, we close the file using the AudioFileClose() function, passing in our AudioFileID, afid.

Ok, we are almost there! We have opened our audio sample and read its audio data. Now we can place that data into a buffer.

- (void) playSound
{
    ALuint sourceID;
    alGenSources(1, &sourceID);

    NSString *audioFilePath = [[NSBundle mainBundle] pathForResource:@"ting" ofType:@"caf"];
    NSURL *audioFileURL = [NSURL fileURLWithPath:audioFilePath];

    AudioFileID afid;
    OSStatus openAudioFileResult = AudioFileOpenURL((__bridge CFURLRef)audioFileURL, kAudioFileReadPermission, 0, &afid);
    
    if (0 != openAudioFileResult)
    {
        NSLog(@"An error occurred when attempting to open the audio file %@: %ld", audioFilePath, (long)openAudioFileResult);
        return;
    }

    UInt64 audioDataByteCount = 0;
    UInt32 propertySize = sizeof(audioDataByteCount);
    OSStatus getSizeResult = AudioFileGetProperty(afid, kAudioFilePropertyAudioDataByteCount, &propertySize, &audioDataByteCount);
    
    if (0 != getSizeResult)
    {
        NSLog(@"An error occurred when attempting to determine the size of audio file %@: %ld", audioFilePath, (long)getSizeResult);
    }
    
    UInt32 bytesRead = (UInt32)audioDataByteCount;
    
    void *audioData = malloc(bytesRead);
    
    OSStatus readBytesResult = AudioFileReadBytes(afid, false, 0, &bytesRead, audioData);
    
    if (0 != readBytesResult)
    {
        NSLog(@"An error occurred when attempting to read data from audio file %@: %ld", audioFilePath, (long)readBytesResult);
    }
    
    AudioFileClose(afid);

    ALuint outputBuffer;
    alGenBuffers(1, &outputBuffer);
    
    alBufferData(outputBuffer, AL_FORMAT_STEREO16, audioData, bytesRead, 44100);
    
    if (audioData)
    {
        free(audioData);
        audioData = NULL;
    }
}

The first thing we need to do is create our buffer. We declare a variable of type ALuint called outputBuffer, then use the function alGenBuffers() to generate the buffer. This function takes two parameters:

  1. The number of buffers you would like to create.
  2. A pointer to where the generated buffer reference should be placed.

We are generating a single buffer and placing the generated buffer reference into our outputBuffer variable. You can generate up to 256 buffers by passing in a pointer to an array, and this function will populate the array with buffers. In the next tutorial, we will look at a better way to create and store a collection of buffers.

Now that we have our buffer, we can copy the audio data we extracted into the buffer. We do this using the alBufferData() function, which takes the following parameters:

  1. The buffer we want to fill (our ALuint reference).
  2. The format of the audio data.
  3. A pointer to the audio data we want to copy into the buffer.
  4. The size of the data, in bytes.
  5. The sample rate (frequency) of the audio sample.

In our case, we are setting the frequency to 44100 Hz and choosing the AL_FORMAT_STEREO16 format. I will talk a little about audio formats at the end of this tutorial.

Now that we have copied the data to our buffer, we can release the memory we allocated earlier.

We now have everything we need to play our audio sample, we just need to put it all together.

- (void) playSound
{
    ALuint sourceID;
    alGenSources(1, &sourceID);

    NSString *audioFilePath = [[NSBundle mainBundle] pathForResource:@"ting" ofType:@"caf"];
    NSURL *audioFileURL = [NSURL fileURLWithPath:audioFilePath];

    AudioFileID afid;
    OSStatus openAudioFileResult = AudioFileOpenURL((__bridge CFURLRef)audioFileURL, kAudioFileReadPermission, 0, &afid);
    
    if (0 != openAudioFileResult)
    {
        NSLog(@"An error occurred when attempting to open the audio file %@: %ld", audioFilePath, (long)openAudioFileResult);
        return;
    }

    UInt64 audioDataByteCount = 0;
    UInt32 propertySize = sizeof(audioDataByteCount);
    OSStatus getSizeResult = AudioFileGetProperty(afid, kAudioFilePropertyAudioDataByteCount, &propertySize, &audioDataByteCount);
    
    if (0 != getSizeResult)
    {
        NSLog(@"An error occurred when attempting to determine the size of audio file %@: %ld", audioFilePath, (long)getSizeResult);
    }
    
    UInt32 bytesRead = (UInt32)audioDataByteCount;
    
    void *audioData = malloc(bytesRead);
    
    OSStatus readBytesResult = AudioFileReadBytes(afid, false, 0, &bytesRead, audioData);
    
    if (0 != readBytesResult)
    {
        NSLog(@"An error occurred when attempting to read data from audio file %@: %ld", audioFilePath, (long)readBytesResult);
    }
    
    AudioFileClose(afid);

    ALuint outputBuffer;
    alGenBuffers(1, &outputBuffer);
    
    alBufferData(outputBuffer, AL_FORMAT_STEREO16, audioData, bytesRead, 44100);
    
    if (audioData)
    {
        free(audioData);
        audioData = NULL;
    }

    alSourcef(sourceID, AL_PITCH, 1.0f);
    alSourcef(sourceID, AL_GAIN, 1.0f);
    
    alSourcei(sourceID, AL_BUFFER, outputBuffer);
    
    alSourcePlay(sourceID);
}

First, let's give the source we created earlier some parameters so that it knows how to play the audio sample. We use the function alSourcef() to set both the pitch and the gain to 1.0f. A gain of 1.0 is full volume (any float between 0.0 and 1.0 works), and a pitch of 1.0 plays the sample at its recorded speed.

We then attach the buffer to the source using the function alSourcei() and, finally, call alSourcePlay(), passing in our sourceID, to play our audio sample.

That's it! Everything you 'need' to play an audio sample using OpenAL. But as I said at the beginning of this article, this code is terrible. Yes, it works, but it has some serious problems. We will fix those in the next tutorial. What I wanted to do here is just introduce you to the basic concepts. After you understand this tutorial, you can take the next step and get the most out of OpenAL.

So other than dumping everything into a single method, what else is wrong with this code? Have a look at the first 2 points about OpenAL at the top of this tutorial. We are after low-latency playback and mixing of simultaneous sounds, and this code gives us neither. Each time we play the audio sample, we open the file, copy the data into a buffer and attach it to a single source before playing the sound. What we should be doing is loading the audio sample into a buffer once, then accessing that buffer each time we want to play the sound.

Also, we are only using one source. If the source is already playing a sound when we try to play a second one, the first sound will be cut short; the two will not mix together. What we have done in this tutorial is lay the foundation. In the next tutorial we will solve these problems and give you some code you can actually work with.

Now, I said I would talk about audio format at the end of this tutorial.

You can get OpenAL to play all sorts of audio formats, and there are mechanisms for converting formats on the fly. There are, however, overheads in doing that sort of thing. Your best bet is to use a format that OpenAL likes; this makes things much smoother. The right format to use is little-endian 16-bit linear PCM in a .caf file. If your audio files are in a different format, that's OK. You can use Terminal on OS X to convert them. Open Terminal, navigate to the directory that houses your audio files, and use the command:

afconvert -f caff -d LEI16@44100 audioFile.wav audioFile.caf

In this example, afconvert takes an audio file of type wav and converts it to the desired format. Your original file will remain intact.
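If you have a whole folder of files to convert, a small shell loop saves some typing. The sketch below is a dry run, printing the afconvert command for each .wav file in the current directory; remove the echo to actually perform the conversions:

```shell
# Print (dry run) an afconvert command for every .wav file in the
# current directory. "${f%.wav}.caf" swaps the file extension.
for f in *.wav; do
  [ -e "$f" ] || continue   # no matches: skip the literal "*.wav"
  echo afconvert -f caff -d LEI16@44100 "$f" "${f%.wav}.caf"
done
```

The `[ -e "$f" ]` guard just handles the case where no .wav files exist, in which case the glob would otherwise be passed through unexpanded.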

That concludes this tutorial. You can download a sample app on GitHub, https://github.com/OhNo789/BasicOpenAL, but please please please don't use this for any production products. It's for your own good. Stay tuned for the second instalment, where I will show you how to get a little more out of OpenAL.