
Private Audio Channels with Microsoft Lync Server 2010 and UCMA 3.0

Posted: June 22nd, 2010 | Filed under: Lync Development, UCMA 3.0

Manual audio routes in UCMA 3.0

One of my favorite new features in UCMA 3.0, the new version of the Unified Communications Managed API that goes with Lync Server 2010, is the set of interfaces that give developers control over audio routing within an audio/video MCU session. To say that again in English: in UCMA 3.0, you can tell the component that mixes the audio for a conference that Participants A, B, and C should hear the audio from your application, but Participants X, Y, and Z should not hear it.

Huh?

Okay, let’s take an example scenario. You are a supervisor at Company X, responsible for ten switchboard operators who answer calls to your main business number. Every so often, you like to listen in on calls, to make sure nothing untoward is happening in those switchboard phone conversations. Thanks to the technique for adding invisible conference participants detailed in my last post, you can slip on and off of those calls without anyone hearing creepy breathing sounds.

But what if you are listening in on a call and one of those agents starts doing something odd, like transferring a telemarketer to the CEO’s private extension, endangering your hard-earned reputation? It would be nice if you could “whisper” something to the agent at that moment that the caller could not hear, such as “#&^@$(!!!!!”

Well, as it happens, with UCMA 3.0 you can.

Example two: you are setting up a phone system for one of those busy medical practices where any time you call, they answer the phone with “Dr. Smith’s office, please hold,” and you spend the next five minutes listening to the same light jazz tune and blurbs about allergy medicine. You need to play hold music so the caller doesn’t think they’ve been hung up on, but those poor office assistants shouldn’t have to listen to an endless repetition of Light Jazz Tune in the headsets on their ears while finishing the last lines of Form 41270501251253523152-FTA-4-X.

Using UCMA 3.0, you can turn the call into a conference and pipe the hold music to just one participant.

Let’s take a look at how this works in code.

It’s been a while since I covered the basics of getting a UCMA application up and running, and we’re talking about the new and improved UCMA 3.0, so let’s begin at the beginning.

Open Visual Studio and create a new project. Remember to start Visual Studio with administrator permissions by right-clicking it and choosing Run as administrator. Once the project is created, add references to Microsoft.Rtc.Collaboration (for UCMA) and to System.Configuration (so we can get settings from an App.config file).
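For reference, here are the using directives the finished class will need. (This namespace layout is how I remember the UCMA 3.0 SDK organizing things; double-check against your own install.)

using System;
using System.Collections.Generic;
using System.Configuration;
using System.Linq;
using Microsoft.Rtc.Collaboration;
using Microsoft.Rtc.Collaboration.AudioVideo;
using Microsoft.Rtc.Signaling;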

Next, you can create a new class, AudioRouteTester, to hold the code for our sample application. Go ahead and stick the following instance variables at the beginning of the class.

CollaborationPlatform _platform;
ApplicationEndpoint _endpoint;
Conversation _conferenceConversation;
AudioVideoCall _conferenceAudioCall;
Player _musicPlayer;
WmaFileSource _musicSource;
IAsyncResult _musicSourcePrepareAsyncResult;

string _user1Uri = ConfigurationManager.AppSettings["userWhoHearsMusic"];
string _user2Uri = ConfigurationManager.AppSettings["userWhoDoesNotHearMusic"];

As an entry point for our tester class, we’ll create a Start method that starts up the collaboration platform and establishes an application endpoint.

public void Start()
{
    // Get the application ID from App.config.
    string applicationId = ConfigurationManager.AppSettings["applicationId"];

    // Create the settings object we'll use for the platform.
    ProvisionedApplicationPlatformSettings platformSettings =
        new ProvisionedApplicationPlatformSettings("audioRoute", applicationId);

    // Create the collaboration platform.
    _platform = new CollaborationPlatform(platformSettings);

    // Start it up as an asynchronous operation.
    _platform.BeginStartup(OnPlatformStartupCompleted, null);

    Console.WriteLine("Starting up platform...");
}

What we’re doing here is taking advantage of the fancy new application provisioning in UCMA 3.0. You can provide nothing but the application ID for a trusted application you’ve provisioned, and UCMA 3.0 will figure out the rest.

Let’s take a break here to talk about asynchronous methods, for those who are just joining us. Nearly every method in UCMA is asynchronous, so it’s important to understand how the pattern works.

Operations in UCMA consist of a Begin method, which initiates the asynchronous operation, and an End method, which completes it, returns the return value, if any, and throws any exceptions that occurred during the operation. Every Begin method takes two final parameters, userCallback and state, that tell the UCMA runtime what to do when the asynchronous operation finishes.

The userCallback parameter is a method that the runtime will invoke when the operation finishes; that callback method should take a single IAsyncResult as a parameter and have no return value, and it should call the corresponding End method, passing in the IAsyncResult.

The state parameter is an object that is passed back to the callback method when it is called. It is accessible through the AsyncState property on the IAsyncResult, and you will need to cast it back to its original type. The async state is useful for keeping track of the object you need to call the End method on, or for keeping track of other context on the operation.
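To make that concrete, here is a minimal sketch of the callback-plus-state flavor of the pattern, stashing the platform object in the state parameter and fishing it back out in the callback:

// Kick off the asynchronous operation, passing the platform
// itself as the async state.
_platform.BeginStartup(OnStartupCompleted, _platform);

void OnStartupCompleted(IAsyncResult result)
{
    // Recover the object we stashed in the state parameter.
    CollaborationPlatform platform = (CollaborationPlatform)result.AsyncState;

    // Calling the End method completes the operation and throws
    // any exceptions that occurred along the way.
    platform.EndStartup(result);
}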

Now, there is another clever way to provide the callback methods that largely removes the need for the async state. If you pass an anonymous delegate or lambda expression to the Begin method as the userCallback parameter, you can refer to local variables from the context in which Begin is called from within the delegate. This concept may be familiar to you as a “closure”; if not, you’ll see an example later.

And now, back to our regularly scheduled programming.

We need to define the callback method for the platform startup. Here it is:

void OnPlatformStartupCompleted(IAsyncResult result)
{
    // This is where we pass the IAsyncResult into
    // the End method for our asynchronous operation.
    _platform.EndStartup(result);

    Console.WriteLine("Platform started.");

    // Prepare the WMA file source for the music.
    _musicSource = new WmaFileSource(ConfigurationManager.AppSettings["wmaFilePath"]);
    _musicSourcePrepareAsyncResult =
        _musicSource.BeginPrepareSource(MediaSourceOpenMode.Buffered, OnPrepareSourceCompleted, null);

    Console.WriteLine("Preparing music...");

    // Get the application endpoint details from App.config.
    string contactUri = ConfigurationManager.AppSettings["endpointUri"];
    string csFqdn = ConfigurationManager.AppSettings["proxyServerFqdn"];

    // Create the application endpoint settings object.
    ApplicationEndpointSettings endpointSettings = new ApplicationEndpointSettings(
        contactUri, csFqdn, 5061);

    // Create the endpoint.
    _endpoint = new ApplicationEndpoint(_platform, endpointSettings);

    // Establish it asynchronously.
    _endpoint.BeginEstablish(OnEndpointEstablishCompleted, null);

    Console.WriteLine("Establishing endpoint...");
}

Nothing particularly extraordinary here. Creating and establishing the application endpoint works just as it did in UCMA 2.0. There is a way to “auto-discover” endpoints that belong to your trusted application, but that is a topic for another article. This code also begins preparing a WMA file source that we will use later to pipe music into the call. We’re storing the IAsyncResult in an instance variable so we can check later to ensure that the file source has finished preparing itself.

Here are a couple more callbacks we need. First, a very simple callback method for the preparation of the WMA file source; then a callback for the establishing of the endpoint.

void OnPrepareSourceCompleted(IAsyncResult result)
{
    try
    {
        _musicSource.EndPrepareSource(result);
        Console.WriteLine("Music prepared.");
    }
    catch (RealTimeException ex)
    {
        // Catch and handle exceptions.
        Console.WriteLine(ex);
    }
}

void OnEndpointEstablishCompleted(IAsyncResult result)
{
    try
    {
        _endpoint.EndEstablish(result);

        Console.WriteLine("Endpoint established.");

        JoinConference();
    }
    catch (RealTimeException ex)
    {
        // Catch and handle exceptions.
        Console.WriteLine(ex);
    }
}

Once we’ve finished establishing the endpoint, we have a basic UCMA application running, and we’re ready to do something a bit more exciting. The excitement begins in a method called JoinConference. We’ll take a look at that method in just a second, but first, let’s go over what exactly we’re planning.

This manual audio route handling only works for conferences, because what we are doing is telling the audio/video MCU, or multipoint control unit (the Lync Server component that mixes audio and video for conference participants), to create a separate, special mix for certain participants. So we’re going to need to create a conference and invite some participants so we can test out our audio routing on them. We’ll stick to two participants for now.

By default, the MCU sends audio from all conference participants to all other participants, as shown below.

[Diagram: the default routing, in which the MCU sends each participant's audio to every other participant]

We’ll send the MCU a command telling it to remove our application from that default routing, so the MCU will stop sending audio from our application to other conference participants (and from other conference participants to our application).

[Diagram: the application removed from the default routing; no audio flows between the application and the other participants]

We’ll then add a manual audio route that sends audio from our application to just one of the conference participants, and play some music to that conference participant that the other participant cannot hear.

[Diagram: a manual audio route sending the application's audio to just one of the participants]

Here’s the JoinConference method:

private void JoinConference()
{
    // Create a new conversation which we will use to join an ad hoc conference.
    _conferenceConversation = new Conversation(_endpoint);

    // Join a new ad hoc conference. Note that we're joining as a 
    // trusted participant by specifying the join mode.
    _conferenceConversation.ConferenceSession.BeginJoin(
        new ConferenceJoinOptions() { JoinMode = JoinMode.TrustedParticipant },
        // Here's one of those lambda expression callbacks I mentioned!
        // It's a bit more concise this way.
        ar =>
        {
            try
            {
                _conferenceConversation.ConferenceSession.EndJoin(ar);

                Console.WriteLine("Conference joined.");

                // Now that we've joined the conference, add the audio modality
                // by establishing a new call.
                _conferenceAudioCall = new AudioVideoCall(_conferenceConversation);
                _conferenceAudioCall.BeginEstablish(OnCallEstablished, null);
            }
            catch (RealTimeException ex)
            {
                // Catch and handle exceptions.
                Console.WriteLine(ex);
            }
        },
        null);
}

Rather than scheduling a conference and then joining it using the resulting conference URI, we simply join an ad hoc conference by calling BeginJoin on the conversation object’s conference session. We use one of the properties on ConferenceJoinOptions to specify that we are joining as a trusted participant. There are two reasons for doing this:

  1. The application won’t show up as a participant in the conference roster.
  2. The application will have permission to give commands to the MCU about audio routes.

Once we’ve joined the conference, we also need to add the audio modality to the conference by establishing an audio call for the conference conversation.

Now we’re ready to start the music and begin playing around with audio routes.

void OnCallEstablished(IAsyncResult result)
{
    try
    {
        _conferenceAudioCall.EndEstablish(result);
    }
    catch (RealTimeException ex)
    {
        // Catch and handle exceptions.
        Console.WriteLine(ex);
        return;
    }

    Console.WriteLine("Audio call established.");

    // Subscribe to be notified when participants join or leave the A/V MCU session.
    _conferenceConversation.ConferenceSession.AudioVideoMcuSession.ParticipantEndpointAttendanceChanged +=
        new EventHandler<ParticipantEndpointAttendanceChangedEventArgs<AudioVideoMcuParticipantEndpointProperties>>(
            AudioVideoMcuSession_ParticipantEndpointAttendanceChanged);

    // Make sure the WMA file source is ready by
    // blocking on the async wait handle.
    _musicSourcePrepareAsyncResult.AsyncWaitHandle.WaitOne();

    // Create a new Player object to pipe the music
    // into the audio call. Set its source to the one
    // we've prepared and attach it to the call.
    _musicPlayer = new Player();
    _musicPlayer.SetSource(_musicSource);
    _musicPlayer.SetMode(PlayerMode.Automatic);
    _musicPlayer.AttachFlow(_conferenceAudioCall.Flow);
    _musicPlayer.Start();

    AudioVideoMcuSession avMcu = _conferenceConversation.ConferenceSession.AudioVideoMcuSession;

    // Remove the application from the default audio routing for the MCU.
    avMcu.BeginRemoveFromDefaultRouting(
        avMcu.GetLocalParticipantEndpoints().First(),
        new RemoveFromDefaultRoutingOptions() { Duration = 3600000 },
        OnRemoveFromDefaultRoutingCompleted, null);
}

We’re subscribing to the ParticipantEndpointAttendanceChanged event on the audio/video MCU session so that we’ll know when the user who is supposed to hear the music joins the conference. Once the music player is started, we remove the application from the default audio routing. We do this by calling the BeginRemoveFromDefaultRouting method on the audio/video MCU session, and passing in the ParticipantEndpoint object that represents our application’s endpoint.

Notice that we’ve specified a duration in RemoveFromDefaultRoutingOptions. Once the duration expires, the endpoint in question will be returned to the default routing automatically, so it’s wise to set this high if you don’t want participants getting dumped back into default routing.
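Incidentally, if you want to put an endpoint back into the default routing yourself, rather than waiting for the duration to expire, the MCU session has a matching BeginAddToDefaultRouting method, if memory serves. A quick sketch, using our application's own endpoint:

AudioVideoMcuSession avMcu = _conferenceConversation.ConferenceSession.AudioVideoMcuSession;

// Return our application's endpoint to the default audio routing.
avMcu.BeginAddToDefaultRouting(
    avMcu.GetLocalParticipantEndpoints().First(),
    ar => avMcu.EndAddToDefaultRouting(ar),
    null);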

Now we’re all set to invite some participants. The ConferenceInvitation class is new to UCMA 3.0 and makes sending conference invitations through code somewhat more intuitive.

void OnRemoveFromDefaultRoutingCompleted(IAsyncResult result)
{
    AudioVideoMcuSession avMcu = _conferenceConversation.ConferenceSession.AudioVideoMcuSession;

    try
    {
        avMcu.EndRemoveFromDefaultRouting(result);
    }
    catch (RealTimeException ex)
    {
        // Catch and handle exceptions.
        Console.WriteLine(ex);
        return;
    }

    // Create conference invitation objects for the two users.
    ConferenceInvitation user1Invitation = new ConferenceInvitation(_conferenceConversation);
    ConferenceInvitation user2Invitation = new ConferenceInvitation(_conferenceConversation);

    // Deliver the invitations. More of those concise lambda expression callbacks!
    user1Invitation.BeginDeliver(_user1Uri, ar =>
        {
            user1Invitation.EndDeliver(ar);
            Console.WriteLine("Conference invitation delivered to user 1.");
        }, null);

    user2Invitation.BeginDeliver(_user2Uri, ar =>
        {
            user2Invitation.EndDeliver(ar);
            Console.WriteLine("Conference invitation delivered to user 2.");
        }, null);
}

If we left things at this point, we would play the music from our application but neither person in the conference would be able to hear it, because we’ve removed the application completely from audio routing:

[Diagram: the application removed from the default routing; no audio flows between the application and the other participants]

We need to add a new audio route for one of the participants.

We’ll wait for that user to join and add the route at that point.

void AudioVideoMcuSession_ParticipantEndpointAttendanceChanged(object sender,
    ParticipantEndpointAttendanceChangedEventArgs<AudioVideoMcuParticipantEndpointProperties> e)
{
    // We'll only check the users that join; we're not as interested in when they leave.
    foreach (KeyValuePair<ParticipantEndpoint, AudioVideoMcuParticipantEndpointProperties> pair
        in e.Joined)
    {
        Console.WriteLine("{0} joined conference", pair.Key.Participant.Uri);

        // The user we want to watch for is User 1, the one that should hear the music.
        if (pair.Key.Participant.Uri == _user1Uri)
        {
            // We create a new outgoing audio route for this user.
            // We're adding the route, not deleting it, so the RouteUpdateOperation is Add.
            OutgoingAudioRoute musicRoute = new OutgoingAudioRoute(pair.Key);
            musicRoute.Operation = RouteUpdateOperation.Add;

            // The method takes a list of audio routes.
            List<OutgoingAudioRoute> outgoingRoutes = new List<OutgoingAudioRoute>() { musicRoute };

            Console.WriteLine("Updating audio routes for user 1...");

            // This is where we actually update the audio routes. If there were
            // incoming routes to set, those would be the second parameter.
            _conferenceAudioCall.AudioVideoMcuRouting.BeginUpdateAudioRoutes(outgoingRoutes, null,
                ar =>
                {
                    try
                    {
                        _conferenceAudioCall.AudioVideoMcuRouting.EndUpdateAudioRoutes(ar);
                        Console.WriteLine("Updated audio routes for user 1.");
                    }
                    catch (RealTimeException ex)
                    {
                        // Catch and handle exceptions.
                        Console.WriteLine(ex);
                    }
                }, null);
        }
    }
}

To set up the manual audio route, we create an OutgoingAudioRoute object, passing the participant who should receive the audio to the constructor. There is also an IncomingAudioRoute class we could use if we wanted to set up a route in the other direction; I’ll show a sketch of that after the diagram below. By default, the OutgoingAudioRoute object represents adding an audio route to the specified participant, but you can also remove audio routes you’ve previously defined by changing the value of the Operation property to RouteUpdateOperation.Remove.

The AudioVideoCall class in UCMA 3.0 has a new property called AudioVideoMcuRouting. This property holds an object that manages manual audio routes for that call. By calling BeginUpdateAudioRoutes and passing in a collection of OutgoingAudioRoute objects and a collection of IncomingAudioRoute objects (or just one of the two), you can tell the MCU what to do with the audio for this call.

Here’s what the MCU is doing now:

[Diagram: a manual audio route sending the application's audio to just one of the participants]
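As promised, here is what the incoming direction might look like: a sketch that routes a participant's audio to our application, assuming participantEndpoint holds the ParticipantEndpoint whose audio we want to receive:

// Route audio from this participant to our application.
IncomingAudioRoute listenRoute = new IncomingAudioRoute(participantEndpoint);
listenRoute.Operation = RouteUpdateOperation.Add;

// Outgoing routes are the first parameter, incoming routes the second.
_conferenceAudioCall.AudioVideoMcuRouting.BeginUpdateAudioRoutes(
    null,
    new List<IncomingAudioRoute>() { listenRoute },
    ar => _conferenceAudioCall.AudioVideoMcuRouting.EndUpdateAudioRoutes(ar),
    null);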

We’re almost ready to go. Let’s add a few more things to finish up the application, such as a Stop method to terminate the endpoint and shut down the platform.

public void Stop()
{
    _musicSource.Close();
    Console.WriteLine("Closed music source.");

    _endpoint.BeginTerminate(OnEndpointTerminateCompleted, null);
    Console.WriteLine("Terminating endpoint...");
}

void OnEndpointTerminateCompleted(IAsyncResult result)
{
    try
    {
        _endpoint.EndTerminate(result);
    }
    catch (RealTimeException ex)
    {
        // Catch and handle exceptions.
        Console.WriteLine(ex);
    }

    Console.WriteLine("Terminated endpoint.");

    _platform.BeginShutdown(OnPlatformShutdownCompleted, null);

    Console.WriteLine("Shutting down platform.");
}

void OnPlatformShutdownCompleted(IAsyncResult result)
{
    try
    {
        _platform.EndShutdown(result);
    }
    catch (RealTimeException ex)
    {
        // Catch and handle exceptions.
        Console.WriteLine(ex);
    }

    Console.WriteLine("Shut down platform.");
}

You’ll need to add something like this to Program.cs to get your application to run:

AudioRouteTester sample = new AudioRouteTester();
sample.Start();

Console.WriteLine("Press any key");
Console.ReadLine();

sample.Stop();

And, finally, you’ll need an App.config file. Change the settings to match your own environment and trusted application. I’ve borrowed the WMA file used by the AutomaticCallDistributor sample in the UCMA SDK, but you can use any WMA file.

<?xml version="1.0"?>

    <add key="applicationId" value="urn:application:audioroutetest"/>
    <add key="proxyServerFqdn" value="cs-se.fabrikam.com"/>
    <add key="endpointUri" value="sip:audioroutes@fabrikam.com"/>
    <add key="userWhoHearsMusic" value="sip:michaelg@fabrikam.com"/>
    <add key="userWhoDoesNotHearMusic" value="sip:pa@fabrikam.com"/>
    <add key="wmaFilePath" value="I_Ka_Barra.wma"/>

  <supportedRuntime version="v2.0.50727"/>

At this point, you can run your application and see manual audio routes in action. The application will create a conference and invite two users; one user will hear music, the other won’t. Both users will be able to hear each other.

This is perhaps not the most practical application of audio routes. Only two uses of our sample application come to mind:

  1. Listening to calming music while taking calls from challenging customers.
  2. Confusing people by playing music to them that others on the call cannot hear. (“Guys, what is that music in the background? Does someone have a radio on?” “What music? I don’t hear any music, Bob.” “Yeah, Bob, I don’t hear it either…”)

You may be wondering, given what I’ve shown you, how it would work if you wanted to set manual audio routes for the audio coming from other participants on your call, not the audio coming from your application. For example, how would you do what’s diagrammed below?

[Diagram: manual audio routes controlling which participants hear audio from other conference participants, not just from the application]

The answer is that you can do this with a BackToBackCall; I’ll cover this in detail in a future post.

One other note: it is a good idea to “clean up” your manual audio routes if you decide to return a participant to the default audio routing. You can do this using that same BeginUpdateAudioRoutes method, but passing in OutgoingAudioRoute and IncomingAudioRoute objects with an Operation of RouteUpdateOperation.Remove.
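In code, that cleanup might look something like this, assuming user1Endpoint is the ParticipantEndpoint we added the route for earlier:

// Tear down the manual route we added for user 1.
OutgoingAudioRoute removeRoute = new OutgoingAudioRoute(user1Endpoint);
removeRoute.Operation = RouteUpdateOperation.Remove;

_conferenceAudioCall.AudioVideoMcuRouting.BeginUpdateAudioRoutes(
    new List<OutgoingAudioRoute>() { removeRoute }, null,
    ar => _conferenceAudioCall.AudioVideoMcuRouting.EndUpdateAudioRoutes(ar),
    null);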

As usual, I expect you to use this knowledge I have imparted to you for the greater good of humanity, and not for spooking people with weird noises on their important phone calls.

Feel free to write if you have any questions!