Archive for the ‘Azure’ Category

Integrating Cognitive Service Speech Recognition in UWP apps

09/08/2018 1 comment

The Speech service, part of Cognitive Services, is powered by the same technologies used in other Microsoft products, including Cortana and Microsoft Office.

We just need to create a Speech resource in the Azure Portal to obtain the keys required to use it in our apps. Note that, at the time of writing, the service is in Preview and is available only in East Asia, North Europe and West US.

The service is available either using the SDK or the REST API. Let’s see how to use the former in a UWP app.

First of all, we have to add the Microsoft.CognitiveServices.Speech NuGet package to the solution:

Microsoft.CognitiveServices.Speech NuGet package

Then, we create a simple UI with a Button to start recognition and a TextBox to show the result:

<Grid Padding="50">
    <Grid.RowDefinitions>
        <RowDefinition Height="Auto" />
        <RowDefinition Height="*" />
    </Grid.RowDefinitions>
    <Button
        x:Name="RecognitionButton"
        Grid.Row="0"
        Margin="0,0,0,20"
        Click="RecognitionButton_Click"
        Content="Start Recognition" />
    <TextBox
        x:Name="RecognitionTextBox"
        Grid.Row="1"
        HorizontalAlignment="Stretch"
        Header="Recognized text"
        IsReadOnly="True" />
</Grid>

As the app needs to use the microphone, it's important to add the corresponding capability by double-clicking the Package.appxmanifest file and going to the Capabilities tab:

Adding Microphone Capability to the app

Then, we can finally write the code to perform recognition:

private async void RecognitionButton_Click(object sender, RoutedEventArgs e)
{
    const string SpeechSubscriptionKey = "";
    const string SpeechRegion = "";
    const string Culture = "it-IT";

    var isMicAvailable = await CheckEnableMicrophoneAsync();
    if (!isMicAvailable)
    {
        return;
    }

    RecognitionButton.Content = "Recognizing...";
    RecognitionButton.IsEnabled = false;
    RecognitionTextBox.Text = string.Empty;

    var cognitiveSpeechFactory = SpeechFactory.FromSubscription
        (SpeechSubscriptionKey, SpeechRegion);

    // Starts recognition. It returns when the first utterance has been 
    // recognized.
    using (var cognitiveRecognizer = cognitiveSpeechFactory.
        CreateSpeechRecognizer(Culture, OutputFormat.Simple))
    {
        var result = await cognitiveRecognizer.RecognizeAsync();

        // Checks result.
        if (result.RecognitionStatus == RecognitionStatus.Recognized)
        {
            RecognitionTextBox.Text = result.Text;
        }
        else
        {
            await new MessageDialog(result.RecognitionFailureReason,
                result.RecognitionStatus.ToString()).ShowAsync();
        }
    }

    RecognitionButton.Content = "Start recognition";
    RecognitionButton.IsEnabled = true;
}

private async Task<bool> CheckEnableMicrophoneAsync()
{
    var isMicAvailable = false;

    try
    {
        var mediaCapture = new MediaCapture();
        var settings = new MediaCaptureInitializationSettings
        {
            StreamingCaptureMode = StreamingCaptureMode.Audio
        };

        await mediaCapture.InitializeAsync(settings);
        isMicAvailable = true;
    }
    catch
    {
        await Windows.System.Launcher.LaunchUriAsync
            (new Uri("ms-settings:privacy-microphone"));
    }

    return isMicAvailable;
}

The SpeechSubscriptionKey, SpeechRegion and Culture constants must be completed with the Speech subscription key that we can find on the Azure portal, the name of the region in which the service has been created (eastasia, northeurope or westus at this time) and the language of the speech. At the moment, the service supports Arabic, Chinese, English, French, German, Italian, Japanese, Portuguese, Russian and Spanish.

Before starting, we check whether the microphone is available. The CheckEnableMicrophoneAsync method tries to initialize a MediaCapture object for audio: in this way, if necessary, the app will prompt the user to consent to microphone usage.

After that, we can finally start the real recognition process. We instantiate a SpeechFactory with our subscription key and region, and use it to create the Cognitive Speech recognizer. The RecognizeAsync method actually starts speech recognition, and it stops after the first utterance is recognized.

If the RecognitionStatus property is equal to Recognized, the recognition succeeded, so we can read the Text property to access the recognized text; otherwise, we show a dialog with the failure reason.

You can download the sample app using the link below:

Integrating Cognitive Service Speech in UWP apps

As said before, RecognizeAsync returns when the first utterance has been recognized, so it is suitable only for single-shot recognition like commands or queries. For long-running recognition, we can use the StartContinuousRecognitionAsync method instead.
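A minimal sketch of continuous recognition could look like the following. Note that the event names (FinalResultReceived, RecognitionErrorRaised) are those of the preview SDK used in this post and may differ in later releases, so treat this as an illustration rather than a definitive implementation:

```csharp
// Sketch: continuous recognition with the preview Speech SDK.
// Assumes the same SpeechFactory, key, region and culture shown above.
private async Task StartContinuousRecognitionAsync(
    string subscriptionKey, string region, string culture)
{
    var factory = SpeechFactory.FromSubscription(subscriptionKey, region);
    var recognizer = factory.CreateSpeechRecognizer(culture, OutputFormat.Simple);

    // Raised every time a complete utterance has been recognized.
    recognizer.FinalResultReceived += (s, e) =>
    {
        if (e.Result.RecognitionStatus == RecognitionStatus.Recognized)
        {
            // Append each utterance to the TextBox on the UI thread.
            _ = Dispatcher.RunAsync(CoreDispatcherPriority.Normal,
                () => RecognitionTextBox.Text += e.Result.Text + Environment.NewLine);
        }
    };

    await recognizer.StartContinuousRecognitionAsync();

    // ...later, when we want to stop listening:
    // await recognizer.StopContinuousRecognitionAsync();
}
```

Unlike RecognizeAsync, the method returns immediately and results are delivered through events until we explicitly stop the recognition.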


Custom Vision and Azure Functions integration with UWP and Xamarin on ioProgrammo

16/01/2018 Comments off

I have written a new article for the n°221 of ioProgrammo (December 2017). This time I show how to use Custom Vision Service with Azure Functions in order to create a service that is able to provide information about the objects that are recognized. Then, I explain how to integrate it with UWP and Xamarin apps.

ioProgrammo n°221 (December 2017)

Categories: .NET, Azure, C#, Xamarin

Custom Vision Companion on ioProgrammo

20/10/2017 Comments off

I have written a new article for the n°219 of ioProgrammo (November 2017). I talk about Custom Vision Service and show how to build a Universal Windows Platform app that uses these APIs to get predictions from images and photos taken in the real world.

ioProgrammo n°219 (November 2017)

The app described in the article, Custom Vision Companion, is open source and is available on GitHub.

Xamarin.Forms and Azure Mobile apps on ioProgrammo

08/09/2016 1 comment

I have written a new article for the n°206 of ioProgrammo (September 2016). This time I talk about Xamarin.Forms and how to use it together with Azure Mobile apps in order to create an app that is strictly integrated with Facebook.

ioProgrammo September 2016

Categories: .NET, Azure, C#, Xamarin

Accessing service information in custom APIs of Mobile Services .NET Backend

07/04/2014 Comments off

When working with the Mobile Services .NET Backend, we may need to access service information, such as settings, the log, the push client, etc.

The TableController class provides a Services property that exposes all the service information, so if we inherit from this object, we can immediately use it. However, if we want to define a custom API, we need to inherit from the standard ApiController class, which doesn't contain any reference to the Mobile Service.

If we look deeper, we’ll see that TableController inherits from ApiController and defines the following property:

public ApiServices Services { get; set; }

It provides an object that allows access to all the service information and is automatically set via Autofac property injection. So, we can add the same property to our custom API classes that directly inherit from ApiController. For example:

public class ImagesController : ApiController
{
    public ApiServices Services { get; set; }

    [HttpGet]
    public string Get()
    {
        // Writes a message in the Service Log.
        Services.Log.Info("Method Images/Get called");

        return "Data";
    }
}

Of course, we can also create a base class with this property and make custom APIs inherit from it, so that they can directly use it.
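Such a base class could be sketched as follows (ServiceApiController is a hypothetical name, not part of the SDK):

```csharp
// A base class exposing the ApiServices property, so that every custom
// API controller inheriting from it gets access to the service
// information without redefining the property.
public abstract class ServiceApiController : ApiController
{
    // Automatically set via Autofac property injection, as for TableController.
    public ApiServices Services { get; set; }
}

// Custom APIs can now simply inherit from the base class.
public class ImagesController : ServiceApiController
{
    [HttpGet]
    public string Get()
    {
        Services.Log.Info("Method Images/Get called");
        return "Data";
    }
}
```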

Categories: Azure, C#, Mobile Services

Using Dependency Injection in Mobile Services .NET Backend

24/03/2014 1 comment

The .NET Backend of Mobile Services makes use of Dependency Injection through the Autofac library. So, we can configure it by adding our dependencies, just like we would do in any other .NET application.

Suppose we have the following interface and one possible implementation:

public interface ILocationService
{
    Task<string> ReverseGeocodeAsync(double latitude, double longitude);
}

public class BingLocationService : ILocationService
{
    public Task<string> ReverseGeocodeAsync(double latitude, double longitude)
    {
        // ...
        return Task.FromResult("<value>");
    }
}

We want to pass it to our controllers using Inversion of Control. So, let’s open the WebApiConfig.cs file in the App_Start folder. First of all, we need to add the using Autofac; statement at the top of the file, because we need some extension methods that are defined within this namespace.

Then, locate the following line in the Register method:

// Use this class to set WebAPI configuration options
HttpConfiguration config = ServiceConfig.Initialize(new ConfigBuilder(options));

The ConfigBuilder constructor has an overload accepting an Action that receives the Inversion of Control container as an argument. So, we can use it to add our dependency:

HttpConfiguration config = 
    ServiceConfig.Initialize(new ConfigBuilder(options, (container) =>
    {
        container.RegisterType<BingLocationService>().As<ILocationService>();
    }));

Now the dependency is registered in the container, so we can use it as usual:

public class LocationController : ApiController
{
    private ILocationService locationService;

    public LocationController(ILocationService locationService)
    {
        this.locationService = locationService;
    }

    public async Task<string> GetAddress(double latitude, double longitude)
    {
        var address = await locationService.ReverseGeocodeAsync(latitude, longitude);
        return address;
    }
}

If we prefer a more structured solution, instead of defining all the configurations within the Register method, we can also create a class that inherits from ConfigBuilder and overrides its ConfigureServiceDependencies method:

public class MyCustomConfigBuilder : ConfigBuilder
{
    public MyCustomConfigBuilder(ConfigOptions options) 
        : base(options)
    { }

    protected override void ConfigureServiceDependencies(HttpConfiguration config, 
        ContainerBuilder containerBuilder)
    {
        containerBuilder.RegisterType<BingLocationService>().As<ILocationService>();

        base.ConfigureServiceDependencies(config, containerBuilder);
    }
}

The last thing to do is to tell the application that we want to use our custom ConfigBuilder:

// Use this class to set WebAPI configuration options
HttpConfiguration config = ServiceConfig.Initialize(new MyCustomConfigBuilder(options));

The Autofac site contains a lot of documentation and examples about how to use it, so we can refer to it for more information.

Categories: Azure, C#, Mobile Services

Authorization and public Help pages for Mobile Services written in .NET

28/02/2014 1 comment

Last week one of the most awaited features for Azure Mobile Services was released: full support for writing backend logic using .NET and the ASP.NET Web API framework. We can now use Web API and Visual Studio to write, test and deploy our services. On Scott Guthrie's blog you can read the official announcement, along with a step-by-step guide about how to use this new feature.

An interesting feature of this new support is that, because we're working with a Web API, we can navigate to the /help page of the service to obtain the documentation of all the available methods (table functions, APIs, jobs). For each resource there are examples of request and response messages, with a button that allows us to directly invoke the function, so that it is easy to test the service.

This tool is very useful if we need to provide the documentation of our API to third parties, because all the pages are automatically generated for us.

However, it seems that the Help system only works when we execute the service within Visual Studio. In fact, after we have deployed the Mobile Service to Azure, if we try to access the /help page, we'll obtain an HTTP 403 Forbidden error. This is because, by default, all remote requests to mobile service resources are restricted to clients that present the application key.

In general, this behavior can be changed with the RequiresAuthorization attribute on controllers (or their methods). For example, we can use this attribute on a TableController like the following:

public class PeopleController : TableController<Person>
{
    [RequiresAuthorization(AuthorizationLevel.Anonymous)]
    public IQueryable<Person> Get()
    {
        ...
    }

    [RequiresAuthorization(AuthorizationLevel.User)]
    public async Task<IHttpActionResult> Post(Person item)
    {
        ...
    }
}

In this case, the Get method, which reads from the table, has anonymous access, while the Post operation, which performs an insert, requires an authenticated user. In other words, this attribute is used to specify the table method permissions, as we do in the portal when we use Node.js for the backend.

The problem with Help is that it is provided by a class called HelpController that is part of the Azure Mobile Services SDK: so, like all the other controllers, once deployed it requires the application key, and this behavior cannot be changed because we don't have direct access to the class (and so we can't add a custom attribute to it). On the other hand, if we want to make the documentation of our service public, it must be accessible anonymously with a normal browser. Note that we have the same problem with ContentController, which is responsible for providing static content like CSS, JavaScript, and so on.

The solution is to create a class that inherits from HelpController and another from ContentController, each of them with the anonymous authorization level:

[RequiresAuthorization(AuthorizationLevel.Anonymous)]
public class PublicContentController : ContentController
{
    [HttpGet]
    [Route("content/{*path}")]
    public new async Task<HttpResponseMessage> Index(string path = null)
    {
        return await base.Index(path);
    }
}

[RequiresAuthorization(AuthorizationLevel.Anonymous)]
public class PublicHelpController : HelpController
{
    [HttpGet]
    [Route("help")]
    public new IHttpActionResult Index()
    {
        return base.Index();
    }

    [HttpGet]
    [Route("help/api/{apiId}")]
    public new IHttpActionResult Api(string apiId)
    {
        return base.Api(apiId);
    }
}

As attribute routing takes precedence, we're overwriting the default routes for /content and /help requests, using controllers that allow anonymous access. We can now publish the Mobile Service to Azure and verify that the Help page can be correctly reached with a normal browser.

Note that the Help page of the deployed service has a different layout than the local one, but the information shown is the same (page layouts are embedded in the library). More importantly, even if the Help page is now public, the other controllers maintain their own access rules; so, for example, if we want to test a method of a TableController that has an Authorization level of Application (the default), in the Test Client dialog we need to specify the Application Key of the Mobile Service using the X-ZUMO-APPLICATION header:

Azure Mobile Service Test Client Dialog

You can refer to the official documentation for more information about this topic.