Tag: Face Detection

In yesterday’s post I wrote about the new Windows 10 APIs for face detection. Today’s sample app (shared on GitHub) shows how a webcam feed is analyzed in real time, displaying the detected faces.

A couple of details about the app:

The AppCommandBar button changes to show whether face detection is on or off. As soon as I figure out how to change an icon from C#, I’ll update this code.

When a face is detected, I paint the frame while synchronizing threads. I’m not entirely clear on why this is needed.
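The likely reason the synchronization is needed: the face-detection event fires on a background thread, while XAML elements can only be touched from the UI thread. A common pattern (my sketch, not the exact code from the sample) is to dispatch the painting back to the UI thread:

```csharp
// Sketch: marshal the drawing work onto the UI thread.
// Assumes this runs inside a Page, so 'Dispatcher' is available.
await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
{
    // Update the Canvas / draw the face boxes here —
    // touching XAML from the detection thread would throw.
});
```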

Among the new APIs that Windows 10 brings for working with images, there is one quite interesting: Windows.Media.FaceAnalysis. The name already tells us what this API does: face detection.

There are several ways to work with this library. For example, the FaceDetector class exposes a DetectFacesAsync() operation that takes a SoftwareBitmap, analyzes the image, and returns the collection of detected faces.
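A minimal sketch of that first approach (the method name and conversion handling are my own; note that FaceDetector only supports a few pixel formats, such as Gray8, so the source bitmap may need converting):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Windows.Graphics.Imaging;
using Windows.Media.FaceAnalysis;

// Sketch: detect faces in a SoftwareBitmap.
async Task<IList<DetectedFace>> DetectAsync(SoftwareBitmap source)
{
    var detector = await FaceDetector.CreateAsync();

    SoftwareBitmap input = source;
    if (!FaceDetector.IsBitmapPixelFormatSupported(source.BitmapPixelFormat))
    {
        // Convert to a format the detector accepts.
        input = SoftwareBitmap.Convert(source, BitmapPixelFormat.Gray8);
    }

    // Each DetectedFace exposes a FaceBox with the face's region.
    return await detector.DetectFacesAsync(input);
}
```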

Another way of using it is to associate it with an instance of MediaCapture, which is the class we usually use to access the camera in Windows 8 / 8.1 / 10. The following example shows how, when initializing the MediaCapture, we can add a new ‘effect’ (line 75) with a FaceDetectionEffectDefinition for face detection.

Then we can set other options, such as how frequently the analysis should run, and subscribe to an event that fires for each detected face (line 80).
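The steps above can be sketched like this (assumes `mediaCapture` is an already-initialized MediaCapture; the 33 ms interval is an illustrative value):

```csharp
using System;
using Windows.Media.Capture;
using Windows.Media.Core;

// Sketch: wire face detection into the camera preview.
var definition = new FaceDetectionEffectDefinition
{
    // Trade some accuracy for speed so the preview stays fluid.
    DetectionMode = FaceDetectionMode.HighPerformance,
    SynchronousDetectionEnabled = false
};

// Add the effect to the preview stream (the "line 75" step).
var faceEffect = (FaceDetectionEffect)await mediaCapture.AddVideoEffectAsync(
    definition, MediaStreamType.VideoPreview);

// How often we want the analysis to run.
faceEffect.DesiredDetectionInterval = TimeSpan.FromMilliseconds(33);

// Fires with the faces found in each analyzed frame (the "line 80" step).
faceEffect.FaceDetected += (sender, args) =>
{
    var faces = args.ResultFrame.DetectedFaces; // each has a FaceBox
};

faceEffect.Enabled = true;
```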

In upcoming posts I will share an example of a Universal Windows App using this feature.

In the previous post I shared the 10 sample lines we can use to exercise the basic functionality of the Face APIs in a console app to:

detect faces

detect the age of each face

detect the gender of each face

Another capability the Face APIs provide is identifying the region of the original image occupied by each detected face. In the next example, I’ve added a WPF project and referenced the ClientLibrary; it is based on one of the examples in the Face APIs SDK.

This project has 2 important files:

– lib\FaceApiHelper.cs. This class handles the image processing using the Face APIs service.

– UserControls\Face.cs. A user control with an image to show the face, plus a series of labels to show the age and gender.

The main window, MainWindow.xaml, features a button to select an image from disk and, below it, 2 sections: the original image with a box drawn on each face found, and a list of the found faces. The button’s click handler is very simple.

Important: the key used for the Face API service is part of the app’s settings.

The StartFaceDetection() function returns 2 collections of faces: one with the age and gender information for each face found, and another with special objects used to “paint” boxes on the original image.

If you’ve managed to look at the code without vomiting at the error handling, you can try the application. An example of the app in operation is as follows:

If you want to see the special transformation used to paint the boxes, take a look at CalculateFaceRectangleForRendering().
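The transformation is essentially just scaling the service’s pixel coordinates to the size at which the image is rendered on screen. A minimal sketch of the idea (my own version with illustrative names, not the SDK’s exact code):

```csharp
using System.Windows;                      // Rect (WPF)
using Microsoft.ProjectOxford.Face.Contract; // FaceRectangle

// Sketch: scale a FaceRectangle (in original-image pixels)
// to the coordinates of the resized, on-screen image.
public static Rect ScaleForRendering(
    FaceRectangle face,
    double originalWidth, double originalHeight,
    double renderedWidth, double renderedHeight)
{
    double scaleX = renderedWidth / originalWidth;
    double scaleY = renderedHeight / originalHeight;

    return new Rect(
        face.Left * scaleX,
        face.Top * scaleY,
        face.Width * scaleX,
        face.Height * scaleY);
}
```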

With our Azure environment set up, we can now use the Face APIs. The next step is to download the SDK and take a look at the examples. At this time the SDK contains examples for both .Net and Android. Within the .Net sample there is a WPF app that consumes a PCL responsible for making the calls to the Machine Learning Face API services.

The good thing about this model is that the PCL is easily portable to other projects. The following steps show how to consume the Face API services in a console app, using the PCL included in the SDK.

1. Add a new console application to the solution

2. Add a reference to ClientLibrary

3. Then, in Main, define a variable with our subscription key; for this example we will open a local image to be processed

4. The following function shows the steps necessary to process an image
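Steps 3 and 4 look roughly like this (the key and image path are placeholders, and the exact DetectAsync signature may vary between SDK versions — this is a sketch, not the sample’s literal code):

```csharp
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.ProjectOxford.Face;

class Program
{
    // Placeholder: use the key generated in the Azure portal.
    private const string SubscriptionKey = "YOUR-FACE-API-KEY";

    static void Main(string[] args)
    {
        ProcessImageAsync(@"C:\images\sample.jpg").Wait();
    }

    static async Task ProcessImageAsync(string imagePath)
    {
        var client = new FaceServiceClient(SubscriptionKey);

        using (Stream stream = File.OpenRead(imagePath))
        {
            // Ask the service to return age and gender with each face.
            var faces = await client.DetectAsync(
                stream,
                returnFaceAttributes: new[]
                {
                    FaceAttributeType.Age,
                    FaceAttributeType.Gender
                });

            foreach (var face in faces)
            {
                Console.WriteLine(
                    $"Face at ({face.FaceRectangle.Left},{face.FaceRectangle.Top}) " +
                    $"- Age: {face.FaceAttributes.Age}, Gender: {face.FaceAttributes.Gender}");
            }
        }
    }
}
```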

Every time I give a Coding4Fun session, I take the opportunity to talk a little about progress in face detection, facial recognition, emotion detection, etc. If you like Azure, now is a great time to start experimenting with this topic, since a number of features for these scenarios are available as Machine Learning experiments.

In this series of posts I’ll show you how to configure Azure to get an instance of the Face APIs, how to activate and publish it as a service, and finally how to consume it from a .Net app.

Start by adding a Face API instance from the Azure Machine Learning Marketplace gallery. Access the Marketplace and search for Face APIs.

The wizard is fairly simple, and it’s free. For now it is only available in the West US region, although that doesn’t affect us much.

Once the instance is created, it will appear in our list of items in the Marketplace section. The next step is very important, since this is where we generate the key that identifies us when using this service from our apps. We must access the Face APIs portal from the “Manage” option.

There you will find your primary and secondary keys, along with the option to regenerate them.

And that’s it! Our Azure environment is ready to use the Face APIs. I’ll write later about the Face APIs’ capabilities; in the meantime, the information and the SDK can be found at the official Project Oxford website, where besides the Face APIs there are APIs for Speech Recognition and Computer Vision. Come on, that is a place to have some serious fun 😉