In this tutorial, we're going to focus on how to create a multimedia application for Windows Phone by taking advantage of the device's camera, interacting with the media library, and exploring the possibilities of the Photos Hub.

Using the Camera

The camera is one of the most important features in Windows Phone devices, especially thanks to Nokia, which has created some of the best camera phones available on the market.

As developers, we are able to integrate the camera experience into our application so that users can take pictures and edit them directly within the application. In addition, with the Lens App feature we’ll discuss later, it’s even easier to create applications that can replace the native camera experience.

Note: To interact with the camera, you need to enable the ID_CAP_IS_CAMERA capability in the manifest file.

The first step is to create an area on the page where we can display the image recorded by the camera. We’re going to use VideoBrush, which is one of the native XAML brushes that is able to embed a video. We’ll use it as a background of a Canvas control, as shown in the following sample:
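A minimal sketch of such a page element might look like the following (the x:Name values PreviewBrush and PreviewTransform are placeholder names used throughout this article's snippets):

```xml
<Canvas Width="480" Height="800">
    <Canvas.Background>
        <VideoBrush x:Name="PreviewBrush" Stretch="Fill">
            <VideoBrush.RelativeTransform>
                <!-- Keeps the preview upright regardless of the sensor orientation -->
                <CompositeTransform x:Name="PreviewTransform" CenterX="0.5" CenterY="0.5" />
            </VideoBrush.RelativeTransform>
        </VideoBrush>
    </Canvas.Background>
</Canvas>
```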

Notice the CompositeTransform that has been applied; its purpose is to keep the correct orientation of the video, based on the camera orientation.

Taking Pictures

Now that we have a place to display the live camera feed, we can use the APIs that are included in the Windows.Phone.Media.Capture namespace. Specifically, the class available to take pictures is called PhotoCaptureDevice (later we’ll see another class for recording videos).

Before initializing the live feed, we need to make two choices: which camera to use, and which of the available resolutions we want to use.

We achieve this by calling the GetAvailableCaptureResolutions() method on the PhotoCaptureDevice class, passing as parameter a CameraSensorLocation object which represents the camera we’re going to use. The method will return a collection of the supported resolutions, which are identified by the Size class.
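A sketch of this call, simply picking the first resolution in the returned list:

```csharp
using System.Collections.Generic;
using Windows.Foundation;
using Windows.Phone.Media.Capture;

// Query the resolutions supported by the back camera
IReadOnlyList<Size> resolutions =
    PhotoCaptureDevice.GetAvailableCaptureResolutions(CameraSensorLocation.Back);
Size resolution = resolutions[0];
```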

Tip: It’s safe to use the previous code because every Windows Phone device has a back camera. If we want to interact with the front camera instead, it’s better to check whether one is available first since not all the Windows Phone devices have one. To do this, you can use the AvailableSensorLocation property of the PhotoCaptureDevice class, which is a collection of all the supported cameras.

Once we’ve decided which resolution to use, we can pass it as a parameter (together again with the selected camera) to the OpenAsync() method of the PhotoCaptureDevice class. It will return a PhotoCaptureDevice object which contains the live feed; we simply have to pass it to the SetSource() method of the VideoBrush.

As already mentioned, we handle the camera orientation using the transformation we’ve applied to the VideoBrush: we set the Rotation using the SensorRotationInDegrees property that contains the current angle’s rotation.
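Putting the pieces together, the initialization might be sketched as follows (PreviewBrush and PreviewTransform are assumed to be the VideoBrush and CompositeTransform defined in the page's XAML):

```csharp
private PhotoCaptureDevice camera;

private async Task InitializeCameraAsync()
{
    Size resolution =
        PhotoCaptureDevice.GetAvailableCaptureResolutions(CameraSensorLocation.Back)[0];

    camera = await PhotoCaptureDevice.OpenAsync(CameraSensorLocation.Back, resolution);

    // SetSource(PhotoCaptureDevice) is the extension method from Microsoft.Devices
    PreviewBrush.SetSource(camera);

    // Align the preview with the physical orientation of the sensor
    PreviewTransform.Rotation = camera.SensorRotationInDegrees;
}
```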

Note: You may get an error when you try to pass a PhotoCaptureDevice object as a parameter of the SetSource() method of the VideoBrush. If so, you’ll have to add the Microsoft.Devices namespace to your class, since it contains an extension method for the SetSource() method that supports the PhotoCaptureDevice class.

Now the application will simply display the live feed of the camera on the screen. The next step is to take the picture.

The technique used by the API is to create a sequence of frames and save them as a stream. Unfortunately, there's a limitation in the current SDK: you can take only one picture at a time, so sequences are limited to a single frame.

The process starts with a CameraCaptureSequence object, which represents the capture stream. Due to the single-picture limitation previously mentioned, you'll be able to call the CreateCaptureSequence() method of the PhotoCaptureDevice class only by passing 1 as its parameter.
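A sketch of the capture flow, assuming camera is an already opened PhotoCaptureDevice:

```csharp
CameraCaptureSequence sequence = camera.CreateCaptureSequence(1);

// Store the captured frame in memory; AsOutputStream() comes from
// System.IO (WindowsRuntimeStreamExtensions)
MemoryStream stream = new MemoryStream();
sequence.Frames[0].CaptureStream = stream.AsOutputStream();

await camera.PrepareCaptureSequenceAsync(sequence);
await sequence.StartCaptureAsync();

// Rewind the stream and save the photo in the Camera Roll
stream.Seek(0, SeekOrigin.Begin);
MediaLibrary library = new MediaLibrary();
library.SavePictureToCameraRoll("picture.jpg", stream);
```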

For the same reason, we’re just going to work with the first frame of the sequence that is stored inside the Frames collection. The CaptureStream property of the frame needs to be set with the stream that we’re going to use to store the captured image. In the previous sample, we use a MemoryStream to store the photo in memory. This way, we can save it later in the user’s Photos Hub (specifically, in the Camera Roll album).

Note: To interact with the MediaLibrary class you need to enable the ID_CAP_MEDIALIB_PHOTO capability in the manifest file.

You can also customize many settings of the camera by calling the SetProperty() method on the PhotoCaptureDevice object that requires two parameters: the property to set, and the value to assign. The available properties are defined by two enumerators: KnownCameraGeneralProperties, which contains the general camera properties, and KnownCameraPhotoProperties, which contains the photo-specific properties.

Some properties are read-only, so the only operation you can perform is get their values by using the GetProperty() method.

In the following samples, we use the SetProperty() method to set the flash mode, and the GetProperty() method to find out whether the current region requires phones to play a sound when a picture is taken.
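For example (a sketch, again with camera an opened PhotoCaptureDevice):

```csharp
// Writable photo-specific property: the flash mode
camera.SetProperty(KnownCameraPhotoProperties.FlashMode, FlashState.Auto);

// Read-only general property: does the region mandate a shutter sound?
bool soundRequired = (bool)camera.GetProperty(
    KnownCameraGeneralProperties.IsShutterSoundRequiredForRegion);
```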

Using the Hardware Camera Key

Typically, Windows Phone devices have a dedicated button for the camera, which can be used both to set the focus by half-pressing it, and to take the picture by fully pressing it. You are also able to use this button in your applications by subscribing to three events that are exposed by the CameraButtons static class:

ShutterKeyPressed is triggered when the button is pressed.

ShutterKeyReleased is triggered when the button is released.

ShutterKeyHalfPressed is triggered when the button is half-pressed.

In the following sample, we subscribe to the ShutterKeyReleased event to take a picture and the ShutterKeyHalfPressed event to use the auto-focus feature.
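A sketch of the two subscriptions; CameraButtons lives in the Microsoft.Devices namespace, and TakePictureAsync() is a hypothetical helper wrapping the capture sequence shown earlier:

```csharp
CameraButtons.ShutterKeyHalfPressed += async (sender, e) =>
{
    // Half press: trigger the auto-focus
    await camera.FocusAsync();
};

CameraButtons.ShutterKeyReleased += async (sender, e) =>
{
    // Full press and release: take the picture
    await TakePictureAsync(); // hypothetical helper
};
```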

Recording a Video

The process to record a video is similar to the one we used to take a picture. In this case, we’re going to use the AudioVideoCaptureDevice class instead of the PhotoCaptureDevice class. As you can see in the following sample, the initialization procedure is the same: we decide which resolution and camera we want to use, and we display the returned live feed using a VideoBrush.
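The initialization might be sketched like this (PreviewBrush is the same assumed VideoBrush used for the photo preview):

```csharp
private AudioVideoCaptureDevice videoDevice;

private async Task InitializeVideoAsync()
{
    Size resolution =
        AudioVideoCaptureDevice.GetAvailableCaptureResolutions(CameraSensorLocation.Back)[0];

    videoDevice = await AudioVideoCaptureDevice.OpenAsync(CameraSensorLocation.Back, resolution);
    PreviewBrush.SetSource(videoDevice);
}
```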

Note: To record videos, you’ll also need to enable the ID_CAP_MICROPHONE capability in the manifest file.

Recording a video is even simpler since the AudioVideoCaptureDevice class exposes the StartRecordingToStreamAsync() method, which simply requires you to specify where to save the recorded data. Since it’s a video, you’ll also need a way to stop the recording; this is the purpose of the StopRecordingAsync() method.

In the following sample, the recording is stored in a file created in the local storage:
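A sketch of the recording flow; the file name and the five-second duration are arbitrary choices for illustration:

```csharp
StorageFile file = await ApplicationData.Current.LocalFolder
    .CreateFileAsync("video.mp4", CreationCollisionOption.ReplaceExisting);

using (IRandomAccessStream stream = await file.OpenAsync(FileAccessMode.ReadWrite))
{
    await videoDevice.StartRecordingToStreamAsync(stream);

    // Record for five seconds, then stop (in a real app the user stops it)
    await Task.Delay(5000);
    await videoDevice.StopRecordingAsync();
}
```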

The SDK offers a specific list of customizable settings connected to video recording. They are available in the KnownCameraAudioVideoProperties enumerator.

Interacting With the Media Library

The framework offers a class called MediaLibrary, which can be used to interact with the user's media library (photos, music, etc.). Let's see how to use it to manage the most common scenarios.

Note: In the current version, there’s no way to interact with the library to save new videos in the Camera Roll, nor to get access to the stream of existing videos.

Pictures

The MediaLibrary class can be used to get access to the pictures stored in the Photos Hub, thanks to the Pictures collection. It’s a collection of Picture objects, where each one represents a picture stored in the Photos Hub.

Note: You’ll need to enable the ID_CAP_MEDIALIB_PHOTO capability in the manifest file to get access to the pictures stored in the Photos Hub.

The Pictures collection grants access to the following albums:

Camera Roll

Saved Pictures

Screenshots

All other albums displayed in the Photos Hub that come from remote services like SkyDrive or Facebook can't be accessed using the MediaLibrary class.

Tip: The MediaLibrary class exposes a collection called SavedPictures, which contains only the pictures that are stored in the Saved Pictures album.

Every Picture object offers some properties to get access to the basic info, like Name, Width, and Height. A very important property is Album, which contains the reference of the album where the image is stored. In addition, you’ll be able to get access to different streams in case you want to manipulate the image or display it in your application:

The GetImage() method returns the stream of the original image.

The GetThumbnail() method returns the stream of the thumbnail, which is a low-resolution version of the original image.

If you add the PhoneExtensions namespace to your class, you’ll be able to use the GetPreviewImage() method, which returns a preview picture. Its resolution and size are between the original image and the thumbnail.

In the following sample, we generate the thumbnail of the first available picture in the Camera Roll and display it using an Image control:
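A sketch of this scenario; note that looking the album up by the name "Camera Roll" is a simplification, since album names are localized, and ThumbnailImage is an assumed Image control:

```csharp
MediaLibrary library = new MediaLibrary();

PictureAlbum cameraRoll = library.RootPictureAlbum.Albums
    .FirstOrDefault(a => a.Name == "Camera Roll");

if (cameraRoll != null && cameraRoll.Pictures.Count > 0)
{
    Picture picture = cameraRoll.Pictures[0];

    BitmapImage thumbnail = new BitmapImage();
    thumbnail.SetSource(picture.GetThumbnail());
    ThumbnailImage.Source = thumbnail; // an Image control (assumed name)
}
```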

Tip: To interact with the MediaLibrary class using the emulator, you’ll have to open the Photos Hub at least once; otherwise you will get an empty collection of pictures when you query the Pictures property.

With the MediaLibrary class, you'll also be able to do the opposite: take a picture in your application and save it in the Photos Hub. We've already seen a sample when we talked about integrating the camera in our application; we can save the picture in the Camera Roll (using the SavePictureToCameraRoll() method) or in the Saved Pictures album (using the SavePicture() method). In both cases, the required parameters are the name of the image and its stream.

In the following sample, we download an image from the Internet and save it in the Saved Pictures album:
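A sketch of this scenario; the URL is a placeholder, and HttpClient is assumed to be available (on Windows Phone 8 it comes from the Microsoft.Net.Http NuGet package):

```csharp
HttpClient client = new HttpClient();
byte[] data = await client.GetByteArrayAsync("http://www.example.com/image.jpg");

using (MemoryStream stream = new MemoryStream(data))
{
    MediaLibrary library = new MediaLibrary();
    library.SavePicture("image.jpg", stream);
}
```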

Music

The MediaLibrary class offers many options for accessing music, but there are some limitations that aren’t present when working with pictures.

Note: You'll need to enable the ID_CAP_MEDIALIB_AUDIO capability in the manifest file to get access to the music stored in the media library.

The following collections are exposed by the MediaLibrary class for accessing music:

Albums to get access to music albums.

Songs to get access to all the available songs.

Genres to get access to the songs grouped by genre.

Playlists to get access to playlists.

Every song is identified by the Song class, which contains all the common information about a music track taken directly from the ID3 tag: Album, Artist, Title, TrackNumber, and so on.

Unfortunately, there's no access to a song's stream, so the only way to play tracks is by using the MediaPlayer class, which is part of the Microsoft.Xna.Framework.Media namespace. This class exposes many methods to interact with tracks. The Play() method accepts as a parameter a Song object retrieved from the MediaLibrary.

In the following sample, we play the first song available in the library:
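A sketch of the call; note that in a Silverlight application you may need to call FrameworkDispatcher.Update() before using the XNA MediaPlayer class:

```csharp
MediaLibrary library = new MediaLibrary();
if (library.Songs.Count > 0)
{
    Song song = library.Songs[0];
    MediaPlayer.Play(song);
}
```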

One of the new features introduced in Windows Phone 8 allows you to save a song stored in the application’s local storage to the media library so that it can be played by the native Music + Videos Hub. This requires the Microsoft.Xna.Framework.Media.PhoneExtensions namespace to be added to your class.
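A sketch of the call; the file name is a placeholder pointing to a song in the local storage:

```csharp
MediaLibrary library = new MediaLibrary();

// Passing null as metadata makes the OS read the ID3 tags from the file
library.SaveSong(new Uri("song.mp3", UriKind.RelativeOrAbsolute),
                 null,
                 SaveSongOperation.CopyToLibrary);
```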

The SaveSong() method requires three parameters, as shown in the previous sample:

The path of the song to save. It’s a relative path that points to the local storage.

The song metadata, which is identified by the SongMetadata class. It’s an optional parameter; if you pass null, Windows Phone will automatically extract the ID3 information from the file.

A SaveSongOperation object, which tells the media library if the file should be copied (CopyToLibrary) or moved (MoveToLibrary) so that it’s deleted from the storage.

Lens Apps

Windows Phone 8 has introduced new features specific to photographic applications. Some of the most interesting are called lens apps, which apply different filters and effects to pictures. Windows Phone offers a way to easily switch between different camera applications to apply filters on the fly.

Lens apps are regular Windows Phone applications that interact with the Camera APIs we used at the beginning of this article. The difference is that a lens app is displayed in the lenses section of the native Camera app; when users press the camera button, a special view with all the available lens apps is displayed. This way, they can easily switch to another application to take the picture.

Integration with the lenses view starts from the manifest file, which must be manually edited by choosing the View code option in the context menu. The following code has to be added in the Extension section:
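A sketch of the declaration; the Camera_Capture_App extension name and the ConsumerID GUID are the fixed values used by the Photos extensibility points:

```xml
<Extensions>
  <Extension ExtensionName="Camera_Capture_App"
             ConsumerID="{5B04B775-356B-4AA0-AAF8-6491FFEA5632}"
             TaskID="_default" />
</Extensions>
```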

Every lens app needs a specific icon that is displayed in the lenses view. Icons are automatically retrieved from the Assets folder based on a naming convention. An icon must be added for every supported resolution using the conventions in the following table:

Resolution      Icon size    File name
480 × 800       173 × 173    Lens.Screen-WVGA.png
768 × 1280      277 × 277    Lens.Screen-WXGA.png
720 × 1280      259 × 259    Lens.Screen-720p.png

The UriMapper class is required for working with lens apps. In fact, lens apps are opened using a special URI that has to be intercepted and managed. The following code is a sample Uri:

/MainPage.xaml?Action=ViewfinderLaunch

When this Uri is intercepted, users should be redirected to the application page that takes the picture. In the following sample, you can see a UriMapper implementation that redirects users to a page called Camera.xaml when the application is opened from the lens view.
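A sketch of such a mapper; the Camera.xaml page name is, of course, specific to this example:

```csharp
public class LensUriMapper : UriMapperBase
{
    public override Uri MapUri(Uri uri)
    {
        if (uri.ToString().Contains("Action=ViewfinderLaunch"))
        {
            // Launched from the lenses view: go straight to the camera page
            return new Uri("/Camera.xaml", UriKind.Relative);
        }
        return uri;
    }
}
```

The mapper is activated by assigning an instance of it to the RootFrame.UriMapper property in App.xaml.cs.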

Support Sharing

If you’ve developed an application that supports photo sharing such as a social network client, you can integrate it in the Share menu of the Photos Hub. Users can find this option in the Application Bar in the photo details page.

When users choose this option, Windows Phone displays a list of applications that are registered to support sharing. We can add our application to the list simply by adding a new extension in the manifest file, as we did to add lens support.

We have to manually add the following declaration in the Extensions section:
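A sketch of the declaration; as with the lens registration, the Photos_Extra_Share extension name and the ConsumerID are the standard Photos extensibility values:

```xml
<Extensions>
  <Extension ExtensionName="Photos_Extra_Share"
             ConsumerID="{5B04B775-356B-4AA0-AAF8-6491FFEA5632}"
             TaskID="_default" />
</Extensions>
```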

Again, we can use a UriMapper implementation to redirect users to our application's page that offers the sharing feature. It's also important to forward the FileId parameter to this page; we're going to need it to know which photo has been selected by the user.

The following sample shows a UriMapper implementation that simply replaces the name of the original page (MainPage.xaml) with the name of the destination page (SharePage.xaml):
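A sketch of that mapper; the Action=ShareContent check reflects the query string the Photos Hub uses when launching a sharing app:

```csharp
public class ShareUriMapper : UriMapperBase
{
    public override Uri MapUri(Uri uri)
    {
        string url = uri.ToString();
        if (url.Contains("Action=ShareContent"))
        {
            // Keep the query string so the FileId parameter reaches the page
            return new Uri(url.Replace("MainPage.xaml", "SharePage.xaml"),
                           UriKind.Relative);
        }
        return uri;
    }
}
```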

After redirecting the user to the sharing page, we can use a method called GetPictureFromToken() exposed by the MediaLibrary class. It accepts the unique picture ID as a parameter and returns a reference to the Picture object that represents the image selected by the user.

The picture ID is the parameter called FileId that we received in the URI when the application was opened. In the following sample, you can see how we retrieve the parameter by using the OnNavigatedTo event which is triggered when the user is redirected to the sharing page, and use it to display the selected picture with an Image control.
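A sketch of the page code; SharedImage is an assumed Image control defined in the page's XAML:

```csharp
protected override void OnNavigatedTo(NavigationEventArgs e)
{
    base.OnNavigatedTo(e);

    string fileId;
    if (NavigationContext.QueryString.TryGetValue("FileId", out fileId))
    {
        MediaLibrary library = new MediaLibrary();
        Picture picture = library.GetPictureFromToken(fileId);

        BitmapImage image = new BitmapImage();
        image.SetSource(picture.GetImage());
        SharedImage.Source = image; // an Image control (assumed name)
    }
}
```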

Nothing else is required since this kind of integration will simply include a quick link in the Photos Hub. The application will be opened normally, as if it was opened using the main app icon.

Integrating With the Edit Option

Another option available in the Application Bar of the photo details page is called edit. When the user taps it, Windows Phone displays a list of applications that support photo editing. After choosing one, the user expects to be redirected to an application page where the selected picture can be edited.
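The launch Uri follows the same pattern as the lens and sharing Uris; it looks something like the following, where the FileId value is a placeholder for the ID of the selected picture:

```
/MainPage.xaml?Action=EditPhotoContent&FileId={FileId}
```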

This is the Uri to intercept to redirect users to the proper page where you’ll be able to retrieve the selected image by using the FileId parameter, as we did for the photo sharing feature.

Rich Media Apps

Rich media apps are applications that are able to take pictures and save them in the user’s library. When users open one of these photos, they will see:

text under the photo with the message “captured by” followed by the app’s name

a new option in the Application Bar called “open in” followed by the app’s name

This approach is similar to the sharing and editing features. The difference is that the rich media apps integration is available only for pictures taken within the application, while editing and sharing features are available for every photo, regardless of how they were taken.

The following declaration should be added in the manifest to enable rich media app integration:
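A sketch of the declaration; the Photos_Rich_Media_Edit extension name and the ConsumerID are the standard Photos extensibility values:

```xml
<Extensions>
  <Extension ExtensionName="Photos_Rich_Media_Edit"
             ConsumerID="{5B04B775-356B-4AA0-AAF8-6491FFEA5632}"
             TaskID="_default" />
</Extensions>
```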