One of the many features that Silverlight never got from WPF was the ability to tile or repeat the image used by an ImageBrush. Desktop Silverlight at least had workarounds, such as applying a custom pixel shader to get the same result; but since the Silverlight version used on Windows Phone was even more limited, there has never been a proper replacement (other than hacks based on displaying multiple images in a custom Panel control) in Silverlight for Windows Phone, the Windows Runtime or the new UWP platform.

Thankfully, now we have Win2D, which combines the powerful graphical capabilities of Direct2D with the ease of use of the XAML stack – and in this tutorial we are going to learn how to add a tiled background to our UWP app with very little code.

Approach

The premise is very simple – we will put an instance of Win2D’s CanvasControl in the root container of the page so it draws behind everything else, set both of its alignments to Stretch and fill it entirely using a CanvasImageBrush. Also, in the demo project, we will add some UI controls to tweak the scaling and opacity values to better showcase the flexibility of image brushes.

So, let’s get started!

Implementation

Start by adding the Win2D library from NuGet to your project and, as stated before, add a CanvasControl element to the XAML page you want to decorate. Hook up its CreateResources and Draw events and let's jump to the code-behind.

Declare two fields of type CanvasBitmap and CanvasImageBrush; we will need them to hold references to the background image and the brush that will be used for drawing, respectively. Then we can proceed to initialize them in the event handler for CreateResources that we added previously – but wait, there's a catch!

Win2D's CanvasControl is nice enough to start drawing only when all resources have been loaded. But we have a problem: our event handler returns void and we need to make asynchronous calls inside it; adding the async modifier to it won't solve anything, since async void methods are fire-and-forget. The answer is to use CanvasCreateResourcesEventArgs.TrackAsyncAction.

By wrapping all our async calls inside a Task and converting it to an IAsyncAction with AsAsyncAction, we can hand all our resource loading operations to the CanvasControl so it can track when they have completed and start issuing draw events. Sounds a bit confusing? It can be at first, so check the following code snippet:
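A minimal sketch of the CreateResources handler could look like this (the image file name is a placeholder, and backgroundImage is an assumed field name; backgroundBrush, resourcesLoaded and BackgroundCanvas match the names used in the scaling snippet later in the post):

```csharp
private void BackgroundCanvas_CreateResources(CanvasControl sender, CanvasCreateResourcesEventArgs args)
{
    // Wrap the asynchronous loading in a Task and hand it to the control,
    // so it won't raise Draw until all resources are ready
    args.TrackAsyncAction(Task.Run(async () =>
    {
        // "Background.png" is just a placeholder; use your own image path
        this.backgroundImage = await CanvasBitmap.LoadAsync(sender, "Background.png");

        // Wrap in both directions so the image tiles instead of clamping
        this.backgroundBrush = new CanvasImageBrush(sender, this.backgroundImage)
        {
            ExtendX = CanvasEdgeBehavior.Wrap,
            ExtendY = CanvasEdgeBehavior.Wrap
        };

        // Signal that the brush is safe to use from other event handlers
        this.resourcesLoaded = true;
    }).AsAsyncAction());
}
```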

As you can see, we just load our background image from disk and create the CanvasImageBrush with it. To make it tile properly, we change both ExtendX and ExtendY to CanvasEdgeBehavior.Wrap – the default value is CanvasEdgeBehavior.Clamp, which simply extends the last row or column of pixels indefinitely.

That's it for the initialization. Now let's proceed with the drawing function, which is even easier, since we just fill a rectangle the same size as the control's bounds using the brush we have just created:
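The Draw handler boils down to a single call, along these lines:

```csharp
private void BackgroundCanvas_Draw(CanvasControl sender, CanvasDrawEventArgs args)
{
    // Fill the control's entire bounds with the tiling brush
    args.DrawingSession.FillRectangle(
        new Rect(0, 0, sender.ActualWidth, sender.ActualHeight),
        this.backgroundBrush);
}
```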

And that's it! We now have a nicely tiling background. To add a bit more flexibility, we are going to explore changing the size of the tiled image (by default it is drawn with the same physical size as the source image) and its opacity. Add two Slider controls and hook up their ValueChanged events – one for the scale, another for the opacity.

Let's start with the opacity one. We will just divide the slider's value by 100 (since Slider controls only support integer values, and the opacity range goes from 0.0 to 1.0) and assign it to the Opacity property of our CanvasImageBrush. The call to CanvasControl.Invalidate will force a redraw of the background canvas so it updates accordingly:
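A sketch of that handler (OpacitySlider is an assumed control name; the fields match the scaling snippet below):

```csharp
private void OpacitySlider_ValueChanged(object sender, RangeBaseValueChangedEventArgs e)
{
    // Don't touch the brush before CreateResources has finished
    if (!this.resourcesLoaded)
    {
        return;
    }

    // Slider values go from 0 to 100; Opacity expects 0.0–1.0
    this.backgroundBrush.Opacity = (float)(e.NewValue / 100.0);
    this.BackgroundCanvas.Invalidate();
}
```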

Wait a minute – notice the resourcesLoaded flag? We added it so we don't change any brush properties before the brush has been created. Just declare it as a boolean field and set it to true at the end of the Task that creates your graphics resources.

And finally, let's tackle the image scaling. This will be as easy as creating a Matrix3x2 that holds a transform (in our case, one that only scales) and assigning it to the brush's Transform property. We can do lots of things with this – rotate the brush image, skew it… your imagination is the limit!

private void ScaleSlider_ValueChanged(object sender, RangeBaseValueChangedEventArgs e)
{
    // Don't modify the brush properties if it hasn't been initialized yet
    if (!this.resourcesLoaded)
    {
        return;
    }

    // Apply a scale matrix transform to the brush; this way we can control how big the image will be drawn
    this.backgroundBrush.Transform = System.Numerics.Matrix3x2.CreateScale((float)(e.NewValue / 100.0));
    this.BackgroundCanvas.Invalidate();
}


One of the few steps backward I have found when working with UWP projects is the process of deploying application packages to devices running Windows 10 Mobile.

While previously we had access to the handy Windows Phone Application Deployment 8.1 tool, which was very basic but at least had a graphical user interface, now the only way to deploy an app package without using Visual Studio is through the Windows 10 Application Deployment (WinAppDeployCmd) tool.

The new tool packs more features than its 8.1 counterpart, but the main drawback is that it’s entirely command line based – and sometimes, when redistributing application packages to users or clients that have limited IT knowledge, it can be a pain to guide them through the necessary steps for getting the app installed on their phones. And to make matters worse, when building application packages from Visual Studio, the Add-AppDevPackage.ps1 autogenerated script is only useful when deploying to devices using full Windows 10, not Windows 10 Mobile!

So, to make the distribution and deployment/sideloading of Windows 10 Mobile packages easier, I have created a very simple PowerShell script based on Add-AppDevPackage, called Add-W10MAppPackage. You can drop it in the same folder your ARM package is sitting in, run it, and have the app deployed to your device in a matter of seconds.

To use it, download it from the link at the beginning of the post (or check its GitHub repository, in case you want to contribute!) and drop it in a folder that contains an .appx or .appxbundle package compiled for ARM on Windows 10 Mobile. Right-click it and select Run with PowerShell; it will automatically search for the WinAppDeployCmd tool and call it with the default parameters to deploy the application to the phone currently connected to the computer. It will output any error messages if something goes wrong.

It also supports the following configuration parameters:

-Force [true/false]: if set to true, skips any input needed from the user.

-WindowsSdkPath “path/to/tool”: path to the WinAppDeployCmd tool in case it’s not installed in the default location.

-DeviceIp “192.168.0.1”: by default the script looks for devices with IP 127.0.0.1 (those connected by USB to the computer); changing the IP address allows deployment through WiFi/network.
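For example, a silent deployment to a device over the network might be invoked like this (the IP address is just an illustrative value; use your phone's actual address):

```powershell
# Deploy without prompting, to a device reachable over WiFi
.\Add-W10MAppPackage.ps1 -Force true -DeviceIp "192.168.0.100"
```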

If you find any issues while using it, feel free to comment on this post or report a new issue on the GitHub repository!


One of the biggest shortcomings of WinRT for Windows/Windows Phone 8.1 was the state of the media APIs – you could play a video file in a MediaElement, but anything that involved extracting frames or applying effects was very cumbersome, as you had to put your code in a Windows Runtime Component and, worst of all, you had to write it in C++/CX.

Thankfully, UWP has come to save the day. While you still need to use Windows Runtime Components for this functionality (with their associated limitations, like making all your public classes sealed), you can now use C# for things like custom video effects – and, more importantly, you no longer have to tweak the application manifest to declare the Extension/ActivatableClass entries for them to be available! Other improvements include new classes like MediaClip, which lets you load individual videos, and MediaComposition, which lets you create a media timeline to stitch video clips together and render them as a full-length video.

In this tutorial we are going to learn how to create a basic custom video effect that implements the IBasicVideoEffect interface for rendering a video in grayscale, and how to apply it to a MediaClip before playing it through a MediaElement.

Implementation

We are going to create a new UWP application project and add a MediaElement to the main page's XAML. We will come back to this later when we have our video effect ready – but you can try fiddling around and make it play the video on your own; it should look something like this:

Now, add a new Windows Runtime Component project to the solution and add a new class to it. This is where we will implement our video effect based on IBasicVideoEffect – and the project type is very important, as trying to implement the effect in any other kind of project will throw a COMException with the error messages "Failed to activate video effect"/"Class not registered" when you try to use the effect later.

Now that you have the class that will hold the code and it implements the appropriate interface, let’s walk through all the properties and methods that IBasicVideoEffect provides:

public bool IsReadOnly
{
    get { return true; }
}

IsReadOnly should return true if our effect doesn't modify the input frame in any way. Since we are just going to read the source frame and write a colour-transformed copy to the output, we return true.

SupportedEncodingProperties returns a list of the video formats that our effect will support when processing video frames. As per the documentation, returning an empty list will give us frames in plain, uncompressed RGB format, which is the easiest one to work with – so be it.
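In code, that property can be as simple as this:

```csharp
public IReadOnlyList<VideoEncodingProperties> SupportedEncodingProperties
{
    // An empty list tells the pipeline to hand us uncompressed frames
    get { return new List<VideoEncodingProperties>(); }
}
```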

SupportedMemoryTypes is a property of enum type MediaMemoryTypes that lets us specify whether our effect uses the CPU, the GPU or both for processing frames. We want to return Cpu, since this will give us access to the frame data through SoftwareBitmap; otherwise we would get handles to Direct3D surfaces.
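That is:

```csharp
public MediaMemoryTypes SupportedMemoryTypes
{
    // Cpu gives us SoftwareBitmap frames instead of Direct3D surfaces
    get { return MediaMemoryTypes.Cpu; }
}
```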

public bool TimeIndependent
{
    get { return true; }
}

TimeIndependent lets us specify whether our effect can be applied regardless of the playback state. Our processing only depends on the data of the current frame, so we set it to true.

The remaining interface members have pretty self-explanatory names, and we aren't going to go through them since they aren't used in this tutorial – however, you can check the source code of the sample project for a brief explanation of what they are for.
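For reference, these are the remaining members of IBasicVideoEffect (including SetProperties, inherited from IMediaExtension); empty implementations like these should be enough for this tutorial:

```csharp
// Called when the effect is attached; receives the negotiated encoding
public void SetEncodingProperties(VideoEncodingProperties encodingProperties, IDirect3DDevice device) { }

// Called when playback stops or the effect is removed
public void Close(MediaEffectClosedReason reason) { }

// Called when pending frames should be dropped (e.g. after a seek)
public void DiscardQueuedFrames() { }

// Receives the configuration passed in the VideoEffectDefinition, if any
public void SetProperties(IPropertySet configuration) { }
```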

Finally, let's look at the ProcessFrame function – the most important one, and the one we will use for processing the frame data and applying our desired effect to the video clip.

This function receives a ProcessVideoFrameContext as its only parameter, and it holds everything we need for applying our effect. The original video frame is stored in the InputFrame member and, since we specified CPU processing, all of the frame's image data is available in a convenient SoftwareBitmap in plain RGB (though in this case the bytes are ordered as BGRA).

We will start by copying the pixel data into a Buffer and temporarily converting its contents to a byte[]. Then we iterate through it in increments of 4 bytes (since the frame has an alpha channel, too) and obtain the grayscale value via the relative luminance formula. Once this is done, we overwrite the R, G and B bytes with the luminance value, effectively obtaining a gray shade with the same intensity as the source colour. Finally, we copy our data buffer to the SoftwareBitmap of the output frame, and we have a fully working custom video effect. That was easy, wasn't it?
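The steps above could be sketched like this (for clarity, this assumes the bitmap's stride is exactly PixelWidth * 4 bytes):

```csharp
public void ProcessFrame(ProcessVideoFrameContext context)
{
    SoftwareBitmap input = context.InputFrame.SoftwareBitmap;

    // Copy the pixel data out of the frame into a managed array
    var buffer = new Windows.Storage.Streams.Buffer((uint)(input.PixelWidth * input.PixelHeight * 4));
    input.CopyToBuffer(buffer);
    byte[] pixels = buffer.ToArray();

    // Frames arrive as BGRA: 4 bytes per pixel
    for (int i = 0; i < pixels.Length; i += 4)
    {
        // Relative luminance formula (note the BGR byte order)
        byte luminance = (byte)(0.0722 * pixels[i] + 0.7152 * pixels[i + 1] + 0.2126 * pixels[i + 2]);
        pixels[i] = luminance;     // B
        pixels[i + 1] = luminance; // G
        pixels[i + 2] = luminance; // R
        // pixels[i + 3] is the alpha channel; leave it untouched
    }

    // Write the grayscale data into the output frame
    context.OutputFrame.SoftwareBitmap.CopyFromBuffer(pixels.AsBuffer());
}
```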

Back to our XAML page, hook up the Loaded event of the MediaElement (we have named it VideoPlayer since we will have to reference it later). Let's open the code-behind file and write the code for loading and playing the video file:
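Something along these lines should work (GrayscaleVideoEffect and the video path are hypothetical names; substitute your own effect class and asset):

```csharp
private async void VideoPlayer_Loaded(object sender, RoutedEventArgs e)
{
    // "Video.mp4" is a placeholder; point this at a video packaged with your app
    StorageFile file = await StorageFile.GetFileFromApplicationUriAsync(new Uri("ms-appx:///Assets/Video.mp4"));
    MediaClip clip = await MediaClip.CreateFromFileAsync(file);

    // The runtime activates the effect from its fully qualified type name
    clip.VideoEffectDefinitions.Add(new VideoEffectDefinition(typeof(GrayscaleVideoEffect).FullName));

    // Stitch the clip into a composition and stream it to the MediaElement
    var composition = new MediaComposition();
    composition.Clips.Add(clip);

    MediaStreamSource source = composition.GeneratePreviewMediaStreamSource(
        (int)this.VideoPlayer.ActualWidth, (int)this.VideoPlayer.ActualHeight);
    this.VideoPlayer.SetMediaStreamSource(source);
}
```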

Pretty simple, isn't it? We obtain a reference to the video file packaged with the app and create a MediaClip from it. Then it's a matter of adding a new VideoEffectDefinition to it with the full name (including namespace) of our custom video effect – the runtime will take care of fetching the type, activating it and instantiating it. Really easy compared to how convoluted it was in WinRT 8.1!

For displaying the video in the MediaElement, create a MediaComposition object and add the clip to it. Then it's a matter of creating a MediaStreamSource from it with the GeneratePreviewMediaStreamSource function and passing it to our video playing control. The final result should look like this:

While easy to implement and good looking, this approach comes with some caveats. Since we are doing all our processing on the CPU, the computational load is significant (it can quickly eat up an entire CPU core even with 720p videos) and memory usage is high, since we are allocating a new buffer for each frame – that can be up to 60 allocations per second! In a future article we will look into doing this with GPU acceleration through the Win2D library.