Today at PDC, we made a few announcements that would be of interest to you! First, Scott Guthrie announced the availability of the Silverlight 4 Beta. This version of Silverlight contains some cool new features that many of you have asked for, so read the What’s New document to get an overview of some of the new features.

To coincide with the release of Silverlight 4 Beta today and the release of Visual Studio 2010 Beta 2 a short while ago, we are making a version of Expression Blend available that allows you to work with .NET 4 and Silverlight 4 based projects.

Today at Internet Week in NYC, we announced the availability of Expression Studio 4. You can download the trial of Expression Studio 4 Ultimate that includes Expression Blend 4 and SketchFlow by clicking below:

Note that if you are currently doing Windows Phone development, please do not upgrade to the final version of Expression Blend 4 yet. We will release an updated version of all of our phone components in the future, so please continue using Expression Blend 4 RC.

Of course, no major release would be possible without the feedback all of you have provided, so thank you!

As always, please uninstall all existing versions of Deep Zoom Composer before installing the new version.

Quick Intro to DeepZoomPix

This release coincides with the recent launch of the DeepZoomPix technology preview: http://deepzoompix.com/default.aspx

If you haven’t seen DeepZoomPix, it is a technology demo where you can upload and view your images using the Deep Zoom technology found in Silverlight:

A key part of DeepZoomPix is the viewer through which you can browse and interact with your images. Within the viewer, you have the ability to dynamically change how your images are arranged, play a slideshow, view a thumbnail of all images, as well as tag/filter:

There are a lot of cool things you can do in DeepZoomPix, and you can learn more about DeepZoomPix by reading their FAQ as well watching a video interview with Rado and Matt on how the site was actually created. Janete will have a blog post with more information on DeepZoomPix in the near future.

Deep Zoom Composer and DeepZoomPix

As many of you know, not too long ago, we added the ability for you to publish your photos online to PhotoZoom. Based on the many requests and feedback you have provided about PhotoZoom and its integration with Deep Zoom Composer, we closely followed the evolution of PhotoZoom into DeepZoomPix and were excited to help make Deep Zoom Composer work better with the new service.

The biggest change is that Deep Zoom Composer will only allow you to upload to DeepZoomPix:

The workflow and the UI are largely the same, so if you have used the PhotoZoom functionality before, everything should look familiar to you with the exception of a few name changes.

Our primary motivation for moving to DeepZoomPix was to address the large amount of feedback many of you provided us about PhotoZoom. We hope you all find DeepZoomPix to be more reliable and more fun to use.

Do note that DeepZoomPix is just a technology demo. It is something that we are only planning on keeping until the end of the year (Dec 31st), and after that, we are not sure what will happen to the service. Please do not use this location as a permanent storage location for your photos.

Before I wrap up this post, a huge thanks to Matt (who you can see in the video interview above) for helping unwind some gnarly technical issues associated with changing Deep Zoom Composer to use DeepZoomPix.

As always, please send us your feedback by posting on the forums or commenting below.

Exactly a year ago (plus one day…but who’s counting?!), I posted a sample WPF application that simulates falling snow. Since then, Silverlight 2 has been released, so below you will find a Silverlight version of a similar falling snow effect:

Just click on the Let It Snow banner to cause 200 snowflakes to start falling. Feel free to use this for your own projects, and the source files have been provided below:

For your own projects, the only thing you may want to tweak is the width and height of your application. Currently, everything is hard-coded to a width and height of 500px and 300px, respectively, but if you decide to change the size, be sure to open Page.xaml.cs and change the dimensions provided:
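For reference, the hard-coded dimensions might look something like the following in Page.xaml.cs. The field names here are illustrative only, not necessarily the sample's actual identifiers:

```csharp
// Illustrative sketch only -- the names in the actual sample may differ.
// The stage size is a pair of hard-coded values that you adjust to match
// the Width/Height of your own application.
public partial class Page : UserControl
{
    private const double StageWidth = 500;   // match your application's width
    private const double StageHeight = 300;  // match your application's height
}
```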

If you are working with a CTP or Beta of Visual Studio Code Name “Orcas”, you will notice that projects and solutions that are created or edited in Visual Studio Orcas cannot be successfully reloaded inside Expression Blend.

To make it easier to work around this issue, we are making available a utility tool that you can use to configure Expression Blend for Visual Studio Code Name “Orcas”. This tool can also be used with the Expression Blend 2 May Preview.

d) Choose the installed location of the version of “Blend.exe” that you want to configure. For example, if you are trying to configure the Blend 2 May Preview, you will find it in the (ProgramFiles)\Microsoft Expression\Blend 1.1 folder.

Currently, Expression Blend does not have a scissor tool that allows you to cut shapes the way you can in Expression Design. Instead, what you do have are geometric operations that let you not only achieve a result similar to “cutting a shape” but also do much more.

Beyond giving you a working example, this implementation also includes a C# approach to getting mousewheel support in IE, Firefox, and Safari (Mac). You can read more about my mouse wheel support on my blog here.

As many of you know, today was the first day of MIX - Microsoft’s annual conference for designers and developers. Just like previous years, there has been a lot of great news coming out of the conference.

The two big things we announced are Expression Blend 4 Beta and an add-in to Expression Blend that gives you the ability to build applications for the Windows Phone.

Due to popular demand we have added the clip path editing/animating feature to Blend 2. Clip path editing works for both WPF and Silverlight 1.0 projects. You can download the December Preview to test out the new features! In this post, I will go through some of the interesting things you can do with this feature!

What is a Clipping Path?

A clipping path is a path or shape that is applied to another object, hiding the portions of the masked object that fall outside of the clipping path. For example, the following image shows you a separate individual image and text object on the left, but thanks to clipping paths, on the right you have just the image object with everything outside of the Seattle text hidden:

Using a Path Object to Apply a Clipping Path

Let's take a quick look at how to use clipping paths in Blend:

With the Selection tool, select the path or shape that you want to turn into a clipping path, hold CTRL, and then select the object that you want to clip. (Make sure that the clipping path object is in front of the object that you intend to clip in Z order.)

On the Object menu, point to Path, and click Make Clipping Path. You can also right click on the two selected objects and select Make Clipping Path under the Path options.

The left image below shows the path overlaid on our image, and on the right, you see the image visible through the region created by our path:
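For context, Make Clipping Path works by moving the selected path's geometry into the clipped element's Clip property. The resulting XAML looks roughly like the following sketch; the path data shown is placeholder geometry, not the Seattle example's actual data:

```xml
<!-- Before: a separate Path sits in front of the Image -->
<Image x:Name="SeattlePhoto" Source="seattle.jpg"/>
<Path Data="M 20,20 L 180,40 L 120,160 Z" Fill="White"/>

<!-- After Make Clipping Path: the geometry becomes the Image's Clip -->
<Image x:Name="SeattlePhoto" Source="seattle.jpg"
       Clip="M 20,20 L 180,40 L 120,160 Z"/>
```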

Editing a Clipping Path

You just saw how to apply a clipping path, so let's look next at how you would edit one. In Blend 2, all of the path editing capabilities that you use on regular paths now apply to clipping paths as well.

Select the object with the clipping path applied. In this case you select the Image object.

Select the direct selection tool. Note that the clipping path is displayed as the purple outline. The clipping path consists of points and segments just like a regular path.

Click and drag a segment on the clipping path to edit as you would a path object. You will be able to edit the clipping path by using the same artboard gestures and keyboard shortcuts associated with path editing.

The following image shows you how manipulating the path preserves the overall masking effect that we expect:

Releasing a Clipping Path

You saw how to create a clipping path, and you also saw how to edit a clipping path. Let's look at how to actually release a clipping path. It's pretty straightforward:

Select the object that is being clipped.

On the Object menu, point to Path, and then click Release Clipping Path.

Note: In Blend 1 you only had the option of removing the clipping path, and that would remove the original clipping object from the artboard. This behavior has been improved in Blend 2 by allowing you to release the clipping path without removing the clipping object!

Animating a Clipping Path

In Blend 2, along with editing clipping paths, you can also use the full animation capabilities available for regular paths to animate clipping paths. You can also take advantage of the structure changes supported by vertex animation.

Let's look at that in greater detail:

Select the object with the clipping path applied. In this case you select the Image object.

Add a storyboard in the Objects and Timeline panel.

Select the direct selection tool. Note that you can single click any point or segment to select or you can double click the clipping path to select all of the points and segments to move them all at once.

You can then use the vertex animation features to create interactive animations with the clipping path.

As seen in the following image, you can apply vertex animations to clipping paths. This can be used to easily create interactivity such as the “spotlight effect”:

Interop with Design

You currently have the ability to import files created in Expression Design that contain clipping paths into Blend, or to copy and paste objects with clipping paths applied from Design to Blend.

In Blend 2 we also support editing/animating of these clipping paths. Below is an example of me creating a clipping path in Expression Design:

To import an object with a clipping path from Expression Design to Blend 2 you can simply copy the element from Design and paste it into a Blend project. Below is an example of an image object from Design pasted into a Blend 2 Silverlight 1.0 project.

Conclusion

As you can see, you can create a variety of visual and interactive effects by creating, editing, and animating clipping paths in Blend 2. Give the features a try and let us know if you have any feedback!

We are pleased to announce a new preview build in the Expression Blend 2 train – our September Preview. The September Preview has a slew of new features, many of them made possible by your feedback! We have also made available some videos that highlight the new features in this build. Visit the Expression Blend 2 September Preview page to view these, and to download and install the latest build.

As usual, we look forward to your feedback!

Many thanks,

The Expression Blend team

What is new in this release?

Visual Studio 2008 Support

The Expression Blend 2 September Preview can open and work with Microsoft Visual Studio® 2008 (formerly known as Microsoft Visual Studio code name "Orcas") Beta 2 projects and solutions. By default, Windows Presentation Foundation (WPF) projects that are created in the Expression Blend 2 September Preview are now Visual Studio 2008 projects if Microsoft .NET Framework 3.5 is installed; such projects cannot be edited in Visual Studio 2005. The Expression Blend 2 September Preview can still open projects that were created with earlier versions of Expression Blend or Visual Studio 2005.

Making Controls from Existing Objects

The Expression Blend 2 September Preview contains new functionality that lets you refactor (in other words, convert) existing content into a control that can be reused (instantiated). Selected elements, their referenced resources, and referenced animations are refactored into the new control. You must build the project to be able to see and instantiate the new control.

Split View and XAML Editor Improvements

The Expression Blend 2 September Preview lets you view an open document in both Design view and XAML view at the same time by selecting the new Split tab on the right side of the artboard. Additionally, you can specify font size, font family, tab size, and word-wrap for the XAML editor (XAML tab) by modifying the Code Editor settings under Options in the Tools menu.

Storyboard Picker

The Storyboard Picker replaces the old Storyboard box. The picker consists of a label to indicate the name of the selected Storyboard (if a Storyboard is selected), a shortcut menu (available when you right-click the label), a pop-up button (and resulting pop-up menu), and a Close button to close all Storyboards and exit recording mode. Both the shortcut menu and the pop-up menu let you create a New Storyboard, and if a Storyboard is already selected, you can now Duplicate, Reverse, or Delete the selected Storyboard. The shortcut menu also lets you Rename the selected Storyboard. The pop-up menu contains all Storyboards in scope in a multicolumn layout. The pop-up menu can be resized, and its list filtered according to a text box at the top of the list. The Storyboard label serves as the Storyboard selector when you want to modify properties on a Storyboard.

Storyboard and Keyframe Properties

The Expression Blend 2 September Preview contains new functionality for setting properties on Storyboards and on keyframes in the Properties panel. When you have a Storyboard selected, you can change the direction of the animation and change the repeat behavior. When you have one or more keyframes selected, you can change the easing behavior between keyframes by modifying the related key splines graphically, or by setting specific values.

Vertex Animation

The Expression Blend 2 September Preview contains new functionality for animating individual vertices (points and tangents) on a line. Previously, if you modified a vertex when in animation recording mode, the original shape of the object was permanently modified.

Breadcrumb Bar

The Expression Blend 2 September Preview now displays a breadcrumb bar above the artboard, which helps you quickly switch editing scopes while you are editing templates and styles in WPF projects. The breadcrumb specifies the object that is selected. If a template can be applied to the object (such as a button), you can click a drop-down arrow in the breadcrumb item to view the actions that can be performed on the object (such as editing a button template). If you have already edited a style or template on the object, the breadcrumb will include additional items that represent the style and template items that you edited earlier. This makes it easy to see which style or template has already been edited on an object, to quickly switch the scope in which you are editing, and to understand exactly where you are as you make changes.

Font Embedding and Subsetting

The Expression Blend 2 September Preview contains new functionality for embedding and subsetting fonts in your project. Embedding makes sure that the font that you select for your application is the font that users will see when they run your application. Subsetting lets you create a custom font file that only contains a subset of the glyphs that you are interested in, and thereby reduce the size of your re-distributable.

Typically, users will already have most of the fonts that you can select in Expression Blend, and therefore you do not have to embed them. If the user does not have your chosen font, a default system font will appear.

If you do decide to embed, subset, or otherwise redistribute fonts in your application, it is your responsibility to make sure that you have the required license rights for those fonts. For the fonts that come with Expression Blend, see the Microsoft Software License Terms (EULA.language.rtf) file for full license terms. For other commercial fonts, see the Microsoft Typography web site for information that can help you locate a particular font vendor or find a font vendor for custom work.

To embed fonts in an Expression Blend application, you can use the new Font Embedding manager available in the Tools menu and available in the Advanced Properties section under Text in the Properties panel when you select a text control. For more information about how to embed fonts in WPF applications, see Packaging Fonts with Applications on MSDN.

Build Options

When building inside the Expression Blend 2 September Preview, the property $(BuildingInsideExpressionBlend) is set to true. You can use this property in your project or .targets files to change how the project builds when in Expression Blend. For more information about how Visual Studio supports this scenario, see the Visual Studio Integration documentation.
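For example, a conditional property group in your project file could key off this value. The DefineConstants tweak below is just one illustration of something you might change for Blend-only builds:

```xml
<!-- In a .csproj or an imported .targets file -->
<PropertyGroup Condition=" '$(BuildingInsideExpressionBlend)' == 'true' ">
  <!-- Example only: define an extra compilation symbol when Blend builds the project -->
  <DefineConstants>$(DefineConstants);INSIDE_BLEND</DefineConstants>
</PropertyGroup>
```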

Object Manipulation

We’ve added the ability to uniformly resize, scale, and rotate multiple selected elements by using resizing handles on the artboard. In addition, we made a number of usability improvements – for example, you can now easily duplicate elements by dragging them with the Ctrl key pressed.

Jeff Kelly is back with Part II of his behaviors triple-feature. This time, he focuses on more details and provides some examples of a simple behavior, trigger, and action - Kirupa

Behaviors and triggers are set on objects in XAML via an attached property, Interactions.Behaviors or Interactions.Triggers respectively. When created via XAML, the IAttachedObject interface is invoked behind the scenes to automatically associate your triggers, actions, and behaviors with the objects they are attached to in the XAML. It is also possible to call the IAttachedObject members directly from code, although the XAML syntax for these operations is typically sufficient.

( Example XAML snippet of a Trigger and Action )

The heart of the behaviors API is the IAttachedObject interface. This interface provides three things: the ability to attach to a specified object, the ability to detach from any object you may be attached to, and a property that exposes the object to which the IAttachedObject is currently attached. Triggers, actions, and behaviors all implement IAttachedObject. Additionally, each exposes two virtual functions, OnAttached and OnDetaching, which are called in response to the object being attached to or detached from another object. Derived classes typically override these functions to hook/unhook event handlers, initialize state, release resources, and so forth.

A basic behavior has nothing more than what we’ve already described: the OnAttached and OnDetaching virtuals and an AssociatedObject property. Depending on the desired behavior, the author may implement their behavior as a black box, or they may choose to expose relevant properties to configure the operation of a specific instance of a behavior. One additional bit of functionality that is exposed by behaviors, and specially tooled in Blend 3, is ICommand properties. ICommand properties exposed on behaviors give users of a behavior a configurable way to interact with it; this will be explored in more detail in a future post.

The base class for creating a trigger is the TriggerBase class. The main addition to the basic API that Triggers introduce is the inherited InvokeActions method. A trigger typically hooks up event handlers or initializes some internal mechanism that will be used to determine when to fire (a timer, a gesture engine, and so on). Once a trigger has determined that it is ready to fire, the author simply calls the inherited InvokeActions method, and all Actions associated with that trigger will be invoked. This method accepts a parameter that will be passed to all Actions; this mechanism can be used to pass data between your triggers and your actions (such as EventArgs), or it can be set to null and ignored.

Actions have the same basic API extensions as Behaviors and Triggers, but they also require the author to implement the abstract Invoke method. This method is called when the action is invoked, and the functionality of the action should be implemented there. The base class for creating an action is TriggerAction.
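To make the shape of the API concrete, here is a minimal sketch of a trigger and an action, assuming a reference to the System.Windows.Interactivity assembly that ships with Blend 3. The class names are made up for this illustration:

```csharp
using System.Windows;
using System.Windows.Input;
using System.Windows.Interactivity; // ships with Blend 3

// Illustrative names only. A trigger hooks an event in OnAttached,
// unhooks it in OnDetaching, and calls InvokeActions when it fires.
public class ClickTrigger : TriggerBase<UIElement>
{
    protected override void OnAttached()
    {
        base.OnAttached();
        AssociatedObject.MouseLeftButtonDown += OnMouseDown;
    }

    protected override void OnDetaching()
    {
        base.OnDetaching();
        AssociatedObject.MouseLeftButtonDown -= OnMouseDown;
    }

    private void OnMouseDown(object sender, MouseButtonEventArgs e)
    {
        InvokeActions(e); // the argument is handed to every attached action
    }
}

// An action implements the abstract Invoke method.
public class ShowMessageAction : TriggerAction<DependencyObject>
{
    protected override void Invoke(object parameter)
    {
        MessageBox.Show("Trigger fired!");
    }
}
```

In XAML, these would be wired up through the Interactions.Triggers attached property described above.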

One nice aspect of the API is that the differences between Silverlight and WPF are minimal. A behavior written for one platform needs only changes to the platform-specific code used in its implementation to compile against the other platform; the behavior APIs themselves are the same on both.

Type Constraints

When creating a trigger, action, or behavior, you need to specify a type constraint. This will control the types that your type may be attached to and is particularly useful if you are assuming the existence of a specific event or property on your AssociatedObject. The type constraint is specified as a generic type argument provided to the base class in your class definition. For instance, if you want to write an action that only applies to types derived from Rectangle, you would define it as:

class MyRectangleAction : TriggerAction<Rectangle>

Note: Your constraint type must derive from DependencyObject. If you don’t have a specific constraint, you should specify DependencyObject in the generic field of your derived class definition.

Some Simple Examples

To wrap up this post, let’s look at some sample behaviors, actions, and triggers. Let’s start with a simple example of a behavior:

A simple trigger would look as follows:

Finally, here is a simple action:

This post was a mix between concepts and details. In a future post, I will dive into even more detail on how to write some of these behaviors.

But if you want to continue to use the January CTP of EID then please read this post before rushing off and installing anything.

As you know, the WinFX Runtime Components (RTC) are the redistributables (runtime binaries) needed for executing WinFX applications. You target a particular version of WinFX when you develop your Windows Presentation Foundation (WPF) applications. You'll also be aware that Expression Interactive Designer (EID) is itself a WPF application so it is built to run against a particular version of WinFX, as are the WPF applications you build using EID.

So, if you want to continue to use the January CTP of EID then you'll need the WinFX Jan CTP installed on your machine. If you've downloaded and played with the new WinFX Feb CTP then you'll need to revert to the WinFX RTC Jan CTP [1] in order to continue to use EID.

That is, at least for just a little while longer. We have a March CTP of Expression Interactive Designer under starter's orders as we speak so please be patient just a little longer. And the EID March CTP will work with the WinFX Feb CTP.

If you'd like to experiment with the new WinFX RTC Feb CTP [2] then please do so. If you're a developer-type then you'll also be keen to try out the compatible Visual Studio "Orcas" CTP [3] which contains a WPF visual designer (code name "Cider").

If you've been using the January Community Technology Preview (CTP) of Expression Interactive Designer (EID) and you've been waiting for the March CTP that we mentioned in a previous post [1] then you'll be glad to hear that your wait is over!

We're pleased to present the March 2006 CTP of EID that targets the February 2006 CTP of Microsoft WinFX Runtime Components (RTC). We have had an opportunity to make incremental improvements to various features of the product as well as to incorporate some of your feedback.

NOTE: Please uninstall any older versions of Deep Zoom Composer prior to installing this new version.

This release was really about fixing the bugs that many of you have found, as well as addressing some major shortcomings in the app. One of the biggest shortcomings was the lack of comprehensive, up-to-date documentation...until now!

Updated Documentation

Thanks to some great work by Chris Lohr and his team, Deep Zoom Composer has some really informative (and nifty-looking) documentation in the form of a User Guide:

The User Guide covers topics ranging from what Deep Zoom is to how to actually use Deep Zoom Composer to create your own content.

You can access the User Guide inside Deep Zoom Composer by going to Help | User Guide or by pressing F1.

Random Trivia

The photographs used as examples inside the user guide were taken by Chris himself.

Improved Memory Handling

One of the areas we have made, and continue to make, investments in is memory usage. Dealing with many high-resolution images on an interactive design surface is a challenge, but Deep Zoom Composer should now allow you to compose more images than you could in the past. We aren’t quite there yet, so expect future releases to improve memory handling further.

Improved Project Support

Over the past few releases, we made some major changes to our project structure and to how the DZPRJ files are written and opened. Unfortunately, for some of you, those changes meant your older projects no longer opened. We have tried to fix as many of those incompatibilities as possible in this release, and a big thanks to all of you who have e-mailed us your projects for testing.

If you find that your projects are still not opening, we apologize. To help us out, do e-mail your .dzprj files to kirupac[at]microsoft.com to aid in troubleshooting.

Updated Seadragon Ajax Templates

In our previous release, you got to use Deep Zoom Composer to export your content to Seadragon Ajax. The approach used then was to have you upload the JS libraries along with your images to the server. That was a bit messy. In this release, we are deferring all JS downloading to the Live Labs team’s seadragon.com server itself, so you will no longer have to upload a large quantity of JS files.

Numerous Little Fixes

Besides the major changes listed above, we made many little tweaks that are too numerous to list here. Some of them are bigger, such as using an updated version of DeepZoomTools.dll, and some are smaller, like the numerous wording and string changes we made.

If you have any questions or comments, feel free to comment below or post on our forums :)

While Expression Blend 2 allowed you to easily design Silverlight 1 and WPF applications, this service pack extends that support to Silverlight 2 as well. You now have the ability to take advantage of Silverlight 2 features such as control templating / styling, visual state manager, font embedding / subsetting, and more directly within Blend itself.

In Blend you use the Pen tool to draw lines, and they appear in the XAML as <Path> elements. But after you’ve drawn your lines and you test your application, the Paths appear immediately: they don’t replay the gestures you used to create them. Sometimes, though, that replay is exactly the effect you want, so how do you get it? The manual way is only practical if you have very few points. First, draw out a Path consisting of straight segments. Then create a new Storyboard, move the playhead forward a tenth of a second and, using the Direct Selection tool, move the last point to the same position as the second-to-last point. Move the playhead again and repeat, each time moving an increasing number of points back to the previous position. When all your points are at the same position as the very first point, move the playhead to time 0 and click the Record Keyframe button. Now reverse the Storyboard and you’re done.

Any more than a very small number of points and you’ll need an automatic way of generating the Storyboard. You can do that with the code in the assembly DLL built from the PathScribbler project I’ve provided below:

PathScribbler works with WPF paths, but you could easily port the code to work with Silverlight. After building the DLL and referencing it in your project, you first draw your Path (which may contain curved segments, although they will be straightened by the PathScribbler code) and then create a new, empty Storyboard. Finally, just call PathScribbler.Generate() and pass the name of the Storyboard into which the animations are to be generated, along with some other optional parameters. Then you can trigger your Storyboard any way you wish.
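PathScribbler's internals aren't reproduced here, but the general technique of generating such a Storyboard in code can be sketched as follows. The method and parameter names are my own, not PathScribbler's, and the sketch assumes a figure made only of LineSegments and an unfrozen geometry:

```csharp
// Sketch of the general technique (not PathScribbler's actual code):
// each point is held at the figure's start point and snaps to its real
// position at its scheduled time. Played in reverse, the path appears
// to be drawn point by point.
static void GenerateScribble(Storyboard storyboard, Path path, double secondsPerPoint)
{
    var geometry = (PathGeometry)path.Data;
    PathFigure figure = geometry.Figures[0];

    for (int i = 0; i < figure.Segments.Count; i++)
    {
        var anim = new PointAnimationUsingKeyFrames();

        // Held at the start point at time zero...
        anim.KeyFrames.Add(new DiscretePointKeyFrame(
            figure.StartPoint, KeyTime.FromTimeSpan(TimeSpan.Zero)));

        // ...then revealed at its final position at its scheduled time.
        anim.KeyFrames.Add(new DiscretePointKeyFrame(
            ((LineSegment)figure.Segments[i]).Point,
            KeyTime.FromTimeSpan(TimeSpan.FromSeconds((i + 1) * secondsPerPoint))));

        Storyboard.SetTarget(anim, path);
        Storyboard.SetTargetProperty(anim, new PropertyPath(
            "(Path.Data).(PathGeometry.Figures)[0].(PathFigure.Segments)" +
            "[" + i + "].(LineSegment.Point)"));
        storyboard.Children.Add(anim);
    }
}
```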

The PathScribbler project is contained in an example project named PencilWriter attached to this post. For this project, I made a TextBlock into a Path, and I trigger my Storyboard when the main window is clicked. Build and run the project, then click the window and wait a moment until the animation is generated and begins. Hope you have fun with PathScribbler!

Karen Corby is the Program Manager who worked with the Expression Blend team in developing the Visual State Manager feature. Karen has written four excellent and comprehensive blog posts (starting here) which explain the motivation for VSM, everything you can do with VSM and how it works under the covers, and how to build and skin a custom control. Even if you won't be writing your own custom control, this is highly recommended material on the subject from someone who designs the platform itself.

Below I've shamelessly copied out Karen's further reading section so we have a good set of links in one place:

I love 3D, so I'm always trying to incorporate it in my projects. And what better opportunity to work with 3D than a blog post! This week I want to share this Mini Cube project, which is a (2x2) simplified version of that famous game that gave me so many headaches in the past. This is a perfect candidate for a 3D sample, since you have to visualize the cube in three dimensions in order to play it properly.

This project is not finished yet: I've prepared the 3D objects and some interaction, but the cube doesn't "work" yet, which means you can't rotate the blocks. But that's the point of this post... to invite you to finish it ;-) So, let me give you an idea of what I have done so far, and let's see if someone wants to take it to the next level:

Creating the 3D Objects

The workflow I used was Maya 6.0 > Export to Obj > Import Obj in Blend. In Maya, I was able to create the geometry, separate the materials, and adjust the pivot points for each block. For the geometry, I also created a few extra objects that will receive mouse events in Blend. If you click on one corner, you will notice that a Message Box shows the name of that corner (red1, red2, etc...). See the following image for where they are located. You can find the original Maya project in the project ZIP file (/miniCube/Maya_files).

Importing the Obj File in Blend

To import an Obj file, you just need to drag the files into your WPF project in Blend. Just remember that you have to drag both the .obj and the .mtl files; the latter contains all the materials that you defined in Maya. Double-click the file cube.obj to insert it into the artboard, then use Camera Orbit to rotate the 3D space.

What else is Ready in Blend?

Ok, that's the general idea. Now, if you open the Blend project, you will see that I've prepared a few more steps:

I've created new groups in Blend for the blocks (b1, b2, b3, etc...)

The project has several Storyboards for each block. Those Storyboards are rotating each block in X, Y and Z, in both directions.

The code behind (Window1.xaml.cs) has basic interaction to rotate the cube.

If you click on a corner, you can see a Message Box that shows you which corner got the event (look for the method OnMouseUp in Window1.xaml.cs).

What's Next?

Well, since you have the Holidays, you can use that time off to finish this project! The next steps are not easy, but interesting. You can use the Storyboards that are already there, or you can rotate the blocks using just code. The idea now is to make the cube work, so you can come back from the Holidays and spend your time playing the game at work! Just kidding...

I will publish the final project in January, but I would be glad to publish any solution that you might send us!

With WPF, you have several ways of creating animations. A common way is by using Storyboards/keyframes, and you can do that easily using Blend itself. If you are willing to take a trip through the dark side and use code to create your animations, you have the ability to create some really cool effects.

Snow is the big thing during this part of the year in much of the world...except if you live in a place that is too warm to have snow in December. For those of you who can't experience real snow (like me, who is spending the holidays here), I decided to create a small application that uses UserControls, some old-fashioned C# code, and a dash of CompositionTarget.Rendering to simulate falling snow on your computer:

Click here to run the application, but if you are unable to run the ClickOnce application, download and extract the source code and use Blend 2 or Visual Studio 2008 to open and run this application instead. Let's look at some interesting bits and pieces of code that make up this application:

CompositionTarget.Rendering

When creating animations via code, you need some sort of a loop structure that updates what is displayed on your screen. The loop structure in our case is provided via the CompositionTarget.Rendering event, which calls our MoveSnowFlakes event handler each time your scene is drawn/re-drawn:
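As a minimal sketch (the handler name matches the one mentioned above; the rest is illustrative), hooking up the event looks like this:

```csharp
// Subscribe once, typically in the window's constructor. From then on,
// MoveSnowFlakes runs every time the scene is drawn/re-drawn.
public Window1()
{
    InitializeComponent();
    CompositionTarget.Rendering += MoveSnowFlakes;
}

private void MoveSnowFlakes(object sender, EventArgs e)
{
    // Per-frame work goes here: update the position of each snowflake.
}
```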

For more information on what the CompositionTarget.Rendering event does, refer to its MSDN documentation page.

Creating the Oscillation

If you observe the snowflakes as they fall, you'll see that they oscillate horizontally. The oscillation effect is provided by the Cosine function, which takes a number as its argument:

Because I am interested in positioning each snowflake precisely, each snowflake is placed inside a Canvas layout control, and the exact position is set using the SetLeft and SetTop attached properties. With this approach, I gain the ability to position my snowflake with exact x/y coordinates, but I do lose the automatic layout functionality WPF provides.
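Putting the two ideas together, here is a sketch of moving a single snowflake each frame (the method, field, and constant values are my own for illustration, not necessarily those in the downloadable source):

```csharp
private double angle; // grows a little each frame and drives the sway

private void MoveSnowFlake(UserControl snowFlake)
{
    angle += 0.02;

    // Cosine gives the horizontal oscillation; the vertical motion is a
    // constant fall speed. Both are applied via Canvas attached properties.
    double x = Canvas.GetLeft(snowFlake) + 1.5 * Math.Cos(angle);
    double y = Canvas.GetTop(snowFlake) + 2.0;

    Canvas.SetLeft(snowFlake, x);
    Canvas.SetTop(snowFlake, y);
}
```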

I hope this post helped you to see a new way of creating animations in WPF. Because WPF uses hardware acceleration and provides decent drawing support, feel free to experiment and create far cooler animations than the one I have shown above, or than what you were able to create in the past with other technologies.

The following post is written by Jeff Kelly, one of the developers who worked extensively on both the behaviors runtime as well as the UI inside Blend that makes behaviors easier to use! In this post, he will provide an overview of the three components that make up what we collectively call “Behaviors” in the Expression Blend 3 Preview - Kirupa

One of the major goals of Blend 3 is to make it easier to build interactivity into your applications. In Blend 3, we’ve introduced several new concepts that make writing and reusing interactivity easier than ever before. By writing triggers, actions and behaviors, developers can provide functionality that is cleanly encapsulated and reusable by both developers and designers. Likewise, designers are empowered to add a new level of interactive functionality to their work without ever having to touch a code file.

Triggers and Actions

To anyone familiar with WPF, triggers and actions should sound familiar. Blend 3 introduces a similar model, extends support to Silverlight as well as WPF, and allows you to write your own triggers and actions - opening a whole new world of possibilities for what kinds of functionality you can create and reuse in your own applications. Let's look at what triggers and actions are at a conceptual level, and then dive a little deeper into the basics of the API.

An action is an object that can be invoked to perform an operation. If you think that sounds pretty vague, you're right. The scope of an action is not constrained: if you can write the code to do something, you could write an action to do the same thing. That said, actions are best written to perform operations that are largely atomic in nature. That is, actions work best when they don't rely on external state that needs to be persisted between invocations of the action, and don't have any dependencies on other actions existing or running in a particular order relative to their invocation.

Good actions

Change a property

Call a method

Open a window

Navigate to a page

Set focus
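For a flavor of the "change a property" case above, here is a sketch of a small custom action (the class and property names are mine; the base class is TriggerAction<T> from the behaviors runtime that ships with Blend 3):

```csharp
using System.Windows;
using System.Windows.Interactivity;

// Invoked by whatever trigger it is attached to; sets the associated
// element's Opacity to a configurable value.
public class SetOpacityAction : TriggerAction<UIElement>
{
    public static readonly DependencyProperty OpacityProperty =
        DependencyProperty.Register("Opacity", typeof(double),
            typeof(SetOpacityAction), new PropertyMetadata(1.0));

    public double Opacity
    {
        get { return (double)GetValue(OpacityProperty); }
        set { SetValue(OpacityProperty, value); }
    }

    protected override void Invoke(object parameter)
    {
        // AssociatedObject is the element the action is attached to.
        AssociatedObject.Opacity = Opacity;
    }
}
```

Note that the action knows nothing about what invokes it - any trigger can drive it, which is exactly the loose coupling described below.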

Actions aren’t particularly useful on their own: they provide functionality to do something, but no way to activate that functionality. In order to invoke an action, we need a trigger. Triggers are objects that contain one or more actions and invoke those actions in response to some stimulus. One very common trigger is one that fires in response to an event (an EventTrigger). Other examples might include a trigger that fires on a timer, or a trigger that fires when an unhandled exception is thrown.
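A trigger that fires on a timer, as mentioned above, might be sketched like this (the one-second interval is hard-coded for brevity; TriggerBase<T> and InvokeActions come from the behaviors runtime):

```csharp
using System;
using System.Windows;
using System.Windows.Interactivity;
using System.Windows.Threading;

// Invokes its contained actions once per second while attached.
public class TimerTrigger : TriggerBase<DependencyObject>
{
    private DispatcherTimer timer;

    protected override void OnAttached()
    {
        base.OnAttached();
        timer = new DispatcherTimer { Interval = TimeSpan.FromSeconds(1) };
        timer.Tick += OnTick;
        timer.Start();
    }

    protected override void OnDetaching()
    {
        timer.Stop();
        timer.Tick -= OnTick;
        base.OnDetaching();
    }

    private void OnTick(object sender, EventArgs e)
    {
        // Fire every action attached to this trigger.
        InvokeActions(null);
    }
}
```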

One important thing to note is that triggers and actions are generally meant to be used together arbitrarily. In other words, you should avoid writing an action that makes assumptions about the type of trigger that invokes it, or a trigger that makes assumptions about the actions that belong to it. If you find yourself needing a tight coupling between a trigger and action, you should instead consider a behavior. Speaking of which….

Behaviors

Whereas triggers and actions had an earlier incarnation in WPF, the concept of a behavior is a new one. At a glance, a behavior looks similar to an action: a self-contained unit of functionality. The main difference is that actions expect to be invoked, and when invoked, they will perform some operation. A behavior does not have the concept of invocation; instead, it acts more as an add-on to an object: optional functionality that can be attached to an object if desired. It may do certain things in response to stimulus from the environment, but there is no guarantee that the user can control what this stimulus is: it is up to the behavior author to determine what can and cannot be customized.

As an example, consider a behavior that allows me to drag the object the behavior is attached to around with the mouse. I need to listen to the mouse down, mouse move, and mouse up events on the attached object. In response to the mouse down, I'll record the mouse position, hook up the mouse move and mouse up handlers, and capture the mouse input. On mouse move, I'll update the position of the object as well as the mouse position. On mouse up, I'll release mouse capture and unhook my mouse move and mouse up handlers.

My first inclination might be to try and use EventTriggers for each of these events, and write a StartDragAction, MoveDragAction and StopDragAction to invoke in each case. However, it soon becomes apparent that this scenario is not well-addressed by actions: I need to store state between invocations (previous mouse position and the state of the drag), and the operation isn’t atomic. Instead, I can write a behavior that wraps the exact functionality outlined above into a reusable component.
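The drag behavior described above could be sketched as follows. This is an illustration built on Behavior<T> from the behaviors runtime, assuming the attached element lives in a Canvas; it is not the drag behavior that ships with Blend:

```csharp
using System.Windows;
using System.Windows.Controls;
using System.Windows.Input;
using System.Windows.Interactivity;

// Lets the user drag the attached element around with the mouse.
public class MouseDragBehavior : Behavior<UIElement>
{
    private Point lastPosition; // state persisted between events

    protected override void OnAttached()
    {
        base.OnAttached();
        AssociatedObject.MouseLeftButtonDown += OnMouseDown;
    }

    protected override void OnDetaching()
    {
        AssociatedObject.MouseLeftButtonDown -= OnMouseDown;
        base.OnDetaching();
    }

    private void OnMouseDown(object sender, MouseButtonEventArgs e)
    {
        // Record the position, hook the move/up handlers, capture the mouse.
        lastPosition = e.GetPosition(null);
        AssociatedObject.MouseMove += OnMouseMove;
        AssociatedObject.MouseLeftButtonUp += OnMouseUp;
        AssociatedObject.CaptureMouse();
    }

    private void OnMouseMove(object sender, MouseEventArgs e)
    {
        Point current = e.GetPosition(null);
        var element = (FrameworkElement)AssociatedObject;

        // Move the element by the mouse delta, then remember the new position.
        Canvas.SetLeft(element, Canvas.GetLeft(element) + (current.X - lastPosition.X));
        Canvas.SetTop(element, Canvas.GetTop(element) + (current.Y - lastPosition.Y));
        lastPosition = current;
    }

    private void OnMouseUp(object sender, MouseButtonEventArgs e)
    {
        // Release capture and unhook the handlers added on mouse down.
        AssociatedObject.ReleaseMouseCapture();
        AssociatedObject.MouseMove -= OnMouseMove;
        AssociatedObject.MouseLeftButtonUp -= OnMouseUp;
    }
}
```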

Conclusion

Hopefully this post gives you an overview of what makes up a Behavior - the catch-all word we use for behaviors, actions, and triggers. In my next post, I will describe the API a bit more and provide some code snippets on how to write each of the behavior components.

It’s been a while since we released a preview of Expression Blend 3 at MIX. We’ve been working pretty hard on continuing our work on Blend 3, but we haven’t really shared what exactly we are working on.

Instead of detailing what we are doing, I figure I will just post a screenshot of what my daily build of Expression Blend looks like. Click on the following image to view a larger version:

( click above image to see a larger version )

Can you spot all of the major or minor changes between the version of Blend you are currently running and the version of Blend I currently have displayed?

As you probably know, Silverlight and WPF have a runtime piece called the Visual State Manager (or VSM for short). As I’ll describe in this post, VSM and the Expression Blend tooling support for VSM lend a nice clean mental model to the business of visual states and visual state changes for both custom controls and UserControls.

Although chronologically the story begins with the control author, I’ll talk about that aspect later in this post simply because there are more people in the world concerned with the visual stuff than with the logical stuff. The visual aspect begins in Blend, with a set of already-defined visual states organized into groups in the States panel.

You can identify three stages in the design of visual states and transitions. First, the static stage. Here you make each visual state look the way you want it to, and you do so ideally without any thought of transitions. You select a state in the States panel and you change object properties. And speaking of the States panel, this is probably a good time to introduce the idea of state groups.

Visual states are grouped in such a way that a) the states within a state group are mutually exclusive of one another, and b) the states contained in any group are independent of the states contained in any other group. This means that one state, and any one state, from every group can be applied at the same time without conflict. An example is a check box where the checked states are independent of, and orthogonal to, the mouse states.

Changing an object's property in more than one state within the same group is common practice. For example, you might change a Rectangle's Fill to different colors in MouseOver, Pressed and Disabled. This works because only one state from the CommonStates state group is ever applied at a time.

But changing an object's property in more than one state group breaks the independent nature of the state groups and leads to conflicts where more than one state is trying to set the same object's property at the same time. Blend will display a warning icon (with a tooltip containing details) next to any affected state group whenever this conflict situation is detected.

Each state group should contain a state that represents the default state for that group. CommonStates has ‘Normal’, CheckedStates has ‘Unchecked’, and so on. It is a good (and efficient) practice to set objects’ properties in Base such that no changes have to be made in any ‘default’ state. So, for example, you would hide a check box’s check glyph and focus rectangle in Base and then show them in Checked and Focused respectively.

So now you can click through the states to confirm that each looks correct. You can build and run, and test your states, and you might even stop at this stage if you're happy that the control switches instantly from one state to another. But if instant state switches are not what you want, then you can breathe life into your transitions in stage two, the transitions stage. First, add any transitions you want to see, then set transition durations and easing on them, still in the States panel. And again, in very many cases this will be enough for your scenario.

What's interesting for those designers who can stop at this point is that there was no need to open the Timeline and no need to be bothered with the attendant concepts of what a Storyboard is, etc. To keep unnecessary concepts and UI out of your face, by default Blend keeps the Timeline closed when you select a visual state or edit a transition duration. You can of course open it at any time with the Show Timeline button.

Still in the transitions stage, there may be times when you need a property’s value to change during the transition from StateA to StateB but, because of the way StateA and StateB are defined, the property either doesn’t change or doesn’t pass through the desired value. In this case you need to customize that transition. Select the transition and then use the Timeline as normal to define the animations that should take place during the transition.

The last stage is dynamic states. If, for example, you want a blue rectangle to subtly pulse while a control has focus then you need a steady-state animation. I also call them ‘in-state animations’ because the animation happens while you’re ‘in a state’. To do this, just select the state, open the Timeline, and go ahead and keyframe as usual, possibly also selecting the Storyboard and setting repeat behavior and auto reverse.

Now let's move on to the topic of how states relate to control authoring. Before a designer can begin deciding what states and transitions look like, the control author must decide what states exist. As a control author, your job is not to think about visual states, but logical states. Forget what it looks like; what does it mean? You need to consider all the ways the end user (and possibly other factors such as invalid data) can interact with the control, and from that thinking build out a set of candidate states; the states are logical at this point because they have no 'look'.

Now's the time to think about whether your candidate states need factoring. Look for islands of states: closed graphs that don't link to other states. There are two kinds: orthogonal and nested. Orthogonal islands should be put in their own state group. An example is that of CheckedStates and FocusedStates. There are no transitions between a CheckedStates state and a FocusedStates state, and a control is either checked or not, and focused or not, and there is no dependency between those two aspects.

Islands that should be nested have the characteristic that there are no transitions between a StateGroupA state and a StateGroupB state, and the states in StateGroupB are only effective for one state in StateGroupA. For example, if StateGroupA contains LoginPageHidden and LoginPageShown, and StateGroupB contains LoginNormal and LoginError, then it's clear that StateGroupB is only effective during StateGroupA.LoginPageShown. In this case the two state groups are not orthogonal, so you may choose not to use state groups. An alternative, and arguably cleaner, design would be to put StateGroupA at one level of hierarchy with StateGroupB defined on a nested control.

Another point to bear in mind, related to naming states, is that each state name must be unique for a control type, even across state groups.
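On the code side, a control author typically advertises logical states and their groups with TemplateVisualState attributes. Here is a sketch; the control name and the exact set of states are illustrative:

```csharp
using System.Windows;
using System.Windows.Controls;

// Each state belongs to exactly one group, and each state name is unique
// for this control type, even across groups.
[TemplateVisualState(GroupName = "CommonStates", Name = "Normal")]
[TemplateVisualState(GroupName = "CommonStates", Name = "MouseOver")]
[TemplateVisualState(GroupName = "CommonStates", Name = "Pressed")]
[TemplateVisualState(GroupName = "FocusStates", Name = "Unfocused")]
[TemplateVisualState(GroupName = "FocusStates", Name = "Focused")]
public class SampleButton : Control
{
    // Template parts and state-change logic go here.
}
```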

When a control initializes, it first has to get itself onto the state graph. This is important. If a control doesn’t do this then it is still in Base after initialization. Base is not a state; it merely represents the control with its local (or ‘base’) property values set, with no states applied. When the mouse pointer first moves over this control it will go to MouseOver but it will go there from Base, so the Normal -> MouseOver transition will not run the first time. This is a subtle bug that the consumer of your control cannot fix by defining Base -> MouseOver, because Base is not a state. So when you author your own templated control or UserControl, you should define a ‘default’ state in each state group. Have the control go to those ‘default’ states when it initializes, and do so with transitions suppressed so that it happens without delay. Once it’s on the state graph, the control is ready for state transitions to occur so now you can implement the event-handlers that trigger the transitions within the state graph.
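That initialization pattern can be sketched like this (WPF-style overrides; the state names are illustrative and would match the groups the control declares):

```csharp
public override void OnApplyTemplate()
{
    base.OnApplyTemplate();

    // Get onto the state graph: go to each group's default state with
    // transitions suppressed (useTransitions: false) so it happens instantly.
    VisualStateManager.GoToState(this, "Normal", false);
    VisualStateManager.GoToState(this, "Unfocused", false);
}

// Once on the graph, event handlers drive transitions within it:
protected override void OnMouseEnter(MouseEventArgs e)
{
    base.OnMouseEnter(e);
    VisualStateManager.GoToState(this, "MouseOver", true); // animated
}
```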

I hope this post has been useful and has helped clarify some of the less obvious aspects of designing and authoring controls to work well with the Visual State Manager. If you want to see a walkthrough of some of the ideas presented here, you could try my Button styling video.

While the PathListBox control provides an easy way to lay out items along a path, creating a carousel control that appears three-dimensional and scrolls smoothly requires additional functionality that we did not have time to add in Expression Blend 4. I've created the PathListBoxUtils sample, available on CodePlex, to provide the tools that make creating a carousel like the one shown below very easy:

It’s not entirely uncommon for projects that run fine to not work when loaded into Blend for editing:

There are a variety of issues that can cause this: some are bugs that we're working to address, others are things that need to be fixed by the application developer. Unfortunately, designability doesn't always come free.

The exception information that’s displayed often has some useful information, but for complex projects I can save some guessing by just debugging through it using VS. This post will describe some steps to help you out.

Steps to Debug Exceptions on the Design Surface in Blend

Open the project with the error in Blend, but close the XAML file which contains the error.

Open the same project in Visual Studio.

Attach the VS debugger to the Blend process:

In Visual Studio, go to Debug->Attach to Process.

Select Blend.exe from the Available Processes list:

Ensure that the “Attach To:” field reads “Managed Code”; if not, click Select… and change it to “Managed Code”.

Click Attach

Set Visual Studio to break on all exceptions:

In Visual Studio, go to Debug->Exceptions…

Ensure the checkbox beside Common Language Runtime Exceptions is checked:

Click “OK”

Go back to Blend, and open up the XAML file containing the error.

Ideally, you'll get a nice stack trace in Visual Studio leading to some of your code, and the cause will be readily apparent:

Common Examples of Errors We See

Let's look at some common errors we encounter with Silverlight 2 projects:

Common Error

Accessing the web page while in the design surface, such as the example above. Anything related to HtmlPage is off-limits at design time, since the app is not being hosted in a web page:

if (HtmlPage.IsEnabled)
{
    HtmlPage.Window.Alert("Hello World!");
}

Common Error

Accessing isolated storage in the design surface, or even accessing a method which contains a call to isolated storage. When hosted at design time, the Silverlight application is actually running on the desktop .NET runtime, where these APIs do not exist. This means that methods which contain a call to isolated storage cannot be called, even if the isolated storage is never accessed.
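One workaround, sketched here using the same HtmlPage.IsEnabled guard shown earlier (method names are illustrative), is to move the isolated-storage calls into their own method so that the guarded caller can still run on the design surface:

```csharp
// Because a method whose body contains isolated-storage calls cannot be
// called at design time, keep those calls in a separate method and guard
// the caller.
public void SaveSetting(string key, string value)
{
    if (!HtmlPage.IsEnabled)
    {
        return; // design surface: not hosted in a web page, skip storage
    }
    SaveSettingCore(key, value);
}

// Only this method touches IsolatedStorageSettings, so only this method
// depends on APIs missing from the desktop runtime - and it is never
// entered at design time.
private void SaveSettingCore(string key, string value)
{
    IsolatedStorageSettings.ApplicationSettings[key] = value;
    IsolatedStorageSettings.ApplicationSettings.Save();
}
```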

Bonus Trick

One of the other complaints that I've heard frequently is that debugging template errors in Silverlight 2 Beta 2 is not so easy. Too often you just get a stack trace with a bunch of nonsense calls to Measure or MeasureOverride. To help debug these errors in my own projects, I've started overriding MeasureOverride in my own controls so that they'll appear in the stack trace as well. This way, when a template fails to load, I can quickly tell where it was in the application.
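That trick can be sketched as a trivial override with no behavior change (the class name here is illustrative):

```csharp
using System.Windows;
using System.Windows.Controls;

public class TracedPanel : StackPanel
{
    protected override Size MeasureOverride(Size availableSize)
    {
        // Just delegate to the base class. The override's only job is to put
        // "TracedPanel.MeasureOverride" on the call stack, so a failing
        // template points you at the control it was measuring.
        return base.MeasureOverride(availableSize);
    }
}
```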