Andrew Eichacker


Here we are again. A dusty blog and something to announce. I won’t even pretend that I’ll be updating more often this time; I now have another daughter on the way, and we both know how that turned out last time. I did want to post about a couple other changes, though.

Kindle for webOS

Last year, I was planning on showing you how to build webOS applications using the Enyo framework. After a year and a half of working on the Kindle app for webOS, I was excited to show you how easy it was to build for the platform. I learned a ton about JavaScript and web technologies, and enjoyed working on a new form factor. The TouchPad shipped, and I was wondering what app I would work on next.

Well…then, it got weird.

All webOS devices were canceled or discontinued, and the webOS team was left wondering what the future held for the team and its efforts. After a while, it was announced that HP would release the platform to the open source community. Many left; others were excited.

After a while and some reflection, I decided it was time for me to leave HP. I began the long, exhausting process of interviewing at different companies around the Bay Area. Then, in February, HP announced a sizable reduction in the webOS team. I was included.

By the grace of God, I was already in the middle of the interview process for a couple of companies, and gladly accepted a position as a Senior UI Engineer at Netflix (no, I can’t get the show you like on instant). I joined the TVUI team, which is responsible for the Netflix apps on the PS3, Wii, TVs, and other CE devices. My team is composed of brilliant people and, in just a couple of months, I have already learned a great deal from them. I’ve loved crafting rich, media-centered UI since I started at HP, and it’s very exciting to work on a form factor that will be evolving rapidly over the next few years.

Netflix: get a subscription...or ten!

I value my time at HP highly. I worked with some great people, learned a lot, and contributed to software shipped on millions of desktops, laptops, and tablets. I began there fresh out of college and have grown a great deal these past 4 years, with the influence and support of my colleagues there. I wish them luck as HP shifts along with the markets it serves.

Obviously, I won’t be writing any posts on building apps on the webOS platform; however, if you’re interested, I’d suggest you check out Enyo. It has morphed into a cross-platform JavaScript UI framework, and I’m sure it is still just as easy to use.

My future posts will mostly be about JavaScript. Some will be about gaming. Maybe a few on general software engineering. Sorry, none will discuss the intricate differences between types of cheeses, nor their profound impact on the Renaissance time period (nope, not a single one).

Fast user switching is a concept Microsoft first included with Windows XP that allows the OS to quickly switch to another user and back by keeping the first user’s applications running while the second uses the computer.

This sounds like a great idea, and for the most part, it is; however, it places a new responsibility on application developers, because the context in which an application is running could change at any time.

That is, user1 could be using app.exe, switch to user2, and user2 could use app.exe. Then you have two users using the same executable at the same time, and user1 expects that app.exe will retain his place in the app for when he returns.

Now, before you start shouting, “Anarchy!”, note that there are a variety of things you can do to address this responsibility.

You can just tell user2 that he can't use the application until user1 closes it. This is a poor approach, but feel free to annoy your users with your laziness if you like.

Alternatively, you can set your application to begin listening for window events that signal a change in the computer’s session.

Just as we have before, we’ll be using our friend, WndProc, to capture messages that WPF doesn’t fire for us. The one we’re looking for this time is WM_WTSSESSION_CHANGE, which will notify the application that the Windows session has changed. In order to receive notifications for this event, we’ll have to register using the function WTSRegisterSessionNotification.

Let’s kick things off with some imports for the Win32 APIs we’ll be using and the constants associated with them (see the MSDN links in the previous paragraph).

[DllImport("WtsApi32.dll")]
private static extern bool WTSRegisterSessionNotification(IntPtr hWnd, [MarshalAs(UnmanagedType.U4)] int dwFlags);

[DllImport("WtsApi32.dll")]
private static extern bool WTSUnRegisterSessionNotification(IntPtr hWnd);

[DllImport("kernel32.dll")]
public static extern int WTSGetActiveConsoleSessionId();

// dwFlags options for WTSRegisterSessionNotification
const int NOTIFY_FOR_THIS_SESSION = 0; // Only notifications involving the session attached to the window identified by hWnd are received.
const int NOTIFY_FOR_ALL_SESSIONS = 1; // All session notifications are received.

// Session change message ID
const int WM_WTSSESSION_CHANGE = 0x2b1;

public enum WTSMessage
{
    // wParam values that can be received:
    WTS_CONSOLE_CONNECT = 0x1,        // A session was connected to the console terminal.
    WTS_CONSOLE_DISCONNECT = 0x2,     // A session was disconnected from the console terminal.
    WTS_REMOTE_CONNECT = 0x3,         // A session was connected to the remote terminal.
    WTS_REMOTE_DISCONNECT = 0x4,      // A session was disconnected from the remote terminal.
    WTS_SESSION_LOGON = 0x5,          // A user has logged on to the session.
    WTS_SESSION_LOGOFF = 0x6,         // A user has logged off the session.
    WTS_SESSION_LOCK = 0x7,           // A session has been locked.
    WTS_SESSION_UNLOCK = 0x8,         // A session has been unlocked.
    WTS_SESSION_REMOTE_CONTROL = 0x9  // A session has changed its remote controlled status.
}

The first thing we’ll need to do is register for notifications. You’ll probably want this near the start of your application, in case your user is fast and/or part of your QA. After registration is successful, we want to capture the initial session ID to use for comparisons later on.
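Here's a minimal sketch of that registration in a WPF window, assuming the imports above; the `initialSessionId` field name and the `OnSourceInitialized` override are illustrative choices:

```csharp
// Requires: using System.Windows.Interop;
private IntPtr windowHandle;
private int initialSessionId;

protected override void OnSourceInitialized(EventArgs e)
{
    base.OnSourceInitialized(e);

    // Grab the Win32 handle for this WPF window and hook its WndProc.
    windowHandle = new WindowInteropHelper(this).Handle;
    HwndSource.FromHwnd(windowHandle).AddHook(WndProc);

    // Register for session-change notifications for this session only.
    if (WTSRegisterSessionNotification(windowHandle, NOTIFY_FOR_THIS_SESSION))
    {
        // Capture the initial session ID for comparisons later on.
        initialSessionId = WTSGetActiveConsoleSessionId();
    }
}
```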

When our WndProc is executed, we can check the message to see if it corresponds to the WM_WTSSESSION_CHANGE we’ve defined (confused? should’ve clicked that link earlier…).

We once again get the active session ID for the event to allow us to compare against our initial value. After all, we may not want to do anything if the session ID hasn’t changed.

It's also useful to check the wParam value in order to understand the type of session change that has occurred. If a user logs out, we can auto-save or clean up unneeded resources in preparation for the new user logging in. Alternatively, if it's a remote connection, we can show low-res images to make repaints faster.
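Putting those checks together, the hook might look something like this sketch; the `AutoSave` and `UseLowResImages` calls are placeholders for your own logic:

```csharp
private IntPtr WndProc(IntPtr hwnd, int msg, IntPtr wParam, IntPtr lParam, ref bool handled)
{
    if (msg == WM_WTSSESSION_CHANGE)
    {
        // Compare against the session ID we captured at registration.
        int sessionId = WTSGetActiveConsoleSessionId();
        if (sessionId != initialSessionId)
        {
            switch ((WTSMessage)wParam.ToInt32())
            {
                case WTSMessage.WTS_SESSION_LOGOFF:
                    AutoSave();        // placeholder: persist state, free resources
                    break;
                case WTSMessage.WTS_REMOTE_CONNECT:
                    UseLowResImages(); // placeholder: cheaper repaints over remote
                    break;
            }
        }
    }
    return IntPtr.Zero;
}
```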

Lastly, let’s be a good neighbor and unregister our subscription for notifications using WTSUnRegisterSessionNotification. Be sure to do this before your window handle is destroyed, such as in your Window_Closing event.
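That cleanup is a one-liner, assuming the window handle captured during registration:

```csharp
private void Window_Closing(object sender, System.ComponentModel.CancelEventArgs e)
{
    // The window handle is still valid here; unregister before it is destroyed.
    WTSUnRegisterSessionNotification(windowHandle);
}
```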

Limbo has been sitting on my Xbox 360 for the past few months, and I finally took the time to sit down and experience it in its entirety. I quickly found myself deeply engrossed in its foreboding atmosphere, clever puzzles, and unconventional storytelling. It is one of those games that pushes the medium forward and offers a glimpse of how the interactive medium can evoke emotions in such a way as to rival the most moving pieces in other art forms.

The first thing I noticed about Limbo is its unique visuals. It portrays a dim, black and white world that uses focus and shadows to create a sense of depth and mystery. The animations of your character portray a scared, weak little boy who is unable to defend himself. The effect is compounded with large, aggressive enemies and gruesome death sequences. These all combine to create an ominous environment that never feels safe and leaves you nervous about every step.

Limbo’s sound is just as effective at creating an engaging atmosphere. The boy’s footsteps echo in a near-silent forest, leaving you unsure of what you might find next. The stillness is shattered with the loud crashes of attacking creatures or smashing boxes, coupled with strong, sudden vibration from the controller. This brings you further to the edge of your seat as you avoid dangers, and downright scares you in some cases.

In many ways, Limbo’s presentation of ideas and expressions through gameplay is exemplary. It leads your emotions not only through the visual and auditory feedback you receive, but also through the movement of your character and the actions you perform. Read on for my interpretation of what this is saying, and how it communicates without a single piece of dialog.

WARNING: Some spoilers below. Due to the importance of surprise in some cases, I’d advise you to play before you continue reading.

Fear

The word “limbo” carries a great deal of religious connotation, but I don’t interpret this game as a story about a boy existing somewhere between Heaven and Hell. Rather, it is a metaphor for the idea of being stuck in a situation or state of mind, unable to make progress outward. A fitting substitution would be “The Waiting Place”, as referenced in Dr. Seuss’ brilliant Oh, the Places You’ll Go! (forgive the reference – I’m a dad; this is one of my daughter’s favorite books). Here’s an excerpt:

“The Waiting Place…for people just waiting.

“Waiting for a train to go or a bus to come, or a plane to go or the mail to come, or the rain to go or the phone to ring, or the snow to snow or waiting around for a Yes or No or waiting for their hair to grow. Everyone is just waiting.

“Waiting for the fish to bite or waiting for wind to fly a kite or waiting around for Friday night or waiting, perhaps, for their Uncle Jake or a pot to boil, or a Better Break or a string of pearls, or a pair of pants or a wig with curls, or Another Chance. Everyone is just waiting.”

Limbo is a journey out of The Waiting Place.

The first thing that you do in Limbo is make the decision to act. When you start the game, the boy is lying on the ground, motionless. Only when you choose to act does the adventure begin. If you choose not to, the boy stays trapped in limbo. There is no escaping The Waiting Place without the desire to act. This first button press sets the stage for the rest of the game – your actions are specifically designed to elicit emotions or ideas from the game.

The first area is a dense forest, wherein we find our first adversary: the spider, which represents our fears. Spiders are typically small creatures, but this spider is massive and seemingly impossible to overcome. Fears often seem larger and more daunting than they are, and this enemy certainly reflects that. Overcoming these fears will not be easy. You begin by timidly facing the spider by moving towards it to provoke it, then running away as it attacks. You are then stuck in its web and prepared as its meal, reflecting the danger of letting fears limit us – to the point that we cannot move and are consumed by them. Finally, the game forces you, by means of an approaching boulder, to stop running from the spider and face it head on in a moment of absolute terror: you then find that this spider is not so indestructible. By the end, you face a defeated, pitiful enemy and even use it as a means to progress toward the exit of The Waiting Place.

Facing our Peers

You are then thrust into the midst of a group of hostile kids, a representation of the peers that limit us to where we are. This could be active on their part, such as when one of them manipulates the boy’s fear with a mechanical spider look-alike. It could also be our own concern for what others think. This is portrayed a few times when the others simply stand in front of an obstacle, staring at the boy, seemingly saying “you can’t do this”. There are other boys hung amongst the trees, discouraging you with the failures of others. You continue to push forward, ignoring their attempts to stop you. Finally, these adversaries are destroyed as a result of their attacks against you.

You then enter a city and stumble upon a dilapidated hotel sign. Hotels remind me of rest and finding comfort, but the sign for the hotel points to a pit that would result in the boy’s death. This seems to convey that, if we are to get out of The Waiting Place, we can’t get comfortable. We have to push forward. After passing the hotel, you fall into a factory, where you are pushed along conveyor belts and gears. This area represents the habits and routines that keep you in limbo. Staying on the conveyor belt and following the gears leads to death; instead, you must choose a different path. You must choose to break those habits and accept change.

In the final areas, the shifting of gravity and magnetic attractions is key to the puzzles you must overcome. You fall down, then fall up, and must change gravity in mid-air to reach your intended destination. This represents the obliterated confidence you have when stuck somewhere. The gravity shifts from moment to moment as you second-guess yourself and remain unsure of what is up and down.

Finally, after overcoming your fears, what others think, your self-destructive habits, and lack of confidence, you shatter through the glass of The Waiting Place and find yourself…back in the forest, lying on the ground.

Once again, you choose to get up and start moving forward. Are you still stuck? Are you right back where you started?

No. Something is different this time.

This time, you find what you were looking for. You are in the same place you started, sure, but you are not the same person. You know the way out, and you know who the people are that matter most – those who will help you get there.

Want some other thoughts? Check out GamesRadar’s compilation of interpretations of Limbo from their writers. There are some really insightful thoughts in there.

Shortly after my daughter was born, I wrote to let you know that life was crazy and I’d be getting back to posting soon enough. I proceeded to complete my series on WPF 4.0 multi-touch and left you shivering and alone for 9 months.

Now, I have returned with the warm blanket of a new layout, some enthusiasm, and a lot of ~~empty promises~~ new stuff to show you!

Adalae is now 10 months old, gigantic, and starting to crawl. She’s certainly been a large reason for the infrequency of posts. Sorry, but she wins. You lose. Every time. Excuse me while I include an obligatory adorable photo:

Work has changed a lot over the past year, which is another reason/excuse why I haven’t been posting. I’ve typically shared things I’ve learned while working, but the projects I had been working on left me a little short on interesting material that I could really share.

Things finally stabilized, and I am currently working over at Palm on the Kindle app for the newly-announced HP TouchPad. Since I’m no longer working primarily with WPF, you likely won’t see much more of it in upcoming posts. However, I’ll be excited to share, in due time, what I’ve learned about Enyo, the latest version of Palm’s mobile development framework.

So, that’s where I’ve been. I would tell you that I plan on posting here more often, but we both know it would just jinx us for another 9 months. Instead, I’ll just stare awkwardly into the distance with a hopeful look on my face.

WPF 4.0’s manipulation events certainly made it easier to write an application that supports multitouch gestures. After you start playing with these gestures, however, you’ve found yourself disappointed.

You want more. There’s something missing. It’s just not like it used to be. “It’s not you, Manipulation events,” you say. “No…it’s me.” But then? A spark! You find out something new about them! Your relationship is saved! “Why, Manipulation events, I never knew you could handle…inertia!”

Long-term relationships with APIs aside, you’ve certainly landed on something interesting. WPF 4.0’s Manipulation events can also be used to handle inertia, which lets your UI look a little more natural and fun.

For those of you who didn’t pay attention in 4th grade science, inertia is described by Newton’s First Law of Motion. This law states that objects in motion tend to stay in motion, unless acted upon by an outside force. In other words: Ugg move stuff. Ugg let go. Stuff still move. Ugg hungry.

Science.

The idea behind inertia in WPF’s Manipulation events is to make objects that are being manipulated behave as a user would expect. When a user spins a card on a table, he can let go and it will continue spinning until it decelerates to a stop. Adding inertia to your manipulable objects makes users giddy to see things on a computer imitate the physical world.

In order to handle inertia, we need to create an event handler for our new inertia event, ManipulationInertiaStarting. This goes right along with your ManipulationDelta and ManipulationStarting events.
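If you prefer hooking the events in code-behind rather than XAML, the wiring might look like this sketch (`myCanvas` and the handler names are illustrative):

```csharp
myCanvas.ManipulationStarting += Window_ManipulationStarting;
myCanvas.ManipulationDelta += Window_ManipulationDelta;
// The new inertia event, fired when the user's fingers leave the screen:
myCanvas.ManipulationInertiaStarting += HandleInertia;
```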

Once the user stops performing the gesture, ManipulationInertiaStarting is fired. Our event handler, HandleInertia, is actually a very simple method. It is used to set the values of deceleration for the various manipulation components.

You can set deceleration for each of the transformations supported by manipulation: translation, scaling, and rotation. Don’t worry too much about the exact numbers here (I believe I originally pulled them from the inertia explanation on MSDN). You don’t need to account for your DPI to get the deceleration exactly right in physical terms; these values work pretty well, though.
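In that spirit, a HandleInertia along the lines of the MSDN sample might look like this sketch:

```csharp
private void HandleInertia(object sender, ManipulationInertiaStartingEventArgs e)
{
    // Decelerate translation by 10 inches/sec², converted to
    // device-independent pixels (96 DIP/inch) per millisecond².
    e.TranslationBehavior.DesiredDeceleration = 10.0 * 96.0 / (1000.0 * 1000.0);

    // Decelerate scaling by 0.1 inches/sec².
    e.ExpansionBehavior.DesiredDeceleration = 0.1 * 96.0 / (1000.0 * 1000.0);

    // Decelerate rotation by 720 degrees/sec².
    e.RotationBehavior.DesiredDeceleration = 720.0 / (1000.0 * 1000.0);

    e.Handled = true;
}
```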

Once these deceleration values are set, WPF once again fires the ManipulationDelta event (if you recall, this is the event whose handler applies all of the transformations). It populates its ManipulationDeltaEventArgs with the previous values, decreased by our deceleration values. It continues to fire the event with diminishing values, causing the object to slowly come to a stop.

Since we are just reusing our already-defined ManipulationDelta handler, inertia is an incredibly easy addition to make to your manipulable objects.

The only change we have to make to our handler is a check to make sure our object doesn’t fly away. This is a simple solution: if the object goes out of the window, we complete the inertia and provide a bounce effect to give the user feedback that it has reached the edge of the screen. (Correction: the e.Complete() method now appears to cancel the ReportBoundaryFeedback method; I wrote this application while everything was in beta. You can have the bounce effect without e.Complete(), but your rectangle then flies out of the window. Let me know if you have a simple solution for allowing both to happen, as I likely won’t put any effort into it.) You could easily change the behavior here to make the object react more realistically to its bounds if you like.

// Check if the rectangle is completely in the window.
// If it is not and inertia is occurring, stop the manipulation.
if (e.IsInertial && !containingRect.Contains(shapeBounds))
{
    // If both lines below are active, e.Complete() overrides e.ReportBoundaryFeedback().
    // Comment out e.Complete() for a bounce; leave it in to stop the rectangle.
    e.Complete();
    // e.ReportBoundaryFeedback(bounceDelta);
}

Matrix rectsMatrix = ((MatrixTransform)rectToManipulate.RenderTransform).Matrix;
Point rectManipOrigin = rectsMatrix.Transform(
    new Point(rectToManipulate.ActualWidth / 2, rectToManipulate.ActualHeight / 2));

// Rotate the Rectangle.
rectsMatrix.RotateAt(manipDelta.Rotation, rectManipOrigin.X, rectManipOrigin.Y);

// Resize the Rectangle using the scale deltas.
rectsMatrix.ScaleAt(manipDelta.Scale.X, manipDelta.Scale.Y, rectManipOrigin.X, rectManipOrigin.Y);

// Move the Rectangle.
rectsMatrix.Translate(manipDelta.Translation.X, manipDelta.Translation.Y);

// Apply the changes to the Rectangle, freezing the transform for performance.
rectToManipulate.RenderTransform = (MatrixTransform)(new MatrixTransform(rectsMatrix).GetAsFrozen());
e.Handled = true;

That concludes my series on WPF 4.0 multitouch. Let me know in the comments what kinds of UI elements you’ve touchified with these new events.

Back when I reflected on last year, I shared that I had a daughter on the way. Taking that and my recent inactivity into account, you may have been able to piece together that yes, indeed, I am now a father. Adalae Claire Eichacker was born on April 21st (I know, it’s been a while).
Of course, she has completely changed my life, and I’m now in the shell-shock phase of figuring out what the “new normal” is. To make things harder, my grandma passed away just a couple of weeks later. After taking three weeks off for paternity and bereavement leave, I came back to work just as a big release was being finished up and a new project was beginning. Now, the World Cup is going on and I’ve started a side project (more on that later).

So suffice it to say, I’ve been a bit busy and this blog is getting LONELY.

Do not fear, however! This has happened before, and will likely happen again. I have plenty of posts in my drafts to finish or write, and I’m sure I’ll have plenty more as I figure out how to balance being a father, husband, gamer, and programmer. Oh, and blogger. Also, superhero.

In a recent post, I showed you how to react to touch events in WPF 4.0. You can use that to implement the showcase multitouch gestures: scaling, rotating, and translation. It’s not too hard. Really, I’ve done it. Just dust off your geometry and trigonometry hats and get to it.

Are you done yet? No? Too lazy? Well, how about we make this easier. As I like to say regarding programmers: if necessity is the mother of invention, laziness is most certainly the father.

Luckily for us, Windows 7 has multitouch gesture recognition built in, and WPF now supports listening for it in its upcoming 4.0 release. Here’s how you can implement these gestures in your application.

We’ll first define a window that will contain two rectangles to manipulate.
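A sketch of what that window might look like (the names, sizes, and colors here are illustrative):

```xml
<Window x:Class="ManipulationDemo.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="Manipulation Demo" Width="800" Height="600"
        ManipulationStarting="Window_ManipulationStarting"
        ManipulationDelta="Window_ManipulationDelta">
    <Canvas>
        <!-- "Identity" gives each rectangle a MatrixTransform we can update in code. -->
        <Rectangle Name="rect1" Width="200" Height="200" Fill="CornflowerBlue"
                   IsManipulationEnabled="True" RenderTransform="Identity" />
        <Rectangle Name="rect2" Width="200" Height="200" Canvas.Left="300" Fill="IndianRed"
                   IsManipulationEnabled="True" RenderTransform="Identity" />
    </Canvas>
</Window>
```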

The containing control defines handlers for the ManipulationStarting and ManipulationDelta events. These events are fired when a multitouch gesture is first recognized and when it changes, respectively.

The IsManipulationEnabled property is set to true for each object that we plan to manipulate. This property tells WPF to watch for gestures on manipulable controls. I would guess that forcing you to explicitly define the elements that react to gestures improves the performance of gesture recognition.

The ManipulationStarting handler sets up the manipulation container in order to specify a frame of reference that the values will be relative to. For example, it establishes the origin (0,0) for x and y coordinates.
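A sketch of that handler, using the window itself as the container:

```csharp
private void Window_ManipulationStarting(object sender, ManipulationStartingEventArgs e)
{
    // All manipulation values will be reported relative to this window's
    // coordinate space, with (0,0) at its top-left corner.
    e.ManipulationContainer = this;
    e.Handled = true;
}
```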

Re-establishing the baseline each time is important, as the values that ManipulationDelta sends are not absolute. Each time the handler is called, the values are relative to the previous event firing. For example, if a user gestures a total rotation of 30 degrees, the events would look something like this:

# of Events | e.DeltaManipulation.Rotation | Total Rotation
----------- | ---------------------------- | --------------
1           | 5                            | 5
2           | 5                            | 10
3           | 5                            | 15
4           | 5                            | 20
5           | 5                            | 25
6           | 5                            | 30

Next, we establish an origin to use for the following manipulations. This specifies the point around which the rectangle will rotate and scale. Here, we’re setting it up at the middle of the rectangle.
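Putting it together, a ManipulationDelta handler might look something like this sketch:

```csharp
private void Window_ManipulationDelta(object sender, ManipulationDeltaEventArgs e)
{
    Rectangle rect = (Rectangle)e.OriginalSource;
    Matrix matrix = ((MatrixTransform)rect.RenderTransform).Matrix;

    // Rotate and scale around the center of the rectangle.
    Point origin = matrix.Transform(new Point(rect.ActualWidth / 2, rect.ActualHeight / 2));
    matrix.RotateAt(e.DeltaManipulation.Rotation, origin.X, origin.Y);
    matrix.ScaleAt(e.DeltaManipulation.Scale.X, e.DeltaManipulation.Scale.Y, origin.X, origin.Y);

    // Translate by the delta since the last event, not an absolute position.
    matrix.Translate(e.DeltaManipulation.Translation.X, e.DeltaManipulation.Translation.Y);

    rect.RenderTransform = new MatrixTransform(matrix);
    e.Handled = true;
}
```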

I spent a little time with some people over at AMD the other day, looking at ways to better utilize the video card using WPF.

A useful tip that came from that was to use the Freeze method on UI elements that are being manipulated. This tells the video card to reuse the texture already in video memory instead of unloading the old one, performing the manipulation, and loading a new texture into memory. Since that is one of the most expensive actions a video card can perform, using Freezable members can make things look much smoother.
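As an illustration, freezing a transform before assigning it might look like this sketch (`matrix` and `rect` are assumed to come from a manipulation handler like the ones in these posts):

```csharp
// Freeze the new transform so WPF hands the video card an immutable
// object, rather than re-uploading a fresh texture on every change.
MatrixTransform transform = new MatrixTransform(matrix);
transform.Freeze();
rect.RenderTransform = transform;
```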

Microsoft has made it much easier to access touch events in WPF. The touch events are akin to the mouse events you’re likely very comfortable with, but carry a little more information in order to support multitouch.

I’ll lay out a full application for you to play with. First, the XAML of the main window class:
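A minimal sketch of such a window, with the touch events hooked up in XAML (names are illustrative):

```xml
<Window x:Class="TouchDemo.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="Touch Demo" Width="800" Height="600"
        TouchDown="Window_TouchDown"
        TouchMove="Window_TouchMove"
        TouchUp="Window_TouchUp">
    <Canvas Name="canvas" Background="Black" />
</Window>
```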

Did you see that? I hooked up multitouch events in my XAML. GAME CHANGER.

Yes, it is that easy. You are already set up to receive touch events. Wizardry!

Now, let’s do something worthwhile with our newfound power. This application will create a square for every touch point and show its associated ID. This kind of application is useful when messing with new hardware to see how accurate the touch is. It is basically an expanded version of the last example, supporting INFINITE touch points. Infinite up to a certain power of 2, anyway.

We’ll start with an array of colors to choose from for our infinite points.
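Something along these lines (the palette itself is arbitrary):

```csharp
private readonly Brush[] colors =
{
    Brushes.Red, Brushes.Orange, Brushes.Yellow,
    Brushes.Green, Brushes.Blue, Brushes.Purple
};
```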

Upon the first touch, we create a new Border and move it to the corresponding location using a TranslateTransform. We also create a child TextBlock in order to display the touch point’s ID.
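A sketch of that handler, assuming the `canvas` element and `colors` array mentioned above (sizes and names are illustrative):

```csharp
private void Window_TouchDown(object sender, TouchEventArgs e)
{
    TouchPoint point = e.GetTouchPoint(canvas);
    int id = e.TouchDevice.Id;

    // One square per touch point, colored from our palette by ID and
    // centered on the touch location via a TranslateTransform.
    Border square = new Border
    {
        Width = 50,
        Height = 50,
        Background = colors[id % colors.Length],
        RenderTransform = new TranslateTransform(point.Position.X - 25, point.Position.Y - 25),
        // Display the touch point's ID so we can tell fingers apart.
        Child = new TextBlock
        {
            Text = id.ToString(),
            Foreground = Brushes.White,
            HorizontalAlignment = HorizontalAlignment.Center,
            VerticalAlignment = VerticalAlignment.Center
        }
    };

    canvas.Children.Add(square);
    e.Handled = true;
}
```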

The ID is very important when doing something more interesting with multitouch, as it signifies a unique finger. If you are coding any gestures, you’ll need to make sure you keep track of your fingers. Actually, that’s probably a pretty sound piece of advice for life in general.

Buzzwords will be a recurring segment where I explain some of the words and phrases I pick up on as I grow in my development knowledge. Some will be simple definitions; others will delve further into the concepts being presented to explore their meaning.

After I started at HP, my vocabulary was challenged every day with new abbreviations and HP jargon. There were also a few technical terms, two of which came up rather frequently: managed and unmanaged code. Using context clues, I quickly figured it out, but it was something I hadn’t been exposed to during school.

Unmanaged code is code that compiles into machine language to be executed directly by the computer’s hardware. That is to say, there is no intermediary between your executable and the instructions given to your computer. Standard usage of C, C++, assembly, etc. can create binaries with these instructions.

Managed code is a term used to describe code that depends on .NET’s Common Language Runtime (CLR). C#, C++/CLI, VB.NET, etc. all build assemblies containing an Intermediate Language (IL). The CLR interprets this language and compiles each part into machine language when it is about to be used (this is called Just-in-Time [JIT] compiling). This methodology provides some help for the programmer, such as garbage collection and security checking (though at a cost to performance, since it happens automatically).

The distinction between the two is important in Microsoft’s world, as managed code can be written in a variety of languages. .NET supports C++ (C++/CLI, above), so assuming that all of a C++ program is being compiled into machine code and executes without .NET might be incorrect.

The term “managed” is usually applied to applications that use .NET, specifically; however, I’ve also heard people use the term when referring to Java. While the term was coined by Microsoft to distinguish .NET code, I don’t see any harm in using it to describe Java, which uses similar concepts in its underbelly.

Post a comment on which you use most frequently. Be sure to list the advantages that made you make this decision.


Feel Free to Comment

If you have any comments regarding my code (different approaches, improvements, missed opportunities...), don't hesitate to post! I'd love to see your point of view. However, try to keep it constructive. "This code sucks" doesn't help anyone.

If it's not about code, just try to keep it on topic.

Disclaimer

The views and opinions expressed herein are strictly those of the author and do not necessarily reflect the views or opinions of his employer or any of its affiliates.