August 14, 2011

If you read this, chances are that you are a developer and that you like Silverlight. And why not? Exciting platform, great features, outstanding tooling. But! If you’re a corporate developer, have you sold it to your management yet? If not, this post is for you.

Silverlight is for RIA, and the domain of RIA applications is largely intranet or closed/controlled extranet user groups. This again is what is usually found in larger enterprise companies. Companies that usually have a vested interest in controlling their environment. And in terms of bringing software into production and of operations and maintenance afterwards, every new platform is one platform too many.

So, the odd developer comes along and talks about this great new technology. Does the management care? Probably not. What does it care about? Simple. Money! Money, as in costs for deployment and user support, hardware and licenses to get the stuff up and running, operations and developer training, maintenance. And money as in savings in the respective areas and – the cornerstone, as the business usually pays the bill – impact on the business. All usually subsumed under the term ROI.

About a year ago, I finished an analysis looking into RIA with Silverlight, conducted for a major customer. Not from the point of view of the developer, but that of business people, operations, and IT management:

So, let’s look briefly at each aspect…

User/Business perspective…

The business doesn’t exactly care for the platform Silverlight itself; it cares for its business benefits. Benefits as in improved user experience, streamlined business workflows, office integration, and so on. And since we had some lighthouse projects with Silverlight we were able to collect some customers’ voices:

“This [streamlining with Silverlight] would reduce a […] business process […] from ~10 min to less than a minute.”

“Advanced user experience of Silverlight UI helps raising acceptance of new CRM system in business units”

“I was very impressed of the prototype implementation […] with Silverlight 3. Having analyzed the benefits of this technology I came to the conclusion that I want the […] development team to start using Silverlight as soon as possible. […]”

This is also confirmed by the typical research companies, like Gartner or Forrester:

“Firms that measure the business impact of their RIAs say that rich applications meet or exceed their goals” (Forrester)

Operations perspective…

In production, the benefit of Silverlight applications (compared with respective conventional web based applications) is reduced server and network utilization.

For example, we had a (small but non-trivial) reference application at our disposal, which was implemented in ASP.NET as well as Silverlight (as part of an analysis to check the feasibility of Silverlight for LOB applications). We measured a particular use case with both implementations – starting the application and going through 10 steps, including navigation, searches, and selections. Both applications were used after a warm-up phase, meaning that the .xap file, as well as images and other static files, had already been cached.

The particular numbers don’t matter, what matters is the difference between the amount of data that has been exchanged for each step (in case of navigations none at all for Silverlight). For the single steps:

And accumulated over time:

A ratio of roughly a tenth of the network utilization is quite some achievement – and considering that the Silverlight application wasn’t even optimized to use local session state and caching, the savings should be even higher.

This should have a direct impact on the number of machines you need in your web farm. Add the fact that session state management on the client drastically reduces the demand for ASP.NET session state – usually realized with a SQL Server (cluster) – and there is yet another entry on the savings list.

On the downside, there is the deployment of the Silverlight plugin. For managed clients – especially if outsourcing the infrastructure comes into play – this may very well become a showstopper.

IT Management perspective…

With respect to development and maintenance, what IT Management should care about includes things like ability to deliver the business demands, development productivity, bug rates in production, costs for developer training, and so on.

Actually all areas in which Silverlight can shine, compared with other RIA technologies, and with the typical mix of web technologies as well:

Powerful abstractions lead to less code (up to 50% in one project), less complexity, fewer errors

Customers’ voices in this area:

“between our desktop app and the website, we estimate 50% re-use of code”

“a .NET developer can pretty much be dropped into a SL project. […] This is a huge deal […]”

“As alternative for Silverlight we considered Flash. […] only Silverlight could provide a consistent development platform (.NET/C#). […]”

Conclusion…

Taking all this together, and considering that enterprise companies usually have the tooling and test environments (well…) readily available, this all adds up to something like the following bill:

Whether the bill looks the same for your company or for one particular project, of course, depends on many things. Especially nowadays with all the hubbub around HTML5 and mobile applications (without any relevant Silverlight support). But if RIA is what you need, then Silverlight will quite often yield far more benefits than any other option.

Still, you need to do your own evaluation. However, I hope to have given you some hints on what you might focus on, if you want to sell technology to the people who make platform decisions in your company.

The actual analysis was fairly detailed and customer specific. But we also prepared a neutralized/anonymized version, which we just made available for download (pdf). (Also directly at SDX.)

March 12, 2011

The last post was about the chances and benefits of HTML5. But there are also some exaggerated expectations that HTML5 cannot fulfill. Mainly this concerns the term “RIA”, and the effect HTML5 will have in this area.

Or as someone wrote:

“Will HTML 5 one day make Flash, Silverlight and other plug-in technologies obsolete?” (link)

Actually this is a question I have to answer quite regularly. (Often enough it doesn’t come as a question, but as a statement. Or an accusation. Sic!)

I already detailed how HTML5’s video and canvas will take over tasks that have formerly been solved by other technologies, namely Flash. And some people seem to infer that if Flash is a RIA technology, and HTML5 obsoletes Flash, then HTML5 must obviously make RIA technologies obsolete in general.

Quite wrong. Actually the fact that Flash is used for video and 2D graphics has nothing to do with Flash being a RIA technology. Flash simply happened to be able to deliver these features, both in terms of technical capability as well as broad availability. That it also happens to be a RIA technology is more or less happenstance.

But before moving on we need to clarify the terms “RIA” and “RIA technology”…

My personal definition of RIA technologies relates to the following attributes: Stateful programming model, with some kind of page model, for applications running in a browser sandbox. This includes Flash, JavaFx, Silverlight as a browser plugin (but not in its WP7-platform variation).

Wikipedia applies the term to Flash, Java (not JavaFx! Sic!), and Silverlight. Still, this is debatable, and a year ago even Wikipedia had a far broader definition, but in my experience this actually covers the common understanding. Curiously, Adobe claims that AIR is their RIA technology, not Flash. But me, Wikipedia, and general consensus agree that Flash indeed is a valid RIA technology.

By the way: Leaving HTML+AJAX out of this picture is by no means meant to be deprecatory, it just reflects common understanding. Wikipedia actually makes the distinction based on the (lack of) necessity to install an additional software framework.

And a final tidbit: Once upon a time Microsoft advertised Silverlight as a Flash replacement, addressing video and graphics, just like in the typical Flash use cases. However, even with growing adoption of the Silverlight plugin, Silverlight never became a serious competitor for Flash in that area. (This may actually have played a role in Microsoft’s commitment to HTML5…) Still, Silverlight has long outgrown this narrow definition, and later versions have put more emphasis on business features.

So, let’s have a look at where HTML5 reaches its limits and where RIA technologies might kick in. I’ll look at regular web applications before moving on to RIA applications.

Web Applications

HTML5 is going to address standard demands of web applications, including those addressed today by Flash. This will have a crowding-out effect on Flash and RIA technologies in general. But once the demands go beyond being “standard”, RIA technologies will find their niche even in web applications.

One example could be premium video delivery: Some vendors will probably be eager to offer unique selling propositions in the emerging markets of WebTV and high-quality HD video content (probably involving DRM).

Since Flash can no longer play the trump card of being the only or the most broadly available platform in this area, this will also change the picture among the RIA technologies. Silverlight especially has been very successful in this area recently. Take the Olympic Games or maxdome.

Other examples include complex user interactions that are not feasible with canvas and script, e.g. Mazda’s car configurator, and similarly dynamic data visualizations.

Finally there is the support of additional features. RIA technology vendors certainly have shorter innovation cycles than standards bodies. This especially includes hardware support (camera, game controller, hardware accelerated animations and 3D).

These scenarios all require the user to accept the plugin – which might become a more severe issue as this necessity becomes less ubiquitous. Thus for the web site provider this always raises the question whether he can compel his users to use his offering despite that nuisance, or whether he may have to provide a (perhaps simplified and less capable) HTML based version.

RIA Applications

HTML5 won’t turn HTML into a RIA technology. It doesn’t come with a new programming model, doesn’t change server side processing, the page model, and postbacks, doesn’t change the fact that the HTML ecosystem really is a conglomerate of diverse technologies.

Many applications – be it multimedia, some kind of hardware dependency, or line of business – simply require the potential of rich client applications. For these, HTML simply cannot deliver what is necessary. Typical demands include:

These are certainly not the demands of typical web applications. But intranet applications and applications addressing some distinct or closed user group on the web are very much within this category. A prominent example is SAP; one can also think of WebTV portals, home banking, and others.

In the past, Java applets were often used to cover these demands. Recently AJAX approaches have spread, and while this worked to some degree, it often falls short of meeting the demands completely. From a technical perspective, RIA technologies are the adequate choice in these scenarios. And (in my opinion), Microsoft Silverlight is currently the best technology available in that area. Adobe AIR lacks availability and adoption, Flash alone is not sufficient, and JavaFx seems to die a slow death.

Conclusion

HTML5 will push RIA technologies out of their makeshift role (video and canvas). However, this doesn’t affect the feasibility of employing RIA technologies on their own turf, i.e. beyond-HTML-capability demands in web applications and fully fledged RIA applications.

However, since this “pushing out of RIA technologies” mainly affects Flash, HTML5 has an interesting effect on the RIA market: Broad availability is no longer a strong USP for Flash, which is to the benefit of Silverlight. Add the hazy prospect of JavaFx and the fact that Silverlight is not only a RIA platform, but also enters devices (WP7, WebTV), and HTML5 may actually further the adoption of Silverlight – not as cross-platform tool, as it was once intended, but in all areas not covered by HTML5.

The one argument in favor of HTML5 – which no RIA technology is likely to ever achieve – is its universal availability across all platforms, even if that comes at a cost.

Where are we?

The conclusion of this little series may be as follows: The conflict, or enmity, between HTML5 and RIA that some people see (or exaggerate) doesn’t really exist. There may be a competition between HTML5 and Flash, but even that may turn out differently from what people expect.

Actually HTML5 and RIA complement each other. There are areas in which one technology certainly makes more sense than the other, other areas in which there is a choice, again other areas in which a combination of both may work best; even areas in which neither is an ideal choice. E.g.…

A web application addressing the broadest available audience? HTML5.

An LOB application with high usability demands? Silverlight.

A mobile application addressing a broad audience? HTML5. As long as no tighter device integration is necessary; otherwise one has to address several mobile OSes…

And between these black-and-white examples there’s a lot of gray area, to be decided on a case-by-case basis. And usually the important thing is not exactly which technology you favor. The important thing is to make an informed decision, aware of the pros and cons, and not solely based on political opinions of certain apologists.

March 7, 2011

META: This blog got a little sleepy recently. But actually I’ve been spending quite some time writing blog posts, albeit for our company blog. And I’m planning to reuse some of that content here (this post is actually the first one). Also I’ve been busy in several new areas, including WP7, and I may have something to say about those, too. So, this blog is still alive and kicking.

There’s been a small hype around HTML5 for some time now. Ironically the reasons were more political than technical, since browsers are only just beginning to support HTML5. Be it organizational hassles, discussions about a video format, or a certain company having issues with Flash on their iPlatform. And since Microsoft has announced its decision to make HTML5 their cross-platform strategy, the last big browser vendor has joined the camp.

And this is a good thing! Not only does HTML5 homogenize the web platform again, it is also the only feasible platform for cross-platform development in the mobile world, which is much more diverse than our desktop ecosystem.

On the other hand, I am regularly irritated about what people think HTML5 will be able to accomplish. Especially in relation to RIA applications, which are all but dead according to various sources:

“HTML5, a groundbreaking upgrade to the prominent Web presentation specification, could become a game-changer in Web application development, one that might even make obsolete such plug-in-based rich Internet application (RIA) technologies as Adobe Flash, Microsoft Silverlight, and Sun JavaFX.” (link)

“Will HTML 5 one day make Flash, Silverlight and other plug-in technologies obsolete? Most likely, but unfortunately that day is still quite a way off.” (link)

“Sure maybe today, we have to rely on these proprietary browser plugins to deliver content to users, but the real innovative developers and companies are going to standard on HTML 5 and in turn revolutionize how users interact with data. We all want faster web applications and the only way to deliver this is to use HTML 5.” (link)

To put it bluntly: HTML is no RIA technology and HTML5 is not going to change that. Thus Silverlight is a valid choice for any RIA application today, and it will be one tomorrow.

On the other hand, HTML5 is certainly going to deliver features that today are the domain of RIA technologies, namely Flash. And this will affect RIA technologies in some way.

Then how will HTML5 affect RIA? Well, I’m afraid it’s not that simple, and there is no short and sufficient answer that I’m aware of. In order to decide – probably on a case by case basis – what HTML5 can do, in which use cases HTML5 is the right answer, and in which cases RIA technologies still are the better choice, we need to take a closer look at some details…

To keep it a little more concise, I’m going to break with my usual habit of very long posts. I’m splitting the remainder into two additional not-quite-so-long posts:

November 7, 2010

PDC happened and Microsoft fouled up the Silverlight message big time. The talk was that Silverlight is dead on the client, based on Steve Ballmer not mentioning Silverlight, and an interview with Bob Muglia published at Mary-Jo’s blog. Actually even we got irritated emails from our own customers, whom we had just convinced that Silverlight is the right choice for RIA applications.

It took some time for Microsoft to realize what fatal message they had sent, but eventually Muglia backpedaled, and from there on it seems that every other Microsoftie and close associate came out to deny the imminent death of Silverlight. So far I’ve stumbled over:

So, just for the record: I sincerely believe that Microsoft is still very much committed to Silverlight as RIA technology for regular (read non-WP7) clients.

Still, as relieved as I am that this mess unfolded that way, it kept me thinking. What if? I mean, what if Microsoft actually had dropped Silverlight on the client…?

What if?

Just imagine… What would happen if Microsoft actually had changed their strategy? What if Silverlight was really dead on the client, and “only” the development platform for WP7?

For the vast majority of regular web applications: Nothing much would have happened. Use cases here include mostly video and advertisement. And due to the availability of the plugin this is the domain of Adobe Flash. With the advent of HTML5 and its coverage of video and graphics (canvas) – and not least the backing it gets from Apple – it should have been clear to everybody with open eyes that HTML5 will be the future in that area. But that will happen at the expense of Flash, not Silverlight!

But from there it would go downhill, and Microsoft would start losing…

Microsoft would lose a platform for non-typical demands on the web. Demands that go far beyond what HTML5 can deliver. Complex UIs such as car configurators (Mazda), HD video streaming (maxdome). And Silverlight is gaining momentum in this market, not Flash or some other technology.

Microsoft would lose their platform for RIA applications. In this area HTML5 is of no further relevance at all; rather Adobe AIR and JavaFx are the competition. And in this area Silverlight is way ahead of the competition, both technically in terms of business features, as well as by adoption (usually intranet applications, but also SAP).

Microsoft would lose the developer base it relies on for Windows Phone 7, and with it WP7 itself. One of the big selling points for WP7 is the fact that it uses the very same platform as is used for RIA; thus every developer using Silverlight instantly becomes a WP7 developer. Ironically, focusing Silverlight on WP7 would take away that advantage. Silverlight would become a platform you have to learn before doing phone development. And since WP7 is just taking off, its future not yet certain, why take the risk? Why not learn Android instead? Who would then build the apps Microsoft needs?

It gets worse: Microsoft would lose credibility, and the trust of the developer community. This is not a technology at the end of its life cycle we’re talking about. It’s a technology just beginning to take off, and a technology that they told us was strategic! A technology more and more developers are just beginning to adopt, to invest in. If Microsoft dropped Silverlight – without any warning, I might add – how would those developers react? How could they trust Microsoft to be true to what they call “strategic” in the future? I know what I would think.

Needless to say that this would affect their partners and customers in the same way. Who would invest in any platform if he cannot be sure the platform is maintained for a reasonable time (rather than being dropped on the spur of the moment)? And if the vendor cannot be trusted? The platform may be as good as it wants; the first thing to care about is protecting my investments.

In the end this would include every API, every platform, every offering they have. This especially includes Azure – the very platform Microsoft is betting the company on. Which developer would work against that API? Which ISV would build his software on Azure? Which partner would counsel his customers to use Azure? Which enterprise would rely on Azure with his applications and his data? It’s a strategic platform for Microsoft, sure. But by dropping Silverlight they would just have taught us what that means.

Ultimately Microsoft could lose… Microsoft. Because at the end of the day, credibility is the most important thing. That’s what made this whole thing a marketing fiasco. Not the fact that a bunch of developers and companies sat on the wrong bandwagon. Lose credibility and you lose the company.

Now, all this is hypothetical – I hope I made that clear with my statement above. And it is also just an opinion and certainly exaggerated in some points. But it may very well be the kind of trash and FUD that Microsoft will be experiencing for some time. Which is why I believe that Microsoft will continue to have to do damage control for quite some time to come.

August 29, 2010

CommunicationException: NotFound – that is the one exception that bugs every Silverlight developer sooner or later. Take the image from an earlier post:

This error essentially tells you that a server call somehow messed up – which is obvious – and nothing beyond, much less anything useful to diagnose the issue.

This is not exactly rocket science, but the question comes up regularly in the Silverlight forums, so I’m trying to convey the complete picture once and for all (and only point to it, once the question comes up again – and it will!).

I’m also violating my rule not to write about something that is readily available somewhere else. But I have the feeling that the available information is either limited to certain aspects, not conveying the complete picture, or hard to come by or to understand. Why else would the question come up that regularly?

The Root Cause

So, why the “NotFound”, anyway? Any HTTP response contains a numeric code. 200 (OK) if everything went well, others for errors, redirections, caching, etc.; a list can be found at wikipedia. Any error whatsoever results in a different result code, say 401 (Unauthorized), 404 (NotFound), or 503 (Service unavailable).

Any plugin using the browser network stack (as Silverlight does by default), however, is also subject to some restrictions the browser imposes in the name of security: The browser passes 200 to the plugin in good cases, and a bare 404 without any further information in any other case. And the plugin can do exactly NIL about it, as it never gets to see the original response.

Note: This is not Silverlight specific, but happens to every plugin that makes use of the browser network stack.

Generally speaking there are two different groups of issues that are reported as errors:

Service errors: The service throws some kind of exception.

Infrastructure issues: The service cannot be reached at all.

Since those two groups of issues have very different root causes, it makes sense to be able to at least tell them apart, if nothing else. This is already half of the diagnosis.

Handling Service Errors

Any exception thrown by a WCF service is by default returned as a service error (i.e. SOAP fault) with HTTP response code 500 (Internal Server Error). And as we have established above, the Silverlight plugin never gets to see that error.

The recommended way to handle this situation is to tweak the HTTP response code to 200 (OK) and expect the Silverlight client code to be able to distinguish errors from valid results. Actually this is already baked into WCF: A generated client proxy will deliver errors via the AsyncCompletedEventArgs.Error property – if we tweak the response code, that is. Fortunately the extensible nature of WCF allows us to do just that using a behavior, which you can find readily available here.
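That behavior essentially boils down to a WCF message inspector that rewrites the status code of fault replies. A condensed sketch (the readily available version is complete and configurable; class names here follow the common sample):

```csharp
// Endpoint behavior that turns HTTP 500 fault replies into 200,
// so the Silverlight plugin gets to see the SOAP fault.
public class SilverlightFaultBehavior : BehaviorExtensionElement, IEndpointBehavior
{
    public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher dispatcher)
    {
        dispatcher.DispatchRuntime.MessageInspectors.Add(new SilverlightFaultMessageInspector());
    }

    private class SilverlightFaultMessageInspector : IDispatchMessageInspector
    {
        public void BeforeSendReply(ref Message reply, object correlationState)
        {
            if (reply.IsFault)
            {
                // Replace the 500 with a 200, leaving the fault body intact.
                var http = new HttpResponseMessageProperty { StatusCode = HttpStatusCode.OK };
                reply.Properties[HttpResponseMessageProperty.Name] = http;
            }
        }

        public object AfterReceiveRequest(ref Message request, IClientChannel channel,
            InstanceContext instanceContext) { return null; }
    }

    public override Type BehaviorType { get { return typeof(SilverlightFaultBehavior); } }
    protected override object CreateBehavior() { return new SilverlightFaultBehavior(); }
    public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection parameters) { }
    public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime) { }
    public void Validate(ServiceEndpoint endpoint) { }
}
```

Being a BehaviorExtensionElement, it can be registered in web.config and applied per endpoint.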

Once we get errors through to Silverlight we can go ahead and make actual use of the error to further distinguish server errors:

Business errors (e.g. server side validations) with additional information (like the property that was invalid).

Generic business errors with no additional information.

Technical errors on the server (database not available, NullReferenceException, …).

It’s the technical errors that will reveal more diagnostic information about the issue at hand, but let’s go through them one by one…

Business errors with additional information are actually part of the service’s contract, more to the point, the additional information constitutes the fault contract:
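In code, such a declared fault might look like this (a sketch; the contract and the simplistic StandardFault class are illustrative):

```csharp
[ServiceContract]
public interface ICustomerService
{
    [OperationContract]
    [FaultContract(typeof(StandardFault))] // declares the fault as part of the contract
    Customer GetCustomer(int id);
}

[DataContract]
public class StandardFault
{
    [DataMember]
    public string Message { get; set; }
}
```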

These faults are also called declared faults for the very reason that they are part of the contract and declared in advance. Declared faults are thrown and handled as FaultException<T> (available as full blown .NET version on the server, and as respective counterpart in Silverlight), with the additional information as generic type parameter:
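Throwing such a declared fault on the server might look like this (a sketch, using the illustrative StandardFault class and a hypothetical repository):

```csharp
public Customer GetCustomer(int id)
{
    Customer customer = Repository.Find(id); // hypothetical lookup
    if (customer == null)
    {
        // Hand over the fault detail directly; no wrapping of another exception needed.
        throw new FaultException<StandardFault>(
            new StandardFault { Message = "No customer with id " + id + "." });
    }
    return customer;
}
```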

Note: There’s no need to construct the FaultException from another exception. And of course this StandardFault class is rather simplistic, not covering more fine-grained information, e.g. invalid properties – which you may need in order to plug into the client side validation infrastructure. But that’s another post.

On the client side this information is available in a similar way, and can be used to give the user feedback:
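A sketch of the client side, assuming a generated proxy (hypothetical name) and the tweaked response code:

```csharp
var client = new CustomerServiceClient();
client.GetCustomerCompleted += (s, e) =>
{
    // The declared fault arrives via the Error property of the event args.
    var fault = e.Error as FaultException<StandardFault>;
    if (fault != null)
    {
        MessageBox.Show(fault.Detail.Message); // user feedback from the fault detail
        return;
    }
    // ... handle success and other errors
};
client.GetCustomerAsync(42);
```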

Generic business errors are not part of the service contract, hence they are called undeclared faults, and they cannot contain additional information beyond what they already got. From a coding perspective they are represented by FaultException (the non-generic version, .NET and Silverlight) and thrown and handled similarly to declared faults:
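For example (a sketch; only the fault reason travels to the client):

```csharp
// Server: an undeclared fault carries just the reason text.
throw new FaultException("The order could not be processed.");

// Client: caught as the non-generic FaultException.
var fault = e.Error as FaultException;
if (fault != null)
{
    MessageBox.Show(fault.Reason.ToString());
}
```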

However, the documentation states…

“In a service, use the FaultException class to create an untyped fault to return to the client for debugging purposes. […]

In general, it is strongly recommended that you use the FaultContractAttribute to design your services to return strongly typed SOAP faults (and not managed exception objects) for all fault cases in which you decide the client requires fault information. […]”

That leaves arbitrary exceptions thrown for whatever reason in your service. WCF also translates them to (undeclared) faults, yet it uses the generic version of FaultException, with the predefined type ExceptionDetail. This way, any exception in the service can (or rather could) be picked up on the client:
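A client-side sketch:

```csharp
var fault = e.Error as FaultException<ExceptionDetail>;
if (fault != null)
{
    // Type, message, and stack trace of the original server exception -
    // only populated if the service is configured to include exception details.
    Debug.WriteLine(fault.Detail.Type + ": " + fault.Detail.Message);
    Debug.WriteLine(fault.Detail.StackTrace);
}
```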

However, while ExceptionDetail contains information about exception type, stack trace, and so on, by default that fault contains only a generic text, stating “The server was unable to process the request due to an internal error.”. This is exactly as it should be in production, where any further information might give the wrong person too much information. During development, however, it may make sense to get more information, to be able to diagnose these issues more quickly. To do that, the configuration has to be changed:
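The relevant switch sits in the service behavior in web.config (development only – never enable this in production):

```xml
<system.serviceModel>
  <behaviors>
    <serviceBehaviors>
      <behavior>
        <!-- Populates ExceptionDetail with type, message, and stack trace -->
        <serviceDebug includeExceptionDetailInFaults="true" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>
```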

And now the returned information contains various details about the original exception:

BTW: To complete the error handling on the client, you need to address the situation where the issue was on the client itself, in which case the exception would not be of some FaultException type:
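Putting the cases together, a completed-handler sketch could look like this:

```csharp
client.GetCustomerCompleted += (s, e) =>
{
    if (e.Error == null)
    {
        // success, use e.Result
    }
    else if (e.Error is FaultException)
    {
        // server fault (declared or undeclared)
    }
    else
    {
        // issue on the client itself, e.g. CommunicationException: NotFound
        MessageBox.Show("Call failed: " + e.Error.Message);
    }
};
```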

This covers any exception thrown from the WCF service, provided it could be reached at all.

Alternative routes…

As I said, tweaking the HTTP response code is the recommended way to handle these errors. This is still a compromise on the protocol level to work around the browser network stack limitation. However, there are other workarounds, which I’ll mention for the sake of completeness:

Compromise on the service’s contract: Rather than using the fault contract, one could include the error information in the regular data contract. This is typical for REST style services; e.g. Amazon works that way. For my application services I am generally reluctant to make that compromise. The downside is that it doesn’t cover technical errors, but that can be remedied with a global try/catch in your service methods.

Avoid the browser network stack: Silverlight offers its own network stack implementation (client HTTP handling), though it defaults to using the browser stack. Using client HTTP handling, one can handle any HTTP response code, and it offers more freedom regarding HTTP headers and methods. The downside, however, is that we lose some of the features the browser adds to its network stack; cookie handling and the browser cache come to mind.
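Opting into client HTTP handling is a one-time registration at application startup (e.g. in Application_Startup):

```csharp
// Route http:// and https:// requests through Silverlight's own stack
// instead of the browser's; cookie handling and browser caching no longer apply.
bool httpRegistered = WebRequest.RegisterPrefix("http://", WebRequestCreator.ClientHttp);
bool httpsRegistered = WebRequest.RegisterPrefix("https://", WebRequestCreator.ClientHttp);
```

The prefix can also be narrowed to a single service URL, leaving all other requests on the browser stack.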

Handling Infrastructure Issues

If some issue prevented the service from being called at all, there is obviously no way for it to tweak the response. And unless we revert to client HTTP handling (which would be a rather drastic measure, given the implications), the Silverlight client gets no chance to look at it either. Hence, we cannot do anything about our CommunicationException: NotFound.

However, by tweaking the response code for service exceptions as proposed above, we at least make it immediately obvious (if only by indirect reasoning) that the remaining CommunicationException: NotFound is indeed an infrastructure issue.

The good news is that infrastructure issues usually carry enough information by themselves. Also they appear rarely, but if they do, they usually are quite obvious (including being obviously linked to some recent change), affect every call (not just some), and are easily reproducible. Hence, using Fiddler one can get information about the issue very easily (even in the localhost scenario).

The fact that the issue is pretty obvious pretty fast, in turn makes it usually quite easy to attribute it to the actual cause – it must have been a change made very recently. Typical candidates are easy to track down:

Changing some application URLs, e.g. giving the web project a root folder for Cassini, without updating the service references.

Generating or updating service proxies, but forgetting to change the generated URLs to relative addresses.

Visual Studio sometimes assigns a new port to Cassini if the project settings say “auto-assign port”, and the last used port is somehow blocked. This may happen if another Cassini instance still lingers around from the last debugging session.

Any change recently made to the protocol or IIS configuration.

This only gets dirty if the change was made by some team member and you have no way of knowing what he actually changed. But since this will likely affect the whole team, you will be in good company ;-)

Wrap up

There are two main issues with CommunicationException: NotFound:

It doesn’t tell you anything and the slew of possible reasons makes it unnecessarily hard to diagnose the root cause.

It prevents legitimate handling of business errors in a WCF/SOAP conformant way.

Both issues are addressed sufficiently by tweaking the HTTP response code of exceptions thrown within the service, which is simple enough. Hence the respective WCF endpoint behavior should be part of every Silverlight web project. And in case this is not possible for some reason, you can revert to client HTTP handling.

Much if not all of this information is available somewhere within the Silverlight documentation. However, each link I found only covered certain aspects or symptoms, and I hope I have provided a more complete picture on how to tackle (for the last time) CommunicationException: NotFound.

August 15, 2010

Just a small issue, but it may drive you crazy if you have to hunt it down… (it did that with me.)

I have my Silverlight application sitting on an .aspx page, occupying all real estate. I work a while. And suddenly there is this scrollbar appearing on the right side:

To hunt it down, I create a new Silverlight project. No scrollbar. I compare the .aspx files. No difference to speak of. I deconstruct the Silverlight part of my application in the hope that the scrollbar will disappear. It won’t.

Finally I stumbled over the reason by accident (which is – in retrospect – quite obvious):

Here’s the .aspx fragment that creates the Silverlight plugin:

The fragment was created by VS; I changed some code above it and reformatted it in VS for better readability. And you know what? It was already broken!

And I broke it by… drum-roll please… reformatting it in VS! The issue is the (irrelevant, according to any standard I’m aware of) whitespace between the tags. Manually changing the code back to the original (closing object tag and iframe element together at the end, without intervening whitespace) made the scrollbar disappear.
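Reconstructed from the standard VS template (the .xap name and runtime version are placeholders), the working fragment looks roughly like this; the crucial detail is that the closing object tag, the iframe, and the closing div sit on a single line with no whitespace between them:

```html
<div id="silverlightControlHost">
    <object data="data:application/x-silverlight-2," type="application/x-silverlight-2"
            width="100%" height="100%">
        <param name="source" value="ClientBin/MyApplication.xap" />
        <param name="minRuntimeVersion" value="4.0.50401.0" />
        <param name="autoUpgrade" value="true" />
    </object><iframe id="_sl_historyFrame" style="visibility:hidden;height:0px;width:0px;border:0px"></iframe></div>
```

Once the formatter inserts line breaks between `</object>`, the iframe, and `</div>`, the browser renders that whitespace, and the scrollbar appears.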

August 8, 2010

I’ve been meaning to write about this for a while, because it’s a reoccurring nuisance: Using integrated authentication with Silverlight. More to the point, the nuisance is the differences between Cassini (Visual Studio Web Development Server) and IIS in combination with some WCF configuration pitfalls for Silverlight-enabled WCF services…

Note: Apart from driving me crazy, I’ve been stumbling over this issue quite a few times in the Silverlight forums. Thus I’m going through this in detail, explaining one or the other seemingly obvious point…

Many ASP.NET LOB applications run on the intranet with Windows integrated authentication (see also here). This way the user is instantly available from HttpContext.User, e.g. for display, and can be subjected to application security via a RoleProvider. Silverlight on the other hand runs on the client. I have written about making the user and his roles available on the client before. However, the more important part is to have this information available in the WCF services serving the data and initiating server side processing. And being WCF, they work a little differently from ASP.NET – or not, or only sometimes…

Starting with Cassini…

Let’s assume we are developing a Silverlight application, using the defaults, i.e. Cassini, and the templates Visual Studio offers for new items. When a “Silverlight-enabled WCF service” is created, it uses the following settings:

I choose compatibility mode, and as the client shows, HttpContext.User is available out of the box:

Great, just what an ASP.NET developer is used to. But compatibility or not, it also shows that the WCF user is not available. But! WCF is configurable, and all we have to do is set the correct configuration. In this case we have to choose Ntlm as the authentication scheme:
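In web.config terms this corresponds roughly to the following fragment, assuming the custom binding generated by the “Silverlight-enabled WCF service” template (the binding name is a placeholder):

```xml
<customBinding>
  <binding name="MyService.customBinding">
    <binaryMessageEncoding />
    <!-- Ntlm works against Cassini; see below for the IIS counterpart -->
    <httpTransport authenticationScheme="Ntlm" />
  </binding>
</customBinding>
```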

And look what we’ve got:

Great, no problem at all. Now we have the baseline and are ready to move on to IIS.

Now for IIS…

Moving to IIS on the developer machine is simple. Just go to the web project settings, tab “Web”, choose “Use Local IIS Web Server”, optionally creating the respective virtual directory from here. In order to work against IIS, Visual Studio needs to run with administrative permissions.

Moving from Cassini to IIS however has a slew of additional pitfalls:

The service URL

The IIS configuration for authentication

WCF service activation issues

The WCF authentication scheme

localhost vs. machine name

Usually they show up as a team (which obviously doesn’t help), but let’s look at them one by one.

The service URL

There’s one difference between how Cassini and IIS are being addressed by the web project: Projects usually run in Cassini in the root (i.e. localhost:12345/default.aspx), while in IIS they run in a virtual directory (e.g. localhost/MyApplication/default.aspx). This may affect you whenever you are dealing with absolute and relative URLs. It will at least cause the generated service URLs to differ more than just by the port information. Of course you can recreate the service references at that point, but you don’t want to do that every time you switch between Cassini and IIS, do you?

BTW: There’s a similar issue if you are running against IIS, using localhost, and you create a service reference: This may write the machine name into the ServiceReferences.ClientConfig (depending on the proxy configuration), e.g. mymachine.networkname.com/application rather than localhost. While these are semantically the same URLs, for Silverlight it qualifies as a cross-domain call. Consequently it will look for a clientaccesspolicy.xml file, which is probably not there, and react with a respective security exception.

The solution with Silverlight 3 is to dynamically adjust the endpoint of the client proxy in your code, to point to the service within the web the Silverlight application was started from:
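A minimal sketch (the proxy type and .svc name are placeholders): derive the service address from the URL the .xap was loaded from, which typically sits in ClientBin below the web root:

```csharp
using System;
using System.ServiceModel;
using System.Windows;

public static class ServiceProxyFactory
{
    // Sketch; MyServiceClient and the .svc name are placeholders.
    // Host.Source is the absolute URL of the .xap (usually .../ClientBin/App.xap),
    // so "../MyService.svc" resolves to the service in the web root -
    // no matter whether we run under Cassini or in an IIS virtual directory.
    public static MyServiceClient CreateProxy()
    {
        var serviceUri = new Uri(Application.Current.Host.Source, "../MyService.svc");
        var client = new MyServiceClient();
        client.Endpoint.Address = new EndpointAddress(serviceUri);
        return client;
    }
}
```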

Coming versions of the tooling will probably generate relative URLs in the first place; until then you’ll have to remember to adjust them every time you add or update a service reference.

The IIS configuration for authentication

This one may be obvious, but in combination with others, it may still bite you. Initially starting the application will result in the notorious NotFound exception:

Note: To be able to handle server exceptions in Silverlight you’ll have to overcome some IE plugin limitations inhibiting access to HTTP response code 500. This can be achieved via a behavior, as described on MSDN. However, this addresses exceptions thrown by the service implementation and won’t help with the infrastructure-related errors I’m talking about here.

The eventlog actually contains the necessary information:

No Windows Authentication? Well, while Cassini automatically runs in the context of the current user, IIS needs to be told explicitly that Windows Authentication is required. This is simple: just enable Windows Authentication and disable Anonymous Authentication in the IIS configuration for the respective virtual directory.

WCF service activation issues

Running again apparently doesn’t change anything at all, displaying the same error – with just a seemingly minor difference in the eventlog entry:

That’s right. The service demands to run anonymous after it just demanded to run authenticated. Call it schizophrenic.

To make a long story short, our service has two endpoints with conflicting demands: The regular service endpoint requires Windows Authentication, while the “mex” endpoint for service meta information requires Anonymous access. OK, we might re-enable anonymous access, but that wasn’t intended; so the way to work around this activation issue is to keep anonymous access disabled and remove the “mex” endpoint from the web.config:
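The service section then looks roughly like this (service and binding names are placeholders), with the mex endpoint taken out:

```xml
<service name="MyApplication.Web.MyService">
  <endpoint address="" binding="customBinding"
            bindingConfiguration="MyService.customBinding"
            contract="MyApplication.Web.MyService" />
  <!-- removed; it demands anonymous access and causes the activation conflict:
  <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
  -->
</service>
```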

Curiously, generating the service reference still works in Visual Studio (perhaps with login dialogs, but still)…

The WCF authentication scheme

We’re still not there. The next issue when running the application might be a login credentials dialog when the service is called. And no matter what you type in, it won’t work, again with the NotFound exception – unfortunately this time without an eventlog entry.

Again, to make it short: IIS doesn’t support Ntlm as the authentication scheme; we need to switch to Negotiate…
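In web.config terms this is a one-attribute change on the transport element (again, the binding name is a placeholder):

```xml
<customBinding>
  <binding name="MyService.customBinding">
    <binaryMessageEncoding />
    <!-- Negotiate works against IIS; Ntlm is the Cassini counterpart -->
    <httpTransport authenticationScheme="Negotiate" />
  </binding>
</customBinding>
```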

And now it works:

Do I have to say that this configuration doesn’t run in Cassini? Right, every time you switch between IIS and Cassini you have to remember to adjust the configuration. There is another enumeration value for the authentication scheme, named IntegratedWindowsAuthentication, which would be nice – if it worked. Unfortunately those two values, Ntlm and Negotiate, are the only ones that work, under Cassini and IIS respectively.

localhost vs. machine name

Now it works, we get the user information as needed. For a complete picture however, we need to look at the difference between addressing the local web server via localhost or via the machine name: Calls against localhost are optimized by the operating system to bypass some of the network protocol stack and work directly against the kernel mode driver (HTTP.SYS). This affects caching as well as http sniffers like Fiddler, which both work only via the machine name.

Note: This may actually be the very reason to switch to IIS early during development, when you need Fiddler as debugger (to check the actually exchanged information). Otherwise it’s later on, when you need it as profiler (to measure network utilization). Of course you’ll want http caching enabled and working by that time.

Of course you can put the machine name in the project settings, yet this would affect all team members. Perhaps a better idea is to have the page redirect dynamically:
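A sketch for the hosting page’s code-behind (class and page names are placeholders): when the page is addressed via localhost, redirect to the machine name, so Fiddler and HTTP caching work for every team member without hard-coding a machine name in the project settings:

```csharp
using System;
using System.Web;
using System.Web.UI;

// Hypothetical code-behind of the page hosting the Silverlight plugin.
public partial class _Default : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        if (Request.Url.Host.Equals("localhost", StringComparison.OrdinalIgnoreCase))
        {
            // same URL, but via the machine name, bypassing the
            // localhost/HTTP.SYS shortcut
            var builder = new UriBuilder(Request.Url) { Host = Environment.MachineName };
            Response.Redirect(builder.Uri.AbsoluteUri);
        }
    }
}
```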

Another word on caching: In IIS6 caching has to be explicitly set for the .xap file, using 1 day as default cache time and allowing 1 minute at the minimum. During development this may be an issue. With IIS7 caching should be automatically set to CacheUntilChange and you may also set the cache time with a resolution in seconds.
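With IIS7 the cache time for static content such as the .xap can also be set in web.config; a sketch with a 30-second max-age (the value is chosen purely for illustration):

```xml
<system.webServer>
  <staticContent>
    <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="0.00:00:30" />
  </staticContent>
</system.webServer>
```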

Where are we?

OK, that was quite a list of pitfalls and differences between Cassini and IIS, even between IIS6 and IIS7. Some of this may go away with the new IIS Express. Some will stay and remain a nuisance. Visual Studio initially guides you towards using Cassini. At some point, however, you’ll have to switch to IIS. And since you cannot have both at the same time, this may be an issue, especially in development teams. My recommendation would be: start with IIS right away, or plan the switch as a concerted action within your team.

July 24, 2010

My last post laid out how the employment of events has changed recently. Most importantly, the broadcasting scenario – which was the major pattern so far – is no longer the only relevant pattern. Rather, the “event-based asynchronous pattern” (see MSDN) has emerged. Reasons include the inherently asynchronous nature of Silverlight as well as parallel patterns.

Now for the practical implications of this new pattern. Let’s look at an example to get the idea, and a better understanding of the consequences in code…

Let’s assume a component that is instantiated, does some work (supposedly asynchronously, notifying about the progress), and provides the result at the end via an event. This is akin to making a server call, showing a confirmation message box, or the way the BackgroundWorker component works.

Example 1: Using events

First, implementing the component the classical way would look somewhat like this:

The result event needs a respective EventArgs class, the declaration and the trigger method:
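A hypothetical reconstruction of what that classical implementation might look like; all names (Worker, WorkProgressEventArgs, and so on) are illustrative, and the “work” is reduced to a stub:

```csharp
using System;

// Illustrative sketch of the classical event-based component.
public class WorkProgressEventArgs : EventArgs
{
    public WorkProgressEventArgs(int percent) { Percent = percent; }
    public int Percent { get; private set; }
    public bool Cancel { get; set; } // a handler may set this to abort the work
}

public class WorkCompletedEventArgs : EventArgs
{
    public WorkCompletedEventArgs(string result) { Result = result; }
    public string Result { get; private set; }
}

public class Worker
{
    public event EventHandler<WorkProgressEventArgs> ProgressChanged;
    public event EventHandler<WorkCompletedEventArgs> WorkCompleted;

    protected virtual bool OnProgressChanged(int percent)
    {
        var handler = ProgressChanged;
        if (handler == null) return true;
        var args = new WorkProgressEventArgs(percent);
        handler(this, args);
        return !args.Cancel;
    }

    protected virtual void OnWorkCompleted(string result)
    {
        var handler = WorkCompleted;
        if (handler != null) handler(this, new WorkCompletedEventArgs(result));
    }

    public void Run()
    {
        // the actual work, notifying about progress and publishing the result
        if (!OnProgressChanged(50)) return;
        OnWorkCompleted("done");
    }
}
```

The client side then creates the component, registers handlers for ProgressChanged and WorkCompleted, calls Run, and throws the component away.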

Creating the component, registering the event handlers, running the task, throwing the component away. The fact that events are multicast-capable is never used at all (and never will be, as the component is rather short-lived).

I guess we can agree that this is all very much boilerplate. And all in all, that’s quite some overhead, from the component perspective as well as from the client code.

Example 2: Using callbacks

Now let’s try the new approach. Rather than defining an event, I pass in two callbacks. The information that was carried in the EventArgs is moved to the parameter lists, thus no need for these classes. The Cancel property is replaced by the return value of the callback. And since the client code always follows the same idiom, I expect the callbacks as constructor parameters, eliminating a source of errors along the way — something that is not possible with event handlers:
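The same component, sketched with callbacks (names remain illustrative; the progress callback’s return value replaces the Cancel property):

```csharp
using System;

// The same component, callbacks instead of events.
public class Worker
{
    private readonly Func<int, bool> onProgress;  // returns false to cancel
    private readonly Action<string> onCompleted;

    // Callbacks are constructor parameters - forgetting one is a compile
    // error, unlike a forgotten event registration.
    public Worker(Func<int, bool> onProgress, Action<string> onCompleted)
    {
        if (onProgress == null) throw new ArgumentNullException("onProgress");
        if (onCompleted == null) throw new ArgumentNullException("onCompleted");
        this.onProgress = onProgress;
        this.onCompleted = onCompleted;
    }

    public void Run()
    {
        // the actual work, unchanged - only the notification mechanism differs
        if (!onProgress(50)) return;
        onCompleted("done");
    }
}
```

The client code can pass its handler methods, or simply lambdas: `new Worker(percent => { /* show progress */ return true; }, result => { /* show result */ }).Run();`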

That’s it. No EventArgs classes, no events, no respective OnEventHappened methods. Granted, the callback declarations are a little more complex, and their parameters also lack IntelliSense information about the semantics of each parameter. But otherwise? Way shorter, way more concise, way less overhead. The actual worker method hasn’t changed at all, but all the event-related overhead is gone – which amounted to roughly 40% of the code.

As you can see, it didn’t change that much. But passing in the lambdas via the constructor fits the use case far better than events, and it is even more robust: I cannot forget to pass in a callback, the way I can forget to register an event handler.

Speaking of lambdas, and since the implementation is that simple, we can even simplify the client code further by omitting those two handler methods:

Alright, this would have been possible with events as well, if you used anonymous methods. But Visual Studio guides you otherwise, and early examples of anonymous methods (before we had lambdas) were rather ugly, so I doubt that can be seen as a valid counterargument. Here, however, lambdas are the typical means of choice.

Verdict

Neat? Net result:

I’m writing less code on the event source side, including no longer declaring EventArgs classes.

I’m writing less code on the event sink side.

The handler methods can use clean parameter lists (rather than EventArgs).

I’m eliminating the risk of forgetting to register event handlers by making the callbacks explicit parameters.

I’m eliminating the danger of leaks due to failing to consistently deregister event handlers.

(That was not addressed in the example, but still.)

When chaining together several of these steps I can make the logic – especially conditional processing – more explicit and concise.

Events would either require setting up beforehand (partly unnecessary overhead), or setup on demand, cluttering the handler with registration and deregistration code.

All in all, this is way more readable, way more robust, and way more efficient than using events.

I for one have begun to adopt this scheme quite liberally. My Silverlight bookshelf application has wrappers for service calls that translate the event to callbacks (several actually, including error handling and other demands). My dialogs always take callbacks for OK and Cancel. I so far have two ICommand implementations, both take callbacks (one with parameters, the other without). I even have a PropertyObserver class that translates a PropertyChanged event into a callback. Actual event handlers? Apart from the wrappers enabling what I just presented, only a few reacting to control events.

In other words: This is not just an interesting technical detail. It really changes the way I’m addressing certain demands.

July 18, 2010

The other day I had a little chat with a colleague. It was about one of his worker components and how it used events to communicate intermediate state and the final result. About event registration and deregistration, and the ungainly code resulting from it. When I suggested using callbacks instead of events, he quickly jumped on the bandwagon; a few days later I got a single note on Messenger: “Callbacks are WAY COOL!”

That got me thinking. Why did I recommend callbacks? When and why did I abandon events? What’s wrong with events anyway?

Well, there’s nothing wrong with events at all. It’s just that Silverlight, asynchronous, and multithreaded processing have changed the picture (and I may have worked too much in those areas lately ;-) ). And this is the deal:

Until recently I used to “see” (read: write code for/against) events mainly from the event consumer side. WinForms, WebForms; register handler, react to something. That kind of stuff.

Since “recently” I have had to do that less often. Why? Silverlight data binding solved many of the demands I previously used to address with event handlers – making a control invisible, for example. (Events still drive databinding, but at the same time databinding shields me from them.)

And the “event-based asynchronous pattern”. Yep. We’ll get to that one.

OK, let’s try to classify these scenarios.

Broadcasts

The first two points are just two sides of the same coin: The radio broadcasting scenario.

Some component wants to advertise interactions or state changes; hence it broadcasts them by way of events.

Some client code needs to get notified about one or the other of these events; hence it subscribes to the event by way of a respective event handler, consuming it from there on.

Same as radio, the broadcaster broadcasts and doesn’t care whether anyone listens. Same as radio, the receiver is turned on and listens as long as something comes in. Well, the analogy stops at the lifetime: event source and consumers tend to have similar lifetimes.

Passing the Baton

The 3rd point is actually a quite different scenario: Start some work and have an event notify me about the result (and sometimes about intermediate state). Once I receive the result I let go of the participant and pass the baton on to the next piece of work.

Same as in a relay run, each participant does one job and once it’s done, he is out of business. Same as in a relay run, participation is obligatory – take someone out (or put something in his way) and the whole chain collapses.

Needless to say that this is nothing like the broadcasting scenario…

Usually the reason for the event approach (rather than simple return values) is asynchronous processing; and in fact this is not a particularly new pattern – BackgroundWorker works accordingly. On the other hand the pattern is still evolving, as the usual pattern for asynchronous work has been no pattern at all (i.e. leave it to the developer, as Thread or ThreadPool do), or the IAsyncResult pattern (relying on a wait handle). Newer developments however employ events more often, and Microsoft has actually dubbed this the “event-based asynchronous pattern” (see MSDN).

One area which relies heavily on this pattern is server requests in Silverlight, via WebClient or generated proxies. But it doesn’t stop there, as Silverlight is asynchronous by nature, rather than by exception: Showing a modal dialog, navigating to another page, (down-)loading an assembly. And quite often these single incidences are chained together to form a bigger logical program flow, for example:

The user clicks the delete button –> the application shows the confirmation dialog –> a call to the delete operation is made –> IF it succeeds (the code navigates to a list page –> …) OTHERWISE (an error message box is shown –> …)

Every arrow represents an event-based “hop” to bridge some “asynchronicity gap” – essentially turning the logically sequential chain into a decoupled, temporary register-and-deregister event nightmare.

Coming back to the beginning of the post: This is the scenario I was discussing with my colleague. And doing this with a whole bunch of events and respective handler methods is simply awkward, especially if you also have to provide the event sources, usually with respective EventArgs classes. And the issue of having to consistently deregister the event handlers in order to avoid memory leaks becomes more prevalent.

Changing the Picture…

Inevitably I got annoyed with the setup/teardown orgies, and eventually I began to abandon events in this case and started passing simple callbacks along. Like this:
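A hedged sketch of what such a callback-chained flow might look like for the delete scenario above; ConfirmDialog, the service wrapper, Navigate, and ShowError are all hypothetical placeholders, not actual framework APIs:

```csharp
// Hypothetical sketch - dialog, service wrapper, and navigation helper
// are placeholders, not actual framework APIs.
ConfirmDialog.Show("Really delete this item?",
    onOk: () => itemService.Delete(itemId,
        onSuccess: () => Navigate("ListPage"),
        onError: ex => ShowError(ex.Message)),
    onCancel: () => { });
```

The logically sequential chain stays readable as one nested expression, instead of being scattered over event registrations and handler methods.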

And actually I’m not the only one following this approach. The Task Parallel Library (TPL), for example, has already started to make heavy use of callbacks. So this is definitely not limited to Silverlight…

Note: This also lays the groundwork for the next evolutionary step: coroutines.

After downloading the solution (VS2010!), you need to open the appSettings.config file in the web project and provide your personal Amazon credentials. Afterwards F5 should be all you need.

Calls via the server use a simple cache implementation that stores the XML returned from Amazon in a local directory. This way one can have a more detailed look at the information available from Amazon. This is intended for debugging purposes and to avoid flooding Amazon during development – it is not suitable for production code!