The short answer to the question of whether the Microsoft .NET Framework (and its related tools and technologies) has a future is, of course: don’t be silly.

The reality is that successful technologies take years, usually decades, perhaps longer, to fade away. Most people would be shocked at how much of the world runs on RPG, COBOL, FORTRAN, C, and C++ – all languages that became obsolete decades ago. Software written in these languages runs on mainframes and minicomputers (also obsolete decades ago) as well as more modern hardware in some cases. Of course in reality mainframes and minicomputers are still manufactured, so perhaps they aren’t technically “obsolete” except in our minds.

It is reasonable to assume that .NET (and Java) along with their primary platforms (Windows and Unix/Linux) will follow those older languages into the misty twilight of time. And that such a thing will take years, most likely decades, perhaps longer, to occur.

I think it is critical to understand that point, because if you’ve built and bet your career on .NET or Java it is good to know that nothing is really forcing you to give them up. Although your chosen technology is already losing (or has lost) its trendiness, and will eventually become extremely obscure, it is a pretty safe bet that you’ll always have work. Even better, odds are good that your skills will become sharply more valuable over time as knowledgeable .NET/Java resources become more rare.

Alternatively, you may choose some trendier option; the only seemingly viable candidate is JavaScript or its spawn, such as CoffeeScript or TypeScript.

How will this fading of .NET/Java technology relevance occur?

To answer I’ll subdivide the software world into two parts: client devices and servers.

Consumer Apps

Consumer apps are driven by a set of economic factors that make it well worth the investment to build native apps for every platform. In this environment Objective C, Java, and .NET (along with C++) all have a bright future.

Perhaps JavaScript will become a contender here, but that presupposes Apple, Google, and Microsoft work to make that possible by undermining their existing proprietary development tooling. There are some strong economic reasons why none of them would want every app on the planet to run equally on every vendor’s device, so this seems unlikely. That said, for reasons I can’t fathom, Microsoft is doing their best to make sure JavaScript really does work well on Windows 8, so perhaps Apple will follow suit and encourage their developers to abandon Objective C in favor of cross-platform JavaScript?

Google already loves the idea of JavaScript and would clearly prefer if we all just wrote every app in JavaScript for Chrome on Android, iOS, and Windows. The only question in my mind is how they will work advertising into all of our Chrome apps in the future?

My interest doesn’t really lie in the consumer app space, as I think relatively few people are going to get rich building casual games, fart apps, metro transit mapping apps, and so forth. From a commercial perspective there is some money to be made building apps for corporations, such as banking apps, brochure-ware apps, travel apps, etc. But even that is a niche market compared to the business app space.

Business Apps

Business apps (apps for use by a business’s employees) are driven by an important economic factor called a natural monopoly. Businesses want software that is built and maintained as cheaply as possible. Rewriting the same app several times to get a “native experience” on numerous operating systems has never been viable, and I can’t see where IT budgets will be expanding to enable such waste in the near future. In other words, businesses are almost certain to continue to build business apps in a single language for a single client platform. For a couple decades this has been Windows, with only a small number of language/tool combinations considered viable (VB, PowerBuilder, .NET).

But today businesses are confronted with pressure to write apps that work on the iPad as well as Windows (and outside the US on Android). The only two options available are to write the app 3+ times or to find some cross-platform technology, such as JavaScript.

The natural monopoly concept creates some tension here.

A business might insist on supporting just one platform, probably Windows. A couple years ago I thought Microsoft’s Windows 8 strategy was to make it realistic for businesses to choose Windows and .NET as this single platform. Sadly they’ve created a side loading cost model that basically blocks WinRT business app deployment, making Windows far less interesting in terms of being the single platform. The only thing Windows has going for it is Microsoft’s legacy monopoly, which will carry them for years, but (barring business-friendly changes to WinRT licensing) is doomed to erode.

You can probably tell I think Microsoft has royally screwed themselves over with their current Windows 8 business app “strategy”. I’ve been one of the loudest and most consistent voices on this issue for the past couple years, but Microsoft appears oblivious to the problem and has shown no signs of even recognizing the problem much less looking at solutions. I’ve come to the conclusion that they expect .NET on the client to fade away, and for Windows to compete as just one of several platforms that can run JavaScript apps. In other words I’ve come to the conclusion that Microsoft is willingly giving up on any sort of technology lock-in or differentiation of the Windows client in terms of business app development. They want us to write cross-platform JavaScript apps, and they simply hope that businesses and end users will choose Windows for other reasons than because the apps only run on Windows.

Perhaps a business would settle on iOS or Android as the “one client platform”, but that poses serious challenges given that virtually all businesses have massive legacies of Windows apps. The only realistic way to switch clients to iOS or Android is to run all those Windows apps on Citrix servers (or equivalent), and to ensure that the client devices have keyboards and mice so users can actually interact with the legacy Windows apps for the next several years/decades. Android probably has a leg up here because most Android devices have USB ports for keyboards/mice, but really neither iOS nor Android have the peripheral or multi-monitor support necessary to truly replace legacy Windows (Win32/.NET).

This leaves us with the idea that businesses won’t choose one platform in the traditional sense, but rather will choose a more abstract runtime: namely JavaScript running in a browser DOM (real or simulated). Today this is pretty hard because of differences between browsers and between browsers on different platforms. JavaScript libraries such as jQuery, Angular, and many others seek to abstract away those differences, but there’s no doubt that building a JavaScript client app costs more today than building the same app in .NET or some other more mature/consistent technology.

At the same time, only JavaScript really offers any hope of building a client app codebase that can run on iOS, Android, and Windows tablets, ultrabooks, laptops, and PCs. So though it may be more expensive than just writing a .NET app for Windows, JavaScript might be cheaper than rewriting the app 3+ times for iOS, Android, and Windows. And there’s always hope that JavaScript (or its offspring like CoffeeScript or TypeScript) will rapidly mature enough to make this “platform” more cost-effective.

I look at JavaScript today much like Visual Basic 3 in the early 1990s (or C in the late 1980s). It is typeless and primitive compared to modern C#/VB or Java. To overcome this it relies on tons of external components (VB had its component model, JavaScript has myriad open source libraries). These third party components change rapidly and with little or no cross-coordination, meaning that you are lucky if you have a stable development target for a few weeks (as opposed to .NET or Java where you could have a stable target for months or years). As a result a lot of the development practices we’ve learned and mastered over the past 20 years are no longer relevant, and new practices must be devised, refined, and taught.
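To make the “typeless and primitive” point concrete, here is a minimal sketch (the function and values are hypothetical) of the kind of bug plain JavaScript silently accepts and a typed offspring like TypeScript catches at compile time:

```typescript
// Plain JavaScript happily mixes types; bugs only surface at runtime:
//   function lineTotal(price, qty) { return price * qty; }
//   lineTotal("ten", 2);  // NaN -- no error until you look at the output
//
// TypeScript adds compile-time types on top of the same language:
function lineTotal(price: number, qty: number): number {
  return price * qty;
}

const t = lineTotal(10, 2); // 20
// lineTotal("ten", 2);     // compile-time error: string is not assignable to number
```

The runtime behavior is unchanged JavaScript; the type annotations simply restore some of the tooling safety net that C#/VB and Java developers take for granted.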

Also we must recognize that JavaScript apps never go into a pure maintenance mode. Browsers and underlying operating systems, along with the numerous open source libraries you must use, are constantly versioning and changing, so you can never stop updating your app codebase to accommodate this changing landscape. If you do stop, you’ll end up where so many businesses are today: trapped on IE6 and Windows XP because nothing they wrote for IE6 can run on any modern browser. We know that is a doomed strategy, so we therefore know that JavaScript apps will require continual rewrites to keep them functional over time.

What I’m getting at here is that businesses have an extremely ugly choice on the client:

1. Rewrite and maintain every app 3+ times to be native on Windows, iOS, and Android

2. Absorb the up-front and ongoing cost of building and maintaining apps in cross-platform JavaScript

3. Select one platform (almost certainly Windows) on which to write all client apps, and require users to use that platform

I think I’ve listed those in order from most to least expensive, though numbers 1 and 2 could be reversed in some cases. I think in all cases it is far cheaper for businesses to do what Delta recently did and just issue Windows devices to their employees, thus allowing them to write, maintain, and support apps on a single, predictable platform.

The thing is that businesses are run by humans, and humans are often highly irrational. People are foolishly enamored of BYOD (bring your own device), which might feel good, but is ultimately expensive and highly problematic. And executives are often the drivers for alternate platforms because they like their cool new gadgets, oblivious to the reality that supporting their latest tech fad (iPad, Android, whatever) might cost the business many thousands (often easily hundreds of thousands) of dollars each year in software development, maintenance, and support costs.

Of course I work for a software development consulting company. Like all such companies we effectively charge by the hour. So from my perspective I’d really prefer if everyone did decide to write all their apps 3+ times, or write them in cross-platform JavaScript. That’s just more work for us, even if objectively it is pretty damn stupid from the perspective of our customers’ software budgets.

Server Software

Servers are a bit simpler than client devices.

The primary technologies used today on servers are .NET and Java. Though as I pointed out at the start of this post, you shouldn’t discount the amount of COBOL, RPG, FORTRAN, and other legacy languages/tools/platforms that make our world function.

Although JavaScript has a nascent presence on the server via tools like node.js, I don’t think any responsible business decision maker is looking at moving away from existing server platform tools in the foreseeable future.

In other words the current 60/40 split (or 50/50, depending on whose numbers you believe) between .NET and Java on the server isn’t likely to change any time soon.

Personally I am loath to give up the idea of a common technology platform between client and server – something provided by VB in the 1990s and .NET over the past 13 years. So if we really do end up writing all our client software in JavaScript I’ll be a strong advocate for things like node.js on the server.

In the mid-1990s it was pretty common to write “middle tier” software in C++ and “client tier” software in PowerBuilder or VB. Having observed such projects and the attendant complexity of having a middle tier dev team who theoretically coordinated with the client dev team, I can say that this isn’t a desirable model. I can’t support the idea of a middle tier in .NET and a client tier in JavaScript, because I can’t see how team dynamics and interpersonal communication capabilities have changed enough (or at all) over the past 15 years such that we should expect any better outcome now than we got back then.

So from a server software perspective I think .NET and Java have a perfectly fine future, because the server-side JavaScript concept is even less mature than client-side JavaScript.

At the same time, I really hope that (if we move to JavaScript on the client) JavaScript matures rapidly on both client and server, eliminating the need for .NET/Java on the server as well as the client.

Conclusion

In the early 1990s I was a VB expert. In fact, I was one of the world’s leading VB champions through the 1990s. So if we are going to select JavaScript as the “one technology to rule them all” I guess I’m OK with going back to something like that world.

I’m not totally OK with it, because I rather enjoy modern C#/VB and .NET. And yes, I could easily ride out the rest of my career on .NET, there’s no doubt in my mind. But I have never in my career been a legacy platform developer, and I can’t imagine working in a stagnant and increasingly irrelevant technology, so I doubt I’ll make that choice – stable though it might be.

Fwiw, I do still think Microsoft has a chance to make Windows 8, WinRT, and .NET a viable business app development target into the future. But their time is running out, and as I said earlier they seem oblivious to the danger (or are perhaps embracing the loss of Windows as the primary app dev target on the client). I would like to see Microsoft wake up and get a clue, resulting in WinRT and .NET being a viable future for business app dev.

Failing that however, we all need to start putting increasing pressure on vendors (commercial and open source) to mature JavaScript, its related libraries and tools, and its offspring such as TypeScript – on both client and server. The status of JavaScript today is too primitive to replace .NET/Java, and if we’re to go down this road a lot of money and effort needs to be expended rapidly to create a stable, predictable, and productive enterprise-level JavaScript app dev platform.

As you can tell from my post volume, I’ve had a few weeks of enforced downtime during which time a lot of thoughts have been percolating in my mind just waiting to escape :)

There’s an ongoing discussion inside Magenic as to whether there is any meaningful difference between consumer apps and business apps.

This is kind of a big deal, because we generally build business apps for enterprises; that’s our bread and butter as a custom app dev consulting company. Many of us (myself included) look at most of the apps on phones and tablets as being “toy apps”, at least compared to the high levels of complexity around data entry, business rules, and data management/manipulation that you find in enterprise business applications.

For example, I have yet to see an order entry screen with a few hundred data entry fields implemented on an iPhone or even iPad. Not to say that such a thing might not exist, but if such a thing does exist it is a rarity. But in the world of business app dev such screens exist in nearly every application, and typically the user has 1-2 other monitors displaying information relevant to the data entry screen.

Don’t get me wrong, I’m not saying mobile devices don’t have a role in enterprise app dev, because I think they do. Their role probably isn’t to replace the multiple 30” monitors with keyboard/mouse being used by the employees doing the work. But they surely can support a lot of peripheral tasks such as manager approvals, executive reviews, business intelligence alerts, etc. In fact they can almost certainly fill those roles better than a bigger computer that exists only in a fixed location.

But still, the technologies and tools used to build a consumer app and a business app for a mobile device are the same. So you can surely imagine (with a little suspension of disbelief) how a complex manufacturing scheduling app could be adapted to run on an iPad. The user might have to go through 20 screens to get to all the fields, but there’s no technical reason this couldn’t be done.

So then is there any meaningful difference between consumer and business apps?

I think yes. And I think the difference is economics, not technology.

(maybe I’ve spent too many years working in IT building business apps and being told I’m a cost center – but bear with me)

If I’m writing a consumer app, that app is directly or indirectly making me money. It generates revenue, and of course creating it has a cost. For every type of device (iOS, Android, Win8, etc.) there’s a cost to build software, and potential revenue based on reaching the users of those devices. There’s also direct incentive to make each device experience “feel native” because you are trying to delight the users of each device, thus increasing your revenue. As a result consumer apps tend to be native (or they suck, like the Delta app), but the creators of the apps accept the cost of development because that’s the means through which they achieve increased revenue.

If I’m writing a business app (like something to manage my inventory or schedule my manufacturing workload) the cost to build software for each type of device continues to exist, but there’s zero increased revenue (well, zero revenue period). There’s no interest in delighting users, we just need them to be productive, and if they can’t be productive that just increases cost. So it is all cost, cost, cost. As a result, if I can figure out a way to use a common codebase, even if the result doesn’t “feel native” on any platform, I still win because my employees can be productive and I’ve radically reduced my costs vs writing and maintaining the app multiple times.

Technically I’ll use the same tools and technologies and skills regardless of consumer or business. But economically there’s a massive difference between delighting end users to increase revenue (direct or indirect), and minimizing software development/maintenance costs as much as possible while ensuring employees are productive.

From a tactical perspective, as a business developer it is virtually impossible to envision writing native apps unless you can mandate the use of only one type of device. Presumably that’s no longer possible in the new world of BYOD, so you’ve got to look at which technologies and tools allow you to build a common code base. The list is fairly short:

Microsoft .NET (with the Xamarin tools for iOS and Android)

JavaScript (and related abstractions such as TypeScript)

(yes, I know C++ might also make the list, but JavaScript sets our industry back at least 10 years, and C++ would set us back more than 20 years, so really????)

I’m assuming we’ll be writing lots of server-side code, and some reasonably interactive client code to support iPad, Android tablets, and Windows 8 tablets/ultrabooks/laptops/desktops. You might also bring in phones for even more narrow user scenarios, so iPhone, Android phones, and Windows Phone too.

Microsoft .NET gets you the entire spectrum of Windows from phone to tablet to ultrabook/laptop/desktop, as well as the server. So that’s pretty nice, but leaves out iPad/iPhone and Android. Except that you can use the Xamarin tools to build iOS and Android apps with .NET as well! So in reality you can build reusable C# code that spans all the relevant platforms and devices.

As an aside, CSLA .NET can help you build reusable code across .NET and Xamarin on Android. Sadly some of Apple’s legal limitations for iOS block some key C# features used by CSLA so it doesn’t work on the iPad or iPhone :(

The other option is JavaScript or related wrapper/abstraction technologies like PhoneGap, TypeScript, etc. In this case you’ll need some host application on your device to run the JavaScript code, probably a browser (though Win8 can host directly). And you’ll want to standardize on a host that is as common as possible across all devices, which probably means Chrome on the clients, and node.js on the servers. Your client-side code still might need some tweaking to deal with runtime variations across device types, but as long as you can stick with a single JavaScript host like Chrome you are probably in reasonably good shape.
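One payoff of standardizing on a single JavaScript host across clients and servers is that business logic can be written once and loaded by both the browser and node.js. A minimal sketch, assuming a hypothetical order-quantity rule:

```typescript
// A business rule written once and shared between the browser client
// and a node.js server -- the same source file can run in both hosts.
function isQuantityValid(qty: number): boolean {
  // Hypothetical rule: orders must be for 1 to 999 whole units.
  return Number.isInteger(qty) && qty >= 1 && qty <= 999;
}

// On the client this might gate the Save button for instant feedback;
// on the server the same function re-checks the inbound request
// before anything touches the database.
```

This is the sort of duplication-free validation that is simply impossible when the client is h5js and the server is .NET/Java, because the two platforms can’t share code.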

Remember, we’re talking business apps here – businesses have standardized on Windows for 20 years, so the idea that a business might mandate the use of Chrome across all devices isn’t remotely far-fetched imo.

Sadly, as much as I truly love .NET and view it as the best software development platform mankind has yet invented, I strongly suspect that we’ll all end up programming in JavaScript – or some decent abstraction of it like TypeScript. As a result, I’m increasingly convinced that platforms like .NET, Java, and Objective C will be relegated to writing “toy” consumer apps, and/or to pure server-side legacy code alongside the old COBOL, RPG, and FORTRAN code that still runs an amazing number of companies in the world.

A lot of people, including myself, felt (feel?) deeply betrayed by Microsoft’s rather abrupt dismissal of what some of us thought was the best client-side dev platform they’ve ever come up with: Silverlight.

Perhaps even more people are worried about the future of WPF in the face of Microsoft’s obvious focus on the new Windows Runtime (WinRT) at the expense of the Desktop (Win32) technologies such as Windows Forms and WPF.

I’m a little more sanguine about this than many people.

I never really bought into the idea of Silverlight as a cross-platform technology. I know, I know, Microsoft made it work on some flavors of OS X. But they didn’t take it to Linux or Android, and Apple blocked them from ever going to the iPad or iPhone. And honestly, you have to follow the money. Companies don’t exist to do good, they exist to make money, and Microsoft didn’t charge for Silverlight and so only stood to lose money by enabling us to build apps that ran on non-Windows devices just as well as Windows devices.

(as an aside, this is why I never get too upset when Google drops yet another free service – the way I look at it is that I’m exploiting the hell out of Google’s free stuff as long as they have it, and when they decide to drop a free service I just have to start paying for something I should have been paying for the entire time (but didn’t have to thanks to Google’s amazing “business” model)).

I did buy into the idea of Silverlight as a much safer and easier-to-deploy way of building Windows smart client apps. So to me the truly sad part about Silverlight going away is that it pushed us back toward creating apps that aren’t as safe (out of the sandbox), and that are slightly harder to deploy (ClickOnce).

Perhaps I’m unusual, but I really do buy into the idea that smart client apps don’t need the ability to reformat people’s hard drives, or alter system files, or snoop through my personal documents without my knowledge. In other words, the full client-side .NET/WPF/Windows Forms/Win32 technology stack just isn’t necessary for 99% of the apps I want to build and/or run, and after a few decades of dealing with viruses and malware and other bad stuff, I’m about ready to be done with it!

So here we sit, with Silverlight in maintenance mode so Microsoft will keep it running on their platforms for another decade, but without any real assurance that it will continue to work on the Mac. And frankly I don’t really care, because I always thought the Mac was a lark.

To me where we are is simple:

Microsoft is treating all of Win32/.NET on the client as legacy, so Windows Forms, WPF, and Silverlight are in the exact same boat

They are all stable (essentially unchanging) into the foreseeable future

They are all good/viable Win32/Desktop client technologies

They will ultimately fade away

Microsoft is putting all their energy/money into rapidly bringing WinRT up to speed

Being a fan of “follow the money”, I expect that we’ll all eventually move to WinRT

WinRT 8.1 shows some good XAML/C# improvements over 8.0, demonstrating Microsoft’s commitment to making this a viable platform

WinRT still has a fundamentally flawed deployment/licensing model for business apps, and until they fix this WinRT is pretty much useless for business

WinRT still lags in XAML features behind Silverlight 5, but it is catching up

WinRT (like Silverlight) will hopefully never do everything WPF does, because then we’d be back to the same malware hell-hole we’re in with Win32

In short, for everyone wishing and hoping for Microsoft to put more energy/money into WPF (or even more far-fetched into Silverlight) I think the answer is that THEY ARE – but they are doing it via WinRT, by eventually providing a viable XAML/C# platform for business development on Windows that escapes the baggage of legacy Win32/.NET/Desktop.

We just need to do two things:

Be patient, because WinRT is a v1 technology and will take a little time to mature

Something I’m not worried about, because most businesses are just now getting to Win7 and won’t go to Win8 for a couple more years, so there’s some time for Microsoft to get their act together

Keep the pressure on Microsoft to bring WinRT to the level we need

In terms of licensing/deployment models

In terms of technology capabilities

Let’s face it. Either Microsoft (with us pushing/prodding/helping) provides a viable WinRT platform for us in the future, or we’d better all start learning JavaScript and/or Objective C…

After having a couple days to collect my thoughts regarding last week’s Build 2013 conference I want to share some of my observations.

First, I left Build happier with Microsoft than I’ve been for a couple years. Not necessarily due to any single thing or announcement, but rather because of the broader thematic reality that Microsoft really is listening (if perhaps grudgingly in some cases) to their customers. And the display of truly amazing, cool, and sexy laptops and tablets running Windows 8 was really something! I was almost literally drooling over some of the machines on display!

Now to summarize some of my thoughts.

The bad:

They didn’t add support for Silverlight in the WinRT browser (not that anyone really thought they would).

The good:

The changes in Windows 8.1 to provide some accommodations for people who are attached to the Start button are quite nice. To be honest, I was pretty skeptical, thinking these changes were just silliness, but having used the 8.1 Preview for a few days now I’m sold on my own positive emotional reaction to having the wallpaper the same on the desktop and start screen (though I’m still not booting to desktop, nor do I plan to do so).

The Windows 8.1 changes that bring the start screen experience more in line with Windows Phone are even nicer. The new item selection gesture (tap and hold) and the fact that new apps don’t automatically appear on the start screen (only on the “all apps” screen) are just like the phone, and make the system easier to deal with overall.

The updates to WinRT XAML are extremely welcome – especially around data binding – these are changes I’ll use in CSLA .NET right away.

The added WinRT API capabilities demonstrate Microsoft’s commitment to maturing what amounts to a Version 1 technology as rapidly as possible.

The fact that Azure had no big announcements is actually wonderful, because they’ve been continually releasing their new stuff as it becomes available! In fact, this whole “faster release cadence” concept from Windows, Azure, and Visual Studio is (imo) a welcome change, because it means that the overall .NET and Microsoft platform will be far more competitive by being more agile.

There was a serious emphasis on XAML, and most of the JavaScript content was web-focused, not WinRT-focused – and I think this is good because it reflects the reality of the Microsoft developer community. Most of us are .NET/XAML developers and if we’re going to shift to WinRT someday in the future it’ll be via .NET/XAML. For my part, if I’m forced to abandon .NET for JavaScript I’ll learn general JavaScript, not some Microsoft-specific variation or library – but if I see a viable future for .NET in the WinRT world, then I’ll continue to invest in .NET – and this conference was a start on Microsoft’s part toward rebuilding a little trust in the future of .NET.

The new 8” tablet form factor is way nicer than I’d expected. I had a Kindle Fire and ultimately gave it to my son because I already have an eInk Kindle and couldn’t see a good use for the Fire. But an 8” Win8 tablet is a whole different matter, because it runs the Kindle app and it runs Office and WinRT apps, so it is immediately useful. The small screen means amazing battery life and light weight, and the Atom processor means it runs Win32 and WinRT apps – I’m really enjoying this new Acer device!

The neutral:

As I tweeted last week the one recurring bit of feedback I heard from people was disappointment in the lack of WPF announcements or content. I’m not overly concerned about that, because I view Windows Forms, Silverlight, and WPF as all being the same – they are all in maintenance mode and Microsoft is just keeping them running. The same unprecedented stability enjoyed by Windows Forms developers for the past 8 years is now the reality for WPF too. Sure, this might be a little boring to be on an unchanging platform, but the productivity is hard to beat!!

Related to the lack of WPF content I want to suggest a different interpretation. WinRT with .NET/XAML is (imo) the “future of WPF”. What we really need to see is WinRT XAML continuing to rapidly evolve such that it becomes a natural progression to move from WPF/Silverlight to WinRT at some point in the future. I am encouraged by what was presented at Build in terms of the evolution of WinRT XAML, and if that continues I think we’ll find that moving to WinRT will become pretty attractive at some future time.

There was some content on the use of WinRT to create business apps, and that content was welcome. If-and-when Microsoft does fix the side-loading licensing issues so WinRT becomes viable for business use it is nice to know that some serious thought has gone into design and development of business apps on the new platform.

In conclusion, the overall vibe at the conference was positive. Attendees were, from what I could see, enjoying the conference, the content, and the technology. Moreover, I think Microsoft has taken a first small step toward rebuilding their relationship with (what was once) the Microsoft developer community (not that Azure ever lost this rapport, but the Windows client sure did). If they continue to build and foster this rapport I think they can win back some confidence that there’s a future for .NET and/or Windows on the client.

In a recent email thread I ended up writing a lengthy bit of content summarizing some of my thoughts around the idea of automatically projecting js code into an HTML 5 (h5js) browser app.

Another participant in the thread mentioned that he’s a strong proponent of separation of concerns, and in particular keeping the “model” separate from data access. In his context the “model” is basically a set of data container or DTO objects. My response:

-----------------------------

I agree about separation of concerns at the lower levels.

I am a firm believer in domain-focused business objects though, and in the use of “real” OOD, which largely eliminates the need for add-on hacks like a viewmodel.

In other words, apps should have clearly defined logical layers. I use this model:

Interface

Interface control

Business

Data access

Data storage

The key is that the business layer consists of honest-to-god real life business domain objects. These are designed using OOD so they reflect the requirements of the user scenario, not the database design.

If you have data-centric objects, they’ll live in the Data access layer. And that’s pretty common when using any ORM or something like EF, where the tools help you create data-centric types. That’s very useful – then all you need to do is use object:object mapping (OOM) to get the data from the data-centric objects into the more meaningful business domain objects.
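As a sketch of that object:object mapping idea (shown in TypeScript for consistency with the rest of this discussion; every type and property name here is invented for illustration), a data-centric DTO shaped like a database table gets mapped into a behavior-rich domain object shaped by the use case:

```typescript
// A data-centric DTO, shaped like the database table (what an ORM produces).
interface CustomerDto {
  cust_id: number;
  cust_nm: string;
  credit_lim: number;
}

// A domain object, shaped by the user scenario, with real behavior.
class Customer {
  constructor(
    public readonly id: number,
    public name: string,
    private creditLimit: number
  ) {}

  // Business behavior lives here, not in the DTO.
  canPlaceOrder(amount: number): boolean {
    return amount <= this.creditLimit;
  }

  // The object:object mapping (OOM) step from DAL shape to domain shape.
  static fromDto(dto: CustomerDto): Customer {
    return new Customer(dto.cust_id, dto.cust_nm, dto.credit_lim);
  }
}
```

The point of the mapping is that the database schema can change (or the ORM can regenerate its types) without rippling into the business layer’s design.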

At no point should any layer talk to the database other than the Data access layer. And at no point should the Interface/Interface control layers interact with anything except the Business layer.

Given all that, the question with smart client web apps (as I’ve taken to calling these weird h5js/.NET hybrids) is whether you are using a service-oriented architecture or an n-tier architecture. This choice must be made _first_ because it impacts every other decision.

The service-oriented approach says you are creating a system composed of multiple apps. In our discussion this would be the smart client h5js app and the server-side service app. SOA mandates that these apps don’t trust each other, and that they communicate through loosely coupled and clearly defined interface contracts. That allows the apps to version independently. And the lack of trust means that data flowing from the consuming app (h5js) to the service app isn’t trusted – which makes sense given how easy it is to hack anything running in the browser. In this world each app should (imo) consist of a series of layers such as those I mentioned earlier.

The n-tier approach says you are creating one app with multiple layers, and those layers might be deployed on different physical tiers. Because this is one app, the layers can and should have reasonable levels of trust between them. As a result you shouldn’t feel the need to re-run business logic just because the data flowed from one layer/tier to another (completely different from SOA).

N-tier can be challenging because you typically have to decide where to physically put the business layer: on the client to give the user a rich and interactive experience, or on the server for more control and easier maintenance. In the case of my CSLA .NET framework I embraced the concept of _mobile objects_ where the business layer literally runs on the client AND on the server, allowing you to easily run business logic where most appropriate. Sadly this requires that the same code can actually run on the client and server, which isn’t the case when the client and server are disparate platforms (e.g. h5js and .NET).

This idea of projecting server-side business domain objects into the client fits naturally into the n-tier world. This has been an area of deep discussion for months within the CSLA dev team – how to make it practical to translate the rich domain business behaviors into js without imposing a major burden of writing js alongside C#.

CSLA objects have a very rich set of rules and behaviors that ideally would be automatically projected into a js business layer for use by the smart client h5js Interface and Interface control layers. I love this idea – but the trick is to make it possible such that there’s not a major new burden for developers.

This idea of projecting server-side business domain objects into the client is a less natural fit for a service-oriented system, because there’s a clear and obvious level of coupling between the service app and the h5js app (given that parts of the h5js app literally generate based on the service app). I’m not sure this is a total roadblock, but you have to go into this recognizing that such an approach compromises the primary purpose of SOA, which is loose coupling between the apps in the system…

Windows RT – Windows 8 on ARM devices (note: Windows RT and WinRT are not the same thing)

Windows 8 UI style – a user experience design language often used when building WinRT applications

Windows 8 basically includes two different operating systems.

One is the “old” Win32 OS we think of today as Windows 7. This is now called Windows 8 Desktop, and is available on Windows 8 Intel tablets, laptops, and desktops. This is only partially available on ARM devices, and you should not expect to build or deploy Win32 Desktop apps to ARM devices.

The other is the new Windows Runtime (WinRT) “operating system”. This is a whole new platform for apps, and is available on all Windows 8 machines (ARM, Intel, tablet, laptop, desktop). If you want the widest reach for your apps going forward, you should be building your apps for WinRT.

Confusingly enough, “Windows 8” runs on Intel devices/computers. “Windows RT” is Windows 8 for ARM devices. The only real difference is that Windows RT won’t allow you to deploy Win32 Desktop apps. Windows RT does have a Desktop mode, but only Microsoft apps can run there. Again, if you want to build a Windows 8 app that works on all devices/computers, build the app for WinRT, because it is consistently available.

Windows 8 UI style describes a user experience design language for the look and feel of WinRT apps. This isn’t a technology, it is a set of design principles, concepts, and guidelines.

Another source of confusion is that to build a WinRT app in Visual Studio you need to create a “Windows 8 UI style” app. What makes this odd is that this type of app targets WinRT, and it is entirely up to you to conform to the Windows 8 UI style guidelines as you build the app.

“Windows 8 UI style” was previously called “Metro style”, but Microsoft has dropped the use of the term “Metro”. I am skeptical that this new “Windows 8 UI style” term will last long-term, because it obviously makes little sense for Windows Phone 8, Xbox, Windows 9, and other future platforms that may use the same UI style. But for now, this appears to be the term Microsoft is using.

Thinking about app development now, there are several options on the Microsoft platforms.

I want to summarize some of the more major changes coming to the data portal in CSLA 4 version 4.5. Some of these are breaking changes.

I’ve done four big things with the data portal:

1. Added support for the new async/await keywords on the client and server.

2. Merged the .NET and Silverlight data portal implementations into a single code base that is now common across WinRT, .NET, and Silverlight.

3. Removed the public virtual DataPortal_XYZ method definitions from Silverlight, because the data portal can now invoke non-public methods just like in .NET. Also, all local Silverlight data portal methods no longer accept the async callback handler, because they now support the async/await pattern.

4. Removed the ProxyMode concept from the Silverlight data portal, because the RunLocal attribute is now available on all platforms.

All four have some level of breaking change.

Adding comprehensive support for async/await changes the way .NET handles exceptions. Although I’ve worked to keep the top level exceptions consistent, the actual exception object graph (nested InnerExceptions) will almost certainly be different now.

Merging the .NET and Silverlight data portal implementations introduces a number of relatively minor breaking changes for Silverlight users, though if you’ve created custom proxy/host pairs or used other more advanced scenarios you may be more affected than most. There may also be unintended side effects for .NET users. Some might be bugs, others might be necessary to achieve platform unification.

Removing the public virtual DataPortal_XYZ methods from BusinessBase and BusinessListBase will break anyone using the local Silverlight data portal. The fix is minor – just change the public scopes to protected. This change shouldn’t affect anyone using .NET, or using a remote data portal from Silverlight.

Removing the async callback parameter from all Silverlight client-side DataPortal_XYZ methods will break anyone using the local Silverlight data portal. The fix is to switch to the new async/await pattern. The code changes are relatively minor, and generally simplify your code, but if you’ve made extensive use of the client-side data portal in Silverlight this will be a pretty big change I’m afraid.

Similarly, removing the ProxyMode concept from the Silverlight data portal is a breaking change for people using the local Silverlight data portal. Again, the fix is pretty simple – just add the RunLocal attribute to the DataPortal_XYZ (or object factory) methods as you have always done in .NET.

On the upside, the coding patterns for writing code in .NET, WinRT, and Silverlight are now the same.

For example, a DataPortal_Fetch method on any platform looks like this:

private void DataPortal_Fetch(int id)

or like this

private async Task DataPortal_Fetch(int id)

The data portal will automatically detect if your method returns a Task and it will await the method, allowing you to use the await keyword inside your DataPortal_XYZ methods.

This is one of the few platform-specific concepts left in the data portal.
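The detection technique can be sketched like this (illustrative names, not the actual CSLA internals): invoke the target method by name via reflection, and if it returns a Task, await that Task; otherwise the call has already completed synchronously.

```csharp
using System;
using System.Reflection;
using System.Threading.Tasks;

// Illustrative sketch of detecting and awaiting a Task-returning method.
public static class MethodInvoker
{
    public static async Task CallAsync(object target, string methodName, params object[] args)
    {
        // Find the method, whether public or non-public (like a
        // private DataPortal_XYZ method).
        MethodInfo method = target.GetType().GetMethod(
            methodName,
            BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic);
        object result = method.Invoke(target, args);
        if (result is Task task)
            await task; // async method: wait for the Task to complete
        // a void method has already completed synchronously by this point
    }
}
```

Because the caller awaits the returned Task, the invoked method is free to use the await keyword in its own body.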

What is really cool is that the client and server sync/async concepts can be mixed (as long as you know what to expect).

| Client method | Client platform | Server method | Server platform | Remarks |
| --- | --- | --- | --- | --- |
| Fetch | .NET only | void DataPortal_Fetch | any | Client call is synchronous; server call is synchronous |
| Fetch | .NET only | Task DataPortal_Fetch | any | Client call is synchronous; server call is asynchronous; note that the client will block until the server’s work is complete |
| BeginFetch | any | void DataPortal_Fetch | any | Client call is asynchronous (event-based); server call is synchronous; client will not block, and must handle the callback event to be notified when the server call is complete |
| BeginFetch | any | Task DataPortal_Fetch | any | Client call is asynchronous (event-based); server call is asynchronous; client will not block, and must handle the callback event to be notified when the server call is complete |
| FetchAsync | any | void DataPortal_Fetch | any | Client call is asynchronous; server call is synchronous; client will block or not, depending on how you invoke the client-side Task (using await or other techniques); the client-side Task will complete when the server call is complete |
| FetchAsync | any | Task DataPortal_Fetch | any | Client call is asynchronous; server call is asynchronous; client will block or not, depending on how you invoke the client-side Task (using await or other techniques); the client-side Task will complete when the server call is complete |

I expect all client-side data portal code to switch to the async/await versions of the methods, and so I’ve made them the mainline path through the data portal. The synchronous and event-based async methods use async/await techniques behind the scenes to implement the desired behaviors.

There is a lot of variety in how you can invoke an awaitable method like FetchAsync, and the specific async behaviors you should expect depend on how you invoke it. For example, there’s a big difference between using the await keyword and using the Result or RunSynchronously methods:

var obj = await CustomerEdit.GetCustomerAsync(123);

var obj = CustomerEdit.GetCustomerAsync(123).Result;

The former is async, the latter is sync. The former will return a simple exception (if one occurs), while the latter will return an AggregateException containing the simple exception. This has little to do with CSLA, and nearly everything to do with the way async/await and the task parallel library (TPL) are implemented by .NET.
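A small illustrative sketch of that difference (hypothetical types, not CSLA code): the same faulted Task surfaces its exception differently depending on whether you await it or block on it.

```csharp
using System;
using System.Threading.Tasks;

public static class ExceptionDemo
{
    // A task that always faults with a "simple" exception.
    static async Task FailAsync()
    {
        await Task.Yield();
        throw new InvalidOperationException("boom");
    }

    // await surfaces the original exception directly.
    public static async Task<string> ViaAwait()
    {
        try { await FailAsync(); return "no error"; }
        catch (InvalidOperationException) { return "simple exception"; }
    }

    // Blocking with Wait() (or reading Result) wraps the original
    // exception in an AggregateException.
    public static string ViaWait()
    {
        try { FailAsync().Wait(); return "no error"; }
        catch (AggregateException ex) when (ex.InnerException is InvalidOperationException)
        { return "aggregate exception"; }
    }
}
```

So any exception-handling code written around blocking calls must unwrap the AggregateException, while await-based code catches the original exception type directly.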

Finally, I do need to state that the actual network transports (typically WCF) used by .NET, Silverlight, and WinRT aren’t the same. This is because WCF in .NET is far more flexible than in Silverlight or WinRT, and because the WCF client-side proxies generated for Silverlight use event-driven async methods, while in WinRT the proxy is task-based.

The data portal hides these differences pretty effectively, but you should understand that they exist, and as a result there may be subtle behavioral differences between platforms, especially when it comes to exceptions and exception details. The success paths for creating, fetching, updating, and deleting objects are identical, but there may be edge cases where differences exist.

All in all I am quite pleased with how this is turning out. I’ve put a massive amount of work into the data portal for 4.5, especially around unifying the implementations across platforms. I suspect there’ll be some issues to work through during the beta testing phase, but the end result is a far more consistent, maintainable, and streamlined codebase for all platforms. That will benefit all of us over time.

There are three fairly popular presentation layer design patterns that I collectively call the “M” patterns: MVC, MVP, and MVVM. This is because they all have an “M” standing for “Model”, plus some other constructs.

The thing with all of these “M” patterns is that for typical developers the patterns are useless without a framework. Using the patterns without a framework almost always leads to confusion, complication, high costs, frustration, and ultimately despair.

These are just patterns after all, not implementations. And they are big, complex patterns that include quite a few concepts that must work together correctly to enable success.

You can’t sew a fancy dress just because you have a pattern. You need appropriate tools, knowledge, and experience. The same is true with these complex “M” patterns.

And if you want to repeat the process of sewing a fancy dress over and over again (efficiently), you need specialized tooling for this purpose. In software terms this is a framework.

Trying to do something like MVVM without a framework is a huge amount of work. Tons of duplicate code, reinventing the wheel, and retraining people to think differently.

At least with a framework you avoid the duplicate code and hopefully don’t have to reinvent the wheel – allowing you to focus on retraining people. The retraining part is generally unavoidable, but a framework provides plumbing code and structure, making the process easier.
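To give a taste of that plumbing: even before a framework adds event routing or navigation, every viewmodel needs change notification just so the view can data bind to it. A minimal hand-rolled version (hypothetical names) looks like this, repeated for every property of every viewmodel:

```csharp
using System.ComponentModel;

// The bare minimum MVVM plumbing: a viewmodel that raises
// PropertyChanged so XAML bindings update when a value changes.
public class CustomerViewModel : INotifyPropertyChanged
{
    private string _name;
    public string Name
    {
        get { return _name; }
        set
        {
            if (_name != value)
            {
                _name = value;
                OnPropertyChanged("Name");
            }
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;
    protected void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}
```

Multiply this boilerplate across dozens of properties and screens and the value of a framework that centralizes it becomes obvious.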

You might ask yourself why the MVC pattern only became popular in ASP.NET a few short years ago. The pattern has existed since (at least) the mid-1990s, and yet few people used it, and even fewer used it successfully. This includes people on other platforms too, at least up to the point that those platforms included well-implemented MVC frameworks.

Strangely, MVC only started to become mainstream in the Microsoft world when ASP.NET MVC showed up. This is a comprehensive framework with tooling integrated into Visual Studio. As a result, typical developers can just build models, views, and controllers. Prior to that point they also had to build everything the MVC framework does – which is a lot of code. And not just a lot of code, but code that has absolutely nothing to do with business value, and only relates to implementation of the pattern itself.

We’re in the same situation today with MVVM in WPF, Silverlight, Windows Phone, and Windows Runtime (WinRT in Windows 8). If you want to do MVVM without a framework, you will have to build everything a framework would do – which is a lot of code that provides absolutely no direct business value.

Typical developers really do want to focus on building models, views, and viewmodels. They don’t want to have to build weak reference based event routers, navigation models, view abstractions, and all the other things a framework must do. In fact, most developers probably can’t build those things, because they aren’t platform/framework wonks. It takes a special kind of passion (or craziness) to learn the deep, highly specialized techniques and tricks necessary to build a framework like this.

What I really wish would happen, is for Microsoft to build an MVVM framework comparable to ASP.NET MVC. Embed it into the .NET/XAML support for WinRT/Metro, and include tooling in VS so we can right-click and add views and viewmodels. Ideally this would be an open, iterative process like ASP.NET MVC has been – so after a few years the framework reflects the smartest thoughts from Microsoft and from the community at large.

In the meantime, Caliburn Micro appears to be the best MVVM framework out there – certainly the most widely used. Probably followed by various implementations using PRISM, and then MVVM Light, and some others.

It seems like every time I install Visual Studio 2010, SQL Express doesn’t work.

I just repaved my laptop – new Win7 install, the whole works.

My previous install didn’t have working SQL Express – as in Visual Studio couldn’t create or open SQL Express files as part of a project. I’d spent a few hours trying to get it working – installing and uninstalling VS/SQL in various combinations to no avail.

The OS reinstall was, in part, because I figured I’d screwed something up so bad it just needed a total restart.

Sadly, after installing Win7, Office, VS10, and then VS10 SP1 I still don’t have a working SQL Express – basically out of the box.

My conclusion? The VS10 installer is broken. What else could be wrong here?

At no point, on this new OS install, have I installed SQL Server by hand. The SQL Server install on the machine is directly from the VS10 install – and it doesn’t work.

The SQLEXPRESS service is running, but VS10 can’t talk to it.

I’m surely not looking forward to spending another ton of hours troubleshooting this problem – again. And presumably without success – again.

Disclaimer: I know nothing. The following is (hopefully) well educated speculation on my part. Time will tell whether I’m right.

I really like Silverlight. I’ve been a strong proponent of Silverlight since 2007 when I rushed to port CSLA .NET to the new platform.

In fact, Magenic provided me with a dev and test team to make that transition happen, because we all saw the amazing potential of Silverlight.

And it has been a good few years.

But let’s face reality. Microsoft has invested who-knows-how-much money to build WinRT, and no matter how you look at it, WinRT is the replacement for Win32. That means all the stuff that runs on Win32 is “dead”. This includes Silverlight, Windows Forms, WPF, console apps – everything.

I wouldn’t be surprised if Silverlight 5 was the last version. I also wouldn’t be surprised if .NET 4.5 was the last version for the Win32 client, and that future versions of .NET were released for servers and Azure only.

Before you panic though, remember that VB6 has been “dead” for well over a decade. It died at the PDC in 1999, along with COM. But you still use VB6 and/or COM, or at least you know organizations that do. How can that be when they are dead?

That’s my point. “dead” isn’t really dead.

Just how long do you think people (like me and you) will continue to run Win32-based operating systems and applications? At least 10 years, and many will probably run 15-20 years into the future. This is the rate of change that exists in the corporate world. At least that’s been my observation for the past couple decades.

Microsoft supports their technologies for 10 years after a final release. So even if SL5 is the end (and they haven’t said it is), that gives us 10 years of supported Silverlight usage. The same for the other various .NET and Win32 technologies.

That’s plenty of time for Microsoft to get WinRT mature, and to allow us to migrate to that platform over a period of years.

I don’t expect WinRT 1.0 (the Windows 8 version) to be capable of replacing Win32 or .NET. I rather expect it to be pretty crippled in many respects. Much like VB 1.0 (and 2.0), .NET 1.0 and 1.1, Silverlight 1 and 2, etc.

But Windows 9 or Windows 10 (WinRT 2.0 or 3.0) should be quite capable of replacing Win32 and .NET and Silverlight.

If we assume Win8 comes out in 2012, and that Microsoft does a forced march release of 9 and 10 every two years, that means 2016 will give us WinRT 3.0. And if we hold to the basic truism that Microsoft always gets it right on their third release, that’ll be the one to target.

I think it is also reasonable to expect that Win9 and Win10 will probably continue to have the “blue side” (see my Windows 8 dev platform post), meaning Win32, .NET, and Silverlight will continue to be released and therefore supported over that time. They may not change over that time, but they’ll be there, and they’ll be supported – or so goes my theory.

This means that in 2016 the clock might really start for migration from Win32/.NET/Silverlight to WinRT.

Yes, I expect that a lot of us will build things for WinRT sooner than 2016. I certainly hope so, because it looks like a lot of fun!

But from a corporate perspective, where things move so slowly, this is probably good news. Certain apps can be ported sooner, but big and important apps can move slowly over time.

What to do in the meantime? Between now and 2016?

Focus on XAML, and on n-tier or SOA async server access as architectural models.

Or focus on HTML 5 (soon to be HTML 6 fwiw, and possibly HTML 7 by 2016 for all we know).

In fact, the plan for CSLA .NET is a version 4.3 release to support Silverlight 5, then version 4.5 with support for .NET 4.5 and WinRT.

I suspect that you can use Silverlight or WPF as a bridge to WinRT. The real key is architecture.

An n-tier architecture is fine, as long as the data access layer is running on a server, and the client uses async calls to interact with the server. WinRT requires a lot of async, at a minimum all server interactions. Silverlight forces you to adopt this architecture already, so it is a natural fit. WPF doesn’t force the issue, but you can choose to do “the right thing”.

You can also build your client applications to be “edge applications” – on the edge of a service-oriented system. This is a less mature technology area, and it is more costly. But it is also a fine architecture for environments composed of many disparate applications that need to interact as a loosely coupled system. Again, all service interactions by the edge applications (the ones running on the clients) must be async.

Or you can build “hybrid solutions”, where individual applications are built using n-tier architectures (with async server calls). And where some of those applications also expose service interfaces so they can participate as part of a broader service-oriented system.

I favor the third option, the hybrid solution. I don’t like to accept the cost and performance ramifications of SOA when building an application, so I’d prefer to use a faster and cheaper n-tier architecture. At the same time, many applications do need to interact with each other, and the requirement to create “application mashups” through edge applications happens from time to time. So building my n-tier applications to have dual interfaces (XAML and JSON for example) is a perfect compromise.

The direct users of my application get n-tier performance and maintainability. And the broader organization can access my slower-moving, standards-based, contractual service interface. It is the best of both worlds.

So do I care if Silverlight 5 is the last version of Silverlight?

Only if WPF continues to evolve prior to us all moving to WinRT. If WPF continues to evolve, I would expect Silverlight to, at a minimum, keep up. Otherwise Microsoft has led a lot of people down a dead-end path, and that’s a serious betrayal of trust.

But if my suspicions are correct, we won’t see anything but bug fixes for WPF or Silverlight for many years. I rather expect that these two technologies just became the next Windows Forms. You’ll notice that WinForms hasn’t had anything but bug fixes for six years, right? The precedent is there for a UI technology to be “supported, stable, and stagnant” for a very long time, and this is my expectation for WPF/SL.

And if that’s the case, then I don’t care at all about a Silverlight 6 release. We can use WPF/SL in their current form, right up to the point that WinRT is stable and capable enough to act as a replacement for today’s Win32/.NET applications.

At //build/ this past week I heard numerous people suggest that WinRT and Metro aren’t going to be used for business app development. That Metro is for consumer apps only. These comments were from Microsoft and non-Microsoft people.

They could be right, but I think they are wrong.

Let’s think forward about 5 years, and assume that Win8 has become quite successful in the consumer space with Metro apps. And let’s remember that consumers are the same people who use computers in the corporate world.

Then let’s think back to the early 1990’s, when most corporate apps were 3270 or VT100 green-screen terminals, and what people used at home (or even at work for other apps like Lotus 123 or maybe Excel) was Windows.

Users back then pushed hard for Windows interfaces for everything. Compared to the green-screen terminals, Windows was a breath of fresh air, and users really wanted to get away from the terminal experience.

Metro is to today’s Windows what Windows was to terminals. A smooth, touch-based interface that encourages more intuitive and aesthetically pleasing user experiences. And the extremely popular iPad interface is similar.

In 5 years I strongly suspect that users will be pushing hard for Metro interfaces for everything. The old-fashioned, legacy Windows look (because that’s what it will be) will be extremely undesirable.

In fact I would suggest it is already uncool, especially in the face of iPad apps that exist today.

Windows computers are … computers.

iPad devices are friendly companions on people’s individual journeys through life.

(and so are Windows Phone devices – my phone is not a computing device, it is an integral part of my life)

Windows 8 and Metro gives us the opportunity to build apps that fit into something that is an integral part of people’s lives.

Because most people spend at least half their waking life at work, it seems obvious that they’ll want their work apps to be just as smooth and seamless as all the other apps they use.

In short, my prediction is that 5 years from now there’ll be “legacy Windows developers” still building stuff that runs in the clunky desktop mode. And there’ll be “modern Windows developers” that build stuff that runs in Metro – and a lot of those Metro apps will be business applications.

Will we make Jensen (the Microsoft keynote speaker pushing the vision of Metro) happy with every one of these business apps? Will they all fit exactly into the “Metro style”?

I doubt it.

But games don’t either. Metro games take over the whole screen and do whatever they want in the space. And nobody complains that they break the Metro style rules.

Some data-heavy business apps will also break the Metro style rules – I pretty much guarantee it. And yet they’ll still run in the Metro mode, and thus will run on ARM as well as Intel chips, and won’t require the user to see the clunky desktop mode.

In 5 years we can all check back on this blog post to see if I’m right. But I strongly suspect that 5 years from now I’ll be having a great time building some cool Metro business app.

Microsoft revealed quite a lot of detail about "Windows 8" and its programming model at the //build/ conference in September 2011. The result is a lot of excitement, and a lot of fear and worry, on the part of Microsoft developers and customers.

From what I've seen so far, reading tweets and other online discussions, people's fear and worry are misplaced. Not necessarily unwarranted, but misplaced.

There's a lot of worry that ".NET has no future" or "Silverlight has no future". These worries are, in my view, misplaced.

First, it is important to understand that the new WinRT (Windows Runtime) model that supports Win8 Metro style apps is accessible from .NET. Yes, you can also use C++, but I can't imagine a whole lot of people care. And you can use JavaScript, which is pretty cool.

But the important thing to understand is that WinRT is fully accessible from .NET. The model is quite similar to Silverlight for Windows Phone. You write a program using C# or VB, and that program runs within the CLR, and has access to a set of base class libraries (BCL) just like a .NET, Silverlight, or WP7 app today. Your program also has access to a large new namespace where you have access to all the WinRT types.

These WinRT types are the same ones used by C++ or JavaScript developers in WinRT. I think this is very cool, because it means that (for perhaps the first time ever) we'll be able to create truly first-class Windows applications in C#, without having to resort to C++ or p/invoke calls.

The BCL available to your Metro/WinRT app is restricted to things that are "safe" for a Metro app to use, and the BCL features don't duplicate what's provided by the WinRT objects. This means that some of your existing .NET code won't just compile in WinRT, because you might be using some .NET BCL features that are now found in WinRT, or that aren't deemed "safe" for a Metro app.

That is exactly like Silverlight and WP7 apps. The BCL features available in Silverlight or WP7 are also restricted to disallow things that aren't safe, or that make no sense in those environments.

In fact, from what I've seen so far, it looks like the WinRT BCL features are more comparable to Silverlight than anything else. So I strongly suspect that Silverlight apps will migrate to WinRT far more easily than any other type of app.

None of this gives me any real worry or concern. Yes, if you are a Windows Forms developer, and very possibly if you are a WPF developer, you'll have some real effort to migrate to WinRT, but it isn't like you have to learn everything new from scratch like we did moving from VB/COM to .NET. And if you are a Silverlight developer you'll probably have a pretty easy time, but there'll still be some real effort to migrate to WinRT.

If nothing else, we all need to go learn the WinRT API, which Microsoft said was around 1800 types.

So what should you worry about? In my view, the big thing about Win8 and Metro style apps is that these apps have a different lifetime and a different user experience model. The last time we underwent such a dramatic change in the way Windows apps worked was when we moved from Windows 3.1 (or Windows for Workgroups) to Windows 95.

To bring this home, let me share a story. When .NET was first coming out I was quite excited, and I was putting a lot of time into learning .NET. As a developer my world was turned upside down and I had to learn a whole new platform and tools and language - awesome!! :)

I was having a conversation with my mother, and she could tell I was having fun. She asked "so when will I see some of this new .NET on my computer?"

How do you answer that? Windows Forms, as different as it was from VB6, created apps that looked exactly the same. My mother saw exactly zero difference as a result of our massive move from VB/COM to .NET.

Kind of sad when you think about it. We learned a whole new programming platform so we could build apps that users couldn't distinguish from what we'd been doing before.

Windows 8 and Metro are the inverse. We don't really need to learn any new major platform or tools or languages. From a developer perspective this is exciting, but evolutionary. But from a user perspective everything is changing. When I next talk to my mother about how excited I am, I can tell her (actually I can show her thanks to the Samsung tablet - thank you Microsoft!) that she'll see new applications that are easier to learn, understand, and use.

This is wonderful!!

But from our perspective as developers, we are going to have to rethink and relearn how apps are designed at the user experience and user workflow level. And we are going to have to learn how to live within the new application lifecycle model where apps can suspend and then either resume or be silently terminated.

Instead of spending a lot of time angsting over whether the WinRT CLR or BCL is exactly like .NET/Silverlight/WP7, we should be angsting over the major impact of the application lifecycle and Metro style UX and Metro style navigation within each application.

OK, I don't honestly think we should have angst over that either. I think this is exciting, and challenging. If I wanted to live in a stable (stagnant?) world where I didn't need to think through such things, well, I think I'd be an accountant or something…

Yes, this will take some effort and some deep thinking. And it will absolutely impact how we build software over the next many years.

And this brings me to the question of timing. When should we care about Metro and WinRT? Here's a potential timeline that I suspect is quite realistic, based on watching Windows releases since 1990.

Win8 will probably RTM in time for hardware vendors to create, package, and deliver all sorts of machines for the 2012 holiday season. So probably somewhere between July and October 2012.

For consumer apps this means you might care about Win8 now, because you might want to make sure your cool app is in the Win8 online store for the 2012 holiday season.

For business apps the timing is quite different. Corporations roll out a new OS much later than consumers get it through retailers. As an example, Windows 7 has now been out for about three years, but most corporations still use Windows XP!!! I have no hard numbers, but I suspect Win7 is deployed in maybe 25% of corporations - after being available for three years.

That is pretty typical.

So for business apps, we can look at doing a reasonable amount of Win8 Metro development around 2015.

Yes, some of us will be lucky enough to work for "type A" companies that jump on new things as they come out, and we'll get to build Metro apps starting in late 2012.

Most of us work for "type B" companies, and they'll roll out a new OS after SP1 has been deployed by the "type A" companies - these are the companies that will deploy Win8 after it has been out for 2-4 years.

Some unfortunate souls work for "type C" companies, and they'll roll out Win8 when Win7 loses support (so around 2018?). I used to work for a "type C" company, and that's a hard place to find yourself as a developer. Yet those companies do exist even today.

What does this all mean? It means that for a typical corporate or business developer, we have around 4 years from today before we're building WinRT apps.

The logical question to ask then (and you really should ask this question), is what do we do for the next 4 years??? How do we build software between now and when we get to use Metro/WinRT?

Obviously the concern is that if you build an app starting today, how do you protect that investment so you don't have to completely rewrite the app in 4 years?

I don't yet know the solid answer. We just don't have enough deep information yet. That'll change though, because we now have access to early Win8 builds and early tooling.

What I suspect is that the best way to mitigate risk will be to build apps today using Silverlight and the Silverlight navigation model (because that's also the model used in WinRT).

The BCL features available to a Silverlight app are closer to WinRT than full .NET is today, so the odds of using BCL features that won't be available to a Metro app are reduced.

Also, thinking through the user experience and user workflow from a Silverlight navigation perspective will get your overall application experience closer to what you'd do in a Metro style app - at least when compared to any workflow you'd have in Windows Forms. Certainly you can use WPF and also create a Silverlight-style navigation model, and that'd also be good.

Clearly any app that uses multiple windows or modal dialogs (or really any dialogs) will not migrate to Metro without some major rework.

The one remaining concern is the new run/suspend/resume/terminate application model. Even Silverlight doesn't use that model today - except on WP7. I think some thought needs to go into application design today to enable support for suspend in the future. I don't have a great answer right at the moment, but I know that I'll be thinking about it, because this is important to easing migrations in the future.

It is true that whatever XAML you use today won't move to WinRT unchanged. Well, I can't say that with certainty, but the reality is that WinRT exposes several powerful UI controls we don't have today. And any Metro style app will need to use those WinRT controls to fit seamlessly into the Win8 world.

My guess is that some of the third-party component vendors are well on their way to replicating the WinRT controls for Silverlight and WPF today. I surely hope so anyway. And that's probably going to be the best way to minimize the XAML migration. If we have access to controls today that are very similar to the WinRT controls of the future, then we can more easily streamline the eventual migration.

In summary, Windows 8, WinRT, and Metro are a big deal. But not in the way most people seem to think. The .NET/C#/CLR/BCL story is evolutionary and just isn't that big a deal. It is the user experience and application lifecycle story that will require the most thought and effort as we build software over the next several years.

Personally I'm thrilled! These are good challenges, and I very much look forward to building .NET applications that deeply integrate with Windows 8. Applications that I can point to and proudly say "I built that".

Yesterday I set out to do “something simple” – to use Windows Server AppFabric to do basic health monitoring of a WCF service.

A few hours later, I was pulling my hair out and my brain was spinning.

The basic issue is that configuring and using AppFabric involves numerous moving parts, and that leads to a ton of complexity for something that is otherwise pretty simple. The basic steps are:

install your web site into IIS (create an IIS Application where the virtual root points to your web site directory)

in IIS Manager go to the virtual root, and click the Configure link on the far right under the Manage WCF and WF Services label

in the resulting dialog, click Monitoring on the left, then enable application monitoring

call your service

back in IIS Manager, double-click the AppFabric Dashboard option for your virtual root to see the dashboard, with the cute little counters showing that your services have been used

Of course that didn’t work.

In the end, I think it didn’t work because the AppFabric Event Collection Service (a Windows service that runs to collect event information) didn’t have NTFS security rights to read my application’s web.config file.

But that’s not the first thing I thought to check. No, the first thing I thought to check was whether data was getting into the AppFabric tables in SQL Server. It was not. So then (after a little googling with Bing), it sounded like the problem was that SQL Agent wasn’t running.

Of course it turns out that SQL Agent can’t run against SQL Express. But having three different versions of SQL Server installed was making this all very hard to troubleshoot. So I spent some quality time uninstalling SQL 2005 and 2008, and then installing SQL Server Developer 2008 R2 – so now I have a real up-to-date SQL Server instance where SQL Agent does work.

(again, I suspect that was all wasted time – but on the upside, I have a far less confusing SQL Server installation)

All that work, and it didn’t help. Then it occurred to me that my configurations were probably out of sync. So I reconfigured AppFabric, and my app, and the web sites and virtual roots in IIS Manager – all to use the new 2008 R2 database. And things were out of sync, so this was necessary and good.

But that didn’t help either. Still no data was in the AppFabric database.

Finally, I found a page lurking deep in MSDN that contained good troubleshooting information. And in here were instructions on how to view the event log for the AppFabric event collection service – which couldn’t read my web.config file.

I thought I’d hit the jackpot, so I updated the NTFS permissions on my web folder so the collection service could read the directory.

Still nothing. So I went to bed, frustrated at the continual failure.

This morning I thought I’d try again. Still no joy. So I rebooted my machine, and then it worked.

So I suspect that the core issue was the NTFS file permissions for the collection service. But with all the other changes I made, some service didn’t re-read its configuration until the system was rebooted.

In the end, it only took me about 6 hours of work to get Windows Server AppFabric to monitor the health of my WCF service. Hopefully I’ll never have to go through that again…

If that isn’t enough, there’s a raffle at the end of the day, with great prizes (including an MSDN Universal subscription), and our special guest Carl Franklin from .NET Rocks! will be in attendance to spice up the event.

BasicVM and LiveDemo are literally the code I created on stage, so they are nothing to look at and probably have little value if you weren’t at the talks. The other demos are more complete and may be useful – especially UsingBxf, which shows some interesting UI proto-framework concepts in a pretty straightforward manner.

BasicVM

Demo created on stage to illustrate the most basic MVVM concepts.

Collective

Demo/prototype of a knowledge base/forum application that makes use of a pre-release version of CSLA 4, MVVM and other Silverlight concepts. It is a 3-tier physical deployment (Silverlight client, web/app server, SQL Server database) architecture using many CSLA .NET concepts.

This code uses the Bxf (“Basic XAML Framework”) to implement its shell and viewmodel code in a way that can be unit tested.

LiveDemo

Demo created on stage to illustrate the basic use of the Visual Studio 2010 XAML designer.

UsingBxf

Demo showing simple use of the “Basic XAML Framework” (Bxf) in WPF. The same Bxf code works in Silverlight as well. The Bxf code and this demo show how to implement a basic UI shell that shows content and a status bar. For more advanced uses of Bxf see the Collective demo.

Last week I spent a few hours switching the CSLA .NET for Windows unit/integration tests from nunit to mstest.

This wasn’t terribly hard, because the tests were originally created with the idea of supporting both test frameworks. Of course as different people added tests over several years time inconsistencies crept in, and that’s what I had to address to make this switch.

I didn’t remove the compiler directives for nunit, so it should take relatively little effort to switch back to nunit, but I don’t personally plan to do that.
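The dual-framework support mentioned above typically relies on compiler directives plus using aliases, so the same test source compiles against either nunit or mstest. Here is a minimal sketch of the pattern (the test class and its contents are hypothetical, not actual CSLA .NET tests):

```csharp
// With NUNIT defined this compiles against nunit; otherwise against mstest.
// The using aliases map mstest attribute names onto their nunit equivalents,
// so the test bodies themselves need no conditional code at all.
#if NUNIT
using NUnit.Framework;
using TestClass = NUnit.Framework.TestFixtureAttribute;
using TestMethod = NUnit.Framework.TestAttribute;
using TestInitialize = NUnit.Framework.SetUpAttribute;
#else
using Microsoft.VisualStudio.TestTools.UnitTesting;
#endif

[TestClass]
public class ExampleTests
{
  [TestMethod]
  public void StringsConcatenate()
  {
    // Assert.AreEqual exists with this signature in both frameworks
    Assert.AreEqual("ab", "a" + "b");
  }
}
```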

mstest is now available in all professional versions of Visual Studio 2010, and Microsoft is obviously faster about getting their test framework updated as .NET and Visual Studio change. Looking at www.nunit.org there’s no mention of VS10 or .NET 4.0. Yes, I know people have tweaked nunit to work on .NET 4.0, but mstest allows me to eliminate one level of uncertainty from my process.

Besides, there are all these really cool tools and capabilities in VS10, some of which tie into testing and coverage, and this gives me motivation to play with them :)

In Visual Studio 2010 and .NET 4.0 Microsoft is amping up the visibility of the “client profile” concept. In fact, when you install the 4.0 client profile on a machine, it doesn’t drag the rest of the framework to that client later – they just get the client profile. And when you create a WPF or Windows Forms project in VS10 you default to targeting the client profile.

That’s all good – great in fact!!

But I’ve fallen in love with the validation attribute concepts in System.ComponentModel.DataAnnotations.dll. These attributes are designed specifically to enable a UI framework author (or a business layer framework author – like me with CSLA .NET) to automatically create a rich user experience based on the attributes decorating business objects.

This concept was first fully realized in Silverlight 3 – a client technology – and is now fully supported in the .NET 4.0 full profile. But it is a client-side technology, and so it should be in the client profile.
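To make the idea concrete, here is a small sketch of what these attributes enable. The Customer class is hypothetical; the point is that you decorate the business class once, and any framework (UI or server) can evaluate the same rules through the Validator class:

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

// Hypothetical business class: the attributes describe validation rules
// that a UI or business-layer framework can discover via reflection.
public class Customer
{
  [Required(ErrorMessage = "Name is required")]
  [StringLength(50)]
  public string Name { get; set; }

  [Range(0, 120)]
  public int Age { get; set; }
}

public static class Program
{
  public static void Main()
  {
    var cust = new Customer { Name = null, Age = 200 };
    var results = new List<ValidationResult>();

    // validateAllProperties: true checks every attribute on every property
    bool isValid = Validator.TryValidateObject(
      cust, new ValidationContext(cust, null, null), results, true);

    Console.WriteLine(isValid);   // both rules are violated here
    foreach (var r in results)
      Console.WriteLine(r.ErrorMessage);
  }
}
```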

I’ve logged this issue on connect, and recommend you vote for this to be resolved:

Of course I’m referring to Windows Forms, which is about 8 years old. Even in dog years that’s not old. But in software years it is pretty old I’m afraid…

I’m writing this post because here and in other venues I’ve recently referred to Windows Forms as “legacy”, along with asmx and even possibly Web Forms. This has caused a certain amount of alarm, but I’m not here to apologize or mollify.

Technologies come and go. That’s just life in our industry. I was a DEC VAX guy for many years (I hear Ted Neward laughing now, he loves these stories), but I could see the end coming years before it faded away, so I switched to the woefully immature Windows platform (Windows 3.0 – what a step backward from the VAX!). I know many FoxPro people who transitioned, albeit painfully, to VB or other tools/languages. The same with Clipper/dBase/etc. Most PowerBuilder people transitioned to Java or .NET (though much to my surprise I recently learned that PowerBuilder still actually exists – like you can still buy it!!).

All through my career I’ve been lucky or observant enough to jump ship before any technology came down on my head. I switched to Windows before the VAX collapsed, and switched to .NET before VB6 collapsed, etc. And honestly I can’t think of a case where I didn’t feel like I was stepping back in time to use the “new technology” because it was so immature compared to the old stuff. But every single time it was worth the effort, because I avoided being trapped on a slowly fading platform/technology with my skills becoming less relevant every day.

But what is “legacy”? I once heard a consultant say “legacy is anything you’ve put in production”. Which might be good for a laugh, but isn’t terribly useful in any practical sense.

I think “legacy” refers to a technology or platform that is no longer an area of focus or investment by the creator/maintainer. In our world that mostly means Microsoft, and so the question is where is Microsoft focused, where are they spending their money and what are they enhancing?

The answers are pretty clear:

Azure

Silverlight

ASP.NET MVC

WPF (to a lesser degree)

ADO.NET EF

WCF

These are the areas where the research, development, marketing and general energy are all focused. Ask a Microsoft guy what’s cool or hot and you’ll hear about Azure or Silverlight, maybe ADO.NET EF or ASP.NET MVC and possibly WPF or WCF. But you won’t hear Windows Forms, Web Forms, asmx web services, Enterprise Services, Remoting, LINQ to SQL, DataSet/TableAdapter/DataTable or numerous other technologies.

Some of those other technologies aren’t legacy – they aren’t going away, they just aren’t sexy. Raw ADO.NET, for example. Nobody talks about that, but ADO.NET EF can’t exist without it, so it is safe. But in theory ADO.NET EF competes with the DataSet (poorly, but still) and so the DataSet is a strong candidate for the “legacy” label.

Silverlight and WPF both compete with Windows Forms. Poor Windows Forms is getting no love, no meaningful enhancements or new features. It is just there. At the same time, Silverlight gets a new release every 12 months or less, and WPF gets all sorts of amazingly cool new features for Windows 7. You tell me whether Windows Forms is legacy. But whatever you decide, I’m surely spending zero cycles of my time on it.

asmx is obviously legacy too. It has been ever since WCF showed up, though WCF’s configuration issues have been a plague on its existence. I rather suspect .NET 4.0 will address those shortcomings though, making WCF as easy to use as asmx and driving the final nail in the asmx coffin.

Web Forms isn’t so clear to me. All the buzz is on ASP.NET MVC. That’s the technology all the cool kids are using, and it really is some nice technology – I like it as much as I’ll probably ever like a web technology. But if you look at .NET 4.0, Microsoft has done some really nice things in Web Forms. So while it isn’t getting the hype of MVC, it is still getting some very real love from the Microsoft development group that owns the technology. So I don’t think Web Forms is legacy now or in .NET 4.0, but beyond that it is hard to say. I strongly suspect the fate of Web Forms lies mostly in its user base and whether they fight for it, whether they make Microsoft believe it continues to be worth serious investment and improvement into the .NET 5.0 timeframe.

For my part, I can tell you that it is amazingly (impossibly?) time-consuming to be an expert on 7-9 different interface technologies (UI, service, workflow, etc). Sure CSLA .NET supports all of them, but there are increasing tensions between the stagnant technologies (most notably Windows Forms) and the vibrant technologies like Silverlight and WPF. It is no longer possible, for example, to create a collection object that works with all the interface technologies – you just can’t do it. And the time needed to deeply understand the different binding models and subtle differences grows with each release of .NET.

CSLA .NET 4.0 will absolutely still support all the interface technologies. But it would be foolish to cut off the future to protect the past – that way lies doom. So in CSLA .NET 4.0 you should expect to see support for Windows Forms still there, but probably moved into another namespace (Csla.Windows or something), while the main Csla namespace provides support for modern interface technologies like WPF, ASP.NET MVC, Silverlight, etc.

I am absolutely committed to providing a window of time where Windows Forms users can migrate their apps to WPF or Silverlight while still enjoying the value of CSLA .NET. And I really hope to make that reasonably smooth – ideally you’ll just have to change your base class types for your business objects when you switch the UI for the object from Windows Forms to XAML – though I suspect other minor tweaks may be necessary as well in some edge cases.

But let’s face it, at some point CSLA .NET does have to drop legacy technologies. I’m just one guy, and even with Magenic being such a great patron it isn’t realistic to support every technology ever invented for .NET :) I don’t think the time to drop Windows Forms is in 4.0, because there are way too many people who need to migrate to WPF over the next 2-3 years.

On the other hand, if you and your organization aren’t developing a strategy to move off Windows Forms in the next few years I suspect you’ll eventually wake up one day and realize you are in a bad spot. One of those spots where you can’t hire anyone because no one else has done your technology for years, and nobody really remembers how it works (or at least won’t admit they do unless you offer them huge sums of money).

I don’t see this as bad. People who want stability shouldn’t be in computing. They should be in something like accounts receivable or accounts payable – parts of business that haven’t changed substantially for decades, or perhaps centuries.

I was just complaining that the cool new Windows 7 features weren’t available to me as a .NET developer – at least not without painful p/invoke calls.

My complaints were ill-founded however, as it turns out there’s a solution in the form of the Windows API Code Pack.

This makes me happy (though I haven’t tried it yet, so I’m just assuming it works) because I want access to Jump Lists and some other Windows shell integration concepts – which appear to be nicely included in the code pack.

A ridiculously long time ago I was in a meeting at Microsoft, sitting next to Ted Neward. As you may know, Ted lives in both the Java and .NET worlds and kind of specializes in interop between them.

Somehow (and I don’t remember the specifics), we got to talking about object serialization and related concepts like encryption, signing and so forth. It turned out that Java had a library of wrapper types that worked with the serialization concept to make it very easy to sign and encrypt an object graph.

Thinking about this, it isn’t hard to imagine this working in .NET, and so I whipped up a similar library concept. I’ve used it from time to time myself, but never quite got around to putting it online for public consumption. Until now:

You can then send the wrapper over the network as long as it is serialized using the BinaryFormatter or NetDataContractSerializer, and on the other end you can make sure it hasn’t been tampered with by verifying the signature:

if (wrapper.Verify(hashKey))

Of course the really tricky part is key exchange. How did both ends of the process get access to the same hashKey value? That’s outside the scope of my library, and frankly that is the really hard part about things like security…

In fact, if you look inside the code for the various wrapper classes, you’ll find that I’m just delegating all the interesting work to the .NET cryptography subsystem. By using the various wrappers together you can do asymmetric public/private keys or symmetric keys, signing, and encryption. I think I now cover all the different algorithms supported by .NET – in one nicely abstract scheme.
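To illustrate the general shape of the idea, here is a minimal sketch (not the actual library code, and the SignedWrapper name is mine) of a wrapper that signs an already-serialized object graph with HMACSHA256, delegating the real work to the .NET crypto subsystem exactly as described above:

```csharp
using System;
using System.Security.Cryptography;

// Minimal sketch of a signing wrapper: hold serialized bytes plus an HMAC
// signature, so the receiving end can detect tampering with Verify().
[Serializable]
public class SignedWrapper
{
  public byte[] Data { get; private set; }
  public byte[] Signature { get; private set; }

  public SignedWrapper(byte[] serializedGraph, byte[] hashKey)
  {
    Data = serializedGraph;
    using (var hmac = new HMACSHA256(hashKey))
      Signature = hmac.ComputeHash(Data);
  }

  public bool Verify(byte[] hashKey)
  {
    using (var hmac = new HMACSHA256(hashKey))
    {
      var expected = hmac.ComputeHash(Data);
      if (expected.Length != Signature.Length) return false;
      int diff = 0;
      for (int i = 0; i < expected.Length; i++)
        diff |= expected[i] ^ Signature[i];   // constant-time comparison
      return diff == 0;
    }
  }
}
```

Both ends still need the same hashKey, which is the key-exchange problem noted earlier.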

Also, if you look inside the solution you’ll see a compression wrapper. That was an experiment on my part, and I really didn’t find the result satisfying. My thought was that you’d wrap your object graph (maybe after it was encrypted and signed) in the compression wrapper, and then that would be serialized to go over the wire.

But it turns out that there are two flaws:

Serializing the compressed data makes it quite a bit bigger, and you are better off transferring the CompressedData value from the wrapper rather than allowing the wrapper itself to be serialized.

More importantly, compressing encrypted data doesn’t work well. Encrypted data is pretty random, and the two compression algorithms included in .NET don’t do a particularly good job of compressing that data. I don’t know if other algorithms are better at compressing encrypted data, but I was disappointed with the results I found here.
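The second point is easy to demonstrate. GZip (one of the .NET compression options, via GZipStream) shrinks repetitive data dramatically, but can do essentially nothing with random bytes, which is what good ciphertext looks like:

```csharp
using System;
using System.IO;
using System.IO.Compression;

public static class CompressionDemo
{
  // Compress a buffer with GZip and return the compressed size in bytes
  public static int GZipSize(byte[] input)
  {
    using (var ms = new MemoryStream())
    {
      using (var gz = new GZipStream(ms, CompressionMode.Compress))
        gz.Write(input, 0, input.Length);
      return (int)ms.Length;
    }
  }

  public static void Main()
  {
    var repetitive = new byte[10000];        // all zeros: highly compressible
    var random = new byte[10000];
    new Random(42).NextBytes(random);        // stands in for encrypted data

    Console.WriteLine(GZipSize(repetitive)); // far smaller than 10000
    Console.WriteLine(GZipSize(random));     // roughly 10000 or slightly larger
  }
}
```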

In any case, I’ve found the crypto wrapper classes to be generally useful in abstracting most of the complexity of dealing with the .NET crypto subsystem, and I thought I’d share the code in case anyone else can find it useful as well.

One of the topic areas I get asked about frequently is authorization. Specifically role-based authorization as supported by .NET, and how to make that work in the "real world".

I get asked about this because CSLA .NET (for Windows and Silverlight) follows the standard role-based .NET model. In fact, CSLA .NET rests directly on the existing .NET infrastructure.

So what's the problem? Why doesn't the role-based model work in the real world?

First off, it is important to realize that it does work for some scenarios. It isn't bad for coarse-grained models where users are authorized at the page or form level. ASP.NET directly uses this model for its authorization, and many people are happy with that.

But it doesn't match the requirements of a lot of organizations in my experience. Many organizations have a slightly more complex structure that provides better administrative control and manageability.

Whether a user can get to a page/form, or can view a property or edit a property is often controlled by a permission, not a role. In other words, users are in roles, and a role is essentially a permission set: a list of permissions the role has (or doesn't have).

This doesn't map real well into the .NET IPrincipal interface, which only exposes an IsInRole() method. Finding out if the user is in a role isn't particularly useful, because the application really needs to call some sort of HasPermission() method.

In my view the answer is relatively simple.

The first step is understanding that there are two concerns here: the administrative issues, and the runtime issues.

At administration time the concepts of "user", "role" and "permission" are all important. Admins will associate permissions with roles, and roles with users. This gives them the kind of control and manageability they require.

At runtime, when the user is actually using the application, the roles are entirely meaningless. However, if you consider that IsInRole() can be thought of as "HasPermission()", then there's a solution. When you load the .NET principal with a list of "roles", you really load it with a list of permissions. So when your application asks "IsInRole()", it does it like this:

bool result = currentPrincipal.IsInRole(requiredPermission);

Notice that I am "misusing" the IsInRole() method by passing in the name of a permission, not the name of a role. But that's ok, assuming that I've loaded my principal object with a list of permissions instead of a list of roles. Remember, the IsInRole() method typically does nothing more than determine whether the string parameter value is in a list of known values. It doesn't really matter if that list of values are "roles" or "permissions".

And since, at runtime, no one cares about roles at all, there's no sense loading them into memory. This means the list of "roles" can instead be a list of "permissions".

The great thing is that many people store their users, roles and permissions in some sort of relational store (like SQL Server). In that case it is a simple JOIN statement to retrieve all permissions for a user, merging all the user's roles together to get that list, and not returning the actual role values at all (because they are only useful at admin time).
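Putting the pieces together, here is a sketch of the runtime side. The LoadPermissionsFor method is a stand-in for that JOIN (in a real app it would hit the database); everything else uses the standard .NET principal types:

```csharp
using System;
using System.Security.Principal;

public static class PermissionPrincipalDemo
{
  // Stand-in for the JOIN described above: return the merged list of
  // permissions for this user, never the role names themselves.
  static string[] LoadPermissionsFor(string userName)
  {
    return new[] { "EditCustomer", "ViewInvoice" };
  }

  // Build a principal whose "roles" list is really a permission list
  public static GenericPrincipal GetPrincipal(string userName)
  {
    var identity = new GenericIdentity(userName);
    return new GenericPrincipal(identity, LoadPermissionsFor(userName));
  }

  public static void Main()
  {
    var principal = GetPrincipal("rocky");

    // At runtime the app asks about permissions, not roles
    Console.WriteLine(principal.IsInRole("EditCustomer"));   // permitted
    Console.WriteLine(principal.IsInRole("DeleteCustomer")); // not permitted
  }
}
```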

PDC 2008 was a lot of fun - a big show, with lots of announcements, lots of sessions and some thought-provoking content. I thought I'd throw out a few observations. Not really conclusions, as those take time and reflection, so just some observations.

Windows Azure, the operating system for the cloud, is intriguing. For a first run at this, the technology seems surprisingly complete and contains a pretty reasonable set of features. I can easily see how web sites, XML services and both data-centric and compute-centric processing could be built for this platform. For that matter, it looks like it would be perhaps a week's work to get my web site ported over to run completely in Azure.

The real question is whether that would even make sense, and that comes down to the value proposition. One big component of value is price. Like anyone else, I pay a certain amount to run my web site. Electricity, bandwidth, support time, hardware costs, software costs, etc. I've never really sorted out an exact cost, but it isn't real high on a per-month basis. And I could host on any number of .NET-friendly hosting services that have been around for years, and some of them are pretty inexpensive. So the question becomes whether Azure will be priced in such a way that it is attractive to me. If so, I'm excited about Azure!! If not, then I really don't care about Azure.

I suspect most attendees went through a similar thought process. If Microsoft prices Azure for "the enterprise" then 90% of the developers in the world simply don't care about Azure. But if Microsoft prices Azure for small to mid-size businesses, and for the very small players (like me) then 90% of the developers in the world should (I think) really be looking at this technology.

Windows 7 looks good to me. After the Tuesday keynote I was ready to install it now. As time goes by the urgency has faded a bit - Vista has stabilized nicely over the past 6-8 months and I really like it now. Windows 7 has some nice-sounding new features though. Probably the single biggest one is reduced system resource requirements. If Microsoft can deliver on that part of the promise I'll be totally thrilled. Though I really do want multi-monitor RDP and the ability to manage, mount (and even boot from) vhd files directly from the host OS.

In talking to friends of mine that work at Microsoft, my level of confidence in W7 is quite high. A couple of them have been running it for some time now, and while it is clearly pre-beta, they have found it to be a very satisfying experience. When I get back from all my travels I do think I'll have to buy a spare HD for my laptop and give it a try myself.

The Oslo modeling tools are also interesting, though they are more future-looking. Realistically this idea of model-driven development will require a major shift in how our industry thinks about and approaches custom software development. Such a massive shift will take many years to occur, regardless of whether the technology is there to enable it. It is admirable that Microsoft is taking such a gamble - building a set of tools and technologies for something that might become acceptable to developers in the murky future. Their gamble will pay off if we collectively decide that the world of 3GL development really is at an end and that we need to move to higher levels of abstraction. Of course we could decide to stick with what has (and hasn't) worked for 30+ years, in which case modeling tools will go the way of CASE.

But even if some of the really forward-looking modeling ideas never become palatable, many of the things Microsoft is doing to support modeling are immediately useful. Enhancements to Windows Workflow are a prime example, as is the M language. I've had a hard time getting excited about WF, because it has felt like a graphical way to do FORTRAN. But some of the enhancements to WF directly address my primary concerns, and I can see myself getting much more interested in WF in the relatively near future. And the ability of the M language to define other languages (create DSLs), where I can create my own output generator to create whatever I need - now that is really, really cool!

Once I get done with my book and all my fall travel, you can bet I'll be exploring the use of M to create a specialized language to simplify the creation of CSLA .NET business classes :)

There were numerous talks about .NET 4.0 and the future of C# and VB.

Probably the single biggest thing on the language front is that Microsoft has finally decided to sync VB and C# so they have feature parity. Enough of this back-and-forth with different features, the languages will now just move forward together. A few years ago I would have argued against this, because competition breeds innovation. But today I don't think it matters, because the innovation is coming from F#, Ruby, Python and various other languages and initiatives. Both VB and C# have such massive pre-existing code-bases (all the code we've written) that they can't move rapidly or explore radical ideas - while some of these other languages are more free to do just that.

The framework itself has all sorts of changes and improvements. I spent less time looking at this than at Azure and Oslo though, so I honestly just don't have a lot to say on it right now. I look at .NET 4.0 and Visual Studio 2010 as being more tactical - things I'll spend a lot of time on over the next few months anyway - so I didn't see so much need to spend my time on it during PDC.

Finally, there were announcements around Silverlight and WPF. If anyone doubts that XAML is the future of the UI on Windows and (to some degree) the web, now is the time to wake up and smell the coffee. I'm obviously convinced Silverlight is going to rapidly become the default technology for building business apps, with WPF and Ajax as fallback positions, and everything at the PDC simply reinforced this viewpoint.

The new Silverlight and WPF toolkits provide better parity between the two XAML dialects, and show how aggressively Microsoft is working to achieve true parity.

But more important is the Silverlight intersection with Azure and Live Mesh. The fact that I can build smart client apps that totally host in Azure or the Mesh is compelling, and puts Silverlight a notch above WPF in terms of being the desired start-point for app development. Yes, I really like WPF, but even if it can host in Azure it probably won't host in Mesh, and in neither case will it be as clean or seamless.

So while I fully appreciate that WPF is good for that small percentage of business apps that need access to DirectX or rich client-side resources, I still think most business apps will work just fine with access to the monitor/keyboard/mouse/memory/CPU provided by Silverlight.

A couple people asked why I think Silverlight is better than Ajax. To me this is drop-dead simple. I can write a class in C# or VB that runs on the client in Silverlight. I can write real smart client applications that run in the browser. And I can run that exact same code on the server too. So I can give the user a very interactive experience, and then re-run that same code on the server because I don't trust the client.

To do that in Ajax you'd either have to write your code twice (in C# and in JavaScript), or you'd have to do tons of server calls to simulate the interactivity provided by Silverlight - and that obviously won't scale nearly the same as the more correct Silverlight solution.

To me it is a no-brainer - Ajax loses when it comes to building interactive business apps like order entry screens, customer maintenance screens, etc.

That's not to say Ajax has no home. The web and browser world is really good at displaying data, and Ajax makes data display more interesting than simple HTML. I strongly suspect that most "Silverlight" apps will make heavy use of HTML/Ajax for data display, but I just can't see why anyone would willingly choose to create data entry forms or other interactive parts of their app outside of Silverlight.

And that wraps up my on-the-flight-home summary of thoughts about PDC.

Next week I'm speaking at the Patterns and Practices Summit in Redmond, and then I'll be at Tech Ed EMEA in Barcelona. I'm doing a number of sessions at both events, but what's cool is that at each event I'm doing a talk specifically about CSLA .NET for Silverlight. And in December I'll be at VS Live in Dallas, where I'll also give a talk directly on using CSLA .NET.

Someone on the CSLA .NET discussion forum recently asked what new .NET 3.5 features I used in CSLA .NET 3.5. The poster noted that there are a lot of new features in .NET 3.5, which is true. They also included some .NET 3.0 features as "new", though really those features have now been around for 15 months or so and were addressed in CSLA .NET 3.0. CSLA .NET 3.0 already added support for WCF, WPF and WF, so those technologies had very little impact on CSLA .NET 3.5.

My philosophy is to use new technologies only if they provide value to me and my work. In the case of CSLA .NET this is extended slightly, such that I try to make sure CSLA .NET also supports new technologies that might be of value to people who use CSLA .NET.

While .NET 3.5 has a number of new technologies at various levels (UI, data, languages), many of them required no changes to CSLA to support. I like to think this is because I'm always trying to look into the future as I work on CSLA, anticipating at least some of what is coming so I can make the transition smoother. For example, this is why CSLA .NET 2.0 introduced a provider model for the data portal - because I knew WCF was coming in a couple years and I wanted to be ready.

Since CSLA .NET already supported data binding to WPF, Windows Forms and Web Forms, there was no real work to do at the UI level for .NET 3.5. I actually removed Csla.Wpf.Validator because WPF now directly supplies that behavior, but I really didn't add anything for UI support because it is already there.

Looking forward beyond 3.5, it is possible I'll need to add support for ASP.NET MVC because that technology eschews data binding in favor of other techniques to create the view - but it is too early to know for sure what I'll do in that regard.

Since CSLA .NET has always abstracted the business object concept from the data access technique you choose, it automatically supported LINQ to SQL (and will automatically support ADO.NET EF too). No changes were required to do that, though I did add Csla.Data.ContextManager to simplify the use of L2S data context objects (as a companion to the new Csla.Data.ConnectionManager for raw ADO.NET connections). And I enhanced Csla.Data.DataMapper to have some more powerful mapping options that may be useful in some L2S or EF scenarios.

LINQ to Objects did require some work. Technically this too was optional, but I felt it was critical, and so there is now "LINQ to CSLA" functionality provided in 3.5 (thanks to my colleague Aaron Erickson). The primary feature of this is creating a synchronized view of a BusinessListBase list when you do a non-projection query, which means you can data bind the results of a non-projection query and allow the user to add/remove items from the query result and those changes are also reflected in the original list. As a cool option, LINQ to CSLA also implements indexed queries against lists, so if you are doing many queries against the same list object you should look into this as a performance booster!
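
To make the projection/non-projection distinction concrete, here's a plain LINQ to Objects sketch (standard C# with invented data; the synchronized-view behavior itself is LINQ to CSLA's addition and isn't shown):

```csharp
using System.Collections.Generic;
using System.Linq;

class QueryDemo
{
    static readonly List<string> Resources =
        new List<string> { "Ann", "Bob", "Carl" };

    // Non-projection query: the results are the original objects
    // themselves. This is the case where LINQ to CSLA can hand back a
    // view that stays synchronized with the source list.
    public static List<string> NonProjection()
    {
        return (from r in Resources
                where r.Length == 3
                select r).ToList();
    }

    // Projection query: each result is a new (anonymous) object, so
    // there is no original item for a view to stay synchronized with.
    public static int ProjectionCount()
    {
        var projected = from r in Resources
                        select new { Name = r, Length = r.Length };
        return projected.Count();
    }
}
```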

So all that's left are some of the language enhancements that exist to support LINQ. And I do use some of them - mostly type inference (which I love). But I didn't go through the entire body of existing code to use the new language features. The risk of breaking functionality that has worked for 6-7 years is way too high! I can't see why anyone would take such a risk with a body of code, especially one like CSLA that is used by thousands of people world-wide.

That means I used some of the new language features in new code, and in code I had to rework anyway. And to be honest, I use those features sparingly and where I thought they helped.
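
For example, type inference is most valuable around LINQ projections - a small self-contained sketch (not CSLA code) of why:

```csharp
using System.Collections.Generic;
using System.Linq;

class TypeInferenceDemo
{
    // 'var' asks the compiler to infer the type from the initializer.
    // With LINQ projections to anonymous types, 'var' is the only way
    // to declare the result variable at all.
    public static int CountLongNames()
    {
        var names = new List<string> { "data portal", "validation", "authorization" };

        // The element type here is an anonymous { Name, Length } type,
        // which cannot be written out explicitly.
        var projected = names.Select(n => new { Name = n, Length = n.Length });

        return projected.Count(p => p.Length > 10);
    }
}
```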

I think trying to force new technologies/concepts/patterns into code is a bad idea. If a given pattern or technology obviously saves code/time/money or has other clear benefits then I use it, but I try never to get attached to some idea such that I force it into places where it doesn't fit with my overall goals.

I have a question (helping a colleague do some research) for all .NET VB developers.

Do you use late binding in VB? If so, how/why do you use it? What are the scenarios where you find it of value?

I'll start this off with my own observations:

I use late binding when getting data of a given shape from unknown types.

For example, you can write a nice bit of reusable data access code that accepts data from a web service, LINQ object, etc. by using late binding. You can’t easily do this without late binding in fact, because the types of the objects are different even though their shapes are the same.
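
Since VB late binding compiles down to reflection, the "same shape, different types" scenario can be sketched in C# like this (hypothetical types for illustration):

```csharp
using System.Reflection;

// Two unrelated types with the same "shape" - think of one generated
// from a web service proxy and one from LINQ to SQL.
class CustomerFromService { public string Name { get; set; } }
class CustomerFromLinq { public string Name { get; set; } }

class LateBindingDemo
{
    // Late-bound access: works against any object that has a Name
    // property, regardless of its declared type. In VB with
    // Option Strict Off the compiler emits this kind of call for you.
    public static string GetName(object customer)
    {
        PropertyInfo prop = customer.GetType().GetProperty("Name");
        return (string)prop.GetValue(customer, null);
    }
}
```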

That dynamic interface concept that got dropped from VB9 would address this issue in a better way, but late binding makes it work too.

I also use late binding when creating some generic types. There are cases where generics and casting are problematic, but converting a value to type Object first allows you to do a cast or operation that wouldn’t otherwise be allowed. I don’t know if this is “late binding” as such, but it is a useful technique!

I have used late binding when dynamically loading an assembly for interaction. Ideally you’d require the assembly author to implement one of your interfaces, but that’s not always possible, and late binding is a particularly nice way to get “polymorphic” access to multiple assemblies that you don’t control.

I'm just back from the MIX 08 conference. This was the first conference I've attended in many years (around 10 I think) where I wasn't speaking or somehow working. I'd forgotten just how fun and inspiring it can be to simply attend sessions and network with people in the open areas. No wonder people come to conferences!! :)

Not that it was all fun and games. I did have meetings with some key Microsoft people and Scott Hanselman interviewed me for an upcoming episode of Hanselminutes (discussing the various data access and ORM technologies and how they relate to CSLA .NET 3.5).

The Day 1 keynote was everything I'd hoped for.

Well, nearly. The first part of the keynote was Ray Ozzie trying to convey how Microsoft and the web got to where it is now. The goal was to show the vision they are pursuing now and into the future, but I thought the whole segment was rather flat.

But then Scott Guthrie came on stage and that was everything you could hope for. Scott is a great guy, and his dedication and openness seem unparalleled within Microsoft. I remember first meeting him when ASP.NET was being unveiled. At that time he seemed so young and enthusiastic, and he was basically just this kick-ass dev who'd created the core of something that ultimately changed the Microsoft world. Today he seems nearly as young and easily as enthusiastic, and he's overseeing most of the cool technologies that continue to change the Microsoft world. Awesome!

So ScottGu gets on stage and orchestrates a keynote that really illustrates the future of the web. Silverlight (which makes me SOOoooo happy!), IE8, new data access technologies (like we needed more, but they are still cool!) and things like ASP.NET MVC and more.

The real reason for keynotes though, is to inspire. And this keynote didn't disappoint. The demos of Silverlight and related technologies were awesome! There was some funny and cute banter with the casting director from Cirque du Soleil as she demonstrated using a cool disconnected WPF app. There was a fellow RD, Scott Stanfield, showing integration of SeaDragon into Silverlight so we can look (in exquisite detail) at the memorabilia owned by the Hard Rock Cafe company, some thought-provoking demos of Silverlight on mobile devices and more.

Now to be honest, I've never been a fan of the web development model. Having done terminal-based programming for many years before coming to Windows, I find it hard to get excited about returning to that ancient programming model. Well, a worse one actually, because at least the mainframe/minicomputer world had decent state management...

AJAX helps, but the browser makes for a pretty lame programming platform. It is more comparable perhaps to an Apple II or a Commodore 64 than to a modern environment, and that's before you get into the inconsistencies across browsers and that whole mess. Yuck!

Which is why Silverlight is so darn cool! Silverlight 2.0 is really a way to do smart client development with a true web deployment model. Much of the power of .NET and WPF/XAML, with the transparent deployment and cross-platform capabilities of the browser world. THIS is impressive stuff. To me Silverlight represents the real future of the web.

It should come as no surprise then, that I spent my time in Silverlight 2.0 sessions after the keynote. Sure, I've been working (on and off) with Silverlight 1.1/2.0 for the past several months, but it was a lot of fun to see presentations by great speakers like Joe Stegman (a Microsoft PM) and various other people.

One of the best sessions was on game development with Silverlight. I dabble in game development whenever I have spare time (not nearly as much as I'd like), and so the talk was interesting from that perspective. But many of the concepts and techniques they used in their games are things designers and developers will likely use in many other types of application. Examples include background loading of assemblies and content while the app is running, and some clever animation techniques using pure XAML-based concepts (as opposed to some other animation techniques I saw that use custom controls written in C#/VB - which isn't bad, but it was fun to see the pure-XAML approaches).

Many people have asked about "CSLA Light", my planned version of CSLA .NET for Silverlight. Now that we have a Beta 1 of Silverlight I'll be working on a public release of CSLA Light, based on CSLA .NET 3.5. Microsoft has put a lot more functionality into Silverlight 2.0 than they'd originally planned - things like data binding, reflection and other key concepts are directly supported. This means that the majority of CSLA can be ported (with some work) into Silverlight. The data portal is the one big sticking point, and I'm sure that'll be the topic of future blog posts.

My goal is to support the CSLA .NET 3.5 syntax for property declaration and other coding constructs such that with little or no change you can take a business class from full CSLA and have it work in CSLA Light. This goal excludes the DataPortal_XYZ implementations - those will almost always be different, though if you plan ahead and use a DTO-based data access model even that code may be the same. Of course time will tell how closely I'll meet this goal - but given my work with pre-beta Silverlight 2.0 code I think it is pretty realistic.

Scott Guthrie indicated that Silverlight 2.0 Beta 1 has a non-commercial go-live license - right now. And that Beta 2 would be in Q2 (I'm guessing June) and would have a commercial go-live license, meaning it can be used for real work in any context.

The future of the web is Silverlight, and Beta 1 is the start of that future. 2008 is going to be a great year!

I've been working all day on this WPF/WCF application, mostly trying to figure out how to configure WCF to actually do what I want in terms of security and authentication. All those angle brackets from the config files have given me a splitting headache... WCF may be cool, but configuring even relatively simple security scenarios is ridiculously difficult.

And then disaster struck. As though fighting with WCF and SSL wasn't enough, VS 2008 decided to quit publishing my app for ClickOnce. In order to test this app, I need to publish for ClickOnce on my dev box, copy the results to a test server and then run the code on a test client (thankfully we live in an age of virtual machines!!).

So the failure to publish to ClickOnce brought me up short. The issue is that the WPF project wouldn't build. It would build and run fine in all other ways, but not when I tried to publish for ClickOnce. It had been publishing just fine, and then BOOM!

(The only thing I can think of is that I was publishing for online only, then I published for online/offline, and then I switched back to online only - maybe VS doesn't like that sort of waffling and wants me to be more decisive?)

The specific problem is that the .g.i.cs files for each XAML source file that should have been in the obj\Debug directory didn't get there. Google was no help - searching for "clickonce publish .g.i.cs obj\Debug could not be found" resulted in one hit - to an MSDN forums post that was unreachable (I kept getting an MSDN forums error page).

Build|Clean Solution had no effect. Shutting down and reopening VS had no effect. Rebooting the dev box had no effect.

Finally I thought to manually delete the obj and bin folders in the project directory. And for good measure I deleted the .user file and .suo file for the project and solution. Then I reopened the project and it now publishes just fine.
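
For reference, the manual cleanup amounted to something like this from a command prompt in the project directory (Windows cmd; destructive, so close VS first - and the paths are examples, adjust for your solution layout):

```shell
rem Delete all generated build state for the project
rd /s /q obj bin
rem Remove per-user settings for the project and solution
del /q *.user
del /q ..\*.suo
```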

I rather expected this - a bit of confusion around .NET versions and related CSLA .NET versions.

Microsoft started the whole thing by calling .NET 2.5 version 3.0. Oops, did I say that out loud? :)

But it is true. From a .NET programming perspective, 3.0 is purely additive over 2.0. Thus it is really hard to see why it is a major version.

Especially when .NET 3.5 has a much bigger impact on day-to-day use of .NET, but is just a point release... If anything, this should have been .NET 4.0, but it isn't and so now we're all royally stuck in the land of confusion.

Nothing to do but make the best of it.

I know several people and organizations who ignored .NET 3.0, but are now looking to move to .NET 3.5. Effectively "skipping" 3.0, though the reality is that their move to 3.5 is also a move to 3.0. Personally I think that's smart - they saved themselves a year of pain by not trying to use .NET 3.0 with the limited tools available, and can now move to 3.0/3.5 with Visual Studio 2008 - so they have decent tool support for the technology.

At this time I do not anticipate being able to make CSLA 3.5 work without .NET 3.5, primarily due to use of new compiler features as well as LINQ and features in .NET 2.0a and 3.0a (aka .NET 2.0 SP1 and 3.0 SP1).

Visual Studio 2008 Beta 2, along with Microsoft .NET 3.5 Beta 2, is available for download. Here's Soma's official announcement.

I find that downloading such huge sets of files requires a bit of help. My recommendation: Free Download Manager. This tool is awesome - indispensable in fact - if you do any downloads beyond small text files :) It does queued downloads, resumed downloads and throttling. Perhaps best of all, it does multi-threaded downloads, so it maximizes the use of your bandwidth when running at full throttle.

Update: Apparently there are some things you must do/fix before using VS 2008!! Read ScottGu's blog post about it!

Update 2: According to Juval Lowy, the svcutil.exe program in Beta 2 is broken. A workaround is to copy an older (Beta 1?) version of svcutil.exe over the top of the Beta 2 version. Instead, Justin Smith says that you need to run "sn.exe -Vr svcutil.exe" - apparently then you don't need to copy an older version over the new one.

I posted previously about an issue where the WCF NetDataContractSerializer was unable to serialize a SecurityException object. Microsoft provided some insight.

It turns out that the constructor of the SecurityException object doesn't set the Action property to anything valid. Before you can serialize a SecurityException with NDCS you must explicitly set the Action property to a valid SecurityAction.

This does mean that NDCS is not compatible with the BinaryFormatter in this case, but at least there's a workaround/solution.

I've now updated CSLA .NET 3.0 to explicitly set the Action property any time a SecurityException is thrown, thus allowing the WCF data portal channel to return valid details about the nature of any exception.
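
The workaround amounts to something like this sketch (not the exact CSLA code; it targets the .NET Framework, where SecurityException exposes a settable Action property):

```csharp
using System.Security;
using System.Security.Permissions;

class SecurityExceptionFix
{
    // Hypothetical helper: build a SecurityException that the WCF
    // NetDataContractSerializer can actually serialize.
    public static SecurityException CreateSerializableException(string message)
    {
        var ex = new SecurityException(message);

        // The constructor leaves Action invalid; set it to a valid
        // SecurityAction before the exception crosses a WCF boundary.
        ex.Action = SecurityAction.Demand;
        return ex;
    }
}
```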

The WCF NetDataContractSerializer is an almost, but not quite perfect, replacement for the BinaryFormatter.

The NDCS is very important, because without it WCF could never be viewed as a logical upgrade path for either Remoting or Enterprise Services users. Both Remoting and Enterprise Services use the BinaryFormatter to serialize objects and data for movement across AppDomain, process or network boundaries.

Very clearly, since WCF is the upgrade path for these core technologies, it had to include a serialization technology that was functionally equivalent to the BinaryFormatter, and that is the NDCS. The NDCS is very cool, because it honors both the Serializable model and the DataContract model, and even allows you to mix them within a single object graph.

Unfortunately I have run into a serious issue, where the NDCS is not able to serialize the System.Security.SecurityException type, while the BinaryFormatter has no issue with it.

The issue shows up in CSLA in the data portal, because it is quite possible for the server to throw a SecurityException. You'd like to get that detail back on the client so you can tell the user why the server call failed, but instead you get a "connection unexpectedly closed" exception. The reason is that WCF itself blew up when trying to serialize the SecurityException to return it to the client. So rather than getting any meaningful result, the client gets this vague and nearly useless exception instead.

By the way, if you want to see the failure, just run this code:

Dim buffer As New System.IO.MemoryStream
Dim formatter As New System.Runtime.Serialization.NetDataContractSerializer
Dim ex As New System.Security.SecurityException("a test")
formatter.Serialize(buffer, ex)

And if you want to see it not fail run this code:

Dim buffer As New System.IO.MemoryStream
Dim formatter As New System.Runtime.Serialization.Formatters.Binary.BinaryFormatter
Dim ex As New System.Security.SecurityException("a test")
formatter.Serialize(buffer, ex)

I've been doing a lot of work with the NDCS over the past several months. And this is the first time I've encountered a single case where NDCS didn't mirror the behavior of the BinaryFormatter - which is why I do think this is a WCF bug. Now just to get it acknowledged by someone at Microsoft so it can hopefully get fixed in the future...

The immediate issue I face is that I'm not entirely sure how to resolve this issue in the data portal. One (somewhat ugly) solution is to catch all exceptions (which I actually do anyway), and then scan the object graph that is about to be returned to the client to see if there's a SecurityException in the graph. If so perhaps I could manually invoke the BinaryFormatter and just return a byte array. The problem with that is in the case where the object graph is a mix of Serializable and DataContract objects - in which case the BinaryFormatter won't work because it doesn't understand DataContract...

In the end I may just have to leave it be, and people will need to be aware that they can never throw a SecurityException from the server...

Earlier I blogged about the fact that the Orcas Beta 1 VPC image doesn't have ASP.NET set up with IIS, so you have to do that. Unfortunately there are a couple other issues I've discovered. Here's the full list:

1. IIS isn't configured for ASP.NET.

2. Windows authentication isn't enabled for the default web site in IIS - blocking the use of VS debugging until you enable it.

3. The default for a VS Orcas web site is to build for .NET 3.5. If you attempt to debug such a project (when it is set to run in IIS) you'll get an error dialog with a vague message about an authentication error. The reason for this is that ASP.NET only supports .NET 2.0. To resolve this, you must go into the web site's properties dialog and set its target .NET version to 2.0. You can still reference the 3.0 and 3.5 assemblies and use the new features, but VS must build to .NET 2.0 or you can't debug in IIS.

But it isn't just the debugger - other features may not work properly either, possibly resulting in a "hang" when you try to access a page.

For those of you at my workshop at VS Live this past Sunday, this was why my web site wouldn't run properly in the VPC. Fortunately there is this workaround, but I hope Microsoft provides a more comprehensive solution in the release version, because it is quite confusing to have to set your build version back to 2.0 even though you are really building against 3.5...

You might wonder why this matters, given that LINQ uses database indexes to get its data. But that's actually Dlinq, which runs against SQL Server.

LINQ itself runs against objects, arrays, collections, lists and so forth. All of which are just in-memory objects, and obviously aren't indexed at all. LINQ does "table scans" against arrays and lists at all times. Basically LINQ just runs a lot of for-each loops for you. And in the vast majority of cases that is the right answer, because most lists are only a few score or maybe a few hundred items in length, and using for-each is faster than building an index.

However, you might have lists that are big enough, or where you are doing many repeated queries against the same set of properties, where the cost of building an index is lower than the cost of using simple for-each loops. And this is where the ability to index properties of the objects in a list such that LINQ uses the index becomes very useful.
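
Here's a self-contained sketch of that tradeoff in plain C# (a hand-built lookup, not i4o's actual API):

```csharp
using System.Collections.Generic;
using System.Linq;

class Person
{
    public string Name;
    public string City;
    public Person(string name, string city) { Name = name; City = city; }
}

class IndexDemo
{
    static readonly List<Person> People = new List<Person>
    {
        new Person("Ann", "Fargo"),
        new Person("Bob", "Minneapolis"),
        new Person("Carl", "Fargo")
    };

    // What LINQ to Objects does on every query: a full scan,
    // effectively a for-each loop over the whole list.
    public static int CountByScan(string city)
    {
        return People.Where(p => p.City == city).Count();
    }

    // What an index buys you: pay to build it once, then repeated
    // queries against the same property become cheap hash lookups.
    static readonly ILookup<string, Person> ByCity = People.ToLookup(p => p.City);

    public static int CountByIndex(string city)
    {
        return ByCity[city].Count();
    }
}
```

The index only pays off when the list is large or the same property is queried many times - exactly the scenario described above.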

In any case, check out i4o, because it is interesting and very cool stuff!

I am building a WPF UI for my ProjectTracker CSLA .NET sample app. On the whole this is going pretty well, and I anticipate being done within the next couple days. I’ve found and fixed a couple bugs in CSLA – one in BusinessListBase that’s been there forever, and one in ValidationPanel that caused a null reference exception. While I fixed that last one, I optimized the code a bit, which does seem to make the control a bit faster.

But one thing I spent a ridiculous amount of time on was the simple process of getting a ComboBox control to bind to one data source to get its list of items, and to the business object property for the key value. Google turned up a number of search results, none of which really addressed this particular scenario – which seems odd to me given how common a scenario it is…

In ProjectTracker, a person (resource) can be assigned to a project. If they are, they are given a role on that project. The Role property is numeric – a key into a name/value list, and a foreign key into the Roles table in the database. In Windows Forms and Web Forms the UI handles translating this numeric value to a human-readable value through ComboBox controls, and obviously WPF can do the same thing. The trick is in figuring out the XAML to make it happen.

Here’s the ComboBox XAML:

<ComboBox
    ItemsSource="{Binding Source={StaticResource RoleList}}"
    DisplayMemberPath="Value"
    SelectedValuePath="Key"
    SelectedValue="{Binding Path=Role}"
    Width="150" />

Let’s break this down.

The ItemsSource property specifies the data source for the data that will populate the display of the control. In my case it is referencing a data provider control defined like this:

<Page.Resources>
    <csla:CslaDataProvider x:Key="RoleList"
        ObjectType="{x:Type PTracker:RoleList}"
        FactoryMethod="GetList"
        IsAsynchronous="False" />
</Page.Resources>

This data provider control loads a name/value list object called RoleList (that inherits from Csla.NameValueListBase). The child objects in this collection expose properties Key and Value.

You can see how the ComboBox uses the DisplayMemberPath to specify that the Value property from the name/value list should be displayed to the user, and the SelectedValuePath specifies that the Key value from the list should be used to select the current item (the Key value is not displayed to the user).

Then notice that the SelectedValue property is used to bind to the main business object. This binding statement sets a path to a property on the overall DataContext for the ComboBox, which in my case is actually set through code at the Page object level:

this.DataContext = project;

So all controls on the entire page, by default, bind to a Project object, and this includes the ComboBox.

Remember though, that the Role property on the Project is a numeric index value. And so is the Key value from the name/value list. The ComboBox control connects these two automatically. So when the business object’s Role property changes, the ComboBox automatically changes the displayed/selected item. Conversely, when the user changes the ComboBox selected item, that automatically causes the business object’s Role property to change.

I wanted to run a pre-existing web site, so I copied it to the VPC, aimed a virtual directory at it in IIS and BOOM! 404 errors.

I figured it was directory permissions, incorrect paths, all sorts of things.

Eventually, in frustration I opened VS and created a new ASP.NET web project at localhost/test. Guess what? VS kindly informed me that ASP.NET wasn't enabled for IIS on this machine, and asked if I'd like to enable it. Of course I said yes - and instantly my pre-existing web site started working.

So if you decided to use the Beta 1 VPC (which is a nice way to go), be aware that ASP.NET is not enabled for IIS, and you'll need to enable it before doing any work with ASP.NET or WCF.
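
If you prefer to fix it from a command prompt rather than letting VS prompt you, ASP.NET 2.0 can be registered with IIS using the standard aspnet_regiis tool (the path shown is the usual default; adjust for your VPC):

```shell
rem Register ASP.NET 2.0 with IIS (run in an elevated command prompt)
%windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe -i
```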

We're really not that far away from some major new technology. Visual Studio "Orcas" is slated to ship this fall, and will include .NET 3.5; which includes new compilers. And new compilers always means fun! :)

The VB team is running a series of webcasts to bring everyone up to speed on the language changes and the resulting capabilities.

I’ve been spending a lot of time in WPF-land over the past few weeks, and thought I’d share some of what I’ve learned. I haven’t been learning styles or UI layout stuff – Microsoft says that’s the job of the turtleneck-wearing, metrosexual GQ crowd, so I’ll just roll with that. Instead, I’ve been learning how to write data source provider controls, and implement Windows Forms-like behaviors similar to the ErrorProvider and my Csla.Windows.ReadWriteAuthorization control.

You know, manly programming :)

The data source provider control is, perhaps, the easiest thing I’ve done. Like ASP.NET, WPF likes to use a data control concept. And like ASP.NET, the WPF data provider controls are easy to create, because they don’t actually do all that much at runtime. (I do want to say thank you to Abed Mohammed for helping to debug some issues with the CslaDataProvider control!)

I’m sure, as designer support for data provider controls matures in tools like Visual Studio and Expression Blend, that life will get far more complex. Certainly the Visual Studio designer support for Csla.Web.CslaDataProvider has been the single most time consuming part of CSLA .NET, though I was able to create the runtime support in an afternoon…

What I have today, is a Csla.Wpf.CslaDataProvider control that works similar to the ObjectDataProvider. The primary difference is that CslaDataProvider understands how to call Shared/static factory methods to get your objects, rather than calling a constructor method. The result is that you can create/fetch CSLA .NET business objects directly from your XAML.

The really cool part of this, is that CslaDataProvider supports asynchronous loading of the data. To be fair, the hard work is done by the WPF base class, DataSourceProvider. Even so, supporting async is optional, and requires a bit of extra work in the control – work that is worth it though. If you construct a form that has multiple DataContext objects for different parts of the form, loading all of them async should give some nice performance benefits overall.

On the other hand, if your form is bound to a single business object the value isn’t clear at all. Though the data load is async, the form won’t actually display until all the async loads are complete, so for a single data source on a form my guess is that async is actually counter-productive.

The validation/ErrorProvider support is based on some work Paul Stovell published on the web. I conceptually based my work on similar concepts, and have created a ValidationPanel control that uses IDataErrorInfo to determine if any bindings of any controls contained in the panel are invalid.

The panel control loops through all the controls contained inside the panel. On each control it loops through the DependencyProperty elements defined for that control (in WPF controls, normal properties aren’t bindable, only dependency properties). And it then loops through any Binding objects attached to each DependencyProperty. I discovered that those Binding objects are complex little buggers, and that there are different kinds of binding that I need to filter out. Specifically relative bindings and control-to-control bindings must be ignored, because they don’t represent a binding to the actual business object.

To optimize performance the panel does a bunch of caching of binding information, and is relatively sparing in how often it refreshes the validation data – but it is comparable to ErrorProvider, in that changing one business object property does trigger rechecking of all other properties bound within the same ValidationPanel. I think this is necessary, because so many data source objects are constructed around the Windows Forms model. Not following that model would cause a lot of headache when moving to WPF.
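
The IDataErrorInfo contract the panel consults is small; here's a minimal implementation of the kind of data source object involved (a generic sketch, not CSLA's actual code):

```csharp
using System.ComponentModel;

// Minimal IDataErrorInfo implementation of the sort ValidationPanel
// checks: the indexer reports a per-property error message, or an
// empty string when the property is valid.
class Project : IDataErrorInfo
{
    public string Name { get; set; }

    public string this[string propertyName]
    {
        get
        {
            if (propertyName == "Name" && string.IsNullOrEmpty(Name))
                return "Name is required";
            return string.Empty;
        }
    }

    public string Error
    {
        get { return this["Name"]; }
    }
}
```

A panel can loop over its bound properties, query this indexer for each one, and decorate any control whose binding reports a non-empty error string.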

The authorization support is implemented in a manner similar to the validation. Csla.Wpf.AuthorizationPanel uses the CSLA .NET IAuthorizeReadWrite interface and scans all bindings for all controls contained in the panel to see if the business object will allow the current user to read or write to the bound property. If the user isn’t allowed to read the property, the panel either hides or collapses the data bound control (your choice). If the user isn’t allowed to write to the property, the panel sets the control’s IsReadOnly property to true, or if the control doesn’t have IsReadOnly, it sets IsEnabled to false.

AuthorizationPanel doesn’t include the same level of caching as ValidationPanel, but I don’t think it is necessary. Where ValidationPanel refreshes on every property change, AuthorizationPanel only refreshes if the data source object is changed (replaced) or if you explicitly force a refresh – probably because the current user’s principal object has changed.

I want to pause here and point out that I’ve had a lot of help in these efforts from Paul Czywczynski. He’s spent a lot of time trying these controls and finding various holes in my logic. And the next control addresses one of those holes...

The IsValid, IsDirty, IsSavable, IsNew and IsDeleted properties on a CSLA .NET business object are marked as Browsable(false), meaning they aren’t available for data binding. In Windows Forms you can work around this easily by handling a simple event on the BindingSource object. But in WPF the goal is to write no code – to do everything through XAML (or at least to make that possible), so such a solution isn’t sufficient.

Enter the ObjectStatusPanel, which takes the business object’s properties and exposes them in a way that WPF can consume. Using this panel, your object’s status properties (single object or collection) become available for binding to WPF controls, and if the object’s properties change those changes are automatically reflected in the UI. The most common scenario here is to bind a button’s enabled status to the IsSavable property of your editable root business object.

Most recently I’ve started refactoring the code in these controls. For the most part, I’ve now consolidated the common code from all three into a base class: Csla.Wpf.DataPanelBase. This base class encapsulates the code to walk through and find relevant Binding objects on all child controls, and also encapsulates all the related event handling to detect when the data context, data object, object property, list or collection have changed. It turns out that a lot of things can happen during data binding, and detecting all of them means hooking, unhooking and responding to a lot of events.

I wrote all these controls originally using the Dec 2006 CTP, and just started using them in the Mar 2007 CTP. As a pleasant surprise there was no upgrade pain – they just kept working.

In fact, they work better in the Mar 2007 CTP, because the Cider designer (the WPF forms designer) is now capable of actually rendering my custom controls. What I find very interesting is that the designer actually runs the factory method of the CslaDataProvider control, so the form shows real data from the real objects right there in Visual Studio. I’m not sure this is a good thing, but that’s what happens.

There’s no doubt that I’ll find more issues with these controls, and they’ll change over the next few weeks and months.

But the exciting thing is that I’m now able to create WPF forms that have functional parity with Windows Forms, including validation, authorization and object status binding. And it can all be done in either XAML or code, running against standard CSLA .NET business objects.

People attending VS Live at the end of this month will be the first to see these controls in action – both in my workshop on Sunday the 25th and in my sessions during the week. And I plan to put a test version of CSLA .NET 3.0 online that week as well, so anyone who wants to play with it can give it a go.

Right now, if you aren’t faint of heart, you can grab the live code from my svn repository. Keeping in mind, of course, that this is an active repository and so the code in trunk/ may or may not actually work at any given point in time.

String formatting in .NET is a pain. Not that it has ever been easy (even COBOL formatting masks can get out of hand), but there's no doubt that the .NET system is harder to grasp and remember than the VB 1-6 scheme...

I just had a need to format an arbitrary value using a user-supplied format string. You'd think that

obj.ToString(format)

would do the trick. Except that System.Object doesn't have that override of ToString(), so that's not a universal solution. So String.Format() is the obvious next choice, except that I need to somehow take a format string like 'N' or 'd' and make it into something valid for String.Format()...

Brad Abrams has some good info. But his problem/solution isn't quite what I needed. Close enough to extrapolate though:

outValue = string.Format(string.Format("{{0:{0}}}", format), value);

Given a format string of 'N', the inner Format() returns "{0:N}", which is then used by the outer Format() to format the actual value.
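Put together as a quick, runnable illustration:

```csharp
using System;

class Program
{
  static void Main()
  {
    object value = 1234.5;
    string format = "N";

    // inner Format builds the composite format string "{0:N}";
    // outer Format applies it to the actual value
    string outValue =
      string.Format(string.Format("{{0:{0}}}", format), value);

    Console.WriteLine(outValue); // "1,234.50" in a US-English culture
  }
}
```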

Then earlier today I tried building a workflow. And the workflow designer wouldn't open. Instead I got a concise little error dialog saying "Microsoft.VisualStudio.Shell.WindowPane.GetService(System.Type)". This was the case with both C# and VB projects.

Worse, attempting to close VS after that point caused a complete VS crash. It turns out that other designers (like project properties) fail as well, with similar errors.

In talking to some people at Microsoft, I discovered that the problem wasn't universal. But in talking to more people, the root of the issue appeared.

I downloaded the huge 9 part VSTS/TFS edition of the Orcas VPC. Other people downloaded the smaller 7 part VSTS-only edition. The problem only occurs in the big 9 parter, and is due to a side-by-side issue. Some VS 2005 components are in the bigger VPC to support SQL Server, and they are causing the issue. The 7 part VPC doesn't have that functionality, or those components, and so the SxS problem doesn't occur.

So I'm about 80% done downloading the 7 parter, then I can get back to work.

I get a lot of questions about the new ADO.NET Entity Framework and LINQ and how these technologies interact with CSLA .NET. I've discussed this in a few other blog posts, but the question recently came up on the CSLA .NET forum and I thought I'd share my answer:

They are totally compatible, but it is important to remember what they are for.

Both ADO.NET EF and LINQ work with entity objects - objects designed primarily as data containers. These technologies are all about making it an easy and intuitive process to get data into and out of databases (or other data stores) into entity objects, and then to reshape those entity objects in memory.

CSLA .NET is all about creating business objects - objects designed primarily around the responsibilities and behaviors defined by a business use case. It is all about making it easy to build use case-derived objects that have business logic, validation rules and authorization rules. Additionally, CSLA .NET helps create objects that support a rich range of UI-supporting behaviors, such as data binding and n-level undo.

It is equally important to remember what these technologies are not.

ADO.NET EF and LINQ are not well-suited to creating a rich business layer. While it is technically possible to use them in this manner, it is already clear that the process will be very painful for anything but the most trivial of applications. This is because the resulting entity objects are data-centric, and don't easily match up to business use cases - at least not in any way that makes any embedded business logic maintainable or easily reusable.

CSLA .NET is not an object-relational mapping technology. I have very specifically avoided ORM concepts in the framework, in the hopes that someone (like Microsoft) would eventually provide an elegant and productive solution to the problem. Obviously solutions do exist today: raw ADO.NET, the DAAB, nHibernate, Paul Wilson's ORM mapper, LLBLgen and more. Many people use these various technologies behind CSLA .NET, and that's awesome.

So looking forward, I see a bright future. One where the DataPortal_XYZ methods either directly make use of ADO.NET EF and LINQ, or call a data access layer (DAL) that makes use of those technologies to build and return entity objects.

Either way, you can envision this future where the DP_XYZ methods primarily interact with entity objects, deferring all the actual persistence work off to EF/LINQ code. If Microsoft lives up to the promise with EF and LINQ, this model should seriously reduce the complexity of data access, resulting in more developer productivity - giving us more time to focus on the important stuff: object-oriented design ;) .

I have a custom data source control as part of my CSLA .NET framework. It is somewhat like ObjectDataSource, but works with objects that are created through factory methods rather than default constructors (and some other variations).

All is pretty well with this control, except one issue: it fails when adding a CslaDataSource control to a page as a new data source from a DetailsView or other control.

In other words, you add a DetailsView to the page, then tell that control you want it to bind to a new data source and it brings up a wizard, where you pick CslaDataSource so a new one is added to the page.

The problem I’m getting is a VERY odd exception: Csla.Web.CslaDataSource can not be cast to Csla.Web.CslaDataSource.

Yes, that’s right – the type can’t be cast to itself.

I believe this is because the wizard is loading its own copy of Csla.dll into memory, separate from the one used by the web forms designer. Or something like that. This even confuses the debugger – it can’t show details about the type because it says the type is loaded into two different GUIDs. I don’t know what the GUIDs represent (appdomains, versions?), but it is obviously not good.

This is very weird, and after hours of time on this, I’m quite stumped. Any help, clues, pointers or ideas are VERY welcome!

In both cases the issue is that data binding doesn’t refresh the value from the data source after it updates the data source from the UI. This means that any changes to the value that occur in the property set code aren’t reflected in the UI.

The question he posed to me was whether it was a good idea to have a property set block actually change the value. In most programming models, goes the thought, assigning a property to a value can’t result in that property value changing. So any changes to the value that occur in the set block of a property are counter-intuitive, and so you simply shouldn’t change the value in the setter code.

Here’s my response:

The idea of a setter (which is really just a mutator method by another name) changing a value doesn't (or shouldn't) seem counter-intuitive at all.

If we were talking about assigning a value to a public field I’d agree entirely. But we are not. Instead we’re talking about assigning a value to a property, and that’s very different.

If all we wanted were public fields, we wouldn't need the concept of "property" at all. The concept of "property" is merely a formalization of the following:

public fields are bad

private fields are exposed through an accessor method

private fields are changed through a mutator method

creating and using accessor/mutator methods is awkward without a standard mechanism

So the concept of "property" exists to standardize and formalize the idea that we need controlled access to private fields, and a standard way to change their value through a mutator method.

Consider the business rule that says a document id must follow a certain form - like SOP433. The first three characters must be alpha and upper case, the last three must be numeric. This is an incredibly common scenario for document, product, customer and other user-entered id values.

Only a poor UI would force the user to actually enter upper case values. The user should be able to type what they want, and the software will fix it.

But putting the upper case rule in the UI is bad, because code in the UI isn't reusable, and tends to become obsolete very rapidly as technology and/or the UI design changes. There's nothing more expensive over the life of an application than a line of code in the UI. So while it is possible to implement this rule in a validation control, in JavaScript, in a button click event handler - none of those are good solutions to the real problem.

Yet if that rule is placed purely in the backend system, then the user can't get any sort of interactive response. The form must be "posted" or "transmitted" to the backend before the processing can occur. Users want to immediately see the value be upper case or they get nervous.

So then we're stuck. Many people implement the rule twice. Once in the UI to make the user happy, and once in the backend, which is the real rule implementation. And then they try to keep those rules in sync forever - the result being an expensive, unreliable and hard to maintain system.

I've watched this cycle occur for 20 years now, and it is the same time after time. And it sucks.

This, right here, is why VB got such a bad name through the 1990s. The VB forms designer made it way too easy to write all the logic in the UI, and without any other clear alternative that's what happened. The resulting applications are very fragile and are impossible to upgrade to the next technology (like .NET). Today, as we talk, many thousands of lines of code are being written in Windows Forms and Web Forms in exactly the same way. Those poor people will have a hell of a time upgrading to WPF, because none of their code is reusable.

What's needed is one location for this rule. Business objects offer a workable solution here. If the object implements the rule, and the object runs on the client workstation, then (without code in the UI) the user gets immediate response and the rule is satisfied. And the rule is reusable, because the object is reusable - in a way that UI code never can be (or at least never has been).

That same object, with that same interactive rule, can be used behind Windows Forms, Web Forms, WPF and even a web services interface. The rule is always applied, because it is right there in the object. And for interactive UIs it is immediate, because it is in the field's mutator method (the property setter).

So in my mind the idea of changing a value in a setter isn't counter-intuitive at all - it is the obvious design purpose behind the property setter (mutator). Any other alternative is really just a ridiculously complex way of implementing public fields. And worse, it leaves us where we've been for 20+ years, with duplicate code and expensive, unreliable software.

Here's an issue from Windows Forms that appears to have crept into WPF as well – along with a solution (thanks to Sam Bent and Kevin Moore from the WPF team):

Consider a class with a property that enforces a business rule - such as that the value must be all upper case:

public class Test : INotifyPropertyChanged
{
  public event
    System.ComponentModel.PropertyChangedEventHandler PropertyChanged;

  // ...

  private string _data;
  public string Data
  {
    get { return _data; }
    set
    {
      _data = value.ToUpper();
      OnNotifyPropertyChanged("Data");
    }
  }
}

Bind this to a TextBox and type in a lower-case value. The user continues to see the lower-case value on the screen, even though the object obviously has an upper-case value. The PropertyChanged event is blissfully ignored by WPF data binding.

I believe this is the same "optimization" as in Windows Forms, where the assumption is that since the value was put into the object by data binding, it can’t be different from what's on the screen - so no refresh is needed. Obviously that is an unfortunate viewpoint, as it totally ignores the idea that an object might be used to centralize business logic or behavior...

In Windows Forms the solution to this issue is relatively simple: handle an event from the BindingSource and force the BindingSource to refresh the value. Bill McCarthy wrapped this solution into an extender control, which I included in CSLA .NET, making the workaround relatively painless.

In WPF the solution is slightly different, but also relatively painless.

It turns out that this optimization doesn’t occur if an IValueConverter is associated with the binding, and if the binding’s UpdateSourceTrigger is not PropertyChanged.

For the TextBox control the UpdateSourceTrigger is LostFocus, so it is good by default, but you’ll want to be aware of this property for other control types.

An IValueConverter object’s purpose is to format and parse the value as it flows to and from the target control and source data object. In my case however, I don’t want to convert the value at all, I just want to defeat this unfortunate “optimization”. What’s needed is an identity converter: a converter that does no conversion.
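A minimal identity converter might look like this (a sketch; the class name is mine, not necessarily what ships in Csla.Wpf):

```csharp
using System;
using System.Globalization;
using System.Windows.Data;

// Identity converter: passes the value through unchanged in both
// directions, existing only so the binding has a converter attached.
public class IdentityConverter : IValueConverter
{
  public object Convert(object value, Type targetType,
    object parameter, CultureInfo culture)
  {
    return value;
  }

  public object ConvertBack(object value, Type targetType,
    object parameter, CultureInfo culture)
  {
    return value;
  }
}
```

The converter is then attached to the affected binding via the Binding’s Converter property, which is enough to defeat the optimization.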

For my part, I want to thank everyone who contributed to the CSLA .NET forums, and to the CSLAcontrib project. The time and energy you have all put in over the past few months has been a great help to the CSLA .NET community, and I know there are many people out there who are grateful for your efforts!

Most importantly though, I want to thank all the users of CSLA .NET and everyone who has purchased copies of my books. At the end of the year I received numerous emails thanking me for creating the framework (and I appreciate that), but I seriously want to thank all of you for making this a vibrant community. CSLA .NET is one of the most widely used development frameworks for .NET, and that is because each of you have taken the time to learn and use the framework. Thank you!

For me 2006 was a year of change. Starting with CSLA .NET 2.0 I've been viewing CSLA .NET as not just an offshoot of my books, but as a framework in its own right. Of course many people have been treating it that way for years now, but I hope it has been helpful to have me treat point releases a bit more formally over the past number of months.

This extends to version 2.1, which represents an even larger change for me. With version 2.1 I'm releasing my first self-published ebook to cover the changes. This ebook is not a standalone book, rather it is best thought of as a "sequel" to the 2005 book. However, it is well over 150 pages and covers both the changes to the framework itself, as well as how to use the changes in your application development. The ebook is undergoing technical review. That and the editing process should take 2-3 weeks, so the ebook will be available later this month.

Looking at the rest of 2007 it is clear that I'll be spending a lot of time around .NET 3.0 and 3.5.

I'll be merging the WcfChannel into CSLA .NET itself, as well as implementing support for the DataContract/DataMember concepts. This, possibly coupled with one WPF interface implementation for collections, will comprise CSLA .NET 3.0.

It is not yet clear to me what changes will occur due to .NET 3.5, but I expect them to be more extensive. Some of the new C#/VB language features, such as extension methods and lambda expressions, have the potential to radically change the way we think about interacting with objects and fields. When you can add arbitrary methods to any type (even sealed types like String) many interesting options become available.

Then there's the impact of LINQ itself, and integration with the ADO.NET Entity Framework in one manner or another.

ADO EF appears, at least on the surface, to be YAORM (yet another ORM). If that continues to be true, then it is a great way to get table data into data entities, but it doesn't really address mapping the data into objects designed around use cases and responsibility. If you search this forum for discussions on nHibernate you'll quickly see how ADO EF might fit into the CSLA .NET worldview just like nHibernate does today: as a powerful replacement for basic ADO.NET and/or the DAAB.

LINQ is potentially more interesting, yet more challenging. It allows you to run select queries across collections. At first glance you might think this eliminates the need for things like SortedBindingList or FilteredBindingList. I’m not sure that’s true though, because the result of any LINQ query is an IEnumerable<T>. This is the most basic type of list in .NET; so basic that the result must often be converted to a more capable list type.

Certainly when you start thinking about n-level undo this becomes problematic. BusinessBase (BB) and BusinessListBase (BLB) work together to implement the undo capabilities provided by CSLA .NET. Running a LINQ query across a BLB results in an IEnumerable<T>, where T is your BB-derived child type. At this point you’ve lost all n-level undo support, and data binding (Windows Forms, and any WPF grid) won’t work right either.
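A trivial, runnable illustration of the point, using a plain List<T> to stand in for a BLB-derived collection:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
  static void Main()
  {
    // whatever the rich list type, a LINQ query yields a basic IEnumerable<T>
    List<int> numbers = new List<int> { 3, 1, 2 };
    IEnumerable<int> result = from n in numbers where n > 1 select n;

    // the query result is not a List<int> (or any richer list type);
    // to regain one you must copy the items into it explicitly
    Console.WriteLine(result is List<int>); // False
    List<int> asList = new List<int>(result);
    Console.WriteLine(asList.Count); // 2
  }
}
```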

So at the moment, I’m looking at LINQ being most useful in the Data Access Layer, along with ADO EF, but time will tell.

The point of all this rambling is this: I didn’t rush CSLA .NET 1.0 or 2.0. They came out when I felt I had a good understanding of the issues I wanted to address in .NET 1.0 and 2.0 respectively, and when I felt I had meaningful solutions or answers to those issues. I’m treating .NET 3.5 (and presumably CSLA .NET 3.5) the same way. I won’t rush CSLA .NET to meet an arbitrary deadline, and certainly not to match Microsoft’s release of .NET 3.5 itself. There’s no point coming out with a version of CSLA .NET that misses the mark, or that provides poor solutions to key issues.

So in 2007 I’ll most certainly be releasing the version 2.1 ebook and CSLA .NET 3.0 (probably with another small ebook). Given that Microsoft’s vague plans are to have .NET 3.5 out near the end of 2007, I don’t expect CSLA .NET 3.5 to be done until sometime in 2008; but you can expect to see beta versions and/or my experiments around .NET 3.5 as the year goes on.

Of course I’ll be doing other things beyond CSLA .NET in 2007. I’m lined up to speak at the SD West and VS Live San Francisco conferences in March. I’m speaking in Denver and Boulder later in January, and I’ll be doing other speaking around the country and/or world as the year goes on. Click here for the page where I maintain a list of my current speaking engagements.

To close, thank you all for your support of the CSLA .NET community, and for your kind words over the past many months. I wish you all the best in 2007.

Perhaps most importantly, SP1 rolls up a number of hotfixes that many people have been using for a long time to improve the stability and performance of Visual Studio 2005. I know this is one service pack I'm installing immediately!!

Several people have asked me about my thoughts on the Microsoft .NET 3.0 Workflow Foundation (WF) technology.

My views certainly don’t correspond with Microsoft's official line. But the “official line” comes from the WF marketing team, and they'll tell you that WF is the be-all-and-end-all, and that's obviously silly. Microsoft product teams are always excited about their work, which is good and makes sense. We all just need to apply an "excitement filter" to anything they say, bring it back to reality and decide what really works for us. ;)

Depending on who you talk to, WF should be used to do almost anything and everything. It can drive your UI, replace your business layer, orchestrate your processes and workflow, manage your data access and solve world hunger…

My view on WF is a bit more modest:

Most applications have a lot of highly interactive processes - where users edit, view and otherwise interact with the system. These applications almost always also have some non-interactive processes - where the user initiates an action, but then a sequence of steps are followed without the user's input, and typically without even telling the user about each step.

Think about an inventory system. There's lots of interaction as the user adds products, updates quantities, moves inventory around, changes cost/price data, etc. Then there's almost always a point at which a "pick list" gets generated so someone can go into the warehouse and actually get the stuff so it can be shipped or used or whatever. Generating a pick list is a non-trivial task, because it requires looking at demand (orders, etc.), evaluating what products to get, where they are and ordering the list to make the best use of the stock floor personnel's time. This is a non-interactive process.

Today we all write these non-interactive processes in code. Maybe with a set of objects working in concert, but more often as a linear or procedural set of code. If a change is needed to the process, we have to alter the code itself, possibly introducing unintended side-effects, because there's little isolation between steps.

Personally I think this is where WF fits in. It is really good at helping you create and manage non-interactive processes.

Yes, you have to think about those non-interactive processes in a different way to use WF. But it is probably worth it, because in the end you'll have divided each process into a set of discrete, autonomous steps. WF itself will invoke each step in order, and you have the pleasure (seriously!) of creating each step as an independent unit of code.

From an OO design perspective it is almost perfect, because each step is a use case, that can be designed and implemented in isolation - which is a rare and exciting thing!

Note that getting to this point really does require rethinking the non-interactive process. You have to break the process down into a set of discrete steps, ensuring that each step has very clearly defined inputs and outputs, and the implementation of each step must ensure its own prerequisites are met, because it can't know in what order things will eventually occur.

The great thing (?) about this design process is that the decomposition necessary to pull it off is exactly the same stuff universities were teaching 25 years ago to COBOL and FORTRAN students. This is procedural programming "done right". To me though, the cool thing is that each "procedure" now becomes a use case, and so we're finally in a position to exploit the power of procedural AND object-oriented design and programming! (and yes, I am totally serious)

So in the end, I think that most applications have a place for WF, because most applications have one or more of these non-interactive processes. The design effort is worth it, because the end result is a more flexible and maintainable process within your application.

Now on one hand this makes sense. There's no doubt that WinFX introduces major functionality to .NET. Windows Presentation Foundation is the effective successor to Windows Forms after all - how much more major can you get??

On the other hand, the new .NET 3.0 doesn't break any .NET 2.0 code, and yet it "includes" .NET 2.0. All .NET 3.0 does is add new stuff. Typically, when I think of a major version number changing, I expect that I'll have to retest everything and that much of my stuff might break or be affected. None of that is happening here.

Even changing from .NET 1.0 to 1.1 brought tons of headaches (if you used Remoting or Enterprise Services at least). And that was a point release. Yet here we have a major version change that doesn't change any existing bits.

I guess it just goes to show that there are no hard-and-fast rules in the software industry ;)

In any case, there's no doubt that Microsoft will reduce confusion overall by keeping everything under the .NET moniker, so I think this is a wise move.

I’ve run into a spot where I’m stuck, and I’m hoping someone has an idea.

CSLA .NET 2.0 includes an ASP.NET data source control: CslaDataSource. This works well, except for one issue, which is that it doesn’t refresh the metadata for your business objects unless you close VS 2005 and reopen the IDE.

The reason for this problem is that your business assembly gets loaded into memory so the data source control can reflect against it to gather the metadata. That part works fine, but once an assembly is loaded into an AppDomain it can’t be unloaded. It is possible to unload an entire AppDomain however, and so that’s the obvious solution: load the business assembly into a temporary AppDomain.

So this is what I’m trying to do, and where I’m stuck. You see VS 2005 has a very complex way of loading assemblies into ASP.NET web projects. It actually appears to use the ASP.NET temporary file scheme to shadow the assemblies as it loads them. Each time you rebuild your solution (or a dependent assembly – like your business assembly), a new shadow directory is created.

The CslaDataSource control is loaded into the AppDomain from the first shadow directory – and from what I can tell that AppDomain never unloads, so the control is always running from that first shadow directory. And then – even if I use a temporary AppDomain – the business assembly is also loaded from that same shadow directory, even if newer ones exist.

And that’s where I’m stuck. I have no idea how to find out the current shadow directory, and even if I do odd things like hard-coding the directory, then I just get in trouble because the new AppDomain thinks it has a different Csla.dll than the AppDomain hosting the Web Forms designer.

Here’s the code that loads the Type object within the temporary AppDomain:

public IDataSourceFieldSchema[] GetFields()
{
  List<ObjectFieldInfo> result = new List<ObjectFieldInfo>();

  System.Security.NamedPermissionSet fulltrust =
    new System.Security.NamedPermissionSet("FullTrust");
  AppDomain tempDomain = AppDomain.CreateDomain(
    "__temp",
    AppDomain.CurrentDomain.Evidence,
    AppDomain.CurrentDomain.SetupInformation,
    fulltrust,
    new System.Security.Policy.StrongName[] { });
  try
  {
    // load the TypeLoader object in the temp AppDomain
    Assembly thisAssembly = Assembly.GetExecutingAssembly();
    int id = AppDomain.CurrentDomain.Id;
    TypeLoader loader =
      (TypeLoader)tempDomain.CreateInstanceFromAndUnwrap(
        thisAssembly.CodeBase, typeof(TypeLoader).FullName);

    // load the business type in the temp AppDomain
    Type t = loader.GetType(_typeAssemblyName, _typeName);

    // load the metadata from the Type object
    if (typeof(IEnumerable).IsAssignableFrom(t))
    {
      // this is a list, so get the item type
      t = Utilities.GetChildItemType(t);
    }
    PropertyDescriptorCollection props =
      TypeDescriptor.GetProperties(t);
    foreach (PropertyDescriptor item in props)
      if (item.IsBrowsable)
        result.Add(new ObjectFieldInfo(item));
  }
  finally
  {
    AppDomain.Unload(tempDomain);
  }
  return result.ToArray();
}

This replaces the method of the same name from ObjectViewSchema in the book.

Notice that it creates a new AppDomain and then invokes a TypeLoader class inside that AppDomain to create the Type object for the business class. The TypeLoader is a new class in Csla.dll that looks like this:
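A minimal sketch of such a class, consistent with that description (the actual Csla.dll implementation may differ):

```csharp
using System;
using System.Reflection;

// Sketch of a TypeLoader. Deriving from MarshalByRefObject means the
// object lives in the temporary AppDomain and is called across the
// boundary by proxy, so the business assembly loads into that
// AppDomain rather than the designer's AppDomain.
public class TypeLoader : MarshalByRefObject
{
  public Type GetType(string assemblyName, string typeName)
  {
    // runs inside the temporary AppDomain
    Assembly assembly = Assembly.Load(assemblyName);
    return assembly.GetType(typeName, true);
  }
}
```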

Since this object is created in the temporary AppDomain, the business assembly is loaded into that AppDomain. The Type object is [Serializable] and so is serialized back to the main AppDomain so the data source control can get the metadata as needed.

This actually all works – except that it doesn’t pick up new shadow directories as they are created.

Any ideas on how to figure out the proper shadow directory are appreciated.

Honestly, I can’t figure out how this works in general – because obviously some part of the VS designer picks up the new shadow directory and uses it – even though the designer apparently doesn’t. I am quite lost here.

To make matters worse, things operate entirely differently when a debugger is attached to VS than when not. When a debugger is attached to VS then nothing appears to pick up the new shadow directories – so I assume the debugger is interfering somehow. But it makes tracking down the issues really hard.

I just got back from Norway (so my body has no idea what time it actually is right now...), and one of the conversations I had while there was about data binding a TextBox to an object's property that is a Nullable<T> - like Nullable(Of Integer) or something.

Somehow I had expected that Windows Forms would have anticipated this (obvious) concept and would handle it. Not so...

Fortunately, as a result of this conversation, one of the people at the conference took some of the ideas we were tossing around and came up with an extender control to address the issue. Very nice!

This is an interesting, and generally good, idea as I see it. Unfortunately this team, like most of Microsoft, apparently just doesn't understand the concept of data hiding in OO. SPOIL allows you to use your object's properties as data elements for a stored procedure call, which is great as long as you only have public read/write properties. But data hiding requires that you have some private fields that simply aren't exposed as public read/write properties. If SPOIL supported using fields as data elements for a stored procedure call it would be totally awesome!

The same is true for LINQ. It works against public read/write properties, which means it is totally useless if you want to use it to load "real" objects that employ basic concepts like encapsulation and data hiding. Oh sure, you can use LINQ (well, dlinq really) to load a DTO (data transfer object - an object with only public read/write properties and no business logic) and then copy the data from the DTO into your real object. Or you could try to use the DTO as the "data container" inside your real object rather than using private fields. But frankly those options introduce complexity that should be simply unnecessary...

While it is true that loading private fields requires reflection - Microsoft could solve this. They do own the CLR after all... It is surely within their power to provide a truly good solution to the problem, that supports data mapping and also allows for key OO concepts like encapsulation and data hiding.
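For illustration, here's a minimal sketch of the kind of reflection a data mapper must use today to set a private field while respecting encapsulation (the Person class, mLastName field and SetField helper are hypothetical):

```vb
Imports System.Reflection

Public Class Person
  Private mLastName As String = ""
  Public ReadOnly Property LastName() As String
    Get
      Return mLastName
    End Get
  End Property
End Class

Module FieldLoader
  ' Set a private field by name - the kind of work a data
  ' mapper must do to load an object without breaking
  ' encapsulation by adding public setters.
  Sub SetField(ByVal target As Object, _
    ByVal fieldName As String, ByVal value As Object)
    Dim info As FieldInfo = target.GetType().GetField( _
      fieldName, BindingFlags.Instance Or BindingFlags.NonPublic)
    info.SetValue(target, value)
  End Sub

  Sub Main()
    Dim p As New Person
    SetField(p, "mLastName", "Smith")
    Console.WriteLine(p.LastName)  ' prints Smith
  End Sub
End Module
```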

In a previous post I discussed some issues I’ve been having with the ASP.NET Development Server (aka VS Host or Cassini).

I have more information direct from Microsoft on my issue. It turns out that it is “by design”, and a sad thing this is… VS Host is designed such that the thread on which your code runs can (and does) go between AppDomains.

Objects placed on the Thread object, such as the CurrentPrincipal, must be serializable and the assembly must be available to all AppDomains; even the primary AppDomain that isn’t running as part of your web site!

And this is the root of my problem. I create a custom IPrincipal object in an assembly (dll). I put it in the Bin directory and then use it – which of course means it ends up on the Thread object. Cassini then runs my code in an AppDomain for my web site and all is well until it switches out into another AppDomain that isn’t running my web site (but rather is just running Cassini itself). Boom!

Why boom? Well, that custom IPrincipal object on the thread is still on the thread. When the thread switches to the other AppDomain, objects directly attached to the thread (like CurrentPrincipal) are automatically serialized, the byte stream transferred to the new AppDomain, and deserialized into the new AppDomain. This means that the new AppDomain must have access to the assembly containing the custom IPrincipal class – but of course it doesn’t, because it isn’t running as part of the web site and thus doesn’t have access to the Bin directory.
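For context, a custom principal that can survive this cross-AppDomain serialization looks roughly like the following sketch (the CustomPrincipal name and role handling are illustrative, not from the original post):

```vb
Imports System.Collections.Generic
Imports System.Security.Principal

' Marked <Serializable()> so it can cross AppDomain boundaries
' when attached to the thread - but the containing assembly
' must still be loadable in the destination AppDomain.
<Serializable()> _
Public Class CustomPrincipal
  Implements IPrincipal

  ' The identity must itself be serializable
  ' (GenericIdentity is).
  Private mIdentity As IIdentity
  Private mRoles As New List(Of String)

  Public Sub New(ByVal identity As IIdentity)
    mIdentity = identity
  End Sub

  Public ReadOnly Property Identity() As IIdentity _
    Implements IPrincipal.Identity
    Get
      Return mIdentity
    End Get
  End Property

  Public Function IsInRole(ByVal role As String) As Boolean _
    Implements IPrincipal.IsInRole
    Return mRoles.Contains(role)
  End Function
End Class
```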

What’s the answer? Either don’t use Cassini (which has been my answer), or install the assembly with the custom IPrincipal into the GAC. Technically the latter answer is the “right” one, but that has the ugly side-effect of preventing rapid custom application development. All of a sudden you can’t just change a bit of code and press F5 to test; instead you must build your code, update the GAC and then you can test. Nasty…

As an aside, this is exactly the same issue you’ll run into when using nunit to test code that uses a custom IPrincipal on the thread. Unlike nunit however, you can’t predict when Cassini will switch you to another AppDomain so you can’t work around the issue by clearing the CurrentPrincipal like you can with nunit (or at least I haven’t found the magic point at which to do it…).
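With nunit, the workaround mentioned is to reset the thread principal at a known point, such as test teardown – a sketch (the AuthTestBase name is illustrative):

```vb
Imports System.Security.Principal
Imports System.Threading

Public Class AuthTestBase
  ' Call this from test teardown so no custom IPrincipal type
  ' is left on the thread when the runner crosses AppDomains.
  Public Sub ClearPrincipal()
    Thread.CurrentPrincipal = New GenericPrincipal( _
      New GenericIdentity(""), New String() {})
  End Sub
End Class
```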

What’s really scary is that it was implied that this could happen under IIS as well – but that flies in the face of years of experiential evidence to the contrary. I guess the safe thing to do is to treat IIS like Cassini, and put shared assemblies in the GAC. But I’m not sure I’m ready to advocate that yet, because that means complicating installs a whole lot, and I’ve never encountered this threading issue under IIS so I don’t think it is a real issue.

Microsoft announced Go Live licenses this morning for WCF (Windows Communication Foundation / “Indigo”) and WF (Windows Workflow Foundation), which allow customers to use the January Go Live releases of WCF and WF in their deployment environments. (Note that these are unsupported Go Lives.)

I know when to use a tank (plodding and durable lethality) and I know when to use an A-10 (fast, maneuverable and vulnerable lethality), but if you make tanks fly and add a few feet of armor to an A-10 then you get the same muddy water we have between C# and VB .NET. Those who know me will forgive the military analogy ;)

I continued the analogy:

The problem we have today, in my opinion, is that C# is a flying tank and VB is a heavily armored attack plane.

Microsoft did wonderful things when creating .NET and these two languages - simply wonderful. But the end result is that no sane person would purchase either a tank or an A-10 now, because both sets of capabilities can be had in a single product. Well, actually two products that are virtually identical except for their heritage.

Of course both hold baggage from history. For instance, C# clings to the obsolete concept of case-sensitivity, and VB clings to the equally obsolete idea of line continuation characters.

Unfortunately the idea of creating a whole new language where the focus is on the compiler doing more work and the programmer doing less just isn't in the cards. It doesn't seem like there's any meaningful language innovation going on, nor has there been for many, many years...

(Even LINQ (cool though it is) doesn't count. We had most of LINQ on the VAX in FORTRAN and VAX Basic 15 years ago...)

The only possible area of interest here is DSLs (domain-specific languages), and I personally think they are doomed to be a repeat of CASE.

As I’m working on the next edition of my Expert VB/C# Business Objects books I’m building both a Web Forms and Web Services interface. And I’m running into issues with the ASP.NET Development Server provided with Visual Studio 2005. This web server is often referred to as the VS Host.

The issue is with assembly loading. Apparently the VS Host has difficulties loading assemblies at times, which causes issues.

For instance, you can’t test a custom membership provider if it is in a separate assembly that is referenced by your web site – that causes an assembly load exception.

In my particular case the problem I’m having is that I put a custom IPrincipal object on the thread and HttpContext. That works great, but at some point VS Host apparently switches to some other internal AppDomain – triggering an attempt to serialize and deserialize that principal object. The result? BOOM!

Again, an assembly load exception – this time because the attempt to deserialize the principal object into that other mysterious AppDomain fails…

All my usual tricks have failed. I’ve tried clearing the principal from the thread and HttpContext before the page processing is complete. No dice. I’ve tried handling the AppDomain event that is raised when an assembly can’t be found. No dice (which isn’t a surprise, since it isn’t MY AppDomain that’s having the problem).

In the end, the VS Host appears to be useless for any work where you are using custom principal or custom membership provider classes. In a casual conversation on this topic a friend of mine suggested that perhaps VS Host was for prototypes and hobbyists; and I think he was right. Real web development still must be done with IIS only...

Brant Estes, a fellow Magenic consultant, wrote a very interesting blog entry showing how you can embed those pesky extra DLLs into your EXE, and yet have .NET find them when needed. He even tried it with CSLA .NET, creating a single EXE that embeds the CSLA assembly as a resource - very cool!

ClickOnce is a great technology, but it seems that documentation is a bit scarce... In particular, like all web-based technologies, it requires some obscure configuration. The web may be great in some ways, but it does often require that you have a lot of arcane knowledge to do even the simplest thing...

This forum thread covers the little-documented fact that for IIS 6.0 the web server serving up a ClickOnce application needs to map the .application, .manifest and .deploy file types in IIS as follows:
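For reference (these values come from the ClickOnce deployment documentation, not the original post – verify against the linked thread), the MIME type mappings are:

```
.application  application/x-ms-application
.manifest     application/x-ms-manifest
.deploy       application/octet-stream
```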

In the case of CSLA .NET 2.0 I'm writing my own CslaDataSource that understands how to work nicely with CSLA .NET business objects. In conferring with the ASP.NET team it seems like this is the best option. It turns out that writing a DataSource control isn't overly difficult - it is writing the designer support (for VS 2005) that is the hard part...

ASP.NET 2.0 has bi-directional data binding, which is a big step forward. This means you can bind the UI to a data source (DataTable, object, etc.) and not only display the data, but allow the user to do “in-place” editing that updates the data source when the page is posted back.

The end result is very cool, since it radically reduces the amount of code required for many common data-oriented web pages.

Unfortunately the data binding implementation isn’t very flexible when it comes to objects. The base assumption is that “objects” aren’t intelligent. In fact, the assumption is that you are binding to “data objects” – commonly known as Data Transfer Objects or DTOs. They have properties, but no logic.

In my case I’m working with CSLA .NET 2.0 objects – and they do have logic. Lots of it: validation, authorization and so forth. They are “real” objects in that they are behavior-based, not data-based. And this causes a bit of an issue.

There are two key constraints that are problematic.

First, ASP.NET insists on creating instances of your objects at will – using a default constructor. CSLA .NET follows a factory method model where objects are always created through a factory method, providing for more control, abstraction and flexibility in object design.

Second, when updating data (the user has edited existing data and clicked Update), ASP.NET creates an instance of your object, sets its properties and calls an Update method. CSLA .NET objects are aware of whether they are new or old – whether they contain a primary key value that matches a value in the database or not. This means that the object knows whether to do an INSERT or UPDATE automatically. But when a CSLA .NET object is created out of thin air it obviously thinks it is new – yet in the ASP.NET case the object is actually old, but has no way of knowing that.

The easiest way to overcome the first problem is to make business objects have a public default constructor. Then they play nicely with ASP.NET data binding. The drawback to this is that anyone can then bypass the factory methods and incorrectly create the objects with the New keyword. That is very sad, since it means your object design can’t count on being created the correct way, and a developer consuming the object might use the New keyword rather than the factory method and really mess things up. Yet at present this is how I’m solving issue number one.
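A minimal sketch of the conflict (the Customer class is hypothetical): the private constructor enforces the factory method, which is exactly what ASP.NET data binding cannot cope with:

```vb
Public Class Customer
  ' Private constructor: only the factory can create instances.
  Private Sub New()
  End Sub

  ' Factory method providing control, abstraction and
  ' flexibility in object creation.
  Public Shared Function GetCustomer(ByVal id As Integer) As Customer
    Dim obj As New Customer
    ' ... load object state for id ...
    Return obj
  End Function
End Class

' ASP.NET data binding effectively does the following, which
' fails unless the constructor is made public:
'   Dim c As New Customer()
```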

I am currently solving issue number two through an overloaded Save method in BusinessBase: Save(forceUpdate) where forceUpdate is a Boolean. Set it to True and the business object forces its IsNew flag to False, thus ensuring that an update operation occurs as desired. This solution works wonderfully for the web scenario, but again opens up the door to abuse in other settings. Yet again a consumer of the object could introduce bugs into their app by calling this overload when it isn’t appropriate.
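A simplified sketch of that overload (not the actual BusinessBase code):

```vb
Public MustInherit Class BusinessBase
  Private mIsNew As Boolean = True

  Public ReadOnly Property IsNew() As Boolean
    Get
      Return mIsNew
    End Get
  End Property

  Public Overridable Function Save() As BusinessBase
    ' Normal save: IsNew decides INSERT vs UPDATE.
    ' ...
    Return Me
  End Function

  Public Function Save(ByVal forceUpdate As Boolean) As BusinessBase
    If forceUpdate AndAlso IsNew Then
      ' The object was created out of thin air by ASP.NET but
      ' actually represents existing data - force an UPDATE.
      mIsNew = False
    End If
    Return Save()
  End Function
End Class
```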

The only way out of this mess that I can see is to create my own ASP.NET data source control that understands how CSLA .NET objects work. I haven’t really researched that yet, so I don’t know how complex it is to write such a control. I did try to subclass the existing ObjectDataSource control, but it doesn’t provide the extensibility points I need, so that doesn’t work. The only answer is probably to subclass the base data source control class itself…

This is the conclusion (for now) of the issue with Windows Forms data binding that I discussed in an earlier entry.

To recap, the issue occurs in detail forms built using data binding. The current control’s value isn’t refreshed when the user tabs off the control. If the data source (business object or smart DataTable with a partial class) changed the value due to business logic that change is not reflected in the UI, even though the changed value is in the data source.

In talking to the data team at Microsoft, it turns out that this is a bug and will likely be fixed in some future service pack. At this late stage of the game however, it won’t be fixed for release of VS 2005.

Fortunately they were able to come up with a decent workaround in the meantime. The workaround is done in the UI and involves hooking an event from each Binding object, then in the event handler forcing the current value to be refreshed from the data source.

To set this up, add the following code to your form:

Private Sub HookBindings()
  For Each ctl As Control In Me.Controls
    For Each b As Binding In ctl.DataBindings
      AddHandler b.BindingComplete, AddressOf ReadBinding
    Next
  Next
End Sub

Private Sub ReadBinding(ByVal sender As Object, _
  ByVal e As BindingCompleteEventArgs)

  e.Binding.ReadValue()
End Sub

Then just make sure to call HookBindings in your form’s Load event or constructor. Ideally this is the kind of thing you’d do in a base form class, then make all your normal forms just inherit from that base so there’s no extra code in each actual form.

The HookBindings method loops through all controls on the form, and all Binding objects for each control. Every Binding object’s BindingComplete event is hooked to the ReadBinding method – making it the event handler for all Binding objects.

The ReadBinding method handles all BindingComplete events for the form. Any time that event is raised this method merely forces data binding to read the current value from the data source and refresh the control.

Since BindingComplete is raised after the user has tabbed off a control and the binding is “complete”, this refresh of the display value ensures that the current control does actually contain the value from the data source even if the data source changed the value.

In my last post I discussed a nasty bug with Windows Forms data binding, along with one possible solution (which is most certainly a hack). This led to some discussion – including a couple people observing that this hack should probably be in the UI layer, not the business layer (which is where I’d put it).

In concept I agree with the idea that the hack could belong in the UI layer. However, that is a slippery slope for two reasons.

First, you could argue that the data binding support in the data source only exists for the sake of Windows Forms, since neither Web Forms nor Web services utilize any of the interfaces or event models required by data binding. Yet it is quite obvious that data binding requires that the data source itself implement these behaviors in order to work properly. The hack is merely overcoming a bug in the way data binding works, so it naturally fits alongside all the other interfaces and eventing that already exist in the data source.

Second, it is a hack. One would hope that this is a temporary issue that Microsoft will fix. Remember that this will break all .NET 1.x code that uses data binding where the data source actually manipulates data! It would seem likely that it would be a high priority to fix this issue (we can only hope). Thus it is less important where the hack is placed, than it is how transparent it is and how easily it can be removed in the future when (if) Microsoft fixes this bug.

In the case of CSLA .NET 2.0 I can entirely embed the hack into the framework, meaning that removing the hack in the future would be a simple framework change. In the case of custom objects or smart DataTables a well-implemented hack could be removed by changing just a few lines (3-6) of code in each object.

But let’s consider the idea of solving this in the UI. The obvious solution in the UI is to add a LostFocus handler for every control and just update the control at that point – thus directly overcoming the issue at hand. I’m sure there are various other possibilities, but the problem is merely that the value isn’t updated automatically at LostFocus, so it is hard to imagine a more direct or simple solution than to just update it directly.

That turns out to be brainless code in every form in the system. It is brainless enough that you could use a little reflection and generically wrap it up into a new base form class so when the form loads it hooks LostFocus for all events, and the event handler loops through all bindings on the control and refreshes any and all values associated with those bindings. This is effectively what data binding does anyway, so we’d be simply replicating what they should be doing for the control.
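A sketch of what that base form could look like (names are illustrative; this version hooks the public LostFocus event and re-reads all bindings for the departing control):

```vb
Imports System.Windows.Forms

Public Class BoundFormBase
  Inherits Form

  Protected Overrides Sub OnLoad(ByVal e As EventArgs)
    MyBase.OnLoad(e)
    HookLostFocus(Me)
  End Sub

  ' Recursively hook LostFocus for every control on the form.
  Private Sub HookLostFocus(ByVal parent As Control)
    For Each ctl As Control In parent.Controls
      AddHandler ctl.LostFocus, AddressOf Control_LostFocus
      HookLostFocus(ctl)
    Next
  End Sub

  ' When a control loses focus, refresh its bound values from
  ' the data source - what data binding should do itself.
  Private Sub Control_LostFocus(ByVal sender As Object, _
    ByVal e As EventArgs)

    For Each b As Binding In CType(sender, Control).DataBindings
      b.ReadValue()
    Next
  End Sub
End Class
```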

Wrapping this up into a new base form class is the way to go obviously – since it avoids writing this code in every form. However, it does require the use of reflection and forces all forms to inherit from this new base class – which is a PITA.

I think it would be a bit more invasive to remove the hack from the UI than from CSLA .NET. However, it may be less invasive to remove the hack from the UI than to remove it from custom objects or smart DataTable objects.

So I will likely show the UI hack in the smart DataTable book, and the Timer hack in the Business Objects books (barring some better hacks or a real fix from Microsoft).

I recently discovered an issue with Windows Forms data binding when a form is bound against a smart data container like a DataTable with partial class code or a CSLA .NET–style object. The issue is pretty subtle, but nasty – and my current workaround doesn’t thrill me. So after reading this, if you have a better workaround I’m all ears – otherwise this could end up in the books I’m currently writing, since both are impacted… :(

So here’s the problem:

Create a smart data source – defined as one that includes business logic such as validation and manipulation of data. In this case, it is the manipulation part that’s the issue. For instance, maybe you have a business rule that a given text field must always be upper case, and so you properly implement this behavior in your business object or partial class of your DataTable to keep that business logic out of the UI.

In a business object for instance, this code would be in the property set block – perhaps something like this in a CSLA .NET 2.0 style class:
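The snippet in question would look something like this (a sketch consistent with the updated version shown later in this post; the Person class stands in for any CSLA-style object):

```vb
Imports System.ComponentModel

Public Class Person
  Implements INotifyPropertyChanged

  Public Event PropertyChanged( _
    ByVal sender As Object, _
    ByVal e As PropertyChangedEventArgs) _
    Implements INotifyPropertyChanged.PropertyChanged

  Private mLastName As String = ""

  Public Property LastName() As String
    Get
      Return mLastName
    End Get
    Set(ByVal value As String)
      ' Business rule: last name is always stored upper case.
      mLastName = value.ToUpper
      RaiseEvent PropertyChanged(Me, _
        New PropertyChangedEventArgs("LastName"))
    End Set
  End Property
End Class
```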

Specifically notice that the inbound value is changed to upper case with a ToUpper call in the set block.

Now add that data container to the Data Sources window and drag it onto your form in Details mode. Visual Studio nicely creates a set of controls reflecting your properties/columns (damn I love this feature!!) and sets up all the data binding for you. So far so good.

Now run the application and enter a lower case value into the Last Name text box and tab off that control. Data binding automatically puts the new value into the object and runs the code in the set block. This means the object now contains an upper case version of the value you entered.

Notice that the TextBox control does not reflect the upper case value. The value from the object was never refreshed in the control.

Now change another value in a different control and tab out. Notice that the Last Name TextBox control is now updated with the upper case value.

So that’s the problem. Data binding updates all controls on a form when a PropertyChanged event is raised, except the current control. You can put a breakpoint in the property get block and you’ll see that the value isn’t retrieved until some other control triggers the data binding refresh.

I did report this bug to Microsoft, but it has been marked as postponed - meaning that it won't get fixed for release. An unfortunate side-issue is that this issue makes data binding in 2.0 work differently than in .NET 1.x, so any .NET 1.x code that binds against a smart data container like a business object will likely quit working right under 2.0.

Now to my workaround (of which I’m not overly proud, but which does work). My friend Ed Ferron should get a serious kick out of this – feel free to laugh all you’d like Ed!

The problem is that data binding isn’t refreshing whatever control is currently being edited when the PropertyChanged event is raised. The obvious question then is how to get a PropertyChanged event raised after the property set block has completed. Because at that point data binding will actually refresh the control.

The easiest way to get some action to occur “asynchronously” without actually using multi-threading (which would be serious overkill) is to use the System.Windows.Forms.Timer control. That control runs on your current thread, but provides a simulation of asynchronous behavior. (On the VAX this was called an asynchronous trap, so the concept has been around for a while!)

Putting the Timer control into your data object is problematic. The Timer control implements IDisposable, so your object would also need to implement IDisposable – and then any objects using your object would need to implement IDisposable. It gets seriously messy.

Additionally, a real application may have dozens or hundreds of objects. They can’t each have a timer – that’d be nuts! And the OS would run out of resources of course.

But there’s a solution. Use a shared Timer control in a central location. Like this one:

Public Class Notifier

#Region " Request class "

  Private Class Request
    Public Method As Notify
    Public PropertyName As String

    Public Sub New(ByVal method As Notify, _
      ByVal propertyName As String)

      Me.Method = method
      Me.PropertyName = propertyName
    End Sub
  End Class

#End Region

  Public Delegate Sub Notify(ByVal state As String)

  Private Shared mObjects As New Queue(Of Request)
  Private Shared WithEvents mTimer As New System.Windows.Forms.Timer

  Public Shared Sub RequestNotification(ByVal method As Notify, _
    ByVal propertyName As String)

    mObjects.Enqueue(New Request(method, propertyName))
    mTimer.Enabled = True
  End Sub

  Shared Sub New()
    mTimer.Enabled = False
    mTimer.Interval = 1
  End Sub

  Private Shared Sub mTimer_Tick(ByVal sender As Object, _
    ByVal e As System.EventArgs) Handles mTimer.Tick

    mTimer.Enabled = False
    While mObjects.Count > 0
      Dim request As Request = mObjects.Dequeue
      request.Method.Invoke(request.PropertyName)
    End While
  End Sub

End Class

This could alternately be implemented as a singleton object, but the usage syntax for this is quite nice.

This Notifier class implements the RequestNotification method, allowing an object to request that the Notifier call it back on a specific method one timer tick later (the Interval is 1, meaning one millisecond – though Windows Forms timers aren't actually that precise).

The reason the class maintains a queue of requests is that even in a single-threaded application you can easily envision a scenario where multiple notification requests are registered before the timer fires – all you need is programmatic loading of data into an object.

Now your data object must implement a method matching the Notify delegate and register for the callback. For instance, here’s the updated code from above:

Public Property LastName() As String
  Get
    Return mLastName
  End Get
  Set(ByVal value As String)
    mLastName = value.ToUpper
    Notifier.RequestNotification(AddressOf Notify, "LastName")
  End Set
End Property

Public Sub Notify(ByVal propertyName As String)
  RaiseEvent PropertyChanged(Me, _
    New PropertyChangedEventArgs(propertyName))
End Sub

Rather than raising the event directly, the property set block now asks the Notifier to call the Notify method on the next timer tick. A moment later – after the property set block is complete and data binding has moved on – the Notify method is called and the appropriate PropertyChanged event is raised.

With this change (hack) data binding will now refresh the Last Name TextBox control as you tab out of it. Though there’s a millisecond or so of delay in there, it isn’t something the user can actually see, so you can argue it doesn’t matter.

I just got back from a week’s vacation/holiday in Great Britain and I feel very refreshed.

And that’s good, given that just before going to the UK I wrapped up the first draft of Chapter 1 in the new book Billy Hollis and I are writing. As you have probably gathered by now, this book uses DataSet objects rather than my preferred use of business objects.

I wanted to write a book using the DataSet because I put a lot of time and energy into lobbying Microsoft to make certain enhancements to the way the objects work and how Visual Studio works with them. Specifically I wanted a way to use DataSet objects as a business layer – both in 2-tier and n-tier scenarios.

Also, I wanted to write a book using Windows Forms rather than the web. This reflects my bias of course, but also reflects the reality that intelligent/smart client development is making a serious comeback as businesses realize that deployment is no longer the issue it was with COM and that development of a business application in Windows Forms is a lot less expensive than with the web.

The book is geared toward professional developers, so we assume the reader has a clue. The expectation is that if you are a professional business developer (a Mort) that uses VB6, Java, VB.NET, C# or whatever – that you’ll be able to jump in and be productive without us explaining the trivial stuff.

So Chapter 1 jumps in and creates the sample app to be used throughout the book. The chapter leverages all the features Microsoft has built into the new DataSet and its Windows Forms integration – thus showing the good, the bad and the ugly all at once.

Using partial classes you really can embed most of your validation and other logic into the DataTable objects. When data is changed at a column or row level you can act on that changed data. As you validate the data you can provide text indicating why a value is invalid.

The bad part at the moment is that there are bugs that prevent your error text from properly flowing back to the UI (through the ErrorProvider control or DataGridView) in all cases. In talking to the product team I believe that my issues with the ErrorProvider will be resolved, but that some of my DataGridView issues won’t be fixed (the problems may be a “feature” rather than a bug…). Fortunately I was able to figure out a (somewhat ugly) workaround to make the DataGridView actually work like it should.

The end result is that Chapter 1 shows how you can create a DataSet from a database, then write your business logic in each DataTable. Then you can create a basic Windows Forms UI with virtually no code. It is really impressive!!

But then there’s another issue. Each DataTable comes with a strongly-typed TableAdapter. The TableAdapter is a very nice object that handles all the I/O for the DataTable – understanding how to get the data, fill the DataTable and then update the DataTable into the database. Better still, it includes atomic methods to insert, update and delete rows of data directly – without the need for a DataTable at all. Very cool!

Unfortunately there are no hooks in the TableAdapter by which you can apply business logic when the Insert/Update/Delete methods are called. The end result is that any validation or other business logic is pushed into the UI. That’s terrible!! And yet that’s the way my Chapter 1 works at the moment…

This functionality obviously isn’t going to change in .NET or Visual Studio at this stage of the game, meaning that the TableAdapter is pretty useless as-is.

(to make it worse, the TableAdapter code is in the same physical file as the DataTable code, which makes n-tier implementations seriously hard)

Being a framework kind of guy, my answer to these issues is a framework. Basically, the DataTable is OK, but the TableAdapter needs to be hidden behind a more workable layer of code. What I’m working through at the moment is how much of that code is a framework and how much is created via code-generation (or by hand – ugh).

But what’s really frustrating is that Microsoft could have solved the entire issue by simply declaring and raising three events from their TableAdapter code so it was possible to apply business logic during the insert/update/delete operations… Grumble…
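In the meantime, the missing hooks can be approximated by hand-wrapping the generated adapter – a sketch (CustomerTableAdapter stands for any designer-generated TableAdapter, so this won't compile on its own):

```vb
Public Class CustomerAdapterWrapper
  ' Wraps the designer-generated TableAdapter to add the
  ' business-logic hooks Microsoft left out.
  Private mAdapter As New CustomerTableAdapter

  ' The event Microsoft could have raised themselves,
  ' giving business logic a chance to run before the I/O.
  Public Event Inserting(ByVal lastName As String)

  Public Function Insert(ByVal lastName As String) As Integer
    RaiseEvent Inserting(lastName)
    Return mAdapter.Insert(lastName)
  End Function
End Class
```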

The major bright point out of all this is that I know business objects solve all these issues in a superior manner. Digging into the DataSet world merely broadens my understanding of how business objects make life better.

Though to be fair, the flip side is that creating simple forms to edit basic data in a grid is almost infinitely easier with a DataTable than with an object. Microsoft really nailed the trivial case with the new features - and that has its own value. While frustrating when trying to build interesting forms, the DataTable functionality does mean you can whip out the boring maintenance screens with just a few minutes work each.

Objects on the other hand, make it comparatively easy to build interesting forms, but require more work than I'd like for building the boring maintenance screens...

In a reply to a previous entry on the Mort persona, Dan B makes this comment:

I've written about this before, but I'll say it again - I think the dilemma VB faces is the dichotomy between being taken seriously as a modern OO language and the need to carry along the Morts. It's the challenge of balancing the need to distance itself from some of those VB6 carryovers with the need to keep those millions of high-school, hobbyist, etc. developers buying the product.

My previous post wasn't really about VB as such, but more about the "Mort" persona. That persona exists and isn't going anywhere. There are a whole lot of Morts out there, and more entering the industry all the time. Most developers are professional business developers and thus most developers fit the Mort persona. That's just fact.

Whether this large group of people chooses to congregate around VB, C#, Java, Powerbuilder or some other tool doesn't matter in the slightest.

What does matter from a vendor's perspective (such as Microsoft) is that this is the single largest developer demographic, and so it makes a hell of a lot of sense to have a tool that caters to the pragmatic and practical focus of the Mort persona.

If this is VB that's awesome and I am happy. But if enough Morts move to C#, then C# will be forced to accommodate the priorities and requirements of the Mort persona. Microsoft has proven time and time again that they are very good at listening to their user base, and so whatever tool attracts the overwhelming population of Morts will ultimately conform to their desires.

Don’t believe me? Why does C# 2005 have edit-and-continue? Because so many Morts went from VB to C# and they voted very loudly and publicly to get e&c put into their new adopted language. I know a great many Elvis/Einstein people who think the whole e&c thing was a waste of time and money – but they’ve already lost control. And this is just the beginning.

In other words, for those Elvis and Einstein personas who evangelize C# my words are cautionary. You are outnumbered 5 to 1, and if Mort comes a-calling you will almost instantly lose control of C# and you'll probably feel like you need a new home.

The irony is that you’ll have brought this doom on yourselves by telling the vast majority of developers that the only way to get your respect is to use semi-colons, when the reality is that the only way to get your respect is a fundamental change in worldview from pragmatic and practical to elegant and engineered - and frankly that's just not going to happen.

Most people are in this industry only partially because of technology. They are driven by the desire to solve business problems and to help their organizations be better and stronger. It is a small subset that are primarily driven by the love of technology.

If this ratio is changing at all, it is changing away from technology. Tools like Biztalk and concepts like software factories and domain-specific languages are all about abstracting the technology to further enable people who are primarily driven by the business issues and the passion to solve them.

But I don’t see this as hopeless. As one of my fellow RDs mentioned to me a few weeks ago, in Visual Studio 2005 C++ is finally a first-class .NET language. To paraphrase her view, Mort can have VB or C# or both, because the real geeks (the Elvis/Einstein types) can and will just go back to C++ and be happy. But the truly wise geeks will use both where appropriate.

Mort is the Microsoft “persona” often associated with VB, and for various reasons “Mort” has unfortunately become an insult.

But in reality, Mort is the business developer. You know, like the 3 million or so VB6 developers that have been programming for anywhere from 5-20 years and have a pretty good clue about how software is built. Mort is not a newbie or a hobbyist. Mort is a professional[1] business developer. In fact, the Mort persona represents the vast majority of developers. There are around 3-5 VB developers for every C++/C# developer out there. And an increasing number of C# developers are “Morts” as well - you don't stop being a Mort just because you change programming languages after all.

Mort is the highly productive majority of developers who build business systems day in and day out. These business developers typically build systems for 1 to 1000 users. Systems that are of critical importance to their business. Systems that are linked to the very lifeblood of the companies for which these people work.

Mort is the heart and soul of the Microsoft platform. Mort is the reason Microsoft development is pervasive in virtually all small to mid-sized companies, and why it is lurking in the shadows everywhere you look even in the Fortune 100. These are the business developers that won't say no, that won't give up and who refuse to spend weeks or months on over-thinking J2EE or COM+ architectures when they can have their software up and saving money in a few days.

These are the people who never left the "smart client" and so aren’t "coming back to it" in a revolution. Long-time business developers are the people who saw the web for the terminal-based monstrosity it is and never left the productivity of Windows itself. These are the true Microsoft loyalists.

They aren't the uber-geeks. They aren't in it for the love of technology nearly as much as for the love of helping their end users and their companies. They are pragmatic and focused on just getting stuff done and running and saving money.

It isn’t like quality doesn’t count. Quality is critical, but also relative to the task at hand. In most cases, adequate software that’s deployed in a couple months is infinitely superior to exquisitely designed and tested software that’s deployed in a couple years.

A very large number of these “Mort” business developers are still using VB6 and have yet to move to .NET. Whether these business developers stay in VB or move to C# doesn't really matter to me a whole lot. Speaking as a geek I think that what’s important is that they move to .NET, because it is a far superior platform to Windows. But is this actually important to the business developers themselves?

The fact is that the majority of business developers aren't going to change the way they work due to a new language or even a new platform. If .NET can't give them the high levels of productivity of VB6 they won't move. If Microsoft can't convince mainstream business developers that they can switch to .NET quickly and easily, and gain serious, pragmatic benefit from it, they'll never move. Nor should they. If .NET doesn’t make their job easier, what would be the point?

Personally I am convinced that Visual Studio 2005 (with its attendant new VB and C# languages and related tools) is the tipping point. Not from a geek perspective (though there’s cool stuff there too), but from a pragmatic get-it-done perspective.

The new data access features in ADO.NET and in Windows Forms are truly the best stuff Microsoft has ever done in this area. The levels of productivity for building business applications in Windows Forms are unmatched by any technology I’ve seen.

The new and updated Windows Forms controls and the streamlined nature of Windows development bring back memories of VB6. Yes, Windows Forms is still a young forms engine compared to VB6’s, but finally we can honestly make the claim that it is easier and more powerful than its VB6 predecessor. We can sincerely show that a business application can be written faster and with less code in VB 2005 than in VB6.

Things like the new SplitContainer, the FlowLayoutPanel, the ToolStrip (the ToolStrip is my new favorite toy, btw), the new DataGridView and other controls are the keys to serious productivity. Couple them with the easy way you create template projects, forms and classes and you almost immediately have a highly consistent and productive development environment to match or exceed VB6.

At Tech Ed last week Microsoft announced that VS 2005 and SQL Server 2005 will be released around the week of Nov 8, 2005. If you are one of the very large number of business developers who’ve been holding off on .NET, I understand. But I strongly suggest you look at VS 2005 and VB 2005, because I’m betting you are going to love what you see!

[1] As in sports, a professional is someone who makes their living by doing something. In this context, a professional business developer is someone who makes their living by building business software.

Indigo is Microsoft’s code name for the technology that will bring together the functionality in today’s .NET Remoting, Enterprise Services, Web services (including WSE) and MSMQ. Of course knowing what it is doesn’t necessarily tell us whether it is cool, compelling and exciting … or rather boring.

Ultimately beauty is in the eye of the beholder. Certainly the Indigo team feels a great deal of pride in their work and they paint this as a very big and compelling technology.

Many technology experts I’ve talked to outside of Microsoft are less convinced that it is worth getting all excited.

Personally, I must confess that I find Indigo to be a bit frustrating. While it should provide some absolutely critical benefits, in my view it is merely laying the groundwork for the potential of something actually exciting to follow a few years later.

Why do I say this?

Well, consider what Indigo is again. It is a technology that brings together a set of existing technologies. It provides a unified API model on top of a bunch of concepts and tools we already have. To put it another way, it lets us do what we can already do, but in a slightly more standardized manner.

If you are a WSE user, Indigo will save you tons of code. But that’s because WSE is experimental stuff and isn’t refined to the degree Remoting, Enterprise Services or Web services are. If you are using any of those technologies, Indigo won’t save you much (if any) code – it will just subtly alter the way you do the things you already do.

Looking at it this way, it doesn’t sound all that compelling, really, does it?

But consider this. Today’s technologies are a mess. We have at least five different technologies for distributed communication (Remoting, ES, Web services, MSMQ and WSE). Each technology shines in different ways, so each is appropriate in different scenarios. This means that to be a competent .NET architect/designer you must know all five reasonably well. You need to know the strengths and weaknesses of each, and you must know how easy or hard they are to use and to potentially extend.

Worse, you can’t expect to easily switch between them. Several of these options are mutually exclusive.

But the final straw (in my mind) is this: the technology you pick locks you into a single architectural world-view. If you pick Web services or WSE you are accepting the SOA world view. Sure you can hack around that to do n-tier or client/server, but it is ugly and dangerous. Similarly, if you pick Enterprise Services you get a nice set of client/server functionality, but you lose a lot of flexibility. And so forth.

Since the architectural decisions are so directly and irrevocably tied to the technology, we can’t actually discuss architecture. We are limited to discussing our systems in terms of the technology itself, rather than the architectural concepts and goals we’re trying to achieve. And that is very sad.

By merging these technologies into a single API, Indigo may allow us to elevate the level of dialog. Rather than having inane debates between Web services and Remoting, we can have intelligent discussions about the pros and cons of n-tier vs SOA. We can apply rational thought as to how each distributed architecture concept applies to the various parts of our application.

We might even find that some parts of our application are n-tier, while others require SOA concepts. Thanks to the unified API, Indigo should allow us to actually do both where appropriate, without irrational debates over protocol, since Indigo natively supports concepts for both n-tier and SOA.

Now this is compelling!

As compelling as it is to think that we can start having more intelligent and productive architectural discussions, that isn’t the whole of it. I am hopeful that Indigo represents the groundwork for greater things.

There are a lot of very hard problems to solve in distributed computing. Unfortunately our underlying communications protocols never seem to stay in place long enough for anyone to really address the more interesting problems. Instead, for many years now we’ve just watched as vendors reinvent the concept of remote procedure calls over and over again: RPC, IIOP, DCOM, RMI, Remoting, Web services, Indigo.

That is frustrating. It is frustrating because we never really move beyond RPC. While there’s no doubt that Indigo is much easier to use and more clear than any previous RPC scheme, it is also quite true that Indigo merely lets us do what we could already do.

What I’m hoping (perhaps foolishly) is that Indigo will be the end. That we’ll finally have an RPC technology that is stable and flexible enough that it won’t need to be replaced so rapidly. And being stable and flexible, it will allow the pursuit of solutions to the harder problems.

What are those problems? They are many, and they include semantic meaning of messages and data. They include distributed synchronization primitives and concepts. They include standardization and simplification of background processing – making it as easy and natural as synchronous processing is today. They include identity and security issues, management of long-running processes, simplification of compensating transactions and many other issues.

Maybe Indigo represents the platform on which solutions to these and other problems can finally be built. Perhaps in another 5 years we can look back and say that Indigo was the turning point that finally allowed us to really make distributed computing a first-class concept.

Microsoft has a strong commitment (and strong incentive) to helping the VB6 community move into VB.NET. Many features in Whidbey (Visual Studio 2005) are specifically geared toward making VB.NET more accessible to existing VB6 developers. In preparation for the Visual Studio 2005 release, Microsoft has dedicated people to the job of coming up with ways to help people move.

The new VBRun web site is one of the first big moves in that direction. This site is all about VB6 today, and how to make VB6 work with VB.NET. No one, least of all Microsoft, expects the move to happen instantly and this site tries to bring a level of fusion between VB6 and VB.NET.

I just spent three days getting inundated with the details of Indigo as it stands today (in its pre-beta state). For those that don’t know, Indigo is the combined next generation of .NET Remoting, Enterprise Services, Web services (asmx and WSE) and MSMQ all rolled into one big ball.

I also had some conversations about Avalon, though not nearly in so much detail. For those that don’t know, Avalon is the next generation display technology that uses full 3D rendering and should allow us to (finally) escape the clutches of GDI. Avalon is the display technology related to XAML. XAML is an XML markup language to describe Avalon interfaces.

My interest in these technologies spans quite a range of concerns, but top on my list is how they might impact my Business Objects books and the related CSLA .NET framework.

It turns out that the news is pretty good. Seriously good actually. In fact, it looks like people using CSLA .NET today are going to be very happy over the next few years.

Within the context of CSLA .NET, Indigo is essentially a drop-in replacement for Remoting. I will have to change the DataPortal to use Indigo, but that change should have no impact on an application’s UI, business logic or data access code. In other words (cross your fingers), a business application based on CSLA .NET should move to Indigo with essentially no code changes.

[Disclaimer: Indigo isn’t even in beta, so anything and everything could change. My statements here are based on what I’ve seen and heard, and thus may change over time as well.]

One of my primary goals for CSLA .NET 2.0 is to alter the DataPortal to make it easier to adapt it to various transports. In the short term this means Remoting, asmx, WSE and Enterprise Services (DCOM). But it also means I’ll be able to add Indigo support with relative ease once Indigo actually exists.

Avalon is a different story. Avalon is a new UI technology, which means that moving to Avalon means tossing out your existing UI and building a new one. But if you are using Windows Forms today, with CSLA .NET business objects for your logic and databinding to connect the two together, your life will be better than most. It appears that Avalon will also support databinding against objects just like (hopefully better than) Windows Forms.

Since a well-written CSLA-based Windows Forms application doesn’t have any business logic (not even validation) in the UI itself, switching to Avalon should merely be a rip-and-replace of the UI, with little to no impact on the underlying business or data access layers. I keep telling people that the “UI is expendable”, and here’s the proof.

I just thought I’d share these observations. Indigo and Avalon (together under the label of WinFX) won’t show up for quite a long time, so none of this is of immediate interest. Still it is nice to know that when it does show up sometime in the future that CSLA .NET will have helped people to move their applications more easily to the new technologies.

Cross-pollination is one of the benefits of regular interaction with fellow speakers and authors. For instance, Juval Lowy was just showing me this cool and rather obscure feature where you can just make method calls “disappear”. Now I am not convinced that this is a good thing, as the effect is not at all explicit or obvious – but it is cool nonetheless.

The idea is that you can write a method:

Public Sub Foo()
End Sub

Then you can call this method elsewhere in your code. Perhaps even many places in your code:

Public Sub Client()
  Foo()
End Sub

So far so good. But now apply this attribute to the Foo method:

<Conditional("MyFlag")> _
Public Sub Foo()
End Sub

When you now compile your project, Foo is compiled, but the call to Foo in the Client method is not included in the result. Yes, that’s right – all calls to Foo everywhere in your code disappear without a trace. They aren’t in the CIL, and thus are just poof, gone.

To get those calls to Foo back, you need to define a compiler symbol:

#Const MyFlag = True

Now mystically, the calls to Foo will reappear in the compiled code.

This is a form of conditional compilation, similar to checking for a debug condition. But it is not explicit: there’s no way for the calling developer to tell, at the call site, that the method is marked for disappearance – so they’ll have no idea that they aren’t calling the method any more.

Seems like a feature that should remain obscure, but from a purely geeky perspective it is pretty interesting.
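As a rough analogy (not something from the VB world), Java has no Conditional attribute, but it does sanction a similar compile-time trick: guard calls with a compile-time constant boolean, and the compiler omits the guarded code entirely when the constant is false. A minimal sketch:

```java
// Rough Java analog of VB's <Conditional("MyFlag")>: the "conditional
// compilation" idiom. With MY_FLAG a compile-time constant false, the
// compiler drops the guarded call to foo() from the emitted bytecode.
class ConditionalDemo {
    static final boolean MY_FLAG = false; // flip to true to restore the calls

    static int fooCalls = 0; // lets us observe whether foo() ever ran

    static void foo() {
        fooCalls++;
    }

    static void client() {
        if (MY_FLAG) {
            foo(); // vanishes at compile time while MY_FLAG is false
        }
    }
}
```

The visible difference is that in Java the condition shows up at every call site, while the attribute hides the decision on the method definition – which is exactly why the VB behavior feels so sneaky.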

As I’ve mentioned before, I personally like using VB. I tolerate, and have become quite competent in, C#.

But the current situation is frustrating.

I am speaking at an event in a couple months, and the organizers requested that all code samples be in both VB and C#. On one hand this makes sense, but I must say that it means I’ll likely have half the demos I’d otherwise use.

Porting VB to C# or vice versa is boring work. It is time wasted redoing something that’s already done. And I don’t do it. I already have more cool ideas than I have time to try out, and wasting time adding or removing semicolons from my last cool experiment merely cuts into time when I could be doing something interesting.

The sole exception to this is CSLA .NET. In that case I have chosen to maintain two identical versions of the code. It is painful and frustrating, but important. The most frustrating part, though, is that I have numerous enhancements to CSLA .NET that I’ll never publish to the world, because the pain involved in porting to the other language is too high.

From time to time people have suggested that I use a tool to convert the code. But I opted specifically against that option with CSLA .NET because I wanted the VB code to look like VB and the C# code to look like C#. Code conversion tools don’t capture the subtle stylistic differences between languages, and I consider those differences to be important.

You can always tell VB code that came from such a tool. It looks like crap, and no self-respecting VB developer would ever write such poor code. Likewise, I have no doubt that comparable C# code would be equally offensive.

Since the code I’m talking about here is intended to teach programming concepts, the quality and style of the code is even more important than normal. Thus I just can’t see how using tools of this nature is good.

Perhaps we’ll find out. If I can find a good VB->C# converter maybe I’ll convert some of my demos for this upcoming event and see if the C# attendees howl… (and vice versa)

But I can’t say I’m going to invest a huge amount of time figuring it out, because I’ve got some other cool ideas I’m working on and can’t afford the distraction.

I was going to stay out of this, really I was. But it appears to be spinning out of control, with the press jumping in and spouting inaccurate conclusions left and right…

I am a Visual Basic MVP, and I do not favor the idea of merging the VB6 IDE into Visual Studio 2007 (or whatever comes after 2005). I’m afraid I see that as a ridiculous idea, for many of the reasons Paul Vick lists.

I’ve been in this industry for a reasonable amount of time – nearly 20 years. Heck, my first job on the VAX was porting code from the PDP-11 to the VAX. But there were still companies running that PDP-11 software ten years after the hardware was no longer manufactured.

The lesson? Companies run old, dead technology.

Why?

Because it works. Companies are nothing if not pragmatic. And that’s OK. But none of those companies expected DEC to support the long-dead PDP. They built their own support network through user groups and a sub-industry of recyclers who sold refurbished parts for years and years after nothing new was made.

VB6 today is the PDP-11 of 20 years ago. It is done, and support is ending. (though technically Microsoft has support options through 2008 I guess)

And you know what? Companies will continue to run VB6.

Why?

Because it works. Microsoft would prefer if you upgraded, but you don’t have to. And that’s OK. But like the people running the PDP-11’s, you can’t expect Microsoft to support a dead tool. Especially when they’ve provided a far superior alternative in the form of VB 2005.

If you want to keep running VB6 that’s cool. But like anyone using dead technology, you have to accept the costs of handling your own support. Of getting “refurbished” parts (in this case developers and components).

I’ll bet you that none of those old companies are still running a PDP-11 today. Why? Because eventually the cost of running a dead technology outweighs the cost of moving forward. The business eventually decides that moving forward is the cost effective answer and they do it.

This will be true for VB6 as well. It is simply a cost/benefit decision at a business level, nothing more. For some time to come, it will be cost-effective to maintain VB6 code, even though the technology is dead. Eventually – perhaps even 10 years later – the cost differential will tip and it will be more effective to move to a more modern and supported technology.

Other than a few odd ducks, I very much doubt that most developers would choose to continue to use VB6. Certainly they wouldn’t make that choice after using VB.NET or VB 2005 for a few days. I’ve seen countless people make the migration, and none of them are pining for the “good old days of VB6” after using VB.NET. Mostly because VB.NET is just so damn much fun!

And if you press the VB6-focused MVP’s, by and large you’ll find that they are staying in VB6 for business reasons, not technical ones. Their customer bases are pragmatic and conservative. Their customer bases are still at the point where the cost of running a dead technology is lower than switching to a modern technology. And that’s OK. That’s business.

What irritates me is when people let emotion into the discussion. This shouldn’t be a dogmatic discussion.

I have this long-standing theory and thought I’d share it. The theory goes like this:

First there was DOS.

Then there was Windows which ran on DOS.

Then DOS was emulated in Windows.

Then there was .NET which ran on Windows.

Then Windows was emulated in .NET.

Of course that last bit hasn't happened yet, but I expect it is just a matter of time before Microsoft's OS becomes .NET and Win32 becomes an emulated artifact. I also expect that branding-wise Windows will remain the foremost brand. I think this is why the .NET brand is already being deprecated. .NET as a brand must fade so it can be reborn like the phoenix in the future - as Windows.

If you read the article you’ll find that the Longhorn OS will seriously change the way that .NET itself is versioned. In fact, it turns out that to a serious degree the whole idea of installing “side-by-side” versions of .NET itself will go away when Longhorn shows up.

Oh sure, they have plans for a complex scheme by which assemblies can be categorized into different dependency levels. Some levels can be versioned more easily, while others can only be versioned with the OS itself.

What this really means is that .NET is losing its independence from the OS. In the end, we’ll only get new versions of .NET when we get new versions of the OS – and we all know how often that happens…

I’d say that this was inevitable, but frankly it was not. Java hasn’t fallen into this trap, and .NET doesn’t need to either. Not that it will be easy to avoid, but the end result of the current train of thought portrayed by Richter is devastating.

Fortunately there’s the mono project. As .NET becomes more brittle and stagnant due to its binding with Longhorn, we might find that mono on Windows becomes a very compelling story. mono will be able to innovate, change and adapt much faster than the .NET that inspired it. Better still, mono will remain unbound from the underlying OS (like .NET was originally) and thus will be able to run side-by-side in cases where .NET becomes crippled.

Hopefully Microsoft will realize what they are doing to themselves before all this comes to pass. Otherwise, I foresee a bright future for mono on Windows.

In my previous entry Randy H notes that Microsoft has a different approach to marketing:

MS has some incredibly talented marketers. The Technical Product Manager role is essentially a marketer that helps to determine what features go into products and how things should work. To me, that kind of marketing has a lot of value. I wouldn't dismiss the role of marketing in our greatest technology companies. Wasn't .NET a whole lot of marketing as well?

While it is true that Microsoft has a unique approach to marketing, they really aren't much different than anyone else. While .NET was as much marketing as anything else (since the ".NET" got slapped on _everything_ for a while), the reason it has been successful is due to its technical merits.

Notice that the ".NET" label is fading already - Visual Studio 2005, Visual Basic 2005, etc. No .NET left in the product names at all. My guess is that ".NET" the term will fade away into the same marketing hole that swallowed up Remote OLE Automation, MTS and soon SOA.

I have always found it amazing when Microsoft is said to have this "great marketing machine". In many ways they are the worst marketers out there. Certainly far, far worse than Apple or IBM for instance.

Apple has the trendy thing going, and has for a very long time. Microsoft has never been trendy or fashionable or cool or hip. But Apple sure is hip, and it shows in their iPod sales. For some reason though, having powerful marketing in the "cool space" doesn't translate to widespread use.

IBM has those really kick-ass commercials that juxtapose business situations with strange solutions. And prior to that they had the cool commercials showing non-tech scenarios that were just metaphors for IT issues. Very cool and very smart stuff. Very effective too, as IBM’s global consulting arm has become large and influential due to that kind of marketing.

Microsoft has never had anything remotely similar to “real” marketing like that. Microsoft’s marketing has always been more subtle and focused on technologists. In reality, Microsoft’s marketing has always been more grass-roots, much like the open-source world.

And there’s some humor for you. The open-source world has apparently decided that it too needs marketing. Even if you make no money off your work, you certainly want the fame/notoriety – and to get fame you need people to use your stuff rather than your competitors’ stuff (regardless of whether they are commercial or OSS).

At the same time, Microsoft really wants to move into the enterprise space, and so they have been trying to figure out how to do “actual” marketing along the lines of IBM. And they want to sell consumer items like the Media PC, so they’ve been struggling to figure out how to be hip like Apple. Hopefully as they do this, they’ll manage to continue the MSDN and TechNet-style marketing to the technical community. We’ve been the bread-and-butter for them over the past 12 or so years after all.

One of the areas where the non-Microsoft world has long criticized Microsoft is in their use of a closed bug and issue tracking system. The open-source world, in particular, claims that having a public submission process is one of their key benefits.

With Visual Studio 2005 and .NET 2.0, Microsoft has launched what is apparently a little-publicized and little-known public submission web site for bugs and suggestions. It was code-named Ladybug, and is now called the Product Feedback Center.

Not that it is totally unheard of, or unused. Ladybug is rumored to be the primary motivation in C# getting edit-and-continue. E&C wasn't even on the C# feature list, but large numbers of ex-VB developers who are now enjoying semi-colons apparently voted E&C to the top of the Ladybug wish list, putting pressure on Microsoft to add the feature.

True story? I don't know, but that's the rumor.

The point being, Microsoft pays attention to the stuff on this site. If you have a bug or issue in VB, C#, ASP.NET or whatever you should make sure to report it on Ladybug!!!

I’ve been digging into generics as well, with the intent of addressing some uses for CSLA .NET 2.0. Here’s one important concept I’ve figured out:

Public Class BaseClass
  Public Function BaseAnswer() As Integer
    Return 42
  End Function
End Class

Public Class AddMethod(Of T As {New, AddMethod(Of T)})
  Inherits BaseClass

  Public Shared Function GetObject() As T
    Return New T
  End Function
End Class

Public Class StrangeMethod
  Inherits AddMethod(Of StrangeMethod)

  Public Function GetStrangeAnswer() As Integer
    Return 123321
  End Function
End Class

Public Class TestAddMethod
  Public Sub Test()
    Dim extendedBaseClass As StrangeMethod = StrangeMethod.GetObject
    extendedBaseClass.BaseAnswer()
    extendedBaseClass.GetStrangeAnswer()
  End Sub
End Class

The primary benefits are:

1) The consumer (the Test method) doesn’t have to deal with generic syntax at all – yea!

2) The code inside the generic can be minimized – very little code needs to be in the AddMethod class. This is good, because debugging inside generics is limited compared to inside regular classes. Most code can be put into BaseClass rather than AddMethod, and AddMethod can include only code requiring the use of generics (unlike my trivial example here).

3) By keeping as much code as possible in BaseClass we gain better polymorphism. Generics aren’t polymorphic, so the fewer public methods in a generic the better. Putting public methods into a base class or interface is far preferable in most cases.

4) Note that AddMethod is constrained to only be used through inheritance. I thought this was a neat trick that helps enforce this particular model for using generics. AddMethod is only useful for creation of a subclass.
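The same self-referencing constraint trick can be sketched in Java’s generics as well. Below is a rough translation of the VB code above; note that Java lacks VB’s New constraint, so the factory here takes a Class object and uses reflection – that substitution is my own for this sketch, not part of the VB version.

```java
// A rough Java translation of the self-referencing ("self-bounded") generic
// constraint from the VB example. Java has no New constraint, so getObject
// takes the Class object and reflects (a substitution for this sketch only).
class BaseClass {
    // Non-generic code lives here, where debugging and polymorphism are easy.
    public int baseAnswer() {
        return 42;
    }
}

// The self-referencing bound means AddMethod is only usable via inheritance:
// a subclass must declare itself as the type argument.
class AddMethod<T extends AddMethod<T>> extends BaseClass {
    public static <T extends AddMethod<T>> T getObject(Class<T> cls) {
        try {
            return cls.getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException(e);
        }
    }
}

class StrangeMethod extends AddMethod<StrangeMethod> {
    public int getStrangeAnswer() {
        return 123321;
    }
}
```

As in the VB version, the consumer gets back a strongly typed StrangeMethod with both the inherited base method and the derived method available.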

Earlier this week I spoke at the Vermont .NET user group in Burlington, VT. I count myself fortunate to have been in VT in early October, as the fall colors were out in their full splendor. Many parts of the world have their legends, and one of Vermont’s is the majesty of the fall colors. I have got to say that in this case the reality is at least as good as the hype, and probably better. I saw views that were simply breathtaking, and the day was cloudy! I can only imagine how impressive it would be under bright sunshine.

At the end of my presentation to the user group a gentleman (who’d driven all the way up from southern Connecticut!) asked whether I thought VB or C# was the better language. This is not an uncommon question, and it has been a perennial issue since .NET was in alpha release. Normally I tend to brush the question off by answering with the typical Microsoft wishy-washy stuff about culture and freedom of choice, but it turns out that I’ve been giving this issue quite a lot of thought over the past few months and my views have been changing.

To fully understand my perspective, you need to realize that I spent many years working on DEC VAX computers using the VMS and then OpenVMS operating system.

It was, by the way, renamed to OpenVMS as a marketing response to the “open” Unix world. Yes, the same Unix world that is now being eaten alive by the more open Linux world. The humor to be found in our industry is often subtle yet truly impressive.

One of the key attributes of VMS was language neutrality. The operating system was written in assembly and FORTRAN, and there was some small benefit to using FORTRAN as a programming language over other languages. However, other languages such as Pascal and VAX Basic were adapted to the platform and were easily equals to FORTRAN for virtually any task. In fact VAX Basic was often better, because it had taken language concepts from Pascal, FORTRAN and other languages and was basically a powerful hybrid with the best of all the other languages. All the OS-level power of FORTRAN with the elegance of Pascal.

As a side note, the only second-class language I ever encountered under VMS was C++. It turns out that C and C++ were/are way too tied to the stream-based Unix world-view to be good general use languages across multiple platforms. Things that were trivial in any other VMS language were insanely difficult in C++, especially as they related to any sort of IO operation. Since most programs do some IO, this made C++ really nasty…

For some years I was a manager in the IT department of a manufacturing company. At that time one of my key hiring requirements was that a developer had to know at least two languages. Knowledge of just one language was an absolute sign of immaturity and/or closed-mindedness which would lead to a dangerous lack of perspective.

Once Windows NT and Visual Basic came out I started migrating from VMS to Windows – basically following David Cutler (the creator of both VMS and Windows NT).

In many ways Windows NT lost a great deal of the language neutrality offered by VMS. The primary language of NT was C++, and the language warped the OS in some ways that made it more like Unix and less like VMS. The idea that FORTRAN or Pascal would be comparable to C++ in Windows was (and is) absurd. This lack of language neutrality certainly slowed the adoption of NT as a primary development target for the first several years of its existence. Just look back at the books and code of the early 90’s – writing even simple applications was ridiculously complex. Especially when compared to more mature environments like Unix or OpenVMS.

Enter Visual Basic (and a whole host of competing products). Vendors, including Microsoft, Borland, IBM and others rapidly realized that C++ would never be the vehicle for most people to develop software on Windows. Many languages/tools were created in the early 90’s to make Windows actually useful from a business development perspective. The common theme in all these products was abstraction. Since the Windows OS itself wasn’t language neutral, every one of these tools added an abstraction layer of some sort to “get rid” of the Windows OS and provide a platform that was more programmable by the language in question.

In the end three tools emerged as being dominant. In order of popularity they were Visual Basic, PowerBuilder and Delphi. While each of these had their own abstraction layers to make Windows programmable, there was no commonality between their abstractions. And C++ had grown its own abstraction too – in the form of MFC (and I should mention OWL as well I suppose). Even so, C++ was not competitive with VB or PowerBuilder, though it may have been more popular than Delphi on the whole.

In an effort to shore up the Windows OS, Microsoft created COM. In many ways this was an attempt to provide some common programming model akin to what we had in OpenVMS in the first place. And COM had its good points – chief among them being that there was finally some common programming scheme that would work across languages. Sure, each language out there could mess up and create components that weren’t interoperable, but at least it was possible to interact between languages.

Really, this is all we asked. Under OpenVMS too, it was possible to violate the common language calling standard and create libraries that couldn’t be used by any language but your own. Most people weren’t dense enough to do such a thing of course, because the benefits of cross-language usage were too high. Part of this, I think, flowed from the fact that most OpenVMS developers worked in multiple programming languages as a matter of course.

In Windows COM development in the mid to late 90’s most people didn’t intentionally write components that could only be used by C++ or VB or whatever. The benefits of cross-language usage were too high.

There were two exceptions to this. The first were developers that only worked in C++. They’d often create COM components that could only be used by C++ through sheer ignorance. The second were developers that only worked in VB. They’d often expose VB-only types (like collections) from their components, making life difficult or impossible for users of other languages. In both cases, the developers simply didn’t have the perspective of working in multiple languages and so they had no clue that they’d written inane code. Incompetence through ignorance.

But COM had its limitations in other areas. With the advent of Java the limitations of COM became painfully apparent by contrast and Microsoft had to do something. They could have taken the Java route and created yet another platform that was language-focused like Windows or the JVM. Almost certainly this would have given rise to another round of VB/PowerBuilder tools that abstracted the platform into something that could support more productive business development. (You can see some of this belatedly starting in the Java world even today with things like JavaServer Faces, etc.)

Fortunately Microsoft decided to go with a language neutral approach. Though on its surface .NET is very unlike OpenVMS, it has the common trait of language neutrality. Like OpenVMS, the neutrality has its limits, but it is pretty darn good. Also like OpenVMS, .NET did create a set of second-class languages (ironically including C++).

So here we sit today, working with a language neutral platform that actually has multiple viable languages. Like the languages on VMS, some of the .NET languages have runtime libraries, but by and large the focus is less on the language than it is on the platform. To me this is familiar territory, and I must say that it makes me happy in many ways.

It also means that this silly language debate is rapidly losing my interest. As a corollary, I am beginning to think that my hiring criterion from the VMS days is valid on .NET as well: a prerequisite for hiring a developer is that they should know at least two languages. Only knowing one language (such as C# or VB) means that a developer has a seriously limited perspective and will be far less effective than one who knows more than one language. Such developers are likely to be incompetent through ignorance.

And I’m not talking only about VB or C# specifically here. COBOL.NET or J# or any other language is fine. The point is that a language gives a person a perspective on the platform, and having only one perspective is simply too limiting.

A while ago I posted a list of VB features that I think should also be in C#. I got numerous comments from C# developers who obviously had no perspective. Many of the comments showed that they simply didn’t understand what they were missing. I pity those people.

Likewise, there are features of C#, J# and COBOL.NET that VB should incorporate. People who live entirely in the VB world would likely disagree, and I pity them as well.

The whole idea behind having a language neutral platform is to have multiple languages that compete and try new and innovative ideas. The whole idea is to compare and contrast and take the better ideas and improve each language over time.

And for this discussion to be meaningful I think we need to accept the reality of “language families”. Knowing both C++ and C# is generally meaningless, as they are in the C family. C, C++, Java and C# follow the same fundamental philosophy and so limit the perception of people who never branch into the rest of the languages.

If you only know one language or language family then your ability to compare and contrast language features is severely restricted.

Now I’m not saying that absolute mastery of multiple languages is required. That makes no sense. I am very good at VB. I am competent in C#, and with a bit of brushing up I can probably do a good enough job in FORTRAN, Pascal, Modula-2, C, REXX, awk, DCL and a handful of other languages I’ve learned over the years.

The point is that knowledge is power, and in the case of languages this power comes in the form of perspective. And perspective provides flexibility of thought and improves your ability to do your job.

But the thing that scares me the most at this point is that C# is just VB with semi-colons. And VB is just C# without semi-colons. And Java is just C# with a different runtime library. How can we have language innovation when the majority-usage languages have such little variation between them? We desperately need some real innovation in at least one of these languages, because I can’t believe that C#, VB or Java are the best that the human race can come up with…

Jason Bock, a fellow Magenic employee, has put together a web site with information about any and all .NET languages. Since I love programming languages and language innovations, I think this site is really cool!

Years ago it was Carl and Gary's that was the central hub for the VB community. The place we all started browsing and then jumped off to other locations. There really hasn't been an equivalent hub (or portal) for a very long time.

Robert has been working (along with Duncan) on this for quite a while now, soliciting input from a lot of people in the VB community - including authors, speakers and others. The site has been slowly evolving, and now is really starting to show some great promise as a central hub for the VB community.

A customer asked me for a list of things VB can do that C# can't. "Can't" isn't meaningful of course, since C# can technically do anything, just like VB can technically do anything. Neither language can really do anything that the other can't, because both are bound to .NET itself.

But here's a list of things VB does easier or more directly than C#. And yes, I'm fully aware that there's a comparable list of things C# does easier than VB - but that wasn't the question I was asked. I'm also fully aware that this is a partial list.

For the C# product team (if any of you read this), this could also act as my wish list for C#. If C# addressed even the top few issues here I think it would radically improve the language.

Also note that this is for .NET 1.x - things change in .NET 2.0 when VB gets edit-and-continue and the My functionality, and C# gets iterators and anonymous delegates.

Finally, on to the list:

One key VB feature is that it eliminates an entire class of runtime errors you get in case-sensitive languages - where a method parameter and a property in the same class have names that differ only by case. These problems can only be found through runtime testing, not by the compiler. This is a stupid thing that is solved in VB by avoiding the archaic concept of case sensitivity.

Handle multiple events in single method (superior separation of interface and implementation).

WithEvents is a huge difference in general, since it dramatically simplifies (or even enables) several code generation scenarios.
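As a quick sketch (the form and button names here are hypothetical), a single VB.NET method can handle several events just by listing them in its Handles clause:

```vb
Private WithEvents mOkButton As Button
Private WithEvents mCancelButton As Button

' one implementation method serves both Click events;
' the method name is independent of the event names
Private Sub AnyButtonClicked(ByVal sender As Object, _
    ByVal e As EventArgs) Handles mOkButton.Click, mCancelButton.Click
  ' shared handling logic here
End Sub
```

Because the Handles clause is declarative, a code generator can wire up event handling without the method signature or name ever having to match an event name.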

In VB you can actually tell the difference between inheriting from a base class and implementing an interface. In C# the syntax for both is identical, even though the semantic meaning is very different.

Implement multiple interface items in a single method (superior separation of interface and implementation).

Also, independent naming/scoping of methods that implement an interface method - C# interface implementation is comparable to the sucky way VB6 did it... (superior separation of interface and implementation).
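To illustrate both points (the ICustomer interface here is hypothetical), VB's Implements clause lets the implementing method be named and scoped independently of the interface, and lets one method implement more than one interface item:

```vb
Public Interface ICustomer
  Sub Load()
  Sub Refresh()
End Interface

Public Class Customer
  Implements ICustomer

  ' a Private method, with its own name, implementing
  ' two interface items in one place
  Private Sub FetchData() Implements ICustomer.Load, ICustomer.Refresh
    ' shared implementation for both interface methods
  End Sub
End Class
```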

Multiple indexed properties (C# only allows a single indexed property).

Optional parameters (important for Office integration, and general code cleanliness).

Late binding (C# requires manual use of reflection).
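For example, with Option Strict Off the compiler defers member resolution to runtime (the Customer type and Save method here are hypothetical); the .NET 1.x C# equivalent means hand-written reflection code:

```vb
Option Strict Off

Module LateBindingDemo
  Sub Main()
    ' the variable is typed only as Object; the Save call is
    ' located and invoked at runtime via late binding
    Dim obj As Object = New Customer
    obj.Save()
  End Sub
End Module
```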

There are several COM interop features in VB that require much more work in C#. VB has the ComClass attribute and the CreateObject method for instance.

The Cxxx() methods (such as CDate, CInt, CStr, etc) offer some serious benefits over Convert.xxx. Sometimes performance, but more often increased functionality that takes several lines of C# to achieve.
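A few illustrative conversions (results depend on the current culture settings, so treat these as a sketch):

```vb
Dim d As Date = CDate("June 1, 2004")   ' flexible date parsing in one call
Dim i As Integer = CInt("123")          ' string to Integer, with trimming
Dim s As String = CStr(123.45)          ' any value to String
```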

The VB RTL also includes a bunch of complex financial functions for dealing with interest, etc. In C# you either write them by hand or buy a third-party library (because self-respecting C# devs won't use the VB RTL even if they have to pay for an alternative).

The InputBox method is a simple way to get a string from the user without having to build a custom form.

Sound a Beep in less than a page of code.
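Both come from the VB runtime's Interaction module, so each is a one-liner (the prompt text is just an example):

```vb
' prompt the user without building a custom form
Dim name As String = InputBox("What is your name?", "Quick question")
' and the infamous one-line beep
Beep()
```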

And please, no flames. I know C# has a comparable list, and I know I've missed some VB items as well. The point isn't oneupmanship, the point is being able to intelligently and dispassionately evaluate the areas where a given language provides benefit.

If C# adopted some of these ideas, that would be cool. If VB adopted some of C#'s better ideas that would be cool. If they remain separate, but relatively equal that's probably cool too.

Personally, I want to see some of the more advanced SEH features from VAX Basic incorporated into both VB and C#. The DEC guys really had it nailed back in the late 80's!

I got this question via email. I get variations on this question a lot, so I thought I’d blog my answer.

Hope you don't mind my imposing on you for a second. I actually spoke to you very briefly after one of the sessions and you seemed to concur with me that for my scenario - which is a typical 3-tier scenario, all residing on separate machines, both internal and external clients - hosting my business components on IIS using HTTP/binary was a sensible direction. I've recently had a conversation with someone suggesting that Enterprise Services was a far better platform to pursue. His main point in saying this is the increased productivity - leveraging all the services offered there (transactions, security, etc.). And not only that, but that ES is the best migration path for Indigo, which I am very interested in. This is contrary to what I have read in the past, which has always been that ES involves interop, meaning slower (which this person also disputes, by the way), and Don Box's explicit recommendation that Web Services were the best migration path. I just thought I'd ask your indulgence for a moment to get your impressions. Using DCOM is a little scary, we've had issues with it in the past with load-balancing etc. Just wondering if you think this is a crazy suggestion or not, and if not, do you know of any good best-practices examples or any good resources.

The reason people are recommending against remoting is because the way you extend remoting (creating remoting sinks, custom formatters, etc) will change with Indigo in a couple years. If you aren't writing that low level type of code then remoting isn't a problem.

Indigo subsumes the functionality of both web services and remoting. Using either technology will get you to Indigo when it arrives. Again, assuming you aren't writing low-level plug-ins like custom sinks.

Enterprise Services (ES) provides a declarative, attribute-based programming model. And this is good. Microsoft continues to extend and enhance the attribute-based models, which is good. People should adopt them where appropriate.

That isn't to say, however, that all code should run in ES. That's extrapolating the concept beyond its breaking point. Just because ES is declarative, doesn’t make ES the be-all and end-all for all programming everywhere.

It is true that ES by itself causes a huge performance hit - in theory. In reality, that perf hit is lost in the noise of other things within a typical app (like network communication or the use of XML in any way). However, specific services of ES may have larger perf hits. Distributed transactions, for instance, have a very large perf hit - which is essentially independent of any interop issues lurking in ES. That perf hit is just due to the high cost of 2-phase transactions. Here's some performance info from MSDN supporting these statements.

The short answer is to use the right technology for the right thing.

If you need to interact between tiers that are inside your application but across the network, then use remoting. Just avoid creating custom sinks or formatters (which most people don't do, so this is typically a non-issue).
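As a sketch of typical non-extended remoting use (the CustomerService type and URL are hypothetical), the client simply asks for a transparent proxy to the well-known server object - no custom sinks or formatters involved:

```vb
' obtain a transparent proxy to a server-activated object
Dim svc As CustomerService = CType( _
  Activator.GetObject(GetType(CustomerService), _
    "tcp://appserver:8080/CustomerService.rem"), CustomerService)
Dim cust As Customer = svc.GetCustomer(42)
```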

If you need to communicate between applications (even .NET apps) then use web services. Note this is not between tiers, but between applications – as in SOA.

If you need ES, then use it. This article may help you decide if you need any ES features in your app. The thing is, if you do use ES, your client still needs to talk to the server-side objects. In most cases remoting is the simplest and fastest technology for this purpose. This article shows how to pull ES and remoting together.

Note that none of the above options use DCOM. The only case I can see for DCOM is where you want the channel-level security features it provides. However, WSE is providing those now for web services, so even there, I'm not sure DCOM is worth all the headaches. Then the only scenario is where you need stateful server-side objects and channel-level security, because DCOM is the only technology that really provides both features.

Most people (including me) don’t regularly Dispose() their Command objects when doing data access with ADO.NET. The Connection and DataReader objects have Close() methods, and people are very much in the habit (or should be) of ensuring that either Close() or Dispose() or both are called on Connection and DataReader objects.

But Command objects do have a Dispose() method even though they don’t have a Close() method. Should they be disposed?

I posed this question to some of the guys on the ADO.NET team at Microsoft. After a few emails bounced around I got an answer: “Sometimes it is important.”

It turns out that the reason Command objects have a Dispose method is because they inherit from Component, which implements IDisposable. The reason Command objects inherit from Component is so that they can be easily used in the IDE on designer surfaces.

However, it also turns out that some Command objects really do have unmanaged resources that need to be disposed. Some don’t. How do you know which do and which don’t? You need to ask the dev that wrote the code.

It turns out that SqlCommand has no unmanaged resources, which is why most of us have gotten away with this so far. However, OleDbCommand and OdbcCommand do have unmanaged resources and must be disposed to be safe. I don’t know about OracleCommand – as that didn’t come up in the email discussions.

Of course that’s not a practical answer, so the short answer to this whole thing is that you should always Dispose() your Command objects just to be safe.

So, follow this basic pattern in VB.NET 2002/2003 (pseudo-code):

Dim cn As New Connection("…")
cn.Open()
Try
  Dim cm As Command = cn.CreateCommand()
  Try
    Dim dr As DataReader = cm.ExecuteReader()
    Try
      ' do data reading here
    Finally
      dr.Close() ' and/or Dispose() – though Close() and Dispose() both work
    End Try
  Finally
    cm.Dispose()
  End Try
Finally
  cn.Close() ' and/or Dispose() – though Close() and Dispose() both work
End Try

A few weeks ago I posted an entry about a solution to today's problem with serializing objects that declare events. It was pointed out that there's a better way to handle the list of delegates, and so here's a better version of the code.

In .NET 1.x there's a problem serializing objects that raise events when those events are handled by a non-serializable object (like a Windows Form). In .NET 2.0 there's at least one workaround in the form of Event Accessors.

The issue in question is as follows.

I have a serializable object, say Customer. It raises an event, say NameChanged. A Windows Form handles that event, which means that behind the scenes there's a delegate reference from the Customer object to the Form. This delegate that is behind the event is called a backing field. It is the field that backs up the event and actually makes it work.

When you try to serialize the Customer object using the BinaryFormatter or SoapFormatter, the serialization automatically attempts to serialize any objects referenced by Customer - including the Windows Form. Of course Windows Form objects are not serializable, so serialization fails and throws a runtime exception.
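In other words, something as simple as this (assuming a Customer object in variable cust whose NameChanged event is handled by a Form) blows up at runtime:

```vb
' assumes Imports System.IO and
' Imports System.Runtime.Serialization.Formatters.Binary
Dim formatter As New BinaryFormatter
Dim buffer As New MemoryStream
' throws a SerializationException, because the event's backing
' delegate references the (non-serializable) Form handling it
formatter.Serialize(buffer, cust)
```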

Normal variables can be marked with the NonSerialized attribute to tell the serializer to ignore that variable during serialization. Unfortunately, an event is not a normal variable. We don't actually want to prevent the event from being serialized, we want to prevent the target of the event delegate (the Windows Form in this example) from being serialized. The NonSerialized attribute can't be applied to targets of delegates, and so we have a problem.

In C# it is possible to use the field: target on an attribute to tell the compiler to apply the attribute to the backing field rather than the actual variable. This means we can use [field: NonSerialized()] to declare an event, which will cause the backing delegate field to be marked with the NonSerialized attribute. This is a bit of a hack, but does provide a solution to the problem. Unfortunately VB.NET doesn't support the field: target for attributes, so VB.NET doesn't have a solution to the problem in .NET 1.x.

Though there is a solution in C#, it is not an elegant one, so in both VB.NET and C# we really need a better answer. I have spent a lot of time talking with the folks in charge of the VB compiler, and hope they come up with an elegant solution for VB 2005. In the meantime, here’s an answer that will work in either language in .NET 2.0.

In VB 2005 we’ll have the ability to declare an event in “long form” using a concept called an event accessor. Rather than declaring an event using one of the normal options like:

Public Event NameChanged()

or

Public Event NameChanged As EventHandler

where the backing field is managed automatically, you’ll be able to declare an event in a way that you manage the backing field:

Public Custom Event NameChanged As EventHandler
  AddHandler(ByVal value As EventHandler)
  End AddHandler
  RemoveHandler(ByVal value As EventHandler)
  End RemoveHandler
  RaiseEvent(ByVal sender As Object, ByVal e As EventArgs)
  End RaiseEvent
End Event

In this model we have direct control over management of each event target. When some code wants to handle our event, the AddHandler block is invoked. When they detach from our event the RemoveHandler block is invoked. When we raise the event (using the normal RaiseEvent keyword), the RaiseEvent block is invoked.

This means we can declare our backing field to be NonSerialized if we so desire. Better yet, we can have two backing fields – one for targets that can be serialized, and another for targets that can’t be serialized:

<NonSerialized()> _
Private mNonSerializableHandlers As New Generic.List(Of EventHandler)

Private mSerializableHandlers As New Generic.List(Of EventHandler)

Then we can look at the type of the target (the object handling our event) and see if it is serializable or not, and put it in the appropriate list:

Public Custom Event NameChanged As EventHandler
  AddHandler(ByVal value As EventHandler)
    If value.Target.GetType.IsSerializable Then
      mSerializableHandlers.Add(value)
    Else
      If mNonSerializableHandlers Is Nothing Then
        mNonSerializableHandlers = New Generic.List(Of EventHandler)()
      End If
      mNonSerializableHandlers.Add(value)
    End If
  End AddHandler
  RemoveHandler(ByVal value As EventHandler)
    If value.Target.GetType.IsSerializable Then
      mSerializableHandlers.Remove(value)
    Else
      mNonSerializableHandlers.Remove(value)
    End If
  End RemoveHandler
  RaiseEvent(ByVal sender As Object, ByVal e As EventArgs)
    For Each item As EventHandler In mNonSerializableHandlers
      item.Invoke(sender, e)
    Next
    For Each item As EventHandler In mSerializableHandlers
      item.Invoke(sender, e)
    Next
  End RaiseEvent
End Event

The end result is that we have declared an event that doesn’t cause problems with serialization, even if the target of the event isn’t serializable.

This is better than today’s C# solution with the field: target on the attribute, because we maintain events for serializable target objects, and only block serialization of target objects that can’t be serialized.