You would think that no code outside the business class could call the set block, because it is private. Certainly in .NET this would appear as a read-only property to any code outside the class.

But in Silverlight, data binding is perfectly capable of calling this code. Worse, the reflection PropertyInfo object for this property returns true for CanWrite, so this appears as a read-write property to any code.
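To make this concrete, here's a minimal sketch of the kind of property involved. The `Customer` and `Name` identifiers are hypothetical names for illustration, not the original code:

```csharp
using System;
using System.Reflection;

public class Customer
{
    // Public getter, private setter - intended to be read-only
    // to any code outside this class.
    public string Name { get; private set; }

    public Customer(string name)
    {
        Name = name; // legal: we're inside the class
    }
}

public static class Program
{
    public static void Main()
    {
        var c = new Customer("Acme");

        // c.Name = "test"; // compile-time error out here: the setter is private

        PropertyInfo pi = typeof(Customer).GetProperty("Name");

        // CanWrite only checks that a set accessor exists, not whether
        // the caller is allowed to reach it - so it reports true even
        // though the setter is private.
        Console.WriteLine(pi.CanWrite); // True

        // Under Silverlight's reflection rules, pi.SetValue(c, "x", null)
        // would fail here with MethodAccessException.
    }
}
```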

I don’t think the problem is in the C# compiler, because if I write code that tries to set the property I get a compile-time error saying that’s not allowed.

Also, it isn’t a problem with reflection, because trying to set the value using the SetValue() method of a PropertyInfo object fails with the expected MethodAccessException (Silverlight reflection doesn’t allow you to manipulate private members). This, in particular, is weird, because if CanWrite returns true it should be safe to write to a property…

Update: I just checked .NET (being the suspicious sort) and it turns out that CanWrite returns true for a private set block in .NET too. And of course reflection won't set the property due to the scope issue, just like in Silverlight. But WPF data binding also doesn't call the private set block, whereas Silverlight somehow cheats and does call it – so at least the scope of the issue is narrowed a bit.

Which raises the question: how is data binding bypassing the normal property scope protections so it can manipulate a private member? With the next question being: how can I do it too? :)

Seriously, this is the first time I’ve found a place where Microsoft has code in Silverlight that totally bypasses the otherwise strict rules, and it is a bit worrisome (and a pain in the @$$).

Anyway, the workaround at the moment is to use the old-fashioned approach and create a separate mutator method:
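A sketch of that workaround, again using the hypothetical `Customer`/`Name` names rather than the original code:

```csharp
using System;

public class Customer
{
    // No set block at all, so the property is truly read-only
    // to everyone - data binding included.
    private string _name;
    public string Name
    {
        get { return _name; }
    }

    // Code inside the class must call this non-standard mutator
    // method instead of a normal property setter. (A real business
    // class would also raise PropertyChanged here.)
    private void SetName(string name)
    {
        _name = name;
    }

    public Customer(string name)
    {
        SetName(name);
    }
}

public static class Program
{
    public static void Main()
    {
        var c = new Customer("Acme");
        Console.WriteLine(c.Name); // Acme

        // With no set accessor, reflection now agrees the
        // property is read-only.
        Console.WriteLine(typeof(Customer).GetProperty("Name").CanWrite); // False
    }
}
```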

This works, because when there’s no set block at all the property is actually read-only to everyone, inside and outside the class. Kind of inconvenient for the other code inside the business class that wants to set the property, because that code must use this non-standard mutator method – but at least it works…

For some years now there’s been this background hum about “domain specific languages”, or DSLs. Whether portrayed as graphical cartoon-to-code tools or as specialized textual constructs, DSLs are programming languages designed to abstract the concepts around a specific problem domain.

A couple weeks ago I delivered the “Lap around Oslo” talk for the MDC in Minneapolis. That was fun, because I got to demonstrate MSchema (a DSL for creating SQL tables and inserting data), and show how it can be used as a building-block to create a programming language that looks like this:

“Moving Pictures” by “Rush” is awesome!

It is hard to believe that this sentence is constructed using a programming language, but that’s the power of a DSL. That’s just fun!

I also got to demonstrate MService, a DSL for creating WCF services implemented using Windows Workflow (WF). The entire thing is textual, even the workflow definition. There’s no XML, no C#-class-as-a-service, nothing awkward at all. The entire thing is constructed using terms you’d use when speaking aloud about building a service.

A couple years ago I had a discussion with some guys in Microsoft’s connected systems division. My contention in that conversation was that a language truly centered around services/SOA would have a first-class construct for a service, and that C# and VB are poor languages for this because we have to use a class and fake the service construct through inheritance or interfaces.

This MService DSL is exactly the kind of thing I was talking about, and in this regard DSLs are very cool!

They are fun. They are cool. So why might DSLs be a bad idea?

If you’ve been in the industry long enough, you may have encountered companies who built their enterprise systems on in-house languages. Often a variation on Basic, C or some other common language. Some hot-shot developer decided none of the existing languages at the time could quite fit the bill. Or some adventurous business person didn’t want the vendor lock-in that came with using VendorX’s compiler. So these companies built their own languages (usually interpreted, sometimes compiled). And they built entire enterprise systems on this one-off language.

As a consultant through the 1990’s, I encountered a number of these companies. You might think this was rare, but it was not all that rare – surprising but true. They were all in a bad spot, having a lot of software built on a language and tools that were known by absolutely no one outside that company. To hire a programmer, they had to con someone into learning a set of totally dead-end skills. And if a programmer left, the company not only lost domain knowledge, but a very large percentage of the global population of programmers who knew their technology.

How does this relate to DSLs?

Imagine you work at a bio-medical manufacturing company. Suppose some hot-shot developer falls in love with Microsoft’s M language and creates this really awesome programming language for bio-medical manufacturing software development. A language that abstracts concepts, and allows developers to write a line of this DSL instead of a page of C#. Suppose the company loves this DSL, and it spreads through all the enterprise systems.

Then fast-forward maybe 5 years. Now this company has major enterprise systems written in a language known only by the people who work there. To hire a programmer, they need to con someone into learning a set of totally dead-end skills. And if a programmer leaves, the company has not only lost domain knowledge, but also one of the few people in the world who know this DSL.

To me, this is the dark side of the DSL movement.

It is one thing for a vendor like Microsoft to use M to create a limited set of globally standard DSLs for things like creating services, configuring servers or other broad tasks. It is a whole different issue for individual companies to invent their own one-off languages.

Sure, DSLs can provide amazing levels of abstraction, and thus productivity. But that doesn’t come for free. I suspect this will become a major issue over the next decade, as tools like Oslo and related DSL concepts work their way into the mainstream.

One of my kid’s machines just died – hard drive crash. In the past, this has been a pain, because I’d have to reinstall the OS (including finding and installing all the drivers) and he’d have to reinstall all his games, find the keys, all that stuff. It could literally take days or weeks to get the computer back to normal.

However, a few months ago I picked up the HP Windows Home Server appliance. It does regular (at least weekly, if not daily) automatic image backups of all the machines in my house. I bought it because a couple colleagues of mine had machines crash and they were singing the praises of WHS in terms of getting themselves back online quickly and easily.

I am now officially joining the chorus!

Here’s what I did: pop out the bad hard drive and put in an empty new one, boot off the system rescue CD, walk through a simple wizard, wait 90 minutes for the restore, and that’s it – he’s totally up and running as though nothing happened. Better, actually, because this new hard drive is 3x bigger than the original – WHS simply restored to the new, bigger drive without a complaint.

I guess it can be a little more complex with nonstandard network or hard drive drivers (How to restore a PC from a WHS after hard drive fails), but even that doesn’t look too bad. But in my case, WHS found the hard drive and network card automatically, so it was a total no-brainer.

The thing is, I’m not used to computers acting like or being like an appliance. But the HP WHS box really is an appliance – the kind of thing a regular home user could install. The machine comes with a fold-out instruction poster. 6 steps to install (things like “plug in power”, “plug in network”, “push on button”, etc). And it does these automatic backups, in a way where it deals with increasing volumes of data by warning you BEFORE the server runs out of space (unlike Vista’s built-in backup, which is terrible).

Start running out of space? Just pop in a new hard drive – without even shutting down the server. I’ve added two since I got the box. All PCs should work this way!!

The backups appear to be very smart. I’m backing up numerous machines, and the total backups use less space than all the backed-up content added together. I assume WHS is using compression, but I also think it is doing smart things like not backing up Windows XP and Vista again for each machine, because those files are the same across numerous machines. As are many of the games my kids and I play.

What’s even better is that WHS does video and audio streaming. I’ve been putting all our media on the box, and watching it from the Xbox or media PC in other rooms.

There are more features, but I don’t want to sound like a spec sheet.

The point is that I’ve been entirely impressed by the simplicity and consumer-friendliness of this product since I took it out of the box (did I mention it is a really nice-looking mini-tower?). The fact that the computer restore feature works exactly as advertised is just further confirmation that it was a great purchase.

I seriously think that every home that has one or more computers with any data that shouldn’t be lost needs a WHS. Yes, that probably means you! ;)

If you read the comments on my blog at all, you may have periodically run across a multi-page diatribe from a guy calling himself “Rich” or “Tony”. This guy continually reposts the same comment on my blog (among others). The comment, at first glance, appears to be meaningful and valid – it discusses the merits of the Strangeloop AppScaler, a great product that is provided by the company of a friend of mine.

I assume this spammer is somehow disgruntled with Strangeloop, has a personal vendetta against my friend, wants to give Aussies a bad name, or is simply unhinged. Probably a combination of these.

Anyway, I keep deleting the spam. He keeps adding more. While it is a small thing in the scheme of things, there’s not a lot I can do directly, other than providing as much information as possible (including IP addresses) to the Australian authorities (his IP addresses are consistently Australian).

So I figured I’d just let the world at large know that this person exists, so that if you run across these comments, or if you are another blogger targeted by this criminal, you know what is going on.

And if you are another blogger who’s been targeted, please contact me, and I’ll help you get in touch with the people investigating the issue so you can help provide tracking information. I know they’ve already narrowed the search dramatically, and I’m sure continued trace information will help identify the person so they can take care of him.

And if you are the spammer, and I’m sure he’ll read this at some point, all I can say is: please get a life.

Or to put it another way, it is about making a set of high-level, often difficult, choices up front. The result of those choices is to restrict the options available for the design and construction of a system, because the choices place a set of constraints around what is allowed.

When it comes to working on a platform like Microsoft .NET, architecture is critical. This is because the platform provides many ways to design and implement nearly anything you’d like to do. There are around 9 ways to talk to a database – from Microsoft alone, not counting the myriad 3rd-party options – and the number of ways to build web apps continues to grow. The point I’m making is that if you just throw the entire .NET framework at a dev group you’ll get a largely random result that may or may not actually meet the short, medium and long-term needs of your business.

Developing an architecture first allows you to rationally evaluate the various options, discard those that don’t fit the business and application requirements and only allow use of those that do meet the needs.

An interesting side-effect of this process is that your developers may disagree. They may only see short-term issues, or purely technical concerns, and may not understand some of the medium/long term issues or broader business concerns. And that’s OK. You can either say “buck up and do what you are told”, or you can try to educate them on the business issues (recognizing that not all devs are particularly business-savvy). But in the end, you do need some level of buy-in from the devs or they’ll work against the architecture, often to the detriment of the overall system.

Another interesting side-effect of this process is that an ill-informed or disconnected architect might create an architecture that is really quite impractical. In other words, the devs are right to be up in arms. This can also lead to disaster. I’ve heard it said that architects shouldn’t code, or can’t code. If your application architect can’t code, they are in trouble, and your application probably is too. On the other hand, if they don’t know every nuance of the C# compiler, that’s probably good too! A good architect can’t afford to be that deep into any given tool, because they need more breadth than a hard-core coder can achieve.

Architects live in the intersection between business and technology.

As such they need to be able to code, and to have productive meetings with business stakeholders – often both in the same day. Worse, they need to have some understanding of all the technology options available from their platform – and the Microsoft platform is massive and complex.

Which brings me back to the Application Architecture Guide. This guide won’t solve all the challenges I’m discussing. But it is an invaluable tool in any .NET application architect’s arsenal. If you are, or would like to be, an architect or application designer, you really must read this book!

As one of the co-chairs for this conference, I’m really pleased with the speaker and topic line-up. In my view, this is one of the best sets of content and speakers VS Live has ever put forward.

Even better, this VS Live includes the MSDN Developer Conference (MDC) content. So you can get a recap of the Microsoft PDC, with all its forward-looking content, and then enjoy the core of VS Live with its pragmatic and independent information about how you can be productive using Microsoft’s technologies today, and into the future.

Perhaps best of all are the keynotes. Day one starts with a keynote containing secret content. Seriously – we can’t talk about it, and Microsoft isn’t saying – but it is going to be really cool. The kind of thing you’ll want to hear in person so you can say “I was there when...”. And Day two contains some of the best WPF/XAML apps on the planet. We’re talking about apps that not only show off the technology, but show how the technology can be used in ways that will literally change the world – no exaggeration! Truly inspirational stuff, on both a personal and professional level!

I know travel budgets are tight, and the economy is rough. All I can say is that you should seriously consider VS Live SF if you have the option. I think you’ll thank me later.

I got this link from another blog I just discovered, The Morning Brew. I’m not sure why I only found out about the Brew now, because it appears to be the perfect resource, for me at least.

I mostly quit reading blogs about 6 months ago – I discovered that reading blogs had largely supplanted doing real work, and that could only lead to really bad results!!

Interestingly, I also mostly quit listening to any new music about 6 months ago. Browsing through the Zune.net catalog for interesting music had largely supplanted being entertained by music (there’s a lot of crap music out there, and I got tired of listening to it).

Is there a parallel here? I think so.

In October (or so), Zune.net introduced a Pandora-like feature where the Zune service creates a “virtual radio station” based on the music you listen to most. This is exactly what I want! I want a DJ (virtual or otherwise) who filters out the crap and provides me with good (and often new) music. The whole Channels idea in Zune.net kept me from canceling my subscription, and the more recent addition of 10 track purchases per month for free clinched the deal.

And I want the same thing for blogs. I don’t want to filter through tons of blog posts about good restaurants, pictures of kids or whatever else. (I’m sure those are interesting to some people, but I read blogs for technical content, and only read friends’ blogs for non-technical content.) And this is where The Morning Brew comes in. It is like a “DJ” for .NET blogs. Perfect!!

I should point out that on Twitter I also subscribe to Silverlight News for the same reason: it is a pre-filtered set of quality links. Though I think I may start following it via RSS instead, because I prefer the RSS reading experience in Outlook to Twitter for this particular type of information.

In any case, I hope everyone reading this post has had a wonderful last couple weeks.

For the many of you who had a religious holiday in here, I hope it was fulfilling and you felt the full meaning of the holiday, and that it brought you closer to your god/goddess/gods/ultimate truth. If you had a secular holiday in here, I hope it was fulfilling and meaningful, preferably filled with family and fun and a warm sense of community.