File stream data can be used from the .NET Framework using the traditional SqlParameter, but there is also a specialized class called SqlFileStream, available in .NET Framework 3.5 SP1 or later. This class provides additional mechanisms, such as seeking to a specific position within the data.

There are pros and cons to this approach. The backup and transactional issues, along with the performance considerations, all have to be evaluated against your specific system requirements. Having the SQL Server engine manage the database relationship to the binary files seems like a big advantage over maintaining flat files yourself.

Jeremy’s Graceful Shutdown Braindump should really include another use case. How do you create a .NET application that never shuts down? Ever!

This is a common scenario for closed systems that only allow the user to interact with a predefined set of applications. In other words, the user is never able to utilize any of the operating system functionality. In particular, they cannot install new applications or update any software components.

This situation is related to the issues discussed in Medical Device Software on Shared Computers. Creating a closed Windows-based system is not an easy task. For our XP Embedded system here are some of the considerations:

Prevent booting from a peripheral device (CD-ROM, USB stick, etc.)

Prevent access to the BIOS so that the boot-device restriction above is enforced.

Prevent plug-n-play devices from auto-starting installers.

Do not run Explorer as the start-up shell — no desktop or Start menu.

Prevent Ctrl-Alt-Del from activating task manager options.

Disable the Alt-Tab selection window so the user cannot switch application focus.

Ensure that the primary user interface application is always running.

All UI components must exit without user interaction when the system is powered down.

One of the challenges for .NET applications is how to handle unexpected exceptions. What you need first is a way to catch all exceptions. OK, so now you know your program is in serious distress. You may be able to recover some work (a la a “graceful shutdown”), but after that it’s not a good idea to keep the application running.
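Wiring up that catch-all can be sketched as follows. This is a minimal sketch of the general technique, not the post's actual code; the class and method names are my own placeholders.

```csharp
using System;

static class CrashGuard
{
    // Formats a one-line report from an unhandled exception event.
    // Placeholder helper; the original post does not show its handler.
    public static string Describe(UnhandledExceptionEventArgs e)
    {
        var ex = e.ExceptionObject as Exception;
        string message = (ex != null) ? ex.Message : "unknown error";
        return string.Format("Unhandled exception (terminating={0}): {1}",
                             e.IsTerminating, message);
    }

    // Call once at startup, before Application.Run().
    public static void Install()
    {
        // Catches otherwise-unhandled exceptions from any thread
        // in the AppDomain.
        AppDomain.CurrentDomain.UnhandledException +=
            (sender, e) => Console.Error.WriteLine(Describe(e));

        // In a WinForms app you would also hook UI-thread exceptions:
        //   Application.ThreadException += ...;
        //   Application.SetUnhandledExceptionMode(
        //       UnhandledExceptionMode.CatchException);
    }
}
```

Once the handler fires you know the program is in distress, which leads to the restart problem below.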

That means you have to restart the program. For a WinForm application one option is:

C#

Application.Restart();

Application.Restart() essentially calls Application.Exit(), which tries to gracefully shut down all UI threads. The problem with that is the application may appear to be hung if you have background worker threads monitoring hardware devices that are not currently responding.

Another issue arises when the .NET application is doing interop with COM components. I’ve seen situations where all of the managed threads appear to exit properly via Application.Exit(), but an unmanaged exception (and error window) still occurs. This behavior is unacceptable.

The way to ensure that the application restarts properly (simplified):

C#

Process.Start(Application.ExecutablePath);
Environment.Exit(0);

The Environment.Exit() call is harsh, but it is the only way I know of that guarantees that the application really exits. If you want a Windows Application event log entry and a dump of your application, you can use Environment.FailFast() instead.

UPDATE (9/19/09): I ran across a post about COM object memory allocation in mixed managed/unmanaged environments: Getting IUnknown from __ComObject. As this article exemplifies, debugging COM objects under these circumstances is a real pain in the butt. We used strongly-typed managed wrappers for our COM objects. Besides a .NET memory profiler, we just monitored overall allocations externally with Process Explorer. It may be undocumented and fragile, but at least it’s good to know that there is a way to dig deeper if you need to.

It’s not easy getting your arms around this one. The term Cloud Computing has become a catch-all for a number of related technologies that have been used in enterprise-class systems for many years (e.g. grid computing, SOA, virtualization, etc.).

One of the primary concerns about cloud computing in Healthcare IT is privacy and security. The majority of the content and comments in just about every article or blog post about cloud computing — health-data related or not — deals with these concerns. I’m going to save that discussion for a future post.

I’m also not going to dig into the multitude of business and technical trade-offs of these “cloud” options versus more traditional SaaS and other hybrid server approaches. People write books about this stuff, and there’s a flood of Internet content that slices and dices these subjects to death.

My purpose here is to provide an overview of cloud computing from a developer’s point of view so we can begin to understand what it would take to implement custom software in the cloud. All of the major technical aspects are well covered elsewhere and I’m not going to repeat them here. I’m just going to note the things that I think are important to take into consideration when looking at each option.

Here’s a simplified definition of Cloud Computing that’s easy to understand and will get us started:

Cloud computing is using the internet to access someone else’s software running on someone else’s hardware in someone else’s data center while paying only for what you use.

As a consumer — of a social networking site or a PHR, let’s say — this definition fits pretty well. There’s even an EMR implemented in the cloud, Practice Fusion, that would fit this definition.

As a developer though, I want it to be my software running in the cloud so I can make use of someone else’s infrastructure in a cost effective manner. There are currently three major CC options. Cloud Options – Amazon, Google, & Microsoft gives a good overview of these.

The Amazon development model involves building Xen virtual machine images that are run in the cloud by EC2. That means you build your own Linux/Unix or Windows operating system image and upload it to be run in EC2. AWS has many pre-configured images that you can start with and customize to your needs. There are web service APIs (via WSDL) for the additional support services like S3, SimpleDB, and SQS. Because you are building self-contained OS images, you are responsible for your own development and deployment tools.

AWS is the most mature of the CC options. Applications that require the processing of huge amounts of data can make effective use of the AWS on-demand EC2 instances, which are managed by Hadoop.

If you have previous virtual machine experience (e.g. with Microsoft Virtual PC 2007 or VirtualBox) one of the main differences working with EC2 images is that they do not provide persistent storage. The EC2 instances have anywhere from 160 GB to 1.7 TB of attached storage but it disappears as soon as the instance is shut down. If you want to save data you have to use S3, SimpleDB, or your own remote storage server.

It seems to me that having to manage OS images along with applications development could be burdensome. On the other hand, having complete control over your operating environment gives you maximum flexibility.

GAE allows you to run Python/Django web applications in the cloud. Google provides a set of development tools for this purpose, i.e. you can develop your application within the GAE run-time environment on your local system and deploy it after it’s been debugged and is working the way you want.

Google provides entity-based SQL-like (GQL) back-end data storage on their scalable infrastructure (BigTable) that will support very large data sets. Integration with Google Accounts allows for simplified user authentication.

From the GAE web site: “This is a preview release of Google App Engine. For now, applications are restricted to the free quota limits.”

Azure is essentially a Windows OS running in the cloud. You are effectively uploading and running your ASP.NET (IIS7) or .NET (3.5) application. Microsoft provides tight integration of Azure development directly into Visual Studio 2008.

For enterprise Microsoft developers the .NET Services and SQL Data Services (SDS) will make Azure a very attractive option. The Live Framework provides a resource model that includes access to the Microsoft Live Mesh services.

Bottom line for Azure: If you’re already a .NET programmer, Microsoft is creating a very comfortable path for you to migrate to their cloud.

Getting Started

All three companies make it pretty easy to get software up and running in the cloud. The documentation is generally good, and each has a quick start tutorial to get you going. I tried out the Google App Engine tutorial and had Bob in the Clouds on their server in about 30 minutes.

Stop by and sign my cloud guest book!

Misc. Notes:

All three systems have Web portal tools for managing and monitoring uploaded applications.

Which is Best for You?

One of the first things that struck me about these options is how different they all are. Because of this, from a developer’s point-of-view I think you’ll quickly have a gut feeling about which one best matches your current skill sets and project requirements. The development components are just one piece of the selection process puzzle though. Which one you actually might end up using (it could very well be none) will also be based on all your other technical and business needs.

UPDATE (6/23/09): Here’s a good high-level cloud computing discussion: Reflections on Executive Briefing Event: Cloud & RIA. I like the phrase “Cloud Computing is Elastic” because it captures most of the appealing aspects of the technology. It’s no wonder Amazon latched on to that one — EC2.

Is a picture of a computer program really worth a 1000 words? HP seems to think so.
Making Sense of Spaghetti Code discusses the visual representation of source code as a marketing tool for their consulting services. Being from California, I find the budget-cutting complaint:

because there aren’t enough programmers who know the Cobol language used in the state’s payroll software

is pretty scary. The urban (programming) myth that there are still more lines of Cobol in use than any other language may actually be true. Scarier still!

The VS.NET Settings designer creates a Settings class derived from ApplicationSettingsBase in Settings.Designer.cs (and optionally Settings.cs). The default values from the designer are saved in app.config and are loaded into the Settings.Default singleton at runtime.

So, now you have a button on a properties page that says ‘Reset to Factory Defaults’ where you want to reload the designer default values back into your properties. If you want to do this for all property values you can just use Settings.Default.Reset(). But what if you only want to reset a subset of your properties?

There may be a better way to do this, but I couldn’t find one. The following code does the job and will hopefully save someone from having to reinvent this wheel.

The ResetToFactoryDefaults method takes a collection of SettingsProperty objects and uses the DefaultValue string to reset each value. Most value types (string, int, bool, etc.) work with the TypeConverter, but the StringCollection class is not supported, so its XML string has to be deserialized manually.
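The core conversion described above can be sketched like this. The original post's ResetToFactoryDefaults body isn't reproduced in this excerpt, so this is my own sketch of the described approach; the method and parameter names are assumptions.

```csharp
using System;
using System.Collections.Specialized;
using System.ComponentModel;
using System.Configuration;
using System.IO;
using System.Xml.Serialization;

static class SettingsDefaults
{
    // Converts a property's designer DefaultValue string back into a
    // typed value, per the approach described above.
    public static object ConvertDefault(SettingsProperty property)
    {
        string raw = (string)property.DefaultValue;

        // StringCollection defaults are stored as serialized XML
        // (ArrayOfString), which TypeConverter can't handle, so
        // deserialize them manually.
        if (property.PropertyType == typeof(StringCollection))
        {
            var serializer = new XmlSerializer(typeof(StringCollection));
            using (var reader = new StringReader(raw))
                return serializer.Deserialize(reader);
        }

        // Simple value types (string, int, bool, etc.) round-trip
        // through their TypeConverter.
        TypeConverter converter =
            TypeDescriptor.GetConverter(property.PropertyType);
        return converter.ConvertFromInvariantString(raw);
    }

    // Resets each listed property on the given settings object; the
    // post's version presumably writes to Properties.Settings.Default.
    public static void ResetToFactoryDefaults(
        SettingsPropertyCollection col, ApplicationSettingsBase settings)
    {
        foreach (SettingsProperty property in col)
            settings[property.Name] = ConvertDefault(property);
    }
}
```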

These helper methods show how just selected (and all) properties can be reset.

C#

// Reset only the date and time format defaults.
//
public static void SetDateTimeFormatDefaults()
{
    SettingsPropertyCollection col = new SettingsPropertyCollection();
    col.Add(Properties.Settings.Default.Properties["DateFormat"]);
    col.Add(Properties.Settings.Default.Properties["TimeFormat"]);
    ResetToFactoryDefaults(col);
}

// Resets ALL properties to their designer default settings.
// Same as Settings.Default.Reset()
//
public static void ResetToFactoryDefaults()
{
    ResetToFactoryDefaults(Properties.Settings.Default.Properties);
}

This code was developed with VS 2005, but should also work in VS 2008.

We ran into an interesting problem recently that I have not been able to find documented anywhere.

We’re doing real-time USB data acquisition with .NET 2.0. The data bandwidth and processing isn’t overwhelming. Specifically, we expect data packets at 50 Hz — every 20 ms. Yet we were having horrible delay problems. In the end we found that Console.WriteLine was the culprit!

To verify this a test program was written to measure the throughput of the following loop:

C#

while (m_working)
{
    Console.WriteLine(string.Format("Message {0} [{1}]", cnt++, mstr));
}

The length of mstr is varied and this loop is run for about 30 seconds. The results show the ms per message for increasing message lengths:

Console.WriteLine is surprisingly slow!

We use log4net for our logging. With the timestamp and class information, a log message is typically greater than 100 characters. A single log message introduces at least a 20 ms delay in that case, with additional messages adding that much more. Even though debug logging would not be included in the released version, these significant delays make development difficult.

Not only do you need to make sure that there are no Console.WriteLine calls in your real-time threads, you also need to remove the console appender (<appender-ref ref="ConsoleAppender"/>) from the log4net configuration. The log4net file appenders do not cause noticeable delays.
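For reference, a trimmed log4net configuration along those lines might look like this. The appender name and pattern are illustrative, not from our actual configuration.

```xml
<log4net>
  <appender name="FileAppender" type="log4net.Appender.RollingFileAppender">
    <file value="app.log" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%date [%thread] %-5level %logger - %message%newline" />
    </layout>
  </appender>
  <root>
    <level value="DEBUG" />
    <!-- No <appender-ref ref="ConsoleAppender"/> here: the console
         appender is what introduces the real-time delays. -->
    <appender-ref ref="FileAppender" />
  </root>
</log4net>
```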

like-minded individuals within the larger world of the Microsoft® .NET Framework who felt a growing frustration that Microsoft tooling, guidance, and .NET culture at large did not reflect or support an important set of core values.

The name is misleading because even though most members are from the .NET community, the group’s purpose is to promote a set of core values that are platform/language independent. To summarize from Jeremy’s article:

Keeping an eye out for a better way.

Adopting the best of any community.

Not being content with the status quo — experimenting with techniques.

It’s the principles and knowledge that really matter.

The members of the ALT.NET group are distinguished technologists and many are productive bloggers, e.g. codebetter.com and Ayende@Rahien. Also, the discussion group altdotnet is very active (over 6200 posts since the beginning of the year) and lively. There are also periodic group meetings (see the ALT.NET site for links) that use Open Space Technology (OST) to organize conference agendas. Check out the interesting videos (by David Laribee) from the recent conference in Seattle.

So why are ALT.NETters not like the rest of us? We’re experienced developers that use modern tools and techniques, but we:

This list could go on and on. Many have never used an ORM or the MVC design pattern either. The point isn’t what we know versus what they know. I’ve talked about Stereotyping Programmers before and how it’s just plain bad. I think the ALT.NET community has made a conscious effort to improve their inclusiveness.

The ALT.NET group is certainly on the cutting edge of useful and innovative software technologies and techniques. We may not understand everything they’re talking about, but the conversation is well worth listening to. Someday you may be faced with a challenge that will need just the type of solutions they’ve been discussing.

I’m not a computer scientist. I’m also not one of the many über programmers that create and analyze software frameworks and techniques. I simply design and develop software that attempts to meet my customer’s needs. To that end I’m always looking for the best tools available to get the job done.

I know many people blow off design patterns as ivory tower twaddle and silly jargon, but I think they’re very important in regards to designing user interface code. Design patterns give us a common vocabulary that we can use in design discussions. A study of patterns opens up the accumulated wisdom of developers who have come before us.

My opinion: You don’t need to be a rocket scientist to understand design patterns. Most are just common sense. Complex patterns are designed to solve complex problems. Design patterns should be thought of as a tool that you use just like any other. Don’t let the ‘ivory tower twaddle’ scare you away.

I think most people would agree that one of the key components to creating a successful software product is quality. I’ve developed .NET applications in the past and have experienced the difficulty of testing and maintaining the functionality of Winform forms and components when they are created with the default Visual Studio tools. If you’re not careful, here’s what you end up with:

I should note here that the development of software for medical devices already has rigorous verification and validation processes to ensure quality. See FDA Good Manufacturing Practice (GMP – Quality System Regulation) subpart C–Design Control (§ 820.30 sections f & g). However, these requirements do not preclude the need for development techniques that make the software more robust and maintainable. On the contrary, the higher the software quality, the easier it is to meet these standards.

I’ve recently spent some time trying to select a GUI architecture that will allow us to create a more robust unit testing environment. This is why I started looking at Model-View-Controller (MVC) and Model-View-Presenter (MVP) design patterns. The need for these types of design patterns is twofold:

There are many articles and blog posts that describe MVC, MVP, and their numerous variations. These techniques have been around for many years, but the current cornerstone comes from these Martin Fowler articles:

Once you understand these concepts you can start to grasp the trade-offs of all of the available MVC/MVP flavors. If you’re like me, one of the problems you’ll run into is that there are so many different approaches (and opinions) that you’ll be left wondering which is best to implement. The only advice you’ll get is that it’s a matter of choice. Great, thanks! From the article above, Josh puts it best:

If you put ten software architects into a room and have them discuss what the Model-View-Controller pattern is, you will end up with twelve different opinions.

This is when you turn from the theory and start looking for concrete implementations that might be suitable for your situation. Microsoft has released an ASP.NET MVC Framework as part of VS2008, but all of the Winform code samples I found were part of either blog posts or articles.

As you look at the different implementations (and relevant snippets), you quickly realize that following these design patterns requires significantly more work than just adding your application’s logic directly to the IDE generated delegates. The additional work is expected and is the trade-off for improved testability.

That’s fine, and worth it, but it’s still time and money. We do not have the resources, or experience, to undertake a full Test-Driven Development (TDD) effort. We will implement MVC/MVP on only the displays that we feel are the most vulnerable.

I’m not going to list all of the candidate examples I looked at. I will mention that Jeremy’s series of articles (here) dig deep into these issues and have lots of good code examples. Each approach has their pros and cons, just like the one I’ll present here. We’ll try to use it, but may end up with something else in the end. As we become more experienced, I suspect we’ll evolve into a customized solution that best meets our needs.

This hybrid approach appealed to me for a couple of reasons. The first is that I spent several years doing Swing development, which uses an MVC architecture that also allows multiple simultaneous views of a single model. I also like the event-driven approach, which is not only heavily used in Java, but is also well supported in .NET. In any case, the View is passive and all of the important functional logic is centralized in the Controller class, which can be easily tested with real or mock Model and View objects.
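The passive-view arrangement can be sketched like this. The type names are my own for illustration, not from Matthew's article; the mock view stands in for a real WinForms Form during unit tests.

```csharp
using System;
using System.Globalization;

// The view interface exposes only what the controller needs; the real
// WinForms Form implements it by delegating to its controls.
public interface ITemperatureView
{
    string Display { get; set; }
    event EventHandler RefreshRequested;
}

// The model holds state and knows nothing about the UI.
public class TemperatureModel
{
    public double Celsius { get; set; }
}

// All functional logic lives in the controller, so it can be unit
// tested without any real UI.
public class TemperatureController
{
    private readonly TemperatureModel _model;
    private readonly ITemperatureView _view;

    public TemperatureController(TemperatureModel model, ITemperatureView view)
    {
        _model = model;
        _view = view;
        _view.RefreshRequested += (s, e) => UpdateView();
    }

    public void UpdateView()
    {
        _view.Display = string.Format(
            CultureInfo.InvariantCulture, "{0:F1} C", _model.Celsius);
    }
}

// A mock view for tests: records what the controller pushed to it.
public class MockTemperatureView : ITemperatureView
{
    public string Display { get; set; }
    public event EventHandler RefreshRequested;

    public void RaiseRefresh()
    {
        var handler = RefreshRequested;
        if (handler != null) handler(this, EventArgs.Empty);
    }
}
```

Because the controller only sees the interface, a test can raise RaiseRefresh() on the mock and assert on Display without touching any WinForms code.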

Matthew has done a good job of providing supporting generic classes that make implementation somewhat less cumbersome. The MvcControlBase class provides generic Control-View wiring while ChangeRequestEvents manages all events in a single class.

The project download provided by the article is a VS2008 solution. We’re still using VS2005, but I was able to back-port the project to VS2005 with only minor modifications that had no effect on functionality. The VS2005 project is available for download here:

I see adoption of the MVC/MVP methodology for GUI development as a critical component for improvement in software reliability, quality, and long-term maintainability. Also, structuring the application side with MVC/MVP is only half the battle. Developing an effective testing strategy must go along with it in order to achieve these objectives. Until Microsoft provides an integrated Winforms MVC solution like they did for ASP.NET, we’ll just have to continue to roll our own.

I’d like to hear about your experiences, suggestions, and recommendations on any of these topics.

I have never used BizTalk and had little knowledge of its capabilities going in. BizTalk reference material and articles can be found in numerous places on the web — a good summary is Introducing BizTalk Server 2006 R2 (pdf).

My major take-aways from the presentation were:

BizTalk is an enterprise-class product — i.e. a heavyweight solution designed to scale for very large business needs (global reach, high throughput, tight control and policies, high reliability).

As such, the learning curve is steep.

BizTalk uses a message oriented architecture designed to connect disparate systems of all types.

Microsoft SQL Server is used as the back-end database and is very tightly bound to BizTalk functionality and performance. The message persistence capability of the Message Box is a powerful built-in tool.

The Microsoft Enterprise Service Bus (ESB) Guidance further uses BizTalk to support a loosely coupled messaging architecture.

BizTalk will also be a key component of Microsoft’s new service-oriented architecture (SOA) framework called Oslo.

Because of its message handling architecture it’s easy to see how HL7 translation and routing could be accomplished. Microsoft provides accelerators (pre-defined schema, orchestration samples, etc.) for HL7 and HIPAA for this purpose.

It’s not hard to understand the importance of BizTalk in the larger Enterprise space. It appears to be benefiting from its years of prior experience and continued integration with other evolving Microsoft technologies. Overall, I was very impressed with BizTalk.

A Note on Special Interest Groups

I’m not only lucky to have a SIG like this in the area, but it’s also great to have people as knowledgeable (and friendly) as Brian and Chris running it. Great job guys!

I would encourage everyone to seek out and attend their local user/developer group meetings. Don’t just go for the free pizza (which usually isn’t that good anyway) — it’s a great way to improve yourself both technically and professionally. You’ll also get to meet new people that have the same interests as you.

I think that getting exposure to technologies that you don’t use in your day-to-day work can be just as rewarding as becoming an expert in your own domain. Learning about cutting-edge software (or hardware) is exciting no matter what it is. That new knowledge and perspective also has the potential to lead you down roads that you might not have considered otherwise.