Rants of an ADO.NET Dev

Category Archives: Rant

One of the questions I often hear when I tell someone that I work on ADO.NET is “Is that still alive?” And, while ADO may be long gone, ADO.NET is today one of the most popular ways to connect to SQL Server and is the underlying layer beneath many (if not all) .NET Object Relational Mappers (ORMs) like Entity Framework (EF). However, it is because of these ORMs that you never hear about ADO.NET – it is no longer the “new, sexy” thing (and we don’t have product names like “Magical Unicorn Edition”). This does not mean that ADO.NET is simply becoming a framework to build ORMs; the world is a bit more complicated than that…

One thing that ORMs do well is to allow developers to quickly and easily get a database up and running, and then to get access to the data that is sitting there. For instance, the tooling in EF allows you to quickly create a model, deploy that to a database, code some LINQ (with its IntelliSense goodness) to grab some data, and then manipulate that data as an object. ORMs also make prototyping and rapid development easier, as the model they generate can be updated from the database, and any changes in the database schema will then show as compiler errors, since the corresponding object in the model has now changed as well. One of the problems with EF, and ORMs in general, is that the extra layer of abstraction adds a performance cost. While EF has been working to reduce that cost, you can see from the graph below that hand-coded ADO.NET is vastly quicker – so, what to do?
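For context, the hand-coded path looks roughly like this – a minimal sketch, assuming a local SQL Server instance; the `Products` table and connection string are made up purely for illustration:

```csharp
using System;
using System.Data.SqlClient;

class Program
{
    static void Main()
    {
        // Hypothetical connection string -- substitute your own server/database.
        var connectionString = "Server=.;Database=Shop;Integrated Security=true";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT Id, Name FROM Products WHERE Price > @minPrice", connection))
        {
            // Parameterised to avoid SQL injection and to help plan caching.
            command.Parameters.AddWithValue("@minPrice", 10.00m);

            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0}: {1}",
                        reader.GetInt32(0), reader.GetString(1));
                }
            }
        }
    }
}
```

More typing than a LINQ query, certainly – but there is no change tracking, no materialisation layer and no query translation between you and the TDS stream, which is where the performance difference comes from.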

Opportunity Cost

As with any technology choice, you need to carefully consider what each option offers, and what each option costs. EF (and ORMs in general) offer quick and easy coding at the cost of the application’s performance. For some projects this may be fine – the developers may be too busy to consider hand-coding, or the cost of additional servers may be much less than that of finding, hiring and onboarding additional developers. Alternatively, ADO.NET may be chosen if the application is large enough that any percentage performance increase is a significant cost saving, or if the turnaround time for a single database transaction is critical and every microsecond counts. Just to emphasize the point – remember that ADO.NET is itself an abstraction over the TDS protocol. There may be developers for whom performance is so critical that they would rather hand-code TDS packets and communicate with SQL Server that way (which is entirely possible, since TDS is a documented protocol).

At the end of the day, you need to choose what makes sense for you and for your business plan – whether that means hand-coding ADO.NET, utilizing an ORM, or a mix of the two (like starting with an ORM and then hand-coding the performance-critical queries).
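The mixed approach can be sketched like this – EF for most of the code, with a hand-written store query for the hot path. `Post`, the `Posts` table and the `ObjectContext` usage here are hypothetical, shown against the EF 4 `ObjectContext` API (later versions expose a similar `SqlQuery` on `DbContext`):

```csharp
using System.Collections.Generic;
using System.Data.Objects; // EF 4 ObjectContext

// A plain class whose property names match the columns selected below;
// ExecuteStoreQuery maps columns to properties by name.
public class Post
{
    public int Id { get; set; }
    public string Title { get; set; }
}

public static class HotQueries
{
    // One performance-critical query, hand-coded in SQL, while the rest
    // of the application keeps using LINQ against the same context.
    public static IEnumerable<Post> RecentPosts(ObjectContext context, int count)
    {
        return context.ExecuteStoreQuery<Post>(
            "SELECT TOP ({0}) Id, Title FROM Posts ORDER BY PublishedOn DESC",
            count);
    }
}
```

You keep the productivity of the ORM everywhere it is “fast enough”, and pay the hand-coding cost only where measurement says it matters.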

Today I needed a testing machine for an ASP.NET MVC application that I am building, so I started up the Hyper-V server and created a new virtual machine. When it came to choosing the version of Server 2008 R2 to install, I saw the option for “Windows Web Server (Core Edition)” and thought, “Brilliant! A nice, light-weight version of Windows Server that won’t eat too much RAM from the host server” – and, unlike the 2008 Core Editions, the 2008 R2 Core Editions support .NET and ASP.NET (albeit a cut-down version).

And this is where things went downhill. As it turns out, SQL Server 2008 is not supported on the Core Editions; nor is the SQL Server 2008 R2 CTP. This is *very* surprising, as I can’t imagine anything in SQL Server that would require a UI (other than the management console – but that can be installed on another computer); and it also seems logical that any version of “Windows Web Server” should be able to support a proper, database-driven web application.

tl;dr: Browsers need better ways of handling large HTTP requests – possibly with an “upload monitor” (scroll to the last paragraph for a little more).

I think that we’ve all come across this before: you go to a website, select a file to upload, hit “Submit” and then wait forever, hoping that the file you’ve selected has actually made it to the server on the other side. Recently, while building an ASP.NET MVC application for a client, I decided that I wanted a better experience for my end users, especially since this application is hosted externally from the client, who is the primary user. As such, I didn’t want the client waiting and hoping that the documents and images they upload are actually reaching the other side.

So, to get past this issue, my first idea was to turn to AJAX. Using JavaScript made sense as it meant that no third-party plugins are required and that anyone with JavaScript disabled can still upload files (albeit with a worse interface – but anyone disabling JavaScript should expect that). Unfortunately, the AJAX file uploaders I found were mediocre at best – the percentages they reported were inaccurate (if they showed a percentage at all), the interfaces were pretty poor, and most didn’t support multiple-file uploads (such as selecting multiple files in the open-file dialog).

With a quick bit of searching, I found SWFUpload – which appeared to have all the features that I wanted: a nice interface, multiple file uploads and plenty of customisation. Having implemented SWFUpload in my application, I hit a major bug. As it turns out, SWFUpload is built on Flash, and Flash (being the wonderful Adobe application it is) only sends cookies from Internet Explorer, if at all. This meant that the uploading component would not share Firefox’s session, and so would be redirected to the “log in” page instead of uploading the file. The fix suggested by SWFUpload didn’t work (quite apart from the various security exploits it introduced), and implementing my own SessionIDManager (to permit session IDs in the query string) failed because ASP.NET’s SQL Membership Provider stores additional authentication data in a separate cookie.

Luckily enough, Silverlight supports sending the cookies from the browser it is hosted in. Hoping for a simple, drop-in solution, I was (again) severely disappointed. The Silverlight component I had chosen (Silverlight Multi File Uploader), like many others, sends the file in the body of the HTTP request (instead of putting it in a multipart/form-data encoded POST body) and chunks the upload. This was greatly frustrating, as I wanted to put the uploaded file straight into the database without placing it temporarily on the file system. After some hacking of the Silverlight component, I was able to strip out the chunking code, but attempting to have the request use multipart/form-data failed due to Silverlight’s unusual (but rather logical) way of doing HTTP requests: Silverlight relies on the browser to send the request and complete its headers (such as Content-Length). However, the Content-Length header is put before the Content-Type, and this causes IIS (and, most likely, every other web server) to ignore the POST variables. In short, I couldn’t have the file uploaded properly (i.e. appearing in the Request.Files collection), but I could live with putting it in the HTTP request body (and accessing it via the Request.InputStream stream).
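On the server side, that workaround ends up looking something like the sketch below – reading the raw request body and streaming it into the database. The `Documents` table, the `fileName` parameter and the connection string are all assumptions for illustration, not the actual application:

```csharp
using System.Data.SqlClient;
using System.IO;
using System.Web.Mvc;

public class UploadController : Controller
{
    [HttpPost]
    public ActionResult Upload(string fileName)
    {
        // The Silverlight component puts the file bytes directly in the
        // request body, so Request.Files is empty -- read the stream instead.
        byte[] data;
        using (var buffer = new MemoryStream())
        {
            var chunk = new byte[8192];
            int read;
            while ((read = Request.InputStream.Read(chunk, 0, chunk.Length)) > 0)
            {
                buffer.Write(chunk, 0, read);
            }
            data = buffer.ToArray();
        }

        // Straight into the database -- no temporary file on disk.
        using (var connection = new SqlConnection(
            "Server=.;Database=App;Integrated Security=true"))
        using (var command = new SqlCommand(
            "INSERT INTO Documents (Name, Content) VALUES (@name, @content)",
            connection))
        {
            command.Parameters.AddWithValue("@name", fileName);
            command.Parameters.AddWithValue("@content", data);
            connection.Open();
            command.ExecuteNonQuery();
        }

        return Content("OK");
    }
}
```

Not as tidy as model binding against `Request.Files`, but it keeps the file out of the file system, which was the whole point.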

But enough of the backstory – what is to be done? Well, it’s quite simple. The user’s browser should realise that it is uploading a large file and show the user how much of the file is complete (and, possibly, the speed and estimated time remaining). I find it curious that the user is presented with so much information about a download and yet next to nothing about an upload. When you think about it from a web developer’s point of view, the browser is the only place to implement this feature. Why? Because HTML is for content, CSS is for styling and JavaScript is for functionality. Of those three, JavaScript is the best candidate for implementing an upload “monitor”; but JavaScript has no information about the OS the browser is running on (nor should it), and so there is no way for the JavaScript to know how large the file is – which means the responsibility falls out of the web developer’s hands and into the browser’s.

First off, a bit of a disclaimer – I admit that I am using Visual Studio 2010 (Beta 2) and Office 2010 (Beta); as such, I shouldn’t expect all of their documentation to be complete or everything to be running bug-free.

With that said: Begin the rant!

Today I decided to attempt to build an Outlook 2010 add-in that would enable me to “block” particular authors on RSS feeds. The reason I wanted to do this is that I am subscribed to the TechNet, ASP.NET, MSDN and Windows Team combined feeds – these 4 “combined” feeds are a collation of quite a number of blogs by Microsoft. The unfortunate thing is that a number of the blogs are completely irrelevant to me, or in a foreign language. With my add-in I could then block the non-English and irrelevant authors without having to create a-hundred-and-one “rules”. So, what went wrong?

First off, there is no “RSS” class. RSS items are of type PostItem or, more specifically, have a MessageClass of IPM.Post.RSS. So while it is possible to tell that an individual post item is an RSS item (by doing a string comparison), it’s impossible to filter or check folders by type. This isn’t a *big* thing, but it certainly is very annoying.
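The string comparison in question looks something like this – a sketch from a VSTO add-in, where `item` would come from Outlook’s object model (the helper name is mine, not part of the API):

```csharp
using System;
using Outlook = Microsoft.Office.Interop.Outlook;

static class RssDetection
{
    // There is no dedicated RSS type, so the only way to spot an RSS
    // post is to inspect the MessageClass string on the PostItem.
    public static bool IsRssItem(object item)
    {
        var post = item as Outlook.PostItem;
        return post != null
            && post.MessageClass.StartsWith("IPM.Post.RSS",
                   StringComparison.OrdinalIgnoreCase);
    }
}
```

Workable for a single item, but it means every filter over a folder degenerates into walking the items and comparing strings.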

Don’t use the classes, use the interfaces. For every VBA class, there is a .NET interface AND a class; for instance, there is an Outlook.PostItem (interface) and an Outlook.PostItemClass (class). So which to use? Apparently the interface. The classes are “COM coclasses that are required for interoperability with the corresponding COM object” – which to me sounds like an infrastructure class, which is fine if it is private or internal, NOT public. Also, in case you didn’t notice, to add insult to injury there is a class with the word “Class” in its name to indicate that it is a class (and the interface isn’t prefixed with an “I” – but we’ll forgive that).
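In practice the distinction looks like this – a hedged sketch, with the method name being mine:

```csharp
using Outlook = Microsoft.Office.Interop.Outlook;

static class PostHelpers
{
    // Declare parameters, fields and locals as the interface (PostItem).
    public static string GetSubject(Outlook.PostItem post)
    {
        return post.Subject;
    }

    // By contrast, typing this parameter as Outlook.PostItemClass would
    // also compile, but couples the code to the generated COM coclass
    // (interop plumbing) rather than the documented surface.
}
```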

The documentation is incomplete. And it’s not just the 2010 documentation. To use the example above, let’s have a look at the PostItem interface members (go, click on the link and have a look. I’ll wait right here. Done?). That’s right – they’re empty. But IntelliSense in Visual Studio (and the actual API) tells a different story. OK, let’s assume that (being programmers) they’re lazy – maybe the documentation is in the PostItemClass class (again, click on the link…). Hmm… apparently all of the Properties, Methods and Events pages say the same thing… which is the same thing as the Class page… Oh, and in the name of fairness, reopen one of those links and check the version of Office the documentation is for. That’s right, 2007.