Tuesday, December 30, 2008

A Good Recession?

A couple of weeks ago, in the midst of more economic doom and gloom, our CEO gave a company address on the benefits of a recession, making the point that recessions can actually be good for some companies.

In short, recessions reward good management. Companies that have had the discipline not to overextend themselves financially, to stay within their circle of expertise, and to remain passionate about what they do are excellently positioned to prosper in a recession.

As competition stiffens and there are fewer customers to go around, mediocre companies inevitably fall apart or shrink substantially. Meanwhile, great companies relish the opportunity to compete against their counterparts.

A recession separates the pretenders from the contenders. Firms that have built strong relationships with their clients and continue to deliver exceptional value often have little to fear from a recession.

If anything, a recession helps good companies by weeding out all the chaff polluting the business space. So if a recession can help great companies, how about great developers?

For The Love = For The Win

Let's face it, a lot of IT personnel (including developers) aren't exactly passionate about what they do. Somewhere along the way, a lackluster wave of hires stumbled into the IT workplace.

Maybe it was during the dot-com days when insane salaries were lobbed at anyone who could work a keyboard. Maybe a career advisor or two read an aptitude test upside down. Either way, I'm sure at some point you've bumped into a programmer who isn't passionate about programming.

Not that being apathetic towards your job is a sin per se, but if you're not passionate about something, you'll never be exceptional at it. And if the most you're destined for is average, then both you and your team will be having perpetual dates with mediocrity.

People who are passionate about programming would be doing it whether they were getting paid or not. They're a lot more likely to read technical blogs, show up at user groups, and help a team grow deeper technically. And get this: they'll actually enjoy doing it!

So here's the good news about recessions for developers.

The Cull

Don't get it twisted, a lot of good people get let go during recessions. Huge swaths get cut through IT groups, and a lot of great staff ends up on the wrong side of the red line.

But just like great companies, great developers don't have a hard time finding work. Their managers are more than happy to write them glowing letters of reference and refer them on to other firms that can make use of their skills. The same passion that has kept them up to speed with industry changes will help them shine during the interview process.

It's not just good staff that gets laid off; bad hires get let go too, and this helps development teams tremendously. Managers who once had the luxury of keeping bad hires are finally forced to take a hard look at their staff. The end result is often a more proficient and passionate team of developers.

Hiring is also a lot more productive during a recession: not only are there more great candidates to choose from, but bad hires are a lot more likely to occur when there's too much work available, as opposed to when there's too little. The net result is that teams tend to get more competent as only good developers get filtered back in.

Lastly, people who are simply stumbling around an industry are encouraged to take a hard look at their current career path and decide whether it's aligned with their strengths and interests. This isn't just good for the industry; it's essential for individuals to find something they're truly passionate about and can find lasting success in. Keeping distracted people in an industry when their strengths really lie elsewhere not only erodes the craft, it steals valuable time they could be plying at a trade more closely coupled to their natural strengths. A bad hire isn't just unfair to the company; it's wasteful of the employee's time. Recessions help hit the reset button.

Another Take

So depending on how optimistic you're feeling, you could say that recessions are pretty good for you, the developer. They help people find better jobs, stop bad hires, move people to industries where they can find real success, and reward those who truly belong in a space. Ideally that improves your team, your company, and your industry. If you're lucky, you just might hit the trifecta.

Monday, December 22, 2008

Out with the Old

Don't get me wrong, I like old things. Constructs like the abacus, the scythe, and the slide rule will always be remembered by those unfortunate enough to have used them. But to continue using dated tools in today's ever-changing environment is like running cavalry against a line of tanks. As you post-mortem projects, you should also be auditing your tool belt. This post is about me retiring not just a single tool, but an entire toolkit.

The AJAX Control Toolkit

The AJAX Control Toolkit was released under that name on January 23rd, 2007; prior to this it was known as Atlas. The toolkit is built on top of the ASP.NET AJAX Extensions and includes a series of controls to help ASP.NET developers take their user experience to the next level. It boasts many controls like animation extenders, better validators, modal windows, and widgets like accordions and drag panels. A lot of developers jumped at these controls and were all too happy to include the dependencies in their projects in exchange for the near-free functionality. I should know, I was one of them.

The problem with these controls isn't that they suck; the problem is that they're not the best. While Microsoft was developing these controls, other client-side efforts were underway. You've probably heard of these other libraries, among them jQuery, Prototype, and MooTools. I'm personally ditching the toolkit in exchange for jQuery.

jQuery

If you haven't heard of jQuery by now, I'd kindly suggest that you get out more often. I personally think of jQuery as the Firefox of JavaScript libraries. A lot of jQuery's strength comes from a wealth of professional-grade plug-ins that have been contributed and then vetted by the jQuery community. It's this same community, large in number and energized by a passion for better UIs, that makes jQuery such an impressive offering. This is truly a technology with a bright roadmap.
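A big part of that community energy is how little it takes to contribute: any function attached to jQuery.fn becomes chainable on every selection. Here's a minimal sketch of the mechanism; note the tiny $ below is a stand-in for the real library (so the pattern can run anywhere), not jQuery itself:

```javascript
// A stripped-down stand-in for jQuery: wrap a set of "elements" in an
// object whose prototype ($.fn) carries the chainable methods.
function $(elems) {
  const wrapped = Object.create($.fn);
  wrapped.elems = elems;
  return wrapped;
}

$.fn = {
  // core methods return `this`, which is what makes chaining work
  each(fn) { this.elems.forEach(fn); return this; },
  addClass(name) { return this.each(e => e.classes.push(name)); }
};

// A "plug-in" is just one assignment onto $.fn; that's the entire
// contribution model the community has built on.
$.fn.highlight = function () { return this.addClass("highlight"); };

// Fake objects stand in for DOM nodes here.
const els = [{ classes: [] }, { classes: [] }];
$(els).highlight().addClass("done");
console.log(els[0].classes); // prints [ 'highlight', 'done' ]
```

With the real library the shape is identical: assign a function to jQuery.fn and every $(selector) result gains the method.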

I decided to ditch the ACT (Ajax Control Toolkit) for jQuery for the following reasons:

Dependencies: The ACT has server-side dependencies (both System.Web.Extensions.dll and AjaxControlToolkit.dll need to be present); jQuery has no server-side dependencies beyond serving the script itself. That JavaScript is also smaller than what the ACT emits.

Controls: Most likely, in the time it took you to read this blog post, another jQuery plug-in was submitted to the community. There are a lot of talented JavaScript developers using this framework to make some truly impressive plug-ins. I urge you to check out the plug-ins page; it dwarfs the ACT offering.

Collaboration: Developers may make it functional, but designers make it usable, and sadly the latter almost always trumps the former. Designers need to be able to mock out UIs and style these controls. The more they get involved the less you have to do when it comes to presentation. For designers to develop style sheets, they often need to get access to the control.

For the ACT this often means they need to be running Windows (which designers seldom do), install Visual Studio, and upgrade to either .NET v2.0 + System.Web.Extensions, or .NET v3.0 (sound tedious yet?).

Or they could just work with jQuery UI, a site that helps designers familiarize themselves with the markup and JavaScript needed to produce some great interfaces.

The ACT doesn't offer a natural middle ground between developers and designers, and it doesn't make for shared ownership of the UI. Designers should be involved all the way through the project pipeline, not simply handing off a style sheet to a developer.

Passing the style sheet to a developer who didn't create the layout and isn't responsible for it is likely to fast-track your web application to ugly-ville.

Worse yet, it's likely the developer will change the style sheet if s/he can't get the markup emitted exactly as the designer planned (as is common when working with the ACT). This makes it increasingly awkward for the designer to maintain the style sheet as it moves to QA and then production. The ACT just isn't very designer friendly.

Don't Take My Word For It

Learning jQuery not only gives you access to a tonne of great controls; it starts you down the path of learning a DOM manipulation framework that is truly platform independent. If you're concerned about support from a big company, Microsoft decided in September 2008 to include jQuery with Visual Studio. Consider using jQuery in SharePoint to spruce up a dull UI, or to make ASP.NET MVC more palatable. I think it's time to get on the bus; this is one you don't want to miss.

Friday, December 19, 2008

Kind Of A Black Box

For the longest time, ASP.NET Just-In-Time compiling has been a pretty gray area for me. Word on the street has it that some things you'll do to a web site cause the site to be re-JIT'd, while other actions will simply cause the application to unload. But which actions precipitate which results?

Just In Time Compiling

As you're probably aware, the code that you put in an ASP.NET web site isn't the actual code that runs. All those assemblies, web pages, user controls, etc. need to be Just-In-Time compiled before they can actually be used. Visual Studio creates MSIL (Microsoft Intermediate Language), a highly portable bytecode. Before it can run on a .NET runtime, it needs to be compiled again into native code. Native code is specifically targeted to the machine it's going to be running on and is a lot more efficient. You can do this yourself (assuming you don't want your code Just-In-Time compiled) by using either the ASP.NET Precompilation tool or Ngen.exe (the Native Image Generator), another tool that comes with the .NET SDK.
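For instance, compiling ahead of time might look like the following command sketch; the paths and site names here are hypothetical:

```shell
# Precompile an ASP.NET site so it doesn't have to be JIT'd on first hit
# (virtual path, source path, and target path are made-up examples):
aspnet_compiler -v /MyWebSite -p C:\src\MyWebSite C:\deploy\MyWebSite

# Or generate native images for a specific assembly with Ngen:
ngen install C:\deploy\MyWebSite\bin\MyLibrary.dll
```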

All that's required for MSIL to be JIT'd is that the destination machine have a compiler capable of optimizing your MSIL for the given environment. A common example of this is when a developer builds an application on a 32-bit machine and then deploys that MSIL to some 64-bit machine. At runtime the MSIL will get natively compiled into 64-bit assemblies by a compiler that's sensitive to the needs of the destination 64-bit machine.

The classic advantages of JIT compiling go something like this:

The JIT can optimize for the targeted CPU and the operating system where the application runs. For example, it can choose SSE2 instructions when it detects that the CPU supports them; with a static compiler one must write two versions of the code, possibly using inline assembly.

The system can collect statistics about how the program is actually running in its environment, and can rearrange and recompile for optimum performance. (Some static compilers can also take profile information as input.)

The system can do global code optimizations (e.g. inlining of library functions) without losing the advantages of dynamic linking and without the overheads inherent to static compilers and linkers. Specifically, when doing global inline substitutions, a static compiler must insert run-time checks to ensure that a virtual call occurs if the actual class of the object overrides the inlined method.

Although this is possible with statically compiled garbage-collected languages, a bytecode system can more easily rearrange memory for better cache utilization.

That's pretty much the gist of it. Just-In-Time compiling adds a lot of flexibility to .NET development. Even though you've built a very generic application (it can run anywhere with a CLR), what really gets run is a highly optimized version of that code targeted to the destination machine's architecture, operating system, etc...

How Does This Happen (in ASP.NET)

When you first visit a web application, ASP.NET needs to JIT at least the important stuff. JIT'ing starts with what are called top-level items. The following items are compiled first (pulled from here).

App_GlobalResources: The application's global resources are compiled and a resource assembly is built. Any assemblies in the application's Bin folder are linked to the resource assembly.

App_WebReferences: Proxy types for Web services are created and compiled. The resulting Web references assembly is linked to the resource assembly if it exists.

Profile properties defined in the Web.config file: If profile properties are defined in the application's Web.config file, an assembly is generated that contains a profile object.

App_Code: Source code files are built and one or more assemblies are created. All code assemblies and the profile assembly are linked to the resources and Web references assemblies if any.

Global.asax: The application object is compiled and linked to all of the previously generated assemblies.

After the application's top-level items have been compiled, ASP.NET compiles folders, pages, and other items as needed. For example, if the folder containing the requested item contains an App_LocalResources folder, the contents of that local resources folder are compiled and linked to the global resources assembly.

Changes to an application can also force a recompile. Conditions that trigger one include:

Adding, modifying, or deleting Web service references in the App_WebReferences directory.

Adding, modifying, or deleting the application's Web.config file.

Whenever one of the above conditions is met, the old application domain is "drain stopped": requests that are already executing are allowed to finish, and once they have, the application domain hosting those assemblies unloads. As soon as the recompile is finished, a new application domain starts up with the new code and starts to handle all new requests.

Exodus

Believe it or not, having this kind of knowledge somewhere on hand will actually help you when you need it the most...when you're troubleshooting. This kind of stuff doesn't need to be kept at the tip of your tongue, but it should be at least semi-salvageable from the recesses of memory. The worst kind of trouble is the kind that doesn't make any sense. Having sophisticated tools that do a lot of leg work for you can also put bullet holes in your feet if you haven't read the instruction manual.

Hope that provided some value. I swear this stuff comes up more than you'd think.

Saturday, December 13, 2008

I Should Know This

There are a lot of things I've taken the time to learn formally (ie. go to school, take a cert) which escape me when they would actually come in handy. In fact it becomes even more frustrating when we end up figuring out what happened...and it's something I already (should) know.

This once again became wildly apparent this week while I was troubleshooting some random web app. As it turns out, poor memory is the gift that keeps on giving, and my own memory is definitely feeling the festive nature of the holidays. In a very small nutshell, I forgot the following.

Every application you set up in a web site (including the root application) ends up in its own application domain. It'll have its own set of assemblies, get JIT'd independently, and of course be completely isolated from the contents of other application domains (Session, statics, Cache, etc... won't be shared).

Refresher

Process: Contains the code and data of a program. Also contains at least one thread. Represents a boundary that stops it from wreaking havoc on other processes and requires special techniques to communicate across. Runs in a security context which dictates what it can and can't do. If it's running some .NET code then it contains one or more application domains.

An example of a process might be notepad.exe, or if we're talking about IIS, aspnet_wp.exe (Windows XP, Windows 2000) or w3wp.exe (Windows 2003).

Application Domain: More efficient than processes. One process can load the .NET Framework once and host many application domains. Each app domain might load its own assemblies (or even its own child application domains), but won't have to reload the overhead of the .NET Framework. Since each application domain runs within the .NET Framework, it benefits from the features thereof (things like Code Access Security, managed memory, JIT compiling, etc...).

Application domains also represent a boundary that you'll need special techniques to communicate across. Since more than one can be hosted in a process, it's common for the .NET Framework to load and unload application domains at runtime. This is what happens when you recycle an IIS application pool: the application domains within it are unloaded and new app domains are created to service new requests.
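That load/unload lifecycle is easy to see in miniature. A small .NET Framework sketch (not production code; the domain name is arbitrary):

```csharp
using System;

class AppDomainSketch
{
    static void Main()
    {
        // Each application domain gets its own assemblies and static state.
        AppDomain sandbox = AppDomain.CreateDomain("Sandbox");
        Console.WriteLine(sandbox.FriendlyName); // prints "Sandbox"

        // The domain can be unloaded without killing the host process,
        // which is essentially what an IIS application pool recycle does
        // to the app domains it hosts.
        AppDomain.Unload(sandbox);
    }
}
```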

In ASP.NET, an application domain is created for every virtual directory that you configure as an application in IIS. Each can have its own bin folder, web.config, and IIS application pool.

It's worth mentioning that even though there's an application domain boundary between applications, a child application can still inherit web.config settings from an application in a parent folder.
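If that inheritance isn't wanted, the parent can opt child applications out (the attribute below is available from ASP.NET 2.0 on; the section contents are just placeholders):

```xml
<!-- In the PARENT application's web.config -->
<configuration>
  <!-- Anything wrapped in this location element is NOT inherited
       by child applications. -->
  <location path="." inheritInChildApplications="false">
    <system.web>
      <!-- settings that should stay local to the parent app -->
    </system.web>
  </location>
</configuration>
```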

A high level view of Processes, the .NET Runtime, Application Domains, and the assemblies they in turn load might look like the following picture.

Application Pools (IIS 6 and above): Host one or more applications (application domains, if we're talking about .NET code). If an application pool is running in worker process isolation mode, it can also spread those applications over one or more processes (a web garden)! Application pools allow you a great deal of flexibility in deciding how much isolation to give your applications. You can use them to provide process-level isolation for each application (each application can have its own w3wp.exe [or more, if you web garden]), or you can combine multiple applications in a single application pool (which saves resources).

And We're Back

Every now and then I get a little scared of the complexity that will be in development environments 20 years from now. Yes, the stuff being written will be extremely cool, and the tools that get used will no doubt be just as impressive. That being said, there will be a lot of complexity in play in the future and a tonne to be aware of.

With .NET v4.0 in the works and a lot of its focus seemingly on parallel computing (ie. PLINQ), I can't imagine the computing world is about to get much simpler.

Sunday, December 7, 2008

Finish Line

This week I finally finished the last of the SharePoint certifications. They include:

MCTS: 70-542 (MOSS Application Development)

MCTS: 70-541 (WSS Application Development)

MCTS: 70-630 (MOSS Configuration)

MCTS: 70-631 (WSS Configuration)

Doing all four in a single year was a little more time consuming than I thought it would be, and I'm glad it's finally over.

The Value

While certifications themselves don't really get you deep into a subject matter (at least not that I've noticed), they do give you a really good idea of the surface area. At that point you're in a good position to explore the topics on your own and get deep into a few.

Here's a more concrete example. A cert in ASP.NET won't get you deep into HttpModules, but it will tell you what they are and what you might use them for. The next time you hit a problem that involves, say, URL rewriting, you've still got some learning to do, but you're a lot less likely to leave the reservation completely and come up with something home-brewed and semi-exotic.

To me that's one of the biggest values of certifications: they try to clearly define the scope of a technology. After a cert you should at least know what you don't know about the tech stack. At that point you're a lot less likely to write a bunch of code to solve a problem that your tech stack is naturally geared towards solving.

Rounding The Edges

Of course each of the four SharePoint exams has a slightly different focus. It's also worth mentioning that two of them don't really have anything to do with writing code; they're all about configuration. I thought these were important because of SharePoint's farm footprint. The platform is a lot more than a single web server/database. It's really a series of services working together on many different machines. When all these services act in concert, the resulting ballet provides a pretty decent platform for collaboration.

Because there's so much in play (and so much complexity), it seemed just as important to me to learn the infrastructure nuts and bolts as it was to learn the various APIs.

This sentiment was recently corroborated by the SharePoint Master program, which involves the same four certifications. I'm pretty sure the content authors of the program are on the same page: you can't be a "SharePoint Master" unless you have a holistic knowledge of both infrastructure and application development.

Go For It

For those wondering how to go about getting a certification, I'd encourage you to give one a shot. The experience alone will tell you whether it was worth its salt. The typical certification route looks like this.

Read about the exam requirements. A certification may involve one or more exams, each of which has a preparation guide. Be sure to follow the preparation guide.

Study up. This might involve some online learning, some books or just spending a lot of time on the MSDN.

Take a practice test. Real exams cost $125 and a couple hours of your time, and it's doubtful you'd want to write the real exam more than once. Most exams require at least 70% as a passing grade; I've written exams that required as much as 80% to pass. You can often find free practice exams online; failing that, just pay the $70 to a practice test provider (MeasureUp, Transcender, TestKing, etc...). The idea is to make sure you're ready before scheduling the real exam.

Schedule an exam. For Microsoft exams you'll most likely end up going through Prometric or Pearson VUE. Their web site will book you an appointment with a testing provider (most often some IT college or learning center). Exams cost $125 USD and usually need to be scheduled at least 48 hours prior to writing.

Show up and ace the exam!

Everyone learns at a different pace, but expect to spend at least fifteen hours going over material and practice exams for your first certification. After writing a couple of exams, you'll notice your preparation time start to decrease as you streamline the process.

After finishing a certification you'll be given an MCP ID (if you don't already have one), a welcome kit, a certificate, and use of a certification logo for business cards and such. I don't personally make use of the last two, but they may help demonstrate to your boss that you're passionate about technology, or convince women you're able to commit to something.

Remember that certs are just a part of your learning continuum, supplement them where need be. They're nice to have, but by no means the finish line.

Tuesday, December 2, 2008

Great Utility

SharpZipLib is a great little library that helps you programmatically compress/decompress zip archives. If your needs aren't too exotic (ie. you need to programmatically zip/unzip a series of files/folders) this could very well be your ticket.

You've probably heard of it before (it's been around for quite a while), some sample usage might look like this (compresses a file).
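The original snippet didn't survive the trip to this page, but a minimal sketch of the usage in question (paths are placeholders, error handling omitted) follows SharpZipLib's ZipOutputStream pattern. Note that it does not set a size on the entry, which matters shortly:

```csharp
using System;
using System.IO;
using ICSharpCode.SharpZipLib.Zip;

class ZipSample
{
    static void Main()
    {
        string sourceFilePath = @"C:\temp\report.txt"; // placeholder paths
        string targetZipPath = @"C:\temp\report.zip";

        using (ZipOutputStream zipStream = new ZipOutputStream(File.Create(targetZipPath)))
        {
            // create an entry named after the source file
            ZipEntry entry = new ZipEntry(Path.GetFileName(sourceFilePath));
            entry.DateTime = DateTime.Now;
            zipStream.PutNextEntry(entry);

            // copy the file's bytes into the archive entry
            byte[] buffer = File.ReadAllBytes(sourceFilePath);
            zipStream.Write(buffer, 0, buffer.Length);

            zipStream.Finish();
        }
    }
}
```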

This is pretty normal usage. It adds a zip entry to an archive and creates said archive. What might confuse you though is the error you'll get if you try to unpack the archive using a legacy unzip tool (ie. the stock Windows XP decompression tool).

The stock Windows XP extraction tool won't be able to unpack the archive and will complain about the archive being "invalid or corrupted" if you try to extract the archive contents.

Zip64 Extensions

What's happening here is that the utility is enabling Zip64 extensions for the archive, and some older utilities can't read them. You could simply turn Zip64 extensions off, but that will cause problems when you start adding files larger than 4GB to your archive.

A better solution is to make a mild tweak to the way we add files to the archive. By specifying the size of the file we're adding, the ZipOutputStream can decide whether or not to use the Zip64 extensions. If we don't need them then they'll be turned off automatically. The mild tweak below fixes the error from the above code:

...
ZipEntry entry = new ZipEntry(Path.GetFileName(sourceFilePath));
entry.DateTime = DateTime.Now;

/* By specifying a size, SharpZipLib will turn UseZip64 on/off based on the
 * file size. If Zip64 is ON, some legacy zip utilities (ie. Windows XP) that
 * can't read Zip64 will be unable to unpack the archive. If Zip64 is OFF,
 * zip archives will be unable to support files larger than 4GB. */
entry.Size = new FileInfo(sourceFilePath).Length;

zipStream.PutNextEntry(entry);
...

Hope that helps someone. That error drove me crazy for a while and I didn't find much via Google.

Monday, December 1, 2008

All This Hardware And No Uptime

SharePoint is pretty heavy. I often think of it as an 800 pound gorilla who stopped exercising and let itself slide. To handle all the services that run within a farm and provide decent response time to users, a reasonable amount of hardware usually gets provisioned to pick up the slack.

I'm talking about real iron here. Large farms featuring clustered SQL Servers, redundant application servers, and a series of web front ends balanced by either a load balancer appliance or a Microsoft NLB cluster.

One might look at all this gear and think that as a result, the farm is almost guaranteed to enjoy some pretty high availability right? Well I guess that depends on what you call high availability.

A Desire for High Availability

Total downtime (HH:MM:SS)

Availability    per day       per month    per year
99.999%         00:00:00.4    00:00:26     00:05:15
99.99%          00:00:08      00:04:22     00:52:35
99.9%           00:01:26      00:43:49     08:45:56
99%             00:14:23      07:18:17     87:39:29

For clients that are running SharePoint internally, uptime is probably important but not a huge priority. Clients that use SharePoint for internet-facing applications are another matter. These users are far more likely to use SharePoint to buttress e-commerce offerings or brand efforts. These businesses usually want high availability and may even ask for four to five nines of uptime (99.99%-99.999%). Five nines (sometimes called the holy grail of uptime) equates to being down no more than 5 minutes and 15 seconds a year. It's a bold proposition and requires a great deal of planning and forethought. Still, if it's doable, all this hardware should set you in the right direction, right?
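The availability numbers above are simple arithmetic; a quick sketch (JavaScript for brevity, assuming a 365.25-day year):

```javascript
// Convert an availability percentage into an allowed downtime budget.
const SECONDS_PER_YEAR = 365.25 * 24 * 3600;

function downtimePerYear(availabilityPct) {
  // the fraction of the year the service may be down, in seconds
  return SECONDS_PER_YEAR * (1 - availabilityPct / 100);
}

// Five nines leaves roughly 315 seconds a year: about 5 minutes 15 seconds.
console.log(downtimePerYear(99.999).toFixed(1)); // prints 315.6
```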

Heavy Patches, And Lots of Them

The biggest hurdle I've had with providing high availability to clients with SharePoint has come from the patching procedures issued by Microsoft. Normally when updating applications/machines it's possible to update one machine at a time, using your load balancer to shelter that machine from production. Once the machine has been updated you can bring it back into production and start updating one of its siblings. With SharePoint this process gets a little more complicated. Here are a couple of the reasons:

There's no uninstall/rollback for most SharePoint updates (your best bet for uninstall is a machine level backup).

The recommended install procedure dictates that you stop all IIS instances on the Web Front Ends. This makes it difficult to continue providing service, or at the very least to hold up a stall/maintenance page.

There's usually at least one machine in the farm that rejects the upgrade and needs to be troubleshot individually. For me this has often meant removing the machine from the farm, upgrading it, and then adding it back. This usually adds to downtime, especially if the server was serving a key role (ie. the SSP host or the machine hosting the Central Administration web site).

Assuming you manage to make it through all of the above without a lot of downtime, how many times a year do you think you could do it and still maintain a reasonable downtime SLA? Before you answer that, consider all the updates that have come down the pipe for WSS since its RTM (it's SharePoint 2007, remember). And that's just the list of updates for WSS; there's a whole other table for MOSS (although most of the dates and versions coincide).

Don't get me wrong, updates are good. In fact, I like it when Microsoft fixes things, especially when clients who have purchased MOSS have already paid potentially millions in licensing fees. I just wish these updates, which arrive many times a year AND provide critical fixes to expected functionality, had better upgrade strategies.

Do SharePoint updates and the way in which SharePoint farms are upgraded make high availability a pipe dream? Does all that hardware do nothing except help the farm scale out?

A Little Transparency

All I'm really looking for from these updates is a little transparency. I'd be thrilled to get a little more detail about what's going on under the hood and what to do when the error log starts filling up.

I've yet to see a really good troubleshooting strategy or even deployment strategy that gives you good odds of limiting downtime when it comes time to roll out these upgrades.

We have a ticket open with MS support to take up this issue. The wait for SharePoint-related issues is still pretty long, but rest assured, should I come up with a strategy or find a good resource for these kinds of rollouts, you'll find it here.

Tuesday, November 25, 2008

Pulling Teeth

It's an exceedingly common scenario these days to want to make a virtual machine out of some physical machine. In fact, when it comes to reproducing an environment, I can't think of a more thorough way (especially when there are multiple machines involved).

This often leaves someone tasked with finding a powerful, affordable, and flexible physical-to-virtual conversion utility. Now, finding a P2V solution isn't really a challenging problem, but finding one that's easy to use is.

I would encourage anyone who's ever been frustrated with the P2V tools on the .VHD side (Microsoft Virtual Server, Virtual PC) to consider making a .VMDK instead (VMware Workstation, Server, Fusion).

Novocain

The VMware Converter is free, effortless to use, and makes hot clones (in addition to cold ones) with the same ease that Johnnie Cochran acquits his clients. With a hot clone you don't even need to shut down the OS or stop any services! You can perform a P2V transformation and experience zero downtime...all for free (if you call being bound to a VMware product free). This utility is so handy it just may be the nudge you need to make a better vendor choice when it comes to virtualization...

Just walk through the wizard, hit a few radio buttons, and you'll be creating a hot clone in no time.

Details

The VMware Converter will run on the following OS's:

Windows XP Professional

Windows 2003 Server

Windows 2000 Professional

Windows 2000 Server

Windows NT SP4+ (IE5 or higher required)

You're likely to get a much smoother ride with Windows XP and Windows 2003 as they both make use of the Volume Shadow Copy service, but supposedly you can make hot clones with the other OS's as well (I've only made hot clones on Windows XP and Windows 2003).

Like any task that could go horribly wrong, there are a couple of things you should check before selling yourself on hot clones, or at least before placing any bets.

Don't just take my word for it, try the utility on some machine you don't like and create a clone (hot or cold). When you're done, just be sure to post a YouTube video of you Office Space'ing the machine just like these guys did.

The clones should be made hot, the revenge should be served cold.

Note: Some employers take offense to employees taking their hardware into fields and striking company assets repeatedly with sports equipment, regardless of whether or not a hot clone has been made.

Saturday, November 22, 2008

Uh, You're Testing That Where?

A while back one of our data analysts and a network admin stubbed out a sheltered domain within our LAN. The idea was that we would set up a sandboxed class C network where developers could test code and not worry about breaking anything on our public network.

While this might seem a little hardcore at first, remember that a lot of solutions these days are meant to be deployed to some reasonably complex IT environments (like this sample BizTalk topology on the right). As such, this code may be making calls to Active Directory and Exchange, sitting underneath some type of proxy (ISA/Squid), and getting deployed to some kind of web farm (think load balancer appliances, NLB clusters, etc...).

Needless to say these kinds of environments are dramatically different from what you find on your workstation. Depending on the type of application you're delivering and where it's getting deployed, simply testing on Windows XP and IIS 5 might not qualify as a realistic environment.

The Right Kind of Cheap

The good news is that with Virtual Server/PC being free (give or take a Windows license), virtualization has never been cheaper or easier. We ended up using a bunch of hardware kicking around the office to provision these machines. Our little test subnet already has a wide variety of virtual instances running around in it, hosting everything from Exchange to AD, and even a full blown MOSS farm.

The bad news is that maintaining this virtual environment still costs time, and as you may have heard, time is money. This can be mitigated in part by the ease of performing machine level backups on VMs, and by other VM features like Undo Disks, Differencing Disks and saving state. Nevertheless it makes sense to budget some time to maintain this kind of environment; rarely does IT infrastructure maintain itself...

The Easy Value

Originally the thought was to set up a SharePoint farm and allow developers to stage their code in a farm environment prior to deploying at client sites. You'd be surprised at just how many deployment issues crop up simply by deploying to a farm instead of a single machine.

The real value is that you give the solution a chance to break in your farm first. I'd much rather start troubleshooting at my desk than in a client's server room.

As far as I'm concerned the farm has already paid for itself. It's already stopped me from deploying a MOSS update that I'm positive would have trashed one of our clients' SharePoint farms. As some of you may know, there is no uninstall for most SharePoint Service Packs and updates; you often have no choice but to continue forward...even if it's off a cliff.

By breaking things in our VM farm, not only do I have an easier time rolling back if I hit a wall, but I can also troubleshoot it with all the tools and brainpower that my fellow developers have to offer. The alternative is breaking it for the first time at some client location where I'm all alone and only have the tools I thought to bring with me. Some fights just shouldn't be fought alone (at least the first time around).

The Extra Value

This wasn't immediately apparent to me, but a lot of developers benefit from this kind of sandbox for another reason. In addition to testing their code in a more complex environment, they actually get a chance to look under the hood and play with the many services that decorate typical IT environments. It's surprising how many of these services your typical developer isn't familiar with (AD, DNS, ISA, IIS, MOM, etc...). Let me elaborate.

A ton of web developers have no idea how DNS works. This could also be said for HTTP, TCP and the web servers that host their applications.

This is akin to a cab driver who knows how to drive, but doesn't know any traffic laws or anything about the car.

This isn't meant to be a roast for developers who haven't had the benefit of IT fundamentals. Some of these topics just aren't covered in a Computer Science degree. This is about enabling developers. Empowering them to discover what hidden gems exist in typical IT environments and how these existing IT assets can help deliver better solutions.

Trust me, there's a big difference between telling a developer what DNS is and simply having him create an A record and seeing the light switch on in his eyes. What's even better is letting him discover for himself what IT assets can do for him in a clean and controlled environment where mistakes are easy to roll back.

Imagine This

Don't get me wrong, there's always going to be a percentage of developers that are clueless when it comes to all things IT (starting with their workstation). That's where your job security comes from. But if you educate just a couple more bodies you'll start to notice the difference pretty quickly. There's a night and day difference between a dev who simply lobs a program over the wall and one who has a holistic understanding of the landing zone. These kinds of investments don't take long before they start to yield noticeable returns. There might even be a day where IT staff come to trust developers to not completely mangle production environments.

Sunday, November 16, 2008

Some Reasons

Maybe you're tired of putting your SharePoint web application in full trust and running web applications with huge security surface areas. Or maybe you've become tired of GAC'ing assemblies just so that they can run in full trust. Heck, maybe you've even read this article and decided to finally start handling Code Access Security in a more elegant way.

All of the above are good reasons to make a custom Code Access Security policy file, and in the following section we're going to do just that. If you're curious as to why you might be doing this in the first place, it might be a good idea to peruse common code access security issues described here.

Creating A Custom Policy File

The intent of the custom policy we're about to make is to allow your code and only your code to run in full trust regardless of where it is (it doesn't have to be in the GAC). All other code in the application will run as WSS_Minimal, which means it will have a reduced set of privileges (see table). For example, code running in WSS_Minimal can't access the SharePoint object model. Your code, however, will be running in full trust and will be able to do whatever it wants.

This is often desirable since we have no idea what web parts information workers or administrators will lob into our SharePoint application. It's a little presumptuous to assume that they're all safe and won't do anything malicious. Granting them full trust allows them to do things like write to sensitive areas of the disk, access the registry, etc... Users with high privileges (ie. administrators) get duped into running malicious code all the time; that's one of the reasons Code Access Security exists in the first place. If you're curious about the difference between code running in full trust and code running in WSS_Minimal, refer to the table in the link above. Onward.

First make a copy of C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\CONFIG\WSS_Minimal.config, put it in your application folder, and call it WSS_Custom.config. If you want to increase the privileges of other code you can start with WSS_Medium instead; this will allow other code to access the SharePoint API without security exceptions being thrown.

Strong name your assembly. We're not going to put it in the GAC, but it still needs to be strong named so that it can be uniquely identified by the WSS_Custom.config policy file.

Open up the Visual Studio Command Prompt (or go find sn.exe) and extract the public key from your .snk file into another file. (It's worth noting that the Public Key is not the same as the Public Key Token that you might get from your assembly using Reflector).

sn -p YourStrongNameFile.snk PublicKeyOnly.snk

You should see the output "Public key written to PublicKeyOnly.snk".

Now we print out that public key with the following command:

sn -tp PublicKeyOnly.snk

Copy that insanely long stream of numbers (your assembly's public key). Add the following line to your WSS_Custom.config just below the <CodeGroup class="FirstMatchCodeGroup"...>'s <IMembershipCondition> element.
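For reference, a code group that grants full trust to a strongly named assembly looks roughly like this (the Name and Description are whatever you like, and the PublicKeyBlob value below is a placeholder; paste in the key that sn -tp printed):

```xml
<CodeGroup class="UnionCodeGroup"
           version="1"
           PermissionSetName="FullTrust"
           Name="YourAssemblyCodeGroup"
           Description="Grants full trust to your strong named assembly.">
  <IMembershipCondition class="StrongNameMembershipCondition"
                        version="1"
                        PublicKeyBlob="0x00240000048...your public key here..." />
</CodeGroup>
```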

If you were getting security exceptions before, they should be gone now, at least all those coming from your assembly. Other code will continue to run in WSS_Minimal (or WSS_Medium if you used that .config as a template) and may throw security exceptions should it try to access APIs it doesn't have privileges for.
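One housekeeping item that's easy to forget: the web application has to be told about the custom policy file before it takes effect. Something along these lines in the web.config does the trick (the policyFile path here assumes the file sits next to the web.config; adjust as needed):

```xml
<system.web>
  <securityPolicy>
    <trustLevel name="WSS_Custom" policyFile="WSS_Custom.config" />
  </securityPolicy>
  <trust level="WSS_Custom" originUrl="" />
</system.web>
```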

This is far preferable to two popular alternatives:

Having the entire web application run in full trust (which gives all assemblies full trust).

Putting your assemblies in the GAC just so that they can run in full trust.

Thursday, November 13, 2008

Ugly Error

I've seen this error often enough that I think it deserves a brief entry. Essentially the symptoms are:

When you open up the Internet Information Services (IIS) Manager it takes a long time to load, and when it finally does, the console is blank. Running iisreset (Start->Run->iisreset) will temporarily fix the problem.

In the event viewer under Application you see errors with event IDs 6398, 7076, and 6482. Specifically they look like:

Event Type: Error
Event Source: Windows SharePoint Services 3
Event Category: Timer
Event ID: 6398
Date: 11/13/2008
Time: 1:34:47 PM
User: N/A
Computer: [COMPUTER NAME]
Description: The Execute method of job definition Microsoft.Office.Server.Administration.ApplicationServerAdministrationServiceJob (ID [GUID]) threw an exception. More information is included below. Attempted to read or write protected memory. This is often an indication that other memory is corrupt. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

A Fix

Supposedly this behavior takes place when the Windows SharePoint Services Timer (OWSTimer.exe) has two threads that try to access IIS at the same time. There's currently a hot fix (and a description of the problem) that patches IIS. Ideally there'll be a regular IIS update in the future that isn't a hot fix. If you can bear it, I would consider waiting.

Wednesday, November 12, 2008

Who Is This Handsome Helper?

If you've never heard of Robocopy (Robust File Copy) you're missing out. When it comes to reliably copying files over an unreliable network, there's no better tool. In fact Robocopy just recently saved my bacon, which is why I'm writing this in the first place.

Originally distributed with the Windows 2003 Resource Kit, Robocopy is now available on Windows XP, Vista and Windows 2008. It's worth mentioning that Robocopy isn't anything like RoboCop; they just sound the same, they both never quit, and neither of them is very entertaining to watch (boo-yeah!).

Features

Relentless - By default if Robocopy fails (ie. your network connection cuts out) it will retry every 30 seconds until it's tried to copy the files 1 million times; if there's even a remote possibility of getting that file over the wire, it will get there. Given good uptime, odds are that your source machine will die from hardware failure before Robocopy stops trying to deliver your file. I dream of a world where the USPS had this kind of stamina.

Resumable - You can throw down the /Z switch and files will pick up right where they left off. Don't believe me? Try the following; start copying a large file over the network (with Robocopy and the /Z flag) and then toy with Robocopy by unceremoniously pulling out the Ethernet cable. Robocopy will first throw an angry error and then relentlessly retry every 30 seconds until either your file has been copied or you've gone Office Space on the source machine (there's no switch to defend against that). Plug the cable back in and voila, the file automagically resumes. Trust me, this trick is great at cocktail parties and a smash hit with most women. It's worth mentioning that resumable copying does happen a little slower (30% slower in my tests).

Replication - Robocopy is exceedingly good at replicating/syncing directory structures. There's never been an easier way to mirror directory structures or file systems.

Others - There's a bunch of other features like decent logging, a User Interface if you don't like reading instructions, scripting, mirroring file ACLs, etc...
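To give a flavor, a mirroring copy with resume support might look something like this (the paths are made up; run robocopy /? for the full flag list):

```
robocopy \\SourceServer\Share D:\Backup /MIR /Z /R:10 /W:30 /LOG:C:\Logs\robocopy.log
```

/MIR mirrors the directory tree (and will delete destination files that no longer exist at the source, so aim carefully), /Z makes the copy restartable, and /R and /W rein in those heroic million-retry defaults.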

Faults

My only wish is that I could get this utility to call some code once it's done (for notification). Yes I could use a process or file watcher but I'd still like something a little cleaner. I feel like I'm being left in the dark. Maybe that's the very nature of fire and forget...I shouldn't need a notification.

Use It

So next time you're trying to move a lot of files over the wire, consider using a utility that specializes in this very problem. I've seen (and written) a lot of code that tries to be smarter than it needs to be; none of it has had the effectiveness or low cost of ownership of this precious little utility. I wish on you a buggy network, but great file copying.

Sunday, November 9, 2008

Problem Solving

I think most people would agree that problem solving comes in pretty handy in software development. Knowing what you can infer given a set of data, or exactly what you need to provide a feature, is key to coming up with a lean solution.

Puzzles

It was a Data Structures and Algorithms T.A. who first convinced me of the value of puzzles when it comes to programming. We were trying to come up with a solution for Instant Insanity when a fellow student asked if there was really any value in knowing a solution to a children's game.

The T.A. then made the excellent point that when you brush away all the noise from most business problems you essentially end up with a simple (albeit potentially dull) puzzle. In fact, being able to solve Instant Insanity puts you in a great position to solve a lot of constraint based scheduling problems, a family of problems that many businesses (ie. airlines) can use to save millions of dollars a year. Being able to solve puzzles algorithmically is analogous to solving business problems in code.

Sharpening the Saw

So where do you go about getting good puzzles to solve? A coworker recently pointed me to a site called Project Euler (as in the mathematician). The site boasts a sizable collection of math based problems that are meant to be solved programmatically. Once you come up with an answer you can submit it and get immediate feedback. The problems are ranked in terms of difficulty and some of them are guaranteed to get you thinking...in fact, that's the point!

Once you've solved a problem you also get access to a forum where other developers have solved the problem and discuss their solutions. For myself the merits have been several:

Once you come up with an answer, you can compare your approach to that of thousands of other developers and get some perspective on different approaches.

It's fun.

Consider the following problem (Problem 9). It's the 9th problem in the site's inventory of 216 problems. While finding the solution is far from impossible (in fact 19885 people have already solved it at time of writing), it will definitely test your problem decomposition skills.
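For those who don't feel like clicking through: Problem 9 asks for the product a·b·c of the one Pythagorean triplet (a < b < c, with a² + b² = c²) whose sum a + b + c is 1000. A brute force sketch looks something like this (in Python here for brevity, though any language will do):

```python
def pythagorean_triplet(total):
    """Return the (a, b, c) with a < b < c, a^2 + b^2 = c^2 and a + b + c == total."""
    for a in range(1, total // 3):
        for b in range(a + 1, (total - a) // 2 + 1):
            c = total - a - b  # c is forced once a and b are chosen
            if a * a + b * b == c * c:
                return (a, b, c)
    return None

a, b, c = pythagorean_triplet(1000)
print(a * b * c)  # the answer Project Euler wants
```

Half the fun is then hitting the forum afterwards and seeing how much tighter the algebraic solutions are.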

Tuesday, November 4, 2008

XSLT In SharePoint

Unfortunately for me, being a well rounded SharePoint developer involves a broad range of skills. Not only do you need to become familiar with the SharePoint object model and a litany of MOSS technologies (Excel Services, Search, BDC, IPFS), but knowledge of ASP.NET, Web Parts, Workflow, XML and XSLT is also a near essential competency when it comes to a rounded SharePoint tool belt. This is of course my opinion; fact...is another matter altogether.

If I had to pick a weakest link out of the above, XSLT would probably be my Achilles heel. Lucky for me most of the changes I make to style sheets are mild in nature. They usually center around styling the output of some out of the box SharePoint web part. I'm not an all star, but when paired with a decent XSLT debugger I can usually muddle my way through.

Against The Grain

Today I was styling a BDC Web Part and having a difficult time getting the result I wanted. It got repetitive enough that I decided flailing at the problem probably wasn't going to improve the situation, and that pointing a debugger at the style sheet would probably save me time in the long run.

Here's the hitch though: the debugger takes the style sheet (which you have) and the source XML...which you don't. That particular ingredient is unavailable in these situations. So the options were two fold:

Reflect on the Web Part and try to figure out where it's getting the result set from and do the same.

Provide the Web Part a style sheet that simply echoes the very XML it gets applied to.

A Solution

The answer ended up being quite straightforward. There's an XSLT element by the name of <xsl:copy-of select="expression" /> which solves the problem quite nicely. The copy-of element makes a copy of not only the current node, but also its namespace nodes, child nodes and attributes. Because of this sweet little gaffer the solution ends up being quite terse (at least for XSLT):
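Something like the following sketch is all it takes; it just spits the source XML back out so you can capture it and feed it to your debugger (the textarea is there so the browser shows the markup as text instead of trying to render it):

```xml
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Echo whatever XML the web part hands us, verbatim. -->
  <xsl:template match="/">
    <textarea rows="30" cols="120">
      <xsl:copy-of select="." />
    </textarea>
  </xsl:template>
</xsl:stylesheet>
```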

At this point I'm positive there's some XSLT developer out there rolling his eyes at me for posting entry level XSLT on the web...but that's what newbs post. Newbish content :-). In fact my next post about XSLT isn't likely to get much better. If there's a simpler solution do let me know.

Sunday, November 2, 2008

Getting Into The Farm

A lot of developers' first introduction to SharePoint is being asked to take an ASP.NET application that they're familiar with and integrate/port it into a WSS/MOSS instance. These same developers are often thoroughly disappointed when it comes to the application developer experience for SharePoint. The prototypical ASP.NET developer is likely to miss a lot of the everyday comforts they've become used to when writing typical ASP.NET applications. Common gripes are often along the lines of:

No designer surface for building user interfaces with code-behinds.

Lack of tools making it easy to deploy/maintain ASP.NET applications across the farm.

Reduced trust levels for the SharePoint web applications. If you read the last post, you'll remember that unlike ASP.NET web applications (which run in full trust), SharePoint web applications start out in WSS_Minimal, which has far fewer CAS privileges.

Other Techniques

By no means is this the first post speaking to different ways of getting ASP.NET applications into SharePoint. In fact my favorite post, by Chris Johnson, compares and contrasts 4 different techniques not listed here. They include:

_layouts directory deployments.

Building custom Web Parts.

User Controls with the Smart Part web part.

Embedding ASP.NET pages directly into the content database.

This Technique

This technique is a little different. It involves 6 steps.

Create a User Control that represents some functionality you'd like to use in SharePoint.

Deploy the user controls to the SharePoint web application folder along with the assemblies.

Add Safe Control directives for your assemblies and user controls to the web.config.

Create a Page Layout to dictate the layout/structure of a page instance.

Embed your user control into the page layout.

Create an instance page based off the page layout.

It's important to note that because this technique requires Page Layouts, it's only available to developers who are working with MOSS and have turned on the Office SharePoint Publishing Infrastructure feature for their site collection, and the Office SharePoint Publishing feature for their site. These two features are under Site Settings->Site Collection Features and Site Settings->Site Features respectively. Or you can simply create a publishing site, which starts with these two features turned on; your choice. Onward.

Create and Deploy the User Control

Open up Visual Studio 2005/2008 and create a Web Application Project. This is the kind of project that compiles all of its contents into a single assembly (.dll). If you're running VS 2005 and you haven't already downloaded this project type, you can get it here.

Add a new user control (HelloUser.ascx) with the following .ascx content.
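The control itself can be as humble as you like. Something along these lines will do (the namespace and IDs here are my own invention, not gospel):

```
<%@ Control Language="C#" AutoEventWireup="true"
    CodeBehind="HelloUser.ascx.cs" Inherits="MyUserControls.HelloUser" %>
Hello <asp:Label ID="UserNameLabel" runat="server" Text="stranger" />!
```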

Deploy the project's assemblies to C:\Inetpub\wwwroot\wss\VirtualDirectories\[ApplicationRoot]\Bin. Make a directory called PageLayoutUserControls in the [ApplicationRoot] and deploy the .ascx's to C:\Inetpub\wwwroot\wss\VirtualDirectories\[ApplicationRoot]\PageLayoutUserControls.

Add the following lines to the web.config to safe control the assemblies and the .ascx's.
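The SafeControl entries live under <SharePoint><SafeControls> in the web.config. An assembly entry looks roughly like this (the assembly/namespace names match my made-up example above, and the PublicKeyToken is a placeholder for your own):

```xml
<SafeControl Assembly="MyUserControls, Version=1.0.0.0, Culture=neutral, PublicKeyToken=[your token]"
             Namespace="MyUserControls"
             TypeName="*"
             Safe="True" />
```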

Create an instance of the page. Site Actions->Create Page->Pick the Page "(Page)Hello World" and give it a name->Create. You're done!

Costs and Benefits

I like this technique because it has a lot of the benefits from traditional ASP.NET and a lot of flexibility.

Developers can develop their user controls w/out SharePoint (unless they need to call against the SharePoint object model).

You get a design surface to develop this code.

You can deploy/organize your user controls to any application directory you want (unlike the Smart Part).

You can make use of the SharePoint object model and run in the SPContext of your given site.

It's very easy to create hybrid page layouts that include your user controls, a couple of web part zones and some SharePoint publishing controls. This lets you keep a lot of the SharePoint functionality that works well for your project, and still lets you customize pages using traditional ASP.NET development tools.

You can get rid of web parts should you be interested in cleaner markup. If you have some exotic layouts and don't want to fight with all the <table> markup this could be quite helpful.

There's still some overhead with this approach. This lets you leverage a lot of the stock SharePoint functionality but there's no such thing as a free lunch.

You'll potentially need to create a Page Layout and page instance for each page you want in your application. This can make deployment and change management trickier. You'll need a build manager who is savvy with tools for managing the content database (stsadm, Content Deployment Wizard).

You'll need a MOSS license and access to SharePoint Designer. If you don't have these the Smart Part is your next best bet for this type of UserControl/deployment type development.

Wednesday, October 29, 2008

I Know You've Done It

It happens all the time: SharePoint web applications are put in Full Trust simply because the developer can't be bothered to learn about his options, never mind exercise them.

Don't feel bad if this sounds familiar, I've done it too. In fact we're both just two of many, a growing army of developers who simply "fix" the problem by setting <trust level="Full" originUrl="" /> in the web.config. We move on with our lives and gleefully watch runtime exceptions disappear, thinking we're doing the world a favor. Unfortunately we're also exposing our applications to unnecessary security risks, and when the SharePoint content database gets mangled because some developer decided to let all code run in full trust...you're going to wish you'd read to the end of this post.

CAS For Dummies

Code Access Security (CAS) is all about protecting a bunch of code in a runtime environment (as in the .NET runtime) from other code. It's meant to provide hooks for you, the developer, so that you can decide what kinds of code can call your libraries. You can demand that calling code has at least a minimum set of privileges before you let it call your members/classes/assemblies. This is pretty handy, especially if you're planning on providing some dangerous functionality in the form of an API.

Let's pretend that you're writing an API that lets users write to a sensitive disk area. Well, what happens if some sketchy code wants to use your library to do its dirty work!? Even if the user running the code has access to the disk, the user doesn't necessarily know what the application is doing; they could have gotten it off of Astalavista.com...they only really know what the UI is telling them.

We're trying to stop some sketchy application from using your API to perform dangerous operations on disk.

Shields Up

Enter code access security. Leveraging CAS you can put special attributes on your code that ensure the calling code has at least certain privileges. If it doesn't, the runtime throws a security exception.

In the block below, the AccessSecretArea() method only runs if the calling code already has access to "c:\windows\system32\secretplace". We can be assured that this method isn't going to elevate the privileges of untrustworthy code downloaded off the Internet and run unwittingly by some callous user. Remember that this has nothing to do with the rights of the user; it has to do with the trustworthiness of the calling code.

[FileIOPermissionAttribute(SecurityAction.LinkDemand, Read = "C:\\windows\\system32\\secretplace")]
public string AccessSecretArea()
{
    // Go get content from a sensitive area on the computer...
}

In the code below we ensure that the calling code at least has rights to access the SharePoint object model. Otherwise the .NET runtime tells the calling code to hit the road.
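A demand for object model access looks roughly like this (SharePointPermission lives in the Microsoft.SharePoint.Security assembly; the method name is just for illustration):

```csharp
[SharePointPermission(SecurityAction.LinkDemand, ObjectModel = true)]
public void TouchTheFarm()
{
    // Only callers that already hold SharePointPermission make it this far...
}
```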

So where does a web application's trust level come from in the first place? Two places:

Trust level explicitly set in the web.config, or bubbled down from the master web.config held in c:\windows\microsoft.net\Framework\[version]\Config\web.config. Web applications can dictate the trust level they want to run at with the <trust level="Full"> tag. The options for ASP.NET are Full, High, Medium, Low, and Minimal. Out of the box, ASP.NET web applications run in full trust. Good idea?...who knows.

Permissions explicitly set in custom policy files. These files let you give certain assemblies in an application more trust than others. SharePoint ships with two such custom policy files, WSS_Minimal (the default) and WSS_Medium. You can set them like so: <trust level="WSS_Medium" />

What Permissions Come With Which Trust Level?

Glad you asked. The table below reads like so: when your app is running at trust level [column name], it gets [contents of cell] privileges for the [row name] permission group. Let's have a look.

* = All privileges for that permission group
- = No privileges for that permission group

This speaks to why people often get the following exception when running code in SharePoint that tries to access the object model. Notice in the table above that WSS_Minimal (the default trust level that WSS ships with) doesn't have CAS rights to get at the object model. Hence the exception.

Request for the permission of type 'Microsoft.SharePoint.Security.SharePointPermission, Microsoft.SharePoint.Security, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c' failed.

When these kinds of exceptions happen (CAS exceptions, that is), you have three options. You can either:

Raise the trust level of the application so you get the permissions you need.

GAC your assemblies (so that they run in full trust and get the permissions needed).

Leave your assemblies in the Bin folder and author a custom policy file.

Raise the trust level of the application.

Pros: Easy to implement. In a development environment, increasing the trust level allows you to test an assembly with increased permissions while allowing you to recompile assemblies directly into the BIN directory without resetting IIS.

Cons: This option is least secure. It affects all assemblies used by the virtual server. There is no guarantee the destination server has the required trust level, so Web Parts may not work once installed on the destination server.

Create a custom policy file for your assemblies.

Pros: Recommended approach, and the most secure. An assembly can operate with a unique policy that meets the minimum permission requirements for the assembly. By creating a custom security policy, you can ensure the destination server can run your Web Parts.

Cons: Requires the most configuration of all three options.

Install your assemblies in the GAC.

Pros: Easy to implement. This grants Full trust to your assembly without affecting the trust level of assemblies installed in the BIN directory.

Cons: This option is less secure. Assemblies installed in the GAC are available to all virtual servers and applications on a server running Windows SharePoint Services. This could represent a potential security risk, as it potentially grants a higher level of permission to your assembly across a larger scope than necessary. In a development environment, you must reset IIS every time you recompile assemblies. Licensing issues may arise due to the global availability of your assembly.

Customizing A Policy File

A great alternative to raising the trust level for the entire application, or GAC'ing your assemblies is to simply create a custom policy file that gives the assembly in question the trust level or permissions you have in mind.

Because this post is already getting long, I'm going to save the step by step instructions for a future post.

Epilogue

The moral of the story is that by blindly cranking the trust level of your application you're allowing ANY code in that web application to run with additional CAS privileges you just handed out (see table above).

While this might have been cool when you were an ASP.NET developer and there were a finite number of web parts/features running around, remember that SharePoint is a platform where other developers, administrators and, even worse...information workers are actively installing and prototyping web parts from god knows where. If you value your SharePoint instance, you really don't want to trust all that code. In fact, you should consider living in a bomb shelter and trusting no one...

About Me

Tyler Holmes is a Solutions Architect working in Portland, Oregon. He lives mostly in the MS tech stack and is currently treading the waters of Communication/Collaboration and Business Intelligence with off the shelf/open source technologies.