Archive

These are patterns I’ve noticed in our organization over the past ten years, ranging from hardware to software to technical development staff. These are my observations, experiences with recruiting, and a good dash of my opinions. I’m certain there are exceptions. If you’re an exception, you get a cookie. :)

This isn’t specifically focused on Microsoft’s certifications. We’re a .NET shop, but we’re also an Oracle shop, a Solaris shop, and a RHEL shop. So many certification opportunities, so few training dollars.

Finally, I’ll also throw out that I have a few certifications. When I made my living as a full-time consultant and contractor and was just getting started, they were the right thing to do (read on for why). Years later … things have changed.

Evaluating The Post-Certification Era

In today’s development ecosystem, certifications seem to play a nearly unmentionable role outside of college recruitment offices and general-practice consulting agencies. While certifications provide a baseline for those just entering the field, I rarely see established developers (read: >~2 years experience) heading out to the courseware to seek a new certification.

Primary reasons for certifications: entry into the field and “saleability”.
Entry into the field – provides a similar baseline to compare candidates for entry-level positions.

Example: hiring an entry-level developer vs. an experienced enterprise architect. For an entry-level developer, a certification usually provides a baseline of skills.

For an experienced architect, however, past project experience, core understanding of architecture practices, examples of work in open source communities, and scenario-based knowledge provide the best gauge of skills.

The “saleability” of certifications lets consulting agencies “one-up” other organizations, but certified staff usually lack the actual real-world skills necessary for implementation.

Example: We had a couple of fiascos years back with a very reputable consulting company filled with certified developers who simply couldn’t wrap those skills into a finished product. We managed to bring the project back in-house and get our customers squared away, but it broke the working relationship we had with that consulting company.

Certifications provide a baseline for experience and expertise similar to college degrees.
Like in college, being able to cram and pass a certification test is a poor indicator (or replacement) for handling real-life situations.

Example: Many certification “crammers” and boot camps are available for a fee: rapid memorization and passing of tests. I do not believe these prepare you for actual situations, nor do they prepare you to continue expanding your knowledge base.

Certifications are outdated before they’re even released.
Test-makers and publishers cannot keep up with technology at its current pace. The current core Microsoft certifications focus on v2.0 technologies (though they are slowly being updated to 4.0).

I’m sure it’s a game of tag between the DevDiv and Training teams up in Redmond. We, as developers, push for new features faster, but the courseware can only be written/edited/reviewed/approved so quickly.

In addition, almost all of our current, production applications are .NET applications; however, a great deal of functionality is derived from open-source and community-driven projects that go beyond the scope of a Microsoft certification.

Certifications do not account for today’s open-source/community environment.
A single “Microsoft” certification covers only a fraction of the programming practices and tools used in modern development.

Looking beyond Microsoft allows us the flexibility to find the right tool/technology for the task. In nearly every case, these alternatives provide a cost savings to the district.

Example: Many sites that we develop now feature non-Microsoft ‘tools’ from the ground up.

web engine: FubuMVC, OpenRasta, ASP.NET MVC

view engine: Spark, HAML

dependency injection/management: StructureMap, Ninject, Cassette

source control: git, hg

data storage: NHibernate, RavenDB, MySQL

testing: TeamCity, MSpec, Moq, Jasmine

tooling: PowerShell, rake

This doesn’t even take into consideration the extensive use of client-side programming technologies, such as JavaScript.

A more personal example: I’ve used NHibernate/FluentNHibernate for years now. Fluent mappings, auto mappings, insane conventions and more fill my day-to-day data modeling. NH meets our needs in spades and, since many of our objects talk to vendor views and Oracle objects, Entity Framework doesn’t meet our needs. If I wanted our team to dig into the Microsoft certification path, we’d have to dig into Entity Framework. Why would I want to waste everyone’s time?

This same question applies to many of the plug-and-go features of .NET, especially since most certification examples focus on arcane things that most folks would look up in a time of crisis anyway and not on the meat and potatoes of daily tasks.
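For the curious, the fluent mappings mentioned above look roughly like this. A minimal sketch only: the entity, table, and column names here are hypothetical, not from our actual schema.

```csharp
// Hedged sketch of a FluentNHibernate ClassMap. The WebFile entity
// and the Oracle table/column names are made up for illustration.
using FluentNHibernate.Mapping;

public class WebFileMap : ClassMap<WebFile>
{
    public WebFileMap()
    {
        Table("WEB_FILES");                       // e.g., a vendor view in Oracle
        Id(x => x.Id).Column("FILE_ID");
        Map(x => x.FileName).Column("FILE_NAME");
        References(x => x.Gallery).Column("GALLERY_ID");
    }
}
```

Auto mappings and conventions take this further, generating maps like the one above from naming rules instead of writing each class map by hand.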

Certifications do not account for the current scope of modern development languages.
Being able to distinguish an integer from a string, and knowing when to call a certain method, crosses language and vendor boundaries. A typical Student Achievement project contains anywhere from three to six different languages, only one of those being a Microsoft-based language.

Cultivating the Post-Certification Developer

In a “Google age”, knowing how and why components optimally fit together provides far more value than syntax and memorization. If someone needs a code syntax explanation, a quick search reveals the answer. For something more destructive, such as modifications to our Solaris servers, I’d PREFER our techs look up the syntax–especially if it’s something they do once a decade. There are no heroes when a backwards bash flag formats an array. ;)

Within small development shops, such as ours, a large percentage of development value-added skills lie in enterprise architecture, domain expertise, and understanding design patterns–typical skills not covered on technology certification exams.

Rather than focusing on outdated technologies and unused skills, a modern developer and development organization can best be ‘grown’ through active community involvement. Active community involvement provides a post-certification developer with several learning tools:

Participating in open-source projects allows the developer to observe, comment, and learn from other professional developers using modern tools and technologies.

Example: Submitting a code example to an open source project where a dozen developers pick it apart and, if necessary, provide feedback on better coding techniques.

Developing a social network of professional developers provides an instant feedback loop for ideas, new technologies, and best practices. Blogging, and reading blogs, allows a developer to cultivate their programming skill set with a world-wide echo chamber.

Example: A simple message on Twitter about an error in a technology released that day can garner instant feedback from a project manager at that company, prompting email exchanges, telephone calls, and the necessary steps to resolve the problem directly from the developer who implemented the feature in the new technology.

Participating in community-driven events such as webinars/webcasts, user groups, and open space discussions. These groups bolster existing social networks and provide knowledge transfer of best practices and patterns on current subjects as well as provide networking opportunities with peers in the field.

Example: Community-driven events provide both a medium to learn and a medium to give back to the community through talks and online sessions. This helps build both a mentoring mentality in developers as well as a drive to fully understand the inner-workings of each technology.

Summary

While certifications can provide a bit of value, especially in getting your foot in the door, I don’t see many on the resumes coming across my desk these days. Most candidates, especially the younger crowd, flaunt their open source projects, hacks, and adventures with ‘technology X’ as a badge of achievement rather than certifications. In our shop and hiring process, that works out well. I doubt it’s the same everywhere.

Looking past certifications in ‘technology X’ to long-term development value-added skills adds more bang to the resume, and the individual, than any finite-lived piece of paper.

Over the past couple of years, I’ve slowly added various agile practices into our workflow at the office. Some have taken off really well, others—not so much. Change is difficult, especially when few see any value in the change and even fewer would “use” the changes (in an organization where anyone can just say “no” to leadership and that’s accepted).

In spite of this, I was really excited to attend the Certified ScrumMaster workshop when it came to town this week—I’ve been trying to get to one for a few months now, but something always came up and travel was impossible.

The Class

The firehose was at full blast the entire workshop. I honestly think we could have spread this out over a couple weeks and still had more to learn. The workshop itself focused not only on the “mechanics” of Scrum, but our own experiences. We spent a good deal of time describing our own issues, experiences, and ideas—and how scrum could be used in each situation.

It helped to have a very diverse (yet small) group to draw experiences from. My experience deals with consulting and small groups; we had a few large-group implementers, a medium-group implementer, and a consultant. I enjoyed seeing both the parallels and challenges from each side and storing a bit of it back—just in case I’m not in a “small group” forever. :)

Overall, the class helped me focus—opening up what I didn’t know and realizing that I have a long way to go until I can fully walk in the CSM shoes. I appreciated Mike’s candidness in teaching both the “prescribed” way as well as sharing his insights into how this works in the real world. That brief look into reality will help brace us better than anything we’ll find in a manual.

Most memorable experience: the video of us singing the SPAM song. Seriously—if that ends up on YouTube…

The Instructor

Mike Vizdos, of Implementing Scrum fame, led an excellent session. Using real world examples, pulling from the class, and forcing us to not only attend—but participate—made the entire experience worthwhile. I hope that Mike will be teaching other courses in our area—I’d be very interested in taking another one of his courses.

What’s Next

While we’re not “certified” yet, my next goal is to earn it—even if it’s just in my own mind. We have several projects coming in the next few months that vary in scope—some large, some small. As I’ve learned in the past few days, focusing on organizational change and building on small successes can be key to the acceptance of Scrum in the enterprise, and I plan to work with that. The more I can demonstrate the benefits and gain acceptance—even if it’s NOT at the speed I’d like it to be—the better.

Even if you don’t plan on being a “ScrumMaster,” but are using Scrum in your organization, if Mike’s in town, I’d recommend taking one of his CSM courses.

There are a few goodies in there that I’d never seen/heard of, but one just makes me twitch a bit and seems TOTALLY backwards (and I’ve never noticed the feature before now).

There’s a context menu item while in code-behind or classes called “Create Unit Tests…” that will prefab a unit test of the current class.

Oh really?

I really do appreciate Microsoft’s continued efforts to work towards TDD and agile techniques—but doesn’t this seem backwards? DDT? Development Driven Tests?

On the other hand, having a way to generate tests at all is pretty nifty—but what about those who may not understand how they’re generated or what the tests are actually doing? Simply having “tests” isn’t the solution; a concrete understanding of what’s being tested and the expected outcomes is.

I’d be far more impressed to click on my test while it’s in red and see a scaffold of implementation magically appear (which can basically be done using ReSharper).
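To illustrate the direction I mean—test first, implementation scaffolded after—here’s a minimal hand-written sketch. The Calculator class and its Add method are hypothetical names, not from any real project:

```csharp
// Test-first sketch: this test is written BEFORE Calculator exists.
// While the test is red, a tool like ReSharper can scaffold the class
// and method from the usage below. All names here are hypothetical.
using NUnit.Framework;

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Add_ReturnsSumOfTwoNumbers()
    {
        var calculator = new Calculator();  // doesn't exist yet: red
        Assert.AreEqual(5, calculator.Add(2, 3));
    }
}
```

The point is the ordering: the test documents the expected behavior first, and the implementation exists only to turn it green—rather than generating tests from code that already exists.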

NOTE: This is a prototype, an idea, a random thought expressed aloud (well, in type). The code explains a concept and isn’t “tested” or production worthy (in my opinion).

Feedback is always appreciated. :) This is also what happens when I have a week off work and come back with ‘ideas.’

I find that a few of my projects, like the WebGallery2, share similar functionality. On pages with GridViews, ListViews (with which I’m slowly replacing all my GridViews), or other DataBoundControls, I follow a common theme for data binding:

If the data set will be cached or in session, is that session/cache null OR has the data set been explicitly modified?

true – regenerate the data set and repopulate session/cache.

false – read the current data set from session/cache to the control.

In a single instance, the code to do this might look like:

private void BindList(bool hasBeenModified, string sessionVariable)
{
    // If our session variable is null or the data has
    // been explicitly modified, then rebuild the session variable.
    if (Session[sessionVariable] == null || hasBeenModified)
        Session[sessionVariable] =
            db.GetWebFilesByGalleryName(this.Id);

    resultsListView.DataSource =
        Session[sessionVariable] as List<WebFile>;
    resultsListView.DataBind();
}

This would then be called with:

BindList(true, "CurrentGallery");

However, this code really bothers me.

The data source (the db.GetWebFilesByGalleryName method), data bound control (the resultsListView), and the type of the data source (List<WebFile>) are all hard coded.

How could this helper method use generics to add a bit of reusability? What about when I want to use a GridView instead of a ListView, or have a List<Gallery>, List<String>, or string[] of information?

First Attempt

The first attempt works. It takes the generics as anticipated and is rather easy to use.

protected void BindDataControl<TDataControlType, TEnumerableType>(
    bool hasBeenModified, string sessionVariable,
    TDataControlType dataControl, TEnumerableType dataSource)
    where TDataControlType : DataBoundControl, new()
    where TEnumerableType : IEnumerable
{
    // Add the dataSource to session.
    if (Session[sessionVariable] == null || hasBeenModified)
        Session.Add(sessionVariable, dataSource);

    // Read the data from session and bind the data control.
    dataControl.DataSource =
        (TEnumerableType)Session[sessionVariable];
    dataControl.DataBind();
}

The method signature here has both generic type parameters and standard parameters.

TDataControlType has a generic constraint that requires it to be part of or a subclass of DataBoundControl (GridView, ListView, etc).

What about a more complicated example using collections and a ListView? On the current build of the WebStorage2 project, the galleries are built in a similar method (see this post for more details). I could just as easily replace the logic in Show.aspx’s Page_PreRender with:

BindDataControl<ListView, List<WebFile>>(
    true,
    "CurrentGallery",
    lv,
    db.GetWebFilesByGalleryName(Request.QueryString["id"]));

So what’s the downfall to this method?

Downfall #1: It’s a performance nightmare. AFAIK, when passing the result of a method call as an argument (GetWebFilesByGalleryName is a method on my LINQ DataContext), the argument is evaluated immediately. So, with that in mind, hasBeenModified is irrelevant—the session may not NEED updating, but the method will still go out, search the database, and return the results. That’s a bad deal.

What I don’t know and am not sure how to check is whether or not the lazy/delayed loading in LINQ would balance out this at all. Ideas?
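One alternative worth noting (a sketch I didn’t pursue in the attempts below): pass a delegate instead of the evaluated result, so the query runs only when the session entry actually needs rebuilding. The signature and names here are hypothetical variations on the first attempt:

```csharp
// Sketch: defer the data-source call behind a Func<T> so it
// executes only when the session entry is missing or invalidated.
// (Hypothetical alternative, not one of the attempts in this post.)
protected void BindDataControl<TDataControlType, TEnumerableType>(
    bool hasBeenModified, string sessionVariable,
    TDataControlType dataControl, Func<TEnumerableType> dataSourceFactory)
    where TDataControlType : DataBoundControl
    where TEnumerableType : IEnumerable
{
    if (Session[sessionVariable] == null || hasBeenModified)
        Session[sessionVariable] = dataSourceFactory(); // query runs only here

    dataControl.DataSource = (TEnumerableType)Session[sessionVariable];
    dataControl.DataBind();
}

// Usage: the lambda wraps the query, so nothing hits the database
// unless the session needs rebuilding.
// BindDataControl<ListView, List<WebFile>>(
//     false, "CurrentGallery", lv,
//     () => db.GetWebFilesByGalleryName(Request.QueryString["id"]));
```

The lambda keeps the call site almost as simple as the original while sidestepping the eager-evaluation problem entirely.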

Second Attempt

The second attempt adds in a bit more “generic” and a lot more reflection. By adding a reference to System.Reflection, we can simply pass a string reference to a “builder” method rather than the method itself—thus avoiding prefabricating the data source when it’s not really needed.

protected void BindDataControl<TDataControlType, TEnumerableType>(
    bool hasBeenModified,
    string sessionVariable,
    TDataControlType dataControl,
    string dataSourceMethod)
    where TDataControlType : DataBoundControl, new()
    where TEnumerableType : IEnumerable
{
    // If session is null or has been modified (thus invalidated),
    // update the session state.
    if (Session[sessionVariable] == null || hasBeenModified)
    {
        // Invoke the specified method that
        // creates our data source.
        var data = Page.GetType().InvokeMember(
            dataSourceMethod,
            BindingFlags.InvokeMethod |
            BindingFlags.NonPublic |
            BindingFlags.Instance,
            null, this, null);

        // Add it to session.
        Session.Add(sessionVariable, data);
    }

    // Read the data from session and bind the data control.
    dataControl.DataSource =
        (TEnumerableType)Session[sessionVariable];
    dataControl.DataBind();
}

In this method, the Page.GetType().InvokeMember method iterates through the methods on the page, finds the one that matches the string name passed to it, and executes it.

Then, with our “data” results, the rest is the same as the previous method.

Unfortunately, I cannot pass the LINQ direct lookup anymore because the scope of InvokeMember is limited to the calling page. I’ll need to create another little method, called GetResults in this case, to do the query for me.

Now, since our parameter list no longer contains the method to fetch our data—simply a string—the results are not refetched each time the method is called, only when the requirements are met further in the code.

Downfall #1: This method requires an additional “helper” method on every page to fetch the data. You can’t access methods outside of the page—or can you?

Downfall #2: What happens if you need to pass parameters to your InvokeMember? You CAN, but the syntax is nasty and becomes even more difficult if the parameters are not always in the same order (which doubtfully they would be if you’re using generics).
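For reference, that “nasty” parameter syntax boils down to handing InvokeMember an object[] as its final argument—workable, but every caller must know the target method’s exact parameter order and types. A hypothetical example (the method name and arguments are made up):

```csharp
// Passing parameters through InvokeMember: the object[] at the end
// must match the target method's parameter order and types exactly.
// GetWebFilesPaged is a hypothetical method on the page.
var data = Page.GetType().InvokeMember(
    "GetWebFilesPaged",
    BindingFlags.InvokeMethod |
    BindingFlags.NonPublic |
    BindingFlags.Instance,
    null,                                    // default binder
    this,                                    // target instance
    new object[] { "CurrentGallery", 25 });  // positional arguments
```

Since everything is untyped object, the compiler can’t catch a mismatch—you find out at runtime with a MissingMethodException, which is exactly why this gets painful with generics.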

Third Attempt

The third attempt looks more like the signature from Hell than a real method. There had to be a way around the “stuck on this page” snafu with the second attempt… and there was, by specifying the class too using a generic.

protected void BindDataControl
    <TDataControlType, TEnumerableType, TDataSourceClass>(
    bool hasBeenModified, string sessionVariable,
    TDataControlType dataControl,
    string dataSourceMethod,
    TDataSourceClass dataSourceClass)
    where TDataControlType : DataBoundControl, new()
    where TEnumerableType : IEnumerable, new()
    where TDataSourceClass : class
{
    // If session is null or has been modified
    // (thus invalidated), update the session state.
    if (Session[sessionVariable] == null || hasBeenModified)
    {
        // Invoke the specified method that
        // creates our data source.
        var data = dataSourceClass.GetType().InvokeMember(
            dataSourceMethod,
            BindingFlags.InvokeMethod |
            BindingFlags.NonPublic |
            BindingFlags.Public |
            BindingFlags.Instance,
            null, dataSourceClass, null);

        // Add it to session.
        Session.Add(sessionVariable, data);
    }

    // Read the data from session and bind the data control.
    dataControl.DataSource =
        (TEnumerableType)Session[sessionVariable];
    dataControl.DataBind();
}

Good grief.

This method adds a third generic parameter—TDataSourceClass—as well as the additional constraint requirement. I’ve also added an additional BindingFlag—Public—since most of the methods in LINQ DataContext classes are declared public.

Rather than pulling from this.Page, we’re now calling InvokeMember on the parameter class and returning the results to the calling page. There’s one other change—rather than passing “this” as the invocation target, we’re referencing the class passed along in the parameter list—the dataSourceClass.

Here we have two additional parameters: the generic type parameter TDataSourceClass, which the LINQ DataContext fills in, and the dataSourceClass instance passed later in the parameter list, whose Type that generic defines.

“Verbalized”, the method’s generics read: BindDataControl to a ListView with the expected data format of a List<WebFile> using the WebGalleryDataContext class.

The parameters (which could be reordered to make better sense) read: the data is new or has changed, so store the results in “CurrentGallery” and return them to ‘lv’ (the ListView object on the web form). Fetch the data with GetWebFiles from the instance of db (the WebGalleryDataContext object instantiated previously in the page).

Downfall #1: Methods are still required—you cannot pass a simple data source to the BindDataControl method.

This third attempt handles our most complex request—but what about the original “hello”/“world” array request? It can be done, but, as previously mentioned, requires extracting the array list outside of the method call.

protected void Page_Load(object sender, EventArgs e)
{
    var gv = new GridView();
    this.Page.Controls.Add(gv);
    BindDataControl<GridView, ArrayList, Page>(
        true, "junk", gv, "Get", this.Page);
}

protected ArrayList Get()
{
    return new ArrayList { "hello", "world" };
}

To reference methods that exist in the same code-behind page, this.Page and the Page class offer the correct class access.

So, is this the best way to do it? Probably not! How would you tidy this up or rewrite it? I’m interested!

A rant in the joys of communication and Microsoft Office SharePoint Server 2007 configuration.

It was determined that SSP (Shared Services Providers) would run internally on port 8081. We were told nothing ran on that port in our enterprise. I then spent FAR too much time (how much, I won’t say, for the sake of my ego) fiddling with why I couldn’t get the SSP services to work in MOSS 2007.

We were lied to like the step-children we are…

After finally just hitting the root of the URL (/ssp/admin/ is the default shortcut), I discovered one of our enterprise “monitoring” software packages had a web service running on that port… which means it’s running on that port on every server and desktop in our enterprise. wtf. Oh, and the people who were “in the know”… knew, but didn’t feel it was important or whatever to tell us.

So, now come the joys of ripping the SSP out of MOSS and reconfiguring it on a different port (and praying THAT one isn’t taken).

*grumbles*

On a side note, I’ll have a new article posted up pretty soon. The article goes into a bit of detail on setting up a small server farm with MOSS—everything from initial installation to setting up Active Directory profiles, search services, indexing, and updating to the latest Service Pack 1. After the past week of dinking with this, I now see why Bill English’s MOSS 2007 Administrator’s Guide is 1,155 pages and heavy enough to beat someone with. Good book, by the way—just a bit difficult to follow, as there’s no “order” to it.

[UPDATE: While out scraping ice off my car, I had an idea to help myself be more “in the know”. I use TCPView quite often to see what processes are going where—well, TCPView shows the ports! Just do a bit of monitoring, see where different services are, and go for it. The fancy alternative, of course, could be to set up Ethereal, set a filter for “tcp.port == {your port here}”, and let it run for a day or so.]
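A third option, sketched below: before configuring SSP, try to bind the port from a scratch console app—if the bind throws, something already owns it. This is an untested illustration, not part of any supported MOSS procedure:

```csharp
// Sketch: check whether a TCP port is already taken on this machine
// by attempting to bind it. Purely illustrative.
using System;
using System.Net;
using System.Net.Sockets;

class PortCheck
{
    static void Main()
    {
        const int port = 8081;  // the port we were told was free...
        var listener = new TcpListener(IPAddress.Any, port);
        try
        {
            listener.Start();   // succeeds only if nothing is bound here
            Console.WriteLine("Port {0} looks free.", port);
        }
        catch (SocketException)
        {
            Console.WriteLine("Port {0} is already in use.", port);
        }
        finally
        {
            listener.Stop();    // safe to call even if Start() threw
        }
    }
}
```

It won’t tell you which process owns the port the way TCPView does, but it’s a quick scripted sanity check before committing a port number to a farm-wide config.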

Over the past few weeks, Microsoft has been hammering out frameworks and structures for .NET development.

Microsoft Voltron Volta

Earlier last week, Microsoft Live Labs released the prototype of Microsoft Volta, a “let’s do AJAX and control the DOM without writing JavaScript” framework that is pretty cool. I’ve whipped up a few examples that I’ll post up later today.

The dream is that the system manages the tiered architecture of design—and automagically refactors your code on the fly. Think of the syntax as an odd mix of Astoria (web-based data services), Nikhil Kothari’s Script# access to the DOM from C#-esque code, and Volta’s new twist of tags and attributes for async transactions—all mixed into one big application.

So, is this the platform of the future? AJAX-ala-C# and full DOM control with automagic architecture separation?

And yes, I keep accidentally calling it Voltron.

Microsoft MVC Framework (ala 3.5 Extensions CTP)

Finally! Late last night, ScottGu announced that the .NET 3.5 Extensions were available; read all about it and download the bits from his blog.

The MVC framework sits at the same .NET revolutionary stage as LINQ (in my opinion)—something that .NET has been missing for quite a while. During the short stints I’ve had with Java, the clear break between the layers of development made swapping forms in and out quite simple. I followed a similar path with .NET development, but it required a bit more work and didn’t flow quite as easily (and you had to hand-code all the handlers).

So, is this the platform of the future? True tiered separation at design time and a step closer to the Ruby/Java world?

Silverlight 1.1 2.0 Alpha

While Silverlight still boasts the 1.1 version number, the drastic changes between the two versions (and the rumors from Microsoft) mean it will probably be 2.0 when it hits CTP. I would like to have a few good examples to show regarding Silverlight (the prototypes on Silverlight.net are fun to play with); however, I’ve yet to get the VS2008 Extensions to work—I can create a project, but it won’t accept that I have Silverlight installed. :(

So, is this the platform of the future? Rich, interactive applications using XAML markup and XML templates?

.NET WebForms Development

This isn’t new, but the changes in .NET 3.5 for WebForms and AJAX mean this medium can’t be discounted. I, for one, am still more comfortable with it than with the “new fangled” technologies, and I find the latest tools (VS2008, the new controls in .NET 3.5, and even the extensions that keep coming) are making WebForms easier and easier to create. Also, while the “new” may be cool, we have a slew of existing applications that can’t be forgotten.

So, is this the platform of the future? The pluggable framework and web forms that allows for the easy creation of anything from personal web sites to enterprise applications?

For now, my gut feeling is that the “platform of the future” is what works for the situation. I can think of a few of our minor “web applications” that have no need for the complexity of Volta, MVC, or Silverlight; however, I cannot ignore the specific appeal of each of these technologies for future projects. I do not believe that they REPLACE our current WebForms; they simply add additional tools to the toolkit (and requirements for us to learn).

Okay, so we’re reupping our SharePoint 2003 environment to a better environment. Why not MOSS 2007? Ehh, quite honestly, because that’s politics and would make kittens cry. But, that’s not the point of this post.

Our planned setup was:

Two web-front end servers, each running the web service.

One search server, running the search service.

One index server, running the indexing and job services.

Two SQL servers, clustered and attached to terabytes of space in the SAN.

Note After you follow these steps and then locate Configure Farm Topology (FarmTopologyView.aspx) on the Central Administration Web site, you may receive one or both of the following messages:

The current topology is not supported.

You selected an invalid server farm configuration.

These warnings may be safely ignored.

Yeah, well, that’s a lie. While running in an “unsupported topology,” you can’t add portal sites or do backup/restore operations. That’s not unsupported, that’s unfunctioning.

I found an old blog post that has a thread regarding layout and that error. The MVP helping the poster, Shane Young, responds with:

Technically it will let you setup this environment. But then SharePoint will give you an error message telling you that you have an unsupported config. Whenever SharePoint is displaying that error message it will not allow you to create new portals and it will not allow you to use its backup utility. This is just how it works.

That’s just how it works, huh? No recommendations or ideas? Searching through Bill English’s SPS 2003 Resource Kit is of no use either. In fact, the book shows an architecture of EXACTLY what I want (p. 116) with no mention of issues or failures.

If you keep digging (in the Resource Kit), page 282 lists the “supported” topologies. For a large farm, it seems that there are three supported topologies: