This coming Saturday, September 28, will be the Houston TechFest. This is an annual conference that is free to attend and seems to grow larger and better each year.

The conference will be at Reliant Center and has many great talks on the schedule. I did not have the time this year to submit a talk but I will be sitting on the Business of Software discussion panel during lunch.

I have never been disappointed with the selection of talks at TechFest, so come out if you are in the Houston area and have some free time this Saturday.

I’ve been following the podcasts that Jeff Atwood and Joel Spolsky have been doing for StackOverflow. The podcasts are not really technical in nature; in fact, they do not have much to do with what will ultimately be the purpose of the site they are building. Instead, they document the discussions and decisions the two are making while creating the site.

I’ve only been through the first few so far, but an interesting discussion has come up in both podcasts about whether or not programmers should know the C programming language. Jeff does not know C and seems to come down on the side that this knowledge is not necessary. He hasn’t specifically said so out loud, that I’ve heard, but I gather this is his opinion from how the conversation flows. Joel, on the other hand, believes programmers should understand the lower levels of programming even when it is not part of their daily job. His thinking is that lower-level knowledge gives programmers an edge even when they work in today’s popular higher-level languages.

I have to agree with Joel on this. In my experience working with programmers in both categories, those with a background in lower-level programming languages are almost always quicker to solve complicated problems. Of course, there are exceptions to this rule, but I would say it holds 98% of the time.

It is also interesting that when this topic comes up with colleagues, opinion splits almost exactly along that line: those who do not know C believe it is unnecessary, while those with C experience believe it makes them much better developers.

One good example supporting my argument (actually Joel’s argument) is garbage collection related issues. I’ve seen programmers spend a huge amount of time trying to understand why the runtime memory footprint of their program keeps growing when, in their minds, the garbage collector should be coming to the rescue. The problem is usually that they have somehow kept a reachable reference to a huge collection of objects, or something of that nature (usually several, in fact). Programmers with lower-level knowledge seem to pick up on these sorts of problems much more quickly.
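To make that failure mode concrete, here is a minimal Java sketch (the class and method names are my own invention, not from any project mentioned here) of how a "leak" happens in a garbage-collected language: the collector is working fine, but a long-lived collection keeps every object reachable.

```java
import java.util.ArrayList;
import java.util.List;

public class ReachableLeak {
    // A long-lived (static) collection: everything added here stays
    // reachable for the life of the program, so the GC can never free it.
    private static final List<byte[]> cache = new ArrayList<>();

    static void handleRequest() {
        byte[] workBuffer = new byte[1024]; // scratch data for one request
        cache.add(workBuffer);              // "caching" with no eviction policy
        // workBuffer goes out of scope here, but the list still references it.
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            handleRequest();
        }
        // Memory use grows on every call; the collector is not broken,
        // the references are.
        System.out.println("Buffers still reachable: " + cache.size());
    }
}
```

A programmer who has managed memory by hand tends to ask "who still holds a pointer to this?" immediately, which is exactly the right question here.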

Another area where I have seen many issues is threading. Languages like C# and Java make threading a reachable concept for the programming masses. This is a good thing, unless you do not understand the underlying concepts of threading. I cannot begin to count how many conversations I have had with programmers concerning the thread safety of their methods, nor the number of blank stares I have received when I ask about thread safety in interviews.
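As a sketch of what those conversations are usually about (a toy example of my own, not any real codebase): incrementing a plain field from two threads can lose updates, because `count++` is a read-modify-write, while an `AtomicInteger` performs the same operation atomically.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadSafety {
    static int plainCount = 0;                                   // not thread safe
    static final AtomicInteger safeCount = new AtomicInteger();  // thread safe

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                plainCount++;                // two threads can interleave here and lose increments
                safeCount.incrementAndGet(); // atomic read-modify-write
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join();  t2.join();
        // plainCount frequently ends up less than 200000; safeCount never does.
        System.out.println("atomic count: " + safeCount.get());
    }
}
```

Knowing why the plain increment fails, at the level of loads and stores, is the lower-level knowledge I am talking about.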

I know that most will say that I am bringing up edge case problems that are not normal in business programming. I am willing to concede that. However, I also agree with Joel’s approach in that I tend not to hire programmers who do not have this knowledge, because it happens often enough to be a problem. There are exceptions, but those programmers are “exceptional”.

This is my first post utilizing my freshly installed T1 line. It is everything I thought it could be.

The image above is from SpeakEasy. I also tried another site that uses a Java applet, and it reported about the same upload speed.

The whole process worked like this:

I searched Google for T1 and my hometown, which got me a list of brokers. I filled out the online forms for a few and received pricing via email. I only looked at firms that would give me pricing without speaking to me first.

You can get service either with or without a router included. I wanted a router included because I know next to nothing about networking hardware, and they configure the whole thing for you if you get one through the provider. You can also get voice service included in the quote or data only. I went with data only and also with a full T1. I saw quotes for 1/2 and 1/3 T1 service.

They fax you paperwork to sign and fax back and then the scheduling process begins. It took about 1 month from when I signed the paperwork until I had a working line. I assume it took this long because I live in the middle of nowhere but maybe it always takes this long even if you are in the city.

About once a week, Access2Go emailed me configuration information and a status update. In the end, they give you the external IP address for the router, called the serial address, and a block of IP addresses that the router is pre-configured to route inside your network. The rest is information about the line itself and is only useful if something goes wrong. When you activate, they give you the addresses of the DNS servers and other information like that.

The line is actually through Qwest, although AT&T is responsible for putting the line in. Two days ago, the installer showed up and told me that I already had adequate lines run up to my house, which was nice because we have received quite a bit of rain and I was concerned about giant trench marks across my property. Normally, AT&T only runs the wire to the outside of the building, and it is your responsibility to run it to where you need it. In my case, that was the closet in my office. Since I also know nothing about pulling wire, I was going to pay them to run it to my closet, but I was pleasantly surprised when the installer told me the wire I needed was already there. He put in a jack, did a bunch of testing, and left. He also told me that some rather well-off folks down the road have two T1 lines running to their house: one for data, and one for monitoring their wine cellar. I am not sure why a high-speed line is needed to monitor wine, but I thought that was interesting.

Later the same day, I received the T1 router, a Cisco 1721 with a WAN card, via UPS. It came with no documentation, so I registered with the Cisco website and downloaded everything. Reading it all helped me understand the configuration information that Access2Go had emailed me. In reality, you don’t need the documentation except to know which cables plug in where. I also tried out the serial port interface to the router just because I thought it was cool.

This morning I called Access2Go to hook everything up. A patch cable (regular Ethernet cable) goes from the jack to the WAN port on the T1 router. A crossover cable goes from the Ethernet port on the T1 router out to my network. In my case, that is a business-class router from Linksys that supports secure VPN and has a good firewall. The router on your network gets assigned one of the LAN IP addresses from the block they give you. Access2Go got a Qwest rep on the line, we tested everything, and I was up and running. The call lasted less than 10 minutes and was painless.

This is a good option for telecommuters who live somewhat out in the country like I do. It is much more expensive than the DSL or cable you can get in the city, and the download speed is not as good as the higher end of those services. Upload speed is outstanding, however. Also, a T1 is dedicated and comes with an SLA: someone will be on the scene within 4 hours if it quits working. You won’t get that with DSL or cable, at least not at the consumer rate.

Microsoft wants developers to write applications for Windows. This is no secret. What is not publicized very well, in my opinion, are the various ways that developers can get the tools they need to develop these applications for free.

Yesterday, I received my absolutely free, no strings attached, completely legal copy of Visual Studio 2008 Professional via DHL. I was given this because I showed up to the Visual Studio 2008 Installfest at the Houston .NET User’s Group just before Christmas. For the effort of showing up, Microsoft also compensated me with all the pizza I could eat, all the Halo 3 and Guitar Hero I could play, and a t-shirt.

If you missed out on this, Microsoft is holding the official Visual Studio 2008 launch in a city near you very soon. I know that 2008 has been out for a while now, but this is the official launch party, combined with Server 2008 and SQL Server 2008. I am sure they will be giving out more free stuff. The Houston event is March 20 and I’ll be there. You can register here for any of the scheduled cities.

Come on, break free from your cubicle for a day. You know you want to.

This weekend was spent working on the marketing plan and website for my upcoming product, which I will start talking a lot more about in the next few weeks. Yes, even a small company had better have a marketing plan. Maybe not the huge formal document you are thinking of, but I believe having something written down is important, if for no other reason than it forces you to think about marketing in detail.

The book covers mistakes that ISVs make on their sites and why they make them. The next part covers how to refine your message for your target market. That is as far as I’ve gotten, but I think the book is applicable to more than just small software companies; it can be useful for anyone presenting a product or service on the internet. I plan to put up a full review when I finish reading it.

I ran across this article a few days ago and saw that it was also referenced on Slashdot this morning. The article is titled Better Than Free and is written by Kevin Kelly. It is not about software specifically so much as any object that has a free alternative, like a bootlegged copy of a movie or a copied version of a book.

Kelly outlines 8 attributes of a transaction which he believes raise its value above the free alternative. In other words, these 8 attributes are the qualities people are willing to pay for. It is a very good article and worth the time to read, because I think it applies not only to free alternatives but also to lower-cost alternatives. For instance, I believe the Immediacy attribute he describes can apply to having on-shore developers versus off-shore developers. On-shore developers have the advantage in Immediacy because they are in the same office, or at least in the same time zone.

I am not sure I am completely sold on Immediacy for all transactions, since I have seen developers spend many hours, or even days or weeks, attempting to configure a free software alternative when easier-to-configure commercial alternatives were available. Most commercial software excels in the Interpretation category Kelly describes, where more comprehensive documentation is available and more time is often spent on usability. Of course, this is not always the case either.

Anyway, the article is worth reading because I think it gives everyone something to think about in terms of what service you are providing, either as an employee, consultant, ISV, or OEM. For all of these services, there are free or at least lower cost alternatives.

The sessions on the first day of BarCamp were quite diverse and interesting. Since most sessions were only 30 minutes, it was difficult for the presenters to go into much depth. However, 30 minutes is good enough to give a solid introduction to a topic you might not know well and point to where more information can be obtained. Here is a brief description of the sessions I attended.

Startup Methodology

I am not sure that this was the exact title, but James Lancaster of Research Valley Innovation Center gave an excellent overview of his methodology for working with startup companies. The methodology, INNOVATEnRV, describes the stages of company startup and James explained the types of issues that startups would and should be concerned with at each stage. This presentation was the most polished of the day, at least of the sessions that I watched.

Drupal in 30 Minutes

Chip Rosenthal, from Unicom Systems Development, presented an overview of Drupal. For the uninitiated, Drupal is a content management system written in PHP. I felt it was a good introduction, since I had never taken the time to look over Drupal. The one interesting nugget was Chip’s recommendation to look at the Zen Drupal theme, which is not one of the stock themes in the installation, because it is very configurable and can give your site a look that is less like every other site running Drupal.

Social Media Marketing

Nikhil Nilakantan, from Social Span Media, gave a session on marketing on social networking sites. I have to admit that this is not a segment of the internet I have paid much attention to. I am fascinated that these sites are as popular as they are with people over 22 years old. I understand LinkedIn, but I do not understand why I would want to really participate in the others. Nonetheless, Nikhil presented statistics on social networking site traffic that did make me take notice. He stated that the top 10 sites attracted 131.5 million unique visitors during December 2007 alone. More interesting was that the average visit on MySpace lasted 30 minutes (20 minutes for Facebook). Nikhil estimates that $1.8 billion was spent on social-network-related advertising last year. I have no doubt that someone will figure out how to make advertising work properly with this type of market and usage pattern. The market is simply too juicy to pass up.

GWT and Gears

Tom Peck, from AppEngines, presented a session on both GWT and Google Gears. This was one of the sessions that really could have been an hour long, or maybe even longer. GWT is a framework that lets developers code web application GUI layers in Java, using an API quite similar to Swing. I have seen demos of this technology before and it is quite impressive. Google Gears is in beta and allows the development of applications that run in a browser with offline data storage on the local machine. Gears accomplishes this via a browser plug-in that uses the embedded SQLite database engine for storage. There is an API available from both GWT and JavaScript. Whoever figures out offline storage in browsers can potentially make a gazillion dollars, so it is strange to me that Google is giving this away. Since there are many people working at Google who are much smarter than I am, I am sure they have a strategy. Here is my prediction for a competitor to Gears: when Silverlight 2.0 is released, someone will provide this capability via the .NET Isolated Storage API (and no, I haven’t spoken with anyone already working on this).

Introducing AlphaBetaFinder.com

Anita DuBose presented the freshly launched AlphaBetaFinder.com from AppEngines. The site is a matching service for software and hardware vendors looking for alpha and beta testers for their products. The idea is that potential testers can register and provide information about what they are willing to test and what equipment they have available. Vendors can then search the database for matches and send invitations for testers. The site will inform the vendor when testers are interested and they can purchase the contact information for the testers. For now, everything on the site is free because AlphaBetaFinder is currently undergoing its own beta testing.

Tips on Podcasting

Jonny Dover presented some tips on podcasting, and Brad Dressler joined him to demo Audacity, an open source audio editing tool. I was quite interested in this session because podcasting is something I would like to try. Good tips were given, such as eliminating pauses, working from a prompter or script (CuePrompter.com was recommended), and finding a way to include more than one voice on the podcast. I spoke with a few guys in the audience (sorry guys, I forgot to get your names) who mentioned that GarageBand is a great replacement for Audacity if you are using a Mac. They also pointed me to PumpAudio for low-cost music to mix into a podcast, and to the Creative Commons Audio section for other low-cost audio.

GeoDjango

Justin Bronn and Travis Pinney presented GeoDjango, the GIS branch of the Django project. Django is a rapid web application development framework for Python, similar to what Ruby on Rails is for Ruby developers. I felt it was a good presentation, but it was limited in depth by the 30 minute length.

LINQ

I missed some of this presentation and I did not get the names of the guys presenting. The session introduced LINQ and the new C# 3.0 language features that make LINQ possible. They also gave a brief introduction to LINQ to SQL. This was one of the more interactive sessions that I attended. The audience was clearly interested.

Dinner

Several of us adjourned to a local pizza place, which claimed to have the world’s greatest pizza. It was good, but a claim like world’s greatest is difficult to verify. I sat across the table from Eric Fortenberry, Cayce Stone, and Jeff Jurica from OrgSync. These guys have a website product, targeted mostly at universities, that gives their organizations (fraternities, student congress, etc.) a way to manage their membership, calendars, and such. I did catch a portion of their demo earlier in the day and the site was quite nice.

After dinner, I retired to my hotel room, but activities continued late into the night. A party went until 2am, and folks continued to talk until around 4am. Scott Riggins, a good friend of mine from Social Mobility, filled me in. I wish I had kept going, but a late night working for a client the night before kept me from pressing on.

Today I am at the BarCamp Texas conference in Bryan, Texas. In contrast to the standard tech conferences you have attended before, BarCamps are attendee driven. The attendees decide what sessions they want to present on the day of the conference, and nothing is decided in advance other than the starting time and location. I did not know what to expect, but I have been pleasantly surprised. The day has gone like this so far:

I showed up early, so I was recruited to help set up tables and chairs, which I didn’t mind. The entire conference is volunteer-led and free to attendees, so it felt good to at least put forth some effort to help out.

Everyone registers as they arrive. There is a wiki for adding your contact information in advance, but there is no obligation to sign up or pre-register. Registration was accomplished with a few MacBooks running spreadsheets, and each attendee simply typed in their information.

After the registration, a free t-shirt was given to each attendee, courtesy of the sponsors.

Attendees then wrote the topics they wished to present on a whiteboard and signed up for time slots.

The first few hours were simply meeting and greeting and ad-hoc conversations. Most of the attendees seemed to be working for either startups, companies that would work with startup companies, or freelance/independent consultants.

Lunch wherever you can find it nearby.

Sessions run from 1pm until whenever people want to stop. So far, sessions are scheduled to about 6pm, and the organizers made it clear that people are free to go all night if they wish. Sessions are going two at a time in one large room separated by a temporary barrier.

Rooms are available for ad-hoc side discussions.

The sessions I have attended so far have been very good. They are not nearly as formal as at a typical conference, and there is much more audience participation. The one negative aspect so far is that the room is divided by a temporary barrier with a speaker standing on each side of it, so they occasionally drown each other out.

I will post a summary of the sessions that I attended later in the day.

Agile development methodologies have become the in vogue thing in software over the last few years. Hardly a month passes when an article concerning agile development does not appear in a major software development magazine or journal. An entire career path has even sprung up – the "Agile Coach".

There is little doubt in my mind that agile development is better than a more traditional, waterfall-type methodology. When properly followed, agile development methodologies seem to keep programming teams more focused on the end goal. A large, up-front design effort tends to grow out of control and lends itself to a focus on unimportant details that could easily be put off until later. Estimating such efforts generally ends with deadlines that are not as close to reality as management would like.

With all of the goodness to be had with the agile way, there are persistent myths that seem to surface. The most common myths I hear are:

Agile development is faster

Agile development removes the need for documentation

Doing Test Driven Development (TDD) means doing agile development

Agile Development is Faster

The most common reason I am given for why an agile development methodology is chosen is because it will be faster than a waterfall methodology. I believe this idea manifests itself because of the agile development practice of iterative releases.

In my experience, agile development is not necessarily faster than waterfall development. In fact, during the planning process, agile work plans often end up longer than waterfall work plans. Why? Because iterative development forces you to think in a more detailed fashion about the features being delivered, the testing time required for those features, and the integration time after the development team finishes developing them.

Waterfall planning does not force this level of thought on the planner. I have rarely seen a project plan for a waterfall project that didn’t have two big items at the bottom: Test and Release. Next to Test is a large number of hours (or days) that is generally pulled out of the air and "seems to be correct".

Testing, integrating, and releasing software is like paying tax to the IRS. Generally, tax is taken out of your paycheck (if you are an employee) and paid to the IRS at that time. Since it is taken a little at a time, you hardly notice it. Paying tax a little at a time is like testing and releasing often in agile development.

When I became self-employed, I had to pay four times a year instead, and I really noticed the large payments much more. If you paid at the end of the year in a lump sum, the amount would seem larger still. Not only that, but you have to make sure you have been saving enough over the year to make the large payment. Borrowing money to cover a shortfall can be expensive and draw out the payment process. This is similar to testing and releasing at the end during waterfall development. If you haven’t allocated enough time to test properly, you end up extending your testing time and probably missing your deadlines. The real problem is that you generally do not find this out until close to your deadline, and users are not happy with the late news. "Borrowing" from quality by cutting testing short always ends in disastrous results.

Agile development might not be faster, but it is more predictable and more reliable for planning purposes. Generally you end up with a much better product from a quality perspective.

Agile Development Removes the Need for Documentation

The second most common reason that I hear for using an agile methodology is that agile development removes the need for documentation. Most of the time, those who give this reasoning have only heard about agile development and not actually taken the time to read up on it.

There are agile methodologies that promote less documentation than is traditionally recommended. Closer examination of these methodologies shows that documentation is not actually removed, but moved around. Sometimes the documentation appears as note cards or post-it notes on a wall. Oftentimes it appears as an actual user, placed full-time on a project, who serves as the requirements specialist for the new system.

I am not very fond of low-documentation methodologies, with one exception. When the solution being developed is a purely technical one, I believe it is more acceptable. In other words, when the developers themselves will be the users of the end product, I think it is acceptable to go with a lower-documentation methodology like Extreme Programming (XP).

Even when a user is present on the team full-time, I believe that some documentation still must exist. There is too much opportunity for miscommunication between developers and the user when nothing is written down or diagrams are not drawn.

I often see projects that wish to pursue XP, or a similar methodology, but balk at the thought of dedicating a user full-time to the effort. Most of the time, they will dedicate the user at 25%, and sometimes up to 50%. The allocation generally ends up less in reality than what was agreed upon, because it is too easy to ignore something if you are not allocated full-time.

If a user is not present on the project team full-time, a portion of each iteration should be allocated to documenting the details of that iteration’s feature list, to prevent unfulfilled expectations on both sides.

Doing TDD is the Same as Doing Agile Development

I could replace TDD here with several other agile development practices, and the point would hold true.

TDD can be pursued while using many development methodologies. I have seen it work well on waterfall projects as well as agile projects. TDD is a quality control mechanism to make sure components meet their expected functionality when completed and I recommend its usage on all projects.
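As a tiny illustration of TDD as a quality control mechanism (the `slugify` function and its expectations are invented for this sketch, not taken from any project above): the test is written first to pin down the expected behavior, then the simplest implementation that satisfies it is written.

```java
public class TddSketch {
    // Step 1: the test, written before the implementation,
    // pins down the expected behavior.
    static void testSlugify() {
        assertEquals("hello-world", slugify("Hello World"));
        assertEquals("a-b-c", slugify("  A  B  C "));
    }

    // Step 2: the simplest implementation that makes the test pass.
    static String slugify(String s) {
        return s.trim().toLowerCase().replaceAll("\\s+", "-");
    }

    static void assertEquals(String expected, String actual) {
        if (!expected.equals(actual)) {
            throw new AssertionError("expected " + expected + " but got " + actual);
        }
    }

    public static void main(String[] args) {
        testSlugify();
        System.out.println("all tests pass");
    }
}
```

Nothing about this requires an agile process; the same test-first discipline works just as well inside a waterfall plan, which is exactly the point.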

TDD has become popular because of the promotion of its usage among agile development advocates, but the goodness can be shared by all.

In my opinion, the main practices that separate agile development from more traditional methods are iterative development and a release at the end of each iteration. If you are not following those practices, I do not believe you are engaged in agile development. Iterative development forces teams to constantly focus on the features of the software, and iterative releases focus the team on quality. Those two practices are what drive the agile methodologies to deliver on their promises.

I love ReSharper. I am using Visual Studio 2008, and JetBrains does not yet have a released version of ReSharper that completely works with the new language features in C# 3.0. This morning I was looking at the ReSharper newsgroup to see if there was any discussion about the upcoming 4.0 release, and I did find an informative discussion. A ReSharper user was met with an interesting reply when he asked when the product would support the C# 3.0 language features. I have removed names and links that would identify people, because the purpose of this post is not to embarrass anyone.

Hello <potential user>,

It seems to me there are two parts in the topic.

First, current state of ReSharper in regards of VS2008 I tried to address in my post here: <link to post at employee’s personal blog>

Second, why do you need to use C# 3.0 right now? It is not mature enough, it is not tested in industry, pitfalls and glitches are not known yet, there are plenty of scepticism out there on the web and nobody really knows how to work with it. Well, there are some marketing and other "it’s so cool" stuff on the web, but do you believe that LINQ or extension methods will do their job better than existing solutions like ActiveRecord (http://www.castleproject.org/activerecord/index.html) and other alternative, non-microsoft tools? We really want to know this, honestly. There is so much buzz about how cool "var" keyword or automatic properties are, but with ReSharper you almost don’t need them 🙂 So, could you please tell us, why do you need to use VS2008 with C# 3.0 right now?

After a few users chimed in with WTF types of responses, <employee> realized that this was probably not the correct way to respond and attempted to rectify the situation with the following reply.

Hello <another potential user>,

My apologise, I didn’t mean to abuse anyone. I just expressed my own opinion, it is community newsgroup after all 🙂 May be I was too much expressive…

Anyway, I’m really interested in the reasons people are so anxious about VS2008 and C# 3.0. And more than that, I’m interested why people need it *now*! It was out there for a while, in a CTP, then Beta. Release in not something that significantly changes products like VS2008 or .NET 3.5.

For me, I have to use VS2008 and C# 3.0, so that ReSharper 4 will be good in terms of usability, feature set and language support. If I were developing business application, I’d wait several months while constantly monitoring the web to understand the problems I may have when I upgrade. That’s how I would treat VS2008 release if I were on different project. On the other hand, if I were on the hobby-project or something research-like, I’d be using Orcas since any time it was usable to write code and do not crash too often. For me, "RTM" mark doesn’t change much.

First, I would like to say that ReSharper is an outstanding product, and I have a great deal of admiration for the JetBrains developers. As a Visual Studio addin developer, I realize how difficult the functionality in their product is to develop.

That said, these were both terrible posts. I realize this employee was voicing his own personal feelings but it was on a newsgroup for the company’s product and the signature of the post makes it seem like an official company stance on the matter.

There are two major roles in software development: users and developers. I know there are actually many more than that, like QA staff, technical writers, project managers, etc. but I am lumping all of those roles into developers for this discussion. If you are in the developer category, people in the user category tell you the functionality they want. Your job, as a developer, is to evaluate those requests and to deliver that functionality if it is cost effective to do so. Obviously, you can’t spend a huge amount of money working on a feature that will never make back your investment in that feature.

However, in this case, users want the product to work with the latest C# language features and I do not think that is unreasonable for them to expect. There was a long beta period for VS.net 2008 and I know that the VS.net SDK documentation was not kept up to date with the beta releases, which makes it difficult for addin and package developers to release products simultaneously with the Visual Studio release.

Yes, Visual Studio betas are not always stable. Yes, supporting C# 3.0 was very difficult even for the Microsoft compiler developers. That is why people are paying for ReSharper. The product does amazing things that are difficult to develop. If it were easy, nobody would need to buy the product. Instead, they would develop it themselves. In fact, there is a value proposition that every potential customer of every development or support tool product weighs when they look at options to fill a need.

"Will it cost me more to develop the equivalent functionality than it will to buy it?"

In the case of ReSharper and Visual Studio addins specifically, the question is more like

"Will this product pay for itself by increasing my productivity to cover the cost of the initial expense?"

If you need to use C# 3.0 features, the answer currently for ReSharper is more questionable than it has been in the past.

Telling your users that the features they want are not important right now is a sure way to get them looking at your competitors.