If you still support Windows XP (I’m sure a lot of you do), you are probably aware that the anti-malware software Microsoft Security Essentials has not been available for download (at least from Microsoft) since official support for Windows XP ended last year. However, previous installations of MSE continued to receive updates. According to Microsoft’s web site, July 14, 2015 is the last day that updates to Microsoft Security Essentials will be provided (along with the Malicious Software Removal Tool). Check your calendars because that day is today!

A lot of us deployed Microsoft Security Essentials because it was free, relatively lightweight, and perhaps most importantly, did not bug the user with unnecessary security warnings or advertisements intended to upsell them. We were biding our time with MSE on Windows XP as users migrated away from that operating system, but many users still run XP, whether on older hardware that remains in good shape or in virtual machines. Now that MSE is effectively dead (barring any last-minute change of heart from Microsoft), we must transition our XP users to different anti-malware software.

There are many solid free anti-virus solutions available, especially for XP machines serving a limited purpose, as many installations running in virtual machines do. The problem is that most free anti-malware products nag the heck out of users with upsell ads or unnecessary security warnings (which are often merely upsell attempts themselves). So far in my testing, I’ve found that Panda Free Antivirus 2015 can be configured to not display ads by turning off “Show Panda news” in its settings screen. This setting is found in the General section by scrolling all the way down. Also, during installation, make sure to deselect both the option to install its browser toolbar and the option to change the default search engine to Yahoo.

What free anti-virus solutions have you found to be the most unobtrusive for your users?

With the upcoming end of support for Windows XP, I expect that many of us will be upgrading some older machines from Windows XP to Windows 7. My suggestion is to upgrade only relatively newer machines – for example, PCs that were bought in the Vista or early Windows 7 years but had XP pre-installed. I wouldn’t recommend upgrading older XP machines that will struggle under Windows 7.

I ran into a situation with a Dell Precision T3400 workstation where Windows 7 Professional 64-bit wouldn’t install properly. I had installed a brand-new Seagate solid state hybrid drive in the machine and was doing a clean install. The machine kept rebooting after the initial install sequence. It appeared to be blue screening in a manner indicative of a SATA AHCI driver problem. Sure enough, when I went into the BIOS and set the SATA operation to ATA, the installation completed without further issue. However, now I had a Windows 7 installation with ATA drivers instead of the higher-performing AHCI drivers. (It was puzzling to me why the installation was able to get that far in AHCI mode if it was only going to crash later for lack of a suitable AHCI driver, but I chalk that up to Windows being Windows.)

Usually, this isn’t such a hard problem to fix: simply download the proper AHCI drivers, install them, change SATA operation back to AHCI in the BIOS, and you’re golden. However, in this case, the AHCI drivers (Intel ICH9R/DO/DH) would not install. The installer kept claiming the computer did not meet minimum requirements, or something similar. I couldn’t even force the drivers to install manually after extracting them. It appeared to me that since SATA operation was set to ATA mode, the installer/drivers didn’t recognize the controller as AHCI capable. Brilliant. I couldn’t set SATA operation back to AHCI because that caused a blue screen crash during boot, so I was almost of the mind that I would need to reinstall Windows 7, making sure to specifically add the AHCI drivers during the installation process. But I had already installed Windows 7 SP1 and a bunch of other updates that took a long time, so I didn’t want to use this nuclear option unless absolutely necessary.

Restart the computer once (without changing any BIOS settings) and Windows will load default AHCI drivers. You may not notice anything happening, so let Windows sit for a minute or two after the reboot to give it enough time to complete this process.

Restart the computer again and change the SATA operation to AHCI in the BIOS. Save the BIOS settings, and when Windows boots again it should not crash with a blue screen. This time you should notice Windows installing new hardware, including several SATA AHCI components. Windows will probably ask to restart once more.

At this point you should have a functioning Windows 7 install with the default SATA AHCI drivers operating. I was then able to install the vendor-specific drivers from Intel because they now recognized the controller properly. Yay! A few hours of my life gone, but at least I learned something!

This whole problem could have been avoided if Windows 7 had installed the AHCI drivers properly in the first place, and/or if the Intel drivers were smart enough to recognize that the controller in ATA mode is in fact the correct controller and simply install the drivers anyway. But issues like this are why we do what we do, so hopefully my experience will save some of you some grief!

Update: With the variety of hardware and drivers out there, it seems that my procedure above does not always work. Based on comments and other research I have done, changing the following Start values in the registry (to either 0 or 3) might also be necessary.
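For reference, the Start values most commonly cited for this driver-switching issue live under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services. The sketch below uses reg add from an administrator command prompt; which of these services actually exist depends on your hardware and drivers, so treat the names as examples rather than a definitive list.

```
rem Start=0 loads the driver at boot; Start=3 loads it only on demand.
reg add HKLM\SYSTEM\CurrentControlSet\services\msahci /v Start /t REG_DWORD /d 0 /f
reg add HKLM\SYSTEM\CurrentControlSet\services\iaStorV /v Start /t REG_DWORD /d 0 /f
reg add HKLM\SYSTEM\CurrentControlSet\services\pciide /v Start /t REG_DWORD /d 3 /f
```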

If you’re an Independent Technology Professional like me, you are probably still working a lot with Windows XP. Let’s face it: for all Microsoft is doing to move users to newer versions of Windows, Windows XP is simply entrenched in the marketplace and will probably continue to be for at least a few more years. The reality is there are a lot of people and businesses out there who run older Windows software and do not need to move to newer versions of Windows – especially now that Windows 8 is scaring the pants off people.

A lot of my clients still run Windows XP, especially those who bought PCs after 2007 and did their best to avoid Windows Vista. Yes, their machines are old, but with a little TLC they continue to run well, and the owners aren’t interested in upgrading their PCs for the time being. Additionally, for several years now I’ve been heavily involved in setting up computers with Windows XP running in a virtual machine, whether for new Macintosh users who still need to run a particular piece of Windows-only software, or Windows 7 or 8 users who need compatibility with older software. In the last year I kept running into a tricky issue I couldn’t quite squash because of its transient nature. But after a lot of research I finally found the root cause and a possible solution.

I began to see this problem a lot as I was setting up brand new Windows XP SP3 installations under virtual machine software – whether that software was VirtualBox, Parallels, Virtual PC, Hyper-V, or others, on either Macintosh or Windows host computers. It seemed like running the several rounds of updates after the initial Windows XP installation was taking forever – significantly longer than the usual lengthy process. Investigating the issue, I noticed that an SVCHOST.EXE process was eating up all the CPU. Further investigation showed that WUAUCLT.EXE was the core process behind that particular SVCHOST.EXE process. WUAUCLT.EXE is the Windows Update Automatic Update Client, the software that manages automatic updates. What I observed, however, was that the problem really only manifested itself noticeably during the initial rounds of updates after the Windows XP install. The problem appeared to go away after that, so I didn’t bother to troubleshoot it further. However, I later did start to observe the issue on client machines that were not brand-new Windows XP installs. I also noticed that the problem seemed to intermittently return on the Windows XP installs I had set up in virtual machines. After troubleshooting a few cases independently, I realized there was a common thread between all of them.

Obviously there is a problem with the Windows Update Automatic Update Client. Woody Leonhard from InfoWorld.com has done the best job of explaining the cause of the SVCHOST problem that I’ve found anywhere. Apparently, this problem has been in existence in various forms for many years. However, it seems to have gotten a lot worse lately. The simple explanation is that Microsoft believes that the amount of old updates in the automatic update chain has gotten to the point where it is overwhelming the WUAUCLT.EXE process. Microsoft is working to fix the problem but it seems that successive attempts have had mixed results. Based on my research, I believe I have found the general process for resolving the issue.

When I first started investigating this issue heavily a few months ago, the fix I found involved installing MS13-080/KB 2879017, which was released in October 2013. Ironically, this patch is described as a Cumulative Security Update for Internet Explorer and makes no mention of fixing the Windows Update Automatic Update Client. It did seem to fix the issues I was working on at the time. Later, however, it appeared that this fix no longer worked, as new Cumulative Security Updates for Internet Explorer had superseded it. First came MS13-088/KB 2888505 in November, followed by MS13-097/KB 2898785 in December. At the time of this writing, MS13-097/KB 2898785 seems to be the magic bullet for most situations. That being said, given the history of this issue, I would not be surprised if we see the problem re-emerge when the next Cumulative Security Update for Internet Explorer is released. I will update the article if/when this problem returns and/or if Microsoft finally fixes the root cause of the issue on their end.

For brand new Windows XP SP3 installs, where the SVCHOST problem rears its ugly head almost immediately, I have confirmed that manually installing MS13-097/KB 2898785 fixes the issue. Running Windows Update or Automatic Updates proceeds normally and in fact, is much quicker than it has seemed in years. Likely we have been experiencing this issue for a long time and simply chalked it up to Windows Updates being slow in general. Oh, the wasted hours!

On deployed machines, if you’re lucky the Cumulative Security Update for Internet Explorer will be installed by Automatic Updates. This is likely what happens for most people who have Automatic Updates turned on; they simply never notice the slowdown that SVCHOST causes. However, if you run into this issue in the field, it can be very time consuming to run Windows Updates since the SVCHOST problem makes the computer run like molasses. In this situation, you should kill the SVCHOST.EXE process in Task Manager, which will free up the computer’s CPU so that you can quickly download and manually install the update. After restarting, you may notice that SVCHOST still spikes the CPU when Windows is searching for updates, but it should be brief and the effect should be negligible.

The Cumulative Security Update for Internet Explorer is dependent on the version of Internet Explorer installed, as I list below for quick reference:

If you’re like me and you do a lot of virtual machine Windows XP installs, I have two suggestions. First, if you are doing clean installs of Windows XP SP3, install the IE 6 patch right away and save yourself a lot of time before running the initial rounds of updates. Second, where possible, you may want to save a copy of a clean Windows XP SP3 virtual machine installation that is fully patched and ready to go. This way you can clone the installs without needing to go through the Windows installation and update cycle over and over again. Of course, you’ll need to apply the correct Windows product key and reinitialize the MAC addresses of the virtual machines for proper cloning. I’ve become a big fan of using VirtualBox and the Open Virtualization Format (OVF) to package virtual machine installs for this purpose, as the OVF/OVA files it generates can be imported into almost any virtual machine software (the glaring exception being Parallels, which doesn’t seem interested in supporting OVF even after years of customer requests).

Hopefully this article will save you a lot of time, as it took me a while to nail down the cause of the issue and then find the cure. I’m curious how many of you still have large Windows XP user bases and what your plans are for supporting them going forward. Post your comments below.

I write a blog about technology history (thisdayintechhistory.com) where I compose at least one post for every day of the calendar year detailing an event that took place on that day. I’ve been writing this blog for a couple of years and have developed a little bit of a following. After the first year of writing the blog, I realized I was going to have a problem. The way WordPress works, the latest post is always shown first. This was fine when I was making posts daily during the first year, since I was posting in chronological order anyway. But when January 1st of the next year came around, the posts I had made for the previous year would be buried a year back. I wanted my readers to be able to visit the site daily and see the latest posts for that day, no matter what day of the year it was.

At one point I figured out a way to make WordPress display the blog posts for any calendar day, depending on what day it was. But I wasn’t particularly satisfied with that solution, especially when I realized I had a bigger problem on my hands: if I wasn’t posting new articles every day, the RSS feed wouldn’t update. Since I was using the RSS feed to push articles to a Facebook page as well as send out a daily e-mail, I had to figure out a way to “refresh” the previous year’s posts for every day of the year. Plus, that way anyone subscribed to my feed would always get the history articles for that particular day.

I did some research but could not find any automated way to do what I had in mind. So eventually I settled on simply manually updating each post. Since the events that happened on a particular day of the year don’t change, I simply “recycle” the posts by updating the year of the post when their particular day rolls around. For example, on January 1st I update the publishing date for all the posts of that day to January 1, 2013. This puts that day’s posts at the top of the blog and also re-injects them into the RSS feed. The Facebook page gets updated with that day’s posts and an e-mail goes out with the posts for that day.

The problem was that I had to do this every single day for just the posts of that day. I couldn’t do posts ahead of time, as that would make the posts disappear from the blog since they now became “scheduled” for a future time. Since I had people using my blog to research technology history for any day of the year, that wouldn’t work. I had to keep all posts available to view in the archives. So I had to try to remember every day to update my blog, preferably in the morning.

As you can imagine, this wasn’t a great solution. Life gets in the way and I’d forget to do this most days. So at times I was catching up for several days at a time. Not a huge deal, but some of my readers were complaining. I couldn’t blame them. It’s not as much fun to read daily history articles for events that happened a few days ago. So I finally decided to do something about it, since I still could not find a solution that met my needs.

My Big Fat WordPress Adventure

In my career, I have learned much about various programming and scripting languages. I am pretty good at understanding code, but I am by no means a professional programmer. This is why I put off this project for so long: I knew it would take a lot of research and testing as I hacked my way toward a solution. My adventure to create a working process was actually more difficult than I had imagined. It seemed that every time I made progress, I would discover more details that needed to be taken care of. All in all, the story I’m about to tell you spanned about a week and a half.

I knew that WordPress stores all of its data in a MySQL database. My plan was to figure out a way to run a job at the same time every day that would directly edit the database with the updated date info. My first goal was to find where the date information was stored and create a MySQL statement that would update the year to the current year. I have a site set up on my reseller account that I use for testing and demoing sites for my clients, so I used it for much of my testing.

I found that each post stored the published date in the post_date field of the wp_posts table. I discovered that there is no simple function in MySQL to update a date field’s year to the current year. There is a function to increment a date by one year, which would work if I could guarantee that each post was only one year old. But I figured it would be better to simply set it to this year’s date regardless of which year the post was currently published. I figured this would cover more situations, as I did have a few posts that still had 2011 publish dates, plus I wanted my solution to be applicable to other people’s situations. So I created and tested a MySQL statement that accomplished what I wanted. Great, step one complete, right?
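A statement along these lines does the trick. This is a sketch, not my exact production statement – it assumes the default wp_ table prefix and only touches published posts whose month and day match today, so adjust the WHERE clause for your own needs.

```sql
-- Sketch: set the year of today's posts to the current year, whatever
-- year they were last published in. Assumes the default wp_ prefix.
UPDATE wp_posts
SET post_date = DATE_ADD(post_date,
        INTERVAL (YEAR(CURDATE()) - YEAR(post_date)) YEAR)
WHERE post_type = 'post'
  AND post_status = 'publish'
  AND MONTH(post_date) = MONTH(CURDATE())
  AND DAY(post_date) = DAY(CURDATE());
```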

Not quite. I discovered that while the posts themselves did indeed show up with the current year, when listed in the WordPress administrative interface under the All Posts section, they showed as still being posted in the previous year. If I clicked on them to edit them, they showed the current year. Weird. What was wrong? In attempting to figure out the problem, I studied the raw XML RSS feed from my testing server and noticed that the <pubDate> element showed a timestamp with a +0000 time zone. +0000 represents Greenwich Mean Time (GMT), now officially standardized as UTC, or Coordinated Universal Time. Many computer programs store time in UTC to avoid the complications of time zone information. I remembered that previously I had found another field in the wp_posts table called post_date_gmt. It appears that WordPress uses both post_date and post_date_gmt for different purposes. Why WordPress stores time information in two different ways, I’m not sure. It would seem to make sense to only store it in UTC, but I’m sure they have a good reason. I found one possible explanation here. So anyway, I now had to modify my SQL statement to update not only post_date but also post_date_gmt. After some trial and error, I settled on a method of converting post_date into UTC time and setting post_date_gmt to that value.
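In sketch form, the conversion looks like this. Note that CONVERT_TZ with named time zones requires MySQL’s time zone tables to be loaded; if your host hasn’t loaded them, substitute your server’s explicit UTC offset (such as '-05:00') for the session time zone.

```sql
-- Sketch: mirror the refreshed local timestamp into the GMT field.
UPDATE wp_posts
SET post_date_gmt = CONVERT_TZ(post_date, @@session.time_zone, '+00:00')
WHERE post_type = 'post'
  AND MONTH(post_date) = MONTH(CURDATE())
  AND DAY(post_date) = DAY(CURDATE());
```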

After further testing, I was satisfied that I had created the correct procedure for refreshing the date of a post. The next step was figuring out how to fire off the MySQL statement at a specific time. I researched MySQL events, but discovered that my host, Hostgator, did not exactly support setting them up. I needed the SUPER privilege to start the event scheduler, and Hostgator does not grant this on shared or reseller accounts. The support representative I chatted with claimed that Hostgator does support MySQL events without granting the SUPER privilege, but that this was beyond the scope of their support. I’m not sure how that is possible, but regardless, I gave up on this option once I figured out that I could run a UNIX cron job with the mysql command-line tool.

Sure enough, I was able to refresh my posts using the MySQL statement running from a cron job on my testing server. But there was a problem. When calling the mysql command-line tool, I had to pass the password as a parameter on the command itself. This is considered a security hole, as passing a password in cleartext is not good practice, especially when the cron job logs the command and sends it by e-mail. So before I implemented the MySQL commands on my live server, I had to find a way around this. Luckily, I quickly discovered that if you store the password in a MySQL configuration file, you do not need to pass the password on the command line. With Hostgator, I could create a file called “.my.cnf” in the root of my server directory, outside of the “public_html” folder. This was a sufficiently secure method of storing the password for use with the mysql command-line tool.
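For illustration, here is roughly what the two pieces look like. The user name, database name, and paths below are placeholders – adjust them for your own host, and set the config file’s permissions to 600 so only you can read it.

```
# ~/.my.cnf – lives outside public_html; mysql reads it automatically,
# so no password ever appears on the command line or in cron's e-mail.
[client]
user = wp_dbuser
password = yourpasswordhere

# crontab entry – run the refresh at 12:05 AM server time every day
5 0 * * * mysql wp_database < /home/youruser/refresh_posts.sql
```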

Feed Me, Seymour!

I implemented the process on my live server and I thought I was golden. The posts seemed to be updated correctly but I noticed that my RSS feed wasn’t updating. I use the new service FeedPress to syndicate my RSS feed, and it wasn’t reflecting the latest posts. In my testing I discovered that if I manually updated my latest post, even if I didn’t actually change anything but simply pressed the update button, the feed would in fact refresh. I studied the raw XML RSS feed from FeedPress and noticed the <lastBuildDate> element was not updated. Further research showed that <lastBuildDate> is filled with data returned by the get_lastpostmodified() function. Studying the MySQL database, I saw fields called post_modified and post_modified_gmt. Could it be that WordPress searches the post_modified and/or post_modified_gmt fields to find the latest date and uses that for <lastBuildDate>? I tested modifying both fields with MySQL statements and sure enough that seemed to bump the <lastBuildDate> element in the RSS feed from my testing server. Hoping I had reached the end of my journey, I implemented the changes to my live server.

Unfortunately, the RSS feed did not update on my live server. Obviously, the fact that I was using FeedPress created a different scenario than the setup on my testing server, which isn’t using FeedPress. Something wasn’t happening to trigger FeedPress to rebuild its feed. However, I could force-refresh my FeedPress feed from FeedPress’s web site, and I verified that the feed was updated and the <lastBuildDate> element was correct. So my feed was being properly generated; it simply wasn’t triggering FeedPress to automatically refresh itself. According to the FeedPress plugin documentation, when a post is published or updated, the plugin sends an XML-RPC “ping” to notify the FeedPress service to update itself. Something about my MySQL statements wasn’t triggering this ping. Off to do more research.

What I discovered in regard to XML-RPC pings was that WordPress automatically triggers these pings when a post is updated or published by normal means. The first thing I found was that WordPress inserts a row into the wp_postmeta table containing the ID of the post that was updated, a meta key of “_pingme”, and a value of “1”. In English, it seems WordPress is saying to itself, “I need to send out an XML-RPC ping for the post with ID xxx.” So I created and tested an additional MySQL statement to insert these rows as necessary. Was I finished? Not yet, but I was much closer.
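A sketch of that insert, again assuming the default wp_ table prefix and the same month-and-day matching as the earlier statements:

```sql
-- Sketch: queue an XML-RPC ping for each post refreshed today by
-- mimicking the _pingme row WordPress creates on a normal publish.
INSERT INTO wp_postmeta (post_id, meta_key, meta_value)
SELECT ID, '_pingme', '1'
FROM wp_posts
WHERE post_type = 'post'
  AND post_status = 'publish'
  AND MONTH(post_date) = MONTH(CURDATE())
  AND DAY(post_date) = DAY(CURDATE());
```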

The next issue I ran into was that the rows I was inserting weren’t being processed in a consistent manner. Once WordPress processes the “_pingme” records, it removes them. I could verify that WordPress cleaned up these rows when the normal publishing process was followed, but it would not remove the rows I created with MySQL – at least not right away. Further research indicated that WordPress processes those “_pingme” rows using a function called wp_cron(). Basically, WordPress uses the wp_cron() function to do various maintenance tasks on itself, including processing any XML-RPC pings that need to be fired off. Unlike the UNIX cron facility, which can be triggered on a time-based schedule, by default WordPress checks whether it needs to run wp_cron() every time a page is loaded, either in the administrative interface or by a user visiting the WordPress site. This isn’t to say that wp_cron() runs every single time a user visits a WordPress site; WordPress only checks to see if wp_cron() needs to be run. Whether wp_cron() actually runs depends on whether any events have been scheduled for it to process. My method of inserting the rows with MySQL statements wasn’t generating an event for wp_cron() to process. However, if anything else generated an event for wp_cron() – such as manually updating a single post – then the very next time a page loaded on my WordPress site, wp_cron() would run and process all the “_pingme” rows I had created. In theory, I could have been satisfied knowing that WordPress would eventually process the XML-RPC pings I created, but I really wanted to make sure things happened in a timely fashion. So I needed a way to create events for wp_cron() to process. Off to do some more research.

My research led me to the function wp_schedule_single_event(). This is the function that schedules events for wp_cron() to process. By passing this function a “do_pings” hook name, I could tell WordPress that it needed to process those “_pingme” rows I had generated with MySQL. But how could I call this function, given that WordPress functions are all PHP code? I had to figure out a way to run PHP code from a cron job, similar to the way I was running the MySQL statements from a cron job. Yet more research led me to the correct way to call WordPress functions from the PHP command line. So I implemented a PHP script to call wp_schedule_single_event() with a “do_pings” value, followed by a call to wp_cron() to process the event. My testing showed that everything was working correctly … yet FeedPress was still not updating! Argh!

At this point in my testing I literally went to bed to sleep on it. At some point in my sleep I had an epiphany: I needed to study the code of the FeedPress plugin to see how it pings the FeedPress service, since it must do something outside the normal WordPress XML-RPC ping process. When I woke up, I discovered that the FeedPress plugin uses a function called feedpress_publish_post() to send the service an XML-RPC ping. This function is hooked to a WordPress action called publish_post, which most certainly runs when a post is published by normal means. So it would appear that all I had to do was call this feedpress_publish_post() function in my PHP code and I should be done. I wasn’t sure if I would be able to call this function from my existing PHP code or if I would have to jump through some more hoops to load the plugin code first. So I went ahead and just added the function call to my PHP code and tested. After all this work, was I finally done?
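For those curious, the command-line PHP script ends up looking roughly like this. It’s a sketch: the wp-load.php path is specific to your install, and I guard the FeedPress call with function_exists() since the plugin’s function signature may vary between versions.

```php
<?php
// Sketch: run from a cron job after the MySQL statements have
// refreshed the posts. Adjust the wp-load.php path for your install.
require_once '/home/youruser/public_html/wp-load.php';

// Queue a "do_pings" event so WordPress knows to process the
// _pingme rows we inserted with MySQL...
wp_schedule_single_event( time(), 'do_pings' );

// ...then run WP-Cron now instead of waiting for the next page load.
wp_cron();

// Finally, nudge FeedPress directly. This function comes from the
// FeedPress plugin, so only call it if the plugin is loaded.
if ( function_exists( 'feedpress_publish_post' ) ) {
    feedpress_publish_post();
}
```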

Hallelujah! The function call worked perfectly the first time I tested it! All that was left was to clean up the way I was calling the MySQL statements and PHP code, to satisfy my own code compulsion. I did this by putting the MySQL statements into a file instead of directly on the command line. Then I created a simple Linux shell script that first ran the MySQL statements via the mysql command-line tool, then called the PHP code that scheduled the “do_pings” event, ran wp_cron(), and called feedpress_publish_post().

I’ve been running this setup for 4 solid days now and all has worked without a hitch. Looking back, given all that I’ve learned during this process, I’m certain I could have accomplished all of this much more cleanly by using various PHP functions. But since the process is working, at this point I’m satisfied and need to spend time on other things, like my paying clients! So if anyone would like to look over my code and suggest a more streamlined or “correct” way of implementing this, I’m happy to review your ideas. I know I’m not the only one who writes a daily history blog, so I’m sure this is something many others need help with.

The Code

The following is the actual code I have implemented to update the publish date for every post on a particular calendar date. Offhand, I’m sure there will be a problem during a leap year, but I’ve got a few years to deal with that! There are no guarantees this will work on your own setup, and you will need to change a few lines to match it, which I’ve noted in comments in the code.

I had a client call me today who was experiencing a message repeatedly popping up on their Windows 7 computer. The message was “pleaes remove all ity.im ads from your website”. Note that the first word is misspelled (“pleaes” presumably meaning “please”). Suspecting some sort of malware, I did some research but found remarkably little information about it. The information I did find was at most a day or two old, and it did confirm my suspicion of malware. However, I did not find any definitive resolution.

I proceeded with my normal process of sniffing out malware but I did not find anything myself. So I went ahead and ran my usual ace-in-the-hole anti-malware utility Combofix. It removed the following file:

C:\Windows\SysWow64\Email.exe

However, the pop-up still occurs.

Looking at the pop-up window in Task Manager, it appears to be tied to explorer.exe. I’ve also noticed that the explorer.exe process is using an unusual amount of CPU (20–40% when seemingly doing nothing), plus its RAM usage goes through the roof, taking up 1–3 GB. I suspect it would use more, except this machine only has 4 GB.

At this point, I’m still researching how to fix the problem and testing various methods to clean it. I’ll update this post as I find a resolution. Please comment below if you have encountered this malware and if you have found a successful resolution.

Update 1: It appears that running TDSSKiller from an external boot device identifies a Rootkit malware, Rootkit.Boot.SST.b. Another commenter suggested that HitmanPro identifies the rootkit as Trojan.MBR.Alureon!IK. Some research shows that these may be the same rootkit with different names. I will continue to monitor the infected PC to ensure it stays clean.

Thanks to commenters “Bretnerjm” and “Carolin Gehle” for their help! And a special shout-out to my friend and fellow virus slayer Rusty Herman. He suggested running TDSSKiller from an external drive to me earlier this morning. I just hadn’t had a chance to test it out until now.

Update 2: For those who are less technically savvy, you may want to try downloading and using Windows Defender Offline. It is a ready-made executable from Microsoft that can create a bootable USB or CD/DVD for dealing with rootkits such as these. I have not had a chance to try this myself for this particular infection, so I would love to hear any feedback on this method. It appears a few commenters have had success using this method, so this is what I now recommend since it is probably the easiest method for most people.

As technology professionals, we can’t afford to stop learning about new technology. But the same thing applies to us as business owners – we can’t afford to stop learning about business. The problem is that many of us start our businesses without a solid, fundamental understanding of how to run a successful business. Then we focus all our time on taking care of clients or the technology side of our business, and before we know it, we’re overwhelmed. That’s why I started Solo Tech Pros. So make sure to take the time for this course, and keep coming back!

I found a nice article from Chris Guillebeau on his site The Art of Non-Conformity regarding starting a business while working full-time. For those of you who are contemplating starting your own business doing technology work, this is a worthwhile read.

I’m curious how many of you independent technology professionals also work another job. Are you working toward making your business full-time, or are you satisfied with keeping it on the side? How many of you who work your business full-time started off part-time?

I know I started my business on the side, almost by accident. As a technology professional in the mid-to-late 1990s, there wasn’t much help available for small businesses or for individuals at home. This was before the days of Geek Squad and before other tech support companies started popping up. People who found out I did technology work latched on to me to help them with their business or home technology. In fact, I coined the term “leech effect” to describe the way desperate people would “leech” on to anybody with a pulse and a hint of tech knowledge. As I did more and more work on the side, I discovered I much preferred helping my clients to working full-time for someone else. My wife and I prepared (albeit not very well) for me to leave full-time work, and when the opportunity presented itself, I jumped and haven’t looked back since.

If you are doing tech work on the side, what is holding you back from taking your business full-time? If you now work your business full-time, what finally prompted you to leave employment? For me, it was wanting the flexibility to spend more time with my newborn daughter. I’d love to hear from all of you out there what it was for you.

Recently, as I was ordering a pizza online, I noticed that the company was charging me for delivery. They took great pains to make it clear that this additional charge was not a tip. I understand the need to charge for delivery – companies need to cover the cost of the driver. But I also expect that when I order pizza, delivery is part of the deal. Seriously, how many people pick up pizza anymore? So the fact that I’m being charged “extra” for delivery is a little irritating. I would rather they simply roll any delivery costs into the standard price of the pizza instead of calling out the fact that I’m being nickel-and-dimed for something that should be part of the service anyway.

This got me thinking about how we as independent technology professionals charge our clients. I know some charge for things such as travel time or phone calls. I have absolutely no problem with charging clients fairly for services performed. But it doesn’t really matter what I think is right: if the client feels they’re being nickel-and-dimed, theirs is the only opinion that matters. I can tell you that I’ve gained many clients who complained that their previous technology help charged them for travel time and other “extras,” and who were happy that I didn’t charge for drive time.

So how can independent technology professionals fairly charge clients for things such as travel or phone calls? I think the key is to start thinking in terms of value delivered rather than time involved. Time is not always a fair indicator of value; in many cases, it isn’t one at all. I know that I can get a whole lot more done than other technology professionals in a shorter amount of time. You probably feel the same way.

True, in some situations time may be the only reasonable measure for billing a client. But look at how shipping companies charge: UPS and FedEx charge more to deliver something faster, because faster delivery is more valuable to customers. Wouldn’t it be ridiculous if delivery companies charged based on how long they had the package? Slower delivery would cost more!

Just like faster shipping, it is often far more valuable to our clients when we deliver a solution quickly. So should we charge less for work done more quickly than expected? Conversely, should we charge our clients more when a project takes longer than we estimated? I can tell you that I’ve also gained clients who complained that a particular bill from their previous technology help came in several hours higher than expected.

To prevent the nickel-and-dime perception, I charge a fair hourly rate that is set to absorb expenses like drive time and quick phone calls rather than billing for them separately. Why don’t I charge for drive time? Because I’m not delivering value to the client by driving. It’s not the client’s fault I live a certain distance away, so why should I charge for the time it takes me to get to them? All that does is make them shop around for another technology expert who is closer. To be fair, if a client is very far away, I will let them know that I need to charge something extra to cover travel expenses – but I don’t charge my normal hourly rate, nor do I charge for the entire travel time. Again, I’m simply not delivering value sitting behind the wheel. That is one reason I’m doing more and more work by remote screen sharing: it lets me help my clients without the whole issue of drive time. More on that later.

I do charge a one-hour minimum for my services. Again, I look at it more from the standpoint of value delivered than time involved. If I can resolve a problem or implement a solution for a client in 15 or 30 minutes, isn’t that actually better than if I took the full 60? The client is back up and running – and making money – sooner. So I look at my one-hour minimum as more of a flat-rate charge for a solution. I rarely get any complaints about this method of charging.

Phone calls are a bit of a trickier issue. My rule is that I do not charge for a quick call where I’m answering a simple question or giving a little advice. Charging for phone calls just makes clients not want to call you! My apologies to all my attorney clients out there, but just ask people’s opinion of lawyers who charge by the minute for phone calls! However, if I’m troubleshooting an issue that ends in the resolution of a problem, then I probably will charge. The difference, again, is in the value delivered. If I’m on the phone for a few minutes and give the client a little advice, I chalk that up to good customer relations. But if I help the client solve a problem that was costing them time and/or money, the fact that I was able to do it quickly over the phone is more valuable than if the client had to wait for me to drive there. I also charge for longer phone calls where I’m working with a client on a project plan or something similar. Again, I’m delivering value – pretty much the same value as if I had been there in person.

I do charge less for phone and remote work because I can pass along some of the drive time savings to the client. Passing along savings is always a sure-fire way to keep clients happy: it’s a win-win because I don’t need to drive and my client saves some money. However, my one-hour minimum also applies to phone and remote work. Even though I’m charging a discounted hourly rate, this often works out in my favor because I can resolve many problems in less than an hour. I’ve had instances where I took care of two or three clients with remote sessions in less than two hours, yet billed for two or three hours of work – so my effective hourly rate was actually higher. And that’s before I factor in that I had no unproductive drive time getting to those clients.
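To make that arithmetic concrete, here is a minimal sketch of the billing math. The rate, minimum, and session times below are hypothetical example values, not my actual numbers:

```python
# Hypothetical numbers: a discounted remote rate with a one-hour minimum.
REMOTE_RATE = 75.0   # discounted hourly rate for remote work (example value)
MINIMUM_HOURS = 1.0  # one-hour minimum per client session

# Three remote sessions that each took well under an hour.
actual_hours = [0.5, 0.75, 0.5]  # 1.75 hours of actual work

# Each session bills at least the one-hour minimum.
billed_hours = sum(max(h, MINIMUM_HOURS) for h in actual_hours)  # 3.0
total_billed = billed_hours * REMOTE_RATE                        # 225.0

# Effective hourly rate: revenue divided by time actually worked.
effective_rate = total_billed / sum(actual_hours)

print(f"Billed: {billed_hours} hours, ${total_billed:.2f}")
print(f"Effective rate: ${effective_rate:.2f}/hour")  # ~$128.57, above the $75 rate
```

Even at the discounted rate, the per-session minimum pushes the effective rate above what an undiscounted on-site hour would have paid – before counting the drive time saved.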

Bottom line, your clients’ perception of the way you charge them is their reality. You may not feel like you’re nickel-and-diming them, but charging extra for things that your clients may feel should be part of your service leads to this perception. Consider how you’re charging your clients and look at it from their perspective. You may find that you’re no better than a pizza delivery company in their view.

I’m curious how all of you out there handle the “extras” when working with your clients. Do you have certain methods in place like I do? Have you ever had a client complain about the way you bill them, and if so, how did you handle it?

This picture is for real. One of my service techs in the store I run found this nice little surprise when he opened a laptop for diagnosis. Yes, those are real bugs. It reminded me that the term “bug” was popularized back in the days of early relay and vacuum tube computers, when operators occasionally had to troubleshoot problems caused by actual insects finding their way into the hardware. The term has stuck to this day, but rarely does it refer to what we found here. This computer definitely needed to be debugged … in the original sense!

Those of us who support networks for our clients (especially networks with Windows computers) must employ various measures to keep those networks free of malware and other threats. One method that I believe is underutilized is DNS redirection as a protective layer, such as that implemented by the service OpenDNS. It is free to use in many cases: simply replace a network’s standard DNS server addresses with the addresses OpenDNS provides, and requests to many known sources of malicious software are redirected to an innocuous warning page, stopping the attack before it gets started. OpenDNS constantly updates its list of malicious sources, so neither you nor your clients need to do anything to stay current.
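As a concrete sketch of what “replacing the DNS numbers” looks like on a single Windows machine, you can point the network adapter at OpenDNS’s public resolvers (208.67.222.222 and 208.67.220.220) from an elevated command prompt. The adapter name “Local Area Connection” below is a placeholder that varies by machine; for most client networks you would instead enter these two addresses once in the router’s DHCP settings so every device picks them up automatically.

```bat
rem Point this adapter at OpenDNS's resolvers (run from an elevated prompt).
rem "Local Area Connection" is a placeholder -- substitute your adapter's name.
netsh interface ip set dns name="Local Area Connection" static 208.67.222.222
netsh interface ip add dns name="Local Area Connection" 208.67.220.220 index=2
```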

OpenDNS works as an additional layer of protection on top of your standard anti-virus software and firewalls. Because it is so simple to implement and inexpensive to use, it has become a standard part of my network setups for all my small business and residential clients.

The DNS system has been in the news lately due to the DNSChanger malware. Do you use OpenDNS or another similar DNS service to protect your clients’ networks?