We’ve been consolidating systems at work and were faced with scrapping our asset management system in favour of our new case management system, Cherwell. Previously we used Alloy for inventory, and Alloy had no problem reporting hardware data during the task sequence, even for machines that would never be connected to our network. We’ve got a few so-called “Open PCs” that won’t be on our network and aren’t joined to the domain, and thus won’t be reporting any hardware inventory to SCCM. I spent a long time trying to force a hardware inventory within the task sequence, but it doesn’t seem to be possible. One option would be a script and a scheduled task that removes the machine from the domain some time after the image has been applied, but that would be annoying on a day-to-day basis – and it wouldn’t be reliable.

Thus my eyes fell on Intune, which doesn’t require a domain connection and will deliver inventory details, which we can pull into our case system (for insurance reasons). I tried many different methods of accomplishing this, and kept getting stuck because the Intune and SCCM clients are inherently incompatible with one another: the Intune client will simply refuse to install where the SCCM client already exists.

I ended up creating a task sequence structure that accomplished what was required through scripts and SCCM PostAction.

Prerequisites:

Create a package pointing to a directory where you’ll keep your source files. We’ll use this later. Throw your Intune .msi and certificate in this directory.

Create an Install-IntuneClient.ps1 file containing the following (thanks to Peter for most of the PowerShell script):
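The original script didn’t survive in this post, so here is only a minimal sketch of what such a script might look like. The file name of the MSI is an assumption, not the author’s actual code – adjust it to whatever is in your package directory.

```powershell
# Sketch only – the MSI file name is an assumption; adjust to your own package contents.
# The Intune client MSI expects its account certificate to sit next to it,
# so this script simply runs the installer silently from the package directory.
$src = $PSScriptRoot
Start-Process -FilePath msiexec.exe -Wait -ArgumentList @(
    '/i', "`"$(Join-Path $src 'Microsoft_Intune_Setup.msi')`"",
    '/quiet', '/norestart'
)
```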

Run Command Line containing: cmd /c "yourfilenamehere.cmd"
I chose to point this at the package to pull the file from the server. You can of course do this in other ways, but this is my personal preference.

It’s been a while since we built our initial Windows 10 image, and it’s fallen quite a bit behind the times. We didn’t want to spend time updating the image until we were ready to go into production, and WSUS seemed to have issues installing some of the updates, resulting in systems that wouldn’t update properly unless manual update packages were installed.

To fix this I deployed a package to the task sequence containing this script:

Credit to this fella for the source: http://www.applepie.se/apply-hotfixes-during-task-sequence

Essentially this will run within the package directory, installing all .msu files found within. Currently it just installs KB3213522 – which enables WSUS to take over from there – but it could potentially save people who don’t have WSUS some serious hassle.
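The script itself didn’t survive in this post, so based on the description (and the linked source) here is only a rough sketch of the idea, not the original: run from the package directory, it feeds every .msu it finds to wusa.exe, the built-in Windows Update Standalone Installer.

```powershell
# Sketch only – installs every .msu in the script's own directory, silently, one at a time.
Get-ChildItem -Path $PSScriptRoot -Filter '*.msu' | Sort-Object Name | ForEach-Object {
    Write-Host "Installing $($_.Name)..."
    Start-Process -FilePath wusa.exe -Wait -ArgumentList "`"$($_.FullName)`"", '/quiet', '/norestart'
}
```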

Add the package to the task sequence, point it at the PowerShell script and you’re golden. Modify the text displayed within the quotation marks to your liking.

I recently had to update our corporate Adobe Reader installation, as it was getting old and throwing off errors.

This proved to be somewhat more complex than initially anticipated, due to how Adobe has decided to handle its versioning process. Essentially they have released one master version to which they release update packs. During my research I saw a thousand different ways people had chosen to go about it, but I think this method is the simplest by far – and it doesn’t require any scripts at all. First of all you need to sign up for an Adobe distribution agreement: https://distribute.adobe.com/mmform/index.cfm?name=distribution_form&pv=rdr

Once this is done you will be supplied with links to all the pertinent software. I can’t share these publicly accessible links, as that violates their terms of use, brilliant as that is. So sign up, get approved and come back.

Get your source .msi package from Adobe. This will be the earliest version number you can find in the Continuous track.

Customize it with Customization Wizard to automate the setup steps and generate an MST File.

Create an application in SCCM which points to the .msi file for uninstall information. I refer to this as the “base” install. Set it up with the following installation program: msiexec.exe /i AcroRead.msi /quiet /norestart TRANSFORMS=AcroRead.mst
Uninstall is left at default.
Detection method I set to .msi product code. Detection methods are key in this deployment.

Distribute this application to your distribution points and test that it works in a small deployment. Make sure it’s set to uninstall and supersede any prior versions of Reader, as they seem to keep the same GUID.

Create a new application and point it at the latest .msp update file available from Adobe. Don’t use an incremental update, though.

Make sure you target the uninstall for the “base” MSI file, not the .msp file or anything else. This will make sure you get the correct uninstall codes into SCCM, should you need them later.

Set the detection method for the update to the version number of the .exe file once the update has been applied. So install the update manually, check the version number of the .exe and use that for detection.
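A quick way to read that version number is from PowerShell. The path below is an assumption for a default 32-bit Reader DC install; point it at wherever Reader’s executable actually lives on your systems.

```powershell
# Path is an assumption – adjust to your actual Reader install location.
(Get-Item 'C:\Program Files (x86)\Adobe\Acrobat Reader DC\Reader\AcroRd32.exe').VersionInfo.FileVersion
```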

Make sure that the update is set up with a dependency on the Reader Base installation.

Deploy the update to a test group to verify proper handling.

Short and sweet.

With this method, I can supersede older versions of Reader with a new .msp file, keeping the base installation as is. As the .msp updates can be applied on top of each other, there are no concerns at all in maintaining the Reader application using this method. When an update is released, repeat the update steps, supersede the old application and you’re done. Could not be simpler.

If anyone’s found a simpler way of making all this work, I’d love to hear it. But every single option I have found, heard of or seen has been much more complex: some require .ini file hacks, others PowerShell scripts, VBScripts and so on.

This works within the method Adobe has decided to go with, and it takes minutes to deploy the updated application.

This is for blocking Internet Explorer extensions that are unwanted in the corporate space. I could find no resource available listing these CLSIDs, so I had to install the unwanted add-ons and collect the IDs myself; hopefully this will save some of you some time.

Each entry is marked with a date, since this list won’t be relevant in perpetuity. As I discover issues, I will update with new entries.

To disable an add-on you enter its unique ID, including the brackets {}. A value of 0 will prevent the add-on from running in IE. This will not prevent installation, nor will it, in the case of Amazon, actually remove the button. It will prevent its functionality, however, which is my aim, as Amazon in particular caused scripting errors that interrupted our end users in their tasks.
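For reference, these entries live under IE’s add-on management policy key, which is also what the “Add-on List” Group Policy setting populates. The CLSID below is a placeholder, not one of the actual add-on IDs:

```
Windows Registry Editor Version 5.00

; Add-on List policy: the value name is the add-on's CLSID, data "0" denies it.
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Ext\CLSID]
"{00000000-0000-0000-0000-000000000000}"="0"
```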

A continuation of my previous post, the aim with this particular entry is twofold.

Documentation. I want to document which steps I took and what had to be done to make this work in case I need to review the information at a later date.

It can potentially help others who are just starting out with Ubuntu Linux Server, too.

I’ve never had much experience with Ubuntu Linux, save for being able to install, configure and use the Desktop version of Ubuntu. While that’s all fine and dandy, it doesn’t really qualify one to install and configure an Ubuntu Server. Nevertheless, I decided that a learning experience never hurt anyone, so I set out the following goals and read up on the subjects:

Set up Ubuntu Server.

Creating RAID5 partitions on Ubuntu Server.

Samba network sharing.

Plex media server over the command line.

That’s it. It doesn’t sound terribly complicated, I’ll admit, but there really was a lot of fiddling, reading and trial and error involved.

Creating the installation media.

Having attached a VGA debug board and keyboard to my EX490, I was ready to create something to install Ubuntu from. Since I don’t have a USB optical drive, I opted for the flash drive variant. I headed on over to the official Ubuntu download page and got the latest version of 64-bit Ubuntu Server (12.10 at the time of writing).

I also went and grabbed Linux Live USB Creator. There is absolutely nothing complicated about using this application. Point it at an image file and at a thumb drive – and everything is handled for you.

On the back of the EX490 there are three USB ports. Insert the installation thumb drive into the bottom one, enter the BIOS and select boot from USB. Unplug all hard drives, if present, from the server and reboot.

Installing Ubuntu Server and configuring a RAID 5 array.

Once the Live USB has initialized you’ll see the Ubuntu logo and various options. At this point you should slide the hard drives back into the server and select Install Ubuntu. Follow the on-screen instructions and provide information where necessary. Once you reach the hard drive setup screen, you should select the manual option. I relied heavily on this post: https://help.ubuntu.com/12.04/serverguide/advanced-installation.html from the Ubuntu documentation. If you find my list lacking in detail, reference that instead.

Select the first hard drive, hit enter, and agree to the prompt asking if you want to create a new empty partition table.
When finished the drive should only list “FREE SPACE”.

Repeat step #1 for every drive you want to be in the array. If using RAID5, like me, a minimum of 3 is required.

Select the first drive, hit enter, and choose the automatic partitioning option from the list. I deviate a bit from the Ubuntu documentation here, but I found it much less difficult, and I could also be sure that I wouldn’t do anything completely daft using this method.

Repeat step #3 for every drive.

In the main partitioning list select “Configure Software RAID”, commit the changes after which you should select the “Create MD Device” option. Select RAID5 and enter the number of drives you’ve decided to use. You might be asked if you want to use a “hot spare” – I recommend against it, but it’s entirely up to you.

Now you’re presented with a list of partitions. Select all partitions of the same size (use the spacebar to select and the navigation keys to move up and down) and hit enter. I chose the largest partitions first, but it’s entirely up to you which way to go. Please make a note of your partition names here, e.g. /dev/sda1; they’re important later.

Repeat step #6 for the remaining, smaller, partitions after which you’ll be sent back to the main partitioning screen. With any luck you’ll have md0 and md1 in your list now, which are your RAID arrays. Note that your drives will still be visible in the list.

Open the large RAID array and select ext4 with mount point “/”. For the smaller array, choose “swap”.

Select “Finish” and you’re done configuring the array itself.

Installation will now continue and ask you a few things, all fairly self-explanatory. Feel free to encrypt the home directory, but I find it shouldn’t matter, since this is a server we’re setting up after all. At some point you’ll be asked which services to add – OpenSSH and Samba should be sufficient.

You’ll be asked to set up the GRUB boot loader. This could be a chapter all in itself, but I’ll try to simplify. Basically you’ll want the boot loader on every device. If you only select one device, and that one fails, you’ll be left with a system that cannot boot – and we don’t want that. Check your list from step #6 and specify the non-swap drives, i.e. /dev/sdx /dev/sdx etc. This will install the boot loader on all drives. Side note: if you screw up completely, not all hope is lost. When the system is up and running you can run sudo dpkg-reconfigure grub-pc from the terminal, select all the devices and install the GRUB boot loader to all of them. You’ll probably want to do this after a disk failure, too, just to be safe.

Done! Sit back, enjoy the feeling, open a cold one and pat yourself on the back.

Improving the RAID array speed.

By default the synchronization of your RAID array will be, for lack of a better word, atrocious. The fix, and the reasoning behind it, is explained in full in this article; the short version is that you need to raise the kernel’s md resync speed limits.
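The original commands were lost from this post. The numbers below are assumptions, so tune them to your hardware; add them to /etc/sysctl.conf to persist across reboots, or apply them immediately with sudo sysctl -w dev.raid.speed_limit_min=50000 and so on.

```
# /etc/sysctl.conf – raise the software-RAID resync speed floor and ceiling (KB/s)
dev.raid.speed_limit_min = 50000
dev.raid.speed_limit_max = 200000
```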

If you want to see what your array is doing at any time, use the following command.

cat /proc/mdstat

Try it now, note the speed it’s working at and the progress, then apply the above and see the difference. Thank me later. With beer.

Network configuration.

First things first: you’ll want to set up your sparkling new server with a static IP address, as it’ll save you lots of annoyance in the long run. Open up your interfaces file using a command line based text editor. In your terminal, enter the following:

sudo nano /etc/network/interfaces

Sudo is “super user do” and nano is the name of the text editor. You should have the eth0 interface in the list. Now change it to suit your needs. Here’s an example:
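The original example didn’t survive here, so this is a reconstruction; the addresses are placeholders for your own network. A typical static stanza for /etc/network/interfaces on 12.10 looks like this:

```
# /etc/network/interfaces – eth0 with a static address (example values)
auto eth0
iface eth0 inet static        # was: iface eth0 inet dhcp
    address   192.168.1.10
    netmask   255.255.255.0
    network   192.168.1.0
    broadcast 192.168.1.255
    gateway   192.168.1.1
```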

Yours will most likely be different from the above. Change the IP address of the server to something easy to remember; the network and broadcast addresses are not strictly needed, but I like to keep them in case I make a typo in the netmask. The most important part here is to change dhcp to static. Press ctrl+x, confirm the save, and exit the editor. Then run the following command in the terminal to restart the networking service.

sudo /etc/init.d/networking restart

Now test that your network settings are properly configured by opening an SSH client (I use PuTTY) and connecting to your server. If you can connect and log on, you could technically remove the debug cable, close up the server and put it back in place. The remaining configuration can be done via PuTTY over the network. I suggest you wait until the RAID array has finished resyncing, and then safely shut down the server using this command:

sudo shutdown -h now

You could also ignore the resync and proceed – do as you wish. At any rate, don’t move the server to its final location or remove the debug cable until the resync is finished.

Updating the server and applying updates.

Having an up-to-date server is obviously a great idea, here are the commands you need to issue to accomplish that:
sudo apt-get update
sudo apt-get upgrade

apt-get update will refresh your package lists from the Ubuntu repositories, while apt-get upgrade will apply the updates it finds. Confirm any queries and you’re done. You may need to restart the server to finish applying all updates. A restart is simple and is done with this command:

sudo reboot

Setting up Webmin.

Webmin is a web based administration tool, which simplifies some administrative tasks and saves you from having to rely so much on remembering commands. I used this to create my Samba shares and users.

Setting up Samba.

Unlike the guide, however, I did not create my share within the home folder. I simply used mkdir to create /shares/TV, /shares/movies, /shares/music and so on.
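For reference, a share definition in /etc/samba/smb.conf for one of those directories might look like this; the share name and user are examples, not my actual configuration:

```
# /etc/samba/smb.conf – one block per shared directory (example values)
[music]
   path = /shares/music
   browseable = yes
   read only = no
   valid users = paul
```

After editing, restart Samba (sudo service smbd restart on 12.10) for the change to take effect.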

If, for some reason, when trying to mount the resulting drives in Windows, you get “access denied” or similar errors, you should check that the share folders have the correct permissions set.

For example, you want to make sure that the user “paul” has permissions within the /shares/music directory, even though the Samba configuration should have given him these permissions. Issue the following command, and see if that resolves the issue:

sudo chown -R paul:users /shares/music

The syntax, then, is: sudo chown -R username:group directory

The -R changes permissions for all folders and files within the directory as well. Leave it out, and it will only change permissions for the directory itself.
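To see the recursive flag in action without touching your real shares, here’s a safe little demo on a scratch directory; it chowns files you already own to your own user and group, so no sudo is required.

```shell
# Build a scratch tree, then recursively (re)set its owner and group.
dir=$(mktemp -d)
mkdir -p "$dir/music/albums"
touch "$dir/music/albums/track.mp3"

chown -R "$(id -un):$(id -gn)" "$dir/music"   # -R descends into albums/

# Confirm the nested file carries the expected ownership.
stat -c '%U:%G' "$dir/music/albums/track.mp3"
```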

With any luck, you’ll now be able to mount your shares as network drives in Windows.

Installing the Plex Media Server.

Update August 17th 2013 - text changed since the information changed. Refer to official documentation instead.

http://wiki.plexapp.com/index.php/Downloads#Ubuntu_-_PMS

Once the server has been installed, open your browser and navigate to http://server-ip:32400/manage – and set up your server! Basically you only need to point it at your media and it’s ready to go.

Final Thoughts

This was a first for me. Installing Ubuntu Server, creating a software RAID, and much of the above – I’d never tried before. There’s certainly a somewhat steep learning curve to all this, and I must admit I find documentation that’s applicable to whatever version you’re running fairly hard to find – not to mention that the Ubuntu forums tend to be flooded with a lot of outdated information. I’m sure I’ll contribute to this trend with this post, but I can laugh at my idiot self in a year or two.

For those of you who know what IRC is, I highly recommend #ubuntu-server on FreeNode as a great place to ask questions and have them answered, provided you have patience. I certainly owe a few chaps on there a great deal of thanks for helping me troubleshoot issues with, amongst other things, the GRUB loader.

That said, the performance gains from using Ubuntu Server, rather than the desktop edition, are certainly not negligible. Use it, if you dare, otherwise just install Ubuntu Desktop on a drive and pop it in the box, as I mentioned briefly in my first post about this server.

I hope you’ve enjoyed this mini-series about this project of mine, I’ll make new posts if I find some new interesting things to do with this neat little Home Server.

I finally received the debug board for the EX490 and the much anticipated four Western Digital 3TB Red disk drives. I had a bit of an episode with customs here in Denmark and ended up paying a massive premium for the debug board, so I was really hoping it would be worth the added cost.

I opted for the VGA/PS2 assembly, as the VGA-only version was not available at the time I was building this project. From what I know a USB keyboard should work equally well. The serial port I had no use for, so I decided on the $80 unit.

I hope it shows on the pictures, but if you’re in doubt, the assembly kit is an excellent value and has great build quality. I set to work disassembling the server to install the debug cable, you can find a complete guide to the disassembly itself here. You may also want to read this regarding where to place the jumper to unlock the BIOS.

Once I had finished the disassembly I carefully removed the CPU heatsink, used a cleaning solution to remove the excess thermal paste from the heatsink and installed my new CPU with fresh thermal paste. I chose an E6400 for this project, as it’s much faster than the Celeron processor the unit shipped with, and shouldn’t cause any thermal issues. I decided against upgrading the memory, two gigabytes should be sufficient for the needs I have, and if not – it’s an easy upgrade.

I played around with various mounting options for the debug cable, but ended up running it up next to the S-ATA backplane and having it on the top of the server.

Remember to exercise extreme caution when re-assembling the server. You risk damaging the cable from the assembly when sliding the motherboard/PSU tray back into the case. I got around it by pulling slightly, and very carefully, on the cable as I slid the assembly back in place.

Clearly this wasn’t done with aesthetics in mind. I’ve seen options where users have drilled holes in the case and mounted the assembly on the side with velcro, or modified the back to install it there – even using the top drive bay for the assembly kit. Since I’m using all four drive bays I don’t have the latter option and the others do not appeal to me terribly.

I took a closer look at the assembly kit and realized that the cable itself would easily detach from the connector on the PCB itself. This allows me to leave the cable in place within the server and simply remove the assembly. I considered stuffing it down the back of the server, but there seemed to be ample room on the top of the unit, so I decided to test it out.

Placing the cable on top, like this, allowed me to slide the case top back on without a hitch. Now I have the connector cable readily available whenever I require it – without having to disassemble the entire unit. This way, when I have to troubleshoot, I merely need to remove the top cover by sliding out a few drives and pressing the plastic clip holding it in place, re-attach the assembly and away we go.

I installed the four hard drives I’d purchased for this project into their drive bays, attached a monitor and a PS/2 keyboard to the assembly kit, and powered on the server. It worked without a hitch.

If you’re reading this because you’re considering purchasing the debug board yourself, please do not hesitate. It’s completely worth it. Being able to change boot devices, change fan settings and connect a monitor if something goes wrong – it’s just absolutely essential. Build one yourself or get one from Vovtec, it matters not, if you plan to fiddle around you absolutely must have this capability.

Introduction and considerations.

For a long time I’ve been pondering building myself a home server. To host my media files, handle my file transfers and save me from having to have a 24/7 turned on workstation. The primary motivation for such an endeavour has been the reduction of my electricity bill – not to mention the wide array of capabilities offered by such an item.

It hasn’t amounted to more than a loose collection of ideas and thoughts for years on end, but about a month ago I decided that the time had come. I had been given a HP MediaSmart EX490 about a year ago, but had only fiddled with it a little bit before it ended up in a state of permanent shut-off.

I received it without disc drives, which means the original operating system was lost – I had no installation media, and HP had discontinued support of the unit. As such I figured I’d look at commercial NAS options to see if I could find something sparkling new that would suit my needs. I looked at Synology, QNAP and ReadyNAS offerings. I read exhaustive amounts of reviews, opinions and arguments, and looked at the available features and surrounding communities. I was inches away from adding a ReadyNAS Ultra 6 to my basket when I heard that NetGear had dropped support for it – and was introducing new models. Great, I thought, until I looked at the price. At almost twice the price (USD $1000 for the 516 at the time of this post) I decided that it wasn’t worth the investment, and that I didn’t much care for the way NetGear decided to leave its existing customers out in the cold on a now-dead platform, which would not receive updates – despite the hardware seemingly being more than capable of running future versions.

That’s when I decided to take another look at the MediaSmart server, which by now was covered in a thick layer of dust, rotting away silently in a closet.

For my money, this is a really attractive piece of kit. The build quality is very good and the aesthetics are great as well.

It has four drive bays, which is two less than I had initially planned for, but it should suffice for a long while to come. Naturally all drive bays are hot-swappable and easily accessible – as evidenced from the photo here:

My concerns were primarily that;

The unit is headless – i.e. one cannot connect a monitor to it.

The unit can only boot up from the first drive in the array.

I had none of the original software or installation media.

I was concerned about the Celeron CPU contained within the unit, and whether it had sufficient processing power for my needs – first and foremost the Plex Media Server, specifically transcoding of media.

Now, before I could settle on the unit for permanent use I decided to test it out – and see if I could even load anything on to it.

I grabbed an old Western Digital drive, popped it in an external USB enclosure, connected it to a spare computer and began the process of installing Ubuntu Desktop on it. Once Ubuntu Desktop was up and running, I made sure to allow remote connections and disconnected the drive. I pushed it into the first drive bay of the EX490 – and lo and behold, it booted! I got the assigned IP address from my router, attempted a VNC connection (I highly recommend TightVNC for this) to the device and got in! At this point I was completely ecstatic – I had in no way anticipated that this method would actually work, much less that it would work so flawlessly.

I quickly populated the server with more junk drives on to which I dropped various media and installed the Plex Server application.

In short order I was up and running, and was able to access my content through both my DLNA enabled Samsung blu-ray player and the Plex Client application.

Proof of concept done, it was time to address the concerns I had – which after a lot of reading has resulted in the following:

Bought a second-hand Core 2 Duo E6400 CPU, which will fit into the socket, upgrade the processing power of my unit and won’t make it hotter than the surface of the sun.

Placed an order for a VGA/PS2 Debug cable, courtesy of Vovtech, which I’m currently awaiting delivery on. This will allow me to, finally, access the BIOS of my device, change boot priority and generally poke around the other options available. This will greatly help with troubleshooting if the network connection fails, or if I want to install an operating system from a flash drive.

Placed an order for four Western Digital 3 TB Red drives. These seem like the most obvious choice for a home server setup. They do cost a bit more than other drives with this capacity, but have RAID-specific benefits and come with a five year warranty from Western Digital. I am currently awaiting delivery on these as well.

As I’m awaiting delivery I am reading up on Ubuntu server, which I will attempt to load on to the server. While the Ubuntu Desktop client has been kind to me, it seems fruitless to utilize it for any extended period as a primary operating system on a server – and I’ve been looking for an excuse to learn Ubuntu Server.

I plan on following up on this post with the hardware upgrades, specifically if I managed to perform them without screwing everything up, after which I will detail my Ubuntu Server experience and possibly provide help to others who are looking to try their hand at this as well.

My goal is to set up the server with the four drives in a RAID 5 configuration to maximize available space and – hopefully – ensure that data loss is highly unlikely in the event of drive failure. Backups will be done on external drives for now, but at a later date I will likely be utilizing the eSATA port on the server to attach an external four-bay enclosure for proper backups.

The famous esata port – not too shabby with the extra USB connections either.

This post has already become long enough, considering it contains no information that could be said to be terribly useful to anyone. I will, to the best of my ability, document both the hardware upgrades and the installation of Ubuntu Server – with any luck there’s one or two people who could benefit, and if not, that’s okay too. 😉

Update April 13th 2012 – I cannot get this to function any more; it may be due to my device.
Update May 30th 2012 – I can now do this procedure again. But there’s an addendum. I’ve amended my original post.
Update April 17th 2013 – After moving to Windows 8 I no longer had any luck in getting my regular backup procedure to work. Everything but phone contents (images) and settings would fail to be backed up.
I went to Nokia Market and found an application by Mr. Tong Zang which seemed to offer the same functionality, only with export to a .csv file rather than individual message files. This has always been something I wanted to get, so I decided to give it a shot. It’s not free, but the cost is negligible for the function it offers. I installed the app and started my backup, but the application seemed to hang. After a few tries with the same result, I let it lie on my table in the “unresponsive” state, and after a few minutes there was a nice “backup complete” message. I now have a .csv file containing some 8000 messages – which is a lot nicer than individual files which this guide covers, let’s be honest.

The functionality is similar to the free tool which can be found on maemo.org – but I’ve always had issues with this command line based tool. It will also require you to activate developer mode on your phone, which SMSBackup will not. Hopefully the app will prove more stable if I export smaller batches at a time; I’ll update this post as appropriate.
April 20th 2013 – Mr. Tong Zang and I have exchanged some emails, he says he has identified the problem and will fix it as soon as possible. Still, it works, you just have to ignore all warnings and be patient.

The original backup procedure:

If you’re here, you’re likely much like me. I like to keep a record of text messages on my workstation. It’s nice to go into nostalgia mode from time to time and revisit them.

This has never been much of an issue for me, as I’ve always had Nokia phones. However, since I got my Nokia N9 I’ve struggled to get a working backup of my text messages, but I found a workaround. It’s not as pretty as you might like, but it works.

First, you back up your MMS messages. Do this by creating a backup on your N9.
Navigate to:

Settings

Sync and backup

Backup

This will create a backup of your “user” data. Connect your phone to your PC in Mass Storage Mode and go into the .backups folder. Here you’ll find an archive containing your MMS messages, photos and text content. It’s a simple .zip file, so extract it to your usual SMS backup location. Be warned, however, that the folder names will be very long and weird, but everything is there.

After doing this step, you’ll have noticed that you can’t see all your other messages. This is where the tried and true PC Suite enters the picture.

Reconnect your Nokia N9 and select Sync mode, PC Suite will detect it. Create a backup of your phone from within the application and save it somewhere locally.

NOTE: Since the PR 1.2 update I’ve had to keep the device screen active, to prevent it from locking, whilst doing the backup. If I didn’t do this, I kept getting a failed backup. It seems that the “Phone Specific Backup” part of the backup procedure will fail if you don’t keep the screen active. The internal backup function activates, and it will lock you out of your device (making a forced reboot necessary) if you don’t keep it active and away from the lock screen. When I did this, the backup worked as it should again.

Now you open up NBU explorer and navigate to where you placed your backup. You’ll see two folders, predefinbox and predefsent – these are your inbox and outbox. Everything is given a name based on message number and sorted by date. You will not get conversation view from this, but the object here is to safekeep your messages, not make it easy.

Right click either inbox or outbox and select export. This will let you export all your text messages from the .nbu file to a location you choose. All your texts are now stored locally in .vmg format. They can be viewed within the PC Suite application, and an application such as Notepad++ can display them as well.

Congratulations, you’ve foiled Nokia and their vile attempts at forcing N9 users to be stuck with the Link applications and only three options. Enjoy your new-found freedom!

Comment below if you’ve found a better way of accomplishing this task, if you’ve got questions or if you want to report your success!

I don’t think a step-by-step picture guide is necessary, but if it’s requested I’ll see what I can do.

I have to admit to a certain fascination with American late-night shows. It started out innocently enough with Jay Leno and David Letterman which have been a mainstay of Danish television for years on end.

By chance, a friend of mine introduced me to Conan O’Brien, and I thought he was brilliant, for a time. I stopped watching Letterman and Leno entirely after I discovered his brilliance. When he got Leno’s spot on The Tonight Show I was thrilled, but he didn’t do well with the new format, in my opinion. He was, as everyone is painfully aware, shortly thereafter pushed out by Leno again and wound up on cable instead.
He has, in my opinion, lost his “mojo”. I don’t get the enjoyment I used to from watching his show, and as such I’ve stopped viewing it entirely.

Fortunately I stumbled, by sheer blind luck, over Craig Ferguson, who is absolutely hilarious.
Could be that I’m just a sucker for dark humour, which I admittedly am, but he just seems to have a much better screen presence and he never fails to get me to laugh.

Between him and Jon Stewart on the Daily Show I’m in heaven! My worry is that eventually he’ll go the way of Conan and lose his way. Hopefully he’ll stick to the format that works, even if he gets promoted to an earlier timeslot.

For now, I’ll do my darnedest to enjoy Jon Stewart and Craig Ferguson with Geoffrey Peterson for as long as it lasts. Hopefully, it’ll be years and years still until some CEO decides to RUIN everything.