Thursday, July 30, 2009

I just noticed today that the Linux version of Chromium now supports plugins. The lack of plugin support has been the biggest obstacle to using Chromium on Linux. Chromium reports that the support is currently buggy, but I played some YouTube and Hulu videos without issue, so while it's probably not perfect, it doesn't seem bad either.

Monday, July 27, 2009

Hulu is releasing a desktop application for watching their TV shows and movies. They are presently developing it only for the Windows and Mac platforms. I decided to give it a spin on Ubuntu using Wine. The application is basically an Adobe Flash player that runs their Flash application.

I downloaded the application from http://www.hulu.com/labs/hulu-desktop and ran the installer. Everything about the installation went well. It informed me that Adobe Flash was not installed and asked me if I wanted to install it. I selected Yes and it took care of grabbing and installing the Windows version of Flash in Wine, and that too gave me no problems.

The application runs just fine until you go to watch a TV show. I chose Arrested Development. The playback is not bad; it is only slightly choppy, but I found even that little bit of chop enough to make me prefer the browser interface at hulu.com.

Since the program is written in Flash and runs inside a Flash player, and Adobe provides Flash support for Linux, it should be rather easy for Hulu to bring their application to Linux. Maybe someone out there will pick apart the pieces of the downloaded application and be able to point a Linux-native Flash player at their Flash application.

The screenshot below was taken on my dual-monitor PC. The left image is my left monitor, and the right image is my right monitor, which is playing an episode of Arrested Development in the Windows version of hulu-desktop inside of Wine.

Monday, July 20, 2009

First we have an online service where we store our data. This would hold our e-mail, pictures, documents, videos, music, etc.

We access our data through web applications. Take Google Docs for example. Currently Google hosts both the application and the data. I would like to see that come to an end. In my model I would log into Google Docs and set up an account. I would give Google Docs the URL of my web storage server, and Google Docs would give me a unique private key. I would simply highlight this key and copy it to the clipboard. I would then log into my web storage site and select that I want to share my document data. I would paste the private key provided by Google Docs into my storage interface and receive a unique private key from the storage server. I would provide this unique private key back to Google as authorization.

Behind the scenes Google would contact the web storage server, tell it whose account it wants to access, present its own private key that is unique to my Google Docs account, and present the private key that is unique to the storage server. With both of these keys the web storage server can be confident that it is talking to the correct server, the one I have authorized to access my documents.

With the trust between the two servers setup I could use Google Docs to modify my documents.
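The handshake described above is essentially a mutual exchange of shared secrets, much like the delegated-authorization schemes emerging today. Here is a minimal sketch of the idea; every name and message in it is invented for illustration, and a real implementation would exchange these keys over HTTPS:

```python
import hmac, hashlib, secrets

def issue_key() -> str:
    """Each side mints a unique private key for the pairing."""
    return secrets.token_hex(32)

# Step 1: the application (e.g. a docs editor) issues me a key; I paste it
# into my storage provider, which issues its own key in return.
app_key = issue_key()
storage_key = issue_key()

def sign(key: str, message: bytes) -> str:
    """Authenticate a request by signing it with a shared key."""
    return hmac.new(key.encode(), message, hashlib.sha256).hexdigest()

# Step 2: when the app later asks the storage server for my documents,
# it presents proof that it holds both keys.
request = b"GET /documents/report.odt"
proof = sign(storage_key, request) + sign(app_key, request)

# Step 3: the storage server recomputes the same proof and compares.
expected = sign(storage_key, request) + sign(app_key, request)
assert hmac.compare_digest(proof, expected)
print("access granted")
```

The point of requiring both keys is that neither a stolen application key nor a stolen storage key alone is enough to impersonate the authorized pairing.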

I could repeat this process for different online applications. Files would be saved with open standards so that I can be guaranteed that any application on the web can reliably read and write to them.

I could even use two providers for the same service. I could switch back and forth between using Google's picasaweb and Flickr, for example. There is no reason why only one application would need to be tied to a file type.

If down the road Microsoft comes out with a better online application for modifying documents, I could go into my storage service, deny Google access, and go through the process of allowing Microsoft. At any point I could ditch Microsoft and go with another service provider.

Likewise, if I want to change my storage provider I should be able to export all my data to a local file and then upload it to a new provider.

Both the web applications and the data should always be synched up to our primary computers. This way if we don't have an internet connection we can still use the application to modify our data, and the next time the computer comes online it will all synch back up. Google's Gears application already allows for application and data synchronization for offline use, and that's exactly how I envision this working.

The benefits to this are:

No vendor lock-in. We are free to change providers at any time. It is not like today, where one must choose Microsoft Office because it is the only application suite that can reliably read and write the .doc format, the de facto standard for documents.

Access to data from anywhere with an Internet connection. You no longer have to carry around a thumb drive or go through any such hassle. Your data is available from almost anywhere.

Enhanced data persistence. Computer crashes will no longer cause you to lose any of your work.

Enhanced data security. A lot is made of the security problems of data existing on the cloud. I believe that you are better served with your data in the hands of a team of professionals than in yours. The average PC user is simply the largest security hole that exists today. As long as the average user is in charge of their data, the average user's data is vulnerable. Google is more likely to keep your data secure than you are.

OS independence. Whether you are using Windows, Mac, Linux, your cell phone, game console or any other device, all you need is an Internet connection and a browser to get to your content.

Cheaper computers. Because most of the hard work is being done by the application server, your computer no longer needs a lot of RAM and processing power to run your applications. You just need enough resources to run your web browser.

This is cloud computing at its best and I believe that we will be seeing technology continue to move in this direction over the coming years. The development of Google's Chrome OS will be the first of many major steps towards this model.

Sunday, July 19, 2009

The following works in Ubuntu 9.04 Jaunty, but probably works in most versions of Linux. If you find yourself at the grub console line at boot time you can easily boot up your system with just a few commands.

To boot into Ubuntu we need to first specify the kernel. By default the latest installed kernel is linked at /boot/vmlinuz. The /boot folder also holds all of the other kernels you have installed. At the console type:

kernel /boot/vmlinuz

You can stop there, but if you want a list of all kernels hit the tab key twice and it will list out each kernel. You can start typing any kernel you wish to boot from, but leaving vmlinuz will boot to the newest kernel.

Now all we have to do is tell grub to boot the OS.

boot
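Note that on some setups the kernel also needs to be told where the root filesystem is, and an initrd may need to be loaded before booting. A fuller sequence might look like the following; the device name and initrd path here are examples, so adjust them for your system:

```
kernel /boot/vmlinuz root=/dev/sda1 ro
initrd /boot/initrd.img
boot
```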

If you have Windows installed, we can still boot into it as well; however, the instructions are different.

First we have to tell grub which hard drive and partition Windows is installed on. The first hard drive, as the BIOS sees it, is 0, the second is 1, and so on. If you only have one hard drive then we know that it is on hard drive 0. Partitions are zero-based as well. If Windows is on the first partition, then it is partition 0, and so on. For our example Windows is installed on the first hard drive, but the second partition. Here is how we enter that.

root (hd0,1)

Remember that there is no space after the comma, but there is one after root. If you aren't sure what your options are, simply type everything up to the hd and hit tab. If you only have one hard drive it will auto-complete the "0," If you have more than one it will list the available hard drives. After you have selected a hard drive you can hit the tab key after the comma and it will again either give you 0 if you only have one partition on that hard drive, or it will give you a list of possible partitions to choose from.

The next three commands will boot up Windows:

makeactive
chainloader +1
boot

If you just find yourself back at the grub console, then you probably entered in the wrong hard drive and/or partitions. Keep trying until you find the right location of Windows.

I haven't tested this out, but if you have multiple installs of Linux you can use the root and makeactive commands to select the hard drive and partition of the Linux you want to boot into, and then use the kernel command to select the kernel on that hard drive and partition. If anyone knows for sure please drop me a line.

If you want to setup auto-login in Ubuntu, but for whatever reason you don't have access to the full gnome desktop, such as if you are trying to configure the machine remotely, then these instructions will work for you.
First we'll back up the configuration file, and then open /etc/gdm/gdm.conf in your editor of choice, probably vi.

Find the AutomaticLoginEnable and AutomaticLogin lines. In the first, change "false" to "true", and in the second, append the username you want to auto-login after the equals sign. Here is how mine looked:

AutomaticLoginEnable=true
AutomaticLogin=david
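If you are configuring the machine remotely over SSH, you can script the edit instead of opening an editor. A sed sketch along these lines could work; it is shown here against a sample file standing in for /etc/gdm/gdm.conf, so adapt the path and username, and prefix the real commands with sudo:

```shell
# Sample file standing in for /etc/gdm/gdm.conf (illustration only).
printf 'AutomaticLoginEnable=false\nAutomaticLogin=\n' > gdm.conf

# Back up first, then flip auto-login on for the chosen user.
cp gdm.conf gdm.conf.bak
sed -i -e 's/^AutomaticLoginEnable=false/AutomaticLoginEnable=true/' \
       -e 's/^AutomaticLogin=.*/AutomaticLogin=david/' gdm.conf

cat gdm.conf
```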

After a reboot you will find that Ubuntu goes straight to the desktop of the user you defined.
There are other settings you can set in here, such as a time login. Most of these are available from the GUI, but feel free to look around and find any other settings you might want to change.

Update: Ubuntu 11.10 Oneiric and onwards use LightDM in place of GDM. Instructions for versions of Ubuntu running LightDM can be found here.

Saturday, July 18, 2009

I have been playing around with Release Candidate 1 of Windows 7 for a while and I think Microsoft has done a pretty good job. Almost everything about it seems very well polished and usable. I believe that 7 is to Vista as XP was to ME. While everyone thought ME was the biggest piece of crap ever, when XP came out people quickly jumped on the bandwagon. I believe the same will happen with 7.

Some of the things I will highlight are also in Vista, but I'm going to show some of the things I think Microsoft has done right.

The "Start" menu. This is very much like KDE's menu. I don't need to spend time finding where stuff is; I simply type in the name of the application I want to use and it pulls it up in the menu for me. If I want to play my new game, "Call of Duty: World at War", I simply type in any of those words and it shows up in the menu. I no longer have to click Start->Programs->Call of Duty->Game, or whatever. I just type War and I have it.

The layout of the items on the right is very helpful as well. I was never a fan of XP's default menu with the Control Panel, My Computer and all the other items in there, because it just made for a lot of clutter. Visually, the default Start Menu in XP was very confusing. The new menu is well organized and easy on the eyes for locating what I want. As I hover over the items on the right, the picture of me changes to a picture relevant to the item I am hovering over, with a nice fade effect that works out very well.

If you notice, to the right of Sticky Notes there is an arrow. If I click on that arrow I get a list of documents recently opened with that application. So if I had Word installed, I could click on its arrow to get a list of recently opened Word documents, and clicking one opens it. Unfortunately, if you delete a file it still shows up as recently opened, which I think will be a spot of user confusion.

The Control Panel.

I have always liked how OSX is so easy to configure. Everything is laid out in a very logical fashion that makes the setting you want to modify easy to find. Windows and Ubuntu have always made simple tasks confusing to track down. The new Control Panel simplifies everything: anything I wanted to change was easy to reach by drilling down through each logical item. Again, Microsoft has hit the nail on the head with this one.

Boot time and time to go from login to desktop.

I haven't timed it precisely, but going from BIOS to login takes about thirty seconds, and from login to a usable desktop about five. I know once computer manufacturers start loading up all their crap on people's machines before they buy them, this time will go up considerably, but Microsoft has done its job in making a very clean bootup.

Explorer.

There are two improvements here. In the location bar at the top, you can simply go back down the folder tree by clicking on any of the previous folders listed. Ubuntu has this, but I always turn it off. Windows is doing something different here: if I click in the location bar I can still type a location manually. This probably doesn't help the majority of people out there. However, I am constantly typing full locations in the location bar, because I can do that faster than I can click on a bunch of folders. In Ubuntu your options are the click view or the type view; I like the trade-off between the two views that are present in 7.

The next improvement is the search functionality. In the top-right I can start typing a search, and it will give me the results for the folder and subfolders I am currently in. This makes finding the exact file I am looking for very easy.

Taskbar pins.

By default IE, Explorer and Media Player are pinned down in the bottom-left, though they can be removed and other programs added. If I have multiple folders opened, I can click on the pin and it gives me a preview of each open folder. If I hover over one of those previews, the desktop goes black and the full image of the folder is shown to me. If I hover over another folder then that folder is shown to me. If I click on one, then it becomes the active window. This makes finding the right folder very easy. The same is true with IE and having multiple web-pages open.

Default user.

For a default install of Windows XP, the user set up at install time is an administrator. When the computer boots up, the default behavior is that anyone can log in as that administrator without a password. This is horrible security, and I think it is why so many people get infected so very easily. That was Microsoft trying to be user-friendly at the cost of security, which caused more user-experience issues than it solved. Now the default account has a password set at install time, and nobody can log in as that user without it. I hope this results in more people setting up multiple users on the machine. When creating a new user, it also defaults to "user" rather than "administrator", which should help things.

Updates.

I know this was added in Vista, but I'm still thrilled about it. Updating Windows is no longer tied to Internet Explorer; updates are done through the Control Panel. I still have problems with how updates are handled beyond that, which I list below.

The following are my gripes.

UAC.

UAC is still dumb. It is bad security practice. All it will do is teach people to click Yes or Accept more, because they will be so tired of trying to read every dialog box. This is Microsoft again refusing to adopt sane security because they don't want to compromise usability, even though it is going to get users infected and cause more problems than it solves. One example of how stupid this is: I was trying to install a program and it kept giving me errors all over the place during the install. Once I cancelled, I got a box reading something to the effect of, "It appears you tried to install a program that needs Administrative rights. Would you like to rerun this program with Administrative rights?" Getting errors all over the place, and only then being given the option to do it right, is not good. Windows should have recognized the need for administrative rights the first time the installer tried to access a restricted area and prompted me then. And it should have asked for an admin password to ensure that the user had legitimate rights to do that.

FTP.

FTP access from within Explorer is still broken. I have been using 7 while working on some class projects and I couldn't get to my FTP site in Explorer. I could from the command-line version of FTP, which is what I ended up doing.

SSH.

The inability to SSH between computers out of the box is another security issue. SSH is kind of like FTP (but certainly not exactly), except it is encrypted, so security is maintained. In the Unix world (which includes OSX) SSH is very powerful, and makes many tasks over the network very easy and secure.

Updates.

Updates still make you reboot constantly. It has taken me as many as four reboots to get all of the updates I needed installed. Why can't it install all the updates the first time, and only make me reboot for very low-level system updates like kernel updates? When I update Ubuntu I do it one time, and I very rarely need to reboot. In fact, I almost never need to reboot. If you don't reboot in Windows after an update, it will keep nagging you; I griped about this here. I don't understand why Microsoft has such a hard time getting it right. People have been complaining about this since Windows 95, and back then Apple had already made this a non-issue for their OS.

Microsoft should also find a way for users to point to third-party update sites, so that users can keep their other software up-to-date using the built-in updating system. So if I have product A, I can point the OS to its update site and get updates to product A with my regular Windows updates.

So there you have it. I still find Ubuntu to be a vastly superior operating system in many aspects. There are still many questions about Windows 7, such as how well it will run on MIDs, netbooks, and other lower-end hardware, which we will see pan out after it releases this October.

Wednesday, July 15, 2009

At work we have scrapped SCCS and are moving to subversion. For those not in the know, these are basically ways to keep our source code somewhere safe. They are revision control systems that allow you to go back in time and view previous versions of your software.

SCCS was the very first source code revision control system ever built, back in 1972, and is considered obsolete within the industry. Subversion and Git are the two leading source code revision control systems today, and for a variety of reasons we decided to go with subversion.

As with just about everything else in technology, things have changed rather significantly over the last 37 years. Today's systems allow you to keep track of different versions of your software as well as create all sorts of different branches of your software.

With subversion I can come in to work, grab the latest copy out of "trunk" and start working on it. When I am done I put my code back into trunk and go home. If someone comes in after me and starts working on the project, they'll have the work I did, and after they leave and I come back I get all the work they did. If someone broke something along the way it is easy to go back and pull out an earlier version. It will even show you the difference between two versions of a file so you can see what changes caused the problems.

It is also common, during a project, to try a new direction for accomplishing a task, only to realize that it won't work. Without a good revision control system you have to make a copy somewhere on your computer and then play with that. With subversion you can stay in trunk, and if you decide against where you were headed, simply roll back. If the changes are really big you can create a new branch and have the entire history of your new direction, then decide later whether to merge those changes back into trunk or not. The other great part is that it will look at the difference between trunk and your branch and automatically apply only your changes to trunk.

At some point in your development you will be ready for your users to start testing and your product will go in beta. At this point you branch your code off into a beta branch. Once testing is complete you create another branch. This will usually be given a version number, such as 1.0.

As your released product goes out, it is inevitable that your users will find problems. At that point you fix them in trunk, push the fixes out to beta, ask your users to test those changes, and finally make a new release, probably called 1.1.
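As a concrete sketch of that flow, here is roughly what the day-to-day commands look like. This uses a throwaway local repository via a file:// URL so there is no server to set up; the file names and commit messages are invented for illustration, and it assumes the subversion client tools are installed:

```shell
# Create a scratch repository and the conventional trunk/branches/tags layout.
svnadmin create repo
REPO="file://$PWD/repo"
svn mkdir -q -m "Initial layout" "$REPO/trunk" "$REPO/branches" "$REPO/tags"

# Day-to-day work: check out trunk, change something, commit it back.
svn checkout -q "$REPO/trunk" wc
echo 'int main(void) { return 0; }' > wc/app.c
svn add -q wc/app.c
svn commit -q -m "Start the app" wc

# Ready for user testing: branch trunk off as beta.
svn copy -q -m "Beta for user testing" "$REPO/trunk" "$REPO/branches/beta"

# Testing complete: tag the release with a version number.
svn copy -q -m "Release 1.0" "$REPO/trunk" "$REPO/tags/1.0"

svn log "$REPO"
```

Because branches and tags are cheap copies in subversion, cutting a beta or a release is a one-line operation rather than a manual copy of the source tree.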

In SCCS, at least how my coworkers have been using it, you have one version, and that is your release version. When you want to make changes you get the code out of SCCS and put it on your desktop. When you are ready for testing you give the program to your customers, and once they have accepted that your changes fix their problem, you then put your modified version back into SCCS. So SCCS keeps your production version and nothing else.

So in SCCS you take the release version and work from there. In subversion you take the trunk (i.e. dev) version and work from there.

SCCS was a big advancement when it came out. Today subversion gives us a lot more power and a sane path between development, testing, and release, and a way to track it all.

We have already had problems where we were developing applications outside of our source control and unforeseen problems wiped out hours, or even days and weeks, of work, which we couldn't recover because the development code existed only on a hard drive somewhere.

We have also run into issues where we needed to dip back into old code we weren't using anymore to see how we did things in days of yore.

So you would think that with our move to subversion our team would be excited about the ability to merge and branch and keep track of our whole development cycle, given the issues we've seen in the past.

Our complete codebase is in the process of being rewritten, and heading up this new direction is someone who, until recently, had been working outside our group. His name is Nelson. He came into our group once he had the base of the system mostly built and ready for our input. He is just a little older than I am and is more technologically in line with my way of thinking than my other coworkers are.

When it came down to having a meeting about how we were going to use subversion, Nelson started to lay out how to use it and everyone else just about flipped. They had never seen anything like it before, and it was way outside of what they know and love. They immediately rejected it and started drawing up plans for how we can use subversion like SCCS. Their solution is basically to put the released code into trunk and work out of there.

Nelson and I argued until we were both blue in the face. Mike kept calling out, "This is the way we've always done it." Eventually I called him out on it, pointing out that that is usually a key indicator that it is time to change.

Despite my explanation above, I don't fully understand how to use subversion. Because I am a junkie for open source development, I read blogs from lots of developers that use subversion for their products, and I occasionally pull code from their trunk and compile it. So I've used it a little and am only vaguely familiar with the details of using it. This is one of the reasons I was so excited about using it, until I found out that just wasn't what everyone else had in mind.

As I listened to Nelson explain it in our first meeting, I started understanding what people were talking about in their software development blogs. So I was doing a very poor job articulating its power while trying to get everyone else to see the light. I started pointing out that everyone else uses subversion this way. Nobody called me out on it, but I realized that "everyone else does it that way" is just about as poor an excuse as "we've always done it that way."

Eventually Nelson decided that he would no longer discuss the issue as it "makes my blood boil." However, occasionally he can't help but to get going and pour his heart out.

Talking to my brother, and a few other people who work in the software industry, my fight is not new. It appears that the older generation of programmers just don't get it. Here I am, with hardly a year under my belt telling a group of people with 10+ years of experience that they are all wrong and need to listen to me. Not just telling them, but having very heated debates about why I think they are wrong.

So as of yesterday our code is loaded up into a subversion repository with our "release" code sitting in "trunk." I don't know how to articulate in a non-condescending way how absolutely asinine this is. I even made a plea to, if nothing else, not call it trunk: just call it release or production. I know it is just an issue of semantics, but I don't purport to type with my toes, so we shouldn't purport that our production code is trunk.

Another part that bothers me about this is my career growth. If I go to interview for a job and am asked about revision control, I want to sound competent. I'd like to be able to explain in great detail how it all works. I don't want to sound like I either came from the caveman days of programming, or that I think I should be wearing my cap in my captain's quarters.

Fortunately there is a light off in the distance. As I mentioned earlier, Nelson has been building the basis for our new system. It will be a phased complete rewrite. He has already started putting his code in a completely different subversion repository using the proper method of trunk, beta, releases, and branches. As our old system is slowly merged into the new system, it will all have to go in the proper way using the proper methods, and eventually this nonsense my coworkers have concocted will go away, and I will ultimately be the victor.

I am also positioning myself so that my primary work will be doing the interface designs in the new system, so hopefully the majority of my work will not reside in the old system anyhow.

Tuesday, July 14, 2009

Tell your users to exit the program and then browse to your website. At your website provide a list of links to others company's websites that won't allow you to download the update until you register with their site. When the customer goes to install the update inform them that they actually can't install this update and they must go back through the same process to download all the earlier updates, starting with the first one, before they can install the latest update. This is what Call of Duty, World at War wants you to do.

Thursday, July 9, 2009

So for a couple weeks I've been brewing in my head a blog post about how what Linux really needs is for Google to come in and work its magic, but how that will never happen because it doesn't fit into Google's business model. I was hoping to post it soon, when just yesterday Google announced they are gearing up to do exactly what I'd been planning to post they would never do: build their own Linux-based operating system.

From http://googleblog.blogspot.com/2009/07/introducing-google-chrome-os.html

So today, we're announcing a new project that's a natural extension of Google Chrome — the Google Chrome Operating System. It's our attempt to re-think what operating systems should be.

That's right, Google is going to have its own OS. For now it is aimed just at netbooks, and maybe that is as far as it will go. However, the influx of development Google puts behind Linux should increase as it works to ensure that the new OS works as well as every other Google product.

I am curious to see how this all plays out. I have read some say that this is just another Linux distribution, but that comes from a short-sighted view of what Linux is. It is not an operating system; it is a kernel on top of which an operating system can be built. Nobody pays any attention when they use their TiVo, gPhone, router, or other Linux-powered device to the fact that they are using Linux. They simply notice that they are using their device, which is exactly how a device should be.

Google is probably the only company that has the money, power, clout, commitment, and proper understanding of how to utilize open source effectively to make this work. Canonical has made a really good stab at it, but they simply are no Google.

Thursday, July 2, 2009

As a child I spent a lot of time at my grandparents' house in Pensacola. They liked to travel, and when they traveled, camping was more or less their only means of resting. My grandfather owned a 70s model Volkswagen Bus with all the camping accessories and we all loved it.

I had always wanted one, but they have that dirty hippie history behind them, and people already seem to project that idea on me as it is.

So while looking around on the Internet for a vehicle to replace my Oldsmobile Silhouette, I came across a 1980 VW Vanagon and ended up purchasing it.

While I get plenty of weird stares and the occasional whoop from other motorists and pedestrians, I have found that by owning a bus I am also now part of a larger community. This community includes previous bus owners, bus admirers, and what I have termed members of the "free riding culture", which includes bikers and others who ride the open road as sport.

I find people approaching me while I pump gas, as I enter and leave stores, and at red lights. They usually want to ask me what year it is and then give me a brief synopsis of their days as a bus owner. Bikers and other bus owners either wave or point skyward in my direction as I pass them on the road.

I had been looking for a good riding hat for the bus and found one last week at Sports Authority while looking for a bike helmet. It was in the golf clothes section and it is perfect. It is similar to a panama hat. At the St. Marks River, Athena found a white feather, so I stuck it in, and now I ride around with a feathered cap in maximum style.

License

All content on this blog is licensed under the Creative Commons Attribution Share Alike license, unless cited from an external source or noted otherwise. Please click here for full licensing information.

All software code published on this blog is public domain, and may be used at will as one sees fit.