2007-10-17

Some of you may know that in my spare time, I like to ride bicycles. I ride for fun, and for basic transportation when I feel up to it. When I park my bicycle at work, I use a heavy-duty chain and padlock to hold it to the rack in the security-patrolled private parking garage. My bike isn't going anywhere. When I'm just out and about running errands, I usually lock my bike up with an inexpensive cable lock. In this case, it's a "Python" by Master.

The Python is a pretty resilient lock. It has a steel braided cable that's covered in a hard plastic material. The cable is 6' long and can easily be wrapped around a large light post or pillar. The lock cylinder has only four tumblers, but the keyway is small and obstructed. To further complicate the task of picking the cylinder, the lock requires an impressive amount of tension in order to turn. In an attempt to figure out a good method of bypass, I turned to the ancient art of shimming the lock.

Shimming is the act of placing a sheath or other thin material around the shackle of a lock and forcing it into the locking mechanism, releasing the grip on the hasp and allowing the attacker to open the lock. This usually only works on lower-quality padlocks. The Python works by providing a pair of ribbed surfaces that allow the cable to easily slide into the lock, but resist any attempts to pull the cable outward. By its very nature, this design leaves some slack between the lock's jaws and the cable. With that, I went to work fabricating my shim.

I used only a utility knife and a soda can for this attack. I cut a long strip out of the soda can that would be wide enough to wrap almost completely around the cable body. Both the utility knife and the resulting metal edges on the can and shim will be very sharp. Use good work gloves or at least a lot of caution if you choose to replicate what you see here.

Next, I wrapped the shim around the body of the cable, and inserted the end into the entrance to the lock body just enough to hold the shim into shape.

I then pushed the cable and shim further into the lock body. This squeezes the shim between the jaws and the cable, allowing the cable to slide out of the lock without being held into place by the one-way jaws.

I held one end of the shim (not shown, my other hand was taking the picture) while gently and easily twisting and pulling the cable back out of the lock. This takes patience, and remember what I said about sharp edges!

Eventually, the cable will come all the way out. Note that you can still see the shim inside the lock body.

Then, you simply remove the shim, coil the lock back up, and away you go. Of course, I'd never advocate theft in any way. If you do attempt to steal my bike while it's locked up this way, you can expect to find yourself trying to shim this lock to get it off from around your neck! This is a very quick way to bypass many inexpensive locking systems, however. It's often easier to shim a cheap lock than to pick it. You can apply this same method to some combination locks, keyed padlocks, and certain "U" shaped bicycle locks as well. Next time someone needs their cheap lock opened without the hassle and carnage of bolt cutters, just reach for a soda can.

It's worth mentioning that this attack relies on the attacker's ability to move the shim into place. Had the cable lock been pulled tight so as to remove all of the cable slack, an attack such as this one would be nearly impossible.

2007-10-12

I do almost everything within OpenBSD, Solaris, or Mac OS X. All of them required me to install quite a few extra pieces of software to work just the way I like, but at the end of the day, they're great for the things I do, with one exception on Solaris. I spend the majority of my time doing web stuff (surfing, forums, blogging), listening to music, writing e-mail, word processing, performing systems administration, and tinkering with encryption and information security. Occasionally, I may goof around with my own music or graphical art. Solaris lacks easily-installed free or bundled graphics, MIDI, and audio editing software.

Enter Linux. Linux is a pretty broad brush to be painting with these days. Linux is a kernel. It's also a highly generalized term for any operating environment with Linux at its core. The end result is quite confusing. As part of my job, I take care of a bunch of Red Hat Enterprise Linux servers. I've been familiar with Red Hat for quite some time. While I don't particularly like how Red Hat approaches certain things, I am quite good at installing, patching, managing, and tweaking Red Hat Linux servers simply because I've been doing it for so long. When I went to play with a totally different flavor of Linux on a spare server at home, however, my first instinct to use the command-line for everything was met with a few problems. Primarily, many of the tools and programs that Red Hat provides are nowhere to be found. Only because of my familiarity with Linux and UNIX flavors in general (okay, and my ability to read documentation) was I able to figure out how certain things were set up. For those who care, it was ArchLinux, but I had similar issues with SME Server as well, despite its being loosely derived from Red Hat Enterprise Linux.

Right now, the big push in the Linux world is getting Linux onto the desktop. Linux for everyone. Break free from your commercial operating system hell! Linux is here to save the day! Ubuntu is the big name that gets thrown around most often. Self-described as "Linux for human beings", Ubuntu aims to be the final answer to the Linux desktop quandary. After trying Ubuntu Desktop, Ubuntu Server, and Kubuntu Desktop, I can say that "Linux" has come quite a way in its quest for desktop domination.

Ubuntu Desktop is based on the Gnome desktop environment. Asmodian X pointed out to me that Gnome feels an awful lot like Mac OS 9, and I wouldn't have quoted him on that unless I agreed. Part of the clunky feel is the fact that Linux is still bound by the X Window System. Essentially, all graphics go through a network or local socket. Windows and Mac OS X don't suffer the same fate, and their interfaces simply feel more responsive. I can deal with a sluggish display, though. There are bigger fish to fry. All flavors of Ubuntu install quickly and ask a very minimal set of questions during installation. As long as the hardware is supported, pretty much anyone can get any of the Ubuntu flavors installed in minutes.

Ubuntu Server is everything you'd expect in an open source LAMP server distribution released by a company that believes in ease of installation. Much like Ubuntu Desktop, only a small set of options is available during installation. The end result is a server distro that is neither lean and mean, nor bloated. It's pretty damned generic, and it's up to the user to install and configure whatever is needed if anything more than a basic web application and database server is desired.

Kubuntu Desktop replaces Gnome with the K Desktop Environment (KDE) and a different set of bundled applications -- for the most part, KDE-based apps are chosen over the competing software packages where available. Konqueror is the default web browser as opposed to Firefox. Kontact and Kopete handle mail/scheduling and instant messaging, respectively. The list goes on and on. If I had to compare Gnome to Mac OS Classic, I'd have to say that on a user interface level, KDE feels a bit like Windows Vista with most of the snazzy features turned off, except a little more "Fisher Price." It kind of feels like a toy, but it gets the job done nicely.

Keep in mind that my impression of the two desktop environments is based only on Ubuntu. Prior to this, I hadn't used KDE or Gnome in several years. Right now, I'd say I favor KDE over Gnome, at least in the configurations provided by Canonical (the company behind Ubuntu). There are other variants of Ubuntu which I have not yet tried, so they aren't being reviewed here.

After you get one of the desktop flavors of Ubuntu up and running, keeping the system secure and up-to-date is a breeze. The system checks for upgraded packages that are available for download and alerts you to their presence. It's easier to keep Ubuntu up-to-date than it is to do the same on Windows. It really is that easy. Installing other software packages is simple as well. Ubuntu provides a graphical application installer that lets you choose programs from a list or search the list for what you want. Select the programs you want to install, and the system handles all the downloading and installation procedures on its own, including any other packages that are required by the software you selected.

BSD has been doing package management like this for years without the graphical installation wizard. You still need to know what you want, and have to look through the list manually. Ubuntu is based on the Debian package system, and Debian has also had similar functionality for many years. This stuff isn't new, but combined with the other aspects of Ubuntu, it makes for a system that's pretty user-friendly.

Installation is a breeze.

Applications that you need to get going are already installed by default.

Patching and upgrading software is automated.

Installing new software is as easy as picking it from a list.

Almost anyone can install and use Ubuntu without much of a fuss. What more could you ask for? Quite a bit, actually. Compared to Windows or Mac OS X (still the two heaviest hitters in the desktop operating system market), all Linux flavors are left wanting. Configuration of anything but the most rudimentary options requires the use of the command-line, which is not an environment that many people are comfortable in. For me? I live and die by the CLI and don't mind it one bit. If there's a software package that you read about for Linux and it's not on the list of stuff that Ubuntu provides, then there's no easy way to install it. Someone like me could download and unpack it, and compile it if needed. Most people are used to double-clicking on the installer or dragging the application (seemingly one file) to their hard drive. Don't get me started on the difficulty of installing certain drivers under Ubuntu.

The other innovation that the Linux desktop has brought to the table is the "Live" distribution. A Live distribution is an operating environment that boots from removable media such as a CD-ROM or Flash drive, providing an instantly functional system that doesn't rely on a hard drive to operate. Ubuntu and Kubuntu Desktop installation CDs initially launch in this mode. You truly get to try it before you install it. Things tend to load very slowly from CD, so the whole operating system seems very sluggish when run this way.

There are dozens of popular Live distributions that you could check out. Back|Track is my favorite so far, built for hackers, geeks, auditors, and security professionals alike. Back|Track is the end result of Whax and Auditor joining forces. Upon booting, you get a clean, functional desktop platform from which to launch any number of tests and exploits.

Truly, Ubuntu is only good for end users who are happy using it pretty much as it comes from a default installation. The Live version is sluggish and is best treated as a preview, not a replacement for Windows. Other Live distros are great for tinkerers and nerds. For geeks and hackers, a full install of Debian or ArchLinux would be considerably more flexible than Ubuntu if you wish to stick with the Linux kernel.

In closing, I'll say that the biggest hurdle remaining for Linux to conquer on the way to end-user desktops is the fact that the command-line is still not optional, despite the best efforts of the Linux community. A command-line should only be required as a last-ditch interface to the operating system in order to recover from some earth-shattering catastrophic failure. Windows has been at this point for years. OS X has as well. For some reason, Linux is lollygagging. It would also help if everyone could just agree on one package distribution model and stick with it. So far, I think Debian's system holds the most promise for the desktop and enterprise workstations.

2007-10-05

Ah, the joys of shell scripting! If you've spent any time on UNIX-like operating systems, you've probably encountered or written shell scripts. The theory is simple. For the most part, shell scripts simply execute shell commands in order. You find them everywhere. Simply booting a Linux or BSD host might execute scores of shell scripts. Scheduled processes like those launched with cron or at are usually shell scripts. As a system administrator or a hacker, well-programmed scripts can make your life and the lives of your users a lot easier. On the other hand, scripts that are arcane and cryptic can be more trouble than they're worth.

One of the major hang-ups of complex shell scripts is the strict syntax of the command-line arguments. Within the script, the first argument is referenced as $1, the next as $2, and so on. $0 is the name of the script itself as it was entered on the command line (including the path, if typed). Also, $# is a numeric variable that contains the number of command-line arguments passed. The exit command makes sure the script stops where it is without processing any further commands. Using exit 0 creates a "clean" exit, whereas exit 1 (or any other non-zero integer) signals an error. This doesn't matter much unless other scripts rely on the ones you're making. It's good practice to specify a proper exit status for your scripts, but it's not mandatory.
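A few lines of sh make these variables visible. This is a hypothetical demo script (the set -- line simulates calling it with two arguments; $0 is not affected by set):

```shell
#!/bin/sh
# Demonstrate positional parameters and $#.
# "set --" simulates calling this script as: ./demo.sh foo bar
set -- foo bar
echo "script name:     $0"
echo "first argument:  $1"
echo "second argument: $2"
echo "argument count:  $#"
exit 0
```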

Most shell scripts that accept arguments require the end-user to know exactly what arguments to pass or they will simply fail. Take, for example, this script I wrote to get my wireless adapter online in OpenBSD.
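A minimal sketch of what such a script could look like (not the original; OpenBSD ifconfig syntax assumed, and the commands are echoed rather than executed so the sketch is harmless to run):

```shell
#!/bin/sh
# wifi.sh (sketch) - bring a wireless adapter online from three arguments:
#   wifi.sh <interface> <ssid> <wepkey>
# "set --" simulates running: ./wifi.sh ural0 mywlan 0x31337e1ee7
# A real script would run these commands instead of echoing them.
set -- ural0 mywlan 0x31337e1ee7
echo "ifconfig $1 nwid $2 nwkey $3"
echo "dhclient $1"
```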

It requires me to supply the device name of my wireless ethernet adapter, the SSID, and the WEP password. The command line could look like this:

wifi.sh ural0 mywlan 0x31337e1ee7

If I just executed wifi.sh without any arguments, ifconfig would fail miserably on syntax alone, and dhclient would not know which ethernet adapter to use to get an IP address. The script would not work.

Some more advanced scripts will determine whether you entered enough arguments. If you did not, they may give you a brief explanation of what they expect. The "apachectl" script for controlling the Apache Web Server is a good example of this. If you run it alone, you are shown a list of arguments that it accepts:
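A wrapper in that style might look like this sketch (not the real apachectl; the set -- line simulates running it with "start"):

```shell
#!/bin/sh
# Sketch of an apachectl-style wrapper that prints usage when it
# doesn't recognize its argument. (Not the real apachectl.)
# "set --" simulates running it as: ./myctl start
set -- start
case "$1" in
start)
    echo "starting httpd"
    ;;
stop)
    echo "stopping httpd"
    ;;
restart)
    echo "restarting httpd"
    ;;
*)
    echo "usage: $0 (start|stop|restart)"
    exit 1
    ;;
esac
exit 0
```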

You can fill in the echo commands with whatever you find useful. This is fairly mundane, and doesn't allow you to pass parameters or multiple flags to your script. For those unfamiliar with "case", it's quite simple to use. If the contents of the variable referenced in the "case" line match the pattern before the closing parenthesis, the shell executes the code on the following lines and stops processing when it encounters the double semicolon.

Let's face it, with just a single "case" structure and some error checking, you won't be writing any truly powerful shell scripts.

Enter "shift". Within a shell script, shift destroys $1 and shifts all the other arguments down by one, and decrements the value in $# by one as well in order to reflect the new (lower) number of command-line arguments left. The contents of $2 become $1, $3 becomes $2, etc. While you might not think that sounds too exciting, it will allow you to pull off some argument-processing trickery with a simple loop to read arguments. Check out this example:
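A loop along those lines could look like this (a sketch matching the sample output shown below; the set -- line simulates running ./foo.sh foo bar):

```shell
#!/bin/sh
# foo.sh - process every argument with an until/do loop and shift.
# "set --" simulates running it as: ./foo.sh foo bar
set -- foo bar
until [ $# -eq 0 ]; do
    case "$1" in
    foo)
        echo "you have selected foo"
        ;;
    bar)
        echo "you have selected bar"
        ;;
    esac
    shift    # discard $1; $2 becomes $1 and $# drops by one
done
```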

The until/do loop above simply keeps running the arguments through the case block until there are 0 arguments left, then exits. If you pass it an argument that is not in the case block, it simply shifts it away and ignores it. The actions run in whatever order the arguments were passed.

bash-3.1$ ./foo.sh foo bar
you have selected foo
you have selected bar

In the case of my made-up wireless configuration script, this isn't directly all that helpful. Another thing you can do, however, is run another shift within a case. This allows you to give your script many very flexible command flags, much like other UNIX commands. Using my wifi script as an example, I'll show you how it's done. For arguments that don't require a second parameter (such as -h and -d), just use one shift statement within the case. For arguments that do require a parameter (such as -s, -k, or -p), use two shifts: one before you assign $1 to a variable, and again at the end of the case. Notice that the * catch-all case is simply there to shift unrecognized arguments. If we didn't do this, the loop would hang forever because there would always be an argument that hadn't been processed.
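A condensed sketch of that technique (the flag meanings here are my assumptions, and -h and -p would be handled the same way; the set -- line simulates one invocation):

```shell
#!/bin/sh
# Flag-style argument loop: some flags take a parameter, some don't.
# "set --" simulates running: ./wifi.sh -s mywlan -k 0x31337e1ee7 -d
set -- -s mywlan -k 0x31337e1ee7 -d
use_dhcp=no
ssid=""
key=""
until [ $# -eq 0 ]; do
    case "$1" in
    -d)
        use_dhcp=yes
        shift              # flags without a parameter need one shift
        ;;
    -s)
        shift              # first shift moves the parameter into $1...
        ssid="$1"
        shift              # ...second shift moves past the parameter
        ;;
    -k)
        shift
        key="$1"
        shift
        ;;
    *)
        shift              # catch-all: discard unrecognized arguments,
        ;;                 # otherwise the loop would never finish
    esac
done
echo "ssid=$ssid key=$key dhcp=$use_dhcp"
```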

This script puts it all together. Its arguments are just as flexible as those of most compiled programs. After the script has processed all of the arguments, I use a series of if statements to build the command line for ifconfig by appending to the $ifconfig_args variable, then run dhclient if desired.
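The assembly step could look something like this sketch (variable names and values are stand-ins, and the final commands are echoed rather than executed so it's safe to run anywhere):

```shell
#!/bin/sh
# After the flag loop: build the ifconfig command line piece by piece.
# Values below are stand-ins for what the flag loop would have collected.
interface=ural0 ssid=mywlan key=0x31337e1ee7 use_dhcp=yes
ifconfig_args="$interface"
if [ -n "$ssid" ]; then
    ifconfig_args="$ifconfig_args nwid $ssid"
fi
if [ -n "$key" ]; then
    ifconfig_args="$ifconfig_args nwkey $key"
fi
echo "ifconfig $ifconfig_args"     # a real script would run this
if [ "$use_dhcp" = "yes" ]; then
    echo "dhclient $interface"     # ...and this
fi
```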

There's a little extra scripting (more if statements) to make sure that a value follows the arguments that require another parameter. In the end, this is a pretty lengthy script, but it's almost bullet-proof and a lot friendlier than most shell scripts. People might not even know it's a script!

The ifconfig syntax I used in my examples is fairly platform specific to the BSD family, but you can change it to work on Linux, Solaris, or any other UNIX-like OS.

The UNIX userland contains hundreds of little utilities that can be strung together with scripts and pipes to create very powerful programs without having to spend a lot of time learning a new programming language. Hopefully you don't just learn how to make an ifconfig script out of this, but take what I've written as an example of how to improve your own scripts or inspire you to start creating your own scripts.

About HiR

HiR is what happens when 1990s-era e-Zine writers decide to form a blog. Most of us hail from the Great Plains region of the United States.

Ax0n, HiR founder and editor-in-chief, is an information security specialist currently working in the luxury goods industry.

Asmodian X joined HiR in December 1997 and currently works as a web developer and SysAdmin in the education industry.

Frogman has been on board since May 1998 and has many technical passions. When not experimenting with obscure hardware, he can be found leaping from one rooftop to the next, making the world his office.

TMiB has also been helping since 1998. Our resident physicist and go-to guy for the xkcd jokes we don't get, The Man in Black currently works in the Internet industry in an east-coast data center.