Friday, December 31, 2004

After doing some research on grid computing through IBM's web resources, I came across the following outline, which clarifies the differences between grid and cluster computing. This topic has been misunderstood by most people I have discussed it with. Hopefully this will aid my understanding and my ability to discuss the topic intelligently.

Quoted from http://www-106.ibm.com/developerworks/grid/library/gr-heritage/

How grid differs from cluster computing

Cluster computing can't truly be characterized as a distributed computing solution; however, it's useful to understand the relationship of grid computing to cluster computing. Often, people confuse grid computing with cluster-based computing, but there are important differences.

Grids consist of heterogeneous resources. Cluster computing is primarily concerned with computational resources; grid computing integrates storage, networking, and computation resources. Clusters usually contain a single type of processor and operating system; grids can contain machines from different vendors running various operating systems. (Grid workload-management software from IBM, Platform Computing, DataSynapse, and United Devices are able to distribute workload to a multitude of machine types and configurations.)

Grids are dynamic by their nature. Clusters typically contain a static number of processors and resources; resources come and go on the grid. Resources are provisioned onto and removed from the grid on an ongoing basis.

Grids are inherently distributed over a local, metropolitan, or wide-area network. Usually, clusters are physically contained in the same complex in a single location; grids can be (and are) located everywhere. Cluster interconnect technology delivers extremely low network latency, which can cause problems if clusters are not close together.

Grids offer increased scalability. Physical proximity and network latency limit the ability of clusters to scale out; due to their dynamic nature, grids offer the promise of high scalability.

For example, recently, IBM, United Devices, and multiple life-science partners completed a grid project designed to identify promising drug compounds to treat smallpox. The grid consisted of approximately two million personal computers. Using conventional means, the project most probably would have taken several years -- on the grid it took six months. Imagine what could have happened if there had been 20 million PCs on the grid. Taken to the extreme, the smallpox project could have been completed in minutes.

Cluster and grid computing are completely complementary; many grids incorporate clusters among the resources they manage. Indeed, a grid user may be unaware that his workload is in fact being executed on a remote cluster. And while there are differences between grids and clusters, these differences afford them an important relationship because there will always be a place for clusters -- certain problems will always require a tight coupling of processors.

However, as networking capability and bandwidth advances, problems that were previously the exclusive domain of cluster computing will be solvable by grid computing. It is vital to comprehend the balance between the inherent scalability of grids and the performance advantages of tightly coupled interconnections that clusters offer.

Quoted from http://www-106.ibm.com/developerworks/grid/library/gr-heritage/

The Passport system, used by Microsoft to authenticate users in its Hotmail and MSN instant messenger technologies, is no longer being pushed as an authentication tool for on-line transactions at non-Microsoft websites. According to an article in The Seattle Times, eBay and Monster.com have stopped using Passport to authenticate users on their systems. This is excellent news, as having a proprietary system entrenched as an Internet standard would be a terrible blow to the freedom and security of existing systems. Imagine having to wait another month for the next roll-out of security patches from Microsoft (or any other vendor) before you could make any "safe" online transactions.

Open standards that can be audited by the public, and by professionals who have no financial stake in the success or failure of the system they are auditing, are of extreme importance to the security and privacy of financial information. We, as consumers and security professionals, must take a stand against proprietary standards and push for open standards that are not beholden to a profit margin or to investors who care only about how their bank accounts grow.

Wednesday, December 29, 2004

If spooks and spies, computer security, and down-to-earth common sense applied with the scientific method appeal to you, this book will be an exciting read that you will not be able to put down.

"The Cuckoo's Egg", by Cliff Stoll, is a true story written by an astronomer at Berkeley who begins by helping some systems administrators and ends up chasing a German hacker working for the KGB. Cliff, relating events from his own experience, tracks down the hacker and provides the evidence to convict him by working with the FBI, CIA, NSA, OSI, and other agencies. The book runs around 350 pages across more than 50 chapters, making for short chapters.

On a technical note, the techniques, procedures, and equipment in the book are described well enough to pique the reader's curiosity. The hacking techniques show how insecure the common administrator left their systems 20 years ago (and, to some degree, today). The most common way the hacker entered a system was with a default username and password. Technical readers will also note how minimal the involvement of the other "computer experts" is, with most of the credit going to the author (I imagine we would all tell it the same way, so I don't blame him).

I highly recommend this book to technical and non-technical crowds alike. Enjoy!

On December 11th an article was posted to TechNet containing a discussion between two Microsoft employees about the differences between Windows and Linux. The discussion did a good job of promoting Microsoft's strengths and spreading some myths about Linux, while also revealing some truths that people should be aware of.

One of the problems with the mindset of those who pit Microsoft against Linux is that the two really are not comparable. The article itself brings this up by stating that "nobody runs just a kernel". Linux is a kernel; Windows is not just a kernel, but an entire suite of software with a kernel in there somewhere. Another point Microsoft presses in the article is that it supports open standards better than the open-source community does, which is completely incorrect. Microsoft works with Microsoft products. I know that Microsoft is working on software to interoperate with other technologies (SFU, Services for UNIX), but most people I know are not using these technologies and are not aware of them. Most users are not even upgrading to the latest versions of Windows and Office.

I like that Microsoft admits many people in the IT industry want to be able to integrate their systems without hassling with incompatibilities; I second that. I would love to work with Windows and Linux with fewer hassles. I believe those hassles could be solved more easily if Microsoft were willing to work with others, although the open-source community will continue to solve compatibility problems regardless.

The end result for me as an IT professional is "which tool will perform the job better, now and in the future?" The most common answer for me happens to be Linux. A huge factor I see in this arena is that most IT people I work with know next to nothing about Linux and have no experience with it. As the demand for Linux, and for the ability to do what you want with your systems, increases, the demand for people with Linux skills will increase. A very common mindset among the IT crowd today is that they won't learn a new technology unless their employer pays them to learn it. This makes it easy to set ourselves above the crowd by learning these technologies on our own. The hard-working self-starter will rarely be out of a job.

I think it is very important to know the technologies in the field and not take a stand that limits you to just one vendor or technology. The IT professional who is valuable to their place of work will make sure to study both Microsoft and other-than-Microsoft (OTM) technologies.

Wednesday, December 22, 2004

Having recently installed Fedora Core 3 on one of my machines, I have had an opportunity to view the firewall settings created by the GUI security tool Red Hat provides. When installing the operating system, I chose to enable the firewall and to allow SSH connections from the Internet. With these settings in mind, the following output results from 'iptables -L -v':
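On a stock Fedora Core 3 install with the firewall enabled and SSH allowed, that listing typically looks something like the following. This is an illustrative sketch, not my machine's actual output; packet counters, and some rule details, will vary:

```text
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target               prot opt in  out source   destination
  312 24810 RH-Firewall-1-INPUT  all  --  any any anywhere anywhere

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target               prot opt in  out source   destination
    0     0 RH-Firewall-1-INPUT  all  --  any any anywhere anywhere

Chain OUTPUT (policy ACCEPT 204 packets, 18934 bytes)
 pkts bytes target prot opt in  out source   destination

Chain RH-Firewall-1-INPUT (2 references)
 pkts bytes target prot       opt in  out source   destination
   10   840 ACCEPT all        --  lo  any anywhere anywhere
    4   336 ACCEPT icmp       --  any any anywhere anywhere icmp any
    0     0 ACCEPT ipv6-crypt --  any any anywhere anywhere
    0     0 ACCEPT ipv6-auth  --  any any anywhere anywhere
  280 23110 ACCEPT all        --  any any anywhere anywhere state RELATED,ESTABLISHED
    2   120 ACCEPT tcp        --  any any anywhere anywhere state NEW tcp dpt:ssh
    0     0 ACCEPT udp        --  any any anywhere anywhere state NEW udp dpt:ipp
   16  1872 REJECT all        --  any any anywhere anywhere reject-with icmp-host-prohibited
```

Note that SSH is far from the only thing accepted here: ICMP, the IPsec protocols (ipv6-crypt/ipv6-auth), and the printing port (ipp) are let through as well.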

To interpret what Red Hat has done here: they have allowed SSH access into my machine, but they have also allowed other traffic in. I meant to allow only SSH access, but it seems Red Hat has other ideas.

Several issues come to mind when viewing this configuration. If I were a cyber-criminal, I would now know the signature of a Red Hat system and could exploit it based on the ports that are open by default. I could do a mass portscan with nmap, hping2, or another port scanner and match the systems I identify with known vulnerabilities. This is why it is very important to know your system (or pay someone to know it) and to use third-party tools to verify its security.
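For example, a scan along these lines would reveal the default-open ports that make up such a signature (an illustrative sketch with a made-up target network; only scan hosts you are authorized to test):

```shell
# TCP SYN scan for SSH plus the printing port left open by default
# (requires root for -sS; 192.168.1.0/24 is a placeholder network)
nmap -sS -p 22,631 192.168.1.0/24
```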

The important thing to do at this point is to close the holes in the firewall that do not belong there. I have adopted a method, taken from an IPTables/Netfilter tutorial, which creates a block chain referenced from INPUT and FORWARD, and then adds additional chains for other services that you want available. I will demonstrate how to do this here:
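The commands below are a sketch of that approach; the chain name 'block' is illustrative, and everything must be run as root:

```shell
# Create a user-defined chain to hold the default policy (sketch)
iptables -N block

# Allow loopback traffic and anything that was initiated from the inside
iptables -A block -i lo -j ACCEPT
iptables -A block -m state --state ESTABLISHED,RELATED -j ACCEPT

# Drop everything else
iptables -A block -j DROP

# Insert the references at position 1, ahead of the Red Hat default chain
iptables -I INPUT 1 -j block
iptables -I FORWARD 1 -j block
```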

You will note that the block chain is referenced above the Red Hat default chain, which makes all traffic traverse the block chain before it can reach the Red Hat chain. The firewall is now at a good starting point and will block all traffic that was not first requested from the inside. Now the Red Hat chain can be de-referenced and deleted, as shown:
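Assuming the default chain is named RH-Firewall-1-INPUT (the usual name on Red Hat systems, but verify with 'iptables -L -n' first), the de-referencing and deletion might look like this sketch:

```shell
# Remove the references from the built-in chains
iptables -D INPUT -j RH-Firewall-1-INPUT
iptables -D FORWARD -j RH-Firewall-1-INPUT

# Flush its rules, then delete the now-unreferenced chain
iptables -F RH-Firewall-1-INPUT
iptables -X RH-Firewall-1-INPUT
```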

Now, with another 'iptables -L -v -n', you will see that the configuration is ready to be saved and tweaked for specific uses. Save the configuration with '/etc/init.d/iptables save active'.

The next step for this specific setup is to allow SSH access into the machine. This uses the same concept as before: create a chain for the service and reference it from the INPUT and FORWARD chains when it is ready. The commands used are as follows:
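A minimal sketch of the chain, matching the ssh_table name used below:

```shell
# Create a chain for SSH and accept new connections to port 22 (sketch)
iptables -N ssh_table
iptables -A ssh_table -p tcp --dport 22 -m state --state NEW -j ACCEPT
```

Packets that do not match simply fall through and return to the calling chain, so everything else still ends up in the block chain.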

Now we reference this chain from the INPUT and FORWARD chains as follows:

iptables -I INPUT 1 -j ssh_table
iptables -I FORWARD 1 -j ssh_table

Finally, save the changes:
/etc/init.d/iptables save active

Take another look at your configuration and make sure it looks right. This small tutorial does not even touch on the many, many capabilities of IPTables/Netfilter, but it does provide a starting point to secure your system from many overt and brute-force attacks.

Monday, December 20, 2004

SSH tunneling is a bit of knowledge that any security professional should have under their belt. Here I will explain some rudimentary elements of SSH tunneling. The purpose of SSH tunneling is to provide a secure means of transporting data over a non-secure channel. In essence, SSH tunneling creates a VPN (Virtual Private Network).

SSH tunneling can be used to route any traffic from one computer to another, as long as there is an SSH server on one end and an SSH client on the other end. It is a requirement that there be an account with shell access used to create the tunnel. One common use of SSH tunneling is to secure email transfers when the email server has no secure transport protocol in place. This is a problem at my place of work, which is a large university that does not provide a secure means to check email. In order to prevent my username and password from passing between my machine and the mailserver in the clear, I use SSH tunneling to encrypt all traffic. I will explain how I did this in the following steps:

1. First, set up the SSH tunnels between the local machine and the mailserver for POP3 and SMTP transport. Note the use of high-numbered ports so the tunnels can be established by a non-privileged user:

ssh -N -l username -L 52110:localhost:110 -L 52025:localhost:25 mailserver

(Replace 'username' with your shell account on the mailserver.)

This command will be answered with a password prompt for the SSH account you are using to connect to the mailserver. To verify the tunnels have been established, open another shell and use netstat to see whether your machine is now listening on those two ports:
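Something like the following should show the two forwarded ports in LISTEN state (a sketch; flag spellings vary slightly between netstat versions):

```shell
# Check that the local ends of the tunnels are listening
netstat -tln | grep -E '52110|52025'
# Expect lines similar to:
#   tcp  0  0 127.0.0.1:52110  0.0.0.0:*  LISTEN
#   tcp  0  0 127.0.0.1:52025  0.0.0.0:*  LISTEN
```

The mail client is then pointed at localhost ports 52110 (POP3) and 52025 (SMTP) rather than at the mailserver directly.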

Two questions that I'd like to consider, and possibly answer, regarding a Windows/Linux domain environment:

1. Is it more secure, yet still easy to administer, to have all client machines under their own control, with a domain administrator role only having the power to patch and virus scan, rather than have a domain administrator have more power than the local administrator?

I think it is more secure to have a network/domain where the domain administrator handles only updates and user management, while the local administrators retain complete control over their own systems. This would prevent the entire domain from being compromised if the domain administrator account were compromised.

2. Is it more secure, yet still easy to administer, to have only one user with administrator privileges, rather than have multiple levels of administrator access? One example here is the Windows method of having a domain administrator and an enterprise administrator, in addition to a plethora of other administrators...

With most *nix systems, each application or process has a dedicated user with administrative rights over that process, which makes it unnecessary to use root for most actions. The root user is required only for actual system administration. Along with this, it is very easy in *nix systems to switch roles and become the root user, or the apache user.
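As a quick illustration of that role switching (assuming an 'apache' service account exists, as it does on most Linux distributions running Apache):

```shell
# Become root (prompts for the root password)
su -

# As root, run a single command as the apache user; service accounts often
# have a non-interactive shell, so specify one explicitly
su -s /bin/bash apache -c 'whoami'
```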

Tuesday, December 14, 2004

In an article posted on http://news.com, Microsoft's CIO, Ron Markezich, talks about many issues, from outsourcing to testing software in-house. One point he brings up is that his users "are the admins of their machines". This statement is not surprising, but it gives me more insight into why it is so difficult to administer a Windows domain full of non-privileged users. A well-known security basic is that users should be granted only enough power to perform their function, and if the software vendor tests an application in a state that does not match how customers normally use it, the customer is going to have a less-than-satisfying experience with it. One way to correct this problem would be to test with half of the users as admins and the other half as non-privileged users.

When a department at Microsoft tests software in a non-secure manner, it indicates to me that they are not taking security seriously. Security is a market in which Microsoft has taken a huge hit. Only when they realize just how important security is will they stand a chance of being competitive among people who take security seriously -- until then, they will have to cater to people who don't know any better than to run as admins and become infected with spyware and other malware, which will further tarnish their reputation.

Thursday, December 09, 2004

I figured out what the problem was with my mouse on FreeBSD and learned some interesting things in the process. The problem was that my mouse is not supported by FreeBSD, which caused it to behave very erratically. The mouse I was using at the time was a PS/2 LabTec optical wheel mouse; I am now using a Packard Bell PS/2 2-button mouse. I'll have to find a supported optical wheel mouse to use with this system.

An interesting thing about FreeBSD is that you have two options when configuring the mouse: you can use a daemon called moused to control it, or you can let X control it. If you use moused, you have a working mouse even in the console when X is not running. There are a couple of different settings to adjust for one or the other, but neither is difficult to configure. If you use the 'sysinstall' program to configure the mouse, you don't have to touch any configuration files: type 'sysinstall' at the command prompt as root, then select 'configuration', then 'mouse'. You will be prompted to set the port and protocol and to enable the mouse. Configuring the mouse to work with X is the same process as configuring X on Linux.
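For a PS/2 mouse on the psm0 device, the resulting moused settings in /etc/rc.conf typically look something like this (a sketch; sysinstall writes the actual values for you):

```shell
# /etc/rc.conf entries for moused with a PS/2 mouse (illustrative)
moused_enable="YES"
moused_port="/dev/psm0"
moused_type="auto"
```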

Another interesting item: when you use the 'sysinstall' program to configure the system, it does not delete entries in the /etc/rc.conf file; it records the changes while keeping the old settings as a history. This is very useful, as you can see what the old settings were and when you changed them. I heartily recommend this approach to any developers integrating with existing systems.

Here is an excellent article from SysAdmin magazine about deploying ClamAV on your network using a Samba VFS module and the Windows client. The article notes how ClamAV follows the simple rule of "do one thing and do it well": it takes any input and scans it. The shortcomings of the product are also noted, namely that it does not support centralized network deployment and management like Symantec and some others.

Tuesday, December 07, 2004

I have re-installed FreeBSD and am now configuring the system to my liking. Installing BASH during the install helped my situation and allowed me to select BASH as the shell for my users. I was not able to select BASH for the root user during the install, as those settings are pre-selected and the install only offers the choice of a password, but I was able to change the shell after the fact. The way to change an account's shell post-creation is the chsh command. You can type the command alone and follow the prompts, or use it with '-s' and the path to the shell. The command I used on FreeBSD was:

'chsh -s /usr/local/bin/bash'

Note: when I had not installed BASH during the system install phase last time, I was unable to use this command without a segmentation fault.

The next item to fix is the mouse. I configured it with the settings I thought would work best, but it does not work properly. The mouse shows up in the dmesg output as the psm0 device, but it acts as if it is using the wrong device or the wrong driver.

Monday, December 06, 2004

Today I am re-installing FreeBSD 5.3 RELEASE. The reason is that I did not install the BASH shell the first time, and that minor annoyance is probably what is keeping me from using BASH for any users I create. I am also configuring some other items a little differently. I anticipate having to re-install a few more times before I get everything the way I want it.

With this install, I am not installing a boot loader but the standard MBR, which should reduce the options that have to be decided when booting, decreasing the time involved. I'm also installing the whole package list -- everything on the CDs. I'm going to try out the X Window System on FreeBSD to see how user-friendly it is.

One item to note at this point (I'm installing the software right now, having just finished selecting additional packages): after you select 'install everything', you still have the option to select more. There are several pre-selected package sets you can opt to install, each profiled for a specific user or purpose, from minimal to everything. I selected 'everything' for this install, which did not actually install everything; I even had to go in and select BASH for installation manually, which is one of the main reasons I opted to re-install (see above).

I'll post an update later with the details and how the X Server is working.

Saturday, December 04, 2004

I have downloaded and installed FreeBSD 5.3 RELEASE. After using Gentoo Linux for the past year, and Red Hat Linux for several years before that, it was a different experience. It is going to take a little more time than I initially thought before I can run the tests on network processing speeds with Snort. Right now I am working on getting all of the necessary packages and tools installed to bring the system up to date. I am very surprised at how different FreeBSD is compared to Linux... I had always assumed that the two were identical.

The installation went very smoothly and took very little time (less than an hour). I re-booted the machine and it was lightning fast; I have not had a machine boot this fast before. The boot process does pause twice to accept user input, which is not what I prefer -- I would rather the machine boot straight into the default kernel -- but I'll have to play with that setting later. I'll try to post a boot time as well.

Friday, December 03, 2004

I have been doing some research on which OS would be best for a network device. According to Richard Bejtlich at http://www.taosecurity.com/, FreeBSD is a very good OS for this purpose, better than Windows or Linux. I'd like a device that can run Snort constantly, with IPTables/Netfilter logging, and also host a webserver. I have tried running Snort on a P4 2.66 GHz machine with 512 MB RAM with little success: the system was unable to process faster than 25 kbps, which is unacceptable against an average speed of 300 kbps on a cable modem. The OS on that machine was Gentoo Linux with the 2.6 kernel. The target machine for my FreeBSD testing is a custom job: a Tyan Tiger MPX motherboard with dual Athlon MP 1200 processors and 512 MB RAM. I am using 100BaseT Ethernet for all testing.

I will follow up with the results of my testing; I am downloading the ISOs for FreeBSD right now.

Wednesday, December 01, 2004

I have subscribed to several blogs lately and have been very impressed. I like the idea of having a place on the web where I can contribute my ideas and experience to those who may be interested, without interfering with those who don't care.

As an IT professional with a strong interest in network security and operating system security, I would like to present my experiences and miscellaneous ideas related to operating systems and security on this blog. Please post comments with any and all suggestions. Thanks for reading.