The metaphor used in the title is unfortunately valid for many places - some (more visible) parts may be well protected, while others that are less visible are in deep neglect. Sometimes, using a well-known solution from a reputable provider (e.g. "We have a Cisco firewall installed") is enough to generate a false sense of security, which results in neglecting other areas of technological measures, user training and policy development. In fact, the Mitnick formula of "technology + training + policy" should use multiplication in place of addition - if one of them is zero, the result is zero too (and a very small - fractional - value in one will decrease the result greatly).

Note: typically, using both rules at the same time means that only SSH is allowed (i.e. the latter "forbid all" rule does not invalidate the preceding "allow this" rule).
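The rule ordering described above can be sketched as a first-match evaluation loop; the rule format and field names here are purely illustrative, not any real firewall syntax:

```python
# A minimal sketch of first-match rule evaluation, as used by many packet
# filters: rules are checked in order and the first matching rule decides,
# so "allow SSH" placed before "forbid all" still lets SSH through.
# The rule format below is illustrative, not any real firewall syntax.

RULES = [
    {"port": 22, "action": "allow"},    # "allow SSH"
    {"port": None, "action": "deny"},   # "forbid all" (None matches any port)
]

def decide(port, rules=RULES):
    """Return the action of the first rule matching the destination port."""
    for rule in rules:
        if rule["port"] is None or rule["port"] == port:
            return rule["action"]
    return "deny"  # a sensible default if no rule matches

if __name__ == "__main__":
    print(decide(22))   # allow - the SSH rule matches first
    print(decide(80))   # deny  - only the catch-all matches
```

Swapping the two rules would make the catch-all match first and block SSH as well, which is why rule order matters.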

While early packet filter firewalls inspected single packets in isolation (without considering their "neighbours"), newer ones also analyse series of packets (stateful firewalls). Recent application layer firewalls are also able to recognize certain applications and protocols and judge the behaviour of packets accordingly.

A firewall is similar to a container - in the sense that the content (not the vessel) is what matters. Properly configured firewalls are very effective; misconfigured ones are ineffective or sometimes inhibiting. Effective use of firewalls also includes a general policy about what kind of traffic is acceptable and what is not - a good illustration of Mitnick's maxim "technology, training, policies". In an ideal case, the access control rules in a firewall are a concentrated version of the overall access policy.

Firewalls are effective tools for regulating traffic and providing single 'gates' with checkpoints (that are easier to supervise and log) into computer systems. However, they cannot protect against various security risks which bypass them, e.g.

removal/theft of data on physical media (e.g. by stealing a USB stick)

They also tend to be ineffective against many kinds of malware - partially because rule-based filtering is too slow to react to rapidly changing, constantly emerging new malware. Also, most malware tends to use legitimate channels to propagate (e.g. e-mail attachments).

Often seen as ambiguous tools used by both attackers and defenders, vulnerability scanners first appeared in the 1990s. One of them, aptly named SATAN (Security Administrator Tool for Analyzing Networks), caused one of the biggest scare campaigns in the media at that time (the name was possibly the culprit).

Most scanners search for known vulnerabilities in certain systems and software, but also for open ports (some of which may point towards a specific problem, e.g. a rootkit), default or very common passwords, outdated or misconfigured software (e.g. an open relay mail server) etc.
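At its core, the open-port check a scanner performs is just a connection attempt; a minimal sketch is below. Real scanners (e.g. Nmap) use faster and stealthier techniques, and the host and port values are only examples:

```python
# A minimal sketch of an open-port check: try a plain TCP connection and
# see whether it succeeds. Host and ports below are illustrative examples.
import socket

def port_is_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for port in (22, 80, 443):
        state = "open" if port_is_open("127.0.0.1", port) else "closed"
        print(f"port {port}: {state}")
```

A vulnerability scanner builds on results like these by matching the discovered services against its database of known weaknesses.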

A word of caution - although some scanners can discover problems by overall symptoms, they mostly rely on their database of known vulnerabilities (similarly to most antivirus software). This means that unpublished weak spots (zero-day vulnerabilities) will likely go unnoticed. Likewise, creative ignorance of users (whether on their own or fueled and directed by a social engineering attack) may render these systems surprisingly inadequate. On the other hand, latent human errors can and should be minimized.

These utilities monitor systems for suspicious activity and report it to those in charge; some of them also contain mechanisms to interrupt/stop the activity (prevention) - e.g. the system can change firewall settings, reset connections etc. The former are known as passive (IDS) and the latter as active (IDPS) systems.

The main types by location are

host-based - analyses in- and outbound traffic at just one computer; usually keeps periodic snapshots of important data (e.g. system files) and compares the current situation to them.

network-based - analyses traffic in an entire subnet by checking compliance with protocols.

The main types by detection method are

statistical anomalies - the system "knows" what behaviour is "normal" and alerts on substantial changes (e.g. sudden surges of traffic). Its strengths are customizability (different "normal" levels can be set for specific situations) and the chance to intercept new, unpublished attacks. The weakness is the level of false positive alarms, which may be substantial if the system is (even slightly) misconfigured.

signatures - similarly to antiviruses and some vulnerability scanners, these systems compare traffic against a database of known attack signatures and report any match. While they do not typically generate false positive alarms, their ability to detect intrusion depends on (and is limited to) the database, which may be out of date - and even with rapid updates, there will always be a lag between an attack appearing "in the wild" and its signature being registered in the database.

stateful protocol analysis - a somewhat hybrid approach which (akin to the stateful firewalls described above) analyses series of packets and their compliance with established protocol specifications.

Logging all events has been a feature of Unix-based systems (including BSDs, Linuxes and OS X) since their beginning. Microsoft systems, however, introduced proper logging only with NT in 1993 (and for several years, the ordinary users' Windowses - 95, 98 and ME - only contained rudimentary logging features).

But even in systems with proper logging, altering logs to hide one's tracks has been a prime activity of crackers. There are both dedicated utilities and rootkit components (e.g. Azazel) that hide the presence of the attacker. Countermeasures to log tampering include using a write-once medium (e.g. a CD-R) or even a serial port connected to another computer.
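Another software-level countermeasure - a sketch only, not a substitute for write-once media or remote logging - is to chain log entries with hashes, so that modifying or removing an earlier entry invalidates every later one:

```python
# A minimal sketch of a tamper-evident log: each entry stores a hash that
# covers the previous entry's hash, so any later modification of an earlier
# entry breaks the chain from that point on.
import hashlib

def add_entry(log, message):
    """Append a message, chaining it to the hash of the previous entry."""
    prev_hash = log[-1][1] if log else "0" * 64
    entry_hash = hashlib.sha256((prev_hash + message).encode()).hexdigest()
    log.append((message, entry_hash))

def verify(log):
    """Return True if no entry has been altered since it was written."""
    prev_hash = "0" * 64
    for message, entry_hash in log:
        expected = hashlib.sha256((prev_hash + message).encode()).hexdigest()
        if entry_hash != expected:
            return False
        prev_hash = entry_hash
    return True
```

This only detects tampering; an attacker with full control of the machine can still rebuild the whole chain, which is why shipping logs to a separate machine remains the stronger measure.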

The history of computer passwords is a colourful one, and so is the present. From the initial opposition ("Who needs them? The bosses want to regulate us?") among the old-school hackers to the no-password approach of DOS and early Windowses to the token passwords in Windows 9x - the habits of users have largely remained the same (as seen from here).

The problem is that in many systems, a user with a weak password is a risk for all others. In some places, passwords can be substituted with certificate-based logins. In others, periodic testing and changing of passwords must be taken up. While there are several simple web-based password checkers available (which in turn raise the question "if I enter my actual password, who will record it?"), checks by admins (using e.g. John the Ripper) are definitely a better way.
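A very rough local check - far weaker than a real cracker such as John the Ripper, and with a tiny made-up sample of common passwords - could look like this:

```python
# A minimal sketch of a local password check: reject passwords that are
# too short or appear in a list of common choices. The list here is a tiny
# illustrative sample; real checks use dictionaries of millions of entries.
COMMON_PASSWORDS = {"123456", "password", "qwerty", "letmein", "admin"}

def password_is_weak(password, min_length=10):
    """Return True if the password is short or a well-known common choice."""
    if len(password) < min_length:
        return True
    return password.lower() in COMMON_PASSWORDS
```

Running such a check locally avoids the "who will record my password?" problem of web-based checkers mentioned above.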

In computer security, a honeypot is a system that is seemingly neglected and easy to attack. Actually, it is configured to be a dead-end trap with mostly worthless content and well-controlled connections to the outside world (to prevent attackers using it as a springboard). Nowadays, they can be implemented as virtual machines which reset their content periodically (usually simulating a full production server complete with services and users to attract the attackers and waste their time).

Adequate and well-maintained technological infrastructure is an important factor in security. Still, many of these technologies and systems are ethically agnostic, allowing both constructive and destructive usage. Also, they are often powerful, making ignorance and unawareness very costly. Therefore, adding proper training and adequate policy are of prime importance.

Pick one of the technologies described above, study available solutions and compile a small overview. Suggest a suitable solution for a) Estonian National Library in Tallinn, b) a gymnasium / high school in Tartu, c) a small computer retail store in Pärnu.