Wednesday, August 31, 2011

Lessfs offers a flexible solution to utilize data deduplication on affordable commodity hardware.
In recent years, the storage industry has been busy providing some of its most advanced features to customers, including data deduplication. Data deduplication is a specialized data compression technique used to eliminate redundant data and decrease the total capacity consumed on an enabled storage volume. A volume can refer to a disk device, a partition or a grouped set of disk devices all represented as a single device. During deduplication, redundant data is deleted, leaving only a single copy of the data on the storage volume.
One ideal use-case scenario is when multiple copies of a large e-mail message are distributed and stored on a mail server. An e-mail message of just a couple megabytes does not seem too bad, but if it were sent and forwarded to more than 100 recipients, that's more than 200MB of copies of the same file(s).
Another great example is in the arena of host virtualization. In recent years, virtualization has been the hottest trend in server administration. If you are deploying multiple virtual guests across a network that share the same common operating system image, data deduplication can significantly reduce the capacity consumed to a single copy of the image and, in turn, reference the differences when and where needed.
Again, the primary focus of this technology is to identify large sections of identical data, which can include entire files or large sections of files, and store only one copy. Other benefits include reduced costs for additional storage capacity, which, in turn, can be used to increase volume sizes or protect large numbers of existing volumes (such as with RAID, archival and so on). Using less storage equipment also reduces the cost of energy, space and cooling.
Two types of data deduplication exist: post-process and inline deduplication. Each has its advantages and disadvantages. To summarize, post-process deduplication occurs after the data has been written to the storage volume in a separate process. While you are not losing performance in computing the necessary deduplication, multiple copies of a single file will be written multiple times, until post-process deduplication has completed, and this may become problematic if the available capacity becomes low. During inline deduplication, less storage is required, because all deduplication is handled in real time as the data is written to the storage volume, although you will notice a degradation in performance as the process attempts to identify redundant copies of the data coming in.
Storage technology manufacturers have been providing the technology as part of their proprietary and external storage solutions, but with Linux, it also is possible to use the same technology on commodity and very affordable hardware. The solutions provided by these manufacturers are in some cases available only at the physical device level (that is, the block level), so they can work only with redundant streams of data blocks as opposed to individual files, because the logic is unable to recognize separate files over the most commonly used protocols, such as SCSI, Serial Attached SCSI (SAS), Fibre Channel, InfiniBand and even Serial ATA (SATA). This is referred to as chunking. The filesystem I cover here is Lessfs, a block-level deduplicating, FUSE-enabled Linux filesystem.
FUSE, or Filesystem in Userspace, is a kernel module commonly seen on UNIX-like operating systems that gives users the ability to create their own filesystems without touching kernel code. Filesystem code runs in user space, while the FUSE module acts as a bridge to the kernel interfaces.
In order to use these filesystems, FUSE must be installed on the system. Most mainstream Linux distributions, such as Ubuntu and Fedora, will have the module and userland tools preinstalled, typically to support the ntfs-3g filesystem.

Lessfs

Lessfs is a high-performance inline data deduplication filesystem written for Linux and is currently licensed under the GNU General Public License version 3. It also supports LZO, QuickLZ and BZip compression (among a couple others), and data encryption. At the time of this writing, the latest stable version is 1.3.3.1, which can be downloaded from the SourceForge project page: http://sourceforge.net/projects/lessfs/files/lessfs.
Before installing the lessfs package, make sure you install all known dependencies for it. Some, if not most, of these dependencies may be available in your distribution's package repositories. You will need to install a few manually though, including mhash, tokyocabinet and fuse (if not already installed).
Your distribution may have the mhash2 libraries available or even installed, but lessfs still requires mhash. This also can be downloaded from SourceForge: http://sourceforge.net/projects/mhash/files/mhash. At the time of this writing, the latest stable build is 0.9.9.9. Download, build and install the package:
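The original listing is not preserved here; the build follows the standard autotools sequence (archive name inferred from the version mentioned above):

```shell
# Build and install mhash 0.9.9.9 (standard configure/make/install steps)
tar xzf mhash-0.9.9.9.tar.gz
cd mhash-0.9.9.9
./configure
make
sudo make install
```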

Lessfs also requires tokyocabinet, as it is the main database on which it relies. The latest stable build is 1.4.47. To build tokyocabinet, you need to have zlib1g-dev and libbz2-dev already installed, which usually are provided by most, if not all, mainstream Linux distributions.
Download, build and install the package using the same configure, make and sudo make install commands from earlier. On 32-bit systems, you need to append --enable-off64 to the configure command. Failure to use --enable-off64 limits the databases to a 2GB file size.
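A sketch of the sequence, with the 32-bit flag shown where it applies (archive name inferred from the version above; the original listing is not preserved):

```shell
tar xzf tokyocabinet-1.4.47.tar.gz
cd tokyocabinet-1.4.47
./configure --enable-off64   # append --enable-off64 on 32-bit systems only
make
sudo make install
```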
If it is not already installed or if you want to use the latest and greatest stable build of FUSE, download it from SourceForge: http://sourceforge.net/projects/fuse. At the time of this writing, the latest stable build is 2.8.5. Download, build and install the package using the same configure, make and sudo make install commands from earlier.

After resolving all the more obscure dependencies, you're ready to build and install the lessfs package. Download, build and install the package using the same configure, make and sudo make install commands from earlier.
Now you're ready to go, but before you can do anything, some preparation is needed. In the lessfs source directory, there is a subdirectory called etc/, and in it is a configuration file. Copy the configuration file to the system's /etc directory path:

$ sudo cp etc/lessfs.cfg /etc/

This file defines the location of the databases among a few other details (which I discuss later in this article, but for now let's concentrate on getting the filesystem up and running). You will need to create the directory path for the file data (default is /data/dta) and also for the metadata (default is /data/mta) for all file I/O operations sent to/from the lessfs filesystem. Create the directory paths:

$ sudo mkdir -p /data/{dta,mta}

Initialize the databases in the directory paths with the mklessfs command:

$ sudo mklessfs -c /etc/lessfs.cfg

The -c option is used to specify the path and name of the configuration file. A man page does not exist for the command, but you still can invoke the built-in help with the -h option.
Now that the databases have been initialized, you're ready to mount a lessfs-enabled filesystem. In the following example, let's mount it to the /mnt path:

$ sudo lessfs /etc/lessfs.cfg /mnt

When mounted, the filesystem assumes the total capacity of the filesystem to which it is being mounted. In my case, it is the filesystem on /dev/sda1:
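You can confirm this with df; the original output is not preserved, and the reported size will mirror whatever backing filesystem holds the databases:

```shell
# The lessfs mountpoint reports the capacity of the backing filesystem
df -h /mnt
```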

Currently, you should see nothing but a hidden .lessfs subdirectory when listing the contents of the newly mounted lessfs volume:

$ ls -a /mnt/
. .. .lessfs

Once mounted, the lessfs volume can be unmounted like any other volume:

$ sudo umount /mnt

Let's put the volume to the test. Writing file data to a lessfs volume is no different from what it would be to any other filesystem. In the example below, I'm using the dd command to write approximately 100MB of all zeros to /mnt/test.dat:
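The original listing is not preserved; a typical invocation matching the description would be:

```shell
# Write ~100MB of zeros into the lessfs-backed mountpoint
sudo dd if=/dev/zero of=/mnt/test.dat bs=1M count=100
```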

Seeing how the filesystem is designed to eliminate all redundant copies of data, and a file filled with nothing but zeros is a prime example of such redundancy, you can observe that only 48KB of capacity was consumed, which may be nothing more than the necessary data synchronized to the databases:
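The original output is not preserved; the commands to check would look like this (the /data paths follow the defaults set up earlier):

```shell
# Capacity consumed on the lessfs mount and its backing store
# should have grown by only a few KB despite the 100MB apparent write
df -h /mnt
du -sh /data/dta /data/mta
```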

If you display a detailed listing of that same file in the lessfs-enabled directory, it appears that all 100MB have been written. Using its embedded logic, lessfs reconstructs all data on the fly when additional read and write operations are initiated on the file(s):
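For example (output not preserved; the listing should report the full apparent size):

```shell
ls -lh /mnt/test.dat   # reports ~100MB, even though far less is stored
```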

Now, let's work with something a bit more complex—something containing a lot of random data. For this example, I decided to download the latest stable release candidate of the Linux kernel source from http://www.kernel.org, but before I did, I listed the total capacity consumed available on the lessfs volume as a reference point:

And, because the databases contain the actual file and metadata, if an accidental or intentional system reboot occurred, or if for whatever reason you need to unmount the filesystem, the physical data will not be lost. All you need to do is invoke the same mount command and everything is restored:

In the situation when a system suffers from an accidental reboot, possibly due to power loss, as of version 1.0.4, lessfs supports transactions, which eliminates the need for an fsck after a crash.

Shifting focus back to lessfs preparation, note that the lessfs volume's options can be defined by the user when mounting. For instance, you can define the desired values for the big_writes, max_read and max_write options. The big_writes option improves throughput when the volume is used for backup purposes, and both max_read and max_write must be defined to use it. The max_read and max_write options always must be equal to one another, and they define the block size for lessfs to use: 4, 8, 16, 32, 64 or 128KB.
The definition of a block size can be used to tune the filesystem. For example, a larger block size, such as 128KB (131072), offers faster performance but, unfortunately, at the cost of less deduplication (remember from earlier that lessfs uses block-level deduplication). All other options are FUSE-generic options defined in the FUSE documentation. An example of the use of supported mount options can be found in the lessfs man page:

$ man 1 lessfs

The following example is given to mount lessfs with a 128KB block size:
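The exact listing is not preserved; a plausible invocation, assuming the standard FUSE option names big_writes, max_read and max_write, would be:

```shell
# 131072 bytes = 128KB block size; max_read and max_write must match
sudo lessfs /etc/lessfs.cfg /mnt \
     -o big_writes,max_read=131072,max_write=131072
```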

Additional configurable options for the databases exist in your lessfs.cfg file (the same file you copied to the /etc directory path earlier). The block size can be defined here, as can the method of additional data compression to apply to the deduplicated data, and more. Below is an excerpt of what the configuration file contains. To define a new value for an option, just uncomment the desired line and comment out the alternatives:
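A hedged reconstruction of the relevant lines (option names follow lessfs 1.3.x conventions; verify against the file shipped in your source tree):

```
# /etc/lessfs.cfg (excerpt, reconstructed)
#BLKSIZE=65536
BLKSIZE=131072          # 128KB block size
COMPRESSION=qlz         # QuickLZ compression
#COMPRESSION=lzo
#COMMIT_INTERVAL=30     # seconds between commits to disk
```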

This excerpt defines the default block size as 128KB and the default compression method as QuickLZ. If the defaults are not to your liking, in this file you also can define the commit-to-disk interval (default is 30 seconds) or a new path for your databases, but make sure to initialize the databases before use; otherwise, you'll get an error when you try to mount the lessfs filesystem.

Summary

Now, Linux is not limited to a single data deduplication solution. There also is SDFS, a file-level deduplication filesystem that likewise runs on the FUSE module. SDFS is a freely available cross-platform solution (Linux and Windows) made available by the Opendedup Project. On its official Web site, the project highlights the filesystem's scalability (it can dedup a petabyte or more of data); its speed, performing deduplication/reduplication at a line speed of 290MB/s and higher; support for VMware, with mentions of Xen and KVM; flexibility in storage, as deduplicated data can be stored locally, on the network across multiple nodes (NFS/CIFS and iSCSI) or in the cloud; inline and batch-mode deduplication (a method of post-process deduplication); and file and folder snapshot support. The project is positioning itself as an enterprise-class solution, and with features like these, Opendedup means business.
It is also not surprising that, since 2008, data deduplication has been a requested feature for Btrfs, the next-generation Linux filesystem, although that may be in response to Sun Microsystems' (now Oracle's) development of data deduplication in its advanced ZFS filesystem. Unfortunately, at this point, it is unknown if and when Btrfs will introduce data deduplication support, although it already supports various types of data compression (such as zlib and LZO).
Currently, the lessfs2 release is under development, and it is supposed to introduce snapshot support, fast inode cloning, new databases (including hamsterdb and possibly BerkeleyDB) apart from tokyocabinet, self-healing RAID (to repair corrupted chunks) and more.
As you can see, with a little time and effort, it is relatively simple to use the recent trend of data deduplication to reduce the total capacity consumed on a storage volume by removing all redundant copies of data. I recommend its usage not only in server administration but even for personal use, primarily because with implementations such as lessfs, even if there isn't much redundant data, the additional data compression will help reduce the total size of a file when it is eventually written to disk. It is also worth mentioning that a lessfs-enabled volume does not need to remain local to the host system; it can be exported across a network via NFS or even iSCSI and used by other devices within that same network, providing an even more flexible solution.

Sunday, August 28, 2011

Popular open-source Content Management Systems (CMSs) like Drupal, Joomla! and WordPress are regularly subject to source code reviews as well as black-box pentesting. Thus, vulnerabilities in these systems are quickly identified and fixed, and security updates are frequently released.

Unfortunately, people tend to install the base CMS, add plugins, build their website and then never upgrade when security patches become available. Furthermore, third-party plugins usually extend the site's attack surface and expose the CMS-based website to new threats.

During pentests, and facing a CMS based website, I often look for open source security tools that are targeted specifically at the CMS in question. These tools usually excel at fingerprinting the CMS version used by the target, detecting installed plugins/themes, and identifying corresponding vulnerabilities.

Of course, I'd love to fire up generic active web scanners (Skipfish, Arachni, w3af, etc.), as well as my preferred proxy tools (ZAP and WebScarab), to perform a full-blown web pentest of the target application. However, during time-limited penetration tests, I'm compelled to look for the low-hanging fruit. Hence, instead of trying to reinvent the wheel, I make good use of CMS-targeted tools.

In this post, I'm going to describe the free security tools I use against Joomla!-based websites. If you know another utility or tip to use against Joomla! installations, feel free to mention it below as a comment.

The base operating system for the attack toolset is going to be BackTrack 5. Lucky me, all three tools are pre-installed on the distribution.

CMS Explorer

CMS Explorer is a tool developed by the creator of Nikto. It covers several CMSs like Drupal, WordPress, and Joomla!.

The first thing you should do when using CMS Explorer is to create an osvdb.key containing an OSVDB API key, and place it into the CMS Explorer install directory. You can get an OSVDB API key for free from http://osvdb.org/api/about. The CMS Explorer install directory in BackTrack 5 is /pentest/enumeration/web/cms-explorer.

Anyway, this key will be used by the tool to query OSVDB for vulnerabilities corresponding to the identified installed plugins and themes.

Here is the command line I run in order to launch a CMS Explorer scan:
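The original command line is not preserved; based on CMS Explorer's documented options, it would be along these lines (the target URL is a placeholder):

```shell
cd /pentest/enumeration/web/cms-explorer
# -type selects the CMS signatures; -osvdb enables the vulnerability lookup
./cms-explorer.pl -url http://target.example.com/ -type joomla -osvdb
```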

First, CMS Explorer will identify the themes and plugins installed on the Joomla!-based website:

Then, it will identify all the vulnerabilities in OSVDB that correspond to the found plugins and themes.

Maybe CMS Explorer is a little too verbose, but it does a decent job of detecting installed Joomla! components and identifying the vulnerabilities associated with them.

OWASP Joomla Vulnerability Scanner (aka joomscan)

OWASP Joomla Vulnerability Scanner, or joomscan, is an official OWASP project and a flagship Joomla! scanner. Its features include thorough version detection as well as signature-based vulnerability identification of Joomla! installations. As of this writing, the joomscan vulnerability database contains 466 distinct entries.

The tool is ready to use on BackTrack 5 and using it is as simple as running the following command:
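The original listing is not preserved; the basic form is shown below (install path assumed for BackTrack 5; the original run also supplied the users.txt and passwds.txt wordlists mentioned next, whose exact flags are not reproduced here):

```shell
cd /pentest/web/scanners/joomscan
./joomscan.pl -u http://target.example.com/
```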

users.txt and passwds.txt are two files containing usernames and passwords that will be used when bruteforcing the form.

Well, that's it for today's Joomla! hacking round. I'm not going to compare the utilities, as each one is specific and useful in its own way. Please don't forget to add your favorite Joomla! hacking tools and tips as a comment below. I'll try to keep this post updated, and hopefully post about other CMSs. Meanwhile, happy Joomla! hacking :)

I recently replaced my OSX based Macbook with an Ubuntu based Lenovo Thinkpad T420. I've done a number of things out of the ordinary to secure it, so thought I'd write an overview. You may find some of these techniques interesting, and maybe even useful. You may even learn about an attack or two that you were unaware of.

Defending from common thieves

My most likely adversary is the common thief. If my laptop is stolen, I want a chance to recover it that doesn't rely solely on the police. Although the laptop came with Windows 7 installed, I had no intention of using it; Ubuntu is my current operating system of choice for laptops/desktops. Rather than wiping Windows 7, I've left it as a honeypot operating system. If a thief steals the laptop, when they turn it on, it will automatically boot into Windows, without so much as being prompted for a password. I installed a free application called Prey which allows me to grab loads of information from the laptop, such as its location, and pictures from the built-in webcam. The location is pretty accurate because the laptop came with an F5521gw card, which provides GPS and 3G modem capability, and Prey is happy to take advantage of GPS data. Incidentally, this card also works fine under Linux using MBM. Hopefully, the thief will be too lazy or too dumb to do an immediate full reinstall of the OS, as it will just work out of the box as far as they're concerned.

To make room for Ubuntu on the disk, I installed GParted to a USB stick and booted that up. This allowed me to shrink the Windows 7 partition. The laptop only has a small 128GB drive though (SSD) so I had to try and recover as much space as possible. From Windows 7 I deleted the recovery partition, I disabled system restore, and I disabled swap. Disabling the swap file recovered a massive 8GB of space as the machine has 8GB of RAM.

Defending from experts

In the space recovered from Windows, went my Ubuntu installation. Natty Narwhal (11.04) was the latest version of Ubuntu at time of writing, so that's what I went with. I consider full disk encryption to be essential if you want to secure your laptop. However, there are several attacks against machines that use full disk encryption; I decided to address as many of them as possible.

Evil maid attacks

Even if you have a machine which uses full disk encryption, the boot partition and boot loader need to be stored somewhere unencrypted. Typically, people store it on the hard drive along with the encrypted partitions. The problem with doing this is, whenever you go to your machine, you don't know if somebody has tampered with the unencrypted data to install a software keylogger to capture your password. To get around this, I installed my boot partition and boot loader on a Corsair Survivor USB stick. I wanted a USB stick which would never leave my side. This particular USB stick is very strong, and water proof, so even when I go swimming or scuba diving, I don't need to leave it in a locker somewhere, unattended. I got one of my friends to take it on a scuba diving holiday before I used it. It survived several hours under the water at depths of between 10 and 15 metres.

Coldboot attacks

On a typical system with disk encryption, the encryption key is stored in RAM. This would be fine if it weren't for the fact that there are several ways for an attacker with physical access to read the contents of the RAM on a machine which is running, or which has been running recently. You might think that your machine's RAM is wiped as soon as it loses power, but that is not the case. It can take several minutes for the RAM to completely clear after losing power, and cooling the RAM with spray from an air duster can extend that time period.

An attacker with access to the running machine could simply hard-reboot it from a USB stick or CD containing msramdmp to grab a copy of the RAM. You could password-protect the BIOS and disable booting from anything other than the hard drive, but that still doesn't protect you. An attacker could cool the RAM, remove it from the running machine, place it in a second machine and boot from that instead.

The first defence I used against this attack is procedure based. I shut down the machine when it's not in use. My old Macbook was hardly ever shut down, and lived in suspend to RAM mode when not in use. The second defence I used is far more interesting. I use something called TRESOR. TRESOR is an implementation of AES as a cipher kernel module which stores the keys in the CPU debug registers, and which handles all of the crypto operations directly on the CPU, in a way which prevents the key from ever entering RAM. The laptop I purchased works perfectly with TRESOR as it contains a Core i5 processor which has the AES-NI instruction set.

Getting TRESOR to work was the most complicated part of installing my laptop. Not because it's particularly difficult, but because you have to build a custom kernel, with the TRESOR patch applied. And once you've got the custom kernel, you need to build custom installation media which uses that kernel. I did a basic Ubuntu installation without encryption to create a platform for building the custom kernel and custom installation media. Once I had the install CD ready, I did a second installation over the top of the first one using that CD instead. I'm not going to go into detail on how to do that, but I will link to the various HOWTOs that I used:

If a machine has a firewire port, or a card slot which would allow an attacker to insert a firewire card, then there's something else you need to address. It is possible to read the contents of RAM via a firewire port. Here is a great article detailing the issues and fixes for multiple operating systems. My laptop has a firewire port. I could have built a kernel without support for firewire and without firewire kernel modules, but I may need to use that port at some point. So instead, I built firewire as a set of kernel modules, and prevent the modules from loading under normal circumstances using /etc/modprobe.d/blacklist.

Preparing a disk for encryption

During my research, I found numerous people advocating that you should completely wipe a new hard drive with random data before setting up disk encryption. This makes it impossible for somebody to detect which parts of the drive have had encrypted data written to them. Doing this is as simple as creating a partition on the space you want to fill with random data, and then using the dd command to copy data directly to that partition device in /dev/ from /dev/urandom. This took a few hours to run on my system. I complicated this procedure slightly by using something I purchased called an EntropyKey. The EntropyKey provides a much larger source of "real" random data, as opposed to the much more limited "pseudo" random data that is generated by the operating system. It talks to an application called ekeyd in order to feed /dev/random directly. I also use the EntropyKey when generating GnuPG keys and for any other task which requires a source of good random data.
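As a sketch of the wipe step described above (the device name is a placeholder, not the author's actual partition):

```shell
# Fill the future encrypted partition with random data.
# /dev/sdaX is a placeholder -- triple-check the device name first!
sudo dd if=/dev/urandom of=/dev/sdaX bs=1M
```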

More on disk encryption

The LiveCD I modified doesn't have a nice GUI for handling full disk encryption. I needed to learn how to use the command line tool "cryptsetup" to set up encryption. Because TRESOR is built as a cipher kernel module, once you've booted from your custom LiveCD, you can just use the option "--cipher tresor" when using cryptsetup to create encrypted devices. It's worth spending some time playing with this tool and understanding what the various options do, if you don't want to lose access to your encrypted device.

When I initially did the installation, I chose to protect the full disk encryption key with a passphrase. It is also possible to protect it with a keyfile. The advantage of using a keyfile is that you can store it on an external device. An attacker can't just observe you entering the password, they also need to get hold of the keyfile. It's also much more difficult to brute force. I have now moved my laptop to using a keyfile. That keyfile is stored on the USB boot stick which never leaves my side, and it is GPG encrypted. Cryptsetup on Ubuntu comes with helper tools to do this. The basic process was:

Generate the keyfile

Use cryptsetup to add it to an additional key slot on the encrypted device

Encrypt with gnupg's "--symmetric" option and copy the encrypted version to somewhere like /etc/keys/

Update /etc/crypttab to use the new keyfile

Run "update-initramfs -u" to build a new initrd on the boot partition

The update-initramfs command calls a hook script which copies the gpg binary, the gpg-protected key and the appropriate boot scripts to the initrd on the boot partition. Once I'd confirmed that I could still successfully boot the machine, I emptied the key slot which contained the original passphrase. It would now be impossible to compel me to decrypt my hard drive if I were to lose, "lose", or irreparably damage my USB boot drive.
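The steps above might look like this in practice (device names, paths and key sizes are illustrative, not the author's exact commands):

```shell
# 1. Generate a random keyfile
dd if=/dev/urandom of=keyfile bs=512 count=1

# 2. Add it to a free key slot on the encrypted LUKS device
sudo cryptsetup luksAddKey /dev/sda2 keyfile

# 3. Encrypt the keyfile symmetrically with GnuPG and store the result
gpg --symmetric --output /etc/keys/rootfs.key.gpg keyfile
shred -u keyfile   # destroy the plaintext copy

# 4. Point /etc/crypttab at the new keyfile, then
# 5. rebuild the initrd on the boot partition
sudo update-initramfs -u
```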

Swap

If you need to use swap, make sure it is encrypted too. The easiest way to make sure everything is encrypted is to create an encrypted device, and then use LVM on top of it so that all of your partitions and swap end up on the same encrypted device. As this laptop has 8GB of RAM, I decided to go without swap altogether. I'm not going to be using the suspend-to-disk function, which requires swap, and I don't want swapping to cause wear on my SSD.

Trusted Platform Modules

The laptop I purchased has something called a Trusted Platform Module. This TPM can handle a number of crypto operations itself. It also provides a random number generator similar to the EntropyKey. Apparently a lot of modern laptops contain one of these. I decided to use the random number generator on the TPM as another source of entropy for when my EntropyKey isn't inserted. To do this I used a piece of software called TrouSerS. There is also a modified version of GRUB called TrustedGRUB which can use the TPM to do a number of integrity checks on the system as it boots. I'm not sure that this is of any use to me though, as my boot partition and boot loader will never leave my side.

Securing the Web browser

I use Firefox as my web browser. Surfing the web scares me; the browser strikes me as the most likely way in for a remote attacker. And yet, most people run the browser under the same user id as the rest of their programs. So if the browser is compromised, all of the files that your user can access are also instantly compromised. To try and minimise any damage if this happens, I decided to run Firefox in its own account. My normal user account is called "mike". For Firefox I created a new user account called "mike.firefox". "/usr/bin/firefox" was merely a symlink to /usr/lib/firefox-6.0/firefox.sh so I replaced it with a shell script which runs:

sudo -u mike.firefox -H /usr/lib/firefox-6.0/firefox.sh

I didn't want to be prompted for a password every time I tried to run firefox though, so I configured sudo to allow me to run that command without entering my password by adding this to the end of my /etc/sudoers (use the visudo command to do this)

mike ALL=(mike.firefox) NOPASSWD: /usr/lib/firefox-6.0/firefox.sh

The "mike.firefox" user doesn't have access to the X display when I'm logged in as "mike". To give it access, I went to "System->Preferences->Startup Applications" and told it to run the command "xhost +local:mike.firefox" when I log in. Now, when I run Firefox, it runs as user mike.firefox instead. Something to look out for when you do this: any command that Firefox spawns will itself run as user mike.firefox. I noticed that when playing Flash, there was no audio. This is because the mike.firefox user that I created did not have access to the audio device. To give it permission, I ran the command "adduser mike.firefox audio". I also set up permissions so that user "mike" could access "/home/mike.firefox/Downloads", as that is where Firefox will now download to. I symlinked /home/mike/Downloads/firefox to this directory for simplicity.

PGP smart cards

All of my incoming email is encrypted using my public GPG key. I detailed how I do this here. This means that I need to store my private GPG keys on my laptop. They're protected by a passphrase, but is this enough? If my account was compromised, an attacker could key log my passphrase and then steal my keys. Luckily, when I purchased my laptop, I ticked the "Smartcard Reader" option. I then purchased an OpenPGP Smartcard. My encryption and signing subkeys have been transferred to the smartcard, and the master key has been removed from my laptop. All that remains in the PGP private keyring on my laptop are stubs which refer to the keys on the smartcard. You can not read a key from a smartcard. If you want to decrypt or sign data, gpg sends that data to the smartcard, which then performs the crypto operations on board, and sends the results back. This isn't perfect of course. An undetected attacker could potentially use the card to decrypt data when it is inserted, without my knowledge. I wrote a custom "pinentry" application to further secure my smartcard, from observation attacks. You can read about that here.

Miscellaneous

I use the following Firefox addons to minimise the chance of MITM attacks against my browsing, and to prevent most XSS/CSRF attacks: Certificate Patrol, Cipherfox, DNSSEC Validator, HTTPS Everywhere, HTTPS Finder, NoScript, Perspectives and Request Policy.

I have installed OpenVPN. It connects to my LinodeVPS and I route all traffic over it when I'm on untrusted networks such as cafes.

I installed a local DNS resolver called Unbound. It supports DNSSEC. I don't know how many sites support DNSSEC yet, but I should benefit more from this as time goes on.

I installed an application called blueproximity. It detects when my phone is in range, via bluetooth. If my phone moves out of range, the screen automatically locks. I've no doubt that this can be prevented via spoofing my phone, but it adds another layer of security.

My Windows honeypot also has a VPN to my Linode server, and Internet Explorer is configured to use the web proxy at the end of it. If my laptop is stolen, I should be able to intercept all of the browser traffic that comes from it.

Summary

Some people might say that many of these precautions are over the top and paranoid. I don't consider myself an "elite hacker", but I know that I could pull off most of the attacks that I've discussed above without much trouble. Cold boot and evil maid are practical, easy-to-pull-off attacks. Why wouldn't I defend against them?

I'm not claiming that my laptop is impenetrable. An attacker could still grab me when I'm using the machine, rip the RAM out, and pull sensitive data from it. They could still grab my USB boot key and then beat the password out of me. They could still remotely compromise Firefox and then use an unknown kernel exploit to gain root privileges. The whole point of this exercise was to reduce the number of attack vectors, not eliminate them. That would be impossible.

If you do anything differently, or better, please let me know in the comments. I'd especially love to hear ways that I can make my Windows honeypot more effective.

Thursday, August 25, 2011

I published this not to show how advanced MS products are, but to show how LATE they are, as this was already done in GNOME and KDE years ago :)

Sameh Attia

------------------------------------

Ahh, the Windows Explorer progress dialog. For years it has been struggling to figure out how to calculate how long our copy and delete operations would take, sliding the progress bar back and forth in a seemingly random, haphazard way, the laws of time all but ceasing to exist — five seconds remaining one moment and 13 minutes the next. That’s (almost) all going to change, with the arrival of a greatly improved file management experience in Windows 8.

Over on the Building Windows 8 blog, Microsoft’s Alex Simmons, a director of program management for Windows, has laid bare most of the new functionality. If you’d rather look at the reworked dialog boxes, they’re in a video that’s embedded below; otherwise, read on.

Simmons states that his team’s focus was to improve the high-volume copying experience — which makes good sense, since Explorer really isn’t that bad in its present state if you’re just moving around a handful of files. Gone are the multiple progress windows that stack atop your Explorer taskbar icon in Windows 7. All operations will be consolidated into a single window, similar to the way Internet Explorer or Firefox handle your downloads. And, just like your browser’s download manager, the updated file dialog allows you to pause and cancel jobs with the click of a button.

Want some more in-depth knowledge about what’s going on? Tap the more details button, and you’re presented with a real-time graph (pictured right) that charts the current speed of your operation and also reports the time remaining along with the number and size of files left. As for those off-the-mark time estimates, Simmons says that coming up with a precise calculation is nearly impossible due to the variables involved, such as interference from security software or network congestion. To that end, the Windows 8 Explorer interface has been tweaked to play up elements that can be detailed precisely, like transfer speeds.

One more area Microsoft has focused on is conflict resolution, something that had already been improved in Windows 7. The new copy and replace options allow users greater flexibility when identically named files are dropped into a folder. In Windows 7, you can choose to replace the file, skip the copy, or keep both copies and let Windows rename the new addition. This can be done on a file-by-file basis, or you can check off a box and apply your preference en masse. In Windows 8, Explorer consolidates conflicts onto a single thumbnailed pane (below left) where you can check off the versions you want to keep.

Last but not least, Simmons quietly mentions a tweak to delete dialogs in Windows 8. No longer will Windows default to notifying users every time they send a file off to the Recycle Bin (a toggle you could flip in earlier versions of Windows). The aim is to create a “quieter, less distracting experience,” but my admin sense is tingling. You’ve got to imagine that this change is going to lead to more than a couple Delete > Empty Recycle Bin operations.

This tutorial shows how to set up network RAID1 with the help of DRBD on two Debian Squeeze systems. DRBD stands for Distributed Replicated Block Device and allows you to mirror block devices over a network. This is useful for high-availability setups (like an HA NFS server) because if one node fails, all data is still available from the other node.
I do not issue any guarantee that this will work for you!

1 Preliminary Note

I will use two servers here (both running Debian Squeeze):

server1.example.com (IP address: 192.168.0.100)

server2.example.com (IP address: 192.168.0.101)

Both nodes have an unpartitioned second drive (/dev/sdb) with identical size (30GB in this example) that I want to mirror over the network (network RAID1) with the help of DRBD.
It is important that both nodes can resolve each other, either through DNS or through /etc/hosts. If you did not create DNS records for server1.example.com and server2.example.com, you can modify /etc/hosts on both nodes as follows:
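With the two hosts from the preliminary note, the /etc/hosts entries on both nodes would look like this:

```shell
# /etc/hosts (both nodes)
127.0.0.1       localhost
192.168.0.100   server1.example.com     server1
192.168.0.101   server2.example.com     server2
```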

Make sure you use the correct node names in the file (instead of server1.example.com and server2.example.com); the names must match what the command

uname -n

shows on both nodes. Also make sure you fill in the correct IP addresses in the address lines and the correct disk in the disk lines (if you don't use /dev/sdb1).
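The address and disk lines mentioned above live in the DRBD resource definition. A sketch of what it typically looks like on Debian Squeeze, assuming the resource is named r0 and the conventional port 7789 (both are assumptions, not taken from this tutorial):

```shell
# /etc/drbd.d/r0.res (sketch -- resource name and port are assumptions)
resource r0 {
        protocol C;
        on server1.example.com {
                device    /dev/drbd0;
                disk      /dev/sdb1;
                address   192.168.0.100:7789;
                meta-disk internal;
        }
        on server2.example.com {
                device    /dev/drbd0;
                disk      /dev/sdb1;
                address   192.168.0.101:7789;
                meta-disk internal;
        }
}
```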
Now we initialize the metadata storage. On both nodes, run:
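Assuming the resource is named r0 (an assumption, since the name is not given here), initializing the metadata and bringing the device up would look like:

```shell
# On BOTH nodes (r0 is an assumed resource name)
drbdadm create-md r0     # write DRBD metadata to /dev/sdb1
drbdadm up r0            # attach the disk and connect to the peer
```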

The Primary/Secondary part of the output tells you that this is the primary node.
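To promote server1 and check the node roles, a typical sequence looks like this (again assuming the resource name r0):

```shell
# On server1 only: make this node primary for the initial sync
# (r0 is an assumed resource name)
drbdadm -- --overwrite-data-of-peer primary r0

# Watch the sync progress and the roles; "Primary/Secondary" means
# the local node is primary and the peer is secondary
cat /proc/drbd
```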
Now that we have our new network RAID1 block device /dev/drbd0 (which consists of /dev/sdb1 from server1 and server2), let's create an ext3 filesystem on it and mount it to the /data directory. This has to be done only on server1!
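The filesystem step would then be roughly:

```shell
# On server1 only -- never mount the device while a node is Secondary
mkfs.ext3 /dev/drbd0
mkdir /data
mount /dev/drbd0 /data
```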