Friday, October 25, 2013

By default Apache
runs all virtual hosts under the same Apache user, with no isolation
between them. That makes security vulnerabilities in server-side
languages such as PHP
a serious threat: an attacker who finds a single vulnerable site can
compromise every website and virtual host on the server. To address
this problem, you can deploy the Apache module suPHP, which is designed
to ensure isolation between virtual hosts that run PHP.
SuPHP provides an Apache module, mod_suphp, that passes the
handling of PHP scripts to the binary /usr/local/sbin/suphp. That binary
carries the setuid flag (permissions -rwsr-xr-x)
and thereby ensures that a PHP web script runs under the user of its
file owner. Thus to accomplish isolation you can create different users
for each Apache virtual host and change the ownership of their web files
to match that of the virtual host. Once all virtual hosts run under
different users you can set strict file permissions on the web files and
thus ensure that a script executed in one virtual host cannot write to
or even read a file from another virtual host.

suPHP installation

Developer Sebastian Marsching provides suPHP only as a source
package, licensed under the GNU GPLv2. Even though you might find suPHP
as a binary installation from a third-party repository, for best
compatibility and performance you should compile the software yourself.
You will need the following packages:

apr-util-devel – APR utility library development kit

httpd-devel – development interfaces for the Apache HTTP server

gcc-c++ – C++ support to the GNU Compiler Collection

To install these in CentOS, run the command yum -y install apr-util-devel httpd-devel gcc-c++.

Download suPHP version 0.7.2 – the most recent version, released in May.
Unfortunately, the officially shipped source package is not compatible
with CentOS 6 and needs a couple of fixes before it will compile.

Manually specify the APR path with a command like ./configure --with-apr=/usr/bin/apr-1-config. After that run the usual make && make install to complete the installation.
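As a sketch, the whole build sequence looks like this (the tarball name and the APR path are assumptions; adjust them to your system):

```shell
# Sketch of the suPHP build steps described above.
# Check your APR path with: which apr-1-config
tar xzf suphp-0.7.2.tar.gz
cd suphp-0.7.2
./configure --with-apr=/usr/bin/apr-1-config
make
make install    # run as root
```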
A successful installation creates the following files:

/usr/lib/httpd/modules/mod_suphp.so – the Apache module

/usr/local/sbin/suphp – the suPHP binary

suPHP configuration

You configure suPHP in the file /etc/suphp.conf. Here's a sample
configuration file annotated with explanations of all the directives:

[global]
;Path to logfile.
logfile=/var/log/suphp/suphp.log
;Loglevel. Info level is good for most cases but the file grows fast and should be rotated.
loglevel=info
;User Apache is running as. By default, in CentOS this is 'apache'.
webserver_user=apache
;Path all scripts have to be in. In CentOS the webroot is /var/www/html/ by default.
docroot=/var/www/html/
; Security options. suPHP will check if the executed files and folders have secure permissions.
allow_file_group_writeable=false
allow_file_others_writeable=false
allow_directory_group_writeable=false
allow_directory_others_writeable=false
;Check whether the script is within DOCUMENT_ROOT
check_vhost_docroot=true
;Send minor error messages to browser. Disable this unless you are debugging with a browser.
errors_to_browser=false
;PATH environment variable
env_path=/bin:/usr/bin
;Umask to set, specify in octal notation. A umask of 0077 strips all group and other permissions, so new files are readable and writable only by their owner.
umask=0077
; Minimum UID. Set this to the first uid of a web user and above the uids of system users. Check the file /etc/passwd for the uids.
min_uid=200
; Minimum GID. Similarly to uid, set this to the first gid of a web user.
min_gid=200
[handlers]
;Handler for php-scripts
x-httpd-php="php:/usr/bin/php-cgi"
;Handler for CGI-scripts
x-suphp-cgi="execute:!self"

The above options provide a high security level. Note the logfile
option: when the logging level is set to "info," each script execution
is logged, giving you useful information about which user executed
which script. Output looks like:
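An illustrative entry (the exact layout may vary between suPHP versions; the path and IDs here are placeholders) might look like:

```
[Fri Oct 25 10:12:33 2013] [info] Executing "/var/www/html/example/index.php" as UID 502, GID 502
```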

For every PHP execution suPHP reports the date and time, the full
path to the executed script, and the user and group that executed it.
With this information you can track each virtual host's activity.
For more options and additional information on settings, check suPHP's documentation.
Next, configure Apache to use the suPHP handler for PHP scripts. PHP
settings are usually found in a separate file, such as
/etc/httpd/conf.d/php.conf. Remove any previous PHP configuration and
leave only the new settings:
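A minimal suPHP-based php.conf might look like this (a sketch; the handler name must match the [handlers] section of /etc/suphp.conf):

```
# /etc/httpd/conf.d/php.conf - hand PHP scripts to mod_suphp
LoadModule suphp_module modules/mod_suphp.so

suPHP_Engine on
suPHP_AddHandler x-httpd-php
AddHandler x-httpd-php .php
```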

The only part of the vhost unique to suPHP is suPHP_UserGroup,
which must be present for each vhost. For the highest level of
isolation, create a new user and group for each virtual host by using
the command useradd -r exampleuser from the Linux
command line. If the user you create is used only for suPHP, you can
disable the user's ability to log in to the system, which helps protect
against threats like brute-force attacks.
In the above vhost configuration, the directory /var/www/html/example
(and all files and subdirectories under it) must belong to the user
exampleuser and the group exampleusergroup. If they are not, suPHP will
return an internal server error when you try to execute an incorrectly
owned file.
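For illustration, a vhost entry using this directive might look like the following (the server name, paths, user, and group are placeholders):

```
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/html/example
    suPHP_Engine on
    suPHP_UserGroup exampleuser exampleusergroup
</VirtualHost>
```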
You can automate the creation of a new virtual host and set up the
proper files and folders by using a Bash script like this one:
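A minimal sketch of such a script, assuming the conventions used in this article (one system user per vhost, webroots under /var/www/html); the names are placeholders:

```bash
#!/bin/bash
# Create a user and a webroot for a new suPHP virtual host.
# Usage (as root): ./newvhost.sh examplevhost
set -e

NAME="$1"
WEBROOT="/var/www/html/$NAME"

useradd -r "$NAME"                    # system account; on CentOS this also creates a matching group
mkdir -p "$WEBROOT"
chown -R "$NAME":"$NAME" "$WEBROOT"   # suPHP requires matching ownership
chmod 700 "$WEBROOT"                  # strict permissions, owner only

echo "Created vhost webroot $WEBROOT owned by $NAME"
```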

However, you can't automate everything. Don't forget to set the
correct ownership and permissions when you manually place web files into
each vhost's webroot directory. The recommended file permissions are
700, which provide read/write/execute permissions only for the owner.
SuPHP is a great way to strengthen security on servers that run
PHP-based websites, which is why many commercial solutions are either
based on it or similar to it. However, according to the project's FAQ, suPHP is no longer actively maintained, so use it with caution.

Tuesday, October 22, 2013

This tutorial on how to dual-boot Fedora 18 and Windows 7 with full disk
encryption (FDE) configured on both operating systems stems from a
request from K. Miller. The dual-boot system will be on a single hard
disk drive (HDD), GRUB will be installed in Fedora’s boot partition, and
Truecrypt will be used to encrypt the Windows 7 end of the
installation.
Encrypting Windows when dual-booting it with a Linux distribution is
not something I’ve ever considered doing simply because I don’t care a
whole lot about that operating system. But K. Miller’s request and
suggestion prompted me to take a look at the possibility.
And I didn’t think it was going to be a difficult process until I
started. First, I tried Fedora 18 and Windows 8 Pro, with UEFI enabled.
That didn’t work. Then I tried Ubuntu 12.10 and Windows 8, also with
UEFI enabled. That proved to be even more difficult, mostly because of
the issue I wrote about in Why is Windows 8 on SSD invisible to Ubuntu 12.10’s installer?. That problem also affects HDDs.
After almost one full day of trying, I decided to honor K. Miller’s
original request, which was for a tutorial on how to “dual boot a Linux
(Fedora 18) encrypted partition alongside a Windows 7,” with “full disk
encryption for both installations.”
We all know the benefits of dual-booting, but why is it necessary to
encrypt both ends of such a system? You’ll find the answer in How Fedora protects your data with full disk encryption. Extending disk encryption to the Windows end of a dual-boot system makes for a more physically secure system.
This is a long tutorial, but keep in mind that the approach I used in
this article is not the only way to go about it. It should provide a
template for how this can be done.
So, if you want to go along with me, here are the tools you’ll need:

An existing installation of Windows 7, or if you are willing to
reinstall, a Windows 7 installation CD. Since I don’t keep a running
Windows system, a fresh installation was used for this tutorial.

Truecrypt. This is the software that will be used to encrypt Windows 7. It is “open source” software, available for download here.
Note that Windows has its own disk encryption system called BitLocker.
So why not use it instead of a third-party tool like Truecrypt?
To use BitLocker, your computer must have a compatible Trusted
Platform Module (TPM). The other reason not to use BitLocker is this: it
is a Microsoft tool. As such, you can bet your left arm that it has a
backdoor. And no, I don’t have any evidence to back that up, but this is
Microsoft we are talking about.
One more thing to note: though Truecrypt is listed on the project’s
website as open source software, its license, TrueCrypt License 3.0,
is not listed among the GPL-Compatible and GPL-Incompatible Free Software
Licenses available here. It is also not an OSI-approved license. Just two points to keep in mind.

An installation image of Fedora 18, which is available for download here.

If you have all the pieces in place, let’s get started.
1. Install Windows 7 or shrink an existing C drive:
If you are going to install a fresh copy of Windows 7, be sure to leave
sufficient disk space for Fedora 18. If you have an existing
installation of Windows 7, the only thing you need to do here is to free
up disk space for the installation of Fedora 18.
The HDD I used for this installation is 600 GB in size. The next
screen shots show how I used Windows 7’s partition manager to recover
disk space that I used for Fedora 18. How you divvy up your HDD is up to
you. For my test system, I split the HDD in half, one half for Windows
7, the other half for Fedora 18. This screen shot shows the partitions
as seen from Windows 7. Right click on C and select “Shrink Volume.”
And this is the Shrink Volume window. Make your selection and click on Shrink.
Here’s the result of the shrinking operation. That unallocated space
is what will be used to install Fedora 18. Reboot the computer with the
Fedora 18 installation CD or DVD in the optical drive.

2. Install Fedora 18: I know the latest version of Anaconda that shipped with Fedora 18 has received much
bad press, but that is not going to be an issue here. Well, in a sense,
it will be, but the difficulty it presents is just a minor bump on this
road. The difficulty stems from the fact that the installer does not
give you the option to install GRUB, the boot loader, in a custom
location. But that is a minor issue, as there is a simple solution to
it. It involves working from the command line, but trust me, it’s a
piece of cake.
This screen shot shows the main Anaconda window, the “hub” in the
hub-and-spoke installation model. The only thing you’ll have to do here
is click on Installation Destination.
If you have more than one HDD attached to the computer you are using,
they will all be shown at this step. Select the one you wish to use and
check “Encrypt my data. I’ll set a passphrase later.” Click on the Continue button.
LVM, the Linux Logical Volume Manager,
is the default disk partitioning scheme. No need to change that, but
you’ll have to check “Let me customize the partitioning of the disks
instead.” Continue.
This is a partial screen shot of the manual disk partitioning step.
But don’t worry. There will be no need to do the partitioning yourself.
Anaconda will take care of it. We just need to make sure that it will be
using the free, unpartitioned space on the disk. The “Unknown” is
actually Windows 7. You can see its partitions.
This is another partial screen shot from the same step. This one is,
however, showing the options available for Fedora 18. At the bottom of
the window you can see the free space available for use. If you let
Anaconda partition the space automatically, that is the space it will
use. The Windows 7 half of the disk will be untouched. Since there’s no
need to create the partitions manually, click on “Click here to create
them automatically.”
Here are the Fedora 18 partitions that Anaconda just created. Nothing to do here, so click Finish Partitioning.
Because you elected to encrypt the space used by Fedora 18, Anaconda
will prompt you to specify the passphrase that will be used for
encryption. As I noted in Fedora 18 review, Anaconda will insist on a strong password. Save Passphrase.
Back to the main Anaconda window, click Begin Installation. On the window that opens after this, be sure to specify a password for the root account.
Throughout the Fedora installation process, I’m sure you noticed that
Anaconda did not give you the option to choose where to install GRUB 2,
the version of the GRand Unified Bootloader used by Fedora. Instead it
installs it in the Master Boot Record (MBR), the first sector of the
HDD, overwriting the Windows 7 boot files. So when you reboot the system
after installation has completed successfully, you will be presented
with the GRUB 2 boot menu.
At this point, you might want to boot into Windows 7 just to be sure
that you can still do so. Then boot into your new installation of Fedora
18. Complete the second stage of the installation process, and log in
when you are done.

3. Install GRUB 2 to Fedora’s boot partition:
Once inside Fedora, the next task is to install GRUB in the Partition
Boot Record (PBR) of the boot partition, that is, the first sector of
the boot partition. Launch a shell terminal and su to root. To install GRUB 2 in the boot partition’s PBR, you need to know its partition number or device name. The output of df -h will reveal that information. On my installation, it is /dev/sda3. Next, type grub2-install /dev/sda3. The system will complain and refuse to do as instructed. Not to worry, you can force it.
To compel it to install GRUB 2 where we want, add the --force option to the command, so that it reads grub2-install --force /dev/sda3.
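As a sketch, the whole sequence from this step looks like this (the device /dev/sda3 is from my installation; substitute your own boot partition):

```shell
su -                              # become root
df -h                             # identify the boot partition (here: /dev/sda3)
grub2-install /dev/sda3           # refuses: installing to a partition is discouraged
grub2-install --force /dev/sda3   # so force it
```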
Once that’s done, reboot the computer. Note that completing this step
does not remove GRUB from the MBR. It just installs another copy in the
boot partition. At the next step, GRUB will be removed from the MBR.
4. Restore Windows 7’s boot manager to the MBR:
When the computer reboots, you will still see Fedora’s boot menu, but
instead of booting into Fedora 18, boot into Windows 7. The next task is
to restore its boot program
to the MBR and add an entry for Fedora 18 in its boot manager’s menu.
The program I know that makes it easy to do that is EasyBCD. Download
it from here. Note that
EasyBCD is free for personal use. After installing it, start it if it
does not start automatically. Shown below is its main window. Click on Add New Entry to begin.
Then click on the Linux/BSD tab. Select GRUB 2 from the Type dropdown menu, and edit the Name field to match. Click on Add Entry.
This is a preview of what the entries will be on the boot menu of
Windows 7. The final task is to restore the Windows 7 boot program to
the MBR. To do that, click on BCD Deployment.

Under MBR Configuration Options, make sure that the first option is selected. Then click on Write MBR. Exit EasyBCD and reboot the computer.
If you reboot the computer after that last operation, you will be
presented with Windows 7’s boot menu. Test to make sure that you can
boot into either OS. When you are satisfied, reboot into Windows 7 to
start the last series of steps in this operation.
5. Encrypt Windows 7 with Truecrypt:
If you’ve not downloaded Truecrypt, you may do so now, and install it.
Start it by clicking its icon on the desktop. Throughout this step, very
little extra explanation is necessary because the on-screen
explanations will suffice. So, at this step, the default is good. Next.
Click Create Volume.
Select the last option as shown, then Next.
The first option is it. Next.
For obvious reasons, the last option offers a more (physically) secure system. Next.
Though not indicated in this screen shot, I chose “No”. I think the on-screen explanation is sufficient.
Last option, then Next.

Yes.
“Yes,” then Next.
First option, then Next.
It was, but we rectified this when we restored the Windows boot program to the MBR. So, select “No.” Next.
This is fine. What will happen is that after this process is completed, pressing the Esc
key at Truecrypt’s boot menu will drop you to Fedora’s boot menu.
Because Fedora is also encrypted, being able to bypass Truecrypt’s boot
menu to get to it does not compromise the integrity of the system’s
physical security. Next.
The default encryption algorithm is strong enough, but there are
other options, if you feel otherwise. For this test system, I chose the
default. Next.
Pick a strong passphrase. Next.
Follow the on-screen instructions, then Next. Next. Next. OK.

Burn.
Insert a blank CD-R in the optical drive, then click Next. After you’re done creating the Truecrypt Rescue Disk (TRD), you can transfer it to a USB stick, if you like that better.
If the TRD is created successfully, click Next.
For stronger protection against recovery of the old, unencrypted data, choose a “Wipe Mode” from the dropdown menu. Next. Test. OK.
If you’ve followed all the steps as specified, there should be no problem here. Encrypt.
It took two hours for the encryption of my test system to complete.
Note that the time it takes is a function of the size of the disk being
encrypted, and the wipe mode you chose. The good thing here is that you
can still be using the system while Truecrypt is completing the task.
Otherwise, take a walk and come back after the estimated time to
completion. Finish.

Monday, October 14, 2013

If you follow the latest versions of… everything and have tried to install
flashcache, you probably noticed that none of the current guides is
correct about how to install it. Or they are mostly correct, but with
some bits missing. So here’s an attempt at a refreshed guide. I’m
using kernel version 3.7.10 and mkinitcpio version 0.13.0 (this actually matters; the interface for adding hooks and modules has changed).
Some of the guide is likely to be Arch-specific. I don’t know how
much, so please watch out if you’re using another system. I’m going to
explain why things are done the way they are, so you can replicate them
under other circumstances.

Why flashcache?

First, what do I want to achieve? I’m setting up a system which has a
large spinning disk (300GB) and a rather small SSD (16GB). Why such a
weird combination? Lenovo allowed me to add a free 16GB SSD drive to the
laptop configuration - couldn’t say no ;) The small disk is not useful
for a filesystem on its own, but if all disk writes/reads were cached on
it before writing them back to the platters, it should give my system a
huge performance gain without a huge money loss. Flashcache can achieve
exactly that. It was written by people working for Facebook to speed up
their databases, but it works just as well for many other usage
scenarios.
Why not other modules like bcache or something else dm-based? Because
flashcache does not require kernel modifications. It’s just a module
and a set of utilities. You get a new kernel and they “just work” again -
no source patching required. I’m excited about the efforts for making bcache part of the kernel and for the new dm cache target coming in 3.9, but for now flashcache is what’s available in the easiest way.
I’m going to set up two SSD partitions because I want to cache two
real partitions. There has to be a persistent 1:1 mapping between the
cache and real storage for flashcache to work. One of the partitions is
home (/home), the other is the root (/).

Preparation

Take backups, make sure you have a bootable installer of your system,
make sure you really want to try this. Any mistake can cost you all the
contents of your harddrive or break your grub configuration, so that
you’ll need an alternative method of accessing your system. Also some of
your “data has been written” guarantees are going to disappear. You’ve
been warned.

Building the modules and tools

First we need the source. Make sure git is installed and clone the flashcache repository: https://github.com/facebook/flashcache
Then build it, specifying the path where the kernel source is located
- in case you’re in the middle of a version upgrade, this is the
version you’re compiling for, not the one you’re using now:
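A sketch of the clone-and-build sequence (the kernel source path is an example; point KERNEL_TREE at the build directory of the kernel you are compiling for):

```shell
git clone https://github.com/facebook/flashcache.git
cd flashcache
# build the module and tools against the target kernel's source tree
make KERNEL_TREE=/usr/lib/modules/3.7.10-1-ARCH/build
sudo make install
```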

The module is the most interesting bit at the moment, but to load the
cache properly at boot time, we’ll need to put those binaries on the
ramdisk.

Configuring ramdisk

An Arch system creates the ramdisk using mkinitcpio (a successor to initramfs, which in turn succeeded initrd) - you can read some more about it at the Ubuntu wiki, for example. The way this works is via hooks configured in /etc/mkinitcpio.conf.
When the new kernel gets created, all hooks from that file are run in
the defined order to build up the contents of what ends up in
/boot/initramfs-linux.img (unless you changed the default).
The runtime scripts live in /usr/lib/initcpio/hooks while the ramdisk building elements live in /usr/lib/initcpio/install. Now the interesting part starts: first let’s place all needed bits into the ramdisk by creating the install hook /usr/lib/initcpio/install/flashcache:
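A sketch of what such an install hook can look like (the tool and udev-rule paths are assumptions; check where make install placed the binaries on your system):

```bash
#!/bin/bash

build() {
    # kernel modules flashcache needs at boot
    add_module "dm-mod"
    add_module "flashcache"

    # make sure the device-mapper directory exists in the ramdisk
    add_dir "/dev/mapper"

    # userspace tool used to activate the cache
    add_binary "/sbin/flashcache_load"

    # udev disk discovery rules (also shipped by the lvm2 hook)
    add_file "/usr/lib/udev/rules.d/10-dm.rules"
    add_file "/usr/lib/udev/rules.d/13-dm-disk.rules"

    # include the matching runtime hook script
    add_runscript
}

help() {
    echo "Activates flashcache volumes before filesystems are mounted."
}
```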

This will add the required modules (dm-mod and flashcache),
make sure the mapper directory is ready, install the tools, and add some
useful udev disk discovery rules. The same rules are included in the lvm2
hook (I assume you’re using it anyway), so there is an overlap, but this
will not cause any conflicts.
The last line of the build function makes sure that the script with
runtime hooks will be included too. That’s the file which needs to
ensure everything is loaded at boot time. It should contain a function
run_hook, which runs after the modules are loaded but before the
filesystems are mounted - a perfect time for additional device
setup. It looks like this and goes into /usr/lib/initcpio/hooks/flashcache:
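A sketch of the runtime hook (reconstructed from the explanation that follows; flashcache_volumes is read from the kernel command line):

```bash
#!/usr/bin/ash

run_hook() {
    modprobe dm-mod
    modprobe flashcache

    # flashcache_volumes is set on the kernel command line, e.g.
    # flashcache_volumes=/dev/sdb1,/dev/sdb2
    for vol in $(echo "$flashcache_volumes" | tr ',' ' '); do
        flashcache_load "$vol"
    done
}
```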

Why the crazy splitting, and where does flashcache_volumes come from?
It’s done so that the values are not hardcoded and adding a volume
doesn’t require rebuilding the initramfs. Each variable set as a kernel boot
parameter is visible in the hook script, so adding flashcache_volumes=/dev/sdb1,/dev/sdb2 will activate both of those volumes. I just add that to the GRUB_CMDLINE_LINUX_DEFAULT variable in /etc/default/grub.
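The splitting itself is plain shell; this standalone snippet (with echo standing in for flashcache_load) shows how the comma-separated value turns into one call per volume:

```shell
#!/bin/sh
# Simulate parsing the flashcache_volumes kernel parameter.
flashcache_volumes="/dev/sdb1,/dev/sdb2"

for vol in $(echo "$flashcache_volumes" | tr ',' ' '); do
    echo "loading $vol"    # the real hook would run: flashcache_load "$vol"
done
```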
In my case sdb1 and sdb2 are partitions on the SSD drive, but you may
need to change those to match your environment.
Additionally, if you’re attempting to have your root filesystem
handled by flashcache, you’ll need two more parameters. One is of course
root=/dev/mapper/cached_system and the second is lvmwait=/dev/mapper/cached_system, to make sure the device is available before the system starts booting.
At this point regenerating the initramfs (sudo mkinitcpio -p linux) should work and print a line about the included flashcache hook.

Finale - fs preparation and reboot

To actually create the initial caching filesystem you’ll have to
prepare the SSD drive. Assuming it’s already split into partitions -
each one for buffering data from a corresponding real partition - you
have to run the flashcache_create app. The details of how to run it and the available modes are described in the flashcache-sa-guide.txt file in the repository, but the simplest example (in my case, to create the root partition cache) is:

flashcache_create -p back cached_system /dev/sdb1 /dev/sda2

which creates a devmapper device called cached_system with fast cache on /dev/sdb1 and backing storage on /dev/sda2.
Now adjust your /etc/fstab to point at the caching devices where
necessary, regenerate your GRUB configuration so it includes the new
parameters, and reboot. If things went well you’ll be running from the
cache instead of directly from the spinning disk.
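For example, the root entry in /etc/fstab would point at the device-mapper name rather than the raw partition (the filesystem type and options here are assumptions):

```
# /etc/fstab - the cached device replaces the raw partition as the root filesystem
/dev/mapper/cached_system  /  ext4  defaults,noatime  0  1
```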

Was it worth the work?

Learning about initramfs and configuring it by hand - of course - it
was lots of fun and I got a ramdisk failing to boot the system only 3
times in the process…
Configuring flashcache - OH YES! It’s a night and day difference. You
can check the stats of your cache device by running dmsetup status
devicename. In my case after a couple of days of browsing, watching
movies, hacking on python and haskell code, I get 92% cache hits on read
and 58% on write on the root filesystem. On home it’s 97% and 91%
respectively. Each partition is 50GB of HDD with an 8GB SSD cache. Since the
cache persists across reboots, startup times have also dropped from ~5
minutes to around a minute in total.
I worked on SSD-only machines before and honestly can’t tell the
difference between them and one with flashcache during standard usage.
The only time when you’re likely to notice a delay is when loading a
new, uncached program and the disk has to spin up for reading.
Good luck with your setup.

A framework for the quick development of websites is a
structure of files and folders of standardized code (HTML, CSS, and
JavaScript documents, among others). These frameworks provide a basis
from which to start building a web site.
These front-end UI frameworks also enable users to dive into responsive
site design. This type of design was inspired by the concept of
responsive architecture - a class of buildings that can alter their
form to continually reflect the environmental conditions that surround
them. In a similar way, a responsive web design seeks to accommodate
the limitations of the device being used. This includes, but is not
limited to, the screen dimensions of the device.
Offering a good presentation experience with a minimum of resizing,
panning and scrolling across a wide range of devices is the key virtue
of responsive design.
There are hundreds of devices that are used to access the web. These
devices have different capabilities and constraints, such as screen
dimensions, input style, resolution, and form. As more and more users
access the web through different devices, in particular tablets and
smartphones, developers need tools to build websites that work well on
all of them. The importance of catering for different devices should
not be underestimated. After all, in a few countries, mobile web
traffic has already overtaken traffic from traditional computers.

There are a number of options available to developers. Some may wish to
build dedicated sites for mobile devices, but this is a time-consuming
solution. A more attractive route is to build a responsive site usable
on all devices, with the site design changing on the fly depending on
the screen resolution and size of the device. Responsive design is the
way forward for making websites accessible to mobile users.
The purpose of this article is to list the finest open source software
that lets you dive into responsive design; the frameworks presented
here make it easy to get started.
Pre-built frameworks get designers up to speed quickly, without
requiring them to first build an intimate knowledge of CSS positioning.
The code is portable, and can be output to documents in a wide array of
formats.
Now, let's explore the 7 frameworks at hand.
For each title we have compiled its own portal page, a full description
with an in-depth analysis of its features, together with links to
relevant resources and reviews.

With Linux users still waiting for an official Google Drive client,
a number of unofficial clients are being used by the Linux community.
In this 4-part series, we will cover four different unofficial Google
Drive clients that you can use till an official client is released by the
search engine giant. In this article, we will discuss a command
line Google Drive client for Linux — Grive.

Grive

A snapshot from the man page of Grive

Grive is an open source Google Drive client for Linux that is
developed in the C++ programming language and released under the GPLv2
license. It uses the Google Documents List API for its interaction with
Google servers.

Testing Environment

OS – Ubuntu 13.04

Shell – Bash (4.2.45)

Application – Grive 0.2.0-1

A Brief Tutorial

Once installed, follow these steps to get started with this Google Drive client:

1. Create a directory to hold your Google Drive files (mine is called gDrive)

2. Change into that directory

3. Run the authorization token command inside the same directory –> grive -a

Ideally, step 3 (mentioned above) should kick-start the authentication process, but because of this known BUG in Ubuntu 13.04, I got the following error:
As a workaround (mentioned in the comments under the bug report), I tried the following command:
After this workaround, I repeated step 3 and this time the
authorization process started. First, a very long URL was produced in
the output, which the user is supposed to open in a web browser. So I
copied it and then opened it in the Firefox web browser.
After accepting the terms and conditions, I was presented with a code. As instructed, I copied the code to the command prompt,
and the authentication process completed. Grive then automatically started syncing the files from my Google Drive account,
and it continued doing so until it finished the syncing process.
After the syncing process completed, I could see all the Google Drive files in my folder gDrive.
Now it was time to test this command line application, so I created a test file named test_grive.txt in the folder gDrive and executed the command grive (to initiate the syncing process; the -a option is not required now) from the same directory.
Once the file was synced, I opened the web interface of my google drive to confirm whether the file was really synced or not.
As you can see, the file test_grive.txt was actually synced back to Google Drive. NOTE – Grive does not sync with the Google Drive servers automatically. You can either create a cron job or create an alias for ‘cd ~/grive && grive‘ to let this command line application sync with the Google Drive servers.
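For example, a crontab entry that runs the sync every five minutes might look like this (the interval and directory are just examples):

```
# m  h  dom mon dow  command
*/5  *  *   *   *    cd ~/grive && grive
```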

Pros

Being command line based, it offers quick syncing with the Google Drive servers

It can be extended easily as it is open source.

Cons

Files and folders with multiple parents are not supported.

Downloading Google documents is also not supported.

Conclusion

Grive is a good command line alternative for those who are still waiting for an official Google Drive client. It does the basic (download, upload) stuff neatly and can be used for day-to-day work. You can give it a try; it won’t disappoint you.