Sunday, December 27, 2015

Docker is the excellent new container application that is
generating much buzz and many silly stock photos of shipping containers.
Containers are not new, so what's so great about Docker? Docker is
built on Linux Containers (LXC). It runs on Linux, is easy to use, and is resource-efficient.

Docker containers are commonly compared with virtual
machines. Virtual machines carry all the overhead of virtualized
hardware running multiple operating systems. Docker containers, however,
dump all that and share only the operating system. Docker can replace
virtual machines in some use cases; for example, I now use Docker in my
test lab to spin up various Linux distributions, instead of VirtualBox.
It's a lot faster, and it's considerably lighter on system resources.

Docker is great for datacenters, as they can run many times
more containers than virtual machines on the same hardware. Docker also makes
packaging and distributing software a lot easier:

"Docker containers
wrap up a piece of software in a complete filesystem that contains
everything it needs to run: code, runtime, system tools, system
libraries -- anything you can install on a server. This guarantees that
it will always run the same, regardless of the environment it is running
in."

Docker runs natively on Linux, and in virtualized
environments on Mac OS X and MS Windows. The good Docker people have
made installation very easy on all three platforms.

Installing Docker

That's enough gasbagging; let's open a terminal and have
some fun. The best way to install Docker is with the Docker installer,
which is amazingly thorough. Note how it detects my Linux distro version
and pulls in dependencies. The output is abbreviated to show the
commands that the installer runs:
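A common way to fetch and run the installer looks like this (the script URL is an assumption; check docs.docker.com for the current instructions):

```shell
# Download Docker's convenience install script and run it
# (URL assumed; verify against the official documentation)
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
```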

As you can see, it uses standard Linux commands. When it's finished, you should add yourself to the docker group so that you can run it without root permissions. (Remember to log out and then back in to activate your new group membership.)
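The group change is typically done with usermod:

```shell
# Add the current user to the docker group; takes effect at next login
sudo usermod -aG docker "$USER"
```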

Hello World!

We can run a Hello World example to test that Docker is installed correctly:

$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
[snip]
Hello from Docker.
This message shows that your installation appears to be working correctly.

This downloads and runs the hello-world image from Docker Hub, which hosts a library of Docker images that you can access with a simple registration. You can also upload and share your own images. Docker provides a fun test image to play with, Whalesay. Whalesay is an adaptation of Cowsay that draws the Docker whale instead of a cow (see Figure 1 above).

$ docker run docker/whalesay cowsay "Visit Linux.com every day!"

The first time you run a new image from Docker Hub, it is downloaded to your computer; after that, Docker uses your local copy. You can list the images installed on your system.
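For example:

```shell
# List the images stored locally
docker images
```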

Build a Docker Image

Now let's build our own Docker image. Docker Hub has a lot of prefab images to play with (Figure 2), and that's the best way to start because building one from scratch is a fair bit of work. (There is even an empty scratch image for building your image from the ground up.) There are many distro images, such as Ubuntu, CentOS, Arch Linux, and Debian.

Figure 2: Docker Hub.

We'll start with a plain Ubuntu image. Create a directory for your Docker project, change to it, and create a new Dockerfile with your favorite text editor.

$ mkdir dockerstuff
$ cd dockerstuff
$ nano Dockerfile

Enter a single line in your Dockerfile:

FROM ubuntu

Now build your new image and give it a name. In this example the name is testproj. Make sure to include the trailing dot:
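A sketch of the build and a quick test run (testproj is the name chosen above):

```shell
# Build an image tagged "testproj" from the Dockerfile in the
# current directory; the trailing dot is the build context
docker build -t testproj .

# Start an interactive shell inside the new image
docker run -it testproj /bin/bash
```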

The real power of Docker lies in creating Dockerfiles that allow you to create customized images and quickly replicate them whenever you want. This simple example shows how to create a bare-bones Apache server. First, create a new directory, change to it, and start a new Dockerfile that includes the following lines.
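A minimal Dockerfile along these lines might look like the following (the package name and the foreground invocation are assumptions for an Ubuntu base image):

```dockerfile
FROM ubuntu
RUN apt-get update && apt-get install -y apache2
EXPOSE 80
CMD ["apachectl", "-D", "FOREGROUND"]
```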

This will take a little while as it downloads and installs the Apache packages. You'll see a lot of output on your screen, and when you see "Successfully built 538fea9dda79" (with a different hash, of course), your image built successfully. Now you can run it in the background:
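Assuming the image was tagged apacheserv when you built it (the tag here is a placeholder for whatever you chose), you can start it detached:

```shell
# -d runs the container in the background; -p maps host port 8080
# to the container's port 80
docker run -d -p 8080:80 apacheserv
```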

A more comprehensive Dockerfile could install a complete LAMP stack, load Apache modules, configuration files, and everything you need to launch a complete Web server with a single command.

We have come to the end of this introduction to Docker, but don't stop now. Visit docs.docker.com to study the excellent documentation and try a little Web searching for Dockerfile examples. There are thousands of them, all free and easy to try.

Saturday, December 26, 2015

RegRipper is open source forensic software for extracting data from the
Windows Registry, available as both a command-line and a GUI tool. It is
written in Perl, and this article describes installing the RegRipper
command-line tool on Linux systems such as Debian, Ubuntu, Fedora,
CentOS, or Red Hat. For the most part, the installation process is
OS-agnostic, except for the part where we install the prerequisites.

1. Pre-requisites

First, we need to install all prerequisites. Choose the relevant command below based on the Linux distribution you are running:
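The package names below are assumptions for the respective distributions; the goal is simply a working Perl toolchain plus unzip and wget:

```shell
# Debian/Ubuntu
apt-get install perl cpanminus make unzip wget

# Fedora/CentOS/Red Hat
yum install perl perl-App-cpanminus make unzip wget
```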

2. Installation of required libraries

The RegRipper command-line tool depends on the Perl Parse::Win32Registry library. The following commands take care of this prerequisite and install the library into the /usr/local/lib/rip-lib directory:
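One way to do this is with cpanm's local-install option (a sketch; the exact layout under rip-lib depends on your Perl version):

```shell
# Install Parse::Win32Registry under /usr/local/lib/rip-lib;
# modules end up under /usr/local/lib/rip-lib/lib/perl5
cpanm -l /usr/local/lib/rip-lib Parse::Win32Registry
```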

3. RegRipper script installation

At this stage we are ready to install the rip.pl script. The
script is intended to run on MS Windows systems, so we need
to make some small modifications. We will also include a path to the
Parse::Win32Registry library installed above.
Download the RegRipper source code from https://regripper.googlecode.com/files/. The current version is 2.8:

# wget -q https://regripper.googlecode.com/files/rrv2.8.zip

Extract rip.pl script:

# unzip -q rrv2.8.zip rip.pl

Remove the interpreter line and the unwanted DOS newline characters (^M):

# tail -n +2 rip.pl > rip
# perl -pi -e 'tr[\r][]d' rip

Modify the script to include an interpreter relevant to your Linux system, along with the library path to Parse::Win32Registry:
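For example, on a system where Perl lives at /usr/bin/perl, prepending a shebang with the library path can be done with sed (the perl5 subdirectory is an assumption based on the install step above):

```shell
# Prepend a Unix interpreter line that also adds the library path
sed -i '1i #!/usr/bin/perl -I/usr/local/lib/rip-lib/lib/perl5' rip
chmod +x rip
```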

A sudden outburst of violent disk I/O activity can bring down your email
or web server. Typically, web, MySQL, or mail servers serving millions
of pages (requests) per month are prone to this kind of problem. Backup
activity can increase the current system load, too. To avoid such sudden
outbursts, run your scripts with an appropriate scheduling class and
priority. Linux comes with various utilities to manage this kind of
madness.

CFQ scheduler

You need Linux kernel 2.6.13 or later with the CFQ I/O scheduler. CFQ
(Completely Fair Queuing) is an I/O scheduler for the Linux kernel and
is the default in 2.6.18+ kernels. RHEL 4/5 and SUSE Linux ship with all
the schedulers built into the kernel, so there is no need to rebuild
your kernel. To find out your scheduler name, enter:

# for d in /sys/block/sd[a-z]/queue/scheduler; do echo "$d => $(cat $d)" ; done

Sample output for each disk:
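The output will look something like this (illustrative; the bracketed entry is the active scheduler):

```
/sys/block/sda/queue/scheduler => noop anticipatory deadline [cfq]
/sys/block/sdb/queue/scheduler => noop anticipatory deadline [cfq]
```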

The good old nice program
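The classic nice utility controls only CPU priority, but it is still useful alongside I/O scheduling. A quick sketch (the script path is a placeholder):

```shell
# Run a backup script at the lowest CPU priority (niceness 19)
nice -n 19 /path/to/backup.sh
```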

Say hello to ionice utility

The ionice command provides better control than the nice command over
the I/O scheduling class and priority of a program or script. It
supports the following three scheduling classes (quoting from the man page):

Idle: A program running with idle I/O priority will only get disk time
when no other program has asked for disk I/O for a defined grace period.
The impact of idle I/O processes on normal system activity should be
zero. This scheduling class does not take a priority argument.

Best effort: This is the default scheduling class for any process that
hasn't asked for a specific I/O priority. Programs inherit the CPU nice
setting for I/O priorities. This class takes a priority argument from
0-7, with a lower number being higher priority. Programs running at the
same best-effort priority are served in round-robin fashion. This class
is usually recommended for most applications.

Real time: The RT scheduling class is given first access to the disk,
regardless of what else is going on in the system. Thus the RT class
needs to be used with some care, as it can starve other processes. As
with the best-effort class, 8 priority levels are defined denoting how
big a time slice a given process will receive in each scheduling window.
This class should be avoided on heavily loaded systems.

Syntax

The syntax is:

ionice [options] command
ionice [options] -p PID
ionice -c1 -n0 -p PID

How do I use the ionice command on Linux?

Linux refers to the scheduling classes using the following numbers and priorities:

Scheduling class   Number   Possible priority
real time          1        8 priority levels, denoting how big a time slice the process receives in each scheduling window
best effort        2        0-7, with a lower number being higher priority
idle               3        none (takes no priority argument)
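Putting it together, typical invocations look like this (the PID and script path are placeholders):

```shell
# Run a backup script in the idle class (-c3): it only gets disk
# time when nothing else wants it
ionice -c3 /path/to/backup.sh

# Move an already-running process (PID 1234) to the best-effort
# class (-c2) at low priority (-n7)
ionice -c2 -n7 -p 1234
```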

Puppet is an automation tool which allows you to automate the configuration of software like Apache and Nginx across multiple servers.

Puppet installation

In this tutorial we will be installing Puppet in the Puppet/Agent mode. You can install it in a stand-alone mode as well.

OS & software versions

CentOS 6.5
Linux kernel 2.6.32
Puppet 3.6.2

Let's get to it then.

Puppet server configuration

info: Creating a new SSL key for vps.client.com
warning: peer certificate won't be verified in this SSL session
info: Caching certificate for ca
warning: peer certificate won't be verified in this SSL session
warning: peer certificate won't be verified in this SSL session
info: Creating a new SSL certificate request for agent1.localdomain
info: Certificate Request fingerprint (md5): FD:E7:41:C9:5C:B7:5C:27:11:0C:8F:9C:1D:F6:F9:46
warning: peer certificate won't be verified in this SSL session
warning: peer certificate won't be verified in this SSL session
warning: peer certificate won't be verified in this SSL session
Exiting; no certificate found and waitforcert is disabled

Puppet uses SSL to communicate with its clients. When you start Puppet
on a client, it will automatically connect to the puppet server named in
its conf file and request that its certificate be signed.
On the puppet server, run:
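With Puppet 3.x, the pending request can be listed and signed as follows:

```shell
# List outstanding certificate signing requests
puppet cert list

# Sign the client's certificate
puppet cert sign vps.client.com
```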

Now our client server "vps.client.com" is authorized to fetch and
apply configurations from the puppet server. To understand how Puppet
SSL works and to troubleshoot any issues, you can read
http://docs.puppetlabs.com/learning/agent_master_basic.html
Let's look at a sample puppet configuration.

Installing apache web server with puppet
Although puppet server configuration is stored in
“/etc/puppet/puppet.conf”, client configurations are stored in files
called manifests.
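A manifest matching the description below (placed, for example, in /etc/puppet/manifests/site.pp) would be:

```puppet
node 'vps.client.com' {
  package { 'httpd':
    ensure => present,
  }
}
```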

The configuration is pretty self-explanatory: the first line
indicates that we need to install this configuration on a client machine
with the hostname 'vps.client.com'. If you want to apply the
configuration to the puppet server itself, replace 'vps.client.com' with
'default'. Read the documentation on node definitions for multiple-node
configurations.
The next two lines tell Puppet that we need to ensure the Apache
web server is installed. Puppet will check whether Apache is installed
and, if not, install it.
Think of a “package” as an object, “httpd” as the name of the object
and “ensure => present” as the action to be performed on the object.
So if I wanted Puppet to install a MySQL database server, the configuration would be:

node 'vps.client.com' {
  package { 'mysql-server':
    ensure => installed,
  }
}
The puppet server will compile this configuration into a catalog and serve it to a client when a request is sent.

How do I pull my configuration to a client immediately?

Puppet clients usually pull configuration once every 30 minutes, but you can pull a configuration immediately by running "service puppet restart" or the following command:

[user@puppet ~]# sudo puppet agent --test

What if I wanted puppet to add a user ‘Tom’?
Then the object would be user, the name of the object would be ‘tom’ and the action would be ‘present’.

node 'vps.client.com' {
  user { 'tom':
    ensure => present,
  }
}

In Puppet terms, these objects are known as Resources, the names of the objects are Titles, and the actions are called Attributes.
Puppet has a number of these resources to help ease your automation;
you can read about them at
http://docs.puppetlabs.com/references/latest/type.html

How to ensure a service is running with puppet?

Once you have a package like Apache installed, you will want to ensure
that it is running. On the command line you would do this with the
service command; in Puppet, you add the configuration to the manifest
file as follows.
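A sketch of such a manifest, combining the package with a service resource:

```puppet
node 'vps.client.com' {
  package { 'httpd':
    ensure => present,
  } ->
  service { 'httpd':
    ensure => running,
    enable => true,
  }
}
```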

Now you must have noticed that I added an "->" symbol. This is
because Puppet is not particular about ordering, but we want the service
to start only after Apache is installed and not before; the arrow tells
Puppet to run the service resource only after "httpd" is installed.
To learn more, read the Puppet documentation on ordering.

How to automate installation of predefined conf files?

You may want a customized Apache conf file for this client, containing
the vhost entry and other specific parameters you choose. In this case
we need to use the file resource.
Before we go into the configuration, you should know how puppet serves files. A Puppet server provides access to custom files via mount points. One such mount point by default is the modules directory.
The modules directory is where you would add your
modules. Modules make it easier to reuse configurations, rather than
having to write configurations for every node we can store them as a
module and call them whenever we like.
In order to write a module, you need to create a subdirectory inside
the modules directory with the module name and create a manifest file
called init.pp which should contain a class with the same name as the
subdirectory.
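For a module named httpd, the class in /etc/puppet/modules/httpd/manifests/init.pp might look like this (the file resource shown is a sketch for the conf-file use case discussed next):

```puppet
class httpd {
  file { '/etc/httpd/conf/httpd.conf':
    ensure => file,
    source => 'puppet:///modules/httpd/httpd.conf',
  }
}
```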

You need to add your custom httpd.conf file to the files subdirectory located at "/etc/puppet/modules/httpd/files/".
To understand how the URI in the source attribute works, read http://docs.puppetlabs.com/guides/file_serving.html
Now, call the module in our main manifest file.
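Calling the module is just an include inside the node definition:

```puppet
node 'vps.client.com' {
  include httpd
}
```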

In case you need a web interface to manage your Linux servers, read my tutorial Using Foreman, an Opensource Frontend for Puppet.

Update: For more automation and other system administration/DevOps guides, see https://github.com/Leo-G/DevopsWiki

Puppet FAQ

How do I change the time interval for a client to fetch its configuration from the server?

Add "runinterval = 3600" under the [main] section in "/etc/puppet/puppet.conf" on the client. Time is in seconds.

How do I install modules from puppet forge?
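Modules are installed with the puppet module subcommand; for example, to install the puppetlabs-apache module from the Forge (the module name here is illustrative):

```shell
puppet module install puppetlabs-apache
```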

A Denial-of-Service (DoS) attack is an attempt to make
a machine or network resource unavailable to its intended users, for
example by temporarily or indefinitely interrupting or suspending the
services of a host connected to the Internet. A distributed
denial-of-service (DDoS) attack is one where the attack comes from more
than one (and often thousands of) unique IP addresses.

What is mod_evasive?

mod_evasive is an evasive maneuvers module for Apache that provides
evasive action in the event of an HTTP DoS, DDoS, or brute-force
attack. It is also designed to be a detection and network management
tool, and can easily be configured to talk to ipchains, firewalls,
routers, and so on. mod_evasive presently reports abuses via email
and syslog.

Installing mod_evasive

Server Distro: Debian 8 jessie

Server IP: 10.42.0.109

Apache Version: Apache/2.4.10

mod_evasive is available in the official Debian repository, so we will install it using apt:

# apt-get update
# apt-get install libapache2-mod-evasive

Setting up mod_evasive

We have mod_evasive installed but not yet configured. Its configuration lives at /etc/apache2/mods-available/evasive.conf; we will be editing that file, which should look similar to this.
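A typical evasive.conf resembles the following (the values shown are common example defaults, not necessarily what your package ships; the directives are explained below):

```apache
<IfModule mod_evasive20.c>
    DOSHashTableSize    3097
    DOSPageCount        2
    DOSSiteCount        50
    DOSPageInterval     1
    DOSSiteInterval     1
    DOSBlockingPeriod   10
    #DOSEmailNotify     admin@example.com
    #DOSSystemCommand   "somecommand %s"
    DOSLogDir           "/var/log/mod_evasive"
</IfModule>
```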

mod_evasive Configuration Directives

DOSHashTableSize:
This directive defines the hash table size, i.e., the number of top-level
nodes for each child's hash table. Increasing this number provides
faster performance by decreasing the number of iterations required to
get to the record, but consumes more memory for table space. It is
advisable to increase this parameter on heavily loaded web servers.

DOSPageCount:
This sets the threshold for the total number of hits on the same page
(or URI) per page interval. Once this threshold is reached, the client
IP is locked out: its requests are rejected with a 403 response and the
IP is added to the blacklist.

DOSSiteCount:
This sets the threshold for the total number of requests for any object
by the same client IP per site interval. Once this threshold is reached,
the client IP is added to the blacklist.

DOSPageInterval:
The page count interval. Accepts a real number as seconds; the default value is 1 second.

DOSSiteInterval:
The site count interval. Accepts a real number as seconds; the default value is 1 second.

DOSBlockingPeriod:
This directive sets the amount of time a client is blocked once added
to the blocking list. During this time, all subsequent requests from the
client result in a 403 (Forbidden) response and the timer is reset
(e.g., for another 10 seconds). Since the timer is reset on every
subsequent request, it is not necessary to have a long blocking period;
in the event of a DoS attack, the timer will keep getting reset. The
interval is specified in seconds and may be a real number.

DOSEmailNotify:
If an e-mail address is provided, a notification is sent to it whenever an IP is blacklisted.

DOSSystemCommand:
A system command to execute whenever an IP is blacklisted, if enabled.
%s is replaced by the blacklisted IP; this is designed for system calls
to IP filters or other tools.

DOSLogDir:
The directory where mod_evasive stores its logs.

This configuration works well for me, and I recommend it if you are
unsure how to tune the directives.

Checking the Apache access logs at /var/log/apache2/access.log, we can see that all connections from ApacheBench/2.3 were rejected with 403 responses.
You see, with mod_evasive you can mitigate a DoS attack; that's something Nginx doesn't have ;)

I can use the "smartctl -d ata -a /dev/sdb"
command to read the health status of a hard disk connected directly to
my system. But how do I use the smartctl command to check a SAS or SCSI
disk sitting behind an Adaptec RAID controller from the shell prompt on
Linux? You need a different syntax here, because the controller
typically presents one (logical) disk to the OS for each array of
(physical) disks. The /dev/sgX devices can be used as pass-through I/O
controls, providing direct access to each physical disk behind an
Adaptec RAID controller.
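With smartmontools, the pass-through devices are queried like this (replace /dev/sg1 with the device for the disk you want to inspect):

```shell
# SATA disk behind the Adaptec controller
smartctl -d sat --all /dev/sg1

# SAS/SCSI disk behind the controller
smartctl -d scsi --all /dev/sg1
```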

Please note that newer versions of arcconf are located in the /usr/Adaptec_Event_Monitor directory, so the full path must be as follows:

# /usr/Adaptec_Event_Monitor/arcconf getconfig [AD | LD [LD#] | PD | MC | [AL]] [nologs]

where AD reports adapter information, LD logical device information, PD physical device information, MC MaxCache information, and AL all of the above.

Friday, December 25, 2015

How do I find out if a web page is gzipped
or compressed using the Unix command-line utility curl? How do I
make sure mod_deflate or mod_gzip is working under the Apache web server?
When content is compressed, downloads are faster because the files are
smaller; in many cases, less than a quarter the size of the original.
This is very useful for JavaScript, CSS, and HTML files: faster
downloads translate into faster rendering of web pages for the
end user. The mod_deflate
or mod_gzip Apache module provides the DEFLATE output filter, which
allows output from your server to be compressed before being sent to the
client over the network. Most modern web browsers support this feature.
You can use the curl command to find out whether a web page is gzipped
using the following simple syntax.
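For example, requesting only the headers shows whether the server compresses the response (example.com stands in for your own URL; look for a Content-Encoding: gzip line in the output):

```shell
# -I fetches headers only; -H sends the Accept-Encoding request header
curl -I -L -H 'Accept-Encoding: gzip,deflate' http://example.com/
```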

-H 'Accept-Encoding: gzip,deflate' - Send extra header in the request when sending HTTP to a server.

-L
If the server reports that the requested page has moved to a different
location (indicated with a Location: header and a 3XX response
code), this option makes curl redo the request at the new location.

Monday, December 14, 2015

UEFI (Unified Extensible Firmware Interface) is the open, multi-vendor
replacement for the aging BIOS standard, which dates back to the
mid-1970s. The UEFI standard is extensive, covering the full
boot architecture. This article focuses on a single useful but typically
overlooked feature of UEFI: secure boot.
Often maligned, UEFI secure boot is something you have probably
encountered only when disabling it during the initial setup of your
computer. Indeed, the introduction of secure boot was mired in
controversy over Microsoft being in charge of signing third-party
operating system code that would boot under a secure boot environment.
In this article, we explore the basics of secure boot and how to take
control of it. We describe how to install your own keys and sign your
own binaries with those keys. We also show how you can build a single
standalone GRUB EFI binary, which will protect your system from
tampering, such as cold-boot attacks. Finally, we show how full disk
encryption can be used to protect the entire hard disk, including the
kernel image (which ordinarily needs to be stored unencrypted).

UEFI Secure Boot

Secure boot is designed to protect a system against malicious code being
loaded and executed early in the boot process, before the operating
system has been loaded. This is to prevent malicious software from
installing a "bootkit" and maintaining control over a computer to mask
its presence. If an invalid binary is loaded while secure boot is
enabled, the user is alerted, and the system will refuse to boot the
tampered binary.
On each boot-up, the UEFI firmware inspects each EFI binary that is
loaded and ensures that it has either a valid signature (backed by a
locally trusted certificate) or that the binary's checksum is present on
an allowed list. It also verifies that the signature or checksum does
not appear in the deny list. Lists of trusted certificates or checksums
are stored as EFI variables within the non-volatile memory used by the
UEFI firmware environment to store settings and configuration data.

UEFI Key Overview

The four main EFI variables used for secure boot are shown in Figure a.
The Platform Key (often abbreviated to PK) offers full control
of the secure boot key hierarchy. The holder of the PK can install a new
PK and update the KEK (Key Exchange Key). This is a second key, which
either can sign executable EFI binaries directly or be used to sign the
db and dbx databases. The db (signature database) variable contains a
list of allowed signing certificates or the cryptographic hashes of
allowed binaries. The dbx is the inverse of db, and it is used as a
blacklist of specific certificates or hashes, which otherwise would have
been accepted, but which should not be able to run. Only the KEK and db
(shown in green) keys can sign binaries that may boot the system.
Figure a. Secure Boot Keys
The PK on most systems is issued by the manufacturer of the hardware,
while a KEK is held by the operating system vendor (such as Microsoft).
Hardware vendors also commonly have their own KEK installed (since
multiple KEKs can be present). To take full ownership of a computer
using secure boot, you need to replace (at a minimum) the PK and KEK, in
order to prevent new keys being installed without your consent. You
also should replace the signature database (db) if you want to prevent
commercially signed EFI binaries from running on your system.
Secure boot is designed to allow someone with physical control over a
computer to take control of the installed keys. A pre-installed
manufacturer PK can be programmatically replaced only by signing it with
the existing PK. With physical access to the computer, and access to the
UEFI firmware environment, this key can be removed and a new one
installed. Requiring physical access to the system to override the
default keys is an important security requirement of secure boot to
prevent malicious software from completing this process. Note that some
locked-down ARM-based devices implement UEFI secure boot without the
ability to change the pre-installed keys.

Testing Procedure

You can follow these procedures on a physical computer, or alternatively
in a virtualized instance of the Intel Tianocore reference UEFI
implementation. The ovmf package available in
most Linux distributions includes this. The QEMU virtualization tool can
launch an instance of ovmf for experimentation. Note
that the fat
argument specifies that a directory, storage, will be presented to the
virtualized firmware as a persistent storage volume. Create this
directory in the current working directory, and launch QEMU:
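A QEMU invocation along these lines works (the OVMF.fd path varies by distribution and is an assumption here):

```shell
mkdir -p storage
# Boot the OVMF/Tianocore firmware with the storage directory
# exposed to the guest as a FAT volume
qemu-system-x86_64 -bios /usr/share/ovmf/OVMF.fd \
    -hda fat:storage -net none
```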

Files present in this folder when starting QEMU will appear as a volume
to the virtualized UEFI firmware. Note that files added to it after
starting QEMU will not appear in the system—restart QEMU and they
will
appear. This directory can be used to hold the public keys you want to
install to the UEFI firmware, as well as UEFI images to be booted later
in the process.

Generating Your Own Keys

Secure boot keys are self-signed 2048-bit RSA keys, in X.509 certificate
format. Note that most implementations do not support key lengths
greater than 2048 bits at present. You can generate a 2048-bit keypair
(with a validity period of 3650 days, or ten years) with the following
openssl command:
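For the PK, for example:

```shell
# Generate a self-signed 2048-bit RSA key pair, valid for 10 years
openssl req -new -x509 -newkey rsa:2048 \
    -subj "/CN=My Platform Key/" -keyout PK.key -out PK.crt \
    -days 3650 -nodes -sha256
```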

The CN subject can be customized as you wish, and its value is not
important. The resulting PK.key is a private key, and PK.crt is the
corresponding certificate (containing the public key), which you will
install into the UEFI firmware shortly. You should store the private
key securely on an encrypted storage device in a safe place.
Now you can carry out the same process for both the KEK and for the db key.
Note that the db and KEK EFI variables can contain multiple keys
(and in the case of db, SHA256 hashes of bootable binaries), although
for simplicity, this article considers only storing a single certificate
in each. This is more than adequate for taking control of your own
computer. Once again, the .key files are private keys, which should be
stored securely, and the .crt files are public certificates to be
installed into your UEFI system variables.

Taking Ownership and Installing Keys

Every UEFI firmware interface differs, and it is therefore not possible
to provide step-by-step instructions on how to install your own keys.
Refer to your motherboard or laptop's instruction manual, or search
on-line for the maker of the UEFI firmware. Enter the UEFI firmware
interface, usually by holding a key down at boot time, and locate the
security menu. Here there should be a section or submenu for secure
boot. Change the mode control to "custom" mode. This should allow you to
access the key management menus.
Figure 1. Enabling Secure Boot and Entering Custom Mode
At this point, you should make a backup of the UEFI platform keys
currently installed. You should not need this, since there should be an
option within your UEFI firmware interface to restore the default keys,
but it does no harm to be cautious. There should be an option to export
or save the current keys to a USB Flash drive. It is best to format this
with the FAT filesystem if you have any issues with it being detected.
After you have copied the backup keys somewhere safe, load the public
certificate (.crt) files you created previously onto the USB Flash
drive. Take care not to mix them up with the backup certificates from
earlier. Enter the UEFI firmware interface, and use the option to reset
or clear all existing secure boot keys.
Figure 2. Erasing the Existing Platform Key
This also might be referred to as "taking ownership" of secure boot.
Your system is now in secure boot "setup" mode, which will remain until
a new PK is installed. At this point, the EFI PK variable is unprotected
by the system, and a new value can be loaded in from the UEFI firmware
interface or from software running on the computer (such as an
operating system).
Figure 3. Loading a New Key from a Storage Device
At this point, you should disable secure boot temporarily, in order to
continue following this article. Your newly installed keys will remain
in place for when secure boot is enabled.

Signing Binaries

After you have installed your custom UEFI signing keys, you need to sign
your own EFI binaries. There are a variety of different ways to build
(or obtain) these. Most modern Linux bootloaders are EFI-compatible (for
example, GRUB 2, rEFInd or gummiboot), and the Linux kernel itself can
be built as a bootable EFI binary since version 3.3. It's possible to
sign and boot any valid EFI binary, although the approach you take here
depends on your preference.
One option is to sign the kernel image directly. If your distribution
uses a binary kernel, you would need to sign each new kernel update
before rebooting your system. If you use a self-compiled kernel, you
would need to sign each kernel after building it. This approach, however,
requires you to keep on top of kernel updates and sign each image. This
can become arduous, especially if you use a rolling-release
distribution or test mainline release candidates. An alternative, and
the approach we used in this article, is to sign a locked-down
UEFI-compatible bootloader (GRUB 2 in the case of this article), and use
this to boot various kernels from your system.
Some distributions configure GRUB to validate kernel image signatures
against a distribution-specified public key (with which they sign all
kernel binaries) and disable editing of the kernel
cmdline variable when
secure boot is in use. You therefore should refer to the documentation
for your distribution, as the section on ensuring your boot images are
encrypted would not be essential in this case.
The Linux sbsigntools package is available from
the repositories of most Linux distributions and is a good first
port of call when signing UEFI binaries. UEFI secure boot binaries
should be signed with an Authenticode-format signature. The command of
interest is sbsign, which is invoked as follows:

sbsign --key DB.key --cert DB.crt unsigned.efi \
--output signed.efi

Due to subtle variations in the implementation of the UEFI standards,
some systems may reject a correctly signed binary from
sbsign. The best alternative we found was to use the
osslsigncode utility, which also generates
Authenticode signatures. Although this tool was not specifically intended
for use with secure boot, it produces signatures that match the
required specification. Since osslsigncode does
not appear to be commonly included in distribution repositories, you
should build it from its source code. The process is relatively
straightforward and simply requires running make,
which will produce the executable binary. If you encounter
any issues, ensure you have installed openssl
and curl, which are dependencies of the
package. (See Resources for a link to the source code
repository.)
Binaries are signed with osslsigncode in a
similar manner to sbsign (note that the hash is
defined as sha256 per the UEFI specification; this should not be
altered):
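The invocation follows osslsigncode's usual signing syntax (the EFI file names are placeholders):

```shell
osslsigncode sign -certs DB.crt -key DB.key -h sha256 \
    -in grubx64.efi -out grubx64-signed.efi
```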

Booting with UEFI

After you have signed an EFI binary (such as the GRUB bootloader
binary), the obvious next step is to test it. Computers using the legacy
BIOS boot technology load the initial operating system bootloader from
the MBR (master boot record) of the selected boot device. The MBR
contains code to load a further (and larger) bootloader held within the
disk, which loads the operating system. In contrast, UEFI is designed to
allow for more than one bootloader to exist on one drive, without the
need for those bootloaders to cooperate or even know the others exist.
Bootable UEFI binaries are located on a storage device (such as a hard
disk) within a standard path. The partition containing these binaries is
referred to as the EFI System Partition. It has a partition ID of 0xEF00
in gdisk, the GPT-compatible equivalent to fdisk. This partition is
conventionally located at the beginning of the filesystem and formatted
with a FAT32 filesystem. UEFI-bootable binaries are then stored as files
in the EFI/BOOT/ directory.
This signed binary should now boot if it is placed at EFI/BOOT/BOOTX64.EFI within the EFI system partition or an
external drive, which is set as the boot device. It is possible to have
multiple EFI binaries available on one EFI system partition, which makes
it easier to create a multi-boot setup. For that to work, however, the
UEFI firmware needs a boot entry created in its non-volatile memory.
Otherwise, the default filename (BOOTX64.EFI) will be used, if it exists.
To add a new EFI binary to your firmware's list of available binaries,
you should use the efibootmgr utility. This
tool can be found in distribution repositories and often is used
automatically by the installers for popular bootloaders, such as GRUB.
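As an illustrative sketch (the disk, partition number, label and loader path here are assumptions for the example, not values from this article), a boot entry could be created as follows:

```shell
# Register a signed GRUB binary as a new UEFI boot entry (requires root
# on a UEFI-booted system; adjust disk, partition and path to suit).
efibootmgr --create --disk /dev/sda --part 1 \
    --label "Signed GRUB" \
    --loader '\EFI\custom\grubx64.efi'
```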
At this point, you should re-enable secure boot within your UEFI
firmware. To ensure that secure boot is operating correctly, you should
attempt to boot an unsigned EFI binary. To do so, you can place a binary
(such as an unsigned GRUB EFI binary) at EFI/BOOT/BOOTX64.EFI on a FAT32-formatted USB Flash drive.
Use the UEFI firmware interface to set this drive as the current boot
drive, and ensure that a security warning appears, which halts the boot
process. You also should verify that an image signed with the default
UEFI secure boot keys does not boot—an Ubuntu 12.04 (or newer) CD or
bootable USB stick should allow you to verify this. Finally, you should
ensure that your self-signed binary boots correctly and without error.

Installing Standalone GRUB

By default, the GRUB bootloader uses a configuration file stored at
/boot/grub/grub.cfg. Ordinarily, this file could be edited by anyone
able to modify the contents of your /boot partition, either by booting
to another OS or by placing your drive in another computer.

Bootloader Security

Prior to the advent of secure boot and UEFI, someone with physical
access to a computer was presumed to have full access to it. User
passwords could be bypassed by simply adding
init=/bin/bash to the
kernel cmdline parameter, and the computer would boot straight up into a
root shell, with full access to all files on the system.
Setting up full disk encryption is one way to protect your data from
physical attack—if the contents of the hard disk are encrypted, the
disk must be decrypted before the system can boot. It is not possible to
mount the disk's partitions without the decryption key, so the data is
protected.
Another approach is to prevent an attacker from altering the kernel
cmdline parameter. This approach is easily bypassed on most computers,
however, by installing a new bootloader. This bootloader need not
respect the restrictions imposed by the original bootloader. In many
cases, replacing the bootloader may prove unnecessary—GRUB and other
bootloaders are fully configurable by means of a separate configuration
file, which could be edited to bypass security restrictions, such as
passwords.
Therefore, there would be no real security advantage in signing the GRUB
bootloader alone, since the signed (and verified) bootloader would then
load unsigned modules from the hard disk and use an unsigned
configuration file. By having GRUB create a single, bootable EFI binary
containing all the necessary modules and the configuration file, the
signature covers those modules and that configuration as well. After
signing such a GRUB binary, it cannot be modified without secure boot
rejecting it and refusing to load. This failure would alert you that
someone had attempted to compromise your computer by modifying the bootloader.
As mentioned earlier, this step may not be necessary on some
distributions, as their GRUB bootloader automatically will enforce
similar restrictions and checks on kernels when booted with secure boot
enabled. So, this section is intended for those who are not using
such a distribution or who wish to implement something similar
themselves for learning purposes.
To create a standalone GRUB binary, the
grub-mkstandalone
tool is needed. This tool should be included as part of
recent GRUB2 distribution packages (paths and module lists may vary by
distribution):

grub-mkstandalone --directory /usr/lib/grub/x86_64-efi \
-O x86_64-efi --modules "part_gpt part_msdos" \
-o grub-standalone.efi \
"boot/grub/grub.cfg=/boot/grub/grub.cfg"

A more detailed explanation of the arguments used here is available on
the man page for grub-mkstandalone. The
significant arguments are -o, which specifies the output file to be
used, and the final string argument, specifying the path to the current
GRUB configuration file. The resulting standalone GRUB binary is
directly bootable and contains a memdisk, which holds the modules and
the configuration file. This GRUB binary
now can be signed and used to boot the system. Note that this process
should be repeated when the GRUB configuration file is re-generated,
such as after adding a new kernel, changing boot parameters or after
adding a new operating system to the list, since the embedded
configuration file will be out of date with the regular system one.

A Licensing Warning

As GRUB 2 is licensed under the GPLv3 (or later), this raises one
consideration to be aware of. It is not a concern for individual users,
who can simply install new secure boot keys and boot a modified
bootloader. However, if the GRUB 2 bootloader (or indeed any other
GPLv3-licensed bootloader) were signed with a private signing key, and
the distributed computer system were designed to prevent the use of
unsigned bootloaders, use of the GPLv3-licensed software would not be in
compliance with the license. This is a result of the so-called
anti-tivoization clause of GPLv3, which requires that users be able to
install and execute their own modified versions of GPLv3 software on a
system, without being technically restricted from doing so.

Locking Down GRUB

To prevent a malicious user from modifying the kernel cmdline of your
system (for example, to point to a different init binary), a GRUB
password should be set. GRUB passwords are stored within the
configuration file, after being hashed with a cryptographic hashing
function. Generate a password hash with the
grub-mkpasswd-pbkdf2 command, which will prompt you to
enter a password.
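A typical session looks like this (the tool prompts interactively; the hash shown is truncated here):

```shell
# Generate a PBKDF2 hash of a password for use in the GRUB configuration
grub-mkpasswd-pbkdf2
# Enter password:
# Reenter password:
# PBKDF2 hash of your password is grub.pbkdf2.sha512.10000.<long hash>
```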
The PBKDF2 function is a slow hash, designed to be computationally
intensive and to resist brute-force attacks against the password. If
desired, the cost can be raised with the -c parameter, carrying out more
rounds of PBKDF2 to slow the process further on a fast computer. The
default is 10,000 rounds. After copying this password
hash, it should be added to your GRUB configuration files (which
normally are located in /etc/grub.d or similar). In the file 40_custom, add
the following:

set superusers="root"
password_pbkdf2 root <password hash from grub-mkpasswd-pbkdf2>

This will create a GRUB superuser account named root, which is able to
boot any GRUB entry, edit existing boot items and enter a GRUB console.
Without further configuration, this password also will be required to
boot the system. If you are happy to enter yet another password on each
boot-up, you can skip the next step. With full disk encryption in use,
though, there is little need to require a password on every boot-up.
To remove the requirement for the superuser password to be entered on a
normal boot-up, edit the standard boot menu template (normally
/etc/grub.d/10_linux), and locate the line creating a regular menu
entry. Depending on your distribution and GRUB version, the exact
contents of the line may differ, but it should look somewhat similar to
this:

echo "menuentry '$(echo "$title" | grub_quote)' ${CLASS} \$menuentry_id_option 'gnulinux-simple-$boot_device_id' {" | sed "s/^/$submenu_indentation/"

Change this line by adding the argument
--unrestricted before the
opening curly bracket. This change tells GRUB that booting this entry
does not require a password prompt. The resulting
line should be similar to this:

echo "menuentry '$(echo "$title" | grub_quote)' ${CLASS} \$menuentry_id_option 'gnulinux-simple-$boot_device_id' --unrestricted {" | sed "s/^/$submenu_indentation/"

After adding a superuser account and configuring the need (or otherwise)
for boot-up passwords, the main GRUB configuration file should be
re-generated. The command for this is distribution-specific, but is
often update-grub or
grub-mkconfig. The standalone GRUB binary
also should be re-generated and tested.
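For example, on many systems the regeneration step looks like this (the output path is the common default and may differ on your distribution):

```shell
# Re-generate the main GRUB configuration file as root
grub-mkconfig -o /boot/grub/grub.cfg
# On Debian/Ubuntu, the wrapper "update-grub" runs the same command
```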

Protecting the Kernel

At this point, you should have a system capable of booting a signed (and
password-protected) GRUB bootloader. An adversary without access to your
keys would not be able to modify the bootloader or its configuration or
modules. Likewise, attackers would not be able to change the parameters
passed by the bootloader to the kernel. They could, however, modify your
kernel image (by swapping the hard disk into another computer). This
would then be booted by GRUB. Although it is possible for GRUB to verify
kernel image signatures, this requires you to re-sign each kernel update.
An alternative approach is to use full disk encryption to protect the
full system, including kernel images, the root filesystem and your home
directory. This prevents someone from removing your computer's drive and
accessing your data or modifying it—without knowing your encryption
password, the drive contents will be unreadable (and thus unmodifiable).
Most on-line guides will show full disk encryption but leave a separate,
unencrypted /boot partition (which holds the kernel and initrd images)
for ease of booting. By only creating a single, encrypted root
partition, there won't be an unencrypted kernel or initrd stored on the
disk. You can, of course, create a separate boot partition and encrypt
it using dm-crypt as normal, if you prefer.
The full process of carrying out full disk encryption including the boot
partition is worthy of an article in itself, given the various
distribution-specific changes necessary. A good starting point, however,
is the ArchLinux Wiki (see Resources). The main difference from a conventional encryption setup is
the use of the configuration parameter GRUB_ENABLE_CRYPTODISK=y,
which tells GRUB to attempt to decrypt an
encrypted volume prior to loading the main GRUB menu.
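The corresponding fragment of /etc/default/grub is a single line:

```shell
# /etc/default/grub
# Instruct GRUB's installer to build in cryptodisk support, so GRUB can
# unlock the encrypted volume before loading its menu
GRUB_ENABLE_CRYPTODISK=y
```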
To avoid having to enter the encryption password twice per boot-up, the
system's /etc/crypttab can be used to
decrypt the filesystem with a keyfile automatically. This keyfile
then can be included in the (encrypted) initrd of the filesystem (refer to
your distribution's documentation to find out how to add this to the
initrd, so it will be included each time it is regenerated for a kernel
update).
This keyfile should be owned by the root user, and no other user or
group should have read access to it. Likewise, you should give the
initrd image (in the boot partition) the same protection, to prevent
the keyfile from being extracted from it while the system is powered
up.
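As a sketch of this setup (the keyfile path, size and crypttab entry below are illustrative assumptions, not values from this article):

```shell
# Create a 4KiB random keyfile that only root can read
KEYFILE=/root/crypto_keyfile.bin      # illustrative path
dd if=/dev/urandom of="$KEYFILE" bs=512 count=8 2>/dev/null
chmod 0400 "$KEYFILE"

# A matching /etc/crypttab entry might then read (UUID is a placeholder):
#   cryptroot  UUID=<uuid-of-luks-partition>  /root/crypto_keyfile.bin  luks
```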

Final Considerations

UEFI secure boot allows you to take control over what code can run on
your computer. Installing your own keys allows you to prevent malicious
people from easily booting their own code on your computer. Combining
this with full disk encryption will keep your data protected against
unauthorized access and theft, and prevent an attacker from tricking you
into booting a malicious kernel.
As a final step, you should apply a password to your UEFI setup
interface, in order to prevent a physical attacker from gaining access
to your computer's setup interface and installing their own PK, KEK and
db key, as these instructions did. You should be aware, however, that a
weakness in your motherboard or laptop's implementation of UEFI could
potentially allow this password to be bypassed or removed, and that the
ability to re-flash the UEFI firmware through a "rescue mode" on your
system could potentially clear NVRAM variables. Nonetheless, by taking
control of secure boot and using it to protect your system, you should be
better protected against malicious software or those with temporary
physical access to your computer.

Question: I was downloading a large file
using SCP, but the download transfer failed in the middle because my
laptop got disconnected from the network. Is there a way to resume the
interrupted SCP transfer where I left off, instead of downloading the
file all over again?

Originally based on the BSD RCP protocol, SCP (secure copy) is a
mechanism that allows you to transfer a file between two endpoints over
a secure SSH connection. However, as a simple secure copy protocol, SCP
does not understand range requests or partial transfers the way HTTP
does. As such, popular SCP implementations like the scp command-line
tool cannot resume aborted downloads after a lost network connection.
If you want to resume an interrupted SCP transfer, you need to rely on
other programs that can resume partial transfers. One popular such
program is rsync. Like scp, rsync can transfer files over SSH.
Suppose you were trying to download a file (bigdata.tgz) from the remote
host remotehost.com using scp, but the transfer was stopped in the
middle due to a stalled SSH connection. You can use the following rsync
command to resume the stopped transfer. Note that the remote server must
have rsync installed as well:

rsync -P --rsh=ssh userid@remotehost.com:bigdata.tgz ./bigdata.tgz

The "-P" option is the same as "--partial --progress", allowing rsync to work with partially downloaded files. The "--rsh=ssh" option tells rsync to use ssh as a remote shell.
Once the command is invoked, rsync processes on the local and
remote hosts compare the local file (./bigdata.tgz) and the remote file
(userid@remotehost.com:bigdata.tgz), determine between themselves which
portions of the file differ, and transfer the discrepancy to the
appropriate end. In this case, the missing bytes of the partially
downloaded local file are fetched from the remote host.
If the above rsync session itself gets interrupted, you can resume it as
many times as you want by typing the same command; rsync will
automatically restart the transfer where it left off.
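The resume behaviour can be demonstrated locally, without any network (this assumes rsync is installed; the file sizes are arbitrary):

```shell
# Simulate an interrupted transfer, then let rsync complete the copy
src=$(mktemp) && dst=$(mktemp)
head -c 100000 /dev/urandom > "$src"
head -c 40000 "$src" > "$dst"     # a "partial download" of the source
rsync --partial "$src" "$dst"     # brings dst fully up to date with src
cmp -s "$src" "$dst" && echo "files match"
```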

Sometimes when you try to unmount a disk partition or a mounted CD/DVD
that is being accessed by other users, you will get the error
umount: /xxx: device is busy.
However, Linux and FreeBSD come with the fuser command to forcefully
kill the processes keeping a mounted partition busy. For example, you
can kill all processes accessing the filesystem mounted at /nas01 with
the fuser command:

# fuser -km /nas01

Understanding the "device is busy" error

Linux/UNIX will not allow you to unmount a device that is busy. There
are many reasons for this (such as a program accessing the partition or
an open file), but the most important one is to prevent data loss.
Try the following command to find out which processes have activity on
the device/partition. If your device name is /dev/sda1, enter the
following command as the root user:

# lsof | grep '/dev/sda1'

Output:

vi 4453 vivek 3u BLK 8,1 8167 /dev/sda1

The above
output tells us that user vivek has a vi process running that is using
/dev/sda1. All you have to do is stop the vi process and run umount
again. As soon as that program terminates its task, the device will no
longer be busy, and you can unmount it with the following command:

# umount /dev/sda1

-k : Kill processes accessing the file.
-m : Specifies a file on a mounted filesystem or a block device that is mounted. In the above example, that is /nas01.

You can also try the umount command with the -l option on a Linux-based
system:

# umount -l /mnt

Where,

-l : Also known as a lazy unmount. Detach the filesystem from the
filesystem hierarchy now, and clean up all references to the filesystem
as soon as it is not busy anymore. This option works with kernel
version 2.4.11 and above only.

If you would like to unmount an NFS mount point, try the following
command:

# umount -f /mnt

Where,

-f : Force unmount in case of an unreachable NFS system.

Please note that using these commands or options can cause data loss
for open files; programs that access files after the filesystem has
been unmounted will get an error.

I am a new Linux and Unix user. How do I
show the active jobs on Linux or Unix-like systems using a BASH, KSH,
TCSH or POSIX-based shell? How can I display the status of jobs in the
current session on Unix/Linux?

Job control is the ability to stop/suspend the execution of processes
(commands) and continue/resume their execution as per your
requirements. This is done using your operating system and a shell such
as bash/ksh or a POSIX shell.

jobs command options

-n : Show only processes that have changed status since the last
notification.

-r : Restrict output to running jobs only.

-s : Restrict output to stopped jobs only.

-x : COMMAND is run after all job specifications that appear in ARGS
have been replaced with the process ID of that job's process group
leader.
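A short bash session illustrates the builtin in practice (the output shown in comments is typical, not exact):

```shell
#!/bin/bash
# Start a background job, then inspect the shell's job table
sleep 30 &       # becomes job [1] in this shell
jobs             # e.g.: [1]+  Running    sleep 30 &
jobs -r          # restrict output to running jobs
jobs -p          # print the process ID of the job's group leader
kill %1          # terminate job 1 by its job spec
```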

A note about /usr/bin/jobs and shell builtin

Type the following type command to find out whether jobs is a shell
builtin, an external command or both:

$ type -a jobs

Sample outputs:

jobs is a shell builtin
jobs is /usr/bin/jobs

In almost all cases, you need to use the jobs command that is
implemented as a BASH/KSH/POSIX shell builtin. The /usr/bin/jobs
command cannot be used in the current shell: it operates in a different
environment and does not share the parent bash/ksh shell's
understanding of jobs.