Thursday, April 30, 2015

Geofix is a simple
Python script that lets you use an Android device to record the
geographical coordinates of your current position. The clever part is
that the script stores the obtained latitude and longitude values in the
digiKam-compatible format, so you can copy the saved coordinates and
use them to geotag photos in digiKam’s Geo-location module.
To deploy Geofix on your Android device, install the SL4A and PythonForAndroid APK packages from the Scripting Layer for Android website. Then copy the geofix.py script to the sl4a/scripts
directory on the internal storage of your Android device. Open the SL4A
app and launch the script. For faster access, you can add an SL4A widget that links to the script to the home screen.
Instead of using SL4A and Python for Android, which are all but abandoned by Google, you can opt for QPython. In this case, you need to use the geofix-qpython.py script. Copy it to the com.hipipal.qpyplus/scripts directory, and use the QPython app to launch the script.
Both scripts save the obtained data in the tab-separated geofix.tsv file and in the geofix.sqlite
database. You can open the former with a spreadsheet application like LibreOffice Calc,
or you can run the supplied web app to display data
from the geofix.sqlite database in the browser. To do this, run the main.py script in the geofix-web directory by issuing the ./main.py command in the terminal.
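If you prefer to inspect the collected data directly from the command line, the standard sqlite3 client can do it without any assumptions about the database schema:

sqlite3 geofix.sqlite ".tables"        # list the tables the script created
sqlite3 geofix.sqlite ".dump" | less   # dump everything stored so far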
To geotag photos in digiKam using the data from Geofix, copy the desired coordinates in the digiKam format (e.g., geo:56.1831455,10.1182492). In digiKam, select the photos you want to geotag and choose Image → Geo-location. In the Geo-location window, select the photos, right-click on the selection, and choose Paste coordinates.

Founded by Apache Project veteran Cliff Schmidt, the Literacy Bridge
created the Talking Book, a portable device that could play and record
audio content.
Designed to survive the rigors of sub-Saharan Africa, these devices
have helped villages learn about and adopt modern agricultural
practices, increase literacy rates, and share their oral history more
widely by recording and replaying legends and stories.

The Human Rights Data Analysis Group recently made headlines by analyzing the incidences of
reported killings by police officers in the United States. By performing
statistical analysis on records found after the fall of dictatorial
regimes, the organization sheds light on human rights abuses in those
countries. Its members are regularly called upon as expert witnesses in
war crimes tribunals. Their website claims that they "believe that truth
leads to accountability."

Founded in the chaos of the 2004 tsunami in Sri Lanka, Sahana was a
group of technologists' answer to the question: "What can we do to
help?" The goal of the project has remained the same since: how can we
leverage community efforts to improve communication and aid in a crisis
situation? Sahana provides projects which help reunite children with
their families, organize donations effectively, and help authorities
understand where aid is most urgently needed.

Where you have no internet, no reliable electricity, no roads, and no
fixed line telephones, you can still find mobile phones sending SMS
text messages. FrontlineSMS provides a framework to send, receive, and
process text messages from a central application using a simple GSM
modem or a mobile phone connected through a USB cable. The applications
are widespread—central recording and analysis of medical reports from
rural villages, community organizing, and gathering data related to
sexual exploitation and human trafficking are just a few of the applications which have successfully used FrontlineSMS.

Do you know of other humanitarian free and open source projects? Let us know about them in the comments or send us your story.

Hello. This is the first part of a series of Linux tutorials. In
writing this tutorial, I assume that you are an absolute beginner in
creating Linux scripts and are very much willing to learn. During the
series the level will increase, so I am sure there will be something new
even for more advanced users. So let's begin.

Introduction

Most operating systems, including Linux, support different
user interfaces (UIs). The Graphical User Interface (GUI) is a
user-friendly desktop interface that enables users to click icons to run
an application. The other type of interface is the Command Line
Interface (CLI), which is purely textual and accepts commands from the
user. A shell, the command interpreter, reads commands through the CLI
and invokes the corresponding programs. Most operating systems nowadays,
including Linux distributions, provide both interfaces.

When using a shell, the user has to type in a series of commands at the
terminal. That is no problem if the task has to be done only once. However,
if the task is complex and has to be repeated multiple times, it can
get a bit tedious for the user. Luckily, there is a way to automate the
tasks of the shell. This can be done by writing and running shell
scripts. A shell script is a file composed of a sequence of commands
that are supported by the Linux shell.

Why create shell scripts?

The shell script is a very useful tool for automating tasks in Linux
OSes. It can also be used to combine utilities and create new commands.
You can combine long and repetitive sequences of commands into one
simple command. Scripts run without the need for compilation,
so they give the user a seamless way to prototype commands.

I am new to Linux environment, can I still learn how to create shell scripts?

Of course! Creating shell scripts does not require complex knowledge
of Linux. A basic knowledge of the common commands in the Linux CLI and a
text editor will do. If you are an absolute beginner and have no
background knowledge in Linux Command Line, you might find this tutorial helpful.

Creating my first shell script

Bash (the Bourne-Again Shell) is the
default shell in most Linux distributions and in OS X. It is an
open-source GNU project that was intended to replace sh (the Bourne shell), the original Unix shell. It was developed by Brian Fox and released in 1989.
Remember that each Linux script using bash will start with the following line:

#!/bin/bash

Every bash script starts with a shebang (#!) line. The shebang line specifies the full path, /bin/bash, of the command interpreter that will be used to run the script.

Hello World!

Every programming language begins with the Hello World! display. We
will not end this tradition and create our own version of this dummy
output in Linux scripting.
To start creating our script, follow the steps below:

Step 1: Open a text editor. I will use gedit for this example. To open gedit using the terminal, press CTRL + ALT + T on your keyboard and type gedit. Now, we can start writing our script.

Step 2: Type the following into the text editor:

#!/bin/bash echo "Hello World"

Step 3: Now, save the document with the file name hello.sh. Note that each script in this tutorial will have a .sh file extension.

Step 4: For security reasons enforced by Linux
distributions, files and scripts are not executable by default. However,
we can change that for our script using the chmod command in the Linux CLI.
Close the gedit application and open a terminal. Now type the following
command:

chmod +x hello.sh

The line above sets the executable permission on the hello.sh file. This procedure has to be done only once, before running the script for the first time.

Step 5: To run the script, type the following command at the terminal:

./hello.sh

Let's try another example. This time, we will display
some system information by adding the whoami and date commands to our
hello script.
Open hello.sh in the text editor and edit the script by typing:

#!/bin/bash echo "Hello $(whoami) !" echo "The date today is $(date)"

Save the changes we made in the script and run the script (Step 5 in the previous example) by typing:

./hello.sh

The output of the script will be:
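Assuming the user running the script is named john (the name and date here are only placeholders), the output would look something like this:

Hello john !
The date today is Thu Apr 30 10:15:32 PST 2015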
In the previous example, the commands whoami and date were used
inside the echo command. This shows that any utility or
command available on the command line can also be used in shell
scripts.

Generating output using printf

So far, we have used echo to print strings and data from commands in
our previous examples. echo is used to display a line of text. Another
command that can be used to display data is printf. The
printf command formats and prints data, much like the printf function in C.
Below is a summary of the common printf format controls:

Control   Usage
\"        Double quote
\\        Backslash
\b        Backspace
\c        Produce no further output
\e        Escape
\n        New line
\r        Carriage return
\t        Horizontal tab
\v        Vertical tab

Example 3: We will open the previous hello.sh, change every echo to printf and run the script again (a sketch of the edited script follows below). Notice what changes occur in our output.
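A minimal sketch of hello.sh after that change, assuming we only swap echo for printf and keep everything else the same:

#!/bin/bash
printf "Hello $(whoami) !"
printf "The date today is $(date)"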

All the lines are attached to each other because we didn't use any
controls in the printf command. This shows that the printf command in Linux
behaves like the C printf function.
To format the output of our script, we will use two of the controls
from the table above. In order to work, the controls have to be
indicated by a \ inside the quotes of the printf command. For instance,
we will edit the previous content of hello.sh into something like the sketch below:
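A hedged reconstruction of the reformatted script, assuming the two controls used are \n (new line) and \t (horizontal tab):

#!/bin/bash
printf "Hello \t $(whoami) !\n"
printf "The date today is $(date)\n"

Running ./hello.sh again now prints each message on its own line, with the greeting indented by a tab.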

Conclusion

In this tutorial, you have learned the basics of shell scripting and
were able to create and run shell scripts. In the second part of the
tutorial, I will show how to declare variables, accept input and
perform arithmetic operations using shell commands.

Wednesday, April 29, 2015

Google can help you find almost anything, but it’s no good if you’ve
lost your smartphone – until today. The search engine now has the
ability to look up your lost device directly from its homepage.
Just type in “Find my phone,” and Google will show where your phone
is on a map. You can then set it to ring, should it be lost under piles
of laundry or something of the sort.

There are some caveats: your phone must have the latest version of Android’s main Google app
installed, and your browser must be logged into the same Google account
your phone is, but it’s a much simpler way to find your phone than
going through the Android Device Manager, which many Android users may not even be aware of.

Most of us longtime system administrators get a little nervous when
people start talking about DevOps. It's an IT topic surrounded by
a lot of mystery and confusion, much like the term "Cloud
Computing"
was a few years back. Thankfully, DevOps isn't something sysadmins need
to fear. It's not software that allows developers to do the job of the
traditional system administrator, but rather it's just a concept making
both development and system administration better. Tools like Chef and
Puppet (and Salt Stack, Ansible, New Relic and so on) aren't
"DevOps", they're
just tools that allow IT professionals to adopt a DevOps mindset. Let's
start there.

What Is DevOps?

Ask ten people to define DevOps, and you'll likely get 11 different
answers. (Those numbers work in binary too, although I suggest a larger
sample size.) The problem is that many folks confuse DevOps with DevOps
tools. These days, when people ask me, "What is DevOps?", I generally
respond:
"DevOps isn't a thing, it's a way of doing a thing."
The worlds of system administration and development historically
have been very separate. As a sysadmin, I tend to think very differently about
computing from how a developer does. For me, things like scalability and redundancy
are critical, and my success often is gauged by uptime. If things are
running, I'm successful. Developers have a different way of approaching
their jobs, and need to consider things like efficiency, stability,
security and features. Their success often is measured by usability.
Hopefully, you're thinking the traits I listed are important for both
development and system administration. In fact, it's that mindset
from which DevOps was born. If we took the best practices from the
world of development, and infused them into the processes of operations,
it would make system administration more efficient, more reliable and
ultimately better. The same is true for developers. If they can begin to
"code" their own hardware as part of the development process, they can
produce and deploy code more quickly and more efficiently. It's basically
the Reese's Peanut Butter Cup of IT. Combining the strengths of both
departments creates a result that is better than the sum of its parts.
Once you understand what DevOps really is, it's easy to see how people
confuse the tools (Chef, Puppet, New Relic and so on) for DevOps itself. Those
tools make it so easy for people to adopt the DevOps mindset, that they
become almost synonymous with the concept itself. But don't be seduced
by the toys—an organization can shift to a very successful DevOps way
of doing things simply by focusing on communication and cross-discipline
learning. The tools make it easier, but just like owning a rake doesn't
make someone a farmer, wedging DevOps tools into your organization
doesn't create a DevOps team for you. That said, just like any farmer
appreciates a good rake, any DevOps team will benefit from using the
plethora of tools in the DevOps world.

The System Administrator's New Rake

In this article, I want to talk about using DevOps tools as a system
administrator. If you're a sysadmin who isn't using a configuration
management tool to keep track of your servers, I urge you to check one
out. I'm going to talk about Chef, because for my day job, I recently
taught a course on how to use it. Since you're basically learning the
concepts behind DevOps tools, it doesn't matter that you're focusing on
Chef. Kyle Rankin is a big fan of Puppet, and conceptually, it's just
another type of rake. If you have a favorite application that isn't
Chef, awesome.
If I'm completely honest, I have to admit I was hesitant to learn Chef,
because it sounded scary and didn't seem to do anything I wasn't
already doing with Bash scripts and cron jobs. Plus, Chef uses the Ruby
programming language for its configuration files, and my programming
skills peaked with:

10 PRINT "Hello!"
20 GOTO 10

Nevertheless, I had to learn about it so I could teach the class. I can
tell you with confidence, it was worth it. Chef requires basically zero
programming knowledge. In fact, if no one mentioned that its configuration
files were Ruby, I'd just have assumed the syntax for the conf files was
specific and unique. Weird config files are nothing new, and honestly,
Chef's config files are easy to figure out.

Chef: Its Endless Potential

DevOps is a powerful concept, and as such, Chef can do amazing
things. Truly. Using creative "recipes", it's possible to spin up hundreds
of servers in the cloud, deploy apps, automatically scale based on need
and treat every aspect of computing as if it were just a function to
call from simple code. You can run Chef on a local server. You can
use the cloud-based service from the Chef company instead of hosting
a server. You even can use Chef completely server-less, deploying the
code on a single computer in solo mode.
Once it's set up, Chef supports multiple environments of similar
infrastructures. You can have a development environment that is completely
separate from production, and have the distinction made completely
by the version numbers of your configuration files. You can have your
configurations function completely platform agnostically, so a recipe
to spin up an Apache server will work whether you're using CentOS,
Ubuntu, Windows or OS X. Basically, Chef can be the central resource
for organizing your entire infrastructure, including hardware, software,
networking and even user management.
Thankfully, it doesn't have to do all that. If using Chef meant turning
your entire organization on its head, no one would ever adopt it. Chef
can be installed small, and if you desire, it can grow to handle more
and more in your company. To continue with my farmer analogy, Chef can
be a simple garden rake, or it can be a giant diesel combine tractor. And
sometimes, you just need a garden rake. That's what you're going to learn
today. A simple introduction to the Chef way of doing things, allowing
you to build or not build onto it later.

The Bits and Pieces

Initially, this was going to be a multipart article on the specifics of
setting up Chef for your environment. I still might do a series like
that for Chef or another DevOps configuration automation package,
but here I want everyone to understand not only DevOps itself, but what
the DevOps tools do. And again, my example will be Chef.
At its heart, Chef functions as a central repository for all your
configuration files. Those configuration files also include the ability
to carry out functions on servers. If you're a sysadmin, think of it
as a central, dynamic /etc directory along with a place all your Bash
and Perl scripts are held. See Figure 1 for a visual on how Chef's
information flows.
Figure 1. This is the basic Chef setup, showing how data flows.
The Admin Workstation is the computer at which configuration files
and scripts are created. In the world of Chef, those are called
cookbooks and recipes, but basically, it's the place all the human-work
is done. Generally, the local Chef files are kept in a revision control
system like Git, so that configurations can be rolled back in the case of
a failure. This was my first clue that DevOps might make things better for
system administrators, because in the past all my configuration revision
control was done by making a copy of a configuration file before editing
it, and tacking a .date at the end of the filename. Compared to the
code revision tools in the developer's world, that method (or at least
my method) is crude at best.
The cookbooks and recipes created on the administrator workstation
describe things like what files should be installed on the server
nodes, what configurations should look like, what applications should be
installed and stuff like that. Chef does an amazing job of being
platform-neutral, so if your cookbook installs Apache, it generally can
install
Apache without you needing to specify what type of system it's
installing
on. If you've ever been frustrated by Red Hat variants calling Apache
"httpd", and Debian variants calling it
"apache2", you'll love Chef.
Once you have created the cookbooks and recipes you need to configure your
servers, you upload them to the Chef server. You can connect to the Chef
server via its Web interface, but very little actual work is done via the
Web interface. Most of the configuration is done on the command line of
the Admin Workstation. Honestly, that is something a little confusing
about Chef that gets a little better with every update. Some things
can be modified via the Web page interface, but many things can't. A few
things can only be modified on the Web page, but it's not always clear
which or why.
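To give a feel for that command-line workflow (this is an illustration rather than text from the article; the cookbook and node names are placeholders), the knife tool that ships with Chef is what does the uploading and querying:

knife cookbook upload mycookbook   # push a cookbook from the workstation to the Chef server
knife node list                    # list the nodes registered with the server
knife node show web01              # inspect one node's attributes and run list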
With the code, configs and files uploaded to the Chef Server, the
attention is turned to the nodes. Before a node is part of the Chef
environment, it must be "bootstrapped". The process isn't difficult, but
it is required in order to use Chef. The client software is installed on
each new node, and then configuration files and commands are pulled from
the Chef server. In fact, in order for Chef to function, the nodes must
be configured to poll the server periodically for any changes. There is
no "push" methodology to send changes or updates to the node, so regular
client updates are important. (These are generally performed via cron.)
At this point, it might seem a little silly to have all those extra steps
when a simple FOR loop with some SSH commands could accomplish the same
tasks from the workstation, and have the advantage of no Chef client
installation or periodic polling. And I confess, that was my thought at
first too. When programs like Chef really prove their worth, however,
is when the number of nodes begins to scale up. Once the admittedly
complex setup is created, spinning up a new server is literally a single
one-liner to bootstrap a node. Using something like Amazon Web Services,
or Vagrant, even the creation of the computers themselves can be part
of the Chef process.
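As a hedged example of that one-liner (the address, SSH user, node name and run list below are all placeholders):

knife bootstrap 203.0.113.10 -x ubuntu --sudo -N web01 -r 'recipe[apache]'

On the node itself, the periodic polling mentioned above is usually just a cron entry that runs chef-client, for example every 30 minutes:

*/30 * * * * /usr/bin/chef-client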

To Host or Not to Host

The folks at Chef have made the process of getting a Chef Server
instance as simple as signing up for a free account on their cloud
infrastructure. They maintain a "Chef Server" that allows you to upload
all your code and configs to their server, so you need to worry
only about your nodes. They even allow you to connect five of your server
nodes for free. If you have a small environment, or if you don't have the
resources to host your own Chef Server, it's tempting just to use their
pre-configured cloud service. Be warned, however, that it's free
only because they hope you'll start to depend on the service and eventually
pay for connecting more than those initial five free nodes.
They have an enterprise-based self-hosted solution that moves the Chef
Server into your environment like Figure 1 shows. But it's important
to realize that Chef is open source, so there is a completely free,
and fully functional open-source version of the server you can download
and install into your environment as well. You do lose their support,
but if you're just starting out with Chef or just playing with it,
having the open-source version is a smart way to go.

How to Begin?

The best news about Chef is that incredible resources exist
for learning how to use it. On the http://getchef.com Web
site, there is a video
series outlining a basic setup for installing Apache on your server
nodes as an example of the process. Plus, there's great documentation
that describes the installation process of the open-source Chef Server,
if that's the path you want to try.
Once you're familiar with how Chef works (really, go through the training
videos, or find other Chef fundamentals training somewhere), the next
step is to check out the vibrant Chef community. There are cookbooks and
recipes for just about any situation you can imagine. The cookbooks are
just open-source code and configuration files, so you can tweak them to
fit your particular needs, but like any downloaded code, it's nice to
start with something and tweak it instead of starting from scratch.
DevOps is not a scary new trend invented by developers in order to get
rid of pesky system administrators. We're not being replaced by code,
and our skills aren't becoming useless. What a DevOps mindset means is
that we get to steal the awesome tools developers use to keep their code
organized and efficient, while at the same time we can hand off some of the
tasks we hate (spinning up test servers for example) to the developers, so
they can do their jobs better, and we can focus on more important sysadmin
things. Tearing down that wall between development and operations truly
makes everyone's job easier, but it requires communication, trust and
a few good rakes in order to be successful. Check out a tool like Chef,
and see if DevOps can make your job easier and more awesome.

There is a well known story about a scientist who gave a talk about the
Earth and its place in the solar system. At the end of the talk, a woman
refuted him with "That's rubbish; the Earth is really like a flat dish,
supported on the back of a turtle." The scientist smiled and asked back
"But what's the turtle standing on?", to which the woman, realizing
the logical trap, answered, "It's very simple: it's turtles all the
way down!" No matter the verity of the anecdote, the identity of the
scientist (Bertrand Russell or William James are sometimes mentioned),
or even if they were turtles or tortoises, today we may apply a similar
solution to Web development, with "JavaScript all the way down".
If you are going to develop a Web site, for client-side development, you could
opt for Java applets, ActiveX controls, Adobe Flash animations and,
of course, plain JavaScript. On the other hand, for server-side coding,
you could go with C# (.Net), Java, Perl, PHP and more, running on
servers, such as Apache, Internet Information Server, Nginx, Tomcat and
the like. Currently, JavaScript allows you to do away with most of this
and use a single programming language, both on the client and the server
sides, and with even a JavaScript-based server. This way of working
even has produced a totally JavaScript-oriented acronym along the lines of
the old LAMP (Linux+Apache+MySQL+PHP) one: MEAN, which stands for MongoDB
(a NoSQL database you can access with JavaScript), Express (a Node.js
module to structure your server-side code), Angular.JS (Google's Web
development framework for client-side code) and Node.js.
In this article, I cover several JavaScript tools for writing,
testing and deploying Web applications, so you can consider whether
you want to give a twirl to a "JavaScript all the way down" Web stack.

What's in a Name?

JavaScript originally was developed at Netscape in 1995, first under
the name Mocha, and then as LiveScript. Soon (after Netscape and Sun got
together; nowadays, it's the Mozilla Foundation that manages the language)
it was renamed JavaScript to ride the popularity wave, despite having
nothing to do with Java. In 1997, it became an industry standard under a
fourth name, ECMAScript. The most common current version of JavaScript is
5.1, dated June 2011, and version 6 is on its way. (However, if you want
to use the more modern features, but your browser won't support them, take
a look at the Traceur compiler, which will back-compile version 6 code
to version 5 level.)
Some companies produced supersets of the language,
such as Microsoft, which developed JScript (renamed to avoid legal problems)
and Adobe, which created ActionScript for use with Flash.
There are several
other derivative languages (which actually compile to JavaScript for
execution), such as the more concise CoffeeScript, Microsoft's TypeScript
or Google's most recent AtScript (JavaScript plus Annotations), which
was developed for the Angular.JS project. The asm.js project even uses a
JavaScript subset as a target language for efficient compilers for other
languages. Those are many different names for a single concept!

Why JavaScript?

Although stacks like LAMP or its Java, Ruby or .Net peers do power
many Web applications today, using a single language both for client-
and server-side development has several advantages, and companies
like Groupon, LinkedIn, Netflix, PayPal and Walmart, among many more,
are proof of it.
Modern Web development is split between client-side and
server-side (or front-end and back-end) coding, and striving for the
best balance is more easily attained if your developers can work both
sides with the same ease. Of course, plenty of developers
are familiar with all the languages needed for both sides of coding, but
in any case, it's quite probable that they will be more productive at
one end or the other.
Many tools are available for JavaScript (building,
testing, deploying and more), and you'll be able to use them for all
components in your system (Figure 1). So, by going with the same
single set of tools, your experienced JavaScript developers will be able
to play both sides, and you'll have fewer problems getting the needed
programmers for your company.
Figure 1. JavaScript can be used everywhere, on the client and the server
sides.
Of course, being able to use a single language isn't the single key
point. In the "old days" (just a few years ago!), JavaScript lived
exclusively in browsers to read and interpret JavaScript source
code. (Okay, if you want to be precise, that's not exactly true; Netscape
Enterprise Server ran server-side JavaScript code, but it wasn't widely
adopted.) About five years ago, when Firefox and Chrome started competing
seriously with (by then) the most popular Internet Explorer, new JavaScript
engines were developed, separated from the layout engines that actually
drew the HTML pages seen on browsers. Given the rising popularity of
AJAX-based applications, which required more processing power on the
client side, a competition to provide the fastest JavaScript started,
and it hasn't stopped yet. With the higher performance achieved,
it became possible to use JavaScript more widely (Table 1).

Table 1. The Current Browsers and Their JavaScript Engines

Browser   JavaScript Engine
Chrome    V8
Firefox   SpiderMonkey
Opera     Carakan
Safari    Nitro

Some of these engines apply advanced techniques to get the most speed
and power. For example, V8 compiles JavaScript to native machine code
before executing it (this is called JIT, Just In Time compilation, and
it's done on the run instead of pre-translating the whole program as
is traditional with compilers) and also applies several optimization and
caching techniques for even higher throughput. SpiderMonkey includes
IonMonkey, which also is capable of compiling JavaScript code to object
code, although working in a more traditional way. So, accepting that modern
JavaScript engines have enough power to do whatever you may need, let's
now start a review of the Web stack with a server that wouldn't have
existed if it weren't for that high-level language performance: Node.js.

Node.js: a New Kind of Server

Node.js (or plain Node, as it's usually called) is a Web server,
itself written mainly in JavaScript, which uses that language for all
scripting. It originally was developed to simplify developing real-time
Web sites with push capabilities—so instead of all communications being
client-originated, the server might start a connection with a client
by itself. Node can work with lots of live connections, because it's
very lightweight in terms of requirements. There are two key concepts to
Node: it runs a single process (instead of many), and all I/O
(database queries, file accesses and so on) is implemented in a non-blocking,
asynchronous way.
Let's go a little deeper and further examine the main difference between
Node and more traditional servers like Apache. Whenever Apache receives
a request, it starts a new, separate thread (process) that uses RAM
of its own and CPU processing power. (If too many threads are running,
the request may have to wait a bit longer until it can be started.) When
the thread produces its answer, the thread is done. The maximum number of
possible threads depends on the average RAM requirements for a process;
it might be a few thousand at the same time, although numbers vary depending
on server size (Figure 2).
Figure 2. Apache and traditional Web servers run a separate thread for each
request.
On the other hand, Node runs a single thread. Whenever a request
is received, it is processed as soon as it's possible, and it will run
continuously until some I/O is required. Then, while the code waits for
the I/O results to be available, Node will be able to process other
waiting requests (Figure 3). Because all requests are served by
a single process, the possible number of running requests rises, and
there have been experiments with more than one million concurrent
connections—not shabby at all! This shows that an ideal use case for Node is having
server processes that are light on CPU processing but heavy on I/O.
This allows more requests to run at the same time; a CPU-intensive
server process would block all other waiting requests and cause a
sharp drop in throughput.
Figure 3. Node runs a single thread for all requests.
A great asset of Node is that there are many available modules (an
estimate ran in the thousands) that help you get to production more
quickly. Though I obviously can't list all of them, you probably
should consider some of the modules listed in Table 2.

Table 2. Some widely used Node.js modules that will help your development and
operation.

async: Simplifies asynchronous work, a possible alternative to promises.

cluster: Improves concurrency in multicore systems by forking worker processes. (For further scalability, you also could set up a reverse proxy and run several Node.js instances, but that goes beyond the objective of this article.)

connect: Works with "middleware" for common tasks, such as error handling, logging, serving static files and more.

ejs, handlebars or jade: Templating engines.

express: A minimal Web framework—the E in MEAN.

forever: A command-line tool that will keep your server up, restarting if needed after a crash or other problem.

mongoose, cradle, sequelize: Database ORMs, for MongoDB, CouchDB and for relational databases, such as MySQL and others.

passport: Authentication middleware, which can work with OAuth providers, such as Facebook, Twitter, Google and more.

request or superagent: HTTP clients, quite useful for interacting with RESTful APIs.

underscore or lodash: Tools for functional programming and for extending the JavaScript core objects.

Of course, there are some caveats when using Node.js. An obvious one
is that no process should do heavy computations, which would
"choke"
Node's single processing thread. If such a process is needed,
it should be done by an external process (you might want to consider
using a message queue for this) so as not to block other requests. Also,
care must be taken with error processing. An unhandled exception might
cause the whole server to crash eventually, which wouldn't bode well
for the server as a whole. On the other hand, having a large community
of users and plenty of fully available, production-level, tested code
already on hand can save you quite a bit of development time and let
you set up a modern, fast server environment.

Planning and Organizing Your Application

When starting out with a new project, you could set up your code from
zero and program everything from scratch, but several
frameworks can help you with much of the work and provide
clear structure and organization to your Web application. Choosing the
right framework will have an important impact on your development time,
on your testing and on the maintainability of your site. Of course,
there is no single answer to the question "What framework is
best?",
and new frameworks appear almost on a daily basis, so I'm just going
with three of the top solutions that are available today: AngularJS,
Backbone and Ember. Basically, all of these frameworks are available under
permissive licenses and give you a head start on developing modern SPA
(single page applications). For the server side, several packages (such
as Sails, to give just one example) work with all frameworks.
AngularJS (or Angular.JS or just plain Angular—take your pick) was
developed in 2009 by Google, and its current version is 1.3.4, dated
November 2014. The framework is based on the idea that declarative
programming is best for interfaces (and imperative programming for the
business logic), so it extends HTML with custom tag attributes that
are used to bind input and output data to a JavaScript model. In this
fashion, programmers don't have to manipulate the Web page directly,
because it is updated automatically. Angular also focuses on testing,
because the difficulty of automatic testing heavily depends upon the code
structure. Note that Angular is the A in MEAN, so there are some other
frameworks that expand on it, such as MEAN.IO or MEAN.JS.
Backbone is a lighter, leaner framework, dated from 2010, which uses a
RESTful JSON interface to update the server side automatically. (Fun fact:
Backbone was created by Jeremy Ashkenas, who also developed CoffeeScript;
see the "What's in a Name?" sidebar.) In terms of community size, it's
second only to Angular, and in code size, it's by far the smallest
one. Backbone doesn't include a templating engine of its own, but it
works fine with Underscore's templating, and given that this library
is included by default, it is a simple choice to make. It's considered
to be less "opinionated" than other frameworks and to have a quite
shallow learning curve, which means that you'll be able to start
working quickly. A deficiency is that Backbone lacks two-way data binding, so
you'll have to write code to update the view whenever the model changes
and vice versa. Also, you'll probably be manipulating the Web
page directly, which will make your code harder to unit test.
Finally, Ember probably is harder to learn than the other frameworks, but
it rewards the coder with higher performance. It favors "convention over
configuration", which likely will make Ruby on Rails or Symfony users
feel right at home. It integrates easily with a RESTful server side,
using JSON for communication. Ember includes Handlebars (see Table 2)
for templating and provides two-way updates. A negative point is the
usage of <script> tags as markers, in order to keep templates
up to date with the model. If you try to debug a running application,
you'll find plenty of unexpected <script> elements!

Simplify and Empower Your Coding

It's a sure bet that your application will need to work with HTML, handle
all kinds of events and do AJAX calls to connect with the server. This should
be reasonably easy—although it might be plenty of work—but even
today, browsers do not have exactly the same features. Thus, you might
have to go overboard with specific browser-detection techniques, so
your code will adapt and work everywhere. Modern application users have
grown accustomed to working with different events (tap, double tap, long tap,
drag and drop, and more), and you should be able to include that kind of
processing in your code, possibly with appropriate animations. Finally,
connecting to a server is a must, so you'll be using AJAX functions all
the time, and it shouldn't be a painful experience.

The most probable candidate library to help you with all these functions
is jQuery. Arguably, it's the most popular JavaScript library in use
today, employed at more than 60% of the most visited Web sites. jQuery provides
tools for navigating your application's Web document, handles events
with ease, applies animations and uses AJAX (Listing 1). Its current
version is 2.1.1 (or 1.11.1, if you want to support older browsers),
and it weighs in at only around 32K. Some frameworks (Angular, for example)
even will use it if available.

Listing 1. A simple jQuery example, showing how to process events, access the
page and use AJAX.

Other somewhat less used possibilities could be Prototype (current
version 1.7.2), MooTools (version 1.5.1) or the Dojo Toolkit (version
1.10). One of the key selling points of all these libraries is the
abstraction of the differences between browsers, so you can write your
code without worrying if it will run on such or such browser. You
probably should take a look at all of them to find which one best fits your
programming style.

Also, there's one more kind of library you may want. Callbacks are
familiar to JavaScript programmers who need them for AJAX calls, but
when programming for Node, there certainly will be plenty of them! You
should be looking at "promises", a way of programming that will make
callback programming more readable and save you from "callback
hell"—a situation in which you need a callback, and that callback also needs
a callback, which also needs one and so on, making code really hard to
follow. See Listing 2, which also shows the growing indentation that
your code will need. I'm omitting error-processing code, which would
make the example even messier!

Listing 2. Callback hell happens when callbacks include callbacks, which
include callbacks and so on.

The behavior of promises is standardized through the "Promises/A+"
open specification. Several packages provide promises
(jQuery and Dojo already include some support for them), and in
general, they even can interact, processing each other's promises. A
promise is an object that represents the future value of an (usually
asynchronous) operation. You can process this value through the
promise .then(...) method and handle exceptions with its
.catch(...) method. Promises can be chained, and a promise can
produce a new promise, the value of which will be processed in the next
.then(...). With this style, the callback hell example of
Listing 2 would be converted into more understandable code; see Listing
3. Code, instead of being more and more indented, stays aligned to
the left. Callbacks still are being (internally) used, but your code
doesn't explicitly work with them. Error handling is also simpler;
you simply would add appropriate .catch(...) calls.

Listing 3. Using promises produces far more legible code.

You also can build promises out of more promises—for example, a service
might need the results of three different callbacks before producing an
answer. In this case, you could build a new single promise out of the
three individual promises and specify that the new one will be
fulfilled only when the other three have been fulfilled. There also are other
constructs that let you fulfill a promise when a given number (possibly
just one) of "sub-promises" have been fulfilled. See the Resources
section for several possible libraries you might want to try.

I have commented on several tools you might use to write your application,
so now let's consider the final steps: building the application,
testing it and eventually deploying it for operation.

Testing Your Application

No matter whether you program on your own or as a part of a large
development group, testing your code is a basic need, and doing it in an
automated way is a must. Several frameworks can help you
with this, such as Intern, Jasmine or Mocha (see Resources). In essence,
they are really similar. You define "suites", each of which runs one
or more "test cases", which test that your code does some specific
function. To test results and see if they satisfy your expectations,
you write "assertions", which basically are conditions that must be
satisfied (see Listing 4 for a simple example). You can run test suites
as part of the build process (which I explain below) to see if anything
was broken before attempting to deploy the newer version of your code.

Tests can be written in "fluent" style, using many matchers (see Listing
5 for some examples). Several libraries provide different ways
to write your tests, including Chai, Unit.js, Should.js and Expect.js;
check them out to decide which one suits you best.

Listing 5. Some examples of the many available matchers you can use to write
assertions.

If you want to run tests that involve a browser, PhantomJS and
Zombie provide a fake Web environment, so you can run tests with greater
speed than using tools like Selenium, which would be more appropriate
for final acceptance tests.

Listing 6. A sample test with Zombie (using promises, by the way) requires no
actual browser.

A Slew of DDs!

Modern agile development processes usually emphasize very short cycles,
based on writing tests for code yet unwritten, and then actually writing
the desired code, the tests being both a check that the code works as
desired and as a sort of specification in itself. This process is called
TDD (Test-Driven Development), and it usually leads to modularized and
flexible code, which also is easier to understand, because the tests
help with understanding. BDD (Behavior-Driven Development) is a process
based on TDD, which even specifies requirements in a form quite similar
to the matchers mentioned in this article. Yet another "DD" is ATDD
(Acceptance Test-Driven Development), which highlights the idea of writing
the (automated) acceptance tests even before programmers start coding.

Building and Deploying

Whenever your code is ready for deployment, you almost certainly will
have to do several repetitive tasks, and you'd better automate them. Of
course, you could go with classic tools like make or Apache's
ant, but keeping to the "JavaScript all the
way down" idea, let's look at
a pair of tools, Grunt and Gulp, which work well.

Grunt can be installed with npm. Do sudo npm install -g
grunt-cli,
but this isn't enough; you'll have to prepare a gruntfile to let it know
what it should do. Basically, you require a package.json file that
describes the packages your system requires and a Gruntfile.js file
that describes the tasks you need. Tasks may have subtasks of their own,
and you may choose to run the whole task or a specific subtask. For
each task, you will define (in JavaScript, of course) what needs to be
done (Listing 7). Running grunt with no parameters will run
a default (if given) task or the whole gamut of tasks.
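As a rough sketch of that setup from the shell (only grunt-cli and grunt themselves come from the text above; the example task name is illustrative and depends on what your Gruntfile defines):

sudo npm install -g grunt-cli   # the global command-line launcher
npm init                        # create the package.json file
npm install grunt --save-dev    # install Grunt itself for this project
grunt                           # run the default task from Gruntfile.js
grunt jshint                    # or run one specific task by name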

Gulp is somewhat simpler to set up (in fact, it was created to simplify
Grunt's configuration files), and it depends on what its authors call
"code-over-configuration". Gulp works in "stream" or
"pipeline" fashion,
along the lines of Linux's command line, but with JavaScript
plugins. Each plugin takes one input and produces one output, which
automatically is fed to the next plugin in the queue. This is simpler
to understand and set up, and it even may be faster for tasks involving
several steps. On the other hand, being a newer project implies a smaller
community of users and fewer available plugins, although both situations
are likely to work out in the near future.

You can use them either from within a development environment (think
Eclipse or NetBeans, for example), from the command line or as
"watchers", setting them up to monitor specific files or directories
and run certain tasks whenever changes are detected to streamline
your development process further in a completely automatic way. You can set up
things so that templates will be processed, code will be minified,
SASS or LESS styles will be converted into pure CSS, and the resulting
files will be moved to the server, wherever it is appropriate for them. Both
tools have their fans, and you should try your hand at both to decide
which you prefer.

Getting and Updating Packages

Because modern systems depend on lots of packages (frameworks, libraries,
styles, utilities and what not), getting all of them and, even worse,
keeping them updated, can become a chore. There are two tools for this:
Node's own npm (mainly for server-side packages, although it can
work for client-side code too) and Twitter's bower (more geared to
the client-side parts). The former deals mainly with Node packages and
will let you keep your server updated based on a configuration file. The
latter, on the other hand, can install and update all kinds of front-end
components (that is, not only JavaScript files) your Web applications might
need also based on separate configuration metadata files.

Usage for
both utilities is the same; just substitute bower for
npm, and
you're done. Using npm search for.some.package can help you find a
given package. Typing npm install some.package will install it, and adding
a --save option will update the appropriate configuration file,
so future npm update commands will get everything up to
date. In
a pinch, npm also can be used as a replacement for
bower,
although then you'll possibly want to look at browserify to organize
your code and prepare your Web application for deployment. Give it a
look just in case.
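Putting the commands just mentioned together, a minimal hedged sketch of the workflow (the package names are only examples):

npm search express           # look for a package
npm install express --save   # install it and record it in package.json
npm update                   # bring the recorded packages up to date

bower install jquery --save  # the bower equivalents work the same way
bower update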

Conclusion

Modern fast JavaScript engines, plus the availability of plenty of
specific tools to help you structure, test or deploy your systems
make it possible to create Web applications with "JavaScript
all the way down", helping your developers be more productive and giving
them the possibility of working on both client and server sides with the
same tools they already are proficient with. For modern development,
you certainly should give this a thought.

Working with CSS is simpler with Sass or
{less}; note that the latter can be installed
with npm.

Use testing frameworks, such as Intern, Jasmine
or Mocha. Chai, Should.js,
Expect.js and Unit.js
are complete assertion libraries with different
interfaces to suit your preferences. (Unit.js actually includes Should.js
and Expect.js to give you more freedom in choosing your preferred
assertion writing style.) PhantomJS and
Zombie.js allow you to run your tests
without using an actual browser, for higher speeds, while Selenium
is preferred for actual acceptance tests.

Vzwatchd is an OpenVZ monitoring daemon that informs the server
administrator by email when a limit of the container is reached. OpenVZ
is a Linux kernel virtualisation technology that is often used by web
hosting services; it is the free core of the commercial Virtuozzo
virtualisation application. OpenVZ is a lightweight virtualisation technology
with less overhead than KVM or Xen; it is more like a Linux LXC jail, but
with advanced limit options that define how many resources a virtual
machine may use, and it has support for filesystem quotas.
This tutorial explains the installation and configuration of the vzwatchd daemon on Debian and Ubuntu.

1 Does my virtual server use OpenVZ?

Have you rented a virtual server from a hosting company without
knowing which virtualisation technology it uses? Run the following
command to test if it uses OpenVZ:

cat /proc/user_beancounters

If the output is similar to the one below, then your server uses
OpenVZ or a compatible technology and you can use vzwatchd to monitor
the vserver.

The installation is successful when no errors are reported at the end
of the compile output. If you get an error instead, then rerun the command.
I had to run the command twice to compile all modules successfully.
To check if the installation was successful, run the command:

vzwatchd check

This will check the installation and create an example config file.

root@server:~# vzwatchd check
/etc/vzwatchd.conf does not exist, creating one with defaults.
Edit /etc/vzwatchd.conf to suit your needs and then start /usr/local/bin/vzwatchd again.

3 Configure and activate vzwatchd

Now I will edit the vzwatchd.conf file and set the email address for the notification messages.

nano /etc/vzwatchd.conf

After you have edited it, the config file should be unchanged apart from your own email address, of course.

A
goldmine of open source code is available to programmers, but choosing
the right library and understanding how to use it can be tricky. Sourcegraph has created a search engine and code browser to help developers find better code and build software faster.
Sourcegraph is a code search engine and browsing tool that
semantically indexes all the open source code available on the web. You
can search for code by repository, package, or function and click on
fully linked code to read the docs, jump to definitions, and instantly
find usage examples. And you can do all of this in your web browser,
without having to configure any editor plugin.
Sourcegraph was created by two Stanford grads, Quinn Slack and Beyang
Liu, who, after spending hours hunting through poorly documented code,
decided to build a tool to help them better read and understand code.

Try clicking on code snippets from Docker, a popular open source container library.

Are you a repository author?

If you're an author of an open source project or library, you should enable your repository on Sourcegraph.
Enabling your repositories tells Sourcegraph to analyze and index your
code so that contributors and users of your libraries can search and
browse the code on Sourcegraph. These features can help your users save
hours by letting them quickly find and understand pieces of code. A
single good usage example can be worth a thousand words of
documentation. Enabling repositories is free and always will be for open
source.

Semantic search for projects, functions, or packages

Sourcegraph indexes code at a semantic level, which means it parses
and understands code the same way a compiler does. This is necessary to
support features such as semantic search and finding usage examples.
Sourcegraph currently supports Go, Java, and Python, with JavaScript,
Ruby, and Haskell in beta.
Try searching for popular projects like Docker, the AWS Java SDK, Kubernetes, redis-py, or your own project.

Interactive code snippets

From Sourcegraph's UI, you can browse open source libraries quickly
and efficiently. But sometimes, you want to share code outside that
interface. For example, you might want to embed a snippet of code in a
blog post or an answer to a forum question. Sourcegraph lets you embed
clickable, interactive snippets of code with Sourceboxes. Here's an example:

Open source at its core

The core analysis library of Sourcegraph is open source and available as an easy-to-use library called srclib
(pronounced "Source Lib"). srclib powers all the semantic
analysis-enabled features you see on Sourcegraph.com, and also supports
editor plugins that provide jump-to-definition and other semantically
aware functionality.

Network Time Protocol (NTP) is used to synchronize system clocks of
different hosts over network. All managed hosts can synchronize their
time with a designated time server called an NTP server. An NTP server
on the other hand synchronizes its own time with any public NTP server,
or any server of your choice. The system clocks of all NTP-managed
devices are synchronized to the millisecond precision.
In a corporate environment where the firewall is not open for NTP traffic,
it is necessary to set up an in-house NTP server and let employees use the
internal server instead of public NTP servers. In this tutorial, we will
describe how to configure a CentOS system as an NTP server. Before going
into the details, let's go over the concept of NTP first.

Why Do We Need NTP?

Due to manufacturing variances, all (non-atomic) clocks do not run at
the exact same speed. Some clocks tend to run faster, while some run
slower. So over a large timeframe, the time of one clock gradually
drifts from another, causing what is known as "clock drift" or "time
drift". To minimize the effect of clock drift, the hosts using NTP
should periodically communicate with a designated NTP server to keep
their clock in sync.
Time synchrony across different hosts is important for things like scheduled backup, intrusion detection logging, distributed job scheduling or transaction bookkeeping. It may even be required as part of regulatory compliance.

NTP Hierarchy

NTP clocks are organized in a layered hierarchy. Each level of the hierarchy is called a stratum. The notion of stratum describes how many NTP hops away a machine is from an authoritative time source.
Stratum 0 is populated with clocks that have virtually no time
drifts, such as atomic clocks. These clocks cannot be directly used over
the network. Stratum N (N > 1) servers synchronize their time against Stratum N-1 servers. Stratum N clocks may be connected with each other over network.
NTP supports up to 15 stratums in the hierarchy. Stratum 16 is considered unsynchronized and unusable.

Preparing CentOS Server

Now let's proceed to set up an NTP server on CentOS.
First of all, we need to make sure that the time zone of the server is set up correctly. In CentOS 7, we can use the timedatectl command to view and change the server's time zone (e.g., "Australia/Adelaide"), as sketched below.
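For example (the time zone shown is just the one used in this tutorial):

timedatectl                                   # show the current time and time zone
timedatectl list-timezones | grep Australia   # find the exact zone name
timedatectl set-timezone Australia/Adelaide   # set it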

The firewall rule should allow NTP traffic (on port UDP/123) from
192.168.1.0/24 and deny traffic from all other networks; a sketch is shown
below. You can update the rule to match your requirements.
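For instance, with plain iptables (the original setup may use a different firewall front end on CentOS 7, such as firewalld, so treat this as an illustrative sketch):

iptables -A INPUT -p udp --dport 123 -s 192.168.1.0/24 -j ACCEPT
iptables -A INPUT -p udp --dport 123 -j DROP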

Configuring NTP Clients

1. Linux

NTP client hosts need the ntpdate package to synchronize time against the server. The package can be easily installed using yum or apt-get. After installing the package, run the command with the IP address of the server.

# ntpdate <ntp-server-ip>

The command is identical for RHEL and Debian based systems.

2. Windows

If you are using Windows, look for 'Internet Time' under Date and Time settings.

3. Cisco Devices

If you want to synchronize the time of a Cisco device, you can use the following command from the global configuration mode.

# ntp server <ntp-server-ip>

NTP enabled devices from other vendors have their own parameters for
Internet time. Please check the documentation of the device if you want
to synchronize its time with the NTP server.

Conclusion

To sum up, NTP is a protocol that keeps the clocks across all your
hosts in sync. We have demonstrated how we can set up an NTP server,
and let NTP enabled devices synchronize their time against the server.
Hope this helps.

Tuesday, April 28, 2015

We can make a bootable USB flash drive containing more than one
operating system; such a drive is called a multiboot drive. You can make it
from Linux by using Multisystem. I will show you how to use Multisystem to
put Ubuntu, elementary OS, Fedora and Antergos on a 16 GB USB
drive.

Explanation: to install Multisystem, we need four commands (sketched below).
The first command tells Ubuntu to add the Multisystem Debian repository.
The second fetches the repository's verification key (to
make sure that the repository is correct). The third tells Ubuntu to
refresh its repository lists so the newly added Multisystem repository
will be used from now on. The last command installs Multisystem.
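A hedged reconstruction of those four commands, based on the repository published by the Multisystem project (verify the URL against the project's current instructions):

sudo apt-add-repository 'deb http://liveusb.info/multisystem/depot all main'
wget -q http://liveusb.info/multisystem/depot/multisystem.asc -O- | sudo apt-key add -
sudo apt-get update
sudo apt-get install multisystem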

Installing Multisystem and Dependencies

Burn Linux ISO Images To The Drive

Insert your USB drive to USB port. Mount it.

Open Multisystem.

The Multisystem main window will detect your USB drive. In this example,
I use a Kingston DataTraveler, so that is the device it shows. If it
doesn't detect your drive, mount it and click the reload
button.

Select Your Drive

Then select your drive and click Confirm button.

A dialog will appear saying that GRUB2 will be installed in the MBR.
This means Multisystem will install the bootloader onto your USB drive (not
your HDD). Click OK.

Confirm GRUB Installation

Now you see a blank Multisystem window like this.

Main Window

Click the disc icon below the "Select an .iso" section to open an ISO file.

Select ISO file.

Select ISO

A black window will appear. It is a Terminal asking you for your password. Enter it.

Multisystem Is Burning

The terminal will do the burning process. Wait until it is finished.

One Operating System Burnt into The Drive Successfully

Repeat points 7 to 10 for each additional ISO.

When you are done, Multisystem will show that your drive contains four operating systems, like this.

Multisystem Reads My Drive Contents

Bonus

Actually, when you install Multisystem, you install the QEMU dependencies
too. By using QEMU, you can try your USB drive without rebooting
or testing it on another machine. QEMU is a great virtualization
hypervisor, and it is relatively lighter than VirtualBox. Just go to the main
window > Boot tab > click Test your liveusb in QEMU. See the picture
below.
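If you want to test the drive manually instead of through Multisystem's menu, a hedged equivalent from the shell is to point QEMU at the USB device (assumed here to be /dev/sdb; double-check the device name with lsblk before running anything against it):

sudo qemu-system-x86_64 -m 1024 -hda /dev/sdb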