ATF Jubilee Edition

To celebrate his 50th column, Reuven looks toward the future of web development and points to some current trends that will affect how the job gets done.

Welcome to the jubilee installment of At
the Forge. This is the 50th column that I have written for
Linux Journal (or for SSC's short-lived
Websmith magazine) since early 1996. Over the
last few years, we have explored a large number of Web-related
technologies, techniques and applications, ranging from simple CGI
programs to sophisticated database-backed applications written
using mod_perl.

This month, I want to spend a bit of time prognosticating,
looking into the future of web application development. On the one
hand, things have never been more exciting for web application
developers; the technology continues to advance at a remarkable
rate, making it easier and easier to create sophisticated
applications. On the other hand, the increasingly crowded field of
embedded programming languages, application servers and database
adaptors makes it harder to decide which technology is most
appropriate.

Because this column describes where I believe web
technologies and application development are headed in the coming
years, it should also serve as a sort of guideline for what future
issues of ATF will contain. You can think of this month's
installment as an indication of where my consulting firm is headed
professionally, and thus what you can expect me to suggest and
describe in the year (or more!) ahead. Since this is
Linux Journal and Linux is my company's
primary server platform, I will focus here on items that run with
Linux and, preferably, those that are free software.

Where Have We Been?

Web application development began soon after the Web itself
was formed. Ever since the first dynamically generated content was
sent to the first browser—an act which predates the CGI standard,
to say nothing of Netscape, Internet Explorer and
Apache—programmers have been designing increasingly sophisticated
applications for use on the Web.

CGI, or the “Common Gateway Interface”, soon arrived on the
scene. CGI got its name because dynamically generated content was
originally a means of giving a web interface to non-web applications.
With the advent of CGI, it was suddenly possible to create portable
server-side programs. Most web applications continue to be written
using CGI because of its simplicity and its near-total platform
independence, as well as the fact that web space providers can
give their clients CGI access without endangering the server's
stability.

You can write a CGI program for any web server, in any
language, on any operating system, and be virtually guaranteed that
it will work. However, CGI has a number of drawbacks. In
particular, it requires that the web server spawn a new process for
each HTTP request aimed at a CGI program. In other words, a web
site that receives 100 hits/minute is spawning more than one new
process every second.

By itself, this should not scare you. After all, a basic
Linux box should be able to handle the creation of one new process
each second, right? However, the size of the new process and the
speed with which it starts up are both important factors.

Perl, my programming language of choice for the last few
years, has proven itself as a powerful means for creating CGI
programs. The CGI.pm module provides an amazing array of functions
that do nearly everything you would ever want from a CGI program
(as well as a number of things that I would never consider doing).
Moreover, Perl includes a powerful pattern-matching engine, along
with modules that handle most popular Internet standards and
protocols. The DBI (database interface) module has proven to be an
additional boon, making it easy to include the output from an SQL
query in a dynamically generated page.
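
To give a sense of how these pieces fit together, here is a
short sketch that uses CGI.pm and DBI to display the results of an
SQL query in a dynamically generated page; the database name, login
and “people” table are invented for illustration:

#!/usr/bin/perl -w

use strict;
use CGI;
use DBI;

# Send the HTTP header before producing any output
my $query = new CGI;
print $query->header("text/html");

# Connect to a hypothetical MySQL database named "test"
my $dbh = DBI->connect("DBI:mysql:test", "username", "password")
    or die "Cannot connect: $DBI::errstr";

# Retrieve each row of a hypothetical "people" table and
# display the results as an HTML list
my $sth = $dbh->prepare("SELECT first_name, last_name FROM people");
$sth->execute;

print "<ul>\n";
while (my ($first_name, $last_name) = $sth->fetchrow_array)
{
    print "<li>$first_name $last_name</li>\n";
}
print "</ul>\n";

$dbh->disconnect;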

However robust, flexible and secure Perl might be, the CGI
standard was never designed for producing a large volume of
dynamically generated pages on the fly. Each invocation of a CGI
program written in Perl forces the computer to create a new
process, load Perl into memory, load your program into memory,
compile your program into Perl's internal opcodes and then,
finally, interpret it using Perl's run-time mechanism. This all
takes time and means that CGI programs will not scale well over the
long term. Indeed, it does not take a lot of concurrently running
CGI programs to bring a typical server to its knees.

At the same time, CGI has been successful because it's so
easy to use. With no other API can you write a “hello, world”
program as simple as the following short Perl version:
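
#!/usr/bin/perl -w

use strict;

# A CGI program prints a Content-type header, then a blank
# line, then the content itself
print "Content-type: text/plain\n\n";
print "Hello, world\n";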
