Security Watch

Rebecca Mercuri authors the featured "Security
Watch" column for the Communications of the Association for Computing Machinery.
Articles will be linked here in HTML (with interactive links) and PDF
(best for printing) formats as they appear. [RM Note: Links within the HTML
versions may no longer point to the material that was originally referenced
when these articles were published. If a link has moved, I would appreciate
readers letting me know its new location so I can provide an update; broken
links need not be reported.]

With the ubiquity of computer-based devices in everyday
use, forensic techniques are increasingly being applied to a broad range
of digital media and equipment, thus posing many challenges for experts as
well as for those who make use of their skills. This article draws on the
author's experience as a computer forensic investigator and expert witness
in addressing best practices, training, certification, toolset, and laboratory
issues in this rapidly expanding field.

Transparency is playing an increasingly important role
in the world of computer security. But as with many sociological interactions
with technology, an optimal balance is difficult to quantify. The consideration
of a trust-centric approach (as opposed to a vulnerability-based one) may
help achieve the transparency needed to ensure confidence and reduce perceived
(and perhaps even actual) risks in transactional experiences.

Digital multimedia (whether it be audio, video, or still
photography and art) is exposed to a broad spectrum of security problems,
and involves significant gray areas in terms of methods and laws. From the
standpoint of the media provider, protection of artistic content from unauthorized
distribution or modification is a primary concern. At the delivery end, recipients
want to ensure that downloads are virus-free and legitimately obtained.
This article juxtaposes the benefits and risks of various aspects of digital
rights management.

Deadlines for compliance with the Health Insurance Portability
and Accountability Act (HIPAA) have caused a major crunch for the computer
security industry. This hippopotamus-sized legislation, enacted in 1996,
consists of two major provisions: insurance reform (so that preexisting conditions
do not result in denial of coverage when one changes jobs); and administrative
simplification (intended to reduce health care costs through standardized
electronic transmission of transactions). HIPAA violations can carry fines
of up to $250,000 and jail time of up to 10 years, so you can bet that
organizations are taking this federal law very seriously.

Advances in high-performance computing (such as exponential
increases in computational speed, memory capacity, and bandwidth) have found
their counterpart in new security threats. Yet there is an interesting twist
in that computational expansion tends to be relatively predictable, whereas
security challenges are typically introduced and mitigated (when possible)
in a more chaotic fashion. It is useful, therefore, to consider some of
the impacts of scaled-up computing on our overall security environment.

Standards can play an important role in security by enforcing
baselines and enabling compatibility among products. In the best
of worlds, standards provide a neutral ground where methodologies are established
that advance the interests of manufacturers as well as consumers, while
providing assurances of safety and reliability. At the opposite extreme,
standards can be inappropriately employed to favor some vendors' products
over others, make competition costly, and encourage mediocrity over innovation,
all of which can have negative effects on security. This article considers
the current security standards environment and offers suggestions for
better understanding and improving it.

Author's Note: Astute readers pointed out the omission of some
well-known computer security-related standards groups from my table. Although
the original list was not intended to be comprehensive, I thought it would
be helpful to cite these additional ones here.

Quantification tools, if applied
prudently, can assist in the anticipation, budgeting, and control of direct
and indirect computer security costs. Such costs are often difficult to
assess, in part because accurate metrics have proven elusive. Of those costs
that can be measured, the largest in monetary terms typically involve theft
of proprietary information or financial fraud. Surveys of organizations
provide estimates of breach incidents, but without a way to translate
such statistics into expenditures and losses per organization, per computer,
or per user, the true impact of those figures remains uncertain.
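One widely used quantification approach (a general risk-assessment metric, not one drawn from the article or its surveys) is annualized loss expectancy. The figures in this sketch are purely illustrative:

```python
# Annualized Loss Expectancy (ALE): a standard risk-quantification metric.
# ALE = SLE * ARO, where SLE (Single Loss Expectancy) is the asset value
# multiplied by the fraction of it lost in one incident, and ARO is the
# expected number of incidents per year.

def annualized_loss_expectancy(asset_value, exposure_factor, annual_rate):
    """Estimate expected yearly loss from a given threat."""
    sle = asset_value * exposure_factor   # loss from a single incident
    return sle * annual_rate              # scaled by incidents per year

# Hypothetical example: proprietary data valued at $2M, 25% of its value
# lost per breach, one breach expected every four years.
ale = annualized_loss_expectancy(2_000_000, 0.25, 0.25)
print(ale)  # 125000.0
```

Such a figure is only as good as its inputs, which is precisely the difficulty the column identifies: exposure factors and incident rates are rarely known with confidence.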

Audit trails, whether computer-based
or manually produced, typically form a significant part of the front-line
defense for fraud detection and prevention within systems. Many of our
security practices revolve around the generation and preservation of authenticated
data streams that are to be perused routinely or periodically, as well
as in the event of a system attack, failure, or other investigation. But
these audit trail systems are not necessarily robust, since components
can be subverted or ignored. Furthermore, it is the surrounding controls,
or overriding design-and-use philosophies, that are often discovered to
be inadequate or circumvented.
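One common way to make such authenticated data streams tamper-evident (my illustration here, not a technique the column itself prescribes) is a hash chain, where each entry's digest covers the previous entry's digest, so altering any earlier record breaks every link after it:

```python
import hashlib

GENESIS = "0" * 64  # placeholder digest for the first entry

def append_entry(log, message):
    """Append (message, digest), chaining to the previous entry's digest."""
    prev = log[-1][1] if log else GENESIS
    digest = hashlib.sha256((prev + message).encode()).hexdigest()
    log.append((message, digest))

def verify_chain(log):
    """Recompute every digest; return False if any entry was altered."""
    prev = GENESIS
    for message, digest in log:
        if hashlib.sha256((prev + message).encode()).hexdigest() != digest:
            return False
        prev = digest
    return True

log = []
append_entry(log, "user alice logged in")
append_entry(log, "record 42 modified")
print(verify_chain(log))  # True
log[0] = ("user mallory logged in", log[0][1])  # tamper with an entry
print(verify_chain(log))  # False
```

Note that this only detects tampering within the log itself; it does nothing about the surrounding controls the column warns about, such as an attacker who can simply disable logging.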

Programming (and also secure
system design), as Donald Knuth so wisely pointed out decades ago, is as
much an art as it is a science, and perhaps even more so. As such, it should
be judged on Quality, and Quality often demands less, not more, in terms
of quantity. The larger the software, the more difficult it is to maintain,
assure, and protect; more code (or more hardware) therefore does not
necessarily translate to better Quality. Software engineering approaches
focusing on code review, development cycles, configuration management, and
so on add more complexity to the process and cannot, in themselves, ensure
Quality.

Rebecca Mercuri has also been a frequent contributor
to Peter Neumann's popular "Inside Risks" column in the Communications
of the Association for Computing Machinery. Some of her articles
that directly pertain to computer security are linked below; others can
be found via her electronic voting page.

The ISO Common Criteria identifies numerous dependencies
(if you implement X, you are required to implement Y and perhaps also
Z, and so on) among the items necessary to provide security assurance,
but it omits the specification of counterindications (if you implement J,
then you cannot implement K, and perhaps not L either). This flaw has
serious implications for applying the standard wherever counterindications
(such as the simultaneous requirement for anonymity and auditability in
certain voting systems) must be mitigated.
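The structure described above can be sketched as a simple consistency check. The requirement names mirror the paragraph's placeholder letters rather than actual Common Criteria components, and the conflict relation is precisely what the standard itself lacks:

```python
# Hypothetical requirement relations; names are placeholders, not real
# Common Criteria components.
DEPENDS = {"X": {"Y", "Z"}}     # implementing X requires Y and Z
CONFLICTS = {"J": {"K", "L"}}   # implementing J rules out K and L

def check(selected):
    """Return a list of violations for a proposed set of requirements."""
    problems = []
    for item in sorted(selected):
        # Unmet dependencies: required items missing from the selection.
        for dep in sorted(DEPENDS.get(item, set())):
            if dep not in selected:
                problems.append(f"{item} requires missing {dep}")
        # Counterindications: mutually exclusive items both selected.
        for bad in sorted(CONFLICTS.get(item, set())):
            if bad in selected:
                problems.append(f"{item} conflicts with {bad}")
    return problems

print(check({"X", "Y", "Z"}))  # []  -- all dependencies satisfied
print(check({"J", "K"}))       # ['J conflicts with K']
```

Because the standard specifies only the first relation, a profile that is formally compliant can still combine requirements that undermine one another, as in the voting-system example above.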

Permission to make digital or hard copies of all or part of these works
for personal or classroom use is granted without fee provided that copies
are not made or distributed for profit or commercial advantage and that
copies bear this notice and the full citation on the first page. To copy
otherwise, to republish, to post on servers or to redistribute to lists,
requires prior specific permission and/or a fee.