Y2K-001.txt
2 OBJECTIVES
=============
The objectives of this document are:
· Define key terms with regard to Year-2000 testing.
· Highlight normal process concerns as applied to Year-2000.
· Identify any additional concerns introduced by Year-2000.
· Help us retain our sanity in this "just-another-crisis" challenge.
2.1 SCOPE
==========
This document presents normal process issues with a focus on
Year-2000 testing. It discusses both the actual testing
process and test and facility planning. Year-2000 issues
essentially place one more barrier in the over-all process of
performing the usual QA work.
Additionally, for completeness, many design-related,
customer-support, and documentation-related issues are mentioned.
The intended audience is the test department, program management,
and customer support groups.
2.2 REFERENCE DOCUMENTS
========================
Beizer, Software Testing Techniques.
Brooks, The Mythical Man-Month.
Kaner, Falk, and Nguyen, Testing Computer Software.
2.3 DEFINITIONS
===============
SME Subject Matter Expert; a permanent full-time staff
member well versed in the operation of equipment,
systems, and processes in use at THE company, etc.
SUT System Under Test; i.e., the specific product and other
hardware, as well as the specific software, that is to be
delivered as part of this feature.
SWDLC Software Development Life Cycle (also SWLC); a process
model used commonly to describe the process of specifying,
developing, programming, debugging, testing, releasing,
and then supporting software-based products.
SWQA Software Quality Assurance (do not confuse with SWQC);
the over-all process of managing the development, testing,
and field-support of software products. Q/A is a MACRO process,
Q/C is a MICRO process. Q/A usually monitors the Q/C sub-processes
within each portion of the over-all product
development/support process.
SWQC Software Quality Control (do not confuse with SWQA);
the micro-process of controlling the software release
process.
In general, Q/C is the specific part of the development/support
process within each group. Thus, the micro-code (lowest
level of software) will have its own Q/C process that
will be radically different from the Q/C process that is
carried out in (e.g.) Sales and Service Support.
Testing The process whereby the degree of quality of a product
is assessed. The usual progression (once design is
finished) is: Unit Test, Function Test, Local
Integration Test, Systems Test, Full Integration Test,
Regression Test, Customer Acceptance Test. And then
release, and then repeat as required.
TC Test Case; a set of instructions written out in clear
and un-ambiguous terms that detail the steps to be
followed in testing a specific part of the operation
of the SUT. A test case, when run, produces a result
that is to be verified: if the actual results are
"as expected", then the TC is said to have "passed";
otherwise, it is said to have "failed".
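A minimal sketch of the pass/fail rule just described. The TestCase structure and names below are illustrative, not from any actual test harness in use here:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    steps: list       # the written-out, un-ambiguous instructions
    expected: str     # the result the TC is expected to produce

def run_result(tc: TestCase, actual: str) -> str:
    # The pass/fail rule from the definition: the TC "passes" only
    # when the actual result matches the expected result.
    return "passed" if actual == tc.expected else "failed"

tc = TestCase("TC-001 clock rollover",
              ["set clock to 1999-12-31 23:59:50", "wait 10 seconds",
               "read the date"],
              "2000-01-01")
print(run_result(tc, "2000-01-01"))   # passed
print(run_result(tc, "1900-01-01"))   # failed
```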
TP Test Plan; a document that is made up of background
information and a list of the test cases. Note: Test
Planning is a generic term for that part of the over-all
SWQA process that deals with the testing portions of the
QA process.
Y2K Year-2000; the programming problem brought about by
storing just the last two digits of the year number
- e.g., "69" for "1969" or "01" for "2001".
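A small sketch of the storage problem just defined, together with the common "windowing" repair. The pivot value of 50 is an assumption; each system picks its own:

```python
def expand_year(yy, pivot=50):
    """Map a stored two-digit year to four digits using a pivot window."""
    # Years at or above the pivot are taken as 19xx, below it as 20xx.
    return 1900 + yy if yy >= pivot else 2000 + yy

# Naive arithmetic on the raw two-digit values goes wrong at the boundary:
print(1 - 69)                               # -68: a nonsense "age"
print(expand_year(1) - expand_year(69))     # 32: 2001 - 1969
```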
3 INTRODUCTION
==============
This document details aspects of the normal SWDLC (Software
Development Life Cycle) as they pertain to the Year-2000
problems (Y2K for short). The so-called Y2K crisis is simply
one more aspect of the normal maintenance cycle of software
products. Admittedly, there is much hype concerning this issue,
and the problems are real. However, if one simply takes
Y2K as "just another headache" to be solved in the usual way,
then one's perspective is less prone to the panic that is much
hyped in the media.
Also note: The wording of this document carefully avoids the
use of the word "compliant" as in the mysterious
phrase "Y2K-compliant".
Compliance is an arbitrary concept - compliance is
with respect to some "standard". The ANSI standard
notwithstanding, in almost every case each
company will develop its own compliance policy.
Now, as long as it is understood that compliance
refers to a standard that has been agreed upon -
and not some sort of "guarantee" that a system that
is in compliance is now immune to problems -
then well and good. Otherwise, compliance is a
meaningless concept: there are no "actual"
standards for Y2K compliance, so the phrase by
itself means nothing.
This is especially true of interactions with other
companies - many of whom have a less-than-thorough
approach to their own "compliance" policy. In such
cases, we will find ourselves implementing
special input-filters to protect us from THEIR mistakes.
(Following the year 2000, there will probably be a
standards committee meeting that will produce an
actual, meaningful standard - at this point, our
guess is as good as anybody's - so let's
just make those "guesses" as good as possible.)
Therefore, the phrase that will be used is
"Y2K-compatible"
If you want to imagine some mystical silver bullet,
a magical elixir of "compliance" that will assure
absolutely zero defects, then please feel free to do
so. The main guarantee of success is:
"dogged, un-dying determination
alone is omnipotent"
(a paraphrase attributed to Calvin Coolidge).
Y2K panic aside, the key issues are:
· Routine and already in-place QC procedures (as well as
the over-all QA function) can be used to treat the
"Y2K problem" just like any other maintenance issue.
· The "crisis" will help to bring most programs into the
lime-light so that an assessment of their utility,
maintainability, and necessity can be focused upon.
· Y2K is simply another aspect of "problem awareness";
as with any product, there are product-specific issues
and concerns that must be addressed in the normal
maintenance cycle.
· The primary practical problem is that the date on the
computer must be changed several times (usually something
that isn't even considered); this causes the test setup
to be rather slow, since many such systems must be re-booted.
· Additional problems occur in the creation of test data to
verify proper operation in an un-controlled environment;
many systems interact with outside systems that may not have
been thoroughly tested. (Again, these issues are nothing new).
· The key "hot" dates must be identified and their impact
assessed; again, just one more headache to worry about.
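The "hot" dates in the last bullet can be captured as a simple checklist. The sketch below is illustrative, not an official schedule; it also spells out the full leap-year rule, since treating 2000 as a non-leap year was a frequent bug:

```python
import datetime

# Candidate hot dates worth exercising (illustrative list only):
HOT_DATES = [
    datetime.date(1999, 9, 9),    # "9/9/99" - sometimes used as a sentinel
    datetime.date(1999, 12, 31),  # last day before the rollover
    datetime.date(2000, 1, 1),    # the rollover itself
    datetime.date(2000, 2, 29),   # 2000 IS a leap year (divisible by 400)
    datetime.date(2000, 12, 31),  # day 366 - trips day-of-year counters
    datetime.date(2001, 1, 1),    # first ordinary new year afterward
]

def is_leap(year):
    # The full rule; the common Y2K-era bug was stopping after the
    # first clause and treating 2000 as a common year.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

for d in HOT_DATES:
    print(d.isoformat(), "leap year" if is_leap(d.year) else "common year")
```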
4 ROAD MAP -- What's going to be covered.
=========================================
First off, to establish a common point of reference, we will look
at a process model known as the "IPAFT-R" model. This outlines a
simple, adaptable model for the deployment of hardware and software
products. The idea here is to be able to discuss the standard sorts
of issues in a common frame-work.
Then we trace through each step of the IPAFT model, looking
at the relevant Y2K issues. We examine these from the points of
view of process-related as well as procedure-related problems,
with additional attention paid to documentation and training issues.
Note too that the "R" ("Release") in the IPAFT model is really a
combination of the review and support steps found in many
software development models.
5 YEAR 2000
============
5.1 The IPAFT-R Model
=====================
There are many "process" models for the development and release
of hardware and software systems. The following is a fairly
detailed summary of the so-called IPAFT-R (Identify, Prioritize,
Assess, Fix, Test - Release) model. This should cover all of
the bases, and the concepts presented here will of course be
compatible with those processes already in place.
IDENTIFY
========
The identify step consists of the "planning" or "getting
ready to do" stage of development. This includes the
standard sorts of things:
· Proposals; e.g., "Request for Change", "Work Orders", etc.
· Feasibility Study; e.g., "Proof of Concept", "Prototyping", etc.
· Meetings between Staff & Customer.
If a new system is to be created, the "feasibility" work will be
extensive, as will much of the development & testing.
In many cases, no new programs are actually written, only
modifications to existing "legacy" systems. In this case,
the concern is that of introducing side effects to the
existing functionality. As such, the emphasis will be on
regression and negative-impact testing (both in the design
and test groups).
Finally, the main problem encountered in the identify stage
is simply locating the source code, documentation, schematics,
etc. Often these were created without version or source-code
control. Many times, the only documentation is the expert
knowledge of the original designer. Getting that information
down on paper will obviously impact their on-going work.
(And Y2K does not make this any easier; especially if a large
number of components are found to be Y2K-incompatible).
PRIORITIZE
==========
Obviously, resources from on-going efforts will have to be
diverted (in full or in part) to the remediation work to be
done. It is during the "prioritization" step that the
action plans are drawn up. The usual functions are:
· Impact/Risk Assessment; "delaying vs. deploying",
"Actual Cost to Fix", etc.
· Business Impact Analysis; "market presence", "competitive
edge", etc.
· Criticality to Normal Operations; Critical, Major, High,
Low, Future Enhancement.
ASSESS
======
During this phase, the goal is to answer the question: "How bad
and how much?". The key is not to start fixing everything until
you have a clear idea of the interactions between the components
and a clear idea as to WHAT needs to be fixed.
The usual "process tools" are applicable at this point.
· Code Reviews & Walk Throughs
· Test Case Reviews; New vs Regression test cases.
· Tools Review/Assessment.
· Review of Documentation
· Release Planning; timing, press releases, damage control, &
the usual field-support issues, training of
customer-support staff, etc.
· Configuration Management Reviews; specifically, new-release
planning.
· Data and Program Flow Analysis (detailed view of the systems)
· Functional and Systems Analysis (more global, less detailed
view of the systems)
· Prototyping / Modeling
· Performance Analysis
· Dependency Charts, Flow Charts, Timing Diagrams
· Other diagrams: Warnier/Orr, HIPO, Burr, etc.
· Debuggers, Data Drivers, Test Equipment, Protocol Analyzers,
etc, etc, etc.
FIX
===
Finally, we can start "fixing" things. The key issue here is that
resources are limited. As such, there will be the need to share
these resources. The key concepts are:
· Optimizing Resources; schedulers, e-mail, working groups,
COMMUNICATE.
· Lab-time planning; New Installation, Upgrades, Routine
Maintenance.
· Project Management; Assign/Do/Report/Release.
· Configuration Management; Check-in/Check-out,
"Development Process"
· Establish the Exit Criteria; Number of open trouble reports,
make/break tests, etc.
· Unit Testing, White Box Testing, Systems Testing,
Local Integration Testing.
· Tools Development.
· Documentation Reviews; Technical docs, Field support docs,
updated customer support procedures,
new product literature, etc, etc, etc.
TEST
====
As the finished products become available from the design group, the
independent test and validation process begins in earnest.
(Prior to this, the test cases and the test equipment have been
under development). The usual stuff here:
· Establish Entrance Criteria; Open Room vs Clean Room,
Trouble Report Review,
fit-for-use criteria, etc.
· Make/Break Testing; ready-to-test assessment.
· Multi-function testing, Black Box Testing, Ease-of-use Assessment.
· System Integration & Full Integration Testing.
· Re-testing & Regression Testing.
· Fit for Use assessment.
· Customer Acceptance Testing.
· Beta Testing.
· Tools Development.
RELEASE
=======
At this point, the documentation is reviewed, the field-testing
(and often customer acceptance testing) occurs, and the field-support
and customer service people have one more feature to support.
A final "post-mortem" review should be done as well as the final
sigh of relief. Until the next project...
· Post Mortem; What worked? What didn't work?
How can we work smarter?
· New Documentation is shipped.
· New Product is shipped.
· New Support Documentation ready for field-support/customer
service.
· Pagers are issued.
· On to the next!
5.2 Identify
==============
For each software and hardware unit, the identify step accomplishes
three things:
· Separate the various components into identifiable units.
· Locate the source code and documentation.
· Note any dependencies between the components.
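As a rough sketch of the first two steps above, a crude inventory pass can grep source text for two-digit-year idioms. The patterns here are purely illustrative; a real inventory tool would be language-aware:

```python
import re

# Patterns that often flag two-digit-year handling (illustrative only):
SUSPECT = [
    re.compile(r"\byy\b"),       # a bare two-digit year variable
    re.compile(r'19"?\s*\+'),    # hard-coded "19" century prefix
    re.compile(r"%y\b"),         # strftime-style two-digit year
]

def flag_lines(source):
    """Yield (line_number, line) for lines matching a suspect pattern."""
    for number, line in enumerate(source.splitlines(), 1):
        if any(p.search(line) for p in SUSPECT):
            yield number, line.strip()

sample = 'date = "19" + yy\nprint(now.strftime("%Y-%m-%d"))\n'
print(list(flag_lines(sample)))    # flags only line 1
```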
During the identification phase, it is easy to over-look components
that we take for granted in normal day-to-day operations;
for example:
· The Operating System in use on both the Workstations and SUT's.
· Databases, Editors.
· E-mail.
· Voice-mail (PBX).
One very definite problem that MUST be addressed is that of the
so-called "software licenses". These are almost always
time-dependent, and when the date is rolled forward, they will
report that your license has expired --- and just imagine what
it will actually be like in 2000! Some of the issues are:
· When performing the tests, you will necessarily need to
roll the calendar forward - possibly causing the license
to expire. A call will need to be made to the software vendor.
· It is possible that the licensing algorithm itself is NOT
Y2K-compatible.
· When the calendars do click over on December 31st, the
software vendors are probably going to be VERY slow in
responding to requests for license extensions -- imagine
all of the customers calling at once on January 1, 2000!
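The second issue (a licensing algorithm that is itself not Y2K-compatible) can be sketched as follows; the license check shown is invented for illustration, not taken from any actual vendor's scheme:

```python
import datetime

def license_ok_naive(today_yy, expiry_yy):
    # Buggy two-digit comparison: a license good through "00" (2000)
    # is reported as already expired when today is "99" (1999).
    return today_yy <= expiry_yy

def license_ok(today, expiry):
    # Four-digit (full date) comparison behaves as intended.
    return today <= expiry

print(license_ok_naive(99, 0))   # False - wrongly reports "expired"
print(license_ok(datetime.date(1999, 12, 31),
                 datetime.date(2000, 6, 30)))   # True - still valid
```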
We now examine the two aspects (process & procedure) as they apply
to the Identification step.