1.1 INTRODUCTION:

A system is a combination of different components that perform different functions. It is handled by users and administrators who have knowledge of and skill with that system.

1.2 SYSTEM:

The concept of an 'integrated whole' can also be stated in terms of a system embodying a set of relationships which are differentiated from relationships of the set to other elements, and from relationships between an element of the set and elements not a part of the relational regime.

- Systems have structure, defined by parts and their composition;
- Systems have behavior, which involves inputs, processing and outputs of material, energy or information;
- Systems have interconnectivity: the various parts of a system have functional as well as structural relationships between each other;
- Systems have by themselves functions or groups of functions.

1.3 CLASSIFICATION OF SYSTEMS:

Classification of systems can be done in many ways.

1.3.1 Physical or Abstract Systems

Physical systems are tangible entities that we can feel and touch. These may be static or dynamic in nature. For example, take a computer center. Desks and chairs are the static parts, which assist in the working of the center. Static parts don't change. Dynamic systems are constantly changing; computer systems are dynamic systems. Programs, data, and applications can change according to the user's needs.

Abstract systems are conceptual. These are not physical entities. They may be formulas, representations or models of a real system.

1.3.2 Open and Closed Systems

Systems interact with their environment to achieve their targets. Things that are not part of the system are environmental elements for the system. Depending upon the interaction with the environment, systems can be divided into two categories: open and closed.

Open systems: Systems that interact with their environment. Practically, most systems are open systems. An open system has many interfaces with its environment. It can also adapt to changing environmental conditions. It can receive inputs from, and deliver output to, the outside of the system. An information system is an example of this category.

Closed systems: Systems that don't interact with their environment. Closed systems exist in concept only.

1.3.3 Man-made Information Systems

The main purpose of information systems is to manage data for a particular organization. Maintaining files and producing information and reports are a few of their functions. An information system produces customized information depending upon the needs of the organization. These are usually formal, informal, or computer based.

Formal Information Systems: These deal with the flow of information from top management to lower management. Information flows in the form of memos, instructions, etc., but feedback can be given from lower authorities to top management.

Informal Information Systems: Informal systems are employee based. These are made to solve day-to-day work-related problems.

Computer-Based Information Systems: This class of systems depends on the use of computers for managing business applications.

1.3.4 Computer-Based System

A system of one or more computers and associated software with common storage is called a computer-based system. A computer is a programmable machine that receives input, stores and manipulates data, and provides output in a useful format. The computer elements described thus far are known as "hardware." A computer system has three parts: the hardware, the software, and the people who make it work.

1.3.5 Information Systems

An information system (IS) is any combination of information technology and people's activities using that technology to support operations, management, and decision-making. An information system deals with the data of an organization. The purposes of an information system are to process input, maintain data, produce reports, handle queries, handle on-line transactions, generate reports, and produce other output. These systems maintain huge databases, handle hundreds of queries, etc. The transformation of data into information is the primary function of an information system.

Information systems differ in their business needs. They also differ depending upon the different levels in an organization. The three major information systems are:

1. Transaction processing systems
2. Management information systems
3. Decision support systems

Figure 1.2 shows the relation of information systems to the levels of an organization. The information needs are different at different organizational levels. Accordingly, information can be categorized as: strategic information, managerial information and operational information.

Strategic information is the information needed by topmost management for decision making. For example, the trends in revenues earned by the organization are required by top management for setting the policies of the organization. This information is not required by the lower levels in the organization. The information systems that provide these kinds of information are known as Decision Support Systems.

Figure - Relation of information systems to levels of organization

The second category of information, required by middle management, is known as managerial information. The information required at this level is used for making short-term decisions and plans for the organization. Information like sales analysis for the past quarter or yearly production details falls under this category. A management information system (MIS) caters to such information needs of the organization. Due to their capability to fulfill the managerial information needs of the organization, Management Information Systems have become a necessity for all big organizations, and due to their vastness, most big organizations have separate MIS departments to look into related issues and the proper functioning of the system.

The third category of information relates to the daily or short-term information needs of the organization, such as attendance records of employees. This kind of information is required at the operational level for carrying out day-to-day operational activities. Due to its capability to provide information for processing the transactions of the organization, this kind of information system is known as a Transaction Processing System or Data Processing System. Some examples of information provided by such systems are processing of orders, posting of entries in a bank, and evaluating overdue purchase orders.

1.3.6 Transaction Processing Systems

A TPS processes the business transactions of the organization. A transaction can be any activity of the organization. Transactions differ from organization to organization. For example, take a railway reservation system: booking, cancelling, etc. are all transactions.

Any query made to it is a transaction. However, there are some transactions which are common to almost all organizations, like adding a new employee, maintaining employees' leave status, maintaining employees' accounts, etc.

A TPS provides high-speed and accurate processing of record keeping for basic operational processes. These include calculation, storage and retrieval. Transaction processing systems provide speed and accuracy, and can be programmed to follow the routine functions of the organization.

1.3.7 Management Information Systems

These systems assist lower management in problem solving and making decisions. They use the results of transaction processing and some other information as well. An MIS is a set of information processing functions. It should handle queries as quickly as they arrive. An important element of an MIS is the database. A database is a non-redundant collection of interrelated data items that can be processed through application programs and made available to many users.

1.3.8 Decision Support Systems

These systems assist higher management in making long-term decisions. These types of systems handle unstructured or semi-structured decisions. A decision is considered unstructured if there are no clear procedures for making it and if not all the factors to be considered in the decision can be readily identified in advance. Such decisions are not of a recurring nature; some recur infrequently or occur only once. A decision support system must be very flexible. The user should be able to produce customized reports by giving particular data and a format specific to particular situations.
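As an illustration (not from the text), the railway-reservation transactions mentioned above can be sketched as a minimal transaction processing routine. The class and method names here are invented for the example; a real TPS would add persistence, concurrency control and validation.

```python
# Minimal sketch of a TPS for the railway-reservation example.
# All names are illustrative, not a real reservation API.

class ReservationTPS:
    def __init__(self):
        self.bookings = {}   # record keeping: ticket_id -> passenger
        self.next_id = 1
        self.log = []        # every transaction is recorded

    def book(self, passenger):
        """Booking transaction: create a record and log it."""
        ticket_id = self.next_id
        self.next_id += 1
        self.bookings[ticket_id] = passenger
        self.log.append(("BOOK", ticket_id))
        return ticket_id

    def cancel(self, ticket_id):
        """Cancellation transaction: validate, remove the record, log it."""
        if ticket_id not in self.bookings:
            return False     # routine validation before any update
        del self.bookings[ticket_id]
        self.log.append(("CANCEL", ticket_id))
        return True

    def query(self, ticket_id):
        """A query made to the system is also a transaction."""
        self.log.append(("QUERY", ticket_id))
        return self.bookings.get(ticket_id)

tps = ReservationTPS()
t = tps.book("A. Kumar")
print(tps.query(t))      # -> A. Kumar
print(tps.cancel(t))     # -> True
```

Note how every operation, including a read-only query, appends to the log: this is the "accurate record keeping of basic operational processes" the text describes.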

1.4 SYSTEM ANALYSIS:

Systems analysis is the study of sets of interacting entities, including computer systems. This field is closely related to operations research. It is also "an explicit formal inquiry carried out to help someone, referred to as the decision maker, identify a better course of action."

Computers are fast becoming our way of life, and one cannot imagine life without computers in today's world. You go to a railway station for a reservation, you want to book a ticket for a cinema on a web site, you go to a library, or you go to a bank: you will find computers at all these places. Since computers are used in every possible field today, it becomes an important issue to understand and build these computerized systems in an effective way.

1.5 SOFTWARE ENGINEERING:

Software Engineering is the systematic approach to the development, operation and maintenance of software. Software Engineering is concerned with the development and maintenance of software products.

Software engineering (SE) is a profession dedicated to designing, implementing, and modifying software so that it is of higher quality, more affordable, maintainable, and faster to build. It is a "systematic approach to the analysis, design, assessment, implementation, test, maintenance and reengineering of software"; that is, the application of engineering to software.

The primary goal of software engineering is to provide quality software at low cost. Software Engineering involves project planning, project management, systematic analysis, design, validation and maintenance activities.

Every engineer wants a general scheme by which to develop software, so stepwise execution is necessary to develop good software. This stepwise discipline is what is called software engineering.

1.6 SYSTEM DESIGN:

Systems design is the process or art of defining the architecture, components, modules, interfaces, and data for a system to satisfy specified requirements. One could see it as the application of systems theory to product development. There is some overlap with the disciplines of systems analysis, systems architecture and systems engineering.

System design is divided into two types:

1.6.1 Logical Design

The logical design of a system pertains to an abstract representation of the data flows, inputs and outputs of the system. This is often conducted via modeling, which involves a simplistic (and sometimes graphical) representation of an actual system. In the context of systems design, modeling can take the following forms:


- Data flow diagrams
- Entity Life Histories
- Entity Relationship Diagrams
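As a hedged sketch (not part of the text), a data flow diagram like those listed above can itself be held as plain data, so that a logical-design model can be checked programmatically. The process and flow names below are invented for the example.

```python
# Hypothetical sketch: a data flow diagram represented as data
# structures. All node names are invented for illustration.

dfd = {
    "processes": ["Validate Order", "Update Stock"],
    "stores": ["Orders", "Inventory"],
    "flows": [
        ("Customer", "Validate Order"),    # external entity -> process
        ("Validate Order", "Orders"),      # process -> data store
        ("Validate Order", "Update Stock"),
        ("Update Stock", "Inventory"),
    ],
}

def endpoints(dfd):
    """Every node referenced by some flow."""
    nodes = set()
    for src, dst in dfd["flows"]:
        nodes.update((src, dst))
    return nodes

# A simple consistency check: each declared process must take part
# in at least one data flow.
unused = [p for p in dfd["processes"] if p not in endpoints(dfd)]
print(unused)   # -> []
```

Checks like this are one reason logical-design models are kept abstract: they describe flows between entities, processes and stores without committing to any physical input/output mechanism.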

1.6.2 Physical Design

The physical design relates to the actual input and output processes of the system. This is laid down in terms of how data is input into the system, how it is verified/authenticated, how it is processed, and how it is displayed as output.

Physical design, in this context, does not refer to the tangible physical design of an information system. To use an analogy, a personal computer's physical design involves input via a keyboard, processing within the CPU, and output via a monitor, printer, etc. It would not concern the actual layout of the tangible hardware, which for a PC would be a monitor, CPU, motherboard, hard drive, modems, video/graphics cards, USB slots, etc.

System design includes the following points:

- Requirements analysis - analyzes the needs of the end users or customers.
- Benchmarking - an effort to evaluate how current systems are used.
- Systems architecture - creates a blueprint for the design with the necessary specifications for the hardware, software, people and data resources. In many cases, multiple architectures are evaluated before one is selected.
- Design - designers will produce one or more 'models' of what they see the system eventually looking like, with ideas from the analysis section either used or discarded. A document will be produced with a description of the system, but nothing specific: they might say 'touch screen' or 'GUI operating system', but not mention any specific brands.
- Computer programming and debugging in the software world, or detailed design in the consumer, enterprise or commercial world - specifies the final system components.
- System testing - evaluates the system's actual functionality in relation to expected or intended functionality, including all integration aspects.

1.7 SYSTEM ANALYST:

The system analyst is the person (or persons) who guides the development of an information system. In performing these tasks the analyst must always match the information system objectives with the goals of the organization.

1.7.1 Role of the System Analyst:

The role of a system analyst differs from organization to organization. The most common responsibilities of a system analyst are the following:

1) System analysis: This includes studying the system in order to get facts about the business activity. It is about getting information and determining requirements. Here the responsibility includes only requirements determination, not the design of the system.

2) System analysis and design: Here, apart from the analysis work, the analyst is also responsible for designing the new system/application.

3) Systems analysis, design, and programming: Here the analyst is also required to perform as a programmer, actually writing the code to implement the design of the proposed application.

Due to the various responsibilities that a system analyst has to handle, he has to be a multifaceted person with the varied skills required at the various stages of the life cycle. In addition to the technical know-how of information system development, a system analyst should also have the following knowledge.

- Business knowledge: As the analyst might have to develop any kind of business system, he should be familiar with the general functioning of all kinds of businesses.
- Interpersonal skills: Such skills are required at various stages of the development process for interacting with the users and extracting the requirements from them.
- Problem solving skills: A system analyst should have enough problem-solving skill to define alternate solutions to the system and to handle the problems occurring at the various stages of the development process.

1.7.2 Tasks of the System Analyst:

The primary objective of any system analyst is to identify the needs of the organization by acquiring information by various means and methods. Information acquired by the analyst can be either computer based or manual. Collection of information is a vital step, as all the major decisions taken in the organization are indirectly influenced by it. The system analyst has to coordinate with the system users, computer programmers, managers and the many other people who are related to the use of the system. The following are the tasks performed by a system analyst:

Defining Requirements: The basic step for any system analyst is to understand the requirements of the users. This is achieved by various fact-finding techniques like interviewing, observation, questionnaires, etc. The information should be collected in such a way that it will be useful to develop a system which can provide additional features to the users beyond those desired.

Prioritizing Requirements: Many users use the system in the organization. Each one has a different requirement and retrieves different information. Due to certain limitations in computing capacity it may not be possible to satisfy the needs of all the users. Even if the computing capacity is good enough, it is necessary to prioritize tasks and update them as per the changing requirements. Hence it is important to create a list of priorities according to users' requirements. The best way to overcome the above limitations is to have a common formal or informal discussion with the users of the system. This helps the system analyst to arrive at a better conclusion.

Gathering Facts, Data and Opinions of Users: After determining the necessary needs and collecting useful information, the analyst starts the development of the system with active cooperation from the users of the system. From time to time, the users update the analyst with the necessary information for developing the system. The analyst, while developing the system, continuously consults the users and acquires their views and opinions.

Evaluation and Analysis: As the analyst maintains continuous contact with the users, he constantly changes and modifies the system to make it better and more user friendly.

Solving Problems: The analyst must provide alternate solutions to the management and should do an in-depth study of the system to avoid future problems. The analyst should provide some flexible alternatives to the management which will help the manager pick the system which provides the best solution.

Drawing Specifications: The analyst must draw up certain specifications which will be useful for the manager. The analyst should lay out specifications which can be easily understood by the manager, and they should be purely non-technical. The specifications must be detailed and in well-presented form.

1.7.3 Attributes of a System Analyst:

A system analyst (SA) analyzes the organization and design of businesses, government departments, and non-profit organizations; he also assesses business models and their integration with technology.

There are at least four tiers of business analysis:

1. Planning strategically - the analysis of the organization's strategic business needs.
2. Operating/business model analysis - the definition and analysis of the organization's policies and market business approaches.
3. Process definition and design - the business process modeling (often developed through process modeling and design).
4. IT/technical business analysis - the interpretation of business rules and requirements for technical systems (generally IT).

Within the systems development life cycle (SDLC) domain, the business analyst typically performs a liaison function between the business side of an enterprise and the providers of services to the enterprise. Common alternative roles in the IT sector are business analyst, systems analyst, and functional analyst, although some organizations may differentiate between these titles and the corresponding responsibilities.

1.7.4 Skills Required of a System Analyst:

The interpersonal skills are as follows:

1. Communication: It is an interpersonal quality; the system analyst must have command of the English language. Communication is necessary to establish a proper relationship between the system analyst and the user. Communication is needed to gather correct information and to establish problem-solving ideas in front of the management.

2. Understanding: This is also an interpersonal quality of the system analyst. Understanding includes:
- Understanding of the objectives of the organization.
- Understanding the problems of the system.

- Understanding the information given by the user or employee of the organization.

3. Selling: The ideas of the system analyst are his products, which he sells to the manager of a particular organization. The system analyst must have not only the ability to create ideas but also the ability to sell them.

4. Teaching: This is also an interpersonal quality. A system analyst must have teaching skills. He must have the ability to teach team members and the users. He has to teach about the new system and also about the proper use of the new system.

5. New technology: An analyst is an agent of change. He or she must have the ability to show all the benefits of the candidate system with the new technological advancements, and he must know about e-mail, the Internet, advanced graphics, server-based networking, network technology, etc.

1.8 SUMMARY

This chapter covered systems, their constituent factors and their impact on the surroundings. Systems are divided into different types, and they perform various functions. A system analyst can handle every type of system.

Questions:
1. What is a system? Explain the classification of systems.
Ans: Refer to 1.2 and 1.3.
2. Explain the skills of a system analyst.
Ans: Refer to 1.7.4.

2.1 INTRODUCTION:

SDLC stands for System Development Life Cycle. It includes guidance, policies, and procedures for developing systems throughout their life cycle, including requirements, design, implementation, testing, deployment, operations, and maintenance.

2.2 THE SYSTEMS DEVELOPMENT LIFE CYCLE (SDLC), OR SOFTWARE DEVELOPMENT LIFE CYCLE:

In systems engineering, information systems and software engineering, the SDLC is the process of creating or altering systems, and the models and methodologies that people use to develop these systems. The concept generally refers to computer or information systems.

The Systems Development Life Cycle (SDLC) is a process used by a systems analyst to develop an information system, including requirements, validation, training, and user (stakeholder) ownership. Any SDLC should result in a high-quality system that meets or exceeds customer expectations, reaches completion within time and cost estimates, works effectively and efficiently in the current and planned Information Technology infrastructure, and is inexpensive to maintain and cost-effective to enhance.

For example, computer systems are complex and often (especially with the recent rise of Service-Oriented Architecture) link multiple traditional systems potentially supplied by different software vendors. To manage this level of complexity, a number of SDLC models have been created: "waterfall"; "fountain"; "spiral"; "build and fix"; "rapid prototyping"; "incremental"; and "synchronize and stabilize".

The systems development life cycle (SDLC) is a type of methodology used to describe the process for building information systems, intended to develop information systems in a very deliberate, structured and methodical way, reiterating each stage of the life cycle.

2.2.1 System Development Phases:

The Systems Development Life Cycle (SDLC) adheres to important phases that are essential for developers, such as planning, analysis, design, and implementation, which are explained in the section below. Several Systems Development Life Cycle models exist, the oldest of which, originally regarded as "the Systems Development Life Cycle", is the waterfall model: a sequence of stages in which the output of each stage becomes the input for the next. These stages generally follow the same basic steps, but many different waterfall methodologies give the steps different names, and the number of steps seems to vary between four and seven.

2.2.2 SDLC Phases Diagram:

Diagram: the SDLC phase cycle - Problem Definition, Analysis, Design, Implementation, Validation, Evaluation.
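The waterfall idea above, where the output of each stage becomes the input for the next, can be sketched as a simple pipeline. The phase functions below are placeholders invented for the illustration, not a real methodology.

```python
# Sketch of the waterfall idea: each phase's output feeds the next
# phase. Phase functions are placeholders for illustration only.

def problem_definition(idea):  return {"problem": idea}
def analysis(spec):            return {**spec, "requirements": ["req-1"]}
def design(spec):              return {**spec, "modules": ["ui", "db"]}
def implementation(spec):      return {**spec, "code": "..."}
def validation(spec):          return {**spec, "tested": True}
def evaluation(spec):          return {**spec, "accepted": True}

PHASES = [problem_definition, analysis, design,
          implementation, validation, evaluation]

artifact = "online reservation system"
for phase in PHASES:           # output of each stage becomes the next input
    artifact = phase(artifact)

print(artifact["tested"], artifact["accepted"])   # -> True True
```

The strict ordering of the list is the point: in a pure waterfall, no phase runs until the previous one has produced its artifact, which is also why later SDLC models reintroduce iteration.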

2.2.3 Explanation of the SDLC Phases:

Requirements gathering and analysis:
The goal of system analysis is to determine where the problem is, in an attempt to fix the system. This step involves "breaking down" the system into different pieces to analyze the situation, analyzing project goals, "breaking down" what needs to be created, and attempting to engage users so that definite requirements can be defined (decomposition, in the computer-science sense). Requirements gathering sometimes requires individuals/teams from the client as well as the service provider side to get detailed and accurate requirements.

Design:
In systems design, functions and operations are described in detail, including screen layouts, business rules, process diagrams and other documentation. The output of this stage will describe the new system as a collection of modules or subsystems.

The design stage takes as its initial input the requirements identified in the approved requirements document. For each requirement, a set of one or more design elements will be produced as a result of interviews, workshops, and/or prototype efforts. Design elements describe the desired software features in detail, and generally include functional hierarchy diagrams, screen layout diagrams, tables of business rules, business process diagrams, pseudocode, and a complete entity-relationship diagram with a full data dictionary. These design elements are intended to describe the software in sufficient detail that skilled programmers may develop the software with minimal additional design input.

Build or coding:
Modular and subsystem programming code will be accomplished during this stage. Unit testing and module testing are done in this stage by the developers. This stage is intermingled with the next, in that individual modules will need testing before integration into the main project.

Testing:
The code is tested at various levels in software testing. Unit, system and user acceptance testing are often performed. This is a grey area, as many different opinions exist as to what the stages of testing are and how much, if any, iteration occurs. Iteration is not generally part of the waterfall model, but some usually occurs at this stage. Below are the types of testing:

- Data set testing
- Unit testing
- System testing
- Integration testing
- Black box testing
- White box testing
- Regression testing
- Automation testing
- User acceptance testing
- Performance testing
- Production testing

Definition: testing is a process that ensures that the program performs the intended task.

Operations and maintenance:
The deployment of the system includes changes and enhancements before the decommissioning or sunset of the system. Maintaining the system is an important aspect of the SDLC. As key personnel change positions in the organization, new changes will be implemented, which will require system updates.

2.2.4 SDLC Phases with Management Control:

The Systems Development Life Cycle (SDLC) phases serve as a programmatic guide to project activity and provide a flexible but consistent way to conduct projects to a depth matching the scope of the project. Each of the SDLC phase objectives is described in this section with key deliverables, a description of recommended tasks, and a summary of related control objectives for effective management. It is critical for the project manager to establish and monitor control objectives during each SDLC phase while executing projects. Control objectives help to provide a clear statement of the desired result or purpose and should be used throughout the entire SDLC process. Control objectives can be grouped into major categories (domains), and relate to the SDLC phases as shown in the figure.

To manage and control any SDLC initiative, each project will be required to establish some degree of a Work Breakdown Structure (WBS) to capture and schedule the work necessary to complete the project. The WBS and all programmatic material should be kept in the Project Description section of the project notebook. The WBS format is mostly left to the project manager to establish in a way that best describes the project work. There are some key areas that must be defined in the WBS as part of the SDLC policy. The following diagram describes three key areas that will be addressed in the WBS in a manner established by the project manager.


Diagram: SDLC Phases Related to Management Controls

2.2.5 Advantages of the SDLC Model:

With an SDLC model, developers will have a clear idea of what should or shouldn't be built. Since they already have an idea of the problems that should be answered, a detailed plan can be created following a certain SDLC model. With an SDLC model, developers can even create a program that will answer different problems at the same time. Since everything is laid out before a single line of code is written, the goal is clear and can be implemented on time. Although there is a great possibility of deviation from the plan, a good project manager will take care of that concern.

With an SDLC model, programs built will have clear documentation of development, structure and even coding. In case there are problems once the program is adopted for public use, developers will always have the documentation to refer to when they need to look for any loopholes. Instead of testing it over and over again, which would stop the implementation for a while, developers can just look at the documentation and perform a proper maintenance program. This means the SDLC will breathe more life into the program. Instead of frustrating developers with guesswork when something goes wrong, the SDLC will make sure everything goes smoothly. It will also be a tool for maintenance, ensuring the program created will last for a long time.

2.2.6 Disadvantages of the SDLC Model:

Thinking about the disadvantages of an SDLC model is like looking for a needle in a haystack. But the closest disadvantage anyone could think of for the SDLC is the difference between what is written on paper and what is actually implemented. There are things that happen in the actual work that the paper doesn't see. This gives a good impression to the clients, especially for third-party developers, but when the software is actually launched it may be in a very bad situation. The actual situation of software development can be covered up by the fancy paperwork of the SDLC.

Another disadvantage of a program or software that follows the SDLC is that it encourages stiff implementation instead of pushing for creativity in the software. Although there are SDLC models where programmers can apply their creative juices, it is always within the realm of what is needed, instead of freely implementing what the developers think necessary in the present environment. There are so many things that could be done by developers if there were no boundaries or limitations on what should be developed.

2.3 WORK BREAKDOWN STRUCTURE ORGANIZATION:

The upper section of the Work Breakdown Structure (WBS) should identify the major phases and milestones of the project in a summary fashion. In addition, the upper section should provide an overview of the full scope and timeline of the project and will be part of the initial project description effort leading to project approval. The middle section of the WBS is based on the seven Systems Development Life Cycle (SDLC) phases as a guide for WBS task development. The WBS elements should consist of milestones and tasks, as opposed to activities, and have a definitive period (usually two weeks or more). Each task must have a measurable output (e.g. document, decision, or analysis). A WBS task may rely on one or more activities (e.g. software engineering, systems engineering) and may require close coordination with other tasks, either internal or external to the project. Any part of the project needing support from contractors should have a statement of work (SOW) written to include the appropriate tasks from the SDLC phases.
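A WBS of milestones and tasks with measurable outputs could be held as a small tree structure and checked against the policy above (tasks of two weeks or more, each with a measurable output). This is a hedged sketch with invented phase and task names.

```python
# Hedged sketch: a Work Breakdown Structure as a nested mapping of
# phases -> tasks, each task carrying a measurable output and a
# duration in weeks. All names are invented for illustration.

wbs = {
    "Analysis": [
        {"task": "Gather requirements", "output": "requirements document", "weeks": 3},
    ],
    "Design": [
        {"task": "Draft architecture", "output": "architecture decision", "weeks": 2},
        {"task": "Data model", "output": "ER diagram", "weeks": 2},
    ],
}

def total_weeks(wbs):
    """Schedule roll-up across all phases."""
    return sum(t["weeks"] for tasks in wbs.values() for t in tasks)

def violations(wbs, minimum=2):
    """Tasks shorter than the policy minimum or lacking a measurable output."""
    return [t["task"] for tasks in wbs.values() for t in tasks
            if t["weeks"] < minimum or not t.get("output")]

print(total_weeks(wbs))    # -> 7
print(violations(wbs))     # -> []
```

Keeping the WBS as data rather than a drawing makes the two policy rules in the text (definitive period, measurable output) mechanically checkable.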


For example, the following diagram shows an example of a product work breakdown structure of an aircraft system.

2.4 ITERATIVE AND INCREMENTAL DEVELOPMENT MODEL:

Iterative and incremental development is at the heart of a cyclic software development process developed in response to the weaknesses of the waterfall model. It starts with initial planning and ends with deployment, with cyclic interactions in between. Iterative and incremental development is an essential part of the Rational Unified Process, Extreme Programming and, generally, the various agile software development frameworks.

Diagram: An iterative development model

2.4.1 Iterative/Incremental Development

Incremental development slices the system functionality intoincrements (portions). In each increment, a slice of functionality isdelivered through cross-discipline work, from the requirements tothe deployment. The unified process groups increments/iterationsinto phases: inception, elaboration, construction, and transition.

- Inception identifies project scope, risks, and requirements (functional and non-functional) at a high level, but in enough detail that work can be estimated.
- Elaboration delivers a working architecture that mitigates the top risks and fulfills the non-functional requirements.
- Construction incrementally fills in the architecture with production-ready code produced from analysis, design, implementation, and testing of the functional requirements.
- Transition delivers the system into the production operating environment.
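The increment-per-iteration idea described above can be sketched as a loop that takes one slice of functionality through all disciplines per cycle. The feature names below are invented for the example.

```python
# Sketch of incremental delivery: each iteration carries one slice of
# functionality from requirements through deployment. Illustrative only.

backlog = ["login", "search", "booking", "reports"]   # invented features
delivered = []

iteration = 0
while backlog:
    iteration += 1
    feature = backlog.pop(0)          # one increment per cycle
    # each increment is cross-discipline work, requirements to deployment
    for discipline in ("requirements", "design", "implementation",
                       "testing", "deployment"):
        pass                          # placeholder for the real work
    delivered.append((iteration, feature))

print(delivered)
# -> [(1, 'login'), (2, 'search'), (3, 'booking'), (4, 'reports')]
```

Unlike the waterfall pipeline, every cycle here produces a deployable slice, which is what lets risks surface early and requirements change between iterations.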

Diagram: Iterative/Incremental Development

2.5 EXTREME PROGRAMMING:

Extreme Programming (XP) is a software development methodology which is intended to improve software quality and responsiveness to changing customer requirements. As a type of agile software development, it advocates frequent "releases" in short development cycles (time boxing), which is intended to improve productivity and introduce checkpoints where new customer requirements can be adopted.

Rules for Extreme Programming: Planning, Managing, Coding, Designing, Testing.

2.5.1 Goals of the Extreme Programming Model:

Extreme Programming Explained describes Extreme Programming as a software development discipline that organizes people to produce higher-quality software more productively. In traditional system development methods (such as SSADM or the waterfall model) the requirements for the system are determined at the beginning of the development project and are often fixed from that point on. This means that the cost of changing the requirements at a later stage (a common feature of software engineering projects) will be high. Like other agile software development methods, XP attempts to reduce the cost of change by having multiple short development cycles, rather than one long one. In this doctrine, changes are a natural, inescapable and desirable aspect of software development projects, and should be planned for, instead of attempting to define a stable set of requirements.

2.6 RAD MODEL:

It stands for the Rapid Application Development model. Rapid Application Development (RAD) refers to a type of software development methodology that uses minimal planning in favour of rapid prototyping. The "planning" of software developed using RAD is interleaved with writing the software itself. The lack of extensive pre-planning generally allows software to be written much faster, and makes it easier to change requirements.

Rapid Application Development is a software development methodology that involves techniques like iterative development and software prototyping. According to Whitten (2004), it is a merger of various structured techniques, especially data-driven Information Engineering, with prototyping techniques to accelerate software systems development.

2.6.1 Practical Application of RAD Model:

When organizations adopt rapid development methodologies, care must be taken to avoid role and responsibility confusion and communication breakdown within the development team, and between the team and the client. In addition, especially in cases where the client is absent or not able to participate with authority in the development process, the system analyst should be endowed with this authority on behalf of the client to ensure appropriate prioritisation of non-functional requirements. Furthermore, no increment of the system should be developed without a thorough and formally documented design phase.

2.6.2 Advantages of the RAD methodology:

1. Flexible and adaptable to changes.
2. Prototyping applications give users a tangible description from which to judge whether critical system requirements are being met by the system. Report output can be compared with existing reports. Data entry forms can be reviewed for completeness of all fields, navigation, and data access (drop-down lists, checkboxes, radio buttons, etc.).
3. RAD generally incorporates short development cycles, so users see the RAD product quickly.
4. RAD involves user participation, thereby increasing the chances of early user community acceptance.
5. RAD realizes an overall reduction in project risk.
6. Pareto's 80-20 rule usually results in reduced costs to create a custom system.

2.6.3 Disadvantages of the RAD methodology:

1. Unknown cost of the product. As mentioned above, this problem can be alleviated by the customer agreeing to a limited amount of rework in the RAD process.
2. It may be difficult for many important users to commit the time required for success of the RAD process.

2.7 UNIFIED PROCESS MODEL:

The Unified Software Development Process, or Unified Process, is a popular iterative and incremental software development process framework. The best-known and most extensively documented refinement of the Unified Process is the Rational Unified Process (RUP).

Diagram: Profile of a typical project showing the relative sizes of the four phases of the Unified Process

The Unified Process is not simply a process, but rather an extensible framework which should be customized for specific organizations or projects. The Rational Unified Process is, similarly, a customizable framework. As a result it is often impossible to say whether a refinement of the process was derived from UP or from RUP, and so the names tend to be used interchangeably.

2.7.1 Characteristics:

Iterative and Incremental

The Unified Process is an iterative and incremental development process. The Elaboration, Construction and Transition phases are divided into a series of time-boxed iterations. (The Inception phase may also be divided into iterations for a large project.) Each iteration results in an increment, which is a release of the system that contains added or improved functionality compared with the previous release.

Although most iterations will include work in most of the process disciplines (e.g. Requirements, Design, Implementation, Testing), the relative effort and emphasis will change over the course of the project.

2.8 USE CASE DRIVEN

In the Unified Process, use cases are used to capture the functional requirements and to define the contents of the iterations. Each iteration takes a set of use cases or scenarios from requirements all the way through implementation, test and deployment.

2.8.1 Architecture Centric:

The Unified Process insists that architecture sit at the heart of the project team's efforts to shape the system. Since no single model is sufficient to cover all aspects of a system, the Unified Process supports multiple architectural models and views.

One of the most important deliverables of the process is the executable architecture baseline, which is created during the Elaboration phase. This partial implementation of the system serves to validate the architecture and act as a foundation for the remaining development.

2.8.2 Risk Focused:

The Unified Process requires the project team to focus on addressing the most critical risks early in the project life cycle. The deliverables of each iteration, especially in the Elaboration phase, must be selected in order to ensure that the greatest risks are addressed first.

The Unified Process divides the project into four phases:

Inception

Elaboration

Construction

Transition

2.8.3 Inception Phase:

Inception is the smallest phase in the project, and ideally it should be quite short. If the Inception phase is long, it may be an indication of excessive up-front specification, which is contrary to the spirit of the Unified Process. The following are typical goals for the Inception phase:

Establish a justification or business case for the project

Establish the project scope and boundary conditions

Outline the use cases and key requirements that will drive the design tradeoffs

Outline one or more candidate architectures

Identify risks

Prepare a preliminary project schedule and cost estimate

The Lifecycle Objective Milestone marks the end of the Inception phase.

2.8.4 Elaboration Phase:

During the Elaboration phase the project team is expected to capture a healthy majority of the system requirements. However, the primary goals of Elaboration are to address known risk factors and to establish and validate the system architecture. Common processes undertaken in this phase include the creation of use case diagrams, conceptual diagrams (class diagrams with only basic notation) and package diagrams (architectural diagrams).

The architecture is validated primarily through the implementation of an Executable Architecture Baseline. This is a partial implementation of the system which includes the core, most architecturally significant components. It is built in a series of small, time-boxed iterations. By the end of the Elaboration phase the system architecture must have stabilized, and the executable architecture baseline must demonstrate that the architecture will support the key system functionality and exhibit the right behavior in terms of performance, scalability and cost.

The final Elaboration phase deliverable is a plan (including cost and schedule estimates) for the Construction phase. At this point the plan should be accurate and credible, since it should be based on the Elaboration phase experience and since significant risk factors should have been addressed during the Elaboration phase. The Lifecycle Architecture Milestone marks the end of the Elaboration phase.

2.8.5 Construction Phase:

Construction is the largest phase in the project. In this phase the remainder of the system is built on the foundation laid in Elaboration. System features are implemented in a series of short, time-boxed iterations. Each iteration results in an executable release of the software. It is customary to write full-text use cases during the Construction phase, and each one becomes the start of a new iteration. Common UML (Unified Modelling Language) diagrams used during this phase include Activity, Sequence, Collaboration, State (Transition) and Interaction Overview diagrams. The Initial Operational Capability Milestone marks the end of the Construction phase.

2.8.6 Transition Phase:

The final project phase is Transition. In this phase the system is deployed to the target users. Feedback received from an initial release (or initial releases) may result in further refinements to be incorporated over the course of several Transition phase iterations. The Transition phase also includes system conversions and user training.

2.9 EVOLUTIONARY SOFTWARE PROCESS MODEL:

Software products can be perceived as evolving over a period of time. However, neither the Linear Sequential Model nor the Prototype Model applies this aspect to software production. The Linear Sequential Model was designed for straight-line development. The Prototype Model was designed to assist the customer in understanding requirements and to produce a visualization of the final system.

The Evolutionary Models, by contrast, take the concept of evolution into the engineering paradigm. Evolutionary Models are therefore iterative. They are built in a manner that enables software engineers to develop increasingly more complex versions of the software.

2.9.1 The Incremental Model:

The Incremental Model combines elements of the Linear Sequential Model (applied repetitively) with the iterative philosophy of prototyping. When an Incremental Model is used, the first increment is often the core product. The subsequent iterations add the supporting functionalities or the add-on features that the customer would like to see. More specifically, the product is designed, implemented and tested as a series of incremental builds until it is finished.

2.10 THE SPIRAL MODEL:

The Spiral Model is an evolutionary software process model that couples the iterative nature of prototyping with the controlled and systematic aspects of the Linear Sequential Model. Using the Spiral Model, the software is developed in a series of incremental releases. Unlike the Incremental Model, wherein the first product is a core product, in the Spiral Model the early iterations could result in a paper model or a prototype; during later iterations, more complex functionalities could be added.

A Spiral Model combines the iterative nature of prototyping with the controlled and systematic aspects of the Waterfall Model, thereby providing the potential for rapid development of incremental versions of the software. A Spiral Model is divided into a number of framework activities, also called task regions. These task regions could vary from 3 to 6 in number, and they are:

Customer Communication - tasks required to establish effective communication between the developer and customer.

Planning - tasks required to define resources, timelines and other project-related information/items.

Risk Analysis - tasks required to assess the technical and management risks.

Engineering - tasks required to build one or more representations of the application.

Construction & Release - tasks required to construct, test and support the application (e.g. documentation and training).

Customer Evaluation - tasks required to obtain periodic customer feedback so that there are no last-minute surprises.
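The flow around the spiral's task regions can be sketched as a simple loop. This is an illustrative Python sketch only: the region names come from the text, while the `spiral` function and its return shape are invented for the example.

```python
# Hypothetical sketch of the spiral model: each pass around the spiral
# visits every task region once, producing an increasingly complete
# version of the product (a paper model or prototype in early cycles,
# more complex functionality in later ones).
TASK_REGIONS = [
    "customer communication",
    "planning",
    "risk analysis",
    "engineering",
    "construction and release",
    "customer evaluation",
]

def spiral(cycles):
    """Visit every task region once per pass around the spiral."""
    history = []
    for cycle in range(1, cycles + 1):
        for region in TASK_REGIONS:
            history.append((cycle, region))
    return history

history = spiral(2)
print(len(history))  # 12: six task regions in each of two cycles
```

The point of the sketch is that the same six activities recur on every cycle; what changes between cycles is the maturity of the product they act on.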
2.10.1 Advantages of the Spiral Model:

It is a realistic approach to development, because the software evolves as the process progresses. In addition, the developer and the client better understand and react to risks at each evolutionary level.

The model uses prototyping as a risk reduction mechanism and allows for the development of prototypes at any stage of the evolutionary development.

It maintains a systematic stepwise approach, like the classic waterfall model, but also incorporates an iterative framework that more closely reflects the real world.

2.10.2 Disadvantages of the Spiral Model:

One should possess considerable risk-assessment expertise.

It has not been employed as much as proven models (e.g. the Waterfall Model) and hence may prove difficult to sell to the client.

2.11 CONCURRENT DEVELOPMENT MODEL:

The concurrent development model is sometimes called concurrent engineering. The concurrent process model can be represented schematically as a series of major technical activities, tasks, and their associated states. For example, the engineering activity defined for the spiral model is accomplished by invoking the following tasks: prototyping and/or analysis modeling, requirements specification, and design.

The activity analysis may be in any one of the states noted at any given time. Similarly, other activities (e.g., design or customer communication) can be represented in an analogous manner. All activities exist concurrently but reside in different states. For example, early in a project the customer communication activity has completed its first iteration and exists in the awaiting changes state. The analysis activity (which existed in the none state while initial customer communication was completed) now makes a transition into the under development state. If, however, the customer indicates that changes in requirements must be made, the analysis activity moves from the under development state into the awaiting changes state.
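The state behavior in this example can be sketched as a tiny state machine. The state names ("none", "under development", "awaiting changes") come from the text; the `Activity` class, the event names, and the transition rules are simplified assumptions made for illustration.

```python
# Minimal sketch of the concurrent model's idea that every activity
# always resides in some state, and events move it between states.
class Activity:
    """One technical activity of the concurrent process model."""

    def __init__(self, name):
        self.name = name
        self.state = "none"

    def handle(self, event):
        # Simplified transition rules for the events mentioned in the text.
        if event == "start" and self.state == "none":
            self.state = "under development"
        elif event == "iteration complete":
            self.state = "awaiting changes"
        elif event == "requirements changed" and self.state == "under development":
            self.state = "awaiting changes"

communication = Activity("customer communication")
analysis = Activity("analysis")

communication.handle("start")
communication.handle("iteration complete")  # first iteration done
analysis.handle("start")
print(analysis.state)        # under development
analysis.handle("requirements changed")
print(analysis.state)        # awaiting changes
print(communication.state)   # awaiting changes
```

Both activities exist concurrently throughout; only their states differ, which is exactly the scenario the paragraph above walks through.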


2.12 SUMMARY:

This chapter is concerned with systems and their development models. A system can be developed following different approaches, with the help of the different types of models described above.

Questions:

1. Explain SDLC in detail.
Ans: refer 2.2.4

2. Explain the incremental and iterative models in detail.
Ans: refer 2.4

3.1 INTRODUCTION:

A system is concerned with various factors. A system requires internal and external information/data for its processing functions.

3.2 SYSTEM ANALYSIS:

In this phase, the current system is studied in detail. The person responsible for the analysis of the system is known as the analyst. In system analysis, the analyst conducts the following activities.

3.2.1 Needs System Analysis:

This activity is also known as requirements analysis. In this step the analyst gathers the requirements of the system from the users and the managers. The developed system should satisfy these requirements during the testing phase.

3.2.2 Data Gathering:

In this step, the system analyst collects data about the system to be developed. He uses different tools and methods, depending on the situation. These are:

3.2.3 Written Documents:

The analyst may collect information/data from the written documents available in the manual files of an organization. This method of data gathering is normally used if you want to computerize an existing manual system or upgrade an existing computer-based system. The written documents may be reports, forms, memos, business plans, policy statements, organizational charts and many others. Written documents provide valuable information about the existing system.

3.2.4 Interviews:

The interview is another data gathering technique. The analyst (or project team members) interviews managers, users/clients, suppliers, and competitors to collect information about the system. It must be noted that the questions asked should be precise, relevant and to the point.

3.2.5 Questionnaires:

Questionnaires are feedback forms used to collect information. The interview technique is a time-consuming method, so questionnaires are designed to collect information from as many people as we like. It is a very convenient and inexpensive method of collecting information, but sometimes the responses may be confusing, unclear or insufficient.

3.2.6 Observations:

In addition to the three techniques mentioned above, the analyst (or his team) may collect information through observation. In this technique, the working, behavior, and other related information of the existing system are observed; that is, the working of the existing system is watched carefully.

3.2.7 Sampling:

If there are large numbers of people or events involved in the system, we can use the sampling method to collect information. In this method, only a part of the people or events involved is used to collect information. For example, to test the quality of a fruit, we test a piece of the fruit.
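The sampling idea can be illustrated in a few lines. This sketch uses Python's standard-library `random.sample`; the 100-person population is made up for the example.

```python
# Rather than interviewing every person involved in the system, the
# analyst studies a randomly chosen subset of them.
import random

population = [f"employee_{i}" for i in range(1, 101)]  # 100 people (made up)
sample = random.sample(population, 10)                 # study only 10 of them

print(len(sample))       # 10
print(len(set(sample)))  # 10 distinct people: sampling is without replacement
```

In practice the sample size and the selection method (random, stratified, etc.) would be chosen to keep the sample representative of the whole population.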

3.3 DATA ANALYSIS

After completion of the gathering step, the collected data about the system is analyzed to ensure that the data is accurate and complete. For this purpose, various tools may be used. The most popular and commonly used tools for data analysis are:

DFDs (Data Flow Diagrams)
System Flowcharts
Connectivity Diagrams
Grid Charts
Decision Tables etc.
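Decision tables, one of the tools listed above, map combinations of condition values to actions. The following Python sketch is a hypothetical example; the order-approval policy and all names in it are invented for illustration.

```python
# A decision table for a made-up order-approval policy: each rule maps
# a combination of conditions (credit_ok, in_stock) to exactly one action.
DECISION_TABLE = [
    # (credit_ok, in_stock) -> action
    ((True,  True),  "approve order"),
    ((True,  False), "back-order"),
    ((False, True),  "request prepayment"),
    ((False, False), "reject order"),
]

def decide(credit_ok, in_stock):
    """Look up the action for a combination of condition values."""
    for conditions, action in DECISION_TABLE:
        if conditions == (credit_ok, in_stock):
            return action

print(decide(True, False))  # back-order
```

Because every combination of conditions appears exactly once, the table makes it easy to check that the policy is complete and unambiguous, which is why analysts use decision tables to validate gathered requirements.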

3.3.1 Analysis Report:

After completing the work of analysis, the requirements collected for the system are documented in a presentable form; that is, the analysis report is prepared. It is done for review and approval of the project by higher management. This report should have three parts.

First, it should explain how the current system works.

Second, it should explain the problems in the existing system.

Finally, it should describe the requirements for the new system and make recommendations for the future.

3.4 FACT FINDING METHODS:

To study any system, the analyst needs to collect facts and all relevant information. Facts, when expressed in quantitative form, are termed data. The success of any project depends upon the accuracy of the available data. Accurate information can be collected with the help of certain methods/techniques. These specific methods for finding information about the system are termed fact finding techniques. Interview, questionnaire, record view and observation are the different fact finding techniques used by the analyst. The analyst may use more than one technique for investigation.

3.4.1 Interview:

This method is used to collect information from groups or individuals. The analyst selects the people who are related to the system for interview. In this method the analyst sits face to face with the people and records their responses. The interviewer must plan in advance the type of questions he/she is going to ask and should be ready to answer any type of question. He should also choose a suitable place and time which will be comfortable for the respondent.

The information collected is quite accurate and reliable, as the interviewer can clear up and cross-check doubts on the spot. This method also helps bridge areas of misunderstanding and helps to discuss future problems. Structured and unstructured interviews are the two sub-categories of interview. A structured interview is a more formal interview where fixed questions are asked and specific information is collected, whereas an unstructured interview is more or less like a casual conversation, where in-depth areas and topics are covered and other information apart from the topic may also be obtained.

3.4.2 Questionnaire:

This is the technique used to extract information from a large number of people. This method can be adopted and used only by a skillful analyst. The questionnaire consists of a series of questions framed together in a logical manner. The questions should be simple, clear and to the point. This method is very useful for obtaining information from people who are concerned with the usage of the system and who are living in different countries. The questionnaire can be mailed or sent to people by post. This is the cheapest source of fact finding.

3.4.3 Record View:

Information related to the system is published in sources like newspapers, magazines, journals, documents etc. Record review helps the analyst to get valuable information about the system and the organization.

3.4.4 Observation:

Unlike the other fact finding techniques, in this method the analyst himself visits the organization and observes and understands the flow of documents, the working of the existing system, the users of the system, etc. For this method to be adopted, it takes an experienced analyst to perform this job, as he knows which points should be noticed and highlighted. An analyst may otherwise observe unwanted things as well and simply cause delay in the development of the new system.

3.5 CONDUCT INTERVIEWS:

Interviews are particularly useful for getting the story behind a participant's experiences. The interviewer can pursue in-depth information around a topic. Interviews may be useful as follow-up with certain respondents to questionnaires, e.g., to further investigate their responses. Usually open-ended questions are asked during interviews.

Before you start to design your interview questions and process, clearly articulate to yourself what problem or need is to be addressed using the information to be gathered by the interviews. This helps you keep a clear focus on the intent of each question.

3.5.1 Preparation for Interview:

1. Choose a setting with little distraction. Avoid bright lights or loud noises, ensure the interviewee is comfortable (you might ask them if they are), etc. Often, they may feel more comfortable at their own places of work or homes.

2. Explain the purpose of the interview.
3. Address terms of confidentiality. Note any terms of confidentiality. (Be careful here. Rarely can you absolutely promise anything. Courts may get access to the information in certain circumstances.) Explain who will get access to their answers and how their answers will be analyzed. If their comments are to be used as quotes, get their written permission to do so.
4. Explain the format of the interview. Explain the type of interview you are conducting and its nature. If you want them to ask questions, specify whether they should do so as they have them or wait until the end of the interview.
5. Indicate how long the interview usually takes.
6. Tell them how to get in touch with you later if they want to.
7. Ask them if they have any questions before you both get started with the interview.
8. Don't count on your memory to recall their answers. Ask for permission to record the interview or bring along someone to take notes.

3.5.2 Types of Interviews:

1. Informal, conversational interview - no predetermined questions are asked, in order to remain as open and adaptable as possible to the interviewee's nature and priorities; during the interview, the interviewer "goes with the flow".
2. General interview guide approach - the guide approach is intended to ensure that the same general areas of information are collected from each interviewee; this provides more focus than the conversational approach, but still allows a degree of freedom and adaptability in getting information from the interviewee.
3. Standardized, open-ended interview - here, the same open-ended questions are asked of all interviewees (an open-ended question is one where respondents are free to choose how to answer, i.e., they don't select "yes" or "no" or provide a numeric rating, etc.); this approach facilitates faster interviews that can be more easily analyzed and compared.
4. Closed, fixed-response interview - where all interviewees are asked the same questions and asked to choose answers from among the same set of alternatives. This format is useful for those not practiced in interviewing.

3.5.3 Types of Topics in Questions:

1. Behaviors - about what a person has done or is doing.
2. Opinions/values - about what a person thinks about a topic.
3. Feelings - note that respondents sometimes respond with "I think ...", so be careful to note when you are looking for feelings.
4. Knowledge - to get facts about a topic.
5. Sensory - about what people have seen, touched, heard, tasted or smelled.
6. Background/demographics - standard background questions, such as age, education, etc.

3.5.4 Wording of Questions:

1. Wording should be open-ended. Respondents should be able to choose their own terms when answering questions.
2. Questions should be as neutral as possible. Avoid wording that might influence answers, e.g., evocative or judgmental wording.
3. Questions should be asked one at a time.
4. Questions should be worded clearly. This includes knowing any terms particular to the program or to the respondents' culture.
5. Be careful asking "why" questions. This type of question implies a cause-effect relationship that may not truly exist. These questions may also cause respondents to feel defensive, e.g., that they have to justify their response, which may inhibit their responses to this and future questions.

3.6 OBSERVE & DOCUMENT BUSINESS PROCESSES:

Business analysis is the discipline of identifying business needs and determining solutions to business problems. Solutions often include a systems development component, but may also consist of process improvement, organizational change, or strategic planning and policy development. The person who carries out this task is called a business analyst or BA.

Business analysis as a discipline has a heavy overlap with requirements analysis, sometimes also called requirements engineering, but focuses on identifying the changes to an organization that are required for it to achieve strategic goals. These changes include changes to strategies, structures, policies, processes, and information systems.

3.6.1 Examples of business analysis include:

Enterprise analysis or company analysis focuses on understanding the needs of the business as a whole, its strategic direction, and identifying initiatives that will allow the business to meet its strategic goals.

Requirements planning and management involves planning the requirements development process, determining which requirements are the highest priority for implementation, and managing change.

Requirements elicitation describes techniques for collecting requirements from stakeholders in a project.

Requirements analysis describes how to develop and specify requirements in enough detail to allow them to be successfully implemented by a project team.

Requirements communication describes techniques for ensuring that stakeholders have a shared understanding of the requirements and how they will be implemented.

Solution assessment and validation describes how the business analyst can verify the correctness of a proposed solution.

3.7 ROLES OF BUSINESS ANALYSTS:

As the scope of business analysis is very wide, there has been a tendency for business analysts to specialize in one of the three sets of activities which constitute the scope of business analysis.

Strategist

Organizations need to focus on strategic matters on a more or less continuous basis in the modern business world. Business analysts serving this need are well-versed in analyzing the strategic profile of the organization and its environment, advising senior management on suitable policies and the effects of policy decisions.

Architect

Organizations may need to introduce change to solve business problems which may have been identified by the strategic analysis referred to above. Business analysts contribute by analyzing objectives, processes and resources, and suggesting ways by which re-design should be carried out.

Systems analyst

There is the need to align IT development with the systems actually running in production for the business. A long-standing problem in business is how to get the best return from IT investments, which are generally very expensive and of critical, often strategic, importance. IT departments, aware of the problem, often create a business analyst role to better understand and define the requirements for their IT systems. Although there may be some overlap with the developer and testing roles, the focus is always on the IT part of the change process, and generally this type of business analyst gets involved only when a case for change has already been made and decided upon.

3.7.1 Business process improvement:

A business process improvement (BPI) initiative typically involves six steps.

1. Selection of process teams and leader

Process teams, comprising 2-4 employees from the various departments involved in the particular process, are set up. Each team selects a process team leader, typically the person who is responsible for running the respective process.

2. Process analysis training

The selected process team members are trained in process analysis and documentation techniques.

3. Process analysis interview

The members of the process teams conduct several interviews with people working along the processes. During the interviews, they gather information about process structure as well as process performance data.

4. Process documentation

The interview results are used to draw a first process map. Previously existing process descriptions are reviewed and integrated wherever possible. Possible process improvements discussed during the interviews are integrated into the process maps.

5. Review cycle

The draft documentation is then reviewed by the employees working in the process. Additional review cycles may be necessary in order to achieve a common view (mental image) of the process among all concerned employees. This stage is an iterative process.

6. Problem analysis

A thorough analysis of process problems can then be conducted, based on the process map and the information gathered about the process. At this point in the project, process goal information from the strategy audit is available as well, and is used to derive measures for process improvement.

3.7.2 Goal of business analysts:

Business analysts want to achieve the following outcomes:

Reduce waste
Create solutions
Complete projects on time
Improve efficiency
Document the right requirements

3.8 BUILD PROTOTYPES:

Software prototyping, an activity during certain software development, is the creation of prototypes, i.e., incomplete versions of the software program being developed. A prototype typically simulates only a few aspects of the features of the eventual program, and may be completely different from the eventual implementation.

The conventional purpose of a prototype is to allow users of the software to evaluate developers' proposals for the design of the eventual product by actually trying them out, rather than having to interpret and evaluate the design based on descriptions. Prototyping can also be used by end users to describe and prove requirements that developers have not considered, so "controlling the prototype" can be a key factor in the commercial relationship between developers and their clients.

3.8.1 Prototyping Process:

The process of prototyping involves the following steps.

1. Identify basic requirements

Determine basic requirements, including the input and output information desired. Details, such as security, can typically be ignored.

2. Develop initial prototype

The initial prototype is developed, including only the user interfaces.

3. Review

The customers, including end-users, examine the prototype and provide feedback on additions or changes.

4. Revise and enhance the prototype

Using the feedback, both the specifications and the prototype can be improved. Negotiation about what is within the scope of the contract/product may be necessary.

3.8.2 Advantages of prototyping:

There are many advantages to using prototyping in software development, some tangible, some abstract.

Reduced time and costs: Prototyping can improve the quality of the requirements and specifications provided to developers. Because changes cost exponentially more to implement the later they are detected in development, the early determination of what the user really wants can result in faster and less expensive software.

Improved and increased user involvement: Prototyping requires user involvement and allows users to see and interact with a prototype, allowing them to provide better and more complete feedback and specifications. The presence of the prototype being examined by the user prevents many misunderstandings and miscommunications that occur when each side believes the other understands what they said. Since users know the problem domain better than anyone on the development team does, increased interaction can result in a final product that has greater tangible and intangible quality. The final product is more likely to satisfy the users' desire for look, feel and performance.

3.8.3 Disadvantages of prototyping:

Insufficient analysis: The focus on a limited prototype can distract developers from properly analyzing the complete project. This can lead to overlooking better solutions, preparation of incomplete specifications, or the conversion of limited prototypes into poorly engineered final projects that are hard to maintain.
Further, since a prototype is limited in functionality, it may not scale well if the prototype is used as the basis of a final deliverable, which may not be noticed if developers are too focused on building a prototype as a model.

User confusion of prototype and finished system: Users can begin to think that a prototype, intended to be thrown away, is actually a final system that merely needs to be finished or polished. (They are, for example, often unaware of the effort needed to add error-checking and security features which a prototype may not have.) This can lead them to expect the prototype to accurately model the performance of the final system when this is not the intent of the developers. Users can also become attached to features that were included in a prototype for consideration and then removed from the specification for a final system. If users are able to require all proposed features be included in the final system, this can lead to conflict.

Developer misunderstanding of user objectives: Developers may assume that users share their objectives (e.g. to deliver core functionality on time and within budget), without understanding wider commercial issues. For example, user representatives attending Enterprise software (e.g. PeopleSoft) events may have seen demonstrations of "transaction auditing" (where changes are logged and displayed in a difference grid view) without being told that this feature demands additional coding and often requires more hardware to handle extra database accesses. Users might believe they can demand auditing on every field, whereas developers might think this is feature creep because they have made assumptions about the extent of user requirements. If the developer has committed to delivery before the user requirements were reviewed, developers are between a rock and a hard place, particularly if user management derives some advantage from their failure to implement requirements.

Developer attachment to prototype: Developers can also become attached to prototypes they have spent a great deal of effort producing; this can lead to problems like attempting to convert a limited prototype into a final system when it does not have an appropriate underlying architecture. (This may suggest that throwaway prototyping, rather than evolutionary prototyping, should be used.)

Excessive development time of the prototype: A key property of prototyping is that it is supposed to be done quickly.
If the developers lose sight of this fact, they very well may try to develop a prototype that is too complex. When the prototype is thrown away, the precisely developed requirements that it provides may not yield a sufficient increase in productivity to make up for the time spent developing the prototype. Users can become stuck in debates over details of the prototype, holding up the development team and delaying the final product.

Expense of implementing prototyping: The start-up costs for building a development team focused on prototyping may be high. Many companies have development methodologies in place, and changing them can mean retraining, retooling, or both. Many companies tend to just jump into prototyping without bothering to retrain their workers as much as they should.

3.9 QUESTIONNAIRE:
A questionnaire is a research instrument consisting of a series of questions and other prompts for the purpose of gathering information from respondents. Although they are often designed for statistical analysis of the responses, this is not always the case. The questionnaire was invented by Sir Francis Galton.

Questionnaires have advantages over some other types of surveys in that they are cheap, do not require as much effort from the questioner as verbal or telephone surveys, and often have standardized answers that make it simple to compile data. However, such standardized answers may frustrate users. Questionnaires are also sharply limited by the fact that respondents must be able to read the questions and respond to them. Thus, for some demographic groups, conducting a survey by questionnaire may not be practical.

3.9.1 Question Construction:
Question types
Usually, a questionnaire consists of a number of questions that the respondent has to answer in a set format. A distinction is made between open-ended and closed-ended questions. An open-ended question asks the respondent to formulate his own answer, whereas a closed-ended question has the respondent pick an answer from a given number of options. The response options for a closed-ended question should be exhaustive and mutually exclusive. Four types of response scales for closed-ended questions are distinguished:

- Dichotomous, where the respondent has two options
- Nominal-polytomous, where the respondent has more than two unordered options
- Ordinal-polytomous, where the respondent has more than two ordered options
- (Bounded) Continuous, where the respondent is presented with a continuous scale
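To make the four response scales concrete, here is a small Python sketch that validates a respondent's answer against each scale type. The function names, option lists and bounds are invented for illustration, not taken from the text.

```python
# Illustrative validators for the four closed-ended response scales.
# The options and bounds below are hypothetical examples.

def valid_dichotomous(answer):
    # Two options, e.g. yes/no.
    return answer in {"yes", "no"}

def valid_nominal_polytomous(answer, options):
    # More than two unordered options: validation is simple membership.
    return answer in options

def valid_ordinal_polytomous(answer, ordered_options):
    # More than two ordered options: membership here; the ordering
    # matters later, when the responses are analyzed.
    return answer in ordered_options

def valid_bounded_continuous(answer, low, high):
    # A continuous scale bounded by low and high.
    return low <= answer <= high

print(valid_dichotomous("yes"))                                      # True
print(valid_nominal_polytomous("bus", ["car", "bus", "bicycle"]))    # True
print(valid_ordinal_polytomous("agree",
                               ["disagree", "neutral", "agree"]))    # True
print(valid_bounded_continuous(7.5, 0.0, 10.0))                      # True
```

Because the options are exhaustive and mutually exclusive, each validator reduces to a simple membership or range check.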

3.9.2 Basic rules for questionnaire item construction

- Use statements which are interpreted in the same way by members of different subpopulations of the population of interest.
- Use statements where persons that have different opinions or traits will give different answers.
- Think of having an "open" answer category after a list of possible answers.
- Use only one aspect of the construct you are interested in per item.
- Use positive statements and avoid negatives or double negatives.
- Do not make assumptions about the respondent.
- Use clear and comprehensible wording, easily understandable for all educational levels.
- Use correct spelling, grammar and punctuation.
- Avoid items that contain more than one question per item (e.g. "Do you like strawberries and potatoes?").

3.9.3 Questionnaire administration modes:

Main modes of questionnaire administration are:
- Face-to-face questionnaire administration, where an interviewer presents the items orally.
- Paper-and-pencil questionnaire administration, where the items are presented on paper.
- Computerized questionnaire administration, where the items are presented on the computer.
- Adaptive computerized questionnaire administration, where a selection of items is presented on the computer and, based on the answers to those items, the computer selects subsequent items optimized for the testee's estimated ability or trait.

3.10 JAD SESSIONS:

JAD (Joint Application Development) is a methodology that involves the client or end user in the design and development of an application, through a succession of collaborative workshops called JAD sessions. Chuck Morris and Tony Crawford, both of IBM, developed JAD in the late 1970s and began teaching the approach through workshops in 1980.

The JAD approach, in comparison with the more traditional practice, is thought to lead to faster development times and greater client satisfaction, because the client is involved throughout the development process. In comparison, in the traditional approach to systems development, the developer investigates the system requirements and develops an application, with client input consisting of a series of interviews.

Joint Application Design (JAD) is a process used in the prototyping life cycle area of the Dynamic Systems Development Method (DSDM) to collect business requirements while developing new information systems for a company. "The JAD process also includes approaches for enhancing user participation, expediting development, and improving the quality of specifications." It consists of a workshop where knowledge workers and IT specialists meet, sometimes for several days, to define and review the business requirements for the system. The attendees include high-level management officials who will ensure the product provides the needed reports and information at the end. This acts as a management process which allows Corporate Information Services (IS) departments to work more effectively with users in a shorter time frame.

Through JAD workshops the knowledge workers and IT specialists are able to resolve any difficulties or differences between the two parties regarding the new information system. The workshop follows a detailed agenda in order to guarantee that all uncertainties between parties are covered and to help prevent any miscommunications. Miscommunications can carry far more serious repercussions if not addressed until later on in the process. (See below for Key Participants and Key Steps to an Effective JAD.) In the end, this process will result in a new information system that is feasible and appealing to both the designers and end users.

"Although the JAD design is widely acclaimed, little is actually known about its effectiveness in practice." According to the Journal of Systems and Software, a field study was done at three organizations using JAD practices to determine how JAD influenced system development outcomes.
The results of the study suggest that organizations realized modest improvement in systems development outcomes by using the JAD method. JAD use was most effective in small, clearly focused projects and less effective in large, complex projects.

Joint Application Design (JAD) was developed by Chuck Morris of IBM Raleigh and Tony Crawford of IBM Toronto in a workshop setting. Originally, JAD was designed to bring system developers and users of varying backgrounds and opinions together in a productive as well as creative environment. The meetings were a way of obtaining quality requirements and specifications. The structured approach provides a good alternative to traditional serial interviews by system analysts.

Brainstorming and theory Z principles in JAD: In 1984-85, Moshe Telem of Tel-Aviv University developed and implemented a JAD conceptual approach that integrates brainstorming and Ouchi's "Japanese Management" theory Z principles for rapid, maximal and attainable requirements analysis through JAD. Telem named his approach Brainstorming a Collective Decision-Making Approach (BCDA)[4]. Telem also developed and implemented a BCDA technique (BCDT)[5], which was successfully used within the setting of a pedagogical management information system project for the Israeli educational system[3]. In this project, brainstorming and theory Z principles in JAD proved to be not only feasible but also effective, resulting in a realistic picture of true users' information requirements.

3.10.1 Conduct JAD Sessions:
Joint Application Development, or JAD as it is commonly known, is a process originally developed for designing computer-based systems. JAD focuses on the use of highly structured, well-planned meetings to identify the key components of system development projects.

The JAD process is based on four simple ideas:
- People who actually do a job have the best understanding of the job.
- People who are trained in information technology have the best understanding of the possibilities of that technology.
- Information systems never exist alone.
- The best results are obtained when all these groups work together on a project.

The JAD technique is based on the observation that the success of a project can be hampered by poor intra-team communication, incomplete requirements definition and lack of consensus. The training teaches the essential skills and techniques needed to plan, organize and participate in JAD planning. JAD centers on a structured workshop session. It eliminates many of the problems associated with traditional meetings.
The sessions are:
i) very focused
ii) conducted in a dedicated environment
iii) quickly drive major requirements

The participants include: facilitator, end users, developers, tie breakers, observers and subject matter experts. The success of a JAD-based workshop depends on the skill of the facilitators.

3.10.2 Need to Conduct JAD Sessions:
Everybody who is responsible for gathering requirements and developing business systems should attend the JAD training sessions. They are: workshop facilitators, business analysts, system analysts, process analysts, development project leaders, development team members, business managers and information technology members.

3.10.3 Advantages and Disadvantages
Compared with traditional methods, JAD may seem more expensive and can be cumbersome if the group is too large relative to the size of the project. One big disadvantage is that it opens up a lot of scope for interpersonal conflict. Many companies find, however, that JAD allows key users to participate effectively in the requirements modeling process. When users participate in the systems development process, they are more likely to feel a sense of ownership in the results, and support for the new system. When properly used, JAD can result in a more accurate statement of system requirements, a better understanding of common goals, and a stronger commitment to the success of the new system.

3.10.4 Four Principal Steps:
1) Define session objectives - The first step for the facilitator, together with the project leader, is to define the session objectives, answering questions such as: What are the session objectives? What is wanted from the session? Who can help create the deliverables?

2) Prepare for the session - The facilitator has primary responsibility for the JAD preparation. Four categories of tasks are involved in preparing for the session:
- Conduct pre-session research
- Create a session agenda
- Arrange session logistics
- Prepare the participants

3) Conduct the JAD session - The facilitator conducts the JAD session, leading the developers and customers through the planned agenda. Conducting the meeting involves:
- starting and ending on time
- distributing and following the meeting agenda
- gaining consensus on the meeting purpose and ground rules at the beginning of the meeting
- keeping the meeting on track

4) Produce the Documents - It is critical to the success of any JAD session that the information on flip charts, foils, whiteboards, and in discussions be recorded and reviewed by the participants. Each day of the session, the facilitator and scribe should create a draft of the day's results. The final documents from the JAD should be completed as soon as possible after the session. It is the primary responsibility of the facilitator and the scribe to:
- organize the final document for easy use by project members
- complete a "Final Draft" document
- distribute it to selected individuals for review
- incorporate revisions as necessary
- distribute the final copy for participant sign-off

JAD improves the final quality of the product by keeping the focus on the up-front part of the development cycle, thus reducing the errors that are likely to cause huge expenses.

3.11 VALIDATION:
Validation may refer to:

- Validity, in logic, determining whether a statement is true
- Validation and verification, in engineering, confirming that a product or service meets the needs of its users
- Verification and Validation (software), checking that a software system meets specifications and fulfills its intended purpose
- Validation of foreign studies and degrees, processes for transferring educational credentials between countries
- Validation (computer security), the process of determining whether a user or computer program is allowed to do something
  - Validate (McAfee), a software application for this purpose
- Validation (drug manufacture), documenting that a process or system meets its pre-determined specifications and quality attributes
- Validation (psychology), in psychology and human communication, the reciprocated communication of respect which signifies that the other's opinions are acknowledged, respected and heard
- Data validation, in computer science, ensuring that data entered into a system satisfies defined formats and constraints
- Regression model validation, in statistics, determining whether a model fits the data well
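To illustrate the data-validation sense listed above, a minimal Python sketch follows. The field names and rules (email format, age range) are assumptions chosen for the example, not requirements from the text.

```python
import re

def validate_record(record):
    """Check that a record satisfies pre-determined formats; return a
    list of error messages (empty if the record is valid)."""
    errors = []
    # Email must look like name@domain.tld (a deliberately simple rule).
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", record.get("email", "")):
        errors.append("email: not a valid address")
    # Age must be an integer in a plausible range.
    age = record.get("age")
    if not (isinstance(age, int) and 0 <= age <= 120):
        errors.append("age: must be an integer between 0 and 120")
    return errors

print(validate_record({"email": "user@example.com", "age": 30}))  # []
print(validate_record({"email": "not-an-email", "age": 200}))
# ['email: not a valid address', 'age: must be an integer between 0 and 120']
```

Returning a list of errors, rather than stopping at the first failure, lets the caller report every problem with the input at once.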

3.12 STRUCTURED WALKTHROUGHS:

In typical project planning, you must define the scope of the work to be accomplished. A typical tool used by project managers is the work breakdown structure (WBS). This walkthrough demonstrates a general approach to creating a WBS using Team Foundation Server and Microsoft Project.

This walkthrough is not based on any particular methodology. However, it does use the quality of service requirement and task work item types in the MSF for Agile Software Development process template. The approach used in this walkthrough should be adaptable to your own organization's work item types and process.

In this walkthrough, you will complete the following tasks:
- Create a requirement using Team Foundation Server.
- Create tasks using Team Foundation Server.
- Create tasks using Microsoft Project.
- Link tasks and requirements.
- Create a work breakdown structure from tasks in Microsoft Project.

The term is often employed in the software industry (see software walkthrough) to describe the process of inspecting algorithms and source code by following paths through the algorithms or code as determined by input conditions and choices made along the way. The purpose of such code walkthroughs is generally to provide assurance of the fitness for purpose of the algorithm or code, and occasionally to assess the competence or output of an individual or team.

3.12.1 Types of Walkthroughs

3.12.2 Prerequisites
The following prerequisites must be met to complete this walkthrough:
- Microsoft Project must be installed.
- A team project must be created that uses the MSF for Agile Software Development process template.

3.12.3 Scenario
The scenario for this walkthrough is based on the example Adventure Works team project. Adventure Works is starting a project to set up a Web interface for ordering its products. One of the customer requirements states that customers be able to check on order status after orders are placed. The scope of this work must be defined in a work breakdown structure to a sufficient level of detail to enable project planning to be completed.

The following approach is used by Adventure Works. The project manager must create the WBS and has the help of the team to do this. One person on the team is a database expert and will provide details on what is needed in the database to support the new requirement. She will enter her work details using Team Foundation Server.

The project manager will work with other team members to define additional work to complete the Web interface. Then the project manager will enter those details using Microsoft Project. Finally, the project manager will create a WBS in Microsoft Visio that can be used in the project planning document.

Throughout this walkthrough you will perform the steps of each role to create the tasks and WBS. When you complete the walkthrough, you will have created the following tasks and subtasks in a Gantt chart and a WBS:
- Order Storage Subsystem
- Order Tables
- Order Stored Procedures
- Order Web Interface
- Order Lookup Web Service
- Client Order Views
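The scenario's tasks can be pictured as a tree. The sketch below models them as a nested Python dictionary; the grouping of subtasks under the two subsystems is an assumption based on the scenario (database work versus Web-interface work), not something the walkthrough states.

```python
# Hypothetical WBS for the Adventure Works order-status requirement.
wbs = {
    "Order Status Feature": {
        "Order Storage Subsystem": ["Order Tables", "Order Stored Procedures"],
        "Order Web Interface": ["Order Lookup Web Service", "Client Order Views"],
    }
}

def print_wbs(node, indent=0):
    # Walk the tree, indenting each task by its level in the structure.
    if isinstance(node, dict):
        for name, children in node.items():
            print("  " * indent + name)
            print_wbs(children, indent + 1)
    else:
        for leaf in node:
            print("  " * indent + leaf)

print_wbs(wbs)
```

The same tree can then be transcribed into Microsoft Project as summary tasks and subtasks, or into Visio as a WBS diagram.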


3.13 SUMMARY
Through this chapter we discussed the functionality of a system as well as system requirements. The resources to collect the data are essential to a system. Some fact-finding methods are an important part here.

Questions:
1. Explain the role of a business analyst in detail.
Ans: refer 3.7
2. Explain data analysis in detail.
Ans: refer 3.3

4.1 INTRODUCTION:
A feasibility study is an evaluation of a proposal designed to determine the difficulty in carrying out a designated task. Generally, a feasibility study precedes technical development and project implementation. In other words, a feasibility study is an evaluation or analysis of the potential impact of a proposed project.


4.2 FIVE COMMON FACTORS FOR FEASIBILITY STUDY:

4.2.1 Technology and system feasibility:
The assessment is based on an outline design of system requirements in terms of input, processes, output, fields, programs, and procedures. This can be quantified in terms of volumes of data, trends, frequency of updating, etc. in order to estimate whether the new system will perform adequately or not. Technological feasibility is carried out to determine whether the company has the capability, in terms of software, hardware, personnel and expertise, to handle the completion of the project.

4.2.2 Economic feasibility:
Economic analysis is the most frequently used method for evaluating the effectiveness of a new system. More commonly known as cost/benefit analysis, the procedure is to determine the benefits and savings that are expected from a candidate system and compare them with costs. If benefits outweigh costs, then the decision is made to design and implement the system. An entrepreneur must accurately weigh the cost versus benefits before taking an action.

Cost-based study: It is important to identify cost and benefit factors, which can be categorized as follows: 1. development costs; and 2. operating costs. This is an analysis of the costs to be incurred in the system and the benefits derivable out of the system.

Time-based study: This is an analysis of the time required to achieve a return on investment through the benefits derived from the system. The future value of a project is also a factor.

4.2.3 Legal feasibility:
Determines whether the proposed system conflicts with legal requirements, e.g. a data processing system must comply with the local Data Protection Acts.

4.2.4 Operational feasibility:
Operational feasibility is a measure of how well a proposed system solves the problems and takes advantage of the opportunities identified during scope definition, and how it satisfies the requirements identified in the requirements analysis phase of system development.

- Implementation: stakeholder, manager, and end-user
- Resistance: evaluate management, team, and individual
- In-House Strategies: how will the work environment be affected? How much will it change?
- Adapt & Review: once change resistance is overcome, explain how the new process will be implemented, along with a review process to monitor the process change.

Example:
If an operational feasibility study must answer the six items above, how is it used in the real world? A good example might be if a company has determined that it needs to totally redesign the workspace environment. After analyzing the technical, economic, and scheduling feasibility studies, next would come the operational analysis. In order to determine if the redesign of the workspace environment would work, an example of an operational feasibility study would follow this path based on six elements:

- Process: input and analysis from everyone the new redesign will affect, along with a data matrix on ideas and suggestions from the original plans.
- Evaluation: determinations from the process suggestions; will the redesign benefit everyone? Who is left behind? Who feels threatened?
- Implementation: identify resources, both inside and out, that will work on the redesign. How will the redesign construction interfere with current work?
- Resistance: what areas and individuals will be most resistant?
- Strategies: how will the organization deal with the changed workspace environment? Do new processes or structures need to be reviewed or implemented in order for the redesign to be effective?
- Adapt & Review: how much time does the organization need to adapt to the new redesign? How will it be reviewed and monitored? What will happen if, through a monitoring process, additional changes must be made?

4.2.5 Schedule feasibility:

A project will fail if it takes too long to be completed before it is useful. Typically this means estimating how long the system will take to develop, and whether it can be completed in a given time period, using methods such as the payback period. Schedule feasibility is a measure of how reasonable the project timetable is. Given our technical expertise, are the project deadlines reasonable? Some projects are initiated with specific deadlines. You need to determine whether the deadlines are mandatory or desirable. It is an essential type of feasibility: the plan allots each step its required time duration within the overall time span.
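The deadline check described above can be sketched in a few lines of Python. The task names, estimates and deadline are invented for illustration.

```python
# Hypothetical per-step estimates (in weeks) and a project deadline.
task_estimates_weeks = {
    "requirements analysis": 3,
    "design": 4,
    "development": 10,
    "testing": 4,
}
deadline_weeks = 24

# Sum the step estimates and compare the total against the deadline.
total = sum(task_estimates_weeks.values())
print(f"Estimated duration: {total} weeks (deadline: {deadline_weeks} weeks)")
print("Schedule feasible" if total <= deadline_weeks
      else "Schedule not feasible")
```

In practice the estimates would come from the project plan, and a mandatory deadline would make the comparison a hard go/no-go test rather than a guideline.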

4.3 OTHER FEASIBILITY FACTORS:

4.3.1 Market and real estate feasibility:
A market feasibility study typically involves testing geographic locations for a real estate development project, and usually involves parcels of real estate land. Developers often conduct market studies to determine the best location within a jurisdiction, and to test alternative land uses for given parcels. Jurisdictions often require developers to complete feasibility studies before they will approve a permit application for a retail, commercial, industrial, manufacturing, housing, office or mixed-use project. Market feasibility takes into account the importance of the business in the selected area.

4.3.2 Resource feasibility:
This involves questions such as how much time is available to build the new system, when it can be built, whether it interferes with normal business operations, the type and amount of resources required, dependencies, etc. Contingency and mitigation plans should also be stated here.

4.3.3 Cultural feasibility:
In this stage, the project's alternatives are evaluated for their impact on the local and general culture. For example, environmental factors need to be considered, and these factors must be well known. Further, an enterprise's own culture can clash with the results of the project.


4.4 COST ESTIMATES:

4.4.1 Cost/Benefit Analysis:
Evaluating quantitatively whether to follow a course of action.

You may have been intensely creative in generating solutions to a problem, and rigorous in your selection of the best one available. However, this solution may still not be worth implementing, as you may invest a lot of time and money in solving a problem that is not worthy of this effort.

Cost Benefit Analysis, or CBA, is a relatively simple and widely used technique for deciding whether to make a change. As its name suggests, you simply add up the value of the benefits of a course of action, and subtract the costs associated with it.

Costs are either one-off, or may be ongoing. Benefits are most often received over time. We build this effect of time into our analysis by calculating a payback period. This is the time it takes for the benefits of a change to repay its costs. Many companies look for payback on projects over a specified period of time.

4.4.2 How to Use the Tool:
In its simple form, cost-benefit analysis is carried out using only financial costs and financial benefits. For example, a simple cost-benefit ratio for a road scheme would measure the cost of building the road, and subtract this from the economic benefit of improving transport links. It would not measure either the cost of environmental damage or the benefit of quicker and easier travel to work.

A more sophisticated approach to building a cost-benefit model is to try to put a financial value on intangible costs and benefits. This can be highly subjective: is, for example, a historic water meadow worth $25,000, or is it worth $500,000 because of its environmental importance? What is the value of stress-free travel to work in the morning?

These are all questions that people have to answer, and answers that people have to defend.

The version of the cost-benefit approach we explain here is necessarily simple.
Where large sums of money are involved (for example, in financial market transactions), project evaluation can become an extremely complex and sophisticated art. The fundamentals of this are explained in Principles of Corporate Finance by Richard Brealey and Stewart Myers, something of an authority on the subject.

Example:
A sales director is deciding whether to implement a new computer-based contact management and sales processing system. His department has only a few computers, and his salespeople are not computer literate. He is aware that computerized sales forces are able to contact more customers and give a higher quality of reliability and service to those customers. They are more able to meet commitments, and can work more efficiently with fulfilment and delivery staff.

His financial cost/benefit analysis is shown below:

Costs:
New computer equipment:
- 10 network-ready PCs with supporting software @ $2,450 each
- 1 server @ $3,500
- 3 printers @ $1,200 each
- Cabling & installation @ $4,600
- Sales support software @ $15,000

Training costs:
- Computer introduction: 8 people @ $400 each
- Keyboard skills: 8 people @ $400 each
- Sales support system: 12 people @ $700 each

Other costs:
- Lost time: 40 man-days @ $200/day
- Lost sales through disruption: estimate $20,000
- Lost sales through inefficiency during first months: estimate $20,000

Total cost: $114,000

Benefits:

- Tripling of mail shot capacity: estimate $40,000/year
- Ability to sustain telesales campaigns: estimate $20,000/year
- Improved efficiency and reliability of follow-up: estimate $50,000/year
- Improved customer service and retention: estimate $30,000/year
- Improved accuracy of customer information: estimate $10,000/year
- More ability to manage sales effort: $30,000/year

Total benefit: $180,000/year

Payback time: $114,000 / $180,000 = 0.63 of a year = approx. 8 months

4.4.3 Benefits of Cost Estimation:
Cost/benefit analysis is a powerful, widely used and relatively easy tool for deciding whether to make a change. To use the tool, first work out how much the change will cost to make. Then calculate the benefit you will gain from it. Where costs or benefits are paid or received over time, work out the time it will take for the benefits to repay the costs.

Cost/benefit analysis can be carried out using only financial costs and financial benefits. You may, however, decide to include intangible items within the analysis. As you must estimate a value for these, this inevitably brings an element of subjectivity into the process.

Larger projects are evaluated using formal finance/capital budgeting, which takes into account many of the complexities involved with financial decision making. This is a complex area and is beyond the scope of this site.
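The payback arithmetic from the sales-director example can be reproduced directly in Python; the figures below are the ones listed in the example.

```python
# One-off costs from the example ($).
costs = {
    "10 PCs @ $2,450": 10 * 2450,
    "1 server": 3500,
    "3 printers @ $1,200": 3 * 1200,
    "cabling & installation": 4600,
    "sales support software": 15000,
    "computer introduction (8 @ $400)": 8 * 400,
    "keyboard skills (8 @ $400)": 8 * 400,
    "sales support system (12 @ $700)": 12 * 700,
    "lost time (40 man-days @ $200)": 40 * 200,
    "lost sales through disruption": 20000,
    "lost sales through inefficiency": 20000,
}

# Annual benefits from the example ($/year).
benefits = {
    "mail shot capacity": 40000,
    "telesales campaigns": 20000,
    "follow-up efficiency and reliability": 50000,
    "customer service and retention": 30000,
    "customer information accuracy": 10000,
    "sales effort management": 30000,
}

total_cost = sum(costs.values())
total_benefit = sum(benefits.values())
payback_years = total_cost / total_benefit  # one-off cost / annual benefit

print(f"Total cost: ${total_cost:,}")                # Total cost: $114,000
print(f"Total benefit: ${total_benefit:,}/year")     # Total benefit: $180,000/year
print(f"Payback: {payback_years:.2f} years "
      f"(approx. {payback_years * 12:.0f} months)")  # 0.63 years, 8 months
```

Itemizing the figures this way makes the totals easy to audit and lets you re-run the decision when an estimate changes.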

4.5 BUSINESS SYSTEM ANALYSIS:

This process involves interviewing stakeholders and gathering the information required to assist us in the development of a web application which closely resembles your business model. In most cases our clients use the following types of services:

Project Roadmap: helps you understand which resources are required at various stages of the project, assess the overall risks and opportunities, and allocate a realistic timeline and budget for the successful completion and implementation of the project.

Project Blueprint: documents and diagrams each of the various technical and content aspects of the software project and how they fit and flow together, such as the data model for the database, the user interface for the users, and everything in between. This becomes the main guideline for the Programmers, Graphic Designers, and Content Developers who collaborate with the Project Manager on developing a web application.

4.5.1 Developing a Project Roadmap:
Before starting a web application project we need to:
- Establish the approximate project cost and delivery time
- Identify the required resources and expertise and where to find them
- Know the risks involved in each development stage, and plan for how to deal with them early on
- Gain client agreement on the absolute essentials for each phase of the project. This increases the potential for project success and the long-term return on investment, and helps to avoid drastic project delays in the future.

As a result of the initial study, the Project Roadmap becomes the foundation of our business agreement with your company. To develop a roadmap, we first communicate with the key stakeholders and study your business model. Project requirements are broken down into manageable stages, with each stage assigned a priority rank and a time and development cost estimate.

Approximately 10% of a project's time and budget is invested in developing the roadmap. This number could increase if your business concept and model are being implemented for the first time.

4.6 DEVELOPING A BLUEPRINT FOR A WEB APPLICATION:

For larger projects, Programmers, Graphic Designers, and Content Developers do not start weaving a web application right away. To increase the likelihood of the project's success, we steer the development process based on the project blueprint prepared by our System Analysts and Information Architects.

The blueprint contains text documents and visual diagrams (ER, UML, IA Garrett, Use Cases, ...). In other words, we will be designing the data model for the database, the user interface for the users, and everything in between. We model everything on paper to ensure the development team can achieve their goal of a high-quality web application on time and on budget.

These documents are refined and improved as the project moves forward. The project blueprint almost always changes depending on the discoveries and challenges that inevitably arise throughout the development process.

These documents and diagrams become your property once the project is delivered. This allows you to grow and further develop the application in the future. We do not keep any of the information proprietary.

4.6.1 Identification of List of Deliverables:

This is related to Project Execution and Control.

List of Deliverables:

Project Execution and Control differs from all other work in that, between the kick-off meeting and project acceptance, all processes and tasks occur concurrently and repeatedly, and continue for almost the entire duration of Project Execution and Control. Thus, the earlier concept of a "process deliverable" is not applicable to Project Execution and Control, and even task deliverables are mostly activities, not products. Of course, there is the ultimate deliverable: the product of the project. The following table lists all Project Execution and Control processes, tasks, and their deliverables.

4.6.2 Process Descriptions:

1. Conduct Project Execution and Control Kick-off
2. Manage Triple Constraints (Scope, Schedule, Budget)
3. Monitor and Control Risks
4. Manage Project Execution and Control
5. Gain Project Acceptance

1. Conduct Project Execution and Control Kick-off

Roles:
Project Manager
Project Sponsor and/or Project Director
Project Team Members
Steering Committee
Stakeholders

Purpose:
The purpose of Conduct Project Execution and Control Kick-off is to formally acknowledge the beginning of Project Execution and Control and facilitate the transition from Project Planning. Similar to Project Planning Kick-off, Project Execution and Control Kick-off ensures that the project is still on track and focused on the original business need. Many new team members will be introduced to the project at this point, and must be thoroughly oriented and prepared to begin work. Most importantly, the current project status is reviewed and all prior deliverables are re-examined, giving all new team members a common reference point.

Tasks associated with Conduct Project Execution and Control Kick-off:
- Orient New Project Team Members
- Review Project Materials
- Kick Off Project Execution and Control

Orient New Project Team Members:
As in Project Planning, the goal of orienting new Project Team members is to enhance their abilities to contribute quickly and positively to the project's desired outcome. If the Project Manager created a Team Member Orientation Packet during Project Planning, the packet should already contain an orientation checklist, orientation meeting agenda, project materials, and logistical information that will again be useful.

The Project Manager should review the contents of the existing Team Member Orientation Packet to ensure that they are current and still applicable to the project. Any changes needed to the contents of the packet should be made at this time. Once updated, packet materials can be photocopied and distributed to new team members to facilitate their orientation process. The Project Manager or Team Leader should conduct one-on-one orientation sessions with new members to ensure that they read and understand the information presented to them.

If the orientation packet was not created during Project Planning and new team members are coming on board, the Project Manager must gather and present information that would be useful to new team members, including:

General information on the Customer

Logistics (parking policy, work hours, building/office security)

Project procedures (team member expectations, how and when to report project time and status, sick time and vacation policy)

Review Project Materials and Current Project Status:

Before formally beginning Project Execution and Control, the Project Team should review updated Project Status Reports and the Project Plan. At this point in the project, the Project Plan comprises all deliverables produced during Project Initiation and Project Planning (High Level and Detail):

1. Project Charter, Project Initiation Plan
2. Triple Constraints (Scope, Schedule, Budget)
3. Risk Management Worksheet
4. Description of Stakeholder Involvement
5. Communications Plan
6. Time and Cost Baseline
7. Communications Management Process
8. Change Control Process
9. Acceptance Management Process
10. Issue Management and Escalation Process
11. Training Plan
12. Project Implementation and Transition Plan

Kick Off Project Execution and Control:

As was the case for Project Initiation and Project Planning, a meeting is conducted to kick off Project Execution and Control. During the meeting, the Project Manager should present the main components of the Project Plan for review. Other items to cover during the meeting include:

Introduction of new team members
Roles and responsibilities of each team member
Restating the objective(s) of the project and goals for Execution and Control
Latest Project Schedule and timeline
Project risks and mitigation plans
Current project status, including open issues and action items

The goal of the kick-off meeting is to verify that all parties involved have consistent levels of understanding and acceptance of the work done so far, to validate expectations pertaining to the deliverables to be produced during Project Execution and Control, and to clarify and gain understanding of the expectations of each team member in producing the deliverables. Attendees at the Project Execution and Control Kick-off Meeting include the Project Manager, Project Team, Project Sponsor and/or Project Director, and any other Stakeholders with a vested interest in the status of the project. This is an opportunity for the Project Sponsor and/or Project Director to reinforce the importance of the project and how it supports the business need.

2. Manage Triple Constraints

Roles

Purpose:
The Triple Constraints is the term used for a project's inextricably linked constraints: Scope, Schedule, and Budget, with a resulting acceptable Quality. During Project Planning, each section of the Triple Constraints was refined. As project-specific tasks are performed during Project Execution and Control, the Triple Constraints will need to be managed according to the processes established during Project Planning.

The Triple Constraints is not static: although Project Planning is complete and has been approved, some components of the Triple Constraints will continue to evolve as a result of the execution of project tasks. Throughout the project, as more information about the project becomes known and the product of the project is developed, the Triple Constraints are likely to be affected and will need to be closely managed.

The purpose of the Manage Triple Constraints task is to:

Manage Changes to Project Scope

Control the Project Schedule and Manage Schedule Changes

Implement Quality Assurance and Quality Control Processes According to the Quality Standards Revised During Project Planning

Control and Manage Costs Established in the Project Budget

Tasks associated with Manage Triple Constraints:

Manage Project Scope

Manage Project Schedule

Implement Quality Control

Manage Project Budget

Manage Project Scope:

During Project Planning, the Project Manager, through regular communication with the Customer Representatives and Project Sponsor and/or Project Director, refined the Project Scope to clearly define the content of the deliverables to be produced during Project Execution and Control. This definition includes a clear description of what will and will not be included in each deliverable.

The process to be used to document changes to the Project Scope was included in the Project Initiation Plan. This process includes a description of the way scope will be managed and how changes to scope will be handled. It is important that the Project Manager enforce this process throughout the entire project, starting very early in Project Execution and Control. Even if a scope change is perceived to be very small, exercising the change process ensures that all parties agree to the change and understand its potential impact. Following the process each and every time a scope change occurs will minimize confusion as to what actually constitutes a change. Additionally, instituting the process early will test its effectiveness, get the Customer and Project Sponsor and/or Project Director accustomed to the way change will be managed throughout the remainder of the project, and help them understand their roles as they relate to change.

Manage Project Schedule:

During Project Planning (Detail Level), an agreed-upon baseline was established for the Project Schedule. This schedule baseline will be used as a starting point against which performance on the project will be measured. It is one of many tools the Project Manager can use during Project Execution and Control to determine if the project is on track.

Project Team members use the communications mechanisms documented in the Communications Plan to provide feedback to the Project Manager on their progress. Generally, team members document the time spent on tasks and provide estimates of the time required to complete them. The Project Manager uses this information to update the Project Schedule. In some areas there may be formal time-tracking systems that are used to track project activity.

After updating the Project Schedule, the Project Manager must take the time to review the status of the project. Some questions that the Project Manager should be able to answer by examining the Project Schedule include:

Is the project on track?

Are there any issues becoming evident that need to be addressed now?

Which tasks are taking more time than estimated? Less time?

If a task is late, what is the effect on subsequent tasks?

What is the next deliverable to be produced and when is it scheduled to be complete?

What is the amount of effort expended so far and how much is remaining?

Are any Project Team members over-allocated or under-allocated?

How much of the time allocated has been expended to date and what is the time required to complete the project?

Most project scheduling tools provide the ability to produce reports to display a variety of useful information. It is recommended that the Project Manager experiment with all available reports to find those that are most useful for reporting information to the Project Team, Customer, and Project Sponsor and/or Project Director.

When updating the Project Schedule, it is very important that the Project Manager maintain the integrity of the current schedule. Each version of the schedule should be archived. By creating a new copy of the schedule whenever it is updated, the Project Manager will never lose the running history of the project and will also have a copy of every schedule for audit purposes.
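In essence, several of the schedule questions above reduce to comparing the baseline against reported actuals plus estimates-to-complete. The following Python sketch illustrates that comparison; the task names and hour figures are invented for the example and are not part of the methodology:

```python
# Illustrative sketch: comparing a schedule baseline against team-reported
# progress. Task names and figures are invented for this example.

baseline = {"design": 40, "build": 80, "test": 30}   # planned hours per task
reported = {                                          # (hours spent, estimate-to-complete)
    "design": (45, 0),    # finished, 5 hours over plan
    "build":  (50, 40),   # in progress, trending 10 hours over
    "test":   (0, 30),    # not yet started
}

for task, planned in baseline.items():
    spent, to_complete = reported[task]
    forecast = spent + to_complete
    variance = forecast - planned
    status = "over" if variance > 0 else "on/under"
    print(f"{task}: planned {planned}h, forecast {forecast}h ({status} by {abs(variance)}h)")

total_planned = sum(baseline.values())
total_forecast = sum(spent + tc for spent, tc in reported.values())
print(f"project: {total_planned}h planned vs {total_forecast}h forecast")
```

A formal time-tracking system or scheduling tool performs the same comparison automatically, which is why archiving each schedule version matters: the history of forecasts is what makes trends visible.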

4.7 IMPLEMENT QUALITY CONTROL

Quality control involves monitoring the project and its progress to determine if the quality standards defined during Project Planning are being implemented and whether the results meet the quality standards defined during Project Initiation. The entire organization has responsibilities relating to quality, but the primary responsibility for ensuring that the project follows its defined quality procedures ultimately belongs to the Project Manager. The following figure highlights the potential results of executing a project with poor quality compared to a project executed with high quality:

Quality control should be performed throughout the course of the project. Some of the activities and processes that can be used to monitor the quality of deliverables, determine if project results comply with quality standards, and identify ways to improve unsatisfactory performance are described below. The Project Manager and Project Sponsor and/or Project Director should decide which are best to implement in their specific project environment.

Conduct Peer Reviews: The goal of a peer review is to identify and remove quality issues from a deliverable as early in Project Execution and Control as possible. A peer review is a thorough review of a specific deliverable, conducted by members of the Project Team who are the day-to-day peers of the individuals who produced the work. The peer review process adds time to the overall Project Schedule, but in many project situations the benefits of conducting a review far outweigh the time considerations. The Project Manager must evaluate the needs of his/her project, determine and document which, if any, deliverables should follow this process, and build the required time and resources into the Project Schedule.

Prior to conducting a peer review, a Project Team member should be identified as the facilitator, or person responsible for keeping the review on track. The facilitator should distribute all relevant information pertaining to the deliverable to all participants in advance of the meeting to prepare them to participate effectively. During the meeting, the facilitator should record information including:

Peer review date
Names and roles of participants
The name of the deliverable being reviewed
Number of quality issues found
Description of each quality issue found
Actions to follow to correct the quality issues prior to presenting the deliverable to the approver
Names of the individuals responsible for correcting the quality issues
The date by which quality issues must be corrected

This information should be distributed to the Project Manager, all meeting participants, and those individuals not involved in the meeting who will be responsible for correcting any problems discovered or for producing similar deliverables. The facilitator should also solicit input from the meeting participants to determine if another peer review is necessary. Once the quality issues have been corrected and the Project Manager is confident the deliverable meets expectations, it may be presented to the approver.

Project Deliverables (Project deliverables will differ depending upon the project lifecycle being used. Customize the following questions and add others as necessary to properly and sufficiently evaluate the deliverables specific to your project.)

Do the deliverables meet the needs of the performing Organization?
Do the deliverables meet the objectives and goals outlined in the Business Case?
Do the deliverables achieve the quality standards defined in the Quality Management Plan?
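The peer-review information the facilitator captures can be modelled as a simple record. This is a hypothetical sketch only; the field names mirror the checklist above but are not from any standard:

```python
# Hypothetical record of the data a peer-review facilitator captures.
# Field names follow the checklist in the text; they are illustrative.
from dataclasses import dataclass, field

@dataclass
class PeerReview:
    date: str
    deliverable: str
    participants: list                              # names and roles
    issues: list = field(default_factory=list)      # (description, owner, due_date)

    def issue_count(self):
        return len(self.issues)

    def needs_followup(self):
        # Another review is warranted while quality issues remain open.
        return self.issue_count() > 0

review = PeerReview("2024-03-01", "Data Model v1",
                    ["A. Analyst (facilitator)", "B. Developer"])
review.issues.append(("Missing index on customer table", "B. Developer", "2024-03-08"))
print(review.issue_count(), review.needs_followup())
```

Keeping the record structured this way makes it easy to distribute to the Project Manager and participants, and to decide whether a follow-up review is needed.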

4.8 PROJECT MANAGEMENT DELIVERABLES

Does the Project Proposal define the business need the project will address, and how the project's product will support the organization's strategic plan?

Does the Business Case provide an analysis of the costs and benefits of the project and provide a compelling case for the project?

Has a Project Repository been established to store all project documents, and has it been made available to the Project Team?

Does the Project Initiation Plan define the project goals and objectives?

Does the Project Scope provide a list of all the processes that will be affected by the project?

In the Project Scope, is it clear as to what is in and out of scope?

Is the Project Schedule defined sufficiently to enable the Project Manager to manage task execution?

Was a Project Schedule baseline established?

Is the Project Schedule maintained on a regular basis?

Does the Quality Management Plan describe quality standards for the project and associated quality assurance and quality control activities?

Has a project budget been established and documented in sufficient detail?

Have project risks been identified and prioritized, and has a mitigation plan been developed and documented for each?

If any risk events have occurred to date, was the risk mitigation plan executed successfully?

Are all Stakeholders aware of their involvement in the project, and has this been documented and stored in the project repository?

Does the Communications Plan describe the frequency and method of communications for all Stakeholders involved in the project?

Does the Change Control Process describe how to identify change, what individuals may request a change, and the process to follow to approve or reject a request for change?

Have changes to scope been successfully managed so far?

Does the Acceptance Management Process clearly define who is responsible for reviewing and approving project and project management deliverables? Does it describe the process to follow to accept or reject deliverables?

Has the Acceptance Management Process proven successful for the deliverables produced so far?

Does the Issue Management Process clearly define how issues will be captured, tracked, and prioritized? Does it define the procedure to follow should an unresolved issue need to be escalated?

Have issues been successfully managed up to this point?

Does the Organizational Change Management Plan document how changes to people, existing business processes, and culture will be handled?

Has a Project Team Training Plan been established, and is it being implemented?

Does the Implementation and Transition Plan describe how to ensure that all Consumers are prepared to use the project's product, and the Performing Organization is prepared to support the product?

Have all Project Management deliverables been approved by the Project Sponsor and/or Project Director (or designated approver)?

Does the Project Plan contain all required components as listed in the Guidebook?

Is each Project Plan component being maintained on a regular basis?

4.9 SUMMARY:

This chapter is based on system feasibility, the roadmap, and business system analysis. Feasibility factors have a significant effect on the system process and cost.

Questions:

1. Explain common factors for a feasibility study in detail.
Ans: Refer 4.2
2. Discuss the feasibility study factors for an Online Examination System in brief.
Ans: Refer 4.1

5.1 INTRODUCTION:

Requirements analysis, in systems engineering and software engineering, encompasses those tasks that go into determining the needs or conditions to meet for a new or altered product, taking account of the possibly conflicting requirements of the various stakeholders, such as beneficiaries or users.

Requirements analysis is critical to the success of a development project.[2] Requirements must be documented, actionable, measurable, testable, related to identified business needs or opportunities, and defined to a level of detail sufficient for system design. Requirements can be functional and non-functional. Conceptually, requirements analysis includes three types of activity:

Eliciting requirements: the task of communicating with customers and users to determine what their requirements are. This is sometimes also called requirements gathering.

Analyzing requirements: determining whether the stated requirements are unclear, incomplete, ambiguous, or contradictory, and then resolving these issues.

Recording requirements: requirements might be documented in various forms, such as natural-language documents, use cases, user stories, or process specifications.

Requirements analysis can be a long and arduous process during which many delicate psychological skills are involved. New systems change the environment and relationships between people, so it is important to identify all the stakeholders, take into account all their needs, and ensure they understand the implications of the new systems. Analysts can employ several techniques to elicit the requirements from the customer. Historically, this has included such things as holding interviews, or holding focus groups (more aptly named in this context as requirements workshops) and creating requirements lists. More modern techniques include prototyping and use cases. Where necessary, the analyst will employ a combination of these methods to establish the exact requirements of the stakeholders, so that a system that meets the business needs is produced.

5.2 REQUIREMENT ENGINEERING:

Systematic requirements analysis is also known as requirements engineering.[3] It is sometimes referred to loosely by names such as requirements gathering, requirements capture, or requirements specification. The term "requirements analysis" can also be applied specifically to the analysis proper, as opposed to elicitation or documentation of the requirements, for instance. Requirements engineering can be divided into discrete chronological steps:

Requirements elicitation

Requirements analysis and negotiation

Requirements specification

System modeling

Requirements validation

Requirements management

Requirements engineering, according to Laplante (2007), is "a subdiscipline of systems engineering and software engineering that is concerned with determining the goals, functions, and constraints of hardware and software systems."[4] In some life cycle models, the requirements engineering process begins with a feasibility study activity, which leads to a feasibility report. If the feasibility study suggests that the product should be developed, then requirements analysis can begin. If requirements analysis precedes feasibility studies, which may foster outside-the-box thinking, then feasibility should be determined before requirements are finalized.

5.3 SOFTWARE REQUIREMENTS SPECIFICATION

A software requirements specification (SRS) is a complete description of the behaviour of the system to be developed. It includes a set of use cases that describe all of the interactions that the users will have with the software. Use cases are also known as functional requirements. In addition to use cases, the SRS also contains non-functional (or supplementary) requirements. Non-functional requirements are requirements which impose constraints on the design or implementation (such as performance requirements, quality standards, or design constraints).

Recommended approaches for the specification of software requirements are described by IEEE 830-1998. This standard describes possible structures, desirable contents, and qualities of a software requirements specification.

Types of Requirements:

Requirements are categorized in several ways. The following are common categorizations of requirements that relate to technical management.

Customer Requirements: Statements of fact and assumptions that define the expectations of the system in terms of mission objectives, environment, constraints, and measures of effectiveness and suitability (MOE/MOS). The customers are those that perform the eight primary functions of systems engineering, with special emphasis on the operator as the key customer. Operational requirements will define the basic need and, at a minimum, answer the questions posed in the following listing:

Operational distribution or deployment: Where will the system be used?

Mission profile or scenario: How will the system accomplish its mission objective?

Performance and related parameters: What are the critical system parameters to accomplish the mission?

Utilization environments: How are the various system components to be used?

Effectiveness requirements: How effective or efficient must the system be in performing its mission?

Operational life cycle: How long will the system be in use by the user?

Environment: In what environments will the system be expected to operate in an effective manner?

5.4 FUNCTIONAL REQUIREMENTS

Functional requirements explain what has to be done by identifying the necessary task, action or activity that must be accomplished. Functional requirements analysis will be used as the top-level functions for functional analysis.

Non-functional Requirements:

Non-functional requirements are requirements that specify criteria that can be used to judge the operation of a system, rather than specific behaviors.

Performance Requirements:

The extent to which a mission or function must be executed, generally measured in terms of quantity, quality, coverage, timeliness or readiness. During requirements analysis, performance (how well does it have to be done) requirements will be interactively developed across all identified functions based on system life cycle factors, and characterized in terms of the degree of certainty in their estimate, the degree of criticality to system success, and their relationship to other requirements.

Design Requirements:

The "build to," "code to," and "buy to" requirements for products, and "how to execute" requirements for processes, expressed in technical data packages and technical manuals.
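The functional/non-functional split described above can be made concrete by tagging each SRS entry with its category. This is a sketch only; the sample requirement texts and identifiers are invented, and IEEE 830-1998 itself prescribes document structure, not code:

```python
# Sketch: tagging SRS entries as functional or non-functional.
# Sample requirement texts and IDs are invented for illustration.
from dataclasses import dataclass
from enum import Enum

class Kind(Enum):
    FUNCTIONAL = "functional"          # what the system must do (use cases)
    NON_FUNCTIONAL = "non-functional"  # constraints: performance, quality, design

@dataclass
class Requirement:
    rid: str
    text: str
    kind: Kind

srs = [
    Requirement("R1", "The user can submit an examination answer sheet.", Kind.FUNCTIONAL),
    Requirement("R2", "Results are displayed within 2 seconds.", Kind.NON_FUNCTIONAL),
]

functional = [r.rid for r in srs if r.kind is Kind.FUNCTIONAL]
print(functional)
```

Keeping the category explicit makes it easy to check that every use case has been captured as a functional requirement, and that constraints such as performance targets are not mixed in with behaviour.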

5.5 SYSTEM ANALYSIS MODEL:

The System Analysis Model is made up of class diagrams, sequence or collaboration diagrams, and state-chart diagrams. Between them they constitute a logical, implementation-free view of the computer system that includes a detailed definition of every aspect of functionality. This model:

Defines what the system does, not how it does it.

Defines logical requirements in more detail than the use case model, rather than a physical solution to the requirements.

Leaves out all technology detail, including system topology.

System Model Information Flow:

The diagram illustrates the way in which the 3-dimensional system model is developed iteratively from the use case model in terms of the information which flows between each view. Note that it is not possible to fully develop any one of the three views without the other two. They are interdependent. This is the reason why incremental and iterative development is the most efficient way of developing computer software.


5.6 SCREEN PROTOTYPING:

Screen prototyping can be used as another useful way of getting information from the users. When it is integrated into a UML model:

The flow of the screen is made consistent with the flow of the use case and the interaction model.

The data entered and displayed on the screen is made consistent with the object model.

The functionality of the screen is made consistent with the interaction and object models.

The System Design Model:

This model is the detailed model of everything that is going to be needed to write all the code for the system components. It is the analysis model plus all the implementation detail. Preferably it should be possible to automatically generate at least frame code from this model. This means that any structural changes to the code can be made in the design model and forward-generated. This ensures that the design model accurately reflects the code in the components. The design model includes:

(1) Class, sequence or collaboration, and state diagrams - as in the analysis model, but now fully defined, ready for code generation
(2) Component diagrams defining all the software components, their interfaces and dependencies
(3) Deployment diagrams defining the topology of the target environment, including which components will run on which computing nodes

Overall Process Flow:

The overall process flow must allow for both rework and incremental development.

Rework - where changes need to be made, the earliest model that the change affects is changed first, and the results then flow forward through all the other models to keep them up to date.

Incrementation - increments can restart at any point, depending upon whether the work needed for this increment has already been completed in higher-level models.

Incremental Development:

Incremental development is based on use cases or use case flows which define working pieces of functionality at the user level. Within an 'increment', the models required to develop a working software increment are each incremented until a working, tested, executing piece of software is produced with incremental functionality. This approach:

(1) Improves estimation, planning and assessment. Use cases provide better baselines for estimation than traditionally written specifications. The estimates are continuously updated and improved throughout the project.
(2) Allows risks to the project to be addressed incrementally and reduced early in the lifecycle. Early increments can be scheduled to cover the most risky parts of the architecture. When the architecture is stable, development can be speeded up.
(3) Benefits users, managers and developers, who see working functionality early in the lifecycle. Each increment is, effectively, a prototype for the next increment.

5.7 MODEL ORGANISATION:

All model syntaxes provide a number of model elements which can appear on one or more diagram types. The model elements are contained within a central model, together with all their properties and connections to other model elements. The diagrams are independent views of the model, just as a number of computer screens looking into different records or parts of a database show different views.

The functional view, made up of data flow diagrams, is the primary view of the system. It defines what is done and the flow of data between things that are done, and provides the primary structure of the solution. Changes in functionality result in changes in the software structure.

The data view, made up of entity relationship diagrams, is a record of what is in the system, or what is outside the system that is being monitored. It is the static structural view.

The dynamic view, made up of state transition diagrams, defines when things happen and the conditions under which they happen.


[Figure: diagrams (Diagram Type 1, Diagram Type 2) as independent views of a central Model]

Structured Analysis: In structured analysis there are three orthogonal views:

Encapsulation of Hardware:

The concept of encapsulation of data and functionality that belongs together is something which the hardware industry has been doing for a long time. Hardware engineers have been creating re-useable, re-configurable hardware at each level of abstraction since the early sixties. Elementary Boolean functions are encapsulated together with bits and bytes of data in registers on chips. Chips are encapsulated together on circuit boards. Circuit boards are made to work together in various system boxes that make up the computer. Computers are made to work together across networks. Hardware design, therefore, is totally object oriented at every level and is, as a result, maximally re-useable, extensible and maintainable; in a single word: flexible. Applying object-orientation to software, therefore, could be seen as putting the engineering into software design that has existed in hardware design for many years.

Hardware encapsulates data and function at every level of abstraction.

This maximises maintainability, reuse and extension.

Encapsulation of Software:
In well-developed object-oriented software, functionality and data are encapsulated in objects. Objects are encapsulated in components. Components are encapsulated into systems. If this is done well the result is:
Maximal coherence
Minimal interconnection
Solid interface definitions
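The layering described above (data and functions inside objects, objects inside components) can be sketched in code. This is an illustrative sketch only; the class names `Register` and `Chip` are invented here to echo the hardware analogy:

```python
# Illustrative sketch: each level encapsulates the data and functions of the
# level below it, exposing only an interface.

class Register:
    """Object level: data (a value) and functions (store/load) packaged together."""
    def __init__(self) -> None:
        self._value = 0          # internal state, hidden behind methods

    def store(self, value: int) -> None:
        self._value = value

    def load(self) -> int:
        return self._value


class Chip:
    """Component level: encapsulates a group of objects behind one interface."""
    def __init__(self, size: int) -> None:
        self._registers = [Register() for _ in range(size)]

    def write(self, index: int, value: int) -> None:
        self._registers[index].store(value)

    def read(self, index: int) -> int:
        return self._registers[index].load()


chip = Chip(4)
chip.write(2, 99)
print(chip.read(2))  # 99
```

Callers of `Chip` never touch a `Register` directly, which is exactly the coherence/minimal-interconnection property the text describes.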

6.1 INTRODUCTION:
A data-flow diagram (DFD) is a graphical representation of the "flow" of data through an information system. DFDs can also be used for the visualization of data processing (structured design).
On a DFD, data items flow from an external data source or an internal data store to an internal data store or an external data sink, via an internal process.
A DFD provides no information about the timing of processes, or about whether processes will operate in sequence or in parallel. It is therefore quite different from a flowchart, which shows the flow of control through an algorithm, allowing a reader to determine what operations will be performed, in what order, and under what circumstances, but not what kinds of data will be input to and output from the system, nor where the data will come from and go to, nor where the data will be stored (all of which are shown on a DFD).

6.2 OVERVIEW OF DFD:

It is common practice to draw a context-level data flow diagram first, which shows the interaction between the system and external agents which act as data sources and data sinks. On the context diagram (also known as the 'Level 0 DFD') the system's interactions with the outside world are modelled purely in terms of data flows across the system boundary. The context diagram shows the entire system as a single process, and gives no clues as to its internal organization.

This context-level DFD is next "exploded", to produce a Level 1 DFD that shows some of the detail of the system being modelled. The Level 1 DFD shows how the system is divided into sub-systems (processes), each of which deals with one or more of the data flows to or from an external agent, and which together provide all of the functionality of the system as a whole. It also identifies internal data stores that must be present in order for the system to do its job, and shows the flow of data between the various parts of the system.

Data-flow diagrams were proposed by Larry Constantine, the original developer of structured design, based on Martin and Estrin's "data-flow graph" model of computation.

Data-flow diagrams (DFDs) are one of the three essential perspectives of the structured-systems analysis and design method SSADM. The sponsor of a project and the end users will need to be briefed and consulted throughout all stages of a system's evolution. With a data-flow diagram, users are able to visualize how the system will operate, what the system will accomplish, and how the system will be implemented. The old system's data-flow diagrams can be drawn up and compared with the new system's data-flow diagrams in order to implement a more efficient system. Data-flow diagrams can be used to provide the end user with a physical idea of where the data they input ultimately has an effect upon the structure of the whole system, from order to dispatch to report. How any system is developed can be determined through a data-flow diagram.

6.2.1 How to develop a Data Flow Diagram:
Top-down approach
1. The system designer makes a context-level DFD (Level 0), which shows the "interaction" (data flows) between "the system" (represented by one process) and "the system environment" (represented by terminators).
2. The system is "decomposed in lower-level DFD (Level 1)" into a set of "processes, data stores, and the data flows between these processes and data stores".
3. Each process is then decomposed into an "even-lower-level diagram containing its sub-processes".
4. This approach "then continues on the subsequent sub-processes", until a necessary and sufficient level of detail is reached, which is called the primitive process (aka "chewable in one bite").
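The top-down decomposition steps above can be sketched as nested data, with a small routine that assigns the conventional hierarchical process numbers. The system and process names below are invented for illustration:

```python
# Sketch of top-down DFD decomposition: each process may be "exploded"
# into sub-processes until the primitive level is reached.

context = {
    "name": "Order System",            # the single Level-0 (context) process
    "subprocesses": [
        {"name": "Take Order", "subprocesses": []},
        {"name": "Fill Order", "subprocesses": [
            {"name": "Pick Stock", "subprocesses": []},
            {"name": "Ship Goods", "subprocesses": []},
        ]},
    ],
}

def number_processes(process, prefix="0"):
    """Assign hierarchical DFD numbers: 0, 1, 2, 2.1, 2.2, ..."""
    numbered = [(prefix, process["name"])]
    for i, sub in enumerate(process["subprocesses"], start=1):
        child = prefix + "." + str(i) if prefix != "0" else str(i)
        numbered.extend(number_processes(sub, child))
    return numbered

for num, name in number_processes(context):
    print(num, name)
```

Running this lists the context process as 0, its sub-processes as 1 and 2, and the sub-sub-processes as 2.1 and 2.2, mirroring how a levelled DFD set is numbered.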

Diagram: Example of DFD flow:

6.2.2 Notation for Drawing Data Flow Diagrams:

Data flow diagrams present the logical flow of information through a system in graphical or pictorial form. Data flow diagrams have only four symbols, which makes them useful for communication between analysts and users. Data flow diagrams (DFDs) show the data used and provided by processes within a system, and make use of four basic symbols.

External Entity
An external entity is a source or destination of a data flow which is outside the area of study. Only those entities which originate or receive data are represented on a business process diagram. The symbol used is an oval containing a meaningful and unique identifier.

Process
A process shows a transformation or manipulation of data flows within the system. The symbol used is a rectangular box which contains 3 descriptive elements:
Firstly, an identification number appears in the upper left hand corner. This is allocated arbitrarily at the top level and serves as a unique reference.
Secondly, a location appears to the right of the identifier and describes where in the system the process takes place. This may, for example, be a department or a piece of hardware. Finally, a descriptive title is placed in the centre of the box. This should be a simple imperative sentence with a specific verb, for example 'maintain customer records' or 'find driver'.

Data Flow
A data flow shows the flow of information from its source to its destination. A data flow is represented by a line, with arrowheads showing the direction of flow. Information always flows to or from a process and may be written, verbal or electronic. Each data flow may be referenced by the processes or data stores at its head and tail, or by a description of its contents.

Data Store
A data store is a holding place for information within the system. It is represented by an open-ended narrow rectangle. Data stores may be long-term files such as sales ledgers, or may be short-term accumulations: for example, batches of documents that are waiting to be processed. Each data store should be given a reference followed by an arbitrary number.

Resource Flow
A resource flow shows the flow of any physical material from its source to its destination. For this reason resource flows are sometimes referred to as physical flows. The physical material in question should be given a meaningful name. Resource flows are usually restricted to early, high-level diagrams and are used when a description of the physical flow of materials is considered to be important to help the analysis.

External Entities
It is normal for all the information represented within a system to have been obtained from, and/or to be passed on to, an external source or recipient. These external entities may be duplicated on a diagram, to avoid crossing data flow lines. Where they are duplicated, a stripe is drawn across the left hand corner. The addition of a lowercase letter to each entity on the diagram is a good way to uniquely identify them.

Processes
When naming processes, avoid glossing over them without really understanding their role. Indications that this has been done are the use of vague terms in the descriptive title area, like 'process' or 'update'. The most important thing to remember is that the description must be meaningful to whoever will be using the diagram.

Data Flows
Double-headed arrows can be used (to show two-way flows) on all but bottom-level diagrams. Furthermore, in common with most of the other symbols used, a data flow at a particular level of a diagram may be decomposed to multiple data flows at lower levels.

Data Stores
Each store should be given a reference letter, followed by an arbitrary number. These reference letters are allocated as follows:
'D' - indicates a permanent computer file
'M' - indicates a manual file
'T' - indicates a transient store, one that is deleted after processing.
In order to avoid complex flows, the same data store may be drawn several times on a diagram. Multiple instances of the same data store are indicated by a double vertical bar on their left hand edge.

6.2.3 Data Flow Diagram Example:
A data flow diagram example is a graphic representation of all the major steps of a process. It can help you:
Understand the complete process.
Identify the critical stages of a process.
Locate problem areas.
Show relationships between different steps in a process.
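The store-referencing convention above ('D', 'M' or 'T' followed by an arbitrary number) is simple enough to capture as a small check. This is only an illustrative sketch of the rule, not part of any DFD tool:

```python
import re

# 'D' = permanent computer file, 'M' = manual file, 'T' = transient store,
# each followed by an arbitrary number (e.g. D1, M2, T42).
STORE_REF = re.compile(r"^[DMT]\d+$")

def is_valid_store_ref(ref: str) -> bool:
    """Return True if ref follows the D/M/T store-referencing convention."""
    return bool(STORE_REF.match(ref))

print(is_valid_store_ref("D1"))   # True  - permanent computer file
print(is_valid_store_ref("T42"))  # True  - transient store
print(is_valid_store_ref("X3"))   # False - not a recognised prefix
```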

Example:
Data flow diagrams can be used to provide a clear representation of any business function. The technique starts with an overall picture of the business and continues by analyzing each of the functional areas of interest. This analysis can be carried out to precisely the level of detail required. The technique exploits a method called top-down expansion to conduct the analysis in a targeted way.

Data Flow Diagram Notation
There are only five symbols that are used in the drawing of business process diagrams (data flow diagrams). These are now explained, together with the rules that apply to them.

This diagram represents a banking process, which maintains customer accounts. In this example, customers can withdraw or deposit cash, request information about their account or update their account details. The five different symbols used in this example (external entity, process, data flow, data store and resource flow, as defined above) represent the full set of symbols required to draw any business process diagram.

Example for Online Inventory System:

Diagram: context-level DFD for an Online Inventory System. The single process (0.0 Online Inventory System) exchanges data flows such as vendor details, enquiry and quotation details, purchase order details, delivery challan details, GRN details, stock details, purchase return details and report details with three external entities: Vendor, Employee and Management.

Examples of Data Flow Diagram

6.3 SUMMARY:
This chapter covered the systematic flow of data with the help of established notations and symbols. The flow of data indicates the sequence in which data is executed and processed in the system. The DFD is an analysis method used to analyze the data down to the level at which the user is satisfied.

Questions:
1. Explain DFD with an example.
Ans: refer 6.2
2. Draw a DFD for a Sale & Purchase Management System for a Manufacturing Industry.
Ans: refer 6.2
3. Draw a DFD for an Event Management System for a Hotel.
Ans: refer 6.2

7.1 INTRODUCTION:
This activity is designed to help you understand the process of designing and constructing ERDs using Systems Architect. Entity Relationship Diagrams are a major data modeling tool and will help organize the data in your project into entities and define the relationships between the entities. This process has proved to enable the analyst to produce a good database structure, so that the data can be stored and retrieved in the most efficient manner.


7.2 ENTITY
A data entity is anything real or abstract about which we want to store data. Entity types fall into five classes: roles, events, locations, tangible things or concepts. E.g. employee, payment, campus, book. Specific examples of an entity are called instances. E.g. the employee John Jones, Mary Smith's payment, etc.

Relationship
A data relationship is a natural association that exists between one or more entities. E.g. employees process payments. Cardinality defines the number of occurrences of one entity for a single occurrence of the related entity. E.g. an employee may process many payments but might not process any payments, depending on the nature of her job.

Attribute
A data attribute is a characteristic common to all or most instances of a particular entity. Synonyms include property, data element, field. E.g. name, address, SSN, pay rate are all attributes of the entity employee. An attribute or combination of attributes that uniquely identifies one and only one instance of an entity is called a primary key or identifier. E.g. SSN is a primary key for Employee.
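The entity/attribute/primary-key vocabulary above maps directly onto code. The sketch below models the Employee entity from the text; the SSN values and pay rates are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Employee:
    ssn: str         # primary key: uniquely identifies one instance
    name: str        # ordinary attribute
    pay_rate: float  # ordinary attribute

# Instances of the entity, stored keyed by their primary key so that
# each key retrieves one and only one instance:
employees = {}
for e in (Employee("111-22-3333", "John Jones", 21.50),
          Employee("444-55-6666", "Mary Smith", 25.00)):
    employees[e.ssn] = e

print(employees["444-55-6666"].name)  # Mary Smith
```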

7.3 ENTITY RELATIONSHIP DIAGRAM METHODOLOGY

1. Identify Entities: Identify the roles, events, locations, tangible things or concepts about which the end-users want to store data.
2. Find Relationships: Find the natural associations between pairs of entities using a relationship matrix.
3. Draw Rough ERD: Put entities in rectangles and relationships on line segments connecting the entities.
4. Fill in Cardinality: Determine the number of occurrences of one entity for a single occurrence of the related entity.
5. Define Primary Keys: Identify the data attribute(s) that uniquely identify one and only one occurrence of each entity.
6. Draw Key-Based ERD: Eliminate many-to-many relationships and include primary and foreign keys in each entity.
7. Identify Attributes: Name the information details (fields) which are essential to the system under development.
8. Map Attributes: For each attribute, match it with exactly one entity that it describes.
9. Draw Fully Attributed ERD: Adjust the ERD from step 6 to account for entities or relationships discovered in step 8.
10. Check Results: Does the final Entity Relationship Diagram accurately depict the system data?

7.3.1 Solved Example:
A company has several departments. Each department has a supervisor and at least one employee. Employees must be assigned to at least one, but possibly more, departments. At least one employee is assigned to a project, but an employee may be on vacation and not assigned to any projects. The important data fields are the names of the departments, projects, supervisors and employees, as well as the supervisor and employee SSN and a unique project number.

1. Identify Entities
The entities in this system are Department, Employee, Supervisor and Project. One is tempted to make Company an entity, but it is a false entity because it has only one instance in this problem. True entities must have more than one instance.

2. Find Relationships:

5. Define Primary Keys

6. Draw Key-Based ERD

There are two many-to-many relationships in the rough ERD above, between Department and Employee and between Employee and Project. Thus we need the associative entities Department-Employee and Employee-Project. The primary key for Department-Employee is the concatenated key Department Name and Employee SSN. The primary key for Employee-Project is the concatenated key Employee SSN and Project Number.
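The associative-entity idea can be sketched concretely. Here Department-Employee is modelled as a set of (Department Name, Employee SSN) pairs, so the concatenated key enforces uniqueness; the department names and SSNs are invented sample data:

```python
# Associative entity Department-Employee: its primary key is the
# concatenation (department_name, employee_ssn), which resolves the
# many-to-many relationship between Department and Employee.

department_employee = {
    ("Accounts", "111-22-3333"),
    ("Accounts", "444-55-6666"),
    ("Research", "111-22-3333"),   # same employee, second department
}

# The concatenated key keeps each assignment unique...
assert len(department_employee) == 3

# ...and lets us navigate the relationship in either direction:
def departments_of(ssn):
    return sorted(d for d, s in department_employee if s == ssn)

print(departments_of("111-22-3333"))  # ['Accounts', 'Research']
```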

Example for Inventory Management System:

ERD:

Diagram: ERD for the Inventory Management System, showing entities such as Vendor, Item, Stock, Enquiry, Quotation, PO (purchase order), Challan, GRN, Invoice, Purchase Return and Employee, linked by 'has' relationships with one-to-many (1:m) and many-to-many (m:m) cardinalities, together with associative entities such as Enquiry_item, Quotation_item, Purchase_order_item, Challan_item, GRN_item and Purchase_return_item.

Example to draw ERD for Travelling Service:

Diagram: ERD for a Travelling Service, showing a User specialised (is-a) into Traveller, Service Provider and Admin; the Admin configures Flights; a Flight has a Tariff Plan, Baggage Allowed, Flight Schedule and Fare; a Booking has a Payment, and a Cancellation has a Refund.
7. Identify Attributes
The only attributes indicated are the names of the departments, projects, supervisors and employees, as well as the supervisor and employee SSN and a unique project number.

8. Map Attributes
Attribute         Entity
Department Name   Department
Supervisor SSN    Supervisor
Employee SSN      Employee
Supervisor Name   Supervisor
Employee Name     Employee
Project Name      Project
Project ID        Project

10. Check Results

The final ERD seems to model the data in this system well.

7.4 DATA DICTIONARY:

A data dictionary, a.k.a. metadata repository, as defined in the IBM Dictionary of Computing, is a "centralized repository of information about data such as meaning, relationships to other data, origin, usage, and format." The term may have one of several closely related meanings pertaining to databases and database management systems (DBMS):

a document describing a database or collection of databases
an integral component of a DBMS that is required to determine its structure
a piece of middleware that extends or supplants the native data dictionary of a DBMS

Database users and application developers can benefit from an authoritative data dictionary document that catalogs the organization, contents, and conventions of one or more databases. This typically includes the names and descriptions of various tables and fields in each database, plus additional details, like the type and length of each data element. There is no universal standard as to the level of detail in such a document.

7.4.1 Example of Data Dictionary:
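As a minimal sketch of such a document, a data dictionary can be represented as a nested structure of tables, fields, types and descriptions. The table name, field names, types and lengths below are all invented for illustration:

```python
# A minimal data-dictionary document as a Python structure.

data_dictionary = {
    "employee": {
        "ssn":      {"type": "char",    "length": 11, "description": "Primary key"},
        "name":     {"type": "varchar", "length": 40, "description": "Full name"},
        "pay_rate": {"type": "decimal", "length": 8,  "description": "Hourly rate"},
    },
}

def describe(table: str, field: str) -> str:
    """Render one dictionary entry as a human-readable line."""
    entry = data_dictionary[table][field]
    return f"{table}.{field}: {entry['type']}({entry['length']}) - {entry['description']}"

print(describe("employee", "ssn"))
# employee.ssn: char(11) - Primary key
```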

7.5 UML DIAGRAM:

Unified Modelling Language (UML) is a standardized general-purpose modelling language in the field of software engineering. The standard is managed, and was created by, the Object Management Group.

The Unified Modelling Language (UML) is used to specify, visualize, modify, construct and document the artifacts of an object-oriented, software-intensive system under development. UML offers a standard way to visualize a system's architectural blueprints, including elements such as:

actors

business processes

(logical) components

activities

programming language statements

database schemas, and

reusable software components

7.5.1 What is UML?
The Unified Modelling Language was originally developed at Rational Software but is now administered by the Object Management Group (see link). It is a modelling syntax aimed primarily at creating models of software-based systems, but it can be used in a number of areas. It is:

Syntax only - UML is just a language; it tells you what model elements and diagrams are available and the rules associated with them. It does not tell you what diagrams to create.
Process-independent - the process by which the models are created is separate from the definition of the language. You will need a process in addition to the use of UML itself.
Tool-independent - UML leaves plenty of space for tool vendors to be creative and add value to visual modelling with UML. However, some tools will be better than others for particular applications.
Well documented - the UML notation guide is available as a reference to all the syntax available in the language.
Its application is not well understood - the UML notation guide is not sufficient to teach you how to use the language. It is a generic modelling language and needs to be adapted by the user to particular applications.
Originally just for system modelling - some user-defined extensions are becoming more widely used now, for example, for business modelling and modelling the design of web-based applications.

7.5.2 How UML Started?
UML came about when James Rumbaugh joined Grady Booch at Rational Software. They both had object-oriented

syntaxes and needed to combine them. Semantically, they were very similar; it was mainly the symbols that needed to be unified.

UML diagrams are divided into four types:
- Activity Diagrams - a generic flow chart used much in business modelling and sometimes in use case modelling to indicate the overall flow of the use case. This diagram type replaces the need for dataflow diagrams but is not a main diagram type for the purposes of analysis and design.
- State Machine Diagrams - in information systems these tend to be used to describe the lifecycle of an important data entity. In real-time systems they tend to be used to describe state-dependent behaviour.
- Component Diagrams - show the types of components, their interfaces and dependencies in the software architecture that is the solution to the application being developed.
- Deployment Diagrams - show actual computing nodes, their communication relationships and the processes or components that run on them.

UML can be used to model a business, prior to automating it with computers. The same basic UML syntax is used; however, a number of new symbols are added, in order to make the diagrams more relevant to the business process world. A commonly-used set of these symbols is available in current versions of Rational Rose.

The most commonly used UML extensions for web applications were developed by Jim Conallen. You can access his own website to learn more about them by following the link. These symbols are also available in current versions of Rational Rose.

UML is designed to be extended in this way. Extensions to the syntax are created by adding 'stereotypes' to a model element. The stereotype creates a new model element from an existing one with an extended, user-defined meaning. User-defined symbols, which replace the original UML symbol for the model element, can then be assigned to the stereotype. UML itself uses this mechanism, so it is important to know what stereotypes are predefined in UML in order not to clash with them when creating new ones.

The use case diagram shows the functionality of the system from an outside-in viewpoint.

- Actors (stick men) are anything outside the system that interacts with the system.

- Use Cases (ovals) are the procedures by which the actors interact with the system.
- Solid lines indicate which actors interact with the system as part of which procedures.
- Dashed lines show dependencies between use cases, where one use case is 'included' in or 'extends' another.

7.6 CLASS DIAGRAM:

Class diagrams show the static structure of the systems. Classes define the properties of the objects which belong to them. These include:
- Attributes - (second container) the data properties of the classes including type, default value and constraints.
- Operations - (third container) the signature of the functionality that can be applied to the objects of the classes including parameters, parameter types, parameter constraints, return types and the semantics.
- Associations - (solid lines between classes) the references, contained within the objects of the classes, to other objects, enabling interaction with those objects.

7.6.1 Example:

Sequence Diagram:
Sequence diagrams show potential interactions between objects in the system being defined. Normally these are specified as part of a use case or use case flow and show how the use case will be implemented in the system. They include:
- Objects - oblong boxes or actors at the top - either named or just shown as belonging to a class - from, or to, which messages are sent to other objects.

Example:

- Messages - solid lines for calls and dotted lines for data returns, showing the messages that are sent between objects, including the order of the messages, which is from the top to the bottom of the diagram.
- Object lifelines - dotted vertical lines showing the lifetime of the objects.
- Activation - the vertical oblong boxes on the object lifelines showing the thread of control in a synchronous system.
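The class-diagram and sequence-diagram concepts above have direct counterparts in code. The sketch below is illustrative only; the `Account` and `ATM` classes and their message names are invented, not taken from the text:

```python
class Account:
    """A class with an attribute (second container) and an operation (third)."""
    def __init__(self, balance: float) -> None:
        self.balance = balance                 # attribute

    def debit(self, amount: float) -> float:   # operation
        self.balance -= amount
        return self.balance


class ATM:
    """Holds an association (reference) to an Account object."""
    def __init__(self, account: Account) -> None:
        self.account = account                 # association to another object

    def withdraw(self, amount: float) -> float:
        # As a sequence diagram would show it:
        #   message (solid line):  ATM -> Account : debit(amount)
        #   return (dotted line):  Account -> ATM : new balance
        return self.account.debit(amount)


atm = ATM(Account(100.0))
print(atm.withdraw(30.0))  # 70.0
```

The class diagram view is the static part (attributes, operations, the association from `ATM` to `Account`); the sequence diagram view is the ordered `withdraw` → `debit` → return interaction.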

7.7 COMMUNICATION DIAGRAM:

Communication Diagrams show similar information to sequencediagrams, except that the vertical sequence is missing. In its placeare:

- Object Links - solid lines between the objects. These represent the references between objects that are needed for them to interact, and so show the static structure at object level.
- Messages - arrows with one or more message names that show the direction and names of the messages sent between objects.

7.8 ACTIVITY DIAGRAM:

A UML Activity Diagram is a general-purpose flowchart with a few extras. It can be used to detail a business process, or to help define complex iteration and selection in a use case description. It includes:
- Active states - oblongs with rounded corners which describe what is done.
- Transitions - which show the order in which the active states occur and represent a thread of activity.
- Conditions - (in square brackets) which qualify the transitions.
- Decisions - (nodes in the transitions) which cause the thread to select one of multiple paths.


7.9 COMPONENT DIAGRAM:

Component Diagrams show the types of softwarecomponents in the system, their interfaces and dependencies.7.9.1 Example:

7.10 DEPLOYMENT DIAGRAM:

Deployment diagrams show the computing nodes in the system,their communication links, the components that run on them andtheir dependencies.7.10.1 Example:


7.11 SUMMARY:
In this chapter we discussed many points related to system development and analysis models. These models help us to represent any system, whether online or offline.

Questions:
1. Explain the ER diagram in detail.
Ans: refer 7.2
2. Explain UML diagrams in detail.
Ans: refer 7.5
3. Draw an ERD for a Hotel Management System.
Ans: refer 7.2
4. Draw a use case diagram for Online Shopping for Greeting Cards.
Ans: refer 7.5

8.1 INTRODUCTION:
Systems design is the process or art of defining the architecture, components, modules, interfaces, and data for a system to satisfy specified requirements. One could see it as the application of systems theory to product development. There is some overlap with the disciplines of systems analysis, systems architecture and systems engineering.

Object-oriented analysis and design methods are becoming the most widely used methods for computer system design. The UML has become the standard language used in object-oriented analysis and design. It is widely used for modeling software systems and is increasingly used for the high-level design of non-software systems and organizations.

8.2 LOGICAL DESIGN:

The logical design of a system pertains to an abstract representation of the data flows, inputs and outputs of the system. This is often conducted via modelling, which involves a simplistic (and sometimes graphical) representation of an actual system. In the context of systems design, modelling can take the following forms, including:

Data flow diagrams
Entity life histories
Entity relationship diagrams

8.3 PHYSICAL DESIGN:

The physical design relates to the actual input and output processes of the system. This is laid down in terms of how data is input into a system, how it is verified/authenticated, how it is processed, and how it is displayed as output.
Physical design, in this context, does not refer to the tangible physical design of an information system. To use an analogy, a personal computer's physical design involves input via a keyboard, processing within the CPU, and output via a monitor, printer, etc. It would not concern the actual layout of the tangible hardware, which for a PC would be a monitor, CPU, motherboard, hard drive, modems, video/graphics cards, USB slots, etc.

8.4 ALTERNATIVE DESIGN METHODS:

8.4.1 Rapid Application Development (RAD)
Rapid Application Development (RAD) is a methodology in which a systems designer produces prototypes for an end-user. The end-user reviews the prototype and offers feedback on its suitability. This process is repeated until the end-user is satisfied with the final system.

8.4.2 Joint Application Development (JAD)
JAD is a methodology which evolved from RAD, in which a systems designer consults with a group consisting of the following parties:

Executive sponsor

Systems Designer

Managers of the system

8.5 EMBEDDED SYSTEM:

An embedded system is a computer system designed to perform one or a few dedicated functions, often with real-time computing constraints. It is embedded as part of a complete device, often including hardware and mechanical parts. By contrast, a general-purpose computer, such as a personal computer (PC), is designed to be flexible and to meet a wide range of end-user needs.
Characteristics:
1. Embedded systems are designed to do some specific task, rather than be a general-purpose computer for multiple tasks. Some also have real-time performance constraints that must be met, for reasons such as safety and usability; others may have low or no performance requirements, allowing the system hardware to be simplified to reduce costs.
2. Embedded systems are not always standalone devices. Many embedded systems consist of small, computerized parts within a larger device that serves a more general purpose. For example, the Gibson Robot Guitar features an embedded system for tuning the strings, but the overall purpose of the Robot Guitar is, of course, to play music. Similarly, an embedded system in an automobile provides a specific function as a subsystem of the car itself.
3. The program instructions written for embedded systems are referred to as firmware, and are stored in read-only memory or flash memory chips. They run with limited computer hardware resources: little memory, and a small or non-existent keyboard and/or screen.

8.6 DESIGN PHASE ACTIVITIES:

1. The Systems Development Life Cycle (SDLC), or Software Development Life Cycle in systems engineering, information systems and software engineering, is the process of creating or altering systems, and the models and methodologies that people use to develop these systems. The concept generally refers to computer or information systems.

2. Work breakdown structure organization:
The upper section of the Work Breakdown Structure (WBS) should identify the major phases and milestones of the project in a summary fashion. In addition, the upper section should provide an overview of the full scope and timeline of the project, and will be part of the initial project description effort leading to project approval.

8.7 BASELINES IN THE SDLC:

Baselines are an important part of the Systems Development Life Cycle (SDLC). These baselines are established after four of the five phases of the SDLC and are critical to the iterative nature of the model. Each baseline is considered a milestone in the SDLC.

Functional Baseline: established after the conceptual design phase.
Allocated Baseline: established after the preliminary design phase.
Product Baseline: established after the detail design and development phase.
Updated Product Baseline: established after the production construction phase.

8.8 SYSTEM FLOW CHART:

A system flowchart explains how a system works using a diagram. The diagram shows the flow of data through a system.

8.8.1 Symbols used to draw a Flowchart:

The symbols are linked with directed lines (lines with arrows) showing the flow of data through the system.

8.8.2 An example of a system flowchart is shown below:

Transactions are input, validated and sorted, and then used to update a master file.

Note: The arrows show the flow of data through the system. The dotted line shows that the updated master file is then used as input for the next update process.

A flowchart is a type of diagram that represents an algorithm or process, showing the steps as boxes of various kinds, and their order by connecting these with arrows. This diagrammatic representation can give a step-by-step solution to a given problem. Data is represented in these boxes, and the arrows connecting them represent the flow, or direction of flow, of data. Flowcharts are used in analyzing, designing, documenting or managing a process or program in various fields.

Example:

8.8.3 Flowchart Symbols:

A typical flowchart from older computer science textbooks may have the following kinds of symbols:

Start and end symbols: Represented as circles, ovals or rounded rectangles, usually containing the word "Start" or "End", or another phrase signalling the start or end of a process, such as "submit enquiry" or "receive product".

Arrows: Showing what is called "flow of control" in computer science. An arrow coming from one symbol and ending at another symbol represents that control passes to the symbol the arrow points to.

Processing steps: Represented as rectangles. Examples: "Add 1 to X"; "replace identified part"; "save changes" or similar.

Input/Output: Represented as a parallelogram. Examples: Get X from the user; display X.
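As a concrete illustration, the transaction-processing flowchart described earlier (transactions are validated, sorted, and then used to update a master file) can be read as ordinary code. The following Python sketch is not from the text; the function and field names (validate, update_master, id, amount) are invented for illustration:

```python
# Illustrative sketch of the flowchart's steps: validate, sort, update.
# All names here are hypothetical, not part of the original example.

def validate(transaction):
    """Keep only transactions that have an ID and a positive amount."""
    return "id" in transaction and transaction.get("amount", 0) > 0

def update_master(master, transactions):
    """Apply each valid transaction, in sorted order, to the master file."""
    valid = [t for t in transactions if validate(t)]            # validate step
    for t in sorted(valid, key=lambda t: t["id"]):              # sort step
        master[t["id"]] = master.get(t["id"], 0) + t["amount"]  # update step
    return master                                               # updated master file

master = {"A1": 100}
transactions = [{"id": "A1", "amount": 50}, {"amount": 10}]  # second is invalid
print(update_master(master, transactions))  # {'A1': 150}
```

Each box in the flowchart corresponds to one step in the function, and the arrows correspond to the order in which the statements execute.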

8.9 STRUCTURE CHART:

A Structure Chart (SC) in software engineering and organizational theory is a chart which shows the breakdown of the configuration system to the lowest manageable levels.

This chart is used in structured programming to arrange the program modules in a tree structure. Each module is represented by a box, which contains the module's name. The tree structure visualizes the relationships between the modules.

Overview:
A structure chart is a top-down modular design tool, constructed of squares representing the different modules in the system, and lines that connect them. The lines represent the connection and/or ownership between activities and sub-activities as they are used in organization charts.

In structured analysis, structure charts, according to Wolber (2009), "are used to specify the high-level design, or architecture, of a computer program. As a design tool, they aid the programmer in dividing and conquering a large software problem, that is, recursively breaking a problem down into parts that are small enough to be understood by a human brain. The process is called top-down design, or functional decomposition. Programmers use a structure chart to build a program in a manner similar to how an architect uses a blueprint to build a house. In the design stage, the chart is drawn and used as a way for the client and the various software designers to communicate. During the actual building of the program (implementation), the chart is continually referred to as the master-plan".

A structure chart is also used to diagram associated elements that comprise a run stream or thread. It is often developed as a hierarchical diagram, but other representations are allowable. The representation must describe the breakdown of the configuration system into subsystems and the lowest manageable level. An accurate and complete structure chart is the key to the determination of the configuration items, and a visual representation of the configuration system and the internal interfaces among its CIs.
During the configuration control process, the structure chart is used to identify CIs and their associated artifacts that a proposed change may impact.

8.9.1 Applications of Structure Chart:
Use a Structure Chart to illustrate the high-level overview of software structure. Structure Charts do not show module internals. Use a method, such as pseudocode or Structured English, to show the detailed internals of modules.
Following are a few advantage points:
- Representing Sequence, Repetition, and Condition on a Structure Chart

- Modules on a Structure Chart
- Interrelationships among Modules
- Information Transfers
- Reducing Clutter on a Structure Chart

8.9.2 Example: The example Structure Chart illustrates the structure of the modules to process a customer order.
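The modular decomposition a structure chart records maps naturally onto functions calling sub-functions: the root box becomes a top-level routine, and each child box a lower-level module. The sketch below is a hypothetical Python rendering of the customer-order example; all module names and data are invented:

```python
# Illustrative only: a top-down decomposition matching a structure chart
# for "process a customer order". Each function plays the role of one box.

def get_order():
    """Lowest-level module: obtain the order data."""
    return {"item": "widget", "qty": 3, "unit_price": 2.5}

def compute_total(order):
    """Lowest-level module: compute the order total."""
    return order["qty"] * order["unit_price"]

def print_invoice(order, total):
    """Lowest-level module: format the invoice line."""
    return f"{order['qty']} x {order['item']} = {total}"

def process_customer_order():
    # The top module calls the lower-level modules, just as the chart's
    # root box connects to its child boxes.
    order = get_order()
    total = compute_total(order)
    return print_invoice(order, total)

print(process_customer_order())  # 3 x widget = 7.5
```

As in the chart, the top-level routine shows only what modules exist and in what order they are used, not how each one works internally.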

8.10 TRANSACTIONAL ANALYSIS:

Transactional Analysis is a theory developed by Dr. Eric Berne in the 1950s. Transactional analysis, commonly known as TA to its adherents, is an integrative approach to the theory of psychology and psychotherapy. Transactional analysis can serve as a sophisticated, elegant, and effective system on which to base the practical

activities of professionals in psychotherapy, counselling, education, and organizational consultation. It is a sophisticated theory of personality, motivation, and problem solving that can be of great use to psychotherapists, counsellors, educators, and business consultants.

Transactional analysis can be divided into five theoretical and practical conceptual clusters. These five clusters enjoy varying degrees of recognition within the behavioural sciences. They are listed below along with (between quotes) concepts that parallel them in the behavioural sciences.

1. The Strokes Cluster. This cluster finds correlates in existing theories of "attachment," "intimacy," "warmth," "tender loving care," "need to belong," "contact," "closeness," "relationships," "social support," and "love."
2. The OK Cluster. This cluster finds correlates in existing theories of "positive psychology," "flow," "human potential," "resiliency," "excellence," "optimism," "subjective well-being," "positive self-concept," "spontaneous healing," "nature's helping hand," "vis medicatrix naturae" (the healing power of nature), and "the healing power of the mind."
3. The Script and Games Cluster. This cluster finds correlates in existing theories of "narratives," "maladaptive schemas," "self-narratives," "story schemas," "story grammars," "personal myths," "personal event memories," "self-defining memories," "nuclear scenes," "gendered narratives," "narrative coherence," "narrative complexity," "core self-beliefs," and "self-concept."
4. The Ego States and Transactions Cluster. The idea of three ego states and the transactional interactions between them are the most distinctive feature of transactional analysis, and yet have the least amount of resonance in the literature. However, the utility of this concept is the principal reason why people become interested and maintain their interest in transactional analysis.
5. The Transactional Analysis Theory of Change Cluster. Transactional analysis is essentially a cognitive-behavioural theory of personality and change that nevertheless retains an interest in the psychodynamic aspect of the personality.

Transactional Analysis is a contractual approach. A contract is "an explicit bilateral commitment to a well-defined course of action" (Berne, 1966). This means that all parties need to agree:


- why they want to do something
- with whom
- what they are going to do
- by when
- any fees, payments or exchanges there will be.

The fact that these different states exist is taken as being responsible for the positive or negative outcomes of conversations. Berne showed the transactional stimulus and response through the use of a simple diagram showing parent (P), adult (A) and child (C) ego states and the transactional links between them.

8.10.1 The three ego states presented by Berne are:

Parent: The parent ego state is characterized by the need to establish standards, direct others, instill values and criticize. There are two recognized sub-groups of this ego state: controlling parents, who show signs of being authoritarian, controlling and negative, and nurturing parents, who tend to be positive and supportive, but who can become suffocating.

Adult: The adult ego state is characterized by the ability to act in a detached and rational manner, logically as a decision maker utilizing information to its maximum. The archetypal example of this ego state might be Mr Spock!

Child: The child ego state is characterized by a greater demonstration of emotion, either positive or negative. Once again, as with the parent, there are sub-groups of this ego state, in this case three. The first is the natural child state, with uninhibited actions, which might range from energy and raw enthusiasm to curiosity and fear. It is essentially self-centred. The adapted child state is a state where emotions are still strong, but there is some attempt to control them, ending in compliant or withdrawn behaviours. Finally, the 'little professor' is a child ego state that shows emerging adult traits, and a greater ability to constrain emotions.

Transactions can be brief, can involve no verbal content at all (looking at someone across a room), or can be long and involved. However, Berne believed that there were four basic types of transaction:

8.10.2 The four basic types of transaction:

Complementary: A transaction where the ego states complement each other, resulting in a positive exchange. This might include two teachers discussing some assessment data in order to solve a problem, where they are both inhabiting the adult ego state.

Duplex: This is a transaction that can appear simple, but entails two levels of communication, one often implicit. At a social level, the transaction might be adult to adult, but at a psychological level it might be child to child, as a hidden competitive communication.

Angular: Here, the stimulus appears to be aimed at one ego state, but covertly is actually aimed at another, such as the use of sarcasm. This may then lead to a different ego state response from that which might be expected.

Crossed: Here, the parent acts as a controlling parent, but in aiming the stimulus at the child ego state, a response from the adult ego state, although perhaps perfectly reasonable but unexpected, brings conflict. As a result, where there are crossed transactions, there is a high possibility of a negative development to a conversation, often resulting in confrontation or bad feeling.

8.10.3 Example: Transactional Analysis in the classroom:

Within any classroom there is a constant dynamic transactional process developing. How this is managed can have important ramifications for both short and long term relationships between staff and students. If we take as a starting point the more traditional style of relationship between teacher and student, this will often occur as a parental stimulus directed at a child, expecting a child reaction. Hence, this translates to a simple transaction such as that below (shown by the solid lines). However, as all teachers know, students at secondary level are beginning to re-establish their boundaries as people and are becoming increasingly independent. As a result, it is increasingly likely that as the children become older, a parental stimulus directed at a child ego state will result in an adult to adult response. Even though the response is perfectly reasonable, and indeed would be sought in most circumstances, in this case it leads to a crossed transaction and the potential for a negative conversation.

8.11 SUMMARY

This chapter is based on system design and different design models such as the flow chart, UML diagram, structure chart, and activity diagram. These diagrams help to represent the flow and processing of data.

9.1 INTRODUCTION:

Software documentation or source code documentation is written text that accompanies computer software. It either explains how it operates or how to use it, and may mean different things to people in different roles.

9.2 PEOPLE AND SOFTWARE:

Documentation is an important part of software engineering. Types of documentation include:
1. Requirements - Statements that identify attributes, capabilities, characteristics, or qualities of a system. This is the foundation for what shall be or has been implemented.
2. Architecture/Design - Overview of software. Includes relations to an environment and construction principles to be used in design of software components.
3. Technical - Documentation of code, algorithms, interfaces, and APIs.
4. End User - Manuals for the end-user, system administrators and support staff.

5. Marketing - How to market the product and analysis of the market demand.

9.3 REQUIREMENTS DOCUMENTATION:

Requirements documentation is the description of what particular software does or shall do. It is used throughout development to communicate what the software does or shall do. It is also used as an agreement, or as the foundation for agreement, on what the software shall do. Requirements are produced and consumed by everyone involved in the production of software: end users, customers, product managers, project managers, sales, marketing, software architects, usability experts, interaction designers, developers, and testers, to name a few. Thus, requirements documentation has many different purposes.

The need for requirements documentation is typically related to the complexity of the product, the impact of the product, and the life expectancy of the software. If the software is very complex or developed by many people (e.g., mobile phone software), requirements can help to better communicate what to achieve. If the software is safety-critical and can have a negative impact on human life (e.g., nuclear power systems, medical equipment), more formal requirements documentation is often required. If the software is expected to live for only a month or two (e.g., very small mobile phone applications developed specifically for a certain campaign), very little requirements documentation may be needed. If the software is a first release that is later built upon, requirements documentation is very helpful when managing the change of the software and verifying that nothing has been broken in the software when it is modified.

9.4 ARCHITECTURE/DESIGN DOCUMENTATION:

Architecture documentation is a special breed of design document. In a way, architecture documents are third derivative from the code (the design document being second derivative, and code documents being first). Very little in the architecture documents is specific to the code itself. These documents do not describe how to program a particular routine, or even why that particular routine exists in the form that it does, but instead merely lay out the general requirements that would motivate the existence of such a routine. A good architecture document is short on details but thick on explanation. It may suggest approaches for lower level design, but leave the actual exploration trade studies to other documents.

A very important part of the design document in enterprise software development is the Database Design Document (DDD). It contains Conceptual, Logical, and Physical Design Elements. The DDD includes the formal information that the people who interact with the database need. The purpose of preparing it is to create a common source to be used by all players within the scene. The potential users are:

Database Designer

Database Developer

Database Administrator

Application Designer

Application Developer

When talking about Relational Database Systems, the document should include the following parts:

Entity-Relationship Schema, including the following information and their clear definitions:

Entity Sets and their attributes

Relationships and their attributes

Candidate keys for each entity set

Attribute and Tuple based constraints

Relational Schema, including the following information:

Tables, Attributes, and their properties

Views

Constraints such as primary keys and foreign keys

Cardinality of referential constraints

Cascading Policy for referential constraints

Primary keys

It is very important to include all information that is to be used by all actors in the scene. It is also very important to update the documents as any change occurs in the database as well.
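One way to picture what such a document records: the schema details listed above (tables, keys, referential constraints and their cascading policies) can be captured as structured data shared by designers, developers and administrators. The following Python sketch is purely illustrative; the table and column names are invented:

```python
# Hypothetical sketch: recording the relational-schema part of a Database
# Design Document as plain data, so all players share one common source.

ddd = {
    "tables": {
        "customer": {
            "columns": {"customer_id": "INTEGER", "name": "VARCHAR(80)"},
            "primary_key": ["customer_id"],
            "foreign_keys": {},
        },
        "order": {
            "columns": {"order_id": "INTEGER", "customer_id": "INTEGER"},
            "primary_key": ["order_id"],
            # Referential constraint with its cascading policy, as the
            # text says the document should record.
            "foreign_keys": {"customer_id": ("customer", "ON DELETE CASCADE")},
        },
    },
}

def referencing_tables(ddd, target):
    """List tables whose foreign keys reference the target table."""
    return [name for name, t in ddd["tables"].items()
            if any(ref == target for ref, _ in t["foreign_keys"].values())]

print(referencing_tables(ddd, "customer"))  # ['order']
```

Keeping such a description current as the database changes is exactly the maintenance obligation the text describes.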

9.5 TECHNICAL DOCUMENTATION:

This is what most programmers mean when using the term software documentation. When creating software, code alone is insufficient. There must be some text along with it to describe various aspects of its intended operation. It is important for the code documents to be thorough, but not so verbose that it becomes difficult to maintain them. Several how-to and overview documents are found specific to the software application or software product being documented by API writers. This documentation may be used by developers, testers and also the end customers or clients using this software application.

Many programmers really like the idea of auto-generating documentation, for various reasons. For example, because it is extracted from the source code itself (for example, through comments), the programmer can write it while referring to the code, and use the same tools used to create the source code to make the documentation. This makes it much easier to keep the documentation up-to-date.
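Python's docstrings are a familiar example of documentation that lives next to the source and is extracted by standard tools, so the text stays close to the code it describes. This small sketch uses only the standard library:

```python
# Documentation extracted from source code: the docstring is written
# beside the code and read back out by standard tools such as help(),
# pydoc, and the inspect module.

import inspect

def area(width, height):
    """Return the area of a width x height rectangle."""
    return width * height

# The same text the programmer wrote beside the code becomes the
# generated documentation, which makes it easier to keep up to date.
print(inspect.getdoc(area))  # Return the area of a width x height rectangle.
```

Tools like pydoc (or Sphinx, with further configuration) assemble pages from exactly these strings.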

9.6 USER DOCUMENTATION:

Unlike code documents, user documents are usually far more diverse with respect to the source code of the program, and instead simply describe how it is used.

In the case of a software library, the code documents and user documents could be effectively equivalent and are worth conjoining, but for a general application this is not often true. On the other hand, the Lisp machine grew out of a tradition in which every piece of code had an attached documentation string. In combination with strong search capabilities (based on a Unix-like apropos command) and online sources, Lisp users could look up documentation prepared by these API writers and paste the associated function directly into their own code. This level of ease of use is unheard of in putatively more modern systems.

Typically, the user documentation describes each feature of the program, and assists the user in realizing these features. A good user document can also go so far as to provide thorough troubleshooting assistance. It is very important for user documents not to be confusing, and for them to be up to date. User documents need not be organized in any particular way, but it is very important for them to have a thorough index. Consistency and simplicity are also very valuable. User documentation is considered to constitute a contract specifying what the software will do. API writers are well suited to writing good user documents, as they are well aware of the software architecture and programming techniques used. See also Technical Writing.

There are three broad ways in which user documentation can be organized:
1. Tutorial: A tutorial approach is considered the most useful for a new user, in which they are guided through each step of accomplishing particular tasks.

2. Thematic: A thematic approach, where chapters or sections concentrate on one particular area of interest, is of more general use to an intermediate user. Some authors prefer to convey their ideas through a knowledge-based article to facilitate the users' needs. This approach is usually practiced by a dynamic industry, such as information technology, where the user population is largely correlated with the troubleshooting demands.
3. List or Reference: The final type of organizing principle is one in which commands or tasks are simply listed alphabetically or logically grouped, often via cross-referenced indexes. This latter approach is of greater use to advanced users who know exactly what sort of information they are looking for.

9.7 MARKETING DOCUMENTATION:

For many applications it is necessary to have some promotional materials to encourage casual observers to spend more time learning about the product. This form of documentation has three purposes:
1. To excite the potential user about the product and instill in them a desire for becoming more involved with it.
2. To inform them about what exactly the product does, so that their expectations are in line with what they will be receiving.
3. To explain the position of this product with respect to other alternatives.

9.8 HIPO CHART:

HIPO, for Hierarchical Input Process Output, is a popular 1970s systems analysis, design aid and documentation technique for representing the modules of a system as a hierarchy and for documenting each module.

It was used to develop requirements, construct the design, and support implementation of an expert system to demonstrate automated rendezvous. Verification was then conducted systematically because of the method of design and implementation.

The overall design of the system is documented using HIPO charts or structure charts. The structure chart is similar in appearance to an organizational chart, but has been modified to show additional detail. Structure charts can be used to display several types of information, but are used most commonly to diagram either data structures or code structures.

Why we use the HIPO chart:
The HIPO (Hierarchy plus Input-Process-Output) technique is a tool for planning and/or documenting a computer program. A HIPO model consists of a hierarchy chart that graphically represents the program's control structure, and a set of IPO (Input-Process-Output) charts that describe the inputs to, the outputs from, and the functions (or processes) performed by each module on the hierarchy chart.

Advantages of the HIPO Chart:
Using the HIPO technique, designers can evaluate and refine a program's design, and correct flaws prior to implementation. Given the graphic nature of HIPO, users and managers can easily follow a program's structure. The hierarchy chart serves as a useful planning and visualization document for managing the program development process. The IPO charts define for the programmer each module's inputs, outputs, and algorithms.

Limitations of the HIPO Chart:
- HIPO provides valuable long-term documentation. However, the text-plus-flowchart nature of the IPO charts makes them difficult to maintain, so the documentation often does not represent the current state of the program.
- By its very nature, the HIPO technique is best used to plan and/or document a hierarchically structured program.

Example: Set of tasks to be performed by an interactive inventory program:
1.0 Manage inventory
2.0 Update stock
2.1 Process sale
2.2 Process return
2.3 Process shipment
3.0 Generate report
3.1 Respond to query
3.2 Display status report
4.0 Maintain inventory data
4.1 Modify record
4.2 Add record
4.3 Delete record
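The numbered task list in the inventory example is itself a hierarchy chart, and it can be represented and traversed programmatically. This is an illustrative Python sketch, not part of HIPO notation:

```python
# The inventory task hierarchy from the example above, held as nested data.
# The representation (a dict of lists) is purely illustrative.

hierarchy = {
    "1.0 Manage inventory": {
        "2.0 Update stock": ["2.1 Process sale", "2.2 Process return",
                             "2.3 Process shipment"],
        "3.0 Generate report": ["3.1 Respond to query",
                                "3.2 Display status report"],
        "4.0 Maintain inventory data": ["4.1 Modify record", "4.2 Add record",
                                        "4.3 Delete record"],
    },
}

def print_chart(node, indent=0):
    """Walk the hierarchy top-down, printing one line per module."""
    if isinstance(node, dict):
        for name, children in node.items():
            print(" " * indent + name)
            print_chart(children, indent + 2)
    else:
        for name in node:
            print(" " * indent + name)

print_chart(hierarchy)
```

In a full HIPO model, each module name here would additionally have an IPO chart recording its inputs, processing and outputs.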

9.9 WARNIER-ORR DIAGRAM:

A Warnier/Orr diagram (also known as a logical construction of a program/system) is a kind of hierarchical flowchart that allows the description of the organization of data and procedures.

Warnier/Orr diagrams show the processes and the sequences in which they are performed. Each process is defined in a hierarchical manner, i.e. it consists of sets of subprocesses that define it. At each level, the process is shown in a bracket that groups its components.

Since a process can have many different subprocesses, a Warnier/Orr diagram uses a set of brackets to show each level of the system. Critical factors in software definition and development are iteration or repetition and alternation. Warnier/Orr diagrams show these very well.

To develop a Warnier/Orr diagram, the analyst works backwards, starting with the system's output and using output-oriented analysis. On paper, the development moves from right to left. First, the intended output or results of the processing are defined. At the next level, shown by inclusion with a bracket, the steps needed to produce the output are defined. Each step in turn is further defined. Additional brackets group the processes required to produce the result on the next level.

Constructs of the Warnier/Orr diagram:
There are four basic constructs used on Warnier/Orr diagrams: hierarchy, sequence, repetition, and alternation. There are also two slightly more advanced concepts that are occasionally needed: concurrency and recursion.

Hierarchy: Hierarchy is the most fundamental of all of the Warnier/Orr constructs. It is simply a nested group of sets and subsets, shown as a set of nested brackets. Each bracket on the diagram (depending on how you represent it, the character is usually more like a brace "{" than a bracket "[", but we call them "brackets") represents one level of hierarchy. The hierarchy or structure that is represented on the diagram can show the organization of data or processing. However, data and processing are never shown on the same diagram.

Sequence: Sequence is the simplest structure to show on a Warnier/Orr diagram. Within one level of hierarchy, the features listed are shown in the order in which they occur. In other words, on Fig. 6 the step listed first is the first that will be executed (if the diagram reflects a process), while the step listed last is the last that will be executed. Similarly with data: the data field listed first is the first encountered when looking at the data, and the data field listed last is the final one encountered.

Repetition: Repetition is the representation of a classic "loop" in programming terms. It occurs whenever the same set of data occurs over and over again (for a data structure) or whenever the same group of actions is to occur over and over again (for a processing structure).

Alternation: Alternation, or selection, is the traditional "decision" process whereby a determination is made to execute one process or another. It is indicated as a relationship between two subsets.

Concurrency: Concurrency is one of the two advanced constructs used in the methodology. It is used whenever sequence is unimportant.

Recursion: Recursion is the least used of the constructs. It is used to indicate that a set contains an earlier or a less ordered version of itself.

Example: Warnier/Orr diagram for a Data Structure.

Example: Warnier/Orr diagram for the Payroll cycle.

Example:
A Warnier/Orr diagram is a style of diagram which is extremely useful for describing complex processes (e.g. computer programs, business processes, instructions) and objects (e.g. data structures, documents, parts explosions). Warnier/Orr diagrams are elegant, easy to understand and easy to create. When you interpret one of B-liner's diagrams as a Warnier/Orr diagram, you give a simple, yet formal meaning to the elements of the diagram.

The following is a quick description of the main elements of a Warnier/Orr diagram.

Bracket: A bracket encloses a level of decomposition in a diagram. It reveals what something "consists of" at the next level of detail.

Sequence: The sequence of events is defined by the top-to-bottom order in a diagram. That is, an event occurs after everything above it in a diagram, but before anything below it.

OR: You represent choice in a diagram by placing an "OR" operator between the items of a choice. The "OR" operator is drawn in one of two forms (the symbols are not reproduced here).

AND: You represent concurrency in a diagram by placing an "AND" operator between the concurrent actions. The "AND" operator is likewise drawn in one of two forms (the symbols are not reproduced here).

Repetition: To show that an action repeats (loops), you simply put the number of repetitions of the action in parentheses below the action.

The diagram below illustrates the use of these constructs to describe a simple process.

You could read the above diagram like this:

"Welcoming a guest to your home (from 1 to many times) consists of greeting the guest and taking the guest's coat at the same time, then showing the guest in. Greeting a guest consists of saying "Good morning" if it's morning, or saying "Good afternoon" if it's afternoon, or saying "Good evening" if it's evening. Taking the guest's coat consists of helping the guest remove their coat, then hanging the coat up."

As you can see, the diagram is much easier to understand than the description.
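The guest-welcoming diagram maps directly onto ordinary control flow: sequence becomes statement order, alternation an if/elif chain, and repetition a loop. The Python sketch below is illustrative; the function names are invented, and concurrency is only simulated here by calling the two actions one after the other:

```python
# The "welcoming a guest" Warnier/Orr example rendered as code.

def greet(time_of_day):
    # Alternation (OR): exactly one greeting is chosen.
    if time_of_day == "morning":
        return "Good morning"
    elif time_of_day == "afternoon":
        return "Good afternoon"
    else:
        return "Good evening"

def take_coat():
    # Sequence: help with the coat, then hang it up.
    return ["help guest remove coat", "hang coat up"]

def welcome_guests(guests, time_of_day):
    events = []
    for guest in guests:                   # repetition: from 1 to many guests
        events.append(greet(time_of_day))  # greeting and coat-taking are
        events += take_coat()              # concurrent (AND) in the diagram
        events.append("show guest in")     # sequence: this comes last
    return events

print(welcome_guests(["Ann"], "morning"))
```

Reading the function top to bottom reproduces the left-to-right, top-to-bottom reading of the diagram.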

10.1 INTRODUCTION:

Output is what the customer is buying when he or she pays for the development of a project. Inputs, databases, and processes exist to provide output.

A data input specification is a detailed description of the individual fields (data elements) on an input document together with their characteristics. In this chapter we will learn about input design, output design and the user interface.

10.2 OUTPUT DESIGN:

Output is the most important task of any system. These guidelines apply for the most part to both paper and screen outputs. Output design is often discussed before other aspects of design because, from the customer's point of view, the output is the system. Output is what the customer is buying when he or she pays for the development of a project. Inputs, databases, and processes exist to provide output. Problems often associated with business information output are information delay, information (data) overload, paper domination, excessive distribution, and lack of tailoring.

For example:
Mainframe printers: high volume, high speed, located in the data centre. Remote site printers: medium speed, close to the end user.

COM is Computer Output Microfilm. It is more compressed than traditional output and may be produced as fast as non-impact printer output.

Turnaround documents trim down the cost of internal data entry.

Periodic reports have set frequencies such as daily or weekly.

Detail and summary reports differ in that the former support day-to-day operation of the business while the latter include statistics and ratios used by managers to assess the health of operations.

Page breaks and control breaks allow for summary totals on key fields. Report requirements documents include general report information and field specifications; print layout sheets present a picture of what the report will actually look like.

Page decoupling is the separation of pages into cohesive groups.

Two ways to create output for strategic purposes are:
(1) Make it compatible with processes outside the immediate scope of the system.
(2) Turn action documents into turnaround documents.

People often receive reports they do not require because the number of reports received is perceived as a measure of power. Fields on a report should be selected carefully to provide organized reports, facilitate 80-column remote printing, and reduce information (data) overload. The types of fields which should be considered for business output are: key fields for access to information, fields for control breaks, fields that change, and exception fields. Output may be designed to aid future change by stressing formless reports, defining field sizes for future growth, making field constants into variables, and leaving room on review reports for added ratios and statistics.

Output can now be more easily tailored to the needs of individual users because inquiry-based systems allow users themselves to generate ad hoc reports. An output intermediary can restrict access to key information and avoid illegal access. An information clearinghouse (or information centre) is a service centre that provides consultation, assistance, and documentation to encourage end-user development and use of applications. The specifications essential to describe the output of a system are: data flow diagrams, data flow specifications, data structure specifications, and data element specifications.

Output Documents

Printed Reports:
- External Reports: for use or distribution outside the organization; often on pre-printed forms.
- Internal Reports: for use within the organization; not as "pretty"; stock paper, greenbar, etc.
- Periodic Reports: produced with a set frequency (daily, weekly, monthly, every fifth Tuesday, etc.).
- Ad-Hoc (On Demand) Reports: produced upon user demand.
- Detail Reports: one line per transaction.
- Review Reports: an overview.

10.3 INPUT DESIGN

A source document differs from a turnaround document inthat the former holds data that revolutionize the status of a resourcewhile the latter is a machine readable document. Transactionthroughput is the number of error-free transactions entered during aspecified time period. A document should be concise becauselonger documents contain more data and so take longer to enterand have a greater chance of data entry errors.Numeric coding substitutes numbers for character data (e.g.,1=male, 2=female); mnemonic coding represents data in a formthat is easier for the user to understand and remember. (e.g.,M=male, F=female). The more quickly an error is detected, thenearer the error is to the person who generated it and so the erroris more easily corrected. An example of an illogical combination ina payroll system would be an option to eliminate federal taxwithholding.By "multiple levels" of messages, I mean allowing the user toobtain more detailed explanations of an error by using a helpoption, but not forcing a long-lasting message on a user who doesnot want it. An error suspense record would include the following

- Consistency in terminology and wording.
- Place error messages in the same place on the screen.

10.4 USER INTERFACE

i. The primary differences between an interactive and batch environment are:
- interactive processing is done during the organization's prime work hours;
- interactive systems usually have multiple, simultaneous users;
- the experience level of users runs from novice to highly experienced;
- developers must be good communicators because of the need to design systems with error messages, help text, and requests for user responses.

ii. The seven-step path that grades the structure of an interactive system is:
a. Greeting screen (e.g., company logo)

iii. An intermediate menu and a function screen differ in that the former provides choices from a set of related operations while the latter provides the ability to perform tasks such as updates or deletes.

iv. The difference between inquiry and command language dialogue modes is that the former asks the user to provide a response to a simple question (e.g., "Do you really want to delete this file?") while the latter requires that the user know what he or she wants to do next (e.g., the MS-DOS C:> prompt, the VAX/VMS $ prompt, the Unix shell prompt). GUI interfaces (Windows, Macintosh) provide dialog boxes to prompt the user to input required information/parameters.

v. Directions for designing form-filling screens:
a) Fields on the screen should be in the same sequence as on the source document.
b) Use cuing to provide the user with information such as field formats (e.g., dates).
c) Provide default values.
d) Edit all entered fields for transaction errors.
e) Move the cursor automatically to the next entry field.
f) Allow entry to be free-form (e.g., do not make the user enter leading zeroes).

g) Consider having all entries made at the same position on the screen.

vi. A default value is a value automatically supplied by the application when the user leaves a field blank. For example, at SXU the screen on which student names and addresses are entered has a default value of "IL" for State, since the majority of students have addresses in Illinois. At one time "312" was a default value for Area Code, but with the additional Area Codes now in use (312, 773, 708, 630, 847), providing a default value for this field is no longer as useful.
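The default-value behaviour described above (a value supplied automatically when the user leaves a field blank) can be sketched in a few lines; the form fields and the "IL" default mirror the SXU example, while the function names are invented.

```python
# Sketch of supplying default values for fields the user left blank.

DEFAULTS = {"state": "IL"}  # as in the SXU example above

def apply_defaults(form):
    """Fill blank or missing fields with their default values,
    leaving explicitly entered values untouched."""
    filled = dict(form)
    for field, value in DEFAULTS.items():
        if not filled.get(field):      # blank string or missing key
            filled[field] = value
    return filled

print(apply_defaults({"name": "Pat", "state": ""}))   # state becomes "IL"
print(apply_defaults({"name": "Lee", "state": "WI"})) # "WI" is kept
```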

vii. Input verification is asking the user to confirm his or her most recent entry.

Adaptive models are useful because they adjust to the user's experience level as he or she moves from novice to experienced over time.

xv. "Within user" sources of variation include: warm-up, fatigue, boredom, environmental conditions, and extraneous events.

xvi. The elements of the adaptive model are:
- Triggering question to determine user experience level
- Differentiation among user experience levels
- Alternative processing paths based on user level
- Transition of casual user to experienced processing path
- Transition of novice user to experienced processing path
- Allowing the user to move to an easier processing path

xvii. Interactive tasks can be designed for closure by providing the user with feedback indicating that a task has been completed.

xviii. Internal locus of control is making users feel that they are in control of the system, rather than that the system is in control of them.

11.1 INTRODUCTION

A strategy for software testing must accommodate low-level tests that are necessary to verify that a small source code segment has been correctly implemented, as well as high-level tests that validate major system functions against customer requirements. A strategy must provide guidance for the practitioner and a set of milestones for the manager. Because the steps of the test strategy occur at a time when deadline pressure begins to rise, progress must be measurable and problems must surface as early as possible.

11.2 A STRATEGIC APPROACH TO SOFTWARE TESTING

Testing is a set of activities that can be planned in advance and conducted systematically. For this reason a template for software testing -- a set of steps into which we can place specific test case design techniques and testing methods -- should be defined for the software process.

A number of software testing strategies have been proposed in the literature. All provide the software developer with a template for testing, and all have the following generic characteristics:
- Testing begins at the component level and works 'outward' toward the integration of the entire computer-based system.
- Different testing techniques are appropriate at different points in time.
- Testing is conducted by the developer of the software and (for large projects) an independent test group.
- Testing and debugging are different activities, but debugging must be accommodated in any testing strategy.

11.3 ORGANIZING FOR SOFTWARE TESTING

For every software project, there is an inherent conflict of interest that occurs as testing begins. The people who have built the software are now asked to test the software. This seems harmless in itself; after all, who knows the program better than its developers? Unfortunately, these same developers have a vested interest in demonstrating that the program is error free, that it works according to customer requirements, and that it will be completed on schedule and within budget. Each of these interests militates against thorough testing.

From a psychological point of view, software analysis and design (along with coding) are constructive tasks. The software engineer creates a computer program, its documentation, and related data structures. Like any builder, the software engineer is proud of the edifice that has been built and looks askance at anyone who attempts to tear it down. When testing commences, there is a subtle, yet definite, attempt to "break" the thing that the software engineer has built. From the point of view of the builder, testing can be considered to be (psychologically) destructive.

There are a number of misconceptions that can be erroneously inferred from the preceding discussion:
- That the developer of software should do no testing at all.
- That the software should be tossed over the wall to strangers who will test it mercilessly.
- That testers get involved with the project only when the testing steps are about to begin.

Each of these statements is incorrect.

The software developer is always responsible for testing the individual units (components) of the program, ensuring that each performs the function for which it was designed. In many cases, the developer also conducts integration testing -- a testing step that leads to the construction (and test) of the complete program structure. Only after the software architecture is complete does an independent test group become involved.

The role of an independent test group (ITG) is to remove the inherent problems associated with letting the builder test the thing that has been built. Independent testing removes the conflict of interest that may otherwise be present. After all, personnel in the independent test group are paid to find errors.

However, the software engineer doesn't turn the program over to the ITG and walk away. The developer and the ITG work closely throughout a software project to ensure that thorough tests will be conducted. While testing is conducted, the developer must be available to correct errors that are uncovered.

The ITG is part of the software development project team in the sense that it becomes involved during the specification activity and stays involved (planning and specifying test procedures) throughout a large project. However, in many cases the ITG reports to the software quality assurance organization, thereby achieving a degree of independence that might not be possible if it were a part of the software engineering organization.

11.4 A SOFTWARE TESTING STRATEGY

The software engineering process may be viewed as the spiral illustrated in the figure below. Initially, system engineering defines the role of software and leads to software requirements analysis, where the information domain, function, behaviour, performance, constraints, and validation criteria for software are established. Moving inward along the spiral we come to design and finally to coding. To develop computer software, we spiral inward along streamlines that decrease the level of abstraction on each turn.

A strategy for software testing may also be viewed in the context of the spiral. Unit testing begins at the vortex of the spiral and concentrates on each unit (i.e., component) of the software as implemented in source code. Testing progresses by moving outward along the spiral to integration testing, where the focus is on design and the construction of the software architecture. Taking another turn outward on the spiral, we encounter validation testing, where requirements established as part of software requirements analysis are validated against the software that has been constructed. Finally, we arrive at system testing, where the software and other system elements are tested as a whole. To test computer software, we spiral out along streamlines that broaden the scope of testing with each turn.

Initially, tests focus on each component individually, ensuring that it functions properly as a unit. Hence the name unit testing. Unit testing makes heavy use of white-box testing techniques, exercising specific paths in a module's control structure to ensure complete coverage and maximum error detection. Next, components must be assembled or integrated to form the complete software package. Integration testing addresses the issues associated with the dual problems of verification and program construction. Black-box test case design techniques are the most prevalent during integration, although a limited amount of white-box testing may be used to ensure coverage of major control paths. After the software has been integrated (constructed), a set of high-order tests is conducted. Validation criteria (established during requirements analysis) must be tested. Validation testing provides final assurance that software meets all functional, behavioural, and performance requirements.
Black-box testing techniques are used exclusively during validation.

The last high-order testing step falls outside the boundary of software engineering and into the broader context of computer system engineering. Software, once validated, must be combined with other system elements (e.g., hardware, people, and databases). System testing verifies that all elements mesh properly and that overall system function/performance is achieved.

11.5 UNIT TESTING

Unit testing focuses verification effort on the smallest unit of software design -- the software component or module. Using the component-level design description as a guide, important control paths are tested to uncover errors within the boundary of the module. The relative complexity of tests and uncovered errors is limited by the constrained scope established for unit testing. The unit test is white-box oriented, and the step can be conducted in parallel for multiple components.
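A unit test exercises one module in isolation against its specification. The sketch below uses Python's standard unittest framework; the module under test (a hypothetical net_pay function) is invented for illustration and is not from the text.

```python
# Minimal unit-test sketch with Python's unittest: one module
# (net_pay, invented for this example) is tested in isolation,
# covering a typical value, a boundary, and an error path.

import unittest

def net_pay(gross, tax_rate):
    if not 0 <= tax_rate < 1:
        raise ValueError("tax_rate must be in [0, 1)")
    return round(gross * (1 - tax_rate), 2)

class NetPayTest(unittest.TestCase):
    def test_typical_value(self):
        self.assertEqual(net_pay(1000.0, 0.2), 800.0)

    def test_boundary(self):
        self.assertEqual(net_pay(0.0, 0.0), 0.0)

    def test_invalid_rate(self):
        with self.assertRaises(ValueError):
            net_pay(1000.0, 1.5)

if __name__ == "__main__":
    unittest.main(argv=["net_pay_test"], exit=False)
```

Because each unit test depends only on its own module, tests like this can run in parallel across many components, as the text notes.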

11.6 INTEGRATION TESTING

A neophyte in the software world might ask a seemingly legitimate question once all modules have been unit tested: "If they all work individually, why do you doubt that they'll work when we put them together?" The problem, of course, is putting them together -- interfacing. Data can be lost across an interface; one module can have an inadvertent, adverse effect on another; sub-functions, when combined, may not produce the desired major function; individually acceptable imprecision may be magnified to unacceptable levels; global data structures can present problems. Sadly, the list goes on and on.

Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing. The objective is to take unit-tested components and build a program structure that has been dictated by design.

There is often a tendency to attempt non-incremental integration; that is, to construct the program using a big bang approach. All components are combined in advance. The entire program is tested as a whole. And chaos usually results! A set of errors is encountered. Correction is difficult because isolation of causes is complicated by the vast expanse of the entire program. Once these errors are corrected, new ones appear and the process continues in a seemingly endless loop.

Incremental integration is the antithesis of the big bang approach. The program is constructed and tested in small increments, where errors are easier to isolate and correct, interfaces are more likely to be tested completely, and a systematic test approach may be applied. In the sections that follow, a number of different incremental integration strategies are discussed.

11.6.1 Top-down Integration

Top-down integration testing is an incremental approach to construction of the program structure. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module (main program). Modules subordinate (and ultimately subordinate) to the main control module are incorporated into the structure in either a depth-first or breadth-first manner. Referring to the figure below, depth-first integration would integrate all components on a major control path of the structure. Selection of a major path is somewhat arbitrary and depends on application-specific characteristics. For example, selecting the left-hand path, components M1, M2 and M5 would be integrated first. Next, M8 or (if necessary for proper functioning of M2) M6 would be integrated. Then the central and right-hand control paths are built. Breadth-first integration incorporates all components directly subordinate at each level, moving across the structure horizontally.

[Figure: module hierarchy with M1 at the top; M2, M3 and M4 subordinate to M1; M5, M6, M7 and M8 at the lower levels.]

The integration process is performed in a series of five steps:

1. The main control module is used as a test driver, and stubs are substituted for all components directly subordinate to the main control module.
2. Depending on the integration approach selected (i.e., depth or breadth first), subordinate stubs are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real component.
5. Regression testing may be conducted to ensure that new errors have not been introduced.

The process continues from step 2 until the entire program structure is built.

The top-down integration strategy verifies major control or decision points early in the test process. In a well-factored program structure, decision making occurs at upper levels in the hierarchy and is therefore encountered first. If major control problems do exist, early recognition is essential. If depth-first integration is selected, a complete function of the software may be implemented and demonstrated.

The top-down strategy sounds relatively uncomplicated, but in practice, logistical problems can arise. The most common of these problems occurs when processing at low levels in the hierarchy is required to adequately test upper levels. Stubs replace low-level modules at the beginning of top-down testing; therefore, no significant data can flow upward in the program structure. The tester is left with three choices:
- Delay many tests until stubs are replaced with actual modules.
- Develop stubs that perform limited functions that simulate the actual module.
- Integrate the software from the bottom of the hierarchy upward.

The first approach (delay tests until stubs are replaced by actual modules) causes us to lose some control over correspondence between specific tests and incorporation of specific modules. This can lead to difficulty in determining the cause of errors and tends to violate the highly constrained nature of the top-down approach. The second approach is workable but can lead to significant overhead, as stubs become more and more complex. The third approach is called bottom-up testing.

11.6.2 Bottom-up Integration

Bottom-up integration testing, as its name implies, begins construction and testing with atomic modules (i.e., components at the lowest levels in the program structure). Because components are integrated from the bottom up, processing required for components subordinate to a given level is always available and the need for stubs is eliminated.

A bottom-up integration strategy may be implemented with the following steps:
- Low-level components are combined into clusters (sometimes called builds) that perform a specific software sub-function.
- A driver (a control program for testing) is written to coordinate test case input and output.
- The cluster is tested.
- Drivers are removed and clusters are combined moving upward in the program structure.

Integration follows the pattern illustrated in the figure below. Components are combined to form clusters 1, 2, and 3.

[Figure: bottom-up integration; clusters 1, 2 and 3 are each exercised by test drivers D1, D2 and D3 (shown as dashed blocks); clusters 1 and 2 are subordinate to Ma, cluster 3 to Mb; Ma and Mb are subordinate to Mc.]

Each of the clusters is tested using a driver (shown as a dashed block). Components in clusters 1 and 2 are subordinate to Ma. Drivers D1 and D2 are removed and the clusters are interfaced directly to Ma. Similarly, driver D3 for cluster 3 is removed prior to integration with module Mb. Both Ma and Mb will ultimately be integrated with component Mc, and so forth.

As integration moves upward, the need for separate test drivers lessens. In fact, if the top two levels of program structure are integrated top down, the number of drivers can be reduced substantially and integration of clusters is greatly simplified.
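The stubs used in top-down integration and the drivers used in bottom-up integration can both be sketched in a few lines. This is a minimal Python sketch; all module names (main_module, m2, parse_amount, etc.) are invented for illustration.

```python
# --- Top-down: subordinates start as stubs ---

def stub_m2(data):
    return "m2-stub"          # stub: fixed, canned answer

def stub_m3(data):
    return "m3-stub"

def main_module(data, m2=stub_m2, m3=stub_m3):
    # Control logic under test; subordinates are injected so each
    # stub can be replaced, one at a time, by a real component.
    return [m2(data), m3(data)]

assert main_module("x") == ["m2-stub", "m3-stub"]   # step 1: test against stubs

def real_m2(data):            # steps 2-4: swap in an actual component, re-test
    return data.upper()

assert main_module("x", m2=real_m2) == ["X", "m3-stub"]

# --- Bottom-up: a throwaway driver exercises a low-level cluster ---

def parse_amount(text):                   # atomic module
    return float(text.strip())

def cluster_total(raw_values):            # the cluster under test
    return sum(parse_amount(v) for v in raw_values)

def driver():
    # The driver coordinates test-case input and output for the
    # cluster before any upper-level module exists.
    for inputs, expected in [(["10", " 20 "], 30.0), ([], 0.0)]:
        assert cluster_total(inputs) == expected
    return "cluster OK"

print(driver())
```

The driver is discarded once a real upper-level module (the analogue of Ma above) takes over calling the cluster.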

11.7 REGRESSION TESTING

Each time a new module is added as part of integrationtesting, the software changes. New data flow paths are established,new I/O may occur, and new control logic is invoked. Thesechanges may cause problems with functions that previously workedflawlessly. In the context of an integration test strategy, regressiontesting is the re-execution of some subset of tests that have alreadybeen conducted to ensure that changes have not propagatedunintended side effects.

In a broader context, successful tests (of any kind) result in the discovery of errors, and errors must be corrected. Whenever software is corrected, some aspect of the software configuration (the program, its documentation, or the data that support it) is changed. Regression testing is the activity that helps to ensure that changes (due to testing or for other reasons) do not introduce unintended behaviour or additional errors.

For instance, suppose you are going to add new functionality to your software, or you are going to modify a module to improve its response time. The changes, of course, may introduce errors into software that was previously correct. For example, suppose the program fragment

x := c + 1;
proc(z);
c := x + 2;
x := 3;

works properly. Now suppose that in a subsequent redesign it is transformed into

proc(z);
c := c + 3;
x := 3;

in an attempt at program optimization. This may result in an error if procedure proc accesses variable x.

Thus, we need to organize testing also with the purpose of verifying possible regressions of software during its life, i.e., degradations of correctness or other qualities due to later modifications. Properly designing and documenting test cases with the purpose of making tests repeatable, and using test generators, will help regression testing. Conversely, the use of interactive human input reduces repeatability and thus hampers regression testing.

Finally, we must treat test cases in much the same way as software. It is clear that such factors as evolvability, reusability, and verifiability are just as important in test cases as they are in software. We must apply formality and rigor and all of our other principles in the development and management of test cases.
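The Pascal-like fragment above can be recast in Python to show concretely why the "optimization" is a regression when proc reads the shared variable x: the returned value of c is unchanged, but proc now observes a different x.

```python
# Python sketch of the regression in the fragment above: moving the
# call to proc changes what proc observes through the global x, even
# though the computed c is identical.

x = 0
log = []

def proc(z):
    log.append(x)          # proc accesses variable x

def original(z, c_in):
    global x
    x = c_in + 1
    proc(z)
    c_out = x + 2
    x = 3
    return c_out

def optimized(z, c_in):
    global x
    proc(z)                # now runs before x is updated
    c_out = c_in + 3
    x = 3
    return c_out

# Regression test: re-run the same test case against both versions.
x, log[:] = 0, []
r1, seen1 = original(None, 5), log[0]
x, log[:] = 0, []
r2, seen2 = optimized(None, 5), log[0]
print(r1 == r2)      # the returned c is the same in both versions
print(seen1, seen2)  # but proc saw different values of x
```

A repeatable regression suite that also checks proc's observable behaviour (not just the final c) would catch this change.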

11.8 COMMENTS ON INTEGRATION TESTING

There has been much discussion of the relative advantages and disadvantages of top-down versus bottom-up integration testing. In general, the advantages of one strategy tend to result in disadvantages for the other strategy. The major disadvantage of the top-down approach is the need for stubs and the attendant testing difficulties that can be associated with them. Problems associated with stubs may be offset by the advantage of testing major control functions early. The major disadvantage of bottom-up integration is that the program as an entity does not exist until the last module is added. This drawback is tempered by easier test case design and a lack of stubs.

Selection of an integration strategy depends upon software characteristics and, sometimes, project schedule. In general, a combined approach (sometimes called sandwich testing) that uses top-down tests for upper levels of the program structure, coupled with bottom-up tests for subordinate levels, may be the best compromise.

As integration testing is conducted, the tester should identify critical modules. A critical module has one or more of the following characteristics:
- addresses several software requirements,
- has a high level of control (resides relatively high in the program structure),
- is complex or error prone (cyclomatic complexity may be used as an indicator), or
- has definite performance requirements.

Critical modules should be tested as early as is possible. In addition, regression tests should focus on critical module function.

11.9 THE ART OF DEBUGGING

Software testing is a process that can be systematically planned and specified. Test case design can be conducted, a strategy can be defined, and results can be evaluated against prescribed expectations.

Debugging occurs as a consequence of successful testing. That is, when a test case uncovers an error, debugging is the process that results in the removal of the error. Although debugging can and should be an orderly process, it is still very much an art. A software engineer, evaluating the results of a test, is often confronted with a "symptomatic" indication of a software problem. That is, the external manifestation of the error and the internal cause of the error may have no obvious relationship to one another. The poorly understood mental process that connects a symptom to a cause is debugging.

11.9.1 The Debugging Process

Debugging is not testing but always occurs as a consequence of testing. Referring to the figure below, the debugging process begins with the execution of a test case. Results are assessed and a lack of correspondence between expected and actual performance is encountered. In many cases, the non-corresponding data are a symptom of an underlying cause as yet hidden. The debugging process attempts to match symptom with cause, thereby leading to error correction.

The debugging process will always have one of two outcomes:
1. The cause will be found and corrected, or
2. The cause will not be found.

In the latter case, the person performing debugging may suspect a cause, design a test case to help validate that suspicion, and work toward error correction in an iterative fashion.

[Figure: the debugging process -- execution of test cases produces results; a lack of correspondence with expected results suggests suspected causes; debugging identifies the actual causes; corrections are applied, followed by regression tests and additional tests.]

Why is debugging so difficult? In all likelihood, human psychology has more to do with the answer than software technology. However, a few characteristics of bugs provide some clues:
- The symptom and the cause may be geographically remote. That is, the symptom may appear in one part of a program, while the cause may actually be located at a site that is far removed. Highly coupled program structures exacerbate this situation.
- The symptom may disappear (temporarily) when another error is corrected.
- The symptom may actually be caused by non-errors (e.g., round-off inaccuracies).
- The symptom may be caused by human error that is not easily traced.
- The symptom may be a result of timing problems, rather than processing problems.

- It may be difficult to accurately reproduce input conditions (e.g., a real-time application in which input ordering is indeterminate).
- The symptom may be intermittent. This is particularly common in embedded systems that couple hardware and software inextricably.
- The symptom may be due to causes that are distributed across a number of tasks running on different processors.

During debugging, we encounter errors that range from mildly annoying (e.g., an incorrect output format) to catastrophic (e.g., the system fails, causing serious economic or physical damage). As the consequences of an error increase, the amount of pressure to find the cause also increases. Often, that pressure forces a software developer to fix one error and at the same time introduce two more.

12.1 INTRODUCTION

In this chapter, we will learn the testing life cycle of software and also testing methods like white-box testing and black-box testing. We also try to cover the sub-processes of the white-box and black-box testing methods, such as integration testing, unit testing, regression testing, system testing and much more.

12.2 THE TESTING PROCESS AND THE SOFTWARE TESTING LIFE CYCLE

Every testing project has to follow the waterfall model of the testing process. The waterfall model is as given below:
1. Test Strategy & Planning
2. Test Design
3. Test Environment Setup
4. Test Execution
5. Defect Analysis & Tracking
6. Final Reporting

According to the respective projects, the scope of testing can be tailored, but the process mentioned above is common to any testing activity.

Software testing has been accepted as a separate discipline to the extent that there is a separate life cycle for the testing activity. Involving software testing in all phases of the software development life cycle has become a necessity as part of the software quality assurance process. Right from the requirements study till the implementation, there needs to be testing done in every phase. The V-Model of the Software Testing Life Cycle along with the Software Development Life Cycle given below indicates the various phases or levels of testing.

[Figure: the V-Model (SDLC - STLC) -- development phases (requirement study, high-level design, low-level design) on one arm, paired with the corresponding levels of testing (production verification testing, user acceptance testing, system testing, integration testing, unit testing) on the other.]

There are two categories of testing activities that can be done on software, namely:
- Static Testing
- Dynamic Testing

The kind of verification we do on the software work products before the process of compilation and creation of an executable -- requirement review, design review, code review, walkthroughs and audits -- is called Static Testing. When we test the software by executing it and comparing the actual and expected results, it is called Dynamic Testing.

12.3 TYPES OF TESTING

From the V-model, we see that there are various levels or phases of testing, namely, unit testing, integration testing, system testing, user acceptance testing, etc. Let us see a brief definition of the widely employed types of testing.

Unit Testing: The testing done to a unit or to the smallest piece of software; done to verify whether it satisfies its functional specification or its intended design structure.

Integration Testing: Testing which takes place as sub-elements are combined (i.e., integrated) to form higher-level elements.

Regression Testing: Selective re-testing of a system to verify that modifications (bug fixes) have not caused unintended effects and that the system still complies with its specified requirements.

System Testing: Testing the software for the required specifications on the intended hardware.

Acceptance Testing: Formal testing conducted to determine whether or not a system satisfies its acceptance criteria, which enables a customer to determine whether to accept the system or not.

Performance Testing: To evaluate the time taken or response time of the system to perform its required functions, in comparison with the specified requirements.

Stress Testing: To evaluate a system beyond the limits of the specified requirements or system resources (such as disk space, memory, processor utilization) to ensure the system does not break unexpectedly.

Load Testing: Load testing, a subset of stress testing, verifies that a web site can handle a particular number of concurrent users while maintaining acceptable response times.

Alpha Testing: Testing of a software product or system conducted at the developer's site by the customer.

Beta Testing: Testing conducted at one or more customer sites by the end user of a delivered software product or system.
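The performance-testing idea above (measure response time, compare against a budget) can be sketched in a few lines. The function under test and the two-second budget are invented for the example.

```python
# Minimal performance-test sketch: time a function and compare the
# elapsed time against a response-time budget.

import time

def function_under_test(n):
    return sum(i * i for i in range(n))

def response_time(fn, *args):
    """Return the elapsed wall-clock time of one call to fn."""
    start = time.perf_counter()
    fn(*args)
    return time.perf_counter() - start

BUDGET_SECONDS = 2.0   # illustrative requirement
elapsed = response_time(function_under_test, 100_000)
print("within budget:", elapsed < BUDGET_SECONDS)
```

Real performance tests would repeat the measurement many times and report percentiles rather than a single run, since individual timings vary with system load.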

12.4 THE TESTING TECHNIQUES

To perform these types of testing, there are two widely used testing techniques. The above-said testing types are performed based on the following testing techniques.

Black-box testing technique: This technique is used for testing based solely on analysis of requirements (specification, user documentation, etc.). Also known as functional testing.

White-box testing technique: This technique is used for testing based on analysis of internal logic (design, code, etc.), but expected results still come from the requirements. Also known as structural testing.

12.5 BLACK BOX AND WHITE BOX TESTING:

Test design refers to understanding the sources of test cases, test coverage, how to develop and document test cases, and how to build and maintain test data. There are two primary methods by which tests can be designed:
- BLACK BOX
- WHITE BOX

Black-box test design treats the system as a literal "black box", so it doesn't explicitly use knowledge of the internal structure. It is usually described as focusing on testing functional requirements. Synonyms for black-box include: behavioural, functional, opaque-box, and closed-box.

White-box test design allows one to peek inside the "box", and it focuses specifically on using internal knowledge of the software to guide the selection of test data. It is used to detect errors by means of execution-oriented test cases. Synonyms for white-box include: structural, glass-box and clear-box.

While black-box and white-box are terms that are still in popular use, many people prefer the terms "behavioural" and "structural". Behavioural test design is slightly different from black-box test design because the use of internal knowledge isn't strictly forbidden, but it's still discouraged. In practice, it hasn't proven useful to use a single test design method. One has to use a mixture of different methods so that they aren't hindered by the limitations of a particular one. Some call this "gray-box" or "translucent-box" test design, but others wish we'd stop talking about boxes altogether.

12.6 BLACK BOX TESTING

Black-box testing is testing without knowledge of the internal workings of the item being tested. For example, when black-box testing is applied to software engineering, the tester would only know the "legal" inputs and what the expected outputs should be, but not how the program actually arrives at those outputs. It is because of this that black-box testing can be considered testing with respect to the specifications; no other knowledge of the program is necessary. For this reason, the tester and the programmer can be independent of one another, avoiding programmer bias toward his own work. For this kind of testing, test groups are often used. Though centered around the knowledge of user requirements, black-box tests do not necessarily involve the participation of users. Among the most important black-box tests that do not involve users are functionality testing, volume tests, stress tests, recovery testing, and benchmarks. Additionally, there are two types of black-box test that involve users, i.e. field and laboratory tests. In the following, the most important aspects of these black-box tests will be described briefly.
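The point that the tester knows only legal inputs and expected outputs can be sketched with a small example: the test cases below are derived purely from the specification of leap years, without inspecting the implementation. The function itself is invented for illustration.

```python
# Black-box sketch: leap_year is treated as an opaque unit; the test
# cases come from the specification (divisibility rules for leap
# years), including boundary cases such as century years.

def leap_year(y):
    # Implementation is hidden from the tester's point of view.
    return y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)

# Specification-derived cases: typical values plus boundaries,
# chosen without looking at the code.
spec_cases = {1996: True, 1900: False, 2000: True, 2023: False}

for year, expected in spec_cases.items():
    assert leap_year(year) == expected, year
print("all black-box cases pass")
```

A white-box tester would instead read the boolean expression and pick inputs to exercise each branch; here the same boundary values happen to be chosen from the specification alone.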

Black box testing - without user involvement

The so-called "functionality testing" is central to most testing exercises. Its primary objective is to assess whether the program does what it is supposed to do, i.e. what is specified in the requirements. There are different approaches to functionality testing. One is the testing of each program feature or function in sequence. The other is to test module by module, i.e. each function where it is called first.

The objective of volume tests is to find the limitations of the software by processing a huge amount of data. A volume test can uncover problems that are related to the efficiency of a system, e.g. incorrect buffer sizes or a consumption of too much memory space, or may simply show that an error message is needed telling the user that the system cannot process the given amount of data.

During a stress test, the system has to process a huge amount of data or perform many function calls within a short period of time. A typical example could be to perform the same function from all workstations connected in a LAN within a short period of time (e.g. sending e-mails, or, in the NLP area, modifying a term bank via different terminals simultaneously).

The aim of recovery testing is to establish to what extent data can be recovered after a system breakdown. Does the system provide the possibility to recover all of the data, or only part of it? How much can be recovered, and how? Is the recovered data still correct and consistent? Recovery testing is particularly important for software that has to meet high reliability standards.

The notion of benchmark tests involves the testing of program efficiency. The efficiency of a piece of software strongly depends on the hardware environment, and therefore benchmark tests always consider the software/hardware combination. Whereas for most software engineers benchmark tests are concerned with the quantitative measurement of specific operations, some also consider user tests that compare the efficiency of different software systems to be benchmark tests. In the context of this document, however, benchmark tests only denote operations that are independent of personal variables.

Black box testing - with user involvement

For tests involving users, methodological considerations are rare in SE literature. Rather, one may find practical test reports that distinguish roughly between field and laboratory tests. In the following, only a rough description of field and laboratory tests will be given.

Scenario tests. The term ``scenario'' entered software evaluation in the early 1990s. A scenario test is a test case which aims at a realistic user background for the evaluation of software as it was defined and performed. It is an instance of black box testing whose major objective is to assess the suitability of a software product for every-day routines. In short, it involves putting the system to its intended use by its envisaged type of user, performing a standardised task.

In field tests users are observed while using the software system at their normal working place. Apart from general usability-related aspects, field tests are particularly useful for assessing the interoperability of the software system, i.e. how the technical integration of the system works. Moreover, field tests are the only real means to elucidate problems of the organisational integration of the software system into existing procedures. Particularly in the NLP environment this problem has frequently been underestimated. A typical example of the organisational problem of implementing a translation memory is the language service of a big automobile manufacturer, where the major implementation problem is not the technical environment, but the fact that many clients still submit their orders as print-outs, that neither source texts nor target texts

are properly organised and stored and, last but not least, that individual translators are not too motivated to change their working habits.

Laboratory tests are mostly performed to assess the general usability of the system. Due to the high cost of laboratory equipment, laboratory tests are mostly performed only at big software houses such as IBM or Microsoft. Since laboratory tests provide testers with many technical possibilities, data collection and analysis are easier than for field tests.

12.6.1 Black Box Testing Methods

Equivalence Partitioning

A black-box technique that divides the input domain into classes of data from which test cases can be derived. An ideal test case uncovers a class of errors that might otherwise require many arbitrary test cases to be executed before the general error is observed.

Equivalence class guidelines:
1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
4. If an input condition is Boolean, one valid and one invalid equivalence class are defined.
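Guideline 1 can be sketched as follows. The age field and its 18..65 range are assumed for illustration (not from the text): a range yields one valid class and two invalid classes, and one representative value is chosen from each.

```python
def accepts_age(age: int) -> bool:
    """Hypothetical validator: accept integer ages in 18..65."""
    return 18 <= age <= 65

# One representative test value per equivalence class (guideline 1:
# a range gives one valid and two invalid classes).
valid_in_range = 40    # inside 18..65
invalid_below = 10     # below the range
invalid_above = 80     # above the range

assert accepts_age(valid_in_range) is True
assert accepts_age(invalid_below) is False
assert accepts_age(invalid_above) is False
print("equivalence-class representatives behave as specified")
```

Three test cases stand in for the whole input domain: any other value in a class is expected to behave like its representative.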


Comparison Testing

A black-box testing approach for safety-critical systems in which independently developed implementations of redundant systems are tested for conformance to specifications. Often equivalence class partitioning is used to develop a common set of test cases for each implementation.

Orthogonal Array Testing

A black-box technique that enables the design of a reasonably small set of test cases that provide maximum test coverage. The focus is on categories of faulty logic likely to be present in the software component (without examining the code).

Priorities for assessing tests using an orthogonal array:
1. Detect and isolate all single-mode faults.
2. Detect all double-mode faults.
3. Detect multimode faults.

12.6.2 Advantages of Black Box Testing
- More effective on larger units of code than glass box testing.
- The tester needs no knowledge of the implementation, including specific programming languages.
- Tester and programmer are independent of each other.
- Tests are done from a user's point of view.
- Helps to expose any ambiguities or inconsistencies in the specifications.
- Test cases can be designed as soon as the specifications are complete.

12.6.3 Disadvantages of Black Box Testing
- Only a small number of possible inputs can actually be tested; to test every possible input stream would take nearly forever.
- Without clear and concise specifications, test cases are hard to design.
- There may be unnecessary repetition of test inputs if the tester is not informed of test cases the programmer has already tried.
- May leave many program paths untested.
- Cannot be directed toward specific segments of code which may be very complex (and therefore more error-prone).
- Most testing-related research has been directed toward glass box testing.


12.7 WHITE BOX TESTING

White box testing comprises software testing approaches that examine the program structure and derive test data from the program logic. Structural testing is sometimes referred to as clear box testing, since a white box is, strictly speaking, still opaque and the name does not really convey the idea of visibility into the code.

Synonyms for white box testing:
- Glass box testing
- Structural testing
- Clear box testing
- Open box testing

The purpose of white box testing

- Initiate a strategic initiative to build quality throughout the life cycle of a software product or service.
- Provide a complementary function to black box testing.
- Perform complete coverage at the component level.
- Improve quality by optimizing performance.

12.7.1 Code Coverage Analysis

Basis Path Testing

A testing mechanism proposed by McCabe whose aim is to derive a logical complexity measure of a procedural design and to use this as a guide for defining a basis set of execution paths. Test cases that exercise the basis set will execute every statement at least once.

Flow Graph Notation

A notation for representing control flow, similar to flow charts and UML activity diagrams.

Cyclomatic Complexity

Cyclomatic complexity gives a quantitative measure of the logical complexity of a program. This value gives the number of independent paths in the basis set, and an upper bound for the number of tests required to ensure that each statement is executed at least once. An independent path is any path through the program that introduces at least one new set of processing statements or a new condition (i.e., a new edge). Cyclomatic complexity thus provides an upper bound for the number of tests required to guarantee coverage of all program statements.
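For a single connected flow graph, cyclomatic complexity can be computed as V(G) = E - N + 2, where E is the number of edges and N the number of nodes. The sketch below uses a small hypothetical flow graph (an if/else inside a loop), not one from the text.

```python
def cyclomatic_complexity(graph: dict) -> int:
    """V(G) = E - N + 2 for a single connected flow graph given as an
    adjacency list mapping node -> list of successor nodes."""
    nodes = set(graph) | {m for targets in graph.values() for m in targets}
    edges = sum(len(targets) for targets in graph.values())
    return edges - len(nodes) + 2

# Hypothetical flow graph: entry 1, loop test 2, if/else at 3
# branching to 4 or 5, both returning to 2, exit at 6.
flow_graph = {
    1: [2],
    2: [3, 6],   # loop condition: enter body or exit
    3: [4, 5],   # if/else
    4: [2],
    5: [2],
}

v = cyclomatic_complexity(flow_graph)
print("V(G) =", v)  # E=7, N=6, so V(G) = 7 - 6 + 2 = 3
```

V(G) = 3 matches the three independent paths through this graph (exit immediately; one pass through the if-branch; one pass through the else-branch), so at most three tests guarantee statement coverage here.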

12.7.2 Control Structure Testing

Conditions Testing

Condition testing aims to exercise all logical conditions in a

program module. Conditions may take the following forms:
- Relational expression: (E1 op E2), where E1 and E2 are arithmetic expressions.
- Simple condition: a Boolean variable or relational expression, possibly preceded by a NOT operator.
- Compound condition: composed of two or more simple conditions, Boolean operators and parentheses.
- Boolean expression: a condition without relational expressions.

Data Flow Testing

Selects test paths according to the locations of the definitions and uses of variables.

Loop Testing

Loops are fundamental to many algorithms. Loops can be classified as simple, concatenated, nested, or unstructured. Note that unstructured loops are not to be tested; rather, they are redesigned.
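A common heuristic for simple loops (assumed here, not stated in the text) is to exercise 0, 1, 2, n-1, n and, where possible, n+1 passes through the loop for a loop that can run up to n times. The loop under test below is hypothetical.

```python
def sum_first(values, limit):
    """Hypothetical loop under test: sum at most `limit` leading items."""
    total = 0
    for i, v in enumerate(values):
        if i >= limit:
            break
        total += v
    return total

data = [1, 2, 3, 4, 5]
n = len(data)
# Boundary iteration counts for a simple loop: skip it entirely,
# one pass, two passes, and around the maximum.
for passes in (0, 1, 2, n - 1, n, n + 1):
    assert sum_first(data, passes) == sum(data[:passes])
print("loop boundary cases pass")
```

Off-by-one errors in loop bounds are exactly the faults these boundary counts are chosen to expose.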

Design by Contract (D b C)

DbC is a formal way of using comments to incorporate specification information into the code itself. Basically, the code's specification is expressed unambiguously using a formal language that describes the code's implicit contracts. These contracts specify requirements such as:
- Conditions that the client must meet before a method is invoked.
- Conditions that a method must meet after it executes.
- Assertions that a method must satisfy at specific points of its execution.
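The three kinds of contract above can be sketched in plain Python with assertions. The `Account`/`withdraw` example is hypothetical; a dedicated DbC language or library would express the same contracts more declaratively.

```python
class Account:
    def __init__(self, balance: int):
        self.balance = balance

    def withdraw(self, amount: int) -> int:
        # Precondition: what the client must guarantee before the call.
        assert amount > 0, "precondition: amount must be positive"
        assert amount <= self.balance, "precondition: insufficient funds"

        old_balance = self.balance
        self.balance -= amount

        # Assertion at a specific point of execution (class invariant).
        assert self.balance >= 0, "invariant: balance never negative"
        # Postcondition: what the method guarantees after it executes.
        assert self.balance == old_balance - amount, "postcondition"
        return self.balance

acct = Account(100)
print(acct.withdraw(30))  # 70
```

If a client violates the precondition (e.g. withdrawing more than the balance), the failure is raised at the call boundary, which makes the responsible party immediately clear.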

Profiling

Profiling provides a framework for analyzing Java code performance for speed and heap memory use. It identifies the routines that are consuming the majority of the CPU time, so that problems may be tracked down and performance improved.
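The passage refers to Java; as an analogous, runnable sketch, Python's standard-library `cProfile` and `pstats` modules identify which routines consume the most CPU time. The workload functions are invented for illustration.

```python
import cProfile
import pstats
import io

def slow_sum(n):
    """Deliberately CPU-heavy routine (hypothetical hot spot)."""
    return sum(i * i for i in range(n))

def fast_path():
    """Cheap routine, for contrast."""
    return 42

profiler = cProfile.Profile()
profiler.enable()
slow_sum(200_000)
fast_path()
profiler.disable()

# Report routines sorted by cumulative time; slow_sum dominates.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats()
report = out.getvalue()
print("slow_sum appears in profile:", "slow_sum" in report)
```

In practice one profiles a realistic workload, reads the report top-down, and optimizes only the routines that actually dominate the time.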

Transactions

Systems that employ transactions, local or distributed, may be validated to ensure that the ACID properties (Atomicity, Consistency, Isolation, Durability) hold. Each of the individual properties is tested individually against a reference data set. Transactions are checked thoroughly for partial/complete commits and rollbacks, encompassing databases and other XA-compliant transaction processors.

12.7.3 Advantages of White Box Testing
- Forces the test developer to reason carefully about the implementation.
- Approximates the partitioning done by execution equivalence.
- Reveals errors in "hidden" code.
- Has beneficent side-effects.

12.7.4 Disadvantages of White Box Testing
- Expensive.
- Cases omitted from the code can be missed.

12.8 DIFFERENCE BETWEEN BLACK BOX TESTING AND WHITE BOX TESTING

An easy way to start up a debate in a software testing forum is to ask the difference between black box and white box testing. These terms are commonly used, yet everyone seems to have a different idea of what they mean.

Black box testing begins with a metaphor. Imagine you're testing an electronics system. It's housed in a black box with lights, switches, and dials on the outside. You must test it without opening it up, and you can't see beyond its surface. You have to see if it works just by flipping switches (inputs) and seeing what happens to the lights and dials (outputs). This is black box testing. Black box software testing is doing the same thing, but with software. The actual meaning of the metaphor, however, depends on how you define the boundary of the box and what kind of access the "blackness" is blocking.

An opposite test approach would be to open up the electronics system, see how the circuits are wired, apply probes internally and maybe even disassemble parts of it. By analogy, this is called white box testing.

To help understand the different ways that software testing can be divided between black box and white box techniques, consider the Five-Fold Testing System. It lays out five dimensions that can be used for examining testing:
1. People (who does the testing)
2. Coverage (what gets tested)
3. Risks (why you are testing)
4. Activities (how you are testing)
5. Evaluation (how you know you've found a bug)

Let's use this system to understand and clarify the characteristics of black box and white box testing.

People: Who does the testing?

Some people know how software works (developers) and others just use it (users). Accordingly, any testing by users or other non-developers is sometimes called black box testing, while developer testing is called white box testing. The distinction here is based on what the person knows or can understand.

Coverage: What is tested?

If we draw the box around the system as a whole, black box testing becomes another name for system testing, and testing the units inside the box becomes white box testing. This is one way to think about coverage. Another is to contrast testing that aims to cover all the requirements with testing that aims to cover all the code. These are the two most commonly used coverage criteria. Both are supported by extensive literature and commercial tools. Requirements-based testing could be called black box because it makes sure that all the customer requirements have been verified. Code-based testing is often called white box because it makes sure that all the code (the statements, paths, or decisions) is exercised.

Risks: Why are you testing?

Sometimes testing is targeted at particular risks. Boundary testing and other attack-based techniques are targeted at common coding errors.
Effective security testing also requires a detailed understanding of the code and the system architecture. Thus, these techniques might be classified as white box. Another set of risks concerns whether the software will actually provide value to users. Usability testing focuses on this risk, and could be termed black box.

Activities: How do you test?

A common distinction is made between behavioural test design, which defines tests based on functional requirements, and structural test design, which defines tests based on the code itself. These are two design approaches. Since behavioural testing is based on the external functional definition, it is often called black box, while structural testing (based on the code internals) is called white box. Indeed, this is probably the most commonly cited definition for black box and white box testing. Another activity-based distinction contrasts dynamic test execution with formal code inspection. In this case, the metaphor maps test execution (dynamic testing) to black box testing, and code inspection (static testing) to white box testing. We could also focus on the tools used. Some tool vendors refer to code-coverage tools as white box tools, and to tools that facilitate applying and capturing inputs (most notably GUI capture/replay tools) as black box tools. Testing is then categorized based on the types of tools used.

Evaluation: How do you know if you've found a bug?

There are certain kinds of software faults that don't always lead to obvious failures. They may be masked by fault tolerance or simply by luck. Memory leaks and wild pointers are examples. Certain test techniques seek to make these kinds of problems more visible. Related techniques capture code history and stack information when faults occur, helping with diagnosis. Assertions are another technique for helping to make problems more visible. All of these techniques could be considered white box test techniques, since they use code instrumentation to make the internal workings of the software more visible. They contrast with black box techniques that simply look at the official outputs of a program.

White box testing is concerned only with testing the software product; it cannot guarantee that the complete specification has been implemented.
Black box testing is concerned only with testing the specification; it cannot guarantee that all parts of the implementation have been tested. Thus black box testing is testing against the specification, and will discover faults of omission, indicating that part of the specification has not been fulfilled. White box testing is testing against the implementation, and will discover faults of commission, indicating that part of the implementation is faulty. In order to fully test a software product, both black box and white box testing are required.

White box testing is much more expensive than black box testing. It requires the source code to be produced before the tests

can be planned, and it is much more laborious in the determination of suitable input data and in determining whether the software is or is not correct. The advice given is to start test planning with a black box test approach as soon as the specification is available. White box planning should commence as soon as all black box tests have been successfully passed, with the production of flow graphs and determination of paths. The paths should then be checked against the black box test plan and any additional required test runs determined and applied.

The consequences of test failure at this stage may be very expensive. A failure of a white box test may result in a change which requires all black box testing to be repeated and the re-determination of the white box paths.

12.9 SUMMARY

To conclude, apart from the above-described analytical methods of both white and black box testing, there are further constructive means to guarantee high-quality software end products. Among the most important constructive means are the use of object-oriented programming tools, the integration of CASE tools, rapid prototyping, and, last but not least, the involvement of users in both software development and testing procedures.

Questions:
1. Explain the Software Testing Life Cycle in detail.
Ans: Refer 12.2
2. Explain the types of testing in detail.
Ans: Refer 12.3
3. Explain black box and white box testing.
Ans: Refer 12.5

13.1 INTRODUCTION

Software testing is an art. Most of the testing methods and practices are not very different from those of 20 years ago. The field is nowhere near maturity, although there are many tools and techniques available to use. Good testing also requires a tester's creativity, experience and intuition, together with proper techniques.

Before moving further towards an introduction to software testing, we need to know a few concepts that will simplify the definition of software testing.

Error: An error or mistake is a human action that produces a wrong or incorrect result.

Defect (Bug, Fault): A flaw in the system or product that can cause a component to fail.

Failure: The variance between the actual and the expected result.

Risk: A factor that could result in negativity, or a chance of loss or damage.

Thus software testing is the process of finding defects/bugs in the system, which occur due to errors in the application, and which could lead to failure of the resultant product and an increase in the probability of high risk. In short, software testing has different goals and objectives, which often include:
1. finding defects;
2. gaining confidence in and providing information about the level of quality;
3. preventing defects.

13.2 SCOPE OF SOFTWARE TESTING

The primary function of software testing is to detect bugs so that they can be uncovered and corrected. The scope of software testing includes execution of the code in various environments, and also examination of aspects of the code: does the software do what it is supposed to do and function according to the specifications? As we move further we come across questions such as "When to start testing?" and "When to stop testing?" It is recommended to start testing from the initial stages of software development. This not only helps in rectifying numerous errors before the last stage, but also reduces the rework of repeatedly finding bugs in the initial stages, and reduces the cost of defects, since a defect is cheaper to fix the earlier it is found. Software testing is an ongoing process, which is potentially endless but has to be stopped somewhere, due to lack of time and budget. The aim is to achieve maximum profit with a good quality product, within the limitations of time and money. The tester has to follow some procedural way through which he can judge whether he has covered all the points required for testing or missed any.

13.3 SOFTWARE TESTING KEY CONCEPTS

Defects and Failures: As we discussed earlier, defects are caused not only by coding errors, but most commonly by requirement gaps in the non-functional requirements, such as usability, testability, scalability, maintainability, performance and security. A failure is caused by the deviation between an actual and an expected result, but not all defects result in failures. A defect can turn into a failure due to a change in the environment or a change in the configuration of the system requirements.

Input Combinations and Preconditions: Testing all combinations of inputs and initial states (preconditions) is not feasible. This means that finding the large number of infrequent defects is difficult.

Verification and Validation: Software testing is done considering these two factors.
1. Verification: verifies whether the product is built according to the specification.
2. Validation: checks whether the product meets the customer requirements.

Software Quality Assurance: Software testing is an important part of software quality assurance. Quality assurance is an activity which proves the suitability of the product by taking care of the quality of the product and ensuring that the customer requirements are met.

13.4 SOFTWARE TESTING TYPES

A software test type is a group of test activities aimed at testing a component or system with a focus on a specific test objective, such as a non-functional requirement like usability, testability or reliability. Various types of software testing are used with the common objective of finding defects in the particular component. Software testing is classified into two basic types: manual scripted testing and automated testing.

Manual Scripted Testing:

For further explanation of these concepts, read more on types of software testing.

Automated Testing: Manual testing is a time-consuming process. Automation testing involves automating a manual process. Test automation is the process of writing a computer program, in the form of scripts, to do testing that would otherwise need to be done manually. Some of the popular automation tools are WinRunner, Quick Test Professional (QTP), LoadRunner, SilkTest, Rational Robot, etc. The automation tools category also includes maintenance tools such as Test Director and many others.

13.5 SOFTWARE TESTING METHODOLOGIES

The software testing methodologies or processes include various models that build the process of working for a particular product. These models are as follows:
- Waterfall Model
- V Model
- Spiral Model
- Rational Unified Process (RUP)
- Agile Model
- Rapid Application Development (RAD)
These models are elaborated briefly in software testing methodologies.

13.6 SOFTWARE TESTING ARTIFACTS

Software testing process can produce various artifacts such as:

Test Plan: A test specification is called a test plan. A test plan is documented so that it can be used to verify and ensure that a product or system meets its design specification.

Traceability Matrix: A table that correlates requirements or design documents to test documents. It is used to verify that the test results are correct, and also to change tests when the source documents are changed.

Test Case: Test cases and software testing strategies are used to check the functionality of the individual components that are integrated to give the resultant product. These test cases are developed with the objective of judging the application for its capabilities or features.

Test Data: When multiple sets of values or data are used to test the same functionality of a particular feature in a test case, the test values and changeable environmental components are collected in separate files and stored as test data.

13.7 AVAILABLE TOOLS AND TECHNIQUES

Mothora [DeMillo91]: An automated mutation testing tool-set. Using it, the tester can determine input-output correctness, locate and remove faults or bugs, and control and document the test.

NuMega's BoundsChecker [NuMega99] and Rational's Purify [Rational99]: Run-time checking and debugging aids. They can both check for and protect against memory leaks and pointer problems.

Ballista COTS Software Robustness Testing Harness [Ballista99]: The Ballista testing harness is a full-scale automated robustness testing tool. The first version supports testing up to 233 POSIX function calls in UNIX operating systems. The second version also supports testing of user functions, provided that the data types are recognized by the testing server. The Ballista testing harness gives quantitative measures of robustness comparisons across operating systems. The goal is to automatically test and harden Commercial Off-The-Shelf (COTS) software against robustness failures.

13.8 SUMMARY

Software testing is an art. Most of the testing methods and practices are not very different from those of 20 years ago. The field is nowhere near maturity, although there are many tools and techniques available to use. Good testing also requires a tester's creativity, experience and intuition, together with proper techniques.

Questions:
1. Explain software testing key concepts.
Ans: Refer 13.3
2. Explain software testing methodologies.
Ans: Refer 13.5
3. Explain the available tools and techniques in detail.
Ans: Refer 13.7

14.1 INTRODUCTION

The quality of data input determines the quality of information output. Systems analysts can support accurate data entry through the achievement of three broad objectives: effective coding, effective and efficient data capture and entry, and assuring quality through validation. In this chapter, we will learn about data entry and data formats.

14.2 DATA ENTRY AND DATA STORAGE

The quality of data input determines the quality of information output. Systems analysts can support accurate data entry through the achievement of three broad objectives: effective coding, effective and efficient data capture and entry, and assuring quality through validation. Coding aids in reaching the objective of efficiency, since data that are coded require less time to enter and reduce the number of items entered. Coding can also help in appropriate sorting of data during the data transformation process. Additionally, coded data can save valuable memory and storage space.

In establishing a coding system, systems analysts should follow these guidelines:
- Keep codes concise.
- Keep codes stable.
- Make codes unique.
- Allow codes to be sorted.
- Avoid confusing codes.
- Keep codes uniform.
- Allow for modification of codes.
- Make codes meaningful.

The simple sequence code is a number that is assigned to something that needs to be numbered; it therefore has no relation to the data itself. Classification codes are used to distinguish one group of data, with special characteristics, from another. Classification codes can consist of either a single letter or a number. The block sequence code is an extension of the sequence code. The advantage of the block sequence code is that the data are grouped according to common characteristics, while still taking advantage of the simplicity of assigning the next available number within the block to the next item needing identification.

A mnemonic is a memory aid. Any code that helps the data entry person remember how to enter the data, or the end user remember how to use the information, can be considered a mnemonic. Mnemonic coding can be less arbitrary, and therefore easier to remember, than numeric coding schemes. Compare, for example, a gender coding system that uses "F" for Female and "M" for Male with an arbitrary numeric coding of gender where perhaps "1" means Female and "2" means Male. Or perhaps it should be "1" for Male and "2" for Female? Or why not "7" for Male and "4" for Female? The arbitrary nature of numeric coding makes it more difficult for the user.
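Two of these coding schemes can be sketched briefly. The code values and block ranges below are invented for illustration: a mnemonic table next to an arbitrary numeric one, and a block sequence allocator that hands out the next available number within a category's block.

```python
# Mnemonic codes are self-explanatory; arbitrary numeric codes are not.
MNEMONIC = {"F": "Female", "M": "Male"}
ARBITRARY = {"7": "Male", "4": "Female"}   # hard to remember

# Block sequence code: each category owns a reserved range of numbers
# (hypothetical blocks), and new items take the next free number.
blocks = {
    "hardware": iter(range(100, 200)),
    "software": iter(range(200, 300)),
}

def next_code(category: str) -> int:
    """Assign the next available number within the category's block."""
    return next(blocks[category])

print(next_code("hardware"))  # 100
print(next_code("hardware"))  # 101
print(next_code("software"))  # 200
```

The block scheme keeps assignment as simple as a plain sequence code while still letting the code value reveal the item's category.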

14.3 DATE FORMATS

An effective format for the storage of date values is the eight-digit YYYYMMDD format, as it allows easy sorting by date. Note the importance of using four digits for the year. This eliminates any ambiguity about whether a value such as 01 means the year 1901 or the year 2001. Using four digits also ensures that the correct sort sequence will be maintained in a group of records that includes year values both before and after the turn of the century (e.g., 1999, 2000, 2001).

Remember, however, that the date format you use for storage of a date value need not be the same date format that you present to the user via the user interface, or require of the user for data entry. While YYYYMMDD may be useful for the storage of date values, it is not how human beings commonly write or read dates. A person is more likely to be familiar with dates in MMDDYY format; that is, a person is much more likely to be comfortable writing the date December 25, 2001 as "12/25/01" than as "20011225."

Fortunately, it is a simple matter to code a routine that can be inserted between the user interface or data entry routines and the data storage routines that read from or write to magnetic disk. Thus, date values can be saved on disk in whatever format is deemed convenient for storage and sorting, while at the same time being presented in the user interface, data entry routines, and printed reports in whatever format is deemed convenient and familiar for human users.
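Such a conversion routine can be sketched in a few lines: dates are stored as sortable YYYYMMDD strings but displayed as MM/DD/YY. (Note that `%y` applies a fixed pivot when expanding two-digit years, which is exactly the ambiguity the four-digit storage format avoids.)

```python
from datetime import datetime

def to_storage(display_date: str) -> str:
    """'12/25/01' (MM/DD/YY) -> '20011225' (YYYYMMDD)."""
    return datetime.strptime(display_date, "%m/%d/%y").strftime("%Y%m%d")

def to_display(stored_date: str) -> str:
    """'20011225' (YYYYMMDD) -> '12/25/01' (MM/DD/YY)."""
    return datetime.strptime(stored_date, "%Y%m%d").strftime("%m/%d/%y")

stored = [to_storage(d) for d in ["12/25/01", "01/02/99", "07/04/00"]]
# A plain string sort of YYYYMMDD values is already chronological.
print(sorted(stored))            # ['19990102', '20000704', '20011225']
print(to_display("20011225"))    # 12/25/01
```

The conversion sits between the user interface and the disk routines, so neither side needs to know the other's format.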

14.4 DATA ENTRY METHODS

Data entry methods include:
- keyboards
- optical character recognition (OCR)
- magnetic ink character recognition (MICR)
- mark-sense forms
- punch-out forms
- bar codes
- intelligent terminals

Tests for validating input data include: tests for missing data, tests for correct field length, tests for class or composition, tests for range or reasonableness, tests for invalid values, tests for comparison with stored data, setting up self-validating codes, and using check digits. Tests for class or composition check whether data fields are correctly filled in with either numbers or letters. Tests for range or reasonableness do not permit a user to input a date such as October 32; this is sometimes called a sanity check.

Database

A database is a group of related files. This collection is usually organized to facilitate efficient and accurate inquiry and update. A database management system (DBMS) is a software package that is used to organize and maintain a database.

Usually when we use the word "file" we mean traditional or conventional files; sometimes we call them "flat files." With these traditional flat files, each file is a single, recognizable, distinct entity on your hard disk. These are the kind of files that you can see cataloged in your directory. Commonly, these days, when we use the word "database" we are not talking about a collection of this kind of file; rather, we would usually be understood to be talking about a database management system. And, commonly, people who work in a DBMS environment speak in terms of "tables" rather

than "files." DBMS software allows data and file relationships to be created, maintained, and reported. A DBMS offers a number of advantages over file-oriented systems, including reduced data duplication, easier reporting, improved security, and more rapid development of new applications. The DBMS may or may not store a table as an individual, distinct disk file. The software may choose to store more than one table in a single disk file, or to store one table across several distinct disk files, or even to spread it across multiple hard disks. The details of the physical storage of the data are not important to the end user, who is concerned only with the logical tables, not the physical disk files.

In a hierarchical database the data is organized in a tree structure. Each parent record may have multiple child records, but any child may have only one parent. The parent-child relationships are established when the database is first generated, which makes later modification more difficult.

A network database is similar to a hierarchical database except that a child record (called a "member") may have more than one parent (called an "owner"). As in a hierarchical database, the parent-child relationships must be defined before the database is put into use, and the addition or modification of fields requires the relationships to be redefined.

In a relational database the data is organized in tables called "relations." Tables are usually depicted as a grid of rows ("tuples") and columns ("attributes"). Each row is a record; each column is a field. With a relational database, links between tables can be established at any time, provided the tables have a field in common. This allows a great amount of flexibility.
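Three of the validation tests listed above can be sketched directly. The field rules are assumed for illustration; the check-digit example uses the well-known Luhn algorithm, one common check-digit scheme.

```python
def class_test(field: str) -> bool:
    """Class/composition test: the assumed field must be digits only."""
    return field.isdigit()

def range_test(month: int, day: int) -> bool:
    """Range/reasonableness ("sanity") test: reject dates like October 32."""
    days_in_month = [31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return 1 <= month <= 12 and 1 <= day <= days_in_month[month - 1]

def luhn_check(number: str) -> bool:
    """Check-digit test: True if the number passes the Luhn checksum."""
    digits = [int(d) for d in number][::-1]
    total = sum(digits[0::2])                 # digits in odd positions
    for d in digits[1::2]:                    # every second digit doubled
        total += d * 2 if d < 5 else d * 2 - 9
    return total % 10 == 0

print(class_test("12345"))        # True: digits only
print(range_test(10, 32))         # False: "October 32" is rejected
print(luhn_check("79927398713"))  # True: a standard Luhn example
```

Each test catches a different kind of bad input at entry time, before the data ever reaches storage.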

14.5 SYSTEM IMPLEMENTATION

Systems implementation is the construction of the new system and its delivery into production or day-to-day operation. The key to understanding the implementation phase is to realize that there is a lot more to be done than programming. During implementation you bring your process, data, and network models to life with technology. This requires programming, but it also requires database creation and population, and network installation and testing. You also need to make sure the people are taken care of with effective training and documentation. Finally, if you expect your development skills to improve over time, you need to conduct a review of the lessons learned.

During both design and implementation, you ought to be looking ahead to the support phase; over the long run, this is where most of the costs of an application reside.

Systems implementation involves installation and changeover from the previous system to the new one, including training users and making adjustments to the system. Many problems can arise at this stage. You have to be extremely careful in implementing new systems. First, users are probably nervous about the change already; if something goes wrong they may never trust the new system. Second, if major errors occur, you could lose important business data.

A crucial stage in implementation is final testing. Testing and quality control must be performed at every stage of development, but a final systems test is needed before staff entrust the company's data to the new system. Occasionally, small problems will be noted, but their resolution will be left for later.

In any large system, errors and changes will occur; the key is to identify them and determine which ones must be fixed immediately. Smaller problems are often left to the software maintenance staff. Change is an important part of MIS. Designing and implementing new systems often causes changes in business operations, yet many people do not like change. Changes require learning new methods, forging new relationships with people and managers, or perhaps even the loss of jobs. Changes exist on many levels: in society, in business, and in information systems. Changes can occur because of shifts in the environment, or they can be introduced by internal change agents. Left to themselves, most organizations will resist even small changes. Change agents are objects or people who cause or facilitate changes. Sometimes it might be a new employee who brings fresh ideas; other times changes can be mandated by top-level management.
Sometimes an outside event such as the arrival of a new competitor or a natural disaster forces an organization to change. Whatever the cause, people tend to resist change. However, if organizations do not change, they cannot survive. The goal is to implement systems in a manner that recognizes resistance to change but encourages people to accept the new system. Effective implementation involves finding ways to reduce this resistance. Sometimes, implementation involves the cooperation of outsiders such as suppliers.

Because implementation is so important, several techniques have been developed to help implement new systems. Direct cutover is an obvious technique, where the old system is simply dropped and the new one started. If at all possible, it is best to avoid this

technique, because it is the most dangerous to data. If anything goes wrong with the new system, you run the risk of losing valuable information because the old system is not available.

In many ways, the safest choice is to use parallel implementation. In this case, the new system is introduced alongside the old one. Both systems are operated at the same time until you determine that the new system is acceptable. The main drawback to this method is that it can be expensive because data has to be entered twice. Additionally, if users are nervous about the new system, they might avoid the change and stick with the old method. In this case, the new system may never get a fair trial.

If you design a system for a chain of retail stores, you could pilot test the first implementation in one store. By working with one store at a time, there are likely to be fewer problems. But if problems do arise, you will have more staff members around to overcome the obstacles. When the system is working well in one store, you can move to the next location. Similarly, even if there is only one store, you might be able to split the implementation into sections based on the area of business. You might install a set of computer cash registers first. When they work correctly, you can connect them to a central computer and produce daily reports. Next, you can move on to annual summaries and payroll. Eventually the entire system will be installed.

Let us now see the process of implementation, which involves the following steps:

 Training: a good career opportunity for non-technical people who wish
 Conversion: migration from the old system to a new system
 Maintenance: very important; if you don't maintain the new system properly, it is useless to have developed a new system. Maintenance includes:
o monitoring the system,
o upgrades,
o trouble-shooting,
o continuous improvement.
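The parallel implementation approach discussed earlier can be sketched in a few lines of code. This is only an illustrative sketch: old_total() and new_total() are hypothetical stand-ins for one computation in the legacy and replacement systems, run side by side and compared before the old system is retired.

```python
# Parallel implementation sketch: run the old and new systems on the
# same input and compare results before trusting the new system.

def old_total(prices):
    # Legacy computation: simple accumulation loop.
    total = 0.0
    for p in prices:
        total += p
    return round(total, 2)

def new_total(prices):
    # Replacement computation: built-in sum.
    return round(sum(prices), 2)

def parallel_run(prices):
    """Run both systems side by side and flag any mismatch."""
    old_result = old_total(prices)
    new_result = new_total(prices)
    return {"old": old_result, "new": new_result,
            "match": old_result == new_result}

result = parallel_run([19.99, 5.00, 3.50])
print(result)  # a mismatch here would mean the new system is not yet trustworthy
```

Only when such comparisons match consistently over a trial period would the changeover to the new system be completed.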

14.6 SYSTEM MAINTENANCE

Once the system is installed, the MIS job has just begun. Computer systems are constantly changing. Hardware upgrades occur continually, and commercial software tools may change every year. Users change jobs. Errors may exist in the system. The business changes, and management and users demand new information and expansions. All of these actions mean the system needs to be modified. The job of overseeing and making these modifications is called software maintenance.

The pressures for change are so great that in most organizations today as much as 80 per cent of the MIS staff is devoted to modifying existing programs. These changes can be time consuming and difficult. Most major systems were created by teams of programmers and analysts over a long period. In order to make a change to a program, the programmer has to understand how the current program works. Because the program was written by many different people with varying styles, it can be hard to understand. Finally, when a programmer makes a minor change in one location, it can affect another area of the program, which can cause additional errors or necessitate more changes.

One difficulty with software maintenance is that every time part of an application is modified, there is a risk of adding defects (bugs). Also, over time the application becomes less structured and more complex, making it harder to understand. These are some of the main reasons why the year 2000 alterations were so expensive and time consuming. At some point, a company may decide to replace or improve the heavily modified system. There are several techniques for improving an existing system, ranging from rewriting individual sections to restructuring the entire application. The difference lies in scope: how much of the application needs to be modified. Older applications that were subject to modifications over several years tend to contain code that is no longer used, poorly documented changes, and inconsistent naming conventions. These applications are prime candidates for restructuring, during which the entire code is analyzed and reorganized to make it more efficient. More important, the code is organized, standardized, and documented to make it easier to make changes in the future.
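The regression risk mentioned above (every modification may add new defects) is one reason maintenance teams keep a suite of regression tests that is re-run after every change. A minimal sketch, where discount() stands in for a hypothetical routine in the maintained application:

```python
# Regression-test sketch: the assertions pin down the current behaviour
# of a routine, so a later modification that breaks it is caught during
# maintenance rather than discovered in production.

def discount(amount, customer_years):
    """Return the discounted amount: 5% off per year of loyalty, capped at 25%."""
    rate = min(0.05 * customer_years, 0.25)
    return round(amount * (1 - rate), 2)

# Regression tests, re-run after every modification to this module:
assert discount(100.0, 0) == 100.0   # no loyalty, no discount
assert discount(100.0, 3) == 85.0    # 15% off
assert discount(100.0, 10) == 75.0   # capped at 25%
print("all regression tests passed")
```

If a maintenance programmer later "improves" the discount calculation and one of these assertions fails, the side effect is detected immediately.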


14.7 SYSTEM EVALUATION

An important phase in any project is evaluating the resulting system. As part of this evaluation, it is also important to assess the effectiveness of the particular development process. There are several questions to ask. Were the initial cost estimates accurate? Was the project completed on time? Did users have sufficient input? Are maintenance costs higher than expected?

Evaluation is a difficult issue. How can you as a manager tell the difference between a good system and a poor one? In some way, the system should decrease costs, increase revenue, or provide a competitive advantage. Although these effects are important, they are often subtle and difficult to measure. The system should also be easy to use and flexible enough to adapt to changes in the business. If employees or customers continue to complain about a system, it should be re-examined.

A system also needs to be reliable. It should be available when needed and should produce accurate output. Error detection can be provided in the system to recognize and avoid common problems. Similarly, some systems can be built to tolerate errors, so that when errors arise, the system recognizes the problem and works around it. For example, some computers exist today that automatically switch to backup components when one section fails, thereby exhibiting fault tolerance.

One concern managers must remember when dealing with new systems is that the evaluation mechanism should be determined at the start. Too often, the question of evaluation is ignored until someone questions the value of the finished product. It is a good design practice to ask what would make this system a good system when it is finished, or how we can tell a good system from a bad one in this application. Even though these questions may be difficult to answer, they need to be asked.
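Some of these evaluation questions, such as cost against budget and schedule against plan, can be answered numerically in a post-implementation review. A small sketch; the figures are invented for illustration and would in practice come from project accounting records:

```python
# Post-implementation review sketch: compute cost and schedule variance
# (positive percentages indicate an overrun against the original plan).

def review(budgeted_cost, actual_cost, planned_months, actual_months):
    """Return (cost variance %, schedule variance %) for the project."""
    cost_var = (actual_cost - budgeted_cost) / budgeted_cost * 100
    time_var = (actual_months - planned_months) / planned_months * 100
    return round(cost_var, 1), round(time_var, 1)

cost_overrun, time_overrun = review(200_000, 230_000, 10, 12)
print(cost_overrun, time_overrun)  # 15.0 20.0
```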
The answers, however incomplete, will provide valuable guidance during the design stage.

Recall that every system needs a goal, a way of measuring progress toward that goal, and a feedback mechanism. Traditionally, control of systems has been the task of the computer programming staff. Their primary goal was to create error-free code, and they used various testing techniques to find and correct errors in the code. Today, creating error-free code is not a sufficient goal. We have all heard the phrase, "The customer is always right." The meaning behind this phrase is that sometimes people have different opinions on whether a system is behaving correctly. When there is a conflict, the opinion that is most important is that of the customer. In the final analysis, customers are in control because they can always take their business elsewhere. With information

systems, the users are the customers and the users should be the ones in control. Users determine whether a system is good. If the users are not convinced that the system performs useful tasks, it is not a good system.

Feasibility comparison:
 Cost and budget: compare actual costs to budget estimates.
 Time estimates: was the project completed on time?
 Revenue effects: does the system produce additional revenue?
 Maintenance costs: how much money and time are spent on changes?
 Project goals: does the system meet the initial goals of the project?
 User satisfaction: how do users (and management) evaluate the system?

System performance:
 System reliability: are the results accurate and on time?
 System availability: is the system available continually?
 System security: does the system provide access only to authorized users?

Summary: In this chapter, we learned about Data Formats, Data Entry and Data Storage, Data Entry Methods, System Implementation, System Maintenance and System Evaluation.

Questions:
1. Explain System Implementation in detail. (Ans: refer 14.5)
2. Explain System Maintenance in detail. (Ans: refer 14.6)
3. Explain System Evaluation. (Ans: refer 14.7)

15.1 INTRODUCTION:

Documentation is an important part of software engineering. Types of documentation include:
1. Requirements - Statements that identify attributes, capabilities, characteristics, or qualities of a system. This is the foundation for what shall be or has been implemented.
2. Architecture/Design - Overview of software. Includes relations to an environment and construction principles to be used in design of software components.
3. Technical - Documentation of code, algorithms, interfaces, and APIs.
4. End User - Manuals for the end-user, system administrators and support staff.
5. Marketing - How to market the product and analysis of the market demand.

15.2 REQUIREMENTS DOCUMENTATION

Requirements documentation is the description of what particular software does or shall do. It is used throughout development to communicate what the software does or shall do. It is also used as an agreement or as the foundation for agreement

on what the software shall do. Requirements are produced and consumed by everyone involved in the production of software: end users, customers, product managers, project managers, sales, marketing, software architects, usability experts, interaction designers, developers, and testers, to name a few. Thus, requirements documentation has many different purposes.

Requirements come in a variety of styles, notations and formality. Requirements can be goal-like (e.g., distributed work environment), close to design (e.g., builds can be started by right-clicking a configuration file and selecting the 'build' function), and anything in between. They can be specified as statements in natural language, as drawn figures, as detailed mathematical formulas, and as a combination of them all.

The variation and complexity of requirements documentation makes it a proven challenge. Requirements may be implicit and hard to uncover. It is difficult to know exactly how much and what kind of documentation is needed and how much can be left to the architecture and design documentation, and it is difficult to know how to document requirements considering the variety of people that shall read and use the documentation. Thus, requirements documentation is often incomplete (or non-existent). Without proper requirements documentation, software changes become more difficult and therefore more error prone (decreased software quality) and time-consuming (expensive).

The need for requirements documentation is typically related to the complexity of the product, the impact of the product, and the life expectancy of the software. If the software is very complex or developed by many people (e.g., mobile phone software), requirements can help to better communicate what to achieve. If the software is safety-critical and can have negative impact on human life (e.g., nuclear power systems, medical equipment), more formal requirements documentation is often required.
If the software is expected to live for only a month or two (e.g., very small mobile phone applications developed specifically for a certain campaign), very little requirements documentation may be needed. If the software is a first release that is later built upon, requirements documentation is very helpful when managing the change of the software and verifying that nothing has been broken in the software when it is modified.

Traditionally, requirements are specified in requirements documents (e.g. using word processing applications and spreadsheet applications). To manage the increased complexity and changing nature of requirements documentation (and software documentation in general), database-centric systems and special purpose requirements management tools are advocated.
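A requirements management tool of the kind mentioned above typically stores each requirement as a structured record rather than as free prose, so that its priority, status, and traceability can be tracked as the project changes. A minimal sketch; the field names are illustrative assumptions, not any particular tool's schema:

```python
# Sketch of a structured requirement record, as a requirements
# management tool might store it (field names are illustrative).
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str
    statement: str            # natural-language requirement text
    priority: str = "must"    # e.g. must / should / could
    status: str = "proposed"  # proposed -> approved -> implemented -> verified
    traces_to: list = field(default_factory=list)  # linked design/test artefacts

r = Requirement("REQ-042",
                "Builds can be started by right-clicking a configuration file",
                priority="should")
r.status = "approved"
r.traces_to.append("TEST-042")
print(r.req_id, r.status)  # REQ-042 approved
```

Keeping requirements in this form makes it possible to query them (e.g. "all approved requirements without a linked test"), which free-form documents cannot support.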


15.3 ARCHITECTURE/DESIGN DOCUMENTATION

Architecture documentation is a special breed of design document. In a way, architecture documents are third derivative from the code (design documents being second derivative, and code documents being first). Very little in the architecture documents is specific to the code itself. These documents do not describe how to program a particular routine, or even why that particular routine exists in the form that it does, but instead merely lay out the general requirements that would motivate the existence of such a routine. A good architecture document is short on details but thick on explanation. It may suggest approaches for lower level design, but leave the actual exploration trade studies to other documents.

Another breed of design document is the comparison document, or trade study. This would often take the form of a whitepaper. It focuses on one specific aspect of the system and suggests alternate approaches. It could be at the user interface, code, design, or even architectural level. It will outline what the situation is, describe one or more alternatives, and enumerate the pros and cons of each. A good trade study document is heavy on research, expresses its idea clearly (without relying heavily on obtuse jargon to dazzle the reader), and most importantly is impartial. It should honestly and clearly explain the costs of whatever solution it offers as best. The objective of a trade study is to devise the best solution, rather than to push a particular point of view. It is perfectly acceptable to state no conclusion, or to conclude that none of the alternatives are sufficiently better than the baseline to warrant a change. It should be approached as a scientific endeavour, not as a marketing technique.

A very important part of the design document in enterprise software development is the Database Design Document (DDD). It contains Conceptual, Logical, and Physical Design Elements. The DDD includes the formal information that the people who interact with the database need.
The purpose of preparing it is to create a common source to be used by all players within the scene. The potential users are:
 Database Designer
 Database Developer
 Database Administrator
 Application Designer
 Application Developer

When talking about Relational Database Systems, the document should include the following parts:


 Entity-Relationship Schema, including the following information and their clear definitions:
o Entity Sets and their attributes
o Relationships and their attributes
o Candidate keys for each entity set
o Attribute and Tuple based constraints
 Relational Schema, including the following information:
o Tables, Attributes, and their properties
o Views
o Constraints such as primary keys, foreign keys
o Cardinality of referential constraints
o Cascading Policy for referential constraints
o Primary keys

It is very important to include all information that is to be used by all actors in the scene. It is also very important to update the documents as any change occurs in the database as well.
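Several of the Relational Schema items listed above (primary keys, a foreign key with its cardinality, and the cascading policy for the referential constraint) can be expressed directly in SQL. A minimal sketch using Python's built-in sqlite3 module; the table and column names are invented for illustration:

```python
# Sketch of schema constraints a Database Design Document records:
# primary keys, a one-to-many foreign key, and an ON DELETE CASCADE policy.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce referential constraints

conn.execute("""
    CREATE TABLE customer (
        cust_id INTEGER PRIMARY KEY,   -- primary key
        name    TEXT NOT NULL
    )""")
conn.execute("""
    CREATE TABLE orders (
        order_id INTEGER PRIMARY KEY,
        cust_id  INTEGER NOT NULL,     -- many orders per customer
        -- foreign key with a cascading delete policy
        FOREIGN KEY (cust_id) REFERENCES customer(cust_id)
            ON DELETE CASCADE
    )""")

conn.execute("INSERT INTO customer VALUES (1, 'Asha')")
conn.execute("INSERT INTO orders VALUES (10, 1)")
conn.execute("DELETE FROM customer WHERE cust_id = 1")  # cascades to orders
remaining = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(remaining)  # 0
```

The DDD would document exactly these choices (which keys exist, which deletes cascade) so that database developers and application developers work from the same source.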

15.4 TECHNICAL DOCUMENTATION

This is what most programmers mean when using the term software documentation. When creating software, code alone is insufficient. There must be some text along with it to describe various aspects of its intended operation. It is important for the code documents to be thorough, but not so verbose that it becomes difficult to maintain them. Several how-to and overview documents are found specific to the software application or software product being documented by API writers. This documentation may be used by developers, testers and also the end customers or clients using this software application. Today, we see a lot of high-end applications in the fields of power, energy, transportation, networks, aerospace, safety, security, industry automation and a variety of other domains. Technical documentation has become important within such organizations, as the basic and advanced level of information may change over a period of time with architecture changes. Hence, technical documentation has gained a lot of importance in recent times, especially in the software field.

Often, tools such as Doxygen, NDoc, javadoc, EiffelStudio, Sandcastle, ROBODoc, POD, TwinText, or Universal Report can be used to auto-generate the code documents; that is, they extract the comments and software contracts, where available, from the source code and create reference manuals in such forms as text or

HTML files. Code documents are often organized into a reference guide style, allowing a programmer to quickly look up an arbitrary function or class.

Many programmers really like the idea of auto-generating documentation for various reasons. For example, because it is extracted from the source code itself (for example, through comments), the programmer can write it while referring to the code, and use the same tools used to create the source code to make the documentation. This makes it much easier to keep the documentation up-to-date.

Of course, a downside is that only programmers can edit this kind of documentation, and it depends on them to refresh the output (for example, by running a job to update the documents nightly). Some would characterize this as a pro rather than a con. Donald Knuth has insisted on the fact that documentation can be a very difficult afterthought process and has advocated literate programming, written at the same time and location as the source code and extracted by automatic means.

Elucidative Programming is the result of practical applications of Literate Programming in real programming contexts. The Elucidative paradigm proposes that source code and documentation be stored separately. This paradigm was inspired by the same experimental findings that produced Kelp. Often, software developers need to be able to create and access information that is not going to be part of the source file itself. Such annotations are usually part of several software development activities, such as code walks and porting, where third party source code is analysed in a functional way. Annotations can therefore help the developer during any stage of software development where a formal documentation system would hinder progress. Kelp stores annotations in separate files, linking the information to the source code dynamically.
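The comment-extraction idea behind tools such as javadoc and Doxygen can be demonstrated with Python's standard inspect module: documentation written beside the code is pulled out mechanically to build a reference entry. The connect() function here is purely illustrative:

```python
# Sketch of auto-generated documentation: extract the signature and
# docstring from the source itself, as a documentation tool would.
import inspect

def connect(host, port=5432):
    """Open a connection to the given host.

    Parameters:
        host: server name or address
        port: TCP port (default 5432)
    """
    pass  # body omitted; only the documentation matters here

# "Generate" the reference entry from the code itself:
signature = str(inspect.signature(connect))
summary = inspect.getdoc(connect).splitlines()[0]
print("connect" + signature, "-", summary)
# connect(host, port=5432) - Open a connection to the given host.
```

Because the reference entry is derived from the code, updating the function and its docstring in one place keeps the documentation current, which is exactly the advantage claimed for auto-generation.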

15.5 USER DOCUMENTATION

Unlike code documents, user documents are usually far more diverse with respect to the source code of the program, and instead simply describe how it is used.

In the case of a software library, the code documents and user documents could be effectively equivalent and are worth conjoining, but for a general application this is not often true. On the other hand, the Lisp machine grew out of a tradition in which every piece of code had an attached documentation string. In combination with strong search capabilities (based on a Unix-like apropos command), and online sources, Lisp users could look up

documentation prepared by these API writers and paste the associated function directly into their own code. This level of ease of use is unheard of in putatively more modern systems.

Typically, the user documentation describes each feature of the program, and assists the user in realizing these features. A good user document can also go so far as to provide thorough troubleshooting assistance. It is very important for user documents not to be confusing, and for them to be up to date. User documents need not be organized in any particular way, but it is very important for them to have a thorough index. Consistency and simplicity are also very valuable. User documentation is considered to constitute a contract specifying what the software will do. API writers are very well accomplished towards writing good user documents as they would be well aware of the software architecture and programming techniques used. See also Technical Writing.

There are three broad ways in which user documentation can be organized.
1. Tutorial: A tutorial approach is considered the most useful for a new user, in which they are guided through each step of accomplishing particular tasks.
2. Thematic: A thematic approach, where chapters or sections concentrate on one particular area of interest, is of more general use to an intermediate user. Some authors prefer to convey their ideas through a knowledge-based article to facilitate user needs. This approach is usually practiced by a dynamic industry, such as information technology, where the user population is largely correlated with the troubleshooting demands.
3. List or Reference: The final type of organizing principle is one in which commands or tasks are simply listed alphabetically or logically grouped, often via cross-referenced indexes. This latter approach is of greater use to advanced users who know exactly what sort of information they are looking for.

A common complaint among users regarding software documentation is that only one of these three approaches was taken to the near-exclusion of the other two. It is common to limit provided software documentation for personal computers to online help that gives only reference information on commands or menu items. The job of tutoring new users or helping more experienced users get the most out of a program is left to private publishers, who are often given significant assistance by the software developer.


15.6 MARKETING DOCUMENTATION

For many applications it is necessary to have some promotional materials to encourage casual observers to spend more time learning about the product. This form of documentation has three purposes:
1. To excite the potential user about the product and instil in them a desire for becoming more involved with it.
2. To inform them about what exactly the product does, so that their expectations are in line with what they will be receiving.
3. To explain the position of this product with respect to other alternatives.

One good marketing technique is to provide clear and memorable catch phrases that exemplify the point we wish to convey, and also emphasize the interoperability of the program with anything else provided by the manufacturer.

15.7 CASE TOOLS AND THEIR IMPORTANCE

CASE tools stand for Computer Aided Software Engineering tools. As the name implies, they are computer-based programs to increase the productivity of analysts. They permit effective communication with users as well as other members of the development team. They integrate the development done during each phase of a system life cycle and also assist in correctly assessing the effects and cost of changes so that maintenance cost can be estimated.

Available CASE tools

Commercially available systems provide tools (i.e. computer program packages) for each phase of the system development life cycle. A typical package is Visible Analyst, which has several tools integrated together. Tools are also available in the open domain which can be downloaded and used. However, they do not usually have very good user interfaces.

The following types of tools are available:
 System requirements specification documentation tool
 Data flow diagramming tool
 System flow chart generation tool
 Data dictionary creation
 Formatting and checking structured English process logic
 Decision table checking

When are tools used

Tools are used throughout the system design phase. CASE tools are sometimes classified as upper CASE tools and lower CASE tools. The tools we have described so far are upper CASE tools. Tools which generate computer screen code from higher level descriptions, such as structured English and decision tables, are called lower CASE tools.

Object Oriented System Design Tools

Unified Modelling Language (UML) is currently the standard. A widely used UML tool set is Rational Rose, marketed by Rational Software, a company whose tools are widely used. This is an expensive tool and not in the scope of this course.

How to use the tools

Most tools have a user's guide which is given as help files along with the tool. Many have FAQs and search capabilities. Details on several open domain tools and what they do are given below.

System Flowchart and ER-Diagram generation Tool

Name of the tool: SMARTDRAW

URL: This software can be downloaded from http://www.smartdraw.com. This is paid software, but a 30-day free trial for learning can be downloaded.

Requirements to use the tool: PC running Windows 95, 98 or NT. The latest versions of Internet Explorer or Netscape Navigator, and about 20MB of free space.

What the tool does: SmartDraw is a perfect suite for drawing all kinds of diagrams and charts: flowcharts, organizational charts, Gantt charts, network diagrams, ER diagrams etc. The drag-and-drop readymade graphics of thousands of templates from built-in libraries make drawing easier. It has a large

drawing area, and drawings from this tool can be embedded into Word, Excel and PowerPoint by simply copy-pasting. It has an extensive collection of symbols for all kinds of drawings.

How to use: Built-in tips guide the user as the drawing is being created. Tool tips automatically label buttons on the tool bar. There is an online tutorial provided at:
http://www.smartdraw.com/tutorials/flowcharts/tutorials1.htm
http://www.ttp.co.uk/abtsd.html

Data Flow Diagram Tool

Name of the tool: IBMS/DFD

URL: This is free software that can be downloaded from http://viu.eng.rpi.edu

Requirements to use the tool: The following installation instructions assume that the user uses a PC running Windows 95, 98 or NT. Additionally, the instructions assume the use of the latest versions of Internet Explorer or Netscape Navigator. To download the zip files and extract them you will need WinZip or similar software. If needed, download it at http://www.winzip.com.

What the tool does: The tool helps the users draw a standard data flow diagram (a process-oriented model of information systems) for systems analysis.

How to use: Double click on the IBMS icon to see the welcome screen. Click anywhere inside the welcome screen to bring up the first screen. Under the "Tools" menu, select DFD Modelling. The IBMS will pop up the Data Flow Diagram window. Its menu bar has the File, Edit, Insert, Font, Tool, Window and Help options. Its tool box on the right contains 10 icons, representing (from left to right and top to bottom) pointer, cut, data flow, process, external entity, data store, zoom-out, zoom-in, decompose, and compose operations, respectively.

Left click on the DFD component to be used in the toolbox, and key in the information pertaining to it in the input dialogue box that prompts for information.

To move the DFD components: Left click on the Pointer icon in the tool box, point to the component, and hold the left button to move it to the new location desired in the work area.

To edit information of the DFD components: Right click on the DFD component. The input dialogue box will prompt you to edit information of that component.

Levelling of DFD: Use the Decompose icon in the tool box for levelling.

To save the DFD: Under the File menu, choose Save or Save As. Input the name and extension of the DFD (the default extension is DFD) and specify the folder for the DFD to be saved. Click OK.

System requirement specification documentation tool

Name of the tool: ARM

URL: The tool can be downloaded without cost at http://sw-assurance.gsfc.nasa.gov/disciplines/quality/index.php

What the tool does: ARM, or Automated Requirement Measurement tool, aids in writing the System Requirements Specifications right. The user writes the SRS in a text file; the ARM tool scans this file that contains the requirement specifications and gives a report file with the same prefix name as the user's source file, adding an extension of .arm. This report file contains a category called INCOMPLETE that indicates the words and phrases that are not fully developed.

Requirements to use the tool: PC running Windows 95, 98 or NT. The latest versions of Internet Explorer or Netscape Navigator, and about 8MB of free space.

How to use the tool: On clicking the option Analyze under the File menu and selecting the file that contains the System Requirements Specifications, the tool processes the document to check if the specifications are right and generates an ARM report. The WALKTHROUGH option in the ARM tool assists a user by guiding him as to how to use the tool, apart from the HELP menu. The README.doc file downloaded during installation also contains a description of the usage of this tool.

A Tool for Designing and Manipulating Decision Tables

Name of the tool: Prologa V.5

URL: http://www.econ.kuleuven.ac.be/prologa

Note: This tool can be downloaded from the above given URL, after obtaining the password.

What the tool does: The purpose of the tool is to allow the decision maker to construct and manipulate (systems of) decision tables. In this construction process, the features available are automatic table contraction, automatic table optimization, (automatic) decomposition and composition of tables, verification and validation of tables and between tables, visual development, and rule based specification.