Weblog for the employees, suppliers, and customers of our enterprise IT solutions business

Archive for March, 2014

With our expansion into Application Lifecycle Management (ALM), we’ve talked about different issues dealing with the SDLC portion. The SDLC is only a portion of the larger picture that is ALM: not only does ALM deal with the initial development stages, it also extends to the operational and governance stages, covering all aspects of an application from inception to obsolescence.

This means that a company should not only be concerned with the earlier stages of the application (development) but also with issues such as the tools used to enable ALM (IDEs, testing software) and the overall governance of the process (the methodologies used and the processes involved). Some key areas and tips to take note of are as follows.

Vendors supplying your tools. Without the appropriate tools the entire process is in danger of failing, so choosing the right vendor is crucial. Research vendors to ensure they can continue supporting your tools, and keep multiple vendors in consideration as backups to ensure a smooth procurement process.

Ensure the ALM process is developed by the parties involved, using the processes they are familiar with and the tools they favor most. It is no good for a company to force its personnel to operate with unfamiliar tools, which leads to problems such as a slower working pace and more bugs. This is much easier to implement in smaller companies; in larger companies, tools and processes may be set in stone, so new hires must be selected based on their experience with the processes and tools currently in use.

Tools and processes should not just enable the collection of information but also promote collaboration. This means tools need to work with one another, passing information from one process to another so it can be reused. Vendors try to address this by providing solutions that cover multiple aspects of ALM, but in practice companies often use multiple products to cover their ALM needs. Geographically distributed companies face an additional challenge in ensuring that all gathered information is distributed and shared appropriately. Maintaining strict guidelines on which tools should be used, and enforcing them as part of daily activities, is necessary.

In recent times application development has become a crucial point of focus for security. This arises for a multitude of reasons, but two stand-out issues are mistakes and problems during the SDLC. Mistakes are caused by human error; after all, developers are human and prone to errors. Problems, on the other hand, are unfavorable issues or situations that need to be overcome and do not always stem from errors, for example, a communication problem between client and developer during the design phase leading to misaligned goals. In this blog, we will go over some of the more common problems developers face during the SDLC.

Communication during Initial Phase
As mentioned earlier, one of the biggest problem areas appears during the requirements gathering and definition stage and relates to communication problems between the involved parties. Under a methodology such as the Waterfall model, if misalignment of the end vision is not dealt with at the early stages, the next phase of the process is either put on hold or the parties continue on unaware, and the problem is exacerbated in the later stages. Other methodologies, such as Agile development, can help mitigate communication issues, but the end-user's level of involvement must increase with each iteration. In any methodology, proper time spent in the initial phases is crucial to the success of the project, whether that means turning down end-users with a known history of communication problems (which requires understanding the end-user's history) or aligning your interests and goals before starting the project.

Management/Scheduling
Work culture can lead to unfavorable management situations: inexperienced personnel are sometimes put in the project manager role through personal relationships, through a simple misunderstanding of a person's skills, or because of budget limits. Mismanagement born of inexperience gives rise to issues such as poor estimation of the time required for each phase, or workloads forced into unrealistic time frames due to budgetary constraints.

Development and “Late Requests”
This problem often stems from the initial communication problem going unchecked, but not always. Sometimes end-users request an additional feature because their vision has changed or because they realized the need too late. A simple request on the end-user's behalf can have large implications for the development team, for example when the program has been built in a way that means the request requires a rework from the bottom up. This problem is not always avoidable, but it is mitigated by ensuring all requirements are fully gathered and by having the end-user understand the implications of "late requests".

Crunch time testing
Testing is key to ensuring that the program works as per the initial vision and, nowadays, that all security measures are verified and bugs are caught. The problems that arise in the testing phase usually derive from bad management, whether that is too little time allotted to testing or budget constraints, and especially from underestimating the time required to test the product thoroughly.

In the end, software development can go wrong for a plethora of reasons, but the majority stem from the common problems above. Seeking to overcome them through proper management, appropriately defining and revisiting requirements, and managing time will help keep your SDLC in check and on the right path.

The current trend in software development is to go through the stages of the development life cycle and perform a security audit only once everything is complete. As touched on in the previous blog, “Reducing your costs during the SDLC”, this is a much more costly approach. This is where the concept of the secure SDLC (S-SDLC) comes into play: the number of outside attackers looking to exploit your systems has vastly increased. The S-SDLC applies security at the earlier stages of the life cycle to better mitigate issues that, if not caught early, would otherwise propagate through the rest of the stages.

Fig 1. A sample S-SDLC process

The S-SDLC involves mapping security measures onto each stage of the normal SDLC:

Requirements Gathering
- Security Requirements
- Setting up Phase Gates
- Risk Assessment

Design
- Identify Design Requirements from a security perspective
- Architecture & Design Reviews
- Threat Modeling

Coding
- Coding Best Practices
- Perform Static Analysis

Testing
- Vulnerability Assessment
- Fuzzing

Deployment
- Server Configuration Review
- Network Configuration Review

The above is not a definitive list of the measures available or required during the SDLC; they vary from case to case depending on the type of software being developed and what suits it best.
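To make the Coding-stage measure “Perform Static Analysis” concrete, here is a minimal sketch of how a static check works. It is not any particular commercial tool: it uses Python’s standard ast module to flag function calls that are commonly considered risky, and the rule list is a made-up illustration; real analyzers ship far richer rule sets.

```python
# Minimal static-analysis sketch: walk parsed source code and flag
# calls that commonly indicate security risk. Illustrative only.
import ast

RISKY_CALLS = {"eval", "exec", "os.system", "pickle.loads"}

def call_name(node: ast.Call) -> str:
    """Best-effort dotted name for a call node, e.g. 'os.system'."""
    func = node.func
    if isinstance(func, ast.Name):
        return func.id
    if isinstance(func, ast.Attribute) and isinstance(func.value, ast.Name):
        return f"{func.value.id}.{func.attr}"
    return ""

def find_risky_calls(source: str):
    """Return (line, name) pairs for risky calls found in the source."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and call_name(node) in RISKY_CALLS:
            findings.append((node.lineno, call_name(node)))
    return findings

sample = "import os\nuser = input()\nos.system(user)\nprint(eval(user))\n"
for line, name in find_risky_calls(sample):
    print(f"line {line}: risky call {name}")
```

Catching a pattern like this at the coding stage, rather than in a post-release audit, is exactly the cost saving the S-SDLC is after.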

S-SDLC efforts can and should be measured to better understand how the overall software development process is working for the organization. Comparing the test reports from each stage of a development cycle against reports from earlier cycles can show increases or decreases in “fixes-to-be-done”, as well as improvements in application turnaround rates (when read alongside those changes in the number of fixes needed).
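A sketch of what that measurement could look like: given “fixes-to-be-done” counts pulled from successive test reports (the figures below are invented purely for illustration), report the period-over-period change.

```python
# Sketch of measuring S-SDLC progress: period-over-period change in
# the number of open fixes across successive test reports.
# The report counts here are made up for illustration.
reports = {
    "2014-Q1": 42,
    "2014-Q2": 35,
    "2014-Q3": 28,
}

def fix_trend(counts: dict) -> list:
    """Change in open fixes versus the previous period, in report order."""
    periods = list(counts)
    return [
        (periods[i], counts[periods[i]] - counts[periods[i - 1]])
        for i in range(1, len(periods))
    ]

for period, delta in fix_trend(reports):
    direction = "fewer" if delta < 0 else "more"
    print(f"{period}: {abs(delta)} {direction} fixes than the previous cycle")
```

A steadily negative trend suggests the security measures added at the earlier stages are paying off; a positive one is a signal to revisit where in the cycle issues are being introduced.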

This week we will tackle the subject of Enterprise Risk Management, or simply put, ERM, and what benefits it brings to the organization. The overall topic can be hard to define, which is why in 2004 the Committee of Sponsoring Organizations of the Treadway Commission (COSO) decided it was necessary to create a formal definition. Thus ERM was defined as “a process, effected by the entity’s board of directors, management, and other personnel, applied in strategy-setting and across the enterprise, designed to identify potential events that may affect the entity, and manage risk to be within the risk appetite, to provide reasonable assurance regarding the achievement of objectives.” Some of the key benefits ERM brings are as follows.

I. A more risk-oriented culture: When a company culture thinks about risks proactively from the top of the hierarchy down, the effects trickle through the rest of the organization. It pushes the company to think about risks in a more open manner, deconstructing them to seize potential opportunities while mitigating the actual risk.

It forces risk discussions to become part of the overall strategic thinking of the company, its business processes, and its business units. This allows a better understanding of risk at all levels and eases the sharing of information.

II. Risk reports are standardized: ERM makes the multitude of varying data (risk indicators, emerging risks, mitigation strategies) simpler to understand through standardized reports. This lets managers tackle risk issues in order of importance and get a better view of overall matters such as risk tolerance and thresholds. ERM keeps reports accessible, concise, and flexible while making it easy to aggregate all relevant risk information for company-wide sharing.

III. Improved insights on risks: ERM helps develop leading indicators designed to detect potential risk events and provide early warnings so the organization can react sooner. ERM also offers a different way of looking at risks: traditional risk analysis generally deals with avoiding or managing risks, whereas ERM aims to quantify and evaluate each risk so it can be used as a competitive advantage (by taking advantage of market conditions, increasing performance capabilities, or exploiting conditions to improve competitive position).

IV. Resource utilization: Risk management involves using resources within the company to facilitate the sharing of information and to actively mitigate risks. ERM lets companies allocate resources strategically so they are used as efficiently as possible, reducing redundant tasks.

In a nutshell, ERM allows companies to better coordinate entity-wide efforts to mitigate risks, seize opportunities, and standardize reporting. The key benefits are listed above, but they are by no means all that ERM brings to the table. As evident in the KPMG survey, cost savings (through reduced operational costs and reduced debt costs) and improved shareholder value are other reasons to consider ERM.

Saint’s Security Suite aims to bring the next generation of vulnerability management to enterprises in an easy-to-use package: a plethora of integrated tools bundled in a friendly interface, all allowing dynamic control over how data is presented.

Saint’s suite provides six core tools, each with a specialized purpose, to help protect business operations, mitigate risks, simplify compliance, and improve IT management.

Vulnerability Scanning
Saint’s easy-to-use vulnerability scanner lets users target any IPv4 address, IPv6 address, or URL for vulnerabilities, backed by a daily-updated list of vulnerability checks and exploits. It can also scan operating systems, web applications, and database applications to verify patch compliance and antivirus status, and to detect sensitive information such as credit card or social security numbers.

Configuration Auditing
A dedicated Security Content Automation Protocol (SCAP) module supports focused scanning analysis and CyberScope reporting. It is designed to demonstrate compliance with various policies defined by NIST (FDCC, USGCB) and comes with a built-in policy editor that lets enterprises develop customized security policies benchmarked against NIST standards.

Drill-Down Dashboards & Analytics
All of Saint’s dashboards let users delve deeper into the data for a detailed view of the information provided, and users can select multiple results to perform batch analysis. All data can be dynamically sorted, searched, and ordered to make the most sense to you.

Compliance & Custom Reporting
The report wizard can generate customized reports with over 150 options, including charts. Users can generate trend reports from previously gathered data to get a better idea of how the landscape has changed or is changing. The wizard supports compliance report generation, including the following:

PCI

NERC CIP

SOX

FISMA

HIPAA

Role Based Security (RBS)
IT systems are most prone to risk when users of the system get complacent or make mistakes. A case in point is the RSA breach in 2011, where an employee mistakenly opened a malicious attachment, leading to mass data theft. Saint seeks to address this with RBS, which allows tight control of all user permissions and activity. In addition, its scanner can be assigned asset groups, each containing a predefined list of associated targets.

These tools enable Saint to act as a one-stop solution for enterprise vulnerability management, but they are by no means limited to that. Also included are tools for asset management, access control, and an API for integration support (easing integration with third-party tools), and since enterprises are always growing, Saint has made the Suite easily scalable to match.
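To illustrate the general idea behind role-based security, here is a generic sketch (not Saint’s actual API; the roles and permission names are invented): each role grants a fixed set of permissions, and every action is checked against the acting user’s role before it runs.

```python
# Generic role-based security sketch: actions are only allowed when the
# acting role's permission set grants them. Roles and permissions below
# are hypothetical examples, not any product's real model.
ROLE_PERMISSIONS = {
    "admin":    {"scan", "configure", "report", "manage_users"},
    "operator": {"scan", "report"},
    "auditor":  {"report"},
}

def check(role: str, action: str) -> None:
    """Raise PermissionError if the role does not grant the action."""
    if action not in ROLE_PERMISSIONS.get(role, set()):
        raise PermissionError(f"role {role!r} may not {action!r}")

def run_scan(role: str, target: str) -> str:
    """A privileged operation gated by the permission check."""
    check(role, "scan")
    return f"scanning {target}"

print(run_scan("operator", "10.0.0.5"))  # allowed: operators may scan
try:
    run_scan("auditor", "10.0.0.5")      # denied: auditors may only report
except PermissionError as err:
    print(err)
```

The point of the pattern is that a complacent or compromised account can only do what its role explicitly grants, which limits the blast radius of the kind of mistake behind the RSA breach.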

We’ve delved into the realm of Enterprise Risk Management in previous blogs, and now it’s time to take a look at a subset of ERM: IT Risk Management (ITRM). The Risk IT publication by ISACA notes that ITRM covers both the negative impacts on operations and service delivery and the benefit lost by missing the chance to use technology to enable or enhance the business. In other words, it is the practice of applying risk management to the information technology aspects of a business to mitigate IT risks, which are on the rise due to the increasing reliance on technology. In this blog we will look at ITRM areas that CIOs can miss, with potentially disastrous outcomes.

I. Company mergers and acquisitions: When companies merge or are bought out, there is a flurry of activity for the parties involved. IT personnel are left with the daunting task of merging systems to work seamlessly, which is easier said than done. Companies using legacy systems may need to merge them with incompatible systems, or completely different IT practices may need to be melded into one. Failure to ensure that compliance standards are met can lead to loopholes in the system or, worse, complete loss of data.

On top of all the internal tasks involved, staff are left wondering whether the merger will leave them out of a job, or facing the inevitable when they actually are laid off. CIOs and IT personnel have to keep track of such events to mitigate the possibility of sensitive data being leaked or of sabotage (ranging from non-compliance with work practices all the way to malicious reworking of systems and data).

II. Vendor relationships: Data is vital, and vendors are responsible for providing you with the tools and products to manage your company. When deals span years, it is vital that companies understand their vendors and their intentions. Planning to mitigate risks such as vendors going out of business (lining up contingency vendors that support your systems, or forecasting a vendor’s financial stability) or vendor acquisitions (which can change the products a vendor carries, to the detriment of your future partnership) is a necessary step in ITRM. Contract clauses such as the right to terminate will help in such cases.

III. Management of IT personnel: It may not seem like a high-risk area, but on the contrary, appropriate management of IT skills is crucial. Every IT employee brings skills to the table that make them sought after by project managers. At times a certain person’s skillset is required for project Y while that person is already tasked with handling project X. This leads to projects stalling for lack of the appropriate person, or to project managers filling the gap with less experienced personnel, increasing the overall risk of the project. Smaller issues, such as perceptions of favoritism and discontent among workers, can also arise through employee “hogging”.

IV. Outdated disaster recovery: As systems expand and evolve, growing more complex, an up-to-date disaster recovery plan is vital. All aspects of the documentation must be kept relevant, from making sure offsite locations are still viable to making sure the latest system hardware and software changes are documented.

V. Risk management of application development: For any entity that develops applications (proprietary software), implementing proper risk management during the SDLC is of the utmost importance, especially when demand for a product can force companies to forgo thorough practices, leading to backlash from end users.
In the graphic above, you can see that SecureState (a company specializing in information security and risk services) has developed a thorough set of practices and tools for each stage of the SDLC to help seek out and mitigate technical issues (potential risks for the application).

As with ERM, ITRM brings a host of benefits when it is properly performed, and as companies become ever more reliant on their IT departments, now is the time to seize the opportunity and implement better IT risk management practices.