Because Good Software Is Good Business

Tag Archives: software metrics


Software risks to the business, specifically Application Resiliency, headlined a recent executive roundtable hosted by CAST and sponsored by IBM Italy, ZeroUno and the Boston Consulting Group. European IT executives from the financial services industry assembled to debate the importance of mitigating software risks to their business.

Companies seeking to reduce time to market while improving application quality today usually choose between assigning application development projects to in-house teams or to outsourced system integrators (SI). However, the cost arbitrage of Global In-House Centers (GIC), better known in the industry as “Captives,” continues to provide a competitive advantage that cannot be overlooked.

In this week’s IT Software Quality Report podcast, Bob Martin, a principal engineer at MITRE Systems, discusses with CISQ Director, Dr. Bill Curtis, how the U.S. Department of Homeland Security and other organizations are using software quality metrics to better judge the software and services offered by vendors and contractors. Martin says thanks to the increasing use of software quality assessments, organizations are becoming more aware of what to look for in software design and architecture in order to make their applications more “reliable, rugged and resilient” and avoid the failure of critical systems. What’s more, organizations are finally understanding that software quality is more than a matter of normal feature bugs: it is the reliability and security of the application — and software quality assessments must become part of the organization’s due diligence process.
Listen to or download this episode now!

On the night of his ship’s maiden and lone voyage, the skipper of the Titanic saw the top of an iceberg, swerved to avoid it, and in doing so piloted his ship’s hull directly into the monstrous portion of the iceberg that lay unseen beneath the surface of the ocean, tearing apart the “unsinkable” ship. Had he known what lay beneath the surface, his reaction likely would have been much different and could have yielded a very different, possibly positive, result.
The Titanic’s experience underscores the essence of the difference between testing and software quality assessment – addressing the seen versus the unseen. uTest, a company that specializes in testing software, recently questioned whether or not software quality comes from testing. The blogger laments that QA testers are saddled with software that is already “buggy” and lacking in quality. He goes on to sympathize with them and encourages them to work with developers to communicate issues and promote better development practices. uTest also comments that since developers create the problematic software, they may not represent the optimal choice for ensuring quality.
So in answer to uTest’s question, “Does software quality come from testing?” we would say, “No.” Testing can only address an application’s “external quality.” Testers can effectively address only visible symptoms such as correctness, efficiency or maintenance costs. What lies beneath the surface, however – the internal quality – directly impacts the external quality and can lead to even greater issues. These characteristics – program structure, complexity, coding practices, coupling, testability, reusability, maintainability, readability and flexibility – are the invisible root of the software quality iceberg and can do far more damage to a company’s reputation and IT maintenance budget than the visible issues.
But how can you fix problems if you can’t see them? uTest is correct inasmuch as developers probably should not be the ones responsible for finding the issues. First, time, business and cost pressures have all pushed developers to make sub-optimal choices that impact the quality and future performance of critical applications. Second, and more important, there is simply too much that needs to be reviewed for any individual developer – or even group of developers – to review efficiently and find the issues that could lead to application software malfunction.
Managing the risk of poor software quality requires a thorough understanding of the Structural Quality of critical applications. However, assessing the health, structural quality, complexity, maintainability and functional size of an application can be a daunting manual task that takes time and expert resources.
An effective internal quality review requires automated application assessment. More and more companies are automating this process for all critical applications as the occasional manual review becomes increasingly obsolete. Such a service provides continual, automated assessment to help companies ensure that quality is built into their systems with every developer contribution – whether the software is being built from scratch or being customized. And that visibility into the internal quality of application software is the difference between a company enjoying a successful voyage and suffering a Titanic disaster.

Corner of 26th St. and 6th Ave in NYC at 2 AM? Good to be big!
Middle seat from Los Angeles to Sydney? Good to be small!
Size always matters. Software is no exception. Software measurement expert Capers Jones has published data on how software size is fundamental. If you know size you can derive a lot of useful things — an accurate estimate of the defect density, for example. To see the complete list, go to Capers Jones’ presentation and check out slide #4.
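To make the “derive useful things from size” point concrete, here is a minimal sketch of one commonly cited Capers Jones rule of thumb: raising an application’s function point count to the 1.25 power approximates its total defect potential. The exponent is Jones’s published heuristic, not a CAST figure, and the 1,000-FP example application is purely illustrative:

```python
def estimated_defect_potential(function_points: float) -> float:
    """Capers Jones's rule of thumb: defect potential ~= FP ** 1.25.

    'Defect potential' is the total number of defects likely to be
    injected across requirements, design, code, and documentation.
    """
    return function_points ** 1.25

# A hypothetical 1,000-function-point application:
fp = 1000
print(round(estimated_defect_potential(fp)))   # roughly 5,623 potential defects
```

Divide that estimate by size again and you get the defect density figure the paragraph above mentions – which is exactly why an accurate size measure is the prerequisite for every downstream metric.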
Having an accurate function point count for your critical apps is great. The problem is that manual counting is expensive, takes too much time, requires expertise you don’t have, and distracts your team from their day job.
The rest of this post is about how CAST’s function point automation works and how it solves the problems covered in the previous paragraph.
Over the last few years, we’ve really bulked up our function point counting capabilities. Think Arnold Schwarzenegger 1975 Mr. Olympia competition.
If you know a bit about function points, you know how incredibly hard it is to automate function point counts starting from source code as the input. That’s because function points capture functionality from the end user’s perspective. This functionality is encapsulated in calls from the GUI layer to the database layer.
To do what CAST does, you have to be able to analyze these calls and reverse engineer the end user experience! That’s a tall order, but that’s exactly what we did over the last 5 years of intense R&D and field testing.
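The idea of recovering end-user transactions from code can be sketched as a call-graph search: treat each GUI entry point as a candidate transaction and check whether its call chain reaches the data layer. This toy example – with an invented call graph and naming convention, not CAST’s actual algorithm – is only meant to illustrate the shape of the problem:

```python
# Hypothetical call graph: caller -> list of callees. The "db." prefix
# marks data-layer functions; both names and structure are invented.
CALL_GRAPH = {
    "LoginForm.submit":    ["AuthService.check"],
    "OrderForm.save":      ["OrderService.create"],
    "AuthService.check":   ["db.select_user"],
    "OrderService.create": ["db.insert_order", "db.update_stock"],
}

GUI_ENTRY_POINTS = ["LoginForm.submit", "OrderForm.save"]

def reaches_database(node, graph, seen=None):
    """Depth-first search: does this call chain end at the data layer?"""
    if node.startswith("db."):
        return True
    seen = set() if seen is None else seen
    seen.add(node)
    return any(reaches_database(callee, graph, seen)
               for callee in graph.get(node, []) if callee not in seen)

# Each GUI entry point whose calls reach the database is counted as
# one end-user transaction.
transactions = [e for e in GUI_ENTRY_POINTS
                if reaches_database(e, CALL_GRAPH)]
print(len(transactions))  # 2 -- both entry points reach the data layer
```

Real function point counting then weighs each transaction by the data elements and files it touches; the hard part CAST describes is building that call graph reliably across heterogeneous languages and frameworks.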
This intense effort has led to three key breakthroughs.
Breakthrough #1: Micro function points. The CAST function point counting algorithm is sophisticated enough to count micro-function point changes — the result of small enhancements that can quickly add up. These are impossible to count manually, but they’re easily picked up by CAST’s automation.
Breakthrough #2: Enhancement function points. Like Microsoft Word’s Track Changes capability, CAST remembers exactly which data and transaction elements have been added, modified, and deleted in a series of changes made to a project. So you no longer have to worry about overlooking work that is necessary but doesn’t change the net function point count.
Breakthrough #3: Calibration with function point experts in the field. We’ve been working with partners like David Consulting Group to ensure our automated counts are well within the accepted variance of counts.
Fast, low cost, benchmarkable function point counting? Automated size measurement is the answer!

Run Your Apps Through C-A-S-T!
(Sung to the tune of YMCA by the Village People)
IT, there’s no need to feel down.
I said, IT, pick yourself off the ground.
I said, IT, ’cause your website is down
And your CIO has left town.
IT, there’s a place you can go.
I said, IT, when your uptime is low.
They will heal you, since I’m sure they will find
All the bugs that cause your downtime.
Let’s run your apps through the C-A-S-T.
Let’s run your apps through the C-A-S-T.
They know everything about Java and SAP,
They can tell when your code is crap …
Let’s run your apps through the C-A-S-T.
Let’s run your apps through the C-A-S-T.
You can get your code clean, you can check your SI,
It will help you with CMMI …
IT, are you listening to me?
I said, IT, how bad can your code be?
I said, IT, you write terrible C.
But you got to know this one thing!
No man finds all bugs by himself.
I said, IT, put your pride on the shelf,
And just go there, give your systems to CAST.
They will find your flaws so damn fast.
Let’s run your apps through the C-A-S-T.
Let’s run your apps through the C-A-S-T.

They know everything about Java and SAP,
They can tell when your code is crap …
Let’s run your apps through the C-A-S-T.
Let’s run your apps through the C-A-S-T.
You can get your code clean, you can check your SI,
It will help you with CMMI …
IT, when the big bugs get missed.
I said, IT, then your QA gets dissed.
I said, IT, ’cause the business is pissed.
They put you atop their s**t list …
That’s when IT is just way out of luck,
And our VPs are just passing the buck,
And our coders they have all run amok,
And our apps they all really suck …
Let’s run your apps through the C-A-S-T.
Let’s run your apps through the C-A-S-T.
You can get your code clean, you can check your SI,
It will help you with CMMI …
C-A-S-T … we’ll find your bugs with the C-A-S-T.
IT, IT, there’s no need to feel down.
IT, IT, pick your code off the ground.
C-A-S-T … we’ll check your apps with the C-A-S-T.
IT, IT, are you listening to me?
IT, IT, send CAST your Java and C.
C-A-S-T … we’ll measure them with the C-A-S-T.
IT, IT, all your bugs will be found.
IT, IT, all your apps will be sound.