
rewt66 asks: "We are looking for a good static analysis tool for a fairly large (half a million lines) C/C++ project. What tools do you recommend? What do you recommend avoiding? What experience (good or bad) have you had with such tools?"

If you have 500k lines in a single project, consider refactoring it into separate libraries so you can divide and conquer. And if you have 500k lines of code, consider cleaning it up and refactoring it generally; fewer lines of code are more impressive than more.

That's great and all, but some things just take a lot of code. Refactoring into libraries only goes so far: you're still going to have a ton of code; it'll just be split up into libraries. That's useful, and it's good advice, but since the poster didn't ask about it, you could at least give him the benefit of the doubt and assume the project is already organized appropriately. Half a million lines isn't that big -- certainly not big enough to automatically assume the codebase is organized badly.

Yes, some things take a lot of code, but more often than not the excess is the result of new coders contributing to a project without really grasping the big picture. So they reinvent the wheel, or pile way more code onto what should be a simple task. For example, I worked on DB2 for a while. I routinely saw 3000-line files that implement such complicated things as hash lists. Then there was another 2000-line file that performs modular reduction in a dozen different ways because

I agree that much code is far longer than it needs to be, but I don't think it's fair to equate this with large projects.

IME, large projects (over a million lines, say) often get that way because they have been built around some sort of framework, and the boilerplate code pushes the line count up. When you get past a certain scale -- more than a handful of developers, or with the team split across multiple geographic locations, that sort of thing -- such frameworks can be very valuable in retaining a sane

Part of IBM's problem is turnover. Many of the developers are new to DB2 and fresh out of uni. The hash template I saw was a prime example of "I found this in a textbook somewhere." It was complete overkill, since it's only used to hash arrays of bytes (why a template?), and the Montgomery reduction used to perform the bucketing is not needed, since the hash is invoked only upon startup/shutdown. Whoever wrote that code obviously failed "problem statement" 101. Worse yet, the code had bugs in it and wasn'
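For perspective, a complete byte-array hash is maybe a dozen lines of plain C. FNV-1a below, chosen arbitrarily as an illustration (my sketch, not DB2's actual code):

    #include <stdio.h>
    #include <stddef.h>
    #include <stdint.h>

    /* FNV-1a over a byte array -- the whole job in a dozen lines. */
    static uint32_t hash_bytes(const unsigned char *buf, size_t len)
    {
        uint32_t h = 2166136261u;            /* FNV offset basis */
        for (size_t i = 0; i < len; i++) {
            h ^= buf[i];
            h *= 16777619u;                  /* FNV prime */
        }
        return h;
    }

    int main(void)
    {
        const unsigned char key[] = "some key";
        printf("%u\n", (unsigned)hash_bytes(key, sizeof(key) - 1));
        return 0;
    }

No template, no modular-reduction tricks -- which is rather the point.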

Part of IBM's problem is turnover. Many of the developers are new to DB2 and fresh out of uni. The hash template I saw was a prime example of "I found this in a textbook somewhere." It was complete overkill, since it's only used to hash arrays of bytes (why a template?), and the Montgomery reduction used to perform the bucketing is not needed, since the hash is invoked only upon startup/shutdown.

I have to stop you there. Turnover among DB2 developers, at least in my area, is almost zero. Most of the developers around me have 5 or more years of experience, and some have been with the project for 20-plus years.

Now, we do hire a fair number of IIP students each year for 16-month sessions -- maybe you were surrounded by students.

In my experience, DB2 concentrates on functionality, stability and performance. Code size is tackled when it impacts one of those areas and is otherwise unimportant.

OpenOffice, for all its virtues, is a SHITTY PROGRAM that no sane, properly experienced developer would have come up with.

I don't doubt OO is shitty -- I wouldn't poke it with a stick. But one important thing to realize is that smart people end up writing shitty programs all the time.

For example, I once tested an API that was obviously designed and written by utter morons. Yet each time I had to talk to one of the programmers, or their manager, I was pleasantly surprised. They were smart, committed, had the

I assumed that he meant lines of C and/or C++ code. Look at something like my LibTomCrypt. It covers a wide range of cryptographic algorithms, yet it's only ~48K lines of code, quite a bit of which are tables for the ciphers/hashes. There are also plenty of comments, etc. Of actual code there is probably only ~30K or so.

And in that 30K I do symmetric ciphers, hashes, PRNGs, MACs, RSA (with PKCS #1), ECC (DSA/DH), DSA (DSS) and a decent subset of ASN.1.
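For scale, here's roughly what using it looks like -- a SHA-256 digest via the LibTomCrypt hash API, written from memory, so double-check the names against the current headers (link with something like -ltomcrypt):

    #include <tomcrypt.h>

    int main(void)
    {
        const unsigned char msg[] = "hello world";
        unsigned char out[32];                 /* SHA-256 digest size */
        hash_state md;

        /* init / process / done -- the pattern every hash in the library follows */
        sha256_init(&md);
        sha256_process(&md, msg, sizeof(msg) - 1);
        sha256_done(&md, out);
        return 0;
    }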

Why would you put MP3/FLAC/Vorbis/etc. in the same project? Why not just link them in like you're supposed to? As for MP3 codecs [and probably Vorbis], most of that is unrolled DCT-like transforms and tables.

That's, I think, part of the problem: people think they have to have all of the source in one build to make a project.

A hello world program execution is the result of a kernel, shell, standard C library, etc... none of which you count as lines of code in the program.
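To illustrate "just link them in": with a library like zlib, none of the library's source lives in your tree; you include its header and link against it (something like cc app.c -lz). A minimal sketch:

    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>   /* the library's header -- zlib's source is not in your project */

    int main(void)
    {
        const char *msg = "hello hello hello";
        Bytef out[128];
        uLongf outlen = sizeof(out);

        /* compress() comes from libz; you link it rather than building it */
        if (compress(out, &outlen, (const Bytef *)msg, strlen(msg)) != Z_OK)
            return 1;
        printf("compressed %u -> %lu bytes\n",
               (unsigned)strlen(msg), (unsigned long)outlen);
        return 0;
    }

None of zlib's thousands of lines count against your project's line count, any more than the kernel or libc do.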

Who said anything about putting all of the source into one build? The OP said he had 500Ksloc of code that were used to build his project. He never said it was all in one module! It seems to me you (and just about everyone else) have made a ridiculous assumption about his codebase and just run with it. I've seen projects in the 1.5Msloc range, but they were broken down into 1500+ different modules to make them manageable. It was all homegrown, because there were no free or commercial alternatives to any of the

There's nothing wrong with having lots of code in a project. A solution with 1000 libraries of 500 lines each is no better. Don't break stuff up just for the sake of not having a lot of code in a project. Break it up and refactor it if it NEEDS it for context/architecture/organization reasons.

Chances are very good that if you have >100K lines of code, and they're not all tables or just plain wasted white space, you have functionality that can be broken off and reused through a library. Do you even know what 500K lines of code is? That's a ridiculous amount of code. If you look at things like the kernel or GCC, they're already split up into mini libraries inside the host project. So yeah, all of GCC may be several million lines of code (I don't know the exact numbers) but it's not just

Disclaimer: I have never used this tool and actually know relatively little about it. However, my current research uses other software the same company makes (CodeSurfer) and is very much tied to this company, and I have an internship with them this summer. The company was started by my adviser and his adviser, employs a couple former advisees of my adviser, etc.

I found the static analyser in SGI's Prodev Workshop to be quite excellent, though that was a while ago and I am comparing it with nothing -- I'm not sure how it stacks up against more recent offerings.

If you are on Windows, you can use the native C++ static analysis that comes with the Windows SDK. Just add the /analyze switch when invoking the compiler (cl.exe). It's the tool that MS uses to test its own code, known internally as PreFast. It helped me find many bugs in other people's code.

/analyze is pretty good. If you're using one of the more expensive editions of Visual Studio, support for /analyze is built into the IDE and very convenient. With the latest versions of the Windows SDK, /analyze becomes much more powerful. /analyze has built-in models for the behavior of some CRT-defined functions, but all other functions are black boxes. The newest CRT and Windows SDK headers (as well as any .h files generated by a recent version of MIDL) have all been annotated with "SAL" annotations that
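To illustrate the idea (a sketch only; SAL spellings and the exact diagnostics have varied across SDK and compiler versions): annotate a function's buffer parameter, and cl /analyze can check callers against the stated contract.

    #include <sal.h>
    #include <string.h>

    /* The annotation tells /analyze that 'buf' must be at least 'len'
     * bytes and will be fully written. */
    void fill_buffer(_Out_writes_bytes_all_(len) char *buf, size_t len)
    {
        memset(buf, 0, len);
    }

    int main(void)
    {
        char small[4];
        fill_buffer(small, 16);   /* cl /analyze should flag this overrun */
        return 0;
    }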

I too have used Coverity, but I wasn't as impressed with it, especially considering the price. It is better than lint, but it's not that much better. Expect to get a lot of false positives. We used it once on a large set of code from a company we acquired. Since none of us were very familiar with the code, and the code had a lot of stability problems, the thought was that it might help us find some of the more elusive bugs and improve the stability of the software.

If you're a business, there's also KlocWork [klocwork.com], which seems to work well enough. A bit pricey, and it can't be installed for home use, but enterprise use is quite nice (hint: it's a competitor to Coverity). I heard they may offer F/OSS scanning as well. One of the nice things is that you can disable a warning on a block of code once it's been verified as a false positive, so a subsequent scan won't bring it up again.

While not disputing Coverity's features, I feel you should discuss other tools you've used in comparison to Coverity and describe why you reached the conclusion that it's "the best framework for creating custom tests that I have ever come across".

I've never tried it on a code base as large as 500k. My guess is that I used it on up to 15k. I was very pleased with it. I agreed with just about every warning it raised, and was able to easily suppress individual instances or whole classes of errors. I also found it somewhat easier to get started with compared to the big tools from Rational et al.

I think it's a bit pricey for an open-source coder like me, but it should be cheap enough for a company with a tools budget.

Pricey? Well, it's not free, but it's almost free compared to Coverity or high-end tools like that. And it really does some very clever checks. You get a lot of bang for your static analysis buck. I've been using PC-Lint for over 10 years now. I think it's made me a better programmer.

I love PC-Lint, but I really do wish its handling of C++ was better. It was really rough at first, generating all kinds of false errors on even the most harmless-looking template code. It's better now, but it still has a lot o

I have to agree with this recommendation (Gimpel lint). A few points, though:

- It is purely text-based, so if you are looking for a shiny GUI-based tool (easier to sell to the PHB), you are out of luck.

- depending on the quality of your code, running it for the first time can result in a huge (make that HUGE) amount of warnings. You might want to start small and only turn on more and more options later. Initially, you will have to invest quite a bit of time to get your code "lint-clean". In the long run, thi

I'd agree with the recommendation, and FWIW I work on a project with over 1,000,000 lines of C++ code.

I also agree with the warnings from others about Lint being a bit verbose until you shut off a few stylistic things you might not care about, which fortunately is easy to do.

I also also agree with the caveat about false positives with non-trivial C++ code: sometimes it just plain misunderstands and gives incorrect warning/error messages. It's been improving steadily in recent versions, though, and the v
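To echo the "easy to do" point above: suppressing a message in PC-Lint is a one-liner, as best I recall the option syntax (verify the message numbers against your lint manual; -e534 in a project .lnt file turns a message off globally):

    /*lint -esym(534, printf)  never warn about ignoring printf's return value */
    #include <stdio.h>

    int main(void)
    {
        printf("hello\n");                 /* would otherwise trigger message 534 */
        printf("world\n");  //lint !e534   alternative: suppress on this line only
        return 0;
    }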

Ditto this. Used it on ~850,000 lines of code. Takes some doing to get it configured to flag what you want, and not what you want to ignore. But a great tool. Customer support was fantastic. Reported a bug on ATL template analysis and it was fixed within 2 weeks.

Whatever you use, make sure you adjust the settings to only capture those problems that you think are critical. With 500k lines of code, unless your codebase is *extremely* solid, running a lint tool will result in a LOT of action items. I've used SPLINT (a lint for secure programming -- http://www.splint.org/) on a project with a codebase much smaller than 500k, and it took weeks to finish addressing all the issues -- sometimes these things can be more of a curse than a blessing.
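Part of why SPLINT generates so many action items is that it leans on source annotations, and unannotated code trips its checks constantly. A minimal sketch (my example, not from the project above):

    #include <stdlib.h>

    /* The annotation tells splint the return value may be NULL,
     * so callers must check it before dereferencing. */
    static /*@null@*/ char *make_buffer(size_t n)
    {
        return malloc(n);
    }

    int main(void)
    {
        char *p = make_buffer(16);
        if (p == NULL)       /* drop this check and splint flags the write below */
            return 1;
        p[0] = '\0';
        free(p);
        return 0;
    }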

I work on a C/C++ code base that is a lot bigger than 500k lines. I've worked with results produced by Klocwork [klocwork.com] and also with the output from Reasoning [reasoning.com]. Both of these services/packages will cost you money, but both provide good insight into your code. The commercial packages generally produce more focused results with fewer false positives, so while they cost you money up front, your developers will spend less time weeding out the noise.

If paying money out for a commercial package isn't your thing, don't overlook the old standby lint, or splint [splint.org], an updated successor.

Also well worth investigating, to see how your code is actually running, is Valgrind and its associated tools [valgrind.org]. The Valgrind toolkit will give you a good idea where memory is being leaked and where variables and pointers are going off the rails. Valgrind hooks into a running program, so it's important to make sure that you test all the corners of the codebase if you go this route.

Valgrind hooks into a running program, so it's important to make sure that you test all the corners of the codebase if you go this route.

One minor clarification: valgrind can't attach to an already-running program the way a debugger can. Valgrind is actually an x86 emulator, so you have to ask valgrind to execute your program from the very beginning.
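A minimal sketch of that workflow: build a deliberately leaky program with debug info (cc -g leaky.c), then launch it under valgrind from the start, e.g. valgrind --leak-check=full ./a.out:

    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *p = malloc(64);   /* never freed */
        strcpy(p, "leak me");
        return 0;               /* valgrind reports 64 bytes definitely lost */
    }

Because valgrind only sees code that actually executes, a leak on a path your tests never exercise goes unreported -- hence the advice above about covering all the corners.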

There are many software tools out there for static analysis, but they differ in what they do and who they target as their customer. The big names in my mind are Coverity, Fortify, Prexis, and PolySpace. I only have personal experience with Prexis and PolySpace, so I will just speak to those.

One important thing to consider is the set of compilers, tools, target systems, and build environments you are using. If you are using MS-only products then you will most likely have very good support, because almost all source code analysis suites will simply import the build information and you will be off and running right away. If your environment is Unix or embedded systems, then things may be more difficult, because you will need to hook into the build process somehow. The scanner tools usually intercept the CC command from a "make" build and call their back end, with their custom processing, rather than the compiler proper. Different products do this in different ways, so be sure the product you choose knows how to deal with your specific build environment. In my case I walked into another party's environment and needed to simulate a build for a build environment that I had never seen before, every time. No environment ever looked like the next, so the setup and configuration was always a big challenge, just to get started.
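To make the CC-interception idea concrete: these products ship a wrapper that you substitute for the compiler at make time, so each translation unit gets recorded for the analysis back end before the real compiler runs. The wrapper name below ("scanner-cc") is made up purely for illustration; the real name and invocation vary by product, so check the vendor's docs.

    # Hypothetical wrapper: logs each compile for the analyzer, then
    # forwards the command line to the real compiler.
    make CC="scanner-cc gcc" CXX="scanner-cc g++"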

Prexis is primarily a tool for life-cycle scanning of source code for security issues. There are two ways to perform the code scanning: with the main engine component, which can schedule nightly scans and track progress over time, or with the additional Prexis Pro utility, which is designed for quick assessments by engineers on their own code without logging everything into the main database. The Pro tool worked best for my code assessments, since I had no need for tracking changes over time, and it was a little easier to configure, which counts for a lot in my situation.

PolySpace is a completely different tool, with a different purpose from Prexis. PolySpace attempts to mathematically discover runtime flaws in the code while only using static analysis to do so. It does a great job on smaller projects, but because of the complexity and thoroughness of its analysis, it is somewhat slow. PolySpace needs to evaluate an entire application all at once in order to do a good analysis. If your 0.5 MSLOC of code is many separate programs/executables then you will be fine, but if you are talking about one huge monolithic application then you may have to evaluate it in chunks, which just increases the false positives and forces the engineer to do more manual chasing of details to determine whether the issue is really a problem or not. From what I have seen, this product is in a class by itself.

PolySpace attempts to mathematically discover runtime flaws in the code while only using static analysis to do so. It does a great job on smaller projects, but because of the complexity and thoroughness of its analysis, it is somewhat slow.

Last time I heard, PolySpace didn't do C++ -- just C and some random toy language (Java or Ada?). Cool, but extremely expensive.

Regardless of what tool you select, you will have to decide what rules you want to apply and what you are trying to get out of using the tool. If management doesn't understand the purpose of the tools, they may make inappropriate decisions about how to use them. As an example, I worked on a large project (hundreds of developers), and management decided that we needed to use a static analysis tool and that code had to be "clean" before it could be checked in. It was phased in, so we had a month to eliminate

Shameless commercial plug here... I'm the CTO of Klocwork (www.klocwork.com), a vendor of source code analysis tools. We provide security vulnerability and implementation defect checking for C, C++ and Java. In addition, as others on this thread have stated, you're going to want to look at refactoring, architectural analysis, rule tuning, metrics, trends -- all the usual stuff, all of which we supply as part of our enterprise suite of products. Check your supplier list carefully, as all of the companies in

If you are researching this for your enterprise, I suggest you evaluate Klocwork (and its competitors: Coverity, Grammatech, Parasoft; there are others). We handle large-scale C/C++ projects; our own codebase is much larger than yours, and we run Klocwork in-house to track defects in our own code, on a daily basis and on developer desktops for subprojects. In fact, we have successfully handled mammoth projects as big as 10M lines of code and beyond (but frankly, it is getting rather trick

I'm working on a project that's evolved over several years and there's been high turnover among the developers. We use a product called Understand for C++ [scitools.com]. It has a lot of great reverse engineering, metric generation, and source browsing features that make it pretty useful.

I recently used this at my last job for max stack depth analysis. A good tool, I must say, and fairly cheap as well for a corporate budget. The user interface is a bit rough, not very pretty, but I hear it's improved a lot over the years. I think I would insist on getting a WebEx demo or something, since I doubt I even touched 2% of the program's features.

I've used it to go into old code and figure it out, so that I could know where to make the changes I needed to make and know what else would be affected by those changes. The interface is getting a lot better.