Sometimes you need a handy function to convert a number of bytes to a human-readable size. I had written a C++ version, but then found better logic on Stack Overflow implemented in C, so I decided to implement a similar function in C++.

[code lang="cpp"]
/**
 * Function to convert a number of bytes to higher units.
 * Inspired by the C version of the source from:
 * http://stackoverflow.com/questions/3898840/converting-a-number-of-bytes-into-a-file-size-in-c
 */
[/code]

Every now and then at work I come across the need to instrument a new project with code coverage, and as it happens I end up struggling to recall how I did it last time. Google does come to my rescue, but most of the posts deal with how to instrument the code and process basic reports; none of them deal with best practices as such. So here is a TL;DR version of the best practices, in my opinion, which I plan to refer to the next time I need to do this. These best practices are for a situation similar to the one I’m facing and will need to be tweaked for other cases.

Rules of engagement:

Need to instrument C/C++ code.

Work with the standard GCC 4.4 compiler and lcov-1.12 (these are the versions I’m using, but other versions may work).

The target system where the binary is executed is different from the system where the source is compiled.

There are multiple versions of target devices, and different code paths get executed on each.

Need to generate a combined coverage report from all the target devices.

For this example, assume my work-space root is /home/jpadhye/project and the source is contained in the directories comp1 and comp2.

For this example, the compiled objects are stored under the work-space root in the directory obj/x86_64.

Steps for instrumentation:

1] If you are using a GNU makefile (let’s assume it is target.mk), you have lines like the following:
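The exact lines depend on your build system; a typical fragment, assuming the $_TARGET_CODE_COVERAGE switch referenced in step 3] (the variable names here are illustrative), might look like this:

```make
# target.mk -- enable gcov instrumentation when _TARGET_CODE_COVERAGE=1
ifeq ($(_TARGET_CODE_COVERAGE),1)
CFLAGS   += -fprofile-arcs -ftest-coverage
CXXFLAGS += -fprofile-arcs -ftest-coverage
LDFLAGS  += -lgcov
endif
```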

3] Then compile your source code with code coverage enabled. Ensure that $_TARGET_CODE_COVERAGE is set to 1 to enable code coverage. After the compilation is done, you will find, for each instrumented source file, a *.gcno file with the same name and under the same directory structure as the source file. In my case, the object files are collected in the ‘obj/x86_64’ directory in the work-space root, with the source directory structure maintained. These files are used while generating the coverage information. If they are not created, then something went wrong in your instrumentation; check the above steps.

Steps for execution:

In my case, the target execution environment was a separate device, different from the device on which the code was built. So these steps are written for my use case, but they also apply where the build and execution machines are the same.

1] When the code gets compiled with code coverage enabled, the coverage files contain the full path. For example, if your work-space root directory is ‘/home/jaideep/project’, then the complete absolute file path of each source file gets recorded. When the code gets executed on the target machine, the *.gcda files containing the coverage information will maintain the same directory and naming structure as the source files. But if you want the coverage information to be generated in a specific folder, then you need to strip the prefix of the work-space directory and specify an alternate prefix for the work-space directory structure as follows:

This ensures that the directory structure from the project directory onward gets generated in the directory specified by GCOV_PREFIX.

2] Once the environment is set, we execute the instrumented binary and run the required tests to generate the coverage information. The gcov runtime writes the coverage data out when the program exits normally, so once you are done, terminate the process gracefully with a catchable signal such as SIGTERM: ‘kill -SIGTERM <pid>’. (Note that SIGKILL and SIGSTOP cannot be caught, so they will not cause coverage data to be written.) If you want to generate code coverage for each test case without killing the process, you can have the program install a handler for a signal such as SIGPROF that flushes the coverage counters, and then call ‘kill -SIGPROF <pid>’ to make the process dump the coverage information. Either way, the coverage information is dumped in the form of *.gcda files with the same directory structure from the project root onward.

3] Once the coverage information is generated, compress the folder containing the *.gcda files into a tarball and copy it to the root of your work-space on the build machine.

Generating HTML report:

To generate the HTML coverage report, ensure all the coverage information tarballs from your target devices are present. Currently my script handles information from two target hosts, but the logic could be extended to handle more. The script’s arguments are explained below:

[code lang="bash"]
#host1_report.tgz: First argument is the coverage tarball copied from the first target device.
#host2_report.tgz: Second argument is the coverage tarball copied from the second target device.
#test_name: This will show up as the report name in the HTML report.
#pattern: The pattern you are interested in. For example: if I only want coverage for comp1, I'll give '*/comp1/*'
#output_location: Location where the HTML report is expected. This should be the document directory of the webserver.
#Example:
./generate_coverage_html.sh target1-lnx.tgz target2-lnx.tgz lnx-app-ut1 '*/comp1/subdir/* */comp2/subdir/*' /var/www/htdocs/
[/code]

The annotated script to generate the report is as follows:

[code lang="bash"]
#!/bin/bash
#Run this script from the work-space root
set -o errexit
set -v
set -x

#Define temporary directory where coverage metadata will be stored.
WORK_DIR=${PWD}/obj/codecoverage

#Parse the user input. This part taken from:
#https://github.com/socialize/socialize-project-helpers-ios/blob/master/generate-coverage-report.sh