Embedded Software Development: Put Code Metrics to Work for You

You’re working on an exciting new feature for an embedded system. The hardware is looking good. You turn your attention to the firmware, and… chaos! The code has served faithfully for years, but it’s grown in a piecemeal fashion over time and become difficult to maintain. A change that once needed just a few lines of code now requires a dozen updates spread over several files. Where do you start?

One technique that can help you answer that question is careful application of code metrics. Code metrics are properties of your code such as size, complexity, or test coverage. Knowing these can be incredibly useful for getting that first toehold on a mountain of code. Misapplying them can kill productivity and morale. In this article, we’ll look at a couple of the more common code metrics and see how they can help you.

Cyclomatic Complexity

First, let’s turn to one of the oldest and most frequently used code metrics: complexity. As engineers, we develop an intuitive sense of how difficult a particular piece of code is to understand. Since the earliest days of software engineering, researchers have proposed ways to quantify this sense with a single number.

As a crude first approximation, you might consider the size of a particular function (in lines of code) as a measure of its complexity. Indeed, as Khaled El Emam demonstrated in his 1999 paper on code size, you wouldn’t be too far off. Large subsystems tend to have more bugs than small ones do. But it’s not really the number of lines of code that confounds program maintenance; it’s the number of paths through it. That’s why Thomas McCabe proposed a path-oriented metric called cyclomatic complexity in his 1976 paper, A Complexity Measure.
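
If all you want is that crude size measure, no special tooling is required; here's a quick sketch using standard Unix commands (the src/ path is illustrative):

wc -l src/*.c | sort -n

The files that sink to the bottom of the sorted list are a reasonable first place to look.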

McCabe gave his formula in graph-theory terms, with dots and lines representing the decision points and paths in your code. But a text-oriented version of this metric is a little more intuitive, and it arrives at the same result.

Assuming a function has a single entry and exit point, you can get its cyclomatic number by counting the number of decisions and adding 1. For our purposes, a decision is an if statement, for loop condition, case statement, Boolean operator, and so on—basically, any place where your program’s execution might branch one way or the other.

For example, consider the following C function:

#include <assert.h>    /* for the unit test shown later */
#include <stdbool.h>

typedef enum Color { GREEN, RED } Color;

Color led_color(bool voltage_in_range, bool contacts_closed) {
    Color result;

    if (voltage_in_range && !contacts_closed) {
        result = GREEN;
    } else {
        result = RED;
    }

    return result;
}

This function has two decisions in it: the if statement and the Boolean && condition. Add 1 to the total, and the result is a complexity of 3.

You can verify this calculation by downloading an open source tool like pmccabe and running it on your code:

pmccabe main.c

Modified McCabe Cyclomatic Complexity
|   Traditional McCabe Cyclomatic Complexity
|       |    # Statements in function
|       |        |   First line of function
|       |        |       |   # lines in function
|       |        |       |       |  filename(definition line number):function
|       |        |       |       |           |
3       3        5       6      11       main.c(6): led_color

Once you’ve calculated cyclomatic complexity, what do you do with the information? This question has nearly as long a history as the metric itself. The software industry went through a phase where researchers tried to find the maximum acceptable complexity number across all companies and projects. Presumably, teams would then ensure that no code ever exceeded that value.

There are two big problems with treating complexity as a target to achieve. First, you can trivially game this metric by breaking a program up into thousands of tiny functions that each do practically nothing. The result is a low complexity score, but a tangled mess of code. Second, whenever you make a change purely in the name of pacifying the code metric, you risk breaking the code.
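
To see how easily the score can be gamed, here is a contrived reworking of the led_color() function from above (the helper names are invented for illustration; it reuses the earlier Color type):

static bool is_safe(bool voltage_in_range, bool contacts_closed) {
    return voltage_in_range && !contacts_closed;  /* complexity: 2 */
}

static Color green(void) { return GREEN; }  /* complexity: 1 */
static Color red(void)   { return RED; }    /* complexity: 1 */

Color led_color(bool voltage_in_range, bool contacts_closed) {
    /* complexity: 2 (the ternary decision) */
    return is_safe(voltage_in_range, contacts_closed) ? green() : red();
}

No single function now scores above 2, yet the program makes exactly the same decisions as before; the branching has simply been scattered across four functions for the reader to chase.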

A much healthier approach is to use cyclomatic complexity as a spotlight to identify trouble areas in the code. Here's an example. On one recent project, our team noticed a very high complexity score in what was supposed to be a purely mathematical section of the program. It turned out that a recent change had accidentally introduced a dependency on several other parts of the system.

The team sat down together and found a much better place for that new logic. Not only did the math routine become simpler, but the overall system became more maintainable. All the logic representing external state was now in one relatively small file.

Test Coverage

Any time you make a change to your code—even for a good cause like reduced complexity—there’s a chance of introducing an error. You may hope to refactor the code base to make it more maintainable, but unless you have a decent suite of automated tests, you’re not refactoring—you’re just pushing code around and hoping for the best. Unit tests are the first line of defense against errors during this process.

Of course, any particular isolated test is only going to run a small portion of your project code. If the bug lies elsewhere, that single test won’t catch it. With enough carefully written tests, however, you can give most of your project code a basic workout in a single quick pass.

The question then becomes: how do you determine whether you've written your unit tests carefully enough? One way is to run the tests under a profiler and observe which lines execute and which don't. The resulting metric, test coverage (or more specifically, line coverage), is the percentage of lines of code that the unit tests exercise.

For example, consider the following simple test for the function we examined earlier:

/* Added to main.c, below led_color() */
int main(void) {
    assert( led_color(true, false) == GREEN );

    return 0;
}

To see the coverage for this test using the GCC compiler, all you have to do is pass in a couple of extra flags during compilation, then run the report generator after your program is done:

gcc -fprofile-arcs -ftest-coverage -c main.c
gcc -fprofile-arcs -o myprogram main.o
./myprogram
gcov main.c

The report will contain an annotated copy of your source code, like this:

    1:    6:Color led_color(bool voltage_in_range, bool contacts_closed) {
    1:    7:    Color result;
    -:    8:
    2:    9:    if (voltage_in_range && !contacts_closed) {
    1:   10:        result = GREEN;
    1:   11:    } else {
#####:   12:        result = RED;
    -:   13:    }
    -:   14:
    1:   15:    return result;
    -:   16:}

Each line of the program is marked with the number of times it ran. A hyphen indicates a comment, blank line, or other non-code line. Any line tagged with ##### didn’t get run during the test.

What target coverage metric should you shoot for? As with code complexity, that’s asking the question backwards. If you pick a percentage and blindly try to achieve it, you may end up with a false sense of confidence about the code. For example, if you add one more test case to the example, you can increase coverage of the led_color() function to 100 percent:

    assert( led_color(false, false) == RED );

This one additional test case isn't sufficient to check the function's behavior completely. The second half of the Boolean condition never drives the outcome: no test calls led_color() with contacts_closed set to true. You could delete that clause entirely and still get 100 percent line coverage, but end up with a malfunctioning program.
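
A test case that forces the second operand to decide the outcome closes this gap. Here is a hypothetical addition to the same main():

    assert( led_color(true, true) == RED );

With the voltage in range but the contacts closed, the correct answer is RED, so deleting the !contacts_closed clause would now cause an assertion failure.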

Even though the modified tests ran every line of the code, they did not exercise every possible path through the program. You could try to achieve a more stringent coverage metric, like branch coverage or path coverage. But for all but the most trivial programs, you'd see an explosion in the number of test cases you'd have to write and maintain.
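
If you do want to experiment with branch coverage on this example, gcov can report it via its -b flag; a minimal sketch (the exact output varies by GCC version):

gcov -b main.c

The branch summary makes untaken outcomes of the && condition visible even when line coverage reads 100 percent.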

Even if you somehow achieved 100 percent path coverage, that still wouldn’t be enough to catch all the potential bugs. As Robert Glass points out, your tests might run every line in your program, but they can’t run the lines you didn’t write.

A much more helpful question about code coverage is: what does it reveal about my project? The answer is that it finds potential hiding spots for bugs. If I know a particular function doesn’t get exercised at all during unit testing, I’ll know to give it extra attention after a code change—by code review, by adding better unit tests, or at the very least by exercising that feature more during integration testing.