Don't argue with puppies

Using "doing X [figuratively] kills puppies" as an argument for your case is not helpful. Rather explain why X is considered an anti-pattern and in what cases it may be still permissible. This gives the other person a possibility to reason about the guidance and probably increases the chance that your argument will be accepted.

This is the sixth time that I am tutoring in the programming course for the bachelor students. It is a 10-day program that teaches C to beginners. The pace is tough, but every element of C gets covered, except perhaps union, volatile, and extern. In the tutorial classes I am there to answer questions.

One pattern that I occasionally see when listening to other tutors and lecturers explain is this:

Don't do X. If you do X, puppies will die! And those are cute baby puppies with eyes this big. You don't want to harm them, do you?

I can understand that one resorts to this in order to drive a point home. But it also sends two subtle messages:

  1. "I don't have a good argument for my case."
  2. "You wouldn't understand the complicated reasoning, just accept that X is to be avoided at all cost."

It feels like a parent saying "Because I said so!".

In physics, one usually teaches where things come from, such that one does not have to take a theory on faith; it is shown either by experiment or by formal reasoning that the theory makes sense. That way one knows which preconditions are needed and what could go wrong. Programming is similar. There are a bunch of anti-patterns that one should avoid, and in every case there are reasons why the pattern is bad. Not stating those reasons does not really help the participants of the course.

Let me illustrate with a few examples.

First Example: Heron's square root

On the second day, one exercise asks to implement Heron's algorithm to compute the square root. The algorithm generates a sequence $(a_n)$ with the property that $a_n \to \sqrt{r}$, where $a_0 = r$ and

$$a_{n+1} = \frac 12 \left( a_n + \frac{r}{a_n} \right) \,.$$

A typical program by a participant might look like this:

#include <math.h>
#include <stdio.h>

int main() {
    double radicant = 2;
    double cur = radicant, prev = -1;

    while (1) {
        if (fabs((cur - prev) / radicant) < 1e-10) {
            break;
        }

        prev = cur;
        cur = 0.5 * (prev + radicant / prev);
    }

    printf("sqrt(%lg) = %lg\n", radicant, cur);
    return 0;
}

This program does implement it correctly. It is nicely formatted. Can it be improved? Depending on your taste, it can.

The statement in the course has been:

Never write while (1) and use break later on; puppies will die. It is not just that kittens would die, even cuter puppies will die. Don't do that. Ever!

Would that convince you? Would you stop doing it, even if it felt so convenient? And if at some point the simplest way to solve a problem involved this technique, would you still use it?

The problem is that such a statement does not give you any measure of the "badness" of a pattern. A solution that the person who made the above statement would probably approve of is this one:

#include <math.h>
#include <stdio.h>

int main() {
    double radicant = 2;
    double cur = radicant, prev = -1;

    while (fabs((cur - prev) / radicant) > 1e-10) {
        prev = cur;
        cur = 0.5 * (prev + radicant / prev);
    }

    printf("sqrt(%lg) = %lg\n", radicant, cur);
    return 0;
}

I do like this better, indeed. There is no nested if inside the while any more. But there is one thing I don't like: the variable prev is initialized with -1. That is a magic number, which is a bad thing.

Of course, I now owe you an explanation. The number $-1$ is chosen because I know that the algorithm will never generate negative numbers, and I also know that the comparison checks for a relative error of $10^{-10}$. Therefore I estimate that $-1$ is enough of a safety margin to always enter the while loop. That is what the $-1$ is for: to enter the while loop once so that I can assign prev a more sensible value. All of this is encoded in the prev = -1 statement.

Here I would use the do { ... } while (...); construct. With this I can safely leave prev uninitialized and still have the same condition.

#include <math.h>
#include <stdio.h>

int main() {
    double radicant = 2;
    double cur = radicant, prev;

    do {
        prev = cur;
        cur = 0.5 * (prev + radicant / prev);
    } while (fabs((cur - prev) / radicant) > 1e-10);

    printf("sqrt(%lg) = %lg\n", radicant, cur);
    return 0;
}

This is indeed more readable, I think. There are no more magic numbers (okay, the 1e-10 is still magic and should be extracted further).
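As a minimal sketch of that extraction, one could give the tolerance a name; the name relative_tolerance is my own choice here, not something from the course material:

#include <math.h>
#include <stdio.h>

/* Named constant instead of a magic 1e-10 in the loop condition.
   The name relative_tolerance is an assumption of this sketch. */
static const double relative_tolerance = 1e-10;

int main() {
    double radicant = 2;
    double cur = radicant, prev;

    do {
        prev = cur;
        cur = 0.5 * (prev + radicant / prev);
    } while (fabs((cur - prev) / radicant) > relative_tolerance);

    printf("sqrt(%lg) = %lg\n", radicant, cur);
    return 0;
}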

The argument for not using while (1) is that it makes programs harder to reason about. And that is a very good reason! If you work on a project for a couple of weeks and all your code is hard to reason about, you will have a very hard time finding errors in it. If you need all your smarts to write the program, then you don't have enough smarts left to find errors in it. Therefore try to write your programs such that you can understand them with a fraction of your smarts. Avoiding while (1) where it makes sense can make your code more readable.

Variable names

The variable names also make a huge difference. Would the algorithm be as easy to follow as above if the program were the following?

#include <math.h>
#include <stdio.h>

int main() {
    double a = 2;
    double c = a, b = -1;

    while (1) {
        b = c;
        c = 0.5 * (b + a / b);

        if (fabs((c - b) / a) < 1e-10) {
            break;
        }
    }

    printf("sqrt(%lg) = %lg\n", a, c);
    return 0;
}

The variable names a, b, and c do not tell the reader anything. You have to work through every single statement to get an idea of what is going on. And before anyone suggests it: adding comments does not make it any better; it rather makes it worse. One has to read the comments and keep the associations in mind. Quickly the reader will think:

a is the radicant, b is the current value, c is the last value, or the other way around? I have to read the comment again ...

Using somewhat more verbose names keeps you from this juggling. Everybody has only so much brain capacity; better to use it for the actual problem than for keeping those associations in one's head.
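To make the point about comments concrete, here is the same program again with comments standing in for good names (my own sketch, not a course example). Every use of a, b, and c still forces the reader to recall the mapping from the declarations:

#include <math.h>
#include <stdio.h>

int main() {
    double a = 2;       /* the radicant */
    double c = a;       /* the current value */
    double b = -1;      /* the previous value */

    while (1) {
        b = c;
        c = 0.5 * (b + a / b);

        if (fabs((c - b) / a) < 1e-10) {
            break;
        }
    }

    printf("sqrt(%lg) = %lg\n", a, c);
    return 0;
}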

Second Example: Euclid's greatest common divisor

Another example of the while (1) pattern is Euclid's algorithm for the greatest common divisor. A program as written by a participant might look like this:

#include <stdio.h>

int main() {
    int a = 12, b = 9;

    while (1) {
        if (a == 0) {
            printf("gcd = %i\n", b);
            break;
        }
        if (b == 0) {
            printf("gcd = %i\n", a);
            break;
        }
        if (a > b) {
            a = a - b;
        } else {
            b = b - a;
        }
    }

    return 0;
}

This is correct and gives the right results. Yet there is a while (1) in it. Transforming it would give the following:

#include <stdio.h>

int main() {
    int a = 12, b = 9;

    while (a != 0 && b != 0) {
        if (a > b) {
            a = a - b;
        } else {
            b = b - a;
        }
    }

    if (a == 0) {
        printf("gcd = %i\n", b);
    } else {
        printf("gcd = %i\n", a);
    }

    return 0;
}

There are a couple good things now:

  • The while loop is shorter. Having fewer lines of code to reason about in a single block is always good.
  • The while (1) is gone and no puppies die. Or kittens?

The condition on a is now replicated: there is an a != 0 in the loop condition and an a == 0 check afterwards. This is duplication, and we now have to figure out which of the conditions led to the exit from the while loop. I am not sure whether this is an improvement in every respect.
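As an aside, and not something the course exercise asked for, the duplicated check can be sidestepped entirely with the remainder-based formulation of Euclid's algorithm, because gcd(a, 0) = a. A minimal sketch:

#include <stdio.h>

int main() {
    int a = 12, b = 9;

    /* Remainder-based Euclid: since gcd(a, 0) = a, a single loop
       condition suffices and no duplicated check is needed afterwards. */
    while (b != 0) {
        int remainder = a % b;
        a = b;
        b = remainder;
    }

    printf("gcd = %i\n", a);
    return 0;
}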

Conclusion

On the technical side, it might be worthwhile to check whether while (1) constructs can be written differently. There might still be cases where the best time to check for the exit condition is in the middle of the loop. Then using a break there might be the cleanest way to write it.
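A typical case, sketched here under my own assumptions rather than taken from the course, is a "loop and a half" that prompts, reads, and only then knows whether it should stop:

#include <stdio.h>

int main() {
    double value, sum = 0.0;

    while (1) {
        /* Work before the check: prompt and read. */
        printf("Enter a number (0 to stop): ");
        if (scanf("%lf", &value) != 1 || value == 0.0) {
            break;  /* The exit condition is only known after reading. */
        }
        /* Work after the check: process the value. */
        sum += value;
    }

    printf("sum = %lg\n", sum);
    return 0;
}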

More importantly, I think it is not good to assume that people would not understand a logical argument. Some programmers are adamant about "good style", and rightly so, because there are good reasons behind those "best practices". But they should take the time to state those reasons instead of spending half a minute talking about killing puppies. Better to spend those 30 seconds making the case for code readability and maintainability. That will serve people who are just learning to program better in the long run.

Corollary

For most other things, there are good arguments:

Indentation

Correct indentation makes the structure of the code directly apparent. Putting closing braces on their own line makes them easier to find. Reading code should be easy, so that finding errors becomes easier too.

Spaces around operators

Makes reading easier, same as above.

Avoid global variables

Having global state makes reasoning about single units impossible. Locating an error therefore becomes harder and takes more time, and writing unit tests becomes harder as well. Global variables might save a bit of typing in the short run, but they will make the code harder to maintain.
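A minimal sketch of the difference, with names of my own choosing: the first function depends on hidden global state, the second takes the state explicitly and can be reasoned about and tested on its own.

#include <stdio.h>

/* Hidden dependency: any function in the program might change this,
   so reasoning about log_global requires knowing the whole program. */
int call_count = 0;

void log_global(const char *message) {
    call_count++;
    printf("%d: %s\n", call_count, message);
}

/* Explicit dependency: the state is passed in, so the function can be
   understood and unit-tested in isolation. */
void log_local(int *count, const char *message) {
    (*count)++;
    printf("%d: %s\n", *count, message);
}

int main() {
    log_global("using global state");

    int count = 0;
    log_local(&count, "using explicit state");
    return 0;
}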