Why leaking file descriptors is a problem

Whenever you open a file with fopen, you get a FILE *, which is your handle to the file you just opened. The common advice is to always call fclose on that handle once you are done, so that the file is closed again.

Leaving the file unclosed will not cause problems in most cases. I guess most people have forgotten the fclose or did not bother to write it at some point, and their programs did not crash. To find out when they do crash, I wrote this little test program. All it does is open a lot of files in a loop without closing them.

// Copyright © 2015-2016 Martin Ueding <dev@martin-ueding.de>

// Try to leak as many file descriptors as possible until the program fails.

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    char filename[100];

    for (size_t i = 0; i < 1500; ++i) {
        sprintf(filename, "/tmp/%010zu.txt", i);
        FILE *fp = fopen(filename, "w");

        if (fp == NULL) {
            printf("i = %6zu: failed!\n", i);
            return 1;
        }

        if (i % 100 == 0) {
            printf("i = %6zu: ok.\n", i);
        }
    }

    return 0;
}

You can download this file: leak.c

Then compile it. You could also use gcc if you like; this is standard ISO C. When I run it, I get the following output:

$ clang --std=c11 leak.c -o leak
$ ./leak
i =      0: ok.
i =    100: ok.
i =    200: ok.
i =    300: ok.
i =    400: ok.
i =    500: ok.
i =    600: ok.
i =    700: ok.
i =    800: ok.
i =    900: ok.
i =   1000: ok.
i =   1021: failed!

Ouch! The last file created in the directory is 0000001020.txt, so files 0 through 1020 were created — 1021 files in total. It seems this program can only hold 1021 files open at the same time. That is plenty if you just process a couple of files, but when you open files in a loop without closing them, it will break.

On my machine, ulimit -n tells me that each process may have 1024 open file descriptors. Three of them are already taken by the standard streams stdin, stdout, and stderr (descriptors 0, 1, and 2), which leaves exactly 1021 for the files opened by the loop.

So please use fclose to close your files in your programs. If you do not, and somebody works with a lot of files, the program will either crash (if the fp == NULL case is not checked) or simply fail to create a file! Alternatively, you can use C++, where the std::ofstream class has a destructor that automatically closes the file at the end of its scope.