Free software developer. Canonical, ex-Linaro, ex-Samsung. I write in C, Go and Python. I usually do system software. Zyga aka Zygoon

Introduction to bashunit - unit testing for bash scripts

Today I will talk about bashunit - a unit testing library for bash scripts.

All the posts about bash were building up to this. I wanted to be able to test bash scripts, but having found nothing that makes that practical, I decided to roll my own.

Having created bashcov earlier, I needed to connect the dots between discovering tests, running them, reporting errors in a nice way and measuring coverage at the same time. I also wanted to avoid having to source bashunit from test scripts, to avoid complexity related to having two moving parts, the tests and the progam running them. I settled on the following design.

bashunit is a standalone bash script. It can enumerate tests by looking for files that follow its test naming convention. In isolation, it sources each one and discovers functions starting with test_. In further isolation, it calls each test function, having established tracing in order to compute coverage. All output from the test function is redirected. On success, only test function names and test file names are printed. On failure, the subset of the trace related to the test function is displayed, along with any output that was collected.

The ultimate success or failure is reported through the exit code, making bashunit suitable for embedding into a larger build process.

In addition, since coverage is enlightening, the collected coverage data is used, as in bashcov, to create an annotated script with the extension .coverage for each sourced program that was executed during testing. This data is entirely human-readable and meant to help understand gaps in testing, or unexpected code flow due to the complexity of bash itself.

Let's look at a simple example. We will be working with a pair of files: one housing our production code and another with our unit tests.

Let's look at the production script first:

hello_world() {
    echo "Hello, World"
}

# The script name below is an assumption; the guard runs hello_world only
# when the script is executed directly, not when it is sourced.
if [ "${0##*/}" = "hello.sh" ]; then
    hello_world
fi

When executed, the script prints Hello, World. When sourced, it defines the hello_world function and does nothing else. Simple enough. Let's look at our unit tests.



test_hello_world() {
    hello_world | grep -qFx 'Hello World'
}

In the UNIX tradition, we use grep to match the output of the hello_world function. The arguments to grep are -q for --quiet, to avoid printing the matching output, -F for --fixed-strings, to avoid using regular expressions, and finally -x for --line-regexp, to only consider matches spanning entire lines (this avoids matching a substring by accident).
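To see what -x buys us, here is a quick illustration; the sample strings are mine, not taken from the test suite:

```shell
# -x makes grep match whole lines only; -F takes the pattern literally.
printf 'Hello, World\n' | grep -qFx 'Hello, World' && echo "whole line matches"
printf 'Hello, World\n' | grep -qFx 'Hello' || echo "substring alone does not"
```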

Running bashunit in the same directory yields the following output:

bashunit: sourcing
bashunit: calling test_hello_world
bashunit: calling test_hello_world resulted in exit code 1
bashunit: trace of execution of test_hello_world
bashunit:    ++ .
bashunit:    + hello_world
bashunit:    + grep -qFx 'Hello World'

What is that? Our tests have failed. Well, I made them fail on purpose: if you look carefully, the example code has a comma between Hello and World, while the test code does not.

Correcting that discrepancy produces the following output:

bashunit: sourcing
bashunit: calling test_hello_world

The exit status is zero, indicating that our tests have passed. In addition, we have coverage analysis in the .coverage file, which looks like this:

  -: #!/bin/bash
  -: hello_world() {
  1:    echo "Hello, World"
  -: }
  1: if [ "${0##*/}" = "hello.sh" ]; then
  -:     hello_world
  -: fi

The two 1s indicate that the corresponding line was executed one time. If you add loops or write multiple tests for a single function you will see that number increment accordingly.

Interestingly, not only the test functions are executed: the guard at the bottom, where we either execute hello_world or do nothing more, also runs. This happens when the script is sourced by bashunit.

Much more is possible, but this is the initial version of bashunit. It requires a slightly more modern bash than what is available on macOS, so if you want to use it, a good Linux distribution is your best bet.

You can find bashunit, bashcov and the example at

Broken composition or the tale of bash and set -e

Today I will talk about a surprising behavior in bash, that may cause issues by hiding bugs.

Bash has rather simple error checking support. There are, in general, two ways one can approach error checking. The first one is entirely unrealistic; the second one has rather poor user experience.

The first way to handle errors is to wrap every single command in an if-then-else statement, provide a tailored error message, perform cleanup and quit. Nobody does that; shell scripts are sloppy more often than not.

The second way to handle errors is to let bash do it for you, by setting the errexit option with set -e. To illustrate this, look at the following shell program:

set -e

echo "I'm doing stuff"
false
echo "I'm done doing stuff"

As one may expect, false will be the last executed command.

Sadly, things are not always what one would expect. For good reasons set -e is ignored in certain contexts. Consider the following program:

set -e
if ! false; then
   echo "not false is true"
fi

Again, as one would expect, the program executes in its entirety. Execution does not stop immediately at false, as that would prevent anyone from using set -e in any non-trivial script.

This behavior is documented by the bash manual page:

Exit immediately if a pipeline (which may consist of a single simple command), a list, or a compound command (see SHELL GRAMMAR above), exits with a non-zero status. The shell does not exit if the command that fails is part of the command list immediately following a while or until keyword, part of the test following the if or elif reserved words, part of any command executed in a && or || list except the command following the final && or ||, any command in a pipeline but the last, or if the command's return value is being inverted with !. If a compound command other than a subshell returns a non-zero status because a command failed while -e was being ignored, the shell does not exit. A trap on ERR, if set, is executed before the shell exits. This option applies to the shell environment and each subshell environment separately (see COMMAND EXECUTION ENVIRONMENT above), and may cause subshells to exit before executing all the commands in the subshell.

The set of situations where set -e is ignored is much larger than one might expect. It includes pipelines and commands involving && and ||, except for the final element. It is also ignored when the exit status is inverted with !, making something as innocent as ! true silently ignore the exit status.
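A condensed demonstration of those ignored contexts, all of it standard bash behavior:

```shell
# Every command below "fails", yet none of them stops the script under set -e.
out=$(
    set -e
    false || true    # failure on the left of || is ignored
    false | true     # failure of a non-final pipeline element is ignored
    ! true           # an inverted exit status is ignored, even though it is 1
    echo "still alive"
)
echo "$out"
```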

What arguably needs more emphasis is this:

If a compound command or shell function executes in a context where -e is being ignored, none of the commands executed within the compound command or function body will be affected by the -e setting, even if -e is set and a command returns a failure status. If a compound command or shell function sets -e while executing in a context where -e is ignored, that setting will not have any effect until the compound command or the command containing the function call completes.

What this really says is that set -e is disabled for the entire duration of a compound command. This is huge: code that looks perfectly fine and works perfectly fine in isolation is automatically broken by other code, which also looks and behaves perfectly fine in isolation.

Consider this function:

foo() {
    set -e # for extra sanity
    echo "doing stuff"
    false
    echo "finished doing stuff"
}

This function looks correct and is correct in isolation. Let's carry on and look at another function:

bar() {
    set -e # for extra sanity
    if ! foo; then
        echo "foo has failed, alert!"
        return 1
    fi
}

This also looks correct. In fact, it is correct as well, as long as foo is an external program. If foo is a function like the one we defined above, the outcome is that foo executes all the way to the end, ignoring set -e's normal effect after the non-zero exit code from false. Unlike when invoked in isolation, foo prints the finished doing stuff message. What is worse, because the final echo succeeds, foo doesn't fail!
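The whole failure mode fits in a few lines; this is a condensed sketch of the foo and bar pair from above:

```shell
foo() {
    set -e
    echo "doing stuff"
    false                       # should stop foo right here...
    echo "finished doing stuff" # ...yet this still runs when foo is behind !
}

# Calling foo on its own stops at `false`. Calling it behind `if !` suppresses
# set -e inside foo entirely, so foo runs to the end and even succeeds,
# which means the alert branch is never taken.
out=$(if ! foo; then echo "foo has failed, alert!"; fi)
echo "$out"
```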

Bash breaks composition of correct code. Two correct functions stop being correct if their correctness was based on the assumption that set -e stops execution of a list of commands after the first failing element of that list.

It's a documented feature, so it's hardly something one can report as a bug against bash. One can argue that shellcheck should warn about it. I will file a bug on shellcheck, but discovering this has strongly weakened my trust in using bash for anything other than an isolated, executed script. Using functions or sourcing other scripts is a huge risk, as the actual semantics are not what one would expect.

Poor man's introspection in bash

Today I wanted to talk about bashunit, a unit testing library for bash scripts. This turned out to be a bigger topic, so you will have to wait a bit longer for the complete story. Instead I will talk about how to do poor man's introspection in bash, so that writing tests is less cumbersome.

In some sense, if you have to know this sort of obscure bash feature, it may be a good indication to stop, take a step back and run away. Still, if you are reading this, chances are you run towards things like that, not away.

While writing bashunit, the library for unit testing bash scripts, I wanted to avoid enumerating test functions by hand. Doing that would be annoying, error-prone and just silly. Bash, being a kitchen sink, must have a way to discover them instead. Most programming languages, with the notable exception of C, have some sort of reflection or introspection capability, where the program can discover something about itself through a specialized API. Details differ widely, but the closer a language is to graphical user interfaces or to serialization and wire protocols, the more likely it is to grow such a capability. Introspection has a cost, as there must be additional metadata that describes the various types, classes and functions. On the upside, much of this data is required by the garbage collector anyway, so you might as well use it.

Bash is very far away from that world. Bash is rather crude in terms of language design. Still it has enough for us to accomplish this task. The key idea is to use the declare built-in, which is normally used to define variables with specific properties. When used with the -F switch, it can also be used to list function declarations, omitting their body text.
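A quick look at what declare -F actually prints; the two function names are made up for this demo:

```shell
# Define a test-like function and a helper, then list all declarations.
test_demo() { :; }
helper_demo() { :; }

# Prints one "declare -f <name>" line per known function.
declare -F
```

Among the output you should see the lines `declare -f helper_demo` and `declare -f test_demo`.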

We can couple that with a loop that reads subsequent function declarations, filters out the declaration syntax and ends up with just the list of names. From there, all we need is a simple match expression to find anything matching a simple pattern. Et voilà, our main function, which discovers all the test functions and runs them.

bashunit_main() {
    local def
    local name
    declare -F | while IFS= read -r def; do
        name="${def##declare -f }"
        case "$name" in
            test_*)
                if ! "$name"; then
                    echo "bashunit: test $name failed"
                    return 1
                fi
                ;;
        esac
    done
}

Tomorrow we will build on this idea, to create a very simple test runner and coverage analyzer.

Measuring execution coverage of shell scripts

Today I will talk about measuring test coverage of shell scripts.

Testing is being honest about our flawed brains, which constantly make mistakes regardless of how much we try to avoid them. Modern programming languages make writing test code a first-class concept, with intrinsic support in the language syntax and in the first-party tooling. Next to memory safety and concurrency safety, excellent testing support allows us to craft ever larger applications with an acceptable failure rate.

Shell scripts are as old as UNIX and are usually devoted to glue logic. Normally, testing shell scripts is done the hard way: in production. For more critical scripts there's a tendency to test the end-to-end interaction, but as far as I'm aware, writing unit tests and measuring coverage is practically unheard of.

In a way that's sensible, as long as shell scripts are small, rarely changed and indeed battle-tested in production. On the other hand, nothing is unchanged forever: environments change, code is subtly broken, and programmers across the entire experience spectrum can easily come across a subtly misunderstood, or broken, feature of the shell.

In a way static analysis tools have outpaced the traditional hard way of testing shell programs. The utterly excellent shellcheck program should be a mandatory tool in the arsenal of anyone who routinely works with shell programs. Today we will not look at shellcheck, instead we will look at how we can measure test coverage of a shell program.

I must apologize: whenever I wrote shell, I really meant bash. Not because bash is the best or most featureful shell, but merely because it happens to sit at the right intersection of having enough features and being commonly used enough to warrant an experiment. It's plausible or even likely that zsh or fish have similar capabilities that I have not explored yet.

What capabilities are those? The ability to implement an execution coverage tool in bash itself. Much like when using Python, C, Java or Go, we want to see whether our test code at least executes a specific portion of the program code.

Bash has two features that make writing such a tool possible. The first one is most likely known to everyone: the set -x option, which enables tracing. Tracing prints the commands, just as they are executed, to standard error. This is almost what we want; if only we could easily map each command to a location in a source file, we could construct a crude, line-oriented analysis tool. The second feature is also standard, albeit perhaps less well-known: the PS4 variable, which defines the format of the trace output. If only we could put something as simple as $FILENAME:$LINENO there, right? Well, in bash we can, although the first variable has a bash-specific name, $BASH_SOURCE. The final piece that makes this convenient is the ability to redirect the trace to a different file descriptor, by setting the BASH_XTRACEFD variable to the file descriptor of an open file.
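Putting the three pieces together, this is a minimal sketch of the idea; the file paths are illustrative, not taken from the actual tool:

```shell
# Create a tiny "production" script to trace.
cat > /tmp/covdemo.sh <<'EOF'
echo "hello"
echo "world"
EOF

exec 9>/tmp/covdemo.trace           # open the trace file on fd 9
BASH_XTRACEFD=9                     # send set -x output there, not to stderr
PS4='+ ${BASH_SOURCE}:${LINENO}: '  # prefix each traced command with file:line
set -x
. /tmp/covdemo.sh
set +x
exec 9>&-

# Each executed line of covdemo.sh now appears with its file and line number,
# which is enough to build a crude line-oriented coverage report.
grep -c 'covdemo.sh:[0-9]' /tmp/covdemo.trace
```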

With those two features combined, we can easily run a test program which sources a production program, exercises a specific function and quits. We can write unit tests. We can also run integration tests and check whether any of the production code is missing coverage, which would indicate that an important test is missing.

I pieced together a very simple program that uses this idea. It is available at and is written in bash itself.

Signal to noise ratio in build systems

Today I will argue why silent rules are a useful feature of good build systems.

Build systems build stuff, mainly by invoking other tools: compilers, linkers, code generators and file system manipulation tools. Build tools have traditionally printed some indication of progress. Make displays the commands as they are executed. CMake displays a quasi progress bar, including the name of the compiled file and a counter.

Interestingly, it seems that the more vertically integrated the tool, the less output it shows by default. If you need to hand-craft a solution out of parts, as with make, debugging the parts is as important as the program you are building. Compare the verbosity of an autotools build system with a go build ./... invocation, which can build many thousands of programs and libraries. The former prints walls of text; the latter prints nothing, unless there's an error.

As an extreme case, this is taken from the build log of Firefox 79. This is the command used to compile a single file. Note that the command is not really verbatim, as the <<PKGBUILDDIR>> parts hide long directory names used internally in the real log (this part is coming from the Debian build system). Also note that despite the length, this is a single line.

/usr/bin/gcc -std=gnu99 -o mpi.o -c -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -fstack-protector-strong -DNDEBUG=1 -DTRIMMED=1 -DNSS_PKCS11_2_0_COMPAT -DMOZ_HAS_MOZGLUE -DMOZILLA_INTERNAL_API -DIMPL_LIBXUL -DSTATIC_EXPORTABLE_JS_API -I/<<PKGBUILDDIR>>/third_party/prio -I/<<PKGBUILDDIR>>/build-browser/third_party/prio -I/<<PKGBUILDDIR>>/security/nss/lib/freebl/mpi -I/<<PKGBUILDDIR>>/third_party/msgpack/include -I/<<PKGBUILDDIR>>/third_party/prio/include -I/<<PKGBUILDDIR>>/build-browser/dist/include -I/usr/include/nspr -I/usr/include/nss -I/usr/include/nspr -I/<<PKGBUILDDIR>>/build-browser/dist/include/nss -fPIC -include /<<PKGBUILDDIR>>/build-browser/mozilla-config.h -DMOZILLA_CLIENT -Wdate-time -D_FORTIFY_SOURCE=2 -O2 -fdebug-prefix-map=/<<PKGBUILDDIR>>=. -fstack-protector-strong -Wformat -Werror=format-security -fno-strict-aliasing -ffunction-sections -fdata-sections -fno-math-errno -pthread -pipe -g -freorder-blocks -O2 -fomit-frame-pointer -funwind-tables -Wall -Wempty-body -Wignored-qualifiers -Wpointer-arith -Wsign-compare -Wtype-limits -Wunreachable-code -Wduplicated-cond -Wno-error=maybe-uninitialized -Wno-error=deprecated-declarations -Wno-error=array-bounds -Wno-error=coverage-mismatch -Wno-error=free-nonheap-object -Wno-multistatement-macros -Wno-error=class-memaccess -Wno-error=deprecated-copy -Wformat -Wformat-overflow=2 -MD -MP -MF .deps/mpi.o.pp /<<PKGBUILDDIR>>/security/nss/lib/freebl/mpi/mpi.c

This particular build log is 299738 lines long. That's about 40MB of text output, for a single build.

Obviously, not all builds are alike. There is value in an overly verbose log like this one: when something fails, the log may be all you get. It is useful to be able to repeat the exact steps taken to reproduce the failure in order to fix it.

On the other end of the spectrum are incremental builds, performed locally while editing the source. There, some things are notable:

  • The initial build is much like the one quoted above, except that the log file will not be looked at by hand. An IDE may parse it to pick up warnings or errors. Many developers don't use IDEs and just run the build and ignore the wall of text it produces, as long as it doesn't fail entirely.

  • As code is changed, the build system will re-compile the parts invalidated by the changes. This can be as little as one .c file or as many as all the .c files that include a common header that was changed. Interestingly, computing the set of files that need recompiling may take a while, and it may be faster to start compiling even before the whole set is known. Having a precise progress bar may be detrimental to performance.

  • The output of the compiler may be more important than the invocation of the compiler. After all, it's very easy to invoke the build system again. Reading a page-long argument list to gcc is less relevant than the printed error or warning.

That last point is what I want to focus on. The whole idea is to hide or simplify some information in order to present other information more prominently. We attenuate the build command to amplify the compiler output.
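The mechanism behind silent rules is simple. Here is a minimal sketch in the spirit of automake's AM_V_* variables; the variable names below are illustrative, not taken from any real build system:

```make
# V=1 restores full command echo; the default is the terse "  CC  file" style.
V ?= 0
quiet_cc = @echo "  CC      $@";
cc_prefix = $(if $(filter 1,$(V)),,$(quiet_cc))

%.o: %.c
	$(cc_prefix)$(CC) -c -o $@ $<
```

Running plain make prints the short `  CC      foo.o` line, while make V=1 echoes the full compiler invocation, exactly as a verbose log would.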

Compare those two make check output logs from my toy library. I'm working on a few new manual pages and I have a rule which uses man to verify syntax. Note that I specifically used markup that wraps long lines, as this is also something you'd see in a terminal window.

This is what you get out of the box:

zyga@x240 ~/D/libzt (feature/defer)> make check
/usr/bin/shellcheck configure
LC_ALL=C MANROFFSEQ= MANWIDTH=80 man --warnings=all --encoding=UTF-8 --troff-device=utf8 --ditroff --local-file man/ZT_CMP_BOOL.3 2>&1 >/dev/null | sed -e 's@tbl:@man/ZT_CMP_BOOL.3@g'
LC_ALL=C MANROFFSEQ= MANWIDTH=80 man --warnings=all --encoding=UTF-8 --troff-device=utf8 --ditroff --local-file man/ZT_CMP_INT.3 2>&1 >/dev/null | sed -e 's@tbl:@man/ZT_CMP_INT.3@g'
LC_ALL=C MANROFFSEQ= MANWIDTH=80 man --warnings=all --encoding=UTF-8 --troff-device=utf8 --ditroff --local-file man/ZT_CMP_PTR.3 2>&1 >/dev/null | sed -e 's@tbl:@man/ZT_CMP_PTR.3@g'
LC_ALL=C MANROFFSEQ= MANWIDTH=80 man --warnings=all --encoding=UTF-8 --troff-device=utf8 --ditroff --local-file man/ZT_CMP_RUNE.3 2>&1 >/dev/null | sed -e 's@tbl:@man/ZT_CMP_RUNE.3@g'
LC_ALL=C MANROFFSEQ= MANWIDTH=80 man --warnings=all --encoding=UTF-8 --troff-device=utf8 --ditroff --local-file man/ZT_CMP_UINT.3 2>&1 >/dev/null | sed -e 's@tbl:@man/ZT_CMP_UINT.3@g'
LC_ALL=C MANROFFSEQ= MANWIDTH=80 man --warnings=all --encoding=UTF-8 --troff-device=utf8 --ditroff --local-file man/ZT_CURRENT_LOCATION.3 2>&1 >/dev/null | sed -e 's@tbl:@man/ZT_CURRENT_LOCATION.3@g'
LC_ALL=C MANROFFSEQ= MANWIDTH=80 man --warnings=all --encoding=UTF-8 --troff-device=utf8 --ditroff --local-file man/ZT_FALSE.3 2>&1 >/dev/null | sed -e 's@tbl:@man/ZT_FALSE.3@g'
LC_ALL=C MANROFFSEQ= MANWIDTH=80 man --warnings=all --encoding=UTF-8 --troff-device=utf8 --ditroff --local-file man/ZT_NOT_NULL.3 2>&1 >/dev/null | sed -e 's@tbl:@man/ZT_NOT_NULL.3@g'
LC_ALL=C MANROFFSEQ= MANWIDTH=80 man --warnings=all --encoding=UTF-8 --troff-device=utf8 --ditroff --local-file man/ZT_NULL.3 2>&1 >/dev/null | sed -e 's@tbl:@man/ZT_NULL.3@g'
LC_ALL=C MANROFFSEQ= MANWIDTH=80 man --warnings=all --encoding=UTF-8 --troff-device=utf8 --ditroff --local-file man/ZT_TRUE.3 2>&1 >/dev/null | sed -e 's@tbl:@man/ZT_TRUE.3@g'
LC_ALL=C MANROFFSEQ= MANWIDTH=80 man --warnings=all --encoding=UTF-8 --troff-device=utf8 --ditroff --local-file man/libzt-test.1 2>&1 >/dev/null | sed -e 's@tbl:@man/libzt-test.1@g'
LC_ALL=C MANROFFSEQ= MANWIDTH=80 man --warnings=all --encoding=UTF-8 --troff-device=utf8 --ditroff --local-file man/libzt.3 2>&1 >/dev/null | sed -e 's@tbl:@man/libzt.3@g'
LC_ALL=C MANROFFSEQ= MANWIDTH=80 man --warnings=all --encoding=UTF-8 --troff-device=utf8 --ditroff --local-file man/zt_check.3 2>&1 >/dev/null | sed -e 's@tbl:@man/zt_check.3@g'
LC_ALL=C MANROFFSEQ= MANWIDTH=80 man --warnings=all --encoding=UTF-8 --troff-device=utf8 --ditroff --local-file man/zt_claim.3 2>&1 >/dev/null | sed -e 's@tbl:@man/zt_claim.3@g'
LC_ALL=C MANROFFSEQ= MANWIDTH=80 man --warnings=all --encoding=UTF-8 --troff-device=utf8 --ditroff --local-file man/zt_closure.3 2>&1 >/dev/null | sed -e 's@tbl:@man/zt_closure.3@g'
mdoc warning: A .Bd directive has no matching .Ed (#20)
LC_ALL=C MANROFFSEQ= MANWIDTH=80 man --warnings=all --encoding=UTF-8 --troff-device=utf8 --ditroff --local-file man/zt_closure_func0.3 2>&1 >/dev/null | sed -e 's@tbl:@man/zt_closure_func0.3@g'
LC_ALL=C MANROFFSEQ= MANWIDTH=80 man --warnings=all --encoding=UTF-8 --troff-device=utf8 --ditroff --local-file man/zt_closure_func1.3 2>&1 >/dev/null | sed -e 's@tbl:@man/zt_closure_func1.3@g'
LC_ALL=C MANROFFSEQ= MANWIDTH=80 man --warnings=all --encoding=UTF-8 --troff-device=utf8 --ditroff --local-file man/zt_defer.3 2>&1 >/dev/null | sed -e 's@tbl:@man/zt_defer.3@g'
Usage: .Fn function_name [function_arg] ... (#16)
LC_ALL=C MANROFFSEQ= MANWIDTH=80 man --warnings=all --encoding=UTF-8 --troff-device=utf8 --ditroff --local-file man/zt_location.3 2>&1 >/dev/null | sed -e 's@tbl:@man/zt_location.3@g'
LC_ALL=C MANROFFSEQ= MANWIDTH=80 man --warnings=all --encoding=UTF-8 --troff-device=utf8 --ditroff --local-file man/zt_location_at.3 2>&1 >/dev/null | sed -e 's@tbl:@man/zt_location_at.3@g'
LC_ALL=C MANROFFSEQ= MANWIDTH=80 man --warnings=all --encoding=UTF-8 --troff-device=utf8 --ditroff --local-file man/zt_main.3 2>&1 >/dev/null | sed -e 's@tbl:@man/zt_main.3@g'
LC_ALL=C MANROFFSEQ= MANWIDTH=80 man --warnings=all --encoding=UTF-8 --troff-device=utf8 --ditroff --local-file man/zt_pack_boolean.3 2>&1 >/dev/null | sed -e 's@tbl:@man/zt_pack_boolean.3@g'
LC_ALL=C MANROFFSEQ= MANWIDTH=80 man --warnings=all --encoding=UTF-8 --troff-device=utf8 --ditroff --local-file man/zt_pack_closure0.3 2>&1 >/dev/null | sed -e 's@tbl:@man/zt_pack_closure0.3@g'
mdoc warning: A .Bd directive has no matching .Ed (#20)
LC_ALL=C MANROFFSEQ= MANWIDTH=80 man --warnings=all --encoding=UTF-8 --troff-device=utf8 --ditroff --local-file man/zt_pack_closure1.3 2>&1 >/dev/null | sed -e 's@tbl:@man/zt_pack_closure1.3@g'
mdoc warning: A .Bd directive has no matching .Ed (#21)
LC_ALL=C MANROFFSEQ= MANWIDTH=80 man --warnings=all --encoding=UTF-8 --troff-device=utf8 --ditroff --local-file man/zt_pack_integer.3 2>&1 >/dev/null | sed -e 's@tbl:@man/zt_pack_integer.3@g'
LC_ALL=C MANROFFSEQ= MANWIDTH=80 man --warnings=all --encoding=UTF-8 --troff-device=utf8 --ditroff --local-file man/zt_pack_nothing.3 2>&1 >/dev/null | sed -e 's@tbl:@man/zt_pack_nothing.3@g'
LC_ALL=C MANROFFSEQ= MANWIDTH=80 man --warnings=all --encoding=UTF-8 --troff-device=utf8 --ditroff --local-file man/zt_pack_pointer.3 2>&1 >/dev/null | sed -e 's@tbl:@man/zt_pack_pointer.3@g'
LC_ALL=C MANROFFSEQ= MANWIDTH=80 man --warnings=all --encoding=UTF-8 --troff-device=utf8 --ditroff --local-file man/zt_pack_rune.3 2>&1 >/dev/null | sed -e 's@tbl:@man/zt_pack_rune.3@g'
LC_ALL=C MANROFFSEQ= MANWIDTH=80 man --warnings=all --encoding=UTF-8 --troff-device=utf8 --ditroff --local-file man/zt_pack_string.3 2>&1 >/dev/null | sed -e 's@tbl:@man/zt_pack_string.3@g'
LC_ALL=C MANROFFSEQ= MANWIDTH=80 man --warnings=all --encoding=UTF-8 --troff-device=utf8 --ditroff --local-file man/zt_pack_unsigned.3 2>&1 >/dev/null | sed -e 's@tbl:@man/zt_pack_unsigned.3@g'
LC_ALL=C MANROFFSEQ= MANWIDTH=80 man --warnings=all --encoding=UTF-8 --troff-device=utf8 --ditroff --local-file man/zt_test.3 2>&1 >/dev/null | sed -e 's@tbl:@man/zt_test.3@g'
LC_ALL=C MANROFFSEQ= MANWIDTH=80 man --warnings=all --encoding=UTF-8 --troff-device=utf8 --ditroff --local-file man/zt_test_case_func.3 2>&1 >/dev/null | sed -e 's@tbl:@man/zt_test_case_func.3@g'
LC_ALL=C MANROFFSEQ= MANWIDTH=80 man --warnings=all --encoding=UTF-8 --troff-device=utf8 --ditroff --local-file man/zt_test_suite_func.3 2>&1 >/dev/null | sed -e 's@tbl:@man/zt_test_suite_func.3@g'
LC_ALL=C MANROFFSEQ= MANWIDTH=80 man --warnings=all --encoding=UTF-8 --troff-device=utf8 --ditroff --local-file man/zt_value.3 2>&1 >/dev/null | sed -e 's@tbl:@man/zt_value.3@g'
LC_ALL=C MANROFFSEQ= MANWIDTH=80 man --warnings=all --encoding=UTF-8 --troff-device=utf8 --ditroff --local-file man/zt_visit_test_case.3 2>&1 >/dev/null | sed -e 's@tbl:@man/zt_visit_test_case.3@g'
LC_ALL=C MANROFFSEQ= MANWIDTH=80 man --warnings=all --encoding=UTF-8 --troff-device=utf8 --ditroff --local-file man/zt_visitor.3 2>&1 >/dev/null | sed -e 's@tbl:@man/zt_visitor.3@g'
plog-converter --settings ./.pvs-studio.cfg -d V1042 --srcRoot . --renderTypes errorfile zt.c.PVS-Studio.log zt-test.c.PVS-Studio.log | srcdir=. abssrcdir=/home/zyga/Dokumenty/libzt awk -f /usr/local/include/zmk/pvs-filter.awk
libzt self-test successful

This is what you get when you use ./configure --enable-silent-rules:

zyga@x240 ~/D/libzt (feature/defer)> make check
SHELLCHECK configure
MAN man/libzt-test.1
MAN man/libzt.3
MAN man/zt_check.3
MAN man/zt_claim.3
MAN man/zt_closure.3
mdoc warning: A .Bd directive has no matching .Ed (#20)
MAN man/zt_closure_func0.3
MAN man/zt_closure_func1.3
MAN man/zt_defer.3
Usage: .Fn function_name [function_arg] ... (#16)
MAN man/zt_location.3
MAN man/zt_location_at.3
MAN man/zt_main.3
MAN man/zt_pack_boolean.3
MAN man/zt_pack_closure0.3
mdoc warning: A .Bd directive has no matching .Ed (#20)
MAN man/zt_pack_closure1.3
mdoc warning: A .Bd directive has no matching .Ed (#21)
MAN man/zt_pack_integer.3
MAN man/zt_pack_nothing.3
MAN man/zt_pack_pointer.3
MAN man/zt_pack_rune.3
MAN man/zt_pack_string.3
MAN man/zt_pack_unsigned.3
MAN man/zt_test.3
MAN man/zt_test_case_func.3
MAN man/zt_test_suite_func.3
MAN man/zt_value.3
MAN man/zt_visit_test_case.3
MAN man/zt_visitor.3
PLOG-CONVERTER zt.c.PVS-Studio.log zt-test.c.PVS-Studio.log static-check-pvs
EXEC libzt-test
libzt self-test successful

I will let you decide which output is more readable. Did you spot the mdoc warning lines on your first read? If your build system supports it, consider using silent rules, and fix the warnings you can now see.

Build system griefs - autotools

I always had a strong dislike of commonly used build systems. There's always something that would bug me about those I had to use or interact with.

Autoconf/Automake/Libtool are complex, slow and ugly. Custom, weird macro language? Check. Makefile lookalike with different features? Check. Super slow, single-threaded configuration phase? Check. Gigantic generated scripts and makefiles, full of compatibility code, workarounds for dead platforms and a general feeling of misery when something goes wrong? Check. Practical value for porting between Linux, macOS and Windows? Well, sort of, if you want to endure the pain of setting up the dependency chain there. It feels very foreign away from GNU.

Autotools were the first build system I encountered. Decades later, it is still remarkably popular, thanks to both broad feature support and inertia. Decades later, it still lights up exactly one core on those multi-core workstations we call laptops. The documentation is complete but arguably cryptic, and locked away in weird info pages. Most projects I've seen cargo-cult and tweak their scripts and macros from one place to another.

Despite all the criticism, autotools did get some things right, in my opinion. Build-time feature detection, as ugly and slow as it is, and as abused for checking things that are available everywhere by now, is still the killer feature. There would be no portable C software as we know it today without the ability to toggle those ifdefs and enable sections of code depending on the availability and functionality of an API, dependency or platform feature.

User interaction via the configure script, now commonly used to draw lines in the sand and show how one distribution archetype differs from another, is ironically still one of the best user interfaces for building software that does not involve a full-blown menu system.

The principle that you don't need autotools installed to build a project that uses it is another good call. The theoretical portability, albeit mostly to fringe systems, is also a noble goal. Though today I rarely see distributions that don't rip out the generated build system and regenerate it from source, mainly to ensure nobody has snuck anything nasty into that huge, un-auditable, generated shell monstrosity rivaling the size of modest projects.

Will autotools eventually be replaced? I don't think so. It seems like one of those things that gets phased out only when a maintainer retires. The benefits of rewriting the whole build system and moving to something more modern must outweigh the pain and cost of doing so. In the end, it would help more to modernize autotools than to convince everyone to port their software over.

New Blog

Just testing the whole blogging via-notes-thing, thanks to

The idea is that you can blog from a desktop or mobile client, by creating a set of notes that appear as distinct posts. Not all notes are public; in fact, by default they are all encrypted and private.

The effort to set this up is remarkably low. The only downside is that, as with all hosted products, the free tier is not as nice as the paid subscription.

There's a snap and an AppImage for Linux. The way to get your blog listed on, wait for it, (har har), is a bit cumbersome, but this is all thanks to privacy, so it's not too bad.

I may keep this.

Oh and thanks to for the idea.