Correct DNS nameserver software is crucial to the smooth functioning of the Internet, but writing an efficient, high-throughput implementation that is also bug-free is challenging. Today, developers rely on ad hoc, hand-written collections of regression tests to check their implementations for crashes and RFC deviations and to compare them with other implementations. Writing regression tests manually is an onerous task. We will present a systematic and principled approach that automatically generates high-coverage test suites.
We will describe how our tool, Ferret, generates these tests and uses them to compare responses from multiple nameserver implementations, flagging crashes and disagreements. Using these tests, Ferret uncovered 24 unique bugs across 8 different implementations, including popular ones like BIND and Knot, with at least one bug in every implementation. For example, we will describe a performance bug related to the “glue cache” that Ferret uncovered in BIND and a critical error (since fixed) that crashed the CoreDNS server. We will show how developers and operators can leverage our tests and framework to check their implementations’ correctness and any implementation-specific behavior on their own zone files.
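The core of the comparison step is differential testing: send the same generated query to every implementation and flag any divergence in their responses. A minimal sketch of that idea is below; the function names, the response representation, and the example records are all illustrative assumptions, not Ferret's actual API.

```python
# Hypothetical sketch of differential testing of nameservers: given one
# generated query and the (already parsed) responses from several
# implementations, group the implementations by what they answered.
# More than one group means a disagreement worth investigating.
from collections import defaultdict

def fingerprint(response):
    # Reduce a response to the fields being compared: the RCODE plus the
    # set of answer records, order-insensitive.
    return (response["rcode"], frozenset(response["answers"]))

def find_disagreements(responses_by_impl):
    # Map each distinct fingerprint to the implementations that produced it.
    groups = defaultdict(list)
    for impl, resp in responses_by_impl.items():
        groups[fingerprint(resp)].append(impl)
    # A single group means all implementations agree on this query.
    return list(groups.values()) if len(groups) > 1 else []

# Example: three implementations answer the same generated query; two agree
# and one fails, so the tool reports the split.
responses = {
    "bind":    {"rcode": "NOERROR",  "answers": ["a.example. A 1.2.3.4"]},
    "knot":    {"rcode": "NOERROR",  "answers": ["a.example. A 1.2.3.4"]},
    "coredns": {"rcode": "SERVFAIL", "answers": []},
}

print(find_disagreements(responses))  # → [['bind', 'knot'], ['coredns']]
```

In practice the responses would come from running each nameserver on the same zone file and querying it over the wire; the grouping step above is what turns raw responses into the "disagreements" the abstract mentions.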