Performance Guide

To get the best out of the PowerDNS recursor, which is important if you are doing thousands of queries per second, please consider the following.

A busy server may need hundreds of file descriptors on startup, and deals with spikes better if it has that many available later on. Linux by default restricts processes to 1024 file descriptors, which should suffice most of the time, but Solaris has a default limit of 256. This can be raised using the ulimit command or via the LimitNOFILE unit directive when systemd is used. FreeBSD has a default limit that is high enough for even very heavy duty use.
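On systemd-based systems this can be done with a drop-in unit file; a minimal sketch (the unit name pdns-recursor.service is common on Linux packages but may differ on your distribution, and 8192 is only an example value):

```ini
# /etc/systemd/system/pdns-recursor.service.d/limits.conf
[Service]
LimitNOFILE=8192
```

Run systemctl daemon-reload and restart the service for the new limit to take effect.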

Limit the size of the caches to a sensible value. Cache hit rate does not improve meaningfully beyond 4 million max-cache-entries per thread, and reducing the memory footprint reduces CPU cache misses. See below for more information about the various caches.
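As a sketch, the cache sizes are set in the Recursor settings file; the values below are examples only, not recommendations:

```ini
# recursor.conf: cache size settings (example values)
max-cache-entries=4000000
max-packetcache-entries=500000
```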

When deploying (large scale) IPv6, please be aware some Linux distributions leave IPv6 routing cache tables at very small default values. Please check and if necessary raise sysctl net.ipv6.route.max_size.
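Checking and raising the limit can be done with sysctl; the value 16384 below is only an example:

```shell
# Inspect the current value
sysctl net.ipv6.route.max_size
# Raise it on the running system; add the setting to /etc/sysctl.d/
# to persist it across reboots
sysctl -w net.ipv6.route.max_size=16384
```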

Set threads to your number of CPU cores (but values above 8 rarely improve performance).

Threading and distribution of queries

When running with several threads, you can either ask PowerDNS to start one or more special threads to dispatch the incoming queries to the workers by setting pdns-distributes-queries to true, or let the worker threads handle the incoming queries themselves.

The dispatch thread enabled by pdns-distributes-queries tries to send the same queries to the same thread to maximize the cache-hit ratio. If the incoming query rate is so high that the dispatch thread becomes a bottleneck, you can increase distributor-threads to use more than one.

If pdns-distributes-queries is set to false and either SO_REUSEPORT support is not available or the reuseport directive is set to false, all worker threads share the same listening sockets.

This prevents a single thread from having to handle every incoming query, but can lead to thundering herd issues where all threads are awoken at once when a query arrives.

If SO_REUSEPORT support is available and reuseport is set to true, separate listening sockets are opened for each worker thread and query distribution is handled by the kernel, avoiding any thundering herd issue as well as preventing the distributor thread from becoming the bottleneck.

New in version 4.1.0: The cpu-map parameter can be used to pin worker threads to specific CPUs, in order to keep caches as warm as possible and optimize memory access on NUMA systems.

New in version 4.2.0: The distributor-threads parameter can be used to run more than one distributor thread.
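Putting these settings together, a threading configuration sketch might look like the following; the values are illustrative only, and the cpu-map syntax maps thread numbers to CPU ids:

```ini
# recursor.conf: threading (example values)
threads=4
# let the kernel distribute queries over per-thread sockets
reuseport=yes
pdns-distributes-queries=no
# alternatively, use dedicated dispatch threads:
#pdns-distributes-queries=yes
#distributor-threads=2
# pin worker threads to CPUs to keep caches warm (thread=cpu pairs)
cpu-map=0=0 1=1 2=2 3=3
```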

Performance tips

For best PowerDNS Recursor performance, use a recent version of your operating system, since this generally offers the best event multiplexer implementation available (kqueue, epoll, ports or /dev/poll).

On AMD/Intel hardware, wherever possible, run a 64-bit binary. This delivers a nearly twofold performance increase. On UltraSPARC, there is no need to run with 64 bits.

Consider performing a ‘profiled build’ by building with gprof support enabled, running the recursor for a while, and then feeding that profile into the next build. This can yield a 20% performance boost in some cases.
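One common way to do this with GCC is profile-guided optimization; the flags below are generic GCC PGO flags, not recursor-specific build options, so adapt them to your build setup:

```shell
# Build an instrumented binary, gather a profile, then rebuild using it
CXXFLAGS="-fprofile-generate" ./configure && make
# ... run this build under representative query load for a while ...
make clean
CXXFLAGS="-fprofile-use" ./configure && make
```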

When running with >3000 queries per second, and running Linux versions prior to 2.6.17 on some motherboards, your computer may spend an inordinate amount of time working around an ACPI bug for each call to gettimeofday. This is solved by rebooting with clock=tsc or upgrading to a 2.6.17 kernel. This is relevant if dmesg shows Using pmtmr for high-res timesource.

Connection tracking and firewalls

A Recursor under high load puts a severe stress on any stateful (connection tracking) firewall, so much so that the firewall may fail.

Specifically, many Linux distributions run with a connection tracking firewall configured. For high-load operation (thousands of queries per second), it is advised to either turn off iptables completely, or use the NOTRACK feature to make sure DNS traffic bypasses the connection tracking.

Sample Linux command lines would be:

## IPv4
iptables -t raw -I OUTPUT -p udp --dport 53 -j CT --notrack
iptables -t raw -I OUTPUT -p udp --sport 53 -j CT --notrack
iptables -t raw -I PREROUTING -p udp --dport 53 -j CT --notrack
iptables -t raw -I PREROUTING -p udp --sport 53 -j CT --notrack
iptables -I INPUT -p udp --dport 53 -j ACCEPT
iptables -I INPUT -p udp --sport 53 -j ACCEPT
iptables -I OUTPUT -p udp --dport 53 -j ACCEPT
iptables -I OUTPUT -p udp --sport 53 -j ACCEPT

## IPv6
ip6tables -t raw -I OUTPUT -p udp --dport 53 -j CT --notrack
ip6tables -t raw -I OUTPUT -p udp --sport 53 -j CT --notrack
ip6tables -t raw -I PREROUTING -p udp --dport 53 -j CT --notrack
ip6tables -t raw -I PREROUTING -p udp --sport 53 -j CT --notrack
ip6tables -I INPUT -p udp --dport 53 -j ACCEPT
ip6tables -I INPUT -p udp --sport 53 -j ACCEPT
ip6tables -I OUTPUT -p udp --dport 53 -j ACCEPT
ip6tables -I OUTPUT -p udp --sport 53 -j ACCEPT

When using FirewallD (CentOS 7+ / Red Hat 7+ / Fedora 21+), connection tracking can be disabled via direct rules. The settings can be made permanent by using the --permanent flag:

## IPv4
firewall-cmd --direct --add-rule ipv4 raw OUTPUT 0 -p udp --dport 53 -j CT --notrack
firewall-cmd --direct --add-rule ipv4 raw OUTPUT 0 -p udp --sport 53 -j CT --notrack
firewall-cmd --direct --add-rule ipv4 raw PREROUTING 0 -p udp --dport 53 -j CT --notrack
firewall-cmd --direct --add-rule ipv4 raw PREROUTING 0 -p udp --sport 53 -j CT --notrack
firewall-cmd --direct --add-rule ipv4 filter INPUT 0 -p udp --dport 53 -j ACCEPT
firewall-cmd --direct --add-rule ipv4 filter INPUT 0 -p udp --sport 53 -j ACCEPT
firewall-cmd --direct --add-rule ipv4 filter OUTPUT 0 -p udp --dport 53 -j ACCEPT
firewall-cmd --direct --add-rule ipv4 filter OUTPUT 0 -p udp --sport 53 -j ACCEPT

## IPv6
firewall-cmd --direct --add-rule ipv6 raw OUTPUT 0 -p udp --dport 53 -j CT --notrack
firewall-cmd --direct --add-rule ipv6 raw OUTPUT 0 -p udp --sport 53 -j CT --notrack
firewall-cmd --direct --add-rule ipv6 raw PREROUTING 0 -p udp --dport 53 -j CT --notrack
firewall-cmd --direct --add-rule ipv6 raw PREROUTING 0 -p udp --sport 53 -j CT --notrack
firewall-cmd --direct --add-rule ipv6 filter INPUT 0 -p udp --dport 53 -j ACCEPT
firewall-cmd --direct --add-rule ipv6 filter INPUT 0 -p udp --sport 53 -j ACCEPT
firewall-cmd --direct --add-rule ipv6 filter OUTPUT 0 -p udp --dport 53 -j ACCEPT
firewall-cmd --direct --add-rule ipv6 filter OUTPUT 0 -p udp --sport 53 -j ACCEPT

Following the instructions above, you should be able to attain very high query rates.

TCP Fast Open Support

On Linux systems, the recursor can use TCP Fast Open for passive (incoming, since 4.1) and active (outgoing, since 4.5) TCP connections. TCP Fast Open allows the initial SYN packet to carry data, saving one network round-trip. For details, consult RFC 7413.

On Linux systems, to enable TCP Fast Open it might be necessary to change the value of the net.ipv4.tcp_fastopen sysctl. A value of 0 means Fast Open is disabled, 1 enables it for active (outgoing) connections only, 2 for passive (incoming) connections only, and 3 for both.
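For a recursor that both accepts and makes TCP connections, value 3 is the most complete option; for example:

```shell
# Enable TCP Fast Open for both incoming and outgoing connections;
# persist the setting via /etc/sysctl.d/ to survive reboots
sysctl -w net.ipv4.tcp_fastopen=3
```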

The operation of TCP Fast Open can be monitored by looking at these kernel metrics:

netstat -s | grep TCPFastOpen

Please note that if active (outgoing) TCP Fast Open attempts fail in particular ways, the Linux kernel stops using active TCP Fast Open for a while for all connections, even connections to servers that previously worked. This behaviour can be monitored by watching the TCPFastOpenBlackHole kernel metric and influenced by setting the net.ipv4.tcp_fastopen_blackhole_timeout_sec sysctl. While developing active TCP Fast Open support, it was necessary to set net.ipv4.tcp_fastopen_blackhole_timeout_sec to zero to circumvent the issue, since it was triggered regularly when connecting to authoritative nameservers that did not respond.

At the moment of writing, some Google-operated nameservers (both recursive and authoritative) indicate Fast Open support in the TCP handshake, but do not accept the cookie they sent previously and send a new one for each connection. Google is working to fix this.

If you operate an anycast pool of machines, make them share the TCP Fast Open key by setting the net.ipv4.tcp_fastopen_key sysctl; otherwise you will create an issue similar to the one some Google servers have.
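A sketch of sharing a key across a pool follows; the key below is a placeholder, and the format (four dash-separated 32-bit hexadecimal words) is an assumption based on common Linux usage, so check your kernel documentation:

```shell
# Set the same Fast Open key on every machine in the pool
# (placeholder key; generate your own random value)
sysctl -w net.ipv4.tcp_fastopen_key=00112233-44556677-8899aabb-ccddeeff
```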

To determine a good value for the tcp-fast-open setting, watch the TCPFastOpenListenOverflow metric. If this value increases often, the setting might be too low for your traffic, but note that increasing it will use more kernel resources.

Running with a local root zone

Running with a local root zone as described in RFC 8806 can help reduce traffic to the root servers and reduce response times for clients. Since 4.6.0 PowerDNS Recursor supports two ways of doing this.

Running a local Authoritative Server for the root zone

  • The first method is to run a local Authoritative Server that has a copy of the root zone and forward queries to it. Setting up a PowerDNS Authoritative Server to serve a copy of the root zone looks like:

    pdnsutil create-secondary-zone . ip1 ip2

    where ip1 and ip2 are servers willing to serve an AXFR for the root zone; RFC 8806 contains a list of candidates in appendix A. The Authoritative Server will periodically make sure its copy of the root zone is up-to-date. The next step is to configure a forward zone to the IP address ip of the Authoritative Server in the settings file of the Recursor:

    forward-zones=.=ip

    The Recursor will use the Authoritative Server to ask questions about the root zone, but will still follow any delegations it learns about. Multiple Recursors can share this Authoritative Server.

  • The second method is to cache the root zone as described in Zone to Cache. Here each Recursor will download and fill its cache with the contents of the root zone. Depending on the timeout parameter, this will be done once or periodically. Refer to Zone to Cache for details.
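For the second method, a sketch in the Recursor's Lua configuration file might look like the following; the zoneToCache function and the internic URL follow the Zone to Cache documentation, but verify the exact parameters for your version:

```lua
-- Lua configuration file: fetch the root zone over HTTPS and load it
-- into the record cache, refreshing according to the zone's TTLs
zoneToCache(".", "url", "https://www.internic.net/domain/root.zone")
```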

Recursor Caches

The PowerDNS Recursor contains a number of caches, or information stores:

Nameserver speeds cache

The “NSSpeeds” cache contains the average latency to all remote authoritative servers.

Negative cache

The “Negcache” contains all domains known not to exist, or record types not to exist for a domain.

Recursor Cache

The Recursor Cache contains all DNS knowledge gathered over time. This is also known as a “record cache”.

Packet Cache

The Packet Cache contains previous answers sent to clients. If a question comes in that matches a previous answer, this is sent back directly.

The Packet Cache is consulted first, immediately after receiving a packet. This means that a high hitrate for the Packet Cache automatically lowers the cache hitrate of subsequent caches.
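This effect is visible in the cache metrics. For instance, a packet cache hit rate can be computed from the hit and miss counters; on a live system these come from rec_control get packetcache-hits packetcache-misses, while the numbers below are made up for illustration:

```shell
# Hypothetical counter values; on a live system obtain them with:
#   rec_control get packetcache-hits packetcache-misses
hits=150000
misses=50000
# Hit rate as a percentage of all queries that reached the packet cache
awk -v h="$hits" -v m="$misses" 'BEGIN { printf "%.1f%%\n", 100 * h / (h + m) }'
```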

Measuring performance

The PowerDNS Recursor exposes many metrics that can be graphed and monitored.

Event Tracing

Event tracing is an experimental feature introduced in version 4.6.0 that allows following the internals of processing queries in more detail.

In certain spots in the resolving process, event records are created that contain an identification of the event, a timestamp, optionally a value, and an indication whether this was the start or the end of an event. The start/end indication is relevant for events that describe stages in the resolving process.

At this point in time, event traces of queries can be exported using a protobuf log, or they can be written to the log file.

Note that this is an experimental feature that will change in upcoming releases.

Currently, an event protobuf message has the following definition:

  enum EventType {
                                                // Range 0..99: Generic events
    CustomEvent = 0;                            // A custom event
    ReqRecv = 1;                                // A request was received
    PCacheCheck = 2;                            // A packet cache check was initiated or completed; value: bool cacheHit
    AnswerSent = 3;                             // An answer was sent to the client

                                                // Range 100: Recursor events
    SyncRes = 100;                              // Recursor Syncres main function has started or completed; value: int rcode
    LuaGetTag = 101;                            // Events below mark start or end of Lua hook calls; value: return value of hook
    LuaGetTagFFI = 102;
    LuaIPFilter = 103;
    LuaPreRPZ = 104;
    LuaPreResolve = 105;
    LuaPreOutQuery = 106;
    LuaPostResolve = 107;
    LuaNoData = 108;
    LuaNXDomain = 109;
}
message Event {
  required uint64 ts = 1;
  required EventType event = 2;
  required bool start = 3;
  optional bool boolVal = 4;
  optional int64 intVal = 5;
  optional string stringVal = 6;
  optional bytes bytesVal = 7;
  optional string custom = 8;
}
repeated Event trace = 23;

Event traces can be enabled by either setting event-trace-enabled or by using the rec_control subcommand set-event-trace-enabled.
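A sketch of both ways follows; the meaning of the values (0 disabled, 1 write to log, 2 export via protobuf, 3 both) is an assumption based on the 4.6 documentation, so verify it for your version:

```ini
# recursor.conf: enable event tracing at startup (1 = write to log)
event-trace-enabled=1
```

At runtime the same can be achieved without a restart via rec_control set-event-trace-enabled 1.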

An example of a trace (timestamps are relative in nanoseconds) as shown in the logfile:

- ReqRecv(70);
- PCacheCheck(411964);
- PCacheCheck(416783,0,done);
- SyncRes(441811);
- SyncRes(337233971,0,done);
- AnswerSent(337266453)

The packet cache check produces two events. The first signals the start of the packet cache lookup, and the second the completion of the lookup with result 0 (not found). The SyncRes event also has two entries. The value (0) is the return value of the SyncRes function.

An example of a trace with a packet cache hit:

- ReqRecv(60);
- PCacheCheck(22913);
- PCacheCheck(113255,1,done);
- AnswerSent(117493)

Here it can be seen that the packet cache check returns 1 (found).

An example where various Lua related events can be seen:

ReqRecv(150);
PCacheCheck(26912);
PCacheCheck(51308,0,done);
LuaIPFilter(56868);
LuaIPFilter(57149,0,done);
LuaPreRPZ(82728);
LuaPreRPZ(82918,0,done);
LuaPreResolve(83479);
LuaPreResolve(210621,0,done);
SyncRes(217424);
LuaPreOutQuery(292868);
LuaPreOutQuery(292938,0,done);
LuaPreOutQuery(24702079);
LuaPreOutQuery(24702349,0,done);
LuaPreOutQuery(43055303);
LuaPreOutQuery(43055634,0,done);
SyncRes(80470320,0,done);
LuaPostResolve(80476592);
LuaPostResolve(80476772,0,done);
AnswerSent(80500247)

There is no packet cache hit, so SyncRes is called, which does a couple of outgoing queries.