OARC 39 is planned to be a hybrid in-person and online workshop and will have a joint Programme with the 47th CENTR Technical Workshop!
DNS-OARC is a non-profit, membership organization that seeks to improve the security, stability, and understanding of the Internet's DNS infrastructure. Part of these aims are achieved through workshops.
DNS-OARC Workshops are open to OARC members and to all other parties interested in DNS operations and research.
Social Media hashtag: #OARC39
Sponsorship opportunities for OARC 39 are available. Details at:
Annual Workshop Patrons for 2022 are available. Details at:
The intent of this talk is to review many of the differences that I personally have experienced while working with a wide variety of DNS service providers. Because of corporate acquisitions by Salesforce, we need to bring the acquired companies' domain name assets into our own, handling the transfer of domain registrations, registrars, and operations. While this is often a fairly straightforward process, something unusual almost always comes up, even in otherwise small acquisitions.
While I will end up covering many things that make my life as an engineer and administrator harder, this talk is not about naming and shaming. I do not intend to call out any particular providers, only to highlight various features that can ease or hinder getting to the end goal.
Areas to be covered include:
- Nameserver requirements
- Documentation requirements
- DNSSEC requirements
- Transfer locks
- Enhanced transfer locks
- Super double plus bonus transfer locks
- Accessing transfer codes
- Delegated access
- DNSSEC, again
- Hosting by registrar
- Change management
- Ordering issues
- DNSSEC, again
The combination of DNS over HTTPS and HTTP Server Push opens up the potential of having a server push a DNS response to the client without the client having to perform a DNS query. This has received a lukewarm reception so far in the DNS community, but there are some interesting aspects to this approach in terms of enhanced privacy and improved latency.
One of the main challenges facing IoT today is security. The constrained nature of IoT devices prevents them from using the security solutions common on the Internet. Constrained IoT devices cannot use the Public Key Infrastructure with X.509 certificates to establish secure sessions. Moreover, the idea of self-signed certificates, with trust based on a single private trusted CA, does not scale. The Domain Name System (DNS), using the DNS-based Authentication of Named Entities protocol (DANE) and DNS's security extensions (DNSSEC), can help create the sought-after Public Key Infrastructure (PKI) for IoT. With a concrete example, this presentation will explain how DNS can deliver IoT PKI functions based on DANE, backed by DNSSEC. The implementation is based on two drafts in the DANCE (DANE Authentication for Network Clients Everywhere) WG at the IETF.
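As a rough illustration of the DANE mechanism the abstract refers to, the sketch below shows how the certificate association data of a common TLSA record variant (usage 3 "DANE-EE", selector 1 "SubjectPublicKeyInfo", matching type 1 "SHA-256") is derived. The key bytes and the owner name are hypothetical placeholders, not taken from the presentation.

```python
import hashlib

def tlsa_3_1_1(spki_der: bytes) -> str:
    """Certificate association data for a TLSA record with usage=3
    (DANE-EE), selector=1 (SubjectPublicKeyInfo), matching-type=1
    (SHA-256): the hex-encoded SHA-256 digest of the DER-encoded key."""
    return hashlib.sha256(spki_der).hexdigest()

# Hypothetical key bytes, for illustration only; a real record would hash
# the device's actual DER-encoded SubjectPublicKeyInfo.
spki = b"hypothetical-der-encoded-public-key"
print(f"_443._tcp.device.example. IN TLSA 3 1 1 {tlsa_3_1_1(spki)}")
```

A client following DANE would fetch this record (validated via DNSSEC) and compare the digest against the public key presented in the TLS handshake, with no external CA involved.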
Most clients, servers and test tools in the Domain Name System (DNS) ecosystem today strive to get the DNS protocol implementation as correct as possible.
This is a particularly difficult effort for DNS test tools, such as Zonemaster, which require a specific infrastructure to ascertain their own correctness.
Testing such tools is traditionally done by having DNS servers serve specially crafted zones containing malformed Resource Records (RR), invalid DNS Security Extensions (DNSSEC) signatures, or other invalid data.
However, the server itself also needs to be predictably faulty in order to elicit a particular response from the testing tool.
Hence there was a need for a DNS server that offers a choice between a correct implementation and a faulty implementation of some aspect of the protocol. We named it "Intentionally Broken DNS" (IBDNS).
Although this project is still a work in progress, this presentation by its author from Afnic Labs will introduce the project and show how it has already uncovered a bug involving a subtle edge case in a DNS test tool.
The Internet's naming system (DNS) is a hierarchically structured database, with hundreds of millions of domains in a radically distributed management architecture. The distributed nature of the DNS is the primary factor that allowed it to scale to its current size, but it also brings security and stability risks. The Internet standards community (IETF) has published several operational best practices to improve DNS resilience, but operators must make their own decisions that trade off security, cost, and complexity. Since these decisions can impact the security of billions of Internet users, ICANN has recently proposed an initiative to codify best practices into a set of global norms to improve security: the Knowledge-Sharing and Instantiating Norms for DNS and Naming Security (KINDNS). A similar effort for routing security -- Mutually Agreed Norms for Routing Security (MANRS) -- provided inspiration for this effort. The MANRS program encourages operators to voluntarily commit to a set of practices that will improve collective routing security -- a challenge when conforming with these practices does not generate a clear return on investment for operators.

One challenge for both initiatives is independent verification of conformance with the practices. The KINDNS conversation has just started, and stakeholders are still debating what should be in the set of practices. At this early stage, we analyze possible best practices in terms of their measurability by third parties, including a review of DNS measurement studies and available data sets.
We consider how the DNS security and privacy landscape has evolved over time, using data collected annually at A-root between 2008 and 2021. We consider issues such as deployment of security and privacy mechanisms, including source port randomization, TXID randomization, DNSSEC, and QNAME minimization. We find that achieving general adoption of new security practices is a slow, ongoing process. Of particular note, we find a significant number of resolvers lacking nearly all of the security mechanisms we considered, even as late as 2021. Specifically, in 2021, over 4% of the resolvers analyzed were unprotected by either source port randomization, DNSSEC validation, DNS cookies, or 0x20 encoding. Encouragingly, we find that the volume of traffic from resolvers with secure practices is significantly higher than that of other resolvers.
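Of the mechanisms surveyed above, 0x20 encoding is perhaps the least widely known. A minimal sketch of the idea: the resolver randomizes the ASCII case of the query name, and accepts a response only if the echoed name matches case-exactly, adding entropy that an off-path spoofer must guess. The names used are placeholders.

```python
import random

def encode_0x20(qname: str, rng: random.Random) -> str:
    """Randomize the ASCII case of each letter in the query name.
    A legitimate authoritative server copies the question name into its
    response byte-for-byte, preserving the random case pattern."""
    return "".join(c.upper() if rng.random() < 0.5 else c.lower() for c in qname)

def matches_0x20(sent: str, received: str) -> bool:
    """Accept a response only if the echoed QNAME matches case-exactly."""
    return sent == received

rng = random.Random()
sent = encode_0x20("www.example.com", rng)
print(matches_0x20(sent, sent))  # exact echo is accepted
print(matches_0x20("WWW.example.com", "www.example.com"))  # case mismatch rejected
```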
Delegate badges required
The Border Gateway Protocol (BGP) and the Domain Name System (DNS) are two key protocols for the working of the Internet. When these protocols were developed, security properties such as integrity were not yet an important factor. However, after various outages caused by this lack of security, the protocols needed to be secured. The Resource Public Key Infrastructure (RPKI) was developed to bring integrity to routing. Using RPKI, an address prefix and size can be certified and signed in a Route Origin Authorisation (ROA), which certifies that a prefix of a set size may be announced by a specific AS. Using that mechanism, operators can validate routes upon reception -- called Route Origin Validation (ROV) -- giving them the ability to drop invalid announcements.
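The ROV logic described above can be sketched in a few lines. This is a simplified rendering of the RFC 6811 semantics (valid / invalid / not-found), not the research's measurement code; the prefix and AS number are illustrative placeholders.

```python
import ipaddress

def rov_state(announced_prefix: str, origin_as: int, roas: list) -> str:
    """Simplified Route Origin Validation (RFC 6811 semantics).
    roas: list of (prefix, max_length, asn) tuples.
    Returns 'valid', 'invalid', or 'not-found'."""
    ann = ipaddress.ip_network(announced_prefix)
    covered = False
    for prefix, max_len, asn in roas:
        roa_net = ipaddress.ip_network(prefix)
        if ann.version == roa_net.version and ann.subnet_of(roa_net):
            covered = True  # at least one ROA covers this announcement
            if asn == origin_as and ann.prefixlen <= max_len:
                return "valid"
    return "invalid" if covered else "not-found"

roas = [("192.0.2.0/24", 24, 64500)]  # hypothetical ROA
print(rov_state("192.0.2.0/24", 64500, roas))      # valid
print(rov_state("192.0.2.0/25", 64500, roas))      # invalid: beyond maxLength
print(rov_state("198.51.100.0/24", 64500, roas))   # not-found: no covering ROA
```

The sub-prefix attack measured in this research exploits exactly the "invalid" case: a more-specific announcement of a covered prefix is dropped only by networks that actually perform ROV.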
Until now, no research has been done into the state of ROV among authoritative DNS operators. This research presents a method to measure and analyze this, and answers the question: "What is the state of RPKI adoption on authoritative name servers?". To answer that main question, three subquestions have been defined:
To measure the state, three entities have been created. There is one sender, which sends on the order of thousands of DNS requests per second to authoritative name servers. The sender is fed with a list of authoritative name server addresses, both IPv4 and IPv6, provided by OpenINTEL. The list includes gTLDs, ccTLDs, and the Alexa and Cisco Umbrella top one million. There are two collectors, both with the same IPv4 and IPv6 addresses. One is a valid collector residing in RPKI-valid prefixes; the other is an invalid collector residing in invalid prefixes, namely more-specifics. With this setup, it is possible to perform a controlled sub-prefix attack.
The collectors listen for DNS responses to the queries sent by the sender. Depending on which collector a response arrives at, we can tell whether the authoritative name server resides in an AS that implements ROV. A total of 731,113 IPv4 and 79,701 IPv6 authoritative name servers were queried. The measurements were taken between the 17th and 26th of July. The analysis shows that 42.87% of IPv4-reachable authoritative name servers are protected by ROV, and 75.06% are covered by a ROA. For IPv6, the figures are 39.20% and 79.76% respectively. The analysis also shows that IPv6-reachable domains are proportionally better protected than IPv4-reachable domains: 73.14% for IPv6 versus 62.48% for IPv4.
This research shows more than just an answer to the questions at hand. Responses from individual authoritative name servers are seen on both the valid and invalid collectors over the course of a day, showing that the Internet is a very dynamic place. The research also reveals the weakest-link problem.
To aid reproducibility, future research, and further measurements, the code is publicly available:
The planned presentation will include new measurements to compare if there are any differences over time.
IPv6-only networks are expanding, with draft-xie-v6ops-framework-md-ipv6only-underlay being a recent example. For IPv6-only networks to be widely deployable, software must be able to function in IPv6-only networks. However, according to RFC 3901 (BCP 91), "every recursive name server SHOULD be either IPv4-only or dual stack," meaning recursive resolvers should not be IPv6-only. This is because some authoritative servers do not support IPv6. In an experiment, 15% of the top 500 domains failed to resolve via an IPv6-only resolver because the authoritative server was IPv4-only.
We propose an IPv6-only network-compatible recursive resolver implementation. With this implementation, the IPv6-only recursive resolver can send queries to IPv4-only authoritative name servers. This is accomplished by the resolver converting IPv4 addresses to IPv6 by adding the Pref64::/n prefix, which directs the packets through the NAT64, where the IPv6 packets are translated to IPv4.
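The address synthesis step can be sketched as follows, assuming the common /96 case with the well-known NAT64 prefix (RFC 6052): the IPv4 address simply occupies the low 32 bits of the synthesized IPv6 address. This is an illustration of the general technique, not the proposed resolver's actual code.

```python
import ipaddress

def synthesize_nat64(ipv4: str, pref64: str = "64:ff9b::/96") -> ipaddress.IPv6Address:
    """Embed an IPv4 address in a /96 NAT64 prefix (RFC 6052):
    the IPv4 address becomes the low 32 bits of the IPv6 address."""
    net = ipaddress.ip_network(pref64)
    assert net.prefixlen == 96, "this sketch handles only the common /96 case"
    v4 = ipaddress.IPv4Address(ipv4)
    return ipaddress.IPv6Address(int(net.network_address) | int(v4))

# A resolver that learned only an A record for an IPv4-only authoritative
# server can reach it over IPv6 via NAT64 using the synthesized address:
print(synthesize_nat64("192.0.2.1"))  # 64:ff9b::c000:201
```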
The potential impact of Encrypted Client Hello (ECH) on public and private network operators and others
The presentation includes a brief overview of the proposed Encrypted Client Hello (ECH) extension to TLS 1.3, explaining its purpose and current state of development. It goes on to consider some of the implications of ECH being deployed on public and private networks, looking in particular at some of the potential operational impacts on the private networks of education and financial services organisations, as well as identifying some of the main issues affecting fixed and mobile network operators.
Having looked at some of the security and threat detection challenges posed by ECH, the presentation goes on to consider the broader, unintended consequences for end users and device owners. It concludes by recommending how interested parties at DNS-OARC can engage in the development of ECH so that any concerns they may have are taken into account.
What did we do to make it possible to add a new nameserver to our anycast network with one click of a button, and what does this provide?
We were looking to create a stable anycast platform with the right balance between stability and being able to get new features into production.
IETF DNSOP working group updates.
IETF DPRIVE working group updates.
In 2006, RFC 4255 introduced a resource record named SSHFP that holds SSH host key verification fingerprints. To prevent man-in-the-middle attacks, an SSH server's host key fingerprint should be verified by the client. While the manual verification process is prone to errors or ignorance by the user, SSHFP records eliminate any manual interaction. However, SSHFP records must securely reach the client and provide the correct host key verification fingerprint.
In our paper "Oh SSH-it, what's my fingerprint? A Large-Scale Analysis of SSH Host Key Fingerprint Verification Records in the DNS" (accepted at CANS 2022, preprint available) we conduct a large-scale Internet study (Tranco 1M and 500 million domain names from Certificate Transparency logs). The results show that only about 1 in 10,000 domains has SSHFP records. Further, more than half of them are deployed without DNSSEC, drastically reducing their security benefits.
The presentation aims to raise awareness of this (niche) SSHFP record and to present the paper's methodology and results. To end on a positive note, we will show a proper deployment and possible improvements for current tools (i.e. openssh-client).
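For readers unfamiliar with the record format under study, a minimal sketch of how SSHFP RDATA is constructed (RFC 4255, with SHA-256 fingerprints and the Ed25519 algorithm number from the later updates): the fingerprint is the SHA-256 digest of the raw host key blob. The key bytes and hostname below are hypothetical placeholders.

```python
import hashlib

def sshfp_rdata(host_key_blob: bytes, algorithm: int = 4) -> str:
    """SSHFP RDATA: <algorithm> <fp-type> <fingerprint>.
    algorithm 4 = Ed25519, fp-type 2 = SHA-256 over the raw key blob."""
    fingerprint = hashlib.sha256(host_key_blob).hexdigest()
    return f"{algorithm} 2 {fingerprint}"

# Hypothetical key blob for illustration; a real blob is the base64-decoded
# second field of the server's public host key file.
blob = b"hypothetical-ed25519-host-key-blob"
print(f"host.example. IN SSHFP {sshfp_rdata(blob)}")
```

With `VerifyHostKeyDNS yes`, an OpenSSH client can check the server's presented key against such a record, which is only trustworthy when the record is protected by DNSSEC, as the paper emphasizes.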
Conflict of Interest: Nils Wisiol
Registrants of critical domains subject to substantial monetary losses per minute of downtime are likely to perceive significant DNSSEC deployment barriers due to long error recovery times and lack of pre-publication validation of DS records.
The presentation suggests potential practices to reduce the risk and thus lower DNSSEC adoption barriers.
In this paper, we propose Phoenix Domain, a general and novel attack that allows adversaries to keep a revoked malicious domain continuously resolvable at scale, reviving an old, mitigated attack, Ghost Domain. Phoenix Domain has two variations and affects all mainstream DNS software and public DNS resolvers, because it does not violate any DNS specifications or best security practices. The attack is made possible by systematically "reverse engineering" the cache operations of 8 DNS implementations, revealing new attack surfaces in the domain name delegation process. We select 41 well-known public DNS resolvers and prove that all surveyed DNS services are vulnerable to Phoenix Domain, including Google Public DNS and Cloudflare DNS. Extensive measurement studies are performed with 210k stable and distributed DNS recursive resolvers, and the results show that even one month after domain name revocation and cache expiration, more than 25% of recursive resolvers can still resolve the domain. The proposed attack provides an opportunity for adversaries to evade the security practice of malicious domain take-down. We have reported the discovered vulnerabilities to all affected vendors and suggested 6 types of mitigation approaches. Until now, 7 DNS software providers and 14 resolver vendors, including BIND, Unbound, and Cloudflare DNS, have confirmed the vulnerabilities, and some of them are implementing and publishing mitigation patches based on our suggestions. In addition, 9 CVE numbers have been assigned. The study calls for standardization to address the issue of how to revoke domain names securely and maintain cache consistency.
Exploring CIRA's growth from a single-customer (.CA) unicast DNS platform to an anycast platform supporting 480+ TLDs (almost one third of the root zone), and how customer requirements caused a fundamental change in thinking along the way.
I plan to delve into 4 areas:
1) Zone Propagation architecture, monitoring points, self-healing, and alerting.
2) DNS Availability - CIRA's development of its own monitoring platform
3) DNS RTT - How CIRA decides on locations/transits/IXPs and the feedback loop provided by both RIPE Atlas and CIRA's own monitoring platform.
4) DSC/PCAP Delivery - Delivering DSC files and PCAPs that are complete and timely.
With the DNSThought project we take longitudinal measurements of resolver capabilities, such as QNAME minimization and support for all the DNSSEC algorithms, using RIPE Atlas probes. It grew out of the DNS Measurements Hackathon organized by the RIPE NCC in April 2017.
Over time DNSThought has collected valuable historical information, but it is still displayed as it was when it came out of the hackathon in 2017: not very user friendly, and certainly not geared towards large amounts of data.
Recently we started to address this by working with a professional specialized in data visualization, funded by the RIPE NCC Community funds. With this lightning talk I want to ask the audience for feedback on what a usable user interface would look like.
A brief talk about the history of DNS at Meta: how things evolved over the years, a bit of a deep dive into the engineering decisions we made, and the announcement of the open sourcing of our DNS server.