OARC 40 is planned to be a hybrid in-person and online workshop.
DNS-OARC is a non-profit, membership organization that seeks to improve the security, stability, and understanding of the Internet's DNS infrastructure. Part of these aims are achieved through workshops.
DNS-OARC Workshops are open to OARC members and to all other parties interested in DNS operations and research.
Social Media hashtag: #OARC40
Mattermost Chatroom: Workshops on chat.dns-oarc.net (sign-up required)
On-site knowledge exchange session on Wednesday 15 Feb at 2PM local time about how to display DSC data in Grafana.
Sponsorship opportunities for OARC 40 are available. Details at:
https://www.dns-oarc.net/workshop/sponsorship-opportunities
Annual Workshop Patrons for 2023 are available. Details at:
https://www.dns-oarc.net/workshop/patronage-opportunities
Short presentation providing updates since the presentation at OARC 38, covering:
* new learnings
* trends in adoption of DNS protection mechanisms
* coverage and future plans in Google Public DNS
This presentation describes the need for collaborative multi-stakeholder involvement in research and modeling to inform the DNSSEC post-quantum cryptography (PQC) algorithm standardization and transition agenda. Some key issues that need to be researched are described. These include DNSSEC's need for long-term cryptographic resiliency and the impact that signatures from NIST's selected PQC algorithms will have on DNS transport and on DNSSEC-related memory and processing requirements. Categories of DNS protocol enhancements to address these issues are postulated. The need for collaborative multi-stakeholder research and modeling is then justified as a method for assessing the pros and cons of transport options and potential protocol enhancements. The talk concludes with a proposal for a research and modeling agenda to support the pros and cons assessment.
Several years ago, we completed a large scale deployment of DNSSEC across many zones on multiple DNS providers, in-house servers, and commercial appliances. While largely successful, we faced a number of significant operational challenges too. This talk will walk through some of our noteworthy operational experiences and challenges with the deployment. It will cover topics like configuration, support for standardized vs proprietary features, zone size scaling, bugs, transport issues, debugging processes, and how problems were visible from the point of view of customers. A sizeable part of the talk will also discuss subtle DNSSEC bugs across many diverse implementations (even from quite mature DNS companies, which was quite surprising to us). It will end with some general advice and recommendations for others attempting to deploy DNSSEC on a large scale.
The DNSCrypt protocol has been in existence since 2013 and has received considerable attention from the DNS community, with several major DNS services providing support. The protocol also has several established client- and server-side open-source implementations in different programming languages. While not providing end-to-end DNS security, it is designed to protect the ‘last mile’ traffic between a client and a recursive name server (resolver) against eavesdropping, spoofing, and man-in-the-middle attacks. DNSCrypt is designed to provide cryptographic security for communication between a client and its first-level resolver while being efficient and adding minimal overhead to plain-text queries.
Several more recent DNS protocol extensions, such as DNS over TLS (DoT), DNS over HTTPS (DoH) and, most recently, DNS over QUIC (DoQ), were designed to protect DNS traffic, each with its own target protection context and limitations.
In this presentation, we describe the current state of the art and adoption of the DNSCrypt protocol and provide a comparison with the more recent protocols for protecting DNS traffic. We also touch upon our current efforts to prepare a DNSCrypt RFC and to extend the protocol to version 3, using the P-224 or P-256 elliptic curve digital signature algorithm to authenticate sessions and AES-GCM authenticated encryption for DNS traffic.
Public hosting services make it convenient for domain owners to build web applications with better scalability and security. However, if a domain name points to released service endpoints (e.g., nameservers allocated by a provider), adversaries can take over the domain by applying for the same endpoints. Such a threat is called hosting-based domain takeover. In recent years, a series of domain takeover incidents with severe impact have been reported, and even high-profile websites such as Microsoft have been affected. However, until now there has been no effective detection system that can identify these vulnerable domains on a large scale.
In this paper, we introduce a novel framework, DareShark, for effective domain takeover detection. Compared to previous work, DareShark expands the detection scope and improves detection efficiency by: 1) systematically identifying vulnerable hosting services with a semi-automated method; and 2) detecting vulnerable domains by passively reconstructing domain resolution chains. We evaluate the effectiveness of DareShark and ultimately detect 10,351 Top-1M subdomains vulnerable to domain takeover, over 8 times more than previous findings. Specifically, DareShark allows us to detect the subdomains of Tranco Top-1M sites on a daily basis. In addition, we perform an in-depth security analysis of the affected vendors, like Amazon and Alibaba, and gain a suite of new insights, including flawed implementations of domain validation. Following responsible disclosure policy, we have reported details to the affected vendors, and some of them have adopted our mitigations.
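To illustrate the second idea, the sketch below reconstructs a domain's resolution chain from passively collected records and flags chains that dangle at a re-registrable hosting endpoint. This is our own toy reconstruction of the general technique, not DareShark's code; all names, suffixes, and records are hypothetical.

```python
# Passively collected records: name -> (rtype, target)
records = {
    "blog.example.com": ("CNAME", "old-site.pages.example-host.io"),
    "shop.example.com": ("CNAME", "shop.example-cdn.net"),
    "shop.example-cdn.net": ("A", "192.0.2.10"),
}

# Endpoint suffixes of hosting services where released names can be
# claimed by anyone (hypothetical examples).
VULNERABLE_SUFFIXES = (".pages.example-host.io",)

def resolution_chain(name, records, max_depth=10):
    """Follow CNAMEs until we reach a terminal name or a dead end."""
    chain = [name]
    for _ in range(max_depth):
        entry = records.get(chain[-1])
        if entry is None or entry[0] != "CNAME":
            break
        chain.append(entry[1])
    return chain

def is_takeover_candidate(name, records):
    """Chain ends at a vulnerable hosting endpoint with nothing behind it."""
    tail = resolution_chain(name, records)[-1]
    dangling = tail not in records  # nothing resolves the terminal name
    return dangling and tail.endswith(VULNERABLE_SUFFIXES)

print(is_takeover_candidate("blog.example.com", records))  # True: dangling
print(is_takeover_candidate("shop.example.com", records))  # False: A record exists
```

Because the chains are reconstructed from passive data, no active probing of the candidate domains is needed until the final confirmation step.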
NS1 has introduced a new platform feature providing its authoritative DNS customers with a live telemetry stream of deep DNS analytics like traffic volume by QNAME, Top client IPs, Geo Location information, Client Subnet, ASNs, and much more. Customers get broad traffic analysis as well as the ability to create custom “edge queries” that represent a specific dimension of telemetry relevant to them. A key difference from other solutions is that the traffic is analyzed in real time at the edge, and the result is delivered directly to customer time series databases using OpenTelemetry.
All this is made possible by Orb (https://orb.community), a free and open source network observability platform created at NS1 Labs. As discussed in previous OARC talks, Orb is based on the open source pktvisor and combines deep traffic analysis (pcap, dnstap, flow) with dynamic policies across a fleet of analyzers, all controlled and automated centrally, allowing delivery of actionable telemetry to modern observability stacks through OpenTelemetry.
In this practical talk we will walk you through how Orb is deployed at NS1, from the control plane in Kubernetes to the anycast edge observability architecture, which analyzes more than 1 million queries per second in real time. We’ll also discuss how NS1 is able to efficiently process multi-tenant traffic at wire speed to deliver valuable telemetry to both internal teams for operational purposes, as well as directly to external customer databases.
Throughout, we’ll point out how OARC members and the broader community could also be using open source Orb to provide similar telemetry in support of their own use cases, for free.
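The kind of per-dimension telemetry described above can be approximated with a very small aggregation sketch: count queries per window by QNAME and client IP, then keep only the top-k of each. This is an illustrative shape only, not Orb's or pktvisor's actual API.

```python
from collections import Counter

def analyze_window(queries, k=3):
    """Summarize one time window of queries into top-k metrics."""
    qnames = Counter(q["qname"] for q in queries)
    clients = Counter(q["client"] for q in queries)
    return {
        "top_qnames": qnames.most_common(k),
        "top_clients": clients.most_common(k),
        "total": len(queries),
    }

window = [
    {"qname": "example.com.", "client": "198.51.100.1"},
    {"qname": "example.com.", "client": "198.51.100.2"},
    {"qname": "example.org.", "client": "198.51.100.1"},
]
summary = analyze_window(window)
print(summary["top_qnames"][0])  # ('example.com.', 2)
```

Running this at the edge and shipping only the small per-window summaries, rather than raw packets, is what makes wire-speed multi-tenant analysis tractable.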
The transition to IPv6 has been a visible topic for more than 20 years, and much energy has been expended on it at conferences. Most discussions have focused on IPv6 addressing, but the DNS is just as essential to the proper functioning of the IPv6 Internet. This talk will explore the IPv6 readiness of the global DNS, including the long tail. A huge dataset of anonymized query traffic gathered from service providers around the world was categorized and queried with an IPv6-only resolver to check reachability. The results reveal the state of the DNS for a very large number of domains, including popular domains.
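A minimal sketch of one readiness check behind such a study: a domain can only be resolved by an IPv6-only resolver if at least one of its nameservers is reachable over IPv6 (approximated here by the nameserver having a AAAA record). The data, names, and the single-level check are illustrative assumptions; a real measurement must walk the whole delegation chain.

```python
# Hypothetical pre-collected data: domain -> nameservers, and
# nameserver -> set of address record types observed.
ns_for = {
    "example.com": ["a.ns.example.net", "b.ns.example.net"],
    "legacy.example": ["ns1.v4only.example"],
}
addr_types = {
    "a.ns.example.net": {"A"},
    "b.ns.example.net": {"A", "AAAA"},
    "ns1.v4only.example": {"A"},
}

def ipv6_reachable(domain):
    """True if any nameserver for the domain has a AAAA record."""
    return any("AAAA" in addr_types.get(ns, set()) for ns in ns_for[domain])

print(ipv6_reachable("example.com"))     # True: b.ns has a AAAA record
print(ipv6_reachable("legacy.example"))  # False: IPv4-only long tail
```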
EduDig is a tool for making DNS education a little simpler. It is a web-based dig, like many already seen on the Internet today. The crucial difference is our goal of making the dig presentation highlightable, with an information space to explain the contents to the student/user. We have a working proof of concept, and more work is planned in the coming months. Exposure and feedback will help us shape EduDig to be as usable as possible for a larger audience.
Delegate badges required (paid event)
DNSSEC has been standardized over a couple of decades to ensure the integrity of DNS messages. However, over those two decades, DNSSEC has been deployed at only around 4% of second-level domains in .com, .net, and .org. Moreover, the process of uploading DNSSEC-related records to parent zones has turned out to be difficult in practice, which results in pervasive mismanagement.
To provide the integrity of DNS messages without such complexities, we propose a new way that enables individual DNS zones to guarantee the integrity of their DNS records without any dependencies on other entities in the DNS infrastructure (e.g., parent zones or registrars).
We propose to leverage a PKIX certificate issued by a certificate authority (CA), from which a domain generates signatures for its resource records using its private key (corresponding to its public key in the certificate). For this purpose, we reuse existing DNS record types (i.e., DNSKEY, RRSIG and CERT records).
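The core idea can be shown with a toy sign/verify round trip: the zone signs a canonical form of its RRset with the private key matching the public key in its certificate, and a validator checks the signature with the public key alone, with no dependency on the parent zone. Textbook RSA with tiny demonstration numbers is used here purely for illustration; it is not the proposal's actual algorithm, and real deployments would use a proper cryptographic library.

```python
import hashlib

# Tiny textbook-RSA keypair (educational only: n = 61 * 53).
n, e, d = 3233, 17, 2753

def rr_digest(rrset):
    """Hash a canonical form of the RRset down to an integer below n."""
    canonical = "\n".join(sorted(rrset)).encode()
    return int.from_bytes(hashlib.sha256(canonical).digest(), "big") % n

def sign_rrset(rrset):
    """Done by the zone, with the private exponent d."""
    return pow(rr_digest(rrset), d, n)

def verify_rrset(rrset, sig):
    """Done by the validator, with only the public key (n, e)."""
    return pow(sig, e, n) == rr_digest(rrset)

rrset = ["www.example. 300 IN A 192.0.2.1"]
sig = sign_rrset(rrset)
print(verify_rrset(rrset, sig))  # True: signature matches the RRset
```

A tampered RRset would hash to a different digest, so verification would fail with overwhelming probability; in the proposal, the signature and the certificate travel in existing record types (RRSIG and CERT).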
Since version 4.5, PowerDNS Recursor has implemented an aggressive NSEC/NSEC3 cache, as described in RFC 8198. Other recursive resolvers also have aggressive NSEC/NSEC3 cache implementations.
We will discuss the effectiveness of an aggressive cache for both NSEC and NSEC3 zones. It turns out that especially the NSEC3 results need extra study.
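The mechanism under study can be sketched in a few lines: each cached NSEC record proves that no name exists between its owner name and its "next" name, so the resolver can synthesize NXDOMAIN for any query falling in a cached gap without contacting the authoritative server. This toy uses plain string comparison in place of canonical DNS name ordering, which only holds for this simplified single-label example.

```python
# Cached NSEC records for a signed zone: (owner, next_name) gaps.
nsec_cache = [
    ("alpha.example.", "delta.example."),
    ("delta.example.", "mike.example."),
    ("mike.example.", "zulu.example."),
]

def covered_by_cache(qname):
    """Return True if a cached NSEC proves qname does not exist."""
    for owner, nxt in nsec_cache:
        if owner < qname < nxt:
            return True  # qname falls in a proven-empty gap
    return False

print(covered_by_cache("golf.example."))   # True: inside the delta..mike gap
print(covered_by_cache("delta.example."))  # False: that name itself exists
```

NSEC3 complicates this picture because the gaps are between hashed names, so the resolver must hash each query name before checking coverage, which is part of why the NSEC3 results warrant extra study.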
The security extensions of the DNS (DNSSEC) are the only effective measure to protect the integrity of the naming system of the Internet. More than 17 years after the publication of the current DNSSEC standards, deployment at domain names and recursive resolvers still leaves room for improvement. Some report that only 30% of the Internet's population rely on validating resolvers. The reasons for this low deployment-rate at resolvers are unclear, but some operators have raised concerns about operational overhead.
We study as the first why recursive resolver operators do not enable DNSSEC validation. We carry out a survey among 120 operators, serving more than 200 million clients worldwide. We show that there are two major reasons for not enabling validation: scepticism about DNSSEC, and the fear of high operational overhead. We find that the real operational overhead is significantly lower than the expected overhead. Additionally, we discuss how other concerns raised by operators could be addressed in order to improve deployment of DNSSEC validation.
The Domain Name System (DNS) provides a scalable name resolution service. It uses extensive caching to improve its resiliency and performance; every DNS record contains a time-to-live (TTL) value, which specifies how long a DNS record can be cached before being discarded. Since the TTL can play an important role in both DNS security (e.g., determining a DNSSEC-signed response’s caching period) and performance (e.g., the responsiveness of CDN-controlled domains), it is crucial to measure and understand how resolvers violate TTLs. Unfortunately, measuring how DNS resolvers manage TTLs at scale remains difficult, since it usually requires the cooperation of many nodes spread across the globe. In this paper, we present a methodology that measures TTL-violating resolvers at scale using an HTTP/S proxy service called BrightData, which allows us to cover more than 27K resolvers in 9.5K ASes. Out of the 8,524 resolvers that we could measure through at least five different vantage points, we find that 8.74% of them extend the TTL arbitrarily, which can potentially degrade the performance of at least 38% of the popular websites that use CDNs. We also report that 43.1% of DNSSEC-validating resolvers incorrectly serve DNSSEC-signed responses from the cache even after their RRSIGs have expired.
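The detection logic behind such a measurement can be sketched as follows: query the same name repeatedly through one resolver and compare the TTLs it reports against the known authoritative TTL. This is our own toy reconstruction of the general approach, not the paper's code, and the heuristics are deliberately simplified.

```python
AUTHORITATIVE_TTL = 60  # seconds, known because we control the test zone

def classify_resolver(observations):
    """observations: list of (seconds_since_first_answer, reported_ttl).

    A compliant cache never reports more than the authoritative TTL and
    re-fetches (reporting a fresh TTL) once the record has aged out.
    """
    for elapsed, ttl in observations:
        if ttl > AUTHORITATIVE_TTL:
            return "ttl-extending"  # reports more than the origin TTL
        if elapsed > AUTHORITATIVE_TTL and ttl != AUTHORITATIVE_TTL:
            # still serving the old entry after it should have expired
            return "ttl-extending"
    return "compliant"

print(classify_resolver([(0, 60), (30, 30), (70, 60)]))  # compliant: re-fetched
print(classify_resolver([(0, 60), (30, 30), (70, 50)]))  # ttl-extending
```

Running the same probes from many vantage points (here, exit nodes of the proxy service) is what turns this per-resolver check into an at-scale measurement.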
ICANN recently launched the RFC Annotations project to help DNS developers, protocol developers, and security researchers see annotated versions of the DNS-related RFCs. The annotations include in-line descriptions of how RFCs have been updated and where there has been errata, but they also allow people in the DNS community to add comments to the RFCs for others to see. Such comments could include which implementations have implemented particular features, pointers to security-related issues that arise after the RFC is published, and problems found when implementing features in a protocol.
The project is an informal collection of DNS-related RFCs and an informal collection of annotations: all of it comes from the DNS community and not from the IETF. ICANN updates the project as new annotations come in and as new DNS-related RFCs are published (or discovered). The project can be seen at https://rfc-annotations.research.icann.org/.
ICANN is encouraging more members of the DNS technical community to contribute annotations to the project.
Drink is an authoritative name server intended for dynamic content, such as returning the IP address of its client. It is experimental but features many things, such as cookies, NSID, and the ability to fetch answers from REST services. It is robust and has reasonable performance. Of course, it is not a replacement for NSD or Knot, but it can be used to deploy fun services.
This talk will present Drink, and its peculiarities.
DNS exfiltration and tunneling tools exploit DNS to evade surveillance and masquerade online behavior. Identifying these events in real time proves challenging because efficient techniques are required to crack an encrypted message without impacting the performance of a resolver, which must also resolve non-malicious query volumes at a magnitude of up to millions of queries per second. In this talk we'll explore an elementary DNS tunneling detection algorithm that is efficient and clever enough to fit in many recursive DNS resolver code bases. To do that, we'll first explore DNS resolver caches like those in djbdns-1.05. We'll outline architectural decisions such as introducing two new caches, a real-time blocklist cache and a tunneling cache, highlighting the pros and cons of early and late detection techniques. Additionally, two probabilistic techniques will be discussed to identify unique counts and strings containing hidden messages with just enough confidence to make the detection of DNS tunneling and exfiltration events as easy as modifying a couple of threshold values. In closing, we'll discuss how Cisco Umbrella deployed a real-time DNS tunneling detection algorithm into its global resolver fleet and note a few lessons we learned while maintaining this algorithm over the past year.
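One common string-scoring technique in this space, sketched below as a hedged illustration (the threshold, length cutoff, and label choice are our own assumptions, not Umbrella's): tunneled data is usually encoded (base32/hex), so the first label of a tunneling QNAME has far higher character entropy than a human-chosen hostname, and a single tunable threshold separates the two.

```python
import math
from collections import Counter

ENTROPY_THRESHOLD = 3.5  # bits/char; an illustrative tuning knob
MIN_LABEL_LEN = 20       # short labels carry too little signal

def label_entropy(label):
    """Shannon entropy of the character distribution in one DNS label."""
    counts = Counter(label)
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_like_tunnel(qname):
    """Flag QNAMEs whose first label is long and high-entropy."""
    first = qname.split(".")[0]
    return len(first) > MIN_LABEL_LEN and label_entropy(first) > ENTROPY_THRESHOLD

print(looks_like_tunnel("www.example.com"))  # False: short, low entropy
print(looks_like_tunnel(
    "nvyq3zlmnvsxg2lomfxgc43vmnsgk3tlmzuw4zq.t.example.com"))  # True
```

In a resolver, a check like this would run per cache miss, with the two thresholds being exactly the kind of knobs the talk describes adjusting.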
Operators expect DNS servers to respond within microseconds if all the data to answer a given query are locally available. Some BIND operators have reported suspicions that their production servers sometimes pause query responses.
When we attempted to reproduce this in a lab environment, we found that standard benchmarking tools like dnsperf, resperf, and flamethrower do not provide sufficient granularity for latency measurements.
In this talk, we present a new feature in dnsperf that allows more fine-grained latency measurements, and we also present a new way to post-process dnsperf data into latency plots using the DNS Shotgun toolchain.
Using these new features, we were able to measure latency spikes in BIND servers during server management operations.
We confirm that some operations can cause answer latency to spike, and we present recommendations for BIND operators.
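The reason fine-grained data matters can be shown with a small post-processing sketch (our illustration, not dnsperf's or DNS Shotgun's actual output format): short latency spikes vanish in a mean or median but are obvious in tail percentiles computed from per-query samples.

```python
def latency_percentiles(samples_ms, percentiles=(50, 99, 99.9)):
    """Nearest-rank percentiles over per-query latency samples."""
    ordered = sorted(samples_ms)
    out = {}
    for p in percentiles:
        idx = min(len(ordered) - 1, int(len(ordered) * p / 100))
        out[p] = ordered[idx]
    return out

# 995 fast answers plus a handful of spikes during a management operation.
samples = [0.2] * 995 + [150.0] * 5
stats = latency_percentiles(samples)
print(stats[50])    # 0.2: the median completely hides the spike
print(stats[99.9])  # 150.0: the tail percentile exposes it
```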
In August 2017 the ICANN Root Server System Advisory Committee (RSSAC) published a “Technical Analysis of the Naming Scheme Used For Individual Root Servers” as RSSAC028, looking into different naming schemes for the root servers (including DNSSEC signing of the set) and performing risk analysis on them.
The first recommendation in the report: “Stick with the current scheme”. The report also recommends further study of feasibility and risk factors of the different naming schemes discussed in the document.
Since September, a consortium of people from NLnet Labs and SIDN Labs has been performing one of the recommended follow-up studies, looking into resolver behaviour for the different naming schemes. This involved extending a resolver testbed developed at ICANN to simulate the different root servers as closely as possible, which in turn involved a survey of the root server operators to collect the different OSes and software in use by the root servers.
This lightning talk will showcase this work and discuss the challenges we have faced, and still face, in doing the study.
* Shorter DS TTLs => shorter Mean Time to Recovery
* DS RRsets need to be rolled back or updated promptly
* No 24-hour-or-more downtime after emergency DS updates
* Note: cached validated child RRsets keep their existing TTLs!
* No expected impact on child zone query volume
* We’re studying the expected effect on parent (eTLD) zone query volumes
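The first point is back-of-the-envelope arithmetic: after an emergency DS update at the parent, a validating resolver may keep the stale DS cached for up to its remaining TTL, so the worst-case recovery time is bounded by the DS TTL. A tiny illustration (numbers are examples, not any TLD's actual values):

```python
def worst_case_recovery_hours(ds_ttl_seconds):
    """A stale DS can live in resolver caches for at most its full TTL."""
    return ds_ttl_seconds / 3600

print(worst_case_recovery_hours(86400))  # 24.0 hours with a 1-day DS TTL
print(worst_case_recovery_hours(3600))   # 1.0 hour with a 1-hour DS TTL
```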
A quick look at measurements providing insight into the level of centralisation of DNS service at resolvers and authoritative servers. The resolver view is presented on the APNIC Labs website, whereas the authority view is an initial snapshot of upcoming results.