OARC 46 will be a hybrid in-person and online workshop.

DNS-OARC is a non-profit, membership organization that seeks to improve the security, stability, and understanding of the Internet's DNS infrastructure. Part of these aims are achieved through workshops.
DNS-OARC Workshops are open to OARC members and to all other parties interested in DNS operations and research.
Social Media hashtag: #OARC46 #LoveDNS
Mattermost Chatroom: Workshops on chat.dns-oarc.net (sign-up here)
Workshop Patronages for 2026 are available. Details at:
https://www.dns-oarc.net/workshop/patronage-opportunities
DNS is mission-critical for enterprises, yet it is rarely their core business. Unlike TLD operators or public DNS providers, enterprises operate DNS within complex organizations shaped by hybrid infrastructures, regulatory pressure, fragmented ownership, and persistent skill shortages.
Drawing on recent industry analyst research, this session explores the real challenges enterprises face with DNS: not at the protocol level, but around ownership, resilience, auditability, and operational maturity. Beyond technical challenges, the session examines the organizational and human factors that influence how enterprise DNS is actually run day to day.
By connecting technical realities with organizational constraints, the session helps explain why real-world enterprise DNS deployments often diverge from the clean architectures and best practices we design. It provides a diagnostic view of systemic patterns that shape enterprise DNS operations and failure modes, helping to contextualize challenges that are often overlooked in protocol- and tooling-focused discussions.
A resilient Internet requires a resilient Domain Name System (DNS).
A resilient DNS ensures the continuous availability of many services and should withstand outages (e.g., power outages or cable cuts), attacks (e.g., DDoS), and technical disruptions while still maintaining integrity and confidentiality.
In this talk, we briefly introduce our project to measure DNS resilience.
The Internet standards community (IETF) has published several operational best practices to improve DNS resilience, but operators must make their own decisions that trade off security, cost, and complexity. Since these decisions can impact the security of billions of Internet users, ICANN has recently proposed an initiative to codify best practices into a set of global norms to improve security: the Knowledge-Sharing and Instantiating Norms for DNS and Naming Security (KINDNS).
The goal of this project is to conduct an independent study on the measurable practices of the KINDNS framework, as well as other important DNS resilience practices. We aim to develop a DNS resilience observatory that collects information from public sources together with Internet-wide measurements to provide a longitudinal view of the evolution and adoption of DNS best practices globally.
In the early stages of this project, we would like to ask for feedback and contributions from the community.
QNAME minimization is an extension to the DNS protocol, designed to allow DNS resolvers to prevent disclosure of DNS activity beyond that which is necessary for resolution. Since it was originally proposed in 2014, QNAME minimization has been incorporated into most of the well-known DNS resolvers. But the question remains: how effective is QNAME minimization at preserving privacy in practice? We answer that question by creating a model that defines DNS privacy roles and quantifies information leakage to third parties. We apply that model to DNS query data from a large university. We observe that QNAME minimization adds modest privacy gains and suggest that its benefits be considered alongside its costs.
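To make the mechanism concrete, here is a minimal sketch (not a resolver) contrasting what each authoritative server along the resolution path learns with and without QNAME minimization (RFC 9156). The zone cuts are assumed to be known in advance; a real resolver discovers them iteratively, and the example names are illustrative.

```python
def full_qnames(qname, zone_cuts):
    """Classic resolution: every server along the path sees the full name."""
    return {zone: qname for zone in zone_cuts}

def minimized_qnames(qname, zone_cuts):
    """Minimized resolution: each server sees only one label more than its zone."""
    labels = qname.rstrip(".").split(".")
    out = {}
    for zone in zone_cuts:
        zone_labels = 0 if zone == "." else len(zone.rstrip(".").split("."))
        # send the zone's own labels plus exactly one additional label
        sent = labels[-(zone_labels + 1):]
        out[zone] = ".".join(sent) + "."
    return out

cuts = [".", "edu.", "example.edu."]
sent = minimized_qnames("www.cs.example.edu.", cuts)
# with minimization the root server learns only "edu.", not the full name
```

The information-leakage model in the talk essentially quantifies the difference between these two mappings across real query data.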
AS112 is an anycast DNS deployment that responds to junk queries, i.e. leaked queries from internal networks, which should have been handled locally. This includes reverse DNS queries for RFC1918 and link local addresses, and queries for home.arpa and service.arpa.
Unlike other anycast deployments, AS112 is volunteer-run and uncoordinated. Anyone can contribute to AS112 by setting up a DNS server, announcing the AS112 anycast prefixes, and responding to queries.
The choice to run AS112 as an uncoordinated volunteer-run network relies on the implicit assumption that any traffic that goes to AS112 is “harmless”, i.e. that a malicious volunteer operator could not misuse these queries. However, it is not clear that this assumption is justified.
I will present preliminary results from an analysis of query logs from two sites. I will show that AS112 receives a substantial amount of queries that could be misused by a malicious operator, such as queries related to DNS dynamic updates (~17%) and DNS service discovery (~10%).
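As a rough sketch of what "traffic that goes to AS112" means in practice, the following classifier matches query names against the zones AS112 is expected to absorb. The zone list follows RFC 7534/7535 and RFC 8375 (home.arpa); service.arpa is included as mentioned above.

```python
# Zones served by the AS112 project: RFC 1918 and link-local reverse
# zones, plus home.arpa and service.arpa as noted in the abstract.
AS112_ZONES = (
    ["10.in-addr.arpa", "168.192.in-addr.arpa", "254.169.in-addr.arpa",
     "home.arpa", "service.arpa"]
    + [f"{i}.172.in-addr.arpa" for i in range(16, 32)]
)

def is_as112_name(qname):
    """True if qname falls under a zone the AS112 project answers for."""
    name = qname.rstrip(".").lower()
    return any(name == z or name.endswith("." + z) for z in AS112_ZONES)
```

A query-log analysis like the one presented can use such a filter as a first pass before examining which record types (dynamic update, service discovery) dominate the matched traffic.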
We will present a simple and comprehensive DNS cache POisoning Prevention System (POPS), designed to integrate as a module in Intrusion Prevention Systems (IPS).
POPS addresses statistical DNS poisoning attacks - documented from 2002 to the present - and offers robust protection against similar future threats. It comprises a detection module, which employs three simple rules, and a mitigation module that leverages the TC flag in the DNS header to enhance security. Once activated, the mitigation module has zero false positives or negatives, correcting any such errors on the side of the detection module.
We first analyze POPS against historical DNS services and attacks, showing that it would have mitigated all network-based statistical poisoning attacks. We then simulate POPS on traffic benchmarks (PCAPs) incorporating current potential network-based statistical poisoning attacks, as well as benign PCAPs; the simulated attacks still succeed with a probability of 0.0076%. This occurs because five malicious packets get through before POPS detects the attack and activates the mitigation module. In addition, POPS completes its task using only 20%–50% of the time required by other tools (e.g., Suricata or Snort), and after examining just 5%–10% as many packets. It successfully detects DNS cache poisoning attacks, including fragmentation-based variants, that Suricata and Snort consistently miss, highlighting POPS’s superiority.
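The core idea behind a TC-based mitigation can be sketched as follows: once an attack is suspected, answer with the Truncated (TC) bit set so the resolver retries over TCP, where off-path response spoofing is impractical. This is a minimal illustration of the header manipulation (layout per RFC 1035), not the POPS implementation itself.

```python
import struct

TC_BIT = 0x0200  # truncation bit in the 16-bit DNS flags word

def make_truncated_reply(query_wire):
    """Turn a raw DNS query into a minimal truncated response."""
    txid, flags = struct.unpack("!HH", query_wire[:4])
    flags |= 0x8000      # QR: mark this message as a response
    flags |= TC_BIT      # TC: tell the client to retry over TCP
    qdcount = struct.unpack("!H", query_wire[4:6])[0]
    # echo the question section unchanged; zero the other section counts
    return struct.pack("!HHHHHH", txid, flags, qdcount, 0, 0, 0) + query_wire[12:]
```

Forcing the exchange onto TCP neutralizes blind spoofing because the attacker cannot complete the TCP handshake, which is why the mitigation can claim zero false negatives once active.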
By way of example, the DNS resolution result for whatismyipaddress.com can then be used to connect to evilsite.ai. This abuse at the DNS level leads to the following problem, which we now see widely abused:
Source network pDNS only sees known-good domain of whatismyipaddress.com.
Destination CDN only sees what appears like a valid connection to evilsite.ai.
As our industry attempts to increase trust in DNS, this works against that effort, and the scale of the problem needs to be brought to the DNS-OARC audience.
Multi-provider DNS relies on various non-standardized setup and configuration mechanisms. As multi-provider DNS becomes more and more mainstream, moving from ad-hoc arrangements to a transparent and robust orchestration mechanism is increasingly important. We present a general architecture for this, which has been implemented and is working, and which, more or less as a side effect, mostly solves the "multi-signer problem".
DNSSEC at scale: Enabling signing across 5,500 domains in the real world
Enabling DNSSEC for a single domain is straightforward: sign the zone, submit the DS record to your registrar, verify the chain of trust. Now do it 5,500 times, across hundreds of TLDs, multiple registrars, and every corner of the global domain registry ecosystem.
This talk is a war story from an ongoing project to enable DNSSEC across the entire internet DNS portfolio of a major automotive company. What looked like a routine security improvement turned into a deep dive through the messy reality of the domain industry — where APIs don't exist, registrars refuse manual work, intermediary chains span three organizations and two continents, and a single ambiguous form field can take a production domain offline.
Topics covered:
* Getting internal buy-in
* Registrars that offer only partial API coverage
* Unexpectedly encountering a chain of intermediaries
* Time-zone considerations when changing DS records
* Education gaps about DNSSEC, even at registrars
* How slight confusion can take down production domains
* Registries suddenly demanding more information or updated handles
* Unexpected costs for domain updates, which scale quickly for 5,000+ domains
* DNS provider features that can make DNSSEC signing of zones impossible (e.g., linked zones at NS1)
* TLDs where DNSSEC is simply impossible
* TTLs in TLDs you don't control, which can make rollbacks messy and slow
* The operational strategy we chose for enabling this
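One recurring sanity check in a rollout like this is confirming that the DS record submitted via a registrar actually matches a DNSKEY in the child zone. A first-pass check is the key tag, computed over the DNSKEY RDATA per RFC 4034 Appendix B; the sketch below implements that algorithm (the byte values in the usage note are illustrative, not a real key).

```python
def key_tag(dnskey_rdata: bytes) -> int:
    """RFC 4034 Appendix B key tag over DNSKEY RDATA (flags|proto|alg|pubkey)."""
    acc = 0
    for i, b in enumerate(dnskey_rdata):
        # even-offset bytes are the high octet of a 16-bit word
        acc += b << 8 if i % 2 == 0 else b
    acc += (acc >> 16) & 0xFFFF
    return acc & 0xFFFF
```

At 5,500 domains, automating even trivial checks like this one pays off, since a mismatched key tag caught before DS submission is a production outage avoided.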
A short talk about the upcoming root key rollover: important dates and milestones, and what to watch out for.
DNS resolvers increasingly support various encryption protocols, ensuring their communication with end clients remains confidential to external observers. The recursive-to-authoritative link has long been overlooked though, despite multiple reports on traffic analysis and response injection by state censors. The experimental RFC 9539 addresses this confidentiality gap with a unilateral and opportunistic mechanism: recursive resolvers probe nameservers for DNS-over-TLS or DNS-over-QUIC support and, if successful, communicate over the encrypted channel. In this talk, we measure the deployment of ADoT/ADoQ in the wild, covering both recursive resolvers and authoritative nameservers. We identify fewer than 1% (2.9M) of registered domains supporting authoritative DoT or DoQ, with one provider accounting for the vast majority of these deployments. This data-driven study informs DNS operators that increasingly consider the deployment of authoritative DoT/DoQ but lack concrete numbers on the current state of deployment.
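The probing policy being measured can be sketched as a small per-nameserver state machine: use an encrypted transport only after a successful probe, and damp failed probes so cleartext service is never degraded. The timer values below are illustrative placeholders, not RFC 9539's exact constants, and the probe function is a stand-in for a real DoT/DoQ handshake attempt.

```python
import time

SUCCESS_TTL = 3600      # reuse an encrypted channel this long after success
DAMP_INTERVAL = 86400   # wait this long before re-probing after a failure

class ProbeState:
    """Per-nameserver state for unilateral opportunistic encryption."""
    def __init__(self):
        self.status = {}  # nameserver -> (outcome, timestamp)

    def transport_for(self, ns, probe, now=None):
        now = time.time() if now is None else now
        outcome, ts = self.status.get(ns, (None, 0))
        if outcome == "ok" and now - ts < SUCCESS_TTL:
            return "dot"   # cached success: keep using the encrypted channel
        if outcome == "fail" and now - ts < DAMP_INTERVAL:
            return "do53"  # damped: fall back to cleartext without re-probing
        ok = probe(ns)     # attempt a DoT/DoQ handshake (stubbed out here)
        self.status[ns] = ("ok" if ok else "fail", now)
        return "dot" if ok else "do53"
```

The measurement question in the talk is, in effect, how often such a probe would succeed across today's authoritative population.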
RFC 9539 - Unilateral Opportunistic Deployment of Encrypted Recursive-to-Authoritative DNS (also known as ‘Blind Probing’) - was published over two years ago, with the goals of shifting recursive-to-authoritative traffic to at least opportunistic encryption and building real-world operational experience with encrypted transports at scale.
Sadly, however, it has seen only limited deployment: whilst some open resolvers have adopted it, most authoritative operators are reluctant to do so due to significant operational and performance concerns. As a result, none of these goals are being fully realised, and real-world experience at scale with encrypted transports has not progressed. The ‘big win’ of shifting as much recursive-to-auth traffic as possible to at least opportunistic encryption seems stalled at present.
In this presentation we will drill into several related issues:
What are the specific factors preventing adoption of encrypted transports by authoritative servers today and what solutions should the community consider?
What do the criteria look like for establishing encrypted transports as a feasible and scalable solution?
What positive steps can we take to encourage experimentation with and confidence building around encrypted transports today? Can the community create a new roadmap to de-risk encrypted transport deployment and drive future adoption?
How can we better harmonize opportunistic deployment in the existing namespace with future developments to provide the maximum privacy benefit to users?
DELEG is the upcoming incremental revolution of DNS, improving security, privacy, and manageability. What should authoritative DNS operators consider and do before introducing DELEGs into their zones? Let's talk about software support, the DE bit, the ADT bit, (non-)existence proofs, specification requirements, prerequisites, and a clear overview of the necessary steps.
We think we understand how DNS is used. But what does authoritative DNS traffic at scale actually reveal about resolver behavior, application trends, and operational reality? Authoritative DNS servers sit at a uniquely powerful vantage point in enterprise infrastructure. The query and response traffic they handle offers a rich and frequently under-explored source of operational, architectural, and security insight, which this talk will delve into.
What does real-world enterprise DNS traffic actually look like? Who is querying it—and for what? Which record types dominate, and which emerging types are gaining traction? Do resolvers behave as expected, or do we see unexpected behavior such as persistent retries after NXDOMAIN responses? Are there unexpected queries for internal names? Which domains and resolvers are the “top talkers,” and how do these patterns evolve over time?
In this talk, we present findings from a multi-month analysis of authoritative DNS traffic across enterprise zones hosted at a managed DNS provider. We examine domain and resolver populations, distributions of query types and classes, response codes, TTL characteristics, and client retry behavior. We explore DNSSEC deployment signals (e.g., DO-bit prevalence and signed response rates), analyze EDNS header flags and options, looking for signals revealing the adoption of newer protocol features (Compact Answers, DELEG, HTTPS, SVCB etc). We highlight observable trends that reflect broader application, resolver, and DNS ecosystem changes.
Beyond measurement results, we also describe the server-side data collection and analytics architecture that enables high-volume DNS telemetry analysis at scale. Finally, we discuss ongoing work and some early results leveraging emerging A.I. driven techniques to extract deeper operational and security insights from authoritative DNS traffic.
Attendees will come away with a clearer understanding of how enterprise DNS data is actually consumed in the wild—and how authoritative traffic analysis can inform capacity planning, misconfiguration detection, security investigations, and future architectural decisions.
Domain registries manage the entire lifecycle of domain names within TLDs and interact with domain registrars through the Extensible Provisioning Protocol (EPP) specification. Although they adhere to standard policies, EPP implementations and operational practices can vary between registries. Even minor operational flaws at registries can expose their managed resources to abuse. However, the registry operations' closed and opaque nature has limited understanding of these practices and their potential threats.
In this study, we systematically analyzed the security of EPP operations across TLD registries. By analyzing the entire domain lifecycle and mapping operations to corresponding domain statuses, we discovered that registry operations are attributed to overlapping statuses and complex triggering factors. To uncover flaws in registry operations, we employed diverse data sources, including TLD zone files, historical domain registration data, and real-time registrar interfaces for comprehensive domain statuses. The analysis combined static and dynamic techniques, allowing us to externally assess domain existence and registration status, thereby revealing the inner workings of registry policies. Eventually, we discovered three novel EPP implementation deficiencies that pose domain abuse risks in major registries, including Identity Digital, Google, and Nominet. Evidence has shown that adversaries are covertly exploiting these vulnerabilities. Our experiments reveal that over 1.6 million domain names, spanning more than 50% of TLDs (e.g., .app and .top), are vulnerable due to these flawed operations. To address these issues, we responsibly disclosed the problem to the affected registries and assisted in implementing a solution. We believe that these registry operation issues require increased attention from the community.
Hardware memory may suffer bit flips. Previous research has shown that if a bit flip happens in the right place, host names may be contorted, enabling MITM attacks. This study looks at the consequences of bit flips occurring for root-servers.net, such as hijacking resolver priming queries. After introducing the experimental setup, selected instances of observed resolution cascades will be inspected. The audience is invited to discuss the findings and consider any implications.
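The names such a study registers or watches for can be enumerated mechanically: every single-bit flip of the target name that still yields hostname-plausible characters (the classic "bitsquatting" candidate set). This sketch checks characters only; it does not enforce label-level validity.

```python
import string

# characters plausible in a (lowercase) host name
VALID = set(string.ascii_lowercase + string.digits + "-.")

def bitflip_variants(name):
    """All names one bit flip away that remain hostname-plausible."""
    out = set()
    for i, ch in enumerate(name):
        for bit in range(8):
            flipped = chr(ord(ch) ^ (1 << bit)).lower()
            # skip invalid characters and pure case flips (DNS is case-insensitive)
            if flipped in VALID and flipped != ch:
                out.add(name[:i] + flipped + name[i + 1:])
    return out

variants = bitflip_variants("root-servers.net")
```

A resolver priming query corrupted into one of these variants would be sent to whoever controls that name, which is what makes this particular target interesting.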
Since RFC 1034, DNS specifications have mandated that recursive resolvers must "bound the amount of work" performed per query. However, the definition of "work" has remained ambiguous, leading to a class of intrinsic risks that differ fundamentally from traditional volumetric reflection attacks. In practice, the resolution process involves complex interactions among delegations, aliases, retries, caching, and DNSSEC validation. These mechanisms can interact in unexpected ways, allowing a single query to trigger disproportionately large amounts of resolver work and leading to significant performance degradation.
In this presentation, we discuss our recent efforts to systematically understand such performance vulnerabilities in DNS resolvers. We introduce a formal model of recursive resolution that represents resolver behavior as a state transition system with associated resource costs. Building on this model, we develop rProfiler, a framework that explores the space of possible resolution traces and identifies worst-case query patterns that maximize resolver work. Applying rProfiler to three widely deployed resolver implementations reveals that even modest query rates can trigger substantial performance degradation under adversarial resolution patterns.
Our results shed light on how resolver work can grow unexpectedly during recursive resolution and help demystify the performance vulnerabilities that arise from it. We conclude by discussing the broader implications for resolver design and outlining directions toward more robust mechanisms for bounding resolver work in future DNS implementations.
Different DNS resolver implementations handle delegation from parent to child zones in different ways: some resolvers are strictly parent-centric, while others use whatever information is currently available in the local DNS cache, or offer a child-centric mode that always fetches authoritative NS records. In theory, this difference should not affect the ability to resolve domains, since the parent and child sides of a zone cut are expected to hold identical records. In practice, however, this assumption does not always hold true.
Experimental testing of these approaches is challenging because switching a resolver from parent-centric to child-centric behavior is complex and labor-intensive, and real-world resolvers do not provide a configuration option to run in both modes. Fortunately, the latest development version of BIND has adopted the parent-centric approach. This change provides a unique opportunity to compare how the same codebase behaves under a strictly parent-centric model versus the more traditional approach.
In this talk, we present measurements comparing the new parent-centric version of BIND with the original RFC 2181 version. Our primary focus is on the ability to resolve queries and the error rates experienced by end clients while resolving names on the real Internet, where parent and child records sometimes differ. Additionally, we measure end-client latency and resource consumption on the resolver.
Synchronizing globe-wide authoritative DNS anycast with traditional DNS zone transfers might not be optimal. Can a versatile database backend be used in a narrow use case, just for transferring zone contents over long distances, and is it faster? Multiple diverse setups, measurements, results, and takeaways.
DNS is a globally distributed system where even a minor configuration mistake can cause immediate and widespread disruption. Yet most of the existing tools rely on static validation of planned DNS changes.
In this presentation, I’ll introduce the concept of CheckMate, an AI-powered assistant that performs real-time pre-validation of proposed DNS zone updates to prevent costly mistakes. I will demonstrate how CheckMate, using large language models (LLMs) and prompt engineering, can identify and flag potential configuration mistakes.
This concept has practical value for DNS zone operators, DNS hosting providers, and infrastructure teams alike — providing guardrails that empower even non-DNS experts to manage zone updates with confidence and safety.
Gonemaster is a Go implementation of Zonemaster that began life as a near 1:1 port of the original software—and then evolved into something that is purpose-built for modern, large-scale DNS measurement work.
At its core, Gonemaster provides robust tests of DNS delegation quality, helping operators and researchers identify misconfigurations and edge cases that impact resolution, availability, and DNS correctness. While preserving the intent and coverage of the upstream test suite, the Go-based approach brings two immediate advantages: significantly faster execution and fewer external software dependencies, making it easier to deploy in constrained environments and simpler to run reproducibly across diverse platforms.
A key design goal has been scalability. Gonemaster’s architecture is particularly well-suited for running large batches of tests—from routine monitoring of portfolios of zones, to broad measurement campaigns where throughput, predictability, and operational simplicity matter as much as test accuracy. This enables new workflows where delegation testing can move from “one domain at a time” troubleshooting into continuous, automated, and data-driven practice.
Just as importantly, Gonemaster formalizes the log output into a structure that is far more suitable for downstream analysis. A complete list of test specifications—including the emitted tags with harmonized arguments—makes it straightforward to correlate results across domains, compare runs over time, and build tooling that can slice measurement data by delegation patterns, failure modes, and test semantics. In practice, this reduces the friction between “running tests” and “learning from results,” making analysis substantially easier than it has been previously.
This talk will cover Gonemaster’s evolution from port to platform: architectural choices, performance considerations, batching at scale, and how formalized output unlocks richer measurement pipelines for DNS operations and research.
We have recently built an open dashboard called Rootviz, which visualizes in real time measurement data produced by all RIPE Atlas probes.
It allows users to visualize real-time reachability between the probes and each root server, for both IPv4 and IPv6.
It complements DNSMON in two ways: it uses Grafana, the open-source, industry-standard time-series visualization tool, and it leverages a different dataset, drawing on data from all Atlas probes rather than only the robust anchors.
An update on the status of the development and release planning for our new DNSSEC hidden signer "Cascade", first introduced at OARC 45. Highlights include the new incremental signing and IXFR-out functionality and how they relate to one another, performance/resource usage improvements, TSIG support, Prometheus metrics, ods2cascade migration tooling, re-designed memory and state models and more.
Eight years ago at IMC'17, Verfploeter was introduced by De Vries et al.
This technique allowed anycast operators to perform active catchment mappings at large-scale (using millions of ping-responsive hosts on the Internet).
In this talk we introduce MAnycastR, an open-source tool that improves upon Verfploeter, allowing for IPv6 mappings, increased coverage using transport-layer probing, and faster mappings (using distributed, synchronous probing).
Initially designed to perform anycast censuses, MAnycastR introduces new active measurement techniques:
* Anycast latencies (measuring RTT from clients to anycast deployment)
* Anycast traceroute (measure path from anycast deployment to clients)
* Optimal catchment (using unicast RTT measurements to infer the best PoP for a client)
* Quantify Improvement (measure possible RTT gain for clients)
* And much more
MAnycastR makes such measurements easy to perform at large scale, allowing operators to measure the performance (and optimal performance) of an anycast deployment in a matter of minutes.
In this talk we will explain how MAnycastR works, provide results for the performance of our anycast testbed (48 PoPs), and validate its methods using real traffic data from our ccTLD partner that deploys MAnycastR in production.
Finally, we utilize all of MAnycastR's features to perform a case study investigating the impact of IXPs and transits on anycast routing.
With our talk we hope to reach operators interested in collaborating, and to gather operator feedback for future improvements of MAnycastR.
The DITL dataset serves as an invaluable resource for DNS research. The author gratefully acknowledges the data providers and DNS-OARC for permitting access to the Root DITL dataset. Because data collection methodologies vary significantly—with each Root Server Operator (RSO) capturing traffic to the best of their respective capabilities—it is essential to characterize the attributes of each dataset before analysis.
Despite this need, there is currently no standardized documentation regarding whether specific datasets are anonymized, the extent to which IP addresses are masked (e.g., prefix preservation), or whether the data represents partial or complete traffic logs. This presentation details an estimation of the DITL-2024 and 2025 dataset attributes:
Full Source IP Preservation: c (2024), g, k, and m-root datasets.
Partial Anonymization (Prefixes Preserved): a, b, d, f, h, and j-root datasets appear to mask source IPs but preserve /24 (IPv4) and /64 (IPv6) prefixes.
Full Anonymization (No Prefix Preservation): i and l (2024) root datasets.
Furthermore, by cross-referencing these datasets with RSSAC002 metrics for April 10, 2024, and April 9, 2025, I assessed data completeness. My findings suggest that the e-root dataset contains approximately 1% of total queries, the f-root dataset contains roughly one-third of the expected traffic, and the i-root dataset exhibits data gaps. Finally, as UDP checksums appear to be preserved in certain datasets, I attempted to reverse-engineer the original source IP addresses, with limited success in specific instances.
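The prefix-preservation attribute described above reduces to comparing addresses at the granularity the anonymization scheme is said to keep: /24 for IPv4 and /64 for IPv6. A minimal helper using the standard library:

```python
import ipaddress

def preserved_prefix(addr: str) -> str:
    """Mask an address to the prefix a prefix-preserving scheme keeps:
    /24 for IPv4, /64 for IPv6."""
    ip = ipaddress.ip_address(addr)
    plen = 24 if ip.version == 4 else 64
    return str(ipaddress.ip_network(f"{addr}/{plen}", strict=False))
```

Two captures of the same traffic should then agree on the multiset of preserved prefixes even when the low-order bits have been rewritten, which is the property the dataset classification above relies on.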
Placeholder for lightning talks.
We will accept up to five 5-minute lightning presentations from in-person presenters for Day 2, Session 4. The call for abstracts will be open from 09:00 to 16:00 UTC (10:00–17:00 local time) on Saturday, May 16.
"I never set out to be a DNS practitioner, but working with it has been a rewarding, if unavoidable, theme of my 40-year career."
From a 1980s student seminar on the fresh RFC882/883, through an early stub resolver implementation, becoming the DNS sysadmin at an SME and early ISP, then co-founder of a ccTLD registry, this talk traces the author's experience of working with the DNS. Reflections on establishing and operating root servers, oversight of open-source DNS software development, and spinning out DNS-OARC as an independent, neutral and respected technical community for all things DNS.
The DNS is one of the Internet's success stories, being close to the founding principles of distributed architecture and open interoperable standards based upon rough consensus and running code. But it also has a reputation as one of the Internet's success disasters, where its ubiquity and latter-day complexity makes it the focus for blame, abuse and controversy.
This talk attempts to take a step back from the nitty-gritty of technical standards, implementations, operational snafus, and governance minutiae, and to look at the long arc of what has been achieved in the context of one long-standing participant's perspective.
