OARC 45 will be a hybrid in-person and online workshop.
DNS-OARC is a non-profit, membership organization that seeks to improve the security, stability, and understanding of the Internet's DNS infrastructure. These aims are achieved in part through workshops.
DNS-OARC Workshops are open to OARC members and to all other parties interested in DNS operations and research.
This year, OARC 45 is part of a broader DNS Week—a full calendar of events bringing together the global DNS community in one place:
DNS Week Schedule (October 1–9):
Social Media hashtag: #OARC45 #LoveDNS
Mattermost Chatroom: Workshops on chat.dns-oarc.net (sign-up here)
Workshop Patronages for 2025 are available. Details at:
https://www.dns-oarc.net/workshop/patronage-opportunities
Sponsorship opportunities for OARC 45 are available. Please contact us for details.
DNS TAPIR is a Swedish project that builds a national DNS query analysis platform to monitor traffic and alert on suspicious events. All software is Open Source and has undergone a thorough analysis of privacy handling. The DNS TAPIR project has a few principles that we work hard to implement, with the core one being privacy, the need to protect individual user data.
This talk will introduce the four pillars that drive the project team and give an update on progress and availability.
In 2002 the IP address for j.root-servers.net was changed in order to provide the service from multiple locations using IP anycast. Since that time Verisign has continued to respond to queries sent to J-root's old IP address, 198.41.0.10.
A few months after the address was changed, the old address received approximately 1500 queries per second. Now, nearly 23 years later, the old address still receives 350 queries per second. Based on common understandings of how recursive resolvers work, and especially the process of root server priming, this has defied explanation.
ICANN recently commissioned a team of researchers to thoroughly study the potential impacts of changing the root server names. A short comment made in their report hinted at why these queries might persist. In this presentation we'll tell the story of how this long-time bug was rediscovered and confirmed as the likely reason we continue to receive queries on the old J-root address from a set of resolver clients.
In this study, we examine the DNS resolvers used by clients in the Nordic and Baltic countries, conducting active measurements to assess the adoption of security and privacy features. Using the RIPE Atlas network of volunteer-run probes, we performed our measurements in July 2025 and analyzed 1066 unique probe-resolver pairs. We find that 92% of resolvers supported IPv6, 87% validated DNSSEC, 70% implemented QNAME Minimization, 83% avoided using EDNS Client Subnet, and 78% returned minimal responses to the client. We categorize the resolvers by their network proximity to the client, allowing for more in-depth analysis, and found that private, within-AS, and public (outside-AS) resolvers show varying levels of feature adoption. Comparing the Nordic and Baltic countries against each other, focusing on preconfigured resolvers in the same AS as the probe (typically operated by ISPs), we found that Norway had the highest adoption of IPv6 support, Denmark had 100% adoption of DNSSEC, Estonia had the highest adoption of QNAME Minimization, and all countries avoided using EDNS Client Subnet. We also identified adoption correlations between data minimization features, as well as links between DNSSEC and both QNAME Minimization and IPv6 support.
Cisco's resolver fleet infrastructure commonly experiences large-scale distributed denial of service (DDoS) attacks. Under normal circumstances these attacks are dealt with by distributing the traffic over the installed resolver capacity and rarely cause operational issues. However, on two occasions these DDoS attacks did cause notable internal incidents, thankfully with very limited customer impact. In this talk we will present how these attacks were detected and what actions were taken to remediate their effects.
On both occasions, the attacks were detected through alarms from the resolver fleet complaining about delayed traffic servicing and delayed configuration updates. However, the causes of the resolvers' issues under these two attacks were different.
In the first case, we noticed a sudden increase in DNAT traffic during the incident, implying that resolvers in one data center (DC) had an increase in referrals to a different DC to query the authority servers. Blacklisting the IPs used for the DDoS attack proved to be of limited value due to the large pool of addresses involved. The problem was eventually tracked to the cache contention lock used for encryption of DNSCrypt transmissions in DNAT.
In the second incident, the DDoS attacks were very short-lived and therefore difficult to analyze. Only when the team managed to capture the state of the processor threads during one of the attack events was it possible to see that many threads were spinning on a lock that controls access to the list of in-transit upstream queries.
The hash of the query's domain name, folded into 12 bits, determines which of 4096 locked lists is used. Multiple locks mean less lock contention, but only assuming good hash distribution. As it turned out, the implementation hashed the first qname label and the target IP address, on the reasoning that these were the most volatile parts of the transmission data.
As a result, a random label attack against <const>.<random>.<domain> would always hash to the same value and use the same lock.
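The collision described above can be reproduced in a few lines. This is an illustrative sketch, not Cisco's implementation: it folds a hash of only the first qname label plus the upstream target IP into 12 bits, as described, and shows that a random-label attack of the form <const>.<random>.<domain> always selects the same locked list.

```python
import hashlib
import random
import string

def bucket(qname: str, target_ip: str, bits: int = 12) -> int:
    """Fold a hash of only the first qname label plus the upstream
    target IP into `bits` bits (4096 locked lists for bits=12)."""
    first_label = qname.split(".")[0]
    digest = hashlib.sha256((first_label + target_ip).encode()).digest()
    return int.from_bytes(digest, "big") % (1 << bits)

# Random-label attack: <const>.<random>.<domain>, all aimed at one target
target = "192.0.2.53"
buckets = {
    bucket("www." + "".join(random.choices(string.ascii_lowercase, k=10))
           + ".example.com", target)
    for _ in range(10_000)
}
print(len(buckets))  # → 1: every query contends on the same lock
```

Hashing the full qname (or all labels plus the query ID) would spread these queries across all 4096 buckets instead.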
Both incidents were eventually resolved through resolver software upgrades that reduced lock contention, albeit for two rather different resolver resources. Interestingly, these lock contention issues escaped detection despite extensive application and performance testing of each software release, which emphasizes the need to include specific DDoS-type tests in the software release pipeline.
Historically we built cyber security the way we built cities: over time, without a long-term plan, on top of ruins. Now that we are applying Zero Trust DNS (Microsoft ZTDNS/adam:ONE Don’t Talk to Strangers) to require every outgoing IP connection to first be resolved by DNS, what is it that breaks?
In this presentation we offer insight into client side behaviour and the general readiness of the internet to adopt zero trust principles of connectivity with DNS at the root of trust.
Although DS provisioning automation (RFCs 7344, 8078, 9615) is well-defined on the wire, actual deployment faces various degrees of freedom, leading to non-uniform behavior across parents. For example, the presence of registration locks may (or may not) affect DS automation, and there are different ways to perform CDS/CDNSKEY input validation, report errors, or to handle priority of updates (such as from a manual submission). The lack of related operational guidance has been identified as the main obstacle to DS automation in the gTLD space. We therefore propose a set of practical guidelines on DS automation, so that new deployments can satisfy domain holders' expectation of predictable behavior across TLDs. We invite the audience to discuss, so that the proposal can be amended to best reflect the community position on how to best automate DS provisioning.
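To make the degrees of freedom concrete, the following is a hedged sketch of parent-side CDS input validation. The record layout, policy checks, and messages are hypothetical illustrations, not a normative implementation of RFCs 7344/8078:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CDS:
    key_tag: int
    algorithm: int
    digest_type: int
    digest: str  # hex

DELETE = CDS(0, 0, 0, "00")  # RFC 8078 deletion signal

def validate_cds_set(cds_set, allowed_digest_types=frozenset({2, 4}),
                     registration_locked=False):
    """Return (accept, action-or-reason) for a CDS set seen at the parent.

    Each branch below is a policy choice on which parents currently differ.
    """
    if registration_locked:
        # A registration lock may (or may not) pause DS automation.
        return False, "domain under registration lock; manual action required"
    if not cds_set:
        return False, "empty CDS set"
    if DELETE in cds_set:
        if len(cds_set) != 1:
            return False, "deletion record must not be mixed with other CDS"
        return True, "remove DS set (secure -> insecure)"
    for rec in cds_set:
        if rec.digest_type not in allowed_digest_types:
            return False, f"unsupported digest type {rec.digest_type}"
    return True, "replace DS set with validated CDS set"

print(validate_cds_set({CDS(12345, 13, 2, "ab" * 32)}))
```

Uniform guidelines would pin down exactly these branches (lock interaction, error reporting, priority versus manual submissions) so behavior is predictable across TLDs.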
We would like to discuss the challenges and opportunities of post-quantum cryptography (PQC) in DNSSEC by walking through a variety of measurement studies that the community has conducted to date.
There are presently two “mainline” paths towards deployment of DoT / DoQ for authoritative DNS service between auth server and resolver. The first is RFC 9539 (“blind probing”) and the second is “wait for DELEG”.
Both have problems.
In the RFC 9539 case, the problem is creating enough incentive for auth server and resolver operators to actually implement it; moreover, even where it is implemented, RFC 9539 provides no "signal" that lets an operator differentiate "we are now testing our ability to provide {transport}" from "we are now ready and support production traffic over {transport}".
In the DELEG case the problem is that we simply don’t have DELEG yet, and given the complexity of the current DELEG proposal it seems likely that it will be ~10 years until we have wide scale deployment of DELEG. And from a privacy POV, it really is wide scale deployment that is required.
We therefore propose an enhanced approach in which a signaling mechanism is added, and present the pros and cons of this alternative. The proposal is purely operational. What is needed is operator feedback on whether this would be a sensible approach to get around the "chicken-and-egg" problem that has kept encrypted DNS transport for auth DNS going mostly nowhere for way too many years.
This presentation will showcase Verisign’s Transitive Trust tool, which maps DNS resolution dependencies based on delegation and name server host relationships. We use this tool to analyze all TLD delegations at the DNS root and construct a directed graph of resolution dependencies. The resulting structure reveals distinct subgraphs and dependency clusters associated with common operators or shared infrastructure. Using this graph, we identify critical nodes whose failure could affect disproportionately large portions of the namespace and quantify structural characteristics that may indicate fragility and operational or security risks.
DNSSEC was introduced in 1999 to prevent DNS spoofing and on-path tampering attacks. However, due to the complexity of DNSSEC deployment and management, its popularity remains modest to this day. In this work, we take a deep dive into the post-deployment complexities of DNSSEC, leveraging 1.4 million historical diagnostic snapshots for 319K SLDs and their subdomains obtained from the DNSViz service.
According to our findings, many domain administrators use the DNSViz service to repair their zones or for initial DNSSEC deployment. Our study shows that a few common errors, such as a nonzero iteration count in the NSEC3 parameters, missing non-existence proofs or signatures, and delegation failures, account for more than 70% of all bogus states.
Using these insights, we introduce a semi-automated DNSSEC misconfiguration resolution pipeline called DFixer that transforms multiple complex error codes to a simple root cause and generates both high-level instructions and concrete BIND commands to fix them. We evaluated our pipeline using a custom ZReplicator tool that automatically replicates bogus zones and demonstrated that 99.99% of these erroneous zones can be resolved successfully.
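To illustrate the idea (this is not DFixer's actual code), such a pipeline can be sketched as a lookup from DNSViz-style error codes to a root cause plus a suggested BIND command; the error-code names here are hypothetical, though the commands are standard BIND tooling:

```python
# Illustrative mapping: observed error code -> (root cause, suggested fix).
# Error-code names are hypothetical; zone/key names are placeholders.
ROOT_CAUSES = {
    "NSEC3_ITERATIONS_NONZERO": (
        "NSEC3 iteration count should be 0 (RFC 9276)",
        "rndc signing -nsec3param 1 0 0 - example.com",
    ),
    "MISSING_RRSIG": (
        "zone (re-)signing required; signatures missing or expired",
        "rndc sign example.com",
    ),
    "DS_MISMATCH": (
        "DS at parent matches no DNSKEY; regenerate and resubmit the DS",
        "dnssec-dsfromkey -2 Kexample.com.+013+12345.key",
    ),
}

def diagnose(error_codes):
    """Collapse a set of observed error codes into root causes with fixes."""
    return [(code, *ROOT_CAUSES[code])
            for code in sorted(error_codes) if code in ROOT_CAUSES]

for code, cause, cmd in diagnose({"MISSING_RRSIG", "DS_MISMATCH"}):
    print(f"{code}: {cause}\n  fix: {cmd}")
```

A real pipeline would additionally rank causes when several codes stem from one underlying mistake, which is the hard part the talk addresses.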
The zone signing pipeline is the heart of large-scale authoritative operations. After a zone is signed, an automatic validator should continuously check it and block potential errors. This talk introduces and gives an overview of available methods and tools, with some examples.
OpenDNSSEC has served the DNS community for over 20 years now, and we at NLnet Labs are proud of its accomplishments. But DNS and DNSSEC have evolved significantly over the last decades and ODS no longer aligns with their requirements. Cascade is a new DNSSEC signer that learns from ODS and adapts to the modern needs of DNS / DNSSEC. We discuss the design requirements and architecture for Cascade, demonstrate it against a simple DNSSEC-enabled zone, and announce the sunset of OpenDNSSEC.
Legal pressure from companies and countries to censor certain content or content creators is growing drastically, and unfortunately many have landed on DNS resolution prevention as a tool to achieve these aims. When a country demands that access to a specific domain be blocked for its users, the common outcome is overly broad filtering that affects users far beyond that jurisdiction (along with legal drama at multiple levels). This talk explores how global Anycast deployments with in-region nodes can absorb and localize censorship mandates, preventing them from impacting users globally. Kate will examine real-world scenarios where centralized or poorly scoped filtering caused collateral damage and contrast them with targeted Anycast-based approaches that maintain availability and legal compliance. Attendees will gain insight into designing DNS and web services that balance regulatory demands with global reach and protect access for users.
It is not uncommon for Internet exchanges to invite DNS operators to connect anycast nodes to the exchange. This is often done pro bono: the DNS provider receives free colocation, IP transit for managing the server, and IX connectivity from the IX provider. Asymmetric routing is also not uncommon at Internet exchanges; for example, a DNS server hosted at the exchange might receive requests from IP addresses for which no return route is available at the exchange. In this situation, the server falls back to sending the response via its default route, which points to the management upstream. If the upstream link has BCP38 filtering configured, the response is usually dropped, as the DNS response uses a source IP address that differs from the server's normal management address. Such drops are bad because they slow down DNS resolution until resolvers fail over to another authoritative server that may respond.
We have observed this problem on several of our RcodeZero DNS local nodes, and some tests revealed that other anycast DNS providers are also affected. To address this issue, we have identified three possible solutions:
- Ask the provider of the management link to add our anycast prefixes to the allow-list of the BCP38 filtering (requires assistance from upstream providers)
- Find a dedicated transit provider at the exchange (which would basically make a global node out of the local node)
- Implement a tunnel workaround that is totally independent from any 3rd party
After evaluation, we decided to implement the tunnel workaround: responses that cannot be routed directly on the exchange get routed via a GRE tunnel to one of our global nodes. This increases the latency but avoids packet loss and unanswered queries. Furthermore, this solution works out of the box without any adjustment of BCP38 filtering. To minimize increased latency and to support automatic rerouting in case of maintenance of global nodes, the GRE endpoint itself is anycasted to our global nodes.
In my talk I describe the terms "DNS local node" and asymmetric routing. I present our tunnel-based solution and how we utilize Linux source based routing for an implementation that separates routing of management traffic and DNS traffic. This presentation requires basic knowledge of Internet routing and BCP38.
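A minimal sketch of the setup described above, assuming Linux policy routing; all addresses, device names, and table numbers are illustrative placeholders rather than RcodeZero's actual configuration:

```python
# Hedged sketch: generate the iproute2 commands for source-based routing
# of DNS responses. Packets sourced from the anycast prefix consult a
# dedicated table whose default route is a GRE tunnel towards an
# anycasted endpoint on the global nodes; management traffic keeps
# using the main table. All values below are placeholders.
ANYCAST_PREFIX = "192.0.2.0/24"   # anycast service prefix answered at the IX
GRE_LOCAL = "198.51.100.10"       # this node's management address
GRE_REMOTE = "203.0.113.1"        # anycasted GRE endpoint on global nodes
DNS_TABLE = 100                   # routing table dedicated to DNS traffic

def routing_commands():
    return [
        # GRE tunnel towards the (anycasted) global-node endpoint
        f"ip tunnel add gre-global mode gre local {GRE_LOCAL} remote {GRE_REMOTE} ttl 64",
        "ip link set gre-global up",
        # Source-based rule: packets *from* the anycast prefix use DNS_TABLE
        f"ip rule add from {ANYCAST_PREFIX} lookup {DNS_TABLE}",
        # DNS_TABLE would also hold the IX routes; this default catches
        # destinations with no return route at the exchange
        f"ip route add default dev gre-global table {DNS_TABLE}",
    ]

print("\n".join(routing_commands()))
```

Because the rule matches on source address, management traffic from the node's unicast address never enters the tunnel, which is the separation the talk describes.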
Our DNS operators use IP Anycast to make .nl available throughout the world with improved resilience and faster response times. But which points of presence do they choose to optimize the latency of their anycast deployment?
Oftentimes, operators manually test and tweak their anycast site selection over many iterations. In this talk we describe Autocast: a data-driven heuristic method to approximate the optimal selection of any number of anycast sites. What is novel about our method is that it uses only IP unicast measurement data. With Autocast, we can predict the median latency of resolvers to a proposed anycast deployment with millisecond precision, without having to make anycast BGP announcements.
We plan to apply this method together with our operations team in their effort to move .nl's anycasted authoritative name servers to a new infrastructure provider later this year.
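As a rough illustration of the underlying idea (not the actual Autocast algorithm), one can approximate each resolver's anycast latency by its minimum unicast latency to any selected site, then greedily pick sites that minimize the predicted median; the site names and RTT numbers below are made up:

```python
import statistics

# unicast[resolver][site] = measured unicast RTT in ms (illustrative data)
unicast = {
    "r1": {"AMS": 5.0, "FRA": 9.0, "NYC": 80.0},
    "r2": {"AMS": 12.0, "FRA": 6.0, "NYC": 85.0},
    "r3": {"AMS": 90.0, "FRA": 95.0, "NYC": 4.0},
}

def predicted_median(sites):
    """Assume anycast routes each resolver to its lowest-RTT selected site."""
    return statistics.median(
        min(rtts[s] for s in sites) for rtts in unicast.values()
    )

def greedy_select(all_sites, k):
    """Greedily add the site that most improves the predicted median."""
    chosen = []
    for _ in range(k):
        best = min(
            (s for s in sorted(all_sites) if s not in chosen),
            key=lambda s: predicted_median(chosen + [s]),
        )
        chosen.append(best)
    return chosen

sites = greedy_select({"AMS", "FRA", "NYC"}, 2)
print(sites, predicted_median(sites))
```

The real method presumably also has to account for BGP path selection not always following lowest latency, which is where the measurement-driven calibration comes in.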
PCH provides a DNSSEC signing service for those who ask for it.
Until recently, a zone operator had to choose between signing their zones themselves or asking us to do it for them.
By utilizing RFC 8901, the operator of a zone can sign the zone themselves and serve it on their own authoritative servers, while also having an external party sign and serve it, increasing resiliency and autonomy.
This is a talk about how we implemented "Model 1" of RFC 8901 using the Knot DNS offline-KSK functionality.
Whether it is the trying complexity of implementation and maintenance or downright breakage, DNSSEC-related nuisances are a given.
This talk wants to give a light-hearted take on DNSSEC’s failures, deserved and undeserved, by:
• sketching how severely image has impacted DNSSEC deployment – all the more so because the tech community’s self-perception as a fact-driven body has made it oblivious to this exact possibility: namely, that something as irrational as image could outweigh facts in the community’s perception of a given security feature,
• briefly reflecting on how the image issue at hand could be tackled, and finally,
• hinting at what that means for the ongoing and renewed effort to pull DNSSEC out of the swamp.
DNS remains a foundational component of today’s Internet, yet it is a frequent target of increasingly sophisticated DDoS attacks. Traditional detection methods based on static rules or thresholds struggle to keep pace with evolving and obfuscated abuse tactics.
In this work, we take first steps toward exploring a protocol-aware detection approach that leverages large language models (LLMs) for semantic analysis of DNS traffic. Unlike conventional techniques, this approach captures contextual and sequential patterns in queries and responses, enabling the detection of subtle abuse. We group DNS abuse into five categories: flooding (e.g., query/response flooding, NXDOMAIN), reflection/amplification (e.g., NXNS, TsuNAME), redirection, subversion, and DNSSEC abuse. Our preliminary evaluation on real traces, synthetic attacks, and adversarial samples suggests that LLM-based detectors can generalize to novel threats while offering interpretable outputs. We also present a Gradio-based prototype for interactive semantic detection. We invite discussion on the practicality, performance, and future potential of integrating LLMs into operational DNS abuse detection pipelines. This work represents a promising step toward adaptive, explainable, and generalizable defense mechanisms for the evolving DNS threat landscape.
Web3 entities, such as the Ethereum Name Service (ENS), increasingly face threats originating from the traditional DNS ecosystem. Threat actors exploit vulnerable Web2 domains to target Web3 users and decentralized finance (DeFi) platforms, blurring the lines between the Web2 and Web3 DNS abuse landscapes.
This talk will recount real-world ENS war stories of battling such DNS abuse, focusing on:
• How ENS detected early-stage attacks in the DNS targeting Web3 entities and assets
• A deep dive into an extensive malicious campaign unraveling over 2,500 Web2 domains weaponized to impersonate or defraud Web3 and other digital asset entities
• Technical countermeasures including DNS monitoring, response coordination, and legal remedies, alongside the inherent limitations faced in these efforts
• Why collaboration across registries, registrars, Web3, and law enforcement is critical, together with a proposal for the takedown of thousands of abusive domains
By bringing together lessons from the DNS abuse arena and Web3 defense strategies, this session aims to underscore the interconnected security challenges and the necessary cooperative approaches in the evolving domain name landscape.
This proposal aligns with ongoing conversations about DNS abuse vectors and mitigations documented in recent research and industry programs. It addresses the emerging intersection of Web2 DNS infrastructure abuse and Web3 security, providing valuable insights for both traditional DNS practitioners and the cryptographic naming community.
This presentation focuses on real-time analysis and visualization of DSC-processed authoritative DNS traffic data.
To this day, DSC and its legacy remain a fundamental part of worldwide observability of DNS nameserver and resolver ecosystems. Unfortunately, in the era of cloud, AI innovation, and heightened cybersecurity awareness, DSC's well-known distributed collector-presenter framework has become dated and partially lacks future support and maintenance.
We therefore present a lightweight and powerful successor approach, providing real-time DSC dataset metrics via a centralized REST-API microservice architecture.
As a proof of concept, we cover a real-time performance evaluation of metrics export for DSC traffic data covering all of .de.
Correlating performance data for specific queries coming from several DNS servers can be hard. This talk discusses how we use tracing data in the vendor-agnostic OpenTelemetry Trace format to provide trace information in a standard form. We show a visual representation of example traces and discuss a proposed EDNS0 extension to pass trace IDs between servers, so that correlating trace data coming from multiple (chained) sources becomes easy and unambiguous.
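To make the proposal concrete, here is a hedged sketch of what such an EDNS0 option could look like on the wire, using the standard TLV encoding of RFC 6891; the option code is picked arbitrarily from the local/experimental range (65001-65534), since no code point has been assigned to this proposal:

```python
import struct

# Hypothetical EDNS0 option carrying a 16-byte trace ID (e.g. a W3C
# Trace Context trace-id). Option code 65001 is an arbitrary choice
# from the EDNS local/experimental-use range, not an assigned value.
OPT_TRACE_ID = 65001

def encode_edns_option(code: int, data: bytes) -> bytes:
    """OPTION-CODE (2 bytes) | OPTION-LENGTH (2 bytes) | OPTION-DATA."""
    return struct.pack("!HH", code, len(data)) + data

def decode_edns_option(wire: bytes):
    code, length = struct.unpack("!HH", wire[:4])
    return code, wire[4:4 + length]

trace_id = bytes.fromhex("4bf92f3577b34da6a3ce929d0e0e4736")
wire = encode_edns_option(OPT_TRACE_ID, trace_id)
code, data = decode_edns_option(wire)
print(code, data.hex())  # → 65001 4bf92f3577b34da6a3ce929d0e0e4736
```

Each server in a resolution chain would copy this option into its upstream queries and attach the same ID to its emitted spans, which is what makes cross-server correlation unambiguous.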
The Domain Name System (DNS) is a foundational layer of internet infrastructure, yet the operational complexity of managing DNS has outpaced many organizations’ ability to keep up. In a recent study, Akamai evaluated the DNS posture of over 19,000 financial services institutions worldwide. The study measured adoption and configuration of DNS-related controls including SPF, DKIM, DMARC, DNSSEC, CAA, Registry Lock, and the handling of NXDomain responses.
Despite the high visibility and security demands of the financial services industry, the results show surprising inconsistency and misconfiguration across key operational and security features. This suggests a broader trend likely reflected in other verticals.
This talk presents aggregated findings from the study and uses them to explore a deeper question: why is DNS administration so difficult today? We will highlight the expanding operational and threat landscape, including hybrid and multi-cloud deployments, fragmented ownership, legacy records, and gaps in automation. We’ll also discuss the implications of slow detection and response cycles when DNS is not centrally monitored or easily audited.
The session concludes with a call to action for DNS operators, security engineers, and tooling vendors: what can we do to make DNS administration more agile, adaptable, and accurate without sacrificing the operational integrity that DNS demands?
Slide outline
1. Introduction
   - Quick context on the critical role of DNS in availability, trust, and security
   - Motivation: Why DNS configuration matters more than ever
   - Overview of recent research conducted on 19,000+ financial institutions
   - Controls measured:
     - SPF, DKIM, DMARC
     - DNSSEC
     - CAA records
     - Registry Lock presence
     - NXDomain behavior and anomalies
   - Key findings:
     - Inconsistent adoption across even high-profile financial brands
     - Misconfigured or partially configured records
     - Absence of DNS hygiene practices (e.g., stale zones, legacy entries)
2. Why is DNS administration so difficult?
   a. Deployment Complexity
      - Multi-registrar/multi-provider scenarios
      - DNS record sprawl and inconsistencies across environments
   b. Expanding Threat Landscape
      - Rise of domain-based abuse (phishing, BEC, typosquatting)
      - Exploiting misconfigured or orphaned DNS records
      - Operational blind spots that allow persistent misuse
   c. Organizational Silos & Ownership Confusion
      - Who owns DNS? Networking? Security? DevOps?
      - Gaps in shared responsibility and operational coordination
   d. Lack of Visibility and Automation
      - Manual audits, flat file exports, or spreadsheet tracking
      - Poor MTTR for DNS-related incidents
      - Difficulties in correlating DNS misconfigurations with real-world risk
3. Implications
   - How poor posture amplifies attacker dwell time and evasion
   - Impact on resilience, uptime, and security posture
4. Call to action
   - Consistent record validation and renewal
   - Cross-team coordination (SecOps, NetOps, DevOps)
   - Threat-informed configuration baselines
   - Opportunities for community and standards:
     - Open frameworks for posture evaluation
     - Better alerting/reporting pipelines
     - Shared registries or transparency models
   - DNS as a strategic asset, not just plumbing
   - Open questions for the community:
     - What role should registrars, providers, and researchers play?
     - Can we create scalable benchmarks for DNS health?
     - How do we drive awareness without relying on regulation?
The DNS4EU project, initiated by the European Commission, aims to create a secure, EU-based alternative to existing public DNS resolvers. DNS4EU provides EU citizens, companies, and institutions with a privacy-compliant and resilient recursive DNS, ensuring that DNS traffic data remains within the European Union and supports digital sovereignty and online privacy.
The presentation is focused on the public service aspect of the DNS4EU project from the perspective of a Site Reliability Engineer. We will provide insights into the preparations before launching the DNS4EU public resolvers, observations during and after the launch phase as well as subsequent adjustments, all accompanied by anonymized statistics gathered from their operation.
DNSSEC is not infallible. In certain edge circumstances, DNSSEC fails due to accidental misconfiguration or other failures that can be verified as unrelated to malicious activity. Much in the way that "serve stale" lets domains keep some functionality during outages, Negative Trust Anchors (NTAs) may provide a temporary remedy for recursive operators to prevent significant outages. However, an NTA breaks the chain of trust, with significant security implications for the affected domains.
We would like to propose a new element of transparency, in policy and/or technical form, for recursive operators. In these rare cases, end users need to understand: the local recursive operator's policy for creating NTAs; the current list of NTAs, the reasoning behind each one, and the expected duration of the trust-chain "breakage"; and historical NTA applications, so that past security conditions can be audited.
This talk will briefly cover the policy elements and possible technical means of messaging (EDE?), and will invite other recursive operators to discuss this topic.
We'll give a short overview of the Mozilla Trusted Recursive Resolver (TRR) Program, with the intent of recruiting new DoH resolver partners for networks/regions where we have lots of Firefox users.
Enterprises operate DNS at scale, but face very different challenges than public DNS operators. Hybrid setups, multi-vendor silos, compliance requirements and limited expertise often make DNS fragile and underrepresented in broader discussions. This talk highlights these challenges and explores how enterprise DNS teams can benefit from, and contribute to, the OARC community.
Before we started the development of our DNSSEC signing solution Cascade, we interviewed 16 TLD operators about their DNS operations. We expected the conversation to be about tooling. Instead, the answers went deeper: trust, continuity, and compliance.
We published the results of this survey here:
https://blog.nlnetlabs.nl/dnssec-operations-in-2026-what-keeps-16-tlds-up-at-night/
Over the last few DNS-OARC workshops, we have been floating the idea of forming a group inside the OARC community to produce a set of DNS Best Current Practices documents.
The initiative has started to get traction and some work has been done.
This lightning talk explains what has been done so far and where we plan to go in the future, with a call to action for the whole DNS-OARC community.