OARC 43 is planned to be a hybrid in-person and online workshop.
DNS-OARC is a non-profit, membership organization that seeks to improve the security, stability, and understanding of the Internet's DNS infrastructure. Part of these aims are achieved through workshops.
DNS-OARC Workshops are open to OARC members and to all other parties interested in DNS operations and research.
Social Media hashtag: #OARC43 #LoveDNS
Mattermost Chatroom: Workshops on chat.dns-oarc.net
Patronage opportunities for OARC 43 are available. Please contact us for details.
Workshop Patronages for 2025 are available. Details at:
https://www.dns-oarc.net/workshop/patronage-opportunities
Security Track - Protocol
Although DNS-based attacks aren’t a regular part of the news cycle, they’re extremely common. Visibility into data from numerous sources shows that DDoS attacks using DNS as a vector have been growing steadily, roughly doubling over the past six and a half years. As a percentage of all DDoS attacks, DNS-based attacks made up 65% in Q2 2024, an all-time high. The duration and intensity of attacks, and the assets targeted, have also all been increasing.
Traffic consists of queries that elicit NXDOMAIN responses (pseudo-random subdomain, or PRSD, attacks), which stress network resources such as authorities, firewalls, and GSLBs; or of very large Resource Records (amplification) that saturate targets and network links. Activity targets a wide swath of industries, and often countries embroiled in visible conflicts.
This talk will present recent and historical attack data and discuss different ways malicious traffic can be mitigated, and the advantages and disadvantages of each.
DNSBomb is a new practical and powerful pulsing DoS attack exploiting DNS queries and responses.
DNS employs a variety of mechanisms to guarantee availability, protect security, and enhance reliability. In this paper, however, we reveal that these inherently beneficial mechanisms, including timeout, query aggregation, and fast response returning, can be transformed into malicious attack vectors. We propose a new practical and powerful pulsing DoS attack, dubbed the DNSBomb attack. DNSBomb exploits multiple widely implemented DNS mechanisms to accumulate DNS queries that are sent at a low rate, amplify queries into large-sized responses, and concentrate all DNS responses into a short, high-volume periodic pulsing burst that simultaneously overwhelms target systems. Through an extensive evaluation of 10 mainstream DNS software implementations, 46 public DNS services, and around 1.8M open DNS resolvers, we demonstrate that all DNS resolvers could be exploited to conduct DNSBomb attacks that are more practical and powerful than previous pulsing DoS attacks. Small-scale experiments show the peak pulse magnitude can approach 8.7 Gb/s and the bandwidth amplification factor can exceed 20,000x. Our controlled attacks cause complete packet loss or service degradation on both stateless and stateful connections (TCP, UDP, and QUIC). In addition, we present effective mitigation solutions with detailed evaluations. We have responsibly reported our findings to all affected vendors and received acknowledgement from 24 of them, which are patching their software using our solutions, including BIND, Unbound, PowerDNS, and Knot. Ten CVE IDs have been assigned.
We conclude that any system or mechanism that can aggregate “things” could be exploited to construct pulsing DoS traffic, such as the DNS and CDNs.
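The accumulate-and-burst arithmetic behind a pulsing attack can be sketched with a toy model. All numbers below are illustrative assumptions, not the paper's measurements:

```python
# Toy model of a pulsing DoS "accumulate and burst" pattern.
# All parameter values are illustrative assumptions, not measured data.

def pulse_stats(query_rate_qps, window_s, query_bytes, response_bytes, burst_s):
    """Queries sent at a low rate are held by the resolver for `window_s`
    (e.g. a client timeout window), then all responses are returned
    within a short `burst_s` interval."""
    queries = query_rate_qps * window_s
    sent_bits = queries * query_bytes * 8
    burst_bits = queries * response_bytes * 8
    peak_bps = burst_bits / burst_s           # pulse magnitude at the victim
    avg_attacker_bps = sent_bits / window_s   # attacker's steady sending rate
    baf = burst_bits / sent_bits              # bandwidth amplification factor
    return peak_bps, avg_attacker_bps, baf

peak, avg, baf = pulse_stats(
    query_rate_qps=100,    # low-rate query stream (hypothetical)
    window_s=10,           # aggregation window, e.g. a timeout
    query_bytes=50,        # small query
    response_bytes=3000,   # large amplified response
    burst_s=0.5,           # responses concentrated into a short burst
)
print(f"peak ~{peak/1e6:.0f} Mb/s, attacker avg ~{avg/1e3:.0f} kb/s, BAF {baf:.0f}x")
```

Even with these modest assumed parameters, a 40 kb/s query stream yields a short 48 Mb/s pulse at the victim; the peaks and amplification factors reported in the paper are far higher.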
While DNS is often exploited for reflective DoS attacks, it can also be turned into a powerful amplifier to overload itself. We refer to this emerging type of attack as "self-amplification". These attacks enable an attacker to overwhelm a victim DNS server using substantially fewer requests than conventional attacks. The possibility of such vulnerabilities was long predicted by the designers of DNS, but their surprising complexity and full potential have only recently become apparent. In this talk, we'll present a taxonomy of amplification primitives intrinsic to DNS and explain how they can be systematically composed to produce multiplicative amplification effects, which lead to a large family of compositional amplification (CAMP) vulnerabilities. This holistic view will hopefully help developers and operators understand and mitigate self-amplification vulnerabilities in fundamental ways.
Due to the criticality of the KeyTrap vulnerabilities, the task force assembled to address the issues decided to prefer fast, working fixes over elaborate long-term solutions. In consequence, the short-term mitigations are sufficient to prevent impactful attacks, but their suitability as long-term fixes is limited. In this talk we propose long-term solutions to address DNSSEC validation-based resource exhaustion attacks, designed to mitigate the complexity that the current patches impose on future DNS operation and protocol development.
We will review different DNS DDoS vulnerabilities identified by academic institutions over the past five years, such as NXNS, NRDelegation, and the recent CacheFlush Attacks discovered by our group.
The commonalities, differences and causal relationships between these attacks will be highlighted.
We will then discuss how these vulnerabilities were discovered and explore whether there is a systematic way to discover all DNS vulnerabilities and harden the protocol.
The DNS4EU public service is committed to providing its services anonymously while also aiming to provide relevant protection against cyberattacks across all EU member states. We will share our detailed approach to ensuring anonymization while keeping enough information for researchers to be able to recognize new malicious domains based on traffic patterns.
We believe that full transparency on the anonymization topic and involvement of the community is the best way to achieve the goal of DNS4EU public service.
Continuing the series of talks about DNS benchmarks, we focus on zone transfers and their specifics. Zone transfers are now far more frequent than they were when the protocol was designed, and some providers have SLAs specifying a maximum amount of time within which a zone update must be available in the DNS.
On the technical level, zone transfers have some unique properties which make them very distinct from a typical DNS query.
As a result, a meaningful zone transfer benchmark requires different tools and a different test methodology than traditional queries-per-second and answer-latency benchmarks.
In this talk we present lessons learned from benchmarking zone transfers for the BIND project.
There's been a lot of focus recently on the performance of the root DNS servers. In this talk, we'll go one layer below the root and look at the performance of the authoritative nameservers of the TLDs. We argue that performance at this level has a higher impact on the end-user experience.
The way TLD operators have set up their authoritative nameservers is highly diverse, and this is reflected in the latency we observe when querying these servers. In this talk we'll take a look at the performance as seen through our global network, from which we query the TLD authoritative nameservers billions of times per day.
At a high level, we see that many TLDs offer excellent performance, in the single-digit millisecond range. Others are less stellar, with median latencies around a hundred milliseconds, 50 to 100 times slower than what we see from the fastest TLDs.
While we might assume that having more nameservers and more IP addresses associated with a TLD is better, we show that this is not necessarily the case: some of the best-performing TLDs have as few as 3 IPv4/IPv6 addresses.
We conclude with some recommendations both for people looking to select a TLD for their new domain and for TLD operators.
Every day, over 345 billion emails are sent around the globe, each triggering a number of DNS lookups to determine its destination and validity. It goes without saying that the security of DNS records, specifically TXT records, is vital. However, the phenomenon of dangling DNS (where DNS records, such as CNAMEs, point to domains that no longer exist) presents a systemic vulnerability with significant implications.
In this talk, we shine a light on dangling DNS and how malicious actors can exploit orphaned DNS records to launch fraudulent email campaigns, effectively bypassing DNS security measures.
In this session, we will share:
Why this matters: a technical deep dive into the exploitation of dangling DNS records (specifically TXT records) to edit the SPF include mechanism and send malicious emails.
Case studies based on our investigations, including the tools and queries used to uncover this vulnerability, demonstrating the scale of this issue.
Our recommendations to DNS service operators and the industry at large on coming together in a collaborative effort to proactively identify and notify customers at risk, as well as promote best practices for DNS security.
This is not an isolated issue. It is a pervasive problem that demands a collective effort. In this talk we will raise awareness and propose actionable steps to address this systemic challenge.
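As a rough illustration of the kind of check involved, the sketch below flags SPF "include:" targets that no longer resolve. The resolver and domain names are hypothetical stand-ins; a real check would issue live DNS queries:

```python
# Sketch of flagging risky SPF "include:" mechanisms whose target domain no
# longer resolves (a dangling record). The resolver is simulated with a dict
# of hypothetical names; a real check would perform live TXT lookups.

def dangling_includes(spf_record, resolver):
    """Return include: targets in an SPF TXT record that do not resolve."""
    risky = []
    for term in spf_record.split():
        if term.startswith("include:"):
            target = term[len("include:"):]
            if resolver.get(target) is None:   # NXDOMAIN -> dangling
                risky.append(target)
    return risky

# Simulated zone data: one include target has lapsed and could be
# re-registered by an attacker to authorize their own mail servers.
resolver = {"spf.mailvendor.example": "v=spf1 ip4:192.0.2.0/24 -all"}
spf = "v=spf1 include:spf.mailvendor.example include:old-vendor.example -all"
print(dangling_includes(spf, resolver))
```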
CNAME resource records have been widely used since RFCs 1034 and 1035. However, it is well known that using CNAMEs increases the work required for name resolution. The interpretation of the wire format, including CNAME responses, is clear but very complex. It is performed within the application process by libc's getaddrinfo() and gethostbyname(), so interpreting complex CNAME chains takes time. In this presentation, we investigate the current state of CNAMEs using top domain name lists and show that up to nine levels of CNAMEs are in use and that several well-known services use multi-level CNAMEs. We also investigate queries and responses at an actual university and show the number of CNAME levels used by its users.
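To illustrate the chain-following work a resolver or stub must perform, here is a minimal sketch that counts CNAME levels over hypothetical zone data. The names and records are invented for illustration:

```python
# Minimal sketch of following a CNAME chain, as a stub resolver must do
# before it reaches an address record. Zone data is a hypothetical dict;
# in practice this interpretation happens inside getaddrinfo()/gethostbyname().

def cname_depth(name, records, limit=16):
    """Follow CNAMEs until a terminal (non-CNAME) name; return it and the depth."""
    depth = 0
    seen = set()
    while name in records and depth < limit:
        if name in seen:
            raise ValueError("CNAME loop detected")
        seen.add(name)
        name = records[name]   # follow one CNAME level
        depth += 1
    return name, depth

# Hypothetical multi-level chain like those seen behind CDNs.
records = {
    "www.example.com.": "www.example.com.edgekey.example.",
    "www.example.com.edgekey.example.": "e1234.a.example-cdn.example.",
}
target, depth = cname_depth("www.example.com.", records)
print(target, depth)
```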
Generative AI tools like ChatGPT have risen to fame suddenly, but DNS is a complex topic. Application owners who are not DNS experts increasingly rely on GenAI tools to find answers to DNS-related questions. DNS is used by various services, such as LetsEncrypt or Google-Site, to establish domain ownership, and these services ask application owners to add CNAME, TXT, and A records to their DNS zones.
In this presentation, I will demonstrate an experiment in which I ask GenAI tools various DNS-related questions of easy to moderate complexity and examine the answers for correctness.
A secondary goal is to start a conversation in the community about GenAI and DNS. Should DNS-OARC create a tool grounded in IETF documents, with answers verified by DNS experts? Should we propose a new IETF draft in dnsop on best practices for the use of GenAI with resolver and authoritative server software?
The OARC 43 social event will run from 18:00 to 20:00 on Saturday, October 26th, 2024. Enjoy drinks and appetisers while catching up with industry colleagues.
Registration is required.
Grab your ticket (and those of your +1s) at:
https://oarc43-social.eventbrite.com
Please purchase your ticket(s) by no later than 17:30 CEST on Thursday, October 24th, 2024 so that we may inform the venue about final numbers for catering purposes.
DNSTAP is used extensively by most open-source DNS components to report on events passing through their query or response phases. Processing DNSTAP messages at large volume, with highly customizable capabilities, is a task well suited to the Vector open-source streaming data processor.
This talk is an introduction and lessons-learned summary of Quad9's implementation of Vector as a DNSTAP processing tool, both at the edge of the network as well as a central "hub" for data from the field.
I will explain some of the fundamentals of the tool, with specific focus on the DNSTAP and protobuf ingestion sources, and will also highlight some of the DNS-specific modules that have been recently incorporated into Vector to permit detailed analysis of DNS data and related enrichments.
Event modification, enrichment, and Prometheus-style aggregation will be covered briefly. The intention of the discussion is to build interest in experimenting with and implementing this tool, which will help grow the developer community toward more robust DNS-specific features.
In this presentation we take a look at both recent and long-term changes in query names received by root name servers, especially those that could be considered as leakage or name collisions. The Internet community has long been aware of this undesirable behavior, yet such traffic persists over long periods of time and new cases continue to appear. Using data from DNS-OARC's Day In the Life of the Internet (DITL) collections, we can track the scale of the problem over time. Using data from root name servers operated by Verisign, we can explore some recent examples and see that outreach can lead to successful remediation.
We present our approach to protecting against denial-of-service attacks, implemented in Knot Resolver. It consists of two parts: rate-limiting and prioritization.
Rate-limiting counts requests originating from the same host and/or network and restricts those that are over the set limits; it serves primarily to mitigate amplification attacks.
Prioritization reorders waiting requests based on the CPU consumption of past requests from the same origin, so that requests from more demanding clients are deferred and possibly dropped in case of overload.
We will first focus on the basic limiting of individual hosts to show how the counters of same-origin queries work, including their exponential decay and how to set the desired limits; the so-called instant-limit and rate-limit parameters are used to control the behaviour. Then we will extend this to whole networks by applying the same approach to multiple address prefixes, to handle even partially distributed attacks, and mention different methods of restriction based on the counters' values. Finally, we will move on to query prioritization.
The presentation will roughly follow this article:
https://en.blog.nic.cz/2024/07/15/knot-resolver-6-news-dos-protection-operators-overview/
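As a rough illustration of the counter design described above (an illustration of the idea only, not the actual Knot Resolver implementation), the sketch below models an exponentially decaying per-origin counter governed by an instant limit and a rate limit:

```python
import math

# Toy model of a per-origin query counter with exponential decay, controlled
# by an "instant limit" (burst cap) and a "rate limit" (sustained cap).
# Illustrative only; the real Knot Resolver code differs in detail.

class OriginLimiter:
    def __init__(self, instant_limit, rate_limit):
        self.instant_limit = instant_limit
        # Decay tuned so that a full counter drains roughly `rate_limit`
        # queries per second, allowing about that sustained rate long-term.
        self.decay = rate_limit / instant_limit
        self.counter = 0.0
        self.last = 0.0

    def allow(self, now):
        # Apply exponential decay for the time elapsed since the last query.
        self.counter *= math.exp(-self.decay * (now - self.last))
        self.last = now
        if self.counter + 1 > self.instant_limit:
            return False          # over the limit: restrict this query
        self.counter += 1
        return True

lim = OriginLimiter(instant_limit=5, rate_limit=10)
burst = [lim.allow(0.0) for _ in range(6)]   # 6 queries at the same instant
print(burst)          # first 5 allowed, 6th restricted
print(lim.allow(1.0)) # allowed again after the counter has decayed
```

A burst from one origin is capped at the instant limit, while the decay lets a well-behaved client resume shortly afterwards; applying the same counters per address prefix extends this to whole networks.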
Security Talks - DoS and Hijacks
Distributed Denial of Service (DDoS) attacks have been a persistent and ever-growing threat to the availability of networks and services on the Internet. Reflection & Amplification (R&A) is one of the popular DDoS attack types and the DNS is one of the most common attack vectors for this attack type. DNS-based DDoS attacks typically misuse open DNS resolvers by sending them queries with spoofed source addresses. These resolvers in turn send a response (which is typically larger than the query in size) to a victim and, in orchestration, can exhaust the victim’s network capacity or its upstream infrastructure.
Despite many efforts to patch exposed open resolvers, the shrinkage of their pool has slowed down, and there is still a long tail of millions of open resolvers available on the Internet. The majority (∼99%) of open resolvers are likely unintentionally exposed, as we show in [1]. Thus, we argue that the pool of exposed open resolvers is not likely to shrink substantially in the near future.
Open resolvers are, however, not equally powerful in delivering DDoS attack traffic. For example, a CPE device running an open resolver in a household with limited network connectivity is likely not going to be able to keep up with delivering bursts of attack traffic, while a host in a datacenter likely has ample misusable link capacity. In other research [2], we show that a sizable subset (∼12%) of open resolvers run in datacenter networks. Even when open resolvers do not suffer from limited link capacity, there are other factors that can limit their firepower. One such factor is the internal configuration of open resolvers, which results in only certain open resolvers being capable of handling specific queries with large response sizes. We investigate this in [3] and show that the collective bandwidth amplification power of open resolvers can be reduced by ∼80% if we patch the top 20% most potent open resolvers. Several phenomena in the network can also indirectly impact the amplification power of open resolvers. We studied this in [4] and show that certain artifacts in the network, such as directed IP broadcast, can ramp up the amplification power of open resolvers by multiple orders of magnitude. Considering the diversity we observe in the amplification power of open resolvers, we advocate for their prioritized take-down rather than fitting all of them under the same umbrella. This could proactively reduce the exposed reflection and amplification potential in an efficient way.
Finally, the pool of exposed open resolvers is significantly larger, by multiple orders of magnitude, than the typical number of reflectors misused in attacks in practice. This raises the question of whether there is any rationale behind the selection of the exploited reflecting infrastructure. Knowing the diversity in the open resolver population, it stands to reason that DDoS attacks could be more efficient if attackers leveraged reflectors with higher amplification power. To quantify this, we investigate real-life DDoS attacks to learn more about the reflector selection practices followed by attackers. Our findings reveal that attackers do not yet leverage the full power of DNS reflectors, neither in the number of misused reflectors nor in the amplification potential of each reflector. This means that we can expect attacks to become even more powerful in the future if we do not act in time to reduce the exposed reflection potential.
[1] R. Yazdani, M. Jonker and A. Sperotto. Swamp of Reflectors: Investigating the Ecosystem of Open DNS Resolvers. In International Conference on Passive and Active Network Measurement (PAM ’24), doi: 10.1007/978-3-031-56252-5_1.
[2] R. Yazdani, A. Hilton, J. van der Ham-de Vos, R. van Rijswijk-Deij, C. Deccio, A. Sperotto and M. Jonker. Mirrors in the Sky: On the Potential of Clouds in DNS Reflection-based Denial-of-Service Attacks. In Proceedings of the 25th International Symposium on Research in Attacks, Intrusions and Defenses (RAID ’22), doi: 10.1145/3545948.3545959.
[3] R. Yazdani, R. van Rijswijk-Deij, M. Jonker and A. Sperotto. A Matter of Degree: Characterizing the Amplification Power of Open DNS Resolvers. In International Conference on Passive and Active Network Measurement (PAM ’22), doi: 10.1007/978-3-030-98785-5_13.
[4] R. Yazdani, Y. Nosyk, R. Holz, M. Korczyński, M. Jonker and A. Sperotto. Hazardous Echoes: The DNS Resolvers that Should Be Put on Mute. 7th Network Traffic Measurement and Analysis Conference (TMA ’23), doi: 10.23919/TMA58422.2023.10198955.
The open DNS infrastructure (ODNS) includes all devices that accept and resolve DNS queries from any client. As an open system, the ODNS infrastructure is a popular target for attackers who search for amplifiers of DNS requests, for periodic DNS scan campaigns which try to expose the attack surface, and for researchers who want to learn more about DNS behavior.
Due to the danger posed by open DNS resolvers, e.g., misusing them as amplifiers in DNS amplification attacks, several campaigns have been launched to raise awareness of open DNS infrastructure services. Their total number decreased from over 30 million in 2013 down to only a few million devices nowadays.
The two ODNS components that get most of the attention are recursive resolvers and recursive forwarders. However, there is also a third component, called transparent forwarders, initially observed in 2013. These devices transparently relay DNS requests to DNS resolvers by spoofing the client's IP address.
Unfortunately, researchers and scanning campaigns have paid little to no attention to transparent DNS forwarders. We recently revisited the open DNS (ODNS) infrastructure and systematically measured and analyzed transparent forwarders. Our findings raised concerns for three reasons. First, the relative amount of transparent forwarders increased from 2.2% in 2014 to 26% in 2021 (and 31% in 2024). Second, as part of the ODNS, transparent forwarders interact with unsolicited, potentially malicious requests. Third, common periodic scanning campaigns such as Shadowserver or Censys still do not capture transparent forwarders and thus underestimate the current threat potential of the ODNS.
We argue that open transparent DNS forwarders pose a threat to the Internet infrastructure. In addition to recursive forwarders, they expand the potential field of attack, as they can be used to interact with resolvers that are not publicly accessible.
To monitor the current state of the open DNS and better understand the deployment of transparent forwarders, we launched a long-term measurement campaign. We are currently in the process of extending support for multiple DNS transports, in addition to DNS over UDP and DNS over TCP.
In this presentation, we want to talk about our most recent findings on the ODNS infrastructure; in particular, we will highlight insights gained between our initial study and now. We will present our data set and would like to discuss potential collaborations to improve the current situation by reducing the number of open transparent DNS forwarders.
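The distinguishing behaviour of a transparent forwarder, a response arriving from an address the client never queried, can be shown with a trivial classification sketch. Addresses here are simulated documentation prefixes; a real scanner would send live probes:

```python
# A transparent forwarder relays the query to a public resolver while
# spoofing the client's source address, so the client receives the response
# directly from a resolver it never queried. A scanner can use this
# mismatch as the tell-tale sign. Simulated addresses; no live probing.

def classify(queried_ip, response_src_ip):
    """Classify an ODNS host from a single probe's response."""
    if response_src_ip == queried_ip:
        return "recursive resolver or recursive forwarder"
    return "transparent forwarder (response came from a different host)"

print(classify("198.51.100.7", "198.51.100.7"))
print(classify("198.51.100.7", "203.0.113.53"))
```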
The Domain Name System (DNS) establishes clear responsibility boundaries among nameservers for managing DNS records via authoritative delegation. However, the rise of third-party public services has blurred this boundary. We uncover a novel attack surface, named XDAuth, arising from public authoritative nameserver infrastructures' failure to adequately isolate data across zones. This flaw enables adversaries to inject arbitrary resource records across logical authority boundaries and covertly hijack domain names without authority. Unlike prior research on stale NS records, which concentrated on domain names delegated to expired nameservers or those of hosting service providers, XDAuth targets enterprises that maintain their own authoritative domain names.
Specifically, exploiting XDAuth, an attacker could covertly inject arbitrary resource records for a victim's domain name via an out-of-delegation nameserver. For instance, a customer deploys their authoritative nameserver (e.g., ns.c1.com) on a provider’s DNS infrastructure. Because the provider lacks DNS zone isolation, the attacker can manipulate the resource records of domain names delegated to ns.c1.com through the provider's own nameserver (e.g., ns.provider.com).
To evaluate the prevalence of XDAuth, we propose a semi-automated detection framework, named XDAuthChecker, to effectively uncover XDAuth threats in the wild. We used the framework to systematically explore nameserver dependencies and identify shared nameserver groups. For each group, we examined whether there are vulnerable hosting providers that can be exploited to inject forged DNS records into the shared nameservers. Subsequently, we conducted a large-scale measurement study of the DNS-sharing ecosystem and the enterprises that XDAuth affects.
By running XDAuthChecker on 1,090 gTLD zone files, we revealed that shared nameservers are indeed widespread and that the threat is severe. We identified a total of 2,372 shared nameserver groups, consisting of 60,974 nameservers with identical IP addresses and 4,800 nameservers with varied NS domains and IP addresses. Upon analyzing these shared groups, we identified 12 potentially vulnerable providers, including Amazon Route 53, NSONE, and Digicert DNS. These providers indirectly affect 1,881 other nameservers, 981 of which rank in the Tranco top 1M, highlighting a substantial security threat. After detecting domains delegated to affected nameservers, we found that XDAuth poses security risks to numerous well-known enterprises. In total, we discovered 125,124 domains vulnerable to XDAuth attacks, encompassing notable entities such as McKesson and Canon. The affected entities also include domain management and digital certificate companies, indicating that their customer domains are susceptible to domain hijacking.
The developer community handles security defects on a regular basis, and most organisations now use CVSS (the Common Vulnerability Scoring System framework) to convey vulnerability severity and impact to users of their software products. The laudable objective behind encouraging all software vendors and distributors to use the same metrics system is that it enables software administrators to more easily make the right decisions about how quickly to respond to each new security report they receive: "Is this an issue that should be patched as soon as possible, or can it wait until the next scheduled maintenance window?"
What we have found, however, is that the majority of vulnerabilities reported against BIND nearly always score one of a small number of values, regardless of their actual operational risk when assessed instead on the popularity or obscurity of the feature involved and on the likelihood that the uncovered defect would enable a feasible attack.
How can we more realistically evaluate and report DNS security vulnerabilities so that the information we provide on each is genuinely useful? How can we do something better than just scoring most of our BIND Security Advisories at 7.5?
We have conducted a field study on post-quantum DNSSEC, involving RIPE ATLAS measurements with around 10,000 probes. Using implementations of PQC signing schemes (Falcon, Dilithium, SPHINCS+, XMSS) in both BIND and PowerDNS, we investigated DNS response success and failure rates depending on the signing scheme and other parameters.
In addition to the above algorithms, we for the first time present results on a new class of DNSSEC signatures, using Merkle trees for optimizing signature sizes. Besides measurement results, we'll provide context on our implementation approach.
We find that, depending on circumstances, a significant fraction of clients choke. Failure rates are mainly a function of response packet size, which is mediated by parameters such as DNSSEC configuration (KSK/ZSK vs. CSK, NSEC vs. NSEC3, or compact DoE) and the presence of the DO bit, with some variation depending on transport. This is qualitatively in line with the "educated guess", but adds quantitative detail. We also find surprising results, such as a number of resolvers claiming to have validated PQC signatures, even though it is implausible for those resolvers to support these algorithms.
Implementation included adding both signing and validation support to PowerDNS recursor and BIND resolver. Both functions can be tested using a do-it-yourself frontend, which the public can use to work and familiarize themselves with our testbed. We hope that this study helps inform future PQC engineering developments in the DNSSEC context.
Phishing on the web is a model of social engineering and an attack vector for getting access to sensitive and financial data of individuals and corporations. Phishing has been identified as one of the prime cyber threats in recent years. With the goal of effectively identifying and combating phishing as early as possible, we present in this paper a longitudinal analysis of phishing attacks from the vantage point of three country-code top-level domain (ccTLD) registries that manage more than 8 million active domains, namely the Netherlands’ .nl, Ireland’s .ie, and Belgium’s .be. We perform a longitudinal analysis of phishing attacks spanning up to 10 years, based on more than 28 thousand phishing domains. Our results show two major attack strategies: national companies and organizations are far more often impersonated using maliciously registered domains under their country's own ccTLD, which enables better mimicry of the impersonated company. In stark contrast, international companies are impersonated using whatever domains can be compromised, reducing overall mimicry but bearing no registration or financial costs. We show that 80% of phishing attacks in the studied ccTLDs employ compromised domain names, and that most research works focus on detecting new domain names instead. We find banks, financial institutions, and high-tech giant companies at the top of the most impersonated targets. We also show the impact of ccTLDs' registration and abuse-handling policies on preventing and mitigating phishing attacks, and that mitigation is complex and performed at both the web and DNS levels by different intermediaries. Last, our results provide a unique opportunity for ccTLDs to compare and revisit their own policies and their impacts, with the goal of improving their own mitigation procedures.
The increasing deployment of encrypted DNS has enterprises and service providers wanting to identify the clients that connect to them. Identifying clients allows for approved access and custom policies. In this presentation, we will discuss the latest draft on Client Authentication Recommendations for Encrypted DNS (CARED). We will walk through the reasons for this draft, our recommendations for when and how to use it, and the alternatives we evaluated.
This talk will explain draft-fujiwara-dnsop-dns-upper-limit-value-01, "Upper limit values for DNS". The author requests reviews and discussion in the IETF dnsop WG.
In order to resolve a name, DNS resolvers need to resolve the name's zone, its parent zones, and their name servers, leading to a potentially large number of transitive dependencies.
During normal operation, typically only a subset of these dependencies is needed, as the first authoritative answer is accepted.
However, in the presence of inconsistencies between name servers, this behavior may lead to seemingly random and hard-to-find problems.
In this talk, we present a large-scale dataset featuring 812M full domain resolutions and over 85B DNS queries. We hope this dataset will be of use to the community.
Digital Medusa is investigating global DNS usage trends, including the centralization of DNS resolver services. While the research is ongoing, we have published a preliminary report to receive feedback on the reasons behind DNS resolver usage trends, the use of open-source software for DNS resolvers, and the creation of a global regulatory DNS blocking tracker.
Read the report: https://digitalmedusa.org/wp-content/uploads/2023/12/Upload-DNS-Resolvers-First-Draft-October.pdf
Some preliminary results:
Public DNS services have traditionally offered Internet users in certain parts of the world better performance and greater privacy and accessibility.
Since 2022, the use of public DNS services has halved in many regions.
The study seeks to learn the reasons for this drop in usage and to build a global regulatory tracker of government requests to block domain name resolution.
IBM NS1 is an authoritative DNS provider, and historically we never shuffled our answers. Shuffling is now available as an optional feature. This lightning talk explains why we added it, and also shows some basic research into what happens if you do not shuffle.
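As a simple illustration of what answer shuffling involves (a generic sketch, not NS1's implementation), here are two common ways an authoritative server can vary answer order, round-robin rotation and full random shuffle:

```python
import random

# Two common ways an authoritative server can vary answer order. Without
# either, clients that always take the first record in the answer section
# pile their load onto a single endpoint.

def round_robin(rrset, counter):
    """Rotate the RRset by `counter` positions (deterministic rotation)."""
    k = counter % len(rrset)
    return rrset[k:] + rrset[:k]

def shuffled(rrset, rng=random):
    """Return the RRset in a fully random order."""
    answers = list(rrset)
    rng.shuffle(answers)
    return answers

rrset = ["192.0.2.1", "192.0.2.2", "192.0.2.3"]   # hypothetical A records
print(round_robin(rrset, 1))
print(shuffled(rrset))
```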