
Commercial DNSSEC?

Published February 22nd, 2007.

Seems that DNSSEC is being subjected to what an old boss of mine used to call the "fatal flaw seeking missiles" which try to explain the technical reasons that DNSSEC is not being implemented. First it was zone walking, then the complexity of Proof of Non-Existence (PNE), next week ... one shudders to think. While there is still some modest technical work outstanding on DNSSEC - NSEC3 and the mechanics of key rollover being examples - that work, of itself, does not explain the stunning lack of implementation or aggressive planning within the DNS community. Perhaps we need to review, in a wider context, the incentives to implement DNSSEC - for registry operators and domain owners - given that it is not a trivial process.

The negative incentives are clear, scary and unacceptable. First, we have the minor problem of not knowing whether our browsers are really taking us to the site we asked for. Second, more work is being done and published on DNS exploits and cache poisoning - the nascent DNS hacking community is getting a thorough and on-going education. Third, more organizations are running with nice short TTL values that help the wannabe attackers get in some serious poisoning practice. Fourth, we are creating caches in stub resolvers and browsers all over the place such that, in the event bad data gets in there, we can be absolutely sure its pernicious effect will last for a long time. Given this non-exhaustive list, DNS administrators or domain owners who are not scared witless lack both imagination and professionalism.

With all the acknowledged weaknesses of the current DNS hierarchy, and assuming that domain owners are not stupid, the question is - why is DNSSEC not being implemented as fast as possible? And in passing let's dismiss the "we are just waiting for feature X or Y or Z and then ..." assertion. DNSSEC is being pushed by the technical community, not pulled by users.

The simple answer is - given the current DNS operational infrastructure even if we get all the technical details right - and they are largely right now - there is still no compelling incentive to implement DNSSEC.

Registry operators who would have the responsibility for doing serious DNSSEC work - and hence incur a burden of cost - cannot see a way to make additional revenue. Why? Because domain owners who sign their zones have no guarantee that an end user will receive the data that was sent from the authoritative name server. If a user is going to pay filthy lucre for something perhaps they want it to work. Period. Not "downhill with a following wind", or "most of the time", or "on every third Sunday in the month". They want a deterministic, guaranteed solution.

In the current DNS architecture most end-users have at least two levels of intermediate DNS functionality between their user access software, say a browser, and the authoritative DNS records - only one of which they have even limited control over.

  1. First, caching nameservers, typically located in a service provider's network, over which the user has no configuration control and which do most of the "heavy lifting" as far as name resolution is concerned. For the vast majority of users this nameserver's address is either supplied by DHCP or via a DNS proxy located in a DSL modem.

  2. Second, a local cache maintained in the stub resolver located at the user's PC, over which the user has limited control - in the sense that it requires registry or other configuration editing.

This DNS infrastructure has evolved pragmatically and functions perfectly in a world where all DNS access routes are equally insecure. But even if the target zone is signed and the caching nameserver security-aware - highly unlikely - the communication leg from the caching nameserver to the user application is still wide open to abuse. With the current DNS infrastructure we have not achieved end-to-end security - even with a DNSSEC implementation. And arguably never can.

Accentuate the Positive

So now it's time to take away the "fatal flaw seeking missile" targets and get positive.

Those fiendishly clever guys in the IETF DNSEXT working group have provided a solution in which all the weaknesses inherent in the current infrastructure can be removed - it just needs a few lines of code here and there to make it all work!

The current DNSSEC standards define a security-aware (stub) resolver that would be located at the user's PC and which can indicate to a security-aware intermediate nameserver that it will perform its own DNSSEC validation by setting the Checking Disabled (CD) flag in the DNS query header. This has the effect of inhibiting DNSSEC validation at the security-aware nameserver, causing all necessary records to be supplied to the resolver so that it can perform the security validation itself. The net result is that we have achieved end-to-end security. With this architecture the signed domain owner can be assured that all the hard work and pain involved in implementing DNSSEC will generate the predictable and desired result.
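To make the mechanics concrete, here is a minimal sketch (standard-library Python only) of a DNS query header with the CD flag set. The flag bit positions follow RFC 1035 and RFC 4035; the helper function itself is purely illustrative.

```python
import struct

# DNS header flag bits in the 16-bit flags field (RFC 1035, RFC 4035).
FLAG_RD = 0x0100  # Recursion Desired
FLAG_CD = 0x0010  # Checking Disabled: "I will do my own DNSSEC validation"

def make_query_header(query_id: int) -> bytes:
    """Build the 12-byte DNS query header with RD and CD set,
    telling the upstream nameserver to skip its own validation
    and hand over the records needed to validate locally."""
    flags = FLAG_RD | FLAG_CD
    qdcount, ancount, nscount, arcount = 1, 0, 0, 0
    return struct.pack("!HHHHHH", query_id, flags,
                       qdcount, ancount, nscount, arcount)

header = make_query_header(0x1234)
flags = struct.unpack("!HHHHHH", header)[1]
assert flags & FLAG_CD  # CD bit on: validation is left to this resolver
```

A full query would of course append the question section and signal DNSSEC support via EDNS0; the header alone is shown here only to locate the CD bit.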

So it remains to consider the tactical details of how we could make all this happen. Could we make this a commercial service? So here, in an attempt to start the discussion, is one "straw-man" solution to make the DNS world a safer place.

The security-aware stub resolver could either replace the existing stub resolver on the PC or be embedded into the browser. The latter method would clearly be relatively trivial with an Open Source browser - and would perhaps do wonders for the marketing of Mozilla - but has the disadvantage of not making the service available to all PC applications, for example, a mail client. The former method has a problem in that standard library calls to a local stub resolver have no means of returning an indication that the security check failed. However, it should be noted in passing that there are already serious problems with this interface, best illustrated by MSIE's browser-based cache, which keeps resolved names for 30 minutes (thus rendering useless all those short TTLs) simply because the interface also has no method of returning TTLs. So perhaps this interface needs overhauling in any case.
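As a sketch of what an overhauled interface might return, here is a hypothetical resolver result type that carries both the TTL and a security verdict. The `SecureResult` type and `resolve_secure` function are invented for illustration; they exist in no standard library.

```python
from dataclasses import dataclass
from enum import Enum

class Security(Enum):
    SECURE = "secure"      # validated chain of trust to a trust anchor
    INSECURE = "insecure"  # zone not signed; no validation possible
    BOGUS = "bogus"        # validation failed - data must not be used

@dataclass
class SecureResult:
    # Hypothetical richer return type: today's gethostbyname()-style
    # calls can report neither the record's TTL nor a security verdict.
    address: str
    ttl: int          # lets application caches (e.g. a browser) honour the zone's TTL
    status: Security

def resolve_secure(name: str) -> SecureResult:
    # Placeholder lookup; a real implementation would perform the
    # DNSSEC validation and report its outcome in `status`.
    return SecureResult(address="192.0.2.1", ttl=300, status=Security.SECURE)

result = resolve_secure("www.example.com")
if result.status is Security.BOGUS:
    raise RuntimeError("DNSSEC validation failed; refusing to connect")
```

The point of the sketch is the signature, not the lookup: an application can only act on a failed security check, or respect a short TTL, if the interface hands it that information.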

And what about building that mythical security-aware stub resolver? Well, it exists (UNBOUND), at least in architectural and prototype form, due to the insight and support of Verisign, Inc. and USC/ISI, and is currently being ported to C by NLNETLABS.NL.

The security-aware resolver needs a security-aware nameserver to do the heavy lifting of resolving DNS queries. While not vital, it nevertheless seems foolish to bypass this useful level of caching. In the classic architecture this function is typically performed by a service provider's caching nameserver, which cannot be guaranteed to be configured to be security-aware. We need a means for our security-aware stub resolver to get to a guaranteed, suitably configured, security-aware nameserver. The obvious way to do this is for our security-aware stub resolver simply to be 'pre-configured' with the (anycast) addresses of suitable nameservers. Such a nameserver could either be used for every query, or only if a test query to the default nameservers failed to find a DNSSEC service with the appropriate trust anchors.
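The fallback variant of that selection logic might look like the following sketch; the addresses are hypothetical placeholders from the RFC 5737 documentation ranges, and `supports_dnssec` stands in for a real capability probe.

```python
# Hypothetical selection logic: prefer the default (e.g. DHCP-supplied)
# nameserver if it proves DNSSEC-capable, otherwise fall back to a
# pre-configured anycast address operated by the DNSSEC service vendor.
# Addresses are placeholders from the RFC 5737 documentation ranges.
PRECONFIGURED_ANYCAST = ["192.0.2.53", "198.51.100.53"]

def supports_dnssec(server: str) -> bool:
    # Stand-in for a test query (e.g. requesting DNSKEY/RRSIG records
    # and checking the DO/AD bit handling); always False in this sketch.
    return False

def choose_nameserver(default_servers: list[str]) -> str:
    for server in default_servers:
        if supports_dnssec(server):
            return server                # local cache is security-aware: use it
    return PRECONFIGURED_ANYCAST[0]      # otherwise, the vendor's anycast service

assert choose_nameserver(["10.0.0.1"]) == "192.0.2.53"
```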

So finally we are left with a minor hole in the architecture. The current gTLDs, ccTLDs (with the honorable exception of Sweden) and sTLDs - and, just in passing, the root - are not secure.

There are two solutions to this problem. First - wait, perhaps forever. Second, bypass the normal DNS hierarchy when validating DNSSEC. Here again there is a solution which, depending on your point of view, is either skullduggery or inspired - DNSSEC Lookaside Validation (DLV). One of the main criticisms leveled against DLV is that it does not scale. By this is normally meant that we cannot have hundreds of possible DLV zones, each requiring a trust anchor (the same, by the way, is true of any "island of security" strategy). But perhaps there is no need for scaling. Suppose two or three vendors were to offer the services outlined here - the Mastercard and Visa analogy springs to mind. Zone owners would select their chosen supplier for DNSSEC services, and these vendors would provide browser or security-aware stub-resolver enhancements and security-aware caching nameserver services. There is no need for scaling.
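The lookaside mechanism itself is simple: per RFC 4431, the DLV record for a zone is published inside the registry's zone, at a name formed by appending the registry name. A minimal sketch, where the registry name is a hypothetical vendor-run example:

```python
def dlv_lookup_name(zone: str, dlv_registry: str) -> str:
    """Per RFC 4431, the DLV record for <zone> registered with a
    lookaside registry <registry> is published at <zone>.<registry>.
    A validator that finds no trust anchor up the normal hierarchy
    queries this name instead."""
    return f"{zone.rstrip('.')}.{dlv_registry.rstrip('.')}."

# A signed zone registered with a (hypothetical) vendor-run DLV registry:
name = dlv_lookup_name("example.com", "dlv.example.net")
assert name == "example.com.dlv.example.net."
```

With only two or three such registries in play, each needs just one trust anchor shipped with the vendor's resolver - which is why, in the model above, the scaling objection loses its force.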

The solution outlined is not the only possible one, but it tries to be faithful to the spirit of DNSSEC and could be attractive to critical infrastructure, financial and revenue-earning domain owners, who might even be persuaded to part with modest sums to let their DNS administrators sleep nights.

Perhaps the bottom line here is this - if the registry operators do not provide the appropriate DNSSEC end-to-end services someone is going to eat their lunch. The depressing question that would then follow is, once this alternate architecture is in place (driven by user demand), is there any residual value-added left for the registry operators?


Copyright © 2003 - 2017 NetWidget, Inc. All rights reserved.
Page modified: July 11 2011.
