ICANN Email List Archives

[new-gtlds-dns-stability]



Some short comments

  • To: new-gtlds-dns-stability@xxxxxxxxx
  • Subject: Some short comments
  • From: Karl Auerbach <karl@xxxxxxxxxxxx>
  • Date: Thu, 06 Mar 2008 17:46:32 -0800


1. In the section entitled "File Extensions":

A. Why is .com not on the list? .com, like .exe, is a very well-established suffix in the DOS and MS Windows world for an executable file. If .exe is somehow tainted, then for that same reason so is .com.

B. If one is worrying about how browsers might interpret things that people might type into browser address bars, then one had better be prepared for an infinite Pandora's box of issues to arise. For example, one can readily conceive of a browser doing a sound-alike mapping: it might map a nasal voice speaking .farm to .firm. Should ICANN thus ban .farm and .firm because they might be misinterpreted? Or should ICANN do as it is already doing with respect to the fact that names typed into web browser address bars are often sent to search engines: place the issue outside of ICANN's purview and leave its solution to those who write web browsers, rather than overloading additional restrictions onto the domain name system?

C. There will always be overlaps between disjoint name spaces. Why should the DNS aspire to be the most deferential of name spaces, withdrawing a name whenever there is even the slightest overlap with another space? In the space of human names there are many people who have the surname "Li" - the "avoid overlap" approach that ICANN is suggesting here would argue just as strongly against a TLD named ".li" as against one named ".pdf".

D. Consequently it seems poor policy to limit DNS because of possible overlaps with other, distinct name spaces. And ICANN does not need to engage in the mission creep that is implicit in such a policy.

2. In the section entitled "Capacity of the Root Zone":

A. Given that our infrastructures are becoming increasingly interlinked, and that there are people out there who are becoming increasingly sophisticated at mounting synchronized attacks against multiple points, is it wise to so quickly dismiss the time it takes to resurrect a machine serving a DNSSEC-signed zone? Even though systemic outages at the root or TLD level are unlikely, they are nevertheless possible. It would be useful to know how long it would take to recover the DNS hierarchy should all the root servers, or a large TLD such as .net (which contains the names of root and TLD servers), suffer a total systemic failure.

B. Several years ago a friend and I took a day to write two simple Python programs. The first generated root zone files of arbitrary size, with a mix of name lengths and character content (in order to exercise name-server lookup algorithms). The second generated a query load with a mix of names known to be hits and misses against that generated root zone. By running multiple copies of that second program, any arbitrary query load could be created. Unfortunately my copies of these programs have long since drifted out onto backups, else I'd submit them. Nonetheless, it does seem that ICANN ought to build similar tools so that it can run actual experiments on large root zones.
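A minimal sketch of the idea, written from memory rather than recovered from those backups, might look something like the two scripts below; the .test names, the 2-to-20 character label mix, and the 70/30 hit/miss split are arbitrary placeholders rather than the values the originals used.

    # make_root.py -- generate a synthetic root zone of arbitrary size.
    import random
    import string
    import sys

    ALPHABET = string.ascii_lowercase + string.digits

    def random_label():
        # Vary label length and character content so that the name
        # server's lookup algorithms get exercised.
        return ''.join(random.choice(ALPHABET)
                       for _ in range(random.randint(2, 20)))

    def write_zone(num_tlds, path):
        labels = set()
        while len(labels) < num_tlds:
            labels.add(random_label())
        with open(path, 'w') as f:
            # Minimal apex records so the file will load in a name server.
            f.write('. 86400 IN SOA a.root-servers.test. hostmaster.test. '
                    '1 1800 900 604800 86400\n')
            f.write('. 86400 IN NS a.root-servers.test.\n')
            for label in sorted(labels):
                f.write('{0}. 86400 IN NS ns1.{0}.test.\n'.format(label))

    if __name__ == '__main__':
        write_zone(int(sys.argv[1]), sys.argv[2])

    # make_queries.py -- emit query names that are a mix of known hits
    # and certain misses against the generated zone, one "name type"
    # pair per line.  Run several copies concurrently to produce
    # whatever aggregate query load is wanted.
    import random
    import sys

    def labels_from_zone(path):
        # Pull the delegated TLD labels back out of the zone file.
        labels = []
        for line in open(path):
            if ' IN NS ' in line and not line.startswith('.'):
                labels.append(line.split()[0].rstrip('.'))
        return labels

    def query_stream(labels, count, hit_ratio=0.7):
        for _ in range(count):
            if random.random() < hit_ratio:
                yield random.choice(labels) + '.'
            else:
                # Names of this form are never generated above (no hyphens
                # are used there), so these are guaranteed misses.
                yield 'no-such-tld-%d.' % random.randrange(10 ** 9)

    if __name__ == '__main__':
        for name in query_stream(labels_from_zone(sys.argv[1]),
                                 int(sys.argv[2])):
            print('%s NS' % name)

The second script's output can be piped into any query tool that accepts one name and query type per line (dnsperf, for example), and running multiple copies against a server loaded with the generated zone produces whatever aggregate load one cares to test.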

3. In the section entitled "Operational Capacity", one has to wonder why ICANN requires 1.5 people to handle one zone file update per day. At a burdened labor rate that means it costs ICANN on the order of $1,000 (US) to handle a single update. Verisign, with .com, with an error rate that is nearly nil, manages to process many millions of updates every day, and to do so for a cost that is probably on the order of a few cents each.
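(The arithmetic behind that $1,000 figure: at a fully burdened cost of, say, $250,000 per person per year, 1.5 people comes to roughly $375,000 per year; spread over roughly one update per day, that is on the order of $1,000 per update.)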

Reading between the lines, it appears that ICANN/IANA uses a lot of manual processing. Whatever happened to the administrative software that was unveiled with much pomp a couple of years ago?

The argument that errors at the root might cause greater problems than errors in .com is not convincing. Verisign's error rate is very low, and Verisign has the ability to push corrections out within 5 minutes. Why can this not be done at the root level as well, particularly given that even under the most generous estimates of growth the root zone will remain tiny compared to today's .com and thus much easier to disseminate? And why can the methods that Verisign uses to catch processing errors not be adopted for the root zone?

                --karl--
                Karl Auerbach
                Former, and only, Elected member of the
                  ICANN Board of Directors for North America





