ICANN Email List Archives

[zfa-concept-15feb10]



Comments on the ZFA Concept Paper

  • To: zfa-concept-15feb10@xxxxxxxxx
  • Subject: Comments on the ZFA Concept Paper
  • From: "Joe St Sauver" <joe@xxxxxxxxxxxxxxxxxx>
  • Date: Thu, 8 Apr 2010 09:35:43 -0700 (PDT)

Since it was mentioned during today's ZFA call that participants in the 
ZFA advisory group are themselves encouraged to comment on the ZFA concept 
paper to ensure their comments are part of the record, I would like to
recapitulate five of the points that I've previously raised on the
group's mailing list and during calls.


1) The ZFA model should not consider just issues relating to zone
file access scalability with an increasing number of TLDs, but should 
*also* consider issues of zone file access "modernization" (see
http://mm.icann.org/pipermail/zfa-ag/2010-January/000024.html and 
http://mm.icann.org/pipermail/zfa-ag/2010-January/000059.html ).

By this I mean that currently ZFA users have one (and only one) data 
product they can receive under the ZFA program: the entire zone file
for a TLD, typically released on a once-a-day basis. 

That once-a-day, take-it-or-leave-it format may once have been 
adequate, when the zone files were small and relatively static, but 
today that historical model no longer meets the needs of the 
community, given the dynamic nature of some zones, the volume of data 
included in the zone files, and the increasingly diverse needs of zone 
file data consumers. 

Specifically, at least some users may only be interested in some subset 
of the records included in the zone file, such as perhaps only NS 
records, or perhaps only zone file records added since the zone 
file was last released. Those customers should not need to download 
(and then discard!) all the other irrelevant records to get just the 
subset of data they're actually interested in. 
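
As a concrete illustration of what a more flexible data product might
look like to a consumer, here is a minimal sketch (in Python, with
hypothetical local file names) that extracts only the NS records from a
downloaded TLD zone file and reports the records added since the
previous day's copy. It assumes one resource record per line in the
usual master file format:

  # Minimal sketch, hypothetical file names: pull out only the NS
  # records from a zone file and list records added since the prior
  # release.
  def read_records(path):
      records = set()
      with open(path) as f:
          for line in f:
              line = line.strip()
              if not line or line.startswith(";"):   # skip blanks/comments
                  continue
              records.add(line)
      return records

  def is_ns(record):
      # In master file syntax the record type precedes the RDATA; TTL
      # and class may be omitted, so look for "NS" among the early fields.
      return "NS" in record.split()[:4]

  today = read_records("example-tld-zone.today.txt")
  yesterday = read_records("example-tld-zone.yesterday.txt")

  print("NS records:", len([r for r in today if is_ns(r)]))
  print("Added since last release:", len(today - yesterday))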

As we look at how zone file access should be delivered moving forward, 
we should also give serious thought to the possibility of offering zone 
file program users more flexible, and potentially more frequent, access 
to zone file contents. Doing so could better meet the needs of the 
community, while also reducing the volume of traffic that zone file 
access providers need to transfer. 

I recognize that some access of this sort may be offered as a commercial
data product by some registries, but I'd like to see it made uniformly
available at low or no cost by all TLD operators.


2) The ZFA model should explicitly consider and potentially expand 
the technical channels used to distribute zone file data (see
http://mm.icann.org/pipermail/zfa-ag/2010-January/000069.html ).

Currently the default paradigm is a text file (or a compressed text
file) retrieved via FTP. This distribution channel represents the
state of the art -- from twenty years ago! Consideration should be 
given to modernizing the distribution channels available to zone 
file participants to include things such as GridFTP (orders of 
magnitude faster than conventional FTP), or Bittorrent (access to 
the files can still be controlled, even in a P2P distribution model, 
if the zone files are encrypted with PGP or GNU Privacy Guard).
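
To make the BitTorrent-plus-encryption idea concrete, here is a minimal
sketch (hypothetical key IDs and file names, assuming the standard gpg
command line tool) that encrypts a zone file to the public keys of
authorized subscribers before the file is seeded on an uncontrolled
channel; possession of the encrypted copy alone does not grant access
to the zone data:

  # Minimal sketch: encrypt a zone file to the public keys of
  # authorized ZFA subscribers (hypothetical key IDs) before public
  # P2P distribution.
  import subprocess

  authorized_keys = ["0xAAAAAAAA", "0xBBBBBBBB"]   # hypothetical key IDs

  cmd = ["gpg", "--batch", "--yes", "--encrypt",
         "--output", "example-tld-zone.txt.gpg"]
  for key in authorized_keys:
      cmd += ["--recipient", key]
  cmd.append("example-tld-zone.txt")

  subprocess.run(cmd, check=True)
  # example-tld-zone.txt.gpg can now be seeded publicly; only holders
  # of a corresponding private key can decrypt it.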

If access control and authentication are still required, consideration
should also be given to modernizing the ZFA identity management 
architecture. Since zone file access was originally established, 
advances in identity management such as federated authentication 
(Shibboleth, etc.) now make it possible for ALL zone file providers 
to leverage a SINGLE common source of authentication, rather than 
requiring each zone file provider to create and maintain their own 
ZFA credentials for each ZFA customer.

If security is important, we should also be considering moving toward 
the use of scp or sftp (perhaps with key-based authentication) instead 
of continuing 
to just use ftp with plain text (readily sniffable) passwords.
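
As one possible illustration (hypothetical host name, account, and
paths, using the third-party paramiko library), a retrieval over SFTP
with key-based authentication might look like the following sketch,
rather than today's cleartext FTP session:

  # Minimal sketch: fetch a zone file over SFTP so that neither the
  # credentials nor the data travel in the clear. Host, account, and
  # paths are hypothetical.
  import paramiko

  client = paramiko.SSHClient()
  client.load_system_host_keys()
  client.connect("zfa.example-registry.tld", username="zfa-user",
                 key_filename="/home/zfa-user/.ssh/id_rsa")

  sftp = client.open_sftp()
  sftp.get("/zones/example-tld-zone.txt.gz", "example-tld-zone.txt.gz")
  sftp.close()
  client.close()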

ICANN should also consider requiring zone files to be made available on 
physical media, such as CDs or DVDs, with delivery by postal mail or 
FedEx for a fee on a "push" subscription basis for those who might 
routinely want copies of zone files, but lack adequate network capacity 
to routinely download them over the network (e.g., some researchers in 
developing nations).


3) Many of the gyrations associated with zone file access program
participation and administration stem from authentication and access 
control. It is not clear that the benefits of controlling
access to the zone files justify the cost and effort involved, and
ICANN should consider eliminating the requirement for controlled
access to zone file data. (see
http://mm.icann.org/pipermail/zfa-ag/2010-February/000150.html )

I would hypothesize that:

a) Virtually anyone can get access to zone files under the current
model. If anyone does get denied, nothing would prevent them from 
creating a new online identity (via a new free email account on 
Gmail/Hotmail/Yahoo/etc.) and trying again (to the best of my 
knowledge, zone file access is not tied to unique national identity 
documents such as passports or driver's licenses)

b) The log files which presumably document zone file access patterns 
have no routinely performed systematic anomaly detection processing 
(e.g., hypothetically, no one is watching for a zone file being 
downloaded, using the same credentials, in both Los Angeles and 
Paris at approximately the same time; a minimal sketch of such a check 
appears after this list)

c) Inactive ZFA accounts aren't systematically and routinely disabled

d) Because data isn't uniquely watermarked in any way, if leakage does 
occur, the source of the leakage could not be tracked or attributed 

e) Even if someone does get kicked out of the ZFA program, there is 
nothing that precludes them from turning right back around and signing 
up again under a new fake identity; the authentication revolving door 
spins freely.
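
To make point (b) concrete, the sort of routine anomaly detection I
have in mind need not be elaborate. A minimal "impossible travel"
sketch, assuming a hypothetical access log with one download per line
in the form <ISO timestamp> <username> <country code>, might look like:

  # Minimal sketch: flag any account that downloads from two different
  # countries within a one-hour window (log format is hypothetical).
  from collections import defaultdict
  from datetime import datetime, timedelta

  WINDOW = timedelta(hours=1)
  seen = defaultdict(list)              # username -> [(time, country), ...]

  with open("zfa-access.log") as log:   # hypothetical file name
      for line in log:
          ts, user, country = line.split()
          when = datetime.fromisoformat(ts)
          for prev_when, prev_country in seen[user]:
              if prev_country != country and abs(when - prev_when) < WINDOW:
                  print(f"ALERT: {user} downloaded from {prev_country} and "
                        f"{country} within an hour of each other")
          seen[user].append((when, country))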

If we are going to keep requiring authentication, we should describe 
WHY authentication is still necessary and WHAT the registries (or the 
aggregator or proxy operator) should be doing to leverage that data 
and actually improve ZFA security.

I'd also like to see transparent disclosure of actions taken against
zone file access agreement violators, just so that the community can
see that policies ARE being taken seriously and actually ARE being 
enforced. We don't need to have names named, but I would like to see 
some sort of annual summary, e.g.,

-- "A zone file access program participant shared his credentials on a
   publicly accessible web page, resulting in users from dozens
   of locations around the world accessing the zone file with that
   username and password. This user's access was terminated for this
   intentional policy violation."

-- "A zone file access program participant accidentally scheduled a
   cron job to download the zone file every minute instead of every
   day, thereby causing multiple download jobs to 'stack' as new
   ones started before the previous one finished. The user's access
   was temporarily disabled until the inadvertent error was corrected."
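
Incidentally, that sort of inadvertent stacking is easy to prevent on
the client side. A minimal sketch (hypothetical lock and script paths)
that exits immediately if the previous scheduled download is still
running:

  # Minimal sketch: take an exclusive lock before downloading so that
  # a new cron invocation exits quietly if the prior one is unfinished.
  import fcntl
  import subprocess
  import sys

  lock = open("/tmp/zfa-download.lock", "w")
  try:
      fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
  except BlockingIOError:
      sys.exit(0)                       # previous download still running

  subprocess.run(["/usr/local/bin/fetch-zone-file"], check=True)  # hypothetical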

If zone file access controls aren't mitigating an actual articulable
threat, and zone file access control security isn't being actively 
policed, consideration should be given to simplifying zone file 
access: zone files could simply be shared via anonymous ftp or a 
publicly accessible web page rather than requiring registration and 
password-controlled access.

While this might sound revolutionary, note that other zone files are
routinely publicly available without requiring per-user authentication
(e.g., ARIN and other regional registries make the in-addr zones freely 
available at ftp://ftp.arin.net/pub/zones/ARIN.in-addr.arpa.tar ,
177,664,000 octets as of April 8th). I see no evidence that doing so
has jeopardized the safety, stability or privacy of the Internet or the
users thereof, and I don't believe that TLD zone files would be materially
different. (If there is a clear distinction justifying a difference
in treatment between the in-addr zone files and the TLD zone files,
that difference should be clearly described as part of the report's
justification for not simply making the TLD zone files freely available.)


4) The general public may confound the information that's available
via zone files with the far broader range of information available
via whois. The ZFA document should carefully describe and illustrate
the sort of data that's included in the zone files to eliminate any
possible misunderstanding when it comes to things like registrant
point of contact information privacy. (see the discussion at
http://mm.icann.org/pipermail/zfa-ag/2010-March/000235.html )
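
To illustrate the distinction, a hypothetical (and simplified) excerpt
of the entries a registered domain contributes to its TLD zone file is
shown below: delegation and DNSSEC records only, with none of the
registrant name, postal address, phone, or email data found in whois:

  example.tld.       172800  IN  NS  ns1.example.tld.
  example.tld.       172800  IN  NS  ns2.example.tld.
  ns1.example.tld.   172800  IN  A   192.0.2.53
  ns2.example.tld.   172800  IN  A   198.51.100.53
  example.tld.       86400   IN  DS  12345 8 2 3A1B...   (digest truncated)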


5) My preference would be to see zone file access continue to be 
available at no direct out-of-pocket cost to users. However, if
there are new fees associated with any new zone file access model,
the fee structure should explicitly recognize one-time costs, fixed
annual costs, and per-download (or usage-based) costs, and should
strive to accommodate both those with limited needs (and limited
budgets) and those with more demanding needs (and the
budget to support those requirements). ICANN should also 
carefully consider how to equitably distribute the revenues it might 
receive from a zone file access program. Should the total pool of
received revenues be divided evenly on a per-registry basis? Should
the total pool be proportionate to the volume of zone file traffic
distributed? Should revenues be distributed proportionate to the 
number of registered zone file access customers? There are many possible
models (see the discussion at
http://mm.icann.org/pipermail/zfa-ag/2010-February/000144.html )
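
Purely to make those alternatives concrete, here is a small sketch with
made-up numbers showing how a hypothetical $100,000 annual revenue pool
would be apportioned to three hypothetical registries under each of the
three allocation models just mentioned:

  # Illustrative sketch only; all figures are invented.
  pool = 100_000.0

  registries = {
      #  name        zone data served (GB)   registered ZFA customers
      "alpha-tld": {"traffic_gb": 900.0, "customers": 150},
      "beta-tld":  {"traffic_gb":  90.0, "customers":  40},
      "gamma-tld": {"traffic_gb":  10.0, "customers":  10},
  }

  total_traffic = sum(r["traffic_gb"] for r in registries.values())
  total_customers = sum(r["customers"] for r in registries.values())

  for name, r in registries.items():
      even = pool / len(registries)
      by_traffic = pool * r["traffic_gb"] / total_traffic
      by_customers = pool * r["customers"] / total_customers
      print(f"{name}: even ${even:,.0f}, by traffic ${by_traffic:,.0f}, "
            f"by customers ${by_customers:,.0f}")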


So to summarize, I would urge ICANN to look not just at whether 
the current zone-by-zone access program continues to be a model
that will scale as we move from a handful of TLDs to hundreds
or even thousands, but also to consider:

-- moving away from a primitive "batch-oriented" one-size-fits-all
   once-a-day distribution of flat files to a more flexible model
   where users can get just the data they need, when they need it

-- moving away from slow and insecure file distribution technologies
   such as FTP to faster technologies (such as GridFTP) or at least
   more secure technologies (such as scp or sftp with key-based
   authentication)

-- making sure that if we *are* going to jump through all the hoops
   associated with requiring users to request access to zone files
   and provisioning individual accounts, we actually see
   true security benefits from those controls -- currently I see
   scant evidence that those controls are mitigating any real risk
   or that those steps are anything but historically mandated 
   policies that convey little if any real benefit

-- emphasizing the important differences between the contents of 
   zone files and the contents of whois data, particularly the
   absence of things like registrant point of contact data in the
   zone files -- I'm convinced that if more people understood what was
   actually IN the zone files, there would be far less reluctance
   to simply make them publicly available. 

-- if fees end up being involved for any new zone file access model,
   ensure that those fees decompose the various cost components 
   involved, and also carefully consider and document how any 
   revenues from a new zone file access program will get equitably
   apportioned to registries and others who may be providing that 
   access.


Thank you for considering these comments.

Regards,

Joe St Sauver, Ph.D.

Disclaimer: the opinions expressed are those of the author and do not
necessarily represent the opinion of any other entity or organization

