ICANN Email List Archives

[gnso-idng]



Re: [gnso-idng] rethinking IDN gTLDs

  • To: Avri Doria <avri@xxxxxxx>
  • Subject: Re: [gnso-idng] rethinking IDN gTLDs
  • From: Eric Brunner-Williams <ebw@xxxxxxxxxxxxxxxxxxxx>
  • Date: Mon, 30 Nov 2009 12:29:18 -0500

Avri Doria wrote:

> hi,
>
> Would/could this not be dealt with in the extended evaluation stage, where
> one requests an extended review of the rejection on the basis of Confusing
> Similarity because there is no risk of adverse effect?

At Sydney, ICANN's consultant on a portion of the evaluation process had breakfast with Werner and me. We explained that, as the authors of several similar applications, we thought it likely that evaluation of those applications, with no knowledge available to the evaluators that some of them shared a common property (say, 10 lines of total difference across all the applications sharing a common author), would miss the economy of scale that was available to the author and could equally be made available to the evaluators.
In a nutshell, I can write 10 applications for very little more than 
the cost of 1 application, where the requirements are equal. If the 
evaluator is unable to discover the similarity of the applications, 
the cost of the evaluation must be closer to 10 times the cost of 
evaluating one application than to the cost of evaluating one 
application alone.
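The arithmetic behind that claim can be sketched as a toy cost model. All numbers below are illustrative assumptions, not ICANN or KPMG figures:

```python
# Toy model of the evaluation-cost argument (illustrative numbers only).
FULL_EVAL = 100.0  # assumed cost units to evaluate one application from scratch
DIFF_EVAL = 10.0   # assumed cost units to review only the small differences

def total_cost(n_apps, similarity_known):
    """Total cost of evaluating n nearly identical applications."""
    if similarity_known:
        # Evaluate the common core once, then only the diff of each sibling.
        return FULL_EVAL + (n_apps - 1) * DIFF_EVAL
    # Mutually ignorant evaluation: every application costs full price.
    return n_apps * FULL_EVAL

print(total_cost(10, similarity_known=False))  # 1000.0 -- closer to 10x
print(total_cost(10, similarity_known=True))   # 190.0  -- closer to 1x
```

With similarity concealed the cost scales linearly in the number of applications; with it disclosed, only the marginal diff cost scales.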
My point is that it matters where in the application process 
information is available to the evaluator.
Concealing the similarity of .foo and .bar, or running a process so 
poorly designed that the information is not used by the evaluator, 
leads to higher costs (cost includes time) in the evaluation process.
So, to place the utility of the discovery that, say, VGRS is applying 
for .mumble and .momble (invent your favorite IDN-to-IDN or IDN-to-ASCII 
similarity here) in the failure-then-extended-review-request sequence 
creates avoidable cost.
> Or do you think it should be a complicating factor in the initial
> evaluation? Do we need a statement somewhere in the doc allowing for
> this possibility?

Please see the point Werner and I tried to get across to the KPMG person. What you call a "complicating factor" looks to me like a major cost and complexity saving for the evaluator.
Seriously, I have a score of linguistic and cultural applications that I 
expect to differ by a few pages out of a significant fraction of a ream 
each. What is the rational basis for concealing the common case, and a 
score of 10-page changes off that common case, from the evaluator?
If no rational basis exists, other than the desire to spend more 
applicant monies on repeated evaluation of the same application, then 
what rational basis is there, subject to the same caveat, for keeping 
the evaluators from knowing ab initio that some two or more applications 
have a relationship with each other, when mutually ignorant evaluation 
is the least useful course of action possible?
We shouldn't be designing the least efficient system imaginable.

Eric



