[soac-newgtldapsup-wg] Fwd: RE: [Tdg-legal] Estimating the cost of continuity capacity
- To: "soac-newgtldapsup-wg@xxxxxxxxx" <SOAC-newgtldapsup-wg@xxxxxxxxx>
- Subject: [soac-newgtldapsup-wg] Fwd: RE: [Tdg-legal] Estimating the cost of continuity capacity
- From: Eric Brunner-Williams <ebw@xxxxxxxxxxxxxxxxxxxx>
- Date: Thu, 17 Feb 2011 17:37:57 -0500
A reply from Afilias. Enjoy.
-------- Original Message --------
Subject: RE: [Tdg-legal] Estimating the cost of continuity capacity
Date: Thu, 17 Feb 2011 16:24:04 -0500
From: Michael Young <myoung@xxxxxxxxxxxxxxx>
To: 'Eric Brunner-Williams' <ebw@xxxxxxxxxxxxxxxxxxxx>,
<tdg-legal@xxxxxxxxx>
CC: 'soac-newgtldapsup-wg@xxxxxxxxx' <SOAC-newgtldapsup-wg@xxxxxxxxx>
Eric, while I find this truly interesting, I feel it is based on a number of dangerous assumptions.
1) It assumes that .CAT is by itself a statistically valid sample of the domain registry space. I believe using just .CAT fails the randomness requirement of valid sample data.
2) It assumes that reads are not part of essential registry services; I can't agree with that, and I doubt any operator would.
3) It assumes that costs scale in a linear fashion; they do not. Per-unit costs charged by service providers consistently drop at larger volumes.
4) It assumes common costs across different business models - again, not the case.
There are several other problems with this type of modelling, in that you are looking at a cost to carry versus a true breakeven.
In a full cost-to-carry model:
You have to project the domains that will remain in the registry for three years following the business failure point. To do so accurately, you have to consider projected registration growth rates until the point of failure and then project subsequent renewal rates after that, so in effect you end up with a rolling conditional model. You will also want to calculate a Net Present Value for the contingency fund; it only makes sense to invest it in risk-free securities, since it can go forward for a full six years (the six years come from the fact that the contingency must be in place for the first three years of operation, and then, if you fail just before the end of year three, the funds have to cover costs for the following three years).
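A minimal sketch of that rolling conditional model, in Python; the growth rate, renewal rate, per-domain carry cost, and risk-free rate below are hypothetical placeholders, not figures from this thread:

    # Rolling cost-to-carry sketch (all parameters are assumed placeholders).
    # Grow the registry to an assumed failure point, decay the base by the
    # renewal rate for three years of continuity operation, and discount each
    # year's carry cost at a risk-free rate to get the fund's Net Present Value.

    GROWTH_PER_YEAR = 10_000       # net new domains per year until failure (assumed)
    RENEWAL_RATE = 0.80            # fraction of domains renewing each year (assumed)
    CARRY_COST_PER_DOMAIN = 2.50   # USD per domain-year in continuity mode (assumed)
    RISK_FREE_RATE = 0.03          # annual return on risk-free securities (assumed)

    def contingency_npv(initial_domains: int, years_to_failure: int) -> float:
        """NPV of carrying the registry for three years after failure at year N."""
        domains = initial_domains + GROWTH_PER_YEAR * years_to_failure
        npv = 0.0
        for year in range(1, 4):               # three years of continuity operation
            domains *= RENEWAL_RATE            # only renewing domains remain to carry
            cost = domains * CARRY_COST_PER_DOMAIN
            npv += cost / (1 + RISK_FREE_RATE) ** (years_to_failure + year)
        return npv

    # Worst case in the six-year framing: failure just before the end of year 3.
    print(f"Contingency fund NPV: ${contingency_npv(50_000, 3):,.0f}")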
In short, each bidder will need to present their own contingency model, integrated with the financial section of their bid.
I also find renewals during the emergency transition period an interesting problem. Do you suspend expirations until regular registry (billing) operations recommence? Do you allow renewals and charge for them? Who collects and keeps the renewal fees if the cost to carry the domain is already paid for the next three years? If a new operator takes over the failed new TLD within, say, 12 months, does the remaining contingency fund return to the bankruptcy officer to pay out the creditors?
While financial modeling is something I can speak to, I have other related questions for the lawyers on this list:
How viable is it to set up a contingency fund that is truly untouchable by creditors in the event of bankruptcy?
I would think the order of priority in terms of claims would go something like this (again, I defer to the lawyers here):
1) Secured debt holders
2) Registrars (as the retailer), with first claim on prepaid funds and any applicable rebate amounts
3) Service providers, for work completed
4) Service providers, for contracted work not yet completed (Emergency Registry Provider costs would, I think, fit here)
5) Shareholders
The more I think about this, the more it seems the contingency fund should be a common insurance instrument operated by a neutral third party, versus 500 different contingency fund setups.
Michael Young
M:+1-647-289-1220
-----Original Message-----
From: Eric Brunner-Williams [mailto:ebw@xxxxxxxxxxxxxxxxxxxx]
Sent: February-17-11 2:47 PM
To: tdg-legal@xxxxxxxxx
Cc: soac-newgtldapsup-wg@xxxxxxxxx
Subject: [Tdg-legal] Estimating the cost of continuity capacity
TDG-Legal Colleagues,
I just looked over the January 2011 numbers for .cat.
The sum of all add, modify, or delete operations on all objects for the month was 48,012, slightly greater than the number of domains.
The definition of "object" in this registry includes domain objects, contact objects, and host objects.
I propose that we agree, as an estimating tool, that the sum of all operations on all objects in a given month is equal to the number of domain objects in the registry.
This means that we ignore the (check + info) operations, of which .cat had 1,361,398; the (check only) operations, of which .cat had an additional 1,147,927; and the (info only) operations, of which .cat had an additional 213,471. The reason we ignore these 2.7m transactions is that they are non-essential to continuity operations.
These are also read-only operations, and on average they complete in half the time of add, modify, and delete operations.
The reason we need not ignore the add operations is twofold. First, we know the registry's growth rate and can factor out the add operations to first order. In the case of .cat the growth rate is 10k/year, or roughly 1k/month, with an 80% renewal rate. Second, the add, modify, and delete operations are reported as an aggregate, and the contribution of the add operations is less than a quarter of all transactions.
I further propose that we agree that the operations are not highly clustered in time: that they occur during 2000 hours of an 8760-hour year, corresponding to a 5-day work week of 8-hour days, with uniform distribution within those 2000 hours.
If you agree to these two proposals, we may define a continuity transactional load in units of 10k domain registrations at the point of transition to continuity:
(10k domain objects) x (12 months) = 120,000 transactions/year
120,000 transactions / 2000 hours = 60 transactions/hour
I propose therefore a continuity transactional capacity definition of one
transaction/minute for a standard "unit" of 10k domain objects.
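As a worked check of the arithmetic above (a minimal sketch; the 2000 business hours correspond to roughly 50 working weeks of five 8-hour days):

    # Worked version of the unit calculation, under the two proposals above:
    # one transaction per domain per month, spread uniformly over business hours.

    UNIT_DOMAINS = 10_000            # one "unit" = 10k domain objects
    BUSINESS_HOURS = 2_000           # ~50 weeks x 5 days x 8 hours, of 8760 h/year

    tx_per_year = UNIT_DOMAINS * 12             # 120,000 transactions/year per unit
    tx_per_hour = tx_per_year / BUSINESS_HOURS  # 60 transactions/hour
    tx_per_min = tx_per_hour / 60               # 1 transaction/minute

    print(f"{tx_per_year:,} tx/year = {tx_per_hour:.0f} tx/hour = {tx_per_min:.0f} tx/min per unit")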
It is possible that registries with unlimited admission policies have radically different transactional properties. Not wanting to apply data from a community-based admission policy ("sponsored" in the 2004 contractual form) to applications lacking that, or any, restriction on admission, I propose that the 1 t/min/unit figure apply to applications designated as "community-based" in the pending evaluation.
If this is not an acceptable definition of continuity transactional capacity, please let me know. Again, "unit" here means 10k domains.
If we can agree to this, and if we assume that nearly all transactions should complete in 3 seconds or less, then the continuity transactional capacity cost can be defined as the cost of systems capable of completing all I/O and computation associated with a standard database read/write operation in 3 seconds.
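For a sense of scale, even if every transaction consumed the full 3 seconds, the 1 t/min/unit load keeps a single serial worker lightly used; a back-of-envelope check, using a hypothetical 100k-domain registry as the example size:

    # Utilization check for the 3-second completion bound (registry size assumed).
    UNITS = 10                  # hypothetical registry of 100k domains = 10 units
    TX_PER_MIN = 1 * UNITS      # continuity load: 1 transaction/minute per unit
    SERVICE_TIME_S = 3.0        # upper bound: every transaction takes 3 seconds

    busy_seconds = TX_PER_MIN * SERVICE_TIME_S  # 30 s of work in each 60 s
    utilization = busy_seconds / 60.0           # 50% of one serial worker

    print(f"{TX_PER_MIN} tx/min x {SERVICE_TIME_S:.0f} s = {utilization:.0%} utilization")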
I will offer estimates of the cost of zone file generation and name server capacity following the same methodology, modulo the caveat above on an acceptable definition of continuity transactional capacity.
Eric
_______________________________________________
TDG-legal mailing list
TDG-legal@xxxxxxxxx
https://mm.icann.org/mailman/listinfo/tdg-legal