ICANN Email List Archives



[gnso-wpm-dt] WPM-DT: Step 3a (Proposed)

  • To: <gnso-wpm-dt@xxxxxxxxx>
  • Subject: [gnso-wpm-dt] WPM-DT: Step 3a (Proposed)
  • From: "Ken Bour" <ken.bour@xxxxxxxxxxx>
  • Date: Wed, 09 Dec 2009 15:46:47 -0500

Team Members:


Now that we have a finalized Project List (Step 1) and expect to complete the
definitions for the X and Y axes tomorrow (Step 2), Step 3 involves using this
drafting team to exercise and test one or more ranking/rating methodologies as
a proof of concept.


As originally formulated, our goals in this step are to ensure that:

a)      the process we select and recommend is user-friendly, unambiguous,
and straightforward to execute; and

b)     realistic outputs can be created that will enable the Council to make
prioritization decisions once the process is actually completed.

Staff suggests that the WPM-DT test all candidate methodologies before making
a final recommendation to the Council.  That is, if each Council member will
be asked to rate/rank individually, then the drafting team should perform such
a test.  If, alternatively, the team thinks that the Council should form
sub-groups to produce consensus rankings/ratings, then the DT should exercise
that option.

Step 3a, then, might be to decide how many candidate methodologies will be
considered in this testing phase.  

For example, the team could choose to execute multiple approaches and, after
comparing the pros/cons of those various trials, decide which one combines
the best features.   

Options that have been identified thus far are: 

a)      Rating vs. Ranking:  should projects be rated (relatively) on a scale
such as H, M, L, or ranked numerically?

If the latter option is selected, should ties be permitted?  That is, can two
projects be ranked the same (e.g., 1-1-3-4-5-5-7)?
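The 1-1-3-4-5-5-7 pattern above is what is often called "standard competition"
ranking: tied projects share a rank, and the following rank is skipped.  As a
rough illustration only (the project names and scores below are invented, not
drawn from the WPM Project List), that convention could be sketched as:

```python
# Hypothetical sketch: "standard competition" ranking with ties,
# producing sequences like 1-1-3-4-5-5-7 from raw scores.

def rank_with_ties(scores):
    """Map each item to its competition rank: tied scores share a
    rank, and the next distinct score skips the intervening ranks."""
    ordered = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    ranks = {}
    for position, (item, score) in enumerate(ordered, start=1):
        prev_item, prev_score = ordered[position - 2] if position > 1 else (None, None)
        if score == prev_score:
            ranks[item] = ranks[prev_item]   # tie: reuse the previous rank
        else:
            ranks[item] = position           # otherwise rank = list position
    return ranks

# Invented example scores for seven projects:
scores = {"P1": 9, "P2": 9, "P3": 7, "P4": 6, "P5": 5, "P6": 5, "P7": 2}
print(rank_with_ties(scores))
# -> {'P1': 1, 'P2': 1, 'P3': 3, 'P4': 4, 'P5': 5, 'P6': 5, 'P7': 7}
```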

b)     Individual vs. Group:  should Council members rate/rank individually
or should sub-groups be formed to discuss and recommend a single consensus
answer from each one?   

Instead of attempting to narrow down the options, the team could perform all
four permutations as follows:

1)      Rank individually in numerical order (with ties?)

2)      Rate individually using a simple H-M-L scale

3)      Rank in sub-groups in numerical order (with ties?)

4)      Rate in sub-groups using a simple H-M-L scale
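
To make the rating permutations concrete, one simple way to compare H-M-L
results across raters is to weight H=3, M=2, L=1 and average.  The sketch
below is purely illustrative; the member names, project names, weights, and
ratings are invented assumptions, not part of the proposed methodology:

```python
# Hypothetical sketch of an H-M-L rating tally (permutations 2 and 4).
# Assumed weights: H=3, M=2, L=1; all names/data below are invented.

WEIGHTS = {"H": 3, "M": 2, "L": 1}

def average_rating(ratings_by_member, project):
    """Average numeric weight of one project's H/M/L ratings."""
    votes = [member[project] for member in ratings_by_member.values()]
    return sum(WEIGHTS[v] for v in votes) / len(votes)

ratings = {
    "member1": {"ProjA": "H", "ProjB": "M"},
    "member2": {"ProjA": "H", "ProjB": "L"},
    "member3": {"ProjA": "M", "ProjB": "L"},
}
print(round(average_rating(ratings, "ProjA"), 2))  # -> 2.67
print(round(average_rating(ratings, "ProjB"), 2))  # -> 1.33
```

Averaged scores like these would let the team compare a rating exercise
against a numerical ranking exercise when weighing the pros and cons of each
trial.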

If all four options are to be exercised, the team should also discuss the most
advantageous order.  Staff notes that it might be beneficial to work
individually first and then in sub-groups.  Since some Councilors, like
members of this team, are likely NOT to have a deep understanding of the
projects, rating/ranking individually will simulate that condition.  In terms
of rating vs. ranking, Staff does not have an opinion to offer.

Hopefully, during our next call, we will finalize Step 2 and make progress
on Step 3a by deciding which tests to perform and, if applicable, what
specific order.

I note that our original target completion date for Step 3 (Liz's email of
23 Nov) was 11 Dec 2009, but that now seems too ambitious.  If we have time,
we might also discuss a revised implementation timeframe leading to a final
recommendation.


If there is anything else that Staff can do in the interim to assist this
effort, please let us know.



