ICANN Email List Archives

[gnso-wpm-dt]



RE: [gnso-wpm-dt] WPM-DT: Step 3a (Proposed)

  • To: "Ken Bour" <ken.bour@xxxxxxxxxxx>, <gnso-wpm-dt@xxxxxxxxx>
  • Subject: RE: [gnso-wpm-dt] WPM-DT: Step 3a (Proposed)
  • From: "Gomes, Chuck" <cgomes@xxxxxxxxxxxx>
  • Date: Wed, 9 Dec 2009 17:54:36 -0500

Thanks Ken.  I inserted a few personal thoughts below.
 
Chuck


________________________________

        From: owner-gnso-wpm-dt@xxxxxxxxx
[mailto:owner-gnso-wpm-dt@xxxxxxxxx] On Behalf Of Ken Bour
        Sent: Wednesday, December 09, 2009 3:47 PM
        To: gnso-wpm-dt@xxxxxxxxx
        Subject: [gnso-wpm-dt] WPM-DT: Step 3a (Proposed)
        
        

        Team Members:

         

        Now that we have a finalized Project List (Step 1) and expect
tomorrow to complete a set of definitions for the X and Y axes (Step 2),
Step 3 involves utilizing this drafting team to exercise and test one or
more ranking/rating methodologies as a proof-of-concept.

         

        As originally formulated, our goals in this step are to ensure
that:

        a)      the process we select and recommend is user-friendly,
unambiguous, and straightforward to execute; and

        b)     realistic outputs can be created that will enable the
Council to make prioritization decisions once the process is actually
completed.  [Gomes, Chuck] I would add to this: "not only as a one-time
prioritization exercise, but also in considering new projects as they
are proposed in the future."

        Staff suggests that the WPM-DT test all candidate methodologies
before making a final recommendation to the Council; that is, if each
Council member will be asked to rate/rank individually, then the
drafting team should perform such a test itself.  If, alternatively, the
team thinks that the Council should form sub-groups to produce consensus
rankings/ratings, then the DT should exercise that option.  [Gomes,
Chuck] Not sure we could adequately test the sub-group approach because
of the small size of our group; our sub-groups would probably have to be
groups of two.  Of course, at the Council level the sub-groups could be
that small by design to make it simpler.  If the Council also used
groups of two, or at most three, then we could probably test the
approach adequately.
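
        As a side note on mechanics: if sub-groups of two (or at most
three) were used, forming them would be trivial to automate.  Below is a
minimal Python sketch, an illustration only; the member names are
hypothetical and the fold-the-leftover rule is just one possible choice.

        def make_subgroups(members, size=2):
            """Chunk members into sub-groups of `size`; if a single
            member is left over, fold them into the previous group,
            yielding groups of two or at most three."""
            groups = [members[i:i + size]
                      for i in range(0, len(members), size)]
            if len(groups) > 1 and len(groups[-1]) == 1:
                groups[-2].extend(groups.pop())
            return groups

        print(make_subgroups(["M1", "M2", "M3", "M4", "M5"]))
        # [['M1', 'M2'], ['M3', 'M4', 'M5']]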

        Step 3a, then, might be to decide how many candidate
methodologies will be considered in this testing phase.  

        For example, the team could choose to execute multiple
approaches and, after comparing the pros/cons of those various trials,
decide which one combines the best features.   

        Options that have been identified thus far are: 

        a)      Rating vs. Ranking:  should projects be rated
(relatively) with a scale such as H, M, L or ranked numerically?  

        If the latter option is selected, should ties be permitted, that
is, can two projects be ranked the same (e.g. 1-1-3-4-5-5-7 ...)?
[Gomes, Chuck] I prefer rating to ranking, and that seems to fit the X-Y
axis approach we are considering.  As I said previously, I also prefer
numerical rating to H, M, L because it provides more differentiation,
assuming that the numerical range is larger than 3.  In any case, I
think ties should be allowed.  Of course, with H-M-L we would have to
allow ties, but I support them in the other two approaches as well.
Chances are that ties will be broken when total results are compiled.
(A short sketch of tie-allowing ranking and compilation appears after
these options.)

        b)     Individual vs. Group:  should Council members rate/rank
individually, or should sub-groups be formed to discuss and recommend a
single consensus answer from each one?  [Gomes, Chuck] Individual
ranking is simpler and hence better meets the criteria of being
user-friendly and straightforward.  But there are ways that the
sub-group approach could be simplified.  For example, sub-groups could
consist of Councilors from the same SG, with NomCom appointees serving
as a separate sub-group; we could then form a sub-group of the liaisons.
If the sub-groups go across SGs, I think reaching consensus may be
overly complicated.
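
        To make the tie-allowing ranking in option (a) concrete, here is
a minimal Python sketch, an illustration only and not an agreed WPM-DT
method.  It implements standard competition ranking, which yields the
1-1-3-4-5-5-7 pattern mentioned above, plus a simple compilation step
that averages each project's rank across raters; as noted, individual
ties tend to break once totals are compiled.  The project names, scores,
and the rank-averaging rule are all hypothetical.

        def competition_rank(scores):
            """Map {project: score} to {project: rank}, allowing ties.
            Higher scores rank first; tied scores share a rank and the
            next rank is skipped (e.g. 1-1-3-4-5-5-7)."""
            ordered = sorted(scores.items(), key=lambda kv: kv[1],
                             reverse=True)
            ranks, prev_score, prev_rank = {}, None, 0
            for position, (project, score) in enumerate(ordered, start=1):
                if score == prev_score:
                    ranks[project] = prev_rank   # tie: share the rank
                else:
                    ranks[project] = prev_rank = position
                    prev_score = score
            return ranks

        def compile_ranks(per_rater_ranks):
            """Average each project's rank across raters (lower = better)."""
            projects = list(per_rater_ranks[0])
            n = len(per_rater_ranks)
            return {p: sum(r[p] for r in per_rater_ranks) / n
                    for p in projects}

        # Hypothetical data: two raters score four projects 0-10.
        rater_1 = competition_rank(
            {"Project A": 9, "Project B": 9, "Project C": 5, "Project D": 3})
        rater_2 = competition_rank(
            {"Project A": 8, "Project B": 6, "Project C": 6, "Project D": 2})
        print(rater_1)
        # {'Project A': 1, 'Project B': 1, 'Project C': 3, 'Project D': 4}
        print(compile_ranks([rater_1, rater_2]))
        # {'Project A': 1.0, 'Project B': 1.5, 'Project C': 2.5,
        #  'Project D': 4.0} -- the individual tie breaks on compilation.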

        Instead of attempting to narrow down the options, the team could
perform all four permutations as follows (a small compilation sketch for
the H-M-L options follows this list):

        1)      Rank individually in numerical order (with ties?)
[Gomes, Chuck] I do not like this option very much.  If it were
realistic, we probably would have done it already.  My preference would
be to throw it out, but if the rest of the group wants to test it, I am
willing.

        2)      Rate individually using a simple H-M-L scale

        3)      Rank in sub-groups in numerical order (with ties?)

        4)      Rate in sub-groups using a simple H-M-L scale
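
        For the H-M-L options (2 and 4), one way to compile results onto
the Step 2 grid is to convert letters to numbers and average them per
project.  The Python sketch below is illustrative only: the H=3/M=2/L=1
mapping and the averaging rule are assumptions, and the real X and Y
axis definitions come from Step 2.

        LEVEL = {"H": 3, "M": 2, "L": 1}   # assumed mapping, not agreed

        def grid_position(ratings):
            """ratings: list of (x_letter, y_letter) pairs, one pair per
            rater.  Returns the project's mean (x, y) grid position."""
            xs = [LEVEL[x] for x, _ in ratings]
            ys = [LEVEL[y] for _, y in ratings]
            return sum(xs) / len(xs), sum(ys) / len(ys)

        # Hypothetical: three raters place one project on the two axes.
        x, y = grid_position([("H", "M"), ("H", "L"), ("M", "M")])
        print(round(x, 2), round(y, 2))   # 2.67 1.67 -- averaging
        # separates projects that identical letter grades would leave tied.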

        If all four options are to be exercised, the team should also
discuss the most advantageous order.  Staff notes that it might be
beneficial to work individually first and then in sub-groups.  [Gomes,
Chuck] I agree with this.  Since some Councilors, like members of this
team, are likely NOT to have a deep understanding of the projects,
rating/ranking individually will simulate that condition.  In terms of
rating vs. ranking, Staff does not have an opinion to offer as to order.


        Hopefully, during our next call, we will finalize Step 2 and
make progress on Step 3a by deciding which tests to perform and, if
applicable, in what specific order.

        I note that our original target completion date for Step 3
(Liz's email of 23 Nov) was 11 Dec 2009, but that now seems too
ambitious.  If we have time, we might also discuss a revised
implementation timeframe leading to a final recommendation.

         

        If there is anything else that Staff can do in the interim to
assist this effort, please let us know.

         

        Ken


