
[gnso-wpm-dt] WPM-DT: Step 3a (Rating Test #1 - In Progress)

  • To: <gnso-wpm-dt@xxxxxxxxx>
  • Subject: [gnso-wpm-dt] WPM-DT: Step 3a (Rating Test #1 - In Progress)
  • From: "Ken Bour" <ken.bour@xxxxxxxxxxx>
  • Date: Tue, 15 Dec 2009 00:24:24 -0500

Jaime:

 

Concerning your comment about estimating, "But even so, in my case, I have
just a faint idea of relative resource consumption.  My estimate will be
very poor...", I think that candid admission will likely be true for any
Councilor who is new to the GNSO and unfamiliar with many, if not all, of
the active projects.   That is one reason why we decided to also test one or
more group rating solutions; the groups could be formed either homogeneously
or heterogeneously, including ensuring that "new" Councilors are paired
with "senior" ones (your Q#2).   Because there are pros/cons to these
various approaches, we postponed that discussion until a subsequent meeting
TBD.

 

I think I can address the remainder of your email in the following response.
Starting with your last question/item (#3), you mentioned that you did not
understand the reference to a drop from 4 permutations to 3.   When we
originally identified rating vs. ranking and individuals vs. groups, that
generated 4 mathematical combinations to be tested:

1)      Rate individually using a scale (e.g.  1-3, 1-5, 1-7, 1-11, etc.)

2)      Rate individually using a numerical ranking 

3)      Rate in group(s) using a scale

4)      Rate in group(s) using a numerical ranking            

 

During our last meeting's discussion, we abandoned the idea of performing
any numerical rankings, which left us with only 2 permutations:

1)      Rate individually using a scale (we selected Likert 1-7)

2)      Rate in group(s) using a scale

 

In terms of group size (large vs. small), we decided that on 17 December,
our first test would be one large group rating session (procedure TBD), but
held out the option to try smaller group sizes, which would create a third
possible test, that is, 3) Rate in small groups (size TBD) using a scale.

 

I noted in my meeting summary that Staff would try to recommend a process
for the group rating session (your Q#1); however, I have not had time to
work through any options yet.   Thus, your idea of an iterative DELPHI
approach is not only intriguing, but nicely timed.   I note that whatever
process we decide on should be close to what we would ultimately recommend
for the Council.  Having said that, if I understand your suggestion, the
session could be orchestrated something like you outlined (see the sketch
after this list):

 

1)      Fortunately, because everyone will have already rated all 15
projects individually on both dimensions, we could consider those results
DELPHI Iteration #1.  Note:  I only have two submissions thus far (Stéphane
and Wolf).

2)      Once all of the ratings are received, I could attempt to do some
kind of analysis to see where the larger gaps/outliers are.   Perhaps, if
there is some commonality in ratings on certain project/dimension
combinations, that would save the team time and avoid having to discuss them
all.   If not, we would have to discuss each project and each dimension in
order to address the outlier ratings.   

[Note:  I would need at least a day of prep time in order to provide the
group anything useful.   If I receive too many ratings the day of or the
day before the 17th, I may not have time to make sufficient progress.
Assuming that all ratings can be completed by the morning of the 16th, I
will continue…]

3)      One way of handling the agenda would be to take each project one at
a time.   I would show everyone all of the individual ratings for X
(identified by person) and then there would be a brief discussion of the
outliers and differences.   [Note:  we'll need some kind of visual online
meeting capability (e.g. Adobe Connect) to make that feasible.  I will talk
to Glen/Gisella about setting that up.]  No one would be challenged to
"defend" any position, but simply asked to explain his/her reasoning for the
purposes of group learning and building consensus.

4)      Immediately following the discussion, we would ask each individual
on the call to re-rate that project on dimension X [Note:  I'll have to
research what technical methods might be available] and show those results,
all at the same time.   As long as there is not a "significant" variance
(yet to be defined), we would stop, accept the mean (or median, mode?) as
the final group answer, and move to the Y dimension for that project.  If
there continues to be a large gap, we would have one more round of
discussion, rate again, and then stop (accepting that answer).   Maximum 3
tries to reach tighter variance, if not outright consensus.

5)      We would continue this approach until all 15 projects have been
rated on both dimensions.  
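
For my own planning, here is a minimal sketch in Python of how steps 3)
through 5) might run for a single project/dimension combination.  It is
purely illustrative: the collect_ratings and is_tight_enough helpers are
hypothetical stand-ins for the open technical and mathematical questions
listed below.

    import statistics

    MAX_ROUNDS = 3  # maximum 3 tries, per step 4) above

    def delphi_rate(collect_ratings, is_tight_enough):
        # collect_ratings() asks every participant for a 1-7 Likert
        # rating of one project on one dimension (hypothetical helper);
        # is_tight_enough() is the yet-to-be-defined variance test.
        ratings = collect_ratings()  # round 1: the ratings already submitted
        for _ in range(MAX_ROUNDS - 1):
            if is_tight_enough(ratings):
                break
            # ...group discussion of outliers and differences happens here...
            ratings = collect_ratings()  # everyone re-rates simultaneously
        return statistics.mean(ratings)  # or median/mode? Still an open question

    # Toy run with two canned rounds instead of live participants:
    rounds = iter([[1, 7, 4, 2, 6, 3, 5], [4, 5, 5, 6, 5, 4, 5]])
    print(delphi_rate(lambda: next(rounds),
                      lambda r: statistics.stdev(r) <= 1.0))  # ~4.86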

 

There are several embedded "notes" and "questions" to be resolved, including
how to:

1)      handle this iterative DELPHI rating approach via Adobe Connect
including simultaneous display of individual ratings (!); 

2)      determine a collective group answer when there is not full consensus
(e.g. mean, median, mode?); and 

3)      decide, mathematically, when we have reached a sufficiently tight
variance to stop the iterations (one possible test is sketched below).
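
On item 3), one possibility, purely as a strawman, would be to stop
iterating once the spread of a round's ratings falls below some threshold we
would still have to agree on.  A minimal sketch, assuming the Likert 1-7
scale, sample standard deviation as the measure, and an arbitrary threshold
of 1.0:

    import statistics

    def is_tight_enough(ratings, threshold=1.0):
        # Strawman stopping test: the sample standard deviation of the
        # group's 1-7 ratings must fall to or below the threshold
        # (both the measure and the threshold value are open questions).
        return statistics.stdev(ratings) <= threshold

    print(is_tight_enough([4, 5, 5, 6, 5, 4, 5]))  # True  (stdev ~0.69: accept the mean)
    print(is_tight_enough([1, 7, 4, 2, 6, 3, 5]))  # False (stdev ~2.16: discuss and re-rate)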

 

I have doubts that we would be able to accomplish this process for 30
separate combinations (15 projects x 2 dimensions); so, unless there is a
fair number that could be skipped at the outset, it might take two full
sessions.   I also suspect that, even if we ask everyone to read the short
descriptions in advance, some time will inevitably be spent bringing
everyone up to the same level of understanding as to what each project
entails.   I have to develop an answer to #3 above before I can even know
the extent to which there are combinations of project/dimension that can be
skipped.   That remains a puzzle to be solved as I sit here thinking and
typing…

 

One principle of your approach, which I learned through researching the
SCRUM methodology and "Planning Poker," is that having each individual
express an opinion at the same time (collective show and tell) avoids a
situation all too common in group work:   one knowledgeable or expert person
speaks first and everyone else remains silent, thus inhibiting discussion
and learning.   On the other hand, this iterative DELPHI approach requires
that everyone be agreeable, at least in principle, to moving toward (if not
all the way to) consensus.   That process takes considerably longer, but has
the benefit of generating deeper (vs. superficial) levels of agreement.

 

Do others have thoughts about this general outline before I begin working on
it further?   I have a paper to write on another critical issue tomorrow, so
I may not be able to flesh this out until Wednesday of this week, the day
before our session.   

 

Thanks, Jaime, for kick-starting this group rating procedural discussion.
There could be a lot more to it than I originally had in mind... 

 

Regards,

 

Ken

 

From: Jaime Wagner [mailto:jaime@xxxxxxxxxxxxxxxxxx] 
Sent: Monday, December 14, 2009 9:15 PM
To: 'Ken Bour'
Cc: gnso-wpm-dt@xxxxxxxxx
Subject: RE: WPM-DT: Step 3a (Rating Test #1 - In Progress)

 

Thanks Ken,

 

I agree that if we're aiming just at prioritization and not at scheduling,
there's no need for capacity estimation.

 

But even so, in my case, I have just a faint idea of relative resource
consumption. My estimate will be very poor and I will be led by any opinion.

 

I was just now listening to the recording of your conference call on the
10th and I collected some points to comment on.

 

1)      Ken said: 

"what we will do when we have all the individual ratings and all the
individual rankings, or however we finally decide it, we will aggregate them
all into a single chart."

 

Why not directly use the Wideband Delphi technique
(<http://en.wikipedia.org/wiki/Wideband_Delphi>) that you referenced below
and that is used in SCRUM?

1.      Coordinator (Ken) presents each expert (us) with a specification and
an estimation form.
2.      Coordinator calls a group meeting in which the experts discuss
estimation issues with the coordinator and each other.
3.      Experts fill out forms anonymously.
4.      Coordinator prepares and distributes a summary of the estimates.
5.      Coordinator calls a group meeting, specifically focusing on having
the experts discuss points where their estimates vary widely.

What we do here at UOL is: the participants who gave the extreme estimates
have to justify and defend their reasoning before the whole group.

6.      Experts fill out forms, again anonymously, and steps 4 to 6 are
iterated for as many rounds as appropriate.

Could we try this approach in our meeting on the 17th? I think we are 7:
Olga, Chuck, Stéphane, Wolf, Ken, Liz, Jaime. We could drop the requirement
for anonymity.

Note that the aim is not to come up with consensus (though that often
occurs). The process finishes after a set number of iterations (typically
3). (A toy illustration of the step-4 summary follows.)
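
To picture step 4, here is a toy illustration in Python of the kind of
summary the coordinator could distribute; the project names and 1-7 ratings
are entirely made up, and sorting by spread simply surfaces the widest
disagreements for step 5:

    # Made-up estimates from 7 participants on a 1-7 scale.
    estimates = {
        "Project A": [2, 3, 2, 3, 2, 3, 2],
        "Project B": [1, 7, 4, 6, 2, 5, 3],
        "Project C": [5, 5, 6, 5, 4, 5, 5],
    }

    # Sort so the widest disagreement is discussed first (step 5).
    for name, vals in sorted(estimates.items(),
                             key=lambda kv: max(kv[1]) - min(kv[1]),
                             reverse=True):
        print(f"{name}: min={min(vals)}, max={max(vals)}, "
              f"spread={max(vals) - min(vals)}")
    # Project B: min=1, max=7, spread=6   <- discuss first
    # Project C: min=4, max=6, spread=2
    # Project A: min=2, max=3, spread=1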

 

2)      Chuck said:

"When you form groups, do you want those groups to be homogeneous or
heterogeneous?"

 

There's another kind of heterogeneity besides the interest group
represented. I'm talking about seniority and "juniority" in the GNSO.  Here
I think that a heterogeneous group would be beneficial, mixing new
councilors (as is my case) with more experienced ones (such as Chuck, for
instance).

 

3)      During wrap-up Ken said something I didn't understand:

 

"We're obviously going to drop from four (unintelligible) mutations to just
three, or actually just two, maybe three, we'll see."

 

 

Regards

 

Jaime Wagner
j@xxxxxxxxxxxxxx             

+55(51)8126-0916
skype: jaime_wagner

 

From: Ken Bour [mailto:ken.bour@xxxxxxxxxxx] 
Sent: Monday, December 14, 2009 6:13 PM
To: 'Jaime Wagner'; 'Gomes, Chuck'; 'Stéphane Van Gelder'
Cc: gnso-wpm-dt@xxxxxxxxx
Subject: WPM-DT: Step 3a (Rating Test #1 - In Progress)

 

Jaime:

 

In my view, the X-axis should not be about capacity, but about your
perception of how much resource a project will consume relative to the
others.

 

Perhaps an example might help.

 

Let's say that I have 3 things on my TO DO list:

1)      Have my car battery checked

2)      Fix a leaking house faucet

3)      Put up exterior holiday decorations

 

On Value/Benefit, I rate them as follows:

1)      Average or 4

2)      Slightly Above Average or 6

3)      Moderately Below Average or 2

 

On Resource Consumption, I rate them as follows:

1)      I rate 5 due to the time commitment and inconvenience of driving to
the dealership and waiting.

2)      I rate 1 because it's simple; I have tools and spare washers.

3)      I rate 7 due to freezing temperatures, difficulty climbing on a
ladder, and the danger of electrical wiring.

 

Note that my capacity to do them is not a factor, at this stage, only the
resource that will be consumed.

 

Now let's assume that I have the capacity to do just one of the above
activities today.   One way to determine priority is to divide Y by X,
yielding a benefit/consumption ratio:

1)      4/5 = 0.8

2)      6/1 = 6.0

3)      2/7 ≈ 0.3

 

If I follow my own rating system, I would work them in this order:  2, then
1, then 3.   Project 2 delivers 7.5x more benefit per unit of resource
expenditure than Project 1, so fixing the leaky faucet gets worked on first!
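
The same arithmetic in a few lines of Python, as a sanity check (the task
names are just labels for the example above):

    # (value Y, resource X) ratings from the example; priority = Y / X.
    tasks = {"battery": (4, 5), "faucet": (6, 1), "decorations": (2, 7)}

    for name, (y, x) in sorted(tasks.items(),
                               key=lambda kv: kv[1][0] / kv[1][1],
                               reverse=True):
        print(f"{name}: {y}/{x} = {y / x:.1f}")
    # faucet: 6/1 = 6.0
    # battery: 4/5 = 0.8
    # decorations: 2/7 = 0.3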

 

I hope that helps…

 

Ken

 

 

From: Jaime Wagner [mailto:jaime@xxxxxxxxxxxxxxxxxx] 
Sent: Monday, December 14, 2009 2:01 PM
To: 'Gomes, Chuck'; 'Stéphane Van Gelder'; 'Ken Bour'
Cc: gnso-wpm-dt@xxxxxxxxx
Subject: RE: [gnso-wpm-dt] WPM-DT: Step 3a (Rating Test #1 - In Progress)

 

In fact I tend to agree with Stéphane with respect to the resource
consumption axis estimation.

 

For me, the difficulty is very close to impossibility.

That is what brought about my suggestion of uneven weights, which, in view
of the situation, should be discarded.

We use uneven weights when we have a good sense of resource capacity, which
is not the case here, at least for me.

 

So let me pose some questions: 

 

1)      How could we get an estimate of capacity?



2)      Which capacity are we talking about? Staff's work hours?
Councilors'? Budget?

 

Jaime Wagner
j@xxxxxxxxxxxxxx             

+55(51)8126-0916
skype: jaime_wagner

 

From: owner-gnso-wpm-dt@xxxxxxxxx [mailto:owner-gnso-wpm-dt@xxxxxxxxx] On
Behalf Of Gomes, Chuck
Sent: Monday, December 14, 2009 1:49 PM
To: Stéphane Van Gelder; Ken Bour
Cc: gnso-wpm-dt@xxxxxxxxx
Subject: RE: [gnso-wpm-dt] WPM-DT: Step 3a (Rating Test #1 - In Progress)

 

Please see my responses below.

 

Chuck

 

  _____  

From: owner-gnso-wpm-dt@xxxxxxxxx [mailto:owner-gnso-wpm-dt@xxxxxxxxx] On
Behalf Of Stéphane Van Gelder
Sent: Monday, December 14, 2009 9:40 AM
To: Ken Bour
Cc: gnso-wpm-dt@xxxxxxxxx
Subject: Re: [gnso-wpm-dt] WPM-DT: Step 3a (Rating Test #1 - In Progress)

Thanks Ken, 

 

Please find attached my contribution.

 

A few comments:

 

- I did not deem it necessary or even desirable to take the time to go back
to the RrSG in order to get feedback on rating. I consider this a test and
the points awarded only reflect my own personal judgment or experience.

- For the X axis, I consider that the higher the number of points, the less
desirable the project (as it is consuming more resources).
[Gomes, Chuck] Not sure I agree with this, if I correctly understand.  New
gTLDs would have been an undesirable project using this logic. It certainly
would have been rated very high on the X axis, and rightly so, but that
would not make it undesirable.  Maybe it is just a matter of making clear
that a high X-axis rating does not mean a project is desirable or
undesirable.
This is reversed for the Y axis (the more points awarded, the more a project
is worthy of the GNSO's attention).
[Gomes, Chuck] Again, I am not sure this will always be true.  If this is
consistent with everyone else's understanding, it may be worthwhile making
this very clear once the definitive rating instructions are sent to the
Council.

- I found the X axis very difficult to rate. It is impossible for me to have
a clear idea of the amount of budgetary resources a project requires without
having some kind of figure in front of me from staff. Would it be worthwhile
thinking about putting such a figure next to each project description listed
in the Word document that came with Ken's email?
[Gomes, Chuck] Keep in mind that budgetary info is just one aspect and that
it may be very difficult to get reasonable budgetary estimates early in the
game.  For existing projects it will be much easier to estimate budget
impact; for new ones, it will be more challenging.  If we ask for budget
estimates from Staff to perform this exercise, it will take us too long to
do it.  At the same time, each Councilor should be able to estimate the
resources required, on a comparative basis at a high level, so as to be able
to complete the exercise.

- I was surprised when rating to find that projects that tended to be of
lower value (according to me) also tended to require fewer resources (fewer
man-hours spent on them, less expense). I think there's something in that,
still trying to work out what it is ;)
[Gomes, Chuck] I think this illustrates what I tried to say above.  Just
because something requires a smaller amount of resources doesn't mean we
should do it.

 

Thanks,

 

Stéphane


