ICANN Email List Archives

[gnso-wpm-dt]



RE: [gnso-wpm-dt] WPM-DT: Step 3a (In Progress) -- Summary of Group Rating Session 21 Dec 2009

  • To: "Ken Bour" <ken.bour@xxxxxxxxxxx>, "Jaime Wagner" <jaime@xxxxxxxxxxxxxxxxxx>, <gnso-wpm-dt@xxxxxxxxx>
  • Subject: RE: [gnso-wpm-dt] WPM-DT: Step 3a (In Progress) -- Summary of Group Rating Session 21 Dec 2009
  • From: "Gomes, Chuck" <cgomes@xxxxxxxxxxxx>
  • Date: Tue, 22 Dec 2009 11:10:36 -0500

I think it would be useful to test the small group approach because it is a 
possible approach that should be considered at the Council level. Without 
testing it, we will mostly be guessing as to whether it would work.
 
Chuck


________________________________

        From: owner-gnso-wpm-dt@xxxxxxxxx [mailto:owner-gnso-wpm-dt@xxxxxxxxx] 
On Behalf Of Ken Bour
        Sent: Tuesday, December 22, 2009 10:53 AM
        To: 'Jaime Wagner'; gnso-wpm-dt@xxxxxxxxx
        Subject: [gnso-wpm-dt] WPM-DT: Step 3a (In Progress) -- Summary of 
Group Rating Session 21 Dec 2009
        
        

        Jaime:

         

        Just a couple of thoughts concerning your email message below.  

         

        1)      I recommended to Gisella a 90-minute session for when we attempt 
the X-axis group ratings. It took us about 7 minutes per element yesterday, and 
we have 11 more to discuss, i.e. roughly 77 minutes at that pace. Allowing time 
at the beginning and the end for intro and wrap-up, we will still be pressed to 
complete the exercise, and I am assuming only one polling iteration for each 
project.

        2)      I think that, if we recommend reaching full consensus on each 
element, it will take more than one iteration to accomplish.  As we saw from 
yesterday's session, several times we ended up with something like this after 
polling: 

        Rating:  1   2   3   4   5   6   7
        Votes:   -   -   2   2   1   -   -

        In the interest of time and process, it seemed reasonable to accept the 
median, 4, as the group's rating. We could, however, impose a rule that polled 
votes cannot span more than two consecutive rating categories (Range=1). In the 
above case, under that more restrictive rule, we would have a 2nd round of 
discussion and attempt another poll, and so on (see the sketch below). Does 
anyone want to try that approach during our X rating session? If so, we should 
probably allow closer to 120 minutes for the session.
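
        As a rough sketch of that stricter rule (Python, purely illustrative; 
the function name and the threshold parameter are my own invention, not part of 
any agreed process):

            # Proposed convergence rule: polled votes may span at most two
            # consecutive rating categories on the 1-7 scale (Range=1).
            def needs_another_round(votes, max_range=1):
                """True if the spread is too wide and we should re-discuss/re-poll."""
                return max(votes) - min(votes) > max_range

            # Yesterday's example poll: two 3s, two 4s, and one 5.
            votes = [3, 3, 4, 4, 5]
            print(needs_another_round(votes))  # True: range is 2, so a 2nd round

        Under the looser practice we actually used, that same poll would simply 
be closed with its median of 4.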

         

        In terms of another testing method, the only thing we held open was the 
possibility of testing rating in small groups of, say, 2-3. We can discuss that 
option on our call in early January, assuming that we complete the X ratings 
early next week.

         

        Ken

         

        From: owner-gnso-wpm-dt@xxxxxxxxx [mailto:owner-gnso-wpm-dt@xxxxxxxxx] 
On Behalf Of Jaime Wagner
        Sent: Tuesday, December 22, 2009 9:20 AM
        To: gnso-wpm-dt@xxxxxxxxx
        Subject: RE: [gnso-wpm-dt] WPM-DT: Step 3a (In Progress) -- Summary of 
Group Rating Session 21 Dec 2009

         

        I think we are progressing quite well and much of our success is due to 
the quality of Ken's work.

         

        I understand that the remaining tasks are:

        1)      Finish the X (onus) ratings in the same way we did the Y 
(bonus) axis. [One-hour meeting]

        2)      Exercise the method of convergence by "defense of extremes" 
through more iterations. We did just one, and the usual maximum is three.

         

        Am I wrong, or was there an idea of testing another method?

         

        I like the idea of a review by another group, but I don't know whether 
it would delay the process. In a way, the whole Council will review the process 
once they apply it.

         

        Like Wolf, I had to rely entirely on the short descriptions, which are 
very good but still short, as they should be. Anyway, even ignorant, I have an 
opinion, and it is wise to change it when faced with sensible arguments. When 
it comes to value judgments, knowledge counts, but diversity of opinions and 
backgrounds adds something too.

         

        Jaime Wagner
        j@xxxxxxxxxxxxxx             

        +55(51)8126-0916
        skype: jaime_wagner

         

        From: owner-gnso-wpm-dt@xxxxxxxxx [mailto:owner-gnso-wpm-dt@xxxxxxxxx] 
On Behalf Of KnobenW@xxxxxxxxxx
        Sent: Tuesday, 22 December 2009 08:20
        To: adrian@xxxxxxxxxxxxxxxxxx; ken.bour@xxxxxxxxxxx; 
gnso-wpm-dt@xxxxxxxxx
        Subject: AW: [gnso-wpm-dt] WPM-DT: Step 3a (In Progress) -- Summary of 
Group Rating Session 21 Dec 2009

         

        Adrian,

         

        I welcome this idea and would be happy if we could encourage others to 
be supportive this way. My personal experience in trying to rate the council 
projects seems to be comparable to a blind person using a crutch to find his 
way.

        What I learned yesterday is that, for some projects, I need more 
background info than is provided in the short description. Otherwise I may 
misinterpret the intention, targets, and community implications (e.g. IRTB, 
IRD).

        My personal rating approach has two steps: first, setting the X and Y 
"values" relative to each other according to my opinion; second, fine-tuning 
the absolute figures. If new ideas can help here, Adrian, I'd very much 
appreciate them.

         

        Thanks, and Merry Christmas to all of you

         

        Wolf-Ulrich

         

         

        
________________________________


        From: owner-gnso-wpm-dt@xxxxxxxxx [mailto:owner-gnso-wpm-dt@xxxxxxxxx] 
On Behalf Of Adrian Kinderis
        Sent: Tuesday, 22 December 2009 04:19
        To: Ken Bour; gnso-wpm-dt@xxxxxxxxx
        Subject: RE: [gnso-wpm-dt] WPM-DT: Step 3a (In Progress) -- Summary of 
Group Rating Session 21 Dec 2009

        Team,

         

        I know I have been distant on this topic but I have been reading and 
watching with interest.

         

        Can I suggest the following (and it is only a suggestion):

         

        In our organisation, before a task is started, for example a release of 
software into production, the Production Support Team does a detailed plan. 
This plan is then reviewed by the "Red Team": knowledgeable team members who 
were not involved in the preparation of the plan. The logic is that a fresh set 
of eyes may be better placed to pick holes in the plan.

         

        Is it worthwhile for me, and potentially others, to put my hand up to 
act as a "red team" for this body of work? I could wait until you are complete 
and take a look at the plan with a view to providing feedback.

         

        Just a thought on how I could help, given that I have had limited 
interaction with the team.

         

        Merry Christmas to all.

         

        Adrian Kinderis

         

        From: owner-gnso-wpm-dt@xxxxxxxxx [mailto:owner-gnso-wpm-dt@xxxxxxxxx] 
On Behalf Of Ken Bour
        Sent: Tuesday, 22 December 2009 10:59 AM
        To: gnso-wpm-dt@xxxxxxxxx
        Subject: [gnso-wpm-dt] WPM-DT: Step 3a (In Progress) -- Summary of 
Group Rating Session 21 Dec 2009

         

        WPM-DT Members:

         

        I thought we had a productive call today even though we did not finish 
both the X and Y sets of dimensions in our group rating session. As I indicated 
in my earlier email, it was an extremely ambitious undertaking to attempt 21 
elements in 45 minutes, by the time everyone is connected and we have gotten 
through the agenda preliminaries.

         

        Five team members participated in today's DELPHI rating session: Jaime, 
Olga, Chuck, Wolf, and Liz (Staff). Ken handled the session administration, 
including opening/closing the polls at the appropriate times and keeping track 
of the results.

         

        The team managed to complete the Y dimensions and the chart below shows 
the DELPHI results for Value/Benefit (Y axis).   The orange and green values 
are median results that were taken directly from the individual ratings.   
Since the original range between high and low was 1 or 2 for those projects 
(and StdDev < 1.0), we accepted the median result as the DELPHI rating without 
further discussion.  

         

        The black figures (see DELPHI column) are the results of our collective 
discussion and re-rating of each project dimension. Taking advantage of Adobe 
Connect, the process we used was to start with the Value/Benefit (Y) axis and, 
working from top to bottom (skipping the orange/green), Ken read out the 
starting individual ratings. Then he asked those who rated at one end of the 
spectrum (high or low) to provide their thinking and rationale. Following that, 
we opened the floor to any other comments. At that point, Ken opened the online 
polling feature and asked the group to re-rate the project dimension. In all 
but one case, the first poll results were fairly close to each other; thus, we 
accepted the median answer. The one case that would normally have taken a 
second round (or a third) was the ABUS project, in which we ended up with five 
different ratings: 2, 3, 4, 5, 6. Since time was running out, we decided to 
table the discussion until later; but, on returning at the tail end of the 
session (already 20-30 minutes over), we opted to accept the median value of 4. 
Keep in mind that we are only testing the "process" and not officially rating 
any project/dimension. A sketch of one such rating round follows below.
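
        For anyone who wants to see the flow end to end, here is a minimal 
sketch (Python, purely illustrative) of one rating round. The function and the 
re-poll stub are hypothetical; the closeness test reuses the Range <= 2 and 
StdDev < 1.0 screen described above, and the three-round cap reflects the usual 
Delphi maximum Jaime mentioned:

            import statistics

            def repoll_after_discussion(ratings):
                # Placeholder: in practice this is a live Adobe Connect poll
                # taken after the "defense of extremes" discussion.
                return ratings

            def delphi_rate(ratings, max_rounds=3):
                """Rate one project dimension: discuss, re-poll, accept the median."""
                for _ in range(max_rounds):
                    spread = max(ratings) - min(ratings)
                    if spread <= 2 and statistics.pstdev(ratings) < 1.0:
                        # Results are close enough: accept the median.
                        return statistics.median(ratings)
                    # Otherwise the extremes explain their rationale, the floor
                    # opens, and the group re-rates in a fresh poll.
                    ratings = repoll_after_discussion(ratings)
                # Time-boxed fallback (what we did for ABUS): take the median.
                return statistics.median(ratings)

            print(delphi_rate([2, 3, 4, 5, 6]))  # ABUS-style spread -> 4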

         

        Y VALUES = VALUE/BENEFIT

        Project   SVG   WUK   CG   JW   OC   LG   DELPHI
        STI         7     6    6    6    5    6      6.0
        IDNF        4     6    3    6    3    2      4.0
        GEO         2     5    1    4    1    1      2.0
        TRAV        5     2    1    4    3    1      2.0
        PED         5     4    4    4    3    6      4.0
        ABUS        5     3    1    7    2    6      4.0
        JIG         4     6    5    7    4    3      5.0
        PDP         6     7    7    6    6    6      6.0
        WG          6     4    7    6    6    5      6.0
        GCOT        6     4    5    5    4    5      5.0
        CSG         6     4    4    5    5    5      5.0
        CCT         6     3    5    6    4    5      5.0
        IRTB        4     3    4    3    3    5      3.5
        RAA         4     6    5    7    5    7      6.0
        IRD         5     4    5    7    4    4      5.0
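
        Since the orange/green color coding does not survive this text archive, 
here is a small illustrative check (Python; the dictionary and variable names 
are mine) showing how direct-median rows can be verified from the individual 
ratings above. Note that the re-rated (black) DELPHI figures came from 
discussion and a fresh poll, so a plain median will not reproduce every row:

            import statistics

            # A few rows from the Y (Value/Benefit) table above,
            # in rater order SVG, WUK, CG, JW, OC, LG.
            y_ratings = {
                "STI":  [7, 6, 6, 6, 5, 6],   # DELPHI 6.0
                "PDP":  [6, 7, 7, 6, 6, 6],   # DELPHI 6.0
                "IRTB": [4, 3, 4, 3, 3, 5],   # DELPHI 3.5
            }

            for project, votes in y_ratings.items():
                median = statistics.median(votes)
                spread = max(votes) - min(votes)
                stdev = statistics.pstdev(votes)
                print(f"{project}: median={median}, range={spread}, stdev={stdev:.2f}")

        For these three projects, the median matches the DELPHI column, and the 
screen described earlier (Range <= 2, StdDev < 1.0) passes.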

         

        After this first DELPHI rating session, a few questions occurred to me 
that may be helpful once we get to the point of evaluating/assessing the model, 
its X/Y definitions, and the various rating processes that we tried.    There 
is no need to answer these questions on the email list unless you feel so 
inclined.   They are intended to be preliminary thoughts and perceptions, 
phrased as questions, from my role as your facilitator.  

         

        Thinking about our first DELPHI rating session: 

        1)      Even though time was compressed, did you find that you 
broadened your perspectives from the discussions?

        2)      Would you prefer more or less time for each project/dimension 
discussion? Should there be specific time limits, or do you recommend that 
discussion time be kept flexible and unconstrained?

        3)      Did you feel as though you compromised your ratings (during 
polling) in a way that was not the result of having changed your perspective or 
learned something new?   In other words, did you feel any unwelcome or 
unhealthy pressure in trying to find common ground?   

        4)      Do you think that the group's DELPHI ratings for the Y axis are 
generally better (i.e. more representative of the definition) than any single 
person's individual ratings?   

        5)      Did the Adobe polling process work satisfactorily? Ken noticed 
that, several times, we were waiting for the last result or two. Were the early 
voters influencing the later ones? There is a feature to turn OFF the results 
display so that raters cannot see what has occurred until after they have 
voted. Perhaps we will try it that way next time to see which way works best.

        6)      I noticed that some comments made during the discussion implied 
that certain individuals had been thinking of a different definition than the 
one previously approved for Value/Benefit, e.g. considering value/benefit only 
to the GNSO vs. the entire Internet community. Should the Y axis definition be 
revisited now that the team has had a chance to actually work with it?

         

        Next Steps:

         

        In terms of efficiency, the group managed to rate 10 elements in 
approximately 70 minutes. For the X axis, we have 11 elements remaining; 
therefore, I have suggested to Gisella a 90-minute session for the 28 or 29 
December Doodle poll. Assuming we are successful in accomplishing this 2nd 
rating session, we also agreed to try for an evaluation meeting the 1st week of 
January; a 2nd Doodle poll will be sent out for that purpose (Length=60 
minutes).

         

        Again, thank you all for a successful session today and, hopefully, we 
will have an opportunity to complete the X axis dimensions on either 28 or 29 
December.   

         

        Happy holidays to all,

         

        Ken Bour

         

        P.S.   I uploaded a new PDF to our Adobe Connect room, which now shows 
the project acronyms instead of Sequence No.   Thanks for that suggestion!   I 
also created a Note box that will remain visible at all times showing the 
definitions for X and Y.    If anyone has other ideas for improving the 
process, please let me know.   I will keep thinking about it also...   

         



