ICANN Email List Archives

[gnso-wpm-dt]



RE: [gnso-wpm-dt] WPM-DT: Step 3a (In Progress) -- Summary of Group Rating Session 21 Dec 2009

  • To: "'Olga Cavalli'" <olgac@xxxxxxxxxxxxxxx>, "'Gomes, Chuck'" <cgomes@xxxxxxxxxxxx>
  • Subject: RE: [gnso-wpm-dt] WPM-DT: Step 3a (In Progress) -- Summary of Group Rating Session 21 Dec 2009
  • From: "Jaime Wagner" <jaime@xxxxxxxxxxxxxxxxxx>
  • Date: Wed, 23 Dec 2009 10:25:25 -0200

I understand the point raised by Chuck. But I'd like to add a counterpoint.

 

I also think it's better to reason around hypothetical cases rather than
real ones, so that opinions are not contaminated by the particularities of a
given project.

 

So, we are talking about a project that is beneficial to the overall
community and that depends on the availability of GNSO resources to go
ahead.

 

Also, this project is to be compared with another that is not as beneficial
to the entire community as the first, but is indeed more beneficial to the
GNSO community.

 

Which one should be given priority?

 

In my understanding it would be the first one.

 

By the way, I'm not saying that this is the case with any of our projects,
and in particular, I'm not saying that this is the case with the Geo Regions WG.

 

Jaime Wagner
j@xxxxxxxxxxxxxx             

+55(51)8126-0916
skype: jaime_wagner



 

From: owner-gnso-wpm-dt@xxxxxxxxx [mailto:owner-gnso-wpm-dt@xxxxxxxxx] On
Behalf Of Olga Cavalli
Sent: Tuesday, December 22, 2009 6:38 PM
To: Gomes, Chuck
Cc: Ken Bour; gnso-wpm-dt@xxxxxxxxx
Subject: Re: [gnso-wpm-dt] WPM-DT: Step 3a (In Progress) -- Summary of Group
Rating Session 21 Dec 2009

 

Hi,
thanks Ken for the hard work, and for the summary.

Chuck raises an important issue that I also raised during our last call:
the meaning of value/benefit.

For me, the purpose of our working team is to find a methodology to
prioritize GNSO work, in order to use scarce resources like time, staff,
and face-to-face meetings more efficiently. Of course the general community
aspect could be considered, but in my modest opinion it should not be the
main focus.

Also, please correct me if I am wrong, but we were going to run two tests.
One is rating the projects individually (what we are doing now), and after
this we do the ratings in small groups defined among ourselves.

I think we should do both tests.

Best wishes for all!!

Regards
Olga







2009/12/22 Gomes, Chuck <cgomes@xxxxxxxxxxxx>

Thank you very much Ken for the great summary and also for your excellent
work. And thanks to everyone else for the great cooperation.

 

I am going to comment on just one thing, the definition of value.  In doing
my ratings, as well as in doing the exercise yesterday, I found that the
applicability of a project to the GNSO, in comparison to the entire Internet
community, became an important factor. My reasoning was as follows: the
reason for prioritizing our work is to decide how we will use scarce
resources; if one project has little value to the GNSO community and another
one has high value to the GNSO community, I favor using GNSO resources for
the latter.  Therefore, I think we should revisit our definition of
Value/Benefit.  Value to the entire Internet community is still important,
but I think we should consider value to the narrower GNSO community as
well.

 

The Geo Regions WG is the project that caused me to come to this opinion.
It's important for the GNSO to be involved in the WG and it will have some
impact on us, but not nearly as much as it will the ccNSO.  The value to the
entire community is fairly high but the value to the GNSO is not so high, so
if we have to choose between projects, it doesn't make sense to ignore the
GNSO value.

 

Chuck

 




From: owner-gnso-wpm-dt@xxxxxxxxx [mailto:owner-gnso-wpm-dt@xxxxxxxxx] On
Behalf Of Ken Bour

Sent: Monday, December 21, 2009 6:59 PM


To: gnso-wpm-dt@xxxxxxxxx
Subject: [gnso-wpm-dt] WPM-DT: Step 3a (In Progress) -- Summary of Group
Rating Session 21 Dec 2009

 

WPM-DT Members:

 

I thought we had a productive call today even though we did not finish both
sets of X and Y dimensions in our group rating session.  As I indicated in
my earlier email, it was an extremely ambitious undertaking to attempt 21
elements in the 45 minutes that remain once everyone is connected and we
have gotten through the agenda preliminaries.

 

Five team members participated in today's DELPHI rating session:  Jaime,
Olga, Chuck, Wolf, and Liz (Staff).  Ken handled the session administration,
including opening/closing the polls at the appropriate time and keeping
track of the results.

 

The team managed to complete the Y dimensions and the chart below shows the
DELPHI results for Value/Benefit (Y axis).   The orange and green values are
median results that were taken directly from the individual ratings.   Since
the original range between high and low was 1 or 2 for those projects (and
StdDev < 1.0), we accepted the median result as the DELPHI rating without
further discussion.  
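
For anyone who wants to see that acceptance rule spelled out, here is a
minimal sketch in Python (the function name and the use of the population
standard deviation are my own illustrative choices; the range-of-1-or-2 and
StdDev < 1.0 thresholds are the ones described above):

    import statistics

    def accept_median_without_discussion(ratings, max_range=2, max_stdev=1.0):
        """Return (median, accepted): accept the median as the Delphi rating
        outright when the individual ratings already agree closely."""
        spread = max(ratings) - min(ratings)
        close_enough = spread <= max_range and statistics.pstdev(ratings) < max_stdev
        return statistics.median(ratings), close_enough

    # For example, the IRTB ratings in the table below (4, 3, 4, 3, 3, 5)
    # would pass this check, yielding a median of 3.5 with no further discussion.
    print(accept_median_without_discussion([4, 3, 4, 3, 3, 5]))   # (3.5, True)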

 

The black figures (see DELPHI column) are the results of our collective
discussion and re-rating of each project dimension.  Taking advantage of
Adobe Connect, the process we used was to start with the Value/Benefit (Y)
axis and, working from top to bottom (skipping the orange/green), Ken read
out the starting individual ratings.  Then he asked those who rated at one
end of the spectrum (e.g., high or low) to provide their thinking and
rationale.  Following that, we opened the floor to any other comments.  At
that point, Ken opened the online polling feature and asked the group to
re-rate the project dimension.  In all but one case, the first poll results
were pretty close to each other; thus, we accepted the median answer.  The
one case that would normally have taken a second round (or third?) was the
ABUS project, in which we ended up with five different ratings: 2, 3, 4, 5,
6.  Since time was running out, we decided to table the discussion until
later; but, on returning at the tail end of the session (already 20-30
minutes over), we opted to accept the median value of 4.  Keep in mind that
we are only testing the "process" and not officially rating any
project/dimension.
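
As a rough sketch of the discuss-and-re-poll loop itself (purely
illustrative: the collect_new_ratings callback stands in for the Adobe
Connect poll, and the convergence test reuses the range/StdDev rule above):

    import statistics

    def delphi_rate(project, ratings, collect_new_ratings, max_rounds=3):
        """While the ratings are still spread out, discuss and re-poll;
        accept the median once the group converges or rounds run out."""
        for _ in range(max_rounds):
            converged = (max(ratings) - min(ratings) <= 2
                         and statistics.pstdev(ratings) < 1.0)
            if converged:
                break
            # High/low raters explain their rationale, the floor opens for
            # comments, then everyone re-rates via the online poll.
            ratings = collect_new_ratings(project, ratings)
        return statistics.median(ratings)

    # ABUS illustration: a stand-in "poll" returns the second-round ratings
    # 2, 3, 4, 5, 6, and the median of 4 is what the group ultimately accepted.
    print(delphi_rate("ABUS", [5, 3, 1, 7, 2, 6], lambda p, r: [2, 3, 4, 5, 6]))   # 4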

 

        
Y VALUES = VALUE/BENEFIT

Project    SVG   WUK   CG   JW   OC   LG   DELPHI
STI         7     6     6    6    5    6     6.0
IDNF        4     6     3    6    3    2     4.0
GEO         2     5     1    4    1    1     2.0
TRAV        5     2     1    4    3    1     2.0
PED         5     4     4    4    3    6     4.0
ABUS        5     3     1    7    2    6     4.0
JIG         4     6     5    7    4    3     5.0
PDP         6     7     7    6    6    6     6.0
WG          6     4     7    6    6    5     6.0
GCOT        6     4     5    5    4    5     5.0
CSG         6     4     4    5    5    5     5.0
CCT         6     3     5    6    4    5     5.0
IRTB        4     3     4    3    3    5     3.5
RAA         4     6     5    7    5    7     6.0
IRD         5     4     5    7    4    4     5.0
 

After this first DELPHI rating session, a few questions occurred to me that
may be helpful once we get to the point of evaluating/assessing the model,
its X/Y definitions, and the various rating processes that we tried.
There is no need to answer these questions on the email list unless you feel
so inclined.   They are intended to be preliminary thoughts and perceptions,
phrased as questions, from my role as your facilitator.  

 

Thinking about our first DELPHI rating session: 

1)      Even though time was compressed, did you find that you broadened
your perspectives from the discussions?

2)      Would you prefer more or less time for each project/dimension
discussion?   Should there be specific time limits, or do you recommend that
discussion time be kept flexible and unconstrained?  

3)      Did you feel as though you compromised your ratings (during polling)
in a way that was not the result of having changed your perspective or
learned something new?   In other words, did you feel any unwelcome or
unhealthy pressure in trying to find common ground?   

4)      Do you think that the group's DELPHI ratings for the Y axis are
generally better (i.e., more representative of the definition) than any
single person's individual ratings?

5)      Did the Adobe polling process work satisfactorily?   Ken noticed
that several times we were waiting for the last result or two.   Were the early
voters influencing the later ones?   There is a feature to turn OFF the
results display so that raters cannot see what has occurred until after they
have voted.   Perhaps we will try it that way next time to see which way
works best.    

6)      I noticed that some comments made during the discussion implied that
certain individuals had been thinking of a different definition than the one
previously approved for Value/Benefit, e.g., considering value/benefit only
to GNSO vs. the entire Internet community.   Should the Y axis definition be
revisited now that the team has had a chance to actually work with it?  

 

Next Steps:

 

In terms of efficiency, the group managed to rate 10 elements in
approximately 70 minutes.  For the X axis, we have 11 elements remaining;
therefore, I have suggested to Gisella a 90-minute session for the 28 or 29
December Doodle poll.  Assuming we are successful in accomplishing this 2nd
rating session, we also agreed to try for an evaluation meeting in the 1st
week of January; a 2nd Doodle poll will be sent out for that purpose
(length = 60 minutes).
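
For reference, the pacing estimate behind the 90-minute suggestion works out
roughly as follows (the size of the buffer is my own reading of the numbers,
not something we agreed on):

    minutes_per_element = 70 / 10                 # observed pace: 10 elements in ~70 minutes
    estimated_minutes = 11 * minutes_per_element  # 11 X-axis elements remaining
    print(estimated_minutes)                      # 77.0, so a 90-minute session leaves a buffer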

 

Again, thank you all for a successful session today and, hopefully, we will
have an opportunity to complete the X axis dimensions on either 28 or 29
December.   

 

Happy holidays to all,

 

Ken Bour

 

P.S.   I uploaded a new PDF to our Adobe Connect room, which now shows the
project acronyms instead of Sequence No.   Thanks for that suggestion!   I
also created a Note box that will remain visible at all times showing the
definitions for X and Y.    If anyone has other ideas for improving the
process, please let me know.   I will keep thinking about it also.

 




-- 
Olga Cavalli, Dr. Ing.
www.south-ssig.com.ar



