RE: [gnso-wpm-dt] WPM-DT: Step 3a (In Progress) -- Summary of Group Rating Session 21 Dec 2009
- To: "Gomes, Chuck" <cgomes@xxxxxxxxxxxx>, Ken Bour <ken.bour@xxxxxxxxxxx>, "gnso-wpm-dt@xxxxxxxxx" <gnso-wpm-dt@xxxxxxxxx>
- Subject: RE: [gnso-wpm-dt] WPM-DT: Step 3a (In Progress) -- Summary of Group Rating Session 21 Dec 2009
- From: Adrian Kinderis <adrian@xxxxxxxxxxxxxxxxxx>
- Date: Wed, 23 Dec 2009 19:49:30 +1100
Agreed. Let me know how you want me to assist and support.
Adrian Kinderis
From: Gomes, Chuck [mailto:cgomes@xxxxxxxxxxxx]
Sent: Wednesday, 23 December 2009 2:20 AM
To: Adrian Kinderis; Ken Bour; gnso-wpm-dt@xxxxxxxxx
Subject: RE: [gnso-wpm-dt] WPM-DT: Step 3a (In Progress) -- Summary of Group
Rating Session 21 Dec 2009
I like the idea of having Adrian do a 'red team' review of our proposal toward
the end of our work. Waiting until we are nearly finished, as red teams
normally do, allows him to be more independent.
Chuck
________________________________
From: owner-gnso-wpm-dt@xxxxxxxxx [mailto:owner-gnso-wpm-dt@xxxxxxxxx] On
Behalf Of Adrian Kinderis
Sent: Monday, December 21, 2009 10:19 PM
To: Ken Bour; gnso-wpm-dt@xxxxxxxxx
Subject: RE: [gnso-wpm-dt] WPM-DT: Step 3a (In Progress) -- Summary of Group
Rating Session 21 Dec 2009
Team,
I know I have been distant on this topic, but I have been reading and watching
with interest.
Can I suggest the following (and it is only a suggestion):
In our organisation, prior to a task being started (for example, a release of
software into production), the Production Support Team will do a detailed plan.
This plan is then reviewed by the "Red Team": knowledgeable team
members who were not involved in the preparation of the plan. The logic is
that a fresh set of eyes may be better able to pick holes in the plan.
Is it worthwhile for me, and potentially others, to put my hand up to act as a
"red team" for this body of work? I could wait until you are finished and take
a look at the plan with a view to providing feedback.
Just a thought on how I could help, given that I have had limited interaction
with the team.
Merry Christmas to all.
Adrian Kinderis
From: owner-gnso-wpm-dt@xxxxxxxxx [mailto:owner-gnso-wpm-dt@xxxxxxxxx] On
Behalf Of Ken Bour
Sent: Tuesday, 22 December 2009 10:59 AM
To: gnso-wpm-dt@xxxxxxxxx
Subject: [gnso-wpm-dt] WPM-DT: Step 3a (In Progress) -- Summary of Group Rating
Session 21 Dec 2009
WPM-DT Members:
I thought we had a productive call today even though we did not finish both
sets of X and Y dimensions in our group rating session. As I indicated in my
earlier email, attempting 21 elements in 45 minutes was an extremely ambitious
undertaking, given the time it takes for everyone to connect and for us to get
through the agenda preliminaries.
Five team members participated in today's DELPHI rating session: Jaime, Olga,
Chuck, Wolf, and Liz (Staff). Ken handled the session administration,
including opening/closing the polls at the appropriate times and keeping track
of the results.
The team managed to complete the Y dimensions, and the chart below shows the
DELPHI results for Value/Benefit (the Y axis). The orange and green values are
medians taken directly from the individual ratings. Since the original range
between high and low was only 1 or 2 for those projects (and the StdDev was
< 1.0), we accepted the median result as the DELPHI rating without further
discussion.
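For anyone who would like to see that screening rule written out concretely,
here is a minimal sketch in Python. It is purely illustrative: the function
name and the use of the sample standard deviation are my assumptions, not
anything the team has formally adopted.

    import statistics

    def screen_ratings(ratings):
        """Screening rule from the session: if the spread between the
        high and low individual ratings is 1 or 2 AND the standard
        deviation is below 1.0, accept the median as the DELPHI rating
        without further discussion; otherwise, flag the item for
        discussion and re-polling."""
        spread = max(ratings) - min(ratings)
        stdev = statistics.stdev(ratings)  # sample StdDev (assumption)
        median = statistics.median(ratings)
        if spread <= 2 and stdev < 1.0:
            return median, "accept median without discussion"
        return median, "discuss and re-poll"

    # The in-session ABUS poll (2, 3, 4, 5, 6) has a spread of 4 and a
    # StdDev of about 1.58, so the rule sends that project to
    # discussion; its median is the 4 we ultimately accepted.
    print(screen_ratings([2, 3, 4, 5, 6]))  # -> (4, 'discuss and re-poll')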
The black figures (see DELPHI column) are the results of our collective
discussion and re-rating of each project dimension. Taking advantage of Adobe
Connect, we started with the Value/Benefit (Y) axis and, working from top to
bottom (skipping the orange/green), Ken read out the starting individual
ratings. He then asked those who rated at either end of the spectrum (i.e.,
high or low) to explain their thinking and rationale. Following that, we
opened the floor to any other comments. At that point, Ken opened the online
polling feature and asked the group to re-rate the project dimension. In all
but one case, the first poll results were close to each other; thus, we
accepted the median answer. The one case that would normally have taken a
second round (or third?) was the ABUS project, in which we ended up with five
different ratings: 2, 3, 4, 5, 6. Since time was running out, we decided
to table the discussion until later; but, on returning to it at the tail end of
the session (already 20-30 minutes over), we opted to accept the median value of 4.
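To make that round structure explicit, here is one more illustrative sketch;
collect_poll is a hypothetical stand-in for the floor discussion plus the live
Adobe Connect poll, and the "close enough" test is assumed to be the same
spread/StdDev screen described above.

    import statistics

    def delphi_rate(initial_ratings, collect_poll, max_rounds=3):
        """Run the re-rating loop we used today: after the high/low
        raters explain their rationale and the floor discussion, a new
        poll is taken; if the results are close enough, the median is
        accepted, otherwise another round begins. When the rounds (or
        time) run out, the median of the last poll stands, as we did
        for ABUS."""
        ratings = initial_ratings
        for _ in range(max_rounds):
            ratings = collect_poll(ratings)  # discussion + re-poll (hypothetical hook)
            spread = max(ratings) - min(ratings)
            if spread <= 2 and statistics.stdev(ratings) < 1.0:
                break  # close enough: accept without another round
        return statistics.median(ratings)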
Keep in mind that we are only testing the "process" and not officially rating
any project/dimension.
Y VALUES = VALUE/BENEFIT

Project   SVG  WUK  CG  JW  OC  LG  DELPHI
STI         7    6   6   6   5   6     6.0
IDNF        4    6   3   6   3   2     4.0
GEO         2    5   1   4   1   1     2.0
TRAV        5    2   1   4   3   1     2.0
PED         5    4   4   4   3   6     4.0
ABUS        5    3   1   7   2   6     4.0
JIG         4    6   5   7   4   3     5.0
PDP         6    7   7   6   6   6     6.0
WG          6    4   7   6   6   5     6.0
GCOT        6    4   5   5   4   5     5.0
CSG         6    4   4   5   5   5     5.0
CCT         6    3   5   6   4   5     5.0
IRTB        4    3   4   3   3   5     3.5
RAA         4    6   5   7   5   7     6.0
IRD         5    4   5   7   4   4     5.0
After this first DELPHI rating session, a few questions occurred to me that may
be helpful once we get to the point of evaluating/assessing the model, its X/Y
definitions, and the various rating processes that we tried. There is no
need to answer these questions on the email list unless you feel so inclined.
They are intended to be preliminary thoughts and perceptions, phrased as
questions, from my role as your facilitator.
Thinking about our first DELPHI rating session:
1) Even though time was compressed, did you find that you broadened your
perspectives from the discussions?
2) Would you prefer more or less time for each project/dimension
discussion? Should there be specific time limits, or do you recommend that
discussion time be kept flexible and unconstrained?
3) Did you feel as though you compromised your ratings (during polling) in
a way that was not the result of having changed your perspective or learned
something new? In other words, did you feel any unwelcome or unhealthy
pressure in trying to find common ground?
4) Do you think that the group's DELPHI ratings for the Y axis are
generally better (i.e. more representative of the definition) than any single
person's individual ratings?
5) Did the Adobe polling process work satisfactorily? Ken noticed that,
several times, we were waiting on the last result or two. Were the early
voters influencing the later ones? There is a feature to turn OFF the results
display so that raters cannot see what has occurred until after they have
voted. Perhaps we will try it that way next time to see which way works best.
6) I noticed that some comments made during the discussion implied that
certain individuals had been thinking of a different definition than the one
previously approved for Value/Benefit, e.g. considering value/benefit only to
the GNSO vs. the entire Internet community. Should the Y axis definition be
revisited now that the team has had a chance to actually work with it?
Next Steps:
In terms of efficiency, the group managed to rate 10 elements in approximately
70 minutes, or roughly 7 minutes per element. For the X axis, we have 11
elements remaining; therefore, I have suggested to Gisella a 90-minute session
for the 28 or 29 December Doodle poll. Assuming we are successful in
accomplishing this 2nd rating session, we also agreed to try for an evaluation
meeting the 1st week of January; a 2nd Doodle poll will be sent out for that
purpose (Length = 60 minutes).
Again, thank you all for a successful session today and, hopefully, we will
have an opportunity to complete the X axis dimensions on either 28 or 29
December.
Happy holidays to all,
Ken Bour
P.S. I uploaded a new PDF to our Adobe Connect room, which now shows the
project acronyms instead of Sequence No. Thanks for that suggestion! I also
created a Note box that will remain visible at all times showing the
definitions for X and Y. If anyone has other ideas for improving the
process, please let me know. I will keep thinking about it also...