ICANN Email List Archives

[gnso-wpm-dt]



Re: [gnso-wpm-dt] WPM-DT: Step 5 (In Progress)

  • To: "Gomes, Chuck" <cgomes@xxxxxxxxxxxx>
  • Subject: Re: [gnso-wpm-dt] WPM-DT: Step 5 (In Progress)
  • From: Stéphane Van Gelder <stephane.vangelder@xxxxxxxxx>
  • Date: Tue, 5 Jan 2010 17:02:10 +0100

Yes, good point. To answer it, we need to be able to define what separates an 
implementation project from other projects. On our current list of projects, 
what would be considered an implementation project?

Stéphane

On 5 Jan 2010, at 16:55, Gomes, Chuck wrote:

> Jaime raises a good question: Should implementation projects be added to our 
> list of projects for prioritization, separate from the projects tasked with 
> developing recommendations?  That would increase the number of projects quite 
> a lot.  How would we compare implementation projects to other projects?  
> Would their value be the same as that assigned to the corresponding 
> recommendation development projects?
>  
> Chuck
> 
> From: owner-gnso-wpm-dt@xxxxxxxxx [mailto:owner-gnso-wpm-dt@xxxxxxxxx] On 
> Behalf Of Jaime Wagner
> Sent: Tuesday, January 05, 2010 9:09 AM
> To: gnso-wpm-dt@xxxxxxxxx
> Subject: RE: [gnso-wpm-dt] WPM-DT: Step 5 (In Progress)
> 
> I added a comment to item 3 and to the considerations on small groups.
>  
> Jaime Wagner
> j@xxxxxxxxxxxxxx            
> +55(51)8126-0916
> skype: jaime_wagner
> 
>  
> From: owner-gnso-wpm-dt@xxxxxxxxx [mailto:owner-gnso-wpm-dt@xxxxxxxxx] On 
> Behalf Of Gomes, Chuck
> Sent: Monday, 28 December 2009 20:00
> To: Ken Bour; gnso-wpm-dt@xxxxxxxxx
> Subject: RE: [gnso-wpm-dt] WPM-DT: Step 5 (In Progress)
>  
> Thanks Ken.  To get discussion started I inserted a few comments below.
>  
> Chuck
>  
> From: owner-gnso-wpm-dt@xxxxxxxxx [mailto:owner-gnso-wpm-dt@xxxxxxxxx] On 
> Behalf Of Ken Bour
> Sent: Monday, December 28, 2009 3:44 PM
> To: gnso-wpm-dt@xxxxxxxxx
> Subject: [gnso-wpm-dt] WPM-DT: Step 5 (In Progress)
> 
> WPM-DT Members:
>  
> As promised, this second memo today outlines a few issues that were 
> brought up in today’s discussion, as well as some of what we might 
> consider in our next session.   
>  
> Outstanding Issues:
>  
> 1)      Value/Benefit (Y Axis) Definition-Issue #1:   This is a holdover from 
> the 21 Dec session, during which a question was raised as to whether/how to 
> modify the definition to ensure that the narrower GNSO is represented vs. 
> only the global Internet community and ICANN stakeholders.  [Gomes, Chuck]  I 
> still support including the GNSO focus in addition to the global community 
> focus. 
> 2)      Resource Consumption (X Axis) Definition-Issue #1:  We noted that 
> there were variances in interpretation by participants as to whether a 
> project’s Resource Consumption should be exhaustive, i.e., from the beginning 
> of its instantiation (sunk cost) through anticipated closure, or limited to 
> the current period forward.  [Gomes, Chuck]  As stated in our meeting today, 
> I think that measuring resource consumption from the present point in time 
> makes the most sense because we will be prioritizing our work going forward. 
> [J.Wagner] I agree with Chuck.   
> 3)      Resource Consumption (X Axis) Definition-Issue #2:  another question 
> surfaced as to whether the Resource Consumption estimate should include only 
> that portion of the project that relates to developing a GNSO recommendation 
> (e.g. Final Report) or also include an assessment of the project’s full 
> implementation.   To bring everyone up to speed, the example that was used 
> was the Communications and Coordination Work Team (CCT) and its GNSO website 
> recommendations.    Given the Work Team’s near completion of its final 
> Technology report, one view is that there is little work left to do; thus, a 
> relatively low X axis rating would result.   On the other hand, if the 
> project is to also consider the entire website development effort (whether 
> performed by Staff or outside consultants), that would produce a much higher 
> assessment of Resource Consumption.   The DT agreed that additional 
> discussion is needed to provide the Council clear guidance in terms of where 
> to draw the lines so that the ratings estimates are consistently interpreted 
> under a common rubric. [Gomes, Chuck]  We definitely need to provide clear 
> direction in this regard. I favor treating actual implementation separately 
> from the rating of resources needed and therefore not including 
> implementation resources in the estimate of X values.
> [J.Wagner] I understood we had reached a consensus that, for the projects 
> listed, we would consider only the development of a recommendation. The point 
> that, in my opinion, requires consideration is whether we will add 
> implementation projects to our list. I think this is a must from the moment 
> such projects start, since they consume resources. The point is: are there 
> any implementation projects already running?
>  
> 4)      Liz and Ken would like to introduce an additional question/concern 
> regarding the assessments of X and Y.   Staff calculated in previous 
> iterations that the correlation statistics between X and Y ratings, in many 
> instances, were high (> 50%) – a result that continued with the DELPHI 
> technique.   While a high correlation between these variables is not 
> necessarily indicative of a problem (i.e., highly valued projects might 
> indeed have high resource consumption), the two should nonetheless be treated 
> as independent variables.  Ken notes that, during today’s discussion of the X 
> axis, comments 
> were made a few times about the “priority” of a project even when the team 
> was explicitly evaluating its resource consumption.   For some individuals, 
> it may be challenging to cleanly separate value from consumption.   One 
> potential remedy to this natural cross-correlation bias would be to ask 
> DIFFERENT parties to rate X and Y independently.   For 
> example, one possibility might be to ask the Council to rate Value/Benefit 
> and have Staff rate Resource Consumption – especially since Staff has deeper 
> knowledge of and more extensive involvement with the various project teams 
> and also understands the amount/extent of Staff resources that are being 
> applied. [Gomes, Chuck]  This is an interesting idea worthy of further 
> discussion.  We could also consider having Staff and member(s) of the 
> applicable work team rate the resources needed.  We might want to use the 
> term "Resources Needed" instead of "Resource Consumption" to avoid the 
> implication that it includes resources already consumed, assuming we agree on 
> that approach. 
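>  
> As a rough illustration of the correlation check described in item 4 above, 
> here is a minimal sketch (Python; all rating values below are placeholders, 
> not actual DT data):
>  
>     # Minimal sketch: how strongly one set of X (Resource Consumption) ratings
>     # moves with the corresponding Y (Value/Benefit) ratings across 15 projects.
>     # All rating values are invented placeholders on the 1-7 scale.
>     from statistics import correlation  # available in Python 3.10+
>  
>     x_ratings = [3, 5, 2, 6, 4, 7, 1, 5, 3, 6, 2, 4, 5, 7, 3]  # X axis ratings
>     y_ratings = [4, 6, 2, 5, 4, 7, 2, 6, 3, 5, 3, 4, 6, 7, 4]  # Y axis ratings
>  
>     r = correlation(x_ratings, y_ratings)
>     print(f"Pearson r between X and Y ratings: {r:.2f}")
>     # A result above roughly 0.5 is the kind of figure Staff flagged; it hints
>     # that raters may not be judging the two axes independently.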
>  
> Next Session:
>  
> 1)      Next Meeting:  a Doodle poll will be forthcoming to find a time that 
> the DT can meet on either 5 or 6 January 2010.  
> 2)      After today’s work [Note: for technical results, see companion email; 
> Subject: Step 3a (Completed)], the team agreed to proceed to Step 5, “Team 
> assessment of the construct and process/methodology and recommendations.”   
> However, during the next meeting, in addition to the process evaluation 
> topics (see below), the team also wishes to consider whether to create a Step 
> 3b in which it would try another rating permutation (e.g. small groups of 
> 2-3).   If the team elects to perform a Step 3b, we could repeat Steps 4 and 
> 5 for that iteration before moving to Step 6 “Focus on HOW the team might 
> utilize the data in terms of developing a prioritization -- the ultimate goal 
> of this effort.” 
> 3)      We should also formalize the earlier list discussion, which would add 
> a Step 7: “Red Team Analysis of Model, Methodology, and Procedures.” 
> 4)      The above step also suggests another one, Step 8:  “Finalize WPM-DT 
> Model and Recommendations to GNSO Council.”  
>  
> Considerations on Small Group DELPHI:
>  
> Following are some thoughts that the team might begin to weigh prior to next 
> week’s session:
>  
> ·         After having rated all 15 projects both individually and in one 
> large group (of 5), what new information might be learned by breaking into 
> smaller groups of 2-3? [Gomes, Chuck]  I am starting to wonder more about 
> this.  There was a lot of value in our group discussion of each of the 
> ratings, and the most helpful part of that came from participants who were 
> more knowledgeable about the applicable projects. In a group of 2 or 3, the 
> chances of having someone who is knowledgeable on every project will be 
> significantly reduced. 
> [J.Wagner] In my opinion the small groups should not redo work already done 
> in the larger group. It seems we’ve lost a step (or else I’ve missed 
> something). I thought that after Step 4, with the Delphi consensus on both 
> axes in mind, we would have a final round to produce a one-dimensional 
> priority list. What I tried to say in our call is that for this step we could 
> try the small group approach. I share your doubts about whether even this 
> would be worthwhile. Still, if there is doubt, I think it’s within our scope 
> to try it before discarding it.
>  
> ·         What criteria, if any, should be applied in constructing small 
> groups?   For example:  (a) experience/knowledge (pairing senior Councilor 
> with new member); (b) heterogeneous or homogeneous by Constituency and/or SG; 
> (c) contracted party vs. non-contracted; … others?  [Gomes, Chuck]  In my 
> opinion, having members with experience/knowledge is critical for small 
> groups; the smaller the groups, the more difficult it will be to have 
> experience/knowledge across all projects.
> [J.Wagner] Repeating: we should NOT redo ratings in small groups. I think 
> knowledge and experience are fundamental to the ratings that are already 
> depicted in the chart produced in Step 4. Reducing that chart to a 
> one-dimensional priority list is much more a matter of opinion (one 
> hypothetical illustration of such a reduction follows these bullets), so the 
> groups could be formed based on convenience alone.
> ·         If small groups are constructed, should they be facilitated by 
> Staff using an Adobe Connect room or should individuals get together 
> independently to discuss and agree upon ratings to be submitted (e.g. using a 
> form similar to the one provided earlier for individual ratings)? [Gomes, 
> Chuck]  The Adobe Connect tool has worked very well for us so far.  Would it work 
> as well in smaller groups?  We should at least test it if we decide to go 
> that direction.
> [J.Wagner] Since our small groups are indeed small (2 people) we don’t need 
> Adobe Connect, but I think a facilitator would add uniformity and objectivity.
>  
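> As noted above, the open question is how the two-dimensional Step 4 chart 
> would eventually be reduced to a single priority list. Purely as a 
> hypothetical illustration (the DT has not agreed on any formula, and the 
> reduction may well remain a matter of judgment), one mechanical starting 
> point could be to rank projects by their value-to-resource ratio:
>  
>     # Hypothetical illustration only: ranking projects by Y/X, i.e. value per
>     # unit of resources needed. Project names and ratings are placeholders.
>     projects = {
>         "Project A": {"x": 3.2, "y": 6.1},  # group-consensus ratings, 1-7 scale
>         "Project B": {"x": 5.8, "y": 4.0},
>         "Project C": {"x": 2.5, "y": 5.5},
>     }
>  
>     ranked = sorted(projects, key=lambda p: projects[p]["y"] / projects[p]["x"],
>                     reverse=True)
>     for rank, name in enumerate(ranked, start=1):
>         print(rank, name)
>  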
> Questions for DT Interim Evaluation:
>  
> The following approach is a suggestion for how we might proceed in conducting 
> next meeting’s assessment discussion: [Gomes, Chuck]  Looks good to me. 
>  
> 1)      Briefly recap the work completed thus far:
> o   Project list (15) and requirements for inclusion as “active”
> o   Two-dimensional model and definitions for X and Y
> o   Ratings scale 1-7 from Far Below Average to Far Above Average
> o   Individual ratings using the rating scale, with a final chart plotting 
> means or medians (a brief sketch of this computation follows this list)
> o   Large group DELPHI session using rating scale; facilitated using Adobe 
> Connect room with polling feature
> 2)      Any changes recommended to work already completed, i.e., what worked, 
> what didn’t work?
> 3)      How is the DT doing against its objectives thus far? 
>  
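> As a rough illustration of the individual-ratings step recapped above, here 
> is a minimal sketch of turning each project’s 1-7 ratings into the (X, Y) 
> coordinates plotted on the chart (project names and values are placeholders, 
> not actual DT data):
>  
>     # Minimal sketch: per-project mean and median of individual 1-7 ratings,
>     # giving the (X, Y) point plotted for each project on the chart.
>     # Project names and rating values are invented placeholders.
>     from statistics import mean, median
>  
>     ratings = {
>         # project: ([X ratings, one per rater], [Y ratings, one per rater])
>         "Project A": ([3, 4, 5, 4, 3], [6, 5, 6, 7, 6]),
>         "Project B": ([6, 7, 6, 5, 6], [4, 3, 4, 4, 5]),
>     }
>  
>     for name, (xs, ys) in ratings.items():
>         print(name, "mean:", (round(mean(xs), 1), round(mean(ys), 1)),
>               "median:", (median(xs), median(ys)))
>  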
> Final Thought:  
>  
> I have recently begun experimenting with Mind Mapping software (using 
> NovaMind4) and, mostly for my own purposes, I created a map of this team’s 
> process so that I could keep track not only of the steps (I did change the 
> sequencing a bit), but also of the various issues, questions, documents, 
> etc., that we have been accumulating via the list.   It is still a 
> work-in-progress, but 
> if you would like to see a copy of the map, please click on this link:  
> http://www.novamind.com/connect/nm_documents/723.   It should not require a 
> login or anything else to view.   I also attached a static JPEG image in 
> case you have any trouble with the link. 
>  
> Ken Bour
>  

Attachment: smime.p7s
Description: S/MIME cryptographic signature


