Re: [gnso-wpm-dt] Introduction to draft Work Prioritization model
Thanks for that work, Liz, it's impressive. I like the work prioritization construct you propose. The graph setup seems extremely clear and easy to read.

On the other hand, I am less sure about the execution model. Whether we poll each councillor individually or in groups, we are basically asking people to rank according to their own priorities rather than use a, dare I use the word, scientific ranking method. As I am out of my field of expertise here, it may simply be that there is no mathematical or scientific model that would allow us to rank our projects. But I would like to get as far away from the human element in this as possible, so that we end up with a prioritization schedule that leaves nothing to chance and is based purely on models.

What do others think?

Thanks,

Stéphane

On 20 Nov 2009, at 21:42, Liz Gasster wrote:

Work Prioritization Team:

As a way to help bring everyone to the same level on the GNSO Work Prioritization project, I have attempted to consolidate various emails and organize our latest thinking into a single document. Again, this is a suggested draft starting place offered by staff, and you are encouraged to modify it as you feel appropriate. There are three sections, as follows:

1) Recommended construct and methodology (see also the attached spreadsheet)
2) Draft definitions for the two dimensions
3) Procedural questions to be considered

1) Recommended construct and methodology

For this effort, Staff envisions a two-dimensional matrix or chart (X, Y) to help the GNSO Council graphically depict its work prioritization. The concept is based on having each discrete project rated on two dimensions: Value/Benefit (Y axis) and Difficulty/Cost (X axis). Section 2 below outlines the preliminary draft definitions for each dimension (or axis), so this section concentrates on what the chart means, how it would be produced, and the rating/ranking methodology, including sample instructions.

Illustration: The chart below shows 8 illustrative projects (simply labeled ABC, DEF, GHI, etc.) plotted on the two dimensions. In this sample depiction, Q1, Q2, Q3, and Q4 are four quadrants drawn at the midpoint of each axis (each axis arbitrarily scaled from 1 to 10). Thinking about Value/Benefit versus Difficulty/Cost, Q1 contains the projects with the highest value and lowest cost, whereas Q4 contains the projects with the lowest value and highest cost. Project ABC, in this example, is rated 3.25 on Difficulty and 7.75 on Value, and is therefore located squarely in Q1. Conversely, project GHI is rated 7.75 on Difficulty but only 1.00 on Value, and is thereby placed in Q4.

[image001.png: sample chart plotting the 8 illustrative projects on the two axes]
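To make the quadrant placement concrete, here is a minimal Python sketch. It assumes each axis runs from 1 to 10 so that the quadrant boundaries sit at the midpoint of 5.5; the message defines only Q1 and Q4 explicitly, so the Q2/Q3 naming below is an assumption for illustration.

```python
# Minimal sketch of the quadrant placement described above, assuming a
# 1-10 scale on both axes with the quadrant boundaries at the midpoint 5.5.
# Only Q1 (high value, low cost) and Q4 (low value, high cost) are defined
# in the text; the Q2/Q3 naming below is an assumption.

MIDPOINT = 5.5  # midpoint of a 1-10 axis

def quadrant(difficulty: float, value: float) -> str:
    """Classify a project by its (Difficulty/Cost, Value/Benefit) rating."""
    if value >= MIDPOINT:
        return "Q1" if difficulty < MIDPOINT else "Q2"  # Q2 naming assumed
    return "Q3" if difficulty < MIDPOINT else "Q4"      # Q3 naming assumed

# The two worked examples from the illustration:
print(quadrant(3.25, 7.75))  # project ABC -> Q1 (high value, low difficulty)
print(quadrant(7.75, 1.00))  # project GHI -> Q4 (low value, high difficulty)
```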
How do the projects end up with the individual X, Y coordinates that determine their placement on the chart? There are several options for rating/ranking individual projects; we will look specifically at two alternatives below.

Rating Alternative A:

One option is to ask each Council member, individually and separately, to rate/rank each project on both dimensions. Even within this alternative (and B below), different methods are possible: for example, (1) assign a ranking from 1 to n for each project under each column, or (2) use something a bit simpler, e.g. High, Medium, and Low ratings for each project relative to the others. Since it is arguably easier to rate each project as H, M, or L than to rank them discretely from 1 to n, we illustrate the former approach here. Keep in mind that an ordinal ranking methodology would simply substitute a number (from 1 to 8 in our example) for the letters H, M, and L.

Directions: Rate each project as HIGH, MEDIUM, or LOW on each dimension (Value/Benefit, Difficulty/Cost), keeping in mind that the rating should be relative to the other projects in the set. There are no fixed anchors for either dimension, so raters are asked to group projects as LOW, MEDIUM, or HIGH compared to each other. A HIGH rating on Value simply means that the project is perceived to provide significantly greater benefit than projects rated MEDIUM.

If there are 20+ raters, we could provide a simple blank matrix and ask them to submit their individual ratings. For example, assume that the matrix below is one individual's ratings for all 8 illustrative projects:

PROJECT   VALUE/BENEFIT   DIFFICULTY/COST
ABC       L               H
DEF       L               M
GHI       H               L
JKL       M               M
MNO       L               L
PQR       H               H
STU       M               M
VWX       M               L

Once all individual raters have submitted their results (in simple Word, Excel, or even plain-text email form), Staff would convert each LOW to a score of 1, each MEDIUM to 5.5, and each HIGH to 10 (see the attached spreadsheet, Rankings tab). We would then average the scores across all raters and produce a chart as shown in the attached spreadsheet (see the Summary tab); a sketch of this conversion and averaging appears at the end of this message. Note: we used only 4 raters in the spreadsheet for illustrative purposes, but it is trivial to extend the approach to as many raters as we decide to involve.

Rating Alternative B:

Instead of asking each Council member to rate/rank each project individually, the Council could use a grouping technique (sometimes referred to as a "Delphi" method). For example, suppose we set up 4 teams based upon the existing Stakeholder Group structures as follows:
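As referenced under Rating Alternative A above, here is a minimal Python sketch of the scoring mechanics: each rater's H/M/L letters are converted to numbers (L = 1, M = 5.5, H = 10) and averaged across raters to give each project one (X, Y) coordinate. The two raters and their ratings below are invented purely for illustration.

```python
# Sketch of the Alternative A scoring: convert each rater's H/M/L letters
# to numbers (L=1, M=5.5, H=10) and average across raters to produce one
# (Difficulty/Cost, Value/Benefit) coordinate per project. The two raters
# below are hypothetical; any number of raters can be added.

SCORES = {"L": 1.0, "M": 5.5, "H": 10.0}

# Each rater's submission: project -> (Value/Benefit, Difficulty/Cost) letters.
raters = [
    {"ABC": ("L", "H"), "DEF": ("L", "M"), "GHI": ("H", "L")},
    {"ABC": ("M", "H"), "DEF": ("L", "L"), "GHI": ("H", "M")},
]

def average_coordinates(raters):
    """Return {project: (avg_difficulty, avg_value)} across all raters."""
    sums = {}  # project -> [difficulty_total, value_total, rater_count]
    for ratings in raters:
        for project, (value, difficulty) in ratings.items():
            entry = sums.setdefault(project, [0.0, 0.0, 0])
            entry[0] += SCORES[difficulty]
            entry[1] += SCORES[value]
            entry[2] += 1
    # (X, Y) = (average Difficulty/Cost, average Value/Benefit)
    return {p: (d / n, v / n) for p, (d, v, n) in sums.items()}

print(average_coordinates(raters))
# -> {'ABC': (10.0, 3.25), 'DEF': (3.25, 1.0), 'GHI': (3.25, 10.0)}
```

Each project's averaged pair can then be fed to the quadrant sketch shown earlier to place it on the chart.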