Online Scenario Planning Results

A few weeks ago I issued a call for participation in an experiment in online scenario planning with colleagues Dave Snowden and Wendy Schultz.  The experiment is one of several I am conducting for my PhD.

I am still crunching the numbers but I thought readers would be interested in some early statistics on participation and feedback.  Thanks to everyone who contributed.  This should answer some of the questions you’ve been sending me.

Questions

The goal of the experiment was to test a new approach to crowdsourced scenario planning and to collect useful information on a relevant case study.  Respondents answered four generic questions relating to the near-term future of public services, given the level of financial uncertainty seen around the world.  These questions were:

  • What is the future of public service provision under financial uncertainty?
  • How will governments and cities adapt to managing public resources under increasing constraints?
  • What factors will be critical for public service provision in the coming decade?
  • How will these factors combine to influence public service provision in the 2010s and beyond?

Response rate

The experiment ran for one week and we received 265 contributions.  Contributions ranged from anecdotal stories of personal experience to short analyses of the situation and even simple statements of personal opinion.  Most contributions were around two paragraphs in length, although some were much longer.

It is difficult to determine a response rate given the open nature of the invitation.  In addition to the announcement on this blog, announcements were made on several other blogs, through Twitter, via professional and academic list-servs, and through personal email invitations.  I will be calculating a proxy response rate soon by comparing the number of hits to the website with the number of actual responses once I get the server stats for the capture period.
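The proxy itself is just a simple ratio.  Here is a minimal Python sketch of the calculation I have in mind; the visit count below is a placeholder until the real server stats arrive.

    # Minimal sketch of the proxy response rate described above.
    # The visit count is a placeholder, not a real figure.
    unique_visits = 5000   # to be replaced with server stats for the capture period
    responses = 265        # contributions received during the one-week run

    proxy_response_rate = responses / unique_visits
    print(f"Proxy response rate: {proxy_response_rate:.1%}")  # 5.3% with the placeholder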

Demographics

Demographic data was collected under a variety of headings.  Approximately 10% of respondents called themselves “expert” in the subject matter, around 55% said they had “significant personal or professional experience”, 25% had “some personal or professional experience”, and only about 20% indicated that they had “read about it” or knew “relatively little” about the subject area.

Over 50% of respondents were aged 50 or above, 27% were aged 40 to 49, 17% were aged 30 to 39, and less than 5% were aged 19 to 29.

Origin

Approximately 39% of respondents were from the Americas, 39% from Europe, 19% from Asia and the Pacific, and the remaining 4% from Africa, the Middle East, or elsewhere.

Education level

A remarkable 72% of all respondents reported being educated “up to the post-graduate level”, with an additional 16% reporting education “up to graduate school”.  This is likely a result of the way the experiment was promoted (through academic list-servs and email invitations), although it could also represent a bias towards respondents who were more comfortable or familiar with this particular form of web engagement.

Significance of contribution

Two questions were asked which can be used as a guide to how important people thought their contributions were: “How long will you remember this story?” and “Who do you think should pay attention to this story?”.  For the first, approximately 70% stated that they would remember the story “Forever” or “For years”, suggesting that respondents felt strongly about the significance of the stories they were contributing.  For the second, over 70% thought that “The World” or “My Country” should pay attention to their story.

The significance of the anecdotes was echoed in the comments section and in several post-experiment interviews, where respondents indicated that they took the time to make a thoughtful contribution to the exercise.  This combination suggests that the results of the experiment are not trivial and can be used as valid material for the construction of serious future scenarios.

Comments and feedback

Feedback from the experiment focused on two areas: first, technical aspects of the user interface and survey design; and second, thematic thoughts or feedback related to the subject area.

Overall feedback was very positive, with a significant number of respondents indicating a positive response to the approach and a desire to learn more. Negative feedback was by and large constructive, with specific thoughts and recommendations for improving the capture process.  This in and of itself is remarkable, given the potential for spam and trolling which such an open approach to internet participation can attract.

Key comments related to the technical aspects of the experiment were:

  • Ambiguity about what was being asked for and the desire for “seed” examples or prompts
  • Desire for a clear, step-by-step model at the beginning to help users understand the process
  • Confusion over specific terms or words, especially with the signifier labels
  • Desire for immediate feedback or follow-up
  • Desire to share the results and see others’ stories and results
  • Technical frustrations such as with the text editing in Flash

Key comments related to the thematic material were:

  • Excellent and important approach on a timely question
  • Positive feedback on the use of a narrative capture approach
  • Recognition of the thought and research which went into the process
  • Enthusiasm for a distributed cognition approach to foresight
  • High level of interest in the results and outcome

Drivers identification

One of the key goals of this experiment was to see if a rich set of drivers and forces for scenario creation could be collected in a rapid, distributed fashion.  Respondents were asked to score their contribution in terms of the “Magnitude of Impact” on various topics, as well as to identify the relative level of uncertainty involved and the time frame of impact.

  • 160 stories were classified as having high Social impact (>75% score)
  • 49 stories were classified as having high Environmental impact (>75% score)
  • 127 stories were classified as having high Economic impact (>75% score)
  • 159 stories were classified as having high Political impact (>75% score)
  • 41 stories were classified as having high Technological impact (>75% score)

Note that a single story can be classified as having impact in multiple categories, leading to rich material for cross-impact interpretation.  Of these responses, 83 were classified as having a “High” level of uncertainty and 21 were classified as having a “Low” level of uncertainty.  Finally, 29 stories were classified as “Short term”, 88 as “Medium term” and 55 as “Long term”.  This mix of subject factors, uncertainty levels and time frames provided a rich basis for the identification and clustering of impact factors into critical certainties and critical uncertainties.
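To make the threshold logic concrete, here is a minimal Python sketch of the kind of classification described above.  The story records are my own illustration rather than the actual SenseMaker export format; the point is simply that a single story can clear the 75% bar in several categories at once.

    # Illustrative sketch of the >75% impact classification. The story
    # records are hypothetical, not the actual SenseMaker export format.
    from collections import Counter

    stories = [
        {"title": "Council outsources library services",
         "impact": {"Social": 82, "Environmental": 10, "Economic": 91,
                    "Political": 78, "Technological": 5}},
        {"title": "Sensor networks cut urban water losses",
         "impact": {"Social": 30, "Environmental": 80, "Economic": 60,
                    "Political": 20, "Technological": 88}},
    ]

    THRESHOLD = 75
    high_impact = Counter()
    for story in stories:
        for category, score in story["impact"].items():
            if score > THRESHOLD:
                high_impact[category] += 1  # one story can count in several categories

    print(dict(high_impact))
    # {'Social': 1, 'Economic': 1, 'Political': 1, 'Environmental': 1, 'Technological': 1}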

Subject tagging

Another goal was to test the use of folksonomic tags to identify interrelated impacts and causal relationships.  This approach was successfully tested in previous experiments using a Wikipedia-style form for fragment entry (email me for more details), producing a rich network of causal relationships between fragments.

In this experiment respondents were asked to type in subject tags for the “Primary” and “Secondary” impacts related to their story, separated by commas.  This produced tag phrases such as “helpless public health staff, talk about making change or really doing it,” or “improved scrutiny, greater accountability, improved staff morale”.

Past experiments used a pre-defined database with predictive text completion to successfully eliminate excessive overlap between tags.  This function was not implemented in the current experiment, resulting in completely free-form tag entry.  This had two effects.  First, the relationships between tags are non-standard, prohibiting automated relationship analysis between story fragments.  Second, tag phrases were far more open-ended, resulting in a richer qualitative input to each story.

In practice this meant that the “Primary” and “Secondary” tags provided rich, higher-level abstractions which are proving very useful in the scenario narrative creation.  But this came at the cost of the automatic relationship building so successfully employed in previous experiments.  The relative utility of both approaches will be explored in more detail later.
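As a simple illustration of the trade-off, the sketch below contrasts free-form tag parsing with the kind of controlled-vocabulary lookup that predictive text completion provides.  The vocabulary and matching rule are assumptions for illustration only; the example phrase is one of the actual responses quoted above.

    # Hypothetical contrast between free-form tag entry and a
    # controlled-vocabulary lookup. Vocabulary and matching rule are
    # illustrative assumptions only.
    raw_entry = "improved scrutiny, greater accountability, improved staff morale"

    # Free-form: split on commas and normalise whitespace and case.
    free_form_tags = [t.strip().lower() for t in raw_entry.split(",")]

    # Controlled: collapse each phrase onto a pre-defined tag when one of
    # its words matches, mimicking what predictive completion achieves.
    VOCABULARY = {"accountability", "morale", "scrutiny", "transparency"}

    def to_controlled(tag: str) -> str:
        for word in tag.split():
            if word in VOCABULARY:
                return word
        return tag  # no match: keep the free-form phrase

    print(free_form_tags)                              # open-ended, richer
    print([to_controlled(t) for t in free_form_tags])  # standardised, analysable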

Narrative archetypes

A final goal of the experiment was to test the process of using futures “archetypes” as frameworks for the semi-automated grouping of responses into pre-defined narrative structures with different meanings.  This work is based on Wendy Schultz’s content analysis of over 35 different scenarios generated in a range of different contexts.  Although Schultz’s work uses six different narrative archetypes, only three could be used for this experiment due to time and technical constraints of the current web system.

Although this aspect of the experiment is still being explored, preliminary results suggest that the coding along “Distinguishing Characteristics” for each narrative archetype was quite successful.  While some respondents reported confusion over the meaning and interpretation of the labels used, preliminary grouping of story fragments into narrative clusters based on self-coded archetype scores successfully collected fragments with the right emotional, social and political “tone” that each archetype was designed to represent.

The screenshot above provides an example of stories clustering around narrative archetype values.  The window on the left displays the titles of five stories with high values associated with the “Environmental and Social Balance” archetype.  The window on the right displays the text of the highlighted story.  The story describes a situation where different groups need to reassess their values and collaborate to create balanced policy.  The author called this resetting their “inner operating system”.  Both the tone and the language of this story are highly consistent with the values of this archetype, which typifies balance, harmony, equality and integration.
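For readers curious about the grouping mechanics, here is a rough Python sketch of clustering fragments by their self-coded archetype scores: each story is simply assigned to the archetype it scored highest on.  The records and score scale are hypothetical; only the archetype labels come from the experiment itself.

    # Rough sketch of grouping fragments by self-coded archetype scores.
    # The records and 0-100 score scale are hypothetical.
    from collections import defaultdict

    fragments = [
        {"title": "Resetting our inner operating system",
         "scores": {"Environmental and Social Balance": 90,
                    "Centralised control": 20,
                    "Free market exploration": 10}},
        {"title": "Austerity budgets, centralised decisions",
         "scores": {"Environmental and Social Balance": 15,
                    "Centralised control": 85,
                    "Free market exploration": 40}},
    ]

    clusters = defaultdict(list)
    for fragment in fragments:
        best = max(fragment["scores"], key=fragment["scores"].get)
        clusters[best].append(fragment["title"])

    for archetype, titles in clusters.items():
        print(f"{archetype}: {titles}")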

Additional analysis of the emotional content of anecdotes coded by archetype provides further detail.  Respondents were asked how their story made them feel, through a drop-down menu of various emotional states.  Clustering anecdotes by archetype uncovered rough groupings of emotional states, with the “Socio-ecological balance” archetype attracting stories coded primarily as “Angry” and “Informed”, the “Centralised control” archetype coded predominantly as “Sad”, and the “Free market exploration” archetype coded primarily as “Glad”.
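The tally itself is a straightforward aggregation.  A sketch, assuming a flat list of (archetype, emotion) records; the records below are invented, but the labels are the ones reported above.

    # Sketch of the per-archetype emotion tally. The records are
    # invented; only the labels come from the experiment.
    from collections import Counter, defaultdict

    records = [
        ("Socio-ecological balance", "Angry"),
        ("Socio-ecological balance", "Informed"),
        ("Socio-ecological balance", "Angry"),
        ("Centralised control", "Sad"),
        ("Free market exploration", "Glad"),
    ]

    emotions_by_archetype = defaultdict(Counter)
    for archetype, emotion in records:
        emotions_by_archetype[archetype][emotion] += 1

    for archetype, tally in emotions_by_archetype.items():
        print(archetype, tally.most_common())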

These two examples illustrate how the use of self-signified scenario archetypes can quickly and accurately cluster similar stories around shared values. Further analysis is currently under way.

Conclusion

Although this experiment was the result of a rapid collaboration under a tight deadline, I was surprised and pleased with the quality of contributions we received.  The deeper I dig into the data, the more impressed I am, both with the thoughtfulness and depth of the data collected and with the speed and efficiency with which it was gathered.  I am still familiarising myself with Snowden’s SenseMaker Suite software, but even the preliminary explorations outlined above have proven extremely interesting.  Overall I would judge the experiment a success, both as a “learning by doing” proof-of-concept and as an actual content collection and analysis procedure.

In the coming weeks I’ll be putting up more of the substantive results.  I am currently grouping drivers by theme and impact, using a combination of their scores and subject tags.  After this is complete I’ll take some time to reflect on the overall process and how this fits into the larger intellectual framework of foresight and scenario planning.  In the meantime, please feel free to comment and share this.  We look forward to your thoughts and recommendations for future tests.

2 Comments

  1. Greg Rippon
    Posted May 5, 2010 at 4:30 am | Permalink

    Hi Noah,

    Very interesting – I was a participant. I am looking at using Sensemaker on a number of projects, one in particular for an Industry in Australia.

    It would be good if we could connect as what you are doing now, drivers, is what I am interested in using it for.

    Please contact me if you can via email ASAP.

    Thanks
    Greg Rippon

  2. Noah Raford
    Posted May 10, 2010 at 8:28 pm | Permalink

    Thanks Greg, I’ll email you offline ASAP.

