Knowledge Service Methodology


Over the last few months I’ve been working on developing a Knowledge Service Methodology (400K, PDF) for Templeton College, University of Oxford (where I work) and the Metokis Project (EC Framework 6). I’ve summarised the conclusions below, but the full (albeit first-draft) document can be found at the link above. It’s 60-odd pages – though there are pictures 🙂 – and I’m aware some parts flow better than others; nonetheless, I thought it might be of interest. Do let me know any comments, criticisms etc. I’m thinking about wikifying the thing (and so practising what I preach …), but I thought this might be a sensible first step.

  • We need to optimise the moderating layer in knowledge systems to improve knowledge worker productivity.
  • Optimising this layer is essentially a context problem.
  • This context problem is typically addressed in one of two broad ways.
  • The first way – the positivist approach – looks to model actor, task and system so as to embed context into the moderating layer. This plays to the strengths of the IT agents in the moderating layer.
  • The second way – the interpretivist approach – argues that context can never be successfully embedded into a system, because individuals and their environments are complex and ever-changing. This acknowledges the needs of the human agents in the moderating layer.
  • Non-linear dynamic systems provide a model that might offer a valuable compromise between the positivist approach, which supports much of context-aware computing, and the interpretivist approach, which makes sense of how we as human agents work with and develop context (a toy illustration of this sensitivity follows the list).
  • Such a model offers valuable insights into how, and when, it is best to model user, task and system in a way that improves the performance of the moderating layer.
  • Consequently, using this model to help support knowledge systems should improve knowledge worker productivity.
  • Happily, tools already exist and are being developed that support implementations of this model.
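
To make the point about non-linear dynamic systems a little more concrete, here is a throwaway sketch (mine, not from the methodology document) using the logistic map, a standard toy non-linear dynamic system, in Python. Two starting “contexts” that differ by 0.0001 track each other for a while and then diverge completely – which is the intuition behind why a context model cannot simply be embedded once and left alone:

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n).
# With r = 3.9 the system is chaotic: a tiny difference in the
# starting point grows until the two trajectories have nothing
# in common.
def trajectory(x0, r=3.9, steps=30):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.5000)   # one "context"
b = trajectory(0.5001)   # an almost identical one
for n in (0, 10, 20, 30):
    print(n, round(a[n], 4), round(b[n], 4))
```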

The argument from theory and practice outlined above indicates various lessons that need to be taken to heart in any effort to optimise the moderating layer of a knowledge system.

The key lessons identified are:

  • IT agents need semantic metadata to be able to moderate content effectively. Economic considerations mean that the only type of metadata that can be supported is metadata that is created automatically. Ideally it will be created transparently rather than heuristically (a toy sketch of the distinction follows this list).
  • Taxonomies and ontologies are snapshots of consensus (the system, users and tasks they depict are fluid, fragmented and political). Treating them as anything else limits the ability of the system to be context-sensitive; hence the need for some ontological oscillation.
  • Where used, taxonomies and ontologies inherently affect the behaviour of the system.
  • External modellers also potentially bias the model. The best modeller of the actor is the actor; the best modeller of the group is the group; the best modeller of the organisation is the organisation. This is not to say that there is no place for external modellers: they may provide very valuable foils and “reality checks” to actor, group and organisation. Nonetheless, the less observer bias there is, the more useful the model.
  • To be relevant, context must be based on consensus.
  • Knowledge systems that are context-aware, and so consensus-enabled, are more resilient if developed from the bottom up rather than managed from the top down (Snowden’s Children’s Party metaphor).
  • Projects and plans form a natural frame for a dynamic environment, as shown by Weick’s work on Sensemaking.
  • Effective decisions are based on consensus among perspectives and are action-oriented.
  • The mechanisms that work for individual content moderation should work for group content moderation thanks to the fractal nature of complex systems.
  • Tend to openness where possible (open-source content; “given enough eyeballs, all bugs are shallow”; scale-free and bell-curve distributions of networks).
  • Broader, open aggregation allows for more resilient systems.
  • Overt or forced user profiling narrows the possible aggregations of terministic screens. The context cycle is: conversation, document, structure, conversation, document, and so on.
  • Using “dialogue” tools, such as Social Network Analysis, to identify network patterns allows you to view emergent structures, which can then be encouraged (see the sketch after this list).
  • The practical lessons of current social tools suggest that the system should be only as resilient as it needs to be, and as reliant on self-policing as it can be, as shown by the minimal editorial input into wikis.
  • Practical experience from aggregators and communications networks suggests that individuals manage channels, not content.
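
On the first lesson above, here is a minimal sketch of the difference between transparently and heuristically created metadata – my own illustration with made-up function names, not anything specified in the methodology. Transparent metadata is captured as a by-product of where and how the content is produced; heuristic metadata is guessed from the content itself:

```python
import re
from collections import Counter
from datetime import datetime, timezone

def transparent_metadata(author, project):
    """Captured from the authoring context as a by-product of normal
    work; the user never has to supply or correct it."""
    return {
        "author": author,
        "project": project,
        "created": datetime.now(timezone.utc).isoformat(),
    }

def heuristic_metadata(text, top_n=5):
    """Inferred from the content itself; cheap, but the guessed
    'topics' may not match what the author actually meant."""
    words = re.findall(r"[a-z]{4,}", text.lower())
    return {"topics": [w for w, _ in Counter(words).most_common(top_n)]}
```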
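
And on the Social Network Analysis lesson, a small sketch of how looking at who actually talks to whom surfaces emergent structures. Again this is purely illustrative – the edge list is invented – and it assumes the networkx library is available:

```python
# pip install networkx
import networkx as nx

# Hypothetical "who replies to whom" data harvested from a team's
# mailing list, wiki talk pages or blog comments.
edges = [
    ("ann", "bob"), ("bob", "cat"), ("ann", "cat"),
    ("cat", "dan"), ("eve", "ann"), ("eve", "cat"),
    ("fred", "gail"), ("gail", "harry"), ("fred", "harry"),
]
g = nx.Graph(edges)

# Degree centrality shows who the conversation actually flows through,
# regardless of the formal org chart.
for person, score in sorted(nx.degree_centrality(g).items(),
                            key=lambda kv: -kv[1]):
    print(f"{person:>6}  {score:.2f}")

# Connected components (or community detection on larger graphs)
# expose the emergent clusters that can then be encouraged.
for cluster in nx.connected_components(g):
    print(sorted(cluster))
```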

And it’s these lessons that form the basis of the Methodology.
