


Wednesday, May 05, 2004
 




Angels Standing in the Sun (J. W. M. Turner, 1846)





What About Knowledge Claim Evaluation?


It must be the times. Go to any Knowledge Management (KM) professional meeting. Read any KM journal or popular magazine. Join any KM newsgroup. The story is the same. Except for those pesky KMCI types and a few of their friends, no one seems to be interested in practices, methods, or theory about evaluating knowledge claims. Now, given the general and long-standing interest in decision support, and the fact that KM is often justified as improving it, isn't this passing strange?

The problem of knowledge claim evaluation is a decision problem itself. It is the problem of selecting the best among competing knowledge claims, and it exists whether one thinks that knowledge is a type of belief network or a type of semantic network. For even if one thinks that semantic networks are only information, one should still care very much about the relative quality of information and its relationship to knowledge (viewed as belief), and should therefore select among competing knowledge claims by deciding which one has the highest quality. So why aren't KM professionals concerned about Knowledge Claim Evaluation?

Explanation 1: "The Old Knowledge Management" and Knowledge Sharing

"The Old Knowledge Management" is about Knowledge Sharing. Its value propositions are better decision support, higher job productivity and performance, and capture of knowledge assets that would otherwise leave the organization. The Old Knowledge Management is not about making new knowledge, problem solving or innovation. So why should it be concerned with Knowledge Claim Evaluation, the sub-process that allows us to decide what is knowledge and what is "just information"?

Even if one thinks there's some truth to this explanation, the Old Knowledge Management has now been supplemented by a concern for knowledge making and innovation. "Second Generation Knowledge Management" (SGKM) has arrived, and KM is concerned with much more than knowledge sharing, as a visit to any of the major periodicals and news magazines in the field will attest. Yet the appearance of SGKM has hardly increased the concern for Knowledge Claim Evaluation within the mainstream of KM.

Explanation 2: The Belief that Knowledge Making in Business is a Practical Activity and Includes No Time or Resources for Knowledge Claim Evaluation

I've heard from some that Knowledge Claim Evaluation is not very important in KM, because KM is a business activity, process, or discipline, not a science. The implication, of course, is that science uses Knowledge Claim Evaluation because of its deliberative, exacting, theoretical, and precise character, while business, with its much more imprecise and action-oriented practical reasoning, just can't afford the time and effort that the deliberative approach to knowledge making requires.

This line of reasoning, if it represents a widespread attitude in KM, may provide part of the reason why there is so little concern about Knowledge Claim Evaluation in KM. As I explained in All Life Is Problem Solving, however, all of our non-routine knowledge making, whether in science, business, or any other area of organizational or human behavior, involves problem recognition, formulating tentative solutions, and error elimination. In organizations we do perform Knowledge Claim Evaluation: it is how we attempt to eliminate errors in our knowledge claims. The only important questions are whether we do so with full awareness of what we are doing, and whether or not our practices produce knowledge claims that are effective in raising the quality of our business process performance.

Explanation 3: The Belief that Knowledge Claim Evaluation Is Based On Authority

According to this explanation, Knowledge Claim Evaluation is not of great concern to KM because knowledge claims cannot be justified as true through evaluation, and so there are only three theories of evaluation that count in organizations anyway: (1) what managers think, (2) what experts think, and (3) what one's community thinks. In all three cases, some form of authority (managerial, expert, or community consensus) "justifies" our knowledge claims.

This is another view that may explain why Knowledge Claim Evaluation is not of greater interest to KM. If only authority can justify our knowledge claims, the issue of how we ought to select among knowledge claims is of no importance. We have no choice. What we select is determined by various authorities, by politics and not by any rational procedure.

Explanation 4: The Belief That Knowledge Is Socially Constructed, Determined By Social And Cultural Background, and Unaffected By Reality

Social constructivism, an epistemological theory held by many in the social sciences, maintains that reality, as well as our knowledge of it, is socially constructed, and that such knowledge constructions are unaffected by an independently existing reality. Social constructivism often goes along with two other beliefs. First, the distinction between objective and subjective knowledge is meaningless, because all knowledge is a function of our social and cultural context and can only be justified relative to that context. And second, such justification can only be provided by community consensus, since only it reliably reflects the influence of social and cultural context on our knowledge. Because it takes community consensus to be the only legitimate basis of knowledge, explanation 4 partly agrees with explanation 3. It holds that Knowledge Claim Evaluation is a simple matter of determining whether a knowledge claim network is backed by a community consensus. So we need not spend our time worrying about effective methods of Knowledge Claim Evaluation. All we need do is see to it that knowledge is effectively shared so that the community is informed. Then we just need to wait for consensus to emerge.

Of course, the problem with this reasoning begins with reality. Reality is not socially constructed. Our knowledge of it is certainly mediated by our social networks, along with our psychological predispositions and biological heritage, but it is also influenced by reality itself, which exists apart from our social construction of it. Since reality and our knowledge of it are at least partly independent, the issues of the correspondence of our knowledge claims with reality, i.e., of their truth, and of which of a competing set of knowledge claims is closest to the truth, need to be faced. And since neither correspondence to reality nor closeness of approach to the truth can be measured directly, facing these issues means facing the issue of how we can effectively evaluate our knowledge claims.

We have learned enough about knowledge claim evaluation over the centuries to know that no form of authority, including community consensus, is an effective criterion for evaluating knowledge claims. Knowledge claims cannot be validated by community consensus; rather, they should be continuously tested and evaluated in order to eliminate error.

The Job Ahead

Whether KM's lack of concern about Knowledge Claim Evaluation is due to the idea that only knowledge sharing is important, or to the idea that business is imprecise and has neither the need, the time, nor the resources for it, or to the idea that Knowledge Claim Evaluation in business can only be based on authority, or to the idea that reality and our knowledge of it are both socially constructed, or to all four, or to some other reasons that haven't occurred to me, is interesting, but, in the end, secondary. What is important is that this lack of concern means that KM has not been doing anything to enhance the key sub-process in the Knowledge Life Cycle, Knowledge Claim Evaluation. It hasn't been doing anything to distinguish among knowledge claims according to their quality, which also implies that it hasn't been doing anything to distinguish objective knowledge from information, or to measure the success of knowledge claim evaluation in producing effective knowledge.

Knowledge Claim Evaluation is not ignored in every field of business application today, not even in fields that are closely related to Knowledge Management. Knowledge Discovery in Databases and Data Mining (KDD) has, since its inception in the 1990s, considered validating models an important activity, and it continues to produce useful research on validation criteria that are applied in model estimation. But KDD has had little effect on KM, perhaps because its orientation toward using formal reasoning in the development of its own perspectives is foreign to most KM practitioners.
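To give a flavor of what such validation criteria look like in practice, here is a minimal, hypothetical sketch. It is not drawn from any particular KDD method, and it is not the approach Mark McElroy and I develop; it simply treats two competing models of the same data as rival knowledge claims and prefers the one with the lower error on data held out from estimation.

```python
# A minimal sketch (assumed example, not a published KDD procedure): two
# competing models of the same data play the role of rival knowledge claims,
# and the claim with the lower held-out error is selected.
import numpy as np

rng = np.random.default_rng(2004)

# Synthetic data with a noisy quadratic relationship between x and y.
x = np.linspace(0.0, 10.0, 200)
y = 0.5 * x**2 - 2.0 * x + rng.normal(scale=3.0, size=x.size)

# Hold out a quarter of the observations for validation.
idx = rng.permutation(x.size)
train, test = idx[:150], idx[150:]

# Two rival "knowledge claims": a linear model and a quadratic model,
# both estimated on the training data only.
claims = {
    "linear": np.polyfit(x[train], y[train], deg=1),
    "quadratic": np.polyfit(x[train], y[train], deg=2),
}

def held_out_mse(coeffs):
    """Mean squared error of a fitted polynomial on the held-out data."""
    residuals = y[test] - np.polyval(coeffs, x[test])
    return float(np.mean(residuals**2))

scores = {name: held_out_mse(c) for name, c in claims.items()}
best = min(scores, key=scores.get)
print(scores)
print("Preferred claim:", best)
```

The point is not the arithmetic but the discipline: the preference among competing claims is driven by a measured comparison against data the claims were not fitted to, rather than by the authority of whoever proposed them.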

The job ahead is to develop methods of Knowledge Claim Evaluation that will enable knowledge workers to do a better job of selecting among competing knowledge claims. In our book, Key Issues In the New Knowledge Management, KMCI Press/Butterworth-Heinemann, 2003, Chapter 5, Mark McElroy and I have begun this process by outlining a theory of fair comparison and two formal approaches to measuring "truthlikeness". But this is just the first little bit of work in an area that requires substantial effort.



12:24:42 PM


© Copyright 2004 Joe Firestone.