Monday, May 14, 2012

Excerpts from Richards Heuer’s “Psychology of Intelligence Analysis”

A few perhaps counterintuitive quotes from Richards Heuer, author of “Psychology of Intelligence Analysis,” whose title could just as well have been “The Psychology of Analysis.”

The book is available free in PDF format from the CIA’s website.


Scientists, engineers, and other analytically driven professionals should heed the warnings that Heuer brings to our attention in his book.

My comments follow each excerpt.

  • The question is not whether one’s prior assumptions and expectations influence analysis, but only whether this influence is made explicit or remains implicit. The distinction appears to be important. In research to determine how physicians make medical diagnoses, the doctors who comprised the test subjects were asked to describe their analytical strategies.  Those who stressed thorough collection of data as their principal analytical method were significantly less accurate in their diagnoses than those who described themselves as following other analytical strategies such as identifying and testing hypotheses.  Moreover, the collection of additional data through greater thoroughness in the medical history and physical examination did not lead to increased diagnostic accuracy.  (p. 41)

If you stress the collection of data alone, you raise your chance of failure.  This may be shocking to hard-core, data-driven people.  To increase the chances of success, one must develop structured methods and analytical strategies.  More data will not help without structured analysis!  Furthermore, structured analysis will reveal whether gathering more data is just a waste of time and resources.

  • Analysts should keep a record of unexpected events and think hard about what they might mean, not disregard them or explain them away.  It is important to consider whether these surprises, however small, are consistent with some alternative hypothesis. One unexpected event may be easy to disregard, but a pattern of surprises may be the first clue that your understanding of what is happening requires some adjustment, is at best incomplete, and may be quite wrong.  (p. 74)

When something unexpected happens that does not fit the analyst’s working model, one must evaluate whether the model needs updating.  Always pay close attention to unexpected events that challenge the working model.

  • Tactical indicators are specific reports of preparations or intent to initiate hostile action or, in the recent Indian case, reports of preparations for a nuclear test. Ben-Zvi found that whenever strategic assumptions and tactical indicators of impending attack converged, an immediate threat was perceived and appropriate precautionary measures were taken.  When discrepancies existed between tactical indicators and strategic assumptions in the five cases Ben-Zvi analyzed, the strategic assumptions always prevailed, and they were never reevaluated in the light of the increasing flow of contradictory information. Ben-Zvi concludes that tactical indicators should be given increased weight in the decision making process. At a minimum, the emergence of tactical indicators that contradict our strategic assumption should trigger a higher level of intelligence alert.  It may indicate that a bigger surprise is on the way. (p. 74-75)

The emergence of evidence that contradicts the working model may indicate that a big surprise is on the way.

  • New ideas are, by definition, unconventional, and therefore likely to be suppressed, either consciously or unconsciously, unless they are born in a secure and protected environment. Critical judgment should be suspended until after the idea-generation stage of analysis has been completed. A series of ideas should be written down and then evaluated later. This applies to idea searching by individuals as well as brainstorming in a group. Get all the ideas out on the table before evaluating any of them.  (p. 77)

Established ideas speed development, but one must constantly guard against dismissing new ideas simply because they do not fit the model.  Analysts must always be open to challenging the established model.


  • Analysis identifies and emphasizes the few items of evidence or assumptions that have the greatest diagnostic value in judging the relative likelihood of the alternative hypotheses. In conventional intuitive analysis, the fact that key evidence may also be consistent with alternative hypotheses is rarely considered explicitly and often ignored.  (p. 108)

Data is useful only insofar as it has diagnostic value; data without diagnostic value is usually worthless to the problem at hand.  Collecting “more data” is often a waste of time and resources if it does nothing to drive the analysis.  The sketch below shows one way to make diagnostic value explicit.
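To make “diagnostic value” concrete, here is a minimal sketch in Python (not Heuer’s own method; the evidence items and probabilities are entirely hypothetical) that scores each item of evidence by its likelihood ratio across two competing hypotheses:

    # Diagnosticity sketch: evidence helps only if its likelihood differs
    # across hypotheses.  All names and numbers here are hypothetical.

    def likelihood_ratio(p_given_h1, p_given_h2):
        """How strongly an item of evidence discriminates H1 from H2."""
        return p_given_h1 / p_given_h2

    # (P(evidence | H1), P(evidence | H2)) for three items of evidence.
    evidence = {
        "troops massing on border": (0.90, 0.30),  # diagnostic: favors H1
        "hostile propaganda":       (0.80, 0.75),  # nearly useless: fits both
        "reserves mobilized":       (0.70, 0.10),  # diagnostic: favors H1
    }

    for item, (p1, p2) in evidence.items():
        lr = likelihood_ratio(p1, p2)
        verdict = "diagnostic" if lr > 2 or lr < 0.5 else "low diagnostic value"
        print(f"{item}: LR = {lr:.2f} ({verdict})")

An item that is nearly equally likely under every hypothesis, like the propaganda above, tells you nothing no matter how much of it you collect.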

  • The most probable hypothesis is usually the one with the least evidence against it, not the one with the most evidence for it. Conventional analysis generally entails looking for evidence to confirm a favored hypothesis.  (p. 108)

Evidence does not prove any hypothesis, according to our best understanding of the philosophy of science; evidence can only disprove a hypothesis.  Thus, the generation of multiple hypotheses is crucial in any analytical exercise, because of the dangerous tendency to settle upon the first hypothesis that the evidence seems to confirm.  Heuer’s Analysis of Competing Hypotheses (ACH) method was invented to address this issue; a toy version is sketched below.
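Here is a toy sketch of the ACH scoring idea in Python.  The hypotheses, evidence, and consistency ratings are all hypothetical, and the real method involves far more judgment, but the ranking rule is the one Heuer describes: prefer the hypothesis with the least evidence against it.

    # Toy Analysis of Competing Hypotheses (ACH) matrix.
    # "+" = evidence consistent with a hypothesis, "-" = inconsistent,
    # "0" = neutral.  All entries below are hypothetical.

    hypotheses = ["H1: attack imminent", "H2: exercise only", "H3: bluff"]

    matrix = {
        "troop movements":    ["+", "+", "+"],  # fits all: not diagnostic
        "leave cancelled":    ["+", "-", "0"],
        "no live ammunition": ["-", "+", "+"],
        "diplomatic demands": ["+", "-", "+"],
    }

    # Rank hypotheses by inconsistent evidence: least evidence against wins.
    for i, h in enumerate(hypotheses):
        against = sum(1 for scores in matrix.values() if scores[i] == "-")
        print(f"{h}: {against} item(s) of evidence against")

Note that “troop movements,” being consistent with every hypothesis, affects no ranking at all, which is exactly the previous excerpt’s point about diagnostic value.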

  • A familiar form of this error is the single, vivid case that outweighs a much larger body of statistical evidence or conclusions reached by abstract reasoning. When a potential car buyer overhears a stranger complaining about how his Volvo turned out to be a lemon, this may have as much impact on the potential buyer’s thinking as statistics in Consumer Reports on the average annual repair costs for foreign-made cars. If the personal testimony comes from the potential buyer’s brother or close friend, it will probably be given even more weight. Yet the logical status of this new information is to increase by one the sample on which the Consumer Reports statistics were based; the personal experience of a single Volvo owner has little evidential value.  (p. 117)

Things that are close and personal to us are sometimes the very things that cloud our thinking when we do analysis.  The arithmetic below shows how little one anecdote should logically move a large sample.
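Heuer’s arithmetic point can be made explicit.  Assuming hypothetical numbers (a 400-car sample behind the published statistic and a $350 average annual repair cost), one vivid lemon barely moves the average:

    # One vivid anecdote vs. a large sample.  All numbers are hypothetical.
    n = 400                # cars behind the published repair-cost statistic
    mean_repair = 350.00   # average annual repair cost, in dollars
    lemon = 2500.00        # the stranger's unusually bad Volvo

    # Logically, the anecdote just adds one observation to the sample.
    new_mean = (mean_repair * n + lemon) / (n + 1)
    print(f"mean moves from ${mean_repair:.2f} to ${new_mean:.2f}")
    # mean moves from $350.00 to $355.36 -- a shift of about 1.5%

Yet the anecdote, especially from a brother or close friend, routinely outweighs the statistic in the buyer’s mind.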

  • People expect patterned events to look patterned, and random events to look random, but this is not the case. Random events often look patterned. The random process of flipping a coin six times may result in six consecutive heads. Of the 64 possible sequences resulting from six coin flips, few actually look “random.”  This is because randomness is a property of the process that generates the data.  Randomness may in some cases be demonstrated by scientific (statistical) analysis. However, events will almost never be perceived intuitively as being random; one can find an apparent pattern in almost any set of data or create a coherent narrative from any set of events.  (p. 130)

Sometimes we see patterns in random events because our brains are built to look for patterns.  This is a common analytical pitfall; the simulation below shows how often pure chance produces streaks.
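A quick simulation makes the point.  Under the six-flip setup from the excerpt, a fair coin produces a run of four or more identical outcomes about a quarter of the time:

    import random

    # How often does a fair coin produce a run of 4+ in just 6 flips?
    def longest_run(flips):
        best = run = 1
        for prev, cur in zip(flips, flips[1:]):
            run = run + 1 if cur == prev else 1
            best = max(best, run)
        return best

    trials = 100_000
    streaky = sum(
        longest_run([random.choice("HT") for _ in range(6)]) >= 4
        for _ in range(trials)
    )
    print(f"P(run of 4+ in 6 flips) = {streaky / trials:.3f}")
    # The exact answer is 16/64 = 0.25: a quarter of all random
    # six-flip sequences contain a streak that looks "patterned".

Sequences that people intuitively judge “random,” like HTHHTH, are actually only a fraction of what a random process produces.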

  • Some research in paleobiology seems to illustrate the same tendency.  A group of paleobiologists has developed a computer program to simulate evolutionary changes in animal species over time. But the transitions from one time period to the next are not determined by natural selection or any other regular process: they are determined by computer-generated random numbers. The patterns produced by this program are similar to the patterns in nature that paleobiologists have been trying to understand.  Hypothetical evolutionary events that seem, intuitively, to have a strong pattern were, in fact, generated by random processes.  (p. 130)

Even field experts can see patterns where there are none, because they are looking for patterns.  This is not to say that patterns never exist, but it should remind us to ask whether we are forcing patterns onto the data where there are none.  The random-walk sketch below reproduces the effect in miniature.
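The paleobiology result is easy to reproduce in miniature.  The sketch below is a deliberately simplified stand-in for the simulation Heuer describes: it models a trait such as body size as a pure random walk, with no selection or any other directional process, yet large net drifts that read as “evolutionary trends” are routine.

    import random

    # A species trait as a pure random walk: each generation drifts up or
    # down by chance alone.  No selection, no directional process.
    def random_walk(generations=200):
        trait, history = 100.0, []
        for _ in range(generations):
            trait += random.gauss(0, 1)   # pure chance
            history.append(trait)
        return history

    walk = random_walk()
    print(f"net change over 200 generations: {walk[-1] - walk[0]:+.1f}")
    # Large net drifts are common, and the eye reads them as a "trend"
    # even though the generating process is entirely random.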