Fallacies of Reason

Too often, we base our expectations on highly subjective sources of information. In determining the likelihood of success for a particular project or program, we should adopt a scientific approach to assessing and analyzing data.

During the presidential election of 1936, the Literary Digest undertook a massive effort to forecast the outcome of the race between incumbent Franklin D. Roosevelt and Kansas Governor Alfred M. Landon. The Digest distributed more than 10 million ballots and received about 2.4 million responses. It then reported its prediction: The Republican candidate, Landon, would receive 370 electoral votes; the Democratic candidate, Roosevelt, would receive 161.

Given the sheer amount of data collected, the prediction seemed reasonable. As shown in Figure 1, however, the forecast was wrong. Roosevelt won the election by an overwhelming margin, receiving 523 electoral votes to Landon’s 8. Why was the forecast so inaccurate?

All too often, we base our conclusions on:

• What we think we see.

• What we want to see.

• What someone else wants us to see.

In doing so, we draw false conclusions that can lead to poor, and sometimes costly, decisions.

What We Think We See
The intent of the Literary Digest’s research was to predict how the entire voting population would cast its ballots in the 1936 presidential campaign. The number of ballots distributed and the corresponding number of responses would lead one to expect some level of accuracy in the prediction. Unfortunately, the researchers missed the mark, and thus misled readers. (See Figure 1.) The fallacy was not the prediction itself; rather, it was the way the Digest came to its conclusion.

The 10 million ballots were distributed to addresses collected from the Digest’s subscription lists as well as from telephone directories and automobile registration lists, which seems like a reasonable approach to conducting research. The researchers thought they had a good representation of the U.S. electorate and therefore expected their prediction to be accurate. But the year was 1936: Magazine subscriptions, telephones and automobiles were not evenly distributed throughout the country; the households that had them skewed wealthier, and wealthier voters leaned toward Landon. The consequence was flawed reasoning (a biased sample) applied to factual premises (the survey results), which produced a false conclusion (an inaccurate prediction).
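
The damage a biased sampling frame does is easy to demonstrate. Below is a minimal sketch in Python using invented support figures, not the actual 1936 data: when the frame over-represents one candidate’s supporters, even millions of responses simply reproduce the frame’s bias, while a far smaller random sample of the whole population lands near the truth.

```python
import random

random.seed(1936)

# Invented, illustrative numbers -- not the actual 1936 data.
POPULATION_SUPPORT_A = 0.60  # 60% of all voters back candidate A
FRAME_SUPPORT_A = 0.40       # but only 40% of phone/car/subscriber households do

def poll(support_a: float, n: int) -> float:
    """Simulate polling n respondents; return the share backing candidate A."""
    return sum(random.random() < support_a for _ in range(n)) / n

# A huge sample drawn from the biased frame vs. a modest random sample.
print(f"Biased frame, 2,400,000 responses: {poll(FRAME_SUPPORT_A, 2_400_000):.1%} back A")
print(f"Random sample, 1,000 responses: {poll(POPULATION_SUPPORT_A, 1_000):.1%} back A")
```

No amount of additional data drawn from the same skewed frame corrects the error; the larger sample only makes the wrong answer more precise.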

We all view information, processes and outcomes differently. Our perception sets an expectation. But perception is relative.

Expectation based on perception often disappoints us. We think a person has a good quality in one dimension; therefore, we expect he or she will have good qualities in other dimensions. In essence, we attribute characteristics to a person that are consistent with what we already believe about that person. Because we expect someone to be honest or to provide good information, we perceive him or her as doing so in every case. This “halo effect” shapes the decisions we make and can lead to false conclusions.

Expectation often is influenced by the desire to have our beliefs confirmed: The more we perceive our beliefs as being supported, the more readily we interpret what we see as supporting them. In January 2008, in the town of Stephenville, Texas, community members saw something in the sky. People who had already experienced a UFO “sighting” now had additional data supporting their belief. And when a retired military pilot argued against the UFO sighting on “Larry King Live” by suggesting it was a government project, he gave the nonbelievers additional basis for their disbelief.

By letting expectation and desire drive our perspective, we miss the opportunity to better understand an issue that could help us achieve more accurate conclusions.

What We Want to See
Seeking confirmation is inherent in our nature. We look for information that confirms our beliefs rather than information that disconfirms them. When was the last time someone asked your opinion only to be disappointed because it differed from what they wanted to hear?

People often purchase books, subscribe to journals, watch news channels or attend conference sessions to gather information that supports their positions. Republicans at large did not rush out to purchase Al Gore’s book An Inconvenient Truth. Conversely, Democrats generally aren’t logging on to Amazon.com in droves to purchase Wynton Hall’s book, The Right Words: Great Republican Speeches.

We read, listen to and seek out information that will confirm our positions, ideas and concepts rather than risk being proven wrong by exposing our beliefs to challenge. This can result in conclusions based on incomplete information.

What Someone Else Wants Us to See
The influence of others often has the greatest impact on how we draw conclusions. We may be rock solid in our position, yet when one persuasive person or group tells us something different, we fold.

Authority is the biggest culprit in influencing how we view information and draw conclusions. People have an underlying tendency to obey authority figures, and Enron, WorldCom and HealthSouth all felt the consequences of this particular lapse in reason.

Groups also influence our views and decisions. When members of a group think alike, they insulate themselves from outside opinions. Surrounding yourself with others who think as you do limits the information available, as well as the options for reaching different conclusions and decisions. Even when members hold differing views, as some change their opinions, greater pressure falls on those who do not. This often leads to groupthink, which can be disastrous, as in the case of NASA’s decision to launch the Challenger in 1986.

These fallacies of reason are not limited to politics, government and UFOs. A learning leader for a large corporate entity decided to launch a coaching initiative at a base cost of $30,000 per person, with 15 people participating: a $450,000 commitment. When challenged on the expense, she revealed the reasoning behind her decision: “Everyone knows that executive coaching can improve job performance.”

How can chief learning officers avoid fallacies of reason? They can open themselves up to scientific thinking.

Scientific Approach to Reason
Thinking like a scientist doesn’t mean you have to become one. Nor does it mean you have to routinely conduct controlled experiments. It means addressing a few simple questions when asked to draw conclusions about an issue, make a decision and/or support a claim.

1. What is the claim? The first question defines the claim or the issue at hand. For example: “Everyone knows that executive coaching can improve job performance.” Who can argue with a statement like that? The claim lacks any real meaning. To develop meaning, specifics are needed: A claim must be “operationalized” to get a real picture of what is being stated. The intent is to get measurable clarification and a clear understanding.

Questions that could be asked if such a claim is made about executive coaching may include:

• How do you define executive coaching?

• What business opportunities can be improved?

• What is happening on the job that needs to change?

• What will people learn through that method that they don’t already know?

• Who needs coaching?

2. What evidence supports the claim? Evidence can mislead us, so our conclusions must not be drawn from the quantity of evidence alone; the quality of the evidence is just as important. More than 2 million responses out of a potential 10 million, as in the 1936 presidential forecast, represent a lot of data, but the data came from a select group. To counter this problem, look for patterns of evidence supporting a claim. More important, look for evidence that disproves the claim. When evidence is presented to you, ask questions about its source, the potential bias of that source and the methodology used to develop the evidence.

3. What are alternative explanations? If evidence of the claim indicates, for example, that executive coaching actually increased sales by 30 percent and improved efficiencies by 45 percent, then ask for alternative explanations for that performance. Other factors might have contributed to the sales and efficiency gains. How does the executive coaching stack up? If the evidence supporting the claim does not account for other factors, then the evidence is meaningless. If the evidence does account for other factors, then what are those factors and how much did they contribute?
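
As a rough illustration of that accounting, consider the sketch below; every figure in it is invented for illustration, not drawn from any actual program. Subtracting the estimated contributions of the alternative explanations shows how little of a headline gain may be left for the initiative itself to claim.

```python
# Hypothetical attribution sketch -- all figures are invented for illustration.
observed_sales_gain = 0.30  # the 30% increase credited to executive coaching

# Assumed contributions of alternative explanations for the same gain.
other_factors = {
    "overall market growth": 0.12,
    "new product launch": 0.08,
    "seasonal price promotion": 0.05,
}

# Whatever the alternatives cannot explain is the most coaching could claim.
residual = observed_sales_gain - sum(other_factors.values())
print(f"Gain left for coaching to explain: {residual:.0%}")  # prints 5%
```

In practice each contribution would itself have to be estimated and defended, but even this naive subtraction makes the question concrete: a claim that ignores the other factors is, by default, claiming the full 30 percent.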

4. How reasonable are the explanations? Once the alternative explanations for the claim are identified, ask: How reasonable are they? Can the explanations be tested, or are they vague and ambiguous? How simple are the explanations? If they are too complicated with too many assumptions, throw them out. Look for the explanations with the fewest assumptions. Finally, look at the explanations (including the executive coaching) and see how they stack up against well-established knowledge.

Our conclusions too often are drawn from what we think we see, what we want to see or what someone else wants us to see. Wouldn’t we see more clearly if we fine-tuned our vision by taking a more scientific approach to our reasoning?