In a commentary piece, Kahneman (1991) characterised the judgment and decision making field by three features: the role of the normative theory of rational belief and choice, the emphasis on risky choice, and the cognitive, rather than social or emotional, focus. Put simply, the judgment and decision making field aims to explain the cognitive basis of human thinking, and in particular, its departure from rationality. This latter aspect is known as the heuristics and biases tradition and was pioneered by Kahneman and Tversky in the early 1970s (e.g. Tversky & Kahneman, 1971, 1974). This area of work has attempted to understand human rationality (or irrationality) by examining the biases we are prone to and their basis in heuristic processes.

4.2.1 Heuristics and Biases tasks

Law of large numbers problems

Tversky and Kahneman (1974) gave their participants the following problem:

A certain town is serviced by two hospitals. In the larger hospital about 45 babies are born each day, and in the smaller hospital about 15 babies are born each day. As you know, about 50 percent of all babies born are boys. However, the exact percentage varies from day to day. Sometimes it may be higher than 50 percent, sometimes lower.

For a period of one year, each hospital recorded the days on which more than 60 percent of the babies born were boys. Which hospital do you think recorded more such days?

• The larger hospital

• The smaller hospital

• About the same (that is, within 5% of each other)

According to the statistical law of large numbers, the greater the sample size, the more closely it represents the population. By this rule, the larger hospital should have a proportion of boys born each day that stays closer to 50% than the smaller hospital’s, and therefore the smaller hospital should record more days on which over 60% of the babies born are boys. In contrast to the law of large numbers, most participants (56%) thought that the hospitals were ‘about the same’ in terms of the number of days on which more than 60% of babies were boys. Equal numbers of participants (22% each) chose the smaller hospital or the larger hospital. This and many other studies (e.g. Neilens, Handley & Newstead, 2009; Nisbett, Krantz, Jepson & Kunda, 1983; West et al., 2008) have shown that, on the whole, people do not invoke the law of large numbers when they should.
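To make the statistical claim concrete, the relevant probabilities can be computed directly from the binomial distribution (a rough calculation, on the simplifying assumption that each birth is an independent event with a .5 probability of being a boy):

\[
P(X \geq 10 \mid n = 15,\ p = .5) \approx .15, \qquad P(X \geq 28 \mid n = 45,\ p = .5) \approx .07,
\]

where $X$ is the number of boys born on a given day, and 10 and 28 are the smallest counts exceeding 60% of 15 and 45 births respectively. On this calculation the smaller hospital should record roughly twice as many such days over the year.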

Base rate neglect

A base rate is the probability of an event occurring in the absence of any other information. For example, in a sample of 100 people where 99 are women, the base rate of women in the sample is 99/100. Base rate neglect is another issue that was raised by Kahneman and Tversky (1972). The issue is nicely described with Tversky and Kahneman’s (1982) taxi problem:

A taxi is involved in a hit and run accident at night. In the city, there are two taxi firms, the Green Cab Company and the Blue Cab Company. Of the taxis in the city, 85% are Green and the rest are Blue.

A witness identifies the offending cab as Blue. In tests under similar conditions to those on the night of the accident, this witness correctly identified each of the two colours 80% of the time, and was wrong 20% of the time.

What is the probability that the taxi involved in the accident was in fact blue?

According to Bayes’s rule, given that the base rate of Blue cabs is .15 and that the witness identified the cab as Blue with .8 accuracy, the probability of the taxi actually being Blue is .41. In contrast, most of Tversky and Kahneman’s (1982) participants rated the probability as .8, which is simply the accuracy of the witness. This suggests that participants do not take account of the base rate information at all.
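The normative answer follows from a short application of Bayes’s rule to the figures given in the problem (a worked version of the calculation reported above, where ``Blue'' denotes the witness reporting the cab as Blue):

\[
P(\text{Blue} \mid \text{``Blue''}) = \frac{P(\text{``Blue''} \mid \text{Blue})\,P(\text{Blue})}{P(\text{``Blue''} \mid \text{Blue})\,P(\text{Blue}) + P(\text{``Blue''} \mid \text{Green})\,P(\text{Green})} = \frac{.8 \times .15}{.8 \times .15 + .2 \times .85} = \frac{.12}{.29} \approx .41.
\]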

The Linda problem

The famous Linda problem originated with Tversky and Kahneman’s (1983) study, and demonstrated a bias towards thinking that a conjunction of two factors could be more probable than either factor alone. The problem goes like this:

Linda is 31 years old, single, outspoken and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

1. Linda is a teacher in elementary school.

2. Linda works in a bookstore and takes Yoga classes.

3. Linda is active in the feminist movement.

4. Linda is a psychiatric social worker.

5. Linda is a member of the League of Women Voters.

6. Linda is a bank teller.

7. Linda is an insurance salesperson.

8. Linda is a bank teller and is active in the feminist movement.

Participants were asked to rank the eight statements associated with Linda in order of their probability. The interesting finding was that participants were inclined to rank the conjunction ‘Linda is a bank teller and is active in the feminist movement’ as more probable than the constituent part ‘Linda is a bank teller’, presumably because the description leads people into thinking that Linda must be a feminist, whether or not she is a bank teller. However, it is simply not possible for a conjunction of two characteristics to be more probable than either one alone.
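The normative principle violated here is the conjunction rule, which holds for any pair of events:

\[
P(\text{bank teller} \wedge \text{feminist}) = P(\text{bank teller}) \times P(\text{feminist} \mid \text{bank teller}) \leq P(\text{bank teller}),
\]

since the conditional probability on the right-hand side cannot exceed 1.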

Framing bias

Framing bias describes the finding that participants may give different responses to two questions which are essentially the same, but which are framed differently. Take this example from Tversky and Kahneman (1981):

1. You are a health service official making plans for dealing with a new disease that is about to break out. It is expected to kill 600 people. Your scientific advisors tell you about the consequences of two possible treatment programmes: Programme A will definitely save 200 lives, whereas Programme B will have a one-third (.33) chance of saving 600. Which programme will you approve?

2. Your colleague has a choice between Programme C, which will certainly result in 400 deaths, and Programme D, which has a two-thirds chance (.67) that 600 people will die. Which should she approve?

In the first scenario participants are more inclined to choose Programme A, whereas in the second scenario Programme D is the preferred choice, although of course A and C are equivalent and B and D are equivalent. This is thought to reflect aversion to risk when the choice is framed in terms of positive outcomes (lives saved), but a preference for risk when it is framed in terms of negative outcomes (deaths). Either way, this pattern of responses demonstrates a departure from logic.
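The equivalence can be made explicit by restating each programme in terms of lives saved out of the 600 at risk (a rough restatement; the problem itself leaves the complementary outcomes implicit):

\[
A:\ 200 \text{ saved for certain}; \qquad C:\ 600 - 400 = 200 \text{ saved for certain}; \qquad B, D:\ \tfrac{1}{3} \times 600 = 200 \text{ saved in expectation}.
\]

Framed this way, all four programmes carry the same expected outcome, so a preference reversal between the two scenarios cannot be justified by the outcomes alone.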

Sunk cost fallacy

The sunk cost fallacy refers to the tendency to allow previous sunk costs (past costs that cannot be recovered) to affect current decision making. For example, say you bought a non-refundable ticket to a concert, but on the day you felt very ill and did not want to go. If you thought you should go anyway because you had paid for the ticket then you would be committing the sunk cost fallacy. Toplak, West and Stanovich (2011) gave their participants the film problem from Frisch (1993). First, participants are told to imagine that they are staying in a hotel room and have just paid $6.95 to see a film on TV. Then they are told that, 5 minutes in, the film seems pretty bad and they are bored. They are asked whether they would continue to watch the film or switch to another channel. Second, participants see the same scenario except that they have not had to pay for the film. They are again asked whether they would continue to watch the film or switch to another channel. If participants report that they would change the channel when the film was free but that they would keep watching when they had paid for it, they are presumed to have committed the sunk cost fallacy. This was the case for 35.8% of participants in Toplak et al.’s (2011) study.

Outcome bias

Outcome bias is the tendency to rate the quality of a decision based on its outcome rather than on the situation at the time the decision was made. A problem that is often used to measure outcome bias derives from Baron and Hershey (1988) and has been used in many studies by Stanovich and colleagues (Stanovich & West, 1998, 2000, 2008; Toplak et al., 2011). Participants are told about a 55-year-old man who had a heart condition and who was given an operation with an 8% mortality rate. The surgery was successful, and participants rate the quality of the decision to operate on a seven-point scale. Participants are then told about a patient with a hip condition who was given an operation with a 2% mortality rate. Despite the decision to operate being objectively better, the patient died during the operation. If a participant rates the first decision (with a positive outcome) as better than the second (with a negative outcome), they have displayed outcome bias.

Gambler’s fallacy

The gambler’s fallacy refers to people’s misunderstanding of chance. Often, people incorrectly believe that what has happened in the past can affect the probability of future events. Toplak et al. (2011) gave their participants two problems designed to tap into the gambler’s fallacy. The first problem went as follows:

When playing slot machines, people win something about 1 in every 10 times. Julie, however, has just won on her first three plays. What are her chances of winning the next time she plays? ____ out of ____

The correct response to this problem is 1 out of 10, the odds given in the question. The fact that Julie has already won three times has no bearing on the probability that she will win on any subsequent tries. Toplak et al. (2011) found only 69.4% correct responses to this problem in their study with undergraduate and graduate students. The second problem they gave participants was as follows:

Imagine that we are tossing a fair coin (a coin that has a 50/50 chance of coming up heads or tails) and it has just come up heads 5 times in a row. For the 6th toss do you think that:

1. It is more likely that tails will come up than heads.

2. It is more likely that heads will come up than tails.

3. Heads and tails are equally probable on the sixth toss.

The correct answer is 3, because again the past events are irrelevant to future probabilities, and in this case 92.2% of participants in Toplak et al.’s (2011) study answered correctly, suggesting that the bias is shown inconsistently across tasks.
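In both problems the normative answer follows from the independence of successive trials. For the fair coin, for example,

\[
P(\text{heads on toss } 6 \mid \text{heads on tosses } 1\text{--}5) = P(\text{heads on toss } 6) = .5,
\]

and, likewise, Julie’s chance of winning remains 1 out of 10 regardless of the outcomes of her previous plays.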

Summary

This section has presented some of the common tasks used to measure pervasive biases in human judgment and decision making. The problems tend to resemble real world scenarios and each measures a small aspect of human reasoning (usually on a binary scale) which may be important in a limited range of scenarios but which is not necessarily more widely relevant.

The next section discusses deductive reasoning, which may be measured with problems resembling the real world, but which often is not. Deductive reasoning tasks require a necessary conclusion to be derived from given premises. Necessity means that the conclusion must be true when the premises are true. As such, deductive reasoning is about assessing logical validity when all the necessary information is available, rather than about making decisions in the face of limited information.
