CHAPTER 4. THE REAL PROBLEM: DISAGREEMENT WITH EPISTEMIC
1.4 Some Intuitions Possibly Track Truth Better Than Others
Perhaps some people’s intuitions are more reliable than others’. For example, some people share the same moral intuitions, which lead them to make the same moral judgments after evaluating cases. Others, who lack those intuitions, make contrary judgments. Notice, however, that neither disputant has a noncircular proof for the reliability of their intuitions. More importantly: epistemologists not only lack a noncircular proof for the reliability of
our faculties (in general), it’s hard to conceive of what one would even look like.158 Given
this: why think one needs a noncircular proof for the reliability of one’s intuitions in order to justify the beliefs that derive from them? To put the point differently, just because one lacks a noncircular proof for the reliability of the method from which one’s belief derives, it doesn’t follow that one is irrational or saddled with a normative defeater undercutting one’s subjective justification. For if such a proof were required, then virtually all of our beliefs would be irrationally held, since even perceptual beliefs can’t be vindicated as reliable from a neutral standpoint without begging the question.
Granted, we do well by appealing to the superior track record satellite images have
in predicting the weather over the method of appealing to groundhogs and aching joints.159
But the problem of lacking a neutral way of determining which intuitions are more reliable at producing true beliefs than others is not restricted to philosophy. Quantum physicists disagree over the determinacy or indeterminacy of quantum events; psychoanalysts disagree with behaviorists over what constitutes legitimate behavioral data; biologists disagree over whether or not genes encode information about phenotypic traits; and so on. But it’s too strong
158 One might consider how even an omniscient person could provide such a proof. This person presumably
knows all true propositions and doesn’t believe any false ones. But apart from the authority of this person’s testimony, it’s hard to see how a noncircular proof could be provided, even by someone who knows all true propositions.
to suggest such disputants are irrational for believing what they do in these domains despite lacking a non-question-begging argument for why their method is more reliable than their disputants’. Nor do they seem to suffer from a normative defeater or subjective rationality defeater against their belief that they know such things. Such disagreement may provide a partial defeater (at best) or a potential defeater against one’s belief. But a full subjective rationality defeater does not seem to be the right way to normatively evaluate cases of disagreement. To put this a little differently, let us consider a familiar case of moral disagreement.
TORTURING CHILDREN: Jamie thoroughly enjoys torturing children in ways I will protect the reader from hearing. Each episode is the same: Jamie spends days torturing a recently abducted, innocent child in front of the child’s parents before eventually murdering, first, the child, and then the parents, just so the parents can see their child in agony before they experience their own deaths. Jamie is elusive and knows how to avoid capture by others, including the legal
authorities. Jamie thereby continues terrorizing a community that is stricken with grief and paralyzed by fear of the next attack.
Suppose a non-philosopher or layperson was asked to evaluate the moral status of Jamie’s actions. Further, suppose such a person was asked to morally evaluate this case alongside two redoubtable professional philosophers who specialize in value theory. One is an ethical
egoist; the other is a moral nihilist.160 The ethical egoist believes Jamie’s actions are not
morally wrong because Jamie’s self-interests are being satisfied. Of course, the ethical egoist may accede to having the intuition or sentiment that such an act is “wrong.” But the ethical egoist also has some kind of error theory that undercuts the intuition’s reliability at tracking truth. Furthermore, the ethical egoist places greater value on satisfying one’s self-interests. So the error theory and the primacy of the value of self-interest can, in principle, provide justification for such acts of torture.
160 This case was inspired by Michael Bergmann. See Michael Bergmann, “Rational Disagreement After Full Disclosure.”
Regarding the moral nihilist, s/he believes Jamie’s actions are not morally wrong because such an evaluation rests on a false assumption: namely, that objective moral values exist. Since there are no objective moral values, there are no morally right or wrong
actions—period. Granted, the moral nihilist may share an intuition or sentiment that such acts are “bad.” But this intuition or sentiment is a mere appearance that fails to capture anything about reality, since, on the nihilist’s view, there are no objective moral values for it to track.
Now suppose that each disputant disclosed all the relevant reasons, evidence, and arguments for both their ethical theories and their evaluations of the TORTURING CHILDREN case. Suppose further that the only ground for the layperson’s belief that Jamie’s actions are wrong is an intuitive insight—that is, s/he just “sees” that Jamie’s actions are wrong in a basic, non-inferential way. The other two disputants may, perhaps, share this intuition; nevertheless, they both have an error theory for why their intuition doesn’t track truth.
Now the upshot: the fact that there is disagreement between the disputants at the level of conflicting intuitions doesn’t imply that the layperson, or anyone else for that matter, is being epistemically irresponsible or irrational for retaining belief in the face of
disagreement. Nor does it demonstrate that intuitions are an unreliable method for evaluating the case. That is, the disagreement itself is not enough evidence to conclude that intuitions are an unreliable method for getting at the truth. Therefore, disagreement that is sourced in a philosophical method that employs intuitions doesn’t provide a subjective rationality defeater to one’s belief that Jamie’s actions are wrong; moreover, disagreement alone is not
enough to undercut the claim that such a belief is an instance of knowledge.161
161 Indeed, I agree with Bergmann, who says, “Instead of being moved to doubt the reliability of our own
beliefs on the topic, we are moved to feel badly for the ones with whom we disagree and to be glad that we are fortunate enough not to lack the [intuition] we have or to have the misleading apparent insights that they have.” See Bergmann, “Rational Disagreement After Full Disclosure,” p. 346.
In light of these reasons, it seems the problem of philosophical method (in general) and intuition (in particular) lacking a consensus amongst its practitioners is not enough to undercut the justification and potential knowledge such methods can deliver. The fact that one cannot provide an objectively neutral, noncircular reason for the reliability of
philosophical method and intuition is not a problem unique to philosophical method. Indeed, it is a problem that plagues all basic sources of justification and knowledge. Even perception, a method relied on by scientists, is not immune to this problem, despite the kind of consensus some of the hard sciences enjoy. Consider the typical case of eyewitnesses testifying to the facts of a car accident: there will inevitably be discrepancies once we compare each independent report. So, unless we are willing to relegate sources like perception and memory to the bin of unreliable sources like wishful thinking and Ouija boards, internal or external inconsistency alone is not
sufficient to undercut reliability. If it were, we wouldn’t be left with any reliable sources.162
Goldberg and Kornblith do well, then, to heed Alston’s advice, which traces back to Reid, when he says, “[A]n absolute ban on inconsistency in output will deprive us of all sources of justification, [so] it is clearly the better part of wisdom to require only that the source
not generate ‘persistent and massive [internal] inconsistency.’”163 I think once we recognize
162 I’m not denying that perception used in easy domains like distinguishing middle-sized objects from other
ones is more reliable than intuitions used in difficult domains like philosophy, morality, and politics. What I am pointing out, among other things, is that we lack a metric for determining what degree of reliability a method must meet in order for one to be epistemically responsible in employing it. The degree of reliability perception must meet is going to be much higher than that demanded of a person we deem reliable at getting base hits. Having the ability to perceive a vehicle bearing down on you as you cross the road at only a 30% success rate is not going to meet any standard for reliability. But getting base hits 30% of the time in Major League Baseball is a degree of success taken to manifest reliability, and any Major League coach would be rational in putting such a hitter at the top of the lineup. But because we lack a neutral, noncircular way of verifying the reliability of many belief-forming sources we use in an array of domains, we lack the data and resources needed to arrive at a definitive answer as to how reliable some methods in certain domains need to be in order to be rationally employed. I borrow this point from William Alston in Perceiving God, p. 238.
163 William P. Alston, Perceiving God, p. 235. The scope of persistent and massive inconsistency seems more
one of the goals of philosophy is to get at the truth in domains that are by their very nature difficult and, in principle, resistant to a neutral, noncircular way of demonstrating the truth to others, the goal of convincing everyone, and thereby achieving consensus, will be seen as misplaced. As a result, we will come to see disagreement as something we should expect.
2. THE ARGUMENT FOR SKEPTICISM FROM EPISTEMIC SUPERIOR