Explanations of Basic Fallacies
Prepared by T. Gracyk
Ad Hominem (Personal Attack or Attacking the Person)
The fallacy of responding to an opponent's argument by changing the subject to the person who gave the argument, introducing the false assumption that a person of this sort cannot offer an argument worth considering. One can deflect attention from the arguer's position by shifting to discussion of the arguer's personality, character, associates, motives, intentions, qualifications, and so on.
When we attack the person, we ignore the content of the argument. But an argument is a relationship between premises and conclusion, and the argument's soundness (or lack of soundness) is completely independent of who is giving the argument. "There is nothing to what Smith says about the Nazi death camps; just remember that her family included prominent Nazis" may sound convincing, but this refusal to look at Smith's evidence is a refusal to evaluate the soundness of the argument. (After all, Smith may be ashamed of her relatives and might be confirming the existence of the death camps against anti-Semitic opponents who argue that the camps are a hoax.)
Given this definition, the fallacy is restricted to cases where there is an opponent and where one is responding to that opponent.
Recognition of the fallacy of attacking the person DOES NOT DENY OR CONFLICT WITH the legitimate need to evaluate the source of information that is put into an argument. Consider these two cases:
These responses to Green and Johnson are perfectly appropriate, since they are calling arguments into question by questioning sources of information in the arguments. They are pointing to flaws in the argument by discussing sources of specific premises, not dismissing the whole argument by appeal to facts about Green and Johnson.
A special case of Ad Hominem is phony refutation, in which one dismisses an argument or position by citing inconsistency between the speaker's words and actions. Inconsistency is a fallacy when the inconsistency is between two parts of an argument, but inconsistency between words and actions may be due to understandable moral weakness or to a legitimate change in one's position.
Yes, and the first speaker may regret that "wild" youth and may be offering you hard-earned advice that is worth your consideration. The smoker who advises you against smoking may have tried to quit many times, and is warning you of what you face if you make the mistake of starting. The third speaker may now understand, through personal experience, that underage drinking is a serious social problem, and has revised their thinking about the topic.
The Latin name for phony refutation is Ad Hominem Tu Quoque. Informally, it's known as the "You Too Fallacy."
Ambiguity
As with many of the fallacies, ambiguity is only a fallacy if we first establish that it takes place within a context of reasoning! People make ambiguous statements all the time, but it doesn't have the status of a fallacy unless they are engaged in reasoning (in leading someone to a conclusion).
The fallacy requires the following:
When we cannot "pin down" the meaning of a premise due to its ambiguous wording, we must suspend judgment on its truth, making the argument unsound.
When the distinct meanings are generated by a punctuation error or error in grammatical construction, the fallacy is technically known as the fallacy of amphiboly.
Warning: There is no standard agreement on where to draw the line between ambiguity and equivocation. Many logicians treat the fallacy of ambiguity more like the way I have treated the fallacy of equivocation.
Anecdotal Evidence
Example: "I don't care what the experts say. Someone who lives in my hometown burned to death in a car accident because he couldn't get out of his seat belt. Anyone who wears a seat belt is just asking for trouble."
Real Example (NY Times) -- A response to an essay on the value of college majors that promote critical thinking:
In 2006, I graduated from a respected 4 year university with a degree in Political Science, emphasis Middle Eastern Affairs. After failing to find a job in my field, and many years working at a bank, I'm back in school studying engineering. So much for critical thinking, no money in it.
NY Times, online comment by gitrjoda, March 5, 2011
Affirming the Consequent
It's invalid because the consequent might be true for a different reason, even though the antecedent fails (in this example, maybe Sam was well rested but didn't study).
Appeal to Common Belief
Any argument that defends a belief by pointing out how many other people have the same belief. But consensus does not make something true. Just remember that even today, huge numbers of people remain ignorant of basic science and think that the earth is the center of the universe. The fact that most Americans believe in angels doesn't make angels real; the fact that most Americans believe that JFK was a great president doesn't prove that he was.
There is only a subtle difference between this fallacy and appeal to common practice.
Appeal to Common Practice (or Bandwagon Fallacy, or fallacy of mindless conformity)
Any argument that defends or recommends a behavior by pointing out how many other people have the same behavior.
Remember when you told your mother, "But I have to have it! All the other kids have one!"? She probably answered, "If all the other kids jumped off a bridge, would you do that, too?"
Just because large numbers of people do something is not particularly good evidence that you should, too.
There is only a subtle difference between this fallacy and appeal to common belief. Common practice is an appeal to people's behavior, while common belief is an appeal to their opinions.
The following advertisement is not a fallacy of appeal to common practice:
This advertisement does not tell us to patronize the place because it is popular. Instead, it tells us that we should go there because an authority (MSP Magazine) says it's the area's best Japanese restaurant.
Appeal to Fear, Appeal to Force (Scare Tactics)
An argument in which a strong emotion, in this case fear of what will happen if we disagree, is offered by the arguer as a reason to agree. Basically, the arguer says "Agree with me or else something bad will happen to you." Example: "How can you say that Bill Clinton was a sleaze? Remind me of that next time you want to borrow my car!" (There is no relationship between the two things, and the threat of no car is being used to force agreement.)
Keep in mind that fear can be a good reason to do something. I floss my teeth because I'm afraid that not doing so will lead to gum disease; here there is a real and pre-existing relationship between the two things. But fear cannot itself be a reason in the sense of evidence: fear cannot be a reason to agree with a claim. In the fallacy, the arguer creates a threat to force agreement, when the arguer has no legitimate right to do so.
The Latin name for this fallacy is Argumentum Ad Baculum.
Appeal to Good Intentions
An argument that foolishly assumes that good intentions excuse all behavior. It often takes the form, "but he/she meant well." But as the saying goes, "The road to hell is paved with good intentions." Example: "I just can't believe that our government knew about Nazi death camps and did nothing about it. FDR must have had some good reason for it."
A variation of it is the foolish assumption that a good person can neither do nor approve of anything bad: "Brutus is one of the nicest people I've ever met, so it just can't be true that he beats his children!"
Appeal to Ignorance (Shifting the Burden of Proof)
Being ignorant is not a fallacy. We're all ignorant about a great many things.
The appeal to ignorance or argument from ignorance is a special case of false dilemma. The arguer assumes that every claim is either known as true or known as false, overlooking the agnostic position that we sometimes don't know if a specific claim is true or false!
The arguer does not spell out the assumption, but it is implicit in the argument, which proceeds from a premise that acknowledges ignorance about the issue: "I don't know that my position is false." (Usually in something like this wording: "You haven't proven me wrong" or "There's no evidence that I'm wrong" or "You might be wrong" or even "Nobody knows about this.") From this recognition of ignorance, the arguer draws the conclusion that their position on the issue is correct!
Viewed from a different angle, this fallacy misplaces the burden of proof. When an issue is being debated, someone who wants to defend the truth of a claim must have some evidence for it. One can't "prove" one's side by simply knocking down the other side. (Remember: you might both be wrong if there is a third alternative.) Don't confuse arguing for a claim with the legal standard of innocent until proven guilty, where the burden of proof is on the prosecution and where both ignorance and knocking down the prosecution argument are enough to render the person not guilty.
Ignorance is sometimes combined with wishful thinking, as in the popular saying, "What you don't know can't hurt you." Unfortunately, it can.
Appeal to Pity
An argument in which a strong emotion, in this case pity, is offered by the arguer as a reason to agree. Most often, the arguer warns us that disagreement will bring harm to the arguer. ("How can you say that I did badly on this exam, Dr. Watson? You know that if I fail this course I'll have to drop out of school!") Occasionally we are asked to agree so that harm will not befall some third party, as in a bogus charity appeal that tells us that small children will starve to death if we don't contribute (when the money is really going to line the pockets of the arguer).
Pity can be a good reason to DO something (such as giving to legitimate charity). This only works if we already see that there is a real and pre-existing relationship between our actions and harm to others. ("You shouldn't burn those leaves. Don't you know that by burning them you might give your next door neighbor an asthma attack?") But pity cannot be a reason in the sense of evidence, as a reason to agree with someone.
So we have a fallacy of appeal to pity in either of these cases.
The Latin name for this fallacy is Ad Misericordiam.
Appeal to Tradition
Any argument that defends a behavior or choice by pointing out that the behavior or choice is a longstanding practice. Unfortunately, many foolish and destructive behaviors are also very traditional, such as slavery, forced prostitution, and punishing children by hitting them with belts.
This fallacy is very common in advertising, as in a simple example from Reader's Digest (March 1999, p. 15):
The fact that this brand has been made the same way for over one hundred years is no reason to buy it. There are many brands that have been around less time that I like much better than this one!
The Latin name for this fallacy is ad antiquitatem.
Begging the Question
The classic definition is that the arguer uses premises that are no more plausible than the conclusion. To see that this is a problem, remember that the conclusion is being supported, and support should be more secure than whatever it supports. So if the premises are just as questionable as the conclusion, the argument is unsound.
The argument usually begs the question (literally: avoids the issue through a questionable assumption) by using premises that seem like good reasons to the arguer, but only because the arguer is sure that the conclusion is correct. The arguer therefore offers reasons that will be just as problematic to others as would be the conclusion itself. This problem is sometimes known as "preaching to the choir."
A special case is begging the question by synonymy, in which the arguer simply uses the conclusion as a premise, but disguises this operation by rewording it (by saying something synonymous).
Another variation is loaded question, where the arguer biases the debate by "loading" a questionable assumption into a question.
A special case is the fallacy of misuse of hypothesis or persuasive definition. In this case, the arguer gives words a definition favorable to the argument, but has no good reason for defining things in this way except to advance the argument. This is most often done by attaching a special emotional significance to the words. But it can also be a case of giving a term an unjustifiably non-standard interpretation.
Biased Sample
The problem of generating an unrepresentative sample by using a method of sampling that misrepresents an important subgroup of the population. Obviously, this is not an issue in a highly homogenous population. Looking at penguins in one location of Antarctica is fine if you just want to know the average height of penguins; where they live in Antarctica probably has no effect on penguin height. But choosing just one place to sample all Americans is going to be biased, because Americans include many, many different subgroups. So going to a shopping mall in Fargo, N.D., is not going to get you a representative sample when trying to determine how many Americans are of Norwegian descent (such a sample will be biased by over-representing that group).
One way to create bias is through loaded questions, which contain assumptions that influence the response.
If the questions are not themselves biased, most bias can be eliminated by stratifying the sample. Stratification is the process of sorting the sample into groups ("strata") that have been identified in advance as highly relevant to the issue being studied. For instance, when the topic is abortion, both a person's sex and religious affiliation will influence his or her position. If we cannot get a highly random sample, then we should stratify our sample for at least these two factors. So in addition to asking them whatever we want to know, we need to determine these additional facts. If we sample more than we really need, we can then sort the sample into the relevant subgroups, see if any are over-represented, and then randomly eliminate samples from the over-represented groups until we reach a sample in which the major sub-groups are represented according to their numbers in the general population. Notice that stratification only works when we already know which subgroups are relevant to the issue and we know their numbers in the general population.
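The downsampling step described above can be sketched in a few lines of Python. Everything here is hypothetical: the survey data, the 50/50 population split, and the `stratify` helper are invented for illustration only.

```python
import random

random.seed(0)

# Hypothetical survey sample: each respondent is tagged with a stratum.
# Suppose women are 50% of the general population but 70% of our sample.
sample = [{"sex": "F", "answer": "yes"} for _ in range(70)] + \
         [{"sex": "M", "answer": "no"} for _ in range(30)]

# Known shares of each stratum in the general population.
population_share = {"F": 0.50, "M": 0.50}

def stratify(sample, key, population_share):
    """Randomly discard respondents from over-represented strata until
    each stratum matches its share of the general population."""
    strata = {}
    for person in sample:
        strata.setdefault(person[key], []).append(person)
    # The scarcest stratum (relative to its population share) fixes the
    # largest balanced sample we can keep.
    limit = min(len(group) / population_share[value]
                for value, group in strata.items())
    balanced = []
    for value, group in strata.items():
        keep = int(limit * population_share[value])
        balanced.extend(random.sample(group, keep))
    return balanced

balanced = stratify(sample, "sex", population_share)
counts = {}
for person in balanced:
    counts[person["sex"]] = counts.get(person["sex"], 0) + 1
print(counts)   # equal numbers of F and M respondents remain
```

Note that the helper randomly eliminates the 40 surplus women, exactly as the paragraph above describes, and that it presupposes we already know the population shares for the stratifying factor.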
Confirming evidence is a special case of biased sample. It is the special case of using a method that gets the arguer or researcher a result favorable to his or her desired conclusion. The method can create this bias either intentionally or accidentally. Either way, the fallacy consists in choosing a method that generates a large sample of one very specific, highly unrepresentative group. There are two ways to generate confirming evidence. One is to use a method that initially selects a sample from one particular group. For example, if I want to know how many Americans think that it's time for a woman to become the U.S. President, then I would generate confirming evidence if I only called women. It would be even worse if I only called women whose telephone numbers were provided by N.O.W. (the National Organization for Women). Similarly, if I'm interested in supporting the idea that video games harm children, I might generate my sample of children by choosing children at a juvenile corrections facility.
The other method to generate confirming evidence is to ask a question that automatically favors one answer over other possible answers. If I want to "discover" strong support for private schools, I might ask people, "Do you favor continuing massive subsidies for our failing public schools or do you support directing some of that money to school vouchers that give parents a choice?" This method involves the fallacy of the leading question.
If one generates a sample with confirming evidence, stratification is not going to remove the biases from the sample.
Arguments that generalize are unsound when they have a biased sample or involve confirming evidence.
Four fallacies pose problems for arguments that try to demonstrate a cause in a population. They are:
The fallacy of reversing cause and effect
The fallacy of coincidental correlation
The fallacy of overlooking a common cause
The fallacy of Post Hoc
Circularity (Arguing in a circle)
In a situation that calls for evidence, circularity occurs when the person who needs to supply the evidence avoids doing so and instead offers, as evidence, the very thing that needs support by evidence. To put it another way: the very claim that is being debated is introduced into the argument as a piece of evidence for itself. Its Latin name: circulus in probando.
For example, consider this argument: "Clean-O Toothpaste is the best on the market, because it outperforms all competitors." Since something is only "best" when it outperforms its competitors, the evidence is merely a paraphrase of the conclusion that it supports. This is circularity by synonymy.
Circularity is sometimes equated with begging the question. However, I think that begging the question is better understood as a broader category of fallaciously offering evidence that is no better than the conclusion it defends. Circularity is a special case.
Circularity is often disguised by introducing the repeated information several steps removed from the conclusion. For example: "God's existence can be proven by the miracles that He provides. When a skydiver's parachute fails and yet a tree breaks the fall, allowing the skydiver to survive, that survival is miraculous. Since no one but God can do anything miraculous, such cases prove God's existence." The circle is revealed when one asks why God is needed to explain the skydiver's survival if a tree already explains it.
Fallacy of coincidence
Coincidence is a fallacy of sampling, but it is mainly a concern when we use sampling to establish a correlation in order to establish a cause. Basically, it is the fallacy of putting too much trust in a sample that, due to no other fallacy, fails to represent the general picture. How does this happen?
Statistical information has both a margin of error and a confidence level. The two are related. Failure to consult the margin of error can result in hasty generalization.
The fallacy of coincidence is the failure to understand the importance of the confidence level. No matter how carefully we sample a population, about 5% of the time we'll get a result that does not reflect what's really happening in the population. Sound sampling gives us a confidence level of 95% (we have a 95% likelihood that the truth falls within our margin of error).
Because we cannot know whether the truth falls outside our margin of error, it is foolish to trust studies and experiments that are not replicated by a second study or experiment. (If a separate study gets the same result and neither has any other fallacies, then there is almost no chance that both got the same erroneous result by coincidence!)
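The arithmetic behind the value of replication can be sketched directly. This assumes the two studies are independent and each uses the standard 95% confidence level; the product of the two error chances is an upper bound on both studies missing at once, and the chance of both missing in the same way is smaller still.

```python
# Each well-run study has a 5% chance of a "bad" result, one where the
# truth falls outside its margin of error.
p_bad = 0.05

# Assuming the two studies are independent, the chance that both are
# "bad" results is the product of the two chances.
p_both_bad = p_bad * p_bad
print(round(p_both_bad, 4))        # 0.0025 -- one chance in 400
print(round(1 - p_both_bad, 4))    # 0.9975 -- confidence after replication
```

So a single replication lifts us from 95% confidence to better than 99.7%, which is why unreplicated results deserve suspicion.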
We can narrow the margin of error (say, to 3%) by lowering our confidence level.
So what is the fallacy of coincidence? Basically, it is trusting statistical information that would, on further study, turn out to lack statistical significance. Either the result was one of those few cases predicted to be wrong by the confidence level, or the statistic is just a coincidence of the circumstances in which we conduct our study.
Either way, we can only say that a statistic is subject to the fallacy of coincidence when we can point to further data that shows the sample happened to be one of the "bad" results predicted by the confidence level.
Denying the Antecedent
It's invalid because the premises might be true while the conclusion remains false, due to some additional reason making the consequent of the first premise true. (In this example, perhaps Sam is well rested even though Sam didn't study for the exam.)
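The invalidity can be checked mechanically by brute force over the truth table. The sketch below tests the form "If P then Q; not P; therefore not Q" (reading P as "Sam studied" and Q as "Sam does well on the exam"); the variable names are mine:

```python
from itertools import product

# Exhaustively check the form "If P then Q; not P; therefore not Q".
counterexamples = []
for p, q in product([True, False], repeat=2):
    premise1 = (not p) or q     # "If P then Q" as material implication
    premise2 = not p            # "not P"
    conclusion = not q          # "not Q"
    if premise1 and premise2 and not conclusion:
        counterexamples.append((p, q))

# The row P=False, Q=True is the "well rested" case: both premises
# hold, yet the conclusion is false, so the form is invalid.
print(counterexamples)   # [(False, True)]
```

One counterexample row is enough: a valid form has no assignment on which every premise is true and the conclusion false.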
An extreme form of provincialism in which someone identifies one's own needs and then settles a problem by simply determining how those needs can be met. Someone is egocentric when she or he oversimplifies a problem by refusing to consider the relevance of facts and information that conflicts with one's personal interests. A display of egocentrism constitutes a fallacy when it blocks discussion of relevant information in the context of arguing.
A common form of provincialism in which someone identifies oneself in terms of one's ethnicity and then settles a problem by simply determining how that group is benefited. Someone is ethnocentric when she or he oversimplifies a problem by refusing to consider the relevance of facts and information that conflicts with the beliefs and/or values of one's ethnic group. A display of ethnocentrism constitutes a fallacy when it blocks discussion of relevant information in the context of arguing.
Equivocation
As with many of the fallacies, equivocation is only a fallacy if we first establish that it takes place within a context of reasoning! People equivocate all the time, but it doesn't have the status of a fallacy unless they are engaged in reasoning (in leading someone to a conclusion).
The fallacy requires the following:
The fallacy can be intentional or unintentional. In the former case, the person giving the argument misleads the audience by exploiting the equivocation. In the latter case, the speaker does not try to mislead, but the audience draws an unsound conclusion by misinterpreting statements that can be taken two different ways.
When the equivocation is created by a punctuation error or error in grammatical construction, the fallacy is technically known as the fallacy of amphiboly.
False or faulty analogy
When an argument by analogy overlooks significant differences, it is subject to this fallacy and is unsound. To accuse it of false or faulty analogy, one must note at least one significant difference between the things being compared, and must explain how the difference is relevant to the issue being debated.
False Dilemma (Limited Options Fallacy)
We show that the fallacy has taken place by pointing out one or more plausible but overlooked options.
In this fallacy, the argument is usually valid. So this is not a "formal fallacy" (the problem is not the form of the argument).
One variation of this fallacy is known as the Black-and-White Fallacy. In this case, the arguer oversimplifies a complex situation by seeing the situation as "black and white," putting all cases into one of two extreme categories. The arguer ignores "shades of gray." (Shown the color yellow, the arguer classifies it as white because it's more like white than black!)
Another variation is the middle ground fallacy (a.k.a. the split-the-difference fallacy or the moderation fallacy). In this variation, the arguer begins the argument with a false dilemma by proposing two extreme options. Based on their extreme nature, the arguer then proposes that the reasonable option must be in the middle ground between the two, and proposes a specific third option as that middle ground. Yet this remains a false dilemma if the arguer has ignored additional options that would be alternatives (such as none of them).
Hasty Generalization
The fallacy of generalizing from a small sample and failing to take this into account in the conclusion of the argument. "Small" is a relative term here. A sample that is adequate in size when asking people who they plan to vote for will be too small for medical research that looks for low levels of allergic response to a new medication. Basically, the fallacy occurs when we present a conclusion that is more precise than the sample warrants.
To be more precise, every sample size creates a margin of error for the resulting generalization. These margins specify the range within which we can expect to find the correct answer. A 10% margin of error means that the truth is somewhere within a range of 10% on either side of the reported number. A 3% margin of error (commonly found in professional polling) means that the truth is somewhere within a range of 3% on either side of the reported number.
For example, suppose an unbiased sample of 100 Americans gets 55 positive responses to the question "Do you like ice cream with apple pie?" There is no fallacy of hasty generalization if the conclusion is reported as "Approximately 55% of Americans like ice cream with apple pie." There is a fallacy if we report it as "A clear majority of Americans like ice cream with apple pie." This is misleading, because our sample is consistent with the result that as few as 45% agree.
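A common rule of thumb (an approximation, not an exact formula) puts the 95%-confidence margin of error for a simple random sample at about 1 over the square root of the sample size. The apple-pie numbers come out as follows:

```python
import math

# Rule-of-thumb 95%-confidence margin of error for a simple random sample.
def margin_of_error(n):
    return 1 / math.sqrt(n)

n = 100          # sample size from the apple-pie example
observed = 0.55  # 55 "yes" answers out of 100

moe = margin_of_error(n)
print(round(moe, 2))                                       # 0.1 -- a 10% margin
print(round(observed - moe, 2), round(observed + moe, 2))  # 0.45 0.65

# Sample size needed for the 3% margin common in professional polling:
print(math.ceil(1 / 0.03 ** 2))                            # about 1112 respondents
```

The 0.45 lower bound is exactly why "a clear majority" overstates what this sample shows, and the last line shows why professional polls need samples roughly ten times larger.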
Inconsistency
The fallacy of giving evidence against the position being defended, either directly, by undercutting the conclusion, or indirectly, by undercutting evidence being presented for the conclusion.
It is not a fallacy to state objections to one's argument if one then goes on to answer those objections. To recognize serious problems but to insist that one is still correct would be a case of inconsistency.
Loaded Question
The fallacy of biasing an exchange by asking a question that has an unjustified assumption built right into the question, influencing the answer given to it.
Another version of this problem is known as complex question, where two unrelated topics are combined in a single question, so that answering one part will seem to answer the other as well.
Loaded question is often done by introducing a false dilemma:
Misuse of Authority / False Authority
This fallacy is often called Appeal to Authority, but I prefer a title that doesn't suggest that there's a problem with appealing to authorities! It is often appropriate to defend the truth of a claim, or a recommendation of behavior, by citing the testimony or advice of some authority. The fallacy consists in citing someone who is not really an authority on the subject at issue.
The Latin name for this fallacy is ad verecundiam.
The fallacy takes many forms:
Some authors (such as V. R. Ruggiero), distinguish between misuse of authority (the person cited doesn't count as an authority) and irrational appeal to authority (versions 4, 5, and 6 in the list just above).
Variations of this fallacy appeal to other kinds of bogus authority, among them:
No True Scotsman Fallacy
This fallacy is sometimes treated as a type of ambiguity or begging the question. However, I regard it as primarily a slanter fallacy: the arguer adds the qualifying term "true" or "real" in order to secure their questionable assumption.
The fallacy involves making a claim and then, in the face of counterexamples, protecting the claim by adding the qualifying term "true" or "real." Antony Flew gives this example:
We see this fallacy in operation whenever someone opines that "real Christians" behave a certain way, or argues that "true Americans" support the President, no matter what.
Overlooking common cause
This fallacy is restricted to arguments to establish a cause. It is the mistake of finding a correlation between two things, then drawing a conclusion without checking for other variables that are also correlated with those two. This problem does not occur in a controlled experiment, but it is a common problem in a study of existing behaviors and events.
Let's suppose we correlate two things, A and B. But perhaps A keeps turning up with B because some previous thing, X, is independently causing A and independently causing B. Here, X is the common cause of A and B.
Failure to screen for such things is the fallacy of overlooking common cause.
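A toy simulation can make the pattern concrete. In this hypothetical (all numbers invented), hot weather X independently raises both ice-cream sales A and swimming accidents B, so A and B correlate even though neither causes the other; holding X fixed ("screening off" the common cause) makes the correlation disappear.

```python
import random

random.seed(1)

n = 1000
hot = [random.random() < 0.5 for _ in range(n)]            # X: a hot day
ice_cream = [x and random.random() < 0.8 for x in hot]     # X -> A
accidents = [x and random.random() < 0.6 for x in hot]     # X -> B

# A and B are correlated: B is far more common on days when A occurs ...
p_b = sum(accidents) / n
p_b_given_a = sum(a and b for a, b in zip(ice_cream, accidents)) / sum(ice_cream)
print(round(p_b, 2), round(p_b_given_a, 2))   # roughly 0.3 vs 0.6

# ... but the correlation vanishes once we hold the common cause fixed:
# among hot days alone, A tells us nothing further about B.
hot_days = [i for i in range(n) if hot[i]]
p_b_hot = sum(accidents[i] for i in hot_days) / len(hot_days)
hot_a_days = [i for i in hot_days if ice_cream[i]]
p_b_hot_a = sum(accidents[i] for i in hot_a_days) / len(hot_a_days)
print(round(p_b_hot, 2), round(p_b_hot_a, 2))  # roughly equal
```

The second pair of numbers is the "screening" test the paragraph describes: if conditioning on the suspected common cause erases the A-B correlation, the correlation was never evidence that A causes B.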
Post Hoc
The full title is Post Hoc, Ergo Propter Hoc. This Latin phrase means "After it, therefore because of it." This fallacy is restricted to arguments to establish a cause. This problem does not occur in a controlled experiment, but it is a common problem in a study of existing behaviors and events.
It is the fallacy of thinking that if one thing happens and then another thing happens, the first thing was the cause of the second. But time order alone cannot show that something is a cause. At the very least, we also need a control group! The Post Hoc Fallacy occurs when someone does not understand the need for a control group and draws a conclusion based on nothing but the time relationship.
Many superstitions are based on post hoc reasoning.
Post Hoc is merely one of four common fallacies associated with causal reasoning.
Poverty of Aspect
Any fallacy of oversimplifying a complex issue due to a limited perspective on it. Some common sub-categories are egocentrism, ethnocentrism, and patriotism/nationalism. In most cases, it arises from over-generalizing, from looking at a problem only from a narrow professional perspective, or from assuming that one can identify "normal" patterns of behavior within human cultural patterns. See also provincialism (group identification that assumes group superiority).
Prejudicial Language
A huge category of fallacies, covering all strategies for substituting argument wording for legitimate reasons for a position. Basically, we have a fallacy of prejudicial language whenever an arguer's choice of words is used to hide the arguer's introduction of a false or questionable assumption.
Major types include:
Provincialism
Any argument that attempts to settle a complex issue by appeal to group membership and group loyalty. Such arguments normally postulate an "us" versus "them" distinction and improperly assume that "our side" is automatically better than (or more trustworthy than, or more important than) theirs. Such reasoning is unsound because we often find ourselves to be members of groups who, as a group, turn out to be wrong. (Some social scientists call this inability to look beyond our group "poverty of aspect.")
A common variation is the appeal to loyalty. In this variant, the arguer says that we must act as a group and therefore cannot permit disagreement, or that we should not listen to informed, internal criticism. This form of provincialism is common in arguments against minority or dissenting views.
A common variation is the "it-couldn't-happen-here" assumption.
A common variation is nationalism: expecting others to agree on the basis of national identity. For example, advertisements aimed at the American market sometimes feature an American flag or red-white-and-blue imagery for no apparent reason. This is also known as flag-waving.
Red Herring (Changing the Subject / Lack of Relevance)
The fallacy of introducing, as a reason for one's position, a topic that is not genuinely relevant to the issue originally being debated. In effect, the arguer starts on one topic, changes the subject, and then proceeds as if there has been no change in subject.
Supposedly, the fallacy is known as "red herring" on an analogy with escaped convicts who might smear herring (a smelly fish) on themselves to throw bloodhounds off their trail.
Since a great many fallacies involve giving reasons that lack relevance to the issue under debate, we reserve the accusation of "red herring" to those cases where the argument fallacy is a simple change of subject, apart from one of the specialized tactics (e.g., Appeal to Pity, Appeal to Force, Straw Man, etc.)
Reversing cause and effect
This fallacy is restricted to arguments to establish a cause. This problem does not occur in a controlled experiment, but it is a common problem in a study of existing behaviors and events.
The fallacy occurs when we have a genuine correlation, but we have not clearly established which of the two things really comes first. We simply assume we know which comes first, but in reality it is the other way around. If it is plausible that we have turned the time order around, then the argument is unsound due to this fallacy.
Slanters
As with many of the fallacies, slanters are only a fallacy if we first establish that they take place within a context of reasoning! People use slanters all the time, but it doesn't have the status of a fallacy unless they are engaged in reasoning (in leading someone to a conclusion).
This fallacy is a type of prejudicial language in which the wording of one's argument is designed to influence the audience into accepting unstated but questionable assumptions. (To put it another way, the arguer "hides" assumptions through word choice.) Slanters are regarded as a fallacy because a good argument should make its assumptions clear to the audience.
One variation is the No True Scotsman Fallacy.
Slippery Slope
There are two versions of this fallacy. They are both given this name because they share a common idea that taking a first step will lead us to something we don't want. It is the unjustified assumption of this idea that is the fallacy. (When the assumption is justified, there's no fallacy, even if the argument otherwise looks like any other slope.)
The assumption in question is that choosing one thing leads to, or is equivalent to, choosing a second thing. But the move from the first to the second is not immediate: one leads to the other (or is shown to be equivalent to it) by a series of small, plausible steps. The result is then noted to be undesirable, and therefore (by the valid move of modus tollens), we are advised to avoid the first.
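The shared logical skeleton of both versions can be written out. Note that the modus tollens step at the end is perfectly valid; the fallacy lies entirely in the unjustified conditional (or equivalence) premises that make up the chain:

\[
A_1 \rightarrow A_2,\quad A_2 \rightarrow A_3,\quad \ldots,\quad A_{n-1} \rightarrow A_n; \qquad \neg A_n; \qquad \therefore\ \neg A_1
\]

If even one link \(A_i \rightarrow A_{i+1}\) is unjustified, the chain breaks and the conclusion is unsupported.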
Hence, the name "slippery slope," which conjures up a picture of sloped ground that is slippery. If you take even one step onto the slope, you will find yourself down on the bottom, where you may not want to be!
The first type (where the first step leads to the second by causing it) is the causal slope. The second type (where the first leads to the second through a series of equivalences) is the semantic slope.
When there is no justification (no good reason to believe) that the first step must cause the second, or that the first is really equivalent to the second, then the argument contains a fallacy.
Why are these fallacies? Because in each case we can point to a "break" in the chain, a place where one step really does not take us to the next. In the first example, acceptance of fake violence does not necessitate acceptance of real violence, at least for those of us who distinguish fantasy from reality. In the second example, even if we "steal time," that's not equivalent to forced servitude. The server's job is to serve even if the customer does not tip, and the server knows that when taking the job. The customer doesn't force the server to be a server.
Slippery slope is closely related to scare tactics, the difference being that the slope argument tries to hide the threat by following a series of steps before arriving at the "scary" result.
Straw Man
This fallacy consists of misrepresenting an opponent's position in order to make your own position look more reasonable. To be blunt, it involves putting words into your opponent's mouth that your opponent would not recognize as theirs. By substituting a different position (usually much weaker than their real position), one fails to genuinely engage the opponent, and thus one hasn't really done anything to support one's own position.
The name comes from the practice of stuffing dummies and scarecrows with straw. When one attacks an opponent by putting words into the opponent's mouth, one makes up a "dummy" position. But just as beating up a scarecrow doesn't demonstrate any athletic accomplishment, beating up a "straw man" in an argument doesn't demonstrate anything.
Given this definition, the fallacy of straw man is restricted to cases where there is an opponent and where one is responding to that opponent.
The difficulty in spotting this fallacy is that one must know enough about the issue to recognize when an opponent is being misrepresented.
Wishful Thinking
An argument in which a strong emotional investment persuades the arguer to advance a completely implausible reason, where it's clear that the arguer merely advances that reason to feel better about himself or herself (rather than because it's the truth).
The arguer adopts the (usually implicit) assumption: "I should believe whatever makes me feel better about myself." An example would be: "I'm not really hurting the environment by driving this gas-guzzling SUV. After all, it lets me go out into rugged places where I can be at peace with nature."
|Explanations and examples (but not the graphics) on this page © 2002, 2006 Theodore Gracyk|
Last updated March 6, 2011