Definitions of Basic Terminology
Prepared by T. Gracyk
Analogical Argument (Argument by analogy)
Not every analogy is offered as an argument. The famous comparison that "A woman needs a man like a fish needs a bicycle" is a fancy way of saying that women don't need men, but it is NOT an argument.
When an analogy is offered as an argument, it can be understood as having this standard form:
1. A and B are alike in certain known respects.
2. A has some further feature.
3. Therefore, B probably has that feature too.
Besides considering the truth of the premises, any evaluation of an analogical argument must weigh the strength of the comparison. This is done by considering the possibility of a false or faulty analogy. We can also show such arguments unsound by using a reductio ad absurdum.
Example: "I don't care what the experts say. Someone who lives in my hometown burned to death in a car accident because he couldn't get out of his seat belt. Anyone who wears a seat belt is just asking for trouble."
It's invalid because the consequent might be true for some other reason even when the antecedent is false (in this example, maybe Sam was well rested but didn't study).
Argument from generalization
These arguments are strong when the generalization is about almost all members of the reference class, or almost none of them.
Even if the premises are acceptable, these arguments are frequently weak and therefore unsound. They are weak when the number or statistic deviates much from "almost all" and "very few." And they are weak when the arguer has chosen a poor reference class (often done when the argument only has numerical information about that reference class and no others).
The following argument is weak because the statistic is not clearly at one of the two extremes:
The following argument is weak because the reference class is not appropriate to this case:
Another weak form of the argument parallels the problem of affirming the consequent, that is, reasoning backwards. The following argument form is always weak:
In this case, MANY people and animals can swim without being adult humans. Samantha might be a child, or even a pet goldfish.
And of course any inductive argument is unsound when evaluation of the premises reveals that one is false or invites us to suspend judgment.
Biased Sample and Overgeneralization
The problem of generating an unrepresentative sample by using a method of sampling that misrepresents an important subgroup of the population.
In other words, it involves observing a distinctive subgroup of a larger group, and then mistakenly drawing a conclusion about the larger group instead of the subgroup. It is very easy to do this when you rely on anecdotal evidence.
Obviously, this is not an issue in a highly homogenous population. Looking at penguins in one location of Antarctica is fine if you just want to know the average height of penguins; where they live in Antarctica probably has no effect on penguin height. But choosing just one place to sample all Americans is going to be biased, because Americans include many, many different subgroups. So going to a shopping mall in Fargo, N.D., is not going to get you a representative sample when trying to determine how many Americans are of Norwegian descent (such a sample will be biased by over-representing that group).
Most bias can be eliminated by stratifying the sample. Stratification is the process of sorting the sample into groups ("strata") that have been identified in advance as highly relevant to the issue being studied. For instance, when the topic is abortion, both a person's sex and religious affiliation will influence his or her position. If we cannot get a highly random sample, then we should stratify our sample for at least these two factors. So in addition to asking them whatever we want to know, we need to determine these additional facts. If we sample more than we really need, we can then sort the sample into the relevant subgroups, see if any are over-represented, and then randomly eliminate samples from the over-represented groups until we reach a sample in which the major sub-groups are represented according to their numbers in the general population. Notice that stratification only works when we already know which subgroups are relevant to the issue and we know their numbers in the general population.
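The down-sampling procedure just described can be sketched in a few lines of Python. This is an illustration only, not part of the original notes: the field name "sex", the 50/50 population split, and the sample counts are all made-up assumptions.

```python
import random

def stratify(sample, population_shares, key, seed=0):
    """Randomly eliminate records from over-represented strata until each
    stratum's share of the sample matches its known share of the general
    population.  Assumes every share in population_shares is positive."""
    rng = random.Random(seed)
    # Sort the sample into the strata identified in advance.
    strata = {s: [r for r in sample if r[key] == s] for s in population_shares}
    # The final sample size is limited by the most under-represented stratum.
    n = min(int(len(strata[s]) / share) for s, share in population_shares.items())
    out = []
    for s, share in population_shares.items():
        group = list(strata[s])
        rng.shuffle(group)               # randomly choose which records to keep
        out.extend(group[:round(n * share)])
    return out

# Toy example: women are over-represented in this raw sample relative to
# an assumed 50/50 split in the general population.
raw = [{"sex": "F"}] * 70 + [{"sex": "M"}] * 30
balanced = stratify(raw, {"F": 0.5, "M": 0.5}, key="sex")
```

Notice that, exactly as the paragraph above says, this only works because the strata and their shares in the general population were supplied in advance.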
Confirming evidence is a special case of biased sample. It is the special case of using a method that gets the arguer or researcher a result favorable to his or her desired conclusion. The method can create this bias either intentionally or accidentally. Either way, the fallacy consists in choosing a method that generates a large sample of one very specific, highly unrepresentative group. There are two ways to generate confirming evidence. One is to use a method that initially selects a sample from one particular group. For example, if I want to know how many Americans think that it's time for a woman to become the U.S. President, then I would generate confirming evidence if I only called women. It would be even worse if I only called women whose telephone numbers were provided by N.O.W. (the National Organization for Women). Similarly, if I'm interested in supporting the idea that video games harm children, I might generate my sample of children by choosing children at a juvenile corrections facility.
The other method to generate confirming evidence is to ask a question that automatically favors one answer over other possible answers. If I want to "discover" strong support for private schools, I might ask people, "Do you favor continuing massive subsidies for our failing public schools or do you support directing some of that money to school vouchers that give parents a choice?" This method involves the fallacy of the leading or loaded question.
If one generates a sample with confirming evidence, stratification is not going to remove the biases from the sample.
Arguments that generalize are unsound when they have a biased sample or involve confirming evidence.
Categorical syllogism
Most of the categorical syllogisms that we encounter have at least one premise that is a universal claim (i.e., of the type All S are P or the type No S are P). Categorical syllogisms without a universal premise are always invalid.
It can be quite difficult to tell if a categorical syllogism is valid. For most people, learning to make a diagram of the premises is probably the simplest and most reliable of the available methods.
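For readers who prefer a mechanical check to diagrams, validity can also be tested by brute force: enumerate every way a small universe of individuals can be sorted into the classes S, M, and P, and look for a model that makes all the premises true and the conclusion false. The Python sketch below is an illustration, not part of the original notes; it uses the modern reading on which "All X are Y" counts as true when there are no Xs.

```python
from itertools import product

# A "model" assigns each of three individuals a membership set drawn from
# the class letters S, M, P.  A categorical claim is a test on the model;
# an argument form is valid when no model makes every premise true and
# the conclusion false.  A 3-individual universe suffices for these forms.

def all_x_are_y(model, x, y):   # "All X are Y"
    return all(y in row for row in model if x in row)

def some_x_are_y(model, x, y):  # "Some X are Y"
    return any(x in row and y in row for row in model)

def valid(premises, conclusion):
    memberships = [frozenset(c for c, keep in zip("SMP", bits) if keep)
                   for bits in product([0, 1], repeat=3)]
    return not any(
        all(p(model) for p in premises) and not conclusion(model)
        for model in product(memberships, repeat=3))

# "All M are P; all S are M; therefore all S are P" has a universal
# premise and is valid:
assert valid([lambda m: all_x_are_y(m, "M", "P"),
              lambda m: all_x_are_y(m, "S", "M")],
             lambda m: all_x_are_y(m, "S", "P"))

# "Some M are P; some S are M; therefore some S are P" has no universal
# premise, and the checker finds a counterexample:
assert not valid([lambda m: some_x_are_y(m, "M", "P"),
                  lambda m: some_x_are_y(m, "S", "M")],
                 lambda m: some_x_are_y(m, "S", "P"))
```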
Causal argument
Besides being positive (argument to confirm) or negative (argument to disconfirm), the conclusion may be about a specific case or about what generally happens. The first sort is a cause of a particular case and the second sort is a cause in a population. (Technically, arguments about populations will be a special type of argument that generalizes.) Therefore there are four types of causal arguments:
1. Argument to confirm a particular cause.
2. Argument to disconfirm a particular cause.
3. Argument to confirm a cause in a population.
4. Argument to disconfirm a cause in a population.
When the correlation is strong and there are no other known causes of the same effect, we can word the conclusion of the third type as "X probably causes Y." We must fully understand the normal background conditions for the cause and effect relationship before we can use this wording. But most causes are not understood well enough to justify this wording.
Evaluation of causal arguments: As with any other argument, first determine whether the premises are true. Since correlations are demonstrated through sampling and generalizing, we should not accept the claim of correlation until we agree that the process of generalizing would be sound. In other words, we must evaluate the claim of a correlation as being an inductive generalization from samples.
But agreeing to the correlation is not enough to make the causal conclusion sound. By itself, a correlation never proves a cause. (Example: The craze for pet rocks may have coincided with the peak year for sales of disco music, but nobody thinks that disco causes pet rocks or that pet rocks cause disco music.) The arguer must take care to rule out the causal fallacies that often lead one to postulate a cause when there's nothing but a correlation.
Four fallacies pose problems for arguments that try to demonstrate a cause in a population. The presence of any of these four problems will make a causal argument weak. The four are:
The fallacy of reversing cause and effect
The fallacy of coincidental correlation
The fallacy of overlooking a common cause
The fallacy of Post Hoc
Chain or Hypothetical Argument
Chain arguments are often set up in order to lead us from one action to another, in order to point out that a seemingly innocent decision will have terrible results. It is assumed that we will not like these results, leading us to add a modus tollens step at the end of the chain. This would give us a valid conclusion that we shouldn't take that first step!
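The validity of this chain-plus-modus-tollens pattern can be verified by checking every possible assignment of truth values. The Python sketch below is illustrative only, not part of the original notes:

```python
from itertools import product

def entails(premises, conclusion, n_vars=3):
    """An argument form is valid when no assignment of truth values makes
    every premise true while making the conclusion false."""
    return not any(
        all(p(*vals) for p in premises) and not conclusion(*vals)
        for vals in product([False, True], repeat=n_vars))

implies = lambda a, b: (not a) or b

# Chain: "If A then B; if B then C", plus the modus tollens step "not C",
# validly yields "not A" -- don't take that first step.
chain = [lambda a, b, c: implies(a, b),
         lambda a, b, c: implies(b, c),
         lambda a, b, c: not c]
assert entails(chain, lambda a, b, c: not a)

# Without the added "not C" step, "not A" does not follow from the chain:
assert not entails(chain[:2], lambda a, b, c: not a)
```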
#1 is false (it's not orange), but it's still a claim. #2 is true, since on the moon there is less gravitational force and no atmosphere to create drag on a golf ball.
Questions and commands are not claims.
It is difficult to determine the truth of a claim if it is unclear in any way. For instance, pronouns can make it unclear who the claim is about.
One claim is the contradictory of another if its truth shows that the other is false.
The contradictory of a conditional claim is the true description of any example where the antecedent is true but the consequent is false. (Such a case is called a counterexample.)
The contradictory of a universal claim is also any description of a counterexample.
The contradictory of a disjunction will be the information that none of the disjuncts (the choices) is true. This is most easily presented by naming an additional choice which, if selected, will mean that the specified choices are false.
When a false disjunction is the basis of an argument, it is called a false dilemma.
Sometimes we deal with an issue in which it is impossible to totally exclude the cause from the control group. In that case, we compare a group in which the cause is present in higher amounts (the experimental group) with a group in which it is present in lower amounts (the control group).
When the researcher introduces the cause into the sample and withholds it from the second sample, thus creating the experimental and control groups, we have a controlled design. This is also known as a randomized experimental design or, more simply, an experiment. The obvious advantage is that the researcher can control other variables that might affect the outcome. It is much less likely to result in a causal fallacy.
An uncontrolled design is used when the researcher cannot run a randomized experimental design (it might be illegal or immoral or just not very practical). In that case, the researcher has to go and find existing cases for the experimental group, and find similar cases where the suspected cause is absent or found at lower rates (the control group). This is also known as a prospective design (the researcher goes "prospecting" for the cause), or it is simply known as a study.
Sometimes the same group is used for both the control and experimental group, by comparing how things were before an event and then again after. To argue that the September 11 terrorist attacks on the World Trade Center in New York caused insomnia in Americans, we could compare insomnia rates in the months before and after the attack. (Yes, there are researchers who track these rates.) Comparing the before and after rates in the same group (Americans) is much easier than trying to find two groups of Americans, those who don't know about the attack and those who know about it! Of course, in this case it is an uncontrolled study and we would have to look for the possible causal fallacies.
Correlation
A correlation is a measurable relationship between two variables. A correlation can be either positive or negative. Combined with enough other information, correlations play an important role in determining whether one thing is a cause of another.
A variable is an identifiable, changing feature of the world. In other words, it's something that varies! For example, air temperature and humidity are two different variables that affect how hot or cold we feel.
As the examples of air temperature and humidity suggest, it is important NOT to approach variables with the "false dilemma" thinking that you either have it or you don't. Many variables are present or absent to some measurable degree. If we suspect that something is a variable that has an influence on something else, we will often miss its true significance if we treat it as an "all or none thing." For example, smoking is a variable that is often linked to lung cancer. But we can't just divide the world into smokers and non-smokers. Different smokers smoke in different amounts, and most non-smokers inhale some tobacco smoke sometimes. (For example, non-smokers who live with smokers may actually have a higher exposure to the variable than do smokers who don't smoke very much.) The lesson here is that SOME variables are "all or none" (an animal is either a wombat or it isn't -- an animal can't be 50% wombat) and some are matters of degree (whether an animal is dangerous is a matter of degree: pit bulls and fleas are both dangerous, but in very different degrees).
A POSITIVE correlation holds when an increase in the presence of one outcome for the first variable is associated with an increase in the presence of a particular outcome for the second variable, or when a decrease in the one is associated with a decrease in the other.
The positive association can be an overall pattern that does not necessarily occur in all cases.
FOR EXAMPLE, the height from which an object is dropped is positively correlated with the force of its impact. (The higher the drop point, the higher the force of impact. But not always. Some objects dropped from a height might have parachutes attached to them, so their impact might be more like that of an object dropped from a much lower height.) Observing that being male is positively correlated with violent behavior DOES NOT MEAN that every male is violent, and it DOES NOT MEAN that women aren't ever violent.
A NEGATIVE correlation holds when the increase in the presence of one outcome for the first variable is associated with a decrease in the presence of a particular outcome for the second variable. Again, the association can be an overall pattern that does not necessarily occur in all cases. FOR EXAMPLE, years spent in higher education (beyond high school) is negatively correlated with enjoyment of professional wrestling: the more college credits a person has completed, the less the person enjoys pro wrestling. But not necessarily. A small number of Ph.D.'s go to people who enjoy pro wrestling. ANOTHER EXAMPLE: Regular flossing of your teeth is negatively correlated with gum disease.
There is NO correlation when there are no regular patterns of increase and decrease for pairings of the two variables. FOR EXAMPLE, the number of panda bears living in zoos changes every year, and the total amount of ice sold by Igloo Ice Company changes every year, but the patterns of decline and increase are not associated. Although the number of panda bears living in zoos has steadily increased every year for the past decade, sales by the Igloo Ice Company have gone up and down from year to year over the same period. So if Igloo Ice sales increase the month that the local zoo gets a new panda bear, we should regard this sequence of events as a COINCIDENCE.
Although one might ASSUME that "common sense" can tell what variable will be correlated with which other variables, there are often surprises. Many people believe that women talk more than men (that being female is positively correlated with verbosity), but recent studies in several countries reveal no differences in average number of words spoken by men and women on a daily basis. Or, to take another interesting example, one might think that being a member of a church that opposes abortion would be negatively correlated with having abortions. The Roman Catholic church is officially against the practice of abortion. Yet in a recent multi-year period in which 23% of all Americans identified themselves as Roman Catholic, 27% of women receiving abortions in the U.S. identified themselves as Roman Catholic. So being Roman Catholic is positively correlated with abortion in the U.S. (There's an even stronger positive correlation between being an atheist and having an abortion.) There is a negative correlation between being a Protestant and receiving an abortion. However, there is also a negative correlation between being married and receiving an abortion, and it is a much stronger correlation than the negative correlation between being Protestant and abortion.
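The sign of a correlation can be made precise with the sample correlation coefficient (Pearson's r): positive r means a positive correlation, negative r a negative one. The Python sketch below is illustrative only, with made-up numbers echoing the drop-height and flossing examples above.

```python
def pearson_r(xs, ys):
    """Sample correlation coefficient: the sign shows whether the
    correlation is positive or negative; the magnitude (0 to 1) shows
    how strong the overall pattern is."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Drop height vs. impact force: higher drops, harder impacts (positive).
drop = [1, 2, 3, 4, 5]
impact = [10, 22, 29, 41, 48]

# Flossing frequency vs. a gum-disease score (negative).  The overall
# pattern holds even though one pair bucks the trend, just as the notes
# say a correlation need not hold in every case.
floss = [0, 1, 2, 3, 4]
disease = [9, 7, 8, 4, 2]
```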
Critical thinking
In response to claims of any kind, a critical thinker demands adequate evidence before accepting or acting on a claim. Essentially, critical thinking is an evaluative stance. The primary characteristics of someone engaged in critical thinking are curiosity, a questioning attitude, a demand for evidence, and suspicion of extreme positions.
Fallacy of coincidence (Looking too hard)
Coincidence is a fallacy of sampling, but it is mainly a concern when we use sampling to establish a correlation in order to establish a cause. Most of the time, the fallacy consists of extrapolating too quickly. For instance, it happens when a small preliminary study suggests a correlation and thus a cause, but the correlation vanishes when we try to replicate the study. Other times, it is the fallacy of putting too much trust in a sample that, due to no other fallacy, fails to represent the general picture.
Conditional claim
When most clearly expressed, conditionals take the form "If A then B."
Go back to the example about cows and milk. The antecedent is "cows are mammals" and the consequent is "cows give milk." The conditional claims that the truth of "cows are mammals" is sufficient for the truth of "cows give milk." In general, a conditional is true when the antecedent is a sufficient condition for the consequent.
But there are many other ways to express a conditional claim besides the form "If A then B."
It's invalid because the premises might be true while the conclusion remains false: something else might make the consequent of the first premise true. (In this example, perhaps Sam is well rested but didn't study for the exam.)
We evaluate an argument when we decide whether it is good (sound) or bad (unsound). This determination requires attention to both the form of the argument (its logical pattern) and to the truth of each of the claims that go into making up the pattern. So evaluating an argument requires evaluating both its type and its premises.
We should accept a claim (and thus agree that it's true) when one of the following holds:
We treat a claim as false when we have personal experience that shows it to be false, or we have agreement among reputable experts saying it is false, or we have another claim that contradicts it (and the one that contradicts it is true because of any of 1-6 above).
Rule of thumb: Never accept any controversial claim from an unknown source. Instead, suspend judgment!
Rule of thumb: Never accept any claim that you do not understand. Instead, suspend judgment!
Many claims can be determined to be true or false through personal experience (e.g., "The sky normally looks green" is false) or because of common knowledge (e.g., "The United States currently has 50 states" is true, but you probably don't know where or when you learned this fact).
It is a mistake (a fallacy) to dismiss a premise as false (or an argument as bad) merely because of who gave it. The identity of the arguer is only relevant if their testimony is supposed to be our reason for accepting the truth of what is said. But in that case, we must restrict our discussion of the arguer to facts that actually relate to the arguer's reliability on the topic being discussed. To discuss irrelevant facts about the arguer is the fallacy of ad hominem (e.g., "Why should I listen to Professor Oogle's argument about stock market bubbles? Look at the ugly shirts he wears.")
In order to evaluate premises or evidence, a critical thinker must identify the TYPE of evidence that is being offered. The most important kinds of evidence are the following:
Each of these has its drawbacks or weaknesses (i.e., its associated fallacies), and a good critical thinker is familiar with these and is on the look-out for them.
Excluding Possibilities (also called a Disjunctive Syllogism)
The basic standard form for these arguments:
Because the order of the disjuncts makes no difference, this version of the argument is equally valid:
It is invalid when we agree with one of the choices and then conclude that the other is false:
Existential Claim (also called a PARTICULAR CLAIM)
Existential claims can be affirmative ("Some cute animals have big eyes") or negative ("Some cute animals do not have big eyes").
To demonstrate that a particular claim is false, one must know that a contradictory universal claim is true.
False or faulty analogy
When an argument by analogy overlooks significant differences, it is subject to this fallacy and is inductively weak. When it has a false analogy, it is unsound. Obviously, if you find an analogy but it's not intended as an argument, then it does not commit the fallacy.
To accuse it of false or faulty analogy, one can point to any of several problems with the analogy.
First, one might note at least one significant difference between the things being compared, and then explain how the difference is relevant to the issue being debated.
Second, one might point out that one of the things being compared is a fiction. It is pure speculation.
Third, we might ridicule the analogy by showing that, when another feature of it is considered, it actually supports the opposite of what it is intended to show. The famous "this is your brain on drugs" can be ridiculed by pointing out that, since hot greased frying pans are desirable objects in some circumstances (like when one wants to fry a meal), it follows that drugs that "fry your brain" are also desirable in some circumstances, such as in a controlled recreational setting. So the anti-drug argument rests on a faulty analogy.
False Dilemma (Limited Options Fallacy)
We show that the fallacy has taken place by pointing out one or more plausible but overlooked options.
An inventory or census is not a sample, for they involve looking at (or at least intending to look at) every case. Sampling is a situation in which there is no attempt to look at every case. When the group is a group of humans, sampling is often done by polling or surveying (that is, through verbal or written communication) rather than by "looking" in the normal sense of the term. Sampling is found in the work of a field anthropologist who visits a tribe in the jungles of South America and then returns and reports on the culture (the anthropologist did not look at all the villages of the tribe). Sampling is also at work when a newspaper reports that the President's approval rating has risen or fallen (the newspaper did not try to talk to everyone in the United States).
Strong versus weak generalizations
Generalizations are weak when they employ an untrustworthy sample. They are strong if the sampling process has no fallacies.
Sampling is subject to four fallacies: biased sampling, confirming evidence, hasty generalization, and anecdotal evidence. (The first pair relate to the method used to get the sample. The last two relate to the size of the sample.)
Hasty generalization
The fallacy of generalizing from a sample that is too small and failing to take this into account in the conclusion of the argument. "Small" is a relative term here. A sample that is adequate in size when asking people who they plan to vote for will be too small for medical research that looks for low levels of allergic response to a new medication. Basically, the fallacy occurs when we present a conclusion that is more precise than the sample warrants.
The more variety there is in the population, the larger the sample that's needed to generalize with confidence.
If we can be confident that the population is extremely homogenous (all its members are alike with respect to what we're studying), then a few cases will be sufficient. If we do not have this confidence, then we should be careful to consider how our sample size generates a margin of error.
To be more precise, different sample sizes create different margins of error for the resulting generalization. These margins specify the range within which we can expect to find the correct answer. A 10% margin of error means that the truth is somewhere within a range of 10% on either side of the reported number. A 3% margin of error (commonly found in professional polling) means that the truth is somewhere within a range of 3% on either side of the reported number.
For example, suppose an unbiased sample of 100 Americans gets 55 positive responses to the question "Do you like ice cream with apple pie?" There is no fallacy of hasty generalization if the conclusion is reported as "Approximately 55% of Americans like ice cream with apple pie." There is a fallacy if we report it as "A clear majority of Americans like ice cream with apple pie." This is misleading, because our sample is consistent with the result that as few as 45% agree.
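Margin-of-error figures like these come from a standard statistical approximation that the notes do not spell out: for a proportion estimated from an unbiased sample of size n, the 95% margin of error is roughly 1.96 times the square root of p(1-p)/n. A quick Python sketch, illustrative rather than part of the original notes:

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """Approximate 95% margin of error for a proportion p_hat estimated
    from an unbiased sample of size n (z = 1.96 for 95% confidence)."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# 55 of 100 say yes: the margin is nearly +/-10 points, so the truth
# could plausibly be anywhere from roughly 45% to 65% -- which is why
# "a clear majority" overstates what this sample shows.
m = margin_of_error(0.55, 100)

# The same 55% from 1,000 respondents pins things down much more tightly.
m_big = margin_of_error(0.55, 1000)
```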
Implicit claims and hidden premises
How to find a hidden premise: When you have a premise and a conclusion but it's not clear how they are linked, the solution is generally one of three things:
Some words (premise indicators) typically indicate that what immediately follows is a premise of an argument:
Sometimes whole phrases work as indicators. "It follows that" is a conclusion indicator, while "for the reason that" is a premise indicator.
But we often lack the information necessary to construct a sound deductive argument. In that case, the best argument strategy is to construct an inductive argument.
Although there is no question of validity with an inductive argument, a properly constructed inductive argument with true premises can succeed in showing that the conclusion is likely. When it succeeds, the argument can be called "sound" in the sense that it is inductively strong. Some people prefer to restrict the word "sound" to successful deductive arguments, but there is an increasing willingness to use "sound" to mean a successful argument of any kind.
There are four important types of inductive argument:
Here is an example of a simple argument: "The recession has been going on for two years already, so the economy will improve this year."
Now compare these two examples:
Example 1: Will the economy improve this year?
Example 2: What's going on with the economy?
The two examples share the same TOPIC, which is the economy. Example 1 is an issue that can be addressed with an argument (no change in the economy would be included under the "no" response). Example 2 is not the issue of an argument. It calls for an explanation, not an argument. So Example 1 is the issue for our sample argument.
For additional examples of issues for arguments, look at the sample portfolio pages.
Modus ponens
(The Latin name "modus ponens" means something like "the method that affirms.")
Be careful not to confuse modus ponens with the fallacy of affirming the consequent.
Modus tollens
(The Latin name "modus tollens" means something like "the method that denies.")
Be careful not to confuse modus tollens with the fallacy of denying the antecedent.
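The contrast between the two valid forms (modus ponens, modus tollens) and their look-alike fallacies (affirming the consequent, denying the antecedent) can be checked mechanically: a form is valid exactly when no row of its truth table makes all the premises true and the conclusion false. An illustrative Python sketch, not part of the original notes:

```python
from itertools import product

implies = lambda a, b: (not a) or b
rows = list(product([False, True], repeat=2))   # every (A, B) truth combination

# Modus ponens: "If A then B; A; therefore B" -- valid: no row makes both
# premises true while the conclusion is false.
assert all(b for a, b in rows if implies(a, b) and a)

# Affirming the consequent: "If A then B; B; therefore A" -- invalid:
# the row A=False, B=True is a counterexample.
assert not all(a for a, b in rows if implies(a, b) and b)

# Modus tollens: "If A then B; not B; therefore not A" -- valid.
assert all(not a for a, b in rows if implies(a, b) and not b)

# Denying the antecedent: "If A then B; not A; therefore not B" -- invalid:
# the same row A=False, B=True is a counterexample.
assert not all(not b for a, b in rows if implies(a, b) and not a)
```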
Overlooking common cause
This fallacy is restricted to arguments to establish a cause. It is the mistake of finding a correlation between two things, then drawing a conclusion without checking for other variables that are also correlated with those two. This problem does not occur in a controlled experiment, but it is a common problem in a study of existing behaviors and events.
Let's suppose we correlate two things, A and B. But perhaps A keeps turning up with B because some previous thing, X, is independently causing A and independently causing B. Here, X is the common cause of A and B.
Failure to screen for such things is the fallacy of overlooking common cause.
Persuasive definition is also known as misuse of hypothesis.
Post hoc fallacy
The full title is Post Hoc, Ergo Propter Hoc. This Latin phrase means "After it, therefore because of it."
It is the fallacy of thinking that if one thing happens and then another thing happens, the first thing was the cause of the second. But time order alone cannot show that something is a cause. At the very least, we also need a control group! The Post Hoc Fallacy occurs when someone does not understand the need for a control group and draws a conclusion based on nothing but the time relationship. A contributing problem is that the time relationship is so obvious that the person jumps to a causal conclusion using only the most anecdotal of evidence.
Many superstitions are based on post hoc reasoning.
Principle of rational discussion
This principle is also known as the principle of charitable interpretation.
The point of the principle is to guide the interpretation of a vague or incomplete argument. It helps us decide what implicit information to make explicit in our standard form reconstruction. The goal of interpretation is to construct from the stated claims the best possible argument, given the author's point of view. We use the principle so that our interpretation and evaluation of another's argument does not commit the fallacy of strawman.
Unfortunately, there are cases where it is perfectly clear that the person giving the argument intends to violate one of the three elements of the principle of rational discussion. (Usually, it will become clear that they don't intend to give a good argument, or they don't care about the truth of what they're saying.) In this case, we simply do our best to capture what the arguer intends to say.
Reductio ad absurdum
Also known as the indirect method: you show that a claim is false by initially pretending that it is true and then showing that an absurd result follows from that assumption. It is "indirect" because you do not directly confront their premises. (In its strict form, you show that an actual contradiction follows from the assumption.)
This technique can be used to respond to an argument. While it does not prove WHICH premise in a group of premises is faulty, if we can show that a set of premises leads to an absurdity, then we have no reason to regard that set as trustworthy. (We know something is wrong with the connection between the premises and the conclusion.) In using the reductio technique, we evaluate an argument by AGREEING with it, then showing that a false conclusion follows just as plausibly as THEIR conclusion does. In this way we have cast doubt on the plausibility of the support the premises give to their conclusion.
This technique is particularly useful for responding to an argument by analogy, where you think the analogy is weak but are having trouble saying just what is not really analogous in the comparison.
Example: A speaker says "All cute animals have big eyes, so Muzzles has big eyes." This is defective as it stands, because Muzzles could be a threatening, ugly animal like a shark. Or Muzzles might not be an animal at all, but the name of a town in North Dakota. The speaker evidently intends for us to understand that there is an implicit premise, "Muzzles is a cute animal." We would add this premise when putting the argument into standard form.
How do we know that this is what the speaker intends? General rule of thumb: Nothing belongs in a conclusion that isn't in the premises. Here, "Muzzles" is in the conclusion but not the premises. So we have a hole in the argument concerning Muzzles. The arguer is obviously making an assumption about Muzzles, and we repair the argument by stating that assumption about Muzzles. General rule of thumb for repairs: Find what's in the conclusion but not mentioned anywhere in the premises. Make that the subject of the repair claim. Finish writing the repair claim by incorporating the information that is in the premises but not mentioned in the conclusion.
Although we try to avoid it, sometimes our goal of capturing what the speaker intends means that we must repair an argument in a way that leaves it as a bad argument. (For an example, click here)
Reversing cause and effect
This fallacy is restricted to arguments that attempt to establish a cause. The problem does not occur in a controlled experiment, but it is a common problem in a study of existing behaviors and events.
The fallacy occurs when we have a genuine correlation, but we have not clearly established which of the two things really comes first. We simply assume we know which comes first, when in reality it may be the other way around. If it is plausible that we have the time order reversed, then the argument is unsound due to this fallacy.
Slippery slope
There are two versions of this fallacy. Both are given this name because they share a common idea: that taking a first step will lead us to something we don't want. It is the unjustified assumption of this idea that is the fallacy. (When the assumption is justified, there's no fallacy, even if the argument otherwise looks like any other slope.)
The assumption in question is that choosing one thing leads to, or is equivalent to, choosing a second thing. But the move from the first to the second is not immediate: one leads to the other (or is shown to be equivalent to it) by a series of small, plausible steps. The result is then noted to be undesirable, and therefore (by the valid move of modus tollens) we are advised to avoid the first.
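The underlying form can be sketched with a brute-force truth-table check (a minimal illustration, not anything from the original page): a chain of conditionals plus a denial of the bad result validly yields a denial of the first step. The check below confirms the form is valid, which is exactly why the fallacy must lie in the unjustified conditional "steps," not in the structure.

```python
from itertools import product

def implies(p, q):
    """Material conditional: p -> q."""
    return (not p) or q

# Chained modus tollens, the skeleton behind a slippery slope:
#   A -> B, B -> C, not C, therefore not A
# A form is valid when NO truth assignment makes all premises true
# while the conclusion is false.
valid = True
for a, b, c in product([True, False], repeat=3):
    premises = implies(a, b) and implies(b, c) and (not c)
    conclusion = not a
    if premises and not conclusion:
        valid = False

print(valid)  # True: the form is valid; the fallacy is the
              # unjustified assumption of the conditional premises.
```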
For more information and examples, click here.
Soundness (as in "sound argument" -- the word has more to do with a "sound" or solid object than with what you hear)
Evaluating an argument is the same as determining its soundness.
An argument might be deductively sound or inductively sound. An argument that is not sound is unsound. (Notice that if we apply soundness to both deductive and inductive arguments, then a deductively unsound argument might be inductively sound).
Some people prefer to restrict the word "sound" to successful deductive arguments, but there is an increasing willingness to use "sound" to mean a successful argument of any kind.
Example as done in Epstein's Pocket Guide, page 10:
A better way to put an argument into standard form is to number the premises and to replace the conclusion indicator with a solid line.
Stereotypes are a special case of generalization. A stereotype is an unjustified judgment that a certain feature is typical or uniform within a specific group.
Stereotypes are not necessarily negative. For example, one common stereotype about medical doctors is that they are financially well off. One common stereotype of college professors is that they are very intelligent. But stereotypes are often negative: "blond" jokes are based on the stereotype that blonds lack intelligence, and most "lawyer" jokes are based on the stereotype that lawyers are greedy and cruel.
Of course, many stereotypes are very harmful. For example, one common mode of stereotyping combines generalizing about a group of people with an ethnocentric view of their inferiority. Many Americans stereotype the people of eastern Asia as users of chopsticks. (In actuality, some of the cultures of eastern Asia have not adopted the use of chopsticks.) This might encourage unsound inferences, such as the inductive argument that, because Piak is from Thailand, which is in Asia, Piak eats with chopsticks.
This stereotype about people from eastern Asia is ethnocentric only if it is combined with the assumption that the American way of eating is superior or "normal" in comparison.
It is almost impossible to deal with complex information without stereotyping. Many psychologists now believe that concept-formation requires the creation of stereotypes.
There are three degrees of support.
A valid argument offers evidence that, if true, guarantees the truth of the conclusion. But with most topics we just cannot find information that will guarantee our conclusion, and we are better off seeking a strong argument rather than a valid one.
A strong argument gives us good reasons to accept its conclusion. There is no plausible reason that the conclusion would be false when the premises are true.
A weak argument gives no good reasons to accept its conclusion. It leaves some "hole" in the argument: we can identify additional information that would easily allow us to avoid the conclusion while accepting the premises. Fallacies always make an argument weak.
The actual truth of the premises is not relevant to the degree of support. We can see how much support they give even if we don't know that they are true. We want to know if an argument is valid or strong so we'll know whether the truth of the premises is worth investigating!
Example: "All arachnids weigh less than one pound, and spiders are arachnids, so the spider in the bathroom weighs less than one pound." This argument has valid support for its conclusion. I don't actually know whether all arachnids weight less than one pound, but if it IS true, and if spiders are really arachnids, then the one in the bathroom weighs less than a pound! And that's all we are saying when we say it is valid.
Universal claims are most clearly written as "All A are B," and universal negative claims as "No A are B."
A universal claim is false when just one member of the group or class fails to have the feature in question. So if we found just one cute animal with small eyes, we would have a counter-example to "All cute animals have big eyes." A universal negative claim is false if just one member of the group or class has the feature.
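The "one counter-example is enough" point can be illustrated with a short sketch (the animals and their features below are made up for the example):

```python
# Hypothetical data: each entry is (name, is_cute, has_big_eyes).
animals = [
    ("Muzzles", True, True),
    ("Rex", True, True),
    ("Pugsley", True, False),  # a cute animal with small eyes
]

# "All cute animals have big eyes" is a universal claim: true only if
# EVERY cute animal in the class has big eyes.
claim = all(big_eyes for (_, cute, big_eyes) in animals if cute)

# A single member lacking the feature falsifies the claim.
counterexamples = [name for (name, cute, big_eyes) in animals
                   if cute and not big_eyes]

print(claim)            # False
print(counterexamples)  # ['Pugsley']
```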
Validity
Informally, we say that there are no holes in the argument.
To be more technical, an argument is valid (has validity) when there is no possible way for the conclusion to be false if the premises are true. Arguments are invalid when they aim at guaranteeing the conclusion's truth but fail to do so even when the premises are true. In other words, the structure of the argument allows the conclusion to be false when the premises are true. An invalid argument might still be good by meeting a weaker standard of support, by being strong.
With some arguments, validity is very obvious. One can see that agreeing with the premises will require agreeing with the conclusion.
Example of obvious validity:
But in other cases the validity will not be so obvious. The reason is that the issue of validity is independent of the actual truth of the premises. We're evaluating what the connection between premises and conclusion would be IF the premises were true. The following is valid:
Will the first person born in Maine in 2006 live to be 90? I don't know! You don't know, either! We don't even know who that person is. So we don't know if premise 2 is true. And premise 1 is silly. The two parts have nothing to do with each other. Yet the implausibility of premise 1 and the uncertainty about premise 2 cannot deprive the argument of validity: if both premises are true, then the conclusion must be true. This argument is valid but unsound. (It is a valid argument of the excluding possibilities type.)
Look at the conclusion. Are the words written in Japanese? No. But a valid argument can have a false conclusion. (Remember: we don't dismiss an argument as a bad one just because we don't like its conclusion.) If the two premises WERE true, then those words would have to be written in Japanese. So it's valid. Validity is a matter of whether the conclusion would be true when the premises are true. In this case, there is no way the conclusion could be false while the premises are true.
Here is a less silly example of a valid case of excluding possibilities with a false conclusion:
Think about the premises. If the first one is true (if there are really only these two choices), and the second is also true (we can't find it on our map of North Dakota after checking the name of every town), then we'd better start looking for it in Manitoba.
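The excluding-possibilities form itself ("P or Q; not P; therefore Q") can be verified exhaustively, as in this small sketch (the check is an illustration added here, not part of the original page):

```python
from itertools import product

def is_valid():
    """Check 'P or Q, not P, therefore Q' (excluding possibilities).

    A form is valid when no truth assignment makes all premises true
    while the conclusion is false.
    """
    for p, q in product([True, False], repeat=2):
        premises = (p or q) and (not p)
        if premises and not q:
            return False
    return True

print(is_valid())  # True: the form guarantees the conclusion only
                   # IF both premises are actually true, which is why
                   # a valid argument can still have a false conclusion.
```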
Warning: A sentence is not vague just because you, its audience, can't understand it. The average person doesn't understand what "E=mc²" means, but that doesn't make it vague. Within the context of physics, it is quite clear.
If a sentence is purposely vague, and neither context nor speaker clarifies it, then it should not be counted as a claim within the context of an argument. Example: "Too much TV is bad for children, so you should monitor your child's TV time." The premise of this argument is hopelessly vague unless the speaker clarifies what counts as too much. The argument is automatically weak due to its failure to provide a legitimate claim as a premise.
Explanations and examples (but not the graphics) on this page © 2002, 2003, 2007 Theodore Gracyk
Last updated OCT. 22, 2007