So we're in our third week now. Uh, I'm not really sure what to say about that, other than the fact that we're three weeks in. It goes fast. It's a 12-week class. Right, we're 25% done at the end of this week.
I hope everything is becoming clear. It's going to be kind of a short week on the lectures, and really weeks two and three, as I said previously, are to be taken as one unit. You might notice the similar look of the slides on your screen. That's because they come from the same set. They're really dealing in subject matter that I feel is interlocked, and there isn't a difference between the two so significant that they can't be compacted into one.
So I'm gonna put another glossary up, as I did in the first week, but it'll be for the things we got here in weeks two and three. There will be a quiz on this and some things from week two that I'll open up on Friday and Wednesday. And just as a heads up, week four is when I'm going to open up the first exam. I'm going to give you a couple weeks, probably two, maybe even three weeks to do it. So it's no rush.
This is a fast-track class, kind of a, what do they call it, a late-start short-term class. It's 12 weeks rather than the usual 16. And so I'm not going to rush anyone.
I know your lives are probably busy. And if I really tried to hammer through this class in 12 weeks and cover everything, it would be so fast that I don't believe anyone would get the material.
So I'm really trying to take it slow here. If it seems like the class has been kind of slow going, it's because of that. It'll get faster.
It's going to get a whole lot faster. But right now, I'm trying to ease everyone in. It's a lot nicer than just sort of pushing you off a cliff and hoping everything turns out okay.
So to start, we talked about validity, mostly, last week. That was kind of the centerpiece. And as I said, in reasoning in general, but especially in logic, validity is really the central topic.
It's the big selling point of logic, what it's attempting to do. It's its center of gravity. But we're going to talk about something that I mentioned previously, when I said truth doesn't matter to validity, that it doesn't matter if the premises are true.
We're assuming they're true and then hoping that the conclusion could not be otherwise. This stuff can be confusing, I understand: in a deductive argument, the conclusion must be true if the premises are true. If it could be otherwise, and this is where we got the informal test of validity that we talked about, then you have an invalid argument, obviously.
But going along with that, we're going to get truth back this week. And when we recover truth, what we get is soundness. So we have validity of an argument, and we also have soundness. You've probably heard the term sound logic, that kind of thing. It's a sound argument. Well, soundness is something: it's validity plus.
I'll show you. Like I said, in a valid argument only the possibility of truth matters: whether or not the premises could be true. Remember, statements can be true or false, and so we assume the premises are true and then see how they play through to the conclusion. Well, here we're going to do something else, right? Soundness takes account of whether or not the premises and the conclusion of an argument actually are true.
Because a sound argument is one whose premises are true and whose conclusion is true. And a good argument should aim at soundness, in most cases.
There's a lot of argument about this within philosophy circles. In here, let's assume it, though I may say otherwise during my lectures or during any talk that you have with me. I might betray this later on because I'm kind of in the waffling stage on it. I'm not sure where I stand.
But to get to the point before I get onto a tangent, soundness for most purposes implies validity. If an argument is sound, then it's true and it's valid. There are ways to make sound arguments that aren't valid. I'll demonstrate one, but they require an obviously bad form of reasoning that will jump right out at you. So again, they can be one and not the other.
They can be valid and not sound, or sound and not valid, but with the sound-and-not-valid arguments there are questions among philosophers about whether we can even consider them sound. An invalid and unsound argument, on the other hand, is easy to construct. It's not a difficult thing. Something like: if Trump has toes, then he wasn't a Republican. Trump was married 11 times, therefore he's a chimpanzee.
Now, the first one is a bad conditional, because Republicans, at least as far as I know, I've never looked into this, but I'm going to assume that Republicans tend to have toes. I'm neither a Democrat nor a Republican. Again, keeping my politics out of this, I'm not on either side, but I'm pretty sure Republicans have toes.
At least most of them. Trump was married 11 times. It's not true. He was married three times.
and therefore he is a chimpanzee. Whatever you think of Trump, he is not in fact a chimpanzee. So this is not valid in that the conclusion isn't supported at all by the premises, and it's not sound because not one statement in there is true.
And there's a word for this kind of argument. It's nonsense. An invalid and unsound argument.
It's easy to make. You just kind of throw garbage information out that isn't true and then pretend that it links. It's almost like a comedy act at that point, saying...
silly things that have absolutely no relevance to one another, and you just pretend that it does, and it looks like an argument in its patterning, but it obviously isn't. It doesn't take a critical thinking class or a degree in philosophy to see that that argument is just junk. It's just junk information being thrown out.
In fact, it's kind of just invective. Valid and unsound arguments are a little harder to create. I gave one on a previous quiz. It was kind of like this.
So either Obama was a Republican or a Libertarian. All right, so we got this disjunctive. He was either one or he was the other.
And in this case, it's probably exclusive because he couldn't be both. So he was either a Republican or he was a Libertarian. If he was a Libertarian, he wasn't a Republican and vice versa.
Obama was not a Republican. Therefore, Obama was a Libertarian. Now, the second premise is true. But for an argument to be sound, its premises, all of them, and its conclusion must be true. And in this case, not all of the premises and the conclusion are correct, because he was neither a Republican nor a Libertarian. He was a Democrat, which substantiates the fact that he was not a Republican, so that one's true. And to say he was a Libertarian? Well, I just covered that.
He was not. This is a valid argument, though.
This is what's called a disjunctive syllogism. We'll cover it later when we cover formal logic. This is a completely valid form of argument. It's just not sound. It doesn't deliver anything true, and it should jump right out at you if you remember who was president.
Oh, man. I'm really slipping on this one. Well, he got out in 2017, so do the math on it. For some reason, I can't. Seven years ago, almost eight. This is valid, since the conclusion is necessarily true if all the premises are true.
You see that, right? If we assume that he was either a Republican or a Libertarian, and we know that he wasn't a Republican, then we know that he was a Libertarian. It's valid. That's validity on display right there. That's a valid form of argument.
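Here's that pattern laid out schematically, just as a sketch, with P and Q standing in for any two statements:

```latex
% Disjunctive syllogism: the bare valid form
\[
\begin{array}{ll}
\text{P1:} & P \lor Q \\
\text{P2:} & \lnot P \\
\hline
\text{C:}  & \therefore\ Q
\end{array}
\]
```

Plug in P for "Obama was a Republican" and Q for "Obama was a Libertarian" and you get the argument above; the pattern itself is what makes it valid.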
But it's not sound. Well, one of the premises isn't, anyway. The first premise is not true. The second one is true: he was not a Republican.
That's a false dichotomy, is the first thing. Later on we're going to study logical fallacies, argumentative fallacies. This is a false dichotomy.
This will be our first fallacy. It's not true that somebody has to be a Republican or a Libertarian. There are, I don't even know how many political parties out there, but a lot of them, a dozen at least.
And so to say you're either a Republican or you're a Libertarian? No. He could be a registered member of the Peace and Freedom Party. And obviously, Obama's a Democrat, like I said.
This is just a generally crummy argument, even though its patterning is good. This is sometimes called bogus logos. It looks like logic, because it kind of is.
It's following a valid logical pattern, but the information it delivers is no good. So sound and invalid arguments are the hard ones. These are the ones where, as I said, there's a level of argument among philosophers as to whether they can even exist, whether they can be considered arguments. I'll give you an example.
They require our second fallacy here. So we've got the false dichotomy. This is going to be an example of a non sequitur, which is Latin.
It just means does not follow. I'm going with the presidents here. If Biden is over 75 years old, then he is unfit for command. Biden is 81 years old. Therefore, Biden is a senior citizen. This is looking like a good argument, right? Oops. I mean, yes, that's all arguably true. A senior citizen is generally considered somebody who's over the age of 65, and Biden is over 75 years old. And he is unfit for command?
That one, that one, well, depends. Some people would say that. You might think that.
Maybe you don't care. I don't know. But Biden is 81, I think.
He was when I wrote these slides. Maybe he's had a birthday since then. In any case, it was true when this was written. You can make an argument against the first premise, but let's just say that you're a believer in a sort of age limit on people who can run for president, and so that would make him unfit for command. But the argument doesn't end on that.
What it should end on is that he is therefore unfit for command, but instead it concludes that he's a senior citizen. So it's not valid, because the conclusion doesn't follow from the premises, but it's at least arguably true. The conclusion, I'm much more certain about.
All of this is true, but it has nothing to do with the premises. It doesn't follow. Non sequitur. Does not follow.
The conclusion does not follow from its premises. But again, truth is secondary to logic, right? So soundness isn't going to be a central topic in this class. It's really just something that I wanted to go over because it's important in your day-to-day life. If you're going to aim at making an argument and you literally tell someone it doesn't matter if my premises are true or false as long as my argument is valid, you're kind of being a jerk.
Soundness matters in your day-to-day life. It doesn't matter in your logical analysis of an argument, but it does matter in your creation of arguments. If you're writing a paper or something in a later class, it matters if your data is correct.
People are going to call you out on lying or making false statements, that kind of thing. I mean, it's in the Ten Commandments. Nobody likes a liar. And the best arguments are sound.
The best arguments aren't just valid. The conclusion doesn't just follow from the premises. It's also true. Those are the most convincing of the bunch, or at least they're apparently true to the people that are speaking and listening to such a thing.
So this is a huge topic. What we've covered so far with all of this, with soundness and validity and everything, is deduction. We've handled deductive arguments.
Deductive arguments are the ones that we've been talking about where... the conclusion is necessarily true. If the premises are assumed true, then the conclusion must also be true.
It could not be any other way. There's no alternate conclusion that you could draw from the data given. Well, induction's a little different from that.
I'll talk about those probably later in the semester more, but I want to kind of glance past it today. So validity and soundness, as you know them, are deductive principles. They only exist in deductive arguments. Deductive arguments can be valid. Soundness is, well, the kind of soundness that we're talking about that implies validity, well, that only exists in deductive arguments because validity only exists in deductive arguments.
If all the premises are true and well-formed within a deductive argument, then the conclusion is certainly true. It couldn't be any other way. I already covered this. Think of mathematics on this one.
I think a lot of this that we've been doing is so much natural language argumentation that we're missing out on probably the most familiar deductive system that you would know, which is mathematics. Mathematics is deductive reasoning. Look at this. This isn't the usual equation you'll run across in a math class.
But if x equals 3, then x plus y equals 5. Let's say x equals 3 is true; therefore, y must equal 2. That's deductive. It could not be any other way. If x is 3, then x plus y equals 5. We now know that x is 3, it's said in that second premise, and so (I shouldn't say "therefore" there, that's a conclusion indicator) we have to conclude that y is 2. It can't be 1, it can't be 7. It has to be 2 for the second part of the conditional, the "then" part, which, as you'll find out later on, is called a consequent. For the consequent to be correct, y must equal 2. That's a valid argument. It's a valid deductive argument.
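Laid out the way we've been putting arguments in standard form, with the little bit of arithmetic made explicit, it looks like this:

```latex
% The x and y example in standard form, arithmetic step shown
\[
\begin{array}{ll}
\text{P1:} & x = 3 \;\rightarrow\; x + y = 5 \\
\text{P2:} & x = 3 \\
\hline
           & x + y = 5 \quad \text{(from P1 and P2)} \\
\text{C:}  & \therefore\ y = 5 - x = 5 - 3 = 2
\end{array}
\]
```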
Because math as a whole is a deductive system. There's inductive math, but the math that you're going to be doing in most of your classes is deductive. Mathematical induction is for proving theorems: because you can't check every single number you'd plug into a variable, you prove that it works in a certain way by doing inductive steps and whatnot. But that's really advanced, and we're not going to go into it in this class. We're not doing mathematical induction. We're doing a different kind of induction.
We're gonna be doing the kind of induction that they do in the natural sciences. And I know that this is kind of a strange way to put it, but with deduction, the conclusion couldn't be any other way. Inductive arguments are less certain in their conclusions.
They're not like a deductive argument, where it's almost mathematically precise, where it could not be any other way. There's something else going on in an inductive argument. There's no absolute certainty, but a high degree of probability based upon the information given. I mention fuzzy logic here because if any of you ever take an advanced logic class, and it's doubtful that any of you ever will, that's really something that philosophy majors take. Maybe there's one of you out there who's a philosophy major. I didn't get one in the discussion. But if there is, you may eventually transfer to a university and take an advanced logic class where you study fuzzy logic. In here we deal with values almost like a computer does, true and false, one and zero. Well, in fuzzy logic you deal with degrees of truth, or probability, expressed as a decimal.
So if it's 80% likely, then you would say x equals 0.80, that kind of thing. Fuzzy logic, actually, most modern-day clothes dryers use fuzzy logic for their ability to dynamically determine dryness. It's kind of interesting.
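Here's a minimal sketch of that contrast, just to make the idea concrete. The operator definitions (min, max, one minus x) are the standard Zadeh-style fuzzy connectives, not anything specific to this class, and the dryness numbers are made up for illustration.

```python
# Classical logic: truth values are only True or False (1 or 0).
def classical_and(p: bool, q: bool) -> bool:
    return p and q

# Fuzzy logic: truth values live anywhere in [0, 1].
# These are the standard Zadeh-style connectives.
def fuzzy_and(p: float, q: float) -> float:
    return min(p, q)

def fuzzy_or(p: float, q: float) -> float:
    return max(p, q)

def fuzzy_not(p: float) -> float:
    return 1.0 - p

# Made-up dryer example: "the load is dry" is 0.80 true,
# "the load is cool" is 0.35 true.
dry = 0.80
cool = 0.35
print(fuzzy_and(dry, cool))   # 0.35 -- "dry and cool" to degree 0.35
print(fuzzy_not(dry))         # 0.20 -- "not dry" to degree 0.20
```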
So when a conclusion doesn't follow from its premises... oh, what was it? I worded that weird, I'm sorry. If the conclusion doesn't follow from its premises, the argument is not invalid, but highly unlikely to be the case when it's inductive. It's not invalid. You can say that doesn't really make sense yet, and we're going to have terms for these: what is highly unlikely to be the case in an inductive argument and what is highly likely to be the case both have terms.
But neither of these are going to be certain. So, watch this. I hope you guys know what a BMI is, body mass index.
It's a ratio of weight to height; you can look up exactly how they calculate it. I won't explain it here because it's not too important. But if you have a body mass index of over 30, that's a reliable indicator that you're obese, statistically speaking. Let's say there's a guy named Dave, and he has a BMI of 33. Therefore, Dave is probably obese.
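If you do want to see the calculation, here's a tiny sketch. The formula (weight in kilograms divided by height in meters squared) and the 30 cutoff are the standard ones; Dave's numbers are invented for the example.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight divided by height squared."""
    return weight_kg / height_m ** 2

# Invented numbers for Dave, chosen to land around a BMI of 33.
dave_bmi = bmi(weight_kg=100.0, height_m=1.74)
print(round(dave_bmi, 1))   # roughly 33.0
print(dave_bmi > 30)        # True -- statistically, "probably obese"
```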
Probably obese. Statistically speaking, if you have a BMI of 33, you are obese. But probabilities don't always work out the way we hope they would, and they don't always map neatly onto individuals.
Statistical truths don't always translate down to the individual. In fact, probability can be very uncertain in some cases. Think of the gambler's fallacy: you're rolling a six-sided die, your standard Yahtzee-style die, you have a one-in-six chance of getting a 1, and you believe that if you roll it six times you should get a 1. No. Theoretically, you could just roll six 4s in a row. Probability doesn't pan out with any degree of certitude the way that something like mathematics does, so we have to be careful with these things, especially when we're dealing with statistical truths. Let's say, I don't even know where we're at right now in the presidential polls, but they're about split: about half of the people are for Harris, about half are for Trump. Does that mean that if you just grabbed 10 people off the street and lined them up, you'd get five Harris supporters and five Trump supporters? Not necessarily. With such a small sample, you could get 10 Trump supporters, 10 Harris supporters, or some other mixture thereof; very few of one and many of the other. It's not necessarily going to pan out in small individual cases like that.
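Here's a minimal simulation sketch of the die version of that point, assuming nothing beyond what was just said; the exact fraction you see depends on the random draws.

```python
import random

random.seed(0)  # fixed seed so the sketch is repeatable

trials = 100_000
no_one_count = 0
for _ in range(trials):
    rolls = [random.randint(1, 6) for _ in range(6)]
    if 1 not in rolls:          # six rolls, and still no 1
        no_one_count += 1

# In theory, P(no 1 in six rolls) = (5/6)**6, about 0.335.
print(no_one_count / trials)    # comes out near 0.33
print((5 / 6) ** 6)             # 0.3348979766803842
```

So roughly a third of the time, "roll it six times and you'll get a 1" simply fails.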
So BMI is a statistical tool. It's for determining obesity within a population. It's really bad at determining individual obesity. And I'll give you an example in a second.
Because when you move from the statistical to the individual, it's a very error-prone form of reasoning. Like this guy. John Cena's BMI is like 33 or 34 by my calculation, but he doesn't look obese, does he? I mean, what does he have there, about half a percent of body fat going on?
That's not anybody who would be considered obese by any clinician, but his BMI would indicate as much, because statistical truths don't always pan out to individual truths. And our informal test of validity doesn't work in induction because of this uncertainty in the conclusion. The conclusion of an inductive argument is not one that is certain. Dave is probably obese.
Well, you could replace that with Dave is probably not obese. Both of them could be considered conclusions from the data given, because we can always imagine the conclusion of an inductive argument being false. We can always find ways to work around induction.
I know this is kind of mysterious right now, but it'll become more clear as we move forward. So take this: all the birds I have observed have hollow bones. Therefore, I conclude that hollow bones are a feature of the whole group, meaning all birds have hollow bones because all the birds I have observed have hollow bones. We can always imagine a bird with solid bones, right? It wouldn't change our conception of a bird. It might if you're, what do they call it, an ornithologist? I'm not sure what the people who study birds call themselves, but the bird people might say, no, no, no, if they had solid bones, it would completely change our conception of a bird.
I understand that's part of their ability to fly and all of that, but I don't know. I'm not a birdologist or whatever they call themselves. The point being that we can imagine a bird without hollow bones and it wouldn't change anything else we know about birds. We can always kind of imagine the conclusion to be false.
Like this guy. What if there's a bird out there that you haven't seen, and, yeah, it has solid bones? What if? Or what if I observe 50,000 squirrels and they're all gray, and I say, therefore, I think all squirrels are gray?
And you say, well, you haven't seen them all. What if there's a purple squirrel out there? But yeah, I mean, I don't know.
We haven't seen one. But you can't prove that they don't exist, which we'll get into later. It doesn't destroy the argument at hand is the interesting thing.
Supplying a second conclusion and showing that an alternate could be true in an inductive argument doesn't do what it does to a deductive argument. As you remember with our informal test of validity, showing that a second conclusion is possible shows that the deductive argument is bad, that it's invalid. But in this case, it doesn't do that. Observing a bird with solid bones isn't to say that there aren't birds out there with hollow bones. It's just to say that maybe you need to make a more modest statement there. Maybe it's not all birds, maybe it's not all squirrels that are gray, but just most, almost all, something like that. You have to moderate your statement, you have to alter your conclusion. Maybe you can see where this is going, those of you who are in the sciences: this is how science has worked since Galileo.
They often call them the inductive sciences for this reason: as we observe new data, we update what we know. It doesn't destroy what we previously knew. I mean, it can. There can be new discoveries that just say everything we thought we knew was wrong. Scientific revolutions are an example of that happening.
People like Newton brought in new ways of understanding nature that kicked out old ones, but it still, in a lot of ways, uplifted older principles, preserved them, and sometimes just moderated certain positions that were taken. The arguments weren't destroyed. They're just moderated. And this is because inductive arguments are defeasible.
Kind of a weird word, but what defeasible means is that they're open to being overturned by the introduction of further data. Even when you make inductive arguments, the observational arguments you'll make in scientific experiments and that kind of thing, you're leaving them open to the fact that more data may alter the conclusions you've drawn, or at least their breadth. They might not be taken so broadly. The Newtonian physics I brought up is a good example.
Newton assumed that nature operated in the ways that he observed. But those are terrestrial physics. They don't necessarily hold 50 miles up. Well, I mean, they do to an extent. Some of them hold, but not all of it. They work down here on the ground, in this particular context. Out in the middle of a vacuum, though, a lot of that, maybe besides the conservation of energy and such, doesn't work.
It just doesn't work. A lot of it falls apart because it was built for conditions here. And so further observation shows that while Newtonian physics is a good explanation, kind of an immediate, practical physics here on Earth, it doesn't work so well out in orbit, something like that. We needed a more complete physics, a physics that could explain other contexts. And that doesn't destroy Newtonian physics; it just recontextualizes it.
It just puts it in a place where it says, well, that's partially right. Whereas for a little while, they thought it was completely right. And so you can see this connection between inductive reasoning and the natural sciences.
So what inductive arguments give us in terms of a conclusion are strong arguments and weak arguments. Rather than valid or invalid, we get strong and weak. So we don't try to find falsehood in the claims. I mean, if it's there, it's there.
You know, obviously, if somebody states something untruthful in an inductive argument, if they report purple squirrels because they were in the throes of a delusional state or on drugs when they saw one, that matters. But for the most part, you're not looking for truth, necessarily. That's there. It's part of it. But when you're analyzing inductive arguments, you're not looking for truth. You're looking for a sort of probability. You're looking for the preponderance of the claim, not just its truth.
That's there, but that's just one part of it. And so you're trying to find the strong and weak probability of a claim. The ultimate refutation of an inductive argument is showing that there is reason to believe the data set that its conclusion claims to represent is incomplete or too small to make such a conclusion.
That's the big thing. I can't emphasize it enough. When you're dealing in induction, or one of the ways to notice induction, is that you're not arguing over the truth of the conclusion, the truth of the main claim it's making; you're arguing over the completeness of it, the totality of it. I might say all squirrels are gray, and you might say, well, no, most squirrels in certain regions are gray, but there are brown squirrels; they exist, they've been observed. And so refutation starts to look at how broad the conclusion is and attempts to narrow it, to deprecate it, to take it out of being a universal position and down to a more general one.
And so rather than trying to prove falsehood like we do in deduction, we try to show flaws, flaws at a probability level. And the science part of this should be super obvious. So how do we do this with any reliability?
How do we gauge whether or not an argument is strong or weak? Well, we don't. That's the short answer.
Induction has been called the queen of the sciences because of this ability to deal with accumulated observation and to make broad judgments based upon what has been observed, what's in the literature and things like that, to do these Bayesian analyses of outstanding research and draw some large conclusion from it. So it's the queen of the sciences, but it's also considered the embarrassment of philosophy, because philosophers of science can't really define, and there's a lot of work still going on in this, a way to reliably distinguish a weak argument from a strong argument within logic itself. We often have to move outside of logic to even make arguments for something being strong or weak.
A good example of this, and we'll talk about it later because it's going to come down to some statistical reasoning, is the way that most people will handle it. It's not like deduction, where we can lay out systems, as Bertrand Russell and people like that did, to show how logic functions. He and Alfred North Whitehead wrote a whole book, Principia Mathematica, back in the early 20th century, laying out how math and logic function. They were called logicists. They thought that mathematics could be derived from logic. Without going into the intricacies of that, it can be done. We can lay out, we can systematize, deductive logic.
Inductive logic? No. There's not really a way. In an inductive argument, we are reasoning to a most likely explanation. Rather than toward a certain conclusion, we're saying that the best probable explanation for the data given is this. And there just isn't a way, staying within logic, to judge that the way we judge soundness and validity. Logic has to go outside of itself for a moment and try to find something external that can determine strength and weakness. And so, like I said, it's the embarrassment of philosophy. And in the last couple weeks of the semester, if you want to talk about it, maybe I'll just put up a discussion post.
So I want to do another discussion post or two about some fun philosophy topics, and I might talk about the problem of induction, especially as it's introduced by David Hume, a philosopher that I'm not even really a big fan of, but he always comes up in this class. Whenever I teach logic, I end up talking about David Hume. So, the problematic, and problematic here means uncertain, the original meaning of problematic, not mean or bigoted as it got used in the 2010s. I think I talked about this when I was talking about Kant. I can't remember; maybe that was another class. In any case, it all kind of blurs together sometimes. Problematic just means uncertain.
So the uncertain nature of handling inductive arguments. Well, logic texts just kind of go over them. They don't really touch on them too much because there's not much to touch on.
It's so uncertain that we're not really sure what to do with it. And we'll kind of do that here. That's going to be our approach too.
Know that inductive arguments exist. Know that their conclusions are probabilistic: an argument whose conclusion has a good probability of being correct, however you judge that, is strong, and one whose conclusion has a bad probability of being correct is weak. The data that you're given should be statistically correlated and all of that.
That's why they're always doing this. I mentioned Bayesian analyses and all of that. If you ever want to know how, look that word up.
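If you do look it up, what sits at the core of it is Bayes' theorem, which says how to update the probability of a hypothesis H once you've observed some evidence E:

```latex
\[
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)}
\]
```

That update-on-new-evidence shape is the same defeasibility idea from earlier: new data revises the conclusion rather than destroying it.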
You'll see what I'm talking about. So the major takeaway here is to be able to spot an inductive argument. not to make them, not to deal in it.
The most proximate we're going to get to inductive reasoning in this class is when we're doing fallacies, because some fallacies are inductive fallacies. For the most part, we're going to just kind of push past it, because, again, it's uncertain. There's no way to systematize it and do it with any solid certainty. And a deductive argument is going to make certain claims, which is one way to do it.
to spot it, and an inductive argument is going to make probable claims. So there's a certitude to a deductive argument, like I said, like math, 100% couldn't be any other way. Inductive is going to be from the evidence given it is likely that some conclusion is true, or is the case.
And sometimes these are going to be ambiguous because natural language does that, and we'll see more about natural language and arguments because that's going to be the second lecture this week. It's tough. It's tough sometimes to find an argument within a natural language presentation, any kind of composition or something like that. I think we've talked about this before. It can be difficult to unwind English into logic and pull it out, put it in standard form or something like that, and analyze it.
And with all that linguistic ambiguity that tends to be sewn into and shot through language, it can be hard to tell deduction and induction from one another. Remember the stolen gun argument we used? That Jeremy's prints were on the gun, but it was his gun, and so it could have been stolen. Basically, that's kind of an inductive argument.
There's a probability that it was taken. And once again, we'll come back to them later. We'll come back to induction in a more coherent format when we talk about statistical reasoning.
This class for a brief period turns into a stats class. You'll love it. Trust me, I'm kidding.
You probably will hate it. Most people hate math. But going back to deduction for a minute here, and speaking of natural language arguments, there is a tendency for arguments to have missing or implied premises. And analyzing natural language arguments is kind of the point of this final part of the unit: to start looking at some natural language arguments and giving them a good, thorough, logical analysis. What you're going to find is that language has implicature, as linguists call it. Implicature is just something that's implied. You know, there's the famous, I don't know what you'd call it, accusation or question: when did you stop beating your wife? Right, that's an example of a very malicious linguistic implicature, because it's assuming you did it, implying that you did it, and then asking you when you ceased to do it, and to cease doing something implies you did it in the first place. So some linguistic structures are going to imply things, imply them to such an extent that they leave them out entirely. And this gives us two very important words: implicit, where it's not said but it's assumed, and explicit, where it is said, it's right there. Not all arguments are going to have explicit premises. They're going to have implicit premises as well.
And so we talked about bias in our analysis of arguments. We're biased toward being right. All of us are. This includes me, this includes you, this is everyone. If you think you aren't, that's because you're so biased toward being right that you can't admit you're like this.
It feels good to be right. They've done studies on this. When you're right, when you feel like you're right, when you feel like you're correct in your religious or political beliefs, you get a rush out of it. When you feel like you're wrong, you feel stupid, you feel worthless, it feels good to be right, it feels bad to be wrong.
And so a lot of what we do with argumentation, and this is one of the things that I hope a critical thinking class like this will teach you to at least temper a little bit, if not avoid altogether, is we seek to justify the views we hold, to confirm our prejudices. That's what scientists call confirmation bias, right? We look for data that supports the prejudice that we carried into the experiment or into the study.
Confirmation bias, yeah, like I said. And we seek to deprecate those views that oppose ours. So not only do we seek to uplift the views that we agree with, we seek to push down the views that we don't. The views that don't conform to our prejudices, we want to just do away with. We want to show how wrong they are and show how right our views are.
This is just confirmation bias. This is something that any good science methodology class will teach you to avoid, but critical thinking can do the same thing. When you're analyzing information, make sure that you're not just looking for how it makes you look right. Try to figure out what the speaker is actually saying, not how it makes you right about something. And so one way around this bias is called the principle of charity.
We'll get to missing premises in a second because they kind of reflect through this. The principle of charity is an interesting thing. So when we encounter an argument, rather than tearing it down, we should work to make it as strong or valid as is feasible in the moment.
Now when people see this, they often think that the point I'm making is moral in nature. That I'm saying you should be forthright, you should be upstanding, you should be a good person, and you should be nice to people. That's not what I'm doing here.
It's not what I'm doing at all. You'll sometimes see this called steel manning, by the way, and I'll explain that in a minute. So once the argument is reconstructed in its strongest, most valid form, you argue against that argument.
So you take what somebody's saying, and you make it as powerful as you can, and then you argue against that most powerful form. Once again, steel manning, and you'll see why that's important in a moment. Because the opposite of this is typically called a straw man fallacy.
Straw man fallacy is when you distort your opponent's position. You either weaken it in a way that they didn't, or you distort it in some way that makes it easy. It's a straw man because it's not real.
It's not an actual person that you're fighting. It's a fake opponent, and then it's easy to knock over. It's easy to topple the argument because you made a distortion of it. This is often silly.
This is really common on social media. People will reply to you and make mincemeat of what you actually said, attack the mincemeat they made, and then pretend that they refuted what you actually said. This is a really common one when people are not well-versed in logic or are just being jerks. But the charitability principle does the opposite of the straw man.
You can get what a steel man is now. It's creating the strongest version, the one that's hardest to topple.
Because you have to be charitable, through this principle, to people or arguments that you might find bad on their face, or even to really bad people making arguments. And that's really uncommon today. I mean, in our wonderfully toxic environment, where you just yell at everyone who happens to differ with you, it's hard enough to find a good discussion, much less two people who disagree on something not just falling down to straw men and personal attacks on social media, namely Twitter, let's be honest. I don't use my Twitter much. I just use it to read people being very mean to one another on all sides.
I want to make sure that's clear before anybody goes, oh, yeah, it's that other political side or social category that I don't belong to. No, no, no, your side too. I've seen every possible configuration or subdivision of humanity being garbage to people outside of their group on Twitter.
And it's just, it's really not a good place. It's really not. So, you know, also probably anywhere else that people conglomerate.
The internet, I think, has a tendency to let people hide behind the screen. You guys have never seen me. Well, maybe in that picture I have on Canvas, if I even have one. I'm not sure if I do.
Yeah, I'm just a voice, right? So why not tear me down? Why not mock me and things like that? I'm not a real person anyway. I'm just a disembodied voice in the lectures you're getting on YouTube.
Wow, this thing's running almost 40 minutes, so I'll knock off the tangent. But yes, we have a tendency to just be jerks, to knock down arguments the person didn't make rather than the argument that they did. And this is where charitability comes in.
You're being charitable to their argument so that you can give the best treatment of it and topple that. And it becomes really important when encountering natural language arguments that are incomplete, that don't have all their premises laid out in an explicit way. We'll give some examples. Like I said, they're not all going to be complete.
Authors and speakers leave things out, or they imply them; they assume you agree with some point, so they don't include it. So here's an example.
This class is a waste of time because it doesn't contribute to my major. I'm sure a lot of you agree with that. Let's put this in standard form.
It's got one premise and one conclusion. This class doesn't contribute to my major. So the conclusion is, therefore, this class is a waste of time.
This isn't a terrible argument so much as it's missing something. It's not that the conclusion doesn't follow from the premise; it's that it doesn't clearly follow. There's something this argument is saying without saying it: it doesn't contribute to my major, so it's a waste of time, meaning that a waste of time is anything that doesn't contribute to your major. Does drinking beer contribute to your major? I'm sure plenty of you do that. Do you think it's a waste of time? Well, maybe you do. I guess you can enjoy a waste of time. There's a missing premise here. So here's a more complete argument.
Here's the one I would argue against if someone said that to me. Classes that do not contribute to my major are a waste of time. This class does not contribute to my major. See that first one? It's laying out some groundwork to build this argument from.
And so, therefore, this class is a waste of time. So if you're taking a class that doesn't contribute to your major, why? Why do it? There are much more economical uses of your time than dilly-dallying with some weird philosopher whose face you've never even seen.
No, instead, you should be doing... something that moves toward your goal more readily, more expeditiously, and so this class is a waste of time. That's a bigger argument. It's more clear, it's more coherent.
I don't talk about cogency in this class, but this is a more cogent argument. You can look that up. Maybe I'll have a discussion about it just to cover the topic.
So the premise is implied by the original argument. It's in there. It's part of it. It's kind of saying it, but it's not saying it explicitly.
You can feel it, as bad as that sounds, as much as that sounds like magical thinking or something. You can kind of tell that the person is saying: if a class doesn't contribute to your major, it's a waste of time. And this can be tricky, because, like I said, linguistic implication, and natural language in general, is not exact. Remembering once again that I am also a linguist: semantics, the study of meaning within language, is really tricky stuff. If you've ever taken a class on meaning, and I doubt anyone has, it's more of an upper-division thing, you'll know it's heavy stuff. So people fight over implicature all the time.
It's because it's not said. Because an implicit statement is not explicit, you're kind of assuming what the person is saying at all times. You're never 100% certain what they're saying, so you're trying to be as charitable as you can and work it out. And logic is one way to help you do this. You're making a statement that seems to hinge on some other statement that you're not making.
And so I'll make that statement so that your argument looks a little more cogent, looks a little more coherent. It looks more like it has a better grounding, has a better foundation than you gave it when you made it. People are sometimes just in a hurry and just don't care to take a lot of time to work through these things.
And yeah, we fight over implications literally all the time. I mean, I've seen big fights happen over "you don't know what I'm implying," "you don't know my heart," things like that. People will get very mad. Usually, just as a rule of thumb, if somebody's getting mad about an implication you drew, that usually means you're right, but that's not logical. Don't use that principle. It's called the Galileo gambit. It's a fallacy.
Really, these kinds of arguments ruin marriages and everything. But what we want to look for when we suspect missing premises is an argumentative gap, a gap in the argument, like the one this argument up here has. So, here's an example: Steve is only 30 years old, so he cannot be president.
There's a clear argumentative gap here. Something going on there that we're missing. Now, you might know the Constitution. You might have read Article 2 and you know this. But that's the argumentative gap.
This sort of informal knowledge you have isn't being included in this argument. Something that makes the connection between "Steve is 30" and "Steve cannot be president" is missing, so the argument is less certain than it should be. Imagine if you're from another country and someone said that to you.
You'd be like, why not? And you have to tell them. So, there's also this weird problem with specificity. So if you were really to steel man this argument, you'd want to add this in. Which country are we talking about?
Mexico has a president. Are we talking about Mexico? Let's assume we're talking about the United States.
So let's do this one. Premise one will be Steve is 30 years old. Therefore, Steve cannot be president of the United States.
All right. So that's the first part. Got that out of the way.
You can't be president in this country. You might be able to elsewhere; I don't actually know what the age limitation is in Mexico, or even if there is one. Maybe a 10-year-old can be president in Mexico. I wouldn't know.
I've never read their constitution. And so the missing premise becomes somewhat obvious here. What's barring Steve from being president? Well, he's 30 years old.
It's clearly an age issue. What would make this argument clearer? I think you can see it. I don't want to belabor the point too much, right? So here's our first premise.
To be president of the United States, a person must be at least 35 years old. Steve is 30. Therefore, Steve cannot be president of the U.S. That's a clearer argument. It's more solid, and, honestly, it's irrefutable. That's like a basic truth.
You can look it up. It's in Article II of the Constitution. It lays out the qualifications to be president of the United States, which include being a natural-born citizen at least 35 years of age. So yeah, he's too young here. That's really clear. There's not much of an argument to be had against that.
With all of that information there, it's less assailable. So if you make your arguments solid like this and try to explicate all of your premises, they will be more ironclad. And if you take an argument given to you and put it in this form, you might see, first, that you should agree with it, that it is, as I said, bulletproof, unassailable. There's not really a way around this one. Or, if you do argue against it, at least you're toppling the stronger argument. There's a good reason for that.
So the relation of the original premise to the conclusion is clear and undeniable. You can see it work right through. And this is one of the easier sorts of arguments to complete. Many incomplete arguments are going to be normative rather than descriptive. This is where we get into some shady territory a lot of the time with popular argumentation.
That's to say, they're going to have a premise that describes some state of the world and a conclusion that makes a suggestion about how the world ought to be. Normative is often called prescriptive as well. We prescribe how the world ought to be, and we describe how the world is. To describe is to tell us what you observe, what you see, right?
We're back to induction with that. But to prescribe is to tell you how you think it should be. How the world ought to be. This is where we move from a sort of practical philosophy that's dealing with science to a practical philosophy that's dealing with morals. We're down to not how you are acting, but how you should act.
And they're very different things. I said earlier, if you're paying attention to this lecture, that David Hume always comes up in this class. Well, here he is.
So this is an old philosophical problem. It dates back further than Hume, but Hume is really the modern philosopher for it. He was a Scottish empiricist from the 18th century, a long time ago. Again, I'm not a big fan. I mentioned Kant last time; Kant was a huge fan of David Hume. And so, you know, Hume plays into the tradition that I work in, but he's not my favorite.
In any case, Hume is the first person to really articulate this philosophical problem in the modern period. It's called the is-ought gap, or the fact-value gap. To describe it simply: there's no clear manner in which we can logically move from how the world is to how the world should be. There's no logical process that can get us to certainly conclude how things ought to be.
This is where ethics and politics and all these other things that philosophy deals in find their point of departure from something like metaphysics and epistemology, which deal with what the world is and how we know it. Logic would be another. Well, typically. There are forms of logic that deal in how the world should be, deontic logic and things like that. But the kind we're doing here is purely descriptive, or at least it attempts to be. It attempts to only handle facts, right? Statements about the world. Not statements about how the world should be, but statements about how the world is.
So, for example, let's take this argument. College is one of the best paths to success, so you should stay in school. In standard form, let's be nice and charitable to it, we get this.
College is one of the best paths to success, so you should stay in school. Now that's just a reiteration of the sentence up there, right? We get one premise, one conclusion.
Well, this is problematic. Look at that word in the conclusion, what we've been talking about. You should stay in school.
Why should you? It doesn't state a clear objective fact. It's not telling us some arrangement of the world. It's telling us something we ought to do. It's not a statement in the usual sense of the term.
But why should we do it? Why should we do this? Well, you'd say, because you want to succeed. Well, what if I don't?
What if I want to be a loser? I mean, I became a college professor. I kind of want to be a loser. That's a joke.
What if I simply don't care? What if I'm honestly just nihilistic, and I just do not care? I don't want to succeed.
I hate the world. And I don't care if I do well in it. I don't care if I die tomorrow.
Waking up in the morning sucks, and I don't want to do it anymore. I've known a lot of people who've done this. There's no obvious path between a fact in the logical sense and a value or a normative statement.
But supplying a missing premise can often clear up some of it. It can at least make the transition seem a little smoother than just jumping from facts to values. You'll succeed in life if you go to college, so you should go to college. Well, you're assuming I want to succeed in life.
What if I just want to do drugs and die in a gutter? So here's our first premise to this clearer argument. A successful life makes health and well-being more possible. This is, I think, without any kind of denial possible, right? You're generally going to live longer the wealthier you are.
The data's really obvious on this, for your well-being, your happiness. Trust me, as somebody who has moved up the ladder slightly throughout his life, having more money, the ability to go on vacation in the summer, makes your life a lot better.
Because of this, you should aim at being successful. Because you're happier, because you feel better, because you're healthier, because you will live longer and you will just have, in general, a better experience of existence, you should aim at success at some level. College is one of the best paths to success, so you should stay in school. Now this is still fraught with the fact value gap, right? It's still there, but it's smoothed over when we add P2 and P1 to this.
Why? What's going on there? P1 is a fact.
Like I said, that's true. The statistics are available on this. And that supports P2.
In fact, to be clear here, it adds a second "should," right? It adds a second value statement: you should aim at being successful, because you'll be happy if you do.
If you succeed. I mean, I guess there's always accidentally failing, but you know, don't waste your time and do nothing. Try anyway. Rage against the dying of the light or whatever.
P2 is a value and it supports the prescription contained in the conclusion. You should aim at being successful. Since you should aim at being successful, you should stay in school. So you can see how those two interlock with one another.
This could get convoluted. These kinds of arguments can be difficult to handle, because again, you're almost giving them the status of a statement when they actually are not. So the short explanation here is that normative arguments, the should and ought ones, are missing an implied normative premise.
There's something that they're saying you should do that they're not saying. They're not giving it to you on its face. They're making you work for it.
They're assuming you agree. Once more, like, they're assuming that you agree that you should want to be happy. That you should desire success. Some people don't.
And so that argument might be lost on them. You should stay in school, son. Why? I just want to be a heroin addict and die at 25. Oh, well, I mean, get psychological help?
I don't know. And finding that implied value is not always clear or easy. And getting around the fact-value gap is not always possible.
Because some arguments are going to rely on falsely stating how the world ought to be, or even couching that normative statement as a fact. The second one's really common. People are going to say that their morals, their values, are facts. This is, and I'm not trying to attack anyone here, very common with religious people, where they'll say that what is good is objectively good, God is good, and following God's plan is objective goodness. And so through that lens...
they can often state their moral values as facts when they're really values. They're saying how you should act. They're not talking about how the world is. They're talking about how the world should be. And this happens a lot.
A lot. Kind of making fun of Donald Trump there, but honestly, again, not on either side. This is really common political rhetoric.
Both sides will state their prescriptions as descriptions. If you talk to a socialist, they're going to tell you that socialism is the way the world ought to be, and they're going to do it in a way, they're going to state it as a truth, as an incontrovertible truth. If you talk to a capitalist, on the other side, They'll do the same thing, just for capitalism.
They're going to state it as this truth: that people are going to be free, and people are going to be prosperous, and all of these things. They're going to state these prescriptions as if they're describing an existing world, but they're not; they're telling us the world we ought to create. They're even going to govern vital necessities for the survival of humanity in these ways, and it can get really, well, sometimes politics can be evil, let's be honest here, right? We should do X, or else the whole world will fall apart.
The moral fabric will suffer. The country will fall apart. God's veil of protection will be lifted from us.
Everyone will die. There's a million ways that they do this in political rhetoric. So that's the end for now.
I do want to go back to something that I kind of left hanging there, steel manning. So when you're being charitable to someone's argument, when you're creating the strongest form of it, what you're doing is you're challenging the best argument they have. And if you can defeat that argument, if you can defeat the steel man rather than the straw man, you defeat all the straw men too.
You defeat them more completely, as competitive and terrible as that sounds. For those of you that are into debate or anything like that, and I never was, you destroy their position more thoroughly when you destroy the strong position. Destroying the weak position, you not only provide them an opportunity to come back, but you also just might be distorting what they're saying and looking like a jerk. But if you build up a good version of their argument for them, and then you trip it and make it fall on its face, you defeat them entirely.
So steelmanning is always kind of important in discussions and debates when you're trying to prove yourself correct and prove an opponent wrong. When you prove them thoroughly wrong, you prove them thoroughly wrong. I mean, it's a little redundant, but that's the case.
And charitability isn't a moral value, to go back to moral values. I'm not saying be nice to your opponent. I mean, you probably should. Don't be a jerk. But that's not what we're learning in a logic class. You should be charitable to their argument so that you're defeating the proper argument, so that you're actually addressing what is said, what's before you, rather than addressing distortions of it. Not for moral reasons, but for logical reasons.
They are very different things. We'll see some of the ways that logic and its counterpart, rhetoric, kind of engage these topics going forward. We're going to talk about some rhetorical devices.
And I think that's next week. I think I'm going to do that next week. The next video I'm going to do is just going to be some natural language arguments, some videos and other productions. I'm even going to read a couple of poems and try to find the central argument within them. Because, as I said, it's difficult to find arguments in natural language.
We're going to do some exercises on that. So hopefully, because I've never done this one online before, it pans out correctly, because otherwise it'll be a waste of time. But hopefully it'll be fun anyway.
I usually choose some fun videos and things like that just to get the discussion started. And yeah, I want to do more discussion posts, things like that, so keep your eye on Canvas, and I hope everyone has a decent rest of their week.