Transcript for:
Understanding Utilitarianism and Consequentialism

Picture the scene: your best friend has just bought a new car, and they ask you what you think of it. Are you really going to tell them the truth and ruin an otherwise great friendship? Surely it's better, for the sake of both of your happiness, to tell a white lie here. Or let's up the stakes: you're starving, and you and your family are going to die unless you steal a loaf of bread from billionaire Jeff Bezos. Let's say there's no other way to get food, so your only two options are to steal from Jeff or watch your family die. What should you do? Or consider an amplified version of the classic trolley problem: some maniac has tied the entire population of planet Earth, except you, to the train tracks, and there's a switch that will divert the train onto a different track where only a single person is tied. By pulling the switch you'll kill that one person, but you'll save everybody else on planet Earth. What should you do?

Chances are that in at least one of these examples your intuition is that the morally correct thing to do is to lie, or steal, or perhaps even kill someone, the reason being that while lying, stealing, and killing are bad, they're not as bad as the consequences of not doing them. It's better to kill one person if it means saving the entire population of planet Earth, for example. So, you guessed it, we're starting the moral philosophy topic with consequentialism and utilitarianism: the idea that what makes actions right or wrong is their consequences.

Utilitarian theories are consequentialist: they say that what makes actions right or wrong, good or bad, is their consequences. And the most obviously relevant consequences, as far as morality is concerned, are pleasures and pains, or happiness and unhappiness. You don't need to explain to somebody why making someone happy is a good thing, or why causing someone pain is a bad thing. This is what so-called hedonistic utilitarianism says: actions that increase pleasure or happiness are good, and actions that decrease pleasure or increase pain are bad. There are other, non-hedonistic forms of utilitarianism, which say the relevant consequences are something other than pleasures and pains, but we'll talk about those later, in response to various objections. There's also the issue of whether we should calculate the consequences at the level of specific actions or at the level of more general rules; again, we'll come to this distinction in a moment.

The most basic form of utilitarianism is probably hedonistic act utilitarianism. It says we should consider the consequences of each specific action we take and choose the action that increases pleasure most effectively. So if a homeless person were to ask you for £10, and that £10 would give them more happiness than keeping it would give you, then utilitarianism says the right thing to do is to give them the £10, because that's what maximizes pleasure, or maximizes happiness. There are no real rules to hedonistic act utilitarianism beyond pleasure. If you're in a shop, you like the look of something, and you're thinking of stealing it, you again weigh up the consequences: if stealing the thing would cause you more happiness than it would cause the shopkeeper unhappiness at being stolen from, then act utilitarianism says the right thing to do is to steal it. The fact that it's an act of stealing doesn't matter here; all that matters are the consequent pleasures and pains.

Jeremy Bentham, generally considered the creator of utilitarianism, came up with a sort of formula to calculate the pleasures resulting from an action, and thus a way to quantify how good or bad a particular course of action is. Bentham's utility calculus lists seven variables to consider when calculating the utility, or pleasure, that results from an action: the intensity of the pleasure; its duration; its certainty; its propinquity, meaning how soon the pleasure is to occur; its fecundity, meaning how likely the pleasure is to lead to further pleasure; its purity, since a pleasure with pain mixed in is less pure than straight-up pure pleasure; and finally its extent, i.e. the number of people affected by the action. You know what I'm saying? Anyway, I was explaining Bentham's utility calculus to my girlfriend the other night, and I told her that while a duration of 30 seconds maybe isn't anything to write home about, she wasn't considering the other six variables. She wasn't considering the intensity of the pleasure in those 30 seconds, or its fecundity. So I pulled out my copy of Bentham's An Introduction to the Principles of Morals and Legislation and explained that 30 seconds of 10-out-of-10 intensity pleasure is, according to Bentham at least, more morally valuable than a whole 2 minutes of 2-out-of-10 intensity pleasure. She just called me a nerd.

Anyway, Bentham's utility calculus gives us a way to measure and quantify the utility, and thus the moral worth, of a particular course of action. Each time we act, we add up all the pleasures, subtract all the pains, and whichever course of action results in the highest number, the highest net amount of pleasure, is the morally correct one. It's a kind of scientific way of quantifying morality, and, in theory, a simple way to decide what is good and bad.
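Bentham never specifies units or a formula for combining his seven variables, so any numeric version is an invention. Still, a minimal sketch in Python, with made-up scales and one possible multiplicative weighting, shows the kind of bookkeeping the calculus asks for:

```python
from dataclasses import dataclass

@dataclass
class Episode:
    """One anticipated pleasure (or pain, via negative intensity).

    All scales here are invented for illustration; Bentham gives no
    official units or combination rule."""
    intensity: float    # strength of the feeling (negative for pains)
    duration: float     # how long it lasts, in minutes
    certainty: float    # probability it actually occurs (0..1)
    propinquity: float  # discount for how soon it occurs (0..1, 1 = now)
    fecundity: float    # expected follow-on pleasure, same units as core
    purity: float       # fraction not offset by mixed-in pain (0..1)
    extent: int         # number of sentient beings affected

def utility(e: Episode) -> float:
    # One guess at a weighting: scale the core pleasure by its
    # likelihood, nearness, and purity, add expected follow-on
    # pleasure, then multiply by the number of beings affected.
    core = e.intensity * e.duration * e.certainty * e.propinquity * e.purity
    return (core + e.fecundity) * e.extent

# The anecdote's comparison: 30 seconds at intensity 10 versus
# 2 minutes at intensity 2, everything else equal.
brief_intense = Episode(10, 0.5, 1.0, 1.0, 0.0, 1.0, 1)
long_mild     = Episode(2,  2.0, 1.0, 1.0, 0.0, 1.0, 1)
print(utility(brief_intense), utility(long_mild))  # 5.0 4.0
```

The combination rule here is only one guess among many; how certainty should trade off against intensity, or purity against extent, is left completely open by Bentham.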
Bentham's utility calculus might sound fairly straightforward in principle, but it becomes very complicated when we try to apply it in practice. For one thing, you'd have to predict the future, and how are you supposed to do that? And how far into the future are we supposed to anticipate the consequences of our actions? Saving a baby's life might increase pleasure in the short term, but what if that baby grows up to become a serial killer? In that case, saving the baby's life might actually decrease net pleasure overall, and so would be wrong according to act utilitarianism.

But parking that issue to one side, let's say you could predict the future. How are you supposed to measure each of these seven variables? Are we supposed to hook everyone up to brain scanners, for example, and measure how intense their pleasures are? And then you've got to calculate the other six variables and compare them against each other. How exactly do you decide between, say, a more intense but less certain pleasure and a less intense but more certain one?

Then there's the question of which beings to include in the calculation. Dogs and cats, for example, can feel pleasures and pains as well, so are we supposed to include their pleasures and pains in our calculus too? Bentham seems to think so; a famous quote from him runs: "the question is not, Can they reason? nor, Can they talk? but, Can they suffer?" And I guess frogs and spiders can also suffer, so presumably we have to include them in our calculations every time we act as well. But this raises the question of how we weight these pleasures and pains. Is a frog's 10-out-of-10 pleasure equal to a human being's 10-out-of-10 pleasure? And if not, how do we compare the 10-out-of-10 pleasures of two different species? As you can see, what initially seemed like a rather simple, scientific approach to morality soon becomes incredibly complicated: we have to predict the compounding consequences of our actions from now until the end of time, considering all seven variables of Bentham's utility calculus, and weighing them not only for human beings but for pigeons, snakes, frogs, aliens, and every kind of being capable of feeling pleasures and pains.

Maybe that was a bit of a straw-man interpretation of act utilitarianism and Bentham's utility calculus. Bentham does somewhat address this difficulties-with-calculation objection when he says: "It is not to be expected that this process [i.e. the process of the utility calculus, or felicific calculus, as it's sometimes called] should be strictly pursued previously to every moral judgment, or to every legislative or judicial operation. It may, however, be always kept in view." In practice, most act utilitarians would advocate a common-sense approach where the felicific calculus is "kept in view" but not "strictly pursued", to use Bentham's words. So you can save the baby's life; you don't have to consider the consequence that it might grow up to injure a frog or whatever. Another potential way to avoid the difficulties-with-calculation issue is rule utilitarianism, but we'll talk more about that in a moment.

Bentham's utilitarianism can be summarized by the phrase "the greatest good for the greatest number". But what if the greatest number gets pleasure from something that causes pain to a smaller number, to a minority? This is known as the problem of the tyranny of the majority. I don't want to get too dark with my examples here, but it's not hard to think up such scenarios. Say there are 10,000 people who are messed up and get pleasure from seeing an innocent person being tortured, and say (on an arbitrary scale) each of these 10,000 people would get 10 units of pleasure from the spectacle. Let's also say there's some innocent person, whom no one really cares about, who would suffer 1,000 units of pain from being tortured. Act utilitarianism would say the right thing to do in this scenario is to torture the innocent person for the pleasure of the 10,000 people who'd enjoy seeing it, because 10,000 times 10 is 100,000, and 100,000 units of pleasure massively outweighs the 1,000 units of pain suffered by the victim. So if we're optimizing for the greatest good for the greatest number, then according to act utilitarianism the morally correct thing to do is to torture this innocent person. But this seems wrong, because regardless of whether it makes people happy, some things are just wrong. It's just wrong to torture an innocent person, and if our moral theory says this is the right thing to do, that suggests the theory has gone wrong somewhere.

The phrase "tyranny of the majority" actually comes from John Stuart Mill, who was also a utilitarian and someone we'll talk more about in a bit. Mill was talking about the tyranny of the majority in the context of democracy: he was worried about the majority voting to impose unfair and unjust limitations on people's freedoms. In Utilitarianism, John Stuart Mill lays a lot of the groundwork for what would later be known as rule utilitarianism by defending notions of individual rights and justice on utilitarian grounds, and rule utilitarianism gives us a potential way to respond to the tyranny of the majority problem. Where act utilitarianism says we should calculate utility at the level of individual actions, i.e. each time we make a decision, rule utilitarianism says we should calculate utility at the level of more general rules: we should follow the rules that, in general, maximize happiness most efficiently.
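To make the act/rule contrast concrete, here's a toy sketch of how the two come apart on the torture example. The units, the rule list, and both decision procedures are invented for illustration; neither Bentham nor Mill offers anything this mechanical:

```python
# Hypothetical outcome of torturing one innocent for a 10,000-strong mob,
# in the arbitrary pleasure/pain units from the example above.
mob_size = 10_000
pleasure_per_spectator = 10
victim_pain = 1_000

def act_utilitarian_verdict(pleasures: list[float], pains: list[float]) -> bool:
    """Act utilitarianism: permissible iff this specific act
    yields positive net pleasure."""
    return sum(pleasures) - sum(pains) > 0

def rule_utilitarian_verdict(action: str, forbidden: set[str]) -> bool:
    """Rule utilitarianism: permissible iff no general
    happiness-maximizing rule forbids the action type."""
    return action not in forbidden

# Rules justified (on this view) by their good consequences *in general*.
forbidden_rules = {"torture the innocent", "steal"}

net = mob_size * pleasure_per_spectator - victim_pain
print(net)  # 99000 net units of pleasure
print(act_utilitarian_verdict([pleasure_per_spectator] * mob_size,
                              [victim_pain]))                      # True
print(rule_utilitarian_verdict("torture the innocent",
                               forbidden_rules))                   # False
```

The act utilitarian sums this act's consequences and endorses the torture; the rule utilitarian checks the action type against rules that maximize happiness in general and forbids it, which is exactly the divergence the tyranny-of-the-majority objection trades on.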
For example, suppose a poor person is weighing up whether it's morally acceptable to steal something from Jeff Bezos's Amazon warehouse. Act utilitarianism would presumably say to do it, because the person's pleasure at whatever they steal is probably going to outweigh Jeff Bezos's unhappiness at being stolen from. Rule utilitarianism, however, might take a different approach. It might argue that while in this specific instance stealing might increase happiness, as a general rule stealing leads to less happiness, and so the rule "don't steal" can be justified on the basis that it increases happiness in general.

One way the rule utilitarian might argue against stealing is that the consequences of an action are sometimes more than the sum of their parts. In the case of stealing, you have to consider not only the first-order pleasures and pains that result from the act, but also second-order effects. If you knew you lived in a society where you could be stolen from at any moment, provided the pleasure of the thief outweighed your displeasure at being stolen from, there would always be a background level of anxiety and fear. As such, rule utilitarianism can potentially justify property rights by appealing to the general positive consequences of such rights.

Rule utilitarianism can make a similar argument against the tyranny of the majority objection from earlier. While there may be specific instances where torturing an innocent person leads to greater happiness, such as the example I went over previously, the rule utilitarian might argue that as a general rule torture leads to less happiness overall, and so justify the rule "don't torture innocent people" on these grounds. And again, there are second-order effects to consider. If you knew you lived in a society that wasn't going to defend any notion of individual rights, where you could be tortured, stolen from, or perhaps even killed as soon as the consequences justified it, then, in addition to the actual pleasures and pains of torture and theft, there would be a generalized societal distrust and anxiety. So rule utilitarianism can argue that, as a general rule, notions of individual rights, rules such as "don't torture people" and "don't steal", can be justified on utilitarian grounds, providing a potential way to avoid the tyranny of the majority problem.

A further advantage of rule utilitarianism is that it avoids some of the difficulties with calculation we looked at earlier in the context of the felicific calculus and act utilitarianism. Whereas Bentham wants you to get out your crystal ball and your felicific calculator every single time you make a decision, rule utilitarianism allows for a sort of one-and-done approach: it might take some working out to decide which rules maximize happiness, but once you know what those rules are, you just have to follow them, rather than getting your felicific calculator out every time you act.

But rule utilitarianism is not without its issues. If we're optimizing for pleasure and happiness, as a utilitarian theory is, are we really supposed to follow these rules 100% of the time in every situation? What about the example I gave earlier, where stealing a loaf of bread from Jeff Bezos means the difference between life and death? Surely in that scenario it makes sense, on utilitarian grounds, to break the rule "don't steal", given the consequences.
Or take some crazy amplified version of the trolley problem where, I don't know, stealing a single item from Jeff Bezos somehow saves the entire population of planet Earth. If rule utilitarianism says we can't steal in that scenario, then it's maybe lost sight of the point of utilitarianism: maximizing pleasure. So there's a tension between rule utilitarianism and act utilitarianism. If we go too far towards act utilitarianism, we end up justifying these crazy tyranny-of-the-majority scenarios where you're torturing people to please an angry mob; but if we go too far the other way and follow the rules too strictly, we lose sight of the point of utilitarianism, optimizing for pleasure, because you can't even steal a marble from Jeff Bezos in order to save everyone on Earth, which seems silly.

A different objection to utilitarianism is that there are other things we value, other things that have moral worth, besides pleasure. A nice example that illustrates this comes from Robert Nozick and his experience machine. The experience machine is like perfect VR: a virtual simulation you can plug into and experience the perfect life, one that optimizes for pleasure. But it's a fake life, and once you're plugged in you don't realize your experiences aren't real; you think you're living a real life. Nozick asks: should you plug into this machine for life, pre-programming your life's experiences? I actually asked a variation of this as a poll a few weeks ago, and most people said no. And just to emphasize again: the experience machine optimizes for pleasure. You could be a film star, or a top athlete, or a secret agent with the best friends and family, or some combination of all these things. You could have the perfect life that gives you the greatest amount of pleasure; it just wouldn't be real.

If all that matters is pleasure, as hedonistic utilitarianism claims, then it's a no-brainer: of course you should plug into the experience machine, because however fantastic and pleasurable your real life is, the experience machine would give you an even more pleasurable one. Why, then, did most of the people who answered the poll say they wouldn't go in? Presumably because there are other things we value besides pleasure, such as living in the real world and actually doing things, instead of simply experiencing doing things. So hedonistic utilitarianism seems to be wrong in its claim that there is nothing we value over and above pleasure.

I suppose if you really wanted to push the objection, you could argue that Bentham's utility calculus and hedonistic forms of utilitarianism imply we should force everybody into the experience machine, regardless of whether they want to go in. Again, if what's good is to maximize pleasure, and forcing people into the experience machine against their will is the most efficient way to maximize pleasure, then that's what we should do according to Bentham's calculus. What's a few minutes of distress and fear at the thought of being forced into the experience machine by the utilitarian army, versus an entire lifetime of pleasurable, albeit fake, experiences?

Forcing people into the experience machine is a consequence of hedonistic forms of utilitarianism, forms which say that the relevant consequences are pleasures and pains. But there are other, non-hedonistic forms of utilitarianism that can potentially avoid this criticism. Preference utilitarianism says that rather than trying to maximize people's pleasures, we should instead try to satisfy people's preferences, because sometimes people have preferences for things which don't maximize their pleasure. Not wanting to go into the experience machine is an example of this: someone might prefer to live in the real world and live a real life, even if doing so means less pleasure.
Another example of a preference for something which doesn't maximize pleasure might be a monk who prefers to live an ascetic and disciplined life for religious reasons. If that's what he prefers, preference utilitarianism says we should try to satisfy this preference and help the monk live that life, whereas hedonistic utilitarianism, in contrast, would probably say something like: we should force the monk into a hedonistic and wild lifestyle of drugs and partying, even if he'd prefer the disciplined and less pleasurable life.

However, a potential issue for preference utilitarianism is what to do in the case of competing preferences. A serial killer might prefer to kill people, but the victim would presumably prefer not to be killed, so whose preference do we go with? With hedonistic utilitarianism, pleasures and pains can, at least in theory, be quantified by Bentham's utility calculus, and this gives us a way to decide: presumably the victim's pain, and the loss of their future pleasures, would outweigh the killer's pleasure in killing them. But with preference utilitarianism you just have one preference versus another, with no obvious way to decide between them. I suppose the obvious solution is to put it to a democratic vote, and if most people would prefer not to live in a society where you can just randomly kill people, then preference utilitarianism would say that's what we should go with. But this potentially re-raises the tyranny of the majority problem we talked about earlier: what if most people would prefer to persecute a minority, or would prefer to see innocent people tortured? Then preference utilitarianism would presumably have to say that the majority preference is the right thing to do, which again just seems wrong.

And sometimes people just have dumb preferences. I saw a nice example on the Stanford Encyclopedia of Philosophy of somebody who would prefer to spend their life trying to write their handwriting as small as possible. Most people probably wouldn't have a problem with somebody preferring to spend their life that way, but this is moral philosophy: we're trying to define what's good, and with preference utilitarianism we can't say that the preference to spend your life writing as small as possible has any less moral value than the preference to spend your life, say, trying to cure cancer, which again seems a bit silly.

Returning to hedonistic utilitarianism, an issue adjacent to Nozick's experience machine is that, in reducing good to pleasure, hedonistic utilitarianism reduces the moral value of a human life to that of a pig, or any other animal that can feel pleasure; utilitarianism is a "doctrine of swine", as Mill describes it. And yet, if you think about it, the logical end point of utilitarianism probably is the experience machine: everyone plugged into the metaverse or whatever. Or, if we had the technology, we could bypass the need for the metaverse altogether and just create a drug that gives people pure pleasure with no side effects. Or take the example J.J.C. Smart describes in Utilitarianism: For and Against: the utilitarian dream of a man with electrodes wired into his brain, who can push buttons that stimulate the pleasurable sensations of sex, eating, drinking, and so on. If this brain-electrode man existed, he would certainly have a very pleasurable life, but we probably wouldn't want to call it a good life. This again underscores the problem: there's more to what's good, and what's morally valuable, than simply pleasure.

As mentioned a second ago, it was actually Mill who coined the "doctrine of swine" phrase, but Mill was still a utilitarian. In fact, I think Jeremy Bentham, the father of utilitarianism, was Mill's godfather or something; John Stuart Mill's dad, James Mill, was best friends with Bentham or something like that. Anyway, Mill agreed with Bentham's utilitarianism to an extent, but where Bentham's approach in the utility calculus was purely quantitative, Mill makes a qualitative distinction between the "higher" pleasures of thought, feeling, imagination, and morality, and the "lower" pleasures that can be felt by animals, such as the pleasures of eating and sex. So where Bentham saw all pleasures as equally valuable, a purely quantitative approach, Mill introduces this qualitative distinction and says that some pleasures are better, more morally valuable, than others.

Why does Mill think this? Well, Mill argues that people who have experienced both higher and lower pleasures always prefer the higher pleasures; they place more value on them. As he puts it: "Few human creatures would consent to be changed into any of the lower animals, for a promise of the fullest allowance of a beast's pleasures; no intelligent human being would consent to be a fool, no instructed person would be an ignoramus, no person of feeling and conscience would be selfish and base, even though they should be persuaded that the fool, the dunce, or the rascal is better satisfied with his lot than they are with theirs." Mill's reasoning for this preference is essentially that human beings are more complicated than pigs and other animals: for human beings, happiness isn't the same thing as contentment, the mere satisfaction of your lower pleasures (Smart defends a similar distinction between pleasure and happiness in the "for" part of Utilitarianism: For and Against). True happiness for a human being requires dignity; dignity is part of what makes us happy, or rather, lacking dignity, like the pig rolling around in the mud, would make us unhappy. According to Mill, the pig doesn't even understand what dignity is; it can't appreciate this higher pleasure. But if it could, it would no longer be satisfied rolling around in the mud.
This is why Mill famously says: "It is better to be a human being dissatisfied than a pig satisfied; better to be Socrates dissatisfied than a fool satisfied. And if the fool, or the pig, are of a different opinion, it is because they only know their own side of the question. The other party to the comparison knows both sides." So Mill argues that utilitarianism is not a doctrine of swine. Happiness for a human being is not the same thing as the contentment of having your lower pleasures satisfied; a human being wouldn't be truly happy, according to Mill (and also Smart), just pressing buttons to stimulate electrodes in their brain, because this life wouldn't satisfy the higher pleasures. Mill's point is that, properly understood, utilitarianism requires that we maximize these higher pleasures, and not simply the lower pleasures available to pigs and other animals, because the higher pleasures are of a higher quality: they're worth more than the lower pleasures.

A potential issue for Mill's distinction between higher and lower pleasures is that it adds yet another layer of complexity onto the already complicated calculation of utility. Not only do we have to consider Bentham's seven variables of the felicific calculus, we now have this extra qualitative dimension on top. How many hours do you need to spend having sex, a lower pleasure, for that pleasure to be worth the same as one hour of the higher pleasure of reading Metalogic: An Introduction to the Metatheory of Standard First Order Logic by Geoffrey Hunter? What's the exchange rate between the higher pleasures and the lower pleasures? And on this point, is Mill even right that higher pleasures are worth more than lower pleasures? He's probably right that most human beings wouldn't want to swap places with a satisfied pig, but does it follow from this that the higher pleasures are really more valuable than the lower pleasures? As pleasurable as reading Metalogic is, is it really more pleasurable than, say, eating a delicious pizza, or sex? Maybe that's a bit of a straw man, but if it's possible for something to be more pleasurable yet less valuable, then Mill seems to be rejecting a claim of utilitarianism, or at least of hedonistic utilitarianism: that moral value is determined by pleasure. Bernard Williams makes a similar point in the "against" part of Utilitarianism: For and Against when he says: "In his struggles with the problem of the brain-electrode man, Smart commends the idea that 'happy' is a partly evaluative term, in the sense that we call 'happiness' those kinds of satisfaction which we approve of. But by what standard is this surplus element of approval supposed, from a utilitarian point of view, to be allocated? There is no source for it, on a strictly utilitarian view, except further degrees of satisfaction, but there are none of these available, or the problem would not arise."

An argument that could be made in support of utilitarianism is that it's equal and fair: pleasure is pleasure, so no single person's happiness is worth more than anyone else's; everyone's happiness has equal moral worth. But this same fairness and impartiality can become a bit of an issue for utilitarianism, because we might think we should prioritize certain people's happiness over others'. For example, we might think we should prioritize the happiness of our friends and families over that of random people on the street. So, let's say your mother is a huge fan of truth-functional propositional logic, and her birthday is coming up. You're out in town and you walk past the bookshop, where you see the perfect gift for her: a first-edition copy of Metalogic: An Introduction to the Metatheory of Standard First Order Logic by Geoffrey Hunter. You're just about to walk into the bookshop when you remember Bentham's utility calculus.
calculus you realize that although spending the £10 on a copy of metalogic for your mother would increase her happiness greatly it's not the most effective way to maximize happiness it turns out that Joe blogs from mosambique who you've never met has recently fallen on hard times and would really uh benefit from that 1010 to help get his new business off the ground and this turn out to be the most effective way to maximize happiness so you run the ffic calculus and it turns out that giving the10 to Joe who you've never met is the most efficient way to increase happiness and so rather than buying your mom a birthday present you instead send the money to mamb Beek and Joe blogs because that's the most effective way to increase happiness and actually go through the list of all the people in the world your mother probably ranks quite lowly in terms of who would benefit most from that £10 and so you walk straight past the book shop and uh unfortunately your mother doesn't get a birthday present this year because it doesn't fit with bentham's utility calculus and you could extend this example to basically every decision you'd ever make such that you'd never spend any money or even time with your family and friends so sure uh going to your friend's birthday barbecue would increase theirs and your happiness but not as much as uh say volunteering in the local soup kitchen and so you don't go to your friend's birthday or you know your daughter wants you to teach her how to ride a bike well sorry uh Joe blogs needs me more so there's a couple of ways we can frame this as an objection to utilitarianism the first way is to argue that this is a massively impractical way to go about living your life but some utilitarians do advocate for a sort of watered down version of this approach where we do have to spend certainly a lot of our time and money on Joe blogs and random people who need our help but the other way to make this objection would be to argue that we have moral 
obligations to certain people that fathers should teach their daughters how to ride their bikes and sons should buy birthday presents for their mothers for example that we have certain duties to our friends and families and the argument is that utilitarianism forces us to ignore these moral duties to those closest to us so Mill actually says a few words that could be used in response to this issue in Utilitarianism so I'm paraphrasing but the gist of his argument is essentially that situations where you really can increase the utility of someone you don't know more effectively than you can increase the utility of your immediate circle of family and friends these situations are really rare one in a thousand Mill says in every other case he says private utility the interest or happiness of some few persons is all he has to attend to so this might have been true in Mill's day but in the modern era the internet and globalization mean that we're connected with basically everyone on planet Earth and can potentially give help to anyone who needs it and there are tons of charities that exist for this exact purpose I know it's kind of hard times for many people at the moment but if you have even some resources to spare and chances are you probably do if you're watching this video then it's very hard to defend spending those resources on your family and friends according to utilitarianism because as long as you've got your basic needs met then anything surplus to that could be spent say feeding somebody who might otherwise starve to death but perhaps rule utilitarianism provides a potential way we can defend this partiality we have towards our family and friends we might argue that as a general rule prioritizing those closest to us such as family and friends leads to greater happiness overall and this might be another one of these scenarios where the whole is kind of greater than the sum of the parts the rule utilitarian might argue that if everybody followed act
utilitarianism to the letter and spent their time helping random people on the other side of the world then something of greater value i.e. the value of friendship and family would be lost and so this would kind of counterintuitively lead to less happiness overall but then again we can point to extreme examples that highlight the sort of tension between act utilitarianism and rule utilitarianism so if we do establish this rule to help your family and friends then are we really supposed to say that I'm sorry to keep picking on you Jeff but are we really supposed to say that Jeff Bezos should concentrate all his charitable efforts amongst his already rich family and friends rather than say giving the occasional tenner to Joe Bloggs in Mozambique this seems a bit of a counterintuitive outcome and so if we go too far with act utilitarianism we have undesirable outcomes but then if we go too far towards rule utilitarianism again that doesn't seem to provide the perfect solution either one final issue I want to talk about with utilitarianism today is that it ignores a person's intentions see we typically think that a person's intentions do have moral relevance attempted murder is still a criminal offense for example even if nobody actually gets harmed or if you trip someone up deliberately that's far more blameworthy than if you do so accidentally even if both acts ultimately cause the same amount of pain so imagine for example some villainous and bitter citizen wants to take revenge on his town for some reason and to do this he acquires a load of poison with the intention of poisoning the town's water supply however our villain here isn't particularly bright and he miscalculates the dosage such that rather than killing anybody all he ends up doing is getting everybody kind of mildly high and causing them a slight increase in pleasure and so according to utilitarianism what the villain did here was a good thing because he increased happiness never mind the fact
that his intention was to commit mass murder but this seems wrong what the villain did wasn't a good thing at all despite the fact that he increased pleasure and so this highlights another potential issue for utilitarianism one way utilitarianism can respond to this problem of intentions is to distinguish between judging a person's actions and judging the moral value of the person themselves and this is kind of the approach Mill seems to suggest in Utilitarianism and similarly J.J.C. Smart reserves the words right and wrong when talking about a person's actions and uses the words good and bad when assessing the kind of moral value of the person themselves so with this distinction in mind the utilitarian framework could potentially pass negative moral judgment on the villain here maybe not on his actions but on his intentions because although his intentions to harm people increased happiness on this particular occasion well 99 times out of 100 the intention to harm people usually decreases utility and decreases pleasure and so the utilitarian framework could say the villain is not a morally good person even though he luckily committed morally good actions in this particular instance Smart gives an example that illustrates this distinction between intentions and actions but kind of the other way round so if someone in 1938 Germany were to rescue a drowning man from the river and that drowning man turned out to be Hitler then we could potentially say his actions here were wrong because if he hadn't saved the man's life it could have potentially avoided all the pain and suffering of World War II however Smart says though the man acted wrongly in this case we can still say his motivation was good because he says in general though not in this case the desire to save life leads to acting rightly in other words the desire to save life generally increases happiness and increases utility but perhaps this kind of sidesteps the issue we might still want to say that
moral motivations do matter at the level of individual actions so going back to the example of the villainous man poisoning the water supply imagine the same scenario the exact same amount of pleasure but this time the person who put the drugs into the water supply did so with the express intention of increasing pleasure and happiness perhaps he noticed that the townspeople had been a bit down lately and he carefully measured out the dosages of the drugs such that he could be sure that nobody would be harmed well even though the pleasure is exactly the same in both these examples we surely want to say that the act of putting the drugs in the water supply to increase happiness while still kind of morally dubious is nowhere near as bad as putting drugs into the water supply with the intention of killing everybody maybe that's not the best example but the intuition that our intentions and motivations have moral worth or are morally important is a strong one and this is something we're going to look at in a bit more detail in the next video when it comes to Kantian deontological ethics so in a nutshell utilitarian theories are consequentialist they say that whether something is morally good or bad right or wrong depends on the consequences and while there are other non-hedonistic forms of utilitarianism most utilitarians are concerned with the consequences of pleasure and pain or happiness and sadness this is so-called hedonistic utilitarianism but within hedonism we saw how act utilitarians say we should maximize pleasure at the level of individual actions whereas rule utilitarianism says we should follow general rules that maximize pleasure even in instances where breaking these rules would maximize pleasure more effectively so we saw how Bentham's utility calculus potentially provides a way to quantify these pleasures and pains and thus quantify the goodness or badness of an action but we also saw how this calculation soon becomes incredibly complicated
when trying to apply it in practice we also looked at some other issues for act utilitarianism such as how it potentially leads to scenarios where a majority could overrule the rights of individuals and minorities as well as issues such as how act utilitarianism ignores the intentions behind an action and also how it potentially leads to scenarios where you're forcing everybody into a fake virtual reality that maximizes their pleasure even if people don't want to go into this virtual reality but we also saw how different forms of utilitarianism such as rule utilitarianism and preference utilitarianism can potentially avoid and provide a response to some of these issues when it comes to ethics basically everybody has the intuition that consequences matter at least to some degree but perhaps as you saw today if we only optimize for consequences we end up ignoring other important aspects of morality such as principles intentions and rights so the next theory we're going to look at Kantian deontological ethics is all about intentions and principles so perhaps we will solve morality once and for all with that one I don't know tune in to find out so there we go the first moral philosophy video in the books and speaking of books it's time for the book review so I only really referenced two books today the first one well this isn't really a book John Stuart Mill and Jeremy Bentham Utilitarianism and Other Essays now this is a kind of anthology of essays as you might expect from John Stuart Mill and Jeremy Bentham this is a really good book really good reference I recommend it it's got the lot in there it's got all the good essays all the classics from utilitarianism so thumbs up for that one although well John Stuart Mill's On Liberty is not in that anthology and I'm mentioning it because this is where the tyranny of the majority comes from this book does get a thumbs up from me as well but it's maybe less to do with utilitarianism more to
do with politics didn't really talk about that one then yeah speaking of politics Anarchy State and Utopia by Robert Nozick this is where the experience machine example came from but if you're getting this book for utilitarianism well don't get it for utilitarianism because there's not much on utilitarianism in it but if you're interested in sort of libertarian politics then by all means help yourself and then another one that gets a thumbs up from me is this Utilitarianism For and Against by J.J.C. Smart and Bernard Williams J.J.C. Smart is for utilitarianism Bernard Williams is against and yes this is a good book it's not particularly long but it covers quite a lot of ground and it covers as you might expect from the title the arguments for and against utilitarianism and although in places it does that kind of annoying academic philosophy thing where it's like overly complicated like X1 whatever it is quite readable so I guess kind of fun for all the family it's going to satisfy the academic nerds and it's also going to satisfy actually it might be a bit of a stretch to say it's going to satisfy the casual reader but it's definitely quite accessible so thumbs up from me anyway I'm rambling now I almost forgot to mention my own book which is what these videos are based around it covers these same arguments we went over today and it's also available on my website so I'll link that down below along with any other resources on utilitarianism that you might find useful but that about does it for this video thank you so much for watching I hope you enjoyed it and yeah I will see you in the next one