Transcript for:
Understanding Metric Spaces in Real Analysis

So welcome to 18.S190, Intro to Metric Spaces. My name is Paige Dote; sometimes you'll see it online as Paige Bright, and that's just because I'm going to change my name, but you can just call me Paige. This is Intro to Metric Spaces, where today we're going to talk about the connection between what's covered in 18.100A and what's covered in 18.100B, or respectively 18.100P and 18.100Q. For those of y'all who don't know, there's this nice little rubric that I like to draw every time I teach this class, because it was drawn for me in my first year and it helped me really understand what the difference between the courses is: one direction distinguishes the versions that are not CIM from the ones that are a communications class, and the other distinguishes real analysis on R from real analysis on Euclidean space, or I guess I should say more than Euclidean space, and the classes fill in as 18.100A, B, P, and Q. But realistically, what I hope to highlight today is the fact that there's not too much difference between these courses. It's just a conceptual leap, but this conceptual leap is really important to have for the next courses: 18.101, 18.102, 18.103, 18.901, any number of classes that come after this for which having intuition about metric spaces will be deeply helpful. So yeah, metric spaces are what lie in the gap between A and B, and between P and Q.

Okay, a little bit about me before I jump right in. I'm a third year at MIT, and this is my second year teaching this class. I started teaching it because one of my friends was in 18.102 and having a really hard time with things like norms, which are a version of the metrics we'll talk about a little bit in this class. It's rather unfortunate if you get to the next class and it seems like you're unprepared for one of the introductory tools, especially one that comes with a lot of conceptual baggage to keep in mind; hopefully through today's lecture you'll see what some of those concepts are.

But let's start with a simpler question before we jump right into metric spaces: what is the basic tool of real analysis that makes it work? As I'm certain y'all have realized, the definitions that we use in real analysis, like convergent sequences and continuous functions, all rely on a notion of absolute value, or Euclidean distance. So let me write out what that Euclidean distance is. Given two points x and y in Euclidean space R^n, we define the distance between them, |x - y|, to be the sum of the squared differences of their components, raised to the one-half power. This is just the Pythagorean theorem, and this definition is known as Euclidean distance, as I'm certain y'all know from 18.02.
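Written out (this is just the formula described above, nothing new), the Euclidean distance between points x = (x_1, ..., x_n) and y = (y_1, ..., y_n) in R^n is:

```latex
|x - y| = \left( \sum_{i=1}^{n} (x_i - y_i)^2 \right)^{1/2}.
```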
Mostly speaking, though, we focus on R, just because most of the theory breaks down into studying everything in R. So if you don't prefer thinking in n dimensions, you can always think in one dimension, and for the most part everything will be fine; in fact, we'll see why it'll be fine in a moment. But what are the most important properties of this distance? Well, firstly, it's symmetric, meaning that the distance from x to y is the same as the distance from y to x on R^n. This is something we should expect: the distance from me to you should be the same as the distance from you to me. We also have that it's positive, or positive definite: specifically, the distance between points is always bigger than or equal to zero, and the distance is exactly equal to zero if and only if the two points are the same. So the distance between you and someone else is only zero if you are the same person. If any of the notation I use doesn't make sense, I'm happy to elaborate; "iff" is just the notation for "if and only if." And then finally, the most important one, though arguably the hardest one to show most of the time, is the triangle inequality: the distance between x and z is less than or equal to the distance from x to y plus the distance from y to z. These are the three properties that make real analysis work.

Now, if you think back to all those definitions I was talking about earlier, convergent sequences, Cauchy sequences, and so on, all of them have something to do with these absolute values, and that's because the absolute value is in fact a metric. A metric space, which I'll define in just a moment, is simply a set with a function satisfying these three properties. So let me go ahead and write that down. A metric space is a set X with a function d, which we'll call our metric, which takes in two points of X and spits out a real number; in fact, it doesn't just spit out a real number, it spits out one that is between 0 and infinity, not including infinity, and d satisfies the three properties listed here. I can rewrite those properties if y'all would like, but what's really happening is that you replace the absolute value of x minus y with the distance from x to y. Does this make sense to everyone? One point of confusion last year was what the notation with the "times" symbol means: it just means that d takes in one point of X and another point of X, giving the distance from x to y (the full definition is written out below). This is the definition of a metric space. It's not terribly insane, but what it allows us to do is study all the tools from real analysis, all of a sudden, in a completely new setting. The first few examples I'm going to talk about are ones on Euclidean space, just because we have a lot of intuition about that already, but soon we'll see that you can study this on plain old sets, you can study this on topological spaces, you can study this on functions. It's a very powerful tool that generalized a lot of math when it was first invented.
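For reference, here is the definition just described, written out in symbols:

```latex
\textbf{Definition.} A \emph{metric space} is a set $X$ together with a function
$d : X \times X \to [0, \infty)$, called the \emph{metric}, such that for all $x, y, z \in X$:
(1) (symmetry) \ $d(x,y) = d(y,x)$;
(2) (positive definiteness) \ $d(x,y) \ge 0$, with $d(x,y) = 0$ if and only if $x = y$;
(3) (triangle inequality) \ $d(x,z) \le d(x,y) + d(y,z)$.
```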
All right, so let's start off with some more of these examples. Another distance on R^n that we can consider is one that takes in two points x and y; I'll call this one d_∞. It takes in x and y and spits out the maximum, over the components, of the distances between them. In other words, if I have a vector here in three-dimensional space and another one over here, the largest difference would probably be the vertical one, I'll just say, and that is the one d_∞ picks out: the maximum distance between the components. So if I write x as (x_1, ..., x_n), the metric just gives the maximum value of the differences of these components.

Since you just walked in, let me briefly explain what's happening. We just defined what a metric space is: simply a set with a function that acts like a distance, and the three properties we want it to have are symmetry, so the distance from me to you is the same as from you to me; positive definiteness, because we don't want our distances to be negative; and the triangle inequality, which we should be a little more familiar with. So that's the definition of a metric space, and now I'm just defining a new example.

If I ask you on a problem set, which I will, to prove that something is a metric, you have to prove the three properties, which in this case isn't terrible to do. Proof: one, it's definitely positive, because we're using absolute values, which won't give us something negative. The thing that's a little harder to check sometimes is positive definiteness. So suppose the distance d_∞ from x to y is zero; what does that tell us? Well, it tells us that for all i, the distance between x_i and y_i must be zero. Why is this true? Well, if some |x_i - y_i| weren't zero, if it were something slightly bigger than zero, then the maximum in d_∞ would have to be bigger than zero, right? So this implies that x is in fact equal to y; that's how we define equality on Euclidean space. The opposite direction is usually a little easier: if x is equal to y, then the distance is zero. So usually that direction, assuming equality and showing the distance is zero, is a little bit easier, while the other direction can be a little harder: assume the distance is zero, what can you say, or assume it's not zero, what can you say, to do a proof by contradiction. Two, it's definitely symmetric: this just follows from the fact that you can swap the two points inside the maximum; it's not too bad. And three, we have to check the triangle inequality, and this is where it can be a little more tricky, because you might be inclined to just say "oh, it's the maximum of things that satisfy the triangle inequality," but we have to be a little more careful than that, and I'll explain why in a moment.

So let's consider three points x, y, and z in Euclidean space R^n, and let's consider the distance d_∞ from x to z, which, writing it out again, is the maximum over i between 1 and n of the distances between the components. Now we want to go from this to information about y, and we want to exploit the fact that the absolute value satisfies the triangle inequality; but this maximum makes it a little more difficult. Any ideas of what we can do before we use the triangle inequality? Well, in this case there are only finitely many terms that we're considering, so we know that a maximum has to exist; I'll call the index where it's attained j. So, as opposed to looking at the maximum, I'll just consider the distance from x_j to z_j, and then I can apply the triangle inequality. This is where it's important to double-check that everything is working out: we had to note that a maximum existed before we could apply the triangle inequality. So then this is less than or equal to the distance from x_j to y_j plus the distance from y_j to z_j, and now we can apply the maximum operator again to both of these, because we know that the first term is less than or equal to the maximum over i of |x_i - y_i|, and the second term is less than or equal to the maximum over i of |y_i - z_i|. And then we're done. Any questions? Happy to reiterate any of these points. Cool. So this shows that d_∞ is in fact a metric, and it's a pretty useful one at that: what it's telling you is that if I want to study things on R^n, I can essentially just study them on R and things will work out the same, because we can just study the maximum of the differences between the components.
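Written out, the metric and the chain of inequalities from the argument just given are (with j an index at which the maximum for d_∞(x, z) is attained):

```latex
d_\infty(x, y) := \max_{1 \le i \le n} |x_i - y_i|, \qquad x, y \in \mathbb{R}^n,

d_\infty(x, z) = |x_j - z_j| \le |x_j - y_j| + |y_j - z_j|
\le \max_{1 \le i \le n} |x_i - y_i| + \max_{1 \le i \le n} |y_i - z_i|
= d_\infty(x, y) + d_\infty(y, z).
```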
Okay, let's look at one more example on Euclidean space. Now, as opposed to taking the maximum, I could instead sum over all of the terms, which will in fact be a little bit easier. I'll call this the d_1 distance between x and y in Euclidean space: it's the sum over the components of the distances |x_i - y_i|. I'll leave you to prove that this is in fact a metric, but let's run through the checklist one time. One: we know that it's definitely positive; positive definiteness is a little harder, right? I mean, if the points are equal, then the distances between the components are certainly all zero, and if the distance is zero, then note that we're taking a sum of non-negative things, so if the left-hand side is zero, every single term in the sum must be zero, and that lets us reach the same conclusion. And you can apply the triangle inequality right away now, because we're just summing over terms, so the triangle inequality for absolute values applies term by term; if you want to write through that, feel free to. The more important thing I want to note here is that this is known as the little-l^1 metric, and in fact, if I wanted to, I could replace this d_1 with d_p: raise each term to the p-th power and raise the sum to the one over p, and that would give me the l^p metric, which is deeply important in functional analysis. It's one of those surprise tools that will help us later; if you take a class on functional analysis, this is likely one of the first ones you'll consider. Checking that the l^p metric is in fact a metric is a little more difficult; there's an optional problem on the first problem set if you want to work through those details, and there are a lot of hints, so that's helpful.
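None of the following is from the lecture; it's just a minimal numerical sketch of the metrics on R^n discussed so far (d_1, d_p, d_∞), with a quick spot-check of the triangle inequality on random points.

```python
import random

def d_inf(x, y):
    """Max of componentwise differences (the d-infinity metric)."""
    return max(abs(a - b) for a, b in zip(x, y))

def d_1(x, y):
    """Sum of componentwise differences (the little-l^1 metric)."""
    return sum(abs(a - b) for a, b in zip(x, y))

def d_p(x, y, p):
    """The little-l^p metric: (sum_i |x_i - y_i|^p)^(1/p)."""
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

# Spot-check the triangle inequality d(x, z) <= d(x, y) + d(y, z) on random
# points of R^3.  (A numerical check is not a proof, of course.)
random.seed(0)
for _ in range(1000):
    x, y, z = ([random.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(3))
    for d in (d_inf, d_1, lambda u, v: d_p(u, v, 2)):
        assert d(x, z) <= d(x, y) + d(y, z) + 1e-12
print("triangle inequality held on all sampled triples")
```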
Okay, so now that we've gone through all of these examples on Euclidean space, before I go much further I want to explain how this actually relates to the definitions that we already have, because you might be sitting here thinking to yourself: why does this matter? We've been studying real analysis, and now we're going back to studying a function that acts like a distance, so what? Well, there are four definitions we're going to be able to rewrite in terms of a metric space. So let's write out what a convergent sequence is. Let (x_n) be a sequence in the metric space (X, d), and let x be just a point in the metric space. Here a sequence is just as we defined it in real analysis: a function from the natural numbers into the set of points. Then we say the sequence converges to x if for all epsilon bigger than zero there exists an N in the natural numbers such that for all n bigger than or equal to N, the distance between x_n and x is less than epsilon. So this is what we mean by a sequence that converges to the point x, and it's essentially the same definition that we've been dealing with in real analysis, just with the absolute value replaced by the distance. And I'm just going to go through the other definitions as well. Same setup: let (x_n) be a sequence in (X, d). Then it is a Cauchy sequence if for all epsilon bigger than zero there exists an N in the natural numbers such that for all n and m bigger than or equal to N, the distance between the individual points x_n and x_m of the sequence is less than epsilon. So that's the definition of a Cauchy sequence.

Who here hasn't heard the definition of an open set before? No worries if not; some classes don't cover it. Let me briefly explain up here what an open set is, so that y'all can keep taking notes down there. An open set is just a generalization of an open interval. An example of an open set in R is, for instance, (0, 1): the thing that makes it an open interval is the fact that for any point I consider, let's say that one, I can choose a ball of radius epsilon around it, so distance epsilon on both sides, such that the ball of radius epsilon around that point is contained between 0 and 1, and I can do this for every single point in the interval (0, 1). That's what makes it an open interval. The definition of an open set is going to be essentially the same, where the ball of radius epsilon around x is defined as the set of points y in your metric space such that the distance from x to y is less than epsilon; that's the only difference between what's happening on a metric space rather than Euclidean space, where before this would again be absolute values. So, to define an open set: a set A contained in X, where X is a metric space, is open if for all points a in A there exists an epsilon bigger than zero such that the ball of radius epsilon around a is contained in A. Pictorially, if you prefer to think about it this way, this essentially means that there is no boundary in your set. If I were to draw a conceptual diagram of what's happening: here's my set X, here's my subset A that lies in it, and I'm just cutting out the boundary. This allows us to say: for any point, even as close as we want to the boundary of A, I can still squeeze in a little ball there. In fact, thinking about it with this conceptual drawing is deeply important, and I'll give an example of this in a moment. But this is the definition of an open set. Does everyone feel a little bit comfortable with what this is? Cool. I know it's a little bit weird, because some classes don't cover it; in 18.100A you don't particularly need it as much.
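Collected in symbols, the definitions just stated, for a sequence (x_n) in a metric space (X, d):

```latex
x_n \to x \;:\iff\; \forall \varepsilon > 0 \ \exists N \in \mathbb{N} \ \text{such that} \ \forall n \ge N, \ d(x_n, x) < \varepsilon;

(x_n) \ \text{is Cauchy} \;:\iff\; \forall \varepsilon > 0 \ \exists N \in \mathbb{N} \ \text{such that} \ \forall n, m \ge N, \ d(x_n, x_m) < \varepsilon;

B_\varepsilon(x) := \{\, y \in X : d(x, y) < \varepsilon \,\};

A \subseteq X \ \text{is open} \;:\iff\; \forall a \in A \ \exists \varepsilon > 0 \ \text{such that} \ B_\varepsilon(a) \subseteq A.
```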
Now, continuous functions are going to be a little bit weirder, because before, when we studied continuous functions, we studied ones from R to R: they take in a real number and spit out a real number. But now that we have a notion of a metric space, we can study continuous functions between metric spaces, because all we really needed, again, was absolute values. So let f be a function between X and Y, where both of these are metric spaces; in fact, I'll write that out: X has metric d_X and Y has metric d_Y, and this is the appropriate notation for noting which metric goes with which space. Then I say that f is continuous if for all epsilon bigger than zero there exists a delta bigger than zero such that if the distance d_X between two points x and y is less than delta, then the distance, in the space Y, between f(x) and f(y) is less than epsilon. Now let's double-check that this definition actually makes sense; this is a very helpful way to remember which metric goes where. x and y, again, are two points in X, right? So it only makes sense to consider the metric on X acting on them. And then f takes points in X to points in Y, so on the right-hand side we should have the metric on Y. This is the definition of continuity. I'm going to prove that a very specific, special operator is continuous today, and that'll be helpful conceptually, but this is far more general; if you prefer, you can just let Y be the set of real numbers, and that's still a really powerful tool already, because now we can study continuous functions between a metric space X and the real numbers.
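In symbols, the continuity definition just given:

```latex
f : (X, d_X) \to (Y, d_Y) \ \text{is continuous} \;:\iff\;
\forall \varepsilon > 0 \ \exists \delta > 0 \ \text{such that} \
d_X(x, y) < \delta \implies d_Y\big(f(x), f(y)\big) < \varepsilon.
```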
Okay, so I just wanted to highlight what these four definitions convert over to in this class. Tomorrow, not tomorrow, sorry: on Thursday we'll be proving quite a few theorems about all of these definitions; in fact, I'm calling that lecture the general theory of metric spaces. It'll be a very intense class in terms of proof writing, but I wanted to bring these up today so that y'all saw that it wasn't just random stuff.

So this is all good and dandy; we've only talked about Euclidean metrics so far, and that's fine, but things get a little bit weirder once you go from finite dimensions to infinite dimensions, and in fact the definition of, for instance, a continuous function will look a little bit different. So let's give an example that's a little bit weirder than a set of points in Euclidean space. Actually, I'm going to rearrange this a little: first I want to give a really weird example that's pretty simple but interesting nonetheless. Let X be any set. Then I define the metric on X between two points x and y as follows: it's 1 if x is not equal to y, and 0 otherwise, so if x is equal to y it's going to be 0. Why is this important? Because it's telling you that every single set can be given a metric. Now, this metric isn't all too interesting, but it is a useful example to keep in mind. And let's prove that it is a metric, because even though it's a simple function, it's going to be a little bit weird. One: it's definitely positive, or specifically non-negative, I should say, not negative. Now we just have to check positive definiteness: if x is equal to y, then the distance from x to y is zero, and the distance from x to y is zero only if x is equal to y; that's the definition of the metric. Two: it's definitely symmetric, because equality is symmetric. And finally, three, the hardest one: the triangle inequality. Why is this one the hardest? Because now we have three points, and we have a binary condition acting on them. So let x, y, and z be in X. Then we have three possibilities, or really quite a few more, but I'll state it in generality: one, x is not equal to y, y is not equal to z, and z is not equal to x; two, x is equal to y but y is not equal to z; or three, all of them are equal. Why are these the only three cases? Well, because the x, y, and z we're choosing are just arbitrary, so if I wanted to handle the case where y is equal to z but x is not equal to y, I would just relabel them. These are the cases we have to check to make sure our metric is actually a metric, which happens in situations like this where the metric is defined via a binary condition, or similarly simple cases. So let's check each of these; I'll just squeeze it in here. One: if none of them are equal, then the distance from x to z is just 1 by definition, and that is certainly less than or equal to 2, which is the sum of the other two distances, so we're good there. Two: the distance from x to z, what's this going to be? If x is equal to y and y is not equal to z, what's the distance from x to z? Yeah, 1, exactly, because x being equal to y means x is also not equal to z. And this equals 1, which is the distance from x to y plus the distance from y to z, because only one of those is 1. And then of course in the last case, when all of them are equal, we just get zero equals zero plus zero, so we're all good there.

Now, you're going to have an example on your homework that is similarly based on a binary condition, where instead the condition involves collinearity: if x and y are in R^2, then the distance between x and y is just going to be the regular distance in R^2 if x, y, and 0 are collinear, and the sum of the two distances otherwise. To draw a picture of what I mean here: if x and y lie on the same line through the origin, then the distance is just as normal; if they don't, then I have to add up the two distances. This is similarly a binary condition, and when you're doing this on your homework, one thing I would suggest is breaking it up into similar casework: either they're all collinear, or none of them are, or only one pair is. That's the sort of thing you should do on a problem like this; the casework can be a little bit annoying, but it's useful to do.
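Not from the lecture: a tiny brute-force check of the three axioms for the discrete (trivial) metric above on a small finite set, just to make the casework concrete.

```python
from itertools import product

def d_discrete(x, y):
    """The discrete metric: 1 if the points differ, 0 if they are equal."""
    return 0 if x == y else 1

X = ["a", "b", "c", "d"]  # any small set will do

# Symmetry and positive definiteness
for x, y in product(X, repeat=2):
    assert d_discrete(x, y) == d_discrete(y, x)
    assert (d_discrete(x, y) == 0) == (x == y)

# Triangle inequality, checked over every triple (this covers the cases from
# the argument above: all distinct, two equal, all equal)
for x, y, z in product(X, repeat=3):
    assert d_discrete(x, z) <= d_discrete(x, y) + d_discrete(y, z)

print("all three axioms hold on this set")
```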
Okay, so even though this metric is only vaguely interesting, in that it makes any set a metric space, let's start studying a set that we care about a little bit more: continuous functions. We can in fact define a metric on them. So I'm going to define C^0([a,b]) to be the set of continuous functions from [a,b] to R, and I'm going to define a metric on them; this is both a definition and an example. If I'm considering two functions f and g in the set of continuous functions on the interval [a,b], then the distance between these two functions is going to be the supremum, over all x in [a,b], of the distances between the two functions at x. This is what I'm claiming is a metric, and it's going to be a little bit difficult to prove in one of the steps; if you had to guess, probably the triangle inequality, because that's where everything tends to go wrong. But let's go ahead and check that this is a metric as well. First, it's definitely going to be symmetric, because the distance from f to g is the supremum of |f(x) - g(x)|, which is certainly equal to the supremum of |g(x) - f(x)|. One thing to note: you don't have to be as careful with symmetry, because it's fairly clear that you can mess around with these operations, whereas the triangle inequality is a little harder; there is a technical point I'll come back to later about this step, but essentially symmetry is done. Two, positive definiteness: it's definitely going to be positive, or non-negative, because it's absolute values again, and we have to check definiteness. If f is equal to g, then by definition f(x) is equal to g(x) for all x, which implies that the distance between f and g is zero. Now, what if the distance between f and g is zero? How do we conclude that f and g are in fact equal? Because the supremum is zero, at every single point the functions have to be equal. You can also make an argument via continuity, which makes use of the extreme value theorem, but we don't have to get into that much detail; that's the harder direction of positive definiteness, though. The hardest part overall is going to be the triangle inequality, because here we do have to use the extreme value theorem. So, triangle inequality: let f, g, and h be continuous functions on [a,b], and consider the distance from f to h, the supremum of |f(x) - h(x)|. We want to go from this to information about f and g, and about g and h, so we want to apply the triangle inequality. This is the only part where we use the fact that the functions are continuous: knowing that they're continuous, we can apply the extreme value theorem, so we know that this supremum is attained somewhere, and I'll let that point be y. So the distance is equal to |f(y) - h(y)|. So what now? Well, now we can directly apply the triangle inequality, because these are absolute values: this is less than or equal to |f(y) - g(y)| plus |g(y) - h(y)|, and then we can take the supremum of each of these individually, which concludes the proof. But the thing we had to make sure of first was that we couldn't just apply the triangle inequality right away: the supremum operator acting on everything meant we had to go through these steps individually. And in fact, if we wanted to be really careful, it might have made sense to do the same thing for symmetry, where we know the supremum exists and is unchanged if I swap the order of the two functions; but really it's in the triangle inequality that this matters.
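Written out, the sup metric on C^0([a,b]) (I'll write it d_{C^0}, matching how it's referred to later) and the triangle-inequality step, with y a point where the supremum of |f - h| is attained, which exists by the extreme value theorem:

```latex
d_{C^0}(f, g) := \sup_{x \in [a,b]} |f(x) - g(x)|,

d_{C^0}(f, h) = |f(y) - h(y)| \le |f(y) - g(y)| + |g(y) - h(y)|
\le d_{C^0}(f, g) + d_{C^0}(g, h).
```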
The thing to note here is that this is precisely the same sort of argument we did for the d_∞ metric: we knew the maximum existed, so we just went with it and ran. That's generally speaking good advice: if you're dealing with something continuous, try narrowing your focus down as much as possible, to a single point or a single function, whatever you can do to finish off your proof.

Now, depending on your background with analysis, you might be wondering: why do we only care about C^0? What is C^0, and why is that zero there? Well, the reason it's there is because we can also study differentiable functions, specifically continuously differentiable functions. So, a definition; actually, I'll just define it for C^k: this is the set of continuous functions on [a,b] such that the first k derivatives of f, one, exist, and two, are continuous. This is the set of what are known as continuously differentiable functions, ones for which, when I take the derivative, that derivative is going to be continuous. And once we have this new set defined, we can define even weirder metrics; well, not all that much weirder, I guess I should say slightly more complicated. Okay, so, example: let's just consider C^1([a,b]). I'm going to define the metric first, and what we want to show is that it is in fact a metric: d_{C^1}(f, g) is the supremum over x in [a,b] of |f(x) - g(x)|, plus the supremum over x in [a,b] of |f'(x) - g'(x)|. This is going to be our new metric. Proof that it is a metric: one, it's definitely non-negative, but we have to check positive definiteness. If f is equal to g, then f(x) is equal to g(x) everywhere, and in fact f' is equal to g', which implies the distance is zero, because you can just take the derivative of the equality; so that handles one direction. If instead the distance on C^1 from f to g is zero, what can we say? Well, again, we're summing non-negative things, and if the sum of two non-negative things is zero, both terms must be zero. This fact, I cannot reiterate it enough, is deeply important when you're working on your problems; you will use it repeatedly, that a sum of non-negative things being zero implies the individual terms are zero as well, and it mostly comes up when proving positive definiteness. So this implies that the supremum over x in [a,b] of |f(x) - g(x)| equals zero, and we've already explained above how that implies f equals g. So this is one of those examples where you want to boil it down to the examples you've already done before. Okay, so that proves positive definiteness; symmetry is pretty immediate, and the triangle inequality follows by the same argument as up there: we know the supremum exists for each of the terms, because all the terms are continuous. But the thing I want to ask here is: what stops us from going beyond C^1? Can we do this on C^k? The answer is yes, and it's not too much more difficult: instead, sum over the zeroth derivative, the first derivative, and so on and so forth up until the k-th derivative, and that's fine.
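In symbols, the set and the metrics just described; the C^k version is the "sum up to the k-th derivative" variant mentioned at the end:

```latex
C^k([a,b]) := \{\, f \in C^0([a,b]) : f', f'', \dots, f^{(k)} \ \text{exist and are continuous} \,\},

d_{C^1}(f, g) := \sup_{x \in [a,b]} |f(x) - g(x)| + \sup_{x \in [a,b]} |f'(x) - g'(x)|,
\qquad
d_{C^k}(f, g) := \sum_{j=0}^{k} \sup_{x \in [a,b]} \big| f^{(j)}(x) - g^{(j)}(x) \big|.
```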
What if I want to study this on smooth functions, which I'll define right now? We define C^∞([a,b]) to be the set of smooth functions, i.e., infinitely differentiable ones. Notice that if a function is infinitely differentiable, each of its derivatives must be continuous, because if one of them weren't, the next derivative wouldn't exist. So this is the set of smooth functions. What stops us from just taking the sum over all of these terms, over all infinitely many derivatives? The issue is exactly that there are infinitely many. Again, for a metric it must be the case that the value you get out is not infinity; I guess in theory you can mess around with this a little and get weirder types of metrics, but for our purposes you don't want it to be infinity. But, remark: there is a not-too-bad way around this. On the pset there's an optional example you can work through where you define a metric on this space by summing an interesting fraction, d_{C^k}(f, g) over 1 plus d_{C^k}(f, g), so it's a sum built out of a bunch of metrics; you have to, in fact, show that this is a metric, which is one of the other problem set problems. So it's not that you can't define a metric on smooth functions; it's that you have to be careful about the fact that there are infinitely many terms.

One small thing to note: is it possible that we could have gotten away with not including the derivative term in the C^1 metric? The answer is no, and I'll let you think about this some more, but the idea ends up coming from the following question: if we take that term away and the distance is zero, what does the distance between f' and g' being zero imply about f and g? I'll let y'all think about that, because it is a problem set problem, but the answer is no, and it's interesting to think about why; that reasoning is exactly why we have to be careful about the infinitely differentiable case. [Student question.] Yeah, there's nothing wrong with that; the issue is that it doesn't encapsulate as much information as we want it to. As we'll see in a moment, we want to understand differentiation and integration as maps that are continuous. So, great question: yes, we could have just taken the first term as a metric on all of these spaces, in the same way that we can consider the trivial metric, the 0-or-1 example from before, to be a metric on all of them; but we want more information when possible. Great question.

Okay, so I'll come back over here. Now that we have C^0 and C^1 defined, we can in fact state that differentiation and integration are continuous, and I'll do that right now. I guess this is a proposition: if I consider differentiation as a map from C^1([a,b]) to C^0([a,b]), so from continuously differentiable functions to continuous functions, my claim is that differentiation is continuous as a function, or I guess "map" is a better word, a map between metric spaces. Does this notation make sense to everyone, differentiation as a map? Cool.
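The proposition, written as a map between the two metric spaces defined above:

```latex
\frac{d}{dx} : \big( C^1([a,b]), \, d_{C^1} \big) \longrightarrow \big( C^0([a,b]), \, d_{C^0} \big),
\qquad f \mapsto f', \quad \text{is continuous.}
```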
So let's check that this is in fact a continuous map, which is pretty nice to do; by this setup it's made to be nice, which addresses the earlier question about why we sum over the two terms. To prove continuity, what's often best is just to write out what the distances actually are, so let's write out the left-hand side and the right-hand side of the implication. Let f and g be in C^1([a,b]), and consider the distance on C^1 between f and g; we want to say that if this is less than delta, then the distance on C^0 between the derivatives f' and g' is less than epsilon. That is, we want to show that for all epsilon bigger than zero there exists a delta such that this implication holds. Well, let's state what these two metrics actually are, and that will make it clear that the reduction is not bad at all. Again, the left-hand side is the supremum over x in [a,b] of |f(x) - g(x)| plus the supremum over x in [a,b] of |f'(x) - g'(x)|, and the right-hand side is simply the supremum of |f'(x) - g'(x)|. The reason this step is nice to do first is that we notice the second term on the left is precisely the same as the right-hand side, and all the terms are non-negative. So let delta equal epsilon: if delta equals epsilon, then d_{C^1}(f, g) being less than delta implies that the derivative term must be less than delta, in fact less than epsilon by construction, and that implies that d_{C^0}(f', g') is less than epsilon, which is what we wanted to show. That's the end of the proof; this shows that differentiation is a continuous map, and here we used the fact that we're summing over the two terms.
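The one-line computation at the heart of the proof:

```latex
d_{C^0}(f', g') = \sup_{x \in [a,b]} |f'(x) - g'(x)|
\;\le\; \sup_{x \in [a,b]} |f(x) - g(x)| + \sup_{x \in [a,b]} |f'(x) - g'(x)|
= d_{C^1}(f, g),
```

so taking delta = epsilon works: d_{C^1}(f, g) < delta forces d_{C^0}(f', g') < epsilon.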
Now, on your problem set, what you'll do is show that integration, from a up to a point t in [a,b], is in fact also a continuous operator. I think that's pretty interesting; it's going to be a little bit harder than this one, because that direction doesn't hand you both terms, but I think it'll be a worthwhile exercise to work through. Okay, let's see what's next. So, while you're going to show that integration is a continuous operator, that doesn't stop us from studying a metric built out of integration right now, because we can define one without too much trouble. We define I_1 to be a metric on C^0([0,1]); actually, I'll change the interval a little, from [a,b] to [0,1]. So it's a map from C^0([0,1]) times C^0([0,1]) to [0, infinity), given by: I_1(f, g) is simply the integral from 0 to 1 of |f(x) - g(x)| dx. All right, we're going to show that this is in fact a metric. What are the three components? One, symmetry: here we use the fact from Riemann integration that because |f(x) - g(x)| is equal to |g(x) - f(x)|, the integrals from 0 to 1 are in fact the same. Has everyone seen this fact about integration? One way you can prove it is that if two functions are less than or equal to each other, the inequality survives integration, and you can apply that inequality in both directions. Just wanted to say that; symmetry works out pretty nicely. Two: we know it's non-negative, but positive definiteness is going to be a little bit harder this time. If f is equal to g, then clearly I_1(f, g) is zero; this comes from the fact that the integrand is just literally zero. But how do you do the other direction? This is where it can be more complicated: what if, for instance, the two functions differed at just a single point? Then their integrals would still be the same. So what does I_1(f, g) = 0 actually tell us? Let me write this out. Suppose I_1(f, g) equals zero, and suppose, aiming for a contradiction, that f is not equal to g; this is where continuity comes in, and I'll draw a nice picture of what's happening here. Say this is f and this is g, pretty close by. If f is not equal to g, there must exist at least one point where they're different, right? Otherwise, if there were no single point where they differed, they would be equal everywhere, and we'd be in the case we want. But f not being equal to g means there must exist a point x where they're not the same, and what continuity tells you is that there must then exist a little ball around x, of some radius epsilon, say, such that they're not equal on that entire ball. We can use this to show that the integral over the whole interval [0,1] must therefore not be zero, and we reach a contradiction with I_1(f, g) = 0. I'll let y'all work through the precise statements, but it is precisely the fact that if there's a point where the functions differ, then there must be a ball around that point on which they differ. Cool, and this highlights yet again why continuity is important here, and not just for showing that nice maps are continuous. Okay, lastly, the triangle inequality. I just want to state this, because it's not too bad: we note, before even integrating, that |f(x) - h(x)| is less than or equal to |f(x) - g(x)| plus |g(x) - h(x)|, and then we integrate both sides of the inequality, and that gives the conclusion. So, in statements like this, either choose a point at which the supremum is attained, if the metric is based on a supremum, that's a good way to do it, or try to use facts about the absolute values before you even apply the metric; so before you even apply the integration, see what you can say. These are the few techniques that are deeply helpful.
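The metric just defined, and the pointwise inequality that gets integrated for the triangle inequality:

```latex
I_1(f, g) := \int_0^1 |f(x) - g(x)| \, dx,

|f(x) - h(x)| \le |f(x) - g(x)| + |g(x) - h(x)| \ \text{for all } x
\;\implies\;
I_1(f, h) \le I_1(f, g) + I_1(g, h).
```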
There's one more thing I want to note here. Do I still have it up? I do not, because it was a while ago: we can view integration as a sum, an infinite sum, sure, it's Riemann integration, but a sum nonetheless. So I'll note what this is called: it's known as the capital-L^1 metric on C^0([a,b]), or specifically the notation would be L^1([a,b]). The reason it's called L^1 might remind you of the little l^1 that we defined earlier, where we were summing over finitely many terms; it's the same exact notion, the only difference being that here we're "summing" over infinitely many terms, and that's why the notation is the same. Now, this notion is deeply important for 18.102, because it leads to the definitions of Lebesgue integration, which we'll talk a little bit about in this class. Really, the issue is that integration as we already understand it, via Riemann integration, can be really bad: what if we don't want to study things on continuous functions, but instead on ones with, say, finitely many discontinuities? The Riemann integral just becomes so much more annoying to deal with, and the Lebesgue integral is the way around that. It's called capital L^1 after Lebesgue, the person who invented it.

One thing I do want to note is that all of the spaces we've considered so far are vector spaces. For those of y'all who have studied linear algebra this will be slight review, but I just want to note it here because it is somewhat important and will come up a little later in the class. A vector space is simply a space in which you can add two elements together and stay in the same space. This belongs to a class on linear algebra, and there won't be too many times when the full definition of a vector space comes up, but it's a useful thing to keep in mind. So, vector space; here's the short version, since I'm not going to write out the entire definition because it's in fact quite lengthy: it's a set V together with addition, which maps from V times V to V, and scalar multiplication, which maps from R times V to V. One way to think about this: for the set of continuous functions C^0([a,b]), we have addition defined by letting f + g be the function whose value is f(x) + g(x) everywhere, and we define a constant times a function, which is what the R here is doing, as just being the constant times f(x) everywhere. The basic idea of a vector space is that addition acts how you would want it to, taking two points of the vector space to a point of the vector space, and scalar multiplication acts the same way: it takes a scalar and a point in your vector space and maps them to another point in your vector space. Why do I note this? Because metric spaces do not have to live on vector spaces; they can be much weirder, and here's a good example, which might also highlight the basic idea of a vector space if you didn't catch it. Example: consider the sphere S^1, the set of x in, let's say, R^3 such that, actually, you know what, I'll just do the circle: the circle of radius one in R^2. This is not a vector space under the usual pointwise addition, because if I take a point on the circle, let's say that one, call it x, and I do x + x, I end up with a point that's not on the circle. So this is not a vector space in the usual sense, because we want addition to stay inside the space. Why is this important? Because we can still define distances on the circle: we can define the distance from x to y to be the usual distance in R^2, or you could define it via the shortest distance between them along the circle, which is known as a geodesic. Anyways.
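In symbols, the pointwise operations on C^0([a,b]) and the circle non-example; the specific point (1, 0) is my choice to make the gestured-at example concrete, not something fixed in the lecture:

```latex
(f + g)(x) := f(x) + g(x), \qquad (c \cdot f)(x) := c\, f(x) \quad (c \in \mathbb{R}),

S^1 = \{\, x \in \mathbb{R}^2 : |x| = 1 \,\}, \qquad (1, 0) \in S^1 \ \text{but} \ (1, 0) + (1, 0) = (2, 0) \notin S^1.
```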
So the point I'm trying to highlight here is that you don't always have addition defined, and that is going to limit the number of theorems we can actually state about metric spaces: nowhere in our definition of a metric does it tell us how to add two elements together, because sometimes we simply can't. And when we can, it starts becoming more like functional analysis, which is 18.102; we'll talk briefly about that two lectures from now. But this notion, that a metric space need not be a vector space, is pretty important. Was there anything else I wanted to note? We might end today a little bit early, which will be nice. The main other thing I wanted to note, something I mentioned earlier for the little l^1 spaces, is that you can also define L^p spaces: take the integral of |f(x) - g(x)| raised to the p-th power, and then raise the result to the one over p; functions for which this quantity is finite form an L^p space. All right, so I actually got through this 20 minutes early, faster than last time, so if y'all have any questions I'm happy to talk about more of the material, but we might just end today a little bit early; I don't want to get into the general theory quite yet. What I would highly suggest is trying to sit with this notion a little bit, because though it seems like a simple notion, what we've really done here today is gone from understanding functions, for instance, as things that take in points and spit out points, to things that can be manipulated, things that have a distance between them. And once we study sequences of functions, which is deeply important in analysis, this is going to become more and more important.