Transcript for:
Introduction to Fourier Series and Transform

So welcome to the fifth lecture of ENG 860, Special Topics in Digital Image Processing. The main topic of this lecture is the introduction to Fourier series and the Fourier transform. As for the readings, I highly suggest you read the first four sections of the fourth chapter of the book. And a huge math alert for this lecture and the upcoming lectures: so that you are not overwhelmed with all the mathematical equations, I highly suggest that you derive the equations line by line on your own. But of course, for this presentation I will go over the equations without any derivations. So be prepared to see slides with many, many mathematical equations.

So in this lecture I first go over some background, and then I introduce some preliminary concepts that are useful for the rest of the slides. I will talk about the Fourier series of periodic functions of one continuous dimension, and also about the Fourier transform of functions of one continuous dimension. Then I will talk about how we can represent convolution in the Fourier domain. And then finally I will have some slides on sampling and the Fourier transform of sampled functions.

So let's just have some background on Fourier series and the Fourier transform. This is a repetition of what I already discussed in the previous lecture. Any periodic function can be represented as the sum of sine or cosine functions of different frequencies, each one multiplied by a different coefficient.

As for the Fourier transform, it applies to functions that are not periodic, but they have to meet some requirements in order for the Fourier transform to be defined for them. I'm not going to go into much detail about what those requirements are. For now, let's just briefly say that we need a function that has a finite area under its curve.

So, if you have a function which is not periodic but has a finite area under its curve, it can be represented as the integral of some sine and cosine functions of different frequencies, each multiplied by a weighting function. Here you see an example of a periodic function, and if we use the Fourier series we can decompose this function into its constituent sine and cosine functions, which, if added together, recreate the function that we see here. If you remember, last time I also talked about how the Fourier transform can be better when you are applying filtering to images. In the last chapter, what we saw was that we can have some kernels that can be used for spatial smoothing. So you might say, okay, if that can be done in the spatial domain, why is there a need to do it in the Fourier domain? And the answer is that most of the time, doing things in the Fourier domain is less involved in terms of computational complexity and computational time than doing it in the spatial domain. So let's just say we have an image which is of size M by N, and we have a kernel which is of size m by n, like this.

If the kernel is not separable, we need M × N × m × n operations to calculate the convolution of the image and the kernel, that is, to spatially filter the original image with the kernel. If the kernel is separable, the number of calculations is highly reduced: instead of M × N × m × n, we now have M × N multiplied by the sum of the two dimensions of the kernel, M × N × (m + n). The reason is that when we have a separable kernel, we can decompose it into two one-dimensional kernels, apply each of them individually to the image, and get the final result. If the same filtering is done in the frequency domain using the fast Fourier transform, the number of operations needed is going to be 2 × M × N × log base 2 of (M × N). As you can see, the expression for the number of operations is a little bit more complex; I'm not going to go through how it is derived, but we can see, for some sample image sizes and kernel sizes, how this can give us a better performance and a computational advantage. So, looking at this graph, we have an image that is of size 2048 in each dimension, and let's just say we have a kernel that is of size 767. This curve basically shows the computational advantage of doing the spatial filtering in the Fourier transform domain as opposed to doing it in the spatial domain.
So if I read from this point here, I can see that, for this image size and this kernel size, doing the filtering in the frequency domain is around 15,000 times faster than doing it in the spatial domain. Here you see a sample table that shows the computational advantage for some predefined kernel sizes. What you can see is that, for an image of this size, if the kernel you are using is 3 by 3, then doing the filtering in the spatial domain is much better; but for kernels larger than 7 by 7, the computational advantage goes to doing it in the frequency domain. This next graph is basically saying the same thing, but for when the spatial kernel is separable. Of course, as you can see, the computational advantage for a separable kernel is not as significant as with a non-separable kernel, but for very large kernel sizes it is still beneficial to do things in the frequency domain as opposed to the spatial domain. For example, for an image of the same size, using a kernel of size 201 by 201, it's almost nine times faster in the frequency domain than in the spatial domain.
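As a quick numerical sketch of these operation counts (the formulas are the ones quoted above; the 2048 by 2048 image and the 201 by 201 kernel are the lecture's example sizes, and the code itself is my own illustration, not from the book):

```python
import math

def spatial_ops(M, N, m, n):
    # direct convolution: one multiply-add per kernel tap per pixel
    return M * N * m * n

def separable_ops(M, N, m, n):
    # separable kernel: a 1-D column pass plus a 1-D row pass
    return M * N * (m + n)

def fft_ops(M, N):
    # FFT-based filtering: roughly 2 * M * N * log2(M * N) operations
    return 2 * M * N * math.log2(M * N)

M = N = 2048
adv_nonseparable = spatial_ops(M, N, 201, 201) / fft_ops(M, N)  # ~918x in favor of the FFT
adv_separable = separable_ops(M, N, 201, 201) / fft_ops(M, N)   # ~9x, matching the lecture
adv_small = spatial_ops(M, N, 3, 3) / fft_ops(M, N)             # < 1: spatial wins for 3x3
```

Note how the separable 201 by 201 case works out to roughly nine, which is the factor mentioned above, while a 3 by 3 kernel gives an advantage below one, meaning the spatial domain wins for small kernels.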

In this chapter we heavily use complex numbers, so it's a good idea for me to give an overview of the definition of complex numbers in general. A complex number C has two components: R, which is the real part, and I, which is the imaginary part. In addition to these two components, we have the letter j, which is the imaginary unit, equal to the square root of negative 1. The conjugate of any complex number can be calculated by negating the imaginary part: we keep R the same as in the original number, but now we have a negative sign in front of the imaginary part. Any complex number can be represented in different ways.

We can use a vector representation, in which we put the real part as one component of the vector and the imaginary part as the other component. Of course, if you use this vector representation, you should know that the imaginary part needs to be multiplied by the imaginary unit, the square root of negative 1. We can also have a polar coordinate representation of this number, defined as the magnitude of the complex number multiplied by the term cosine of theta plus j sine of theta. Of course, this by itself cannot be directly derived from the values of R and I, because what is the value of theta? But all of this is based on Euler's formula, which states that e to the j theta, j again being the square root of negative 1, equals cosine of theta plus j sine of theta. And therefore, the polar representation can be simplified to C equals the magnitude of C multiplied by e to the j theta.

There is a 2D graph representation as well, in which we have a 2D plane: the horizontal axis shows the real part, the vertical axis shows the imaginary part, and any complex number can be shown as a single point on that plane.
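These representations are easy to check numerically. Here is a small sketch (the specific number 3 + 4j is just an illustration of mine, not from the slides):

```python
import cmath
import math

c = complex(3.0, 4.0)        # an arbitrary complex number R + jI
r, i = c.real, c.imag

magnitude = math.sqrt(r ** 2 + i ** 2)  # |c| = sqrt(R^2 + I^2)
theta = math.atan2(i, r)                # angle of c (atan2 is the robust form of arctan(I/R))

# Euler's formula: e^{j*theta} = cos(theta) + j*sin(theta)
euler = cmath.exp(1j * theta)

# polar representation: c = |c| * e^{j*theta} recovers the original number
reconstructed = magnitude * cmath.exp(1j * theta)
```

Printing `reconstructed` gives back (approximately) the original 3 + 4j, confirming that the polar and rectangular forms describe the same number.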

In the above representation, the magnitude of the complex number can be calculated as the square root of the real part squared plus the imaginary part squared, and theta is defined as the arctangent of I over R. Using the Euler formula from this slide, we can find representations for the cosine and sine functions in terms of e to the j theta: the sine function is equal to e to the j theta minus e to the negative j theta, over 2j, and the cosine function is e to the j theta plus e to the negative j theta, over 2. The derivation of these equations is very simple; you can get them by simple addition and subtraction. Any periodic function has a period, usually denoted T, which is basically the smallest interval over which the function repeats itself. So, for example, let's say we have this periodic function, which over any period T is a repetition of the previous one. In terms of sine and cosine functions, we can have a similar description. So for any generic sine or cosine function (and the reason I'm talking about sine and cosine functions is that, as we said, any periodic function can be represented using the Fourier series, which is basically a summation of sine and cosine functions), we can have an amplitude, here shown as a. We have the frequency, which here is shown as f.

We have t, which is the variable over which the function is defined. We can have theta; don't confuse this theta with the previous theta that we had. This one is called the phase shift, which basically tells how the sine or cosine function is shifted along the horizontal axis.

And we have the vertical displacement, which shows how the sine or cosine function is shifted along the vertical axis. The relationship between the period and the frequency is usually shown as here; of course, in different textbooks you may see it written in different forms, but the main idea is that the period and the frequency are inversely proportional: if the period increases, the frequency decreases, and vice versa. So now that we have seen how sines, cosines, and general periodic functions can be written, the question becomes this: can a periodic function be expressed as a sum of sine and cosine functions of different frequencies, multiplied by different coefficients? As you can imagine, the Fourier series is basically the answer to this question. Yes, using the Fourier series representation we can represent any periodic function as a sum of sine and cosine functions; Joseph Fourier studied this subject and confirmed that it is possible. The expansion of a periodic function into its constituent sine and cosine components is usually called the Fourier series representation. One thing that you should notice is that we still don't do any transformation: we have a function, let's say in the time domain, and the Fourier series representation of the function is still in the time domain. So now let's see how the Fourier series is constructed. For the purpose of this definition, let's assume we have a one-dimensional continuous periodic function.

We call it f of t, with period capital T. Here, I use lowercase t to denote a continuous variable. The Fourier series expansion of the function f of t is defined by this equation: f of t equals the summation of c_n times e to the j 2 pi n t over capital T, where n ranges from negative infinity to positive infinity. In this representation, the function f of t is known, and e to the j 2 pi n t over capital T is a function that we already saw can be rewritten as a sum of cosine and sine.

The coefficients are the values that we want to calculate. The calculation of the coefficients is done by integrating, over one period of the function, f of t multiplied by the complex function e to the negative j 2 pi n t over capital T, and dividing by the period: c_n = (1/T) ∫ f(t) e^(−j 2π n t / T) dt, the integral taken over one period. One thing that I should note here is that the prerequisites for this course are probability and also signals and systems, so a lot of the equations that you see here won't be explained in much detail. I highly suggest you either look at signals and systems books or watch some videos to become more familiar with the details of these definitions, because the assumption here is that you are already familiar with the concept of Fourier series. So now that we have seen the Fourier series, let's talk about another preliminary concept that we need before we start talking about images, and that is the concept of impulse functions: how they are defined and what their properties are. The unit impulse function of the continuous variable t (again, we are still in the continuous domain), located at the origin, is defined like this: delta of t equals infinity for t equals 0, and it's 0 otherwise. So this function is basically an impulse of infinite amplitude but with zero duration. Of course, in reality such a function is not possible, but we can make assumptions that give us similar capabilities.
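As a sketch of how these coefficients can be evaluated in practice, here is a numerical approximation of c_n for a simple square wave (the square wave itself and the period T = 2 are my own illustrative choices, not from the slides; the known result for this wave is that even harmonics vanish and |c_1| equals 2 over pi):

```python
import cmath

T = 2.0  # illustrative period

def f(t):
    # square wave: +1 on the first half of each period, -1 on the second half
    return 1.0 if (t % T) < T / 2 else -1.0

def fourier_coeff(n, samples=20_000):
    # Riemann-sum approximation of
    #   c_n = (1/T) * integral over one period of f(t) * e^{-j*2*pi*n*t/T} dt
    dt = T / samples
    acc = 0.0 + 0.0j
    for k in range(samples):
        t = (k + 0.5) * dt  # midpoint rule over [0, T]
        acc += f(t) * cmath.exp(-2j * cmath.pi * n * t / T) * dt
    return acc / T
```

Evaluating `fourier_coeff(1)` gives a coefficient whose magnitude is close to 2/pi (about 0.637), while `fourier_coeff(0)` and `fourier_coeff(2)` come out essentially zero, matching the classic square-wave series.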

This function satisfies an integral constraint, which means that the spike of infinite amplitude and zero duration has a unit area: if we calculate the integral of delta of t from negative infinity to positive infinity, this integral, which represents the area under the function, is going to be one. Because of this property, we have a sifting property for the impulse function: the integral from negative infinity to positive infinity of any function that is continuous at t equals 0, multiplied by delta of t, equals f of 0. That means we can get the value of a function at location 0 by calculating this integral. But what if we want to find the value of a function at a location other than 0? In that case, we can use a shifted version of the impulse. The shifted impulse function is defined like this: delta of t minus t0, which is infinite at t equals t0, but at any other place its value is going to be 0. So now, if we calculate the integral from negative infinity to positive infinity of f of t multiplied by delta of t minus t0, the result will be f of t0.
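The sifting property can be illustrated numerically by replacing the ideal impulse with a very narrow normalized Gaussian (this stand-in, the test function, and all specific values below are my own illustration, not from the slides):

```python
import math

def f(t):
    # any function continuous at the point of interest
    return math.cos(t) + t ** 2

def sifted_value(t0, eps=1e-3, samples=20_000):
    # integral of f(t) * delta_eps(t - t0) dt, where delta_eps is a narrow
    # normalized Gaussian standing in for the ideal impulse; as eps -> 0
    # this tends to f(t0), which is exactly the sifting property
    a, b = t0 - 10 * eps, t0 + 10 * eps  # the Gaussian is negligible outside
    dt = (b - a) / samples
    norm = 1.0 / (eps * math.sqrt(2 * math.pi))
    total = 0.0
    for k in range(samples):
        t = a + (k + 0.5) * dt
        total += f(t) * norm * math.exp(-((t - t0) ** 2) / (2 * eps ** 2)) * dt
    return total
```

For instance, `sifted_value(1.5)` agrees with `f(1.5)` to several decimal places: the integral "sifts out" the value of f at the impulse location.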

So this is a useful definition that you will see me use many times during this lecture, and it is basically the main concept used for sampling a continuous function and converting it to a discrete version. You see one example shown here: we have this continuous function, which is multiplied by the impulse function delta of t minus t0, and the result is practically a sampled version of the original function. Of course, the mathematics is a little bit more involved; it's not as straightforward as this.

But everything that we will see basically uses this simple equation. So now, what if we want to have a train of these unit impulses? Basically, what if we want to extend this impulse function and have infinitely many impulses at different locations? To do that, we just assume that they are placed uniformly in our domain, here the continuous domain. This one at the origin is delta of t; the next one will be delta of t minus delta T (note the lowercase delta and the capital delta); then we have ones at locations 2 delta T, 3 delta T, and the same applies for negative locations as well. As you can see, having this function is the same as having a summation of the delta function and its shifted versions, and this can be compactly represented by this equation: s sub delta T of t, which is the impulse train, equals the summation over n from negative infinity to positive infinity of the individual impulse functions. Since the plan is to ultimately work with digital images, and these images are discrete functions, it's a good idea to have a definition for the discrete impulse function as well. In the discrete case, first of all, we use the variable x as opposed to the variable t, and the discrete impulse function is defined by this equation: delta of x equals 1 when x equals 0, and it's 0 when x is other than 0. (I made a mistake here on the slide; this should be x.)
As for the integral constraint, since we are working with discrete values, the integral over the continuous variable is now replaced with a summation, and it still satisfies that constraint: the summation of delta of x, the discrete impulse, for x going from negative infinity to positive infinity, is going to be one. The same can be said about the sifting property of the impulse function: if we want to find the value of a discrete function at location zero, we take the summation from x equals negative infinity to positive infinity of f of x multiplied by delta of x, which gives f at x equals zero. The same is true if you have a shifted version of the discrete impulse function; calculating that summation gives the value of the function f at location x0. As for the discrete impulse train, the definition is very similar to the continuous case.
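In the discrete case, both properties can be checked directly with finite sums (the sample function and the point x0 = 2 below are illustrative choices of mine):

```python
def delta(x):
    # discrete unit impulse: 1 at x = 0, 0 everywhere else
    return 1 if x == 0 else 0

# a sample discrete function with finite support (illustrative)
f = {x: x * x + 3 for x in range(-5, 6)}

# summation form of the sifting property: sum_x f(x) * delta(x - x0) = f(x0)
x0 = 2
sifted = sum(f[x] * delta(x - x0) for x in f)

# the unit-sum constraint: sum_x delta(x) = 1 over any range containing 0
unit_sum = sum(delta(x) for x in range(-100, 101))
```

Here `sifted` equals `f[2]`, i.e. the summation picks out exactly one value of the function, which is what sampling with an impulse does.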

So here, on the first row, you see an example of a unit impulse function in the continuous domain, located at t0, and here you see a sample impulse train in the continuous domain, with impulses placed capital delta T apart, so the locations are separated by the value of delta T. For the discrete impulse, one thing you notice is that the horizontal axis now shows discrete locations, so it is not continuous anymore, but the overall definition and the shape of the impulse and the impulse train are very similar to the continuous case. As said before, Fourier series are defined for functions that are periodic. For functions that are not periodic, instead of the Fourier series we can use the Fourier transform to represent them in the frequency domain, and the definition is like this: the script F here represents the Fourier transform of the function f of t, defined as the integral from negative infinity to positive infinity of f of t multiplied by e to the negative j 2 pi u t, where the integral is calculated over the variable t. If you look closely, you see that this equation is very similar to the equation we used to derive the coefficients of the Fourier series, except that there we were calculating the integral over a single period, while here we are working with a non-periodic function and we calculate the integral over the whole range of the function. Another thing that you might notice is that the result of the Fourier transform is a function of only u, because the integral here is over the variable t.

That means that the variable t is integrated out and won't be present in the Fourier transform. The inverse Fourier transform is defined in a similar way, which means that if we have the Fourier transform of a function, by using this equation we can get back the original function in the signal domain as well: f of t equals the integral from negative infinity to positive infinity of F of u multiplied by e to the j 2 pi u t, d u, where the integral is calculated over the variable u. That means the result of this integral will depend only on t. A function and its Fourier transform are usually shown as a pair, which means that we can go from the original function to its Fourier transform and vice versa. We can use Euler's formula to convert the Fourier transform from a function in terms of e to the j theta to functions of cosine and sine, which is sometimes useful. If we rewrite that equation, we get these two definitions for the Fourier transform and its inverse. This basically means that any function f of t can be expressed as a weighted integral of sine and cosine functions; if you remember, a periodic function could be expressed as a summation of weighted sine and cosine functions, and here that summation is replaced with an integral. So let's take a look at an example: let's find the Fourier transform of a box function, defined by this equation: f of t is a when t is between negative W over 2 and positive W over 2, and it's 0 otherwise. To derive the Fourier transform, we start from the definition. Here I use mu for the variable, but you can use u as in the previous slide; the letter is not important as long as you know it represents the Fourier domain variable. So this is the definition of the Fourier transform: the integral from negative infinity to positive infinity of f of t times e to the negative j 2 pi mu t, dt. As I said, the box function is only non-zero within this range and zero otherwise, so in the region where it is zero the integral will be zero, and in the region where it is not zero we need to calculate the integral. The original integral therefore reduces to an integral over the range negative W over 2 to W over 2. The value a is a constant, so you can take it out of the integral, and the antiderivative of the exponential term is 1 over its coefficient multiplied by the function itself; if you remember, the derivative of e to the alpha t is alpha times e to the alpha t. So now the integral reduces to this equation, in which we evaluate at the lower and upper limits of the integral: anywhere we have t, we replace it with W over 2 or negative W over 2. This results in the next equation, and one thing you might notice after rearranging is that it is very similar to the expression we derived for the sine function in terms of exponentials. So if we replace this subtraction term, together with the 2j that appears in the denominator, with the appropriate values, we see that this function reduces to a W sine of pi mu W over pi mu W; as I said, the last step uses the identity for sine that we saw before. So how can we plot such a function? One thing you might notice is that when the value of mu is 0, we have 0 in both the numerator and the denominator of this fraction. If you remember from signals and systems or math courses, this function has a value of 1 at mu equals 0, and the reason is that the limit of this function as mu tends to 0 is 1 (which can be shown, for example, with L'Hôpital's rule, which again I'm not going to go over). So if we plot that function, this is the Fourier transform of the box function.
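The derivation above can be double-checked numerically: directly integrating the box function against the complex exponential should agree with the closed form a W sin(pi u W) / (pi u W). A minimal sketch (the height a = 2 and width W = 1.5 are illustrative values of mine):

```python
import cmath
import math

a, W = 2.0, 1.5  # box height and width (illustrative values)

def box_ft_numeric(u, samples=20_000):
    # directly integrate a * e^{-j*2*pi*u*t} over [-W/2, W/2] (midpoint rule)
    dt = W / samples
    acc = 0.0 + 0.0j
    for k in range(samples):
        t = -W / 2 + (k + 0.5) * dt
        acc += a * cmath.exp(-2j * math.pi * u * t) * dt
    return acc

def box_ft_closed(u):
    # closed form a*W*sin(pi*u*W)/(pi*u*W), with its limit value a*W at u = 0
    if u == 0:
        return a * W
    return a * W * math.sin(math.pi * u * W) / (math.pi * u * W)
```

At u = 0 both give the peak value a W, and away from the origin both trace the same oscillating sinc-shaped curve.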

The Fourier transform has a peak value of aW at location u equals 0, and then, because we have the sine function, which can take both positive and negative values, we get this oscillating behavior. One thing you might notice is that the oscillation period is inversely proportional to the width of the box function: if you have a larger W, the oscillation happens over a shorter interval, because the locations at which the function changes sign are proportional to 1 over W. It means that if you have a box function which is very wide, you will end up with a Fourier transform that is very narrow, and if you have a box function that is very narrow, you will end up with a Fourier transform that is very wide. It is very common to look at the magnitude of the Fourier transform, and this is what is usually called the Fourier spectrum.

Another thing that is very useful is the definition of the sinc function, defined as sinc of x equals sine of pi x divided by pi x, and that is basically what we used here to represent the result of the Fourier transform of the box function. As I said, depending on the width of the box function, we can have different behavior in the frequency domain. So what if we want to find the Fourier transform of a unit impulse located at the origin? You can think of it as a box function that is very narrow and at the same time has a very high amplitude, because, if you remember, the impulse function has to satisfy the integral constraint. Based on the definition of the Fourier transform, the Fourier transform of the impulse function is the integral from negative infinity to positive infinity of delta of t multiplied by e to the negative j 2 pi u t, dt.

If you remember, we had a sifting property for the delta function. Based on that sifting property, F of u practically becomes the value of the exponential function at location t equals 0. So if I replace t with 0, the result of the Fourier transform becomes e to the power 0, or 1. This means that if you have an impulse function located at the origin, its Fourier transform is going to be a constant function. So what about the Fourier transform of an impulse function located at t equals t0? Again, we use the definition of the Fourier transform and the sifting property of the impulse function. By the sifting property, the value of this integral is the value of the exponential function at location t equals t0, so the Fourier transform of the shifted impulse function is e to the negative j 2 pi u t0.
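The same behavior shows up in the discrete setting: the DFT of a shifted discrete impulse is a pure phase term with flat unit magnitude. A small sketch (the length 64, the location k0 = 5, and the plain O(N^2) DFT are my own illustrative choices):

```python
import cmath

N = 64   # DFT length (illustrative)
k0 = 5   # location of the discrete impulse

# a discrete impulse shifted to n = k0
x = [1.0 if n == k0 else 0.0 for n in range(N)]

def dft(signal):
    # plain O(N^2) discrete Fourier transform
    n_pts = len(signal)
    return [sum(signal[n] * cmath.exp(-2j * cmath.pi * u * n / n_pts)
                for n in range(n_pts))
            for u in range(n_pts)]

X = dft(x)
# the spectrum should be the pure phase term e^{-j*2*pi*u*k0/N}:
# constant (unit) magnitude at every frequency
magnitudes = [abs(v) for v in X]
```

Every entry of `magnitudes` is 1, mirroring the continuous result that a shifted impulse transforms to a complex exponential of constant magnitude.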

You can also use the Euler formula to find a cosine and sine representation of this function in the frequency domain. If you remember, in the slides where I talked about Fourier series, I said that the Fourier series is still a representation of the original function in the original domain: if you have a function in the time domain, its Fourier series representation is again in the time domain.

So now let's see what the Fourier transform of an impulse train is. This is the equation we have for the impulse train, brought directly from one of the previous slides. To calculate its Fourier transform, it is useful to first find its Fourier series. This impulse train has a period of delta T, and based on the definition of the Fourier series, it can be written as this summation. Of course, in this definition we have to figure out the values of the coefficients, and to do that we calculate the integral of the product of the impulse train and the complex exponential function over one single period.

But from the definition of the impulse train, we know that in this period we only have one single impulse, at the origin. That means this equation reduces to this one, which, again using the sifting property of the delta function, means that we have to calculate the value of the exponential function at location t equals zero. So if I substitute, what I get is that the value of the coefficient is going to be 1 over delta T. Based on this, we have an alternative way of writing an impulse train: as a summation of complex exponential functions in this form. Now that we have this, we can easily calculate the Fourier transform of the impulse train; all we need to do is find the Fourier transform of each of these exponential functions and then take the summation. Because of the symmetry property of the Fourier transform, if a function f of t gives the Fourier transform capital F of u, then if we take a function with the same form as capital F but with its variable changed from u to t, and calculate the Fourier transform of this function, the result will be lowercase f of negative u. One thing that I need to emphasize again is that the assumption for this course is that the student has passed some course in signals and systems before, so this is based on your previous knowledge of signals and systems. So, similar to when we had a shifted impulse function, which resulted in a complex exponential function in the frequency domain, now that we have a complex exponential function in the signal domain, its Fourier transform will be an impulse function in the frequency domain. Now, if we combine all these concepts, we can find the Fourier transform of the impulse train as follows. The Fourier transform of the impulse train, here shown as capital S of u, is the Fourier transform of the alternative representation of the impulse train. The 1 over delta T is a constant, which can come out of the Fourier transform; then you have to find the Fourier transform of each one of the exponential functions individually. Based on this, the Fourier transform of the impulse train is going to be a summation of impulse functions, which means, again, that the Fourier transform of an impulse train in the signal domain is an impulse train in the frequency domain. One thing that you should know is that the periods of the impulse train in the frequency domain and in the signal domain are inversely proportional: if you have an impulse train with a very wide period, then in the frequency domain you have an impulse train whose period is going to be very small, and vice versa. So now let's see how convolution works in the frequency domain. If you remember from before, convolution of two functions involves flipping, or rotating by 180 degrees, one of the functions about the origin, and then sliding it over the other one; at each displacement we perform a calculation. In the discrete case this calculation is a sum of products, but in the continuous case it is equivalent to performing an integral. Based on this, the convolution of two continuous functions is defined as follows.

The convolution of f and h equals the integral from negative infinity to positive infinity of f of tau, tau being the dummy integration variable, multiplied by h of t minus tau. Basically, we flip the function h and then change its location by changing the value of t.

We take this integral over the variable tau. So now, if we want to find the Fourier transform of this convolution, we can use the definition: the Fourier transform of the convolution is the integral of the convolution multiplied by e to the negative j 2 pi u t, dt. Now we substitute the definition of the convolution itself. It looks very complex, but the way to solve it is by changing the order of the integrals: originally we first calculate the integral with respect to tau and then with respect to t, but if we change the order, this is the equation we get. We have the integral from negative infinity to positive infinity of f of tau multiplied by the integral inside the square brackets. But how can we calculate this inner integral?

If you remember, when we had a shifted version of the impulse function and we wanted to calculate its Fourier transform, what we got was a complex exponential. In general, if we have a shifted version of a function, its Fourier transform is the original Fourier transform multiplied by an exponential term, and this is exactly what we see here.

So the previous integral reduces to this integral, in which we now have the Fourier transform of the function h multiplied by e to the negative j 2 pi u tau. This Fourier transform depends only on the variable u, so you can easily take it out of the integral, and then you have H of u multiplied by an integral which, if you remember, is exactly the definition of the Fourier transform of the function f. So after the calculation the result is H of u multiplied by F of u. In conclusion, the Fourier transform of the convolution of two functions in the signal domain is equivalent to the multiplication of their two Fourier transforms. Also, based on the symmetry property of the Fourier transform, multiplication in the signal domain is equivalent to convolution in the frequency domain, and vice versa.

So again, there are two concepts that are very useful and that you need to know. First, if you have an impulse train in the signal domain, the Fourier transform of that impulse train in the frequency domain is again an impulse train. Second, multiplication in the signal domain is equivalent to convolution in the frequency domain, and vice versa.

Now let's see how sampling converts a continuous function into a discrete function. Let's assume we have a continuous function f of t and an impulse train, again in the continuous domain, in which each impulse is delta T apart from the next one. Any continuous function needs to be sampled and quantized to be processed by a computer. To sample, we multiply the original function by the impulse train; by integrating around any impulse location we basically get the value of the original function at that impulse location. So f tilde of t is f of t multiplied by s delta T of t, where f of t is the original continuous function and s delta T of t is the impulse train, and this product can be written as a summation. As for the value at sample location k, it is the integral of f of t multiplied by delta of t minus k delta T, delta T being the period of the impulse train and k indexing the k-th impulse of that train.

So now let's see how we can find the Fourier transform of the sampled signal. The way to do it is very straightforward. For this we use one of the conclusions we had regarding the duality between multiplication and convolution in the signal and frequency domains.
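As a quick numerical check of this multiplication/convolution duality (a sketch of my own, not from the lecture; the short sequences and the pure-Python DFT helpers are illustrative choices), the discrete analogue says that circular convolution of two sequences equals the inverse DFT of the element-wise product of their DFTs:

```python
import cmath

def dft(x):
    # Forward DFT: X[u] = sum_n x[n] * exp(-2j*pi*u*n/N)
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * u * n / N) for n in range(N))
            for u in range(N)]

def idft(X):
    # Inverse DFT: x[n] = (1/N) * sum_u X[u] * exp(+2j*pi*u*n/N)
    N = len(X)
    return [sum(X[u] * cmath.exp(2j * cmath.pi * u * n / N) for u in range(N)) / N
            for n in range(N)]

def circular_convolve(f, h):
    # (f * h)[n] = sum_k f[k] * h[(n - k) mod N]
    N = len(f)
    return [sum(f[k] * h[(n - k) % N] for k in range(N)) for n in range(N)]

f = [1.0, 2.0, 3.0, 4.0]
h = [0.25, 0.25, 0.25, 0.25]   # a small averaging kernel

direct = circular_convolve(f, h)
via_dft = [c.real for c in idft([F * H for F, H in zip(dft(f), dft(h))])]

# Convolution in the signal domain matches multiplication in the frequency domain.
for a, b in zip(direct, via_dft):
    assert abs(a - b) < 1e-9
```

With this averaging kernel every output sample is the mean 2.5, and the two routes agree to within floating-point error.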

So, since in the signal domain sampling was done by multiplying the signal with the impulse train, in the frequency domain that is equivalent to taking the convolution of the Fourier transform of the original signal with the Fourier transform of the impulse train. Also remember that the Fourier transform of an impulse train is itself an impulse train. So now that we have this, we can derive the Fourier transform of the sampled signal, capital F tilde of u.

Basically, the Fourier transform of the sampled signal is the convolution of the Fourier transform of the original signal with the Fourier transform of the impulse train. So I just replace this with the definition of the convolution.

We have the integral from negative infinity to infinity of F of tau, tau being the dummy integration variable, multiplied by S of u minus tau, d tau. Now, if I replace S with its representation in the frequency domain, that is, its Fourier transform, and substitute that into the original definition, we get this integral. What we can do is switch the places of the integral and the summation to get to this equation. And what we end up with is that the Fourier transform of the sampled signal is 1 over delta T times the summation over n of F of u minus n over delta T. This means that the Fourier transform of the sampled function is an infinite periodic sequence of copies of the Fourier transform of the original function, with period 1 over delta T. So now let's see how this looks in practice for an example function.

Its Fourier transform has this form. As you can see, this Fourier transform is non-zero only over a small range; at any other location it is zero. Going back to the previous equation, we saw that the Fourier transform of the sampled function is basically a periodic sequence of copies of the Fourier transform of the original function, but with period 1 over delta T. So this is the general representation of the Fourier transform of the sampled function.
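As an aside (not from the lecture), this periodicity is easy to verify numerically. The sketch below is my own illustration: the Gaussian samples and the choice of delta T equal to 0.1 are arbitrary. It evaluates the transform of the sampled signal directly as the sum over k of f of k delta T times e to the negative j 2 pi u k delta T, and checks that shifting u by one period, 1 over delta T, leaves it unchanged:

```python
import cmath
import math

def sampled_ft(samples, dt, u):
    # F~(u) = sum_k f(k*dt) * exp(-2j*pi*u*k*dt)
    return sum(fk * cmath.exp(-2j * cmath.pi * u * k * dt)
               for k, fk in enumerate(samples))

dt = 0.1                                                 # sampling interval, delta T
samples = [math.exp(-(k * dt) ** 2) for k in range(50)]  # samples of a Gaussian

u = 1.7                                     # an arbitrary frequency
a = sampled_ft(samples, dt, u)
b = sampled_ft(samples, dt, u + 1.0 / dt)   # same frequency shifted by 1/delta T

# The spectrum of the sampled signal repeats with period 1/delta T.
assert abs(a - b) < 1e-6
```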

As you can see here, this function is repeated every 1 over delta T. But now the question becomes: how does this delta T affect the Fourier transform of the sampled signal? So far we didn't specify any value for delta T; we just solved it for a generic value. So we can have three different cases depending on the value of 1 over delta T, which is basically the sampling rate.

First, we can have the oversampled case, in which 1 over delta T is sufficiently large that we are able to separate the copies of the transform in the frequency domain. As you can see, each copy of the original Fourier transform can be easily separated, because we have a gap between consecutive copies.

Then we can have the critically sampled case, in which the sampling rate, 1 over delta T, is just large enough to separate the copies. As you can see, there is no gap between them anymore, but you can still isolate one copy.

Finally, we can have the undersampled case, in which the sampling rate 1 over delta T is not large enough to separate the copies in the frequency domain. This is the case you see here: the gap between the copies is not visible anymore.

So now, based on these cases, we can state the sampling theorem. Let's assume a function f whose Fourier transform is zero for frequencies outside a finite interval; that interval is shown as negative u max to positive u max, and it is defined symmetrically around the origin. This function, a simple example of whose Fourier transform you can see here, is called band-limited, because any frequency outside this range is zero. Based on the sampling theorem, we can recover the original f of t from its samples if we can isolate a single copy of F of u from the periodic sequence of copies contained in the Fourier transform of its sampled version. So this is the Fourier transform of the sampled version of the original function, and the sampling theorem states that extraction of a single copy is possible if the sampling rate 1 over delta T satisfies this inequality: basically, the sampling rate must be higher than two times the maximum frequency in the original signal. As you can see, the center of repetition, or in other words the period of the repetition in the frequency domain, is 1 over delta T.
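The three cases can be summed up in a tiny classifier (a sketch of my own; the function name and the example numbers are hypothetical, not from the lecture):

```python
def sampling_regime(delta_t, u_max):
    """Classify the sampling of a band-limited signal whose highest
    frequency is u_max, given the sampling interval delta_t."""
    rate = 1.0 / delta_t                 # sampling rate, 1 / delta T
    if rate > 2.0 * u_max:
        return "oversampled"             # copies of F(u) separated by a gap
    if rate == 2.0 * u_max:
        return "critically sampled"      # copies just touch, no gap
    return "undersampled"                # copies overlap: aliasing

print(sampling_regime(0.1, 3.0))    # rate 10 > 6: prints "oversampled"
print(sampling_regime(0.25, 2.0))   # rate 4 == 4: prints "critically sampled"
print(sampling_regime(1.0, 1.0))    # rate 1 < 2: prints "undersampled"
```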

And since we assume that the range is symmetric over positive and negative frequencies, 1 over delta T needs to be larger than 2 times u max.

But now the question becomes: how can we recover a signal from its samples? There are two answers to that: one is in the frequency domain, which is much more straightforward, and the other is in the signal domain. In the frequency domain, if you look visually, here we have again the Fourier transform of the sampled signal, that is, periodic copies of the original Fourier transform. What we do is multiply this Fourier transform by a box function with amplitude delta T, and by doing so we isolate the central copy of the original Fourier transform; this is what we get. Then, just by doing the inverse Fourier transform, we can recover the original signal. So again, we extract the central portion of the Fourier transform using a box function which is defined like this: H of u is non-zero with value delta T if u is between negative u max and positive u max, and otherwise it is zero. We multiply this by the Fourier transform of the sampled signal, and we basically get back the Fourier transform of the original signal. To get the original signal in the signal domain, we find the inverse Fourier transform of the result of this multiplication.

So what is the equivalent way to do signal recovery in the signal domain? Again we use the convolution property of the Fourier transform, and also its symmetry property. Because of the convolution property, multiplication in the Fourier domain is equal to convolution in the signal domain. Also, because of the symmetry property, and as you saw in an example, if we have a box function in the signal domain we get a sinc function in the frequency domain; that means that if we have a box function in the frequency domain, we will have a sinc function in the signal domain. So the inverse Fourier transform of a box function is a sinc function, and because of the convolution property, recovery in the signal domain is the result of a convolution of the samples of the signal with sinc functions placed periodically.

Undersampling the signal can cause aliasing. By definition, alias means a false identity, and a sampling phenomenon that causes different signals to become indistinguishable is called aliasing. Since we describe a digitized function only by its samples, it is possible for different continuous functions to coincide at the values of their respective samples. What does that mean? It means that there can be many functions that, if they are not sampled properly, end up having exactly the same sample values, and that is what is called aliasing.
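Going back to the signal-domain recovery just described: the sinc-interpolation idea can be sketched in a few lines (my own illustration, not from the lecture; the cosine test signal, the sampling interval of 0.1, and the 401-sample window are arbitrary choices, and a finite window only approximates the infinite sum):

```python
import math

def sinc(x):
    # Normalized sinc: sin(pi*x) / (pi*x), with sinc(0) = 1.
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(samples, dt, t):
    # f(t) ~= sum_k f(k*dt) * sinc((t - k*dt) / dt): a convolution of the
    # samples with periodically placed sinc functions.
    return sum(fk * sinc((t - k * dt) / dt) for k, fk in enumerate(samples))

f0 = 1.0   # the signal's only frequency, so u_max = 1
dt = 0.1   # sampling rate 1/dt = 10 > 2 * u_max: oversampled
samples = [math.cos(2 * math.pi * f0 * k * dt) for k in range(401)]

t = 20.05  # a point halfway between two samples, far from the window edges
approx = reconstruct(samples, dt, t)
exact = math.cos(2 * math.pi * f0 * t)
assert abs(approx - exact) < 0.05   # close, up to window-truncation error
```

Because the sinc tails decay slowly, truncating the window leaves a small residual error; an infinitely long sample sequence would recover the band-limited signal exactly.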

So two continuous functions with these characteristics are usually called an alias pair, and aliasing happens when the signals are undersampled. If you look at this figure, as a result of undersampling we have an overlap between the different copies of the original signal's spectrum in the Fourier transform of its sampled version, and then, by using box filtering, it is not possible to isolate the central portion containing only the Fourier transform of the original signal. So when the sampling rate 1 over delta T is less than 2 u max, we have this problem.

As an example, let's take the sine function sine of pi t. Based on the definition we saw before, for this function the period is 2 while the frequency is 0.5. Based on the sampling theorem, the sampling rate must be greater than two times the maximum frequency; since here we have only one frequency, 0.5, and two times 0.5 is 1, the sampling rate must be greater than 1. If that is not satisfied, what we end up with is undersampling and aliasing. The effect of aliasing can be seen in these two examples: for the bottom signal we have roughly four samples per period, but for the one on top we have only between one and two samples per period.

The result is that if we only look at the sampled versions of the two functions, they look exactly the same, even though the original signals are not the same. The sampling rate for the lower function is good, because you can easily see the shape of the original signal from its samples; but for the function on top, the shape you see from its samples is different from the original signal.
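An alias pair is easy to construct numerically (a sketch of my own, not from the lecture; the frequencies 0.2 and 1.2 are arbitrary illustrative choices). Two sinusoids whose frequencies differ by exactly the sampling rate produce identical samples:

```python
import math

fs = 1.0            # sampling rate, 1 / delta T
dt = 1.0 / fs
f1 = 0.2            # within the limit: fs > 2 * f1, so f1 is sampled properly
f2 = f1 + fs        # 1.2: far above what this sampling rate can represent

s1 = [math.sin(2 * math.pi * f1 * k * dt) for k in range(20)]
s2 = [math.sin(2 * math.pi * f2 * k * dt) for k in range(20)]

# The two continuous functions differ, yet their samples coincide exactly:
for a, b in zip(s1, s2):
    assert abs(a - b) < 1e-9
```

Sampled at this rate, the frequency-1.2 sinusoid is indistinguishable from the frequency-0.2 one; only a sampling rate greater than 2 times 1.2 would keep them apart.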

In practical applications, before sampling we usually smooth the original signal, and this is usually called anti-aliasing. The reason for that is that smoothing, if you remember from the previous lecture, is basically a form of low-pass filtering. So by smoothing you are practically eliminating all the high frequencies, and when you eliminate the high frequencies, that means the sampling rate can be much smaller without introducing aliasing into the sampled result. Of course, you will see more visual examples of this when I talk about sampling in two dimensions, that is, practically in images.

In the next lecture I will talk in more detail about the discrete Fourier transform of one variable. One thing you might notice is that when we started with the Fourier transform, we had a function which was continuous, and its Fourier transform was also continuous. When we then sampled the signal, we converted the original signal into a discrete version of itself, but the Fourier transform of the sampled version was still continuous, and that is not something we want to work with on a computer; it is necessary for us to have the Fourier transform be discrete as well. This is something I will talk about in the next lecture. I will also talk about how we can extend these definitions to functions of two variables.

I will go over some of the properties of the 2D discrete Fourier transform and its inverse. Then we will see how filtering in the frequency domain works and how we can do image smoothing and sharpening in the frequency domain. One thing that I cannot emphasize enough is that just watching the lectures may give you some idea about the concepts, but it is necessary for you to go over the textbook as well and try to follow the logic step by step, because in this video I mainly talked about the equations and didn't derive any of them. You can follow all the steps mentioned in the slides by using the definitions of the Fourier series and Fourier transform.