Hello, and welcome to lecture nine, where we'll be talking about digitization. In the previous few lectures, lectures one to seven, we spoke about modulation: we introduced modulation, and we covered the types of analog modulation, including amplitude modulation and its variants, and angle modulation and its variants. For each of these we discussed modulation and demodulation, power, bandwidth, applications, and advantages and disadvantages. Now, before we start talking about digital modulation, we need to introduce the idea of digitization. Digital isn't new to you, and neither is the idea of digitization, but we're going to extend it slightly. You will have seen sampling before in modules such as ELEC 270 Signals and Systems and ELEC 207 Instrumentation and Control, and you'll also have seen ADC, or analog-to-digital conversion. We're going to talk about something called quantization, because together, sampling and quantization result in digitization: sampling on its own doesn't give you a digital signal if the signal is originally analog, but sampling and quantization together do. Once we've got digitization out of the way, we can talk about digital modulation, whether it's bandpass or baseband modulation; that's lectures 10, 11 and 12, along with multiplexing, which is a familiar topic to you. It's a slightly longer than normal lecture, but much of it will be familiar. So we are now at this bridging point, and we're working towards our next class test. We're going to reintroduce the Nyquist-Shannon criterion for reversible sampling; we'll talk about over- and under-sampling, aliasing and anti-aliasing, the types of filters and how to use them; and we'll spend some time talking about quantization and some relevant formulas. So, you've seen this before: there's a lot of interest in everything digital these
days because of several of the advantages that digital offers over analog, not least of which are the ability to correct errors, to encrypt, to compress, to store digitally, and to transmit digitally over digital networks. Much of what we take for granted today in terms of communication is only possible because of digitization, and the first step in getting an analog signal into digital form is sampling. Sampling is what we do to convert a continuous-time signal into a discrete-time signal, so this is terminology you should be familiar with from signals and systems. Sampling doesn't necessarily give you a digital signal: if it starts with an analog signal, it ends with an analog signal. It works because we have redundancy, and provided we abide by the Nyquist criterion, we are able to reconstruct our original signal from our discrete samples. So while it's still analog, this is not new to you. Neither is the idea that we can model sampling as some kind of switch that gives you a discrete-time signal from a continuous-time signal; let's not dwell on that, because we won't be dealing with it. We've spoken before about under- and over-sampling, and we've said that both of them can be problematic, except oversampling is less problematic than undersampling. We've spoken about how undersampling results in something called aliasing. Aliasing is where the number of samples is inadequate, so you end up unable to determine exactly what your original signal was; in the time domain it looks like that, and in the frequency domain we have something called spectral folding, where adjacent spectra overlap. So the sampling theorem specifies that if you have a band-limited signal, that means a signal that has no spectral components above some frequency B, then your sampling frequency has to be at least twice B. Another way of saying that is that the time
between your samples, because the samples will be regularly spaced, must be at most 1/(2B). That's your sampling period: Ts must be at most 1/(2B), and fs is 1/Ts, so the sampling rate must be at least twice the highest frequency component within the signal. Again, this shouldn't be new to you. For example, if we have an audio message containing signals of up to one kilohertz and three kilohertz, so you've got two components, one at one kilohertz and one at three kilohertz, the question is: what is a suitable sampling rate? First of all, let's find the maximum frequency component. B is the larger of the two, the maximum of one kilohertz and three kilohertz, so B will be three kilohertz. Your Nyquist rate is 2B, so that's 6 kilohertz, and your sample rate needs to be greater than 2B, so greater than 6 kilohertz. A suitable sample rate might be eight kilohertz; two kilohertz is not suitable, four kilohertz is not suitable, and six kilohertz is not suitable. Six kilohertz is what we consider critical sampling, and because you have a component at exactly three kilohertz, critical sampling is definitely not acceptable; let me write that down, it's called critical sampling. You need something greater than six: eight kilohertz is fine, ten kilohertz is fine, twenty kilohertz is fine. Strictly speaking, 100 kilohertz is also fine, in the sense that you can recover your signal using a low-pass filter if you had to sample at 100 kilohertz, but it's not what we'd call suitable, because it's highly oversampled. So eight kilohertz works and 100 kilohertz works, but for a suitable sampling rate you would be looking at something in the order of 8, 10, 12 or 15 kilohertz, not a hundred. Once you get into that territory, you're creating extreme redundancy, a wastage in terms of bandwidth if you're transmitting the signal, or in terms of storage if you're storing it. Another question: say you have an audio signal that contains
signals from 300 to 3,300 hertz. So you've got your audio between 300 and 3,300 hertz, and the question is: what's the Nyquist sampling rate? We're not asking for a suitable sampling rate, we're asking for the Nyquist rate. The Nyquist rate is twice B, and B is the highest frequency, the maximum of 300 and 3,300, which is 3,300 hertz; so the Nyquist rate will be twice that, 6,600 hertz. A suitable sampling rate, if that were the question, would have to be greater than 6,600 hertz, so a sample rate in the order of 10 kilohertz would be acceptable, or 15, 20 or 30 kilohertz. But the question here is only asking for the Nyquist rate, which is 6,600 hertz. Now, this idea that as you sample in the time domain, the same thing happens in the frequency domain: we're still sampling, but the effect is that your original spectrum gets replicated, and you have these replicas, and this repeats (really, I should have used f here rather than omega). The higher the sample rate, the greater the guard band we have between adjacent spectra. So here we have the three conditions: critical sampling, oversampling and undersampling. Critical sampling is when your sample rate is equal to the Nyquist rate, so you're sampling at exactly 2B; your adjacent spectra don't overlap, but there's no guard band, they basically touch. This can be acceptable if you have very low or zero power at the frequency B, but generally we try to avoid critical sampling, and if you have a frequency component that's non-zero at B hertz, then it's definitely not acceptable. So critical sampling, in almost all cases, you'll want to avoid. Oversampling is when your sample rate is greater than the
Nyquist rate. That gives you this so-called guard band between adjacent spectra, and that's really useful: it means that when you want to recover your message, you can use a low-pass filter, and that low-pass filter doesn't have to be an ideal brick-wall filter. It can have a roll-off that's slightly more realistic, lower-cost and more achievable, and it can do that within the guard band without picking up anything from the adjacent spectrum. Now, if your sample rate is less than the Nyquist rate, then you're going to get this spectral folding, and that's not good, because if you then try to recover your signal, even if you use a brick-wall filter, an ideal low-pass filter, you would still have a distortion, and that distortion is what we refer to as aliasing. So how do we recover our message? This is the spectrum of our original message, and this is the spectrum of our sampled message, in this case oversampled. What can we do to recover the original message? We apply a low-pass filter, in this case an ideal low-pass filter, because it allows this part of the spectrum to pass and blocks all the higher frequency components, so you recover your original message. So it's a low-pass filter that we use to recover the original signal. Now a question: can an oversampled signal, fs greater than 2B, be perfectly reconstructed? That means, can we go from our signal to the oversampled signal, which is discrete, and back to the original signal, such that this is exactly the same as that, with no error? Is that possible, yes or no? Well, according to Nyquist, provided we stick to the criterion, provided these samples are taken at greater than twice the highest frequency component within the message, then the answer is yes, it is possible to perfectly reconstruct our original signal after sampling. Of course, what we would need is an ideal low-pass filter at the recovery end, but the question is only whether it can be recovered; we're not
asking for the conditions. But the conditions are that we need to sample at greater than twice the bandwidth of the original message, and we need an ideal low-pass filter at the receiving end. So if we can't oversample, if we end up critically sampling or undersampling, then we will need to somehow mitigate the effect of the distortion, the aliasing. We use something called an anti-aliasing filter. An anti-aliasing filter is a low-pass filter, and we can apply it either before or after sampling; in both cases we're going to remove some of the effects of aliasing. We can't remove all the effects of aliasing, but we can remove some. Pre-filtering removes part of the signal to avoid aliasing happening in the first place, whereas post-filtering removes the part of the signal that's been affected by the aliasing. So the question is: is it better to apply this anti-aliasing filter before or after the sampling process? Now, this was an exam question in, I think, 2019, and again in 2020. In 2020 I asked for this to be quantified using expressions, and here I asked for it to be quantified using numbers. So it's not enough to know whether we want to filter before or after; we want to be able to quantify how much of the signal's bandwidth is lost and how much is retained by applying the anti-aliasing filter before or after the sampling process. This is something that's come up in previous final exams, and it's perfectly reasonable to expect it to come up again, so I suggest you have a look at it. The answer, of course, is before. If you had a signal with some spectrum like this (I've gone through this in another pencast in the problem sheet area), and you were to undersample, you would have something like this: here you have the bandwidth B, here you have fs minus B, and this bit is what's affected by your aliasing, your spectral folding. Now, if we were
to apply a low-pass filter before that happened, we could have just filtered the signal at this frequency here, and that would avoid having aliasing in the first place; we would end up with critical sampling. Whereas if we were to filter after the sampling, then you would have to filter here, and you'd retain this and block all of this, so effectively you would lose all of this bandwidth, whereas with pre-filtering you would only lose this much bandwidth. So there's a significant difference: pre-filtering, filtering before sampling, is better. The question was whether it's better to apply the filter before or after, and the answer is before. So that was the first part of the lecture, the bit you're all familiar with: sampling, how to get from continuous to discrete. But we still don't have a digital signal. To get a digital signal we still need digitization, we still need ADC, analog-to-digital conversion, to happen, and that we refer to as quantization. Here your blue signal is an analog signal and the red signal is a quantized signal, and if you notice, here you have a fixed, finite number of voltage levels; that's what makes a signal digital. Whenever I draw a signal like this, what you should imagine is a discrete signal that looks like this: just for visualization purposes we're drawing these as continuous-time, but in reality they have to be discrete-time signals. Never mind this slide right now; in a way, it's just to prepare you for the next lecture, where we'll be talking about pulse modulation. You have your analog signal that needs to be sampled, then quantized, then encoded into bits prior to modulation. For the purpose of today's lecture, don't worry about this; it isn't yet directly relevant, but you'll see this slide again next week. So what is quantization? It's mapping a discrete-time, continuous
valued signal. Let's just look at those two words: discrete-time simply means something that consists of discrete samples; continuous-valued means that the values have still not been quantized, they haven't been digitized, and they can still take an infinite number of possible values. So quantization is mapping this discrete-time, continuous-valued signal x[n] onto a signal y[n] that takes a limited, finite number of discrete values. We have things called decision levels and representation levels. What does that give us? Again, I'm drawing a continuous-time signal, but think of it simply as these discrete-time signals. If you notice, these discrete-time signals have a finite number of amplitudes; it looks like there are only one, two, three, four levels, and four levels means two bits, so we're talking about a two-bit quantizer here. And if you were to subtract, even if we look at this as continuous-time rather than discrete-time, if you were to subtract your original signal from the quantized signal, this is your error. This error we sometimes refer to as noise, or quantization noise. Now, to express things mathematically: if we're using n bits, then the number of levels is related to the number of bits exponentially, so L is 2 to the power n. For one bit you can have two levels, for two bits four levels, for three bits eight levels, etc. So if you know the number of bits you can find the number of levels, and if you know the number of levels you can find the number of bits. For example, if you had 64 levels, then n would be log 2 of 64.
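These two relationships, L = 2^n and n = log2(L), can be checked with a couple of lines of Python (my own quick sketch, not lecture code):

```python
import math

def levels(n_bits: int) -> int:
    """Number of quantizer levels for n bits: L = 2**n."""
    return 2 ** n_bits

def bits(n_levels: int) -> int:
    """Bits needed for L levels: n = log2(L), assuming L is a power of two."""
    return int(math.log2(n_levels))

print(levels(1), levels(2), levels(3))  # 2 4 8
print(bits(64))                         # 6, i.e. log2 of 64
```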
Okay, so these two go together. We also have our quantization step size: if you think of this as your signal, and that's the range of your signal, let's say that's the peak-to-peak value, and we quantize it into a finite number of levels, then each of these levels will have a size of delta, where delta is R over L, and R is often V peak-to-peak. Now, why do I say often and not always? Because V peak-to-peak relates to your signal, whereas R relates to your quantizer, and only if you've matched your signal to your quantizer are you allowed to say R is equal to V peak-to-peak. Now, your quantization error is related to this step size, isn't it? Your quantization error is half this step size, so it's always going to be within plus or minus delta over 2. That's your quantization error. And because what we're actually doing is taking an analog signal and producing digital values, we're actually generating bits, a bit stream. If you're generating fs samples per second and n bits per sample, that gives you a bit rate, the number of bits per second: bits per second equals bits per sample multiplied by samples per second. So when we talk about quantization, quantization is an imperfect process, and it results in errors, or noise, and there are two types of noise: overload noise and granularity noise. Granularity noise is what we were just talking about, the plus or minus delta over 2. Overload noise is when your dynamic range isn't matched well enough. For example, if you have a quantizer range R and your signal is matched to that range, then everything is fine; but if your signal is not matched to that range, then what's going to happen? You're going to lose signal amplitudes which are above or below, outside, the range of the quantizer. Effectively, anything above or below the range
of the quantizer will be quantized to the maximum or minimum, and that results in a type of noise that we call overload noise. It's different from the granularity noise, which is related to the resolution of the quantizer, and we often need to trade these two off; we'll look at some ways in which we can do that. More formally, when we talk about dynamic range, it's the ratio of the largest amplitude of a sinusoid that avoids clipping (clipping is what we were describing here) to the largest amplitude of a sinusoid whose variation goes undetected, i.e. the quantization error. Strictly speaking, what we should be dividing is V peak, rather than V peak-to-peak, by delta over two; but because V peak-to-peak is twice V peak, and delta is twice delta over two, we can use that shorthand. So, a question for you. We've already asked whether an oversampled signal can be perfectly reconstructed; the question now is, can a quantized signal be perfectly reconstructed? If a signal has been quantized, and quantized means it's been digitized, is it possible to reconstruct the original analog signal from the digital samples? The answer is no, it's not possible. There is always going to be some degree of loss, either granularity noise or overload noise or both. Let's just talk about the effect of this. Remember we said that L equals 2 to the power n. So here we have 4 bits, L equals 2 to the power 4, 16 levels; 3 bits, L equals eight; and 2 bits, L equals four. You can see with only four levels how poor the approximation of our original signal looks, but with 16 levels the approximation is much better. Simply changing the number of bits affects how good your approximation is, how much of an error there is. If you look at each of these samples and the difference there is, that gives you an idea of the quantization noise. There are a couple of quick YouTube
videos I wanted to share with you. This first one shows the difference between 8-bit and 24-bit audio. Before I play the clip, what do you think the difference between 8-bit and 24-bit audio will be? Again, when we speak about 8-bit, we're talking about the number of levels: 8-bit audio means 2 to the power 8 levels, and 24-bit means 2 to the power 24 levels, so you can imagine that the quality is going to be much higher for 24-bit. What does quality mean when we talk about audio? If it's music, what does low-quality music even sound like, how will it manifest? There's going to be quantization noise; you'll be hearing that. What do you think it'll sound like? [Video clip: Sound Speeds] ...whether your sample rate is 44.1 kilohertz or 48 kilohertz. But why do we have two? I mean, why don't we just have one standard digital sample rate? Well, let's discuss this. By the late 1970s, PCM, or pulse code modulation, format digital audio was being recorded on U-matic three-quarter-inch analog videotape. Early on, it was determined that in order to record and reproduce sound of a certain frequency, you had to have a sample rate of twice that. Human hearing is approximately 20 to 20,000 hertz, and to reproduce that full range of frequencies you have to have a sample frequency of at least 40k. Here's why: one entire sound wave consists of a crest and a trough, a positive sample and a negative sample, sample one, sample two, 2 samples per audible frequency; therefore 20,000 hertz requires a 40k sampling frequency at the bare minimum. For the math that proves this, look up the Nyquist-Shannon theorem, link down in the description. Leading manufacturers at the time wanted to create a standard digital sample frequency and decided on 44.1k, but why? They determined that they needed a data rate of around 1.4 megabits per second for lossless 16-bit audio. Remember when I said they
were recording on U-matic videotape? Well, pre-HD video in the US was 525 lines of resolution, and 35 of those lines were non-video lines dedicated to things like closed captioning and timecode, leaving at least 490 lines. At 29.97 frames per second, they could put three samples on each line of resolution and get a 44,055.9 hertz sample frequency. In PAL, they had 625 lines of resolution, with 588 available lines, and a 25 frames per second frame rate; if you do the same three samples per line, you get exactly 44,100. So 44.1 is close enough to work for both PAL and NTSC. You may say close enough is good enough; yeah, it is. If you can show me a better, easy-to-use sample rate that's compatible with existing 44.1 kilohertz and the common 24, 25, 23.976 and 29.97 frame rates, then I will help you get your sample rate to the right engineers. One last note: 44.1k gives additional frequencies over the human hearing range, so a low-pass filter is applied at 20,000 hertz. In short, full audible frequencies here, and you don't need to pass any more sound here; in the middle is something called a transition band, which basically fades out the sound between here and here to prevent aliasing. Aliasing is when frequencies outside of the range of frequencies being recorded, in this case frequencies above half the 44.1 kilohertz sample rate, are confused by the converter as frequencies within the range of recorded frequencies, thereby adding incorrect data, or sound, to your recording. For more information on aliasing, link down in the description. So that's 44.1 kilohertz, but how do we get to 48k? When DAT, digital audio tape, was released by Sony back in 1987, the option to record at 48k 16-bit was included amongst the recording format options. The reason: it is an even number and fully compatible with all the common sampling frequencies like 8k, 16k and 32k; better yet, it's an easy multiplier of all but the oddball 44.1k. The big reason, though: when recording sound for TV and motion
pictures, it's fully compatible with all the common picture recording frame rates: 24, 25, 23.976 and 29.97. So there you have it, in the simplest possible terms: 44.1k was first and is all that's necessary for full human-hearing-range digital audio recording, but 48k is easier to use when recording sound to accompany a picture format. Thanks for joining me in this episode of Sound Speeds, and be sure to tune in to the future for more sound knowledge and sound advice. [End of clip] Another question for you: if we have 1024 levels, how many bits do we need to represent that? L equals 2 to the power n, so 1024 is 2 to the power n. How are you going to find n? Well, you either know it, or you take a logarithm: n equals log of 1024. Very important: the log has to be to the base 2, otherwise you won't get the right answer. If you take log to the base 10, you'll get the wrong answer, and 1024 bits is also the wrong answer; what you want is 10 bits. So you need 10 bits to achieve 1024 levels. Now, this is an interesting question: how does doubling the number of levels affect the number of bits per sample? The number of bits per sample is n, the number of levels is L, and L equals 2 to the power n. If I were to double the number of levels, from 16 to 32 for example, what would that do to the number of bits? Would it double the number of bits, or halve it? Just think of it like this: if we have 2L, that gives you 2 times 2 to the power n, which is 2 to the power n plus 1.
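Both results, that 1024 levels need 10 bits and that 2L = 2^(n+1), can be confirmed with a short sketch (again my own illustration, not lecture code):

```python
import math

def bits_for_levels(L: int) -> int:
    # The base must be 2: log10(1024) is about 3, which is not the answer.
    return int(math.log2(L))

print(bits_for_levels(1024))  # 10 bits for 1024 levels

# 2L = 2 * 2**n = 2**(n + 1): doubling the levels costs exactly one extra bit.
for L in (16, 64, 1024):
    assert bits_for_levels(2 * L) == bits_for_levels(L) + 1
```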
So effectively you will have one extra bit for every sample: adding one bit doubles the number of levels. That's a really important conclusion. We've spoken about changing the number of levels, and about changing the number of bits to change the number of levels, but we haven't spoken about changing the sample rate. We spoke about the sample rate in the context of Nyquist, oversampling and undersampling; but let's assume that we're already oversampling. Is it possible to improve the quality of our signal by oversampling further? In the first semester, we said it wasn't possible: we said if you were to oversample by a factor of 2 or 4 or 5 or 10 or 100, it wouldn't affect the quality of your recovered signal, because you'd have a perfect reconstruction as long as you're oversampling. But that didn't take into account the effect of quantization; that was all analog. Once we start talking about digital signals, or digitization, we have to take into account the effects of quantization noise, and this is where oversampling actually becomes valuable. If you look at this first signal, and the second and the third, what is it that's changing? The sample rate has changed. And if you notice, here you have your sine wave and you have your approximation, the digital approximation after quantization. Now, even if we have the same number of bits, the same n and therefore the same number of levels, your approximation here is not as good. Why? Because we have fewer samples. We're still oversampling, this is still oversampling, but because we're not sampling at the same rate, we've reduced the sample rate, the quality of our approximation is much worse. Look at this: this is our original signal, and we have one, two, three, four samples in that one period, so we're still oversampling, because four is greater than 2.
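This effect can be reproduced numerically. The sketch below is my own illustration, not the lecture's figure: it samples one period of a sine at two rates (both above the Nyquist rate), quantizes with the same 3-bit quantizer, and rebuilds the waveform with a simple zero-order hold rather than the ideal low-pass reconstruction from earlier. The RMS error shrinks as the sample rate rises, even though the number of bits is unchanged.

```python
import math

def quantize(x, n_bits=3, vmin=-1.0, vmax=1.0):
    """Uniform mid-rise quantizer: round x to one of 2**n_bits levels."""
    L = 2 ** n_bits
    delta = (vmax - vmin) / L
    k = min(L - 1, max(0, int((x - vmin) / delta)))
    return vmin + (k + 0.5) * delta

def rms_error(fs, f=1.0, n_bits=3, grid=1000):
    """RMS error between a 1 Hz sine and its sampled, quantized,
    zero-order-hold reconstruction over one period."""
    total = 0.0
    for i in range(grid):
        t = i / grid                        # dense time grid over one period
        sample_t = math.floor(t * fs) / fs  # most recent sample instant
        q = quantize(math.sin(2 * math.pi * f * sample_t), n_bits)
        total += (math.sin(2 * math.pi * f * t) - q) ** 2
    return math.sqrt(total / grid)

# Same 3-bit quantizer, two sample rates, both above the 2 Hz Nyquist rate:
print(rms_error(fs=4), rms_error(fs=32))  # the fs = 32 error is much smaller
```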
But look at this: that doesn't look anything like our original signal. If you see this first one, it's close enough, you can recognize it's a sine wave; here you can't even recognize it's a sine wave. So that's good, and that's not so good, even though all three are oversampled. Oversampling does improve the quality of your recovered signal if you're digitizing. So this is a question for you to consider. There are two things you can do: you can increase the sample rate, or you can increase the resolution, and by resolution we mean increasing L, or increasing n. In reality, in real life, you would want to increase both: you would want to oversample and have loads of levels. In practice you have to trade these off; you have to choose something that's within the limitations of your software, your hardware and your communication system. Or, additionally, you could do something called non-uniform quantization. That means that your quantizer wouldn't have regularly spaced levels, and we do that because the kinds of signals we often deal with don't have uniform probability density functions. For example, in audio signals, in speech, loud signals are less probable than lower-amplitude signals, and the ear is less sensitive to high amplitudes; we're more sensitive to lower amplitudes. Therefore it makes sense to use more bits, a greater number of levels, for smaller amplitudes than for higher amplitudes, and that's where non-uniform quantization comes in. So rather than have a uniform distribution of levels, where we distribute the levels equally across high and low amplitudes as in uniform quantization, what we could do is this. Here we have n equals 3 and L equals 8, that's 2 to the power 3, and here we have the same L equals 8 and n equals 3.
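The slides don't fix a particular non-uniform spacing, but a standard way to build one, and it's my choice here, is the μ-law characteristic used in telephony: compress the signal with a logarithmic non-linearity, quantize uniformly, then expand with the inverse. The sketch below compares plain uniform quantization against this companded scheme for a low-amplitude sine (amplitude 0.05 against a 3-bit quantizer spanning ±1; these numbers are illustrative assumptions):

```python
import math

MU = 255.0  # mu-law parameter (assumed; the standard telephony value)

def compress(x):   # compressing non-linearity, for |x| <= 1
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def expand(y):     # its exact inverse
    return math.copysign(((1 + MU) ** abs(y) - 1) / MU, y)

def uniform_q(x, n_bits=3, vmin=-1.0, vmax=1.0):
    """Uniform mid-rise quantizer over [vmin, vmax]."""
    L = 2 ** n_bits
    delta = (vmax - vmin) / L
    k = min(L - 1, max(0, int((x - vmin) / delta)))
    return vmin + (k + 0.5) * delta

def rms_err(quantizer, amp=0.05, grid=1000):
    """RMS quantization error for a low-amplitude sine."""
    total = 0.0
    for i in range(grid):
        x = amp * math.sin(2 * math.pi * i / grid)
        total += (x - quantizer(x)) ** 2
    return math.sqrt(total / grid)

uniform = rms_err(uniform_q)
companded = rms_err(lambda x: expand(uniform_q(compress(x))))
print(uniform, companded)  # the companded error is far smaller at low amplitude
```

For large amplitudes the gap narrows, which is exactly the trade-off the figures illustrate.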
But because we have more levels assigned to the lower amplitudes and fewer levels assigned to the higher amplitudes, our approximation is much better for this low-amplitude sine wave compared to this approximation, which doesn't even look like a sine wave. So non-uniform quantization beats uniform quantization for low-amplitude sine waves. Let's look at high-amplitude sine waves: this here sort of looks like a sine wave, but then so does this, and this has used only three levels, whereas here it's used four or five, actually. So non-uniform quantization helps when we have lower-amplitude signals. To achieve non-uniform quantization, what we often end up doing is using a uniform quantizer but applying a non-linearity before and after it, where the second non-linearity is the inverse of the first. We call this companding, because we're compressing using the first non-linearity and expanding using the second. What this effectively does is reduce the dynamic range of the audio before quantization, and it has the effect of reducing noise: things like buzz, hiss and low-level audio tones we can reduce, because we're making better use of the dynamic range of the quantizer by compressing before applying a uniform quantizer. So even though we have a uniform quantizer, we refer to this process as non-linear quantization. Does anyone know what noise reduction is? [Video clip] A problem with analog tape arose when trying to record material that had a larger dynamic range than the tape did, so engineers turned to noise reduction. Noise reduction systems worked by companding the signal, meaning that the signal was dynamically compressed during recording so that it would fit within the signal-to-noise ratio of the tape; on playback, the signal would be expanded to restore its original dynamic range. There were two popular analog noise
reduction systems: dbx and Dolby. dbx was founded by David Blackmer, who formerly worked for a company that made medical testing equipment. That company also had a signal-to-noise issue, in that when sticking medical probes inside a human body, the voltages had to be very low so as not to kill the patient. Blackmer's company had developed a companding system so that the measuring voltages could be low but the data could still be usable; he saw that this technology could be adapted for audio and started his own company to do just that. Ray Dolby of Dolby Labs had developed both Dolby A and, later, Dolby SR noise reduction; Dolby also created and licensed both Dolby B and C noise reduction for cassettes. Today, noise reduction isn't necessary with digital gear, but for those analog tapes it made a big difference. [End of clip] So now our signal has been digitized: that means it's been sampled and it's been quantized. We then encode it into bits, so what you now have is a bit stream, just a series of bits, and we're ready for digital communications. That's what this half of the module is about, digital communications: how are we going to transmit these ones and zeros? Well, we can do it either baseband or bandpass; that means we're either going to use some kind of physical medium, or do it wirelessly. If we're going to use a fiber-optic cable, copper cables, coaxial cables, whatever cables we use, that's called baseband communication. We use pulse modulation for that, and that's lecture 10, our next lecture. The lecture after that, lecture 11, will be looking at wireless communication: satellite communication, radio communication, mobile communication, all of these use a carrier signal. This is wireless, or bandpass, communication, which we'll look at in lecture 11.
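The main quantizer expressions from this lecture (step size, error bound, bit rate, dynamic range) can be collected in one short sketch. The numbers here, 8 kHz sampling, 8 bits per sample and a 2 V peak-to-peak quantizer range, roughly telephone-style, are illustrative assumptions, not values from the slides:

```python
n = 8       # bits per sample
fs = 8000   # samples per second
L = 2 ** n  # number of levels
R = 2.0     # quantizer range in volts (signal assumed matched, so R = Vpp)

delta = R / L              # step size, R over L
max_error = delta / 2      # granularity error stays within +/- delta/2
bit_rate = n * fs          # bits/s = bits per sample * samples per second
dynamic_range = R / delta  # Vpp / delta, which equals L

print(L, delta, max_error, bit_rate, dynamic_range)
# 256 levels, ~7.8 mV step, ~3.9 mV max error, 64000 bit/s, dynamic range 256
```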
So that was today's lecture, where we looked at digitization. We introduced the idea of quantization and added it to what we already knew about sampling; we introduced the idea of anti-aliasing filters, pre- and post-; we spoke about non-linear quantization and introduced the idea of companding; and we spoke about dynamic range and quantization error. We are now ready to launch into the final few lectures of this module, where we'll be looking at digital transmission. Here is a quick summary of the mathematical expressions; we'll be looking at these in a problem class. In our next lecture we'll be looking at baseband modulation, or pulse modulation. I hope you found that helpful. Until we meet again, stay home and stay safe.