Learn high-level system design by coding a YouTube clone. Starting with a basic flow, you'll gradually add three key services: upload, watch, and transcoder. This course covers the actual high-level design concepts in practice, including chunking, transcoding with FFmpeg, and adaptive bitrate streaming using HLS. Keerti teaches this course, and she will help you build a sophisticated video platform and master system design principles. "Files are being generated, chunks or parts... did you see this happened in parallel? Adaptive bitrate streaming: let's see, the first 10 seconds of the chunk is present in this file." So these are the things that we have coded. I coded YouTube in 6 hours, and it is not just another YouTube clone. We wrote three services, an upload service, a watch service, and a transcoder service; we also added a pub-sub, Kafka, and a DB, PostgreSQL. Now, before I get into the details, before I do a code walkthrough and tell you everything, let me first set some context: why and when did I write this code, and how can you benefit from this video? Let me begin by reminding you that plain clone projects or simple web-dev projects don't work anymore; they are not impressive anymore because the competition has increased so much, and you're expected to also talk about system design. So this is a project where we do high-level system design along with the code, and in the code we have even done the transcoding ourselves. This code is not easily available online; it takes a lot of effort to create. This is truly quality content, I can vouch for that, and I take a lot of pride in it; it is something you won't find easily, and it is a very different kind of project. Now, why did I create it? This project was part of my HHLD course, the Hands-On High-Level System Design course, where we took up three projects: one was WhatsApp, the second was YouTube, and the third was Zerodha. So this is one of the projects we did in HHLD. Why am I saying I took exactly 6 hours? Because all the code that you're going to see, every single line, has been written during the classes. The YouTube project lasted for 2 weeks, which was 6 hours of classes, and every single thing, from creating files and projects to setting up environment variables, setting up auth and Kafka, was done within the class. Even all the explanation was done within the class: when I say Kafka, I also explained what Kafka is, how you can picture it, why we need it; OAuth, what OAuth is; all the setup, all the theory, all the detailing happened during these 6 hours. So if you think about it, this code has actually been written in less than 6 hours, maybe 3 hours, you can say half of it, because I've explained each line of the code at least twice to my students. Now the question might be: why am I making all of this available for free? See, this is a project which is extremely good. My students take a lot of pride in being part of these courses; I know that the kind of guidance and motivation I provide cannot be provided outside; we have all pushed each other to write better code, we support each other, we solve each other's doubts, and all of that is there. But still, if you cannot sign up for HHLD, or if you do not want to, I still want to tell you how you can start thinking about such a project: if you take up making your own YouTube, how to go step by step, what the instructions are, and how you can work through it. And I will also give you
the code; it will be available for free so you can access it. In very simple terms, if you are someone who wants to build a project, you will be able to, because this video will be enough for step-by-step guidance and it will also give you the code. But if you are looking for more structured, more detailed guidance, then you can still enroll for HHLD: the recordings are available and the course is still going on, we are going to be working on Zerodha, so you can still enroll, and you will also get access to the next batches; the details are on the site, you can check it out. Actually, if you're a complete beginner you can still sign up, because we have covered things end to end: in the first few classes I even explained what Node is, what the difference between Next.js and React.js is, what Next.js is, how to install packages, every single detail you can think of. We started with AWS, what a VPC is, what a subnet is, and fast forward, we have done so much: load balancers, so many services; all of that was part of the course. Okay, that was a quick recap that you can still sign up for the course, but now let's finally get started, and I will tell you how you can approach the entire project step by step. Since a lot of my students are beginners, and a lot of you might also be beginners, what I've done is this: instead of starting from the HLD, start from the core features. What are the core features? Upload, watch, and transcode. Just focus on these core features, understand everything step by step, and later we'll stitch everything together. So initially we'll focus on how you can upload; then we will level up and do chunking plus upload; and after that we will level up again and do adaptive bitrate streaming while watching and encoding while uploading. In the starting we'll keep things very simple, but most of the code is going to change later; that is why, right now, the code I am showing on the screen is a Google doc. Why a Google doc? Because most of the code that I'm going to show in the starting is going to change. But if you want to see code, you can: there are four main things that we have written, three services, which are the backend, all written in Node.js, and the client, written in Next.js. So let's see how we can get started. The first thing is how to play videos on the client. For that there is a very famous package you can use with React and Next.js called react-player. Here you can see we are not just playing videos on the client; we did three things, actually. When you want to play a video, first, you can play any YouTube video, so I said take any YouTube URL and play it using react-player. The second thing we did was that you should be able to stream your own video and audio, something like Zoom, and that is why the first page that we created, I called it room.jsx.
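If you want to try that first page yourself, here is a minimal sketch of where it ends up, covering all three pieces walked through next. The file name, URLs, and component structure are placeholders rather than the exact repo code; it only assumes the react-player package is installed.

```jsx
// app/room/page.jsx - minimal sketch, not the exact course code
"use client";

import { useState } from "react";
import ReactPlayer from "react-player";

export default function Room() {
  const [stream, setStream] = useState(null);

  // Ask for camera/mic permission and keep the MediaStream in state
  const callUser = async () => {
    const media = await navigator.mediaDevices.getUserMedia({
      video: true,
      audio: true,
    });
    setStream(media);
  };

  return (
    <div>
      {/* 1. Any public YouTube URL */}
      <ReactPlayer url="https://www.youtube.com/watch?v=dQw4w9WgXcQ" controls />

      {/* 2. A public (for now) S3 object URL; replace with your own bucket and key */}
      <ReactPlayer url="https://your-bucket.s3.amazonaws.com/demo.mp4" controls />

      {/* 3. Your own camera/mic stream, Zoom-style */}
      <button onClick={callUser}>Call user</button>
      {stream && <ReactPlayer url={stream} playing muted />}
    </div>
  );
}
```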
That gave a Zoom-like feel: you click on a button and you can start streaming yourself, both video and audio; you ask for permission and start streaming yourself. The third thing we did was create one S3 bucket. Here you should understand what a bucket is; I explained to my students what a bucket is, how you can create one, how to add a video (just upload it manually), and from there you get one URL. You can keep the bucket public for now, just to see that you are able to play it using react-player: take this S3 URL and you should be able to play the video. So that was the first step, and if you want to see the code, let me show you. This is the react-player package that we use, and you can see I have added three main things: in this react-player I have given one YouTube URL; here I have given one S3 URL (I uploaded one day's recording of my HHLD class and showed them that you can play it like this from S3, and you can do the same); and the third thing, you can add a button, and as soon as you click on it (I've named it "call user" to give the Zoom-like feel), the URL becomes the user stream. How do you get the user stream? Just like this: you turn on your video and audio, it asks you for permissions, and you're able to play it. So the first step is completely focused on the client side, so that you get the confidence that you can play videos on the client. Let me just write this down: the first step was on the client side, that you are able to play the videos, so this is the client, and we are done with step one. After that, we created one upload service: the client is on Next.js, and the upload service is on Node.js, and here we created one API, /upload, whose main goal is to upload some video or some file to S3. To keep things very simple, the first thing you can do is upload just a PNG; after that you can try uploading a small video, like a 4-5 second video, so you're leveling up step by step. And for both the PNG or the video, you can hardcode for now: just add some file in your backend and try uploading that. So this is the first step: create the upload service, upload media to S3, and test this upload API using Postman. Initially we tested using Postman: Postman calls /upload, that calls the API, and whatever hardcoded file you have should get uploaded to S3, and from there you get a URL that you can also play on your client. After that, we level up: instead of sending from Postman (we'll just remove that), we will send the upload request from the client itself, from the front end, so you're stitching everything together. Right now everything is still hardcoded, so the next step is to add one input field for the file, add the file to this upload request, send it to the backend service, and the backend service will upload it to S3. So one entire flow is going to be complete: you select a file, send it to the upload service, and that uploads it to S3. So let's look at the code now.
That was the first step. After that, the second step was to create the upload service. In our upload service you can see I have created one /upload API, one route, and the logic sits inside the controller; this is the controller code. Why am I showing the code in the doc right now? Because the code has changed a lot by the end of the project. The link to the doc is in the description, you can check it out, but to give you an idea: in our upload service there are controllers and there are routes. Inside your routes you can create one upload route (right now there's also upload-to-DB and complete, but in the starting we had just one upload API, so you can think of something like that), and the code lives in the controller. In the end we'll be chunking and uploading there, but your actual logic should be in the controller. Let's see the controller code. To connect to AWS we use the aws-sdk package, and this part is just to get the file: as I told you, in this second step I am hardcoding the file on the backend, so this is my hardcoded file. First I'm just connecting to AWS, giving the bucket details, the file name (the key), the access ID, all of that, just configuring AWS. And this is my main code: here I am uploading, and if there is any error I just log it and send a 404 response, and if it is successful I send the success response and log it. That is it. In the next step we are going to add the UI, because right now the backend is hardcoded. On my main page I've added one component, UploadForm, and inside this component you select a file and then send it. For that there is one input field where you take the file as input, and we handle the file change by just setting the file in state; when you submit, handleFileUpload runs. Inside that, our backend upload service is running at port 8080, so we just call /upload, and we add our file to FormData by appending it. You can also inspect the network tab and see the file going out; I'll demo everything at the end. So at this point, what have we done? We have selected the file from our front-end UI, our client, and sent it in the upload request to the backend. Now we need to take this file out of the request and send it to S3, because right now the backend is still hardcoded. So that is what we are going to do: extract the file from the request in the service and upload it to S3. For that we are using Multer. Again we use the AWS SDK, and before our controller there is a middleware that we have added, because a single file is going to come in. This part stays the same, but instead of the hardcoded file we now get the file from req.file.
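As a rough sketch, the route plus controller at this stage, with Multer and the v2 AWS SDK, can look like this. The bucket name, env variable names, and memory-storage choice are my assumptions for illustration; the class code keeps routes and controllers in separate files and may differ in the details.

```javascript
// uploadService/index.js - minimal sketch with assumed names
const express = require("express");
const multer = require("multer");
const AWS = require("aws-sdk");

const app = express();
const upload = multer({ storage: multer.memoryStorage() }); // keep the file in memory as a Buffer

const s3 = new AWS.S3({
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  region: process.env.AWS_REGION,
});

// Multer middleware pulls the "file" field out of the multipart form data
app.post("/upload", upload.single("file"), (req, res) => {
  const params = {
    Bucket: "your-bucket-name",   // placeholder
    Key: req.file.originalname,   // the file name becomes the S3 key
    Body: req.file.buffer,
  };

  s3.upload(params, (err, data) => {
    if (err) {
      console.error(err);
      return res.status(500).json({ error: "upload failed" });
    }
    console.log("uploaded to", data.Location);
    return res.status(200).json({ url: data.Location });
  });
});

app.listen(8080, () => console.log("upload service listening on 8080"));
```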
Again we are just configuring AWS, so the same code has changed right here: we take the file out of the request, and in the upload the same thing happens, we try uploading it, and if it is not successful, okay, and if it is successful, good. So now one flow is complete: you are uploading from your front end to your backend to S3, and you're able to play that S3 file on your front end. I think you should feel a bit more confident; I saw this confidence in my students. After that, I introduced auth; we'll be doing OAuth in this project. I had already done auth using JWT in my WhatsApp project, and we discussed JWT in a lot of detail; I'll be creating another video on that as well, the link will be in the description. In this project I have focused on OAuth, which is sign-in using Google, and later we also compared a bit what the difference between OAuth and SSO is; you can read about it. Here we'll focus on sign in with Google, and for that you can use NextAuth. If you're doing this, the main thing you need to understand is that this is happening on the Next.js server. That is the main difference between Next.js and React: in React everything happens on the client side, but in Next.js there is also a server side, and that is why in a lot of places you will see that on the top we write "use client". So NextAuth is something we have done on Next.js, but on the server side, instead of writing a completely different service for it, because I wanted you all to understand that you can also work on the Next.js server; that is why you can write full-stack projects on Next.js itself, the front end as well as the backend. So here we are using the Next.js server. You will have to sign up on the Google Cloud console, create your project, and set everything up; once you do that, all the steps are written here, you can refer to them, add your Google provider, and using NextAuth do the sign-in. Here you can see we have added two buttons, one for sign in and one for sign out, and it's a simple sign-in/sign-out because we're using NextAuth. NextAuth is amazing, you should definitely try it, it makes things so much easier, and you can get the data from useSession. Later in the project you will see that from this data we took out the username and the image and displayed them, and we made sure that only those who are signed in are able to upload videos. This part is very interesting; just read about NextAuth and try implementing it. And over here, yes, SessionProvider and all of that is done. So this is what we did for auth, and after this we discussed a bit of theory, like SSO; I hope you understand all of that.
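For reference, with next-auth (assuming v4 and the app router) the Google provider setup and the two buttons can be as small as this. The env variable names and file paths follow the usual next-auth conventions rather than the exact repo layout, and it assumes the root layout wraps everything in SessionProvider, which is exactly what the project does.

```javascript
// app/api/auth/[...nextauth]/route.js - runs on the Next.js server side
import NextAuth from "next-auth";
import GoogleProvider from "next-auth/providers/google";

const handler = NextAuth({
  providers: [
    GoogleProvider({
      clientId: process.env.GOOGLE_CLIENT_ID,         // from the Google Cloud console project
      clientSecret: process.env.GOOGLE_CLIENT_SECRET,
    }),
  ],
});

export { handler as GET, handler as POST };
```

```jsx
// A client component with the two buttons
"use client";
import { useSession, signIn, signOut } from "next-auth/react";

export default function AuthButtons() {
  // session.user.name and session.user.image become available once signed in
  const { data: session } = useSession();
  if (session) {
    return <button onClick={() => signOut()}>Sign out {session.user.name}</button>;
  }
  return <button onClick={() => signIn("google")}>Sign in with Google</button>;
}
```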
But now let's get to a very interesting part, which is Kafka. Now that we are done with one flow, front end to backend to S3, and we have also talked about auth, I think it is time we start talking about HLD. The first thing you need to understand is that uploading is not straightforward; there is more involved. When you are uploading a video, the first thing you need to do is content filtering: you need to make sure there's no hate speech, nudity, and so on. Second, you need to take care of copyright issues, so you need to do all those checks. And the third thing you need to do is transcoding: while playing YouTube videos you must have noticed there are different resolutions, 1080p, 720p, 480p, so while uploading itself you have to transcode the video into different formats and keep them. Because multiple things need to happen at upload time, we need a pub-sub. Why a pub-sub? There will be one service responsible for publishing to our pub-sub (in our project we'll use Kafka), and from there different services can pick up the same message and use it. Suppose there is one service for transcoding: it can consume the message and transcode the video. So this is the next thing you can do, which we also did: we implemented Kafka, and we understood Kafka in detail, how it works and so on, so you can read about it too. Just do one basic check to get started: push (publish) one message, and that message should be consumed by another service, the transcoder. Till now we had only one service on the backend, the upload service; now we will create one more service, basically a new Node project, which will be the transcoder project, and there we'll just check that yes, it is consuming the message. The entire transcoding code comes a lot later; for now you just need to see that Kafka is working, that you're publishing to Kafka and able to consume from Kafka. Coming to Kafka, I have added a bit of theory, because we had a theoretical class where I explained what a producer, consumer, and broker are; there's also a Kafka crash-course video I'm creating, I will add that link in the description too, so you can understand Kafka from there if you're a complete beginner. As you can see, there will be two services: the upload service we had already written, which will act as the producer, and one more service that we'll write, which will be the consumer, the transcoder service. There are a lot of free online solutions for Kafka; one is CloudKarafka, which I used in demos for the HLD batch, and for HHLD I've used Aiven for all the demos. You can create services there; this is in no way a promotion, I just found it good, so I'm using it. As you can see, I have set up one Kafka instance, you can see all the configurations, you can set it up in your project, and here you can add topics: there's one topic called transcode, so we'll be producing to transcode and listening from there. In both our services, the upload service and the transcoder service, we have added one folder called kafka, and this code is common, because we're just configuring Kafka: we create one class, add the brokers, set up SSL, the password, the admin, all of that, and then we have written the code for produce and consume. Although the code is common, in the upload service produce is used and in the transcoder service only consume is used, but I've added it in both just to show you. And where are these called? Let's see the transcoder service first: if you go to index.js, I have added the config and I'm consuming; what am I consuming? The transcode topic, and I'm just logging that I've got data from Kafka. That is it.
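The transcript doesn't name the exact Kafka client, so treat this as one reasonable shape: a shared config class with produce and consume written against kafkajs. The broker address, credentials, CA file, and group ID are placeholders; Aiven gives you the real values and the CA certificate when you create the service.

```javascript
// kafka/kafkaConfig.js - shared sketch with assumed names; both services can reuse it
const { Kafka } = require("kafkajs");
const fs = require("fs");

class KafkaConfig {
  constructor() {
    this.kafka = new Kafka({
      clientId: "youtube-clone",
      brokers: [process.env.KAFKA_BROKER], // e.g. "xxx.aivencloud.com:12345"
      ssl: { ca: [fs.readFileSync("ca.pem", "utf-8")] },
      sasl: {
        mechanism: "plain",
        username: process.env.KAFKA_USERNAME,
        password: process.env.KAFKA_PASSWORD,
      },
    });
    this.producer = this.kafka.producer();
    this.consumer = this.kafka.consumer({ groupId: "transcoder-group" });
  }

  async produce(topic, messages) {
    await this.producer.connect();
    await this.producer.send({ topic, messages }); // messages: [{ value: "..." }]
    await this.producer.disconnect();
  }

  async consume(topic, callback) {
    await this.consumer.connect();
    await this.consumer.subscribe({ topic, fromBeginning: true });
    await this.consumer.run({
      eachMessage: async ({ message }) => callback(message.value.toString()),
    });
  }
}

module.exports = KafkaConfig;
```

In the transcoder service's index.js you would then do something like `new KafkaConfig().consume("transcode", (msg) => console.log("got from Kafka:", msg))`, and in the upload service's publish controller, `produce("transcode", [{ value: "hello" }])`.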
That consume is called in the transcoder service. Where is produce being called? In our upload service, since we have to publish from the upload service; let's see the code for that too. In its index.js you'll see that I added one more route, publish, and this is the router I'm using, and this is the actual code, which lives in the controller: send message to Kafka, and here we are just producing the message to the transcode topic. I will run it and show everything at the end, but here I am pushing to Kafka from the upload service, and the transcoder service is consuming. I've also added all the steps in the doc: how to set up Kafka, the overview, the setup (you need a certificate and so on), how to create a topic on Aiven, and after that how to configure produce. The CA certificate file will be there in both services, this is the publisher code as we just saw, and this is the consumer code in the transcoder service. That's it. When we did the first flow from client to upload service to S3, we did it either for a PNG or for a small video. Now that you have understood auth and Kafka, the next thing you should be asking yourself is: what is the difference between a simple PNG or a small video, and what problems will happen when there's a huge video? If there's a huge video, it takes a lot of space, and sending it over the network in one go is not going to be possible. Suppose it's a 1-hour video, what are you going to do? Obviously we need to cut our video into chunks or parts; each chunk can be a few seconds, say 4 seconds, 5 seconds, 10 seconds, you can decide accordingly, but you know you need to divide your video into chunks. Now the question is: where should the chunking happen? A lot of people get confused here; a lot of people say we should be chunking on the upload service side, which is basically over here. Then my next question is: if chunking is going to happen there, how are you going to send the video from the front end to your backend in the first place? Because if you're doing the chunking on the backend, you still need to be able to send the whole video from the front end to the backend; how are you going to do that? So the correct answer is that you should be chunking on the front end itself, sending the chunks to the backend, which sends them on to S3. Now the final thing we want is that once we send these chunks from front end to backend to S3, on S3 all of these chunks should be put together, assembled into a single video. Even though we are sending it in chunks, the final thing we want is a single video. How we can play this huge video we'll discuss later; for now let's focus on how we can upload it, that is the current focus. So your next major agenda should be: how can you do chunking on the front end, and then how can you send it from your upload service to S3 such that it is assembled back into a single video? Here I have written the different ways to upload data to S3. You can see front end to backend to S3 without chunking: if you do it without chunking, it is going to be slow and not efficient.
If you go from the front end directly to S3 without chunking, processing like transcoding is not possible. A lot of questions come up here, like "what if we remove the backend, why do we need the backend then?", and this is the answer: how are you going to do all the processing? So front end to backend to S3, with chunking, is what we'll be doing: faster processing is possible, and retry, resume, abort, all of that is possible. Then there's one more option, uploading using pre-signed URLs; we are not going to discuss that right now, because you need to understand pre-signed URLs in S3 first, and if you understand those, you will have understood this too. So, next agenda after Kafka: the first thing we did was send the video in chunks from client to server to S3. Right now, those chunks themselves will be uploaded to S3 as-is; the reassembling is not going to happen yet. Whatever chunking you do, those parts will be sent to your upload service, and the upload service will send the chunks as-is to S3. So the first thing we need to do is the chunking on the client side, and this is how you can do it. First you need to decide the chunk size; here everything is in bytes, so this converts to MB, and you can say how big you want each chunk to be, suppose 100 MB. From that you can find the total number of chunks, and you can log it to be sure how many chunks there are. After that we do the chunking. Chunking is actually very interesting; this is where you need to understand your algorithms a bit, so that you're used to writing this kind of code: your chunk index starts from zero and goes to less than totalChunks, and you slice your file from start to start plus chunk size. Your start keeps moving: suppose it goes from 0 to 100, next time it is 100 to 200, after that 200 to 300, and so on. And where earlier we were sending the entire file in our form data, now we'll send the chunks: here, in the upload, I am sending the file name, the chunk, the total number of chunks, and the index of this particular chunk. So what does this for loop do? It decides the number of requests: inside the for loop I keep uploading the chunks. Earlier I was sending one file; now this same request gets called as many times as there are chunks. I can also show you in the code: this is my client, and under upload there is one page.jsx.
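In code, that client-side loop is roughly this. The chunk size, endpoint, and field names are assumptions for illustration; the chunks go out one by one here, and we will parallelize it later.

```javascript
// Inside the upload handler on the client - sketch with assumed names and sizes
const CHUNK_SIZE = 100 * 1024 * 1024; // 100 MB per chunk; everything here is in bytes
const totalChunks = Math.ceil(selectedFile.size / CHUNK_SIZE);
console.log("total chunks:", totalChunks);

for (let chunkIndex = 0; chunkIndex < totalChunks; chunkIndex++) {
  const start = chunkIndex * CHUNK_SIZE;
  const chunk = selectedFile.slice(start, start + CHUNK_SIZE); // Blob.slice

  const formData = new FormData();
  formData.append("filename", selectedFile.name);
  formData.append("chunk", chunk);
  formData.append("totalChunks", totalChunks);
  formData.append("chunkIndex", chunkIndex);

  // one request per chunk: the loop runs totalChunks times
  await fetch("http://localhost:8080/upload", { method: "POST", body: formData });
}
```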
Obviously that code is the final version, but it is the same idea: if you want to see the for loop, you can understand it from here; we are slicing our file from start to start plus chunk size and then sending it in form data, and this call happens as many times as the for loop runs, the number of iterations. Now that we have sliced our file into chunks on the client side, our backend service, the upload service, should know that it is no longer getting one single file; it is going to keep getting chunks. That is the next thing we'll do. This is just the div, so this is where the front end ends; now on the backend side, in our route, instead of the upload.single('file') we had before, there are going to be fields: there's chunk, there's totalChunks, and there's chunkIndex, and our upload-file-to-S3 code in the controller uploads to S3. Same thing, but what has happened right now is that the chunks themselves got uploaded to S3, because what we used was s3.upload. If you remember, in our upload flow we were using the AWS SDK and doing s3.upload, so it uploads one object at a time, and the problem is that all of these chunks get saved separately on S3. Now the AWS SDK gives us a very cool feature: instead of uploading these chunks one by one to completely different files, instead of s3.upload we can use something called multipart upload, which means we send the data in parts and S3 reassembles it back together. Whatever we did till now is what I had done in week one of the project; it was a two-week project in HHLD, so the first week is what we've covered so far. For the second week, the first agenda item was multipart upload from the backend to S3. What we have currently is: from front end to backend everything is getting chunked, and then we upload the chunks to S3. For this first part you can forget about the client: just take any file on the backend and slice it on the backend itself, just to be able to understand it, but what you want is that on the S3 side it is put back together and you can play the file as one. For that we are doing multipart upload. If you go to the docs (I've added the link, and the docs are just amazing; I'll quickly summarize, but do go through them yourself since you're making the project), the multipart upload process is divided into three steps: upload initiation, parts upload, and completion. Initiation is when you tell S3 you're going to initiate a multipart upload; S3 creates an upload ID for you and gives it back. After this, whenever you upload the parts or complete the upload, you're supposed to send this upload ID, so that S3 knows that all the parts, all the chunks it is getting, and the completion request correspond to the upload you initiated. So the upload ID is generated in the initiation step. After that there is the parts upload: here you send all the parts, and in the response of each you get an entity tag, an ETag.
And in the multipart upload completion, you are supposed to send all of this information back: how many parts there were, and for each, whatever entity tag was returned to you. See, here it is written that when you complete your multipart upload request, you must include the upload ID and the list of both part numbers and corresponding ETag values; so you're supposed to send two things per part, the part number and the ETag value. Let me also show it to you as a diagram so it is clearer. In simple terms, for multipart upload S3 expects three requests: one is creation, the second is upload parts, where you upload all your parts, and the third is complete multipart upload. What S3 then does is put all of these parts together, reassembling them into a single video. So let's see the diagram so it is even clearer. What we were doing till now was s3.upload; now we are going to try S3's multipart upload. This provision is given by S3 itself and it is there in the AWS SDK package we are using. S3 expects us to send three requests. The first is the initiation request, where we initiate the upload: in the request we send the file name, the key, and so on, and in the response we get one upload ID, and in both of the next requests we have to send this upload ID. The next request is upload part: here we'll be sending a lot of parts, suppose 100 parts, or 200, or however many, a thousand parts; along with your part details, like the part number, you give your upload ID, saying this is my part number for this upload. And in the response, corresponding to every part number, you get something called an ETag, the entity tag. In the end, when you send the complete-upload request, you send an array, and in the array you send the part number and ETag for every single part (suppose there were a thousand parts, you send them all), and you obviously send the upload ID itself; that is what goes in your request. Finally, in the response of this complete upload, you get one final ETag, which is the tag of the entire upload. So earlier you were getting an ETag for every part; at the end you also get one final ETag. Just for further clarity: every part's ETag is different, because there is one per part, and this final ETag is different again. We are going to implement the code for this, so we need to write three APIs on our upload service; we need to make the three API calls from the upload service to S3. Now, to make things simple for multipart upload: earlier we were doing chunking on the front end, sending it to the backend, and sending the chunks themselves to S3; that was step one. In step two, for now, we'll forget that, and we will do the chunking on the backend. We are going to hardcode our file, just to be able to test this multipart upload: we hardcode the file, chunk it on the backend, and send it to S3.
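Since the three calls are easy to mix up, here is a compressed sketch of that fixed-file test with the v2 AWS SDK. The file path, bucket, key, and part size are placeholders, and the real controller splits this logic across requests rather than keeping it in one function.

```javascript
// Sketch: multipart upload of a hardcoded local file, upload service to S3
const fs = require("fs");
const AWS = require("aws-sdk");

const s3 = new AWS.S3(); // assumes credentials and region come from the environment
const Bucket = "your-bucket-name";
const Key = "day15-recording.mp4";

async function multipartUploadFixedFile(filePath) {
  const fileBuffer = fs.readFileSync(filePath);
  const CHUNK_SIZE = 10 * 1024 * 1024; // parts must be at least 5 MB, except the last one

  // 1. Initiation: S3 hands back an UploadId we must echo in every later call
  const { UploadId } = await s3.createMultipartUpload({ Bucket, Key }).promise();

  // 2. Parts upload: each part returns an ETag (entity tag) we must remember
  const uploadedParts = [];
  const totalParts = Math.ceil(fileBuffer.length / CHUNK_SIZE);
  for (let i = 0; i < totalParts; i++) {
    const Body = fileBuffer.slice(i * CHUNK_SIZE, (i + 1) * CHUNK_SIZE);
    const { ETag } = await s3
      .uploadPart({ Bucket, Key, UploadId, PartNumber: i + 1, Body })
      .promise();
    uploadedParts.push({ PartNumber: i + 1, ETag });
  }

  // 3. Completion: send the UploadId plus every {PartNumber, ETag}; S3 reassembles the file
  const result = await s3
    .completeMultipartUpload({
      Bucket,
      Key,
      UploadId,
      MultipartUpload: { Parts: uploadedParts },
    })
    .promise();

  console.log("reassembled object at", result.Location);
}
```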
What we expect is that S3 will have reassembled this video and created a single video; we want that whatever chunks we send from the backend upload service to S3 get reassembled, and that we can then play the result on the client too. After this, we will do the chunking on the front end, upload those chunks to S3 using multipart upload as well, and S3 will again reassemble them into a single video. We want to go step by step: for the first step you can just hardcode the file and send it, and after that you can do the chunking on the front end, which is the correct way, and send it. To do the first step, you can test using Postman. So step one is multipart upload from backend to S3 using a fixed file. What do I mean by fixed file? We are fixing the file on the backend: we have given the file path, and if the file doesn't exist we do nothing. We test this using Postman. This part we have done earlier too, we just configure AWS, that's it, and now you can see that inside this try block there are basically three requests. This is the first request, where we are creating the multipart upload; it returns a promise, and from that promise we get the upload ID, something like multipartParams.UploadId: whatever we get from the response of the first one, the initiation, we take the upload ID from it. Just try it out yourself, the documentation is very clear. And again we do the same chunking: there's a for loop, and inside this for loop we make another call, uploadPart, and in the response data there is something called an ETag. We have created an array, initially empty, and we store all the ETags of the uploaded parts in this array: you can see I push the part number and the ETag. Why? Because during completion I need to send them, and here you can see I am passing these parts, this uploadedETags array, at the end, and in the response I get confirmation that it has been uploaded. So this is how you can test multipart upload from backend to S3, but in reality you should be doing the chunking on your client side, so we'll do that. By the way, for multipart upload you will change your router as well, and you can set your CORS permissions on the bucket and all of that. After that, what we need to do is send the chunks from front end to backend in sequence and then multipart upload them to S3; the front end is where the chunking should happen. I can show you the code too, because this is more or less the final code, the client-side code. In upload, in our handleUpload, there are three parts: from front end to backend also there will now be three requests instead of just one. When there's a button and we click on upload, in our handleUpload three requests go to the backend. The first request I have called /upload/initialize; if you look at the backend there will be three routes, let me show that to you. For initialize there's this one route, and this is where we'll do the createMultipartUpload; after that there's just one / route,
which uploads the chunk, and here we expect a chunk. Now for initialize, we expect the file name, the title, and so on; something comes in the body, but it is not a file, so Multer expects you to use upload.none(). Here there's going to be a chunk, and the third request is complete. On our client side we make three requests. The first request is /upload/initialize, and here what are we sending? Just the file name: whatever selected file is in our input field, we send its name. That is the first request, and from its response we get the upload ID; once we have this upload ID we use it in the next two requests. That was the first part. After this, the second part is where we do the chunking; the code is the same as we saw earlier, and that upload call is made many times: whatever the selected file is, we slice it, and in the form data we add everything. Here you can see that I am creating an array of upload promises and pushing all of them; in short we are just uploading the chunks here. After that, one more thing is left, the completion: in the upload complete we send the parts, the total chunks, the upload ID, and all of that.
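Putting the client side together, the three calls from handleUpload look roughly like this; the endpoints and field names are my guesses at the shape described above, not a copy of the repo.

```javascript
// Client-side sketch of the three-step upload flow
async function handleUpload(selectedFile) {
  const CHUNK_SIZE = 100 * 1024 * 1024;
  const totalChunks = Math.ceil(selectedFile.size / CHUNK_SIZE);

  // 1. Initialize: send just the file name (form data with no file), get back the uploadId
  const initData = new FormData();
  initData.append("filename", selectedFile.name);
  const initRes = await fetch("http://localhost:8080/upload/initialize", {
    method: "POST",
    body: initData,
  });
  const { uploadId } = await initRes.json();

  // 2. Upload the chunks, echoing the uploadId and the part index each time
  for (let chunkIndex = 0; chunkIndex < totalChunks; chunkIndex++) {
    const chunk = selectedFile.slice(chunkIndex * CHUNK_SIZE, (chunkIndex + 1) * CHUNK_SIZE);
    const formData = new FormData();
    formData.append("filename", selectedFile.name);
    formData.append("chunk", chunk);
    formData.append("chunkIndex", chunkIndex);
    formData.append("uploadId", uploadId);
    await fetch("http://localhost:8080/upload", { method: "POST", body: formData });
  }

  // 3. Complete: tell the backend everything is there so it can finish the multipart upload
  await fetch("http://localhost:8080/upload/complete", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ filename: selectedFile.name, totalChunks, uploadId }),
  });
}
```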
If you want to see the other side, the backend side, there are three requests there as well; let's see the controller too. I created another controller so that you can compare it with the first version: earlier we were just doing s3.upload, now we are doing multipart upload, and in multipart upload there are three functions. You can obviously move them to three different files, but for now I've kept all three in the same file. There is initializeUpload, where we configure everything and call createMultipartUpload; and by the way, each one now sends its own response, because there are three different requests from front end to backend. In the second one we upload the chunk, so here we send the upload ID, the part number, and so on, and in the third one we send the complete-upload request. So this is how you can do multipart upload. We are done with the first two steps: we did multipart upload from backend to S3 with a fixed file, and after that we sent the chunks from front end to backend in sequence and then did the multipart upload to S3. Now what we are going to do is send the chunks from front end to backend in parallel. What do I mean by that? Notice the difference: in the client code, in the for loop, if I keep adding await (await means it waits for this to happen), then while one chunk is getting uploaded (this is in the second request, by the way, when we are uploading the parts), from our front end to backend it waits for one part to finish and everything happens one by one. Versus: I can create an array of promises, and instead of adding await inside the loop I just collect the promises, and after the for loop is over I await all the promises together. So from front end to backend, instead of sending the chunks one by one, you can send them in parallel. The question that can be asked is: what if you are putting too much load on the backend? For that you can also do load balancing, because there is an upload ID, and that by itself is enough to know which upload a part belongs to, so even if there are way too many upload parts you can still handle it; but this will definitely make things more efficient. So notice: sending chunks from front end to backend in parallel, create an uploadPromises array, keep pushing into it, and await all of them together. Now that the first three points are over, let's do a quick revision: you did chunking on your client, then you sent your chunks to your backend, and then you did multipart upload to S3. Because multipart upload expects three APIs, you created three APIs from front end to backend as well as from backend to S3; so there were three calls from front end to backend and from backend to S3: initiation, upload parts, and then complete upload.
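The sequential-versus-parallel difference described above really is just where the await goes; uploadChunk here is a hypothetical helper that POSTs one chunk.

```javascript
// Sequential: each chunk waits for the previous one
for (let i = 0; i < totalChunks; i++) {
  await uploadChunk(i);
}

// Parallel: fire all the requests, then await them together
const uploadPromises = [];
for (let i = 0; i < totalChunks; i++) {
  uploadPromises.push(uploadChunk(i)); // no await inside the loop
}
await Promise.all(uploadPromises);
```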
Now that the upload flow is clear, it's time to start talking about the watch service. Here we create the watch service; we also talked a bit about the fact that the S3 bucket we have been using so far is public, and that you can make it private and access it using signed URLs, so you can play around with that and get the signed URL. That is just something I wanted to show; you can skip it if you already know about signed URLs. The main point is to create one watch service, because from the watch service what we are going to do now is attach a DB, add all the video details in that DB, and watch the videos from there. Let me explain using the diagram. What we have right now is: on our front end we do the chunking, we send it to our upload service, and then we send that to S3 using multipart upload. The first thing we did was create one more service, the watch service, and then one more database, PostgreSQL, and here we are going to be using the ORM Prisma. What are we going to do? When we upload, we are not just going to put the video on S3; after completing the upload, basically in the third request, we are going to add some metadata to PostgreSQL, for example our title, our description, our author name, and because we have finished the upload we will also have the S3 URL at that point, so we can add the S3 URL as well. So we first do the upload to S3, after the completion we get the URL, and after that we call PostgreSQL and add the metadata: title, description, and the S3 URL. Then we are going to add a route in the watch service to list all the videos; right now we use the same DB, so it will be something like get-all-videos, and our client, basically the YouTube home page, will show all the videos, so there will be a list of videos on our client. That is the next thing we are going to do, and let me take it one step further as well, and then I can walk you through the code: on that same screen you can add a button for upload. But who can upload? Only signed-in users. We have already done auth, so we'll integrate it: only those who have signed in can upload, and those who have not signed in cannot, so they will see either a sign-in button, or a sign-out button plus an upload button, and you can also show "hello, name" and the image and all of that if the person is signed in. So this is the next agenda: after getting the list of the videos, add the front-end code, first to list all the videos and play them, and then make sure only authorized users can upload. Let's see the code for all of this, and I promise I'm going to show you the demo; it is going to be awesome. To create the video metadata DB we are again using PostgreSQL, and again the free solution from Aiven, so you can set up your database there; I had set one up called hhld class during the class and we were using that. Interestingly, YouTube actually uses Vitess, which is not free, otherwise we would have used that; you can read about Vitess, it is actually great for horizontal scaling even though it is a relational database, and I have listed out the differences between Vitess and PostgreSQL. And we'll be using Prisma as the ORM, which makes things much easier: instead of writing queries you can treat the database like your code, so to create a table you basically define a model, and to create an entry you basically create an object, something like that. The documentation of Prisma is awesome and very detailed; all the steps I added here are taken from the documentation itself, so you can go through that
in detail, and we create the metadata and so on. Coming to the code: within the upload service itself, I've created another folder, db, and within that there's db.js, just a simple name, and here we do addVideoDetailsToDB. Where does all of this come from? When you run prisma init (the commands are written in the doc and also in the Prisma documentation), one file gets created, schema.prisma, and it creates all of this for you; you just have to create a model. Here we have created the model: id is an integer and we are auto-incrementing it, and there's title, description, author, and so on, and the migrations are created for you. If you look in the .env, there's a DATABASE_URL that has been added, and I've set it up from my Aiven instance. So I have created a model over here, and over here what I'm doing is prisma.videoData.create, so you can see creation becomes very easy, it is just code that you write: you add data with title, description, author, URL, and so on. If you want to see your data in a UI, I have installed something called pgAdmin, again something I found for free. Go to Servers, here you can see hhld demo, and here you can see the table video data; we had created a database called hhld class, and if you go to video data you will be able to see the columns and everything. Let me actually just go to the query tool and write a query, select all from video data. Whatever data you're going to see, we created everything during the class itself, during the upload and watch demos, so yes, these are the random names that I added, and you can see the URL: when we upload, we get the URL and then store it here. I will show it running at the end, but you can see the database like this. So this is what we just saw: you add the model in the Prisma schema, and in db.js you add the video details to the DB.
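For reference, the model and the create call are roughly this small. The model and field names mirror what was just described (an auto-incremented id plus title, description, author, url), but treat them as an approximation of the actual schema rather than a copy of it.

```prisma
// prisma/schema.prisma: sketch of the model (datasource and generator blocks omitted)
model VideoData {
  id          Int    @id @default(autoincrement())
  title       String
  description String
  author      String
  url         String
}
```

```javascript
// db/db.js: sketch of the insert helper used from the upload controller
const { PrismaClient } = require("@prisma/client");
const prisma = new PrismaClient();

// Prisma turns this object into the INSERT; no SQL written by hand
const addVideoDetailsToDB = async (title, description, author, url) =>
  prisma.videoData.create({ data: { title, description, author, url } });

module.exports = addVideoDetailsToDB;
```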
You can create the entry, and you can test using Postman. How? If you come over here into the upload controllers, we had seen three functions: initializeUpload, uploadChunk, and completeUpload. Just to test that uploading to the DB works fine, I've created another route, and here I am testing upload-to-DB; initially you can test via Postman and in the request send the title, description, author, and the URL as well. Let me show you from Postman how your request can look. Whenever I create APIs, my usual practice is to first test from Postman, make sure everything works fine on the backend, and only then add the front end. So that is what I've done: I've tested here with title, description, author, URL, which is why I'm getting all of this in the request, and then I'm adding the video details to the DB. You can do the same: test that everything works on the backend and that you're able to upload, and after this come back and write the front-end code. I've also included the Postman body and the router you can add. Now what we are going to do is send the video details from the front end during the completion of the upload and add them to the DB. On the front end, if you remember, there were three calls happening; if you go back to the client, source, upload, page.jsx, the three things were: first the initiation, second the uploading of the parts, and third the completion. Now in the completion, these are the things I have added; they were not needed before: we are now adding title, description, and author. And in my form I have added more input fields, not just the file but also input fields for author, description, and title. This is how you keep improving your project step by step, you keep adding things. Here we have added title, and we have also put a check that title and author cannot be empty, they are required; and then in the completion you add title, description, and author. Earlier there was just the file name, total chunks, and upload ID; later we added all of this, and you can just add the input fields. I think we are ready to see the upload flow, at least from the front end to the backend, so let me finally show you how it works. To run it I have created two terminals: one for the client, which I run using npm run dev, running at 3000; and the upload service, which I run using nodemon, running at 8080. Now if we go into completeUpload: finally, at the end, we are also going to be publishing to Kafka, but right now I'm commenting that out because we have not discussed it so far; we will be adding the video details to the DB, and we'll upload. So we should be able to see it in the DB: if you want to see what is in the DB, if I run this query again you can see there are seven entries, and if you want to see S3, this is the hhld classes bucket, and if I refresh it there's nothing, I've deleted everything. So we are going to try uploading.
But before we go ahead and see the working code, there's one thing I want to talk about. Since we are doing multiple things here, adding the video details to the DB, pushing the details to Kafka, and also completing the upload, there are three things happening: completion, adding video details to the DB, and publishing to Kafka (which we will do). Now you should be asking questions like: what if one of them fails, how are you going to handle it? We have not gone into that level of detail in this project, but that is something you should be talking about. For now you can just catch the error, but ideally, if the write to the video DB failed, you should be thinking about what happens to the video that is already on S3: are you going to show a popup to the client and ask them to add the video details again, or what is going to happen, how are you going to handle all of this? These questions should come up, because we're talking about HLD, but right now we are keeping things simple. Now I'm going to go to localhost:3000, and this is how the UI looks: title, description, author, I've kept things very simple for now. If you want to see the network tab, let's do that; currently there's no call and nothing on the console. Suppose I add the title "YouTube hhld", the description is, say, "Keerti Purswani YouTube channel", the author is Keerti itself, and we choose a file for upload, let's say the day 15 recording of my HHLD classes, and I try uploading it. Now the network tab is going to be interesting. Did you see this happened in parallel? The first request that went out was initialize, and then you can see all of these happened in parallel, some have finished already and some are taking time; these upload calls are basically the upload-parts requests, and why are they happening in parallel? Because we did not await the promises one by one; we created one array and awaited them all together, and that is why the upload from front end to backend happens in parallel. There are so many chunks in this file; it's a pretty big file, a two-and-a-half or 3-hour class (after two hours we do chit-chat and all of that, so it might be 3 hours of Zoom recording). If you look at the initialize request, we are just sending the file name, as form data if you remember, and in the headers you can see everything. You can see upload parts is still happening, some chunks are uploaded, some are still in flight, and you don't see any call after this, because after all the parts are uploaded there's going to be one more request: the complete request. If you want to see what is happening on the backend, you can look at the backend terminal as well: you can see the data we're getting back, there's an ETag, and you can see the chunk uploads happening. We started from here: initializing the upload was the first thing, it got the file name from the request, and in the response we got the upload ID, so the subsequent requests would have sent this upload ID. You can see a lot of requests coming in parallel, because we made it parallel on the front end, and you can see the ETags are different for each chunk. It is still going; it takes a lot of time since it's a big video.
And if you want to see the chunk number, you'll be able to see it in the request here. A few minutes later: you can see we are at the final few upload chunks, I guess three are left; as soon as this finishes there should be one more request that goes out, let's wait a little longer. All right, we waited a couple of minutes; let's look at the console as well. You can see how many upload requests were called, and over here you can see the "uploaded successfully" message, because the final request went out, the complete request, and over here "completing upload" is also logged. In here you can see the final ETag and the final URL, and this URL should have been written to the database. If you remember there were seven entries, and because we had added auto-increment in our schema, the ID that got generated is 8. If we go to our database and I run the query again, you can see it: our data got added to the database, and we were able to upload in chunks. Now let's check our S3 as well: if I come back over here and refresh, there's a day 15 recording, and I can actually play it, we can open it and hear it (okay, I don't want to hear myself, I'm going to mute and play). This is the recording of our HHLD class, the day 15 class. So we did one upload request from our front end, we were able to add to the DB, and we were able to do the chunking. Wasn't this amazing? What we just did was add the video from the front end, from our client: we didn't just chunk it on the front end, we sent it to the backend, the backend did the multipart upload to S3, and we also added the details to our PostgreSQL. If you think what you just saw was very cool, trust me and wait, because things are going to get a lot cooler. Let me just show you the code. You have already seen this; I just added a "completing upload" log, and in the request body now the title, description, and author are coming, and after completion of the multipart upload we call addVideoDetailsToDB. You already saw the code for this, and you saw that I got the URL. If you want to see what I'm talking about: in my completeUpload over here I am getting the URL; let me just turn on word wrap and you will be able to see uploadResult.Location, because the URL comes from the completion of the upload, and that URL is what I send to addVideoDetailsToDB.
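So the tail of the complete-upload controller, with the DB call wired in, ends up looking something like the sketch below. One detail I'm guessing at: here I ask S3 itself for the uploaded parts with listParts, whereas the class code may collect the ETags differently; and the "what happens if the DB write fails" question from earlier is exactly the part left unhandled.

```javascript
// Sketch: completeUpload controller with assumed names
const AWS = require("aws-sdk");
const addVideoDetailsToDB = require("../db/db");

const s3 = new AWS.S3();
const Bucket = "your-bucket-name"; // placeholder

const completeUpload = async (req, res) => {
  try {
    const { filename, uploadId, title, description, author } = req.body;

    // One way to gather the ETags: ask S3 which parts it has for this uploadId
    const { Parts } = await s3.listParts({ Bucket, Key: filename, UploadId: uploadId }).promise();

    const uploadResult = await s3
      .completeMultipartUpload({
        Bucket,
        Key: filename,
        UploadId: uploadId,
        MultipartUpload: { Parts: Parts.map(({ PartNumber, ETag }) => ({ PartNumber, ETag })) },
      })
      .promise();

    // uploadResult.Location is the final URL of the reassembled object; store it with the metadata
    await addVideoDetailsToDB(title, description, author, uploadResult.Location);
    // (publishing to the "transcode" Kafka topic would also happen here)

    res.status(200).json({ message: "upload complete", url: uploadResult.Location });
  } catch (err) {
    console.error(err);
    res.status(500).json({ error: "completion failed" });
  }
};

module.exports = completeUpload;
```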
That Location is the URL we get back when the multipart upload completes, and that is the URL I pass to addVideoDetailsToDB. So that is it — now that we have understood the upload part nicely, let's come to our watch service and work on that. Again, the watch service needs to be connected to our database, because we want a page that acts as our YouTube home, lists all our videos, and lets us play them. So in my watch service I'm creating an API to get all the videos. What I'm doing here is writing a simple Prisma query — there are many ways to write this; we could have gone through the Prisma model, but I wanted to show that you can also use a raw query ($queryRaw) here, so you can try that out. That gives us all the videos, you can test it using Postman, and after that you can add the front-end code. On the front end we create another page, YouTube home, and we again make an async request to get all the videos, and we do it inside useEffect — meaning that as soon as the page loads, we fetch all the YouTube videos. Let me show you the client code. In the page, earlier I had added the upload form; now I'm adding YouTube home as a new page. If you look at it: YouTube home has setVideos and setLoading — initially it is loading, and while it loads it just shows "Loading" in the middle of the screen, and once the data arrives it renders the videos. I had stopped the upload service, so let me run it again, and let me run the watch service as well — we go inside the watch service and start it with nodemon. It is running at 8082, so let's make sure the front-end request is also going to 8082 and everything lines up. Obviously I should be putting this URL into environment variables, but this is a quick demo — I did all of that at the end of the WhatsApp project, so in this project I wanted to hurry and focus on other things, which is why it's written like this. Okay, and if we refresh: you see "Loading", and then all the videos come in, including the ones we just added — can you see "YouTube HHLD" and "summer camp"? And you can actually play all of them, which is pretty cool. You're seeing the final project here — I know the UI is not great, but it's enough, and all of this was written during the classes, so whatever was the best we could do in that time, we have done. So this is how we added the route in the watch service to list all the videos — that was the back-end part, a simple "get all the videos" — and this was the front-end part I just showed you: the videos and the loading state.
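A minimal sketch of that watch-service route, assuming Express and a Prisma-managed Postgres table named videoData (placeholder names — use whatever your schema defines):

```javascript
// Sketch of the watch service's "get all videos" route.
import express from "express";
import { PrismaClient } from "@prisma/client";

const app = express();
const prisma = new PrismaClient();

// GET /home — return all videos as JSON
app.get("/home", async (req, res) => {
  // findMany() would also work; $queryRaw is shown just to demonstrate raw SQL.
  const videos = await prisma.$queryRaw`SELECT * FROM "videoData"`;
  res.json({ videos });
});

app.listen(8082, () => console.log("Watch service running on 8082"));
```

On the front end, the YouTube home page would call this endpoint from inside useEffect, store the result with setVideos, and flip setLoading to false once the response arrives.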
After this comes the sign-in / sign-out part. At the top of the page you can see my avatar, because the app is signed in — so how do we add that? If you remember, I had told you about NextAuth — useSession, signIn, signOut — and that you would be able to access the session data anywhere. Why am I able to do that? In the client, if you look at layout.js, I have wrapped all the children within SessionProvider, which is why the session is available to every child component. So in my YouTube home, at the top, I have added one very simple navbar — it is a new component altogether — and inside it I get the session data using useSession. Once I am signed in I should be able to go to upload, so I added sign-in and sign-out buttons, and from the session data I read the user's name and image. Right now I don't have a proper profile image, so it fell back to a default; if I had one, it would have shown that. Now let's try it out: if I sign out and sign in again, you can see I sign in with Google, and now I get the upload button; if I go to upload I land on the upload page, and the rest of the flow you have already seen. So this is a somewhat more complete flow. There are a couple more things you can add to your project — for example, if I am signed out and I navigate to /upload directly, I should not be allowed in, but right now I can. These small checks you can keep adding; because I wrote all of this during the class, I have not done that level of validation, and you could also add lazy loading, pagination, and things like that. For now it's a simple sign in / sign out, being able to upload, and getting the entire flow. If I go to upload and upload from here, you already know the rest of the flow.
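For reference, here is roughly what that NextAuth wiring could look like — SessionProvider around the children in layout.js and a small navbar using useSession. The component and file names here are assumptions, not the exact course code:

```jsx
// app/layout.js — wrap everything in SessionProvider so useSession works anywhere.
// (A separate "use client" Providers component is the more common pattern, but this mirrors
// the layout.js approach described above.)
"use client";
import { SessionProvider } from "next-auth/react";

export default function RootLayout({ children }) {
  return (
    <html lang="en">
      <body>
        <SessionProvider>{children}</SessionProvider>
      </body>
    </html>
  );
}
```

```jsx
// components/Navbar.js — sign-in / sign-out buttons plus the user's name and image.
"use client";
import { useSession, signIn, signOut } from "next-auth/react";
import Link from "next/link";

export default function Navbar() {
  const { data: session } = useSession();

  return (
    <nav>
      {session ? (
        <>
          <span>{session.user?.name}</span>
          <img src={session.user?.image} alt="avatar" width={32} height={32} />
          <Link href="/upload">Upload</Link>
          <button onClick={() => signOut()}>Sign out</button>
        </>
      ) : (
        <button onClick={() => signIn("google")}>Sign in with Google</button>
      )}
    </nav>
  );
}
```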
It's time to finally move to one of the most interesting things, which was actually the last class of the YouTube project. All of that is done, and now we come to one of the most interesting parts of the entire project: adaptive bitrate streaming. You saw how we went step by step — from seeing one end-to-end flow, to understanding Kafka, understanding Postgres, adding auth, and all of that — and now it's time to level up one step further and talk about adaptive bitrate streaming. For the HHLD students, we first discussed a bit of theory — resolution, format, bitrate — and talked a bit about TCP, UDP, WebRTC, RTMP, what HLS is, what DASH is, and then we finally got to the code. Let me quickly explain what adaptive bitrate streaming is. You must have seen on YouTube — and I'll show it to you as well — that when you play a video with the quality set to Auto, the resolution sometimes keeps changing: you might be watching at, say, 720p, and if the network connection gets better it might switch to 1080p, or if the connection gets worse it might drop to 480p or 320p. What is actually happening is that each video is divided into chunks on the back end, and every chunk is saved in several different resolutions — so a given chunk exists at 320p, 480p, 720p, 1080p, and similarly for all the chunks. If at some point we detect that the network connection has become worse, the next chunk is served at a lower resolution, and when we realize the connection is better — the bandwidth has improved — the resolution of the following chunks goes back up. This is called adaptive bitrate streaming. I have covered this in two other videos as well: one where I explain adaptive bitrate streaming to my father, and one where I discussed the HLD of YouTube with Harkirat, where we also went over this. I will link both videos in the description. But I hope the idea is clear: as the network connection changes, the chunks that come from the server change accordingly — their resolution adapts. So obviously, during upload, we need to transcode each and every chunk into all the resolutions. There are two parts to this: one during upload, and one during streaming or watching. During upload, for each chunk we need to convert it into all the target resolutions and upload all of those to S3. And which service is going to do this? Yes — the transcoder service. If you remember, we had added Kafka: the upload service was producing a message to Kafka, and the transcoder service was consuming it. So on completion of the upload, the upload service should push all the details to Kafka, and the transcoder service picks them up, processes the video, and uploads the results to S3. That is the first part. The second part is during watch: everything is now available for adaptive bitrate streaming, but how do we actually do it? For that there are two options, HLS and DASH, and this part happens on the client side. Let me show you the difference between HLS and DASH — I added a bit of theory in the doc, so let me quickly go through it. Both are adaptive streaming protocols used for delivering multimedia; the main difference is that HLS was developed by Apple, so it is easier to use with iOS devices, while DASH is an open standard developed by companies like Microsoft, Netflix and Google. In this project we'll be using HLS, which is HTTP Live Streaming, but you could also use DASH, Dynamic Adaptive Streaming over HTTP — the concept is essentially the same; only the file extensions and some details differ. One very important file you need to know about when talking about adaptive bitrate streaming is the manifest file.
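To make the manifest idea concrete before the demo, this is roughly what the generated HLS files look like — a master playlist pointing at one variant playlist per resolution, and a variant playlist listing the 10-second .ts segments. The bandwidth numbers and file names below are illustrative, not the actual generated values:

```
# master.m3u8 — master playlist: one entry per resolution/bitrate variant (illustrative)
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=676800,RESOLUTION=320x180
test_mp4_320x180.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1353600,RESOLUTION=854x480
test_mp4_854x480.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=3230400,RESOLUTION=1280x720
test_mp4_1280x720.m3u8

# test_mp4_1280x720.m3u8 — variant playlist: which 10-second .ts segment holds which
# part of the video (illustrative)
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXTINF:10.000000,
test_mp4_1280x720_000.ts
#EXTINF:10.000000,
test_mp4_1280x720_001.ts
#EXTINF:10.000000,
test_mp4_1280x720_002.ts
#EXT-X-ENDLIST
```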
Before explaining all of that in theory, let me actually show you how things work — let's run it. For now, forget everything else and just look at our transcoder service, because so far we had only written the part that consumes the message from Kafka; it's time to see the actual transcoding code. So, coming to the transcoder service: in index, just for testing, I'll comment out the final version and for now we'll use convertToHls. What is happening inside it? If I go into convertToHls in the hls folder, this is where the actual transcoding happens. For the demo we'll test it using Postman, and the video we'll transcode is a test video I added, test.mp4 — we'll transcode that and watch the output actually being generated. There is a for loop here that generates the chunks in each of these resolutions: there's an array, and the code is extensible — if you want more resolutions you add entries to the array, if you want fewer you remove them — and for each entry we specify the video bitrate and audio bitrate (I'll walk through this). For the output it needs an output folder, so I'm creating a folder called output inside the service, and that is where the files should land; right now it's empty. So far we were only running two services and one client — let's open another terminal for the transcoder service and run it. It's running at 8083, but it's also listening to Kafka, so to keep things simple let me comment that out for now so the Kafka messages stop coming; it reloaded and is running at 8083. Now let's go to Postman and hit it. We got the response immediately, but look at what is happening in the back end — can you see the files being generated? This is awesome, right? These files are being generated on their own; right now it's generating the 1280x720 ones. What were the three resolutions we had given? Let's check the transcoder code: 320x180 was the first resolution, with a playlist file for it; then 854x480, with another playlist file; then all the chunks for 1280x720 with its playlist file; and finally there's the overall master file. So what exactly is all this? A bit of theory: the .m3u8 file is the playlist file containing the URLs, and the .ts transport stream files contain the actual media segments. Whatever we segmented and transcoded is saved in those .ts files — small chunks — but during streaming we need one place to read from that tells us which chunk, in which format, lives where. That is what the playlist file tells us: the first 10 seconds are in this file, the next 10 seconds in that file, and so on — you can see the segments being generated as 000, 001, 002, 003 and so forth. And there is one playlist file per resolution: for this resolution too, the first 10 seconds are in this chunk and the next 10 seconds in that chunk.
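Stepping back to the loop that produced all of this: the resolutions array driving it could look something like the sketch below — the bitrate values are assumptions; the idea is simply that adding or removing an entry adds or removes an output resolution:

```javascript
// Sketch of the resolutions array driving the transcode loop (bitrate values assumed).
const resolutions = [
  { resolution: "320x180",  videoBitrate: "500k",  audioBitrate: "64k"  },
  { resolution: "854x480",  videoBitrate: "1000k", audioBitrate: "128k" },
  { resolution: "1280x720", videoBitrate: "2500k", audioBitrate: "192k" },
];
```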
Now you must be asking: Keerti, where did we specify 10 seconds? We set that in the code — the HLS segment time (hls_time) is 10 seconds, so if you change it, your chunk duration and the number of chunks will change. Let me show you once more: these are all the 1280x720 chunks, and this is the manifest file for that resolution — the first 10 seconds are in this file, the next 10 seconds in that file. That is per resolution. Then you also want one master playlist. What is the master playlist? The per-resolution playlists are the smaller variant playlists we just created, and the master playlist simply says: for this resolution, the playlist file is this one; for that resolution, the playlist file is that one; and so on. So when adaptive bitrate streaming happens with HLS, the player first reads the master playlist, sees that the network connection is good, decides it can use 1280x720, follows that variant's manifest file, and starts fetching its chunks. As soon as the network connection gets worse, it says "I need the lower-resolution ones", looks up which playlist file corresponds to that bandwidth and resolution, switches to that variant playlist, and fetches its chunks instead. Like this it keeps doing adaptive bitrate streaming. For HLS, .m3u8 is the playlist extension and .ts is the segment extension; with DASH the extensions are .mpd and .m4s, but it's essentially the same idea. So what we have done so far is a single transcoding test: we sent a request from Postman and it was able to transcode. Now that you understand what is happening — the manifest files get generated and the transcoding happens — let's look at how we wrote the code. We are using two packages here. FFmpeg is the standard tool for transcoding — you can read up on it; all the links are in the doc — and what we've used is its binary: the fluent-ffmpeg package expects you to have FFmpeg binaries available, so you either download and set them up yourself or use ffmpeg-static to get them, which is what we've done. Now, about the file names you see getting generated — the .ts files and the .m3u8 files: we generate those names ourselves. Whatever the source file name is, I replace the dot with an underscore, so test.mp4 becomes test_mp4; then I append an underscore and the resolution, and then the .m3u8 extension — that gives the variant playlist name. For the segment files, on top of that I append a numeric pattern at the end, which is where the 000, 001, 002, 003 suffixes come from.
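As a small sketch, the name generation described above boils down to string manipulation like this (test.mp4 and the output folder are just examples):

```javascript
// Sketch of the output-name generation: "." in the source file name becomes "_",
// the resolution is appended, and the segment files get a numeric pattern that
// FFmpeg expands to 000, 001, 002, ...
const mp4FileName = "test.mp4";
const fileNameWithoutExt = mp4FileName.replace(".", "_"); // "test_mp4"

function namesFor(resolution, outputDir = "output") {
  const outputFileName = `${outputDir}/${fileNameWithoutExt}_${resolution}.m3u8`;
  const segmentFileName = `${outputDir}/${fileNameWithoutExt}_${resolution}_%03d.ts`;
  return { outputFileName, segmentFileName };
}

console.log(namesFor("1280x720"));
// { outputFileName: 'output/test_mp4_1280x720.m3u8',
//   segmentFileName: 'output/test_mp4_1280x720_%03d.ts' }
```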
So after generating the names, there is the for loop — one iteration for every resolution. If you remember, the array has three entries, and from each entry we get the resolution, the video bitrate and the audio bitrate. For each one I generate the output file name and the segment file name, and then I do the actual transcoding with FFmpeg. These are the output options we've given: h264 for video, AAC for audio, the video bitrate, the audio bitrate, the resolution, and the segment time. HLS options like these are also commonly used for live streaming, so I'm not going to go into every flag right now. Then there is the output file name where the playlist will be written, you have your error and end handlers, and the whole thing runs. As you go through this: we created one array, and as we keep generating the .ts and .m3u8 files we keep pushing into it — variantPlaylists.push with the resolution and the output file name — because after this we need to create the master playlist. In the master playlist we map over this variant-playlist array (if map and such are unfamiliar, refer to the JavaScript basics video linked in the description), take the resolution and the playlist file name for each entry, and that is where the final master playlist gets created. You can see how it looks, and this is exactly what I'm doing in the code — where did my code go — yes, this is exactly what I'm adding over here. So that is the final code. Coming back to the doc as well: this is how it happens — the variant playlists are generated, then the master playlist is created, and that's it. So you have understood how we did the transcoding on the back end, and we tested it using Postman.
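Putting those pieces together, a condensed sketch of the conversion loop and master-playlist generation might look like this — it follows the flow described above using fluent-ffmpeg and ffmpeg-static, but the exact options, names and bandwidth values are assumptions rather than the course code:

```javascript
// Condensed sketch of the HLS conversion loop and master playlist (options assumed).
import ffmpeg from "fluent-ffmpeg";
import ffmpegStatic from "ffmpeg-static";
import fs from "fs";

ffmpeg.setFfmpegPath(ffmpegStatic); // use the bundled binary instead of a system install

async function convertToHls(inputPath, resolutions, outputDir = "output") {
  const variantPlaylists = [];

  for (const { resolution, videoBitrate, audioBitrate } of resolutions) {
    const outputFileName = `${outputDir}/video_${resolution}.m3u8`;
    const segmentFileName = `${outputDir}/video_${resolution}_%03d.ts`;

    await new Promise((resolve, reject) => {
      ffmpeg(inputPath)
        .outputOptions([
          "-c:v h264",               // video codec
          `-b:v ${videoBitrate}`,    // video bitrate
          "-c:a aac",                // audio codec
          `-b:a ${audioBitrate}`,    // audio bitrate
          `-s ${resolution}`,        // output resolution, e.g. 1280x720
          "-f hls",                  // HLS output
          "-hls_time 10",            // 10-second segments
          "-hls_list_size 0",        // keep every segment in the playlist (VOD)
          `-hls_segment_filename ${segmentFileName}`,
        ])
        .output(outputFileName)
        .on("end", resolve)
        .on("error", reject)
        .run();
    });

    variantPlaylists.push({ resolution, outputFileName });
  }

  // Master playlist: one #EXT-X-STREAM-INF entry per variant playlist (bandwidths assumed).
  const master =
    "#EXTM3U\n" +
    variantPlaylists
      .map(({ resolution, outputFileName }) => {
        const bandwidth =
          resolution === "320x180" ? 676800 : resolution === "854x480" ? 1353600 : 3230400;
        return `#EXT-X-STREAM-INF:BANDWIDTH=${bandwidth},RESOLUTION=${resolution}\n${outputFileName.split("/").pop()}`;
      })
      .join("\n");

  fs.writeFileSync(`${outputDir}/master.m3u8`, master);
}
```

Each FFmpeg run is wrapped in a promise so the loop finishes one resolution before starting the next; you could also run them concurrently if the machine can handle it.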
Now we want to see whether we can play this on the client side, so how do we test it? So far all our .ts and .m3u8 files are sitting in this output folder, so for now — just as a first step — you can upload this entire folder to your AWS bucket. Let's do that: I'm going to upload a folder; in hhld-youtube, inside the transcoder service, there's the output folder, and we upload it as it is — yes, everything, the .ts files, the .m3u8 files, all of it. Now that all the files are in output, let's come back to the bucket: the output folder is there with everything inside, and what do I want to access? The master .m3u8. If I copy its URL I should be able to access it — the permissions are public right now, the bucket is publicly accessible. On the client side I have created a video player and given it one URL — I know everything is hardcoded right now, but this is just to test. So in my video player I can put one URL, and let's just give it this URL (I'm not sure it generates exactly the same URL, but okay). What the player does is: if HLS is supported, we attach the video element, set the source, and play the video. It's a very simple front-end component, and in our page I'm leaving everything else aside and just adding the video player; let's go to the client and run it. Here you can see the video is playing, and if we open inspect, go to the network tab and reload — can you see the .m3u8 and .ts files? Where did they come from? From S3. And the .ts files that came are all the 1280x720 ones, because that's what got picked. Because this is a very small video I can't really demonstrate it here, but in class we took a huge video and, in the network tab, switched from no throttling to slow 3G — then, instead of fetching 1280x720, it starts fetching a different resolution. Let me quickly try: with fast 3G or no throttling it loads very fast, and I didn't get time to show slow 3G properly — initially you can see it was 320x180 and then it switched to 1280x720. Since it's a very small video I couldn't show it well, but try it yourself with a big upload (I showed it properly in class; it just takes a while) — essentially, you get the point that adaptive bitrate streaming is happening. So not only did we transcode on the back end, we also saw adaptive bitrate streaming working on the client. How did that happen? Because we used HLS — we used hls.js here: if it is supported, we attach the media and load the source, and if it is not supported, you can just play the original file, which is also sitting on S3.
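A minimal sketch of that client-side player with hls.js — the React component shape and the manifest URL are placeholders:

```jsx
// Minimal hls.js video player sketch (component name and URL are placeholders).
"use client";
import { useEffect, useRef } from "react";
import Hls from "hls.js";

export default function VideoPlayer({ src = "https://your-bucket.s3.amazonaws.com/output/master.m3u8" }) {
  const videoRef = useRef(null);

  useEffect(() => {
    const video = videoRef.current;
    if (!video) return;

    if (Hls.isSupported()) {
      const hls = new Hls();
      hls.loadSource(src);     // point hls.js at the master playlist
      hls.attachMedia(video);  // let it feed segments into the <video> element
      return () => hls.destroy();
    }

    // Safari/iOS can play HLS natively; otherwise fall back to the original file.
    if (video.canPlayType("application/vnd.apple.mpegurl")) {
      video.src = src;
    }
  }, [src]);

  return <video ref={videoRef} controls autoPlay muted width={640} />;
}
```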
Now you will say: Keerti, right now we are just using the hardcoded S3 link. I know — this is how we've done it for now. What we are going to do is this: first, the upload service uploads the whole file to S3, and in the response of that — if you go back to the upload service, to the controllers and the multipart upload — once the upload is complete I push a message to Kafka, and from Kafka my transcoder service picks it up. And what am I pushing to Kafka? The Location — the URL — which is exactly what the transcoder needed; it will then transcode and upload the results. I've listed all the steps in the doc: HLS streaming on the client is hardcoded right now, and by the way I also added the DASH conversion code for reference, in case you want to go with DASH instead of HLS. So the next step is: pick up the video from S3, transcode it, and push it back. Let's look at that. By the way, if this is getting overwhelming, don't worry — if you go step by step you will get it, and if you need guidance you can also sign up for HHLD; we are here to guide you in any way possible. If you can code this out yourself, that's great; if you think guidance to build projects like this would be useful to you, you can check out the HHLD course — the link is in the description. If you have come this far you probably like my teaching style and are enjoying the project, so if you want more projects like this and want to be part of some amazing courses and an amazing community where we help each other out, do check out the HHLD course — the community is genuinely helpful; whenever someone runs into any kind of bug or issue, whether it's AWS-related or code-related, we help each other out. So, back to the transcoder service: let me walk you through what we're going to do. Earlier we were transcoding using convertToHls; now we'll use s3ToS3. What's the difference? In s3ToS3 we pick up the file from S3 using its URL, transcode it, and put the result back into S3. Let me quickly walk you through the code — this is the client, this is the transcoder service, and here is s3ToS3. The way we've written it, the upload service uploads the original file to S3, and once it has the URL, that URL is handed to our transcoder service. (You could also start chunking before the upload itself finishes, but this is how we've done it.) Here we configure S3, and then the service downloads the file from S3. Where does it get the file from? If you look at s3ToS3, there's an mp4 file name — right now I have hardcoded it, but you can pass in anything. It downloads the file, and how does it know where to download from? Because we've given it the bucket name and the key — bucket name plus key is enough to download. Once it downloads, it creates the file locally — I'll run it and show you — piping the read stream into a write stream, it converts everything, and after that the code is the same as before: the resolutions array, the for loop, creating the master playlist — all exactly the same. What we've added is that after everything is generated locally, we delete the local files and upload everything to AWS: we create an hls folder in the bucket and upload it there. Let me show you how it runs. The file name I've put in is trial2, because that was a very small file — just part of one of the classes — so let me upload that same file to the bucket to keep things easy. Ideally the flow would be that the upload service sends the message to Kafka, and the transcoder picks up the URL/location from Kafka and goes from there, but for now we've done it like this. You can see what is in the bucket — there's output, there's trial2 and so on; we don't actually need output any more, but I'll let it be. And in the transcoder service's index we have made the change so that s3ToS3 is the function that gets called. Okay — just watch what is going to happen here.
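The download step inside s3ToS3 could be sketched like this, assuming AWS SDK v3 — the bucket, key and local path are placeholders:

```javascript
// Sketch of the "download the original from S3" step (names are placeholders).
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { createWriteStream } from "fs";
import { pipeline } from "stream/promises";

const s3 = new S3Client({ region: process.env.AWS_REGION });

async function downloadFromS3(bucket, key, localPath) {
  const { Body } = await s3.send(new GetObjectCommand({ Bucket: bucket, Key: key }));
  // In Node, Body is a readable stream — pipe it straight into a local file.
  await pipeline(Body, createWriteStream(localPath));
  console.log(`Downloaded s3://${bucket}/${key} to ${localPath}`);
}

// e.g. downloadFromS3("hhld-youtube-bucket", "trial2.mp4", "local.mp4")
//      → then run the same HLS conversion on local.mp4
```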
Let's go back to Postman, send the request again, and come back over here. Can you see the files being generated locally? The files are generated, and after that you will see the deletion happen as well — the logs are coming because s3ToS3 is where we added them. You can see the master playlist got generated (it's a small file, so it happens quickly), and now you can see the files being deleted — deleted locally, along with the downloaded S3 mp4 file — and it is now uploading the segments to S3. So it generates everything locally, deletes it, and uploads to S3: it took the file from S3 and it puts the result back into S3, and it doesn't even need the entire URL — just the key and the bucket name. If I reload over here, the hls folder has been created and inside it you can find everything. So this is how you do S3 to S3.
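And the final step — pushing the generated playlists and segments back into an hls/ prefix and cleaning up the local files — could be sketched like this (again AWS SDK v3 assumed; names are placeholders):

```javascript
// Sketch of uploading the generated HLS output back to S3 and cleaning up locally.
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { readdir, readFile, rm } from "fs/promises";
import path from "path";

const s3 = new S3Client({ region: process.env.AWS_REGION });

async function uploadHlsFolder(localDir, bucket) {
  const files = await readdir(localDir);

  for (const file of files) {
    const body = await readFile(path.join(localDir, file));
    const contentType = file.endsWith(".m3u8")
      ? "application/vnd.apple.mpegurl"
      : "video/MP2T";

    await s3.send(
      new PutObjectCommand({
        Bucket: bucket,
        Key: `hls/${file}`, // everything lands under the hls/ prefix
        Body: body,
        ContentType: contentType,
      })
    );
  }

  // Local segments are only an intermediate artifact — delete them once uploaded.
  await rm(localDir, { recursive: true, force: true });
}
```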
Now that we have finally seen every piece working, let me quickly show you the final HLD diagram. We have seen all the parts working properly; what you have to do now is code all of this yourself, and I hope the diagram helps you understand exactly what happened and what you saw. I know this can be overwhelming for beginners, but my job is to make things easy for you. Coming to the HLD: we drew one client, written in Next.js. Then we created three services — the watch service, the upload service (the first one we built), and the transcoder service. Now, which requests go where? There is one home request that fetches the list of all the videos — the client sends the home request and gets back the full list. To the upload service the client sends the upload flow, which is basically three API calls; let me write all three for clarity so you don't get confused: first initiate upload, then upload (the parts), then complete upload. So those three APIs go to the upload service. Between the upload service and the transcoder service there is Kafka, and we also added the data stores: S3 and PostgreSQL — let's write PostgreSQL over here and S3 over here. When the complete call finishes, the upload service does three things: first, it adds the original file to S3; second, it adds the metadata to PostgreSQL — basically the title, description, author and the S3 URL (it already has the URL because it has already uploaded to S3); and third, it publishes to Kafka. The transcoder service consumes from Kafka — and what does it need from the message? Just the bucket and the key. With those, the transcoder service goes to S3, fetches the file locally, transcodes everything, and puts it back. And the watch service — where does it get all the details for /home? From PostgreSQL. So these are the things we have coded, and this is amazing: we actually coded everything that is in the HLD diagram. These are the main features; I know there is a lot more you could add — a recommendation engine, content filtering, and plenty more; the major things are comments, channels, a user table and so on — but I'd say we've covered the main features of YouTube, and it's a pretty cool project. What do you guys think? I hope you all had a good time. If you have watched all the way to here, that really means a lot, and I hope you can be part of the Educosys courses as well — the link is in the description. All the details are on the site: the various courses, the bundles, the curriculums, the testimonials, the FAQs — just check it out, and if you still have any questions you can reach out to us at the support email given on the site. We would love to be part of your learning journey. And please don't forget to subscribe — I hope you like all the hard work I'm putting in; subscribing is completely free for you and it motivates me so much. Thank you so much, and see you next time. Bye!