This video is sponsored by Altium; more on that later. In this video we are going to see how to run the YOLOv7 algorithm on Windows 10 or 11. You will need a laptop or a PC to run the code, and a proper internet connection to download some of the models online. You might require a GPU to train the model; if you do not have a GPU, you can use the CPU. You also need Python 3 and conda installed on your system.

You can open Chrome and search for Anaconda, and you'll see a link for downloading Anaconda. After clicking on this link you will have three versions to choose from: Anaconda, Miniconda, and Anaconda Enterprise; we're going with the free version, Anaconda. Once you're at the site you'll see a download button for Windows, along with icons for Mac and Linux; download the version that matches your operating system. It's a 600-megabyte file, so it may take some time. Since I have already downloaded the installer to my desktop, I'm just going to double-click it and click Next. After reading the license agreement, if the terms are okay for you, click on I Agree. I'm installing it for my usage only, therefore I've selected Just Me on this screen. Check the box that says "Add Anaconda3 to my PATH environment variable"; the "Register Anaconda3" checkbox can be left unchecked, since registering Anaconda3 as your default Python could interfere with an existing Python installation. Click Install. Once the installation is complete, we can click on Next, and Next again, to finish.

By the way, if you ever wanted to implement computer vision on a Jetson Nano but can't really find one, maybe due to the chip shortage, I've got great news for you: there's a site called Octopart where you can find Jetson Nanos and Raspberry Pis, among lots of other edge AI hardware. Octopart is like the Google of electronic parts, and it integrates seamlessly into Altium, which lets you design custom printed circuit boards. So if you are into designing custom hardware for your computer-vision-enabled cameras, Altium Designer is the most reliable software for your company or project. You can get access to a free 15-day trial of Altium Designer and access Octopart through the links down below.

Step 1: install all the prerequisites. The first prerequisite is to create a conda environment, which you do by typing "conda create --name" followed by your environment name; in this case I will set it to vanv. You can activate this environment by typing "conda activate vanv". The purpose of creating an environment is to ensure that the modules downloaded into your base environment don't get mixed up with YOLOv7's requirements; this way we'll be able to avoid most of the errors caused by version differences. Next, clone the GitHub repository: you can either go to the GitHub page of YOLOv7 and download it manually, or use "git clone" with the link provided to download it directly from your command prompt. Then install all the required packages by typing "pip install -r requirements.txt"; the requirements.txt file lists all the modules YOLOv7 needs to run. You should also download the pre-trained weights: by clicking on the link here you will be able to access the pre-trained weights of YOLOv7, which we'll be using to make predictions on the sample images. So, instead of manually pip installing each module, we use the requirements.txt file by typing "pip install -r requirements.txt". Most of the modules are already installed on my machine, which is why it takes less time for me; for you it may take longer, as multiple modules need to be downloaded.
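Putting Step 1 together, the whole setup looks roughly like this. This is a minimal sketch assuming the environment name vanv used above; the Python version pin is my own choice, not something specified in this video:

    conda create --name vanv python=3.9
    conda activate vanv
    git clone https://github.com/WongKinYiu/yolov7.git
    cd yolov7
    pip install -r requirements.txt

The repository URL points at the official WongKinYiu/yolov7 repository that this tutorial is based on.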
You can download the pre-trained weights by clicking the link below, or you can go to YOLOv7's GitHub repository and scroll down to the Testing section, where you will find a file called yolov7.pt; clicking on it will download the weights directly. You then just need to move them into the yolov7 folder. As you can see, I already have my weights downloaded to this location, and you can do the same.

Step 2: detection on images using the GPU. For the GPU installation you need to install CUDA on your system, and an Nvidia graphics card is required for CUDA to run. You can click on the link here to download your CUDA file and install it using the exe file. The CUDA website has opened here, and we need to choose our system settings so that the right CUDA version is installed. For example, we need to choose the operating system we are going to work on, Linux or Windows; I'm using Windows, so I'm going to click on Windows. The architecture is x86_64 because it's a 64-bit computer, the version we want is 11, and the installer type is exe (local). With the right options selected, the file is ready to be downloaded; if I click on Download, it will start downloading. It's a 2.5-gigabyte file, so it may take some time; I'm just going to cancel here because I already have my CUDA file ready to be installed. I'm just going to run it; it may take some time to load. Here I'll provide the path, and the installer will be extracted into your system's temporary files and start installing, so it may take some time. Now it's checking system compatibility: since we have an Nvidia GTX 1650 graphics card and a driver version greater than 407, we're good to go. We need to read the license agreement; I'm just going to click on Agree and Continue. It's always recommended to use the Express option. Check the box that says "I understand", and CUDA will now start installing. Once the installation is done, you can check these boxes if you want, and you've now got CUDA version 11.7 on your system.

After installing CUDA you need to install the PyTorch modules in the following manner so that the versions are compatible; you can copy and paste the command provided here to install them. If you downloaded a different version of CUDA, or a newer version is released in later years, you can click on the link here, select the version you're using, and customize the command according to your requirements.

Once the GPU installation is done, you can type "python detect.py", pass the path to the weights, set your confidence value and your image size, put your input image path in the source option, and set your device to 0. If you set your device to 0, or if you don't mention the --device option at all, the GPU is used by default when one is available. By running this command, the model will predict on the image given as input. This is our input, and it is sent through the YOLOv7 model; the command for the GPU is given below, and you can just copy and paste it into the command prompt and the model will take care of the rest. Don't worry about the warnings; they just mean my GPU is being accessed to perform the analysis. The model took 0.6 seconds to predict, and it saved the result to runs/detect/exp23, the horses image.
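For reference, here is a sketch of the two commands used in this step. The PyTorch line is the pip command pytorch.org published for CUDA 11.7 builds at the time, so check the site for the command matching your own CUDA version; the detect.py line follows the example from the YOLOv7 README with the device flag added:

    pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu117
    python detect.py --weights yolov7.pt --conf 0.25 --img-size 640 --source inference/images/horses.jpg --device 0

The horses.jpg sample ships with the repository under inference/images, so this command should work as-is from inside the yolov7 folder.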
Let's go and check if the model worked properly. When you go to the yolov7 folder you'll find a folder called runs, with detect inside it, which holds all the predictions we have made; the most recent one is exp23, and inside it we can see that the model was able to predict the horses, drawing a bounding box and the classification with its confidence value. This is our output: a bounding box around each of the horses, plus the class label telling us what kind of object it is, with its confidence value.

Next, detection on images using the CPU. There is not much difference between the GPU and CPU runs; all you need to do is change your device from 0 to cpu. When the device is set to 0 the GPU is used, and when it is set to cpu the CPU is used. With the CPU it might take somewhat longer than with the GPU, because GPUs are usually faster. By pasting this command into your command prompt you will get the prediction for the image. This is our input; the command remains the same, and the only thing to change is the device from 0 to cpu. Since the CPU is not as powerful as the GPU, the model is going to take more than the 0.6 seconds it needed before to predict the horses image, and the result will be stored in the exp24 folder. As you can see, the model has taken 3.7 seconds, and the output is stored in the exp24 folder. Let's go back and check the exp24 folder: you'll find the image with all the horses detected. This is our output; as you can see, we are getting the same bounding boxes with the same class predictions, along with their confidence values.

Step 3: detection on videos. YOLOv7 can run detection on both images and videos; all you need to do is replace the image path in the source option with a video path. We have not mentioned --device 0 here because the GPU is used by default. You can simply copy and paste this command with the path of your video, and the model will make the predictions. This is our input video, and we are going to use the GPU to predict on it, which will be much quicker. The command remains the same, but you change the source. Your source location inside the yolov7 folder can be anywhere; I just created a new folder called videos inside my inference folder and placed my sample video there before starting this tutorial, so this location can be anything, I just made sure it's easy to access. Let's have a look at our sample video: it's a street with traffic lights, cars, people walking, and a train passing by. This is the input video we are going to run through the model on the GPU. Once the video has been processed completely, you can check it in the runs folder: go to the yolov7 folder, then runs, then detect, and finally the exp25 folder. If you open the video you'll see that the model was able to predict the cars, the traffic lights, and the people walking on the streets, and you can see the train passing by. The model predicts the train once it's fully visible; here it's briefly detecting it as a bus, but otherwise the model is working fine, and it's able to detect the truck. So our model is pretty much working, and this is our output video after being sent through YOLOv7: the model predicts the traffic lights, the cars, and the people walking on the streets, with pretty high confidence levels, and it predicts the train and the people inside the train too.
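Here is a sketch of the CPU and video variants of the command used in the last two sections. The inference/videos folder and the sample filename are just the layout I described above, so substitute your own video path:

    python detect.py --weights yolov7.pt --conf 0.25 --img-size 640 --source inference/images/horses.jpg --device cpu
    python detect.py --weights yolov7.pt --conf 0.25 --img-size 640 --source inference/videos/sample.mp4

Leaving out --device in the second command lets the model fall back to the GPU by default when one is available.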
Step 4: detection on a webcam. The model can also run detection from a webcam; all you need to do is change your source from a file path to 0. When you set your source to 0, the inbuilt webcam gets activated and the model will predict the objects that are visible in the video feed. If you do not have an inbuilt webcam, you can change your source to 1 and attach an external webcam to make the predictions from it; that is the whole difference between 0 and 1. I haven't mentioned --device 0 because the GPU is used by default. So the last thing left is to detect from the webcam, which you do by setting the source to 0 for an inbuilt webcam or 1 for an external one; a command sketch for this step follows below. The model is able to predict the person. Let's see if it's able to detect a phone: yes, it detects the phone properly. Let's see if the model can detect a book: yes, it detects the book. The frame rate is lower than we would normally expect, but the model is pretty much working. Next, I'm going to show you how to train YOLOv7 on a custom dataset; this tutorial is based on the official YOLOv7 repository by WongKinYiu.
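As referenced in the webcam step above, here is a minimal sketch of the webcam command, keeping the same weights, confidence, and image-size settings used throughout this tutorial:

    python detect.py --weights yolov7.pt --conf 0.25 --img-size 640 --source 0

Change --source 0 to --source 1 for an external webcam; detect.py also has a --view-img flag for displaying the detections live as they run.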