All hail the new AI king! Hello humans, my name is K Overlord, and oh my God, what just happened? A few days ago, out of nowhere, a company called Black Forest Labs released a brand new state-of-the-art text-to-image model called Flux that beats any model we've seen previously. Flux is a huge 12-billion-parameter model that can generate beautiful images with correct hands, almost perfect text, photorealism, anime, and that follows the prompt even better than Stable Diffusion 3 or any other model released up until now. I mean, this is just nuts. And Black Forest Labs, although it's a brand new company that we've never heard of before, is actually a small team of 15 people, 14 of whom come from Stability AI, which is kind of funny: only 14 people were able to make a better model than anything Stability AI was able to make by themselves.

So, as you understood, this model, or rather these models, because there are actually two of them, are just incredible, the best models ever, simple as that. And you can of course run them locally on your computer, or online for a few cents an hour, and I'm going to show you how. Now, there are a few potential issues with the model, but I'm going to leave those for the end, because for now this is still speculation. For now, let's begin the installation.

To install this, you have two ways. The first is, of course, by using the one-click installer that is available for my Patreon supporters; actually, if you are one of my Patreon supporters, I have a bunch of files that will be very, very useful for you. Now, to be able to run Flux, we need to install ComfyUI, so if you don't have ComfyUI installed yet, you can just use the Flux ComfyUI Manager auto-installer and double-click on it. From there, it will ask you if you want to use the fast low-VRAM install or the unoptimized normal install, and I'm going to explain exactly what the difference is later in the video. But actually, no matter who you are, my advice is to choose the
fast low-VRAM install, and once again, I'm going to explain why later. Basically, just press A, then press Enter, and it will also ask you if you want to download the Flux Schnell model, which is a faster version of the Flux model. I highly recommend that you choose yes again so that you can try it out later, because this is actually a very, very cool model. Then press Enter, and it will install ComfyUI and download the models automatically; you really don't need to do anything. In the end, once the installation is done, if you have less than 12 GB of VRAM you can run the low-VRAM .bat file, but if you have more than 12 GB you can run the normal run_nvidia_gpu .bat file, which will automatically launch ComfyUI ready to be used, simple as that. Just load the special workflow and you are done. And if you already have ComfyUI installed, you can just use the Flux model install .bat file: drag and drop it inside your ComfyUI folder, then run the .bat file to begin the installation, so that you don't need to do anything manually yourself.

The second way to install this is, of course, the manual way. Now, installing ComfyUI itself is very simple, I don't necessarily need to show you how to do it: just download and extract the portable standalone build for Windows that is right here. All right, so for the manual way, you have a lot of files to download, and you also need to be careful to put them into the precise folders, especially because there are a lot of different ways to run the Flux models. First, there is the normal Flux Dev model; then you have the Flux Schnell model, which is like a super fast version of the normal model that can generate an image in only four steps, but with a little bit less quality. And what's really cool is that only a few hours ago we got fp8 versions of the Dev and Schnell models; basically, what this means is that these models are optimized and require less VRAM to run. And to
be honest, after testing those out, these are the models that I actually recommend you use, but if for some reason you still want to use the normal models, you can still do so. And then finally, you also need to download the Flux text encoders and the VAE.

So basically, now you can start downloading the models. You're going to click on Files and versions; first, download the VAE model called ae.sft, so just click on the download icon to save it onto your computer. Then, the flux1-dev.sft: click on the download icon to save it onto your computer. If you want to use the Schnell model, once again, same thing, just click on the download icon to download the model. Or, once again, use the fp8 models; these are actually the models that I recommend you use instead of the original ones, so just click Files and versions and download the model that you want to use. Then download the text encoders. There you go; at the end you should have six different files. In my case, I only downloaded the fp8 versions of the models, so let's actually start with that.

I'm going to select those two models, Ctrl+X to cut them, then inside your ComfyUI folder you're going to go inside models, then unet, and paste these two models right there. Next, you're going to select the t5xxl and clip_l safetensors models, Ctrl+X to cut them; once again, we are inside the models folder, and now we're going to go inside the clip folder and paste them right there. And then finally, we only have one file left, called ae.sft, that you're going to select, Ctrl+X to cut, and this time we're going to go inside the vae folder and paste that file right there, and there you go.

And now we can finally start ComfyUI, then once again import the workflow, and now we're ready to have some fun. Basically, everything should already be done for you, everything should be selected, so the only thing that you need to do is input your prompt, select the resolution of your image, and click Queue Prompt, and after a few seconds you get something like this. For me, it takes around 14 seconds to generate an image. That being said, this image took around 16 GB of VRAM to generate, because the Flux model, compared to Stable Diffusion 3 or any of the Stable Diffusion XL models before it, is a 12-billion-parameter model, versus 2 billion for Stable Diffusion 3, so it definitely uses way more VRAM to generate a single picture.

So, to make sure that you're able to generate an image with your GPU as fast as possible, here's what you need to do. First, if you are like me and you have a 3090 or a 4090: congratulations, you have enough VRAM to generate an image in 14 seconds, but for this you need to go into your Nvidia settings, into Manage 3D Settings, and under CUDA - Sysmem Fallback Policy choose Prefer No Sysmem Fallback, then click Apply. What sysmem fallback does is that when an application uses more VRAM than you have, it starts offloading some of that work onto your RAM, making image generation much slower; by choosing no sysmem fallback, it will only use your VRAM and nothing else, allowing you to generate an image as fast as possible. Now, if you don't have a 3090 or a 4090, definitely do not activate this option; quite the opposite, you should definitely leave sysmem fallback on if you have less than 16 GB of VRAM. Also, make sure that under weight_dtype you select fp8_e4m3fn, because otherwise, for some reason, it takes much longer to generate an image.
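By the way, the manual file placement from a minute ago can be summed up in a few shell commands. This is only a sketch: the ComfyUI location and the exact filenames are assumptions, so adjust them to whatever you actually downloaded, and the `touch` lines just stand in for the real multi-gigabyte downloads so the sketch runs end to end.

```shell
# Sketch of the manual model placement (paths/filenames are assumptions).
COMFY="./ComfyUI"
mkdir -p "$COMFY/models/unet" "$COMFY/models/clip" "$COMFY/models/vae"

# Placeholders standing in for the real downloads from Hugging Face:
touch flux1-dev-fp8.safetensors flux1-schnell-fp8.safetensors \
      t5xxl_fp8_e4m3fn.safetensors clip_l.safetensors ae.sft

# Diffusion models (Dev and/or Schnell) go into models/unet:
mv flux1-dev-fp8.safetensors flux1-schnell-fp8.safetensors "$COMFY/models/unet/"
# Text encoders go into models/clip:
mv t5xxl_fp8_e4m3fn.safetensors clip_l.safetensors "$COMFY/models/clip/"
# The VAE goes into models/vae:
mv ae.sft "$COMFY/models/vae/"
```

Once the files are in those three folders, ComfyUI will pick them up in the corresponding loader nodes.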
And if you have even less VRAM, like 12 or 8 GB, then first make sure that you are of course using the fp8 version of the model, and under clip_name2, make sure that you're using the fp8 t5xxl text encoder, which will use even less VRAM; if you have more than 16 GB of VRAM, you can use the fp16 version instead, which makes image generation even faster, as long as you have enough VRAM to run the operation, of course. In my case, I have a 4090, so this is the setup that I use: I write my prompt and generate an image to get something like this, an absolutely beautiful image generated in around 14-15 seconds. So yeah, this is really some amazing stuff.

Oh, and I also said earlier in the video that there is another model available, called Flux Schnell, which is kind of like the super fast version of the Flux model. Basically, you need to select this model right here, flux1-schnell-fp8, leave everything at its defaults, and just decrease the number of steps from 20 to 4, because this is really all you need; then let's generate a new image. This time we get an image like this: still really, really good, pretty much perfect, but this time the image was generated in around 2 seconds, which is just fantastic. Now, there is a small dip in quality compared to the normal model, which, well, makes sense given the low number of steps, but even this allows you to generate an image that is better than anything we've ever seen before.

And if you want to generate images with even less VRAM, keep in mind that Flux actually works even better than Stable Diffusion XL at 512x512 resolution. This image, for example, was generated in basically zero seconds. Like, I'm going to try not to cut: I click on Queue Prompt, and... yeah, I didn't even have time to finish my sentence, the image was already generated. So yeah, I mean, what can I say, what do
you want me to say? This is amazing. When I told you that this is by far the best model we've ever seen, that was not a joke: this really is just the best model ever, simple as that.

But don't worry, even if you don't have a powerful computer or a powerful GPU, you can still rent a GPU for a few cents an hour on a website like RunPod and run ComfyUI as if it were running on your local computer, and I'm going to show you how to do that. First, if you haven't done it already, you can click the link in the description down below and create a new account on RunPod. Then you can deploy a GPU pod, and to save a little bit of money, you're going to change Secure Cloud to Community Cloud, then scroll down until you see an available card with 24 GB of VRAM, like the RTX 3090 or the RTX A5000; it doesn't really matter which one you choose, so just select it. Then you're going to click Change Template, search for ComfyUI, and choose this one by ashleykza. Then edit the template, change the container disk from 10 GB to 50 GB, click Set Overrides, and click Deploy On-Demand. Once this is done, you're going to click Connect and click on port 8888.

Now, as of right now, ComfyUI is already installed, so you don't need to install ComfyUI anymore, but we do need to install and put the models into the right folders, and for this, if you are one of my Patreon supporters, I made a few special files for you to use; this is going to be very easy for you. Just go inside the ComfyUI folder, then drag and drop one of the three files that I have prepared for you: A, B, or C. A will install a more optimized version of the Flux model, which will use less VRAM and generate images a little more quickly; B will install the normal Flux model, which takes about 30 seconds to generate; and C will install Flux Schnell, which is basically the super fast model, which again takes something like 2 seconds to generate an
image, but with less quality than the normal model. So, once again, it's kind of up to you which one you want to use, but the one that I recommend for fast and good generation is file A. Just select the file and drag and drop it into the workspace, then click on Terminal, copy and paste the two command lines that you will find in the Patreon post, and press Enter; it will download all the models automatically into the right folders. It might take a while, but you really don't need to do anything manually. Once this is done, go back and click on port 3000 to launch ComfyUI; here, you're going to load the workflow, so just use the first one, and everything should already be done for you.

From here, you actually need to update ComfyUI to the latest version, so all you need to do is click on Manager, click Update ComfyUI, and once you see the message that ComfyUI has been successfully updated, click Close, then click Restart, say OK if you want to reboot the server, and then, what I actually recommend you do is close the window and relaunch port 3000. And then we're finally done: just make sure that you're using the right model, fp8, with the fp8 weight_dtype here, make sure that you use the fp8 safetensors model, then write your prompt and click Queue Prompt, and after a few seconds you get something like this. Just absolutely beautiful. I mean, what do you want me to say? You've seen what this model can do, and yet I'm still impressed.

And if you want to do it the manual way, it is pretty much the exact same thing up until you need to download the models. To download the models quickly onto RunPod, you can just right-click on the model that you want to download, click Copy Link, then go inside the folder where you want your model to be; in my case, I need to go inside the models folder, then inside unet. Once you are inside that folder, click on Terminal, and here you're going to use this command: type curl -L -o, where you put the name of the file between quotes, and then, between quotes again, the link to the model; press Enter, and it will start downloading the file onto RunPod automatically. And basically, you do that for every file, inside the right folder. And if you want to use a file that you downloaded inside ComfyUI, so that the file appears in the list, you need to click Refresh, and then it should appear in the list, ready to be used.

So let me just put in a prompt and click Queue Prompt, and we get something like that. This is made with the Schnell model, the super fast version, where the quality is definitely not as good as the base model, and we still get a fairly decent picture, and this image was generated in like 2 or 3 seconds, so yeah, very, very impressive. Of course, the base model is much better, much more powerful than the Schnell model, but definitely slower as well. And really, the ability to generate anything you want is super cool, and the model's ability to understand the prompt is just incredible. For example, here the prompt was "a photo of a woman with blonde hair drinking coffee on a beach with a dragon in the background", and we get exactly what I asked for. And not only that, but every part of the image is pretty much perfect: we do have five fingers on each hand, and the photo is extremely realistic, with a beautiful composition and colors. I mean, this is really just incredible, and the model is just super ultra powerful; it can even generate anime better than any other model before, while still following the prompt to a tee. For example, this prompt was "a screenshot from an anime movie, a creepy girl smiling in a forest surrounded by soldiers", and this is exactly what I got.
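Going back to the RunPod download step for a second, the `curl -L -o` command from above can be wrapped in a tiny helper so you can repeat it for each file. The URL in the usage line is a placeholder, not a real link; right-click the model on Hugging Face and Copy Link to get the real one.

```shell
# Wraps the `curl -L -o` command described above:
# -L follows redirects, -o saves under the filename you choose.
fetch_model() {
  local name="$1" url="$2"
  curl -L -o "$name" "$url"
}

# Usage, from inside the target folder (e.g. ComfyUI/models/unet).
# The link below is a placeholder -- paste the one you copied:
# fetch_model "flux1-schnell-fp8.safetensors" "https://huggingface.co/..."
echo "fetch_model ready"
```

Run it once per model file, from inside whichever models subfolder that file belongs in.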
I mean, just incredible. And if you want to know whether the model is censored or not, well, the answer is... the fact that I have to blur this image shows that it is not that censored. Now, sure, it is not able to make hardcore not-safe-for-work images, which, you know, makes sense, but it is definitely less censored than Stable Diffusion 3, for example. But once again, do not expect to be able to generate whatever fantasy you have in mind; no base model is able to do that anyway. And even a super complex prompt is not an issue for this model. I mean, this is just incredible, and once the community starts to fine-tune it, this model will become even better.

Well, that might actually be the main issue with this model, because as of right now, we still don't know if this model can be trained or not. And even if it is possible to train it, given the amount of computational power you might need to train a 12-billion-parameter model, it will certainly not be possible on a consumer-grade GPU; even if you have a 3090 or a 4090, training on your local computer will be just impossible, meaning that in the best-case scenario, we would all need to rent some GPUs on RunPod or somewhere else to be able to train even the simplest thing on this model, and that is if it is possible to train at all. So yeah, as of right now, we still don't know; it might be too early to tell, but I'm not going to lie, this does not bode well for the future of that model. But hey, as of right now, seriously, how can we complain when we get something like this for free?

Oh, also, if you have any issues whatsoever, do not forget that I provide priority support on Patreon, so just send me a DM and I will try to answer your question as soon as possible. So really, just try this out yourself and have some fun. And there we are, folks! Thank you guys so much for watching, don't forget to subscribe and smash the like button for the YouTube algorithm. Thank you also so much to my Patreon supporters for supporting my videos, you guys are absolutely
awesome. You people are the reason why I'm able to make these videos, so thank you so much, and I'll see you guys next time. Bye-bye!