I've developed two workflows to help you create amazing set extensions for your projects. The first one lets you select an area of your image and add anything you want, kind of like Photoshop's Generative Fill, but better, and free. With the second one you can even input a 3D model, and the AI will integrate it seamlessly, taking into account the light direction, the colors, and the general style of the original footage. I'm still blown away by how well this works, and I can't wait to see all the amazing projects you're going to create with it. I'm going to show you step by step how to set up and use these free workflows.

But first, let me quickly talk about the sponsor of this video: it's you. Thank you so much for watching these videos, and thank you to my lovely Patreon supporters who make them possible. If you want access to exclusive workflows and resources and an awesome AI Discord community, click the link in the description. And now, let's get started.

Let's say we're working on a movie set in a bustling metropolis like New York, but we only have footage of this town, because we found it for free on Pexels. Still, we really want to use it as our establishing shot, so let's use my workflow to transform it. First, use an editing tool of your choice to save out the frame where you want to add the set extension; in my case that's frame number one. If you want, you can already track your footage. I'm tracking the camera in After Effects, but you can also use the point tracker in DaVinci Resolve or do a full 3D track of your shot in Blender. I'll show you these options in more detail later, but for now let's just click Track Camera and let After Effects do its thing.

Now we need to install ComfyUI, a node-based interface for Stable Diffusion and other AI models. I created a free step-by-step guide on how to install it and where to download and put all the models for everything to work. Just go to the official GitHub page, scroll down, and download it. While that's downloading, we can install Git; I've already done this, but you just need to install the standalone version. Once ComfyUI is downloaded, you can put it anywhere you like and extract it, and this extracted folder is now your ComfyUI directory. Next, you want to download the ComfyUI Manager: go to the Manager's GitHub page, scroll down, right-click on this link, choose "Save link as", put it inside your ComfyUI folder, and click Save. Once it's downloaded, just run it. You can then click "run_nvidia_gpu" and ComfyUI will start in your browser.

Now we need a few models. First we need our checkpoint; this is the base model we're going to use, and I'm using Wildcard Turbo for this. Go to the link, right-click, "Save link as", navigate to ComfyUI > models > checkpoints, and click Save. Next, let's download the first ControlNet, MistoLine: right-click, "Save link as", go back to models > controlnet, where I like to create a new folder for SDXL models, and click Save. For the second ControlNet, do the same thing: right-click, "Save link as", and since we're already in the folder, just click Save. Now go to the Model Manager, search for "ultra", and install this one right here.

And that's it. Let's quickly restart ComfyUI, and now you can just drag and drop my workflows into the ComfyUI interface. You'll see that a lot of nodes are missing, but that's not a problem: go to the Manager, click "Install Missing Custom Nodes", select all of them, click Install, and wait for it to finish. Once it's done, click Restart and wait for the installation to finish.
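At this point, the relevant part of your ComfyUI directory should look roughly like this. This is just a sketch: the exact top-level layout depends on which package you downloaded, the file names in angle brackets are placeholders for whatever versions you grabbed, and the SDXL subfolder is the one I created by hand.

```
ComfyUI_folder/
├── run_nvidia_gpu.bat                   # starts ComfyUI in your browser
└── ComfyUI/
    ├── models/
    │   ├── checkpoints/
    │   │   └── <wildcard_turbo>.safetensors   # base SDXL checkpoint
    │   └── controlnet/
    │       └── SDXL/                          # subfolder created manually
    │           ├── <mistoline>.safetensors    # first ControlNet (line art)
    │           └── <promax>.safetensors       # second ControlNet
    └── custom_nodes/                          # filled by "Install Missing Custom Nodes"
```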
After the restart, you can see our workflow is here and ready to use. But first, let's go to the settings and change the Link Render Mode to "Straight"; it just looks a bit cleaner that way. I also want to go to the Manager and activate the "Latent2RGB" preview method.

We're working from left to right, so let's start at the top left corner. Here you just need to drag and drop the frame you exported earlier. Right-click, open it in the Mask Editor, increase the thickness, and select the area where you want to add the set extension. You can right-click to delete areas, and once you're happy with it, click "Save to node". Scroll down and add a prompt; let's try something crazy here and add a spaceship ruin, overgrown, broken, sinking into the ground. Now just click Queue Prompt and the image will start generating. This first image already looks really good, but on the right side it just added this mountain, and I don't like that, so let's try another seed. And this already looks really cool; yeah, that's exactly what I had in mind.

What I really love about this workflow is that it understands the whole image: you can see the sunlight is actually coming from the right direction and casting the correct shadows, and even the parts of the image that are further away sit behind the atmosphere, so they have higher black values, which really adds to the realism we're going for.

So even though this workflow looks complicated, it's actually quite easy to use. But let me quickly walk you through the whole thing so you really understand what's happening. In the first group we scale the image down to an HD resolution, simply because the composition looks much better when SDXL generates at an HD resolution. The idea is: we scale the image down, generate the image, and later upscale it again. I don't have to say too much about the prompt group: there's a positive prompt for all the things you want to see and a negative prompt for all the things you don't want to see in the image.

Up here we have two groups that create masks for us. The first one takes the mask you painted and blurs it a little so the seams are not as visible; if you want, you can increase the radius here, but this usually works really well. Down here we have a mask that selects only the edges of the mask you created, and this is fed into a line art ControlNet. That ControlNet extracts the lines from the original image and is applied only at the edges of your mask, which helps blend the newly generated image with the original composition. Next to that we have a reference ControlNet that takes the original image as a reference for the newly generated image so the style matches.

All of this gets fed into the KSampler here, which generates an image; in the next step it's upscaled and composited on top of the original 4K image. You can play around with the denoising value here: a higher denoise adds more detail, but it can also break the scale a little, so I'd recommend keeping it quite low.

This last mask setup looks at all the parts of your image that have changed and tries to isolate them. You can play around with the threshold and the mask erode values if you have too many of these spots; as you can see, if I set the threshold too low, we get all these extra tiny spots here.
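To make that last step concrete, here's a minimal sketch of the idea in Python with OpenCV. The function and file names are mine, and the actual workflow does this with ComfyUI nodes, but the difference-threshold-erode logic is the same.

```python
# Sketch of a "what changed?" mask: diff the original and generated
# frames, threshold, then erode/dilate to remove tiny speckles.
import cv2
import numpy as np

def change_mask(original: np.ndarray, generated: np.ndarray,
                threshold: int = 30, erode_px: int = 3) -> np.ndarray:
    """Isolate the pixels that the generation actually changed."""
    diff = cv2.absdiff(original, generated)            # per-pixel difference
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    kernel = np.ones((erode_px, erode_px), np.uint8)
    mask = cv2.erode(mask, kernel)                     # erode kills tiny spots
    mask = cv2.dilate(mask, kernel)                    # grow back real regions
    return mask

# Hypothetical file names, just for illustration.
original = cv2.imread("frame_0001.png")
generated = cv2.imread("generated.png")
cv2.imwrite("comp_mask.png", change_mask(original, generated))
```

Raising the threshold or the erode size is exactly what cleans up those extra tiny spots.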
The result is a mask that looks like this, so we don't even have to create a manual compositing mask in After Effects. We can just go back to the original frame: I select a tracking marker that's as far away as I want the set extension to be, then import my PNG image. Now we just need to make it 3D, so I copy the position of the track I created and change the orientation, roughly like this. If we now click play, you can see it stays in the correct place.

This workflow works perfectly well for all types of shots where the set extension is really far away, so there's no 3D parallax. But what if we want to add something that's closer to the camera? Look at this shot I took with my phone, for example. I created a 3D track in After Effects, and now I want to add a cozy farmhouse, so I generated this image with ComfyUI using the exact same technique. It works okay for a short amount of time, but then it falls apart, and you can tell this is just a 2D image. So now let's use my 3D matte painting workflow.

First we must track the 3D camera. In Blender, I delete everything, go to the VFX motion tracking workspace, and open my clip. I click Prefetch and Set Scene Frames, switch the motion model to Affine, activate Normalize, bump the correlation up to 0.9 and the margin to 20, go to the start, and track the features. I add more features at the end and track in the other direction. Now I should have enough tracks to work with, so I delete the ones that look broken, go to Solve, and select two keyframes where the parallax feels strongest, maybe 40 and 100. I check all the refine options and click Solve Camera Motion. This gives me a solve error of 0.5, which is really, really good. Now I can click Setup Tracking Scene. I select three tracking markers that are on my ground and click Floor, then I select one where I want the house to be; I think it's going to be roughly here, so I select that one and click Set Origin.

Now comes the fun part: modeling the house. I use simple box modeling and try to keep it as simple as possible, because the AI textures will do a lot of the heavy lifting, but you can be as detailed and precise as you like. I'll also leave in this ground plane, because we have hard sunlight and I'm hoping to catch some of the shadows cast by the house, so I can blend the images together better.

Now that we have a final 3D model, we need a way to transfer the scene geometry to the AI. If you've watched my previous videos on generating full 3D environments with AI, or on rendering with AI, you know we're going to use render passes. First, make sure the scene resolution matches the original footage, then go to the render properties and make sure color management is set to Standard. Next, activate the Mist pass; in my case I go to the first frame and click Render Image. Then go to the compositing workspace, create a Viewer node, and connect the Mist pass to it. You can see this is pretty much a depth pass, where white pixels are far away and black pixels are close to the camera. We actually want it the other way around, so let's add an Invert node. Now we want to focus all the information we have, all the grayscale values, on the house, so I add a curve node and shift the values so that the front is fully white and the back of our geometry disappears into darkness. Finally, we create a File Output node and select a location where we want to save out our image.
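If you prefer setting this up via scripting, here's a rough bpy sketch of the same compositor graph. The view layer name and output path are placeholders, and in practice you'd shape the RGB curve interactively as described above.

```python
# Rough bpy sketch of the inverted Mist depth pass described above.
import bpy

scene = bpy.context.scene
scene.view_settings.view_transform = "Standard"       # color management: Standard
scene.view_layers["ViewLayer"].use_pass_mist = True   # enable the Mist pass

scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

rl = tree.nodes.new("CompositorNodeRLayers")      # exposes the Mist output
inv = tree.nodes.new("CompositorNodeInvert")      # white = close, black = far
crv = tree.nodes.new("CompositorNodeCurveRGB")    # focus grayscale range on the house
out = tree.nodes.new("CompositorNodeOutputFile")  # writes the pass to disk
out.base_path = "//passes/depth"                  # placeholder path

tree.links.new(rl.outputs["Mist"], inv.inputs["Color"])
tree.links.new(inv.outputs["Color"], crv.inputs["Image"])
tree.links.new(crv.outputs["Image"], out.inputs["Image"])
```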
When we now render the image, we have a really good depth pass. We could use this depth pass to generate an image right away, but you can see the window and the door are really hard to make out, so that information might not survive image generation. We need another render pass: a line art pass. For this, go to Render and activate the Freestyle tool, then go to View Layer, scroll down, and under Freestyle activate "As Render Pass". Go to the Freestyle color and change it to white. When we now render the image, go to the compositing workspace, and connect our Viewer to the Freestyle output, we get these really cool outlines. Now add an Alpha Over node, connect the Freestyle pass to the second input, make the first one black, and add another File Output node. Finally, go to Render > Film, check Transparent, and add another File Output node to the alpha output of our image. When we now render, we get all three passes.

Now let's switch back to ComfyUI and import the 3D version of the workflow. Using it is really simple: import the alpha render pass here and the original frame here, and down here you again add the prompt for what you want to generate, in this case a cozy farmhouse. Next, add the line art pass here and the depth pass here. Again, make sure the Wildcard model is selected here, the MistoLine model here, and the ProMax model here. Then just click Queue Prompt again, and you can see it starts generating a house. This one is integrating really well, but it's not quite what I had in mind, so let's try a few more seeds. Oh yes, this is more like it; you can already see it's integrating really well. The resolution is quite low, though, so the next step upscales the image; at the moment it's set to 2x, but we can change that to 4x. Again, you can play around with the denoise value depending on how creative you want the upscaler to be when generating the images. You can also play with the ControlNet strengths down here. Generally you want to keep them quite low, because that allows for a more creative image generation, but the lower you set them, the less the result will stick to your original geometry, so you have to find a middle ground. As you can see, these values usually work really well as a starting point.

Now we can switch back to Blender. I select my house and create a new shader, let's call it "projection". In the shading workspace I delete the Principled BSDF, create an Emission shader, and connect it, then I just drag and drop the upscaled image in here and connect it to the color input. Now I go to the Layout workspace and make sure I'm on the right frame; in my case I created the render passes on the first frame, so let's go to frame one. I go to edit mode, select all the faces, and click UV > Project From View, and now the texture matches up perfectly. We can repeat the same steps for the rest of the geometry: for the floor, I also switch to the projection shader, go to edit mode, select everything, and click Project From View. If you get some weird stretching or it doesn't look correct, it's probably because you don't have enough subdivisions, but you can always add more and reproject. Next, make sure your Composite output is connected to the image down here, and render out the sequence.
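For reference, here's a minimal bpy sketch of that projection material. Object and image names are placeholders; the projection step itself is easiest done interactively on the frame you rendered the passes from.

```python
# Minimal bpy sketch of the emission "projection" material described above.
import bpy

obj = bpy.data.objects["House"]                     # placeholder object name
mat = bpy.data.materials.new("projection")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

tex = nodes.new("ShaderNodeTexImage")
tex.image = bpy.data.images.load("//upscaled.png")  # the upscaled ComfyUI output
emit = nodes.new("ShaderNodeEmission")              # emission, since the lighting is
out = nodes.new("ShaderNodeOutputMaterial")         #   already baked into the texture
links.new(tex.outputs["Color"], emit.inputs["Color"])
links.new(emit.outputs["Emission"], out.inputs["Surface"])

obj.data.materials.append(mat)
# Then, in edit mode on the frame the passes were rendered from,
# select all faces and use UV > Project From View so the texture
# lines up with the camera projection.
```

An Emission shader is the right choice here because the generated texture already contains the scene's light and shadow; a Principled BSDF would light it a second time.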
Oh, and if rendering is really slow like this, make sure to deactivate the Freestyle tool, and press M to mute all the other File Output nodes, because we don't need the render passes for the remaining frames. Now I click render again and it's a lot faster.

I then brought the footage over to After Effects and created a rough mask for the ground plane, just by keying the grass and blurring the edges. I also added blur and sharpen effects to match the iPhone's extreme compression. And I highly recommend you don't try to integrate something into grass; that was really annoying. But you get the idea: I think the effect looks really cool, and with some extra compositing it could look amazing.

Another cool thing about this workflow is that we can very quickly try out different textures. Let's say we don't want this cozy-looking farmhouse but a post-apocalyptic shed instead. I can just go back into ComfyUI, change the prompt, generate a new image, switch it out in Blender, re-render the sequence, and then drag and drop it onto the previous clip in After Effects, and it's already integrated. This looks really good, but let's say we want to add more detail. We can do that by opening the line art in an image editor like Photoshop and simply painting new lines on top of the image. I can then switch out the image in ComfyUI and generate a new one, and now the shed looks really, really broken.

And of course we can also combine these two workflows, with a 2D set extension for the background and a 3D set extension for the foreground. Let's go back to the shot where we added the skyline, for example. It looks really good, but let's say we want a giant office tower in front of it. We just add a few boxes to the scene, export our render passes, throw them into ComfyUI, add a prompt for an office building, generate a few images, select one, project it in Blender, render the sequence, and bring it into After Effects with some rough masks for the foreground. And it just looks really cool.

So I hope you're as excited as I am about this technique, and I hope you try it out. If you like these AI deep dives, want to support my work, and want access to exclusive example files, like the Blender files, consider supporting me on Patreon. Thank you very much for watching. I hope you create something amazing with these workflows; make sure to tag me in your work or share it with me, I always love to see what you come up with. See you next time for the next AI workflow.