Transcript for:
AI Workflow for Cinematic Film Production

Lady Gwendolyn, clad in shining armor that reflected the scorching desert sun, trudged through the sand towards the crumbling castle. This has to be one of the most fun workflows I've ever created. You can basically just take any sketch and have the AI turn it into a final cinematic image for free.

And just look at these results. Isn't that cool? It also works with multiple characters in the image, and you can combine it with my free character model sheet workflow for consistent characters and styles. It's an amazing and fun way to create pre-visualization for movie productions, precise concept art, or panels for a graphic novel, for example.

But to show you the full potential of this workflow I will combine it with a bunch of other new AI tools to create a full AI short film. It's about the endless fight between gnomes and knights. So make sure to watch to the end for the full movie and process. Before I show you how to install and use this workflow in detail, let me give you a quick demonstration of the things you can do with it. The easiest way to use this workflow is to just create a black and white sketch of your shot.

And I'm using Microsoft Paint here with my mouse to prove that you don't need expensive software or even a drawing tablet. Let's say we want to create an establishing shot for our movie. Our knight is standing in the center of the frame overlooking a vast desert with a castle ruin on a distant mountain.

So now I can use this image to guide my composition, and I add a prompt for the style. And this already looks really good! But with this workflow I can also easily try out variations of this shot. For example, let's change the time of day.

Or maybe we decide that the movie shouldn't be set in a desert wasteland but in a lush flower field. But when you're working on a movie or a graphic novel, you often need to have multiple characters in the same image. And here, for example, we have our knight talking to a gnome.

But when I now add a prompt for the full image, Stable Diffusion will mix both characters as it is applying the prompt to the whole image. But there is an easy way to fix this. For this workflow we just need to use different colors. So I draw my scene in black and one character will be red and one character will be green.

The workflow will then automatically create masks for the characters, and we can create individual prompts for these regions, resulting in a beautiful and controllable final image. These masks also allow us to attach character reference images to these specific areas, so that we can keep our characters consistent from shot to shot. I recommend using my character model sheet workflow to generate these reference images, as you will have some advantages later on, but any portrait of any humanoid character will work. Finally, we can add another reference image, but this time for the whole style of the image. This will help you bring all your images into the same color space and make them look more coherent.
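The colour-to-mask step can be sketched in a few lines. A real node does this on full-resolution images with NumPy or PIL, but the thresholding idea is the same; the threshold value and the tiny hand-written "sketch" below are illustrative assumptions, not part of the actual workflow:

```python
# Sketch: turn a colour-coded drawing into per-character binary masks.
# Pixels are (R, G, B) tuples; 1 means "inside this character's region".

def colour_mask(pixels, channel, threshold=128):
    """Mask the pixels whose given channel dominates (e.g. channel=0 for
    the red character). Black strokes and white paper are excluded
    because no single channel dominates there."""
    mask = []
    for row in pixels:
        mask_row = []
        for r, g, b in row:
            values = (r, g, b)
            dominant = (values[channel] > threshold
                        and values[channel] == max(values)
                        and values[channel] > min(values))
            mask_row.append(1 if dominant else 0)
        mask.append(mask_row)
    return mask

# Tiny 2x3 "sketch": black outline, one red pixel, one green pixel, white paper.
sketch = [
    [(0, 0, 0), (255, 0, 0), (0, 0, 0)],
    [(0, 255, 0), (0, 0, 0), (255, 255, 255)],
]
red_mask = colour_mask(sketch, channel=0)    # the red character's region
green_mask = colour_mask(sketch, channel=1)  # the green character's region
```

Each mask then gets its own prompt and, optionally, its own reference image.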

So until now, getting AI to create precise compositions like this with different characters was a huge pain and pretty much impossible with prompts alone. But you see, with this workflow it's pretty easy and a lot of fun. So let me now show you how to set it up.

First you need to install ComfyUI, an insanely powerful interface for Stable Diffusion. And I created this free step-by-step guide so you can follow along. Click on this link, which will take you to the official GitHub page, scroll down, and download ComfyUI here.

Right click, Save Link As, and select any location that you like. Next you can extract the folder, using WinRAR for example. This extracted folder is now your ComfyUI directory.

Next you need to install Git if you haven't already. Choose the standalone version, download it, and follow the installation steps. Now download the ComfyUI Manager: go to the official GitHub page, scroll down, and download this file right here. Right click, Save Link As, and put it inside of your ComfyUI directory. Double click on that file, and it will install the ComfyUI Manager.

You can now click Run Nvidia GPU, and ComfyUI will start. Next we need to install some custom nodes and models. The easiest way to do that is to download the storyboarding simple workflow and just drag and drop it into the interface. You can see it's already there, but a lot of nodes are still red because they are missing. The easiest way to get them is to open the Manager, click Install Missing Custom Nodes, select all of them, and click Install.

Restart it, and when you now import the workflow you will see that all the nodes are here. Next we need some models, and I'm using the Wildcard Turbo model as my checkpoint. Just go to civitai.com, right click on the link and choose Save Link As, go to your ComfyUI directory, go to ComfyUI > models > checkpoints, and click Save. Next we need our ControlNet model. Go to Download, right click, Save Link As.
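If you prefer scripting the downloads, the folder layout is the only thing you really need to get right. Here is a minimal sketch; the `ComfyUI` directory name, the filenames, and the URLs are placeholders for your own setup, not real download links:

```python
# Sketch of where downloaded model files belong inside the ComfyUI directory.
from pathlib import Path
import urllib.request

COMFYUI_DIR = Path("ComfyUI")  # placeholder: your extracted ComfyUI folder

MODEL_FOLDERS = {
    "checkpoint": COMFYUI_DIR / "models" / "checkpoints",
    # optional SDXL subfolder, purely for organization (see below)
    "controlnet": COMFYUI_DIR / "models" / "controlnet" / "SDXL",
}

def target_path(model_type, filename):
    """Return the path a downloaded model file should be saved to."""
    return MODEL_FOLDERS[model_type] / filename

def download_model(url, model_type, filename):
    """Download a model into the matching folder (same effect as
    right-click > Save Link As in the browser; needs a network connection)."""
    dest = target_path(model_type, filename)
    dest.parent.mkdir(parents=True, exist_ok=True)
    urllib.request.urlretrieve(url, dest)
    return dest
```

The same pattern extends to any other model folder (`models/ipadapter`, `models/clip_vision`, and so on).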

Then we go back up to models > controlnet. And what I like to do is create a new folder for SDXL ControlNets, just for organization purposes. Double click that folder and click Save. Next we need our IP-Adapter models. You could download them from their Hugging Face page, but it's easier to just install them via the ComfyUI Manager.

So go back to ComfyUI, open the Manager, click Model Manager, and search for IP adapter. You can install all the ones that have this description. You don't need all of them right now, but it's good to have them.

Select them and click Install. Also install these three here. Next we need some CLIP models. You can just select the two that have "CLIP Vision model needed for IP adapter" in the description. Select them and click Install.

Once that's done, just close the Model Manager, click Refresh, and double-check that all the models are loaded. So in the beginning here we want our Wildcard Turbo model, and we want the Plus Face model. In this Load Advanced ControlNet group here we want the MistoLine rank safetensors. And that's it, now we can start generating. So let's quickly create another shot in Paint. Make sure to set the scene dimensions in the image properties; I'm just using 16:9 here. This time I want a close-up of our female knight looking into the distance with a serious expression. And let's put another castle in the background. So this is our final image.

Let's save that as a PNG, and now you can just drag and drop it up here. Let's create a prompt, or maybe let's try something like this, and click Queue Prompt. It will now take some time to load in the model, but it will only need to do that once.
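Clicking Queue Prompt in the browser is the normal way, but ComfyUI also exposes the same action over a small local HTTP API, which becomes handy once you batch-generate storyboard shots. A minimal sketch, assuming a locally running instance on the default port and a workflow you exported yourself via Save (API Format); the filename is a placeholder:

```python
# Sketch: queue a prompt against a running ComfyUI instance over HTTP.
import json
import urllib.request

def build_payload(workflow):
    """Wrap an API-format workflow dict in the JSON body ComfyUI expects."""
    return json.dumps({"prompt": workflow}).encode("utf-8")

def queue_prompt(workflow, server="127.0.0.1:8188"):
    """POST the workflow to /prompt (requires ComfyUI to be running)."""
    req = urllib.request.Request(
        f"http://{server}/prompt",
        data=build_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

# Example usage (uncomment with a workflow exported via Save (API Format)):
# with open("storyboard_workflow_api.json") as f:
#     queue_prompt(json.load(f))
```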

And if you want to see previews in your KSampler, go to Manager and set the Preview Method to Latent2RGB (fast). This is our final image: we have our female knight looking into the distance, and we have a castle in the background. You can quickly try out new variations by just changing the prompt.

So let's try another one. I'll also add "angry" and put "serious expression" in brackets so it's weighted a bit more. So yeah, I'm pretty happy with that one. And this does not only work for these realistic types of images.

Let's turn it into an anime style. And this looks pretty cool, right? But now let's look at a scene with multiple characters.

So now I have this image of the gnome, an over-the-shoulder shot of the gnome looking at the knight who just entered through the door. We could try to write one prompt that describes the whole image, but as you can see, if we run that, it completely mangles our composition. It's completely mixed up.

The gnome is wearing not only a suit but also knight's armor. At least it has a pointy hat, but she also has a weird hat, so it's a complete mess. But we can easily fix this by using different colors for the different characters.

So now I just activate these two groups here, the character one group and the character two group. Make sure to also activate this one. And now when I run the prompt, you can see it created these masks, and we can tweak them up here, because we have some holes here.

So let's just expand them a little bit more. Also for the gnome. They don't have to be perfect, just cover the general area.
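"Expanding" a mask like this is just a binary dilation: each pass grows the masked region by one pixel in every direction, which is why small holes close up. A minimal sketch on a plain Python grid (a real node does this on the GPU with image tensors):

```python
# Sketch: grow the 1-regions of a binary mask by `passes` pixels
# (4-neighbourhood dilation), as the mask-expand slider does.

def dilate(mask, passes=1):
    h, w = len(mask), len(mask[0])
    for _ in range(passes):
        out = [row[:] for row in mask]  # copy, so we don't grow mid-pass
        for y in range(h):
            for x in range(w):
                if mask[y][x]:
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            out[ny][nx] = 1
        mask = out
    return mask

mask = [
    [0, 0, 0],
    [0, 1, 0],
    [0, 0, 0],
]
expanded = dilate(mask)  # the single pixel grows into a plus shape
```

More passes means a bigger expansion, which is what "maybe even a little bit more" amounts to.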

Maybe even a little bit more for the gnome. And now I can break up the prompt. In the master prompt I only describe the general style and the very basic composition. And then here, for the red mask, I describe the knight.

And for the gnome, let's make that even bigger, I have this description. And if I now click Queue Prompt... this looks pretty good, right? By using these detailed regional prompts, you can get some pretty good character consistency. But it can be even better.

To improve it, we can just go back to the character group that we want to make more consistent. I activate this IP-Adapter setup using Ctrl+B, and I load in an image of my character's face up here. Click Queue Prompt again, and you can see it switched out the face.

And this works really well, but the problem with IP-Adapters is that they kind of fight your prompt. So if you prompt for a happy emotion, for example, and use a neutral IP-Adapter image, the final expression will be more neutral. But if you used my consistent character model sheet workflow to generate this image, you can also just create emotions there and then use those images here instead.

If you run into any issues with this workflow or just need some feedback, consider supporting me on Patreon. You will gain access to our community Discord, where I try to help out everyone wherever I can. You can also get access to the advanced version of this workflow, so let's take a look at it now. The first feature that I added is the style reference at the beginning here. Let's say we generated an image like this, but I really like the vibe of this image.

So I just put that here and click Queue Prompt again. And you can see this mostly changes the colors of the image, but it also changes some aspects like the stones and the ground. It just looks a bit more like a desert, because we used this desert image as an input. I also added this upscaler here.

Just activate it if you want to use it, and it will upscale the image and add more detail. You can set how much it should change the image by adjusting the denoise value here. And finally, at the end here, I added these face detailers.

Not only can you use them to fix the characters' faces if they come out a bit broken, you can also change the emotion again here. And this is an amazing feature for dialogue sequences, for example, when you want to cut back to the same shot but with a different emotion. So I just activate the group for the character whose emotion I want to change.

In this case she's the red one. And I just put in my prompt here. And I can change the strength of the emotion here.

That's the denoise strength. So higher denoise values will give you more extreme emotions. When developing these workflows it's important to me to test them out as much as I can.

So this time I wanted to see if I can create a full short film with it. And spoiler alert: yes, yes you can, and it's also a lot of fun. Seeing your images go from scribbled sketches to high-quality 4K renders is just so satisfying. But don't expect it to work first try for every single image. You still need to experiment a little bit and test out different prompts and seeds.

There are also some limitations currently. For example, here I tried to push the composition just too much. I wanted to create this fisheye, extreme low-angle shot, but with all this distortion Stable Diffusion didn't really know what I was going for.

So I toned down the perspective a little bit and it worked. By the way, to generate the story for this short film I used Claude 3.5 Sonnet, and I was absolutely blown away by the story. This is the first time that an AI-generated story actually made me chuckle a little bit.

I then brought all my generated images over to Luma Labs' Dream Machine. You might have seen all the cursed AI videos and time-traveler memes; they were all created with this tool, and I wanted to see if you can actually create something that's not cursed.

To generate a video you can either just use a text prompt or upload an image. You can then add a prompt describing what's happening in your scene and the camera movement. After a minute or so you will have the final result. And this worked so well, and that was the first try! And sure, sometimes you get some really weird results, for example this one where the character just leaves and turns into a man.

I also like that you can add a start and an end frame. So for this image I simply created two versions of the same image using my workflow by just changing the seed. I then imported these two images, a start and an end frame, into Dream Machine, and it created this amazing camera movement.

All in all, it just took me like two to three hours to generate all the shots for this short film. I created the voice-over in ElevenLabs: I just searched for a voice that I liked, copied over the script, and clicked Generate Speech. I also used ElevenLabs to generate all the sound effects, the foley, and the atmospheres.

You can just use their sound effects tool and write a prompt; for example, let's say we need a laughing gnome. Then you click Generate, and after a few seconds it will give you four options. They sound evil. That's perfect.

Let's take this one. For the music I used Udio, and it works in the exact same way. You just type in a prompt for the music that you want to generate.

So I type in "medieval classical music, desert atmosphere, Arabic film score", and here are some of the results. Oh, this is nice. Yeah, I think I'll take this one. Finally, I put everything together in DaVinci Resolve, and I cannot recommend it enough.

It's one of the best pieces of editing software that you can get, and it's completely free. But without further ado, let me tell you the story of a valiant knight who has to face off against desert-dwelling pranksters, only to find out that sometimes the pen is mightier than the sword. Lady Gwendolyn, clad in shining armor that reflected the scorching desert sun, trudged through the sand towards the crumbling castle.

Her mission: to vanquish the mischievous gnomes once and for all. For years, these pint-sized pranksters had terrorized her village with their antics: sand in the bread, cacti in the outhouses, and camels mysteriously painted purple.

Enough was enough. As she reached the castle gates, Gwendolyn drew her sword and bellowed, "Come out, you diminutive devils! Your reign of ridiculousness ends now!"

To her surprise, the gates creaked open, revealing a single gnome in a tiny three-piece suit. He cleared his throat and spoke in a squeaky voice. "Good day, madam. I'm Finwicket, the gnomes' legal representative. How may I assist you?"

Gwendolyn blinked, lowering her sword. "I, uh, I'm here to stop your tricks." Finwicket nodded sagely. "Ah, yes. Well, before we proceed, I must inform you that we gnomes have recently unionized."

"Any grievances must be filed through the proper channels. If you'll just fill these out in triplicate, we can begin the arbitration process." Gwendolyn stared at the forms, then at the gnome.

"Fine," she grumbled, reaching for the pen. "But this had better not be another prank."

As soon as her fingers touched the pen, it exploded, showering her in a fountain of vibrant purple ink. The legal representative burst into high-pitched laughter. "Oh, it's always another prank with us, my dear."

The castle erupted with the sound of hundreds of gnomes laughing uproariously. Gwendolyn, still stunned and now very purple, couldn't help but crack a smile. She realized that sometimes, the best way to deal with silly goings-on was to learn to laugh along. Even if it meant looking like a walking grape.

I hope you had as much fun watching this film as I had making it. Storyboarding is one of the most important steps in filmmaking and many other forms of media, as it allows you to try out different compositions and camera angles, but most importantly, it helps you to get your ideas across visually. With this workflow you can now convey your ideas more precisely than ever before.

If you use this workflow, make sure to share it on our community Discord or tag me in your work; I always love to see what you come up with. And a huge thank you to my lovely Patreon supporters, who make the testing and development of these workflows possible. If you'd like access to exclusive example files and workflows, make sure to sign up under the link in the description.

Thank you very much for your support and see you next time!