Transcript for:
AI Video Automation Workflow

Today I'm going to be showing you how I built this n8n workflow node by node. I'm going to show you guys click by click how you can build this too. What this automation does is it generates videos for you, captions them, and then posts them to three different social media platforms: TikTok, YouTube, and Instagram. The type of videos it's going to post are videos like this, where it has the audio in the background. I have the audio turned off right now, but it's actually generating these clips, the audio, the captions, everything 100% with AI. These videos are 100% AI generated, so it takes all of the work out of the content creation process. And the nice thing about these videos is we're using Veo 3 Fast, the video generation model created by Google. Not only are we using Veo 3, which is the best video generation model, but this is the fast version, which generates videos at a fraction of the cost. So these videos are high-quality, they're low-cost, and it takes all of the work out of posting the videos as well, because we're going to upload them in this automation. Now, I'm going to be showing you how to create this automation click by click in this video, but if you want the quick-install version, where you can just copy and paste it into your workflow like I did there and have it in one click, all you have to do is connect your accounts, then I would recommend joining AI Foundations. To get the one-click install inside of AI Foundations, you're going to go to the classroom once you've signed up, and then head over to Mastering Agents with n8n. Go down to ACG2 under the Automated Content Generation folder. Click on ACG2, and I'm going to show you how to install this entire template. I'm even going to give you a niche-swapping formula for this so that you can switch the niche that you're targeting.
This means that whether you're making motivational videos, creating sermons, or just creating some kind of promotion for your product, this is going to help you generate all different types and niches of videos for your social media platforms. Now, let's dive into a cost analysis so that you know exactly how much this is going to cost you to run. I want to be extremely transparent, because a lot of people on YouTube, myself included in the past, regrettably, have made videos on these types of systems without sharing the pricing. So I'm going to share the pricing really quick, and then we'll dive in click by click and build this. We're going to use OpenAI's ChatGPT to generate the scripts, the prompts, and everything text-based for these videos, and that's going to cost you 5 cents or less, probably less, per completed post. The most expensive part, but arguably a lot cheaper than it used to be, is Veo 3 Fast from Kie AI. It's going to take $2 to $3 to completely generate these posts. That means you're getting five to eight high-quality clips, pieced together, for only $2 to $3 per video like this one. It used to cost upwards of $5 to $10, and the results were unsatisfactory; now you can really put these videos out and they look convincing. And we don't need any expensive editing APIs, because we're going to use fal's composition endpoint, which costs fractions of a penny, and we're going to use fal for our captions too, which only costs 10 cents per post. Then we get into the hard costs: we're going to use ElevenLabs to generate the voice audio behind the clips, and Blotato to upload those to the different social media platforms. Then we're going to use Airtable and Cloudinary, which are both on the free tier.
You don't need to upgrade these in order to use this automation; they're used to upload and log the videos we've created. Now, I've already touched on our premium community, AI Foundations, and that will be linked below if you want the one-click install. If you want ongoing support while you're building these automations, on the calendar tab right here we have support calls every single Thursday to help you piece this together and customize it for your needs. We also have a robust classroom where you're not only going to learn about video generation. That's the cool thing about this community: we're not just teaching this. This is just a nice appetizer for the rest of the content we're creating for you. Video generation is only about 5% of this community. We have all kinds of other trainings on all different modalities of AI. So even if you're a total beginner with AI, we have a learning platform where you can come in and learn about large language models, generative AI images, audio, and video, and then learn how to master n8n with our flagship n8n mastery course. And if you want to create your own applications with AI, we also have Vibe Coding: Agents into Apps, another flagship course created by our coach Paulo. This is going to teach you how to build applications with the help of AI tools, even if you're not a developer. Now, if you're just joining AI Foundations, I urge you to watch the Start Here course when you first join the community. If you don't watch this course, which is just a few quick modules, you're not going to know how to navigate the community to the best of your ability. Now, I've talked about AI Foundations, but we also have a free community called AI Pioneers. This is the light version of AI Foundations. In AI Pioneers, you're not going to get the calendar with the live calls.
We do need to charge for that, but we have uploaded some resources in the classroom to help you with these builds. So for today's build, if you join our free community, AI Pioneers, and you click on the classroom and then go to Agent Builds under the ACG Systems folder right here, I'm going to have ACG2 Scripted Shorts. This is the system we're building in today's video, and I have all of the copy-and-paste resources you could need right here. Now, without further ado, I'm going to get into building the system step by step, so that even if you're a total beginner with n8n and you've just gotten started with it, even if you've never done anything with AI, by the end of this video you're going to have a working automated short-form content farm. Just a couple of years ago, a system like this would have been a dream, and now it's possible. So let's get into it. First, you're going to head over to n8n.io and get signed up or started here. They have hosted plans where you can sign up for their cloud plan and get a 14-day free trial. But the beautiful thing about n8n is that you can self-host it, so you can run it completely free on a virtual private server or on your own device. This is where you can build all kinds of different automation workflows, not just these video generation flows; you could automate your entire business if you wanted to with something like n8n. Once you're signed up for n8n, the back end of your account is going to look something like this, and you're just going to click on Create Workflow. First things first, let's name our workflow. I'm just going to name it Automated Scripted Shorts. Then we're going to click on this plus button right here and create our first node. Now, a node is what sets off a different action in this automation flow. The first one we need is a trigger node.
So I'm going to hit Trigger Manually. What this allows us to do is execute the workflow and get things started. As you can see, that turned green, and one item came through. That means it's going to run through this automation one time with this node right here. Now that that's set up, we can get into actually automating things. The beautiful thing about n8n is that you can hook it up to OpenAI, which allows you to connect to the popular model ChatGPT. That's what we're going to use as the brain of this automation; it's going to be scripting and prompting our videos. So we're going to click the plus button right here, type "OpenAI" in the upper right-hand corner, and then click on OpenAI. From there, I'm going to hit Message a Model, and the next step is to connect to ChatGPT. So I'm going to click this dropdown right here, or click on Create New Credential, whichever you see. Then you're just going to rename this; I'm going to call my credential "demo." This is basically like your login to ChatGPT, except they use a secret key right here that connects your ChatGPT account to n8n. Now, to get this key, you're going to head over to platform.openai.com and hit enter. Once you're on platform.openai.com, you're going to head to the upper right-hand corner, where you'll see Log In or Sign Up; just use one of those options. Once you're all signed up, you're going to click on this settings cog right here. So I'll click Settings. Next, you need to head over to the Billing tab and load in some credits. I would say $5 to $10 is a good start just to test things out.
And that should actually last you quite a long time, because if you remember from when we were talking about pricing, OpenAI isn't a significant portion of the cost; it's going to be pennies to generate these videos. So set up your billing, and once you've done that, head over to the API Keys tab. From there, we're just going to click on Create New Secret Key. I'm going to give my key a name; I'll call it "demo." Then I'll add a project; the default project works fine. Then we'll hit Create Secret Key, copy that key, and hit Done. Then we head back over to n8n, and I'll paste that API key in right there. Perfect. Now if I hit Save, it should say the credential connected successfully, and I'm just going to close this. Now we're all connected to ChatGPT. From here it's just a matter of prompting, entering text that the model can understand so it outputs a clear result every time. Now, I'm giving you all of these prompts for free in our free community, so head down to the description and sign up for our free community, AI Pioneers. Once you click on that, you're just going to sign up, verify your email, and then we'll be off to the races. Once you're in AI Pioneers, it looks like this. You're going to click on the Classroom tab, then click on Agent Builds. From there, you're going to go into the ACG Systems folder right here and click on ACG2 Scripted Shorts. From there, we're going to copy the first script system prompt. So I'm just going to copy this all the way through, just like so, head back over to n8n, and paste that prompt in right here. Then I'll change this role to "system." Next, we need to rename the node to Script. You're going to need to name these nodes exactly as I name them in this tutorial if you want these things to pass through correctly.
So make sure you're naming these exactly as I name them. Okay, so we've got Script, and for the model, I'm going to click this dropdown right here and find chatgpt-4o-latest. I'll click on 4o-latest, and we also want to check the box for Output Content as JSON. Next, I'll go back to the canvas. Now, if I execute this workflow, it's going to pass through a script for the video. So if I click into here now that the result is complete, I can see that it breaks the script down. It also gives us a title and a voice ID for ElevenLabs; this identifies which voice we want to use for our automation. Then what we have first is a hook, because it's going to break these videos down, make them super marketable, and actually make them valuable. So it has that hook, and then it has the build; this is the build toward the final resolution. And then finally we have the resolution, which is the end of the script. When we put all of these together, it makes a solid video script. Now I'll go back to the canvas, and the next step is to click on this plus button again, type "HTTP," and click on HTTP Request. What this does isn't too complicated, guys: it's basically going to talk to a piece of software that doesn't integrate with n8n. Normally, we can just use an n8n node, like the OpenAI one I just showed you or the trigger node we did in the beginning, but in this case we need to connect to a tool that doesn't have an n8n node, and that is ElevenLabs. ElevenLabs is the voice generation software we're going to use for this. So let's head back over to the community. What I've done for you here is I've added the ElevenLabs endpoint. So if I copy this right here, then head back over and paste it into the URL field, that adds the endpoint we want to talk to.
Next, we're going to click on this method and change it to POST. Then we need to connect to ElevenLabs with our credentials, our secret key, just like we did before with OpenAI. So I'll click on this Authentication tab, click on Generic Credential Type, then click on Generic Auth Type and select Header Auth. Then down here where we have Header Auth, I'm going to click Create New Credential, and I'm going to call this one "demo ElevenLabs." For the name, we're going to type in xi-api-key. Then we need to go get that secret key from ElevenLabs and paste it into this value right here. To get this value, we're going to go to elevenlabs.io. Once you get to ElevenLabs, you can click this play button right here and see how this works: it basically takes text as an input, in this case the script we generated with ChatGPT, and outputs a voice file on the other side, so you get a nice, clear, human-like voice on the other end. Now, if you don't already have an ElevenLabs account, you're going to need to sign up right here, or log in if you already have one. Once you're logged in, you're going to click on your profile in the bottom left corner, then click on API Keys. Next, I'll click Create API Key, and I'm going to turn off Restrict Key. Then we're just going to give it a name, "ElevenLabs demo" in this case, and I'll hit Create. Then I'll copy this credential to my clipboard. Now we can close this window, head over, paste that into the HTTP request here, and hit Save. Now I'll close the credential. And before I forget, we need to name this node "Eleven Labs," just like that, with a capital E and a capital L, and then I'll hit enter. So now we're talking to ElevenLabs, but what we need to do next is send it the information, like the script.
So to do that, we're going to click on Send Body, and we're going to pass this information over to ElevenLabs so that it knows what to do. Normally, you can pass these values in these text fields right here, but to make this easier, I'm just going to give you a copy and paste. So change this Specify Body to Using JSON, then head over to the free school community, and where it says ElevenLabs JSON body, copy this text right here. Head back over to n8n and paste it into this field. Perfect. Now that's all coming through, and if you look at the preview, you can see that it's pulling through the script all as one piece of information. It's separated here, but we've combined it all together right here. Now, if you execute this step, you can hear what the voice sounds like and start to get an idea of what this video is going to do. Once that finishes, you're just going to click View on the data and then you can hear it: "…guilty about it. You don't laugh at the same jokes. Don't want the same people around." Now, there is one final step for ElevenLabs. Go back to the list view, and you'll see that this file is named "data" right now. We need it to be named output.mp3. To get that, we're going to go to Add Option, then down to Response, click on this right here, click on File, and change the output field to output.mp3. Then you can execute the step, and as you can see, it now says output.mp3. That's exactly what we want for the rest of this automation to work. Next, we need to create the transcript with timestamps. We need to know when certain things are said so that we can match the videos up with the script being read. So if it says something about a suit, it shows a man in a suit; if it says something about the mountains, it shows somebody in the mountains.
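If you're curious what that copy-paste JSON body is actually doing, here's a minimal sketch in JavaScript. The joined script text is the key part; the helper name and the model ID shown are my assumptions for illustration, not the community's exact body:

```javascript
// Hedged sketch of the ElevenLabs text-to-speech request body. The real
// workflow fills `text` with the hook, build, and resolution joined from
// the Script node's JSON output.
function buildTtsBody(script) {
  const text = [script.hook, script.build, script.resolution].join(" ");
  return {
    text,
    // Assumption: any current ElevenLabs model ID works here.
    model_id: "eleven_multilingual_v2",
  };
}

const body = buildTtsBody({
  hook: "You feel it before you can name it.",
  build: "The jokes stop landing, the rooms feel smaller.",
  resolution: "That is not loss. That is growth.",
});
console.log(JSON.stringify(body, null, 2));
```

In n8n, the same join is done with expressions inside the JSON body, pulling the three fields out of the Script node, which is why the preview shows the script combined into one piece of text.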
And this is just going to make the whole thing a lot more relevant. To do this, I'm going to click the plus button right here, type "http," click on HTTP Request, and then rename this one to Get Transcript. We'll change the method to POST. Then we'll head over to the community and grab the get-transcript endpoint right here. I'll copy that, head back over, and paste it in as the URL. Now we need to add another key, another authentication, in order to access the tool that gets us the transcript from OpenAI. So I'll click this dropdown right here, click Generic Credential Type, then Generic Auth Type, and select Header Auth. Right down here you may see an OpenAI option, but you're probably going to need to create a new credential, so click Create New Credential. I'm going to name mine "demo OpenAI." For the name, we're just going to type in Authorization. Then down here in the value, I'm going to type a capital B followed by lowercase e-a-r-e-r, so it says "Bearer," and then I'm going to put a space. Then we'll head back over to platform.openai.com and grab our key for the next part of the value. You could use the same key we already created, but I'm just going to create a new one and call it "demo transcript." Then we'll click the project, select the default project, and hit Create Secret Key. I'll copy the key and hit Done. Then we head back over to n8n, I'll paste the key into the value, hit Save, and close this. Then we'll click Send Body. I'll change the body content type on this one to Form Data. For the first parameter, we can leave it on form data, and I'm going to type in "model" for the name; for the value, we're going to type in whisper-1. This is the model we're using to generate the transcript. Then I'll add another parameter; this one can be form data as well.
This one's going to be response_format, and the value is going to be srt. This tells it what type of file we want back. So this node takes the voice clip from ElevenLabs as input, and it outputs an SRT file. An SRT file timestamps everything in the script, telling us when each line is being said: at the 5-second mark it says this, at the 10-second mark it says that. That's what this type of file gives us. Now I'll add another parameter, and this one is going to be n8n Binary File. For the name, we type in "file," and for the input data field name, it's output.mp3, exactly as we named it in the last step. Once this is all filled out, we click Execute Step, and it should output something that looks like this, where it shows the exact timestamp and then what was said during that portion of the clip. Now we'll go back to the canvas, and next we need to add an Airtable node. This is where we're going to log the different things happening in our system, so we can see which videos are in progress, see once they've been posted, and view a preview of each video. Think of Airtable as a supercharged Google Sheet. We use it for all of our automations over at AI Foundations, because it's a very powerful way to keep track of your data. So I'll click the plus button right here, type in "Airtable," click on the Airtable node, and then click Create a Record. Now we need to create our Airtable credential, so I'm going to click on Create New Credential; we're going to get our access token from Airtable and set up our base over there. But first, I'll name this "airtable demo" just so I know what it is. Then we'll head over to airtable.com. Once you're on airtable.com, sign up for free, or log in if you already have an account.
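Before we move on, here's what that SRT output looks like in code terms. SRT is just numbered blocks with an HH:MM:SS,mmm time range and the spoken text; this little JavaScript sketch (the helper name is mine, not the community code) shows how those timestamps can be turned into seconds, which is exactly what the later scene-calculation step relies on:

```javascript
// Parse an SRT string into { start, end, text } segments, with times
// converted from "HH:MM:SS,mmm" into plain seconds.
function parseSrt(srt) {
  const toSeconds = (t) => {
    const [h, m, rest] = t.split(":");
    const [s, ms] = rest.split(",");
    return +h * 3600 + +m * 60 + +s + +ms / 1000;
  };
  return srt
    .trim()
    .split(/\n\s*\n/) // SRT blocks are separated by blank lines
    .map((block) => {
      const lines = block.split("\n");
      const [start, end] = lines[1].split(" --> ").map(toSeconds);
      return { start, end, text: lines.slice(2).join(" ") };
    });
}

const segs = parseSrt(
  "1\n00:00:00,000 --> 00:00:02,500\nYou feel it before you can name it.\n\n" +
    "2\n00:00:02,500 --> 00:00:05,000\nThe jokes stop landing."
);
console.log(segs);
```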
So I'll click Log In. Once you're logged in, you're going to click on the blue Create button in the bottom left-hand corner. Then you're going to select a workspace; I'm just going to select my first workspace. Then I'm going to click Build an App on Your Own. Once this opens up our supercharged Google Sheet, I'm just going to hide the sidebar right here. Then I'll click on Untitled Base up top and type in "Videos." You can select any color here. Then for the table, I'm going to double-click into Table 1 and also name that "Videos," and hit Save. Now, for this first column, we're going to click the dropdown, hit Edit Field, select Auto Number, and hit Save. We're going to rename this by clicking into it; we're just going to name it "ID," two capital letters, and hit Save. That's going to assign a unique ID to each of these rows so we can keep track of which video we're working on. So if this logs video one and says it's in progress, later down the line when it wants to change that, it's going to change the status on ID 1 to posted. That's why we have these IDs: the automation needs to keep track of which task it's working on and which row it's using. Now, in this Notes column that's pre-populated, I'm just going to double-click into it, change the title from "Notes" to "Title," and hit Save. Then we'll delete the Assignee column. We can leave Status, but we need to edit the options that are available. So I'll hit Edit Field, delete "To-do," capitalize the P in "In progress," and rename "Done" to "Posted," then hit Save. Now, this Attachments field that's already here is great, but we need to rename it to "Source"; this is where the video is going to land. Then we'll delete the attachment summary.
Next, to get our authorization key so we can connect n8n to Airtable, we're going to go down to the bottom left-hand corner here, click on Builder Hub, and then click Personal Access Tokens. From there, I'm going to click Create Token. You're starting to see the pattern here: we're just creating different credentials to connect to our different accounts so that n8n can run the entire internet for us in the background. Now, I'm going to name this "demo." Then, for the Add Scope button, we're going to need to click on all of these scopes, so just go ahead and add every scope here. Then we're going to click Add a Base, and I'm going to find the base we created. This one is called Videos, so I'll just type in "videos" to find it quicker, and click on it. Now I'll create the token, copy it, and hit Done. Then we head back over to n8n, paste the access token in right here, and hit Save. Now I'm going to close this window. Then, for the base here, we're going to select Videos, and we're also going to select Videos for the table. And boom, just like that, you can see that we have our Title, Status, and Source all coming in right here. We don't need Source quite yet, because our videos aren't generated yet, so I'll delete Source out of this selection. That's not going to delete the column in Airtable; it just removes it from the change we're about to make, so this step isn't updating the source. Next, we just need to change the status to In Progress, and then we need to pull in our title, which has already been generated over in the Script node. So if I open up the script right here and drag the title into the Title field, it pulls that in dynamically. This is the title for that video, and we're going to use it when we post our videos later on. Now I'll hit Execute Step.
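For reference, the record this step creates maps straight onto the columns we just built. A hedged sketch of the payload shape (the n8n Airtable node assembles this for us, so this is purely illustrative):

```javascript
// Illustrative shape of the record the Airtable node creates. Field names
// match the columns set up above ("Title", "Status"); the ID column is an
// auto number, so Airtable fills it in on its own.
function buildRecord(title) {
  return {
    fields: {
      Title: title,          // pulled dynamically from the Script node
      Status: "In progress", // flipped to "Posted" later in the workflow
    },
  };
}

console.log(buildRecord("Why Growth Feels Like Losing People"));
```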
As you can see, it's created a new record in our Airtable. It set the ID to 4, because that was the next number available, gave it a title, and said, "Hey, this video is in progress." So now, if I go back over to our Airtable, close out of the Builder Hub, and go into our Videos base, I'll see that it's created that new record right here. You can use this for all kinds of different automations: a CRM, inventory, different AI responses, research, all kinds of things. And if you're interested in learning about Airtable, which is kind of the engine that drives a lot of these automations behind the scenes, you can check out my full Airtable tutorial on YouTube; just search "Airtable tutorial Productive Dude" and you should find that full guide. Next, let's head back over to n8n and continue with our automation by going back to the canvas. I'm going to click this plus button right here, type in "code," and pull in a Code node. This allows us to run code inside of n8n, but don't worry, you're not going to have to write any code, because I've provided it for you. What this code does is look at how long the script we generated is and ask: how many videos are we going to need to fill this script? How many clips, how many scenes, to construct the final video? That's what this Code node does. And since we generated that transcript with timestamps, we can run code that calculates how many video clips we need to generate for the final product. So now we'll head over to AI Pioneers, and right down here you'll see Convert to Scenes Code; you can just copy this code I've already created for you, and it's going to do the job.
So copy that, head back over to n8n, replace what's in the node with the new code, and hit Execute Step. As you can see, it gives us a bunch of information: it's broken things down into different scenes, each with its own part of the script, so it knows exactly how many scenes it needs to create and what will be said during that part of the video. Now, I'll go back to the canvas, and the next thing we need to do is prompt the video clips. We have this Code node right here that's said, okay, we need this many prompts, and these are the specific parts of the script the video will be saying at each portion of the clip. So what we can do is duplicate the Script node we've already created, because we're going to call OpenAI again, and this time it's going to generate video prompts so we can actually generate those videos. I'll hit Duplicate and connect this to the Code node. We're going to click into this one and rename it to Prompt, with a capital P. You can leave the credential the same, because we've already created the OpenAI credential, and you can leave the model the same, because we're using the same model. But for the prompt, we need to swap it out. So I'm going to delete what was in there, head over to Pioneers, and scroll down to the prompt system prompt; how meta is that? I'm going to grab the entire prompt right here, copy it, head back over, and paste it in. Boom. Now it has all of the information it needs moving forward. It even knows the output it needs for this specific video, because in this case we're generating six scenes, so it needs to produce that many prompts.
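Going back to that convert-to-scenes step for a second: the community code does the real work, but the core idea, grouping the timed transcript segments into clips of at most 8 seconds (the Veo clip length used here), can be sketched like this. The helper names and the 8-second cap are my assumptions, not the community's exact code:

```javascript
// Group timed transcript segments (from the SRT) into scenes no longer
// than maxLen seconds, carrying along the script text for each scene.
function toScenes(segments, maxLen = 8) {
  const scenes = [];
  let current = null;
  for (const seg of segments) {
    const fits = current && seg.end - current.start <= maxLen;
    if (fits) {
      // Extend the current scene to cover this segment too.
      current.end = seg.end;
      current.script += " " + seg.text;
    } else {
      // Start a new scene.
      current = { start: seg.start, end: seg.end, script: seg.text };
      scenes.push(current);
    }
  }
  return scenes.map((s) => ({ ...s, duration: Math.ceil(s.end - s.start) }));
}

const scenes = toScenes([
  { start: 0, end: 3, text: "Hook line." },
  { start: 3, end: 7, text: "Build line." },
  { start: 7, end: 12, text: "Resolution line." },
]);
console.log(scenes);
```

This is also why the clip count is dynamic: a longer voiceover simply produces more scenes, which is what makes the workflow adapt between five-clip and seven-clip videos.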
So I'll execute this step now, and what we should see on the other end is six different video prompts. As you can see, it's giving different descriptions of the videos we want to generate. The first scene: a quiet morning unfolds with a person sitting alone at a coffee shop window, watching people pass by outside; their gaze is reflective and still, soft daylight casting gentle shadows on their face. So it's just a description of the video we want to create, and we're going to pass that to another AI tool that will create that scene for us just from this text. It goes on to share the other scenes as well, all of these prompts generated entirely by ChatGPT. Now I'll go back to the canvas, and we're going to click plus right here, and what I'm going to type next is "merge." We're going to click on this blue Merge node right here. What this does is let us pull different branches of our automation back into one item. Since there's a lot coming out of this, we're just going to merge it all back into one item and keep ourselves on the right track. Once you've dropped in that Merge node, click on the Mode right here, then click Combine, and we're going to combine by all possible combinations. We're going to pull in some data from earlier in our workflow and move it on to the next step. Now I'll hit Back to Canvas, and we need to configure this. Right now, we have the Prompt node going into input one, and we need to bring our Code node into input two, just like I did there. So now we have both of those nodes connected to the Merge. I'll go ahead and run the merge, and if I click into it, you'll see that it organizes all of the data we've already passed through. It turns it into one item, right? It gives us the scenes and the scene prompts all in one item right here.
Now we need to format all of the information that just came out of here, and to do that, we're going to use another Code node. Again, don't worry, I've given you the code to format this the way it needs to pass through. So if I click this plus button right here and type in "code," you can click on the Code node again. This time, we're going to rename the Code node to Merge Scenes. Then we'll delete the starting code, head back over to the community, and right here where it says Merge Scenes Code, we're just going to copy all of this. Then head back over to n8n, paste it in, and hit Execute Step. Great, now things are super organized: it gives us the script, the duration, and the prompt for each portion of the video. It's really broken down the scenes and organized them so we can use this data moving forward. Now I'll go back to the canvas, and I'm going to split these scenes out into six separate jobs. I'm going to click the plus button right here, type in "split," and you should see Split Out. Click on Split Out, and we need to select what we want to split out. In this case, we want everything under scenes: scene one, scene two, scene three, four, five, and six as separate items coming through. To do this, we're just going to type in "scenes," which references this outer object right here. Then we'll execute the step, and you should see six items on the other end. If I go back to the canvas, we've been seeing one item for this entire automation, but now it's split into six items. And the nice thing about this automation is that it's dynamic, because these code nodes are really smart: they tell the automation exactly how many clips we need for that particular video.
And some videos might be five clips while some might be seven clips, so this automation is built in a very flexible way. All right, now that we have this split out into six different jobs, we can pass it on to the video model by connecting to Kie.ai. Kie.ai is where we're going to access the Veo 3 Fast model. Because Google makes it difficult for small companies and individuals to access these models directly, Kie has brought this together for us, and they only charge a small upcharge. They're by far the cheapest provider I've found that brings all of these APIs together: Google's different models, OpenAI's models, and everything else you could imagine. To get there, type in kie.ai and hit enter. As you can see, Kie provides affordable and stable AI APIs for seamless integration, and that's exactly what it does. All we have to do is sign up and log in. Once you get signed in, it's going to look something like this on their dashboard. You're going to need to load in some credits on the billing tab in order to run this automation. Once you've loaded in some credits, click on API key, and then click on create API key. I'm just going to call this demo so I remember to delete it later. Then I'll hit create, click on copy, and close this window. Then we'll head back over to n8n, click the plus button right here, and type in HTTP. We'll click on HTTP Request, and this is how we're going to access Kie. We'll enter the URL in a moment, but first let's get the authentication figured out since we already have your API key. Click on the authentication dropdown, click generic credential type, and under generic auth type, click on Header Auth. Then here, we're going to create a new credential. I'm just going to call mine demo key.
Then right here in the name, type in Authorization. For the value, type in Bearer, capital B, followed by a space, and then hit paste to paste in our key. Then just go ahead and hit save. Now, I'll close this and we'll change the method to POST. I'll head over to the community, grab the request video endpoint right here, and paste that into the URL. Then we're going to rename this node to request video, and I'll click away. Next, we just need to select send body and change specify body to using JSON. Then you can head over to the community and grab the request video JSON right here. This JSON passes in the prompt, the model, and the duration for us. As you can see, it's on Veo 3 Fast, it passes the duration from that previous node I showed you earlier, where it shows the duration of 8 seconds, and it also passes the prompt for each of those jobs separately. So, let's head over and paste that into the JSON body. Then I'll go back to canvas and drag this down a bit right here. I'm not going to run this node quite yet. First, we're going to add a couple of other nodes so that we can make sure the video goes all the way through and then combines together. The next node we're going to add is called Aggregate. Six items are going to come out of this request video step, and we want to bring them all back together because those jobs need to stick together so we can edit the video later down the line. But we want them to go through request video as separate items, which is why we had six items before. So I'll click the plus button here, type in aggregate, and click on the Aggregate node. Then we'll leave it on individual fields.
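As a rough sketch, here's what that header-auth credential and per-job JSON body amount to in Python. The model identifier and field names are my assumptions standing in for whatever the community template actually sends, and the key is a placeholder:

```python
import json

API_KEY = "YOUR_KIE_API_KEY"  # placeholder, not a real key

# Header auth credential: name "Authorization", value "Bearer <key>".
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}

def build_request_body(scene):
    """Build one generation job's JSON body: model, that scene's prompt, duration."""
    return json.dumps({
        "model": "veo3-fast",            # assumed model identifier
        "prompt": scene["prompt"],
        "duration": scene["duration"],
    })
```

Because six scene items flow into the node, this body gets built and POSTed six times, once per clip.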
We don't really have to change anything about Aggregate here, so I'm just going to go back to canvas. Then we need to add the next node, which is a Wait node. This is basically just going to pause the automation for a specified amount of time so that those videos can generate. Remember, we're generating all of this with AI, so the videos coming out the other end are going to take a little while. They're not going to take too long, but we need a wait so that if the video is not ready, the automation won't continue and run into an error. So, I'll click the plus button, type in wait, and click on the Wait node. This is basically just a timer. We're going to leave it on seconds, and for the wait amount, I'm going to change it to 140 seconds. Then I'll click back to canvas, right-click on this node, hit rename, and rename it to wait for video. I'll hit rename. Then we'll need to add another Merge node. I'll click the plus button right here, type in merge, and click on Merge. This one we're going to rename to remerge. I'll leave the mode on append, but I want to change the number of inputs to three. Then I'll go back to canvas. By default, wait for video is going into input one, but we want to remove that connection and connect it to input two instead. For input one, we're going to bring in the split out items so that we have the context of what those look like when they're split out. And finally, we're going to connect request video to input three as well. It should be going into Aggregate, but it should also be going into input three right here. Then we'll just get things nice and organized, and we'll aggregate again to get the cleaned-up data from the remerge node. So I'll click the plus button, type in aggregate, and click on Aggregate. We're going to rename this one to single item.
We want to get it all into one nicely organized single item. For aggregate, click on this and hit all item data into a single list. We can leave this on data and leave include on all fields, and then I'll hit back to canvas. Next, I'll click the plus button and type in code, because we just need to format this data again. I'll click Code, and this one is going to be called get task ids. What this does is: when those videos are being generated, Kie generates a task ID for each video. So it's going to create six different task IDs in this case, because there are six different videos. We need to grab all of those and organize them so that we can then request the final videos. For this code, delete what's in there, head over to the community, go to the get task ids code, and copy it. Then paste that into the code section and go back to canvas. We've already posted the request to generate those videos in Kie, but now we need to get them. I'll duplicate this Kie node right here, drag it down to the next step, and connect it to this code node like so. Then I'll rename this node to get final videos and hit rename. Now we need to click into there and change some settings. We need to change the method to GET, because we're not posting a request; we're actually getting some information from the server. Then we'll head over to the Skool community, grab the get final videos endpoint right here, copy that, and paste it in for the endpoint over here. The credential type is already set how it needs to be. But for the send body, we're going to change it to using fields below, and we're going to type in taskId, with a capital I and a capital D, and paste in a value right here.
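The idea behind the get task ids code can be sketched like this; the nesting below is a guess at the aggregated shape, so your real field names may differ:

```python
# Hypothetical aggregated output: one entry per request-video call, each
# carrying the task ID the video API returned for that generation job.
aggregated = {
    "data": [
        {"data": {"taskId": "task-001"}},
        {"data": {"taskId": "task-002"}},
        {"data": {"taskId": "task-003"}},
    ]
}

def get_task_ids(aggregated):
    """Collect every generation task ID so each finished video can be fetched later."""
    return [entry["data"]["taskId"] for entry in aggregated["data"]]
```

Each collected ID then feeds one get-final-videos request, which is how the six jobs stay matched to their six clips.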
So that value is going to be in the Skool community as well. It's just this simple little variable for the task ID. I'll copy that, and we're going to be grabbing it from that code node, so I'll paste that value in right there. Then we'll check send headers, and I'm just going to type in accept. For the value, I'm going to type in application/json. This just tells the server what format we want to receive on the other end of the HTTP request. Next, we'll go back to canvas and aggregate again. I'll click that plus, type in aggregate, and click on Aggregate. This is going to give us a list of the final clips that are all generated, now in context for us. I'll click on this right here and rename it to list final videos. So, first we get the final videos, then we list those final videos, and I'm going to change this to all item data. For the output field, we can leave that on data and include all fields. Go back to canvas. Now, remember how we generated that audio file using this ElevenLabs node up here? We need to merge that back into the context down here. I'm just going to click the plus button to add a node and type in merge. Click on Merge. This one we're going to call audio merge, because we're pulling the audio back into context. This one's going to be combine, and we're going to combine by all possible combinations. I'll go back to canvas, and as you can see, input one is connected to ElevenLabs, so you're just going to drag and connect that if you haven't already. Input two is going to be this list final videos, so we'll connect that one next. Now, right now in our automation, ElevenLabs is outputting a file like this, but we need to pass it into the video editing step, which pieces all of the audio and clips together, as a URL. In order to get that URL, we're going to need to host that audio file somewhere.
We're going to need to upload it somewhere. So, the next step is to upload that audio file to Cloudinary, a free media hosting site that produces URLs an automation can actually work with. You see, most humans can click any URL, and if there's a video or an audio clip on the other end, they can listen to it or watch it. But APIs, the technologies connecting these automations, need that URL to be in a very specific format. So when I say we're using Cloudinary, you might ask: what's Cloudinary? Why don't we just use Google Drive or Dropbox? Well, I know Cloudinary is going to be able to do this, and we're also going to be using Cloudinary for another step here to transform our videos. So I'll go back to canvas, click the plus button, type in HTTP, and click HTTP Request for the Cloudinary account. This one's going to be renamed to upload audio. We're going to change the method to POST and then go grab the URL from the community. We have the upload audio endpoint template right here. I'll just copy it, head back over to n8n, and paste in that URL. Now, we need to head over and create a Cloudinary account so that we can get our cloud ID. This is basically how you authenticate what you're uploading to. Now, this isn't as secure as an API key. You can set up API keys with Cloudinary, but I find it's not 100% necessary since we're just using the free tier; you're not even going to put your card in there. It's not a huge deal if this did get leaked. You don't want to leak it, but it's not going to hurt you if you do; it would just allow people to upload things to your account. So, next, we'll just type in cloudinary.com and head over there.
So, as you can see in this example here, they show how you can use this simple URL to edit files. They're using an image in this case, just adding different effects or cropping to show what Cloudinary is capable of. You can do all kinds of amazing things with Cloudinary; you can even tag different items using AI like this. It's a really sweet app. Not a lot of people know about it, but I've been enjoying using Cloudinary. Now, you can sign up for free right here, and once you sign up, get logged in and I'll see you on the other side. Once you're signed up, what you're looking for is your cloud ID, which shows up right up here. Now, this is a disabled Cloudinary account, so you can't use this cloud ID, but this is where it's going to show up when you sign up for your account. You're just going to copy whatever it says right here, head back over to n8n, and right here where it says your cloud ID, paste in that cloud ID, just like so. My cloud ID goes between those two slashes, that one and that one right there. Now we need to select send body so that we can send that audio file over. For the body content type, we're going to go with form data, which allows us to upload the file we merged in from earlier in the workflow. For the body parameter type here, we're going to click on n8n binary file. For the name, type in file, because that's the type of data we're passing in. And for the input data field name, type in output.mp3; if you remember, earlier we renamed that field to output.mp3, so that's the name we're using to pass this. Next, I'll hit add parameter. For this parameter type, we're going to leave it on form data, and for the name, I'm going to type in upload_preset. Then we need to enter the value here.
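To make the moving parts concrete, here's a hedged sketch of that unsigned upload request. The endpoint path follows Cloudinary's usual v1_1/<cloud id> pattern, and the cloud ID and preset name are placeholders for your own account values:

```python
CLOUD_ID = "your-cloud-id"            # placeholder: your Cloudinary cloud ID
UPLOAD_PRESET = "cloud_audio_upload"  # placeholder: your unsigned preset name

# The cloud ID sits between the two slashes in the endpoint URL.
url = f"https://api.cloudinary.com/v1_1/{CLOUD_ID}/auto/upload"

# Form-data body: the binary file (n8n's output.mp3 field) plus the
# unsigned upload preset that authorizes the upload without an API key.
form_data = {
    "file": "<binary audio from the output.mp3 field>",
    "upload_preset": UPLOAD_PRESET,
}
```

The response includes a public URL for the uploaded file, which is exactly the hosted audio URL the editing step needs.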
But first, you need to create your upload preset over in Cloudinary. So head back over to Cloudinary and click on your settings. Then you're going to go to the upload tab right here. Once you go into upload, you'll see that you have some different upload presets in there, or you might not have any yet. You're going to create a new upload preset, give it a name you can remember, and change the signing mode to unsigned. The reason I'm not doing this on screen is that I don't want to show my upload preset. So, just make sure it's unsigned, and then save that upload preset. Once you've saved it, copy the name of that upload preset, head over to n8n, and put it right in that value. So, if I named my upload preset cloud_audio_upload, it would look like this in the value. That's where you're going to want to put it. Once you have your cloud name in here and your upload preset in here, we're going to head to the next step. Go back to canvas, and next we actually need to rename this code node right here. I'm going to right-click, hit rename, rename it to convert to scenes, and hit rename. Then we need to add in a new code node right here. I'll type in code and click Code. This one we can leave named code. I'm going to remove the starter code and head back over to the community, and for this code node's code, we're going to grab this right here. Copy all of that code, then head back over to n8n and paste it into the code node. Now, in step number three, we're going to change out a line in the code. I'm going to go up to step three, which is right here. Step three is transform URLs to remove audio using Cloudinary.
These videos come with audio already attached: ambient audio generated by Veo 3. That audio is nice, and Veo 3 does a good job of generating it, but we want to override it with the audio from the voice-over instead, because we want the voice in the background, not random ambient sound. In order to strip that audio, we're going to put our Cloudinary cloud ID right here where it says your cloud ID. We can get that by going back to canvas, going into the upload audio node, and copying whatever you put in here for your cloud name. Then go back to canvas, click into the code node, and replace where it says your cloud ID with that ID. Then you're just going to go back to canvas. Now that all of our videos are generated and we have all of the information we need to edit them, we're going to edit them programmatically using an endpoint from fal.ai. I'm going to hit the plus right here, type in HTTP, and click on HTTP Request so that we can call FAL. I'm going to change the method here to POST. For the URL, head over to the community and grab the edit video endpoint. I'll copy this right here, go back in, and paste that in for the URL. We're going to rename this node to edit video. Okay, perfect. Next, we'll select authentication generic credential type, and for the generic auth type, we're going to select Header Auth. Finally, for the Header Auth, we're going to create a new credential. I'm just going to call this demo fal key. For the name, type in Authorization. For the value, type in Key, capital K, lowercase e-y, and then hit space. So it's just Key and a space. Then we're going to go get that key.
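The URL rewrite that step three performs can be sketched as below. I'm assuming a Cloudinary fetch-style delivery URL with the ac_none (audio codec: none) transformation, which serves a remote video with its audio track removed; the exact URL format in the template may differ:

```python
CLOUD_ID = "your-cloud-id"  # placeholder: the same cloud ID as the upload node

def strip_audio(video_url):
    """Wrap a generated clip's URL in a Cloudinary fetch URL that drops the audio track."""
    return f"https://res.cloudinary.com/{CLOUD_ID}/video/fetch/ac_none/{video_url}"
```

Feeding the editing step these rewritten URLs means the clips arrive silent, so the voice-over is the only audio in the final cut.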
To get that key, type in fal.ai and head over to that website. Now FAL is very similar to Kie.ai, but they have a few extra models we can use over here. In fact, they expose FFmpeg, a long-standing technology that's the cheapest way to edit these videos together just by sending a simple HTTP request. FAL is known as a generative media platform for developers. Like I said, it's very similar to Kie, except these guys are going to allow us to edit our videos, so we're going to use FAL to do that. Editing the videos is extremely cheap, and we're also going to be adding our captions using FAL. You're going to log in if you already have an account, or hit get started if you don't. Once you're logged into FAL, go to the usage and billing tab right here. Then on the left-hand side of the screen, somewhere in this area, you'll see a button that says billing. Click on that billing button and load in some credits with FAL. You don't need as many credits with FAL because this part is a lot cheaper: it's going to cost us around 11 cents per video or less to generate the captions, piece the video together, and do the editing for us. Once you've loaded in your billing credits, go over to API keys and hit add key. For the scope, just leave it on API, and for the name, I'm going to type in demo fal, then hit create key. Then I'll just hit copy key and hit done. Next, head back over to n8n, and we're going to paste it after what we already wrote in the value, which is Key with a capital K and a space. Paste in your key there and hit save. Now I can close this credential window and we can continue with the video editing step. So we need to select send body and change specify body to using JSON.
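Before grabbing the community JSON, it helps to see roughly what this request amounts to. This Python sketch shows the Key-style auth header and a hypothetical compose body; the track/keyframe field names are my illustration of an FFmpeg-style compose request, not FAL's exact schema:

```python
import json

FAL_KEY = "YOUR_FAL_KEY"  # placeholder, not a real key

# fal.ai header auth: the value is "Key <your key>", not "Bearer <key>" --
# an easy mix-up, since the video-generation node earlier uses Bearer.
headers = {
    "Authorization": f"Key {FAL_KEY}",
    "Content-Type": "application/json",
}

def build_edit_body(clip_urls, audio_url):
    """Hypothetical compose request: the clips in order as the video track,
    with the voice-over as the audio track. Field names are illustrative."""
    return json.dumps({
        "tracks": [
            {"type": "video", "keyframes": [{"url": u} for u in clip_urls]},
            {"type": "audio", "keyframes": [{"url": audio_url}]},
        ]
    })
```

The community template's JSON body fills in the real schema; the point is simply that one POST carries every clip URL plus the hosted voice-over URL.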
And we're going to head over to the community here, and for the edit video JSON body, I'm just going to copy what we have right here and paste that into the JSON. This is going to send off all of our video URLs, and it's basically going to generate the full video all pieced together. I'll head back to canvas, scroll down a little, and add in a new node. This is going to be another Wait node, and it's going to wait for those videos to be edited. We'll click Wait and I'll go back to canvas. I'm just going to drag it down here really quick, then click back into the node. This one we're going to rename to wait for audio video merge, just like that. For the interval, we're going to change the wait amount to 80 seconds, and then I'll go back to canvas. Then we're going to duplicate the FAL node we created over here, drag it in after the Wait node, and make a connection there. Now I'll click into here and rename this to get connected video. We're going to change the method to GET, and I'm going to head over to the community and grab this right here, the get connected video endpoint. I'll copy that JSON response URL, paste it in for the URL, and make sure it's on expression when you paste it; it should be grayed out and look something like that. We already have our credentials in here for FAL, so that should be good to go. Now we're going to turn off the send body and turn on send headers. Right here, we're just going to type in accept, and in the value, I'm going to type in application/json. Then we'll go back to canvas. So what we've done here is we've gone and listed all of the final videos, then we merged in the audio from that original ElevenLabs step.
After that, we uploaded the audio to Cloudinary so that we could get a clean URL. Then, in the code node, we pieced that all together into a request that the edit video node right here can understand, so it knows how we want those videos edited together. That edit request gets sent off, we wait 80 seconds for it to complete, and then we get the final video in one consolidated URL. Now we just need to add the captions and then upload the videos; we're getting super close here. Next, we want to set a variable for the video that comes out of this. I'm going to click the plus button right here and type in set, and you should see this Edit Fields (Set) node. Click on that, and then we're going to rename it to set video variable. Then we'll click into here, and for the name, we're going to type in video. I'll head back over to the community and copy this right here under set video variable, then head back and paste that in for the value. Now, I'll go back to canvas. To add the captions, we're just going to duplicate the original edit video node from FAL and connect it to set video variable right there. Then I can click into here and rename it to get captions. For the URL, grab the get captions endpoint out of the Skool community right here, and I'll just go ahead and paste that in. We can leave the FAL credentials the same. Then I'll grab the JSON body from the Skool community right here, and we're going to change specify body to using fields below. I'll name this one video URL, and for the value, I'm just going to paste in that expression. Now go back to canvas, and we'll duplicate this Wait node right here, because we're going to need to wait for the captions now. I'm going to rename this one to wait for captions.
And then I'll hit rename, and we'll connect that. We can just leave this on 80 seconds. I'll go back to canvas, and now we'll duplicate this get connected video right here, pull it over to right here after the Wait node, and get it connected. Now I'll click into here, and we're going to get the final render. Our video is basically complete at this point; now we just need to upload it. So we're going to rename this to get final render, and everything here can stay the same since we're just using FAL. You just need to change the name of that node and you're good to go. Now, there's one final tool we'll need to pull this off, and that's called Blotato. If you head over to the community and click on the Blotato link, that's going to bring you over here to Blotato. From here, you can try for free or log in, but you are going to need a paid Blotato account to get this working. You don't need to go with their super expensive plan; just stick with their $29 a month plan and you're going to get unlimited uploads. Blotato does a lot of this video automation itself, but they also have their own upload API, which is going to allow you to upload your viral posts. You can hit try for free to sign up and click through their onboarding tutorial; just get through the whole thing. Once you're done with that, I'm going to show you exactly where to go to hook up your different platforms like TikTok, Instagram, and YouTube. Once you're logged in, click on the settings cog in the lower left-hand corner. That brings you to this screen where you can connect all of your accounts. Now, like it says here, log into your social account before connecting. Connecting is basically when you click on one of these buttons to connect your social media accounts, like I've done here. But what you're going to want to do is, in a separate window, go log into the specific accounts that you want to connect to Blotato.
Then, when you click one of these buttons, like login with Instagram, it just walks you through the process, and since you're already logged in, you won't have to click through all of the login forms. That's what it says to do right here: log into your social accounts before connecting. And for Facebook, make sure to select the page individually; do not select connect all pages. We're not going to be using Facebook, so that shouldn't be an issue. What you're going to do is log into YouTube, Instagram, and TikTok. Once you log into those different accounts, you're going to see them appear right down here. Next, we need to scroll down and hit copy API key right here. You're going to get prompted with a sign-up form where you can sign up, if you haven't already, for their paid product. Like I said, just sign up for the $29 per month version and you're going to get unlimited uploads. There's no other tool out there that I've found that's that cheap and allows unlimited uploads to these platforms. Keep in mind, though, if you do run into issues with uploading, it's not Blotato's fault; it's actually these other platforms. I believe YouTube is going to cap you out at around 12 posts per day, and same with TikTok. You don't want to post more than 10 or 12 times per day on either of these platforms, and I wouldn't try it on Instagram either. That means you don't want to be posting more than about once every 2 hours, because you're going to get capped out with rate limits. They want to avoid bots, which is pretty much exactly what we're building right now. They don't want people posting hundreds or thousands of videos in a day, which makes sense, because these platforms have to store those videos and figure out what's good enough to go to the top of their algorithm.
So, even though we're completely automating this and uploading way more posts than you could on your own in a day, you still want to be thoughtful about how much you upload. Basically, you're just going to copy that API key right here, and then we'll head back over to n8n. Next, I'm going to click the plus and type in HTTP, and we're going to click on HTTP Request so that we can connect to Blotato. This one is going to be renamed to upload. For the method, we'll change it to POST. For the authentication, I'm going to click and select generic credential type. For the generic auth type, click on Header Auth, and then for Header Auth, click create new credential. For the name, type in blotato-api-key, and then paste that API key right there as the value. I'm going to rename this credential to Blotato demo and hit save. Now I'll close this and head back over to the community, where we can find the upload endpoint right here. Okay, upload endpoint. You're going to copy that and paste it into the URL. Next, we're going to turn on send body using fields below. We're going to make the name url, and for the value, we can get this from the Skool community as well, just right here, upload URL. Copy it and paste it into the value. Now go back to canvas, and we're just going to drag this down here for organization. Now that we've uploaded that to Blotato, we also have to post it to our different platforms, because you upload the video file itself first, and then you choose which platforms you want to send it to. So, what we're going to do next is click the plus button right here, type in set, and hit Edit Fields (Set). I'm going to rename this to platforms, and then we're going to click in here, and for the first one, I'm going to type in YouTube.
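The upload node configured above boils down to a small authenticated POST. In this sketch, the header name and body field mirror how the node was set up (a blotato-api-key header and a url field); the key and URL are placeholders:

```python
import json

BLOTATO_API_KEY = "YOUR_BLOTATO_KEY"  # placeholder, not a real key

# Header auth credential: the API key travels in a custom header.
headers = {
    "blotato-api-key": BLOTATO_API_KEY,
    "Content-Type": "application/json",
}

# The body is just the final render's URL; Blotato pulls the file in and
# returns its own hosted media URL for the per-platform posting steps.
body = json.dumps({"url": "https://example.com/final-render.mp4"})
```

Separating the media upload from the per-platform posts is what lets one hosted file fan out to YouTube, Instagram, and TikTok without re-uploading it three times.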
Then I'll click add new field, and we'll type in Instagram. Add another field and type in TikTok. Perfect. Now we'll head over to Blotato and click copy account ID next to YouTube, and paste that right here. We'll do the same for Instagram, pasting that into the Instagram value, and finally the same for TikTok, pasting it into the TikTok value. Now I'll go back to canvas, and we just need to add three more nodes to send it off to these different platforms. I'll duplicate my Blotato upload node right here, drag it over, and connect it. This one we're going to rename to YouTube; hit rename. Then we'll click into there, and right here where it says media, we're going to change that to the word posts. Down here, for specify body, we're going to change that to using JSON. Then I'll head back over to the community, copy the YouTube JSON body right here, and paste it into the JSON body. Now, for YouTube, you're going to want to change the text right here to reflect your call to action for the video. If you have a specific description or call to action you want to add, you can do that here. Mine just says learn faceless YouTube, and then I have skool.com/ai-foundations, so I'm giving them a link in that video. Now I'll close this, go back to canvas, and duplicate the YouTube node right here. We'll connect it, and I'm going to rename this one to Instagram and hit rename. If I click into here, the only thing we need to change is the JSON body down here. I'll delete that out of there, go into the Skool community, copy the Instagram JSON body right here, and paste that into the JSON. Then I'll go back to canvas, duplicate this again, and we'll do the same for TikTok.
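Each of these platform nodes sends a JSON body following the same pattern: the account ID from the platforms node, the target platform, the caption text, and the uploaded media URL. The structure below is my illustration of that pattern, not Blotato's exact schema:

```python
import json

def build_post_body(account_id, platform, caption, media_url):
    """Hypothetical per-platform post body; field names are illustrative."""
    return json.dumps({
        "post": {
            "accountId": account_id,
            "target": {"targetType": platform},
            "content": {"text": caption, "mediaUrls": [media_url]},
        }
    })
```

That's why duplicating the YouTube node and swapping only the JSON body is enough to cover Instagram and TikTok: the endpoint and auth stay identical, and only the account ID, target, and caption change.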
So, I'll connect that, click into here, rename it to TikTok, remove the JSON body, grab the TikTok body from the community, and paste that into the JSON body there. Finally, we have one last node: we need to log back to Airtable. I'm just going to select this Airtable node up here, duplicate it, drag it in, and connect it. We'll rename this to log finished video. This is just going to mark that the video is complete and uploaded, and we're also going to pass the video into Airtable so we have a place to monitor this system when it's running. I'll click into log finished video, and instead of create, we want to change the operation to update. Select videos once again, and then select videos for the table. Where it says columns to match on, click the select right here and hit ID. In the community over here, you're going to see we have the log finished video ID. Copy that ID, paste it in, and make sure it's on expression. Then go back over to the community once more and copy the log finished video source as well. Head back over, add a column to send, hit source, and paste in the source right there; make sure it's on expression. Then we'll delete the status and the title fields from this node. Not because we're actually deleting them in Airtable; those will remain, but we just don't want to update or override them, and there's no reason to do that. Okay, perfect. Now, I'm just going to click refresh, since it says it wants to fetch the columns right here, and we'll delete this new little lowercase id that pops up; you want to use the capitalized ID. And I'll delete that. Now, when you get to get final videos, don't hit play on it quite yet, because I made a small mistake: you're going to want to click send query parameters.
Then, instead of the task ID being in the body here, we're just going to copy the task ID and put it in the name field there, then copy the query parameter value and paste it in there. So where it says task ID and value, we're entering that as a query parameter by hitting send query parameters and adding it here, rather than using send body. So I'm going to turn off send body now, and then we should be good to execute this step. I'll hit execute step, and it may go back to the waiting step here. Then we'll click the play button on get final videos again once it goes all the way through, then list final videos.

Now, once you save and refresh with that change to get final videos, just hit execute workflow and we're going to run it from the top. We're going to watch it go through and add the script, then it's going to talk to ElevenLabs for the audio and get that file. Then it's going to turn it into transcripts and log the record over to Airtable. Then it's going to convert that transcript to scenes, create prompts for each of those scenes, and merge that all back together. Then it's going to bring the scenes together, split it out into six jobs in this case, and those six jobs are going to be requested as videos. It'll aggregate all of those together, and then we're just going to wait for the videos to generate. This is going to take a little while, so just take a quick break and watch something else on YouTube in another tab while you wait, and when it's done you can come back.

All right. Once that's done, it's going to go fetch the final videos and then continue on with uploading the audio to Cloudinary and creating a packet to edit the video with this code node right here. Then it's going to edit the videos, which is going to take another 80 seconds right here, and then it will connect the videos in a moment. All right.
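The fix on get final videos boils down to where the task ID travels: appended to the URL as a query parameter instead of sent in a JSON body. A quick sketch of the difference, using a placeholder endpoint (not the video API's real URL) and an assumed parameter name of taskId:

```python
# The "get final videos" fix in a nutshell: the task ID travels in the URL
# as a query parameter, not in a request body. The endpoint path and the
# taskId parameter name are placeholders, not the API's real values.
from urllib.parse import urlencode, urlparse, parse_qs

def build_status_url(base_url: str, task_id: str) -> str:
    """Append taskId to the URL (what 'send query parameters' does)."""
    return f"{base_url}?{urlencode({'taskId': task_id})}"

url = build_status_url("https://api.example.com/v1/video/record-info", "task_001")
print(url)  # → https://api.example.com/v1/video/record-info?taskId=task_001

# A GET like this carries no body at all, which is why we turn "send body" off.
assert parse_qs(urlparse(url).query)["taskId"] == ["task_001"]
```

If you leave send body on with a GET status check like this, many APIs just ignore the body or reject the request, which is exactly the small mistake being corrected here.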
Once it's done waiting at that step, it's going to go off and get the connected video and then add captions to that video, which will take one more 80-second run right here. Then we're going to finally get that final render of the video and upload it to our social platforms. So we just have 80 more seconds to wait and then this automation will complete. And remember, if you want to learn the ins and outs of how I created all of these nodes and the thinking that went into them, you can join AI Foundations, our premium community, where we explain all of these nodes step by step and how they work. I even explain how to generate these code nodes with the help of AI. We go over the confusing nodes like the merge and aggregate nodes, and we show you advanced prompt engineering techniques for these AI agent nodes.

Now, this automation is completing, and as you can see, it's uploaded to all of our social platforms. But we did get an error here on the last node when it was logging back to Airtable. So let's see what the issue was. If I click into this Airtable node right here, let's see: "The node log doesn't exist." Okay, that's because the node is actually called create a record. I forgot to rename this node to log, so that was the issue. If I were to rename it to log, it would work the next time. But in this case, I'm just going to click into here, and since we already have the node, you can just delete this ID right here, click on the create a record node, and drag the capital-letter ID into that ID field. Then we can just hit execute step again, and now it should work. It just didn't have the right node name in the expression. It should have said create a record, since that is the actual name; in the original template I had this node named log, which is why it didn't go through. But there, it went through, and we can check over in our Airtable what that looks like. So let's see here.
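That error is a good illustration of how expressions in the workflow reference other nodes by their display name, so renaming a node breaks any expression still pointing at the old name. Here's a toy model of that lookup behavior in Python; this is just an analogy for the idea, not n8n's real internals:

```python
# Why the "node log doesn't exist" error happened: expressions look other
# nodes up by display name, so an expression written against a node named
# "log" breaks once the node is actually called "create a record".
# This is a toy model of that lookup, not n8n's real internals.

node_outputs = {"create a record": {"id": "recABC123"}}

def resolve(node_name: str) -> dict:
    """Mimic an expression that reads another node's output by name."""
    if node_name not in node_outputs:
        raise KeyError(f"The node {node_name!r} doesn't exist")
    return node_outputs[node_name]

try:
    resolve("log")                 # stale name from the original template
except KeyError as err:
    print(err)                     # the same kind of error we hit above

print(resolve("create a record")["id"])  # → recABC123
```

So the two valid fixes are exactly the ones described: rename the node back to what the expression expects, or re-drag the field so the expression picks up the node's current name.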
Okay, it still says in progress. Let's see why it did that. Yeah, so we have to add a column here for status and then change that to posted. If I run that one more time, then in the future when I go into Airtable over here, it should say posted. And as you can see, we have our final video, and it's been uploaded to all of our platforms. So, let's check out this video.

"You didn't make a big move today, but maybe you didn't spiral either. You answered one email, folded two shirts, got out of bed when it felt heavy, and maybe that's all you had in you. It's easy to dismiss it, to think, 'I didn't do much.' But the truth is, it doesn't always look impressive. Sometimes it's whisper quiet. Sometimes it's just staying upright when your body wants anything but. That counts, too. You don't owe anyone proof of your progress. The fact that you showed up in any capacity is enough for today. Quiet consistency builds louder than people realize. Let that be enough for now."

All right, perfect. So that video turned out quite nice. But if you want to do this for different niches, I have the exact prompting for that in our premium community, or you can go into the custom script right here and just add in what you want for the niche. Make sure that you keep the output format the same, but in terms of the examples down here, you're going to want to switch these out so they fill in the style you want. You'll need about four examples here, and you can use AI to create them: just copy these examples, take them over to ChatGPT, and tell it to switch them out for your niche. You'll also want to switch the niche up in here somewhere. Let's see: right here where it says creating motivational short form scripts, you're going to change it to whatever your niche is for that portion right there.
Then, if you go back to canvas, you might also want to go into the prompt right here and change out some of this prompt instruction. I would leave the important rules, because I've inserted rules that apply to all AI videos, and you should also leave the scene count. That's very important, and these variables are important as well. You can change things in here if the prompting is not doing something you want it to do and you want it to match your niche better.

And if you want this 100% automated, then you can swap out this first trigger node right here. Just delete it, click on the plus button, type in schedule, and click on schedule trigger. Then, right here where it says trigger interval, change it to hours if you want to trigger this every few hours or so. I wouldn't go below 2 hours between runs: if you triggered it every hour, that would be too much, but every 2 hours or more is totally fine, I think. I like to do 4 hours, so this runs every four hours. Then I'll hit back to canvas, drag this schedule trigger up here, and connect it to my script node. Now, if I activate this workflow and hit got it, this is just going to run every 4 hours for me and it's going to be completely automated.

If you want to join AI Foundations, you can scroll down to the resources in AI Pioneers and click upgrade to install in one click. That will bring you to AI Foundations, where you can get signed up for our community. You can join us on the calls, and I encourage you to come introduce yourself. Every Friday we have an introduction call, and on Thursdays we have support calls where you can jump on, share your screen, and show us what you're going through, and we can help you with any problems you might be running into with your automations in n8n or otherwise.
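Before you pick an interval, it's worth doing the quick math on what the schedule costs you per day. Using the rough per-post figures from the cost breakdown earlier in this video ($2 to $3 of video generation plus roughly 15 cents of scripts and captions, which are estimates, not exact API pricing):

```python
# Back-of-the-envelope daily budget for the schedule trigger. The per-post
# figures are the rough numbers quoted earlier in this video, not exact
# API pricing, so treat the totals as estimates.

def daily_cost(interval_hours: int, low_per_post: float, high_per_post: float):
    """Posts per day and estimated cost range for a given trigger interval."""
    posts_per_day = 24 // interval_hours
    return posts_per_day, posts_per_day * low_per_post, posts_per_day * high_per_post

# ~$2-3 of video generation + ~$0.15 of scripts/captions per post
posts, low, high = daily_cost(4, 2.15, 3.15)
print(posts, round(low, 2), round(high, 2))  # → 6 12.9 18.9
```

So at the 4-hour interval used here, you're looking at 6 posts and somewhere around $13 to $19 per day, which is also a good sanity check before you drop the interval to 2 hours and double that.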
Please leave a comment below letting me know your thoughts on this automation. And if you enjoyed it, please like this video and subscribe to my channel if you want to see more content exactly like this. I'm going to have lots of other videos coming out, and if you like these content automations, leave a comment letting me know that you enjoyed it and what type of content automation you want to see next. It just lets me know that you really enjoy these videos and that you want me to keep making them. All right, we'll see you in the next video.