Transcript for:
Cost-Effective AI Automation Tool Overview

In this video, I'm going to show you a free tool that will help you reduce the cost of your AI automations and the monthly subscriptions we need to run them. It's a free tool that lets you launch your own API server on Google Cloud, which can help eliminate many of the subscriptions we pay for every month. To start, I've been focusing on the features I need for my new fully automated faceless video generator, Content Story Magic version 2.0.

You might have seen my previous version that made videos like this. That project was officially canceled, and as of now, this new version is going to come out in about a week.

Once you deploy your own server, you can use these Make modules directly in your automations. We can turn media into MP3s. We can combine videos. We can transcribe media into transcripts and SRT files.

We can do audio mixing with video and audio, and we can even caption our videos without any costly monthly subscriptions. For example, I've got a story that I'm working on here, the LA Story YouTube demo. It currently has one scene, which is the intro scene, and that intro scene is made up of four different shots.

A freeway scene made with Midjourney and Luma Labs, a video of Griffith Observatory that's panning to the left, a beach scene where we can see the waves crashing up against the shore, and then a scene flying through the LA skyline. So after using Midjourney and Luma Labs to create these individual videos, we'll need to take those final shots and combine them into a final video for that scene. So here's a simple Make automation that's going to pull up that scene. It's going to grab these four videos, pass them into the No-Code Architects Toolkit where we combine videos, and then it will update Airtable with the final video.

So I'll go ahead and run this here. It's now calling the No-Code Architects Toolkit, which again is running on Google Cloud at a fraction of the cost. Once it's done, we can open this up here, and we can see that it uploaded that video to GCP storage and then wrote that back to Airtable. So if we jump into Airtable now, we can see it took those four videos and combined them into one. I can open this up and skip through it. You can see it's now 20 seconds, because there are four five-second videos. There's the Griffith Observatory, there's the beach, and there's the city fly-through.

So the agenda for today is: I'm going to show you how you can use the No-Code Architects Toolkit in your own automations. And then, more importantly, I'm going to show you how easy it is to use AI to expand the No-Code Architects Toolkit. Because as cool as these different functions we've built out are, this is really just the beginning, and we can add any endpoint that we want. It's a completely open-source project that you can get access to, and we can all work together to add features to help each other out.

So make sure to get involved and let me know what type of functions you'd like in the No-Code Architects Toolkit to help reduce the costs in your own projects. The No-Code Architects Toolkit is live now, and it's completely free to use. You can access it at this URL here.

I'll put a link in the description, and it comes complete with full instructions on how to use it. Now, if you want access to tech support and early access to the Make.com modules that make it easier to use, make sure to jump into my active community. It's growing fast.

You can get access to a Make and Airtable course and a bunch of really cool templates. And you'll get access to my new fully automated faceless video generator, Content Story Magic version 2.0. Now let me explain a bit about how the toolkit works. Again, this toolkit will be completely open to the public, but currently, to get access to these Make modules, you do need to be a part of the No-Code Architects community. You'll be able to click this link to get early access.

If you don't want to join, you can still take advantage of the No-Code Architects Toolkit. You'll just have to call your own API through the HTTP module: you make a request and fill out all of this information using what you find in the free documentation.
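
To make that concrete, here's roughly what a call like that looks like from any HTTP client. Note this is a hypothetical sketch: the base URL, API key header, endpoint path, and field names are my assumptions, so check the free documentation for the real values.

```python
import requests

# Hypothetical sketch -- base URL, auth header, endpoint path, and field names
# are all assumptions; the free documentation has the real values.
BASE_URL = "https://your-toolkit-server.example.com"

payload = {
    "media_url": "https://example.com/video.mp4",        # the file to process
    "webhook_url": "https://hook.example.com/callback",  # optional: results pushed here
    "id": "my-request-123",                              # optional: echoed back to you
}

resp = requests.post(f"{BASE_URL}/some-endpoint",        # e.g. the media-to-MP3 function
                     json=payload,
                     headers={"X-API-Key": "your-api-key"})
print(resp.json())  # immediate reply: typically a job ID and a processing status
```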

Again, a link to access this is in the description below. By the way, if you're looking for specific instructions on how to install the toolkit on the Google Cloud Platform, make sure to check out this video. It walks you through how to do that step-by-step. I'll link it in the description below.

So now let's take a look at the different functions that we have access to. So this first module here is called Media to MP3. Basically, you can supply any video or audio file and it will convert it into an MP3 file. This is great for creating a podcast from your YouTube videos.

So you just link your video file here, and then you can supply an optional webhook and ID. When you supply that webhook, it processes the request in the background, and when it's done, it sends a request to this webhook URL with the ID you provided and the final MP3.
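
In payload terms, and again these field names are my assumptions rather than the documented ones, the media-to-MP3 call might look like this, posted the same way as the sketch earlier:

```python
# Hypothetical media-to-MP3 payload -- field names are assumptions; see the docs.
payload = {
    "media_url": "https://example.com/episode.mp4",  # any video or audio file
    "webhook_url": "https://hook.example.com/done",  # optional: process in the background
    "id": "episode-42",                              # optional: echoed back in the webhook
}
```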

So now let's go ahead and give this one a test. It's now calling the API that is located on the Google Cloud Platform, and we can see that was successful: it gave us a job ID and told us that it was now processing, and when it's done, it's going to trigger this webhook. And now you can see it just came through here. Currently, I'm sending all of the webhook requests to Airtable, where I built this simple request log. You can see this is the ID we passed the API right there, and then it actually gives us that response, that MP3 file. Notice that it automatically uploaded that MP3 to Google Cloud Storage, where if we click on it, we can actually hear that audio: "We've had clients that triple the amount of content they're producing each day." And then it also gives us a status code and the message. So that's one endpoint, media to MP3. I demoed this one earlier. This is where you can combine videos.

If we take a look inside this Make module, you can add as many videos as you'd like. Here we're combining two different videos. Again, we're supplying that webhook URL and then that ID that we want the API to pass back to us.
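
For reference, the combine-videos payload might look roughly like this (a sketch only; the field names are assumptions):

```python
# Hypothetical combine-videos payload -- field names are assumptions; see the docs.
payload = {
    "video_urls": [
        "https://example.com/clip1.mp4",
        "https://example.com/clip2.mp4",  # add as many clips as you like
    ],
    "webhook_url": "https://hook.example.com/combined",
    "id": "la-story-intro",
}
```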

So I'll go ahead and run this module as well. As always, if you supply that webhook URL, it's going to return immediately and process in the background. It's going to give you that job ID.

And then when it's done, it's going to trigger this webhook. So we can see here that's already happened. We've got our ID, the job ID, and the response.

So I should be able to click on this and see the merged video. So in this video here, I just merged the same video twice. It was a 30-second video. So if I skip through this, we'll see that the same video was used and combined into one. And the toolkit also has the ability to transcribe media.

Here you can provide a video or audio file. For the output, you can either ask for a transcript or an SRT file. SRT files are great for building captions; I'll show you that in the last step. And then, again, you can pass in the webhook and the ID.
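
As a rough sketch, with field names and option values assumed rather than taken from the docs, the transcription payload might look like this:

```python
# Hypothetical transcription payload -- field names and values are assumptions.
payload = {
    "media_url": "https://example.com/scene.mp4",
    "output": "srt",  # or "transcript" for plain text
    "webhook_url": "https://hook.example.com/transcribed",
    "id": "scene-1",
}
```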

So let's go ahead and run this one and see what happens. So here we requested the transcript. I'm going to go ahead and modify this to an SRT file and then I'll go ahead and run this module again.

And now we can see the first one just came back, and we have the transcript just like that. And then the second one came back with the SRT file, and here you can see that SRT file moving along.

Now let's take a look at the audio mixing function. What this is for is combining audio and video: in the faceless video generator, we're creating videos, but we're also creating audio voiceovers with AI. So you can supply the video URL and the audio URL, you can define the volume for each track, and you can define the length of the final video, whether it should be based off the length of the video or the audio. And as always, you can supply the webhook URL and the ID for this request.

So I'll go ahead and run this module as well. As always, it comes back immediately, gives us that job ID, and tells us that it's processing. I'll jump over to the log, and we'll see that the audio mixing process just finished. As always, we have our ID, our job ID, and the response. If I click on this, you'll see that video. It's a bit hard to tell the difference because I overlaid my own voice on my own video, but again, what happens here is that it overlays this audio on top of this video at the volume that you set for each track.
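
For reference, an audio-mixing request might look roughly like this; every field name here is an assumption on my part, so check the documentation:

```python
# Hypothetical audio-mixing payload -- field names are assumptions; see the docs.
payload = {
    "video_url": "https://example.com/scene.mp4",
    "audio_url": "https://example.com/voiceover.mp3",
    "video_volume": 0.2,  # background track level
    "audio_volume": 1.0,  # voiceover level
    "length": "audio",    # base the final duration on the audio (or "video")
    "webhook_url": "https://hook.example.com/mixed",
    "id": "scene-1-mix",
}
```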

The No-Code Architects Toolkit also comes with a Google Drive upload function. What this does is allow you to upload a file to a specific folder with a given file name in your Google Drive. And the reason why I added this is because both Make and Zapier limit the size of a file that you can upload to Google Drive to 1,000 megabytes, and that's when you're on their most expensive enterprise plan. On the smaller plans, it's even less.

And when you're doing stuff with video and audio, it's quite easy to go over that. So this will allow you to get around that and move on. We also have a tool here so we can caption videos. We can supply a video URL along with the SRT file, which we can get from this module here. Again, remember, this transcribe media function will give us an SRT file, and then we can supply a bunch of different options like the font and the font size.
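
A captioning payload might look roughly like this (a sketch; the field and option names are assumptions, not the documented ones):

```python
# Hypothetical captioning payload -- field and option names are assumptions.
payload = {
    "video_url": "https://example.com/scene.mp4",
    "srt_url": "https://example.com/scene.srt",  # from the transcribe function
    "options": {
        "font": "Arial",
        "font_size": 24,
        # plus color, border color, margins, and so on
    },
    "webhook_url": "https://hook.example.com/captioned",
    "id": "scene-1-captions",
}
```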

I'll go ahead and run this module. As always, it returns a job ID and the message that it's processing, and then back in our log, we can see that it captioned that video, and then we can watch it. I've turned off the volume, and I'll go ahead and click play. You can see the captions along the bottom here, and of course you can control the margins and where these captions actually show. You can change the color, the font, the border color, and everything else.

Here's a quick list of all the different things that you can adjust on your captions. So you can see here we're building a really low-cost, powerful set of functions that we can use inside of our automations. And again, you don't have to use the Make.com module to do this.

You can use the traditional HTTP module to call all of these endpoints directly, and here are example payloads for each of those different functions. The Make.com modules just make it a lot easier.

Again, these will be publicly released for free, but to get early access, you do need to be inside the No-Code Architects community. And if you're enjoying this video, make sure to like and subscribe to the channel. It tells me what type of content you want more of.

Now, finally, one of the last things I wanted to show you about the toolkit is how easy it is to add new functions when we need something. Let's say we're building out our automation and we find a service that we're using all the time, and we want to replicate it inside the No-Code Architects Toolkit. Well, I'm here in Claude, and that's what I did just last night.

One of the members inside the No-Code Architects community, in one of our tech calls, wanted to know if we were able to extract the keyframes from a video. A keyframe is just an individual slice of a larger video. So a short video like this, which is about 30 seconds, looks like it has about four different keyframes in it. And you might ask yourself, well, why would I want to pull out the keyframes from a video? Let's use our faceless video as an example.

Here we have a video with no audio that transitions through a number of different scenes. So if you wanted to analyze this video, you wouldn't be able to do it through the audio. But if you were able to pull out those individual keyframes and send them to something like ChatGPT Vision, you'd actually be able to use automation and AI together to get a sense of what's happening inside this video, even though there is no audio. Obviously, if it's a typical video with audio, you could just analyze the audio instead.

But in a video like this without audio, it would be impossible without these keyframes. Now I'm back here in Claude, and I want to show you the discussion I had that allowed me to easily add keyframe extraction to the No-Code Architects Toolkit. So the first thing that I did was say, hey, is there a command that would extract the keyframes from a video using FFmpeg?

And what FFmpeg is, is a free, open-source tool that allows you to manipulate audio and video. And it said, yes, there is a command in FFmpeg that will extract the keyframes from a video. Here is the basic command to do this.
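
The exact command isn't shown on screen here, but a common form of that FFmpeg keyframe extraction, selecting only the I-frames, looks like the following. I've wrapped it in Python to keep one language throughout, and the file names are just placeholders:

```python
import subprocess

# Extract only the keyframes (I-frames) from input.mp4 as numbered JPEGs.
# select='eq(pict_type,I)' keeps frames whose picture type is I (a keyframe);
# -vsync vfr stops FFmpeg from padding the output with duplicate frames.
subprocess.run([
    "ffmpeg", "-i", "input.mp4",
    "-vf", "select='eq(pict_type,I)'",
    "-vsync", "vfr",
    "keyframe_%03d.jpg",
], check=True)
```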

So before I actually tried to build out the API with code, I just wanted to make sure that it was actually possible. It gave me back the example and told me how to run it on my own computer.

And then we went back and forth, like you often do with AI, because it made a few mistakes and we went through a few different iterations. This task was a little harder than usual, and we went back and forth quite a few times. I just kept telling it that it didn't work. What was happening was it kept messing with the dimensions of the video and throwing that off. Now, quite honestly, I struggled back and forth with Claude more than I normally have to.

But sometimes you have to do that. Finally we got it, and then I was able to move on with actually adding this to the code. So then what I did was give it sample code from the existing toolkit. Here we're looking at the code of the toolkit.

So I went to some existing functions that already work. The example that I uploaded was the endpoints for audio mixing; I showed you that before, right here. And then I basically said, hey, here are two example files from the API. Please use these as templates for producing a new endpoint, extract keyframes. The endpoint will take a single video URL. So you can see what I'm doing here: I'm just literally describing what it should do. The endpoint will take a single video and the other default fields, webhook URL and ID.

It will use ffprobe, which is another tool that's part of FFmpeg, to get the video's aspect ratio and dimensions. Then it will extract the keyframes as images. And then I said it will upload each image to GCP storage, similar to the audio mixing.

And then it will return the links in the response using this JSON string format, and then I gave it the format that I wanted. And then I said, of course, it will also return the same common API responses as audio mixing, plus a few other details that aren't really that important. And then right away, you can see that it comes back with a brand-new endpoint for the keyframe extraction. I'll click on that, and it provides all of the new code for extract keyframes, and it also includes both files that we need to make this work.
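
To give a feel for the shape of what that generated code does, here's a loose sketch, not the actual generated code: I'm assuming a Flask-style app like the example files I fed it, and every name, path, and helper below is illustrative. The real endpoint also takes a webhook URL and does the work in the background.

```python
import glob
import subprocess
import tempfile
import urllib.request
import uuid

from flask import Flask, jsonify, request

app = Flask(__name__)

def upload_to_gcs(paths):
    # Stub: the real toolkit uploads each image to GCP storage and returns
    # public links; here we return the local paths to stay self-contained.
    return paths

@app.route("/extract-keyframes", methods=["POST"])
def extract_keyframes():
    data = request.get_json()
    video_url = data["video_url"]  # the single video to process
    request_id = data.get("id")    # the standard ID field, echoed back

    job_id = str(uuid.uuid4())
    workdir = tempfile.mkdtemp()
    local_path = f"{workdir}/input.mp4"
    urllib.request.urlretrieve(video_url, local_path)

    # Same keyframe-extraction command as above: keep only the I-frames.
    subprocess.run([
        "ffmpeg", "-i", local_path,
        "-vf", "select='eq(pict_type,I)'",
        "-vsync", "vfr",
        f"{workdir}/keyframe_%03d.jpg",
    ], check=True)

    image_urls = upload_to_gcs(sorted(glob.glob(f"{workdir}/keyframe_*.jpg")))
    return jsonify({"job_id": job_id, "id": request_id,
                    "response": {"image_urls": image_urls}})
```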

Then it gives us a simple explanation of what it did, and then all I did was grab these two files and add them back into the project here. Notice you can see extract keyframes here and extract keyframes there. It's essentially these two files with some slight modifications. And now you'll see that I have a new endpoint, extract keyframes, where if I send it a video URL, it will extract those keyframes and send them back to our webhook, just like this, where we can see all of the individual URLs mapped out, so we can simply click on them and see the different keyframes.

Look at that handsome guy right there. So if you want access to expert tech support and calls with me almost every single day, make sure to jump into the No-Code Architects community. You can get early access to the Make modules and join a bunch of other No-Code Architects.

Either way, I hope you found this video valuable and I'll see you on the next one.