Transcript for:
Automating Blog Content with AI Agents

I let these AI agents take full control over my blogs. They are now doing research, writing the posts, creating the thumbnails, and publishing daily blogs for my products, FeedHive, LinkDrip, and Aibase. And the results are surprisingly good, at least for now. Because who's going to read all this content? What happens if Google detects that this is AI? Is it worth the risk of deranking your websites forever? I can't tell you if it's worth the risk, but I'll try to inform you so you can make a rational decision for yourself, and I'll show you how to set up this workflow in n8n step by step.

For a long time, I was living in fear: they're not going to get away with it. In fact, Google has already made it very clear they will fight AI-generated content with all they've got. So I stayed away, because I heard people say again and again that you should never use AI to generate SEO content. But at the same time, my blogs were dead, and I simply didn't have the time to sit and manually write content often enough. I did some research, but Google's policies aren't exactly clear about this. They say they don't penalize content just because it's written by AI, but they do penalize low-quality, unhelpful content created primarily to manipulate search rankings. So mass-produced, low-value AI content clearly fits this description and would be treated as spam. But it also meant that if I could create helpful, high-quality, relevant content using AI, I'd be in the clear. If I was going to do this, I was going to do it right. I would need careful research, quoting other sources from the internet. I would need unique internal knowledge that couldn't be copied from other places. And I would need custom, on-brand thumbnails that would fit with the overall design of my website. After carefully weighing the pros and cons, I decided to go for it.

Let's break it down. Every blog post starts with a topic and a plan for which keywords it should appear under. Having an AI do proper keyword research, find content gaps, and generate real content ideas is challenging, but we do have a few options available. First, we have SerpAPI. This is essentially the API version of Google: you give it a search term, and what you get back is basically page one of Google, but in JSON format. This is very useful, since you can have an AI agent do a Google search through n8n to see what already shows up under a given keyword (there's a quick sketch of such a call at the end of this section). Secondly, we have Perplexity's Sonar models. These are AI search models that will search, retrieve, and synthesize information from the internet. This is super useful for finding real information and facts without having to manually read through a bunch of websites. Finally, we have OpenAI's new search model. It's similar to Perplexity's Sonar models, but it's currently in preview. Now, I tried a bunch of different approaches to make this work, and what I found is that there are two ways you can start this workflow. You can do all the keyword research manually and give the AI clear and precise instructions on what to write about. If you are an expert SEO and keyword researcher, you probably want to do it like this and accept that the workflow isn't going to be fully autonomous. Or you can do it the way I ended up doing it: use OpenAI's search model to find content ideas for you based on an overall description of your product and brand. This will give you much less control and most likely a smaller overall chance of ranking well for each post, but at least it will post entirely on its own.
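To make the SerpAPI option concrete, here's a minimal sketch of what that search call looks like. This isn't from the video; it assumes you have a SerpAPI key, and in n8n you'd do the same thing with an HTTP Request node instead of code:

```typescript
// Minimal SerpAPI sketch: fetch Google's page one for a keyword as JSON.
// Assumes SERPAPI_KEY is set in the environment.
const keyword = "social media scheduling tools";

const url = new URL("https://serpapi.com/search.json");
url.searchParams.set("engine", "google");
url.searchParams.set("q", keyword);
url.searchParams.set("api_key", process.env.SERPAPI_KEY!);

const res = await fetch(url);
const data = await res.json();

// organic_results holds the ranked page-one entries: title, link, snippet.
for (const result of data.organic_results ?? []) {
  console.log(result.position, result.title, result.link);
}
```

An agent can read this output to see what already ranks for a keyword before deciding whether a topic is worth pursuing.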
Let's set this up. If you don't have an account on n8n already, now is the time to create one. You also need to add your OpenAI credentials here so n8n can connect with your OpenAI account. It's super easy; simply follow the instructions here. Now create a new workflow and add a Schedule Trigger. Configure this according to how often you want to publish a new post. Now let's add an AI node: pick OpenAI, then Message a Model, and let's call it 'Search Topic Ideas'. Pick GPT-4o Search Preview from the list of models. The first prompt should be a system prompt, and we will give it a basic set of instructions like this. This agent will be writing content for FeedHive, one of my products, so make sure to change this description to something that fits your brand. All right, let's test it out. Are you ready? Yes, I'm ready. Awesome. GPT-4o Search did a great job here. I tried to have an AI agent do actual long-tail keyword research from n8n using various approaches I found in other YouTube videos, but honestly, it just doesn't work that well. So you can either do manual keyword research, or you can let GPT-4o Search go off totally on its own and come up with things. So far, I've had surprisingly good results doing it this way.

Now let's bring in another AI node, again Message a Model. Let's call this one 'Prepare Topic'. For this one, we can pick GPT-4o. Here, too, we want a system prompt. This is our content architect, and it needs somewhat specific instructions. We also want to describe the exact JSON format it needs to output; this part is very important (there's a sketch of that shape at the end of this section). And here we want to turn on Output Content as JSON. Now let's add another prompt, which should be a user prompt. Here we want to add the ideas the research AI came up with, and test it. Nice. These agents have now done their job. Let's wrap them in a sticky note to organize them nicely. This first part of the final workflow is almost finished.

Now, how do we make sure that these two AI agents will come up with new ideas and not just repeat themselves every day? After all, we don't want to end up cannibalizing our own rankings with the same exact keywords and topics. Here, too, I tried a few different methods, including storing all blog posts as vectors in a vector database so the semantic similarity of new ideas could be compared to previous posts. Again: sounds fancy, but in practice it just doesn't work that well. So I ended up simply giving the AI agents access to a plain list of summaries of previous posts. This part highly depends on which system you're using to publish your blog posts. You can use a headless CMS like Strapi or Prismic, you can use a no-code tool like Webflow or Framer, or you can commit the posts directly to a GitHub repository as Markdown files if you're hardcore like that. I use Strapi, so in my case I will make an API call to Strapi using the HTTP Request node. This is how you filter blog post entries in Strapi (the second sketch below shows the equivalent raw request), but obviously this will be different depending on your setup. There we go. Now let's add this node here, before we start researching. And let's just add a few extra instructions to the research agent. Good. And add a user message. We'll just drop all the data from our blog archive like that and turn it into a JSON string so the chat model can absorb it all. Perfect. This does the job.
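As promised, here's a sketch of the JSON shape I have the 'Prepare Topic' agent output. The field names are my own illustration, not a prescribed schema; what matters is that it carries a topic, a title, a thumbnail image description, and an outline for the later steps:

```typescript
// A sketch of the plan object the "Prepare Topic" agent is instructed
// to output as JSON. Field names are illustrative, not prescribed.
interface BlogPostPlan {
  topic: string;            // what the post is about, in a sentence or two
  title: string;            // the working headline for the post
  imageDescription: string; // prompt material for the thumbnail generator
  outline: string[];        // chapter headlines, in order
}
```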
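And here's roughly the raw version of the Strapi call that fetches the blog archive. A minimal sketch, assuming a Strapi v4 content type called `articles` with `title` and `summary` fields; adjust to your own setup:

```typescript
// Sketch: pull summaries of previous posts from Strapi so the research
// agent can avoid repeating topics. Assumes an `articles` content type.
const res = await fetch(
  "https://your-strapi-host/api/articles" +
    "?fields[0]=title&fields[1]=summary" +
    "&sort=publishedAt:desc&pagination[pageSize]=100",
  { headers: { Authorization: `Bearer ${process.env.STRAPI_TOKEN}` } }
);

const { data } = await res.json();

// Flatten to a compact JSON string the chat model can absorb in one message.
const archive = JSON.stringify(data.map((entry: any) => entry.attributes));
console.log(archive);
```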
So far, I haven't caught my two agents producing duplicates. So now we have a description of the topic, a title, a description of an image we want for the thumbnail, and an outline as a list of chapter headlines. We now have everything we need to have AI write the blog post. Or at least, that's what I thought. But no matter how many times I tried, these blog posts would come out generic, shallow, boring, and empty. They just wouldn't be worth reading. And most likely, these blog posts would be exactly what Google would classify as spam. But then I realized what was missing. There are certain elements that all good blog posts tend to have: statistics, facts, references. Good blog posts reference other articles and sources on the internet and incorporate them to underline the point they're trying to make. And the best blog posts also have unique insights. They're not just reiterating what other people have already said a thousand times; they bring something new to the table: insights from their own business. There is no way around it. Before moving on to writing the blog post, I needed a special step.

This part is critical to making the system work. We need two more AI agents to do additional research before writing the actual post. The first one, instead of researching topic ideas, digs deeper into one specific topic to find interesting facts, statistics, and discoveries that we can use as supporting knowledge and references for the blog post. The other one is an AI with secret access to an internal knowledge base: a data archive with private data which might be interesting to include in a relevant blog post. It is data that we are sure no other blogs have access to, which will help keep our blogs unique.

So, first we need an AI that can do research. You can use OpenAI's GPT-4o Search, or you can use Perplexity's Sonar models. Perplexity now has a Sonar Deep Research model; you can use this if you really need exhaustive research for your articles, though it's much more expensive. For my use case, at least, the regular Sonar models or GPT-4o Search will do just fine. To keep this workflow simple, we'll use GPT-4o Search like we did in the previous step (if you'd rather call Perplexity directly, there's a sketch of that call at the end of this section). Let's set this up. We'll add another node, again pick OpenAI's Message a Model, and again find the search preview model. Let's change the first prompt to system and give it a prompt. This part highly depends on your specific needs. Generally, I recommend starting out with something like this and then specifying further if needed. Great. Let's also add a user prompt, and here we want to add the title and the description from the previous step. Let's test it. We got the content and all the citations.

Now, the next part is where it gets really exciting. We need to prepare our internal knowledge base. We're going to do this with a system called RAG (retrieval-augmented generation). This allows us to let an AI agent search and retrieve relevant knowledge from our private knowledge base. n8n supports vector databases such as Pinecone, but if you're non-technical, they're pretty complex and not very user-friendly. So instead, we're going to use Aibase, and I'll show you just how easy it is to set up a RAG.
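Before the knowledge base part: as mentioned, Perplexity's Sonar models are a drop-in alternative for this research step. Their API follows the OpenAI chat format, so a direct call (outside of n8n) looks roughly like this sketch, assuming you have a Perplexity API key; the prompts and example topic are mine:

```typescript
// Sketch: ask Perplexity Sonar for supporting facts, statistics, and
// sources on the chosen topic. The API follows the OpenAI chat format.
const title = "How often should you post on social media?";       // from Prepare Topic
const description = "A data-backed look at posting frequency.";   // from Prepare Topic

const res = await fetch("https://api.perplexity.ai/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.PERPLEXITY_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "sonar", // or "sonar-deep-research" for exhaustive (pricier) research
    messages: [
      {
        role: "system",
        content: "Find recent statistics, facts, and discoveries on the topic. Cite every source.",
      },
      { role: "user", content: `${title}\n\n${description}` },
    ],
  }),
});

const data = await res.json();
console.log(data.choices[0].message.content); // synthesized research
console.log(data.citations);                  // list of source URLs
```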
Go to aibase.ai and sign up for a free trial. You want to pick the Pro plan, since we need API access. If you think $99 per month sounds expensive, don't worry; I'll show you how to get around this in a moment. Now that we have the account ready, go to Sources. Here we can add our knowledge sources. We can add our website: since my blog is for my product FeedHive, I'll add FeedHive's main website and FeedHive's help desk website here. If there are pages on your website you don't want to include, you can go here and exclude them. We also have a bunch of these university videos on our YouTube channel, so I'll add those too. There we go. Now I'll select them all and click Train AI Models. This is going to take a few minutes. Now let's go to the FAQs page and create a new FAQ. This is really, really cool, because it allows me to manage knowledge directly in Aibase. So here I can start adding questions like this. And you can expand this with all the interesting internal facts you have about your business which might be worth including in a blog post. And if we go back to the FAQs, we can train this entire FAQ here. Awesome. Now we have AI models with direct knowledge from our knowledge base.

Let's create a new chatbot with Aibase. A chatbot comes with a chatbot UI, which you can use to embed the chatbot directly on a website. However, in our case, we can ignore all of this, since we will use the Aibase API to talk to this chatbot directly. So all we have to do is go to Knowledge and add the knowledge we just trained, so this specific chatbot can use it. Now, if we go to Settings, we see the chatbot ID here. Note this down, because we'll be using it in a moment. Let's go to Settings, click API Keys, and create a new API key. Let's grab this API key and head back over to n8n. From the dashboard, click Create Credentials. Choose Header Auth; in Name, write 'Authorization', and in Value, enter 'Bearer', a space, and then paste the API key.

Now, back in the workflow, let's add an AI Agent node. Let's add a chat model. This can be simple, so let's just use GPT-4o mini. Now I add Simple Memory and give it a fixed key. For the tool, let's add an HTTP Request node. We want this to be a POST method, and for the URL, we'll use the Aibase API's chatbot reply endpoint: v1, then the chatbot ID, then reply. In place of the ID, we'll use the chatbot ID we noted down when we created the chatbot in Aibase before. There we go. Under Authentication, choose Generic Credential Type, then Header Auth, and pick the Aibase credentials from the list. Perfect. We also need to include a body. Put the name 'message' here and let the model decide the value. We also want to add a session ID, and again, let the model decide this value too. Let's scroll back up and give this tool a description. Now let's rename it 'Get Internal Knowledge'. Perfect. Now let's go back to the AI agent and give it some instructions. There we go. Let's test it out. Awesome, it works perfectly. So now we have both knowledge from the open internet and internal knowledge we can use for our blog posts (the raw version of this API call is sketched at the end of this section).

And as always, full transparency: Aibase is one of the SaaS products my team and I run. And if you don't want to pay the $99 per month for the Pro plan, we actually offer access to Aibase as a lifetime deal too. If you go to founderstack.pro, you will get a bundle containing not just Aibase, but also FeedHive, LinkDrip, and TinyKiwi: basically, my entire SaaS portfolio for a single one-time purchase. And with this deal, you get access to the equivalent of the Pro plan with Aibase, so you can fully leverage the API and all its cool automation capabilities. Okay, finally, let's tidy this up. This is now a part of the working workflow, too.
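For the curious, here's a sketch of the raw call the 'Get Internal Knowledge' tool makes. The exact URL is an assumption on my part, reconstructed from the pattern shown in the video (v1, chatbot ID, reply); check Aibase's API docs for the authoritative endpoint and response shape:

```typescript
// Sketch: ask the Aibase chatbot (with our trained knowledge attached)
// a question, exactly like the n8n HTTP Request tool does.
// URL pattern and response shape are assumptions based on the video.
const chatbotId = "YOUR_CHATBOT_ID"; // noted down from Aibase settings

const res = await fetch(
  `https://api.aibase.ai/v1/chatbots/${chatbotId}/reply`,
  {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.AIBASE_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      message: "What makes FeedHive's AI scheduling different?",
      sessionId: "blog-research-1", // keeps follow-up questions in context
    }),
  }
);

console.log(await res.json()); // the answer, grounded in our knowledge base
```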
Now we can finally move on to writing the actual blog post. And because we've prepared, planned, and researched our post already, we just need one AI to handle the actual writing. If you want a specific tone of voice, you can fine-tune a model for this, or you can use a base model of your choice. Claude, DeepSeek, Gemini, OpenAI: they all have very capable base models for basic content writing, which means this part is going to be pretty easy to set up. My go-to here is o3-mini from OpenAI, which is a chain-of-thought model; I think it does very well on this type of task. As always, let's give it a system prompt. We'll give it a description of its task. And we'll add a user message, too, providing it with all the things it needs to write the post. Let's test it out. And look at that: a full-length, well-written, well-researched blog post with unique knowledge and sources from the internet. I mean, this is valuable. I would read this and find it interesting and valuable even if AI put it all together. And this is exactly what we want.

At this point, the system was working, but there was one thing still missing. This is optional, but I personally think a blog post should have a nice-looking thumbnail. And not just that: I wanted a thumbnail that was consistent with the brand and fit in with the website design and the overall blog page. This part was tricky. My first approach was simply to use Flux to generate the thumbnails based on the image description created by one of the agents, and then have another agent put some text on top. I just don't think they fit. Well, I personally really like them. Me, too. I think they're the best thumbnails I've ever seen. Could you guys just be honest for once? Yeah, they suck. Absolutely awful. Listen, Simon, if you want this part without coding, this is the best you will get. Hey, why don't we just vibe code it? Vibe code it? Ah, yes. You tell us how you'd like it, and we'll code a program that generates the perfect thumbnails. You know how to do that, Simon? Please.

I'm not going to go too deep into the details of this part. What I can say is that I used Cursor to vibe code a solution using Node.js and a library called canvas, and after a few hours, I had a perfect solution running on a custom API. This API takes an image, a text, a color theme, and a list of highlighted words, and puts a thumbnail together in this specific theme. And it works perfectly.

The easiest way to generate images with AI is using Flux on Replicate. Go to replicate.com and create an account. Go to API Tokens and create a new token. And just like we did with Aibase, create a new credential from the n8n dashboard. Add 'Authorization' in the name, type 'Bearer', a space, then paste the API key from Replicate. Now we can add an HTTP Request node. Let's change this to POST. For the URL, use Replicate's model predictions endpoint: v1, models, black-forest-labs, then the Flux model, then predictions. Under Authentication, use Generic Credential Type, then Header Auth, and pick the Replicate credentials from the list. As a body, let's add these fields using JSON. For the prompt, let's finally use the image description from the very first step. And to make our lives a little easier, let's add a header with the name 'Prefer' and the value 'wait'. This will make the request wait for the actual image to be generated before it returns a response. Let's see. Awesome. Now you can use this image as is. You can also change the proportions to 16:9 if that fits better. (The raw version of this request is sketched below.)
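Here's what that Replicate request boils down to if you make it directly. A minimal sketch: I've picked the `flux-schnell` model slug for illustration; use whichever Flux variant you prefer and check its input schema on Replicate:

```typescript
// Sketch: generate a thumbnail image with Flux on Replicate.
// The Prefer: wait header blocks until the prediction finishes.
const imageDescription =
  "Isometric illustration of a content calendar with floating post cards"; // from Prepare Topic

const res = await fetch(
  "https://api.replicate.com/v1/models/black-forest-labs/flux-schnell/predictions",
  {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.REPLICATE_API_TOKEN}`,
      "Content-Type": "application/json",
      Prefer: "wait",
    },
    body: JSON.stringify({
      input: {
        prompt: imageDescription,
        aspect_ratio: "16:9", // wide proportions for a blog thumbnail
      },
    }),
  }
);

const prediction = await res.json();
console.log(prediction.output); // URL(s) of the generated image
```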
And in my case, I'll quickly add another AI here, which is responsible for generating the parameters for my custom thumbnail API. Let's add an HTTP node for the thumbnail API here. And when we put it all together: boom, there we go. If you're up for the challenge, try opening Cursor and vibe coding your own API like this. If you're not a programmer, I wouldn't do this for commercial, production-ready software, but for a small extension like this, I think it is perfectly good and fun to use AI to vibe code a solution, even if you're not a professional software engineer.

From here, the only thing left is to publish the actual blog post. For me, this publishing step means pushing the title, description, content of the blog post, and the thumbnail to Strapi, where I keep all my blog content (a sketch of that call closes out this transcript). I also have a step that uses a trigger in FeedHive, so the blog post gets published on social media too. For you, this publishing step might look different. n8n has first-party integrations with Strapi, WordPress, Webflow, and a lot of other tools, so this part is up to you to set up according to your needs. But as you can see, this now works fully autonomously. It requires no intermediate human steps. After setting up this workflow, simply lean back and watch blog posts come out every day, or every other day, or once a week, however often you want to post. This has been a total game changer for our blogs, and we're seeing really good results with literally no effort. Zero. It's fully automated.

That being said, I still need to make an important point here. This is extremely experimental and potentially really risky. So if SEO is your best-performing channel, if it is the top channel for your business where most of your users come from, if you have spent years perfecting your writing skills and your blog really means a lot to you, then you probably don't want to do this. You've probably noticed that I put a whole lot of effort into making these YouTube videos, and that's because this is my highest-performing channel, and I would never try to fully automate it with AI. I care way too much about it to go down that road. But for me, in my business, my blogs are way down the list. They were never really performing that well, and they were more or less dead for a good while. So for me, this was totally worth the gamble, and so far it's working out really well. But before you consider replicating this setup, just keep in mind where your blog fits into your business. If it's higher on the list, or you're planning for it to be higher on the list, you probably want to reconsider using this exact approach. By all means, though, set it up, play around with it, and see if it's useful for your business, too.

You can get access to both Aibase and FeedHive, as well as two other tools, for a single lifetime purchase on founderstack.pro. And as a Founder Stack member, you can also download this entire workflow and import it into your own n8n account to save some extra time. I have put links in the description for all of this. Now, excuse me, I need to get back to work. Or rather, my agents will. But if you want to see how to set up a similar workflow for content on social media, check out this video next. I will see you over there.
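One last reference sketch: if you publish to Strapi like I do, the publishing call from above boils down to something like this. The content type (`articles`) and field names are my illustration; note too that I'm storing the thumbnail as a plain URL field to keep the sketch simple, while Strapi's media library uses a separate upload endpoint:

```typescript
// Sketch: the final publishing step, creating a new entry in Strapi.
// Strapi v4 wraps the entry in a `data` object. Field names are
// illustrative; match them to your own content type.
async function publishPost(
  title: string,       // from the Prepare Topic step
  description: string, // short summary / meta description
  content: string,     // the writer agent's Markdown output
  thumbnailUrl: string // from the thumbnail step
) {
  const res = await fetch("https://your-strapi-host/api/articles", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.STRAPI_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      data: { title, description, content, thumbnailUrl },
    }),
  });
  return res.json(); // the created entry
}
```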