Transcript for:
Automated AI Newsletter System Overview

Hey everybody, Nick here. In this video, I'm going to build a completely automated end-to-end AI newsletter system that sources posts completely autonomously on Reddit, then curates them using AI filters, then summarizes them, and then writes new content based on that content before finally putting it into a newsletter format, pushing it out through MailChimp (although you could use whatever the hell you want), and then, you know, getting it seen by real people. This was a viewer request. I had a lot of fun putting it together. So if you guys have any other systems you want to watch me build, then by all means, drop a comment down below. If it's a good one that I haven't done yet, I absolutely will record a video just like this. Appreciate the time. Let's get into the system. So I haven't actually built the system out. What I want to do is build this alongside you. The reason why is because I've found a lot of content on YouTube about automation unfortunately only really shows you the finished product. They never really show you the steps that you need to take to get there. I always think of it sort of like an engineer who's trying to improve his engineering skills by looking at a picture of the Empire State Building and being like, awesome, I think I can build that, right? There's a lot that you don't know during the discovery process. And my purpose in these videos, and also my channel more generally, is just to show you guys what that looks like. So we're going to do some build-alongs. I'm going to try and figure these things out as they come. I may have some stumbling blocks, may have some issues. That's okay. I want you guys to see what a real, actual building process looks like. And one of the reasons I'm doing this is because I've heard from a lot of my viewers that they tend to like this sort of content. So if you don't, maybe let me know. Anyhow, I've spent around five minutes thinking through this, and here's how I believe this system is going to work. We're going to start by getting posts from some popular subreddits. I've done some thinking, and I want to do my newsletter on AI. So I want this to be just some general-purpose artificial intelligence newsletter called Loop. Once we have this newsletter post sourced from Reddit, what we need to do, obviously, is scrape it using a service. I have a couple of services in mind; I'm going to run through each of them in a moment. Then we're going to filter posts based off the subject and based off some other metrics to ensure relevance. When I say other metrics, I mean you tend to get a lot of info from Reddit, like upvotes, downvotes, comments, all that stuff. So we can actually use that as a second-order filtering mechanism to tell us whether or not a comment is good, or whether or not a post makes sense, or whatever. From there, we're going to store posts in a database, which is really just a Google Sheet, with a unique ID. The idea being: the second that you start having some sort of object permanence, the second you start storing data somewhere that you can retrieve it later, you open up your application to dozens, if not hundreds, of other cool things that you can do. And that's what we're doing. We're future-proofing this as well.
And then after that, every week, or however long you want to run your newsletter (in my case it's going to be every seven days), we're going to grab a certain number of posts, use AI to summarize those posts, and then use that to write a standalone newsletter. We're then going to add the copy to a newsletter platform like MailChimp. I just picked MailChimp here because it's the one that's the most accessible. From previous projects, I know that they have an API endpoint we can call to populate a newsletter template, and not all newsletter platforms do. And yeah, this is sort of how I think things are going to go. Reality is often deceptive, and a lot of the time, if you remember from my other videos where I've done something like this, some of these steps might have to be swapped around; we might have to do a couple of other things that I'm not necessarily anticipating. But yeah, this is more or less how I think it'll work. Let's dive in with the very first step, which is getting posts from a popular subreddit. So I'm going to un-fullscreen this, and I'm just going to type in Reddit singularity, Reddit artificial intelligence, Reddit OpenAI, right? I'm just really quickly going to go top to bottom and see what these threads look like realistically. And this is important, because anytime you're building any sort of system, you've got to know the ground truth data and you have to understand what the quality is. You just have to massage it a little bit before you actually want to go through the rigmarole of building out a system. If you make shitty assumptions with the ground truth data, then a lot of the time you'll carry those shitty assumptions the entire way through the rest of your program or your flow. So I'm just going to really quickly read through this top to bottom, and I'm just going to see what kind of data we're getting. Okay, I've got these three subreddits open. This looks pretty reasonable to me. I like singularity just because it's a subreddit that I personally visit reasonably often. Scrolling through: it's confirmed OpenAI's Sora is releasing to the public today. Sick. New footage of Optimus walking outdoors. Hell yeah. ChatGPT Pro users get unlimited Sora usage of 20-second videos at 1080p. Hell yeah. Staff claims AGI is achieved with o1. Cool. So there are tons of posts here. I'm noticing the vast majority of these are videos and/or photos. Additionally, they have links. That's kind of annoying, because if you think about it from my perspective, the simplest way to do this would just be to feed the text of the Reddit post into my newsletter generator. So I'm now thinking, all right, there are a couple of things we could do. Okay, this subreddit looks a little better; most of it's text. There are a couple of things we could do. We could develop a system that goes through the images and also reads through the images, summarizes them, that sort of thing. Or we could just look for posts that only really have text, and then we could just summarize those. That'd probably be the easiest. Let's read through the OpenAI one. Cool, so mostly more of the same. Ooh, we got an MKBHD video. Interesting. Yeah, so more of the same. I'm thinking what I'm going to do is just see how many of these communities I can scrape. If I can funnel these into one data source, that'd be ideal.
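Before diving in, here's the whole plan as a bird's-eye Python sketch. Every function body here is a toy stand-in for a Make.com module, so treat the names and data shapes as assumptions rather than the final build:

```python
# Toy sketch of the planned two-scenario architecture.
# Each function stands in for a Make.com module; data shapes are assumptions.

def scrape_subreddits():
    # Scenario 1 trigger: the Apify Reddit scraper
    return [{"id": "p1", "title": "Meta releases Llama 3.3", "body": "Full post text..."}]

def is_relevant(post):
    # AI filter on title/body (a real version is sketched later)
    return True

def scenario_one(sheet):
    # Scrape -> filter -> dedup by unique ID -> store with status "new"
    seen = {row["id"] for row in sheet}
    for post in scrape_subreddits():
        if is_relevant(post) and post["id"] not in seen:
            sheet.append({**post, "postStatus": "new"})

def scenario_two(sheet):
    # Weekly: grab unused posts, summarize, compile, send, mark as used
    fresh = [p for p in sheet if p["postStatus"] == "new"][:6]
    digest = "\n\n".join(f"### {p['title']}\n\n{p['body']}" for p in fresh)
    print("send via MailChimp:", digest)
    for p in fresh:
        p["postStatus"] = "published"

sheet = []
scenario_one(sheet)
scenario_two(sheet)
```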
Because I don't think I'm going to get enough data for an actual cool newsletter if I only source from the singularity or only from the artificial intelligence subreddit. So my mindset now is: okay, great, I'm going to try and see how many of these I can feed into a scraper of some kind. Then, once I know, we're going to run through, and I'm probably just going to try eliminating all of the images and all the videos first, because I want to see if I can get by on text posts alone. If I can, then I know the scrape is going to be a total walk in the park; it's not going to be a problem. So, in terms of how to actually go about the scraping aspect: what I basically always do anytime I want to build out an application that requires data from a social media source or whatever is go to this platform called Apify. Apify basically just lets you make APIs out of anything, and they have this marketplace or store component where other people create scrapers and then make them live for you. They will charge you a small fee for usage, but this abstracts away all the complexity and all the bullshit of having to actually go out there and build your own scraper, which is great for me, because I just want to whip this thing up in an hour or so, and if I had to build my own Reddit scraper, god damn, would that take me forever. So: Google Maps Extractor, Instagram Scraper, TikTok Data Extractor. You can see you can scrape whatever the hell you want. I'm just going to type in Reddit, and from here I see a couple: I see Reddit Scraper, which I've personally used before, Reddit Scraper Lite, Reddit API Scraper. I'm just opening these in new tabs. And the idea is we want a scraper that allows us to do a couple of things. One, we want it to be reasonably affordable. We don't want to spend a lot of money on this, right? If your newsletter operation costs you $10 a month or something, that's great: you have a completely automated AI newsletter for 10 bucks a month. Jesus, people five years ago would probably be spending hundreds of thousands of dollars on something equivalent. But we also want it to get us the data that we need, and what we need here, I'm thinking, is the Reddit post text. So I'm seeing here it says: scrape Reddit posts with title and text, username, number of comments, votes, media elements. That looks good. This one here looks like it's made by the exact same guy, Trudax, Gustavo Rudiger. It's just that one is lite, so I guess you probably get less data, and the other is the full scraper. So yeah, 45 bucks a month? Screw that one. This one seems to be reasonable as well. This one seems to be reasonable as well. Why don't we just use the Reddit Scraper Lite? I just have to see the interface on this thing, so I'm going to go over here into my actors, go to store, and then go to Reddit Scraper Lite. Let me save these changes here. Okay, great. And now we have what looks like a bunch of fields that we can edit in order to scrape the specific resource. So: scrape information about posts, communities, and users without login. That's great; that's what we want. Start URLs: if you already have the URL of pages you want to scrape, you can set them here. If you want to use the search field below, remove all start URLs here. I imagine this is probably just someplace that I pump in the community. Excuse me for that.
I pump in the community, and then it's probably going to scrape it. And what's cool is it looks like I can do multiple, so that's nice. Let's go back here, add this; let's go back here and add one more. Okay. And I don't know if these settings are automatically on, or if this is just because I've used this before and set these on; not really sure. Definitely search for posts. Comments? Yeah, posts only. We don't want to include NSFW content. We're going to do a hundred. Are we going to do a hundred? Yeah. All right, screw it, let's do a hundred. Okay. So I'm going to save this, I'm going to start, and I'm just going to give this thing a run. And the idea here is I just want to see the data, I want to see the format, before I get ahead of myself. Once I verify the format is okay, assuming that it is, then I'll jump into Make and I'll see how I can take the data from the scraper and stick it into Make.com. So yeah, I'm seeing it's getting my data, which is cool. It's probably going to take a minute or two. Hmm, I'm not seeing any data inside of here, which is weird. Why not? Maybe do I need to go hot? I remember there being some issue with this sort of thing; I'm not entirely sure what it is. Or maybe it's because it says skip user posts; it might have something to do with that. Okay, no, it's adding some stuff to the queue. Yeah, I think this scraper just had some weird thing where you need to select what type. Reddit can sort posts based off hot, new, top, or rising. So if you click top, if you click hot, you sort of have these URL endings. And yeah, I just remember there was some issue where when I didn't use that, I didn't get the posts, but when I do, I get them. Obviously there's no real way to know these quirks unless you actually go out and handle it. But yeah, I'm getting a lot of cool posts here. Looks like most of these are recent, in the last week or so, which is nice. Some of these don't have thumbnails and stuff. Okay, great. So I'm getting enough data; this is more than enough for me to be able to run a test. Now, if you think about it logically, what I have to do next is set up a system that filters these posts based on the subject matter. Like, let's look at this post here: "If you have a product to promote, this is where you can do it. Outside of this post, it will be removed." I don't really want that in my newsletter, right? So I need a way to filter that out. I'm thinking I'm probably going to use artificial intelligence to do this, because it's flexible and smart. So I think what I'm going to need to do, first of all, is get this data inside of Make.com. The way you do so is you go over to storage, and then here it says dataset ID; just copy this over. And then inside of Make, type Apify. What you want is Get Dataset Items. Paste the dataset ID that you just copied into this dataset ID column. Let me select the right account, and then just leave the rest of this blank. The purpose of this is it just allows us to connect to Apify and bring the data into a no-code automation platform like Make, which allows you to do way more cool shit with it. So yeah, this looks pretty solid. We're getting all the data here. That's that post I was referencing, with the product promotions and stuff. This is another one. It looks like we have a body column, which is really nice. I'm happy about the body. Very happy about the body.
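If you'd rather script this step than click through Make, the Apify Python client covers both halves: start the actor and read its dataset. A minimal sketch, where the actor ID and input field names are assumptions based on the Reddit Scraper Lite listing (check the actor's input schema on Apify):

```python
# pip install apify-client
from apify_client import ApifyClient

client = ApifyClient("YOUR_APIFY_TOKEN")

# Start the actor and wait for it to finish. The actor ID and input keys
# are assumptions; copy the real ones from the actor's page on Apify.
run = client.actor("trudax/reddit-scraper-lite").call(run_input={
    "startUrls": [
        {"url": "https://www.reddit.com/r/singularity/top/"},
        {"url": "https://www.reddit.com/r/artificial/top/"},
        {"url": "https://www.reddit.com/r/OpenAI/top/"},
    ],
    "maxItems": 100,
})

# Read the run's default dataset: the same data Make's
# "Get Dataset Items" module pulls in with the dataset ID.
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item.get("title"), "|", (item.get("body") or "")[:60])
```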
Because, yeah, we're getting all the information we need, but we just want to make sure that we have some way to filter it. So what I'm thinking is: okay, how do we filter out this information? Probably based off the title of the post and the body, using AI, so that we only extract things that are relevant. Like, this is what I want: "A drop-in replacement for Llama 3.1 70B approaches," whatever. This is the sort of stuff I want. "Meta releases this." That's a cool piece of news. I want stuff like, I don't know, "autism and artificial intelligence: new discovery says blah, blah, blah," "ChatGPT-4o better than Gemini Advanced." Do you know what I mean? So that's what I want, and we need a way to filter this out. What I'm going to be doing is jumping into GPT here, and then I'm going to go Create a Completion, or Prompt. And I'm actually just going to write a prompt that does all this filtering for me. Now, because this is a filter, I'm just going to use GPT-4o, which is a much more inexpensive model. And let me run you guys through what my prompt would actually look like if I were to design this for myself or for a business that I'm working with. The first thing I'd do is add a system prompt that says: you are a helpful, intelligent assistant. That's the very first thing. This is basically just whatever the model identifies as. I'm saying, hey GPT, you're a helpful, intelligent writing assistant, so be helpful, be intelligent, and assist me with writing. And then afterwards we go down to user, and this is sort of where we give it instructions; we tell it what we want it to do. So what do we want to do? We are creating a newsletter, let's say, that lists new and exciting developments in AI. Your task is to filter a Reddit post based on relevance and return either true or false. Rules: a post is relevant if it's about news, developments, exciting progress, or, yeah, if it'd be considered newsworthy. Let's just go with that. A post is irrelevant if it's about something personal, or if it's a question, or if it's a community moderation post like this one, or if it just contains an image/link with no context. Let's do that. Return your output using this JSON format: we'll go relevance, and then it'll just be true or false. And we're just going to put this in quotes here. You don't have to; I'm just going to do this for the purposes of communicating with this model, kind of like an API. Okay, great. And now, in terms of the text content, let's just go post, and then let's feed in the body. That looks pretty good to me. We'll just feed in the body. Well, maybe we should do the title too. So we'll go title and feed in the title, and then we'll go body and feed in the body. Perfect. Awesome. That looks pretty good to me. How many did we scrape here? We scraped 91. So why don't we just change the limit of this to 10.
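If you want to see this filtering step as code instead of a Make module, here's roughly the same call against the OpenAI API. A sketch: the prompt is paraphrased from above, and the model name is just what I'm using here:

```python
# pip install openai  (expects OPENAI_API_KEY in the environment)
import json
from openai import OpenAI

client = OpenAI()

FILTER_PROMPT = """We are creating a newsletter that lists new and exciting developments in AI.
Your task is to filter a Reddit post based on relevance and return either true or false.

Rules:
- A post is relevant if it's about news, developments, exciting progress, or would be considered newsworthy.
- A post is irrelevant if it's about something personal, or is a question, or is a community moderation post, or just contains an image/link with no context.

Return your output using this JSON format: {{"relevance": "true"}}

Title: {title}
Body: {body}"""

def is_relevant(title: str, body: str) -> bool:
    resp = client.chat.completions.create(
        model="gpt-4o",  # the inexpensive filter model used here
        response_format={"type": "json_object"},  # the "parse JSON response" setting
        messages=[
            {"role": "system", "content": "You are a helpful, intelligent writing assistant."},
            {"role": "user", "content": FILTER_PROMPT.format(title=title, body=body)},
        ],
    )
    return json.loads(resp.choices[0].message.content)["relevance"] == "true"
```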
And then let's just get it to really quickly run a test on these 10 and see what it's outputting. Oh, my bad: so we're getting the output as JSON, but I actually want this JSON to be output and then parsed as well. In order to do that in OpenAI, what we need to do is go down to Show Advanced Settings, then scroll down to the bottom to where it says Response Format. What you want to do is select JSON, then click Parse JSON Response, then click OK. We're going to save this, and now what we should get is a result variable with the relevance. Perfect. Awesome. So I'm just really quickly going to jump through here and see if any of these are true. I just want to see, out of 10, how many I can realistically expect to be true. I don't know if any of them will be. We'll go result. Okay, great, this one looks like it's true. Let's see what the detail was: the CEO who says cheaper AI could actually mean more jobs. Okay, cool. That looks pretty solid. I like it. Let's go down to this one: relevance false. This one false. This one false. Oh, nice. So we got two out of the five so far. Three out of the five. That's great. Kind of losing my place here. Yeah, so three out of five so far. Sweet. Honestly, that's more than sufficient for me. Like, if you think about it, just based off the cost of this: the cost of this thing is $4 per 1,000 results. We just scraped 10 and we got three, so let's conservatively say that 20% of our records are okay. How many records do we really want in our newsletter? I'm thinking we're going to feed in five or six of these, and then our newsletter automation is going to summarize them and provide a link back to the resource. Maybe not even provide a link back to the resource; I don't really think that's necessary. We're just going to compile this as if we were the news source. So realistically, how many do we really want in a newsletter? I don't know, let's say six. So in order to get six, if 20% of posts land, then we just multiply six by five, and that's 30 posts. At $4 per 1,000 results, that's about 12 cents per week. That's a walk in the park. Yeah, no problems there. I'm not going to worry about the images or anything like that. I'm only going to select posts that don't have images and that are just pure text posts that we can summarize based off of that. That just seems to me the simpler way to do it. Let's call this Automated AI Newsletter System 1. I'm just going to do 1, because I have a feeling we're going to need two scenarios here. And then what I'm going to do now is dump all this into a Google Sheet. So I have this Reddit Post Database set up here. I don't have any fields or anything. What I want to do is just take this, download all the output bundles (you see all these), and copy this. Then I'm going to paste this into ChatGPT, and I'll say: your task is to create a CSV of key names; I'm going to paste these into a Google Sheet as headings. This is now going to take the data and just give me a bunch of headings that I can very quickly and easily use. Maybe "quickly and easily" was an overstatement. Yeah, I don't know why I have to download a file. Come on, man, just put it as text. Yeah, there we go. Cool. I'm just going to copy all this now, go to my Google Sheet, and then paste this in.
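For clarity, here's the cost math from a moment ago spelled out:

```python
posts_per_issue = 6          # how many stories we want per newsletter
hit_rate = 0.20              # conservative: ~3 of 10 test posts passed the filter
cost_per_result = 4 / 1000   # scraper pricing: $4 per 1,000 results

posts_to_scrape = posts_per_issue / hit_rate        # 30 posts
weekly_cost = posts_to_scrape * cost_per_result
print(posts_to_scrape, weekly_cost)                 # 30.0 0.12 -> ~12 cents/week
```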
And then I'll just go Data, Split Text to Columns. Perfect. And now I have basically all the data columns in here that I'd want. I'm just going to change the style of this a little bit. And I'm a huge slut for Inter, so I'm going to use Inter. Let's give this a quick little click. Beautiful. Looks clean. And let's just call this Scraped Posts. Now let's head back over to Make.com, and what we want to do now is go to Google Sheets, Add a Row. Then I'm going to go down here to nick@leftclick.ai, and then choose my spreadsheet; it's going to be called Reddit Post Database. Then I'm going to have to select the sheet name, which is just going to be Scraped Posts; it does contain headers. And check out how simple and easy it is for me to map all of these now. I still need to add the filter, which I'll do in a second. But just look at that: this is substantially faster and easier than making all of the equivalent fields by hand. Also, there's one thing I'm realizing: I'm going to want to use this as a database, right? But I'm going to want to keep track of which posts in this I've used for my newsletter, because logically, I'm going to need a way to list all of these and then select them, right? So what I'm going to do is add one more column here, and I'm just going to call it post status. I'll head back over here, and then over here I'm just going to type in "new". So "new" is just going to be where a post goes right after I scrape it. Awesome. That looks pretty good. Let's add a filter. Actually, there's one more thing we have to do before this. If you think about it, before we add a new record, we have to check to see if the record already exists in the database. If it does already exist in the database, we shouldn't add it. So, one thing I noticed while I was reading through this data is that there's an ID field. This ID field means that we can actually search through the sheet looking for an ID, and if there's an ID that matches this ID, then we don't have to add it. So I'm actually going to drag a Search Rows module and just add it in front. I'll type Reddit. Oh, I'm using the wrong email. One sec. I'm going to type in Reddit over here, grab my Reddit Post Database. Sheet name is going to be Scraped Posts. And then, you see where it says filter? I'm just going to check to see if the ID is equal to this ID here. And if it is equal, then the total number of bundles will be greater than zero, logically. So if I want to check whether something is new, the total number of bundles should be equal to zero. What I'm doing is searching for rows with the same ID, so if there are records with the same ID, it'll return at least one. So I just check to see if it's equal to zero. It'll basically always work, which is nice. So that looks good to me. Over here, I'm just going to add a filter for relevance, and I'm going to say: condition, relevance equal to true. So now we have two filters: one for relevant and one for new. Now that that's good to go, I'm basically just going to run this puppy on all the records and see what happens. Let's go 100, and then let's just run. It looks like most of these aren't relevant. The ones that are are now passing into the Google Sheet, and they're all new. Ooh, it's kind of ugly. Also, I'm adding this to the wrong column. That's annoying.
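The two conditions combined, as a tiny sketch of what the Search Rows module plus the two Make filters amount to:

```python
def should_add_row(post_id: str, existing_ids: set, relevance: str) -> bool:
    # "Total number of bundles = 0" in Make terms: no row already has this ID.
    is_new = post_id not in existing_ids
    # The second filter: the GPT module said the post is relevant.
    return is_new and relevance == "true"

existing_ids = {"1h7abc", "1h7def"}                     # IDs already in the sheet
print(should_add_row("1h7xyz", existing_ids, "true"))   # True: add with postStatus "new"
print(should_add_row("1h7abc", existing_ids, "true"))   # False: duplicate, skip
```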
Okay, let's fix this before it just gets unbelievable. Reset, reset. Then I don't like how tall these are, so I'm just going to click them back here. Hold on one sec. Just go right here. Also, I think the reason why it's doing this is something to do with wrapping, so I just fixed that. Okay, cool. And now we're actually scraping these, which is pretty sweet. This looks good to me. Let's just double-check to see if this data is everything that we want. One thing that you'll notice when I'm building these systems is that what I'm thinking about is basically testing iteratively at every step. I'm saying, okay, cool: is the output of this module or this step what I'm expecting? And so this is me actually just double-checking that it is. So this looks good. This actually looks kind of weird: "here's what's making news in AI." I don't know if I like that. This is basically my same idea. Yeah, this is The Verge doing the same thing I'm trying to do here with my newsletter. So actually we shouldn't allow these in. I'm just going to bold this for now; I'll keep track of that later. This looks good. That looks good. That looks good. What is this? It's an advertisement: "this is my startup." Yeah, I don't want people advertising their own products here. It seems kind of dumb. I don't know if I want anything that dumb either. Okay, anyway, this actually looks pretty solid. There are only two records here I really don't like, the "here's what's making news in AI" kind. So I have to go back to this, and what I want to do is update my prompt. What's the best way to think about this? That post is basically, I guess, a newsletter. So I'm going to now mark things as irrelevant if they are sort of newslettery. So I'm actually going to go into my prompt here, and I'm going to say: a post is relevant if it's about news, developments, exciting progress. A post is irrelevant if it's something personal, or it's a question, or if it's a community moderation post, or if it's an aggregated newsletter from someone else, or if it's a product advertisement (especially if it's the founder talking about their own product), or if it contains an image link with no context. Good. So I just updated my prompt. What I'm going to do now is delete these two records; if you think about it, I don't want these poisoning the well. But the rest of this looks pretty good to me, so I think we should be able to move on. Now that I think about it, we could do some additional filtering through the number of comments and stuff, but I don't actually think that's necessary. Like, odds are, if something is in the hot stage... yeah, actually, this one-minute AI news is kind of bullshit. So is this RAG thing. Let's remove that too. If something is in the hot stage, then it's probably already kind of pre-vetted. So I'm not actually going to do that other level of filtering. What I am going to do: it looks like there's one more field here, there's a data type here. It looks like I kind of screwed up with my mappings here. Post status is not the right column. So let's just scroll down here: image URLs. Oh yeah, let's just go image URLs. Yeah. Okay, so there are actually some image URLs here. And then let's make this one post status. Sorry, W is... what is W? W is video URL. Let's go video URLs, image URLs, and then post status here. Cool. So that's that.
We have basically everything that we need, I believe, in order to take this to the next level. Let me just see: is there anything else that I'm missing here? No, I'm pretty sure that's the whole system. We now have everything that we need to scrape data and then publish it to this spreadsheet. What we need now is a mechanism that will allow us to take the posts in the spreadsheet and then convert them into newsletter text. We have the data source, so it's really just this other half here: every week, grab X posts, use AI to summarize them, write a standalone newsletter, add copy to some newsletter platform like MailChimp. I'm surprised that I haven't actually had to change any of these steps. This is going to be scenario one, basically. This is going to be scenario one over here, and then this is going to be scenario two. And we're going to have to separate them, because logically they both do different things, right? Scenario one is going to be getting the data and adding it to the DB. Scenario two is going to be curating content, passing it through AI, and posting to the newsletter. That's sort of the two-step flow that we have. So this is going to be called Automated AI Newsletter System, Source Posts. It's not entirely done yet; we still have one more thing we have to do, but I'll talk about that later. And I'm just going to create a new scenario here. This new scenario is just going to be called Automated AI Newsletter System, Summarize with AI and Create a Newsletter. Now let's just do that. That looks good to me. Awesome. This is popping up because I'm attempting to Command-S save this while also selecting the text up here, so it just doesn't know if I'm trying to save the text or the whole flow. I just want the title to appear here. I'm just saving this so that I have both of these available. So this is number one, this is number two. Let's move into number two. So now we just need some way to grab all this. And I'm also going to want to store the outgoing newsletters somewhere else, so why don't I go here and type Generated Posts, and then I'll worry about what that actually looks like later. Okay, let's go to my Google Sheets, and then let's go Search Rows. Basically, what I want now, if you think about it, is to grab all of the rows in my database with a post status equal to new, because if the post status is equal to new, it's basically up for grabs; I can use it in my flow. So I'm going to go here, do spreadsheet ID, and just select the Reddit Post Database. I find in practice the lengthiest part of this is actually just dealing with this Google Sheets module, the Search Rows one, because it just takes forever to remap everything. Then what I want is: filter, post status equal to new. And for now I'm not going to set a limit, although I probably should eventually. So we're just going to make this okay. I'm going to right-click, Run This Module Only, and let's just see if we got all the data. So we got this. This is good. Nice. This is good. Cool. We got basically all the posts, which is what we want. I mean, logically, we can't just have AI do 20 posts for news every week, so realistically we probably want like six or seven. I used to run a newsletter way back in the day. It's called The Cusp.
It was followed by Dharmesh Shah, the co-founder of HubSpot. And it used to look like this. This is way back in the day, with DALL-E. So I actually think, ideally, what we would do is just use this; we'd replicate some of this functionality. "Welcome to The Cusp, cutting-edge AI news explained in simple English. In this week's issue:" here are some quick little examples, teasers, and then we have an H2, and then we have the actual description here. And then we just do the same thing over and over and over again. Yeah, that looks pretty good to me. So I'm actually going to use this as the basis of my GPT flow. What we're going to do now, if you think about it, is use AI again. So I'm going to go OpenAI, Create a Completion. Actually, I'm just going to copy the module from the previous flow, because it has most of what I want; I just need to change the prompt. And then why don't I just rename everything while I'm at it? So this is Get Reddit Posts. This one will be Filter Posts. This will be Check If Exists. This one will be Add New Post. Cool. Down here, it's going to be Get Reddit Posts, and this one's going to be, I guess, Generate Summary. Well, it's not just the summary. Let's call it Generate Headline and Snippet; you guys will see why in a sec. I'm going to jump back inside of this. It has most of the text that we want, which is cool. Just say writing assistant; I think I forgot that on the previous module call. And: we are still creating a newsletter that lists new and exciting developments, and your task is to take as input a Reddit post title and body and rewrite it in different words for our newsletter. The rules are going to be: use a casual, Spartan tone of voice; use third person POV. You know, most newsletters write really gratuitously with the new lines; they'll keep the paragraphs really short, like one or two sentences. So we should probably add some information about that here: use new lines gratuitously and keep paragraphs to one to two sentences max. Anything else? I think that's good. Return your output using this JSON format: let's go headline, and then let's go snippet. So, reading it back: rewrite it for our newsletter, use a casual Spartan tone of voice, use third person POV, use new lines gratuitously, keep paragraphs to one or two sentences max. Let's do: try for four to six sentences total. That looks pretty good. Maybe five to seven, I don't know. Whatever; we'll see how it goes. Okay, and then next up I have to use this user prompt. I'm going to use headline and snippet, right? So actually, let's make this simpler: we'll go new headline, new snippet; this would be old headline, old snippet. Very cool. Now, what I'm going to do for old headline is feed in the title, and what I'm going to do for old snippet is feed in the body. There we go. And I'm just going to leave everything else the same. I just want to test this out, basically. A lot of the time you have to add a bunch of examples of before and after, but what I'll usually do is have AI generate me a draft of that, and then I'll go through and do some editing. And if I have to add an example, I'll do it then. I don't want to do it for all 20 or 30 or however many I've got. So let's just do one for now. I'm going to run once, let's generate our headline and snippet, and see what we're looking at. "Meta drops Llama 3.3 70B."
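Scripted directly against the OpenAI API, that rewrite step would look roughly like this. The prompt is paraphrased from above, and the JSON keys match the headline/snippet format:

```python
import json
from openai import OpenAI

client = OpenAI()

REWRITE_PROMPT = """We are creating a newsletter that lists new and exciting developments in AI.
Your task is to take as input a Reddit post title and body and rewrite it in different words for our newsletter.

Rules:
- Use a casual, Spartan tone of voice.
- Use third person POV.
- Use new lines gratuitously and keep paragraphs to 1-2 sentences max.

Return your output using this JSON format: {{"headline": "...", "snippet": "..."}}

Old headline: {title}
Old snippet: {body}"""

def rewrite_post(title: str, body: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": "You are a helpful, intelligent writing assistant."},
            {"role": "user", "content": REWRITE_PROMPT.format(title=title, body=body)},
        ],
    )
    return json.loads(resp.choices[0].message.content)  # {"headline": ..., "snippet": ...}
```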
"Meta has launched their latest Llama 3.3 70B, which is a step up from Llama 3.1 70B. Performance-wise, it's almost catching up with the 405B giant. It's a smooth swap for anyone using the older model. You can check it out on Hugging Face. It's worth peeking at if you're into AI advancements." I don't like that "you can check it out on Hugging Face," so I need to add another rule, and that rule is: don't direct people to offsite resources. Okay, let's try this one more time. That actually looked basically perfect, so I don't think I'm going to need to do any editing. Cool. That looks pretty reasonable to me. Awesome. So now that we have this, if you think of it from my perspective, what we need to do now is format this in such a way that we're basically going from post to post and delivering summaries of each. There are a variety of different ways we could do this in Make. Probably the simplest is to use a text aggregator. What a text aggregator will do (so I'm going to set the source module to Get Reddit Posts) is allow us to map this into a format that I'm very comfortable with, which is called Markdown, where you can write headings and subheadings. My heading... let's try H3 first. Yeah, let's try H3. The heading would be this, and then the snippet would be this, right? And then between the two I'd want a new line, and after, I'd also want another new line. So basically, what'll happen is, for all of the bundles in this sequence, it'll do this and then just concatenate them all together, which seems pretty good to me. So let's do six, and then let's go over here and click OK. We're going to get this yellow little bubble here, because technically a transformer should never be the last module in the route, but I'm going to give this a go. We're just going to see what this looks like. It's currently processing all this right now for us. Wonderful. Realistically, I shouldn't have done all six; I should have done two, because then I would have used fewer outputs. But this looks pretty good to me. I'm going to copy all of this, and then I'm going to go to Markdown Live Preview, which is just a tool that allows me to paste stuff in. Okay, so this is basically what it's going to look like. Wow, this is sweet. Yeah, stuff like this is kind of annoying. It's probably doing this because I tried to force it to do seven... not new lines, sorry, seven sentences. I don't actually need seven sentences. Let's just aim for three to five sentences total. That'll be a little better. But anyway, this is a good format; this is what I like. So everything here is nice. All we need to do now is put this in a newsletter, basically. I'm just going to use a Google Doc though; it's going to be the simplest for me. Oh, you know, there are a couple more things we have to do: we obviously have to generate an introduction and generate a conclusion. But I just want to test this out in a Google Doc first. The thing is, Google Docs requires HTML format, and I'm outputting this in Markdown. I could have output this in HTML.
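What the text aggregator is doing, by the way, is easy to replicate in code: one H3 heading plus snippet per bundle, blank lines between, all concatenated. A sketch:

```python
def aggregate_markdown(summaries: list) -> str:
    # Mirrors Make's text aggregator: "### headline", blank line, snippet,
    # then a blank line before the next bundle.
    return "\n\n".join(f"### {s['headline']}\n\n{s['snippet']}" for s in summaries)

digest = aggregate_markdown([
    {"headline": "Meta drops Llama 3.3 70B", "snippet": "A near drop-in upgrade over 3.1 70B."},
    {"headline": "Google's latest quantum chip: meet Willow", "snippet": "Google teases a new quantum milestone."},
])
print(digest)
```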
I could just ask GPT to do HTML, but I find that when you ask AI to output things in HTML, sometimes the format is a little bit off, or it's not quite perfect for the Google Docs module. When I do Markdown, there's a lot less room for ambiguity. And Make actually has a built-in Markdown-to-HTML converter that always outputs the result in a very easy, accessible format that works with the Google Docs module. So I'm going to convert this to HTML. Let's aggregate headlines and snippets. Sorry, my rice is burning. One sec. Okay, rice is definitely a little charred, but hey, we like it crispy. So we're going to aggregate the headlines and snippets in Markdown. Okay. I'm going to feed this in; this is just an example newsletter. Content's going to be this HTML, and let's just see what happens, right? Because we know this looks good; this is just about how it's going to look in the Google Doc. So I'm going to run this once. Ah, shit. I should have only done two. Yeah, whatever, that's fine; we're just going to aggregate all of them, I guess. And if you think about it logically, what we're doing is using one AI model to do the bodies, and then another AI model to do the introductions and the conclusions. I think that makes sense. We should generate a title as well. So this looks good to me. I don't like that there are no new lines, but I guess it's just a formatting thing. Yeah, you see how that's kind of stuck together? There's no space between them. It's just a formatting thing. This actually looks pretty good. I'm going to hit enter here. Yeah, I like how casual and Spartan the language is. I don't like stuff like this: "a bold claim that adds fuel to the AI fire. Imagine the possibilities if he's right." I think I can probably reduce the incidence of this by changing the temperature; I'm at like 0.8. Use a Spartan tone of voice, use a third person POV. I'm just going to remove the "try for three to five sentences total"; that'll probably be better. But anyway, cool. It looks pretty solid to me. So now we just want some sort of introduction, and then some sort of conclusion. The conclusion can probably be templated. If I just look at how I did this on my own newsletter: welcome to... so I'm going to say "Welcome to Loop," I guess. "Cutting-edge AI news explained in simple English." Looks good. "In this week's issue: one, two, three. Let's dive in." What's the conclusion like? You see how there are all these images here? There's also a bunch of links. If you really wanted this to crush, what you would do is take these headlines and feed them into a search engine model like Perplexity. Perplexity would return you a list of citations. You'd take those citations and then pass the snippet along with the citations through another GPT call and say: hey, your job is to add links; add these links to whatever this resource is, wherever relevant. It would then go through and add the links. And then for images, you could use something like Midjourney. You could have Midjourney generate an image once every three snippets or something, just formatted. And then it would actually do pretty well.
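About that Markdown-to-HTML conversion: outside of Make, it's one call with the standard Python markdown package:

```python
# pip install markdown
import markdown

digest_md = "### Meta drops Llama 3.3 70B\n\nA near drop-in upgrade over 3.1 70B."
newsletter_html = markdown.markdown(digest_md)
print(newsletter_html)
# <h3>Meta drops Llama 3.3 70B</h3>
# <p>A near drop-in upgrade over 3.1 70B.</p>
```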
I don't think I'm going to do that for this, just because if I do, this video is probably going to be three hours long. But I will give you all of the tools that you need in order to go out there and do it, little stubs and stuff like that, just to make it pretty simple. Maybe you could do the same thing with videos on YouTube and whatnot. Anyway, the thing that's important for us is the conclusion. This is good. I'm just going to copy this conclusion; it's just going to be the same conclusion every time. Pretty solid. So I'm just going to copy this, and this is going to go here. Oh, hold on. "That's a wrap," maybe we'll go. "That's a wrap. Enjoyed this? Consider sharing it with somebody you know. You can also follow me..." Well, this is going to be a pretty short format, so: "You can also follow me on Twitter if you prefer more straight-to-the-point AI news like this. See you next week. - Nick." Cool. And we need some intro, right? So that's what I'm going to do now. Let's take this. Let's use this to generate the introduction and conclusion. No, not the conclusion, sorry. What was that other thing? We need to generate a title, right? So we're going to go Generate Introduction and Title. We're actually going to use this to generate an AI-generated title. So: we are creating a newsletter that lists new and exciting developments in AI. Your task: take as input the newsletter and write an introduction and a title. Rules: casual Spartan tone of voice, third person POV. You know what? I'm just going to remove most of these rules, and then just say: use a casual Spartan tone of voice. Follow this template. Then I'm just going to go back here, scroll all the way up to the top of my thing, then go back and paste this in: "Welcome to Loop, cutting-edge AI news explained in simple English. In this week's issue:" let's do one, two, three. Let's do this. Return your output using this JSON format: introduction, and then title. We should probably do titles too: use these examples for titles. Why don't I just go back through my blog and get a couple of example titles? That's a good one. I like this one, kind of clickbaity. As you see, the clickbait did not start on YouTube; the clickbait has been happening for a long time. Let's do that. Okay, cool. Awesome. Right, so what I need is the newsletter. I just need to put in this text, and then I'm just going to go over here. You know where it says result? You can actually access parameters before they're even generated by going result.title. And then what I want next is result.introduction, I believe, right? Yeah, introduction. Okay, this should be good. Let's see how it goes. Paste the Markdown here. Let's run this bad boy. I'm pretty excited. I hope this works. It's going to be really cool, man. I love templating content like this, because even if, hypothetically, you wanted to use this for your own newsletter and it wasn't perfect, even if it got 80% of the way there (which these models are totally fine doing), you just 5x'd the leverage on whoever's time it is. Look at this. Ooh, that's clean, man. That is clean. Okay: "Meta launches Llama 3.3 70B to compete with the big players. Welcome to Loop, cutting-edge AI news explained in simple English. In this week's issue: one, two, three, four, five, six." You know, to be honest, I don't really need all six of these here. Like, you already know all the news from this. Okay.
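Same call pattern as the filter and rewrite sketches, just with this third prompt; only the JSON keys change. A sketch (I've folded in the pick-the-top-three rule I add in a second):

```python
import json
from openai import OpenAI

client = OpenAI()

INTRO_PROMPT = """We are creating a newsletter that lists new and exciting developments in AI.
Your task is to take as input the newsletter and write an introduction and a title.

Rules:
- Use a casual, Spartan tone of voice.
- Follow this template:
  Welcome to Loop. Cutting-edge AI news explained in simple English.
  In this week's issue: 1... 2... 3... Let's dive in.
- Pick only the top three news headlines for the introduction.

Return your output using this JSON format: {{"introduction": "...", "title": "..."}}

Newsletter: {newsletter}"""

def intro_and_title(newsletter_md: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": "You are a helpful, intelligent writing assistant."},
            {"role": "user", "content": INTRO_PROMPT.format(newsletter=newsletter_md)},
        ],
    )
    return json.loads(resp.choices[0].message.content)  # keys: introduction, title
```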
So let's just tell it to pick the top three for the introduction. Let's do that: pick the top three news headlines; don't use more than the top three news headlines. Let's just do that, because otherwise, I mean, there's no point; you're just reading the whole fucking newsletter right here. I mean, some of these are a little longer, like the Box CEO one. Yeah, and some of these are a little too short: "Google's latest quantum chip: meet Willow." How do I make this longer? What is the body of this? Oh, it's an image. You know what? We have to remove the images here. Just an image; that's not going to work for us. The reason why this is so short is because the body just says IMAGES and then it stops. So actually we need to adjust our filter back here: "or if it just contains an image link"... let's just see what the exact text was... "or if the body merely contains an image link with no context." There we go. This is going to force all of the ones that just say IMAGES to get out, so we're not actually going to have these anymore. So when we do our newsletter, we're not going to have to worry about these one-or-two-line things; ideally, all of the posts that are here are going to be substantially longer, which is cool. Okay. So we've got six here: generate introduction and title, Markdown, Google Docs. Let's run this through. And I want you guys to keep in mind that I'm using this Google Doc here just as an example. After this, we're going to take the same flow and connect it to MailChimp or Klaviyo or whatever newsletter provider I deem worthy. I am the one who deems newsletter providers worthy. Okay, we just ran it through. We're creating the Google Doc. I'm going to go down to web view link again, paste it, open it. And then I'm going to go and just make the spacing a little sexier. I want to see what this would actually look like. We could even do some formatting here; we could have this be bolded, maybe do something like that instead, because I'm using a colon down here. But anyway: "Meta launches this, and it might be boosting jobs," "NATO uses AI to persuade soldiers." Wow, this is pretty clean, not going to lie. And the "that's a wrap" looks good too. So yeah, I'm just going to change the introduction template to include a dash instead of the colon, just because I don't think it looks good in English to have two sentences, one immediately after the other, and both of them have colons like this. So I'm just going to use an em dash. And then, if we think about it, what we need to do is update the Google Sheet so that the post status is "published" or something instead of "new". So we just need a way to update this. The way you do so is you go to Google Sheets, go Update a Row, and I need to put this inside of this aggregator. Then I'm just going to choose Reddit Post Database... sheet name is going to be Generated... no. Oh, you know, we've got to dump this to a Google Sheet too; forgot about that. We're going to add in the row number from the Reddit post that we got. It actually returns the row number, so we know that, hey, when I update the post, I want to update row two. And then what we want is post status to be "published"; set post status to published. Maybe we should just call it Update Post Status; that's probably simpler. Oops. Do not consume any more of my operations, thank you very much. Looks pretty good.
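In code, that status flip is a one-cell update. Here's a sketch with gspread; the column index for post status is an assumption matching the sheet layout from earlier:

```python
# pip install gspread  (needs a Google service account configured)
import gspread

gc = gspread.service_account()
ws = gc.open("Reddit Post Database").worksheet("Scraped Posts")

POST_STATUS_COL = 23  # column W, where post status ended up in this sheet (assumption)

def mark_published(row_number: int):
    # Make's Update a Row does exactly this, using the rowNumber
    # returned by the earlier Search Rows module.
    ws.update_cell(row_number, POST_STATUS_COL, "published")

mark_published(2)  # e.g. flip row 2 from "new" to "published"
```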
And then, yeah, we also need to do one more thing. What I want to do here is go and add another row. So instead of going here, clicking Add a Row, and having to do the reconnection again, I know that I have one module that already adds a row. So I'm just going to paste this in and then remove this filter. And I do that just by pressing Command-C and then Command-V. And then what I want to do is add a row not to Scraped Posts, but to Generated Posts here. Now, I don't actually think I have the headings yet, do I? No. So I have to add headings. What headings am I going to add? Actually, I guess I don't know yet, because what I want to do is have a store in Google Sheets, like a database, that includes all of the campaign information. So yeah, technically I can't do this until I have campaign information, right? So let me just delete all of these variables, and then why don't we actually do the MailChimp stuff? That'll be smart. So let's head over to MailChimp.com. First of all, I know that there's some editable text area thing here, so I just have to really quickly read through the API and remind myself of this; I've done something similar quite a while ago. So, it looks like in order to populate a content area in MailChimp, you need to include mc:edit on the element whose content you provide. So yeah, this is interesting. I've tried doing this before, quite a while ago, but I'm not entirely sure how. So we're just going to jump into MailChimp, see if I have an account that works (which I do), and we'll go down to Email Templates, I believe, Create Template. What we want to do next, I think, is Code Your Own, Paste in Code. That's fine. Name it Loop template. Okay, great. We have all of our code here. Nice. What we want to do is just replace... hold on a sec. You see this body? You just want to replace everything inside of this with mc:edit, I believe, and save it. It's going to delete all this shit. Shit. Anyway, it should just say mc:edit. Nice. And then we just want to save and exit. That looks good. And now we want to head back into Make, and then we go to MailChimp, Create a Campaign. Let me actually add this before. And then, yeah, I did the connection with MailChimp. But what would the title be? I guess the title would just be the title. List/audience ID would just be Loop. Good. Subject line would just be the title. From name would just be Nick @ Loop. From email address would be my email; I'm just going to use this one. Reply-to name: nothing. Folder ID: okay. So: fill the body content by template ID, by HTML, format text, and then feed in this HTML, this output. And then what you want is to go to MailChimp again. You see where it says Perform a Campaign Action? This is what you're going to want to use. Select the campaign ID, and for the action, click Send. And now this is going to create the campaign in MailChimp, populate the text with the HTML that we just output from the Markdown-to-HTML module, and then send the campaign to everybody. Now, I'm going to get into the configuration of how exactly to schedule this and stuff like that in a minute, so I don't think we actually have to send this as part of the test. But that is more or less the logic here through MailChimp. You could probably do the same thing through Klaviyo as well; I actually don't entirely know for sure.
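For reference, here's the same create, populate, and send sequence via MailChimp's official Python SDK. A hedged sketch: the IDs are placeholders, and the sections key has to match whatever name you gave the mc:edit region in the template:

```python
# pip install mailchimp-marketing
import mailchimp_marketing as MailchimpMarketing

mc = MailchimpMarketing.Client()
mc.set_config({"api_key": "YOUR_KEY-us21", "server": "us21"})  # server = key suffix

# 1) Create the campaign (audience ID and names are placeholders).
campaign = mc.campaigns.create({
    "type": "regular",
    "recipients": {"list_id": "LOOP_AUDIENCE_ID"},
    "settings": {
        "title": "Loop - weekly issue",
        "subject_line": "Meta launches Llama 3.3 70B to compete with the big players",
        "from_name": "Nick at Loop",  # no @ sign allowed here, as we find out shortly
        "reply_to": "nick@example.com",
    },
})

# 2) Populate the template's mc:edit region with the converted HTML.
#    "body" must match the mc:edit name used in the Loop template.
mc.campaigns.set_content(campaign["id"], {
    "template": {"id": 12345, "sections": {"body": "<h3>...</h3><p>...</p>"}},
})

# 3) Send it (the "perform a campaign action" step in Make).
mc.campaigns.send(campaign["id"])
```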
The thing about a lot of these inbound or newsletter platforms is that a lot of them don't provide you a way to create a campaign using an API or update the content of a campaign; they actually want to lock you into dragging and dropping stuff. So I know MailChimp allows you to do this, definitively. Realistically, you could probably try this with all the other main ones like Klaviyo, ActiveCampaign, and so on and so forth. But yeah, this is what we're going to use, with that smiling yellow monkey. And then we're going to go Add a New Post. And then what we want is campaign, web type... okay. So I actually just need to do this once; we just need to see how this goes. So I'm going to go Run Once. We're actually going to connect all the pieces here, and this is really our end-to-end test. If you think about it, I'm assuming that we're sending this once a week; let's do Monday or something. Monday probably makes the most sense. Monday at like seven or eight. I'll show you guys how to hook that up, and the Apify side, in a bit. Okay. So, yeah, there's an issue here. Why? Oh, because of the @. Let's just go "Nick at Loop". Let's run this puppy again. Yeah, MailChimp told me what the error was, which is that there was an @ sign in the from field; apparently you're not allowed to do that. So I just got rid of that and re-ran it. So we just finished. We're now generating the intro and the title. Markdown. Creating the campaign in MailChimp, and then performing the campaign action. We actually sent it. Beautiful. Let me just see: an archive URL, long archive URL. I believe this is going to contain the actual body of it. Yeah, very cool. "AI ad passes, cybersecurity, retail and sports evolve, and AI benchmarks fall behind. Welcome to Loop, cutting-edge AI news explained in simple English. In this week's issue: three hits. Let's dive in." Nice. You see that this is different from previously? The reason why is because we're using different content. We marked the content in our sheet as published, and so the first time it ran it took the first six, the second time around the second six, and now we only have a few posts left. In this way, you basically always have new content for your newsletter. Pretty sick. Okay. So what do we want to do, though? I want all this data here: campaign ID, recipients, HTML, all this stuff. So I'm just going to do the same thing I did a moment ago, where I copy and paste this into ChatGPT. Good. Do this; turn this into the same thing. Sorry, let's just say: use camelCase for all this stuff, just because the other sheet was in camelCase, and I believe there's something to be said about the aesthetics of this. We're going to go to Data, Split Text to Columns, then bold this, then paste this in here. And then again, same font: we're going to go Inter. And then I'm going to go back here, and we just want to map all these fields in our Google Sheets module. So let's refresh the headers and go top to bottom: campaign, web type, create time, archive URL, long archive URL, email, send type, archive, recipients collection. Yeah, this is actually just going to be an object, isn't it? Yeah, that's kind of annoying. Sorry. Actually... oh, you know what? I don't like this collection stuff, but if we dump this in, we're not going to get any actual data from it; it's just going to say [object Object], probably. Hmm. Okay, for now, I'm just going to dump this in.
For the links, let's just join the links. Then this: plain text, long string, HTML, archived HTML. Okay, cool. We'll delete that. Looks pretty good to me. Let's just go here, and then what we want to do is only send this once a week, right? So I'm going to go to Monday, and we'll just go, I don't know, 6:00 a.m. And we'll say every Monday at 6 a.m. we send this puppy. And then... if you think about the sheet, we only have three new ones that are left. So why don't I go back in and scrape a bunch more, I guess; let's just do some more scraping. So let's just go to the limits, and instead of a hundred, let's do 200. Let's actually see how this works. And then what I want to do is hook this up and make this production-ready, right? There's a difference between when you're testing a flow versus when it's actually being pushed to production. Production just means reality. So when we tested this flow, we used this Get Dataset Items module, but what we really want is a way to trigger after the Apify actor is completed, and then to automatically get the right dataset, so that the dataset is dynamic: it gets updated, it's different. To do this, you click on the Get Dataset Items, you add the Watch Actor Runs module before it, and then you replace the static dataset ID with this defaultDatasetId value. And then I'm just going to set the limit to, let's say, 200. And then over here, what we have to do is actually add our own hook. So I'm going to call it Finished Reddit Scraper. I see that there's another one called that, so I'm just going to rename this, just to be safe. The actor I want is this Reddit scraper, this one here. Now, what this is going to do is run every time that we finish. And to show you guys how this works, I'm actually going to click Run Once, and then we're going to watch this get populated with a bunch of new posts. And after this is populated with a bunch of new posts, we're then going to go in and run the second scenario, as if it were the beginning of a new week and we were ready to pump out a newsletter. So I'm pretty excited. In order to get this hooked up, let's go back to Apify. We've got these three communities, which looks pretty good. I'm just going to click Save and Start. And now this is going to run, and we are just going to watch it run from start to finish and do one final end-to-end test of our whole flow. I consider end-to-end tests basically a requirement. It's one thing to test iteratively and to do things one module at a time. That enables me to build quickly with a lot of modules in my flow without necessarily breaking anything and having to wonder where the hell the error is. Like, for instance, if I just dropped in 10 modules and thought, yeah, this seems like it'll work, and then there was an error in my flow, it'd be hard to identify exactly where the problem was. Was it that module two is misformatting the data? That module six had the wrong function call? There's a lot that goes on. But if you test iteratively, with the first module, and you make sure that it works, inputs and outputs are as you expect; second module, that works, inputs and outputs are as you expect; third, fourth, fifth, sixth, and so on. Then, basically, the second an error occurs, you know that every other module before that module worked, so obviously the error is with this one. And this allows you to do debugging substantially faster.
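Under the hood, Watch Actor Runs is just a webhook Apify fires when a run finishes, and the payload carries that run's defaultDatasetId, which is what makes the dataset dynamic. Here's a sketch of the receiving end; the endpoint is hypothetical and the payload field paths are assumptions based on Apify's webhook docs:

```python
# pip install flask apify-client
from flask import Flask, request
from apify_client import ApifyClient

app = Flask(__name__)
client = ApifyClient("YOUR_APIFY_TOKEN")

@app.post("/apify-webhook")
def actor_finished():
    # Apify's ACTOR.RUN.SUCCEEDED webhook includes the run object,
    # so each trigger points at that run's fresh dataset.
    payload = request.get_json()
    dataset_id = payload["resource"]["defaultDatasetId"]
    items = client.dataset(dataset_id).list_items(limit=200).items
    # ...hand items to the filter -> dedup -> add-row steps...
    return "", 204
```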
That said, after you're done with that, in order to really make sure your flow works, you still need to test it end to end. My rule is that I always test as close to a production environment as humanly possible. If I'm delivering a project to a client, I'll ask myself: hey, what is the client going to provide as input? What sort of fuck-ups are they going to make when they fill the form out wrong or something? I basically try to put myself in their shoes to determine what my flow will actually look like start to finish. So this is currently running on a ton of posts; it looks like we have 70 results so far. A lot of these look like images, so most of them are going to be filtered out of my system, which is fine. I think we're going to end up with 200, so we've still got quite a ways to go. Let me think, what are some other things we could add to the system? Yeah, there are a bunch of these Midjourney APIs, which are unofficial. I don't know if Midjourney has an official API yet. Yeah, no, they only have the unofficial ones. But basically what you can do is hook up to something like this, and what it's doing behind the scenes is calling Midjourney through their interface and then returning the results to you like an API call. So what you could do is, over here where we generate the introduction and title, you could also generate three image prompts or something, then generate three images with those, and distribute them, say, every third snippet. And then, beyond images, I think I've mentioned the link stuff, but if you hook up Perplexity here, you could create a chat completion with Perplexity. When you look something up with Perplexity, let's take one of these Reddit thread posts for example, I'm going to feed this in, and look: it returns some citations. One, two, three. What you do then is feed in your snippet and say: hey, with these three links, this link, that link, and that link, I want you to go into my snippet, rewrite it, and insert the links as Markdown, or as an <a href="..."> tag. Now you have a dynamic snippet with a bunch of links inside of it; I'll sketch what that could look like below. There are many things you can do to make this better and a little cleaner, but I think you see it: once you have the core idea of a newsletter automation, you can take it any which way you want. Now keep in mind, I've scraped almost 200 entries here and it hasn't even cost me a dollar. Based on the math, those 200 entries are equivalent to at least 40, maybe 50, usable pieces of news. It's not great to post news from last month, but effectively that lets you keep going for a whole month. So this whole scraping cost could be less than a dollar a month, which to me is pretty crazy: a dollar for the data, probably another 50 cents or so for token usage and operations, and you get the whole thing done for a dollar fifty. You are off to the races; you are laughing all the way to the bank. Obviously you still have to pay money to MailChimp, or to whatever provider you're going to be sending the emails with, but yeah.
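Here's that Perplexity idea as a rough sketch. Perplexity's chat completions API is OpenAI-compatible and returns a citations array alongside the answer; the model name, prompts, and the example snippet below are my own placeholders, so check their current docs before leaning on any of it.

```python
import requests

PPLX_KEY = "your-perplexity-api-key"  # placeholder

snippet = "Staff claim AGI has been achieved internally with o1."  # example snippet

# Ask Perplexity about the topic; the JSON response carries a list of
# source URLs in a top-level "citations" field.
resp = requests.post(
    "https://api.perplexity.ai/chat/completions",
    headers={"Authorization": f"Bearer {PPLX_KEY}"},
    json={
        "model": "sonar",  # assumed model name; check Perplexity's docs
        "messages": [{"role": "user", "content": snippet}],
    },
).json()
citations = resp.get("citations", [])[:3]  # keep the first three links

# Then hand the snippet plus those links to your summarizer LLM and ask
# it to weave them in as Markdown:
rewrite_prompt = (
    "Rewrite this newsletter snippet, inserting these links as Markdown "
    "([text](url)) where they fit naturally.\n\n"
    f"Snippet: {snippet}\n\nLinks: {', '.join(citations)}"
)
```

The second call could go to whichever model is already writing your snippets, so the links come back pre-embedded.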
Okay, great. So we just finished this. We are now going into ChatGPT, and this flow is filtering through and adding the things that are relevant. As you can see, the first few were not new, which is why they've been filtered out: we had four that already existed, but only one that actually made it to the end. This is worth saying: if you're on a low tier with a low rate limit, you might want to add a sleep module before or after this part of the flow, because it's going to consume a lot of tokens and operations. If we go to usage, you'll see I'm consuming about 222 tokens per call and making roughly two or three calls a second, so that's something like 600 tokens a second. Times 60 seconds, that's 36,000 tokens a minute, which I believe puts you a little over the default limit for tier one accounts. So keep that in mind when you're doing token-heavy operations like this. Okay, this is obviously working, so let's head back over to our Google Sheet and see what's going on. We see some new posts; Reddit is joining the AI market, that's new. Scrolling all the way to the right, we've got a bunch more new ones. Pretty sweet. LLM hallucinations, the world's second fastest supercomputer. Ooh, very clean. I'm still noticing that a lot of these have the IMAGES text in the body, though, and I'm not liking it; now that I'm running this again, tons of them just say IMAGES. So there are a couple of things I could do. I could set a procedural filter: when the body contains IMAGES like this, I just don't allow it through. So "relevant" would not only mean passing the AI relevance check; it would also require that the body does not contain the text IMAGES, capital I-M-A-G-E-S with a colon immediately after. That seems pretty reasonable, and it's probably what I'm going to do. You could also update the AI prompt to handle it for you; the way I see it, if you're consuming this many tokens anyway, that's a quick and easy hack. In my case the procedural filter makes more sense, though, so that's what I'll do once this run finishes. Okay, we just finished with all of these. So I'm going to make sure the body does not contain "IMAGES", with the colon, I should say; this step will now only proceed if the body does not contain that string. Awesome. What we're going to do now is write a whole newsletter. Let me just check: am I sending this to myself? Will I get it? Yes, I've already gotten one. Cool. I'm going to run this now; it's going to go through and select six posts, and I want you guys to pay attention to what's happening here. We're updating each of these from new to published on the right-hand side, right? So you're seeing the sheet get incorporated into the flow. Pretty sexy. We're now going to aggregate the headlines and the snippets, write the intro and the title, convert it to Markdown, create the campaign, and then perform the campaign action.
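While that runs: if you wanted the procedural filter and the rate-limit throttle in code rather than as Make filter conditions, a minimal sketch could look like this. The body field name is an assumption about the scraper's output, and the half-second sleep is just a stand-in for a sleep module tuned to your own tier.

```python
import time

def is_usable(post: dict) -> bool:
    """Keep only text posts: drop anything whose body is an image dump.

    The scraper flattens image posts into a body containing the literal
    marker "IMAGES:", and those have nothing worth summarizing.
    """
    body = post.get("body", "")  # "body" is an assumed field name
    return bool(body.strip()) and "IMAGES:" not in body

def relevant_posts(posts: list[dict]) -> list[dict]:
    kept = []
    for post in posts:
        if not is_usable(post):
            continue
        # ...the AI relevance check would go here...
        kept.append(post)
        # Throttle so back-to-back AI calls (~600 tokens/sec here) stay
        # under a low tier's tokens-per-minute limit.
        time.sleep(0.5)
    return kept
```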
It does look like there's an issue with the Google Sheet, unfortunately, and it's what I thought: the collection is raising an issue here, which blows. Essentially, anything that has a nested collection like this is not going to work. So let's see. For the recipients collection, maybe we just want the list ID instead; I'm just going to store that. I'll replace the rest with specific variables too: the settings collection is just going to be the subject line, the tracking collection is just going to be HTML clicks, and the delivery status collection, oh, actually, we don't need the whole thing, we can just use enabled. I'm putting these in my URL bar so I can remember them later. Okay: tracking collection becomes HTML clicks, settings collection becomes, what did we say, subject line up here, and recipients collection becomes list ID. The links array, I think, should be okay; I'm not entirely sure. Hmm, did we just feed in the first one? Okay, cool, that should be sufficient now. I'm going to go back to my Google Sheet and update the headers, because I don't want this to break again; obviously we can only run this a limited number of times, right? Why don't we run this one more time? We want to update this column to list ID, call this one subject line, call this one HTML clicks, and then call this one, uh, I don't remember what we called that. Shit, probably should have written that down. Anyway, we did get all of the text, which is nice. There's this giant blue wall in front of us, because any time you add something to a Google Sheet it automatically inherits the style of the cell above it. That's what that's about, but we can fix it just by selecting all the elements and resetting the fill. Okay, great. And now we have the plain text long string with the whole newsletter listed here, which is cool, and here as well is the HTML. The plain text long string is going to include some extra stuff, like unsubscribe links and so on, just because that's what MailChimp appends; it doesn't actually appear like that in the real email. And voila, we now have our finished system and our newsletter. Had a lot of fun putting that one together. If you wanted to change the style, you could do so by changing the HTML template. But man, is that sexy? And is that completely autonomous? Last but not least, before we finish, let me show you guys how to schedule this so it runs whenever the hell you want. There were two scenarios, right? Scenario one sourced the data, and scenario two actually did something with the data. For scenario two, just head over to the scheduler and set it to days of the week if you want to send weekly, then pick Monday, Thursday, whatever, with the time down here, and make sure to clarify whether it's a.m. or p.m. Once that's done, turn the custom schedule on, click OK, then head back to the first scenario, the one with watch actor runs. You'll notice its scheduler shows a little lightning symbol, which stands for immediately: basically, it's awaiting a webhook.
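Circling back to that nested-collections problem for a second: the underlying issue is that a sheet cell wants a flat scalar, while MailChimp's campaign response is full of nested objects. Picking specific leaf fields in Make is the fix; as a sketch, the same flattening in code might look like this (the keys mirror MailChimp's documented campaign response, and the camelCase names are just our sheet's headers).

```python
# "campaign" is the JSON MailChimp returns after creating a campaign.
# Dumping a nested collection straight into a cell gives you junk, so
# pull one scalar leaf out of each collection instead.
def campaign_to_row(campaign: dict) -> dict:
    return {
        "campaignId": campaign["id"],
        "webType": campaign.get("type"),
        "createTime": campaign.get("create_time"),
        "archiveUrl": campaign.get("archive_url"),
        "longArchiveUrl": campaign.get("long_archive_url"),
        # Nested collections, flattened to a single leaf each:
        "listId": campaign.get("recipients", {}).get("list_id"),
        "subjectLine": campaign.get("settings", {}).get("subject_line"),
        "htmlClicks": campaign.get("tracking", {}).get("html_clicks"),
        "enabled": campaign.get("delivery_status", {}).get("enabled"),
    }
```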
So if you wanted this to work autonomously, you'd go to Schedules and create a new schedule in Apify, set it to weekly, and have it run whenever you want; in my case, Sunday at 12 a.m. UTC is perfect. Then go down to add, click Actor, and pick the Reddit Scraper Lite actor. The input is just going to be whatever those three or four subreddits you want were. We click save, then save and enable, and now we have a schedule that's triggered completely autonomously. I'll call it "scrape Reddit weekly." Basically, once a week that's going to trigger, and every time it triggers, it kicks off the dump-to-Google-Sheet automation, which was automation number one. After that's done, it automatically updates this big sheet of ours. And you don't have to do this once a week; you could do it two or three times a week if you wanted to source tons of posts. Then the second scenario, which will also run once per week, in our case Monday at 6 a.m. (again, you could run it as often as you want), will go through that sheet autonomously and update all of the new posts to published as they're sourced and used in actual content. In this way, we have effectively created a closed-loop automation that does all of this without any sort of human intervention, which is why an approach like this can be so goddamn powerful. I really hope you guys liked the video; I had a lot of fun putting the system together. If you have any questions about how I did it, feel free to leave them down below as a comment. And as I mentioned at the beginning of this video, I take a lot of my requests from viewers now, so I'm more than happy to build out a system, as long as it's one I haven't done before; I just don't like doing the same thing over and over again. Aside from that, do the fun YouTube stuff: like, subscribe, get me to the top of the algo. My ratio of watch time from non-subscribers to subscribers is going way up as I get more and more popular, so if you find yourself in the unsubscribed camp, please do me a solid and subscribe, and I'll catch y'all in the next video. Really appreciate the time.