Transcript for:
Workflow Automation Tips

In this video, I'm going to give you a list of all of the things that I wish I knew before starting with n8n. Tip number one: cheap AI models are underrated. A lot of people who start with n8n and want to build AI automations only want to build with the most expensive and most intelligent models. But what is often overlooked are the much cheaper and more efficient models. I'm talking about something like Google Gemini 2.5 Flash Lite, GPT-5 Nano, or maybe even some Chinese models. All of these cheaper models run at a fraction of the cost but still have an acceptable level of intelligence. And because they're so affordable, this opens up a whole set of new possibilities that just isn't feasible with the more intelligent and more expensive models. So definitely experiment with these cheaper and more efficient models; you're going to be surprised how good they have actually gotten. Tip number two: speaking of different AI models, you have to start using OpenRouter in your AI automations. So I have an OpenRouter node here, and OpenRouter is basically a service that allows you to use any available AI model through a single account. For example, here I can quickly switch between, let's say, GPT-5, Gemini Pro, Claude Sonnet, some open-source models, or even some Chinese models like DeepSeek, and I can switch between all of them with a single click, even if these models are from completely different companies. OpenRouter has allowed me to make my AI automations a lot better, because it lets you always find the best model for each individual use case or even for each specific task, simply because it's so easy to experiment with all different types of models. On top of that, it has some additional advantages. So if you're interested in using OpenRouter, I've actually recorded an entire video about it, and you can find it here on my YouTube channel. Tip number three is a key application.
See, if you build automations and AI automations, you need some way to communicate with and manage these automations. And the best interface for that is Telegram. You can use Telegram to receive notifications from your workflows, to receive approval requests, or to send a message to your AI agent so that it can start to do something. The reason why it is so good for that is, first of all, that Telegram is completely free. You can use it on your smartphone, but also on your computer and in your browser. On top of that, Telegram has a fantastic API that n8n has built a very detailed integration for. So you can see all of these different actions that are available within the Telegram node. And if you start to learn some of these actions, you can use Telegram as an amazing management and orchestration tool for all of your workflows and AI agents. Tip number four. A lot of you already know that you can reformat a workflow with this Tidy up button down here. However, the problem is, as you can see, that for larger workflows this doesn't really look good, because it just stretches them out in length. So what you can do instead is select some of your nodes and then click this Tidy up button; it will then realign only the selected nodes and make them look better, keeping the rest of the workflow aligned as it is. Tip number five: the Switch node is always better than the If node. At some point in your automations, you have to build some type of conditional branching: based on a condition, a different route will be taken in your workflow. For that, there are two nodes available in n8n, the If node and the Switch node. However, the Switch node is much better, because it can do everything the If node can do, but it can do it much better and has additional advantages. For example, right here with this Switch node, you can see I named my outputs.
So the upper output is called spam and the lower output is called not spam, because this Switch node does some type of spam categorization. And because I can name the outputs, I can immediately see what's going on. Whereas with the If node, the names of the outputs are always going to be true or false, and I cannot change that. So it's always going to be a little harder to see what's going on with the If node. Additionally, with the Switch node you can add as many conditional outputs as you want. For example, I can go into the Switch node and add a new routing rule, say a new output called maybe spam. And then you can see I've now got a new output that I can connect to some other part of my workflow. This is not possible with the If node. Also, the Switch node has some additional settings down here, for example Send data to all matching outputs, which is really useful. Those settings are not available in the If node either. Tip number six: data pinning and data editing. When you run your automation workflow, every time you run a test, n8n is going to send a request to all the external services that you've integrated. For example, here I get some posts from Reddit, and here I send a prompt to Google Gemini. There are two problems with that. First of all, every time I run it, it takes some time, a few seconds, to complete each request. But also, when I run a lot of tests, the service could view that as an unnaturally high number of requests and start to rate limit or even block me. To avoid these two disadvantages, after executing a workflow once for testing, you can just select some nodes and hit P on your keyboard. Now you can see they turned purple, which means the output data of these nodes is pinned: the output data is saved in n8n.
If we open up the node, you can see in the output data it says this data is pinned for test executions. When I now run the automation again, n8n will no longer send a request to the external service; instead it will use the pinned data. This means our test runs a lot quicker, because the data is immediately available, but it also means we are no longer at risk of being rate limited or even blocked by these external services for an unnaturally high number of requests. But the coolest thing about data pinning is that once you pin data, you have the option to edit it with this edit icon up here. If I click this, you can see I can change any section or any part of this output data however I want. So if, for test purposes, I want to try out some specific data for the rest of the workflow, instead of hoping that it will eventually appear in the response of this node, I can just pin the data and then manually enter the test data that I want to run experiments with. This is really, really useful and allows you to test a lot of different edge cases. Tip number seven: speaking of executing workflows for test purposes, whenever you want to execute a workflow, you can obviously come down here and click this orange button. But you don't have to do that. Instead, you can use a really useful keyboard shortcut, and that is Command+Enter on Mac or Ctrl+Enter on Windows. When you press that, n8n will automatically execute your workflow. Tip number eight is called the page size equals 1 trick, and that is one of my favorite small tricks in n8n. Now, before I show you this little trick, I would quickly like to ask you for a favor. If you've enjoyed this content so far, don't forget to leave a like on the video, because that signals to me that you like this type of content, and then I can produce more videos like this in the future.
Okay, the page size equals 1 trick. A lot of times when you analyze and optimize your automations, it is quite difficult to see which input led to which output, especially if there are some AI steps involved. So right here I have this AI block; it's an AI spam categorization node, and it receives some form submission data from the previous step. Let's switch to table view so that we can see all of them. So here are the form submissions that this node receives; it looks at these form submissions and responds with an AI spam analysis: whether a certain submission is spam, plus a short explanation for the decision. Now the problem is, as you can see, it's quite difficult to figure out which of these form submission inputs led to which of these evaluation outputs. Obviously, I could start to count: one, two, three, four, five. Okay, so this is number five. Then one, two, three, four, five, and this generated this output. But this is really annoying and also prone to a lot of mistakes. What you can do instead is something really interesting: down here, you just switch the page size to 1, and you do the same for the output data. Page size equals 1, and then n8n will only show you one input and one output at a time. Now each of these numbers down here corresponds to a single data item. So for example, if I want to see which output is connected to input number four, I just click on four here and click on four over here, and then I can immediately see: okay, this input, this form submission, led to this output, this analysis. And if you have lots of data, you can also hover over these three dots and click this forward icon; this will jump three items at a time, and you can do the same over here, three items at a time. Tip number nine.
If you want to turn a field in n8n into an expression field, you can obviously go down here and switch it from Fixed to Expression. But there's a much quicker way: you can also just type an equals sign into it, and n8n will automatically turn it into an expression field. Or there's another method, which is actually the one I prefer: you just start to type your expression, so the double curly brackets, then hit space, and n8n will automatically turn the field into an expression field and complete, or close, the double curly brackets. And that leads me to tip number 10, and that is: expressions in n8n are everything. If so far you've only been using n8n expressions to map data from a previous step into a field, then you've been really missing out, because these expressions are so much more powerful than that. For example, as you can see up here, I used a simple plus operator in this expression to generate a full link, because the link that I received from the previous step is only part of the full URL. So that it's clickable, I also have to add the domain prefix, and I use this expression to add the two together. Or here, for the content, I've used the double pipe operator, which is basically a fallback operator: if the text from the post is empty, then I want to use the URL to some external media instead. And those are just two small examples of the many, many use cases and possibilities that you have with these expressions, because whenever you edit an expression, you can see down here there's this tooltip: anything inside the expression is JavaScript. This means inside of these expressions, you can use the full power of JavaScript. And the cool thing is you don't even have to know how all of JavaScript works; you just have to know this very small part of JavaScript, which is expressions.
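To make those two expression patterns concrete, here is the same logic as plain JavaScript, which is exactly what runs between the double curly brackets of an n8n expression. This is a sketch: the field names (permalink, selftext, url) are made-up examples of what a Reddit post item might contain, not the exact fields from the workflow in the video.

```javascript
// Hedged sketch of the two expression patterns described above.
// The item and its field names are hypothetical examples.
const item = {
  permalink: "/r/n8n/comments/abc123",
  selftext: "",                          // the post text happens to be empty
  url: "https://i.redd.it/example.png",  // link to external media
};

// 1) The plus operator: concatenate a domain prefix with a partial link
//    to build a full, clickable URL.
const fullLink = "https://reddit.com" + item.permalink;

// 2) The double pipe operator as a fallback: if the post text is empty
//    (falsy), fall back to the external media URL instead.
const content = item.selftext || item.url;

console.log(fullLink); // https://reddit.com/r/n8n/comments/abc123
console.log(content);  // https://i.redd.it/example.png
```

Inside n8n, each of these two expressions would simply sit between double curly brackets in a field, with the item data coming from the previous node.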
And in case you're interested in learning more about these expressions and the more technical aspects of n8n, feel free to check out the course that I've created, called Technical n8n Simplified. In that course, I teach you all of the technical aspects of n8n completely from the ground up, including expressions. So if you're interested in that, I'll put a link in the video description. Tip number 11: data search. Say you have a node with some input or output data that is very complex. For example, here I get some posts from Reddit, and you can see there is a huge number of fields in this data item; it's quite convoluted, and it's really hard to see what's going on. Let's say I need to find an ID field, because I want to add the ID inside of here. Obviously, I could start scrolling down this page and hope that I eventually run into the ID field by accident. But what you can do instead is just click on this search icon here and search for whatever you want to find, and n8n will highlight the search term anywhere it occurs in your data. So let's search for the ID. There it is, highlighted, and then I can drag it into this field. This obviously also works in table mode; if you search for something here, let's say I search for n8n, you can see it highlights that search term wherever it finds it. And the same holds true for the JSON view. Tip number 12: human in the loop. This is also one of my favorites. A lot of people don't know that in n8n you can make your AI results absolutely perfect, I mean 100% perfect, and that is by using a node from this category called Human in the Loop. It's described as: wait for approval or human input before continuing. If I click on that, you can see it opens up all of these chat applications like Telegram, WhatsApp, Gmail, Discord, and so on.
Basically, the way this works is that you can use any of these chat applications and make your automation ask you for approval before it can continue. It will send you an approval request on any of these platforms, and only once you approve will the automation continue; if you reject, the automation will stop. So let's have a look at a really interesting example here. Right here I have this AI node, and the task of this AI node is as follows. It's using Claude Sonnet in the background, and the system prompt is: you are an expert at writing controversial Twitter posts to create maximum engagement. I will provide you with a subtopic of digital marketing and you have to write an extremely provocative and controversial tweet. Only write a single post, not multiple posts, and so on. So the task of this node is to write an extremely controversial tweet, because these posts usually tend to perform quite well and get a lot of engagement. However, the problem is that sometimes they can be a little too controversial and too provocative. So let's run an example. Up here in the user prompt, I have to enter the topic of the post; in this case I wrote LinkedIn content marketing. Let's execute this step. Okay, let's see what it generated: LinkedIn thought leaders posting the same recycled motivational garbage every day aren't marketers. They are digital parasites feeding off algorithm dopamine while adding zero actual value to anyone's professional life. So as you can see, it is quite controversial and very provocative, maybe a little too controversial and provocative, and that is why in the next step I've added this human-in-the-loop ask-for-approval node with Telegram. The operation it uses here is send and wait for response, which is basically the operation you get when you select Telegram from the Human in the Loop category. And then here I defined a message.
In this case, it only asks me if I want to approve this new Twitter post, followed by the text of the generated post. Let's execute this: I'm going to pin this post here and execute the workflow. And maybe you heard it, I already received a message on Telegram. You can see right now the workflow is still executing; this circle is still spinning, and it is just waiting for my approval. So let's go to Telegram. I received this message: do you want to approve this new Twitter post? And then the post it generated. In this case, I think it's too provocative, so I'm going to decline it. Okay, and back inside n8n, you can see it received some data with approved set to false. In the next step, I just added a Filter node that removes all the posts that I reject; only those posts where approved is set to true are actually sent to Twitter. And this is such a great way to give your AI full autonomy while still remaining in full control of the output quality. Tip number 13: start to learn RAG as soon as possible. RAG stands for retrieval-augmented generation, and this is just a fancy term which basically means that you provide data to the AI that it has not been trained on. That could be some relevant examples that you want to provide to the AI, or some parts of a PDF document, things like that. And the reason why it makes the AI so much better is this: the AI models that we're using for our automations have gotten really, really smart. However, if they have not been trained on a specific use case, or if they do not know something, they cannot give you a great result. But if you provide the necessary data, examples, or information to them, then they will give you an excellent result. A lot of people are quite intimidated by RAG because it usually involves vector databases. However, it is not really that difficult.
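The core idea of retrieval-augmented generation can be shown in a few lines: retrieved data is simply pasted into the prompt before it is sent to the model. A minimal sketch, with the caveat that the context and question are made-up examples and the retrieval step, which in practice is usually a vector database lookup, is represented here by a plain constant:

```javascript
// Minimal illustration of the RAG idea: data the model was never trained on
// is retrieved first and prepended to the prompt as context.
// In a real system, `retrieved` would come from a vector database query.
const retrieved = "Refund policy: customers may return items within 30 days of delivery.";
const question = "How long do customers have to return an item?";

// The augmented prompt that would be sent to the AI model:
const prompt = `Answer using only the context below.

Context:
${retrieved}

Question: ${question}`;

console.log(prompt.includes(retrieved)); // true: the model now has the policy in front of it
```

The model then answers from the supplied context rather than from whatever it may or may not have memorized during training.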
And if you're interested in learning how to set up your own RAG systems, then subscribe to this YouTube channel right now, because that's one of the next videos I'm going to upload here on this channel. When you subscribe, you're going to be notified. Tip number 14: do not overuse AI agents. A lot of people that are new to n8n discover it and want to use it because they've heard how amazing these AI agents are. And yes, they are absolutely amazing: you can give them a lot of autonomy, and in my opinion they can be quite revolutionary. However, you should not underestimate the traditional way of running AI automations, and that is in a sequential execution order. For example, here I have such a traditional AI automation, and it essentially just contains three straightforward AI text prompts, this one, this one, and this one, with some logic in between. The great advantage of these sequential, traditional AI automations is that you have full control over what happens. You can fine-tune and adjust a lot and make the AI perform a task exactly how you want. And because you have so much more control over what happens, usually your costs for these AI automations are going to be lower than for a similar AI agent. On top of that, in some cases you just have much better control of the output and thus get higher-quality results. Don't get me wrong, AI agents are super useful, but do not discount this sequential way of building AI automations. It always depends on the use case. Tip number 15: import curl. See, the most powerful node in n8n is actually a node called the HTTP Request node, because it allows you to integrate any third-party service or functionality, even if n8n has not built an integration for it. And a lot of people don't know that the HTTP Request node is actually quite easy to use, even if you do not fully understand everything that's going on here, because of this button up here which says Import cURL.
See, every third-party service that you can integrate in n8n has an API, which basically means that your automation can access that third-party service automatically. And in almost every third-party service's documentation, you're going to find an example of how to send an HTTP request in this cURL format. You just have to dig through the documentation a little bit, but usually you can select different programming languages, and one of them is going to be cURL. In this case, I'm trying to integrate something from Anthropic, some Claude model. I display the request in cURL format, and then I can just copy it without really understanding too much of what's going on. Back in n8n, I just click this Import cURL button, paste it in, and click Import, and n8n automatically fills out all of these different fields with the data from the cURL command. Now you're basically 80 to 90% of the way there and only have to make a few small adjustments to get this HTTP request to work exactly how you want. Tip number 16: managed credentials. Sometimes in n8n you want to integrate a service, for example Anthropic with some type of Claude Sonnet model and a text prompt, and then you find out that n8n has not implemented a certain functionality that this application offers. What you can do in this case is use an HTTP request to implement this functionality yourself in a custom way. However, the really cool thing is that you can still have n8n manage your credentials. So you don't have to worry about the authentication and storing passwords or access tokens; instead, up here you can just click on Authentication and select Predefined Credential Type, and then, if n8n has an integration for it, you can just search for it, in this case Anthropic.
And you can see it automatically loads my default Anthropic credentials that I can now use for the request, so I don't have to worry about copying and pasting these access tokens myself. Tip number 17: never miss important n8n updates. If you always want to stay up to date on n8n's latest features, you can go to this link: github.com/n8n-io/n8n/releases. On this page, n8n publishes all of the new versions and updates that they have released. Usually they release one bigger version with new features every single week, and you can identify these bigger updates because the version number always ends in 0; that means the version includes some new features. For example, this version was released two days ago, and here you can see a list of the new features that n8n has added. And obviously, you can also just subscribe to this YouTube channel, because whenever there's an important n8n update, I will make a video about it. Tip number 18: back up your workflows. One of the first automations that you build in n8n should be a backup automation. The way this works is that n8n has a node called n8n; basically, this node allows n8n to control itself. One of the actions that you can take within this node is called Get Many Workflows, this one right here. I've already added it to the automation here, and if we open it up, you can see it's quite a simple node. When I execute it, it gives me all of my workflows in the form of JSON data; you can see it's quite a lot of data. In the next step, you can store that data in some type of cloud storage, for example Google Drive, or in some type of S3 bucket. This way, if anything happens to your account, or especially to your self-hosted n8n instance and your server, you will still have your workflows and do not have to build them from scratch again.
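The Get Many Workflows action talks to the n8n instance's own REST API, so you can also run the same backup from outside n8n. A hedged sketch of what that might look like, assuming the standard v1 public API endpoint and the X-N8N-API-KEY header; the base URL and API key are placeholders for your own instance:

```javascript
// Hedged sketch: fetch all workflows from an n8n instance's public API
// so they can be saved to cloud storage (Google Drive, an S3 bucket, ...).
// Assumes the v1 public API; baseUrl and apiKey are placeholders.
async function fetchAllWorkflows(baseUrl, apiKey) {
  const res = await fetch(`${baseUrl}/api/v1/workflows`, {
    headers: { "X-N8N-API-KEY": apiKey },
  });
  if (!res.ok) throw new Error(`Backup request failed: ${res.status}`);
  const body = await res.json();
  return body.data; // array of workflow objects as JSON
}
```

Whether you do it with the n8n node or with a small script like this, the important part is the same: the workflow JSON ends up somewhere outside the instance it came from.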
Tip number 19: you can use the Set node of n8n, also called the Edit Fields node, to create data items in your workflow with static data. Right here, for example, I've added such a Set node and called it website information. It includes two fields, one called name with the value marketing site, and one called url with the value example.com. When I execute the step, this node returns the fields in the form of a single data item that I can use in the rest of my automation. So right here, for example, I have this uptime monitoring automation that checks whether my website is still online or whether there are some problems I have to look into. In all of these steps, I use the data from the Set node: for example, here I use it to visit the URL, and here in the notification I use the name and also the URL, so that when I receive this message I can click on it and immediately see what's going on. Tip number 20. The only downside of this Set node approach is that it always creates a single data item. But let's say I want to check multiple of my websites; then this approach would not work. What I can do instead is remove this node and add a Code node, and let's also call it website information. Inside the Code node, you set the mode to Run Once for All Items and the language to JavaScript. Then you remove all of the boilerplate and just type return and square brackets, because we want to return a list of items, and then for each item you type curly brackets and then the fields of the item. So one was name: marketing website, url: example.com. Now I can copy this and, with a comma in between, paste it multiple times, and this will generate multiple data items. For example, this one I could call documentation website and this one support website. Let's execute this, and you can see I get the data items as a result, three items this time.
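The Code node built in this tip can be sketched as follows. Two caveats: inside n8n you would write only the bare return statement (it's wrapped in a function here so the snippet runs on its own), and the documentation and support URLs are made-up examples, since the video only names example.com. I'm also assuming the common n8n convention of wrapping each returned item in a { json: ... } object.

```javascript
// Sketch of the "website information" Code node
// (mode: Run Once for All Items, language: JavaScript).
// Each returned item is wrapped in { json: ... }, the shape n8n items use.
function websiteInformation() {
  return [
    { json: { name: "marketing website", url: "example.com" } },
    { json: { name: "documentation website", url: "docs.example.com" } },  // hypothetical
    { json: { name: "support website", url: "support.example.com" } },     // hypothetical
  ];
}

console.log(websiteInformation().length); // 3: one item per website
```

Because the node now emits three items instead of one, every downstream node runs once per website, which is exactly the behavior the single-item Set node could not give you.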
And this means the rest of the automation will now also execute multiple times, depending on how many output items we get here. Tip number 21: multi-cursor. A lot of people do not know that in n8n, inside the Code node, you can do the following. You click somewhere and have a cursor placed; then you hold your Option key on Mac or your Alt key on Windows, and when you click somewhere else, this creates a second cursor, and if you click again, it creates another cursor. This way you can have multiple cursors and can edit text in multiple places at the same time. For example, I could turn website or name into website name. And there's a second way to do this: if you select a piece of text, for example url, and then hit Command+D on Mac or Ctrl+D on Windows, this will select the next occurrence of that piece of text. If you hit it multiple times, it will select all of the occurrences of that text, and then you can likewise edit it in all places at the same time. Tip number 22: AI pre-filtering. A lot of people who want to build AI automations in n8n want to use the best and most intelligent models, and that is a good idea if you want to build the best AI automations. The problem, however, is that these intelligent models can be really expensive. To reduce the cost while keeping the quality extremely high, you can use the so-called pre-filtering method. The way pre-filtering works is that in between your input and your expensive AI operation with the intelligent model, you put another AI operation, an AI pre-filter. This operation uses a much, much cheaper model; in this case, I use Google Gemini 2.5 Flash Lite, a super cheap and efficient model.
The task of this pre-filter step is to remove the 70 to 90% of the input that is obviously not relevant and only keep the hardest 10 to 30%; only this is then passed on to the more expensive, more intelligent model, which thus has much less work to do, and you save a lot of money. If you're interested in a more detailed tutorial on how pre-filtering works, I've actually recorded a video about it a few days ago; I will show it here on the screen right now, and you can watch it and learn more about this technique. Tip number 23: you can use the Set node, the Edit Fields node, to simplify data. Right here in this workflow, I get some articles from a news website. As you may be able to see, the response data is very big and very convoluted, with lots of different pieces of data, most of which I don't really need. To solve this issue and make the data much clearer, after this article step where I get the latest posts from the news website, I added a Set node. In this Set node, I extract the most important content from the input data, in this case the title, the URL, the text of the post, and the author, and return only that as part of my data. If we look at the output data here, you can see it is much easier to see what is going on. It is much clearer, and this also makes it so much easier to work with the data later, in the next steps of the automation. Tip number 24: learn to read the execution logs. If you want to analyze your workflows and see if everything works as expected, for example whether your AI does what it is supposed to do, you can obviously open up the individual nodes, look at the input and the output data, close them, go to the next node, look at the data there, and so on. But this is not super efficient. What you can do instead, down here, is click on this bar, or on this up arrow here, and this opens up the execution logs.
This is basically a much more compressed and quicker-to-use overview of all of your execution data. Here on the right side, you can activate the input and output data, and then you can very quickly switch between all of the different nodes and immediately see both the input and the output data right next to each other. Super useful. And here, the same as inside the node window, you can also switch between the schema, table, and JSON data views. So if I switch this to table, for example, you could use the trick that we discussed today in tip number eight, the page size equals 1 trick, in combination with these execution logs. This way you can go through the data really quickly and see whether everything works as expected or whether you still have to make changes somewhere. Tip number 25: the configuration node. Most of the time when you set up a workflow, if you think about it, the settings or the configuration of that workflow are spread across the different nodes. For example, in this first node I have set a limit for the maximum number of posts that I want to retrieve; that could be counted as a setting. Or here in this AI block, I have defined the response format that I want the AI to return; that is also a setting. Or here in this Telegram node, I've determined which chat ID the notification should be sent to. What you can do instead is create a new node, and you add it as the second node in your workflow, right behind the trigger; you use a Set node for that. Here you just define all of your settings as individual fields. I usually call this node config, or configuration. Then we can start to define our settings: the first one was the maximum number of posts that we want to retrieve, 10; then the AI response format, this one right here; and the Telegram chat ID where I want to send the notification to, this ID here.
Now what I've done is extract all of the important settings and configuration of this workflow into this single node. I can then go inside all of these nodes and replace the hard-coded values with my settings. So instead of writing the 10 here, the limit, I pull in whatever number is set in the configuration node. We do the same thing for our AI block with the response format: I don't hardcode it here anymore, but instead go to the configuration node and pull in the AI response format from there. And the same with the Telegram chat ID: I just pull it in from the configuration node as well. Now my automation works exactly the same, except all of my settings are in this single node. This has the great advantage that, first of all, I can quickly make changes and test out different things, because I only have to make the edits in a single place. But also, if I make changes to my workflow settings in this node, I can be confident that the changes have been made in all of the places. Whereas if I spread my settings across all my different nodes, it can sometimes happen that I make a change somewhere but forget to make it in all the necessary nodes, and then my workflow will no longer work as expected. Tip number 26: load execution data. If you want to make changes to your automation but currently don't have any execution data available in your nodes, you don't have to execute your workflow again. Instead, you can just click up here on Executions, where you have a list of all of your previous executions. You can select one and then click Copy to editor, and this will paste the execution data into your workflow, so you can start to make your changes based on that loaded execution data. Tip number 27: copy JSON. Sometimes you might need to take your execution data and bring it over into another application.
For example, if you want to run an AI prompt based on the execution data or do some analysis of it, what you can do is switch the view up here from schema to JSON, so the data is displayed in JSON view. Then, when you hover over the data, you get this copy icon. You can click it and then go, for example, to ChatGPT and paste the data as part of your prompt. Tip number 28, name your nodes. This is one of the most important tips to keep your automations organized. So for example, if I want to add a new node between these two nodes here, let's say a filter node, a lot of people would just add this filter node, set up some type of filtering condition, and then leave it as it is, with the name "Filter". But this is not a good name. All of these default names are usually really bad, because "Filter", what does that mean? It means absolutely nothing. Instead, you should name your nodes in a way that by looking at the name, you can immediately tell what the node does. In this case, the task of this filter is to remove all of the non-relevant articles. So I'm going to call it "Remove non-relevant articles". And then when I come back to this automation in a few weeks or even a few months, I look at it and immediately see what's going on. Tip number 29. When you are testing your automations, a lot of times you receive much more data than you need for your test purposes. For example, here I pull the latest 50 articles from a news website. That is important later for the live version of this automation, but during testing I do not need the latest 50 articles; three articles, for example, would be more than enough. So what you can do is, after you pull in your data from some external service, you put a limit node into your workflow and then, for example in my case, restrict it to a maximum of three items.
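Conceptually, the limit node behaves like a simple slice over the incoming items. This is just an illustrative sketch in JavaScript, not actual n8n code:

```javascript
// Illustrative sketch of what a limit node does (keep-first-items mode):
// it caps how many items continue downstream, without touching the items themselves.
function limitItems(items, maxItems) {
  return items.slice(0, maxItems);
}

// Pretend these 50 items came from a "fetch latest articles" step.
const articles = Array.from({ length: 50 }, (_, i) => ({ json: { id: i + 1 } }));

const forTesting = limitItems(articles, 3);
console.log(forTesting.length);     // 3
console.log(forTesting[0].json.id); // 1 — item data is passed through unchanged
```

Because the items themselves are untouched, the data structure downstream stays identical, which is why the node can be removed after testing without any other changes.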
And this way, when I execute this now, it executes much quicker, because it only has to go through three items, which is more than enough for test purposes. But the really cool thing about the limit node is that it does not affect the data structure of your workflow. It passes through whatever items it receives; the only thing it changes is the number of items. And that means once you're finished testing your workflow, you can just remove the limit node, and that's all the changes you have to make. Tip number 30. If you split up your workflow into multiple subworkflows and you want to open a subworkflow to see what's going on or to make some changes inside of it, you don't have to open it up manually. Instead, you just find your subworkflow node, hold your command key on Mac or your control key on Windows, double-click it, and it automatically opens up in a new tab where you can have a look at it or make the necessary changes. Tip number 31, always set spending limits for every AI provider that you use. Most of the big AI providers, like for example Anthropic here, allow you to set a monthly spending limit. That means if you have some type of mistake or bug in your automation and suddenly your automation executes a thousand times, then with a spending limit you are not going to run into a huge surprise bill. Instead, once it reaches, say, $50, it will just stop executing. And I don't know how you feel about it, but after setting up these spending limits, I felt so much more confident trying out new things, making changes in my automations, and running small experiments, because I don't have to be worried about running into a huge surprise bill. Tip number 32, AI is more than just text prompts.
So in the beginning, when I started with AI automations, I mostly used OpenAI's or Google Gemini's "message a model" action or the AI agent node, where basically you just send a text prompt to an AI and you get text back. But AI is so much more than just text. For example, have a look at all of the actions that we can take inside of this Google Gemini node. Yes, we have this text action, "message a model", but there are also other functionalities, like "analyze a video". You can have the AI literally watch a video and analyze it for you. Or you can generate a video, analyze an image or a document, or transcribe a recording. There's so much more to AI than just the basic text prompts or the text-based AI agents, and I encourage you to have a look at all of these other advanced functionalities. Tip number 33, the dot notation. Inside of the set node, you can group related fields together. So for example, in this case I extract the important data from a news post, and it contains the fields title, URL, text, and author. But in my opinion, the fields title, URL, and text kind of belong together in a group, because they all represent some aspect of the content of that article. And inside the set node, you can just prefix a field name with a group name followed by a dot; that's why it's called dot notation. So content.title, content.url, and content.text. And if I execute this again now, you can see these three fields, title, URL, and text, are now grouped within a separate object called content. This makes it much easier to look at, but it also makes it really obvious which of these data fields belong closer together. Tip number 34, item linking. If you want to be good at building n8n automations, you have to fully understand the concept of item linking.
And item linking basically refers to the concept that whenever data items are passed through the automation, every data item has a parent data item, and that parent has another parent item, and so on, until you're back at the beginning of the workflow. And whenever in n8n you reference data from a previous step that output more than a single item, the item linking concept becomes really important, because somehow n8n has to determine, for example, if in this step you reference data from this "post info" step, which of the 10 items that the "post info" step output it should use. Tip number 35, and then I'm also going to give you a cool little bonus tip. Tip number 35 is: look at alternative table applications. A lot of people use Google Sheets in n8n, and it's one of the most popular integrations. It's not a bad application, but in my opinion it's also not really user-friendly. A lot of people use Google Sheets because it is free and very popular. But ever since I started to use Airtable as the database for my automations instead of Google Sheets, I look forward much more to working with this table-based data, because it's just so much easier and so much more fun to use. Now, Airtable is a paid application, or, I mean, it has a free plan, but only a very small one. So if you don't want to pay for Airtable, you can instead use self-hosted table-based applications like Baserow, NocoDB, or SeaTable. You can self-host them in a very similar way as you would self-host n8n. And this way you can store an unlimited amount of data without paying for a subscription, while still using a really user-friendly table application. Okay, and now I'm going to give you my small little bonus tip at the end.
But before I do that, please don't forget to like this video if you enjoyed this content today, because that signals to me that I should produce more videos just like this. Okay, and now to the bonus tip. If you want to use data from a previous step inside of a set node, and you just want to pass it on completely without changes, then you don't have to click on "add field" here, write the name of the field, and then pull in the value of the field. Instead, you can just drag the value directly into this area. It turns green, you drop it, and it automatically adds the field with the correct name and the correct expression referencing the data from the previous step. Okay, thank you for tuning in. My name is Mike. I wish you a great day. Bye.