A few days ago, make.com released a beta version of their brand new AI agent system. In today's video I'm going to compare it against n8n's AI agents to help you figure out which is the best system for your AI automations. I'll be breaking down the differences across a number of key categories, and the first one up is the user experience, or the ease of setting up the agents.

Creating AI agents in n8n is super simple. You click Create Workflow and add a first step: you need some sort of trigger, so we'll put in a chat trigger so you can interact with the agent through a chat interface. From there you just click the plus, type in "agent" and away you go, here's your AI Agent node. You can then add your system message (your system prompt), and you have these legs coming out of the node where you can add an LLM chat model, a mechanism to retain memory, and your various tools. We'll just add OpenAI, I already have a credential set up, and leave it on GPT-4o mini. From there all you need to do is click Open Chat and say hello; that hits the AI agent, it triggers the chat model, and you get an answer. So the actual setup of AI agents in n8n is incredibly intuitive and simple.

They have done things differently in make.com. You now have a tab for AI agents, and when you click it you can create an agent, so within make.com you define the agent up front and then embed it in your scenarios. Let's call this our research agent. You choose a model, and then you have an area for your system prompt where you can add text. This functionality is in beta, so I'm sure these interfaces will improve, but for example you can't really expand this system prompt field out; that's something I'm sure they'll fix. If you type in anything and click Save, you get to the agent edit page where you can expand it, so we'll put in a basic prompt: "You are a helpful research agent." You'll also see that you can add system tools here, which is the equivalent of adding tools to the agent within n8n.

To add tools you click the Add button, and all of the scenarios in your account that are set to scheduled, on demand or immediately will show up. This is how Make have built their agent: the tools are scenarios. The equivalent in n8n is the Call n8n Workflow tool. The thing is, within make.com you can only use scenarios, you have no other option, whereas with n8n you have a lot more choice. Yes, you can call another workflow (the equivalent of another scenario), but you can also just use an HTTP Request tool that hits an endpoint and gets some data back, so you don't need to wrap it in a workflow or a scenario. And then there's a very long list of other tools that you can hook directly into your agent, again without wrapping them in scenarios. In a way this makes the creation of agents a lot easier in n8n.

Let's say you wanted a Google Calendar tool, for example. You just add it there, and the newer versions of n8n have these great buttons you can press that basically let the model decide what data to inject into each field. So we're letting the model choose the start and end dates, and we'll give it a description by pressing that button so the AI populates it automatically.
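Under the hood, those "let the model fill this in" buttons turn each field into an expression that the agent resolves at run time. Here is a rough sketch of what the Google Calendar tool's fields end up looking like; the key names and descriptions are just examples I've made up, not what the video shows verbatim.

```javascript
// Sketch of the Google Calendar tool's fields once the "let the model decide"
// buttons have been pressed: each field becomes a $fromAI() expression that the
// agent fills in when it decides to call the tool (key names are illustrative).
const calendarToolFields = {
  start: "={{ $fromAI('eventStart', 'Event start as an ISO 8601 date-time', 'string') }}",
  end: "={{ $fromAI('eventEnd', 'Event end as an ISO 8601 date-time', 'string') }}",
  description: "={{ $fromAI('eventDescription', 'A short description of the event', 'string') }}",
};
```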
We'll save that and then trigger it: "Create an event to review our AI agent tomorrow at 12:00 noon in our offices." As you can see, it created the event, and we have our description at the bottom, "review AI agent". I hadn't even tested this, so on the first shot it did create it, but it's missing the title and the time zone is off on the timing; all of that can be fixed up in the tool's parameters within the agent. And here's an example of another agent we have in n8n, a social media agent, where it's easy to visualize the various tools it has based on the connections to the node.

Needing to create a scenario for every tool in Make is a little clunky, particularly because when you get to the point of adding tools there's no button to create a scenario; you can only choose scenarios that already exist. So you need to save, come over to the scenario section, and create a new scenario. Let's do the same thing: Google Calendar, create an event. You then work through the OAuth, the same as in n8n, and from there you choose your calendar and set your event name, start date and end date, again the same as n8n. Now, we want these to be dynamically populated, so just put in "test" there for a second. Actually, we can't put "test" there, so we'll choose a date, which requires a specific date format.

What you need to do in make.com is use scenario inputs and outputs: you define that the inputs for this scenario, this tool, are an event title (required), a start date-time, and an end date-time, which we'll also set as a date. If we save that... well, we can't save, because we need to set the scheduling to on demand, and we can't change it to on demand because we haven't saved it, so we're locked in a bit of a loop here. Again, this is beta; I might need to refresh and start again. We'll save the scenario first, and now maybe add the scenario inputs up front. Again I still can't save, because I need to set it as on demand, and I can't set it as on demand because I need to add a module. So let's just add a module (not the calendar one for now), and now we can change the scheduling setting. That's a quirk that definitely needs to be ironed out on the make.com side. We'll set this as on demand and save, but clearly we don't want that placeholder module, so we'll go to Google Calendar and back through the process: Google Calendar, create an event.

Now that we have the scenario inputs set, we can select them under the scenario input variables: there's the event title, drag that in, then the start date, and click Save. Now let's delete that placeholder, which moves the trigger across, and save. I've set it to on demand down here and you can see it's active; the scenarios need to be active for the agent to actually trigger them. So now, if I come back into my agent and click Add, I can see my create Google Calendar event tool. To add it you do need to provide a scenario description, something like "this tool creates an event in my Google Calendar", then save and click Add. We'll call this a personal assistant agent. Great, we now have our agent on make.com.
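Just to make concrete what this tool now exposes to the agent: when the agent decides to use it, it effectively fills in the scenario inputs we defined. Conceptually it's passing something like the object below; the field names match the inputs we set up, but the exact wire format is Make's internal concern, so treat this purely as an illustration.

```javascript
// Conceptual arguments the agent fills in when calling the
// "create Google Calendar event" scenario tool (values are examples).
const toolArguments = {
  eventTitle: "Review AI agents",
  startDateTime: "2025-04-11T12:00:00+01:00", // ISO 8601, matching the module's expected date format
  endDateTime: "2025-04-11T13:00:00+01:00",
};
```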
The problem is there's no easy way to trigger the agent, the way there is in n8n where you can just click Open Chat and have a conversation with it like I did earlier. There isn't a native chat interface on make.com for these agents, and I don't know whether one is in the pipeline, but they position agents more as a kind of reasoning engine to be used within a workflow. So let's add it to a scenario and try to trigger it. Create a new scenario, and at a very basic level just set a variable: message = "hello". Then if you type in "Make AI Agents" you get a Run an Agent module, which gives you the flexibility to choose the agent we just created, the personal assistant agent.

I think they've overcomplicated things here compared to n8n. With n8n you just have an agent on a canvas and you configure it as you need to. With Make you configure the agent in this AI Agents section, but then you need to embed it into a scenario; the agent has its own higher-level tools, but you can almost override those tools, or give it additional tools, that are only available to this specific instance of the agent, and you can also override the agent's system instructions. I'm not sure why they did it like this. Was it possibly just to have an AI Agents section on the menu? I'm not sure, but it does complicate things.

Let's add in our user message, which is what we set in that variable (we'll swap this out in a few minutes for something like Telegram or WhatsApp), and if we click Run Once it goes to OpenAI, and in the output you can see the response: "Hello, how can I assist you today?" That's essentially the same response we're getting in n8n for the same message. Not having a chat interface to trigger this, though, makes it quite awkward to test.

So let's now test the Google Calendar tool we built: "Can you create an event in Google Calendar for tomorrow at 12:00 noon to review our AI agents?" Click Run Once, and there you go, the event has been created for tomorrow at 12:00 noon. Let's check the calendar. I found the event in my diary, "review AI agents", but it actually set the date as the 14th of November 2023, which is clearly wrong; it's April 2025. So within the system prompt of the Make agent we need to say "the current date is X", and this is another problem: you can't add variables to that system prompt, so we'll probably need to do it in the override instructions area instead. Additional system instructions: "the current date is", then let's search for a date variable. It's not "timestamp"... there's actually a "now" variable, there we go, "the current date is" followed by now. And there it is, although it's thrown off by the time zone again, because we're at UTC+1 here.

So that's the idea of having scenarios as tools. There is a similar concept in n8n, which is to use a workflow as a tool, and what you have in Make in terms of scenario inputs (in other words, what the agent sends to the tool to trigger some action) is exactly the same in n8n. If we add a tool here, the Call n8n Workflow tool, it requires you to choose a workflow, so let's quickly create another one; this is the test tool. Similar to the way a Make scenario needs to be set to on demand or scheduled, this workflow needs the When Executed by Another Workflow trigger, and on the left-hand side you can define your input schema, the exact same thing we did in Make. You'd set an event title, a start date and an end date, and those variables are then made available to a specific module or node, so this could be the Create an Event node. To populate the start date dynamically you just drag it in from that input schema, that's that one there, and the end date is the same.
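For reference, here is a rough sketch of that n8n sub-workflow-as-tool setup: the input schema defined on the When Executed by Another Workflow trigger, and the expressions you end up with when you drag those inputs into the Google Calendar node. The field names are the same illustrative ones we used on the Make side; the exact expression paths depend on your workflow.

```javascript
// Input schema on the "When Executed by Another Workflow" trigger
// (what the agent is allowed to send into this tool).
const subWorkflowInputSchema = {
  eventTitle: "string",
  startDateTime: "string", // ISO 8601 date-time
  endDateTime: "string",
};

// Inside the Create an Event node, dragging those inputs in produces
// expressions along these lines:
const createEventFields = {
  summary: "={{ $json.eventTitle }}",
  start: "={{ $json.startDateTime }}",
  end: "={{ $json.endDateTime }}",
};
```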
And that's what's really clever about the way n8n did this: n8n has lots of modules (look at Google Calendar and you have all of these actions), but instead of requiring you to create a workflow for each tool, you can just embed the module directly as a tool. That's a really nice piece of user interface, I think.

So, on to our scorecard: which has the best UX for setting up agents? I think hands down it's n8n. It's a lot more intuitive, a lot more mature, and it'll be interesting to see how Make try to improve their flows around creating scenarios, because it's very clunky at the moment. If you'd like access to any of the automations you see in this video, check out the link in the description to our community, The AI Automators.

Next up is interfaces and triggers. Within n8n, if we look at the interfaces, I'm using this embedded chat, which is really intuitive for testing your agents, but of course you're not restricted to that, and there are lots of different triggers you can create in n8n. If you look at Add Another Trigger down here, you can trigger via a webhook call, on a schedule, or when executed by another workflow, which enables multi-agent teams; there are even form submission triggers, so you could have a custom form that, when submitted, triggers your AI agent. The chat we've been using can be made available publicly (you need to activate the workflow for this to work), and it can then be embedded on a website, for example as a customer service chatbot; I'll show a quick sketch of that embed in a moment. And of course there are lots of modules that can operate as triggers: the WhatsApp module has various triggers for incoming messages you can respond to, Telegram is another, Slack is another. So there are lots of ways of triggering AI agents in n8n, and the chat one in particular is a game changer, I think, because a lot of use cases for AI agents are chatbot related: embeddable on a website as a customer service bot, a support bot in a knowledge base somewhere, or an internal bot deployed on a company's local network.

To trigger an AI agent on make.com, like here where I'm just setting a variable for the message, you could make that a scenario input ("what is the message?"), get rid of this Set Variable module, and pass the scenario input as the message instead. If you try to run it you get a pop-up where you can type your message, that triggers the agent, and you get your response back. But it's clearly very clunky, and it's not the back-and-forth conversation you get with the chat interface in n8n. There are other triggers, of course. Let's turn off on demand and look at Add a Module: there's a whole host of triggers you can use. Again, the examples I used before: WhatsApp has a Watch Events module that watches for incoming messages or images you could send to your agent, and you could have a Telegram trigger, Watch Updates, add your webhook and hook it in like that. But again, this is an issue with make.com: you can only have a single trigger per scenario, so you can't connect that up as well, the way we were able to have multiple triggers on a workflow in n8n.
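Just to show what I mean about the n8n chat being publishable, embedding it on a site is roughly the snippet below. This is a minimal sketch using the @n8n/chat package; check n8n's docs for the current options, and the webhook URL is obviously a placeholder taken from the chat trigger's public settings.

```javascript
// Minimal sketch of embedding the public n8n chat widget on a website.
import { createChat } from '@n8n/chat';
import '@n8n/chat/style.css';

createChat({
  // Placeholder: use the public URL exposed by your chat trigger
  webhookUrl: 'https://your-n8n-instance.com/webhook/xxxx/chat',
});
```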
The lack of a native chat interface, then, means there's nothing publishable from Make that you could embed onto a website. Instead you'd probably need to use a webhook, a custom webhook, which gives you a URL; from there you'd link it up to your Make AI agent to generate a response, and use a Webhook Response module to send it back to a front end. The problem with this is that you need to build your own custom front end: you don't get an embeddable chat like you do in n8n. You'd basically need to build that chat UI yourself, and the web service calls from that custom front end would then hit this scenario via the webhook and work through the agent. So, way more complicated, unfortunately. I'll sketch what that front-end call looks like at the end of this section.

The way make.com promote this new functionality is less about that chatbot use case and more about an embedded reasoning node within a scenario or workflow, and here's a good example. This is our video automation system, available in our community, and we have a router set up with lots of different filters based on the type of trigger coming in from Airtable. This is a really programmatic, deterministic automation: we're defining the exact behavior for each trigger. What I think make.com are doing with these AI agents is adding a level of autonomy around decisions. Instead of a monolithic automation all controlled from this router, you could have an AI agent sitting there, and that agent could have different scenarios as tools: this track could be one scenario, this could be another, and so on. That changes the architecture of an automation like this, because you're leaving it up to the AI agent to decide which track, or which scenario, to trigger based on the input it received.

That's quite an extreme example. A more realistic one might be a Gmail module watching for new emails, with an AI agent that can carry out actions based on the contents of each email. One of the agent's tools might automatically unsubscribe from an email, another might draft a reply and save it in Gmail, another might create an event in the diary based on what came through. The alternative, without an agent, would be a more router-based automation: something like an LLM chat completion that outputs structured data, then a router, where the first track is unsubscribe, the next is draft a reply, the next is create a calendar event. Effectively the AI agent replaces all of that with a reasoning system plus tools.

So, on to our scorecard: who wins here? I think it has to be n8n, because n8n doesn't just support that use case of an embedded reasoner within an established workflow; it also offers an embeddable chat interface, and the dynamic forms for triggering agents are pretty cool as well. Outside of that, a lot of the triggers on Make are the same as on n8n: WhatsApp, Slack, Telegram and Gmail triggers exist on both, so a lot of the module-specific triggers are there too. In reality there are probably more triggers in Make than in n8n, but it probably isn't enough to win this round.
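And here's that front-end sketch I mentioned. If you go the custom webhook route on Make, the chat UI you build just POSTs to the webhook URL and renders whatever your Webhook Response module sends back. The URL, payload shape and session handling below are all placeholders, not anything Make prescribes.

```javascript
// Minimal sketch of a custom chat front end posting to a Make custom webhook.
async function sendMessage(text) {
  const res = await fetch("https://hook.eu1.make.com/your-webhook-id", { // placeholder URL
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: text, sessionId: "user-123" }), // shape is up to you
  });
  // Return whatever the Webhook Response module sends back (text or JSON).
  return res.text();
}

sendMessage("Hello").then(reply => console.log(reply));
```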
Next up, then, is LLMs and reasoning. If we jump back into make.com, go to our personal assistant agent and click on Agent Settings, you can see there's a variety of models to choose from. A quirk here is that you can't change the model provider once the agent is created; it says you need to create a new agent. That's a constraint, not the end of the world, but it does remove some flexibility around swapping models in and out. To create a new agent you choose a different connection: here's the Gemini connection with 2.0 Flash and 1.5 Flash, and for Anthropic you have 3.7 Sonnet, Opus, Haiku and so on.

Claude 3.7 Sonnet is a reasoning model, so let's test that one. This will be our reasoning agent, and we'll say "use your reasoning powers to construct a reply". Reasoning models are crucial in AI agents if you want genuinely intelligent responses back. They take longer and cost more money, but if you give an agent lots of different tools, or you're running a multi-agent system, a reasoning model is going to get you dramatically better outputs no matter what platform you use. We'll save that one and won't even give it system tools; I just want to see whether it actually uses reasoning. We'll create a new scenario, add our reasoning AI agent, and the message will be "what is the meaning of life". We get a response back, but it's not actually using reasoning.

Whereas if I try Claude 3.7 Sonnet here in n8n, I have the ability to enable thinking. Let's ask the same question, "what is the meaning of life". We're getting JSON output here, but you can see the thinking coming back. To tighten that up so you get text rather than JSON, I think you just add an Edit Fields node and take the text field. So we get the same "profound philosophical question" response, but here it's definitely using thinking, which is what I wanted to see for a reasoning test. You also have the ability to set a thinking budget (here it's set to a thousand tokens) as well as a maximum number of tokens for the model itself. So it doesn't look like thinking mode is enabled by default for 3.7 Sonnet on Make, which is a little bit of a lack of flexibility there, and the inability to swap out models is a little disappointing too.

If we look at what models you can use for these AI agents in make.com, you can choose from OpenAI, Anthropic, Mistral, Cohere, Groq, xAI and Gemini, and there's also a catch-all for OpenAI-API-compatible providers. That covers a lot. Obviously you don't have local options, because you can't deploy Make locally, but it's quite a decent list and fairly similar to what you have in n8n. n8n has possibly slightly more, including some of the more enterprise options such as Microsoft Azure, AWS Bedrock and Google Vertex AI, and then of course there's the Ollama chat model, so if you were running n8n locally you could use local inference with Ollama. So you do have more options with n8n, both at the enterprise level and at the local level, but I think the middle-ground models are quite similar between both platforms. Back to our scorecard: unfortunately this is only going one way at the moment, and I'll be giving this one to n8n as well.

Next up is prompt engineering. If we look at the configuration for agents in make.com, it's all built into this system prompt field, and there's a definite issue that you can't add variables there; it's just a text area, so there's no way to add dynamic information.
When I tried to add in the date, for example, I had to do it at the scenario level. If you look at the additional system instructions there, you can see the example they provide, "use {{customer name}} when addressing customers", and if we add "this is the date" again, just as an example, and re-trigger it, you can see within the execution steps that the system message is what we set at the agent settings level, with that extra information appended. So you really can't have anything dynamic within the agent-level system prompt; it can only go in here. Within that you have the standard functions you can use in make.com, such as switch functions, if functions and various operators. It's not possible to use code directly, but you could use a module like 0CodeKit, so if you had a very complex system prompt that really needed Python or JavaScript to generate, you could generate it with that and drop the output into the system instructions. But it really isn't flexible enough for highly dynamic system prompts.

The system prompt in n8n is quite straightforward. You have it at the AI agent level, and if you set the field to Expression you can drop in dynamic information. Because n8n is low-code you can also add lots of other information, "today is" followed by today's date, for example, and you can use different operators, like logical OR operators or ternary operators, within the field. You could also dynamically create the system prompt using a Code node, using JavaScript or Python to work through some logic and output the right prompt. So there are lots of different approaches you can use on the system prompt front and a lot of flexibility, something like the expression sketch I'll show in a moment.

I forgot to mention max iterations. This is a setting available on both Make and n8n, where you define how many turns the AI agent can take to achieve an outcome. We went through this in our previous video on agentic RAG, where an agent could pick and choose which data sources to use to build context for a blog automation; you can specify the number of iterations an agent runs through before stopping. That's also available in Make's agent settings as the recursion limit, which is set to 300 there. That's quite high and I wouldn't run it that high.

So, to score this: n8n is definitely more flexible in the sense that you can have dynamic variables within the system prompt. You can also do this in Make with the additional system instructions, you just can't do it at the agent settings level, and there are various expressions and functions to help populate it dynamically, similar to n8n. So while there's slightly more functionality on the n8n side here, I'm going to mark this one down as a draw; it's almost feature parity.
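Here's that sketch of a dynamic system prompt in n8n, showing both the inline expression style and the Code node style. The field names (userName, plan) are made up; $now, $json and the Code node return convention are n8n's own, but treat the whole thing as illustrative rather than a recipe from the video.

```javascript
// Inline, in the agent's System Message field (set to Expression), you can
// write something like:
//   You are a helpful research agent. The current date is {{ $now.toISO() }}.
//   {{ $json.plan === 'pro' ? 'Offer detailed answers.' : 'Keep answers brief.' }}
//
// Or build the prompt in a Code node (JavaScript) and reference it downstream:
const item = $input.first().json;
const systemPrompt = [
  'You are a helpful research agent.',
  `The current date is ${new Date().toISOString()}.`,
  `You are talking to ${item.userName ?? 'a guest'}.`,          // hypothetical field
  item.plan === 'pro' ? 'Offer detailed answers.' : 'Keep answers brief.',
].join('\n');
return [{ json: { systemPrompt } }];
```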
Next up we have tools, and as I've already touched on, tools in make.com's agents are effectively scenarios that the agent has access to, plus additional tools the agent can access depending on which scenario it's embedded into. n8n doesn't have that second layer, because agents are a bit simpler there: they just sit on a canvas, and if you need the same agent in another workflow you copy and paste it. The limitation of tools only being scenarios is something I've already covered, whereas if we look at this social media agent you can see that here we're actually triggering a workflow, our fetch stock photos workflow, which goes through a number of nodes to achieve an outcome, while here we're just hitting an HTTP endpoint with a GET request to this URL. And as I demonstrated before, if you click Add Tool there's a long list of tools available for agents to use.

That being said, within make.com, if you take their scenario-as-a-tool approach, then technically every module available in make.com can be used inside a tool, and the reality is that make.com has more integrations with more services than n8n does; this list is a lot shorter than the Make list I went through earlier. Having built a lot of workflows in n8n, I usually reach for the HTTP Request module quite a lot to interact directly with the API endpoints of various services, because there might not be a native node. So I'm actually going to score this one as a win for make.com: there are simply a lot more out-of-the-box modules you can hook into, whereas with n8n you often need to hit a service's API directly because there may not be a native node. Both make.com and n8n offer the ability to hit a custom endpoint like we're doing here: n8n has its native HTTP Request tool, and make.com has a variety of modules for hitting arbitrary endpoints where you drop in the URL, POST to it, add headers, inject a body and so on. So I think that's a win for make.com.

Next up is memory and sessions. If we go back to our AI agent in n8n and click on the memory leg of the agent, you'll see there's a variety of options for how to persist the memory of the conversation. Simple Memory, for example, essentially just holds it in RAM and won't persist beyond that, but you have other options: you can use Redis, Postgres chat memory, and a couple of others. With Simple Memory you can set a session key, which can be fixed or a variable with some logic around it, and you can set a context window length, meaning how many interactions the agent should remember, which is crucial when you're keeping track of a conversation.

On the make.com side you have very little flexibility around memory. In the agent's configuration on the canvas you can set a thread ID or session ID, so you can track different interactions based on the trigger; if WhatsApp is the trigger, it could be the phone number of the person messaging, so they keep the context of that conversation throughout the engagement. And I think that's what this "iterations from history" count is: the number of interactions retained in history. I've gone through the full make.com documentation and haven't found clarification on that one, but I think that's what it is.
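For comparison, keying the session per user in n8n's Simple Memory node is just an expression in the Session ID field, something like this sketch. The field path depends entirely on what your trigger actually outputs; the names here are hypothetical.

```javascript
// Session ID field on the Simple Memory node, set to Expression.
// $json.from / $json.sessionId are hypothetical fields from the trigger,
// e.g. a WhatsApp phone number or a chat session ID.
const sessionId = "={{ $json.from || $json.sessionId }}";
```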
So, on to a winner for this one. make.com is very bare bones, but at the same time it works brilliantly for beginners. Beginners don't want to get lost figuring out the mechanics of memory, and they probably don't need the flexibility of going to the likes of Postgres or third-party memory services. You do have the ability in Make to retain sessions based on IDs, or it creates a new ID if you don't set one, so make.com isn't actually bad in this regard; it's just completely abstracted away from you. With n8n you have a lot more control, but with that comes a little more complexity. So I'd probably mark this as even: for the beginner use case Make works perfectly fine, but with n8n you get a huge amount more flexibility, which is more likely what you'll need for more sophisticated agents.

On to knowledge and RAG, and I was struck by how there wasn't a single mention of RAG in make.com's announcements around these AI agents. If you think about what an agent does: it has an LLM brain, it has tools it can trigger, it has knowledge it can base its answers on, and it has memory. Those are the key aspects of an AI agent, so not even mentioning knowledge was, I think, a bit of a misstep. That said, if you look at an AI agent in n8n there isn't a dedicated knowledge leg either: knowledge is derived from tools, and that's what these vector stores are here on the right-hand side. If you wanted to give this agent access to, say, your Pinecone vector store, you just click that and it can retrieve documents, as you can see there; from here you choose the Pinecone index where everything is saved, and there's a huge amount of functionality around this. This is our vector store tool: you choose an embedding model, OpenAI's text-embedding-3-small for example, and essentially, whenever this tool is triggered by something coming into the agent, the query is rewritten, an embedding (a vector representation) of the query is sent to the vector store to retrieve the most similar results, and those are fed into the agent's context to produce the response.

n8n also has a huge amount of features around loading documents into vector stores. This is our agentic RAG ingestion workflow, which I covered in one of our previous videos and which is available in our community: we take web pages that we crawled, inject the markdown of those pages into a vector store, load the data with this document loader, and use a particular chunking strategy, recursive character text splitting, to create the chunks, with the OpenAI embedding model, and all of that goes into the vector store.

I have covered RAG in make.com before, in this video here, where I walk through a scenario that upserts vectors to a Pinecone vector store. So there is functionality in make.com for this, but it's seriously lacking on the chunking side. In that use case I was simply uploading rows from a Google Sheet into a vector store and not really chunking anything, each row was a vector, because there's no native chunking functionality in make.com. In n8n, by contrast, there are different options for splitting text: token splitters, recursive character splitters, or your own custom logic to split text and create chunks for the vector store. That's the power of the Code node in n8n. This is some JavaScript I worked through with ChatGPT that chunks in a relatively intelligent way to give higher retrieval accuracy for RAG. The best I was able to do with the native features of make.com was this basic chunking, which is just a simple regex, and it's very clunky: it splits words halfway and there's no overlap, so it's not going to result in decent retrieval.
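For reference, the kind of chunking you can do in an n8n Code node is along these lines. This is a simplified sketch, not the exact script from the video: fixed-size chunks with overlap, split on whitespace so words aren't cut in half. The input field name is hypothetical and depends on your workflow.

```javascript
// Simplified chunking sketch for an n8n Code node (JavaScript).
const CHUNK_SIZE = 1000;   // characters per chunk
const OVERLAP = 200;       // characters of overlap between consecutive chunks

const text = $input.first().json.markdown; // hypothetical field from an upstream node
const chunks = [];
let start = 0;

while (start < text.length) {
  let end = Math.min(start + CHUNK_SIZE, text.length);
  // Try not to split mid-word: walk back to the last whitespace before the cut.
  if (end < text.length) {
    const lastSpace = text.lastIndexOf(' ', end);
    if (lastSpace > start) end = lastSpace;
  }
  chunks.push(text.slice(start, end).trim());
  if (end === text.length) break;
  // Step forward, keeping some overlap for context; always make progress.
  start = Math.max(end - OVERLAP, start + 1);
}

return chunks
  .filter(chunk => chunk.length > 0)
  .map(chunk => ({ json: { chunk } }));
```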
So I think this is a major problem for make.com when it comes to RAG, and knowledge in general, for agents. They need to create native modules to help with the embedding and chunking of documents; otherwise you have to rely on platforms like OpenAI, who have a vector store feature, and the module in make.com that lets you add files to it. I've covered this as well in a separate video on the channel, on incremental updates to a RAG vector store, which uses those OpenAI modules to keep files up to date in the vector store. So within make.com, if you wanted to provide knowledge to an agent, you could give it access to a tool: create a scenario that takes a search term as an input (a required search query) and then use an OpenAI module to search that vector store. There isn't a "query vector store" option, though, so you'd have to make an arbitrary API call to OpenAI's platform to send the query in and get back the top-K results. As you can see, it's not very intuitive and you're relying on third-party services, whereas within n8n there's just huge functionality around vector stores: loading data in, querying data out, and using different embedding models. Here we're using OpenAI, but you could use Cohere, a local embedding model, or a fine-tuned embedding model. So we have a definite win for n8n on knowledge and RAG.

In terms of output formats, make.com are positioning these agents to be embedded within workflows, so a key part of that is getting structured outputs. It's very typical in our Make scenarios to require JSON as an output format, and we would define the exact JSON that's needed. You could put that into the system instructions, "we require the output in this format", and define it there, but there's no option to force it. Within n8n, by contrast, if you click on the agent there's an option to require a specific output format, and once you enable it you can add an output parser to the agent. There are different options for output parsing; you could provide a structured JSON example, say state and cities. Once you add that, there's a new leg on the agent called Output Parser, you hook the parser up with that format, and the agent is then required to generate output in that shape. There's also an auto-fixing output parser, so it's not just a case of providing the output schema: you can hook up another LLM that sends the output back to the AI to fix it if it wasn't produced correctly the first time. Set up that way, you get highly reliable, structured outputs from the AI agent that can feed into follow-on modules, and that's what improves the resilience of automations. make.com definitely need to build functionality around forcing JSON as an output format and letting you define a schema, so follow-on modules can rely on what the agent produces; that's not there at the moment. So it's another win for n8n on the output format side.
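As a concrete example, the kind of JSON example you'd hand to n8n's structured output parser for that state-and-cities case is as simple as this (the values are obviously just illustrative):

```javascript
// Example shape given to the Structured Output Parser, either as a JSON
// example like this or as a JSON Schema, so follow-on nodes can rely on it.
const exampleOutput = {
  state: "California",
  cities: ["Los Angeles", "San Francisco", "San Diego"],
};
```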
For multi-agent teams: on this channel I created a personal assistant agent that controls 25 sub-agents. That's a free n8n template you can access via this video here, and I'll leave a link for it in the description below. This is that agent, HAL 9001. It uses Telegram as a trigger and responds in the iconic voice of HAL from the movie 2001: A Space Odyssey, using text-to-speech via the Speechify API. The way this works is that the whole multi-agent team is available as tools: here we have different supervisors, and if you click into any one of them and open it up, they in turn have sub-agents they can trigger actions with. The communication supervisor has access to an email agent, a Slack agent and a Twitter agent, and if you jump into the Slack agent, it has various actions it can complete. So the architecture I'm using here is basically agents as tools.

In theory this is possible with Make agents too. You could have a director agent, set a model and give it a prompt, then create another agent as a supervisor agent, embed those sub-agents in scenarios, and the director agent would then have access to the scenarios that include those agents. So it is possible in theory, but it's definitely clunkier to set up than n8n because of that abstraction of AI agents outside the canvas. The other issue is timeouts: if the sub-agents take too long to work down the tree, carry out the action and send the message back up to the main agent, the agent workflow itself might time out. Make's AI agents have a feature to continue the scenario run while the agent is working; if you set it to yes you get to specify a webhook URL, and they say to use it if your agent will take three minutes or longer to finish, so three minutes may be the hard limit for the agent to wait for a tool to respond. The maximum execution time of a scenario on make.com is 40 minutes, but obviously the limitation here is on the tool call made by the agent itself. There's more flexibility in the self-hosted version of n8n on workflow timeouts, although I do think the HTTP Request node in n8n has a maximum timeout of five minutes. So generally speaking, timeouts are a problem for multi-agent teams on both Make and n8n, with possibly a little more flexibility on the n8n side. The Make AI agent was only released late last week and multi-agent systems are quite complicated, so I haven't spent enough time testing it to tell you one way or the other where it stands; in theory it's the same mechanism of using agents as tools, so I'll leave this one unscored until I spend more time on it.

For debugging and error handling, n8n generally has better features anyway, but let's look at Make. I have an AI agent here connected to WhatsApp, so if I trigger it to scrape a specific website, you can see the specific execution data by clicking on the bubble. Here's the response from the AI. I have a tool hooked up that scrapes a website using Jina AI.
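For context, that scrape tool is essentially just an HTTP request to Jina's reader endpoint, which returns the page as LLM-friendly markdown. Roughly something like this, sketched as a plain fetch (the exact module configuration isn't shown in the video):

```javascript
// Rough sketch of what the scrape tool does: prefix the target URL with
// Jina's reader endpoint and read back the page as markdown/plain text.
async function scrape(url) {
  const res = await fetch(`https://r.jina.ai/${url}`);
  if (!res.ok) throw new Error(`Scrape failed with status ${res.status}`);
  return res.text();
}
```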
And the agent is saying it can't fulfill the request, but the tool doesn't accept a website name, it requires a URL, and if you click into that tool you can see in the history that it was never actually executed, so this must be the agent determining from the message that a URL hasn't been provided. Now, I didn't add the www, so let's try it again; and this is the nature of building AI agents, there's so much back and forth needed. OK, I've actually provided the full URL this time, and the fact that it's taken longer means it's clearly gone to the tool to get the result. "OK, I'm sorry, but I cannot provide any output because the tool doesn't return any information to me." This is actually good, we're getting some level of feedback, and if we click in here we can see there was a success and there is the data, so it did crawl the website. I just need to go back into the canvas and specify a scenario output: we'll call it "website summary", with the description "summarize the scraped website". With that scenario output set, you add another module, the Return Output module, which gives you that website summary field, and you drop in the variable you're getting back from the service, in this case "data". Back in our agent, the scenario now provides that output, which it has done: there's the summary, and that's what's passed back to me on WhatsApp. So you get a sense of the debugging and error handling involved.

The problem is that you have these various runs in history, and while you can jump into any specific one and see the data used in that execution, there's no way to re-test the flow with that data. That's something you can do in n8n, and it's super powerful. If you go to a workflow and click Executions, you can see previous execution runs and the data involved in each; for previous successful runs you can click Copy to Editor, which brings all of that data into the workflow where you can pin it and test against it again, and for runs that produced errors you can click a button that retries the original workflow, or the current version of the workflow, with that data. So there's a huge amount of flexibility in debugging n8n workflows once you get the hang of it. It is more advanced, but it's a lot more dynamic than the static run list you get on make.com.

The other thing to say about n8n is that for error handling, per node, you can set Retry On Fail, which is hugely important when you're dealing with LLMs that can time out or return overloaded errors. You then have different options: you can stop the workflow dead, or continue down a different leg, so if this node throws an error the execution can follow a different track. You can also set error workflows that trigger to notify you when an execution has run into a problem. So there's a lot of functionality in n8n for error handling. On Make there aren't any settings at the agent level for that kind of graceful retrying and exponential backoff, but you do have standard error handling: right-click and add an error handler, and you can use break or resume error handlers and carry out different actions within the stream. We use error handlers quite a lot in our Make automations, so there is some decent functionality there, just probably not as fleshed out as n8n. So for this one, again, I'm going with n8n for debugging and error handling, particularly for that ability to reload previous execution runs.
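If you've never thought about what retry-on-fail is saving you from, conceptually it's this pattern, which you'd otherwise hand-roll in a Code node or in your own service. This is just a sketch of the general idea, not anything n8n or Make ship.

```javascript
// Generic retry-with-exponential-backoff sketch for flaky calls (e.g. LLM
// requests that time out or return overloaded errors).
async function withRetry(fn, { attempts = 5, baseDelayMs = 1000 } = {}) {
  for (let attempt = 1; attempt <= attempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt === attempts) throw err;            // out of retries: give up
      const delay = baseDelayMs * 2 ** (attempt - 1); // 1s, 2s, 4s, 8s...
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
}
```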
For deployment and privacy, I think straight away we can just give this one to n8n, for the simple reason that you can run n8n in the cloud, self-host it on your own server or with services like Render or Railway, host it with AWS or Google Cloud, or run it locally or in Docker. With that comes huge data privacy, because you could have it completely cut off from the rest of the world, hidden behind firewalls. With make.com you only have one option, which is their cloud platform. That's a lot simpler from one perspective, but you have a lot less flexibility. They do have a full privacy section on their website where they talk about privacy by design, and there are some decent privacy features built into Make, such as the ability to turn off logging once a scenario is active and up and running, so you're not saving potentially sensitive information per execution, and on their more expensive enterprise plan they have enhanced security such as audit logs, compliance support, and single sign-on with company-specific identity management systems. But I think that's a quick tick for n8n for the huge flexibility on deployment and privacy.

On to more cutting-edge things like MCP. I did a video a few weeks ago on MCP agents in n8n; it used a community module and gained a huge amount of traction, and even since then n8n has brought out its own MCP client, as you can see here, as well as an MCP server that can be used to integrate with the likes of Claude Desktop or Cursor. If you don't know what MCP is, definitely check out that video, I'll leave a link below. With the support of the entire industry, it does look like MCP is going to be the go-to standard for building agents and connecting them with tools; even Zapier has created an MCP product allowing agents to connect to the wide variety of modules and connections on their platform. So Make is already behind the curve on this; they definitely need to bring out some sort of MCP solution, even just to keep up with the competition. Again, another green tick for n8n.

On to the last one, pricing, and I think you may know the direction this one is going. Make's pricing is based on operations: the price for 10,000 operations on their Core plan is $9 a month. Within their AI agents you have your connections to the various LLM providers, and you're going to pay per inference via those platforms anyway; that's the same with n8n, where you connect to the various platforms and pay those bills directly. So really we're talking about the cost of running these agents and building these workflows, and on make.com that's on a per-operation basis. For n8n, because there's an open-source version, you can download it for free and run it yourself, and then you're just paying for the server cost; you don't pay per operation. On n8n Cloud you do have a budget of workflow executions per month, 2,500 on the starter plan, but you can have unlimited steps, or unlimited operations, within each execution, and five active workflows; on make.com you can have an unlimited number of active scenarios. So n8n Cloud and make.com have different commercial strategies, but if you're operating your n8n agents or workflows at any level of scale, it's worth quickly getting off n8n Cloud and setting up your own server on the likes of Render or Railway or Celestial, where for as low as $5 a month you can get your instance up and running without any limits. That's clearly a lot cheaper than paying per operation and needing a specific monthly plan on make.com. So again, I'm going to give this one to n8n.

And here are our results: n8n agents are clearly the winner. It's a much more mature platform, with a lot more flexibility and a lot more features.
It's definitely more cutting edge by keeping up to date with the latest in MCP, and really the only area where it loses out is tool use: the fact that make.com has thousands of integrations is something n8n just can't compete with, and that's where make.com wins. But on every other front I think n8n is better. Coming into this I assumed AI agents in make.com would be easier for beginners and n8n would be better for intermediate to advanced use cases, but actually I'm surprised by how poor the UX is on make.com's agents; it's really clunky to set these up, whereas with n8n you just drop an agent node onto a canvas, hook up a chat model, hook up a few tools and away you go. It's super simple on n8n, and I think Make have possibly dropped the ball a little by abstracting the creation of the agent away from the scenario. They should just drop that separate creation step and have it all in scenarios; it would make life so much easier for people.

I hope this video was useful. Let me know in the comments below what you think: do you think Make agents are better than n8n's? If you'd like access to any of the make.com or n8n automations you saw in this video, check out the link in the description to our community, The AI Automators, where you can join hundreds of fellow automators all looking to leverage AI to automate their businesses. Not only do you get access to these automations, you also learn how to customize them to meet your own specific requirements. We have a packed schedule of events, all of which are recorded so you can play them back if you miss them, and we have over 100 templates on both make.com and n8n, as well as a number of courses including our n8n masterclass. Check out the description below; we'd love to see you there.