What if your AI agent could instantly understand and use any software tool without you manually setting it up? That is the potential of MCP, which is a standardized way for AI agents to discover and use tools and data sources. The best way to explain MCP is to actually show you an example, and there is now an n8n community module, published two days ago, that you can play around with.

So here I have a typical AI agent in n8n. This is a Google Calendar agent, and as you can see, it has various tools: viewing calendar events, checking availability, updating and deleting events. For this calendar agent I need to be quite specific about its capabilities. I need to create tools for the different actions, and then within the system prompt I need to hardcode some sort of procedure for how to actually use those tools. Now, agents can figure out how to use tools, but in my experience you generally need to be quite descriptive about how to use them to get the right outcome. So this is quite a time-consuming process, and it doesn't scale well as you add more tools and more agents to a potential multi-agent system.

With MCP, however, it's a completely different approach. If we zoom in here, you can see that instead of listing the specific tools and their various actions, we now just have a single endpoint, which is "list tools". You go to an MCP server, let's say for Google Calendar, and the agent asks it for all of the available tools it has. The server responds with all of its tools, and then you can execute one of those tools. But it doesn't stop there: you can also ask the server to provide prompt templates for using those tools, and I think there's huge potential in this aspect of it. It's not enough to know what tools to call, because that's the same as having any API endpoint; having prompt templates on how to call those tools to get the best outcome is very different from standard API documentation. The other thing you can do is ask the server to list the resources it has available. These could be the contents of files, database records, screenshots, images, or live system data, and there's also an action to fetch a specific resource and bring it into the agent's context.

So, to compare the two approaches: with the traditional n8n agent tool approach, you're being hyper-specific about which tools to call and then, in the system prompt, about how to interact with those tools. With MCP it's completely abstracted, and the beauty of this approach is that as more tools are added to, say, Google Calendar's MCP server, they automatically become available to your agent without you needing to edit the n8n workflow or the agent itself. These agents can evolve in their capabilities as tools get added, or improved, on the server side. So as you can tell, there's huge potential in this type of architecture for the AI agents of the future.

The n8n module that you see here was only released two days ago, so it's very raw. It's not a core module, it's a community module, and it will evolve and mature over time, just as the MCP standard evolves and matures. In this video I'm going to go through how to set up these MCP clients and servers to interact with the Brave search engine, the Firecrawl scraping service, and the GitHub service. I'll also play around with setting up a Puppeteer agent, as well as trying to connect to the Apify MCP server.
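To make those operations concrete, here is a minimal sketch of the same flow using the MCP TypeScript SDK, the kind of client that sits inside a host like this. The server package name is a placeholder, and the convenience methods (listTools, callTool, listPrompts, listResources, readResource) are the @modelcontextprotocol/sdk client API as I understand it, so treat this as illustrative rather than definitive.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spin up an MCP server as a child process via npx (hypothetical package name).
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "some-calendar-mcp-server"],
});

const client = new Client({ name: "example-host", version: "1.0.0" });
await client.connect(transport);

// 1. List tools: one generic discovery endpoint instead of hardcoded tool nodes.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

// 2. Execute a discovered tool by name, with arguments per its input schema.
const result = await client.callTool({
  name: tools[0].name,
  arguments: {}, // fill in according to tools[0].inputSchema
});

// 3. Prompt templates: server-supplied guidance on how to call its tools well.
const { prompts } = await client.listPrompts();

// 4. Resources: files, database records, live data the server chooses to expose.
const { resources } = await client.listResources();
const contents = await client.readResource({ uri: resources[0].uri });
```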
But before we do that, let's dive into what MCP actually is, how it compares to n8n's built-in agent tools, and whether it really will be the key to creating smarter, more autonomous agents. I've published this MCP n8n workflow in the free resources section of our community, The AI Automators. Check out the link below to get access, so that you can play around with the MCP agent yourself.

MCP stands for Model Context Protocol, and it's an open standard Anthropic published late last year. From my reading, Anthropic's primary motivation for developing this type of open standard came more from a local or desktop perspective. Anthropic has a product called Claude for Desktop, and there was clearly a mechanism needed to empower that application to interact with either local or remote services. That's essentially what MCP is: a standardized way for AI models, like Claude in a desktop environment here, to interact with tools and data sources outside of Claude for Desktop. The majority of talk about MCP is in relation to AI code editors like Cursor and Windsurf, but MCP can be very useful for ChatGPT clones like LibreChat, where you can hook in different models locally and trigger internal or external services via MCP. If you're interested in getting MCP set up in Claude Desktop or in Cursor for AI coding, then check out this great video by my friend Rob, who goes through it brilliantly step by step. I'll leave a link for it in the description below.

Anthropic have described MCP as a USB for AI models, and you can kind of see that in this architectural diagram. You have your AI agent host, which could be Claude Desktop or, as we're going to see in today's video, n8n. By hosting what they call an MCP client, the host can then be plugged into any MCP server, and that server is connected into local or remote services to access files or APIs or databases or whatever it is. Having this concept of a USB connector is incredibly powerful, because you can have lots of different servers all connected into your host, and because they're abstracted away from your application, they can be maintained independently: new features can be added, they can be improved, and the agent will always get the benefit of the improved code.

So this type of abstraction clearly has its benefits, but it also comes at a cost. If we look at a typical n8n AI agent tool, it's usually the application code that's triggering the API of the service directly and getting responses back. Clearly that's a lot simpler; there are a lot fewer moving parts. By introducing the middleman of an MCP server, you are absolutely adding to the complexity. And then there are also security concerns, because traditionally your application (n8n, for example) authenticates directly when connecting to a service, so you just need to make sure that you secure your host and that the communication back and forth is encrypted. By adding in this additional layer of MCP servers, the authentication happens at the server level, no longer at the host. This brings challenges for authentication, but also for authorization: are you authorized to carry out specific actions within the service, even if you are authenticated?

While there is huge potential in MCP for agents, it is very much dependent on this standard actually being adopted by the industry, and there are so many examples in the past where great standards have been published but shunned by key players for various commercial reasons. But whether it's MCP or a
future standard that's published and adopted by everyone, there is clearly a need for some sort of seamless way to plug software applications into our AI agents without needing to reinvent the wheel every time.

So before jumping into the n8n module and implementation, let's have a quick look at the MCP architecture and some of its core features, so we know what we're talking about. We have our host, which in this case is an AI agent hosted on n8n, and within that host application you have what's called an MCP client. This MCP client is essentially what you see here; these are effectively the tools of the agent. It's connected to an MCP server, which needs to run somewhere, either permanently or ad hoc, and which usually has a public repository on GitHub that is available through different package managers, be it for Python or for Node. For my scraping agent, which uses Firecrawl, the MCP server is available in this GitHub repository, and it's essentially the bridge into the firecrawl.dev service. Within the Firecrawl service there is an API that the server is hitting; it can then scrape the web, carry out whatever actions it needs to, and communicate back through the server to the client, and then the agent outputs.

We can see this here if we connect up our chat interface. I'll clear my session and just say "scrape this page", giving it the web address of our website. You can see it goes to list tools to find out what this Firecrawl MCP server can do, it learns what its capabilities are, and then it executes that tool in Firecrawl to scrape the page, and then it responds with the scraped results. As you can see, it's provided the scraped markdown of the page, along with even the logo that's in the footer. So that was the full end to end: from a request in the host, through the client, through an MCP server, to Firecrawl and then back again, and it outputs the result.

A key thing to talk about, though, is this MCP server and where it's hosted, because when I first heard "server" I assumed it's like a web server, something that needs to be running at all times, listening for connections and relaying messages on. But actually that's not how I have this set up. If we look here, I'm currently running this application on Elestio, so it's not running on my desktop; it's a remote server. I am still self-hosting it, but I didn't actually install this MCP server. So your options are: you can either install it so that it's running permanently and listening for connections permanently, or you can use npx, which essentially sets up a temporary server. It downloads the latest version of the package and then spins it up as a process on the machine. If we jump back in here, these are the MCP clients and this is the client configuration, but if you click into the credential, this is actually the command to spin up the server. You can see here it's npx, we're triggering the Firecrawl MCP repo, and we're passing in an environment variable, which is my Firecrawl API key. So by actually triggering the client, it is spinning up the server to interact with the Firecrawl service. A lot of what you might have seen with MCP in other YouTube videos is all code based; that's why this is actually a no-code tutorial, because you can configure your clients and you can configure your servers all through this interface that you get from this community module, which is really cool.
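For anyone curious what that credential boils down to under the hood, this is roughly the spawn configuration it represents. It's a minimal sketch assuming the firecrawl-mcp package name and its FIRECRAWL_API_KEY variable; check Firecrawl's MCP documentation for the exact names.

```typescript
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// No permanent install: npx fetches the latest published package and runs the
// MCP server as an ad-hoc child process for the lifetime of the connection.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "firecrawl-mcp"], // -y auto-confirms npx's install prompt
  env: { FIRECRAWL_API_KEY: "fc-YOUR-KEY-HERE" }, // assumed variable name
});
```

The trade-off is a cold start each time the client connects, versus installing the package once and keeping a long-lived process listening for connections.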
As for the other services that I have hooked up: Firecrawl is one, Brave Search is another, and it's exactly the same thing, an npx command to execute the package. It downloads the latest package, spins it up as a process, and then uses it to relay into the service.

What's worth calling out as well is that there are two ways of communicating with an MCP server. There's standard input/output, which is this "stdio", and there's SSE, which is server-sent events. Here I'm using the standard input/output mechanism, because with npx the server runs locally on the machine, whereas if your MCP server is actually on a different machine somewhere else, you can only use SSE, these server-sent events, to communicate with it. And because desktop and local applications are one of the main use cases for MCP, it's quite common for this entire architecture to be local to your machine. You could be running a host locally (it could be n8n, Cursor, Windsurf, Claude Desktop, whatever it is) and communicating with your MCP server; you can still use SSE, you would just set it to localhost, and from there it can interact with the service. And it doesn't need to be software: you could be interacting with the file system, or with command-line applications to trigger browser automation, for example. So there is a lot of power in running this locally and leveraging all of the resources on your local machine. It is a little bit more secure doing that, but you still need to be very careful about which MCP server you actually download, because you are installing it on your machine.
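To sketch that transport distinction in code, assuming the SDK's SSEClientTransport and a placeholder localhost URL, the only thing that changes on the client side is how the connection is opened; everything after connect() works the same as with stdio.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";

// Remote (or local but networked) MCP server: connect over server-sent events.
// The URL is a placeholder; a server on your own machine would be localhost.
const transport = new SSEClientTransport(new URL("http://localhost:8000/sse"));

const client = new Client({ name: "example-host", version: "1.0.0" });
await client.connect(transport);
// From here, listTools, callTool and so on behave exactly as before.
```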
So we've covered a lot of these key concepts already. You've seen the example of tools, and how you can list tools and execute tools via an MCP server. With the concept of prompts, the server can provide prompt templates to the client; that way it can provide a standardized way of communicating with the tools, which should result in much more reliable outcomes from both the server and the agent. With resources, the MCP server can expose data and content from your service: that could be file contents, database records, images, screenshots, whatever it is. Those are the three main concepts. With transports, you saw how you can interact using standard input/output, which is more local based, or SSE, server-sent events, which is for more remote communication with an MCP server. Sampling is a really interesting concept, because it's all about human in the loop: it allows an MCP server to communicate back to the client, and therefore to an AI agent, which can relay to a user to get them to confirm or reject a specific prompt or a specific action. So this opens up a huge amount of flexibility and functionality around the human-in-the-loop design pattern. And then roots essentially inform the server of the boundaries of the service: it could be different projects, for example, project folders, repository locations, maybe specific client endpoints, something like that.

If we're to compare MCP to n8n's agent tools, I think they very much complement each other. There is a large overlap, in the sense that with agent tools you can trigger actions within a service, and with MCP you can do the exact same thing, so there's clearly overlap there. But obviously the approaches are different. With MCP's decoupled nature, as you can see there, it's very much suited to general-purpose agents that could carry out any number of actions within a service. With more traditional n8n agent tools, you have much more control over the agent: you can pick and choose the exact tools that you want to integrate with, and you can make sure that they have the correct permissions to carry out those actions. So in this case, for example, with the calendar agent, if you didn't want to give it the ability to delete an event, you simply remove the tool, and at that point the agent cannot delete an event. You have so much more control, and then you can get quite specific in your system prompt about how the agent should trigger these tools. Whereas with the MCP approach, it's so abstracted that it's much more general purpose: any of the tools that are listed could potentially be executed, unless you lock them down at the MCP server level. And then, if new tools are added to the MCP server, they'll immediately become available to the agent. That brings up questions about backwards compatibility of tools, as well as the repeatability of the agent's actions. You could test an MCP agent the way I've done here, by asking it to scrape a particular website, and it might produce a consistent result, as it is doing here, whereas tomorrow maybe Firecrawl adds another ten tools to the MCP server and suddenly the agent no longer picks the one that produced this reliable output; it picks a different one and comes up with a different output. So they are very different use cases, I think, and that's why they complement each other.

A few weeks ago I created HAL 9001, which is this mega multi-agent n8n system that acts as your personal assistant, and as you can see, it automates or connects with everything across 26 different sub-agents. It was a really fun project, but to actually get it working reliably would require going through all 26 agents, testing every tool, and prompting each specific agent to make sure that it's doing the right things to get the right outcome from each tool. So for such a broad multi-agent team, it's just unscalable, in a sense, to actually be production ready. Whereas you could have MCP servers for each of the 26 services, providing a full, broad menu of all the tools, and if the prompt templates were used properly for these services, they would educate the agent at runtime on how best to interact with those tools to get the right outcome. And I think that's where the real potential of MCP is: these really broad agents like HAL 9001. We give that one away for free, so I'll leave a link in the description so you can watch the video and access the template, and likewise with the MCP blueprints that I'm going through today: you'll be able to download these blueprints and play around with them yourselves.

So let's dive in, and I'll show you step by step how to actually set these up. First things first, you need to go to the n8n-nodes-mcp page. This has full step-by-step instructions on how to actually configure this community module. The other thing to mention is that it's a community module; it's not part of core yet. It may be in the future, particularly if MCP actually takes off as a standard and is adopted by the industry. On this page there's a link to the installation guide on how you can actually install and manage these community nodes. Essentially what you're doing is copying out the package name, and then within your n8n instance you go to Settings, then Community Nodes (you can see I have it set up here), click Install, and just paste in your package name. Now, because it's a community node, it is unverified code from n8n's perspective, so you need to agree to the disclaimer, and then you click Install. That sets up this node within your application, and then you can start using it. This isn't available on n8n Cloud, just to be clear,
so you will have to be self-hosting, either locally or, as I'm doing, on a remote server with Elestio; anything like Render, Railway or any of the cloud platforms will work here.

So once you have your community module installed, if you click "create workflow", for example (and we'll just trigger this manually for the moment), click on the plus and type in "MCP", and you'll see that there's an MCP client. You can choose "list available tools", for example. I'll set up a new connection here to hit Firecrawl's API. These are the two transport options that I discussed, standard input/output and server-sent events, so we'll choose standard input/output. Then let's go to Firecrawl's MCP documentation, which you can see here. If you scroll down, you can see the command and the arguments you need to pass: it's npx, which is there; then -y, which basically means it will confirm any prompts it's asked while it's installing; and then firecrawl-mcp. So copy that out, and then there's the Firecrawl API key, so you need to grab that. I'll just set this to an expression, so FIRECRAWL_API_KEY equals, and then you need to go and get your API key. Okay, so I have that there and I just paste it in (I'll cycle this key after this video), and then you just click Save. So that's the server essentially set up. You haven't actually installed this package, but because it's npx, it's going to install temporarily at runtime.

Then, within the client, you can choose one of the operations, so we'll just list tools for a second. We save that and click "test workflow". Okay, that was successful, and if we double-click, you can then see all of the tools that are available within this MCP server. We have firecrawl_scrape; there's a description, and then it talks to the schema that needs to be passed. Now, I don't think it provides enough detail here. Maybe the standard will evolve over time, but I think it needs to provide a lot more information for the AI agent to make the right call as to how to populate these parameters. There's firecrawl_map, which is a way of mapping URLs within Firecrawl; there's a search tool, which is a new feature, I think; and a deep research feature, which is definitely very new. But those are the tools that are available, and then you can execute one of those tools.

So if you go to MCP client, "execute a tool", you'll now need to provide it a tool name. Now, it doesn't make too much sense to hardcode this, because the tools may change and new tools may be added, but for the moment I'll just put in, let's say, firecrawl_scrape. Then you would also need to hardcode the tool parameters, and that's why it makes sense for an AI agent to do this: if you're going to this much trouble, you may as well just set up the actual standard agent tool. So you want the AI agent to do it, but you can execute it standalone like that.

Then, to get the AI agent to do it, let's just add an AI Agent node and delete all of that out. We'll use a chat trigger and connect it all up, so the chat trigger is going into there; we'll use a chat model (I usually use GPT-4o as standard) and then we'll just give it some memory, just so that it can keep track of itself. Then, in terms of the tool, you can use the MCP Client Tool, which is essentially the same thing except it can be triggered by an AI agent. Okay, so if I just say "what tools are available", it goes and spins up that Firecrawl MCP server, uses it to relay information to the Firecrawl service, which responds back to this little client here, which then responds back to the host, which is the AI agent,
and then here is the answer: it can scrape, it can map, it can crawl, it can batch scrape, and it can carry out deep research. When you're getting this up and running, if you run into issues actually getting the agent to trigger this tool, you need to set the environment variable that allows community packages to be used as AI agent tools, N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE=true. Elestio actually runs Docker images of n8n, so I needed to add this to my Docker Compose file. That's just something to be aware of, but it'll differ depending on what environment you're working in.

Okay, so it can now list tools. Then let's add another one, which is MCP "execute tool", and for the tool name, what we're going to do here is have the AI dynamically input this, as well as the tool parameters. Now, in the latest version of n8n there's this great little button that you can click so that the AI automatically populates that, so we'll just do that there. It's not here for some reason, but we can just put in an expression, $fromAI; let's call this "tool_name", and then we'll just say "populate this with the tool name from the list tools results", something like that.

Now let's try this: "can you scrape my website theaiautomators.com". It goes to list tools. Now, that should have been in memory, but there's no harm getting an updated version; maybe something has changed. And now I've got an error, so let's have a look: "failed to connect to MCP server". Oh yeah, I selected the wrong credential. So there we go, okay, it's working again. Within Firecrawl, actually, if we go into the activity logs, we'll probably see this crawl running. Yeah, there it is at 14:03: it has scraped, it took about 10 seconds, and it was successful. If we come back in here, then, it's all green, and it looks like it's output all of the HTML from the page. Yeah, it's taken a bit of time, and it's just thrown an error. And yeah, interesting: the maximum context length of 128,000 tokens was breached, with 376,000 tokens passed. That's a really good example of a potential failing. I don't even know whether this is a failing of MCP or a failing of the current state of AI agents: it isn't able to understand that too many tokens are being passed to the LLM and take a different approach.

"Can you provide the markdown only please." I'm going to try it that way, and Firecrawl usually does either markdown or HTML or both, so markdown shouldn't be that many tokens. So that returned and said it couldn't do it, and I believe what's happened is that it hasn't structured the parameters correctly: I think this output format needs to be an array, and it's passed just a value. Let's get it to do it again and see whether it has learned from its mistake. This is what I mean about there not being enough information coming back from this list tools call; it essentially needs to provide full API documentation. You're seeing the formats here, but it's not telling you it needs to be an array, so it very likely could get it wrong here again. It's taking a bit longer, so it might have actually got it right. Yeah, so that worked, and you can see the full markdown there. So it does work, but it's clearly not ready for prime time just yet. I think the reliability is an issue, and this one specifically might just be a lack of description by Firecrawl on the actual tool: if they dumped all of their documentation for this endpoint, explaining these parameters, into the description, the agent would have a much better understanding of how to actually interact with it.
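To pin down that failure mode, here is roughly the difference between the agent's first attempt and the retry that worked. This is a sketch assuming the firecrawl_scrape tool takes url and formats parameters as seen in the video; the exact schema is owned by Firecrawl's server.

```typescript
// Assumes `client` is a connected MCP Client as in the earlier sketches.

// First attempt: a bare string where the schema expects an array.
// The server rejects this, and nothing in the tool listing warned the agent.
const failed = await client.callTool({
  name: "firecrawl_scrape",
  arguments: { url: "https://theaiautomators.com", formats: "markdown" },
});

// The retry that succeeded: formats passed as an array, and requesting
// markdown only keeps the result well under the model's 128k context limit.
const worked = await client.callTool({
  name: "firecrawl_scrape",
  arguments: { url: "https://theaiautomators.com", formats: ["markdown"] },
});
```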
The other thing, then, is that this Firecrawl MCP server can only list tools and execute them. So if we try, let's say, list prompts (and I really believe the listing of prompts is going to be one of the biggest benefits of MCP), if you ask it "can you list prompts", it's going to hit this, but it's going to throw an error, I think. Yeah, and the reason is that this MCP server doesn't actually offer prompts, so this probably needs more graceful error handling, maybe within the n8n community module.
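As a sketch of what more graceful handling could look like on the client side (assuming a server that lacks the prompts capability surfaces as a thrown error, which is my reading rather than documented behaviour):

```typescript
// Assumes `client` is a connected MCP Client as in the earlier sketches.
// Not every server implements every capability: firecrawl-mcp, for example,
// offers tools but no prompts, so a blind prompts/list request fails.
try {
  const { prompts } = await client.listPrompts();
  console.log("Prompt templates on offer:", prompts.map((p) => p.name));
} catch (err) {
  // Degrade gracefully instead of surfacing a raw error to the agent.
  console.warn("This MCP server does not offer prompts:", err);
}
```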
So if we look at some of the other MCP servers I've connected to, if we come over here, this is the Brave search engine. You can see there's Brave Search, npx, and this is the package name. If we just ask it a question, let's say "what's the weather like in Galway", it gets a list of the tools that it can trigger on the Brave search engine (there's Brave web search and Brave local search), and then it executes that tool, which it's done there, and it's got a link to the weather. So that semi-worked. Let's see what it actually can do: "are there any Japanese restaurants in Dublin?" I've never used Brave Search before, so I have no idea whether it's any good or not. Okay, do we have an answer? We don't; we've hit some sort of API limit, "too many requests". I am on the free plan with Brave, so that's probably why that happened. So clearly the vision of MCP, the idea that you can plug any piece of software into it and your agents will simply work, is far from reality at the moment. Yeah, still hitting an issue on that one. I'm absolutely leaving this in the video, though, because MCP has such potential; we're just very far from it being production ready for n8n at the moment.

If we look at the GitHub agent that I have set up, this one actually does provide a list of resources, so let's try that one: "list your resources", and let's see what it can do; it's going to give us both. Cool, okay, here's our answer. So the browser console logs are what's listed under resources, and then for tools there are 17 different tools. This makes sense, because MCP is used a huge amount for AI coding, and pushing to repos on GitHub is a core feature of AI coding, so this is definitely a very mature server, as you can kind of see there. So yeah, that's three different MCP servers with kind of mixed outcomes in terms of testing.

Then there are two others worth discussing. Number one is the Puppeteer agent, because with MCP you can actually trigger Puppeteer for browser automation, similar to OpenAI's Operator or Browser Use. The issue I have with this is that this is running on a remote server, and it's actually running inside a Docker container on that server. So if I actually try to execute Puppeteer (for example, let's even try it here, and I'll just say "go to my website theaiautomators.com"), this is going to throw an error, because it doesn't have permission within a Docker container to actually trigger a browser automation. You can see there: "failed to execute operation", it's not permitted. If you try this on your local machine, it possibly will work, whereas it obviously won't work here because we're remote and in Docker.

The other one I tried was this Apify agent. Apify has a remote MCP server, and I really wanted to test this out because, of these two transport mechanisms, it has an MCP server actor where you can send it SSE events, which is this one, and I'll provide a link to this in our resources as well. So I spent about an hour and a half trying to get this working, and I got some of the way: I was able to spin up the MCP server for my account, and I was able to trigger it within the Apify Console. I wasn't actually able to get it working here, though. If I look into the tools and go into the Apify connection, I have my SSE URL set and the POST endpoint set to receive messages; however, it wasn't able to actually communicate through this. I'm not sure whether that's because Apify has a non-standard implementation of MCP, or whether this MCP community module just can't handle this type of use case. So if anyone does get it working, let me know in the comments below.

So then, on to a verdict: is MCP going to change the nature of AI agents in general, and specifically in n8n? Definitely not in the short term; as you can see from this testing and this setup, it encounters more issues than standard agent tools. But the potential is definitely there. This community module is only a few days old, and MCP is a relatively new standard. I'd love to know what you think in the comments below: do you see a future for MCP in the likes of n8n, for this level of abstraction around services and tools? And don't forget, you can access this blueprint, as well as our HAL 9001 personal assistant, in our community, The AI Automators. We have a free resources section where you can access both of these, and you can click the link in the description below for more. Thanks for watching, and I'll see you in the next one.