Transcript for:
AI-Driven App Development Guide

In this video, I'm going to drop my top 10 rules for building with AI so you can spend more time building and less time debugging. Not only that, I'm going to cover my favorite app-building tools, frameworks, and MCPs for less chaos and more control in your development process. So, if you're into this kind of stuff and you want to dive deeper into building apps with AI, I highly recommend checking out the course and community. It's been live for a couple of months now and we have a ton of people in there building out their projects using AI. There's a course in there with over 100 micro-lessons taking you from zero to hero in AI app development. Enough talk. Let's go.

So, how do you know which is the right tool for you? There are a lot of different tools on the market, and they range from basic to advanced. Starting off on the left, very easy to use and get started with, are Lovable and Bolt.new. Moving along, you have Replit Agent, and then at the other end of the scale we have Windsurf and Cursor. Generally, a lot of people I work with start at the easy end of the scale and make their way up, settling on either Windsurf or Cursor for their later stages of development. In terms of pricing, they're all relatively similar: you're going to have to pay for a monthly subscription with Lovable or Bolt. If you're looking for something completely open source, you can look at a tool called Cline, which works with VS Code and is free, but you will have to pay for your model usage through a provider like OpenRouter, and that can sometimes end up being more expensive than the payment plans from the likes of Windsurf and Cursor.

The good news is you don't have to get too worried about which tool you start with. They're all really similar and they operate in the same way. You have a file structure that sits on the left-hand side. You have a window where you can edit your code and see what it looks like. You have a terminal at the bottom which interacts with your file system and where you can run your server. And then you have an AI chat window on the right (or left in some cases) where you chat with the AI about building out your codebase.

The next rule deviates from what vibe coding is. With standard vibe coding, you just jump in, start developing code, and see where it takes you. But this is where lots of people fall down. What I recommend is taking the time to work with AI to build out a plan for your project. And I'm telling you, this will save you so much hassle and will save you weeks, if not months, in the longer term. In the description, I'm going to leave a link to these free resources at notes.switchdimension.com, and in there you've got the AI development project setup prompts. Essentially, what we have here is a set of prompts that will allow you to get your project set up and running in the right way. The first thing we do is talk to a product manager AI. The idea here is to get a good understanding of what it is you want to build and why you're building it. This is really important for explaining it to your other AI agents so you can make sure you're building the right thing. Next up, we'll talk to a UX designer, who basically helps us think through what we want to build in terms of a user interface and a user experience. There are a couple of different ways that you could build this out.
We want to make sure we're building it in the correct way for mobile, for desktop, or for the particular type of persona that you're focused on. And then lastly, and probably most important, is the software architect. We have to step through the first two steps in order to understand what we need to build, and we're going to feed that into our agent after the fact, in Cursor, in Windsurf, wherever, so that it can actually build out the app that we want properly. And I'm telling you, just taking a few minutes to do this really saves you a lot of hassle. Let's call it vibe product management, if you will. What we get is a set of three documents, and then we drop those into a folder in Cursor or Windsurf and have our AI look at those to know exactly what to build and how to build it. The most important is the software requirements specification document, which breaks down the whole project ready to build, including our tech stack, our authentication process, our route design, and so on. So this is really powerful.

If you want to take your planning up a level again, you can check out a tool called Taskmaster AI, which works within Cursor and most other editors. The idea is that it takes your initial plan and helps break it down into subtasks which the model can then make its way through. As a starting point, I recommend just doing these steps yourself, breaking down your own tasks and feeding them in one by one. It's basically what I teach in my own course. Then, once you're comfortable with that, you can step up a level and use a tool like Taskmaster AI.

It's a good idea to pick popular languages and frameworks to build your apps with. The reason is that large language models have crawled everything that's available on the internet, and the more content there is for a particular language or framework, the better they are with it. I like to work with TypeScript, JavaScript, and Python because they're heavily crawled and there's a ton of information out there, including tutorials across YouTube. To a lesser extent, I work with frameworks like Next.js and tools like Tailwind CSS and shadcn/ui for component design, which just make life a whole lot easier. I'll show a quick example of the kind of component code I mean in a minute.

It's very easy when you start to try and do too much in one go. And that's normal: you're trying to get a sense of what the model can and can't do without running into a lot of errors, so you decide to put in these huge one-shot prompts to try and develop a whole application. The better way is to do it step by step. I absolutely think you should go and try to build out an app in one shot, but you'll quickly understand what breaks and what doesn't, and then you move to more iterative steps where you work through each part bit by bit. And that's why breaking it down into individual parts is important. So the takeaway here is: don't try to do too much, don't shoot for the moon straight away, break it down into smaller parts.

Get to know the popular models and get to know their personalities. Each one of them has its own strengths and weaknesses. I will use Anthropic's models, Claude 3.5 and 3.7, heavily for code generation, but then I'll switch over to o3 from OpenAI to do some planning and thinking about how a project should be set up. I'll also use Gemini 2.5 in the same way to do some troubleshooting and to work with larger codebases.
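Since I just mentioned Next.js, Tailwind CSS, and shadcn/ui, here's the quick example I promised of the kind of component code that stack produces. It's a hypothetical sketch rather than code from an actual project: the file name, the waitlist copy, and the assumption that you've already added the shadcn/ui Button through its CLI are all just for illustration.

```tsx
// components/SignupCard.tsx (hypothetical file, not from the video)
"use client";

import { useState } from "react";
import { Button } from "@/components/ui/button"; // assumes the Button was added via the shadcn/ui CLI

export function SignupCard() {
  const [email, setEmail] = useState("");

  return (
    <div className="mx-auto max-w-sm space-y-4 rounded-xl border p-6 shadow-sm">
      <h2 className="text-lg font-semibold">Join the waitlist</h2>
      {/* A plain input styled with Tailwind utility classes */}
      <input
        type="email"
        value={email}
        onChange={(e) => setEmail(e.target.value)}
        placeholder="you@example.com"
        className="w-full rounded-md border px-3 py-2 text-sm"
      />
      {/* The Button component comes from shadcn/ui */}
      <Button className="w-full" disabled={!email}>
        Sign up
      </Button>
    </div>
  );
}
```

The point is simply that models have seen a huge amount of component code written exactly like this, so they tend to generate it reliably.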
If you want to get a sense of which model is good at what and how I benchmark the models against each other, you can check out a video that I've linked in the description below.

An important thing to understand when we're using these models is their context window. That's how many tokens, or essentially how much code, we can send and receive when we're working on a particular problem. It's okay when you write one or two prompts, but as your conversation gets longer and longer, the context starts to fill up. Eventually it runs out, the model forgets the start of the conversation, and without the full context of what you're talking about it can go in the wrong direction. So you need to be aware of this. You need to do things like opening a new conversation or a new chat as you move between different tasks, and being aware of what context is being sent. You'll create documents that help give the model a summary of where you are in your codebase, and your code editor will help with that too: Cursor, Windsurf, and the rest do a great job of summarizing your codebase to send a tighter version of the context so it doesn't run out. Context windows are increasing all the time, but they're different across different models. Gemini 2.5 has a very large context window, while Claude has a smaller window but might be better suited in other ways. So think of it like Pokemon cards: understand the strengths and weaknesses and when to use each one at any given time.

So again, you need to know which model to choose for which scenario. I'll use a model like Gemini 2.5 Pro in thinking mode, or o3 or o4-mini, when I want to plan out my next steps, figure out how my codebase works, or do a little bit of thinking with the model. I'll do that in ask mode, and then I'll switch over to agent mode when I actually want to implement the changes, typically with Claude 3.7 or Claude 3.5.

The next thing to understand is the personality of these models, and it's only through working with them that you get to know their personalities. For instance, Claude 3.5 was the favorite for most development for a long time, and it's still probably my favorite, but it's maybe not as intelligent as 3.7 in terms of benchmarks. The problem with 3.7, though, is that it tries to achieve too much in one go. That's great when you're doing design, so I would use Claude 3.7 for design. But if I'm implementing code and adding new features, I'll usually still use 3.5 because it moves that bit more slowly alongside me and doesn't go off the rails, if you will. You can write rules to keep it in check and you can ask it to think step by step. But for beginners, I would say just work with 3.5 and then occasionally work with 3.7 to get a sense of how it works as you get more confident.

So again, on the topic of context, it's really important to give your model more context about what you're working on or developing, and we do that in various different ways. You need to remember that a large language model has something called a cutoff date. That's the point in time when its training finished, and it doesn't know anything that happened after that date. So let's say you're working on installing Clerk, setting up Neon DB, or maybe working with a new version of Next.js. Their documentation will have updated and changed since the cutoff date of the model.
So what the model will try to do is set up and install these tools in the outdated way, and you end up getting errors, problems, and different issues showing up. What we should do in that case is use this little @ symbol, which is available in Windsurf and Cursor, to add context. You can add specific docs: each of these platforms has gone and indexed particular docs. With the web feature, I might go directly to a specific page of documentation on how to implement a particular API. Let's say OpenAI releases a new version of their API, like they did recently with the Responses API. I know the model doesn't have a clean view of that because its training is outdated, so I'll copy the exact URL for that documentation and paste it in here so that the model has the context to set it up in the correct way. That's really important.

If you want to go up a step from that, there is a tool called Context7. This is an MCP tool which you can install in Windsurf or Cursor. Essentially, what Context7 has done is gone and scraped and crawled all the documentation for popular libraries. So if we look at Clerk, we can see that it was updated about 3 days ago. If I click on that, you'll see it's pulled in tons of different snippets, 2,300 of them, that show you exactly how to get it set up. I can set up my MCP in Windsurf by just hitting this little configure button here and then adding in this snippet of code (I'll show a rough example of what it looks like in a minute), and then it should be good to go. You can see that it's switched on here and it's got two different tools, resolve and get library docs. So whenever I'm looking for context on particular docs, the MCP will be called to pull in those snippets from Clerk or from whatever other tool I want.

In terms of prompting, you generally want to start with a chat where you work out a plan for the next feature you're building or whatever else you want to put in place. I'll usually get the model to think step by step or to create a numbered plan based on the various tasks. Then, when I'm switching into write mode, I'll iterate through those tasks by saying, "Okay, let's work on task one, let's work on task two," and so on, making sure the model doesn't go too far ahead. We're also making sure that we're slowly stepping through the changes the model is making and that they all make sense. Like: let's set up the connection to the database, let's create the API, let's put the next piece in place. That way I can actually check the code and see what's happening as it moves step by step. I might not fully understand every part of the code, but I'm getting a general sense of what's happening as I move forward. The alternative is a one-shot prompt, where we tell the large language model to go and develop the entire application in one go. That can work for small prototypes when you're starting off, and often when you create a project there's no harm in doing this to see how far the model will get. But for production code, in reality, we want to make sure we're moving more incrementally. So, make sure your prompts talk about moving in steps and tasks.

You'll get a chance to review the code, and it looks a little bit like this: you're going to have a red line for something that's been removed from the code by the AI and green where it's been added. So it's important to look through all the changes and all the files and see exactly what's happening.
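For reference, here's roughly what that Context7 snippet looks like once it's pasted into the MCP configuration in Cursor or Windsurf. This is a sketch based on the Context7 setup instructions as I remember them, so check the current docs before relying on it:

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    }
  }
}
```

Once it's switched on, the editor can call those tools whenever you ask for up-to-date docs on a library.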
If you don't understand something in those changes, it's really worth your while highlighting it and simply asking the model for more information about it. You can just hit Ctrl+K or Cmd+K in Cursor to get a prompt that helps you understand exactly what's going on. This isn't necessarily vibe coding. The idea with vibe coding is that you just let the model run. But the problem is it's going to take you in the wrong direction, go down rabbit holes, create redundancies, create the same code over and over, and end up being a large pain in the butt later on. I know it's a bit boring to check the code that's been created, but if you think about how far we've come, you didn't need to write that code. The AI is writing it for you. It's your job to understand whether it's done a good job and what exactly is happening.

Let's talk about saving, version control, and reverting if you run into trouble. Within Cursor, and the same in Windsurf, you have these checkpoints. If something goes wrong while you're having a conversation, you can restore a checkpoint, and that's like going back in time to before you told it to run a particular command. I don't tend to use that much myself. I much prefer to rely on traditional version control, and that's using Git. Essentially, Git is like your save game or save file, but you can move back in time to any version of your project that you have committed. So in this instance, we'll have made lots of changes. I'll stage those changes and then commit them, and I can use this little button here to create a commit message, which will detail exactly what I've done in the last couple of changes, and then save that. If things go wrong later on, I can step back to that point in time and restart from there. In the terminal, you can type something like git reset, which will move you back to the last saved point. Now, be careful with that command. What I teach in the course is to use feature branching, so that you actually create a branch for whatever you're going to build next. You work in that branch, you can discard it if you want, and you can make changes there without affecting anybody else's work or anything else going on in the project. That's what I'd recommend at a production level, and I'll drop a quick example of those commands in a moment.

When you get a bit more comfortable in your development journey, it's worth starting to get the AI to write some tests. There are various different types of tests: unit tests, integration tests, and end-to-end tests. I'm not the biggest fan of writing tests myself in traditional coding, but it does have a place with AI because the models can write a lot of the tests for you. But be careful: they will generally do anything they can to pass their own tests. So I'd write a select number of integration tests or end-to-end tests using tools like Playwright or Jest, depending on the framework you're using, and just get the AI to set all of this up for you within Cursor or Windsurf. Essentially, the idea is that you write a set of tests and then get the AI to create the component or the feature around those tests, whether they're unit tests or end-to-end tests. I would actually check that an email and a name can be submitted and saved successfully to the database, and I would run that test to make sure it works.
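Here's the quick example of that Git flow I promised. It's a minimal sketch, and the branch name and commit message are made up for illustration:

```bash
# start a branch for the next piece of work
git checkout -b feature/signup-form

# stage your changes and commit them as a checkpoint you can come back to
git add .
git commit -m "Add signup form"

# merge the branch back when you're happy (or delete it to discard the work)
git checkout main
git merge feature/signup-form

# the "be careful" command: discards any uncommitted changes in your working files
git reset --hard
```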
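And here's roughly what one of those end-to-end tests can look like with Playwright, along the lines of the name-and-email check I just described. It's a hypothetical sketch: the URL, the field labels, and the success message are assumptions for illustration.

```ts
// tests/signup.spec.ts (hypothetical Playwright end-to-end test)
import { test, expect } from "@playwright/test";

test("a name and email can be submitted and saved", async ({ page }) => {
  // Assumes your dev server is running locally on port 3000
  await page.goto("http://localhost:3000/signup");

  await page.getByLabel("Name").fill("Ada Lovelace");
  await page.getByLabel("Email").fill("ada@example.com");
  await page.getByRole("button", { name: "Sign up" }).click();

  // Uses the on-screen confirmation as a proxy for the database write;
  // you could also query the database or an API route directly here.
  await expect(page.getByText("Thanks for signing up")).toBeVisible();
});
```

You'd run this with npx playwright test, either locally or as part of a deploy.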
If I've been working on a new AI feature, or I'm about to deploy my whole project back onto the web again, I'll run my tests to make sure that my app is functioning correctly. Because the thing is, with AI it might have gone and fixed a problem over here for you or created a new feature, but it might have broken something over there that had previously been working fine. And that's the case for traditional development as well. So we use a layer of testing to make sure that even though we've added all these new features, functions, and patches, we haven't broken anything that existed before. So again, once you get a bit more comfortable, start to play with and work with some tests.

Now, you might have heard a lot about MCP servers and how they can help you code. You can get them all from Smithery or from mcp.so, and I've actually listed all the main repositories here where you can get them set up. If you're just starting out, don't get too bogged down in setting them all up. There are a couple that I recommend, like Brave Search, and maybe a connection to your database, whether that's Supabase or Neon DB. These are going to get more and more popular, so if you are an advanced user, I'd recommend understanding how they work and which ones you'd like to use.

So, there's a lot of information in there, and I don't expect you to absorb it all in one go. I'd highly recommend checking out the "Master AI App Development Without the Frustration" video, and then if you want to understand how to build out websites really quickly, the "10X Vibe Code" video might be a good place to start as well. Hopefully you got some value out of that. The idea is that you move from this frustrated state of vibe coding to something that has more control and more rails, taking the chaos out of it, and you end up actually falling back in love with the process.