What's up engineers? IndyDevDan here. Imagine opening your codebase and having your AI coding tool instantly understand it better and faster than you can read the README. There are three simple folders that can unlock that superpower for you in every project you touch. We're going to break them down into their atoms so you can use them to increase your compute advantage.

The AI coding tool you use doesn't change one critical fact of engineering in the generative AI age. It doesn't matter if you're a Cursor fan, Windsurf, Cline, Codex, or even Claude Code: you already know what this idea is. Context is everything. Context is king. If your agent can't see critical information, it simply cannot build what you need. That's what these three essential directories solve, comprehensively and systematically. Let's talk about each of them and why they're valuable for your engineering work.

Let's start with the foundation: ai_docs. Think of this as your AI coding tool's persistent memory, a knowledge repository your AI tools can instantly access. Inside the "Claude Code is programmable" codebase, a big idea we discussed in last week's video, you can see we have an ai_docs directory. Inside this directory we have two markdown files: Claude Code best practices and OpenAI's Agents SDK. I can now boot up any agent, have it quickly read these files, and it can then turn around and get work done fast. So what goes inside ai_docs? Third-party API documentation, integration details, custom patterns and conventions, implementation notes. Anything specific to your codebase goes in ai_docs. I mostly use it for third-party documentation so that I can quickly ramp up my codebases over and over and over. The ai_docs directory is a persistent database for your AI agents.

Then, of course, we have the specs directory. What goes in the specs directory? Specs is short for specification, which also just means plan. You might know these as PRDs, product requirements documents, whatever you want to call them. These are the new units of getting massive amounts of work done with your AI coding tools, and now, with powerful agentic coding tools like Claude Code, Cursor, and Cline, we can run multiple tools inside a single prompt. The specs directory is the most important folder in your entire codebase. This is where we write great plans. This is where we scale up our compute and do more work than ever in single massive swings. A 1,000-token prompt can expand into an entire codebase. That's possible because we're agentic coding: we can write self-validating loops inside our prompt. Remember, agentic coding is a superset of AI coding, a massive superset. Great tools like Claude Code allow us to take the plans in our specs directories, across all of our repositories, and blow them out into full-on codebases and features. This is why you should always have a specs directory that details the plan for all the work you're going to hand off to your powerful agentic coding tools. If you're still iteratively prompting back and forth and back and forth, I can guarantee you're wasting time and not scaling your compute as much as you could be. Sit down, take your time, think, plan, and then build.
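To make the three-directory setup concrete, here's a rough sketch of how it might sit at the root of a project. The file names are illustrative, not the exact contents of any repo shown in the video:

```
your-project/
├── ai_docs/                          # persistent knowledge base for your agents
│   ├── claude_code_best_practices.md
│   └── openai_agents_sdk.md
├── specs/                            # plans / PRDs you hand off to agentic tools
│   └── require_id.md
├── .claude/
│   └── commands/                     # reusable prompts (slash commands)
│       └── prime.md
└── src/
```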
There's a powerful new planning technique that we're going to showcase in this video, where you use compute inside of your plan to iterate faster with your AI coding tool. The key idea is very simple. Every Principled AI Coding member knows this, and everyone that's been following this channel knows it as well: the plan is the prompt, and great planning is great prompting. You can scale up what you can do by writing a detailed, comprehensive spec, plan, PRD, whatever you want to call it. You can then hand this work off to your AI coding tools and your agentic coding tools, and they will get the work done for you.

So every codebase I build in now has the ai_docs directory, the specs directory for plans, and .claude. Now, .claude is a new, emerging directory. To be super clear, this one is specific to Claude Code, but what you write inside these directories is not specific to Claude Code. If we go to the just-prompt codebase, open up .claude, and go into the commands directory, you can see we have several different commands. So what are these, and how are they useful for scaling our engineering work? These are nothing but prompts. If we open up Claude Code here in the just-prompt codebase and type slash, you can see the names of all these commands right at the top. These are reusable, runnable prompts that we can use across sessions.

The most important reusable prompt I recommend you set up in all of your codebases is the context priming prompt; there's a rough sketch of one below. This is where you prime Claude Code, Codex, Cursor, whatever tool you're using. It is not Claude Code specific, and the names of these directories can really be anything. So if we prime the just-prompt server here, we'll do the basic context prime. It's going to run through these commands using tool calls: it reads the README, then runs git ls-files to understand the context of this project. I recommend you set this up in every codebase so that you can quickly operate on the files and the ideas that matter.

What's the big idea of what we're doing here? We're making it easy to set up new instances of our agentic tooling over time. And by over time, I mean on a day-to-day basis, but also on a session-to-session basis. If you've used Claude Code or Codex or any other AI coding tool, you know they run out of context. You can see the current context windows of the state-of-the-art models; a lot of these are limited to 200K or 1 million tokens. When using your AI coding tools, you'll eventually run out of context and then you'll have to reset. That's what the context prime solves, and that's what the .claude/commands directory gives you specifically for Claude Code, but you can deploy this pattern across any AI coding tool. So if I open up a new window here, open up Codex, copy the relative path, and execute this, it serves the exact same function. We're doing the same thing we did in Claude Code: setting up the initial context for the AI coding tool so that it understands everything and knows where everything is. You can see it spat out a nice summary and now it's ready to go. Its context is primed.

These directories are not limited to context priming. We built out an ultra diff review where we created a diff and then had multiple language models review the diff and offer feedback. This is something we're going to be talking about a lot on the channel. The capabilities of your prompts are now unlimited thanks to agentic coding tools.
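As a concrete example, a context priming command file might look something like this. The file name, wording, and steps are illustrative, not the actual command from the just-prompt repo:

```
# .claude/commands/prime.md  (illustrative)

## Context prime

RUN:
- git ls-files        # list every tracked file in the project

READ:
- README.md           # project purpose, setup, and key commands

Then summarize the project structure and wait for further instructions.
```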
You can run any tool. You can run custom MCP servers like we have here. You can do a tremendous amount by having reusable prompts inside your codebase.

So these are the three essential directories I have in every single one of my codebases. ai_docs is the persistent knowledge base for your AI coding tools. specs is where you define your plans; it's where you define all the work you want done so that you can hand it off to your AI coding tools and your agentic coding tools. And then we have .claude. Again, you can name these whatever you want. This is where you place the reusable prompts that you want at the ready inside your agentic coding tool. Reusable prompts are an essential pattern even outside of AI coding. You want to be able to use compute over and over in different shapes and forms, and you do that by creating prompts you can reuse, validate, and improve, especially if you're building out emails.

What does this look like in action? Let's build out a simple, concise, brand-new feature inside of Pocket Pick. There's a tweak I've been wanting to make. Let me open this up and briefly explain Pocket Pick. As engineers, we reuse ideas, patterns, and code snippets all the time, but keeping track of them can be hard. Pocket Pick is the solution to that problem. This simple MCP server creates and stores all of your personal knowledge-base items, any code snippets, files, or documentation you want to reuse, inside a simple SQLite database. Here are all the tools: pocket add, add file, find, list, and so on. The key change we want to make today is updating the add command. If we look at the data types here for adding new items to the Pocket Pick database, we have text, tags, and the database path. Right now, the ID is automatically generated. I want to improve the searching capabilities so that we can pass in IDs when we're creating pocket items with pocket add and pocket add file. That makes it super easy to run pocket get and pocket to-file-by-id with a known ID.

How are we going to do that? We're going to use these three essential directories. With them, we can move faster in this session and over time. So let's go. I'm going to use, of course, Claude Code; it's my favorite tool right now. The first thing we do is prime. Already, just by having this command here, I'm saving time. It runs a gitignore-aware tree command, which gives the coding tool a nice tree view of the project. You can see that working there. Let's full-screen this, and you can see I'm saying read the following files. So this is all done, that's all loaded. We can type /cost and see exactly how much that cost: about 20 cents to get our agent primed.

Now it's time to start doing real work. How do we do real work? We don't iteratively prompt; that's the old 2024 way of doing things. We create concise, agentic plans. And we're going to take this a step further with an emerging technique I like to call plan drafting. The big difference is that both myself and my AI coding tool are going to be part of drafting this plan. Instead of writing the plan myself, creating the file myself, doing any of that work, I'm going to have Claude Code create the first draft of this plan.
So my prompt is roughly: create specs/require_id, a new feature, id: str. And if I go back to the README, we can do a little bit of light planning here: we want this inside of the pocket add and pocket add file tools. Break the plan into these sections, write this plan. I'm going to use ultrathink here. It's definitely overkill, but it's going to trigger Claude Code's reasoning capabilities. What I'm doing is having Claude Code write the first draft of this plan. I've briefly looked at this codebase, and I roughly remember the architecture and how it works, but our agentic coding tools can quite literally read hundreds of times faster than we can. It already has a better understanding of this codebase than I do. It's got the SQLite table structure, it has all the commands, it can see uv and pytest, it can see the structure. It quite literally knows more than I do right now about this codebase, because we primed it properly. And now it's going to write the first draft of this plan given everything it knows.

So what's the new flow here? Build out the fundamentals of a plan, have a draft created by your AI coding tool, iterate on the plan, and then actually execute the plan. I'm going to accept this. You can see we have a new spec created; let's take a look at how it turned out.

This is the plan, the first draft, and you can see it's pretty good. It details the problem very concisely: when users add items to the Pocket Pick database using the add and add file tools, IDs are automatically generated. So there's a problem statement, and then a solution statement: this feature will modify these tools to require users to provide an ID as a mandatory parameter, giving them more control and easier identification. Exactly. So now we're reviewing. Every day we're moving more and more into being code reviewers and plan reviewers. We're becoming curators of information, curators of code ideas, and then we're handing them off to our AI coding tools. And the great part is that we're not iteratively updating our codebase and putting it into bad temporary states; we're operating solely in this file.

I can look through everything. You can see it's got that new ID field in the server module, which is an essential change as well. Add functionality: looks great. Server implementation: looks awesome. Test changes: you can see it knows where all these files are, and that's the really important part. We're tying together that context prime. We're skipping over our ai_docs directory here because we don't really need it; if we were initially instantiating this codebase, it might be more important. You can also see it built out a self-validation section, plus updates to the README. So just by providing the right context with the reusable context prime prompt, and by writing a pretty short, what, three sentence, IDK-rich prompt, we activated the reasoning model's capabilities and got a pretty great plan. I'm pretty sure this is going to take us 80% of the way. A rough sketch of the kind of change the plan describes is shown below.
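To make the shape of the change concrete, here's a minimal sketch of what a required, user-supplied ID might look like, including the kind of self-validating test the plan calls for. The function names, table schema, and test are hypothetical stand-ins, not Pocket Pick's actual source:

```python
# Hypothetical sketch: a required, caller-supplied id on the "add" path.
# Names and schema are illustrative; the real Pocket Pick code will differ.
import sqlite3


def init_db(db_path: str) -> None:
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS items (id TEXT PRIMARY KEY, text TEXT, tags TEXT)"
        )


def pocket_add(id: str, text: str, tags: list[str] | None = None,
               db_path: str = "pocket_pick.db") -> str:
    """Add an item using a caller-supplied id instead of an auto-generated one."""
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "INSERT INTO items (id, text, tags) VALUES (?, ?, ?)",
            (id, text, ",".join(tags or [])),
        )
    return id


# The plan's self-validation idea, expressed as a tiny pytest-style test:
def test_add_round_trips_supplied_id(tmp_path):
    db = str(tmp_path / "test.db")
    init_db(db)
    assert pocket_add("mcp_json", "{...}", ["config"], db_path=db) == "mcp_json"
    with sqlite3.connect(db) as conn:
        row = conn.execute("SELECT text FROM items WHERE id = ?", ("mcp_json",)).fetchone()
    assert row is not None
```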
What I'm going to do now is delete some of the extra stuff. The draft includes some recommendations, and I don't want the agentic coding tool building out any of these extra, optional ideas. Oftentimes, if there's more text, AI coding tools and language models will try to create meaning from it; they'll try to find patterns in it. So I'm just going to delete this schema stuff. I'm going to add one tweak: check the other tests to see if they're using any add functionality and update them to use this. This is important. And then I'm just going to open up this path to make it super clear where to look. This is honestly completely unnecessary, but I like to do it just to be crystal clear.

All right, so we have a great plan here. As good practice, before you fire your plan off, I always like to throw a commit in there. Let's make sure we revert anything we did here, this file here, and then I'm going to commit the plan. And then we're going to operate on the plan. So I'm going to say: implement this file. That's it. Claude Code has this new to-do list feature, a new emerging pattern inside agentic coding tools where it effectively creates a plan first and then works through it. This looks great. I'm going to go into agentic mode, or yolo mode: auto-accepts are on, and now our AI coding tool is just going to fly through this and do all the work for us.

It's worth mentioning that I arguably should have added ai_docs for the MCP server and repomix integration. At the same time, I didn't need them, because this functionality is already up and running; there's really no additional information from third-party documentation that isn't already embedded inside the codebase. Out of all the directories, ai_docs is probably the least important once you're up and running and your integrations are good. But don't underestimate the power of having a permanent knowledge base for your AI coding tools and your agentic coding tools that you can reference whenever you need it, especially when you're blowing up your context window over and over and over.

Meanwhile, our agentic coding tool is working through all the changes, and you can see that got implemented very quickly. It all comes back to writing a great plan. I was very detailed and very concise with my IDKs, my information-dense keywords; shout out to all the Principled AI Coding members who know their IDKs. All my keywords were packed with information. I'm being very detailed about what I want. You can see these items getting referenced, and the README is getting updated now. And it's all because I had my AI coding tool see the right information and then directed it to create a plan with that information, with a specific structure that I know works well.

I also want to note that sometimes I use feature-specific context priming. You can copy your prime command and, say you wanted to prime the add-ID feature, point it at some feature-specific file or files, and you'd have multiple of these. This lets you context prime on specific feature sets over and over, which is obviously better when you're working on larger changes. Oftentimes you won't need it and your main prime command will do. A rough sketch of a feature-specific prime is below.
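For example, a feature-specific prime command might look something like this. The file name and the referenced paths are illustrative, not actual files from the repo:

```
# .claude/commands/prime_add_id_feature.md  (illustrative)

## Context prime: add-ID feature

RUN:
- git ls-files

READ:
- README.md
- specs/require_id.md      # the plan for this feature
- src/server.py            # files this feature touches (paths illustrative)
- tests/test_add.py
```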
This is where your experience, your judgment, and your taste come in. You have to know your codebase at some high-level degree. And you can see there it's doing its self-validation; it's testing itself. Love to see that. And this is really important: I added that instruction at the end, update the other test files to include the ID parameter.

Just to summarize: we created a great prompt that created the plan. We're getting kind of meta. We're now prompting our agents to write plans for us, which we then tweak and iterate on, and then we hand that plan back to our AI coding tool, our agentic coding tool, to execute. You can see here it's working through the tests; it needs to add that ID parameter to all of our add calls to make sure everything works. And again, it's just chugging away for us. This is why Claude Code is so good. It does not stop working. It doesn't ask you if you want to continue doing more work. It just cooks, right? I absolutely love this about Claude Code. It's how they've designed the model, it's how they've designed the prompt. I say it over and over, but the Anthropic Claude Code team is doing incredible work, and it's clear to me that they use these tools. You can see we missed one test case in the find tests. We can always open it up if we want to look side by side, and we can see that functionality needs to pass the ID that's now been added. And there's our concise summary of the work.

Let's take a look at this. The most important thing here: we have self-validation on, so I can be pretty much assured that everything works. All the tests passed. Our agentic coding tool is testing itself. If we open up that spec, we can see the self-validation section at the bottom again; just to mention it, self-validation is super important. We gave it the command, and it needs to self-validate. To highlight it once more, this is the big difference between agentic coding and AI coding. I'm not just writing prompts that generate code. I'm writing prompts that do engineering work. That's building, that's planning, that's testing. It's the whole development life cycle. This is the power, this is the capability you can unlock.

I aim to share and hand you valuable ideas like the ones we've discussed here every single Monday. You know exactly where to find me. If you're not subscribed already, definitely join the journey. We're going to build living software that builds for us while we sleep. And these three essential directories, ai_docs, specs, and .claude, are how we scale up our engineering work. This is how we pass off more work to our AI coding tools and now to our agentic coding tools. Make sure you like this video and let the algorithm know you're interested.

You can see all the tests pass here. We don't really have to run this ourselves, but of course we can. Let's wire up the updated Pocket Pick. What do we need to pass in here? Let's check the README. This one should be relatively simple. Yeah, super simple. Let's go ahead and use this; I'll call it the updated Pocket Pick. There we go. And yep, the database path: that's fine, we can use the current working directory. Now I'm going to open up Claude Code and activate this MCP server. You can see it's found there, and I'll hit yes.
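Before running the demo, it's worth picturing what the retrieval tool we're about to use does under the hood: look an item up by its user-supplied ID and write its text to a file. Here's a minimal hypothetical sketch, reusing the illustrative schema from the earlier snippet; it is not Pocket Pick's actual implementation:

```python
import sqlite3
from pathlib import Path


def pocket_to_file_by_id(id: str, output_file_abs_path: str,
                         db_path: str = "pocket_pick.db") -> str:
    """Look up an item by its caller-supplied id and write its text to a file."""
    with sqlite3.connect(db_path) as conn:
        row = conn.execute("SELECT text FROM items WHERE id = ?", (id,)).fetchone()
    if row is None:
        raise ValueError(f"No pocket item found with id {id!r}")
    Path(output_file_abs_path).write_text(row[0])
    return output_file_abs_path


# e.g. pocket_to_file_by_id("mcp_json", "/absolute/path/mcp_new.json")
```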
Now I'm going to be really specific with my prompts. Using the updated Pocket Pick, I'll say pocket add, and for the content let's just copy some code worth keeping: say I want to remember the .mcp.json format as a code snippet. So I'll copy that, and I'll say add, pass in the ID, which is going to be mcp_json, and the content is this. Okay, there we go. It's going to run our new pocket add using the updated Pocket Pick MCP server. There it is: updated Pocket Pick, add, there's the ID, there's the text. Go ahead, hit enter.

Now we should be able to get our newly added item. What's that tool called? There's find, but we'll do this one: pocket to-file-by-id. This is the one I like to run. I'll say id, which was mcp_json, and for the output file I'll give an absolute path and ask for mcp_new.json. This is going to search by our new ID. You can see we're running that updated Pocket Pick MCP server and we're going to output to this directory here. Let's run this and see how it works. Content successfully written. Let's open it up, and bam: you can see, from our MCP server, from that SQLite database, this feature is running. Great. And you can see there's that new local database. This is a great way to just store and reuse snippets. The link for this codebase is in the description if you're interested.

So it's pretty incredible how quickly we were able to build that out. Look at all the files that were just changed, and remember what was done: all these changes were very precise, very surgical. And it's all because of these essential directories that let us scale what we can do inside of this codebase and every codebase.

I hope it's becoming clear, as we spend more time together and as you watch the channel, that it's really about the patterns. It's about the principles. It's about how you approach this new age of engineering. Don't get stuck on any specific tool, and don't get stuck on ideas that don't scale across your work, across your codebases, and most importantly, across time. ai_docs is your persistent knowledge base for your AI tooling. specs is where you plan your work; it's where you hand off more and more work to your compute, to your agentic coding tool. Remember, great planning is great prompting. And then we have .claude. This is where we build the reusable prompts we use across time in our codebase. The most important prompt to set up here is the context prime prompt. Set up your agents so that they have the essential information to work concisely. Don't waste tokens giving them access to your entire codebase. Be precise. Be focused. Having too much context is just as bad as not having enough: too much can confuse your agent, too little won't let it get the job done. Put these together and you can scale your engineering work further beyond. Thanks for watching. Stay focused and keep building.