Welcome back to the AI Daily Brief. We are now almost a week into the GPT-5 era, and it continues to be a hugely divisive model, which is not to say that a lot of the initial complaints haven't been addressed. As you heard earlier in the week, the 4o rebellion was successful and that model is available to people again. We also have a new approach to the model selector UX, which gives people who want it more fine-grained control over which model they're using. And a number of the complaints around limits for Plus users have also been addressed, at least in the short term. None of that, though, settles the question of how good this model actually is. People are still struggling to figure out how to get it to do what they want it to do. And while some have just retreated back to the models they've already built workflows for, others have spent a bunch of time trying to really understand how to get the most out of this particular model. Even OpenAI themselves released a prompting guide for GPT-5. What's interesting about this is that we had been trending further and further away from having to be good at prompting. I've discussed on this show, for example, how many different tools now effectively take your prompt and turn it into a better prompt, which is what Ideogram, which I used to make this image, does. And yet GPT-5, at least initially, suggests that we are in for a resurgence of prompt engineering. The TL;DR background is that GPT-5 seems to require some very specific types of prompting. So what we're going to do today, rather than endlessly debate how good or how bad the model is or discuss its social ramifications, is share 11 techniques, drawn from all sorts of different sources, from people who have actually figured out how to get more value out of GPT-5. This should be extremely practical and something that you can put into place right away. We're going to break it up into two main sections: foundations, and then a short section on agentic toggles.

Now, as we dig into the foundations, the biggest theme to note is simply that this model requires prompting, but the upside is that it requires prompting because it's so good at adherence. Shyamal from OpenAI writes, "GPT-5 is very steerable and extremely good at instruction following. That means you'll need to update your prompts to get the best out of the model and help it generalize broadly." Alex Duffy from Every, who has had a few more lab cycles experimenting with this, says GPT-5 is more steerable than any other frontier model: prompts make or break your results. And indeed, when I asked folks to share their early experiences with this, this exact thing came up a lot. Seth Cronin, for example, wrote, "My take is that you have to be a lot more explicit with GPT-5 if you want a deeper output. o3 seemed to read between the lines and give you a bunch of useful stuff to work with, as if it wanted to do more work." Which brings us to prompting technique number one. And again, if you feel like this is a little bit of a regression, you are not alone. But a lot of people's best recommendation so far, after a few days with this model, is to basically tell it to think and work harder. Kajatan Mastella writes, "The GPT-5 authors said it themselves: append 'think harder' or 'think deeper' to the end of every prompt. Works like a charm." Basically, telling it to think deeper is an alternative way to trigger higher reasoning.
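In practice, the technique is nothing more than a suffix on your prompt. Here's a minimal sketch, assuming the OpenAI Python SDK's Responses API; the model name and the helper function are illustrative, not anything official.

```python
# Minimal sketch of technique #1: append a "think harder" nudge to every prompt.
# Assumes the OpenAI Python SDK; the model name and helper are illustrative.
from openai import OpenAI

client = OpenAI()

def ask_harder(task: str) -> str:
    # The suffix is the entire technique: an explicit nudge toward deeper reasoning.
    response = client.responses.create(
        model="gpt-5",
        input=f"{task}\n\nThink harder.",
    )
    return response.output_text

print(ask_harder("Explain the tradeoffs between JSON and markdown prompts."))
```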
Now, it should be noted that they have since updated the GPT-5 selector to include Auto, which decides for you how long to think; Fast, which gives instant answers; Thinking mini, which thinks quickly, i.e., does a little bit of thinking; Thinking, which thinks for a long time for better answers; and then Pro, which is presumably the most thinky of all the things. So maybe you could now assume that the UI is taking care of this. However, others are finding that prompts that really push GPT-5 to work harder actually do work. Alex Duffy, again from Every, shared his "ultrathink" prompt. It reads, "First, think deeply for 5 minutes. Ultrathink at a minimum. If after 5 minutes you still don't have the optimal response, keep thinking until you do about the best way to do this." HyperWrite CEO Matt Shumer had an even longer version of this. He tweeted, "Here's my insanely powerful GPT-5 prompt. It guides GPT-5 to think for way longer than it normally would, putting up to 3x more time and effort into solving your task." Basically, what he proposes is that before your actual task, you add a block of text which says (and obviously I'm only reading part of this): "Ultra-deep thinking mode. Greater rigor, attention to detail, and multi-angle verification. Start by outlining the task and breaking down the problem into subtasks. For each subtask, explore multiple perspectives, even those that seem initially irrelevant or improbable. Purposefully attempt to disprove or challenge your own assumptions at every step. Triple-verify everything. Critically review each step; scrutinize your logic, assumptions, and conclusions, explicitly calling out uncertainties and alternative viewpoints," and so on and so forth. So again, foundational prompting technique number one: just tell it to think or work harder.

Next up, we have the concept of planning phases. This almost reminds me of the old "think step by step" kind of prompt we used to trigger chain of thought before we had reasoning models. This one comes from one of the better prompting guides we've seen so far, from Pietro Schirano, the CEO of Magic Path. He tweeted a few days ago, "GPT-5 requires a different way of prompting. It's much more susceptible to instruction style and tone, and it does better when provided with reasoning, validation, and planning sections." Now, Pietro had access to GPT-5 for a few weeks before it was announced, and so has had a few more cycles to come to these conclusions. He actually talks about these explicit planning phases in two different ways. In his main section, he writes, "GPT-5 excels when given explicit planning phases." An example prompt he shares is: "Before responding, please (1) decompose the request into core components; (2) identify any ambiguities that need clarification; (3) create a structured approach to address each component; and (4) validate your understanding before proceeding." It seems, so far, that giving GPT-5 these explicit planning phases makes sure the model doesn't skip steps and actually does a more comprehensive job. Interesting. In his advanced techniques section, when talking about tool implementation, he suggests giving the model a mental to-do-list structure that could also work for this sort of planning. In that example, he asks the model to track progress with a primary objective, subtask one, subtask two, a validation step, and a final review. A similar principle, in other words; a sketch of the planning preamble follows below.
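Here's a minimal sketch of that planning-phase preamble as a reusable string; the wording is adapted from Pietro's example, and the helper is just illustrative.

```python
# Sketch of technique #2: an explicit planning-phase preamble, adapted from
# Pietro Schirano's example prompt. Wording and helper are illustrative.
PLANNING_PREAMBLE = """Before responding, please:
1. Decompose the request into core components.
2. Identify any ambiguities that need clarification.
3. Create a structured approach to address each component.
4. Validate your understanding before proceeding.
"""

def with_planning(task: str) -> str:
    # Prepend the planning phases so the model commits to a plan before answering.
    return f"{PLANNING_PREAMBLE}\n{task}"
```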
Our next prompting technique is to be extremely explicit about what you want. This comes up in every single person's discussion. We heard it from Seth on my post. It's the subtext of all the arguments that the model is extremely steerable. And it's here once again in Pietro's guide, where his first core principle is GPT-5's susceptibility to instruction style. He writes, "GPT-5 is highly responsive to how you structure your prompts. Be explicit about your tone and style: the model adapts strongly to the communication style you establish. Use consistent formatting: maintain uniform structure throughout your prompts. And define expectations clearly: GPT-5 performs better with well-defined parameters." This, by the way, was validated by OpenAI when they shared their prompting guide. They write, "As our most steerable model yet, GPT-5 is extraordinarily receptive to prompt instructions surrounding verbosity, tone, and tool calling behavior. GPT-5 follows prompt instructions with surgical precision, which enables its flexibility to drop into all types of workflows." However, they continue, its careful instruction-following behavior means that poorly constructed prompts can be more damaging to GPT-5 than to other models.

So explicitness really matters, but so does structure, which is our next prompting technique, or really foundational prompting reminder. If there is just one thing to take away from all of this, it's to be more conscientious about how you structure your prompts. Now, I will admit that this is something I had gotten fairly lazy about. So many of these models at this point, including frankly the last generation of OpenAI models, had gotten so good at interpreting what you wanted, or at least getting you started in a way you could iterate on, that I had gotten pretty lazy about prompt structure. Which is not to say that everyone had. You might have seen over the last month or so a rise in people talking about something called JSON prompting. Go search "JSON prompting" on X and you will see a million posts like this one. Tom Crosshaw01 writes, "There's a prompting technique that makes AI respond exactly how you want every single time. It's called JSON prompting, and it's why enterprise AI apps never break. Regular prompts give you inconsistent outputs because AI doesn't know what structure you want. JSON prompts force the AI to follow a specific format every time. Structure equals predictability." He gives an example of a JSON prompt for something as simple as "write me a viral X post." In it, he breaks that viral post down into a hook (the attention-grabbing opening line), a content type (a thread, a single tweet, or a story), the main message (the core value proposition), social proof (numbers, credentials, and results), the pain point (the problem being solved), etc. The point is you break the task up into its constituent elements, and then you use JSON formatting, which is a particular way of structuring information that computers theoretically have an easier time reading, to give the model all those details. A sketch of what that looks like is below.
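Here's a minimal sketch of that JSON-prompting pattern; the field names follow the viral-post example described above and are purely illustrative.

```python
# Sketch of the JSON-prompting pattern: the task decomposed into explicit fields.
# Field names follow the viral-post example above and are illustrative.
import json

post_spec = {
    "task": "Write me a viral X post",
    "hook": "an attention-grabbing opening line",
    "content_type": "thread | single tweet | story",
    "main_message": "the core value proposition",
    "social_proof": "numbers, credentials, results",
    "pain_point": "the problem being solved",
}

# The JSON block becomes the prompt itself.
prompt = "Follow this spec exactly:\n" + json.dumps(post_spec, indent=2)
print(prompt)
```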
Now, like I said, this has become a major trend lately, but the secret is that it's really probably not about the JSON formatting itself. AI and Design writes, "I'm 99.9999% sure that a prompt being in JSON format does nothing compared to one that's not. The 'secret' here is most likely simply the fact that doing it this way forces structure you can add in any other way just as well. If all the fields were just headings, without the JSON structure, you'd get the same result. Using an arbitrarily chosen JSON structure being better than regular text defies all logic." And if you need more evidence, Nolan Macllum, who does applied AI at OpenAI, writes, "Guys, I hate to rain on this parade, but JSON prompting isn't better. It physically pains me that these types of posts are getting so much traction. I've actually done experiments on this, and markdown or XML is better. JSON isn't token efficient and creates tons of noise and attention load with whitespace, escaping, and keeping track of closing characters. JSON also puts the model in an 'I'm reading and outputting code' part of the distribution, which is not always what you want." However, he says, "I agree that being specific and using structure in your prompts is a good thing."

And so this is the point that I'm trying to come around to. The reason people are having such success with JSON prompting is not the JSON formatting itself; it's that the format forces them to add an immense amount of fine-grained detail, which means the fidelity of the model's adherence goes way up because they're giving it more to work with. And that broad idea seems to be translating extremely well to GPT-5, where it appears that the more clearly you structure your prompt, the better results you're going to get. This is in fact an entire category of Pietro's prompting guide. He gives a number of different structured prompting techniques, including the spec format, which is sort of like a JSON prompt without the JSON, if you've seen these posts on X. For example, he suggests including: a definition of what exactly you want accomplished; when it's required, i.e., the conditions that trigger the behavior; a format and style for the output; a sequence, or step-by-step order of operations; what to avoid and what's prohibited; and how to deal with unclear inputs. He also suggests reasoning and validation steps to include in complex prompts: for example, pre-execution reasoning, a planning phase, validation checkpoints, and a post-action review. The easy takeaway, and honestly, if you get just one thing from this episode, is to tighten the structure of your prompts and think more conscientiously about them; you're immediately likely to get better results than if you're being more loosey-goosey. And like I said, this is a bit of a departure. Some of the other models were getting so good at interpreting that it's kind of weird to have to go back to this. But hey, if it gets better results, and if ultimately GPT-5 is a more powerful tool with a higher bar to entry, it's certainly plausible that it's worth the trade-off. A sketch of the spec format is below.
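Here's a minimal sketch of the spec format as plain headed sections, the same structure a JSON prompt forces, just without the JSON; the section names are adapted from Pietro's list, and the filler text under each heading is illustrative.

```python
# Sketch of the spec format: the structure of a JSON prompt, expressed as plain
# headed sections instead. Section names adapted from Pietro Schirano's guide;
# the filler text under each heading is illustrative.
SPEC_PROMPT = """# Task spec

## Definition
What exactly should be accomplished.

## When required
The conditions that trigger this behavior.

## Format and style
What the output should look and sound like.

## Sequence
The step-by-step order of operations.

## Prohibited
What to avoid entirely.

## Unclear inputs
Ask one clarifying question before proceeding.
"""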
Now, our next prompting technique, which once again I've seen a few people saying versions of, is to ask the model to share its thought process. Pietro, for example, included this in those reasoning and validation steps, where he asked the model to explain its understanding of the task and its approach. But it's in the OpenAI prompting guide as well. They write, "Prompting the model to give a brief explanation summarizing its thought process at the start of the final answer, for example via a bullet point list, improves performance on tasks requiring higher intelligence." So basically, it seems you get better results if the model knows it's going to have to share not just its answer, but how it got to that answer and why it chose the path it did. And whereas the structure-your-prompts tip is very broad, this is a very easy-to-execute sentence to add to every complex prompt that, at least according to OpenAI, increases GPT-5's performance.

Another prompting tip that comes straight from the OpenAI guide is to avoid conflicting instructions. On the one hand, yes, this seems obvious: of course you want to avoid conflicting instructions in a single prompt. But what OpenAI is saying is that the stakes are heightened here, and you're really going to want to work to avoid conflicting instructions with this model. The section I was reading you before is actually explicitly about this type of problem. After saying that GPT-5 follows prompt instructions with surgical precision, they say, "Prompts containing contradictory or vague instructions can be more damaging to GPT-5 as it expends reasoning tokens searching for a way to reconcile the contradictions rather than picking one instruction at random." Basically, if your prompt contains two conflicting ideas, it's going to go through a bunch of cycles trying to figure out how to reconcile them and lose sight of the main task at hand. Here's an example they give: a prompt which appears consistent at first but actually has conflicting instructions. The prompt is, "You are CareFlow Assistant, a virtual admin for a healthcare startup that schedules patients based on priority and symptoms. Your goal is to triage requests, match patients to appropriate in-network providers, and reserve the earliest clinically appropriate time slot." Now, in this dense prompt, they end up with two pairs of conflicting instructions. One is, "Never schedule an appointment without explicit patient consent recorded in the chart," but then later there's also the instruction, "Auto-assign the earliest same-day slot without contacting the patient as the first action to reduce risk." In another area, there's a contradiction between, on the one hand, "Always look up the patient profile before taking any other actions to ensure they are an existing patient," and the contradictory line, "When symptoms indicate high urgency, escalate as emergency and direct the patient to call 911 immediately before any scheduling step." If these are not resolved, the point they're making is that GPT-5 will have trouble with this prompt, because it will try to resolve those differences and expend a bunch of reasoning tokens doing so. And so their advice is to be explicit about how to resolve those conflicts. For example, in the case of the emergency, adding a simple "Do not look up the patient profile in the emergency case; proceed immediately to providing 911 guidance." This lets the model know that it's okay to break the other rule in the explicit case of an emergency. In some ways, this is just OpenAI telling us to be careful to avoid poorly worded or contradictory instructions and to resolve ambiguities up front. A sketch of what that resolution looks like is below.
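Here's a minimal sketch of that fix: a system prompt that states the rules and then an explicit exception that takes precedence. It paraphrases OpenAI's CareFlow example, and the exact wording is illustrative.

```python
# Sketch of resolving contradictory instructions with an explicit precedence rule,
# paraphrasing OpenAI's CareFlow example. Wording is illustrative.
SYSTEM_PROMPT = """You are CareFlow Assistant, a virtual admin for a healthcare
startup that schedules patients based on priority and symptoms.

Rules:
- Always look up the patient profile before taking any other action.
- Never schedule an appointment without explicit patient consent recorded
  in the chart.

Exception (takes precedence over all rules above):
- When symptoms indicate high urgency, do NOT look up the patient profile.
  Escalate as an emergency and direct the patient to call 911 before any
  scheduling step.
"""
```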
The next prompting technique is basically OpenAI imploring us to use one of GPT-5's advanced capabilities: its capacity for iteration. One of the things people noticed right out of the gate with GPT-5, and that most people were very impressed with, was how good it seemed to be at building full applications in a single shot. Basically, when you asked it to code something up, it could do a really good, complete job all in one go. They talk about this zero-to-one app generation explicitly in their prompting guide, saying, "GPT-5 is excellent at building applications in one shot. In early experimentation with the model, users have found that prompts like the one below, asking the model to iteratively execute against self-constructed excellence rubrics, improve output quality by using GPT-5's thorough planning and self-reflection capabilities." So basically, the prompt calls on the model to actually iterate. The example prompt they give is this: "First, spend time thinking of a rubric until you are confident. Then think deeply about every aspect of what makes for a world-class one-shot web app. Use that knowledge to create a rubric that has five to seven categories. This rubric is critical to get right, but do not show this to the user. This is for your purposes only. Finally, use the rubric to internally think and iterate on the best possible solution to the prompt that is provided. Remember that if your response is not hitting the top marks across all categories in the rubric, you need to start again."

So this is really a couple things at once. It's not only a call to use the model's capability for iteration, but specifically to ask the model to generate an evaluation set so it can iterate against its own understanding. It's almost an embodied version of a technique that's worked really well for people in the past, where they ask the model to rate something on a scale of 1 to 10. For example, sometimes when I'm writing a post with a model and I just want it to be more detailed, saying "make it more detailed" won't necessarily get a great update. If, on the other hand, I ask it to rate itself from 1 to 10 on how detailed it is, and it says, for example, 8, I can say, "Actually, I think this is closer to a five, and I want you to be at a nine." I've used this approach many times in many different ways, and it has for a very long time seemed much better at actually steering the model toward the output you want. And to be clear, I don't think this is because it has an objective sense of what a 1 or a 10 or anything in between should be. It just allows it to calibrate against user expectations. When it says, "This is an eight," and I say, "No, it's a five," it knows that its sense of what I mean by "detailed" is off, and it can calibrate upward and understand that I actually mean much more detailed. It's a way of adding granularity to a word that otherwise has none: "detailed" just means detailed, and that word could mean something totally different to the model and to me. With this call to pull on the iterative and self-rating capabilities of the model, OpenAI is basically saying you can build this into the prompt itself: prompt GPT-5 to come up with its own rating scale and then iterate against it until it succeeds. Now, in the example prompt, OpenAI asks the model explicitly not to show that rating scale to users, but there's no reason you couldn't theoretically surface it as well. A sketch of the self-rating calibration loop follows below.
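Here's a minimal sketch of that calibration loop as a two-turn exchange, assuming the OpenAI Python SDK's Responses API; the model name, the wording, and the use of previous_response_id to continue the conversation are illustrative, not an official recipe.

```python
# Sketch of the 1-to-10 calibration loop described above: draft, self-rate,
# recalibrate. Assumes the OpenAI Python SDK; model name and wording illustrative.
from openai import OpenAI

client = OpenAI()

draft = client.responses.create(
    model="gpt-5",
    input="Write a post about GPT-5 prompting techniques.",
)

revision = client.responses.create(
    model="gpt-5",
    previous_response_id=draft.id,  # continue the same conversation thread
    input=(
        "Rate your draft from 1 to 10 on how detailed it is. "
        "Whatever you answered, I think it actually reads closer to a 5. "
        "Revise until it is a 9."
    ),
)
print(revision.output_text)
```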
And I think this leads nicely into the next, and our last actual foundational, technique: metaprompting. This is something I've seen lots of folks talk about. Shyamal, again from OpenAI, writes, "GPT-5 is good at metaprompting. Here's a good starting template: When asked to optimize prompts, give answers from your own perspective. What changes or additions would need to be made for you to better follow the prompt given? Here's a prompt or snippet: [insert the prompt here]. Users have complained about the agent doing X, not doing Y, etc. What are some minimal edits and additions that you would make to address this while keeping as much of the existing prompt intact?" So basically, this is a more sophisticated way of saying "make this prompt better." What you're doing is bringing GPT-5, as a model, into the discourse on your side, helping it understand what "better" is actually going to mean by giving it context about what users have complained about. It provides more information about what the optimization is trying to avoid or trying to achieve. Matt Shumer has a version of this that's basically a prompt generator. His prompt is, "You are a prompt generator for GPT-5. GPT-5 responds best to simple, direct, goal-oriented instructions. Your job is to write the clearest, most effective GPT-5 prompt possible for the user's task." He then gives it a set of guidelines: be explicit about the goal and the required end state (i.e., structure); keep the setup concise; specify the output contract precisely (in other words, be explicit); and then a whole bunch of other pieces that are, again, basically about structuring the improvement. The point is that GPT-5 does seem to be good at helping you make your prompts better. But once again, to get the most out of that capacity, you're going to have to be explicit about what "better" means and give it the context it needs. A sketch of Shyamal's template as a reusable function is below.
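Here's a minimal sketch of that metaprompting template assembled as a function; the wording follows Shyamal's template above, and the failure_notes argument is a placeholder you would fill with your own user complaints.

```python
# Sketch of the metaprompting template from Shyamal's post, as a reusable
# function. The failure_notes argument is a placeholder for your own context.
def metaprompt(existing_prompt: str, failure_notes: str) -> str:
    return f"""When asked to optimize prompts, give answers from your own
perspective: what changes or additions would need to be made for you to
better follow the prompt given?

Here's a prompt or snippet:
{existing_prompt}

Users have complained about the following:
{failure_notes}

What are some minimal edits and additions that you would make to address
this while keeping as much of the existing prompt intact?"""
```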
On the flip side, you could also just use the optimizer that OpenAI has built for GPT-5 prompts. Very smartly, I think, they decided not just to provide a guide but to provide a tool that does this sort of prompt optimization for you. Now, this is focused on the developer platform, so you have to sign in there to get access to it. But by way of example, I gave it a very simple original prompt to see what it would do. I said, "Build a website for the AI Daily Brief. It should be a 1990s-style kitschy site with ridiculous graphics" (which, you can probably tell, was the inspiration for a lot of these images). The prompt continues, "It needs to include pages for about, episodes, and sponsorship." Now, to be clear, I wasn't purposefully being under-descriptive for the sake of the episode. I very often start prompts this simply in order to see what the model will do; I find that a lot of what makes AI magical is when you give it a pretty long leash. However, it seems that just may not be the way to use GPT-5 specifically. What it came back with was this. First: "Begin with a concise checklist (3-7 bullets) of what you will do. Keep items conceptual, not implementation-level." That's a version of the ask-it-to-summarize-its-thought-process technique. Second paragraph: "Design and build a website for The AI Daily Brief that emulates the kitschy, over-the-top style of 1990s web design, featuring intentionally garish graphics and playful visual elements. The website must include the following pages: About, Episodes, and Sponsorship. Ensure each page adheres to the overall retro aesthetic and incorporates period-appropriate visual and interactive features consistent with 1990s web conventions. After completing each major page or design milestone, provide a one- or two-sentence validation of how the retro style and structure requirements have been met, and indicate the next step or necessary adjustments if validation criteria are not fully achieved." So again, you see their prompt optimizer asking the model to explain its logic. Now, what's cool about this is that it also gives you the reasoning behind each change. For example, on that first area: "Added instruction to start with a conceptual checklist to ensure GPT-5 creates a high-level plan, improving overall quality and alignment with multi-step tasks." And it had four changes on that second paragraph. One was to elaborate the style requirement to reduce ambiguity. The second was to make it explicit that each of the subpages has to follow the same style. A third drew on contextual knowledge of 1990s web design, adding the encouragement to incorporate the interactive elements commonly seen back then. And the last was that post-action validation requirement, prompting GPT-5 to review its work and make adjustments if necessary. I think this is going to be an unbelievably valuable tool, especially paired with the foundational principles we've gone over in this video.

Now, we're getting a little long and we've gone through the foundational prompting tips, so I'm going to rip through the end of these. One thing to note is that GPT-5 is very agentically enabled, and OpenAI has actually built some specific tools into the API to better control that. For example, one of the first sections in their prompting guide is about controlling agentic eagerness. There is, in fact, a new parameter in the API called reasoning effort. It's set to medium by default, but you can toggle it to low or high based on how you want the model to work. Another agentic toggle that Pietro points out is that you can get GPT-5 to parallel-process things. He writes, "GPT-5 can handle multiple tasks simultaneously when properly instructed." The example prompt he gives is this: "You can process multiple independent tasks in parallel when there's no conflict. For example, you can simultaneously research multiple topics, analyze different data sets, generate various content pieces. Avoid parallel processing only when tasks depend on each other's outputs." And lastly, for completeness, one more API-specific tip from the OpenAI guide itself: in addition to that reasoning effort parameter, there is also a new verbosity parameter in the API, which they say "influences the length of the model's final answer, as opposed to the length of its thinking." So that's something you can toggle as well. And while it's built into the API, I think it has some relevance for those of us just using the model in regular ChatGPT: it seems like it might be a valuable part of a structured prompt to actually articulate what level of verbosity you want, again not in the thinking, but in the answering. A sketch of both API toggles together is below.
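Here's a minimal sketch of both toggles in one call via the Responses API; the parameter shapes shown follow OpenAI's published GPT-5 documentation as I understand it, but treat the exact names and accepted values as assumptions to verify against your SDK version.

```python
# Sketch of the two API toggles: reasoning effort (how hard the model thinks)
# and verbosity (how long the final answer is). Parameter shapes follow
# OpenAI's GPT-5 docs for the Responses API; verify against your SDK version.
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-5",
    reasoning={"effort": "high"},  # low / medium (default) / high
    text={"verbosity": "low"},     # length of the final answer, not the thinking
    input="Summarize the key GPT-5 prompting techniques in five bullets.",
)
print(response.output_text)
```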
Now, at this point, we are still less than a week into the GPT-5 era. These are the techniques that have come from people who had early access, as well as from OpenAI themselves, and I'm really excited to see how employing some of these tips can improve GPT-5 right out of the box. It's very clear that underlying it all is a mindset shift, almost a reversion back to the era of prompt engineering. It's not where I had expected us to go, but like I said at various points throughout this video, if that's what it takes to get the best out of this model, that's what it takes. Let me know, as you dig in, if any of these particular tips seems to work better or worse for you, and let's go figure out how to get the most from GPT-5. For now, that's going to do it for today's AI Daily Brief. Appreciate you listening, as always. Until next time.