Everything you know about prompt engineering is pretty much obsolete. I'm just kidding... well, sort of. These new OpenAI models mark the beginning of the end of the old way of prompting. It's not that the language of prompting has changed, but there's a brand new dialect, and I think this new dialect will take over more and more over the next 12 months. As these models get smarter, you'll need more nuance in your prompts to get to the same outcome, which is why in this video I'm going to show you how to use the o1 models to their fullest potential so you can master this new accent of prompting. I'm going to go over the right and wrong way to prompt these models, when to use which type of model, how to convert your existing prompts using a GPT I put together, and finally how to use meta prompting to generate optimized o1 prompts. And if you stick around till the end, I have 43 different prompts that I put together to get you started on your o1 prompting journey.

If you don't know who I am, my name is Mark, and I've been running my own AI automation agency called Prompt Advisors for the past two years. We work with companies in all industries to integrate AI where it makes the most sense in their workflows. Without further ado, let's dive right in.

All right, when it comes to the o1-preview and o1-mini models, there are five core things you need to know. Numero uno: write short and sweet prompts. Before, even I on this channel have built enormous prompts that sometimes span four to six pages. There's definitely still a place for those kinds of prompts in the new world, but not for these models, at least not for now. The next is to avoid chain of thought. The whole point of o1-preview and o1-mini is that they have a built-in chain of thought. If you're not familiar with that, it basically means that the model, quote unquote, "thinks": it iterates over its output to check whether it fits the requirements and the goal that the user has.
Next is to ideally use Markdown, XML, or delimiters. Delimiters are literally just things like triple quotes used to isolate the text you want the model to focus on, using punctuation or some form of notation.

Next is to not dump context, and this is actually a big one, because people think, "Okay, new model, that means I can probably throw even more files at it and ask for something very bespoke." In this case you actually have to do a bit more legwork. Instead of giving it a 20-page document like you would GPT-4, you should go through that document, find the specific paragraph or excerpt you want analyzed, and feed it that one excerpt instead of having it find the needle in the haystack. The reason is that these models think a lot; they tend to overthink when you give them too much information. So, to avoid that overthinking until the models can compensate for it on their own, you want to give them exactly what they should focus on.

And the last thing: no system messages are needed. This mostly matters if you're using OpenAI on the back end, where you'd normally have a system prompt that says something like "you are a world-class writer." In this case you can skip that part and say exactly what you want directly: "I want a well-written essay about XYZ." Something like this takes a bit of getting used to, because you have to unlearn a lot of what we've learned over the past two years. I'm going to go through each of these one by one to show you exactly what you should or shouldn't do.

When it comes to writing short and sweet prompts, one thing you shouldn't do is say: "Hello, I hope you're doing well. I'm working on a research paper and need a detailed analysis of the economic, social, and political factors that contributed to the fall of the Roman Empire. Please include quotations from historical figures, statistical data where applicable, and ensure it's comprehensive and insightful."
This is a bit too detailed for what we're looking for. The right way: "Analyze the economic, social, and political factors that contributed to the fall of the Roman Empire." If anything, we have to qualify less and spoon-feed less than we did before, which is even better for you as the user; it's just about getting into the habit.

The next thing, as we said, is to avoid giving it a chain of thought, which usually means saying "do this thing step by step." For example: "Solve the following problem and explain each step of your reasoning in detail. If a car travels at a constant speed of 60 km per hour, how long will it take to cover a distance of 180 km? Think step by step." In this case we don't need that last part, because natively it does think step by step. You could just say: "How long will it take a car traveling 60 km an hour to cover 180 km?" A little technique to help you out: imagine you're texting the model instead of writing it an email. Up until now we've been writing emails or essays, but if you get into the mindset that you're sending an iMessage or a WhatsApp message to these models, it'll help you be a lot more succinct naturally.

Next, and this is not new, especially if you've watched my content on meta prompting or prompt engineering: Markdown is super helpful. So instead of saying "read and summarize these articles: article one discusses the impact of renewable energy and article two covers advancements in AI," the right way is to say "summarize the following articles" and denote them somehow. In this case we're using three quotation marks on each side to mark exactly where the focus of the prompt should be. As alternatives, you could use hashtags, parentheses, or angle brackets if you're using something like XML (if you don't know what XML is, don't worry about it); the hashtags or something similar would be enough to help the model understand what the goal is.
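The delimiter idea above can be sketched in code. This is a minimal illustration of wrapping each source text in triple quotes before sending it to a model; the function name and delimiter choice are my own, not anything official:

```python
def build_delimited_prompt(instruction: str, excerpts: list[str]) -> str:
    """Wrap each excerpt in triple quotes so the model can see exactly
    where the source text begins and ends."""
    blocks = [f'"""\n{excerpt.strip()}\n"""' for excerpt in excerpts]
    return instruction.strip() + "\n\n" + "\n\n".join(blocks)

prompt = build_delimited_prompt(
    "Summarize the following articles.",
    ["Article 1 discusses the impact of renewable energy.",
     "Article 2 covers advancements in AI."],
)
print(prompt)
```

Swapping the triple quotes for hashtags or XML-style tags is a one-line change; the point is just that the model gets an unambiguous boundary around each piece of source material.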
All right, and the next thing is, again, not to dump context. It's actually helpful that right now in ChatGPT you physically can't upload a file to these models, but in a few months that's completely going to change: you'll be able to upload larger files and access the internet, so we'll have to remember these principles to make sure we're still getting the best out of these models. The wrong way is to say "based on all the following documents, write a report on climate change" and then throw every document at it, telling it exactly what those documents are. That's what we're used to with the GPT-4o models, Claude 3.5 Sonnet, and pretty much every large language model to date. Where we have to go is choosing a very specific excerpt: "based on the following document, write a report on climate change," with one specific document or one set of excerpts from it.

This one is the hardest to unlearn, but the most important thing is that we need to start understanding there are multiple models, and multiple versions of models, all optimized for specific use cases. Before, all we had was GPT-4, so we used GPT-4 for everything. Now we have to be very selective about when we use GPT-4o mini, when we use GPT-4o, and when we use the o1 models.

Now, no system messages needed; this one's self-explanatory. Instead of saying "you are an expert travel guide who provides detailed itineraries," you would just say "plan a 7-day trip to Japan" and assume the model understands it has to take on the persona of an expert travel guide to accomplish the goal.
Now, when should we actually use each of the OpenAI models, based on everything we just mentioned? The best analogy to keep in mind is that you now have a toolbox of models. Instead of one hammer, you have a hammer, a wrench, and a screwdriver. In my opinion, the hammer would be something like an o1 model; the wrench would be something like GPT-4o, which you'll use a lot more often; and the screwdriver is probably the mini models or the 3.5 Turbo models.

When it comes to o1-preview right now, it tries to act as a pseudo-oracle, so you want to ask it the big-picture questions. In my case, I started asking it things like: "Here are the operations of Prompt Advisors, here's what we're doing every month; what do you think we should do strategically, from a pivot perspective, when it comes to servicing projects?" Some form of big-picture project. Imagine o1-preview like an executive in a company: you wouldn't tell them "write me a 10-page essay" or "write me a story" or "read this whole document and summarize it"; you would give them very high-level tasks.

o1-mini is meant to be the protégé of o1-preview: it's fast, it's nimble, it's much better at math and coding, and it hallucinates much less than 3.5 Turbo or 4o mini. But you'd probably want to use it for admin tasks involving coding, mathematical ability, or counting things, not necessarily for writing actual sonnets or responding to things yet. Ideally the smaller models are the ones you'd want in something like a make.com automation, a chat agent, or an OpenAI Assistant, just because they're quick, nimble, and usually smart enough to get the job done. So right now I wouldn't really have a place to use the o1-mini model until OpenAI adds a few of the features they've been talking about. One of those features is controlling the amount of time these models think: right now it can be anywhere from 5 seconds to 50 seconds, but in a few months you'll be able to say, "Hey, you only have 5 seconds to think of the answer." When we get to that point, I would consider o1-mini for more of a production use case.
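The toolbox analogy can be written down as a small routing table. The task categories and choices below are my own paraphrase of this video's advice, not an official API feature; the model names are the real OpenAI model identifiers:

```python
# Illustrative routing table for the "toolbox of models" idea.
MODEL_BY_TASK = {
    "strategy":   "o1-preview",   # big-picture, executive-level questions
    "coding":     "o1-mini",      # math / code where deep reasoning helps
    "everyday":   "gpt-4o",       # the reliable default for most work
    "automation": "gpt-4o-mini",  # fast, cheap calls inside agents/workflows
}

def pick_model(task_type: str) -> str:
    """Fall back to the reliable default when the task type is unknown."""
    return MODEL_BY_TASK.get(task_type, "gpt-4o")
```

The fallback matters: per the advice here, anything you can't confidently classify should go to the predictable GPT-4o family rather than to the slower, pricier o1 models.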
Now, the last ones: GPT-4o, GPT-4o mini, and all the old reliables. I would still use these for 80% of cases, and for 100% of cases if you have anything in production, because it's not just about the time or latency of these models. The o1 models are actually going to be much more expensive for a while, because their reasoning burns a lot of hidden tokens behind the scenes that will quickly bankrupt you if you're operating at a scale of thousands to millions of calls. So for now, I would stick to the 4o models and that whole family of models from before: they're predictable, they won't break in production, and, most importantly, when it comes to actually deploying, latency won't be as big a deal as it will be with these newer models.

All right, with that said, I'm now going to show you how to convert prompts from the old world into the new world, in case you want to use the o1 models for similar tasks. I created a custom GPT that you'll have access to via the link in the description below. It's a Gumroad link with this whole presentation, the GPT, and the Easter egg: the 43 prompts I put together for you. If you want to support the channel, I love you; if not, I still love you.

The idea is: let's create a prompt the old-world way and use my strategy of meta prompting to get there. So let's say: "You are a prompt engineer. You write detailed prompts and output them in Markdown in a code block. Write me a very long, detailed prompt for a 10-day trip to Australia." It should write something pretty complex, or at least pretty detailed; as you can see, it's probably a few pages, or at least one page's worth of content. What we're going to do is take this, copy it, and paste it into my custom GPT.
Behind the scenes (and you can dig into my GPT to look at the instructions), pretty much all I'm saying is: be succinct, don't do any chain of thought, don't say "step by step." Basically the whole presentation I showed you, shoved into the custom GPT's instructions. So if we go into my custom GPT, click "convert my prompt," and place the prompt there, it should take that prompt and create a very succinct version optimized for the o1 models. Comparing the two, the result is at least half, if not a third, of the size, and it's detailed and to the point. That's what you need to do to optimize your chances of success.

Let's do another example: a prompt for writing an SEO blog article about any topic, with the topic configurable as a variable at the bottom of the prompt. We send that over and get a very long prompt; you'll see how detailed it is, going down to the nitty-gritty, from the H2 tags and body tags to summarizing the key points of the article. If we take this back to my custom GPT, say "convert this," and paste it, we again get something literally half the size, if not a third. Our original prompt, if you scroll all the way up, is easily a page and a half to two pages, and this one is much more succinct and very to the point.

If anything, when it comes to creating these prompts, here's another analogy in case the first one wasn't helpful: if you know what a zip file is, where you take a bunch of folders full of files and compress them together, that's pretty much what you have to do with your old-world prompts if you want to use the new-world models.
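The custom GPT itself isn't shown here, but the "zipping" it performs can be approximated with a toy filter. The filler phrases below are my own guess at the kind of boilerplate such a converter strips; a real converter would use a model, not regexes:

```python
import re

# Phrases that old-world prompts lean on but o1 models don't need.
FILLER = [
    r"hello,? i hope you're doing well\.?",
    r"please ensure (it's|it is) comprehensive and insightful\.?",
    r"think step by step\.?",
    r"explain each step of your reasoning( in detail)?\.?",
]

def zip_prompt(prompt: str) -> str:
    """Strip politeness and chain-of-thought boilerplate, then collapse
    the leftover whitespace."""
    out = prompt
    for pattern in FILLER:
        out = re.sub(pattern, "", out, flags=re.IGNORECASE)
    return re.sub(r"\s+", " ", out).strip()

old = ("Hello, I hope you're doing well. How long does a car traveling "
      "at 60 km/h take to cover 180 km? Think step by step.")
print(zip_prompt(old))
```

Even this crude version shows the direction of travel: the converted prompt keeps only the actual request, which is exactly what the short-and-sweet rule from earlier asks for.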
Okay, and for my next trick, like I promised, I'm going to show you how to generate these from scratch instead of just converting old to new. If you go to the Notion page in that Gumroad link in the description below, you'll see I put this together for you along with all 43 other prompts, with the actual instructions in Markdown. I've made them all variable-specific, meaning each one has something configurable; whether it's a prompt for a meta title or a podcast outline generator, you'll see some variables. In the same vein, if we go to the meta prompt, you'll see this one, which I'll copy and paste into a brand new tab. What it says is: as a prompt engineer, create a detailed and succinct prompt that automates [task name]. The reason I put it in brackets is that we're going to denote it at the bottom of the prompt so it's easy for you to configure. It goes on to say: your prompt should provide clear and specific instructions, include relevant context or background information, define the desired format and style of the output, mention any constraints or requirements to consider, and aim for clarity and brevity; then output the prompt in Markdown and add variable placeholders denoted by square brackets. That last part is for you, so you can easily create variables to configure at the bottom; I just gave it a placeholder.

So if we go to the bottom and set the task name to, say, "SEO blogs for a marketing agency, creating them from a topic and a list of keywords," this should get you started. You'll notice the similarity between this output and my custom GPT's output: it's pretty succinct and compact, and it's written in the 1-2-3-4 format, which is actually very effective with the o1 models, at least based on our testing. If you go down, you have the variables it came up with, all fully configurable: topic, audience type, tone, word count, and list of keywords.
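Filling in those square-bracket variables at the bottom of a template is easy to script. A minimal sketch (the function name is mine; the template below is an illustrative stand-in, not one of the 43 prompts):

```python
import re

def fill_placeholders(prompt: str, values: dict[str, str]) -> str:
    """Replace [square bracket] variables like [topic] with real values.
    Unknown placeholders are left intact so you can spot what's missing."""
    def swap(match: re.Match) -> str:
        key = match.group(1)
        return values.get(key, match.group(0))
    return re.sub(r"\[([^\]]+)\]", swap, prompt)

template = ("Write an SEO blog post about [topic] for [audience type], "
            "weaving in [list of keywords].")
filled = fill_placeholders(template, {
    "topic": "prompt engineering",
    "audience type": "marketing agencies",
})
```

Leaving unfilled placeholders visible (here, `[list of keywords]`) is a deliberate choice: it makes a half-configured prompt obvious instead of silently sending brackets to the model.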
All right, and that's pretty much it. I tried to make this as "zipped" a conversation as possible, to give you everything you need to know without taking half an hour. Hopefully all these tricks, dos, and don'ts will help you quickly understand how to use these models and, most importantly, which model to use for which type of task. If you love content like this and find it helpful, please leave a like and subscribe to the channel; I super appreciate it, and it helps a lot. Other than that, I'll see you next time.