Transcript for:
Guide to AI Chatbots and Business Potential

Hi, welcome back to the channel. My name is Bogdan and in this video I'll provide you with the most comprehensive piece of content on how to master AI chatbots. If you watch this video from start to finish, you'll be able to clearly understand how AI chatbots work, how to build them and which kinds of chatbots you can sell to businesses.

There is a great tutorial on AI chatbots published by Liam Ottley. It covers a lot of the basics, so if you haven't checked it out yet, feel free to do so. But it was published almost a year ago, and since then we've had a lot of updates.

Liam and I actually discussed this recently: how rapid the technological advancement is. I mean, in the last 10 months, we've seen more capable GPT models, higher token limits, new tools, vision capabilities.

Once GPT-4o is fully released, we'll have audio input and output. And all of these updates obviously create new use cases for businesses. So long story short, this video is going to be an updated version of a full guide on AI chatbots.

My team and I worked on this video for over a month. We could have made it a paid course, but we decided to offer it for free. Well, not exactly for free.

You pay for it with your attention, which is the currency of today, right? So make sure your investment pays off. Watch this video as many times as you need, but understand the information here and act on it.

Remember, knowledge alone doesn't equal results. Your actions do. And for full transparency, my business interest here is clear. I've been building IT solutions for the last five years.

With my current CTO, we've built a SaaS product, a marketplace, and now we run an AI automation agency. You can check it out at bossar.agency. What we essentially do is build these AI chatbots, voicebots, and automation solutions, and we sell them to various businesses. We have a team of developers.

We are fully equipped to take on more complex solutions, and we are not niche-specific, so the more leads we get, the better. If you watch this video and find it valuable, the YouTube algorithm will push it more, I'll get more attention, and I'll get more leads for my business. I hope that makes sense to you and explains why we spent a ridiculous amount of time and effort to prepare this comprehensive guide.

So, what we'll cover today. First, we'll discuss the AI business potential. If you clicked on this video, you already realize the potential, I know, so I'll just quickly share some important stats and explain what it means for you if you want to leverage this AI opportunity. Then we'll dive into understanding AI chatbots. This will be the theoretical part of the video, but I will only discuss the parts that you'll deal with in practice, okay? You absolutely must understand and be able to talk about these aspects, because in 90% of cases you'll touch on them during sales calls.

Of course, that's if your goal is to build and then sell these chatbots to businesses. If you want to learn more about LLMs, prompting, neural networks, and other fundamentals, check out the suggested videos in the description. I highly recommend Andrej Karpathy's one-hour talk, Intro to Large Language Models. Also, check out the IBM channel; they have a great playlist called Understanding AI Models. And one of my favorite channels is 3Blue1Brown, which also has a fantastic playlist on neural networks.

I also included a chapter on prompt engineering, because it is a skill that you need to learn in order to build efficient AI chatbots. I'm gonna share some tips and hacks on prompting that can save you a lot of money, so make sure to use them. Next, I'll share some interesting use cases.

All of them are from real-life experience. As we get daily leads looking to implement AI solutions in their businesses, we have plenty of stories to share. And then, in the practical part of the video, I'll review the most useful tools for building and deploying AI chatbots.

I'll show you how these tools have evolved over the past year, adding some very interesting features. Finally, we get to the most exciting part, the practical tutorials. You might be tempted to skip everything and just jump right to this, but I urge you to give the theory part a chance because you need to understand it before implementing these solutions. For the tutorials, I'll guide you through comprehensive solutions, step by step, sharing my screen. First, I'll use a no-code chatbot builder to create a basic customer service chatbot.

Then, I'll show you how to build a more advanced chatbot, introducing a product recommendation algorithm. After that, I'll demonstrate how to use custom code and different LLMs, such as Claude and Gemini, explaining which cases each LLM is best suited for. And eventually, I'll show you a comprehensive AI chatbot capable of providing customer support, recognizing images, and recommending relevant products based on the customer's needs. So we've packed a few tutorials into one comprehensive video to make it as valuable for you as possible. I could spend a lot of time discussing AI trends and growth rates. By the way, the AI market is estimated to hit $1 trillion in the next 6 years. Numerous studies show that the adoption rate of AI among businesses will keep increasing, as shown on this slide, but I will only say one word that should matter most. Timing. Obviously, this is a technology revolution.

Every company is diving into it. We just had Apple's WWDC event introducing Apple Intelligence, which means hundreds of millions of people will start using AI, accelerating mass adoption even faster. It's clear that we are quickly heading into a world where you can just take your phone, talk to it, and it responds intelligently, knowing you. And historically, the penetration of new technology typically begins with the business-to-consumer (B2C) market before expanding into the business-to-business (B2B) sector. It happened with social media platforms, it happened with the internet, and it's just beginning to happen with AI.

In the last 20 years, we've seen businesses go digital. Companies made websites, set up social media profiles, and moved from offline to online ads. The next big thing is going to be automation, especially with AI getting into business processes. This change aims to cut costs, save time, and overall boost efficiency. And if we look back at the early days of going digital, companies that focused on making websites and doing social media marketing made a lot of money because the timing was right.

Now we have a similar opportunity with AI. Agencies that focus on AI and automation will be key in helping businesses get on board with these new technologies. As more businesses start using AI, you want to get in as early as possible. This chart shows the technology adoption cycle. And we are still at the early adoption stage, which represents only 13.5% of the market.

When the early majority and then the late majority start looking to implement AI solutions, you want to already be established as an expert, with a track record and case studies in your portfolio. Again, right now, only a small number of businesses use AI. But...

that's changing fast. So the best time to start building and selling these solutions, particularly AI chatbots, is now. Alright, let's talk about chatbots.

Just to make sure we're on the same page, we've got two types here. Old-school rule-based and the new AI-powered ones. The old-school rule-based chatbots are pretty limited and manual. They work by following a set of predefined rules, which means they can't handle anything outside those rules.

On the other hand, AI-powered chatbots use large language models to understand the user's query and provide an answer. That's obvious, right? Basically, ChatGPT is also a chatbot, but for our solutions, we connect the same GPT model to a different interface with a different context.

Now I'm going to simplify a lot of things, but I'll try to structure how it all works so that you understand the main logic. I like to think of it as three main elements: user prompt, knowledge base, and LLM.

So it works like this. The user asks a question or makes a request. Then the chatbot searches its knowledge base for information related to the prompt. Next, the AI processes the prompt and uses the knowledge base to create an answer. And finally, the chatbot provides the answer to the user.
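To make that concrete, here's a minimal sketch of this naive flow in TypeScript, assuming the OpenAI Node SDK and a placeholder knowledge base. It simply stuffs everything into the prompt, which is exactly what runs into the token problem below:

```ts
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Hypothetical knowledge base; in practice this would be your FAQ, docs, product info, etc.
const knowledgeBase = "Bossar Cosmetics ships worldwide within 5-7 business days. Returns are accepted within 30 days...";

async function answer(userQuestion: string): Promise<string> {
  const completion = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [
      // The whole knowledge base is pasted into the system prompt.
      { role: "system", content: `You are a customer support assistant for Bossar Cosmetics.\nKnowledge base:\n${knowledgeBase}` },
      { role: "user", content: userQuestion },
    ],
  });
  return completion.choices[0].message.content ?? "";
}

answer("Do you ship to Canada?").then(console.log);
```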

But the issue is the token limitations. Every LLM has a token limit. You've probably heard of it.

For example, GPT-3.5 had a limit of 4,096 tokens. The latest and most advanced GPT-4o has a 128,000-token limit. So compared to a year ago, you've got much more flexibility when it comes to token usage.

But keep in mind, tokens get used up for three things. The first one is processing the user's query. The longer your prompt, the more tokens will be used. Second, pulling information from your knowledge base.

Tokens are used both for querying the knowledge base and for the information it retrieves. And third, generating a response. So the length and complexity of the response consume tokens too, including interpreting your input and adding relevant info from your knowledge base. And even though GPT-4o is half the price of the previous GPT-4 Turbo, it's still 10 times more expensive than GPT-3.5 Turbo.
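To get a feel for the cost side, here's a rough back-of-the-envelope sketch. The four-characters-per-token rule is only a heuristic and the prices are placeholders, so always check the current pricing page:

```ts
// Rough cost estimate for one chatbot turn. The ~4 characters per token rule
// is only a heuristic, and the per-million-token prices below are placeholders:
// always check the provider's current pricing page.
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

function estimateTurnCostUSD(
  prompt: string,           // 1) the user's query plus your instructions
  retrievedContext: string, // 2) information pulled from the knowledge base
  response: string,         // 3) the generated answer
  inputPricePerMillion = 0.5,  // illustrative cheap-model input price
  outputPricePerMillion = 1.5, // illustrative output price (usually higher)
): number {
  const inputTokens = estimateTokens(prompt) + estimateTokens(retrievedContext);
  const outputTokens = estimateTokens(response);
  return (inputTokens * inputPricePerMillion + outputTokens * outputPricePerMillion) / 1_000_000;
}
```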

So if you can achieve your goals using a cheaper model, it's always better to go for it. To get around the token limits, we use chunking. It means that your knowledge base is split into chunks of text and the AI picks only the relevant chunks to answer the user's prompt.
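A naive chunker is just a few lines; real systems often split on paragraphs or sentences, but the idea is the same:

```ts
// Naive chunker: split the knowledge base into overlapping chunks of roughly
// chunkSize characters. The overlap keeps sentences from being cut off between chunks.
function chunkText(text: string, chunkSize = 1000, overlap = 200): string[] {
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
  }
  return chunks;
}
// Example: a 10,000-character knowledge base becomes about a dozen overlapping chunks.
```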

That's why our still high level but slightly more detailed framework for the chatbot looks like this. The first step, the user enters a prompt. The second step, break the knowledge base into smaller chunks. The next step is to retrieve the most relevant chunks based on the user's prompt. Then create a new prompt that includes the user's question.

and the relevant context from our knowledge base. Then feed the new prompt to the language model and finally return the generated answer to the user. Let's visualize it here.

So we have knowledge base plus user prompt. Then the system creates a context-aware prompt, based on the chunks of our knowledge base that are relevant to the user prompt. And then the LLM generates the final result. But the problem is with step number two.

How do we decide which chunk of text is relevant to the user's query? A common solution is to use embeddings. Embeddings capture the semantic aspects of text.

Let's use this graph to explain. Each day of the week is represented as a point in space, okay? The positioning of these points shows how closely related their meanings are. For instance, the days Monday, Tuesday, Wednesday, Thursday, and Friday are grouped closely together, indicating that they are semantically similar. They are all weekdays, right?

Similarly, Saturday and Sunday are also close to each other, representing the weekend. And they are all close together because they are all days of the week. The embeddings work by converting words, phrases, or other pieces of text into numerical vectors, as you can see on the right-hand side here.

These vectors capture the semantic meaning of the text and when displayed in a multi-dimensional space, like in this slide, similar meanings are placed near each other. So, let's summarize, just to make sure we are on the same page. Words or phrases that have similar meanings or contexts will be positioned close to each other.

The farther apart two points are, the less similar their meanings are. And all the words or phrases are given an embedding vector. The numerical one, right? And by comparing the distance between two embedding vectors, you can measure how similar their meanings are. I hope that's clear.
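In code, that comparison is usually a cosine similarity between the two vectors. A tiny sketch:

```ts
// Cosine similarity between two embedding vectors: close to 1 means very similar
// meaning, close to 0 means unrelated.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Toy 3-dimensional "embeddings" (real ones have hundreds or thousands of dimensions):
console.log(cosineSimilarity([0.9, 0.1, 0.0], [0.8, 0.2, 0.1])); // high: similar meaning
console.log(cosineSimilarity([0.9, 0.1, 0.0], [0.0, 0.1, 0.9])); // low: unrelated
```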

It is important to understand this if you actually want to sell these solutions in real life. Okay, now we have an updated high level framework for our chatbot. When the user enters a prompt, the system starts by chunking, right? Which means it divides large texts, like it breaks your large knowledge base into smaller manageable pieces.

Then it converts the data into numerical vectors, or embeddings, that capture their meanings. And this allows the system to compare and retrieve similar information effectively. Next, the system creates an embedding for the user's prompt and searches the embedding database for the chunks of information that are closest to this prompt embedding. It retrieves the actual text of the most relevant chunks and creates a new prompt that combines the user's question with context from the database. This revised prompt is then sent to the language model, which generates the answer.

So... The key is not that the AI knows everything, but that it smartly retrieves and uses the most relevant information. It's like a librarian fetching the right books for you rather than knowing everything off the top of their head.
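Putting the whole retrieval flow together, a compressed sketch might look like this, assuming the OpenAI Node SDK, a tiny in-memory knowledge base, and illustrative model names:

```ts
import OpenAI from "openai";

const client = new OpenAI();

// Tiny illustrative knowledge base, already split into chunks.
const chunks = [
  "Bossar Cosmetics ships worldwide within 5-7 business days.",
  "Our hydrating serum suits dry and sensitive skin types.",
  "Returns are accepted within 30 days of purchase.",
];

function cosineSimilarity(a: number[], b: number[]): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((sum, x) => sum + x * x, 0));
  return dot / (norm(a) * norm(b));
}

async function answerWithRetrieval(question: string): Promise<string> {
  // 1) Embed the chunks and the user's prompt. (In a real system the chunks are
  //    embedded once, up front, and stored in a vector database.)
  const embed = async (input: string[]) =>
    (await client.embeddings.create({ model: "text-embedding-3-small", input })).data.map((d) => d.embedding);
  const chunkVectors = await embed(chunks);
  const [queryVector] = await embed([question]);

  // 2) Retrieve the chunks whose embeddings are closest to the prompt embedding.
  const topChunks = chunks
    .map((text, i) => ({ text, score: cosineSimilarity(queryVector, chunkVectors[i]) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, 2);

  // 3) Build a context-aware prompt and 4) let the LLM generate the answer.
  const completion = await client.chat.completions.create({
    model: "gpt-4o",
    temperature: 0,
    messages: [
      { role: "system", content: "Answer using only the provided context." },
      { role: "user", content: `Context:\n${topChunks.map((c) => c.text).join("\n")}\n\nQuestion: ${question}` },
    ],
  });
  return completion.choices[0].message.content ?? "";
}

answerWithRetrieval("Do you have anything for dry skin?").then(console.log);
```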

All right, guys, if you've managed to understand this, you can be proud of yourself. There are a lot of people talking about AI on YouTube who don't even get this high-level framework. Moving on, let's discuss prompting.

I promise to only touch on the theory you need to actually build these chatbots in practice, and effective prompting is one of those essential things, because it directly impacts the cost and the efficiency of your chatbots. I used to think it was straightforward. There are so many videos with perfect formulas for prompts on YouTube, and you know, if your goal is to just use ChatGPT to revise your emails, that might be enough.

But if you want to build AI assistants, put them to work, or even sell them to clients, you need to understand that there is a science behind it. If you don't learn it, you'll struggle to get the proper cost-efficiency ratio to be able to sell to anyone. So I want to break it down for you, and while doing that, we will write a good prompt which we will use later in this video when I build a chatbot live. The chatbot will serve as an online beauty store consultant for an imaginary brand, Bossar Cosmetics. It will be able to provide customer support, recognize images, and recommend relevant products to users. What I'm sharing with you now is based on research papers, not just my own experience. Scientists have tested various prompting techniques and measured their impact on efficiency. In the video description, I'll provide links to all these research papers, so you can check them out yourself.

There are two types of prompt engineering, conversational and single-shot. Conversational prompting is suitable for small tasks or personal use. You know, when you ask ChatGPT to fine-tune your email and if it doesn't do well on the first try, you can follow up until you get the output you want.

Single-shot prompting, however, is important for automating systems and creating scalable AI solutions. This is what we aim to do here. This method involves crafting a prompt that provides all the necessary information in one go, which is essential for large-scale and kind of more complex applications.

There are no follow-ups, okay? So the main components of a good prompt are role, task, specifics, context, examples, and notes. Each of these components is supported by a prompting technique that has been researched and backed by scientific papers. These techniques are role prompting, chain-of-thought prompting, emotion prompt, few-shot prompting, and the lost-in-the-middle effect.

Let's quickly cover each of these components and the relevant prompting technique, and then we'll move on to the next chapter. The first component is role, and the relevant prompting technique is role prompting. Role prompting is a technique where the language model is assigned a specific role to play during the interaction. For example: you are a highly qualified and experienced online beauty store consultant. You are the best at selecting the perfect beauty and makeup products to meet each customer's unique needs.

Super simple. I know you are already familiar with this technique, but make sure to use it, because research shows it can increase output accuracy by up to 25%. This is especially true if you not only describe the role, but also provide a complementary description of its abilities.

Complementary description. So in my case, the role is: you are a highly qualified and experienced online beauty store consultant. And the complementary description is: you are the best at selecting the perfect beauty and makeup products to meet each customer's unique needs. The next component is task, and the correlating prompting technique is chain-of-thought prompting.

That's where we tell it what to do. Provide support, generate text, etc. It should be concise and specific. Okay? Chain of thought prompting involves instructing the model to think step by step, essentially giving it a detailed process to follow.

Research shows this technique can boost output accuracy by up to 90% for complex tasks, which is a significant boost, right? Here's a screenshot from one of the research papers I mentioned. It shows an example of this prompting technique where chain of thought reasoning is highlighted, so you can see the difference in handling an arithmetic task.

But let's have a look at our example, okay? So I always start with a verb: provide customer service and advice on services available at Bossar Cosmetics. And then I provide a step-by-step process instruction: follow this step-by-step process to ensure your script is first class. First, greet the customer warmly and answer any questions they might have.

Step two, identify customers' needs. Ask what kind of beauty products they are looking for, skincare, makeup, haircare, or something else. Step three, gather detailed information. Ask them about their skin type, specific concerns, and the look they are aiming for.

Step four, request an image of their face for better assessment, because we are going to recognize images. Step five, suggest products based on the customer's needs and available products in the store. Step six, explain how the recommended products address their specific concerns or solve their pain points.

Step seven, let them know they can reach out for further assistance after their purchase. Next, we have specifics, and the associated prompting technique is emotion prompt. This section is where you can list the most important notes about executing the task outlined above. In our case, we can add specifics such as: check the product database before recommending products to ensure they are in stock.

Or: if you can't find the right products to satisfy the customer's needs, encourage them to search the site themselves. The emotion prompt technique involves adding short phrases or sentences with emotional stimuli to the original prompt. This method has been shown to boost the accuracy of generated output by up to 115% for complex tasks.

So you can get better results by adding more bullet points with phrases like: your role is vital for the whole company; both I and our customers greatly value your assistance and recommendations. I know it sounds strange and you might think it's nonsense, but I encourage you to check out the research papers to see how it was measured and studied. For the next section, context, we're going to combine both the role prompting technique and the emotion prompt. This section's goal is to give context about the environment our LLM is working in and why it's doing its specific task.

We want to explain its role within our business context and kind of hype it up, adding some additional stimuli to show how important the chatbot's role is. So we wanna use phrases like: you are a world-class assistant and your expertise is highly important to the company. Or: you are the most important component of our business processes; the people you advise rely on you as never before. Something like that.

Here's my example. Our company sells high-quality cosmetics like skincare, makeup, hair care, and more. We value our customers and our goal is to solve their pain points.

That part provides context about my business. Then: your role is to provide customer service, understand customer needs, and recommend products that meet those needs. Here, I describe its role within our business. Then: by accurately identifying customers' needs, you directly contribute to their well-being and the growth and success of our company. Therefore, we greatly value your attention to customer service and need identification. And here I add some emotional stimuli to show how important its role is.

So basically, there are two things to remember about context. Explain the chatbot's role in the business context, including details about customers, types of services or products, company values, etc. Then emphasize its importance with emotional appeal.

Okay, so you want to highlight its impact on the business and the wider community, or even the whole society, if that makes sense. The next section is examples.

And the associated technique is few-shot prompting, which essentially means that we provide several examples, while zero-shot prompting means there are no examples given, and one-shot means there is one example provided. So according to studies, the accuracy can be increased by up to 57% if you provide multiple examples.

So on the graph here, you can see that the accuracy can be increased by forty-something percent if you go from zero examples to at least one. And then if you add more, you can get even higher accuracy.

One thing you should remember, though, is that token usage involves processing your prompt. We discussed it already, right? So the more text you include in your prompt, the higher the token usage, and you pay for those tokens. So keep the prompt as brief as possible while implementing all these techniques to achieve the best result.

You could add thousands of examples, but then your prompt will be huge and it will be more expensive to process it. So in practice, we use four to five examples on average, depending on the context, and usually it is enough to achieve the best performance. And also it is important to provide examples that the system struggles with. We usually start by testing it. During testing, we identify the types of queries that are most difficult for the model to answer.

Then we take those queries and provide the ideal outputs as examples in the prompt. Just a little life hack for you. And this is also an opportunity for us to teach it how to structure the output by providing specific examples. For our beauty store, we could provide examples such as these. So it could be a typical question from the customer and then the ideal output, the ideal answer from the chatbot.

So: hi, I have really dry skin, especially during the winter months. Can you recommend some products to help with hydration? And then the ideal answer by the chatbot.
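If you were calling a chat model directly in code, those example pairs could be passed as prior user and assistant turns, something like this (content abbreviated and illustrative):

```ts
// Few-shot examples expressed as prior user/assistant turns. In a chatbot builder
// you'd paste the same pairs into the prompt's examples section instead.
// The content is abbreviated and illustrative, not the full example set.
const fewShotExamples = [
  {
    role: "user" as const,
    content:
      "Hi, I have really dry skin, especially during the winter months. Can you recommend some products to help with hydration?",
  },
  {
    role: "assistant" as const,
    content:
      "Absolutely! For dry winter skin I'd suggest our hydrating serum followed by a rich moisturizer. Would you like me to check what's currently in stock?",
  },
  // ...three or four more pairs, ideally covering the queries the bot struggles with.
];

// They get slotted in between the system prompt and the real user message:
// messages: [{ role: "system", content: systemPrompt }, ...fewShotExamples, { role: "user", content: userQuestion }]
```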

I won't waste your time reading out loud the rest of the examples. You can pause and just check them out. Let's move on to the last section, which is for notes.

This is your final opportunity to remind the model of key points and add any final guidelines to get the output right. Also, it's a good place for some cool hacks. I like to include things like letting the model say, I don't know.

This is a great way to prevent hallucinations. You allow it to say I don't know instead of making things up. So definitely use it. Then giving it room to think. This allows the model to draft better responses because you kind of allow it to take time and think about it.

In other words, you allow it to use this step-by-step thinking process. And once again, be encouraging. Remember, you are the world-class expert in X.

That helps a lot. One thing to keep in mind is the lost in the middle effect. Studies show that language models do best when important information is at the start or end of the prompt.

If your prompt is long, stuff in the middle might get overlooked, as you can see on this graph, which is again a screenshot from a research paper. I didn't make it up. So keep the notes section short and focus on the most important functions and the style you want. For our example, I could come up with notes like this. If you don't have the answer to a query, you can say this.

I don't have an answer. Please send your query to support@bossarcosmetics.com. Then: before answering the query, take a deep breath and think through it step by step.

Okay? Then: you are the world-class expert in the beauty industry. And something like: your tone should be friendly and your main goal is to provide the best customer service.
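If we stitch all six sections together into a single system prompt, it might look something like this (abbreviated sketch; the full version will be in the resource hub):

```ts
// The six sections assembled into one system prompt (abbreviated).
const systemPrompt = `
Role:
You are a highly qualified and experienced online beauty store consultant.
You are the best at selecting the perfect beauty and makeup products to meet each customer's unique needs.

Task:
Provide customer service and advice on services available at Bossar Cosmetics.
Follow this step-by-step process: 1) greet the customer warmly, 2) identify their needs,
3) gather details (skin type, concerns, desired look), 4) request a face image,
5) suggest in-stock products, 6) explain how they address the customer's concerns,
7) let them know they can reach out after their purchase.

Specifics:
- Check the product database before recommending products to ensure they are in stock.
- If you can't find the right products, encourage the customer to search the site themselves.
- Your role is vital for the whole company; both I and our customers greatly value your assistance.

Context:
Our company sells high-quality cosmetics: skincare, makeup, hair care, and more. [...]

Examples:
Customer: "Hi, I have really dry skin..." -> Assistant: "..."

Notes:
- If you don't have the answer, say: "I don't have an answer. Please send your query to support@bossarcosmetics.com."
- Before answering, take a deep breath and think through it step by step.
- You are the world-class expert in the beauty industry. Keep your tone friendly.
`;
```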

Alright, this is our final example prompt broken into sections which we will use later when building chatbots. And finally, a few more tips regarding prompting. Number one, implement all of the techniques we've talked about. If you do that, you can boost your performance by up to 300 percent.

Number two, prompt length and cost. So for high volume tasks, keep your prompt short and to the point. Because obviously each time it runs, you are charged for the input tokens.

So a shorter prompt means lower costs, and we always want to keep the system as cheap as possible as long as it can complete the task. Okay, number three, be smart about the choice of model. Good prompt engineering can make cheaper models work better. Here's a strategy on how we sometimes approach this.

Let's say we use OpenAI's models. Start by testing GPT-3.5 Turbo. Then test GPT-4o, and if you notice a difference, if it does the job better, then you can use the results from GPT-4o as examples within the prompt for GPT-3.5 Turbo. That way you can achieve the same results with GPT-3.5 Turbo as you would with GPT-4o, at least for your specific use cases.
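Here's a rough sketch of that workflow in code, assuming the OpenAI Node SDK; the helper names are illustrative:

```ts
import OpenAI from "openai";

const client = new OpenAI();

// One-off, offline step: generate ideal answers for your trickiest queries
// with the stronger, more expensive model.
async function generateIdealAnswer(systemPrompt: string, query: string): Promise<string> {
  const res = await client.chat.completions.create({
    model: "gpt-4o",
    temperature: 0,
    messages: [
      { role: "system", content: systemPrompt },
      { role: "user", content: query },
    ],
  });
  return res.choices[0].message.content ?? "";
}

// Production path: run the cheaper model, but feed those ideal answers back in
// as few-shot examples inside the prompt.
async function answerCheaply(
  systemPrompt: string,
  examples: { question: string; idealAnswer: string }[],
  query: string,
): Promise<string> {
  const exampleTurns = examples.flatMap((e) => [
    { role: "user" as const, content: e.question },
    { role: "assistant" as const, content: e.idealAnswer },
  ]);
  const res = await client.chat.completions.create({
    model: "gpt-3.5-turbo",
    temperature: 0, // deterministic, repeatable output: what we usually want
    messages: [{ role: "system", content: systemPrompt }, ...exampleTurns, { role: "user", content: query }],
  });
  return res.choices[0].message.content ?? "";
}
```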

And you would save a ton of money, because GPT-3.5 Turbo is 10 times cheaper than GPT-4o. And then we have temperature. Be it a chatbot builder or the OpenAI dev platform, while creating an assistant you'll often find this temperature setting in the model configuration.

Temperature controls randomness. As the temperature approaches zero, the model becomes deterministic and repetitive, and that is what we usually want, because in most cases we aim to achieve consistent and predictable results with our AI system. So we usually set the temperature to zero. One of the exceptions might be if you want to do some kind of creative writing or ideation or something like that; then you can test higher levels for the temperature, but usually by default we set it to zero.

Now that you understand how AI chatbots work and how to craft and test the best prompts, let's quickly review the use cases. There are obvious benefits like improved customer engagement and 24/7 availability, right? They offer global service by communicating in multiple languages, and of course you can save on costs by reducing the need for a large customer service team.

These chatbots can increase revenue through personalized communications and upsells, and at the same time provide data analytics to you. These benefits are already significant, and they come from basic AI chatbots. But on top of that, you can build a lot more automation.

For example, lead qualification and customized sales funnels are among the most popular requests we get at our agency. To convert leads effectively, businesses need to target them with tailored sales funnels that address their specific pain points, right? A general sales funnel for all leads results in lower conversions.

For instance, at our agency we build chatbots for customer support, we build voicebots to handle calls and serve as receptionists, and we also automate social media management. Different leads are interested in different solutions. An AI chatbot can qualify leads, identify their specific needs, and run them through customized sales funnels offering targeted solutions instead of a one-size-fits-all approach. And that is how you can help businesses dramatically increase their conversion rate.

Another great example from real-life projects is product recommendation combined with website scraping for affiliates. An AI chatbot can provide customer support, match clients with the best products, and then scrape websites like Amazon to recommend the products while attaching affiliate links.

You know, on Amazon, you can get an affiliate link and earn a commission fee on each sale. So that is what affiliates started using these chatbots for: matching users with products and fetching products with affiliate links already attached, in real time, basically selling them to leads right away. And there are many smart and creative ways to utilize AI chatbots. Obviously, the more experienced you are and the more advanced you are in software development, the more comprehensive projects you can take on. And if you are interested in exploring more use cases, check out my video titled Top 5 AI Automations to Sell in 2024, where I cover more use cases and projects.

Moving on, let's review the entire toolkit you need to get started in this space. My toolkit here is kind of a default one, the same tools Liam showed in his video 10 months ago. I'll quickly go through them and show you how they've changed and the new features they offer, because now you can achieve much more using the same tools.

Okay, so here's our matrix: prototyping software. These tools are extremely easy to use, right? You can create a basic AI chatbot in a few clicks, but they are still quite limited in terms of customization. Okay, Chatbase.

A year ago in Chatbase, you could add documents or paste a website URL to use them as a knowledge base for an AI chatbot. And you could only deploy to a website as a widget, you know. Now, in addition to documents and website data, you can use Notion.

So all your pages in Notion can be used as a knowledge base. They've also introduced a bunch of integrations, so you can deploy the chatbot to WhatsApp, which is very useful today. And you can also integrate it with Zapier, Slack, or WordPress. Let's now create a customer service AI chatbot and deploy it to your website using Chatbase, just to show you how quickly you can do it. This is going to be our demo webpage.

It's for Bossar Cosmetics. And we want to deploy a chatbot here. We can see there are no chatbot widgets displayed at the moment.

So let's go to Chatbase and click create new chatbot. Right away, you are prompted to add your data sources. You can add files, text, websites, or connect Notion. This is something new, right?

So let's try this out. I'm going to click connect Notion. I understand.

Here, we select which pages to use. I have my Bossar Cosmetics knowledge base prepared, so I'm going to allow access to that one. By the way, all of these sample resources, such as the knowledge base, prompt, site HTML, and the entire presentation, will be available for free in my school community, and you can access it using the link in the video description. Let's click create chatbot.

It takes a few seconds to create. All right, so basically, it is done. You can check which model you are using, such as GPT-4o in this case.

In the activity section, you have chat logs and analytics. If you go to sources, you can add more knowledge base files at any moment. Then the connect tab.

Here, you can embed the chatbot to your site, share it in a separate URL, or integrate it into WhatsApp or any other apps that we discussed a moment ago. In settings, you can go to AI and select a large language model. You can modify the role instructions.

We have some pre-configured default role and constraints here, but you remember that prompt that we kind of created together according to the best prompting techniques. Let's just copy and paste it here. And temperature: unless you want to use it for creative tasks like creative writing or ideation, just keep it at zero. Now you can customize how it looks, change the colors, the icon, etc., and you can embed or share it. Just make it public, and let's quickly test it out on a separate page first. Okay: hi, what are your products?

Hello, welcome to Bossar Cosmetics. We offer a wide range of high quality beauty and skincare products. And then it provides me with a list of products.

Nice, it works well. Now, to actually put it on our website, I'm going to copy this script here, then go to the HTML of the website and paste it

somewhere here. Save it, then go back to the website, refresh it, and we have our chatbot widget displayed in the corner. Hi, do you have any hair products? Yes, we do have a variety of hair products available.

Are you looking for shampoos, conditioners...? Let's say shampoos. Which ones do you have? And it gives me the options available according to my knowledge base in Notion.

And that is how this simple prototyping works. You can set it up in a few minutes. You just need to have your knowledge base and prompt or role instructions prepared. Okay.

Dante AI. Dante AI is also a great tool for prototyping. A year ago, you could use documents and websites as a knowledge base, and you could add a YouTube URL, which would be automatically transcribed and used as a knowledge base. Now they have introduced Google Drive and Google Sheets as sources of knowledge for a chatbot.

And that's already a lot. Additionally, they have pre-configured some popular functionalities. The chatbot can now collect user data with lead generation forms and book meetings using your calendar links.

This is what it looks like. You can click create AI chatbot. Let's call it Bossar Cosmetics Assistant. Click next. And you can either upload files or URLs.

It could be YouTube, Google Drive, Google Sheets, or website. I have my product database in Google Spreadsheet with product names and pricing, so I'm going to copy this link and paste it here. Click next.

Review and confirm. Then create the chatbot. Okay, now it should use my data. For example, we have this product name and the price for it is $14.99.

Let's ask what is the price for and paste that product name. It replies the price is $14.99. So it is successfully using my Google spreadsheets as a knowledge base and that's great.

On top of that, I love the user experience here. The system guides you through the customization steps. You can modify the appearance of the chatbot, you can add your logo, the chatbot URL. The next step is the chatbot's personality. So there are some pre-configured templates for you to choose from, or you can create your own prompt.

So I'm going to use my prompt again, just copy and paste it here. And at the bottom, you'll see chatbot creativity, which refers to the temperature, right? So they just named it differently. But it is the same thing.

It determines how creative or random the responses might be. Next, you can change the welcome message. You can add some suggested prompts.

And if you choose show always, they will be displayed here on the right. Let's say what are your products. And it looks like this.

Next is lead generation. And this is really impressive. Listen, I have a video where I built a lead generation chatbot in Voiceflow, and it was quite complex.

There were a lot of steps involved and in that tutorial I also used make.com to connect the chatbot with Google Spreadsheets using webhooks and I had to set up the triggers and so on. But using Dante AI you can achieve the same by just checking this box and describing when you want the lead generation form to show. You can allow the user to skip the form, you can uncheck the option to show it at the start and instead describe a condition when the form should pop up.

For example, when the user asks to be contacted by agents. So whenever they ask to speak to a human agent, the chatbot would collect their contact details and this way you generate leads, right? You can also add more fields like phone number, name, email, etc. Alright, and another big update is booking meetings. You can just paste your calendar link and describe when you'd like the book meeting button to appear.

For example, when the user asks for a meeting. Or you can also set it to always be visible. And it looks like this at the bottom of the chatbot window. I really like these two options.

They are of course for paid users only, so you'd have to upgrade to use them. But as for prototyping software, now there is much more flexibility. They also offer some integrations here. You can connect your chatbot with WhatsApp, Messenger, Zapier and more and they made it very easy. I mean they provide you with a detailed step-by-step integration guide so if you want to connect it to WhatsApp you don't even need to search you know for YouTube tutorials or something like that.

It is all here. Then we have chatbot builders which are much more flexible. You can implement more advanced features using tools like Voiceflow or Botpress but they are harder to use. They have kind of a modular structure requiring you to build the chatbot's workflow logic step by step.

It's not as user-friendly and pre-configured as Chatbase or Dante. And to create really advanced features with Voiceflow or Botpress, you often need to use some webhooks or write a few lines of code. So I'd say that these are low-code rather than no-code tools when it comes to building more advanced solutions. Okay, comparing the two, Botpress is definitely more complicated and requires more technical background. So I've put together this comparison table.

Botpress is an open source conversational AI platform, which means it's flexible, but might require more hands-on work. Okay, VoiceFlow, on the other hand, is a no-code platform. So if you're not into coding, VoiceFlow might be easier to get started.

Target users. Botpress is geared more towards developers and businesses, while Voiceflow targets designers, product managers, and also businesses. So again, it's more user-friendly, especially if you don't have a developer background.

Customization. Botpress offers extensive customization with its modular architecture. You get a lot of control here. Voiceflow is more limited to the platform's features, right? It's straightforward, but less flexible.

Hosting. Botpress is self-hosted, you have to manage your servers. Voiceflow is hosted by Voiceflow, so they handle the hosting, which is one less thing to worry about. Pricing, they both have free plans available so you can test them out right away.

Overall pros and cons. Botpress gives you full control over data and deployment. It's highly customizable if you have the skills, but it has a steeper learning curve. At the same time, Voiceflow is user-friendly and still provides enough customization to be far more advanced than Chatbase or Dante.

However, compared to Botpress, you'd be more dependent on their pre-configured features and have less control over deployment. So there is always this trade-off, you know, between flexibility and ease of use. I'm going to provide you with a Voiceflow tutorial later in this video, so just stay tuned, okay?

Then we have what I call integration tools. And there are many options. However, make.com and Zapier have proven to work well in our context.

I mean, with these tools, you can build workflow automations that enhance the capabilities of your AI chatbots. For example, if you need to establish communication between your chatbot in Voiceflow and a third-party tool like Google Spreadsheets or a CRM system or many other apps, you can use make.com to create a scenario and just drag and drop these apps to connect them, instead of coding the API integration.

If we compare them, I'd say they are quite similar in terms of what you can achieve. Just as Botpress is technically more advanced compared to Voiceflow, in this case make.com can be more complex for non-technical users. Sometimes building advanced scenarios requires some technical background to kind of set it up properly.

Other than that, they are both drag-and-drop solutions used to build workflow automations, and both offer very generous free plans. So if you want to do something like complex workflows requiring multiple app integrations, such as connecting chatbots to CRM systems or creating customer support tickets from chatbot chats, I'd go with make.com for its flexibility and ability to handle more sophisticated scenarios. But if it's simpler or more straightforward automations you're after, like automatically sharing new blog posts on social media, or automatically creating tasks in project management tools, for example in Jira, triggered by new emails or form submissions, for that I'd go for Zapier for its user-friendly interface and really extensive library of predefined templates. And since these tools are probably the most popular, you can find a tutorial on YouTube for each of the automation tasks I just mentioned, and that way you can learn how to use them in no time. And then we have the custom code option. This is what we do at our agency.

And just to give you more insights, we use Node.js for all our functions. Sure, you can get the same results with other programming languages and we know Python, we know PHP and Rust, but we mainly stick to TypeScript and Node.js because we have the most experience with them. We use AWS S3 to store files.

We deploy our functions to AWS Lambda. Of course, the bot response time is important, so you need to reduce it as much as possible. We use LLRT as the runtime in our Lambda functions to achieve this. Don't overthink it, okay?
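For the technically curious, a minimal handler in that style might look like the sketch below. The event shape, names, and response format are illustrative rather than our actual code, and whether a given SDK runs on LLRT depends on which Node APIs it uses:

```ts
// Minimal Lambda-style handler sketch. The event shape assumes an API Gateway or
// function URL POST with a JSON body; names and response format are illustrative.
import OpenAI from "openai";

const client = new OpenAI();

export const handler = async (event: { body?: string }) => {
  const { question = "" } = JSON.parse(event.body ?? "{}");

  const completion = await client.chat.completions.create({
    model: "gpt-4o",
    temperature: 0,
    messages: [
      { role: "system", content: "You are the Bossar Cosmetics assistant." },
      { role: "user", content: question },
    ],
  });

  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ answer: completion.choices[0].message.content }),
  };
};
```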

I just mentioned this in case you have a technical background and are curious what our devs use. So why did we choose to custom-code our solutions instead of building them in Voiceflow and connecting to other apps using make.com, for example? First of all, this is basically the cheapest approach, because you pay fewer third-party margin fees. Second, it is the most flexible solution, because you are not dependent on what was pre-configured by the Voiceflow development team. You can create any solution according to the client's needs.

I'll give you an example. Let's say you want to build an AI-powered customer support chatbot that can recommend products based on customer needs. You can build a chatbot in VoiceFlow.

Implement a product recommendation algorithm, then connect it to make.com, store the product database in Airtable or Google Spreadsheets, and use webhooks to connect your Voiceflow chatbot with make.com. And by the way, I'll show you how to do exactly that in a moment. Or you can write custom code, build an AI assistant, and connect it via API to Airtable or Google Spreadsheets or whatever it is you want.

You'll achieve the same result, but the second option is more flexible. For example, if you wanted to add image recognition on top of that, using Botpress or Voiceflow wouldn't be possible, because they don't support image recognition. But using custom code, you could just modify the code and add new features, something along the lines of the sketch below. Of course, the entry bar here is higher, because you need a software development skill set. I must say, though, that out of all the leads we've had, about 80% of the projects would not have been possible to complete if we only used no-code or low-code solutions.
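Here's a sketch of what that image recognition addition could look like with a vision-capable model via the OpenAI Node SDK; the prompt and URL are placeholders:

```ts
import OpenAI from "openai";

const client = new OpenAI();

// Sketch: image recognition with a vision-capable model. The prompt and URL are placeholders.
async function analyzeFacePhoto(imageUrl: string): Promise<string> {
  const res = await client.chat.completions.create({
    model: "gpt-4o", // vision-capable model
    messages: [
      {
        role: "user",
        content: [
          { type: "text", text: "Describe the visible skin type and concerns so we can recommend suitable products." },
          { type: "image_url", image_url: { url: imageUrl } },
        ],
      },
    ],
  });
  return res.choices[0].message.content ?? "";
}
```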

Now let's move on to the practical tutorials. We already built a basic customer support chatbot in Chatbase. This time I'm going to make it a bit more advanced.

I'll show you how to build a customer support chatbot which will also be capable of recommending product listings to customers. And for that, I'm going to use Voiceflow as the chatbot builder, Google Spreadsheets to store my product database, and make.com to connect Voiceflow with Google Spreadsheets, so that my chatbot has access to my inventory in real time. All right, let's start with the demo. Here's our demo website.

I've already added our chatbot widget. Let's start a new chat. How can I help you find the perfect product? Okay, can you recommend a serum?

And it provides me with product listings. These are the serums available in my product database. There are buttons to visit the product page and to purchase.

Each product also has a brief description. Next, let's ask, can you also recommend a scrub? Anything under $8.

And yep, it recommends the Tree Hut Shea Sugar Scrub. It's a great option within your budget, and it recommended only one product. If we go to our Google Sheets where I store my products and check the subcategory column, we'll find scrubs there. There are two scrubs, one below $8 and one above.

That's why it recommended only this one which satisfies my request. This is how the whole chatbot looks in Voiceflow. It's not too complicated and we are going to build it together from scratch.

We will use this spreadsheet as our product database. I'll also upload this knowledge base, which is for my fake online store. You remember Bossar Cosmetics. Once you sign up with Voiceflow, click New Agent.

Enter the agent name. Let's say Bossar Cosmetics Assistant. Select Modality, Chat and select English and create the agent.

This is going to be our workspace. Okay, let's delete all these beginner tips here. And first, we want to add a knowledge base, go to knowledge and click Add data source here. I'll add it as a plain text, select all of it and copy paste it here. Now it has some information about Bossar cosmetics.

Let's go back to workflows, click Edit workflows. And here we will start building. I've made it extremely easy for you. Every step is described in a Word file that I will also attach in my resource hub in School Community. It details every step.

And when I say every step, I mean if it says talk text, it means you go to talk and then text. So there should be no confusion at all. Also, you have all the text and code that you can just copy and paste, such as this welcome message.

I'm going to generate a few more variants. And this is how detailed this guide is. So feel free to use it, okay? So go to listen, buttons, then click no match and create a path.

Connect it to a new block. It should be logic, set. Name it set AI question, and under apply to, let's create a new variable. Name it question, and as a value, select last utterance, which is the reply from the user in the chat, okay? Connect it to the next block: AI, set AI. Select AI model as a data source, and paste this prompt: classify whether this user is asking for a product recommendation or not.

If they are asking for a product recommendation, say yes; if not, say no. Apply this to a variable recom and create the variable. Next, add a logic block, choose condition, and set it so that if the variable recom contains yes, it will go to the product recommendation algorithm.

If no, in other words no match, create a path, and it will go to the AI text block. So the next block is AI, set AI. Here, keep AI model as the data source again, and paste this prompt: here is what the customer has requested, then our question variable, and: reply to this question according to the knowledge base.

If you don't have an answer, refer them to support@bossarcosmetics.com. Then create a new variable, let's call it RecomText, and apply it. Let's label this block AI text. And now go to talk, text, drag and drop it here, and select our variable RecomText. Then connect it back to the first block. Okay, so this part is done.

It can already provide customer support and determine if the user is asking for product recommendation or not. Let's mark it with one color. Okay, the idea here is to check if a product question is asked.

If yes, it will make requests to make.com. If not, it will return us back to the first block. Okay, let's run a test. Welcome to Bossar Cosmetics.

When are you open? The response is thank you for reaching out to Bossar Cosmetics. Our store hours are Monday to Friday, blah, blah, blah. So according to our knowledge base, it replies correctly. Now we need to build the product recommendation part.

Let's add a new block: logic, set. Here we need to set our Google Sheets variables. So this part of the assistant will be responsible for running the Airtable request. Sorry, I meant the make.com request, not Airtable.

Here we need to set Google Spreadsheets IDs. Let's add a few sets. The first one is applied to Spreadsheet ID.

Let's create this variable. Okay, then go to your Google Spreadsheets URL. The spreadsheet ID is the part after /d/, up until /edit. Paste this ID as a value here, with quotation marks. The second one is the sheet ID. Create the variable; you can find this ID in your Google Spreadsheets URL again, after gid=. So in our case, it's zero. Next, go to logic, set. This will be our main Google Spreadsheets logic, okay? It will set the number of Google Sheets row responses. Apply it to number of responses, okay, create the variable, and I want to set it to four, to have only up to four product listings in the chatbot's reply, okay? Then go to AI, set AI, drag and drop it here.

This will be our Google Spreadsheets query. Choose AI model as a data source and paste my prompt, which is: convert the following query to Google Charts query language. If there is no valid query, reply with: there is no valid query. The query should only, at max, include the product category. Create a variable spreadsheets query, and then go to the prompt settings and paste my system prompt from the guide.

Obviously, you'd have to modify it according to your needs, according to your product database and your context. But overall, these instructions describe how to convert user queries into queries for Google Sheets. We provide the column names according to the columns in Google Spreadsheets (it should be A, B, C, D), we list the products, product categories, and subcategories, and we provide a few more instructions here. For example: if they ask for something that is not listed, just assign a product that is close to what they want. For example, if they ask for a shower gel, just assign the body wash subcategory, since it is close to accomplishing the body-washing purpose of the product. You are only to answer with the query. For example, input: do you sell any serums? The assistant's output should be only the subcategory. All right, the next block is just a logic block. So go to logic, condition. If query (that would be a new variable, you need to create it) contains no valid query, go to the AI text block, the one we created here, so it will go back to the loop, okay? And if no match, meaning the query is valid, then we want to make a request to make.com. So let's add another block, which would be dev, API. And in this block, we'll configure the API call to make.com; we need to set what we are going to pass to make.com.

So first, we need to switch this to POST. Now I want to add the body. We are going to send the query, which will be the spreadsheet query that was set in the second block here. Then we want to add the spreadsheet ID, the one we set in the first block, and also the sheet ID, which was also set in the first block. Then we have capture response.

Let's set the response and apply it to the formatted response variable; we need to create this new variable. Okay, if it fails, we need to add a new text block and say something like: sorry, something went wrong. Please try again. And I'll generate more variations here. Let's also mark this block as failed

and give it a red color. If it succeeds, we will continue, okay? But for now, let's set up a webhook for make.com.

Go to make.com, sign up, and create a new scenario here. The first component should be a custom webhook. Create a new webhook.

Let's name it Bossar Cosmetics Voiceflow and save. It will be waiting for variables. Select this URL, copy it, and paste it into the POST field in our API block in Voiceflow. Now let's send our variables to make.com.

Click run, run test, and say: recommend me a serum. It should go through the whole logic here. It should be successful, but we haven't built the following blocks yet. However, in make.com, our data structure is successfully determined.
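By the way, under the hood that API block is just sending an HTTP POST to the webhook. In code it would look roughly like this; the URL is a placeholder and the body field names are whatever you configured in the block:

```ts
// Equivalent of the Voiceflow API block: POST the query and sheet identifiers to the
// make.com custom webhook. The URL is a placeholder, and the body field names are
// whatever you configured in the block.
const WEBHOOK_URL = "https://hook.make.com/your-webhook-id";

async function fetchMatchingProducts(spreadsheetQuery: string, spreadsheetId: string, sheetId: string) {
  const res = await fetch(WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      query: spreadsheetQuery,
      spreadsheet_id: spreadsheetId,
      sheet_id: sheetId,
    }),
  });
  // The scenario's webhook response module sends the matching rows back as a JSON string.
  return res.json();
}
```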

The next block should be Google Sheets. So scroll down and select Search Rows Advanced. Okay. Connect your Google account here and leave Enter manually.

We need to select our variables. So Spreadsheet ID, Sheet ID, and Query. And set the maximum number of returned rows to 10. Okay.

By the way, for make.com, I will also attach this guide so you can follow it step by step without any trouble. Now, the data we receive from Google Spreadsheets will be aggregated into JSON, and then the JSON string will be returned to our Voiceflow bot. So add a JSON module: aggregate to JSON.

The source module should be your Google Sheets here. Data structure: I just select product, because I have this data structure pre-configured. In your case, you'll have to configure it.

So you'll have to click add and add items according to your column names, such as product name, then add item again, category. You want to add all these items one by one. Now you see I don't have any values populated from my spreadsheet yet. So let's run the whole thing again to populate some values.

Click run once, run anyway, and run the whole thing again. Okay: recommend me a serum. So it's gonna query the Google Sheets now, and if I switch back to make.com and go to that JSON component, I should have the values for my columns populated: product name, category, subcategory, description, price, and image link. The last block is webhook response. It's gonna send the JSON string back to our Voiceflow bot, okay? For body, select JSON string, and I also want to add a custom header here: Content-Type, application/json. Click OK, and the make.com scenario is now set up. Let's reset the test, run it, and say: recommend me a serum.

It's going through the flow, and in make.com everything is initialized and finalized successfully. Just make sure that if you go to scenarios, this scenario is turned on, okay, to make it work. The next block is dev, then JavaScript.

This block takes the Google Spreadsheet data and converts it into variables. Basically, this part of the system is to run our make.com request. So let's mark it with one color.

For the JavaScript block, you need to enter the JavaScript code here. Just copy and paste it from my guide.

So we get the response from make.com, then product count determines how many products it returned. If more than zero, then we set the variables here. Very repetitive code, to be honest, but for this structure you'd have to do it.
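The exact code is in the guide, but the general shape is something like this, assuming the make.com response lands in a formatted response variable as a JSON array of product objects and that the variable names match the ones created earlier (all of this is illustrative):

```js
// Rough shape of the Voiceflow JavaScript block. It assumes the make.com response
// arrived as a JSON array of product objects in formatted_response, and that the
// variable names match the ones created earlier.
const products = JSON.parse(formatted_response);

product_count = products.length;

if (product_count > 0) {
  product1_name = products[0].product_name;
  product1_price = products[0].price;
  product1_image = products[0].image_link;
}
if (product_count > 1) {
  product2_name = products[1].product_name;
  product2_price = products[1].price;
  product2_image = products[1].image_link;
}
// ...and the same for products 3 and 4. Repetitive, but straightforward.
```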

Obviously, modify it using your own column names. If it fails, go back to the AI text block and start over. If it succeeds, we add a new block, which is logic, condition. And this part is just to set the logic and make it display the right number of product listings according to the number of products we got in the response from make.com.

The first one is zero. Let's create a variable product count. If the product count is zero, create one path. Then another condition if the product count is one, and the same for two, three, and four.

And then for no match, create a path. So if make.com returns two products, then it will go to two product listings. If four, then it will go to four.

And if zero, then we'll send it back to our AI text response and kind of close the loop. Else means that it is not 0, 1, 2, 3, 4. So it is five or more. And in that case, we want to display also four products because that's the maximum amount of product listings. we want to display so i'll connect it to the same product listing blog as if it was four products next we need to create four blocks it will be ai set ai the goal of these blocks is to create the follow-up messages to support the product listings right to to describe the suggested products so text one select a model as a data source then just paste my prompt here here's what the customer has requested our variable for question here is the query that will be ran spreadsheet query here's the product recommended product one name which is our variable for for the first product if the product recommended is not what they asked for please tell them that we don't have what they are looking for but we found this as a close alternative and we want to apply it to our variable recom text to be more specific you can provide a system prompt here and I like to do that usually. Something like, your job is to help the customer understand the products they were recommended.

Your answers need to be short and concise, max one to two sentences. Above this message, the products will be listed, so you don't need to ask if they want to see them, and don't ask any questions. Just to be safe, all right? Duplicate it three times and add more recommended products.

Here's the first product recommended, the second, the third, and the fourth. And then just modify the second and third blocks here accordingly. The last step for the whole system is to display the product listings. Add a new block, talk, then carousel.

Here you want to switch to link and create a few variables. The first one is product image. The second one should be product name. Okay.

And the third one, product price. Then we can add some buttons here. For example, "Visit product", and if you have a website with a product listing, you should go to Actions, select Open URL, and paste your URL. The second button can be "Purchase", and again, you can add your URL here if you have a website. Then go to Talk, Text, drag and drop it here, and select our variable, recom text. That's the one the AI model generates in the previous block, according to our instructions, right? It will complement the product listing with a description.

Okay, then go to Listen, Button, drop it here, and name it something like "Let's start over". This is just to complete the loop. So Actions, Go to Block, search for Start, and it will bring the user back to the first block.

Just duplicate this, adding more product cards according to the product number. Each time you'd have to create new variables like product2-image, product2-name, product2-price, etc. Once it is all done, our chatbot is basically ready to be used. Okay, this is what our whole system looks like. Let's click Run: "recommend the best serum you have".

And let's see how it works. It is going through the steps, and this is what the output looks like.

We have the product listing, two buttons, then a brief description of this serum, and a button to start over. If I click this button, it will begin the flow from the start.

I will attach a template for this chatbot in my resource hub, so you don't have to actually build it from scratch. You can just import it and modify it according to your needs. Many people ask me how to use the template, so let me just quickly show you.

In Voiceflow, click on the icon in the top right corner to import the template. Upload the template and you'll be able to edit the workflow. Okay, for make.com, create a new scenario, click on the three dots, and select Import Blueprint.

Upload my template in JSON format and you'll get access to my scenario. This system is quite basic: it can only search by categories and subcategories and sort by price.

But later in this video, I'll show you a bot that can actually analyze product descriptions and evaluate customer needs and then match the relevant products to customer needs. Now, pay attention. This is important.

Good news: if you really master chatbot builders like Voiceflow and integration tools like make.com, you can already do a lot. You can provide real value, and there are many tutorials on YouTube on how to use these tools, including on my channel, so you'll have enough resources to learn from.

But here's the bad news. Since there are so many tutorials, and these tools are user-friendly, requiring no code and no background in development, a lot of small and medium-sized business owners would rather watch the same tutorials and do it themselves instead of paying you a few thousand dollars for implementation. Every second lead that books a call with us says something like: well, I am technical enough, I can use Voiceflow and make.com; when it comes to code, that's where I'm stuck. So my point is that if you weren't limited to no-code and low-code tools, and you could build some kind of custom solution to fit the customer's need, then you would have a great competitive advantage. Now, the big question is: where can you learn to build custom code for these AI and automation solutions?

Well, you could spend a year learning Python and then even more time figuring out how to apply those skills to these solutions. But a more efficient way would be to take some kind of coding crash course specifically designed to give you the knowledge and skills to build exactly the kind of AI and automation solutions that you can sell as an AI automation agency.

And this is exactly what we are going to offer. We are going to launch the AI Fellowship, a community program consisting of three pillars. The first one is the AI automation coding crash course. We've noticed that if you learn how to build and adapt 10 to 15 solutions for different businesses, you can handle about 80% of projects.

They really repeat a lot. And since we are doing it, we know which solutions are in high demand. That's why we are putting together a curated crash course focusing on AI and automation solutions that are currently being sold. It is the 80-20 rule.

You need just 20% of the effort to achieve 80% of the results. And our team of developers is preparing the modules right now to provide you with that crucial 20% of technical knowledge most relevant to our field. Then you also need to know how to sell these solutions. You can save many months of your life by learning from our trials and errors. So along with the coding course, we'll provide AI automation agency coaching.

This is a complete guide on how to start and scale your business. You'll learn how to generate leads and close them. You'll get an entire toolkit for running an agency, including email templates, contract templates, how to price your solution, and basically everything from A to Z.

And of course, the most valuable part of the coaching pillar is the lessons and tips that come only from real life experience. And what I believe to be the most important part of the whole AIF program is the community. You can find all the information and knowledge about coding and sales on the internet.

We just save you a ton of time and money by providing curated modules. But what is not so easy to access is a community of like-minded people working towards similar goals. And that's the third pillar of our program. We'll have masterminds, a closed Discord community.

We'll introduce a matching system. So if you are a salesperson looking for a developer or vice versa, we'll match you. And the support you'll get from other participants is invaluable.

Instead of working alone, you'll be a part of a group where someone is just a step or two ahead of you and has faced the same challenges. The community is truly the most important part of the program. I'll attach a link to the AI fellowship where you can sign up for the waiting list. The first 50 people on the list will get a 50% discount on the entire program, so sign up now and I'll provide more details soon. I'm going to show you two projects with custom code as examples of what you'll be capable of once you complete the course.

But before that, let's quickly review the top large language models available today. It is important to know at least the top ones, because they each have their pros and cons for different tasks, and you want to use the right model for the right task. Overall, there are three models we usually test to pick the best one for a project.

Google's Gemini, Anthropic's Claude, and OpenAI's GPT. I've put together this table to compare the latest versions: Gemini 1.5 Pro, Claude 3.5 Sonnet, and GPT-4o. I'm going to refer to them simply as Claude, Gemini, and GPT-4o, so I don't have to repeat Claude 3.5 Sonnet, Gemini 1.5 Pro, and so on every time.

So when it comes to the context window, Claude offers 200,000 tokens and Gemini 1 million, making it perfect for handling extensive data sets and long documents. GPT-4o provides 128,000 tokens, which is more than enough for many tasks, but the smallest of the three.

Talking about speed, according to our tests Claude is faster than GPT-4o but slower than Gemini, so GPT-4o is currently the slowest among the three.

Looking at costs, Claude charges $3 per million input tokens and $15 per million output tokens. Gemini charges $3.50 for input tokens and $10.50 for output tokens.

GPT-4o is the most expensive, with input tokens costing $5 per million and output tokens at $15 per million. GPT-3.5 Turbo is 10 times cheaper, by the way, so we usually test whether we can achieve the same results and the same output performance with GPT-3.5 Turbo. Also, Gemini is the cheapest only if the context window is up to 128k tokens.

Once you need to use more, it becomes the most expensive one, with $7 for input and $21 for output tokens. Overall, when choosing between these models, try to consider their unique strengths and match them to your specific needs. If you deal with large data, try Gemini 1.5 Pro. For detailed and precise tasks, especially in legal and, you know, educational fields, try Claude 3.5 Sonnet. And for a flexible model that performs well across different tasks, GPT-4o is a good choice, even though it costs more. But take this with a grain of salt, okay?
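Just to make the pricing concrete before we move on, here's a tiny cost helper using the per-million-token prices I just quoted. Treat it as an illustration rather than a price list, since these numbers change over time:

```javascript
// Rough cost per request in USD, using the per-1M-token prices quoted above.
// Prices change over time, so always check the providers' current pricing pages.
const PRICES = {
  "claude-3.5-sonnet": { input: 3.0, output: 15.0 },
  "gemini-1.5-pro": { input: 3.5, output: 10.5 }, // up to a 128k context
  "gpt-4o": { input: 5.0, output: 15.0 },
};

function estimateCost(model, inputTokens, outputTokens) {
  const p = PRICES[model];
  return (inputTokens / 1_000_000) * p.input + (outputTokens / 1_000_000) * p.output;
}

// Example: a chatbot turn with a 2,000-token prompt and a 500-token answer
console.log(estimateCost("gpt-4o", 2000, 500)); // about $0.0175 with the prices above
```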

Even though GPT-4o might seem like the worst option, right, based on the features and pricing alone, it's not that straightforward. In fact, we use GPT-4o or GPT-3.5 Turbo for 80% of our projects. The only way to find the best model for your task is through testing.

For example, for one of our projects we needed a large context window. Theoretically, Gemini's 1 million token limit should have worked, but in practice it returned errors and we just couldn't use it. This is just something from our experience for you to keep in mind; make sure to test it properly.

Now I'll show you some simple code to build AI chatbots using these different models. And to make it more interesting, these chatbots will also be able to recognize attached images on top of customer service. This is our Gemini bot. It is in Replit. I am going to share this code as a template in my resource hub on Skool, so you can copy it and play around, or feel free to steal it, I don't care.

In this project, we have only three files: GeminiService.js, index.js, and UtilityService.js. This is just a code overview, okay?

So don't worry if you don't fully understand it. The goal is just to show you the logic of how it works. When you join our AI fellowship program, we guarantee that by the end of the course, you'll be able to code chatbots like this one on your own.

So the first file is index.js. It contains our endpoint where requests are made. /chat is our route for requests from basically any chat you use, be it a WhatsApp chat or a web chat, it doesn't matter.

It expects two fields, chat ID and message. Then we have validation to check that all the data is passed through. And once validated, it is passed to the ask function, which is located in GeminiService.js.
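If you want to picture index.js without pausing the video, here is a minimal sketch of that endpoint. I'm assuming an Express server and an ask function exported from GeminiService.js; the names mirror the project, but the details are illustrative:

```javascript
// index.js (sketch): the /chat endpoint that any chat UI can call
const express = require("express");
const { ask } = require("./GeminiService");

const app = express();
app.use(express.json());

app.post("/chat", async (req, res) => {
  const { chatId, message } = req.body;

  // Validate that both required fields were passed through
  if (!chatId || !message) {
    return res.status(400).json({ error: "chatId and message are required" });
  }

  try {
    // Hand the request over to the ask function in GeminiService.js
    const answer = await ask(chatId, message);
    res.json({ answer });
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

app.listen(3000, () => console.log("Chatbot listening on /chat"));
```

That is also why, later on the demo page, you only need to give it the dev URL plus /chat: the page basically just POSTs a chat ID and a message to that route.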

Then in GeminiService.js, the bot creates a local database, stores the message history (or loads it if it already exists), creates a new message, and sends it to the chat. It then receives the result, stores it in the database, and returns it. The UtilityService.js file has only one function, which uploads and formats an image into base64, the format expected by Gemini's library.
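And here is roughly what the ask function and the image helper do, sketched with the @google/generative-ai SDK and an in-memory history instead of the project's local database, just to show the flow; the function signature, secret name, and variable names are my assumptions:

```javascript
// GeminiService.js (sketch): in-memory history instead of a real local database
const { GoogleGenerativeAI } = require("@google/generative-ai");
const fs = require("fs");

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);
const model = genAI.getGenerativeModel({ model: "gemini-1.5-pro" });

const histories = {}; // chatId -> message history (the real project persists this)

async function ask(chatId, message, imagePath) {
  // Load the stored history for this chat, or start a new one
  const chat = model.startChat({ history: histories[chatId] || [] });

  // Build the message parts; attach the image if one was uploaded
  const parts = [message];
  if (imagePath) {
    // The UtilityService.js equivalent: read the file and base64-encode it,
    // which is the format Gemini's library expects for inline images
    parts.push({
      inlineData: {
        data: fs.readFileSync(imagePath).toString("base64"),
        mimeType: "image/jpeg",
      },
    });
  }

  // Send the message, store the updated history, and return the reply text
  const result = await chat.sendMessage(parts);
  histories[chatId] = await chat.getHistory();
  return result.response.text();
}

module.exports = { ask };
```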

That's pretty much it. If you need to understand this better, here's a tip: pause the video, take a screenshot of the code, upload it to ChatGPT, and ask for any clarifications you need. Obviously, to make it work, you need to add your API key from Google AI Studio to actually connect it to Gemini.

So go to secrets and set your API key. For that, just go to Google AI Studio. The URL is aistudio.google.com.

You need to log in, and here you can create your API key: click "Create API key in new project". And here it is, just copy it.

And then paste it in Replit as the value. Once done, click Run. I'm gonna copy the dev URL from here and go to my demo page.

We just quickly created this page for the purposes of this video, just, you know, to demonstrate how the chatbot works. So I go to Settings and enter the URL from Replit. I need to add /chat because that's our endpoint. Okay, save it and let's try it out. Hi, and it replies back.

Now let's upload an image, for example, this one, and ask it to list the objects in this photo. Okay, give it a second. And it provides me with the objects it can see in the photo. We can also ask additional questions, such as, I like the keyboard, tell me about it.

And it responds the way a general model would usually respond: it's hard to say anything specific about the keyboard without more information; however, based on the image, it can make some general observations. And it provides me with specifics such as type, layout, color, and material. Alright, now let's look at the Claude bot. Don't forget to set up secrets.

Since we are using Claude, this time you need to go to Anthropic, so console.anthropic.com/dashboard. You can sign up with your Gmail account, and you'll get some free credits to get started. Just click on Get API Keys, then click Create Key, give it a name, say Test Key, and click Create Key.

Copy your key from here and paste it in Replit as the value in Secrets. Okay, this Claude bot has a similar structure. index.js is identical to what we had with Gemini.

It calls the ask function, which this time is located in ClaudeService.js instead of GeminiService.js. UtilityService.js is also the same: it uploads an image.
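For comparison, here is roughly what the ClaudeService.js flow looks like, sketched with Anthropic's Node SDK (@anthropic-ai/sdk) and an in-memory history; the model name, secret name, and message shapes are my assumptions for the sketch:

```javascript
// ClaudeService.js (sketch): uses the @anthropic-ai/sdk package
const Anthropic = require("@anthropic-ai/sdk");
const fs = require("fs");

const client = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });
const histories = {}; // chatId -> array of { role, content } messages

async function ask(chatId, message, imagePath) {
  const history = histories[chatId] || [];

  // Build the new user message; attach the image as base64 if provided
  const content = [{ type: "text", text: message }];
  if (imagePath) {
    content.push({
      type: "image",
      source: {
        type: "base64",
        media_type: "image/jpeg",
        data: fs.readFileSync(imagePath).toString("base64"),
      },
    });
  }
  history.push({ role: "user", content });

  // Send the whole conversation to Anthropic and read the reply text
  const response = await client.messages.create({
    model: "claude-3-5-sonnet-20240620",
    max_tokens: 1024,
    messages: history,
  });
  const answer = response.content[0].text;

  // Store the assistant reply in the history and return it to the user
  history.push({ role: "assistant", content: answer });
  histories[chatId] = history;
  return answer;
}

module.exports = { ask };
```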

So in ClaudeService.js we create a database, create a new message, send it to Anthropic, receive the response, store it in the database, and return it to the user. Okay, let's test it out. I'll click Run and copy the dev URL from here. Switch to my demo page, go to Settings, and paste the URL.

Again, don't forget to add slash chat. Okay, click save. And now I should be able to try it out. Hi, we get the response from the bot.

Now I will attach the same image as before and ask it to list the objects in this photo. Give it a few seconds to process, and it provides us with the output. The bot recognizes the objects well. It even recognized that the notebook in the image had notes written on the cover, which aren't readable to a human without zooming in, right? All right, let's ask it: tell me more about this keyboard.

And once processed, it replies well. It even identified that the keyboard is most likely an Apple Magic keyboard and provided many details. So it works great. I think it's even better than Gemini in this case. That's why you should always test and compare the outputs.

Those were Claude and Gemini. For AI assistants using GPT-4o, I have two separate videos: one for a general assistant and another with image recognition capabilities.

So be sure to check them out as well. And for the final chatbot today, I'll show you a slightly more advanced solution. This chatbot, again, is going to be our online beauty store consultant for Bossar cosmetics. This time, instead of building it in Voiceflow and make.com, I'll use custom code, and in addition to product recommendations, it will also be able to recognize images. So I'll basically combine everything we did today.

Customer service plus product recommendations plus vision capabilities. Let's start with the demo right away to give you an idea of how it works, and then I'll break down the code. This Replit template will also be available in the resource hub so you can use it. Do not forget to set up your secrets. This time we use GPT-4o, so you need to head to platform.openai.com, go to API keys, and create your secret key. Once done, click Run, wait for the dev URL, copy it, and paste it on the test page without the slash at the end. And let's begin the conversation with the bot.

Hi, I need skin care. It takes a few seconds to process the request, and it asks me to provide a bit more information about my skin type and any specific concerns I have, so it can make more tailored recommendations. Because remember, in our prompt, we instructed the bot to find out about customer needs first and then recommend tailored products.

And this is what it does. Okay, let's say oily skin and upload a photo of a face, send it and wait for the response. So it says, based on your needs and our available products, here are some recommendations for managing oily skin and addressing acne. First of all, it correctly understood that the customer needs products for managing oily skin and addressing acne. I mentioned the oily skin, but I never mentioned acne.

It recognized that purely from the image. Okay. Secondly, it searched our database for the available products and then provided the user with the relevant options. In the end, it also provided a summary and a short description for each of the suggested products.

So let's ask it something else. I'll say, excellent. Also recommend a blush for me.

It is now searching our database for blushes and provides us with the relevant products along with the descriptions. Also, I need a scrub. And it suggested two scrubs.

There is a button to view product, but since we don't have a website, it doesn't redirect anywhere. But obviously, if we had a website, it would take us to the product purchase page. That's it for the demo.

Now let's go over our code. We tried to keep it very simple, without extra functions. For this project, the product database is stored in an Excel file to avoid complicating the code with, you know, online requests and additional functions in the assistant. So this is just to simplify the process. If you want to test it with your own product database, you need to delete this products XLS file and upload your own, naming it the same way.

And when we receive a request, we search this Excel file for the necessary information. Okay. index.js, just like in the previous projects, is almost the same.

The difference is that when we receive a file, we create an OpenAI file. For that, we have the upload image function, which is located in OpenAIService.js. It downloads the image using a URL, sends a request to OpenAI with purpose "vision", which is very important to make it work in the assistant, and then deletes the downloaded image and returns the file ID we received from OpenAI.
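As a rough sketch with the OpenAI Node SDK, that upload step and the thread message look something like this. The purpose "vision" part is the important bit; the file paths and function names are just illustrative:

```javascript
// OpenAIService.js (excerpt, sketch): make a user image visible to the Assistant
const OpenAI = require("openai");
const fs = require("fs");

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function uploadImage(localPath) {
  // Upload the downloaded image with purpose "vision" so it can be
  // referenced inside an Assistants thread
  const file = await openai.files.create({
    file: fs.createReadStream(localPath),
    purpose: "vision",
  });

  // Delete the local copy and return the file ID
  fs.unlinkSync(localPath);
  return file.id;
}

// The returned file ID is then attached to the user's message in the thread
async function addMessageWithImage(threadId, text, fileId) {
  return openai.beta.threads.messages.create(threadId, {
    role: "user",
    content: [
      { type: "text", text },
      { type: "image_file", image_file: { file_id: fileId } },
    ],
  });
}

module.exports = { uploadImage, addMessageWithImage };
```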

We then use this file in the message that is added to a thread, okay? Here we have the same thing as in previous projects. It starts with creating an assistant using the create assistant function.

This function is also located in OpenAIService.js. Also, the instructions here are quite extensive; you remember our prompt was quite big, right?

So they are stored in a separate file instructions.txt. Here it reads this file in the second line of code. And also here we have the names of our product database.

and knowledge base. Really, just check out my video on how to integrate a GPT-4o assistant into a website. I covered this whole structure there, and I covered what assistant.json is for, so you'll have more understanding if you watch that video. If we don't have the assistant.json file in the project, it will create and save one after it gets a positive response from OpenAI.

Initially, we create an OpenAI file for the database and store it in a vector database. I hope you remember what a vector database is; I discussed it in the chapter about understanding AI chatbots, and if you need to, you can go and re-watch it. Then we create a products file, and we will use its ID in the code interpreter here. There are different tools available at OpenAI, for example file search or code interpreter. So for the knowledge base, it uses file search. You remember how it works, right?

The vector database, the chunking, all of that. So it retrieves the relevant chunks of text from the knowledge base according to the user's input. And then for the product database, it uses the code interpreter: it will search for the relevant products in our Excel file. And here, once the assistant is created, it is saved in the assistant.json file.
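Here is, roughly, what that createAssistant step looks like with the Assistants API: the knowledge base goes into a vector store for file search, and the products spreadsheet goes to the code interpreter. The file names and the vector store setup are assumptions I made to keep the sketch short:

```javascript
// Sketch of createAssistant: file_search for the knowledge base,
// code_interpreter for the products spreadsheet
const OpenAI = require("openai");
const fs = require("fs");

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function createAssistant() {
  const instructions = fs.readFileSync("instructions.txt", "utf8");

  // Knowledge base: upload the file and put it in a vector store
  // (this is where the chunking and retrieval happen)
  const kbFile = await openai.files.create({
    file: fs.createReadStream("knowledge_base.txt"),
    purpose: "assistants",
  });
  const vectorStore = await openai.beta.vectorStores.create({
    name: "Bossar knowledge base",
    file_ids: [kbFile.id],
  });

  // Product database: a plain file the code interpreter can query
  const productsFile = await openai.files.create({
    file: fs.createReadStream("products.xlsx"),
    purpose: "assistants",
  });

  const assistant = await openai.beta.assistants.create({
    model: "gpt-4o",
    instructions,
    tools: [{ type: "file_search" }, { type: "code_interpreter" }],
    tool_resources: {
      file_search: { vector_store_ids: [vectorStore.id] },
      code_interpreter: { file_ids: [productsFile.id] },
    },
  });

  // Save it locally so we don't recreate the assistant on every run
  fs.writeFileSync("assistant.json", JSON.stringify(assistant, null, 2));
  return assistant;
}
```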

Instructions.txt contains our prompt, the one we created in this video. We just added some specific instructions at the end to ensure the assistant returns the product data it found in our Excel file in JSON format, which makes it easier to display the product listings (you'll see an example of the idea below). And that's it! This is a simplified process, just for the purposes of this video.
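To give you an idea of that JSON format, the tail of instructions.txt asks for something along these lines; the exact shape here is my illustration, not the literal prompt from the project:

```json
{
  "reply": "For oily skin I'd suggest starting with these two products.",
  "products": [
    {
      "name": "Oil-Control Cleansing Gel",
      "price": 19.99,
      "image_link": "https://example.com/images/oil-control-gel.jpg"
    },
    {
      "name": "Niacinamide Balancing Serum",
      "price": 24.99,
      "image_link": "https://example.com/images/niacinamide-serum.jpg"
    }
  ]
}
```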

If it were a real project, we'd make it more complex and definitely more reliable. But I just wanted to give you an idea of how more advanced and custom-coded AI chatbots look. Guys, if you manage to understand how this code works, you're probably in the top 1% of viewers.

It would take numerous videos to actually teach you how to write code like this. This isn't something you can learn from a, you know, quick 15-minute tutorial. That's why we invite you to our AI Fellowship program. You'll get a complete course on this, and by the end of it, you'll be very comfortable building projects like this one. Other than that, if you watched and understood this video till the end, you can be proud of yourself.

You are now ahead of the majority of people who are interested in AI. Now you have a full understanding of what it takes to build these solutions, which isn't as easy as it might initially seem, right? My goal is not to sell you this idea. But if you are serious and ready to commit, you can make a lot of money.

If you start now, you can still be early enough to leverage this opportunity. And once you've learned how to build AI chatbots and other AI solutions and workflow automations, you need to learn how to sell them. The best way, if not the only way, to do this is through practice. To get more practice, you need more sales calls; you need more leads, right? Cold calls don't work here, not for me and not for other AI agency owners. We actually discussed this recently, and everyone agrees that it doesn't work just yet. You need to generate warm leads. I have a video on how to start an AI automation agency where I break down, step by step, how to start and generate the first leads. The next video on the channel will be the second part of that video, with more insights and specific metrics I've gathered over a few months, so make sure to subscribe so you don't miss it.

Long story short, at this stage, the best way to get warm leads is through generating content, putting out value, helping people, and showing your expertise at the same time. That is what I'm doing today, and I hope you'll consider it as well. Thank you very much for watching, and I'll see you soon.

Bye!