Hi all, how are you doing? This is Balagopal Reddy and welcome to my YouTube channel. Today I'm going to show you a project I've been working on, the AI Blog Content Assistant. This is a tool that helps us generate blogs based on a given topic and title. I built this application using the LangChain framework, and I built the user interface with the help of the Streamlit package.
To power this application's AI in the background, I'm inferring the Meta Llama 3 8B Instruct model from Hugging Face through API calls. That's the overview, so without further ado let me show you a demo of the application, and then we can start the coding part. This is the application I've built, and right now it's running on localhost; let me put it in full screen. The application has two features: one is title generation and the other is blog generation. Suppose you want to write a blog but aren't sure about the title; you can take help from the AI. Assume I'm writing an article on data science.
If I submit the topic, you can see in the top right corner that the application is running. And there we go, we got the suggested output for the data science blog.
These are the titles given by the AI, for example 'The Magic of Data Science: Unlocking the Secrets of the Universe' and 'The Art of Data Science: A Guide for the Creatively Inclined'. In the second part, the blog generation part, we can either pick a title from the output above or type a custom title here, and we can tune the number of words for our blog. If we want specific keywords to be included in the blog, we can add them as well. So that's the entire application.
Let me show you. In this scenario I'm taking the title 'Data Science for the Curious: A Journey Through the Unknown', and I'll set the word count to 200 words.
Since it's a data science blog, let me add a few keywords like machine learning, advancements in data science, and of course statistics. I think that would be enough, so let's see: when I click on Submit Info, the model tries to generate a 200-word blog that includes these keywords. I'll show it to you.
Right now the LLM is running in the background and it will generate the text for this blog. See here, that's how it's done. So now let's start building. First, let's create a new folder; I'm going to name it 'AI Blog Content Assistant'.
I'll open this folder in Visual Studio Code. Here we go. Here we need to create all the files required for developing our project.
Before doing that, we need to install some packages, which we can do from the terminal: we run a few commands to install LangChain, the Hugging Face integration for LangChain, and finally the Streamlit package. On my personal laptop LangChain is already installed, which is why I'm getting 'requirement already satisfied'. Once you've installed all these packages we can start with the coding part; if you ever get an error that a package is missing, follow the same approach, open the terminal and run pip install with that package name so the requirement gets installed locally. In this folder we need to create a few files. The first one is for storing the API keys, and I'm going to name that file secret_api_keys.
What I'm going to do is paste the secret API key from Hugging Face into that file and then import it into the main file; for us the main file is app.py. Let's also open a Python notebook for practice. Here we go. The first thing is importing the packages. I already have the code ready, so let me copy the comments first and then we can go through the code step by step. The first step is setting up the environment variable. To do that we need to import the os package, and we also need the API key, which is our access token. I'll show it to you in a bit.
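As a rough sketch of that setup (the file name comes from above, but the variable name and the environment variable follow the usual LangChain convention and are my assumptions, not necessarily the exact code in the video):

```python
# secret_api_keys.py -- assumed layout; keep this file out of version control
hugging_face_api_key = "hf_...your_access_token_here..."
```

```python
# In the notebook / app.py: import the key and set the environment variable
# (packages installed earlier with roughly: pip install langchain langchain-huggingface streamlit)
import os
from secret_api_keys import hugging_face_api_key

os.environ["HUGGINGFACEHUB_API_TOKEN"] = hugging_face_api_key
```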
Let me import the os package. Then go to the Hugging Face website; under your profile icon, go to Access Tokens. I already have two access tokens, but I'm going to create a new one here and give it all the permissions; I'll name this token 'my ai blog app token'. Once that's done, we just copy the token and store it somewhere secure, because you cannot view the access token again later. I'm defining an API key variable in the secrets file and storing the token there, so I can import it from that file, and then to set up the environment variable we pass our Hugging Face API key variable. That's how you set the environment variable. While that is running, the second part is importing the HuggingFaceEndpoint class, and here I want to mention two things. The first is why I prefer the API rather than downloading the model locally onto the computer: inferring the model through an API has its own advantages, because we don't need to worry about the computation power and storage capacity of our personal laptop and we can concentrate on building the application logic rather than setting up the infrastructure. Also, to import the HuggingFaceEndpoint class, we first need to know which package to import, so I'm going to open the documentation.
This is the documentation for the langchain-huggingface package. The package is a collaboration between the two companies, and its main aim is to integrate Hugging Face models seamlessly within the LangChain environment. There are several ways to import an LLM into our development environment.
We can either do it through a pipeline or through an endpoint; since we're opting for the serverless API, we go with the HuggingFaceEndpoint method. First I'm copying and pasting this line: from langchain_huggingface we import HuggingFaceEndpoint. Then we pass the repo_id, which is the model we're going to infer from Hugging Face, and the token, which is the variable we stored in secret_api_keys. There is one more important parameter I want to mention, the temperature parameter. Temperature controls the randomness and creativity of the model: a lower temperature makes the model more precise and deterministic, while a higher temperature makes the output more creative and diverse, but it may not be accurate and can contain factual errors. That's why most people set the temperature to around 0.6 or 0.7, so let's set it to something like that. Now, which model are we going to infer from Hugging Face? Our use cases are title generation and blog generation, which is essentially text generation.
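A minimal sketch of that endpoint setup (the repo_id anticipates the model chosen just below; max_new_tokens and the quick test call at the end are my additions):

```python
# Minimal sketch of the serverless endpoint setup; parameter values are illustrative
from langchain_huggingface import HuggingFaceEndpoint
from secret_api_keys import hugging_face_api_key  # assumed variable name from the secrets file

llm = HuggingFaceEndpoint(
    repo_id="meta-llama/Meta-Llama-3-8B-Instruct",  # the model chosen below
    huggingfacehub_api_token=hugging_face_api_key,
    temperature=0.6,      # lower -> more precise, higher -> more creative
    max_new_tokens=512,   # assumed cap on generated length
)

# A quick direct call, roughly like the one tried later in the notebook
print(llm.invoke("I want to write a blog about machine learning. "
                 "Suggest some titles and don't give any explanation."))
```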
Since Hugging Face is open source, I've preferred it because it's free of cost, and there are an amazing number of models there, almost seven million. Let me go to the Hugging Face website; when I refresh it, you can see almost 7.8 million models out there. Since our job is text generation, I'll click on Text Generation, which filters out all the models available for this task. These are the trending models: Grok, Mistral AI, and so on.
Here I've chosen the Meta Llama 3 8B Instruct model. Since it has the highest number of downloads, I preferred it, but it's your choice; you could go with Mistral or Grok or other Meta Llama models as well. Before importing this model, I'll show you how we bring it into our coding environment: we can just copy the path of the model directly, and that's it, it's that simple.

Before executing this cell, I want to tell you a few things about the Meta Llama 3 8B Instruct LLM. This model was trained on over 15 trillion tokens of data from publicly available sources and has a knowledge cutoff of March 2023. As the name says, the model contains 8 billion parameters, and it works with grouped-query attention (GQA). To put it simply, it first groups similar items and then searches within that smaller group, so it's very fast compared to traditional models that follow the multi-head attention technique, where the model searches everything in detail and therefore takes a lot of time. One more thing I found very interesting on this page is the carbon footprint of this model during training: pre-training utilized 7.7 million GPU hours of computation, and the carbon emitted was almost 2,300 tons of CO2 equivalent. Of course it's irrelevant to our project, but I found it an interesting insight, and they say that 100 percent of the emissions were offset by Meta's sustainability program.

Now, going back to VS Code, let me execute this. Okay, see here: the token has been saved to the cache and it shows that login is successful, which means we can now access anything on Hugging Face. First of all, let me invoke this LLM directly. To do that we call invoke: 'Hey LLM, give me some titles; let's say I want to write a machine learning blog', and see what output it gives. It's returning the output directly; let me print it so you can see it properly. See here: 'I need a title for a blog post about machine learning. This post is about how machine learning can be used to improve the efficiency of...' and it has suggested a few titles. Let's ask in a different way: 'Hey, I want to write a blog related to machine learning. Suggest me some titles and don't give any explanation', since it gave so much explanation last time. Let's try it this way and see what output we get. See over here: 'Here are some potential blog title ideas related to machine learning', and it has given some titles.

So based on the prompt you give the model, it will generate the output. But the prompts we've given so far are vague: we're not telling the model who our audience is, whether they are students or professionals, and we didn't tell the LLM what type of titles we need, whether they should be short and catchy or more formal.
So we can construct a precise prompt template instead. And one more thing: every time we want some titles, we can't keep writing out this entire explanation as the prompt.
Instead, it's always better to create a template. For example, with this code, if instead of machine learning I want global tourism, I'm not changing the entire context; I'm just changing the topic in the prompt. So what we can do is define a generalized prompt template, and then, based on the requirement, change only the topic in that prompt; that makes our job much easier. So we need to define the prompt template, and there are many kinds of prompt templates available in the LangChain environment. First of all, let me copy that code. There are many ways to feed data to a model, and I'll explain them in the coming videos.
First of all, see: from langchain's prompts module we import PromptTemplate. Then this prompt template for title suggestion has input variables; here that is the topic variable, and the rest is just the instructions we're giving to the LLM. Now look at this prompt and see how organized and precise it is; if the model reads this, there is very little scope for confusion. We say we're planning a blog post on {topic}, we describe the title style and the target audience, we mention that we need around 10 creative and attention-grabbing titles for our blog post, and we also mention that we don't need any explanation, just the titles. After executing this, we need to chain everything together: we already have the prompt template for title generation and we've defined our LLM, so we pipe the template into the LLM and store the result in the title_chain variable, making a simple chain. This is a way of writing LangChain components in a declarative style; the syntax is called the LangChain Expression Language.
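A compact sketch of that template and chain; the wording of the template here is paraphrased from what's on screen rather than copied exactly:

```python
from langchain.prompts import PromptTemplate

# Paraphrased title-suggestion template; only {topic} is an input variable
title_template = PromptTemplate(
    input_variables=["topic"],
    template=(
        "I'm planning a blog post on the topic: {topic}.\n"
        "The titles should be informative and aimed at a general audience.\n"
        "Suggest 10 creative, attention-grabbing titles for this blog post, "
        "and give only the titles, with no explanation."
    ),
)

# LangChain Expression Language: pipe the prompt template into the LLM
title_chain = title_template | llm
```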
After executing that, we can just call the chain. Let's say the topic name is basketball; when we invoke the chain, whatever topic name we give will go inside the prompt. I want to show you how the prompt really looks when we send the topic; this is the information we're feeding. See here, we've defined basketball as the topic.
We already have the template, and the topic variable is simply replaced by 'basketball'. That information goes into the title chain, and in the title chain we've applied the prompt template on top of the LLM, so the input flows from the prompt template to the LLM and the output comes out at the end.
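In other words, the call is just this (a small sketch following the steps above):

```python
# 'basketball' fills the {topic} slot in the template; the filled prompt goes to the LLM
titles = title_chain.invoke({"topic": "basketball"})
print(titles)
```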
So this is it; that's our first task, title generation. Now let's move on to the next task.
That task is blog generation. Here we need three variables, as you've already seen: the title of the blog, the number of words, and the formatted keywords, which are basically a string of keywords separated by commas. Now we need to define another prompt template; I'm simply copying it from here. See, in this prompt template there are three input variables: the title of the blog, the number of words, and the formatted keywords. Check out the template: we want to write a high-quality, informative, plagiarism-free blog post on this particular title, we mention the target audience and the length of the blog, and we instruct the model to include these keywords inside the blog.
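A sketch of that template, chained the same way as before; the variable names here are my assumptions and the wording is paraphrased:

```python
# Paraphrased blog-generation template with three input variables
blog_template = PromptTemplate(
    input_variables=["title_of_the_blog", "num_of_words", "formatted_keywords"],
    template=(
        "Write a high-quality, informative and plagiarism-free blog post titled "
        "'{title_of_the_blog}' for a general audience.\n"
        "The post should be around {num_of_words} words long and should naturally "
        "include the following keywords: {formatted_keywords}."
    ),
)

# Same LCEL pattern as the title chain
blog_chain = blog_template | llm
```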
Again, it's the same process: we apply this prompt template on top of our LLM to get the blog chain; there's nothing to be confused about. Then we simply invoke the chain, passing a dictionary of parameters inside the invoke method; in the previous scenario we passed the topic name inside the curly brackets in just the same way. So that's how it's done.

Now let's start with the app.py file. Previously we executed all the steps separately; now we need to combine them, so let's slowly copy all the code into app.py: setting up the Hugging Face API and inferring the model, defining the prompt template for title suggestion, defining the other prompt template for blog content generation, and then creating both chains, the title chain and the blog chain. That was the background Python code we need, so that's done.

Now let's start with the UI design of this application. As I said earlier, the entire user interface relies on the Streamlit package, so first we need to import it. The significance of Streamlit is that it lets us transform Python scripts into interactive apps, and the reason I'm using it is that it's open source and Pythonic, which means it's easy to code. A notable advantage of Streamlit is that it supports live reloading: after you run the application once, whenever you change some of the code, the app will automatically detect the changes and offer to rerun itself instantly. That's a very good advantage compared to other frameworks.

Now, instead of a header, let's define the title: the title of our application is 'AI Blog Content Assistant'. I'll show you how everything works and how we define each component. Let me create a new terminal; to run the application, the command is streamlit run followed by the application file name, in this case app.py. When I execute this, we automatically get a pop-up of the web application, and if you observe, as I mentioned, the application is running on the local URL, localhost:8502. Check that out, 8502. Let me close this for now. So I've defined a title, 'AI Blog Content Assistant', and a subheader; I'm just copying the long subheader text from another page to save us some time. When I save this, check it out: as I mentioned, Streamlit has this live-editing feature, so it clearly shows that the source file has changed and asks whether we want to rerun the app. Once you click Rerun, we get the title and the subheader component here.

Now let's design feature one. Let's write a subheader, 'Title Generation', and rather than laying things out in a plain way, let's define a component that acts as an expander; to do that we just write st.expander.
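A rough sketch of the Streamlit skeleton up to this point (the subheader text and expander label are my placeholders):

```python
# app.py -- Streamlit skeleton; run it with: streamlit run app.py
import streamlit as st

st.title("AI Blog Content Assistant")
st.subheader("Generate catchy titles and complete blog posts with AI")  # placeholder text

# Feature 1: title generation lives inside an expander
st.subheader("Title Generation")
title_expander = st.expander("Input the topic")
```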
We save this expander in a variable. I've just saved the code, so let's rerun the app. We've defined the expander, but we haven't put anything inside it yet. To add components, we need to write them within a with block on that expander; that line means we're going to define some components within the topic expander. The first thing is an input field, and since the topic is just a string, a text_input method is all we need; let's give it a key and store it in a variable. Once the user gives this input, we need a Submit button so the information can be passed to the model and the model can give us some output. So let's define a button with st.button, label it Submit, and store the whole component in a button variable. When this Submit button is clicked, we need to display the LLM's output.

Before that, I'll show you how the expander works. Check this out: it's showing that we need to input the topic, and if we type something like 'my laptop' and click the Submit button, nothing happens yet. But our task is that when you click Submit, the model's output should be displayed at the bottom. So inside the if condition, when the button is pressed, we display the result: we've already defined the title chain, and we just need to invoke it with the topic. The st.write method is the one we use to print output on the web page. Sometimes it takes longer than usual... and here we go, these are a few titles suggested by our LLM. Of course, we could do a bit of formatting to display the output in a better way.

What's our next job? We need to define feature two, blog generation. Just like in the previous scenario, let's define a subheader and then an expander for the blog details, storing it in the blog_expander variable. Within this blog expander, just like before, we take the title of the blog as input, and we define a slider so you can select how many words you want for the generated blog; we set a minimum and maximum value for the slider and a step size as well. The minimum is 50 and each step is 50, and I'm storing this in a number-of-words variable. So far, I think it's all clear.
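Putting those pieces together, the two expanders might look roughly like this (widget labels, keys, and the slider maximum are my assumptions, not the exact values from the video):

```python
# Feature 1: topic -> suggested titles
with title_expander:
    topic = st.text_input("Topic", key="topic_input")
    if st.button("Submit topic"):
        # st.write prints the chain's output on the web page
        st.write(title_chain.invoke({"topic": topic}))

# Feature 2: blog details
st.subheader("Blog Generation")
blog_expander = st.expander("Blog details")
with blog_expander:
    title_of_the_blog = st.text_input("Enter the title of the blog", key="title_input")
    # max_value is assumed; the video only shows min 50 and step 50
    num_of_words = st.slider("Number of words", min_value=50, max_value=1000, step=50)
```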
Now I'm just going to copy and paste a block of code that handles the keyword input and formatting. It's not strictly necessary, and I actually took this particular block from Gemini, but I'll explain it to you as well; give me a minute. Here we go. This whole piece is about defining the keyword input: we could simply type the keywords as a single string separated by commas, but instead I built a small component inside the blog expander. I'll show you how it works; let me rerun the app. Here we go, we've already got the layout.
The title generation is working completely fine, and in the blog generation part we display the title input here and then the number of words. Check this out: we got the title input, and we can slide the number of words for the blog. Now, say we're writing about how to choose a laptop; what would the keywords be? Something like processing power or processing speed. When you click Add, a small rectangular box with a light grayish background automatically appears here with the text inside it. To build this functionality I used that block of code; otherwise we could just create a plain text_area or text_input component and type all the keywords there.

Let me explain the code. We take each keyword as input, and we also define an 'Add keyword' button. When we click that button, whatever keyword the user entered gets appended to the keywords list in the session state, and once it's appended, the keyword input field is cleared. This process continues until we've given all the keywords we need. That's what this block of code does.

Almost everything is done now; we're just left with defining the final button. When we click this button, it has to display the title of the blog and then the generated content of the blog.
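Here is a sketch of that keyword component and the final button, continuing inside the blog expander from the earlier sketch. It's a reconstruction along the lines described above, not the exact Gemini-generated block from the video; the callback used to clear the field, the button labels, and the comma-join are my assumptions:

```python
# Keep the collected keywords in Streamlit's session state across reruns
if "keywords" not in st.session_state:
    st.session_state.keywords = []

def add_keyword():
    # Callback: append the typed keyword and clear the input field for the next one
    kw = st.session_state.get("keyword_input", "").strip()
    if kw:
        st.session_state.keywords.append(kw)
    st.session_state.keyword_input = ""

with blog_expander:
    st.text_input("Enter a keyword", key="keyword_input")
    st.button("Add keyword", on_click=add_keyword)
    st.write(st.session_state.keywords)  # the keywords collected so far

    # Final button: format the keywords into one string and generate the blog
    if st.button("Submit info"):
        formatted_keywords = ", ".join(st.session_state.keywords)
        st.subheader(title_of_the_blog)
        st.write(blog_chain.invoke({
            "title_of_the_blog": title_of_the_blog,
            "num_of_words": num_of_words,
            "formatted_keywords": formatted_keywords,
        }))
```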
What we need to do is add a subheader: the title variable holds our input title, and then we do st.write of our blog chain, so once we invoke it we write the entire output. Inside the invoke we pass the title of the blog, the number of words, and the formatted keywords. Wait, we haven't defined all of these yet: formatted keywords, number of words, title of the blog... there, perfect; and there was an extra one.

Whatever keywords we've given are stored in the keywords list in the session state. Let me show you that first, so I'm commenting out this line and I'll uncomment it after formatting the session-state keywords. Let's say I want to write a blog on badminton: when I click Submit Topic, these are the titles given by the model, for example one about mastering the art of badminton. I give this title, choose 350 as the number of words, and give some keywords related to badminton. When I click Submit Info, check this out: this is how our keywords are stored. But we can't pass this whole structure directly into the formatted template; we just need to do a little formatting and convert these keywords into a single string. If you pass that to the model, it gives better text generation.

So let's uncomment that line, save the code, and rerun. Let's keep the topic as badminton, and here we go, these are the titles suggested by our model. I'm taking one of these titles and giving it in the blog generation part, setting the number of words to 400, and giving keywords like PV Sindhu, smash, record, and wooden floor; I think these keywords are sufficient. When we click Submit Info, the title of the blog is displayed here and the model runs in the background... and we've got our blog about badminton. If you look closely at the keywords we gave, they're included: PV Sindhu's name appears in the article, 'smash' shows up in the techniques ('smashing your way to success'), and we got 'record' as well. That's how it's done.

So guys, that's the entire AI Blog Content Assistant application. I'm going to post the code in the description of this video.
I'll create a GitHub repository and share the link. My point is, guys, that we were able to build this entire application in less than 100 lines of code. That's how powerful today's AI is. It's a very simple yet elegant app, and all you need to do is put in some extra effort to learn AI.
If you've come this far, I hope you liked this video, and if you did, please subscribe to my channel, share it with your people, and like the video. That's all from my side. This is Balagopal Reddy, signing off. Thank you, everyone!