Transcript for:
Exploring LangChain for Data Applications

I really believe this is one of the best opportunities for data scientists and AI engineers right now. In this video I will give you an introduction to the LangChain library for Python. LangChain is a framework for developing applications using large language models. I will walk you through all the modules, then the quick start guide, and then finally we will create our first app: a personal assistant that can answer questions about any YouTube video you provide it with.

So what is it? It's a framework for developing applications powered by large language models, like OpenAI's GPT models. Normally you just interact with these models through an API: you ask a question, for example the way you do with ChatGPT, but in the background it's just an API that you send a message to and get a message back from. LangChain is a framework around that which also allows your application to become data-aware and agentic. Data-aware means you can connect a language model to other data sources, for example your own data or company data, and build on that. Agentic means allowing a language model to interact with its environment: it's not just asking a question and getting information back, it's also acting on that information by using various tools, which we will get into in a bit.

Now, why would you want to learn a framework like LangChain? I really want to go deep into this, because I believe there will be so many opportunities if you understand this correctly. I work as a freelance data scientist, and up until this point my job has basically been to help companies, usually larger companies with a lot of historical data, use that data to train machine learning models. But what we're seeing right now with these pre-trained large language models, like OpenAI's models, is that smaller companies without huge amounts of
historical data can also start to leverage the power of AI. For me as a freelancer this provides a lot of opportunities to work with smaller businesses on smaller projects while still making a great impact for that company. And with really large machine learning projects using lots of historical data, you never quite know what you're going to get; a lot of data science projects fail. I believe using these large language models, for small or even large businesses, will be a much more predictable way of doing AI projects: the model is already there, you know what it can do, and now you just have to provide it with extra information and tune it to a specific use case. So if you learn LangChain, and more specifically the underlying principles of this framework, I think you will set yourself up for many great opportunities to come.

So let's get into this. I will start by explaining all the different modules, the building blocks of the LangChain library that you can use to start building your intelligent apps. After briefly explaining each core component, I will give you an example from the quick start guide within VS Code, so you also get an idea of what it looks like in code and how you can use it. There is also a GitHub page available for this project; the link is in the description, so you can clone it and follow along. It explains how to set everything up: which API keys you need, how to set up the environment, and how to put the keys in your .env file. If you're not familiar with that, I suggest checking out the GitHub page so you can follow along. Coming back to the Getting Started page: these are all the modules, in increasing order of complexity.
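If you want to follow along, a minimal setup might look like this. The package names and the .env convention here are assumptions based on the description above; check the linked GitHub page for the exact instructions.

```shell
# Install the libraries used in this video (exact versions are in the repo)
pip install langchain openai python-dotenv

# Store your OpenAI API key in a .env file at the project root
echo 'OPENAI_API_KEY=sk-...' > .env
```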
We will start simple, with the models module. These are the model integrations that LangChain supports, and there is a whole list you can check out: you have the models from OpenAI, you have for example Hugging Face, and a whole lot of other models that are supported right now.

So let's see what that looks like in VS Code. I have an example where I load the OpenAI model from the langchain library, and I can define my model by passing a specific parameter for the model name; for this example we are going to use the text-davinci-003 model. If you go to the OpenAI API reference, you can see there are a lot of models you can choose from. I am currently on the waitlist for GPT-4, so once you get access to that it will become even better. Coming back to the example: we load our model and then provide it with a prompt, say, "Write a poem about Python and AI." So we first initialize the model, then store our prompt, and then call the model with the prompt. It sends a request to the OpenAI API with our prompt and gives us back the result: here you can see the poem we get back from the API. This is nothing new up to this point, it's something I can also do in ChatGPT, but it is the starting point we need in order to interact with these language models.
Next on the list is prompts, which you can use to manage your prompts, optimize them, and also serialize them. Coming back to our project, we have the PromptTemplate class, which we can also import from the langchain library. We provide it with input variables and a template, and what we can do with this is take user information, or some other kind of variable input, and put it into a prompt, similar to how you would use f-strings in Python. It's just a nice class, and there is more you can do with it, but this is a basic example. So we provide the template, "What is a good name for a company that makes {product}?", where the input variable is product, and then we call prompt.format and pass in the product. After running this, you can see we now have the prompt: "What is a good name for a company that makes smart apps using large language models?"

The third component is memory. We can provide our intelligent app with both long-term and short-term memory, to make it smarter, basically, so it does not forget the previous interactions it has had with the user. Coming back to our example, we can import the ConversationChain from langchain. How this works is: we initialize a model again, start a conversation, and then call the predict method on the conversation and provide it with an input. Right now the conversation is empty, but we can send this over and predict it, and what you will then see is that we have a conversation. There is a general prompt already engineered into the library: "The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context," et cetera. Then the human says "Hi there," the AI provides a response, and that is the output. We can print it: "Hi there! It's nice to meet you. What can I do for you?" Next, we take that output and make another prediction by saying "I'm doing well! Just having a conversation with an AI." So let's run this.
Here you can see the history: first our "Hi there," then the response from the AI, and then our input again, what we've just entered. We can print the new output, and the AI responds with "It's great to be having a conversation with you! What would you like to talk about?"

Alright, then next up is indexes. Language models are often more powerful when combined with your own text data, and this module covers best practices for doing exactly that. So this is where it gets really exciting: this was the example I was talking about previously, where you can build smart applications for companies using their own existing data. We will get more into this in the example at the end of this video, but for now just know that there are document loaders, text splitters, vector stores, and also retrievers.

For now, let's continue to chains, which is another core component of the LangChain library. Chains go beyond a single large language model call; they are sequences of calls. LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications. This is really where we start to bring things together. The models, the prompts, and the memory are nothing that new, right? We've seen them, we can use them in ChatGPT. But when we start to chain things together is when it gets really exciting. So what does this look like in code? Let's look at the LLMChain class, which we can import from langchain.chains. Given our previous model setup and the prompt we've defined, coming up with a company name, we can now actually run this chain. The prompt template was just for engineering your prompt, and the model was just for making a connection to the API; now we chain them together. So let's quickly store this, then set up the chain, so we
provide the model and the prompt as input parameters, and now we can run it. Let's try another example: "What is a good name for a company that makes AI chatbots for dental offices?" AI Dentek. Love it. So now you start to get a sense of how you can turn this into an application: you predefine the prompts over here, combine them with user input, and run that using a chain. You could already turn this into a web app, for example companynamegenerator.ai; this is basically it. Now, the trick here, the key, is being really smart about what you put into these templates. This is a very straightforward example, "What is a good name for a company," but you can get really specific here and provide lots of information tailored to a particular use case, to get the result you are looking for given the user's input. I will give you a good example of this once we start to develop the YouTube AI assistant later in this video.

And then the last component: agents. Agents involve a large language model making decisions about which actions to take, taking that action, seeing an observation, and repeating that until it's done. This is really where you get to build your own AutoGPT or BabyAGI kind of applications. These agents can use tools: there are tools, agent toolkits, and executors. All kinds of tools are supported straight out of the box: we have Google Search, we have Wikipedia, we have SerpAPI, all kinds of stuff that we can use. If we use these agents, they will use the large language model, for example the GPT model, to assess which tool to use, then use the tool to get the information, and then provide it back to the large language model.
There is even a pandas DataFrame agent that you can use, mostly optimized for question answering. In the quick example you can ask it how many rows there are, and it knows it can interact with the pandas DataFrame, call the len function to get the length of the DataFrame, and provide that as the result.

So let's look at another example from the quick start guide. To start using agents, I import initialize_agent, AgentType, and load_tools, so I can also provide the agent with some tools. Coming back over here, I can first list all the tools; these are also in the documentation I was just showing you, and there you can see the specific name you have to use in order to give the agent a particular tool. Now let's say we want to create an agent, give it access to Wikipedia, and have it be able to do some math. We set up the tools, then initialize the agent with those tools, the model defined earlier, and the agent type zero-shot-react-description, which basically means that based on the prompt we give the agent, it will pick the best tool to solve the problem on its own. And this is where it gets really interesting, because now you can provide an agent with a set of tools and it will figure out on its own which tool to use to come up with the best answer.

So let's try this query: "In what year was Python released, and who is the original creator? Multiply the year by 3." And we only give it access to Wikipedia and math. Let's run it and see what it does. A new agent executor chain starts; it understands that it needs the action Wikipedia, and you can see the action input, "Python (programming language)", so it understands that's the query to search for on Wikipedia. It gets the history of Python, a summary, and then: "I now have enough information to answer the question." The final answer: Python was created in 1991 by Guido van Rossum, and the year multiplied by 3 is 5973.
This is really awesome, right? It goes beyond what ChatGPT or the GPT models on their own are capable of, because we can get live information from the internet, and the results are stored as well: here you can see the answer as a plain text string that we now have available. And if we start to combine everything together, multiple chains, multiple prompts, agents to get information, and memory to store everything, we can actually build some really cool stuff.

Alright, so I'm now going to show you how you can create an assistant that can answer questions about a specific YouTube video. Coming back to the indexes: I've previously explained how these large language models become really powerful when you combine them with your own data, and your own data in this use case will be a YouTube transcript that we are going to download automatically. But you can basically replace that transcript with any other information and this approach will still work. The langchain library has document loaders, text splitters, and vector stores, and we are going to use all of these. Let's first talk about document loaders. These are basically little helper tools that make it easy to load certain documents, and here you can see everything that is supported right now: we have things like Discord, Figma, Git, Notion, Obsidian, PDFs, PowerPoints, but also YouTube. So let's first see how we can get the YouTube transcript, given a video URL, using this document loader.

Coming back to VS Code, we have the following video URL over here, which is an episode of the Lex Fridman podcast where he talks to Sam Altman, the CEO of OpenAI, and I thought this would be a nice video to use as an example. We are going to read the transcript of this two-and-a-half-hour-long podcast using the document loader. For that we first import the YoutubeLoader from the document loaders and pass in the video URL. So let's run this and
see what we get. Now we have the loader, and to get the transcript we call the loader's load method. We run it, it runs for a while, and then we can have a look at the transcript, which is basically one very long string with all the text in it. So now we have the full transcript; it is inside a list, and we can access the actual string, the actual transcript, through page_content.

But now we have the following problem. If I run this to see how long the transcript is, the total number of characters, we can see it's over one hundred thousand. And this was a real aha moment for me: you cannot just send this full transcript of over 100,000 characters to the API of these large language models; it's simply too large. So if you want the model to be able to answer questions about this transcript, we have to find a workaround: provide it with the information it needs to answer the questions without sending the transcript in full. And that is where the text splitters come in. If you go to the API documentation, you can see the max tokens per model for the OpenAI models, and for the latest model I can use right now, gpt-3.5-turbo, that's 4,096 tokens you can send to the API. If you're already on GPT-4 you can basically increase the token budget, but for now we're stuck at around 4,000 tokens. So how do we deal with a transcript of over 100,000 characters? We use the text splitter to first split it up into several chunks. What this basically does is say: we have this transcript of this size, and I want to split it up into chunks of 1,000 characters each, and here you can also specify whether you want there to be a bit of overlap between chunks.
So let's first define the text splitter, then call its split_documents method and pass in the transcript we've just created, the list with the document and its page_content. If I run that, I now have the docs, and if we have a look at what docs is, we can see it's just a list with a bunch of split-up documents: it has taken the very large transcript of over 100,000 characters and split it up into chunks of 1,000.

Okay, so that is the first step. Now you might wonder: we've split up the transcript, but we still can't provide all of it to the API, right? Correct, and that is where the next part comes in: embeddings and vector databases. This is quite technical and I won't go into the details in this video; I will make future videos about this, but for now I want to give you a brief demonstration and overview of how to use it. First we use the embeddings from OpenAI to convert the splits we have just created, the chunks of 1,000 characters, into vectors. A vector is, in this case, a numerical representation of the text itself: we convert the text into a vector of numbers. Then we use the FAISS library, a library developed by Facebook that you can use for efficient similarity search, to basically create a database of all these documents. When a user asks a question about this YouTube transcript, we first perform a similarity search to find the chunks that are most similar to the prompt the user is asking. So we have this database with all these vectors, and we can do a similarity search on it to find the relevant pieces of information that we need. And this is the critical key to working with these large language models and your own data: first create a filter, a lookup table of some sort, to get just the information you want, and then provide that to the large language model together with your question.
So if we bring all of that together in the function create_db_from_youtube_video_url, we can, for any given video URL, load the transcript, split it up into chunks of 1,000 characters, and put it into a vector database object that the function returns. What we can then do next is pass this to another function, get_response_from_query, where we use the database we've just created to answer specific questions. How does this work? Well, we provide the database and the query, the question you want to ask about the video, to this function. We also have a parameter k, which defaults to four; the reasoning behind it is basically to maximize the number of tokens we send to the API. And then, this is where it gets really interesting, we perform a similarity search on the database using the query and return k documents. So given our question, it will go through all of these documents and find the most similar ones. Once we have those documents, we join them by default into one single string, and then we create a model; here we use the gpt-3.5-turbo model. Next you define a template for your prompt, like we've seen earlier in this video, and this is where you can get really creative. In this example: "You are a helpful assistant that can answer questions about YouTube videos based on the video's transcript," and then we provide the input parameter docs, which we replace with the string we've just created, all of the document information. "Only use factual information from the transcript to answer the question. If you feel like you don't have enough information to answer the question, say 'I don't know'. Your answers should be verbose and detailed."
So like I've said, this is really where you can get creative: based on the kind of application you want to create, design your template, and by making minor changes within this template you can create entirely different apps for all kinds of industries.

Alright, the next step is to chain all of this together, and since we are now using the chat models, with gpt-3.5-turbo, this is slightly different, but you can find everything in the quick start: first it explains how to use the general models, then it continues with the chat models. The syntax is a little different, because here we have a system message prompt and a human message prompt. So it's nice to first define a message, a prompt, for the system; that is the description over here, the template explaining to the AI, the agent, what it should do. And then we have a prompt to alter the question, the input the human provides; I, for example, added "Answer the following question:" and then put in the question. I'm not sure this is strictly necessary, but it shows you can alter the input from the user as well. Then it combines all of that into a chat prompt, and, like we've seen earlier, we can put that into a chain, the chat model and the prompt, and run the chain, also like we've seen before; we just put in a query and the docs we defined earlier.

Alright, so now we have all the building blocks we need, and we can actually start to call these functions. Again, let's define the video URL and first create a database from this video. Let's see what that does. It goes quite quickly: it gets the transcript and then converts it, so now we have the database object, and now we can fill in a query and call the get_response_from_query function to answer a specific question about this video transcript.
So let's actually see what they are talking about. Say I don't have time to watch all of this, but I'm pretty interested in what they have to say about AGI. I could come over here and listen to what they have to say, but I can now also come to this application, or this function so to say, and fill in "What are they saying about AGI?" as the query, get the response, and print it. There we go: "In the video's transcript, they are discussing AGI (artificial general intelligence) and the work being done by OpenAI to develop it. Sam Altman, the CEO..." and so on. It's answering the question based on the transcript. Awesome.

So let's ask it another question: "Who are the hosts of this podcast?" Let's run it all at once; it does some thinking first, gets the response, and then: "Based on the transcript, it is not clear who the hosts of the podcast are. However, it is mentioned that the podcast features conversations with guests such as Sam Altman and Jordan Peterson, and is hosted by someone named Lex Fridman." Okay, this is really interesting: it admits it doesn't have all the information, but it recognizes all the entities, and it is correct, it's a podcast by Lex Fridman. Alright, let's try another one: "What are they saying about Microsoft?" In the transcript, the speakers are discussing their partnership with Microsoft and how Microsoft has been an amazing partner to them. Alright, awesome.

And now, also, the function get_response_from_query returns not only the response but also the docs, which is actually quite cool: we can have a look at the documents it used, in this case, to produce this answer. So you also get the reference to the original content, which is very convenient if you want to do additional research or fact-check your model, to see whether it's actually giving you answers that are correct. Alright, so now we basically have a working app, and all you have to do is create a simple web page around this, put it on a server somewhere, and people can interact with it: fill in a
YouTube URL, ask questions, and it will do the rest for you. And really, when I look at all of this stuff, my head starts to spin, I have so many ideas. For example, with this approach alone: let's say you create a list of all the channels that talk about a specific topic. Say you want to stay up to date on AI. You list all the popular podcast channels, and then you create a little script that every now and then checks whether they have published a new video on their page, scrapes all the URLs, and processes all of those videos with these functions. Then you really engineer your prompt in such a way that you can extract useful information from them, which you can use, for example, to do research, or to run a social media account, say a Twitter account where you tweet about the latest updates in AI, or even a YouTube channel where you talk about AI. You can really scout everything and then ask: okay, what is the Lex Fridman podcast saying about AGI? What is Joe Rogan saying about AGI? And you can do all of that automatically, and then you can combine this, chaining it together with different agents, to store the information in files. The possibilities are really endless. So like I've said, I am really going to dive deep into this, because there are so many opportunities right now, and as I learn I will keep you up to date on my YouTube channel; if you're interested in this, make sure to subscribe so you don't miss any future videos. And really, it's been amazing how many requests I'm already getting from companies to help them implement these tools, to help them with AI; I've been getting tons of messages, so it's really exciting. For me as a freelancer this is a really exciting opportunity, a really exciting moment, to also start working with smaller clients, smaller companies, and implement these tools. And now, if you also feel like you want to
exploit this opportunity and start working on your own freelance projects, but don't really know where to start, then you should really check out Data Freelancer, a mastermind that I host, specifically created for data professionals who want to kickstart and launch their freelance career in data but don't really know where to start. In this mastermind you will literally learn everything you need to start landing your first paid project. I share all the systems and models I've developed over the years to basically systemize freelancing in data and make sure you never run out of clients, and you will become part of a community of other data professionals who are working on their freelance careers. We are all here together to work on the same goals: making more money, working on fun projects, and creating freedom. That's what we're trying to do here; it feels like hanging out with friends, but with real business results. So if you're considering freelancing, or want to take advantage of all the amazing opportunities that are out there right now in the world of AI but don't really know where to start, then check out Data Freelancer. First link in the description; sign up for the waitlist.