What's up everyone, it's Brace, and this is the third video in our series on building generative UI applications with LangChain. In this video we're going to walk through how to build a generative UI chatbot with a Python backend and a Next.js frontend. If you've not seen the first video, you should go back and watch that, because that's where we cover some high-level concepts like what generative UI is, some different use cases, why it's better than previous methods, and then go into a little bit of detail on the apps we're going to build today. If you're looking for the JavaScript version, that's linked in the description. In that video we build the same chatbot we're building here, but with a full JavaScript/TypeScript stack.
This video is going to have a Python backend, but we're still going to be using some JavaScript for the Next.js frontend. As a quick refresher if you watched the first video, this is the architecture diagram of the chatbot we're going to be building today. We can see we have two distinct sections: the server, which is where our Python code will live, and the client, which is where our Next.js code will live. The server takes in some inputs: the user input, any images, and the chat history.
Those then get passed to an LLM, and the LLM has a few tools bound to it. These tools all correspond to UI components, which we have on the client. The LLM is then invoked with these tools: it can select a tool to call if the user's input requires it, and if not, the LLM will just return plain text.
We're going to be using LangGraph for our Python backend, and that's where this conditional edge comes in. If you're not familiar with LangGraph, I'm going to add a link somewhere on the screen to our LangGraph playlist, where we go into detail on LangGraph and all of its APIs. But as a quick refresher, we can take a look at this simple diagram. LangGraph is essentially one of our libraries which you can use to construct graphs; we like to use it for anything we would have used an agent for in the past.
This simple diagram shows what a LangGraph application consists of. You take an input, and each of these circles is a node. In LangGraph, a node is just a function that gets invoked with some state passed to it. So the question gets passed to the retrieve node, and at the start of each node, all of your current state gets passed in.
That could be a list of messages, a dictionary with five keys, or whatever you want; your state can really be whatever you want. Your state always gets passed into the node, and when you return from that node, you can return an individual item or the entire state, and LangGraph will combine what you returned with the existing state. So if you just returned one item in your dictionary, it will replace that field; there are more complex options where you can have values combine, add, or use a custom function to merge state, but for now we'll just think of it as: the node gets all the state as input, and whatever you return replaces those fields in the state. So we have a retrieve node, and the results of that get passed to our grading node. The results of our grading node get passed to this conditional edge.
We also have a conditional edge here, and this conditional edge essentially asks: are any of the documents irrelevant? If they're all relevant, then it goes right to the generate node, and the generate node returns an answer. If any are irrelevant, then it gets routed to the rewrite query node. The results of the rewrite query node go to the web search node, and then finally we go back to the generate node and on to the answer.
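To make that concrete, here's a minimal, hedged sketch of a graph shaped like the diagram just described; the node and state names are illustrative, not code from this repo. Each node takes the full state and returns only the fields it wants to update, and the conditional edge returns the name of the next node.

```python
from typing import List, TypedDict

from langgraph.graph import END, StateGraph


class RagState(TypedDict):
    question: str
    documents: List[str]
    answer: str


def retrieve(state: RagState) -> dict:
    # A node gets the whole state and returns only the fields it updates.
    return {"documents": [f"doc about {state['question']}"]}


def generate(state: RagState) -> dict:
    return {"answer": f"answer based on {len(state['documents'])} docs"}


def grade_documents(state: RagState) -> str:
    # A conditional edge returns the name of the next node (or END).
    return "generate" if state["documents"] else END


workflow = StateGraph(RagState)
workflow.add_node("retrieve", retrieve)
workflow.add_node("generate", generate)
workflow.set_entry_point("retrieve")
workflow.add_conditional_edges("retrieve", grade_documents)
workflow.set_finish_point("generate")
graph = workflow.compile()

print(graph.invoke({"question": "What is LangGraph?", "documents": [], "answer": ""}))
```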
So LangGraph essentially, as we can see here, allows you to have a series of nodes and route your application flow between them without it being, say, an agent, which could pick any node, isn't very predictable, and could go right from retrieve to generate, or an LLM chain, which will always do the same flow. With LangGraph you're able to construct your graph in a way where it can be somewhat smart and make decisions on its own, but it's still somewhat fenced in, so it can't just do whatever it wants. If we go back to our chatbot diagram, we see our LLM is the first node that gets invoked, and the results of that get passed to our conditional edge. If no tool was called, then we just stream that text right back to the UI, and as these chunks come in, they get rendered on the UI. If a tool is used, then it gets passed to our invoke tools node. Here you see we stream back the name of the tool that was used.
We then execute some tool function. This is any arbitrary Python function; in our case, it's typically going to be hitting an API. After that, we return our function results, which then get streamed back to the client. We're going to be using the stream events endpoint from LangChain, which essentially allows you to stream back every event yielded inside of a function in your LangChain runnable,
in our case our LangGraph graph. One of the events it yields back is the name of the tool. We then send that back to the client as soon as it gets selected, so we can map it to a loading component, or some sort of component that lets the user know we're processing the request and have selected this tool. And that gets rendered on the UI right away.
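As a rough, hedged illustration of what consuming those events looks like in Python, here's a sketch using astream_events on a compiled graph; the node names invoke_model and invoke_tools match the graph we build later, but the exact payload shapes depend on your chain.

```python
async def stream_demo(graph, inputs):
    # Every event yielded while the graph runs comes back as a dict with an
    # event type, the name of the runnable that emitted it, and its data.
    async for event in graph.astream_events(inputs, version="v1"):
        kind = event["event"]
        name = event["name"]
        if kind == "on_chain_end" and name == "invoke_model":
            # The model node finished: if it chose a tool, this is when the
            # client can render a loading component.
            print("invoke_model output:", event["data"]["output"])
        elif kind == "on_chain_end" and name == "invoke_tools":
            # The tool finished: these are the props for the final component.
            print("invoke_tools output:", event["data"]["output"])
        elif kind == "on_chat_model_stream":
            # Plain-text response: stream each chunk straight to the UI.
            print(event["data"]["chunk"].content, end="")


# e.g. asyncio.run(stream_demo(graph, {"input": [...]}))
```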
So instead of having to wait until the entire LangGraph graph finishes and we have the results, we can select the tool, which usually happens pretty quickly, and instantly render something on the page. The user knows we're working on their request and has a much quicker time to first interaction. Then, while the loading component is being shown to the user, we're executing our tool function in the background.
Then once the results come in, we stream those back to the client and map our tool to our component, and this will be our final component. Our loading component is swapped for that component, populated with whatever fields are returned from our function.
And then we update the UI. This updating and appending of the UI can happen as many times as we would like.
In our case, we're only going to update it once and then finish with a final component, but you could update and append your UI as many times as you like. For instance, you could have a much more complex LangGraph graph like this, where the retrieve node updates the UI, then you let the user know you're grading the documents, and then you let them know the results of the conditional edge. Since we're using stream events, we're able to get all those events and render them on the UI as they happen on our server. So for our Python backend, you're going to want to go into the backend folder, then gen_ui_backend, and find the chain.py file.
This is the file where we will be implementing our LangGraph chain, and the first thing we're going to do here is define the state of the chain, which gets passed through to each of the nodes. We're going to name our state GenerativeUIState and add our imports. We will use this AIMessage later, but for now we just need the HumanMessage.
Our state contains the input, which will be the human message, and that's going to be the user's input. It also contains the result, which is optional because it will only be set if the LLM does not call a tool and responds with a string; it's the plain-text response when no tool was used. We also have an optional tool_calls field, a list of parsed tool calls: if the LLM does call a tool or tools, we're going to parse them and set that value before we invoke the tool.
And then there's the result of a tool call: if the LLM does call a tool, we'll call invoke tools, and that will return this tool_result value, which we'll then use on the client to update the chat history, so the LLM sees our user input and the result of the tool and knows it properly processed that tool. Now we can implement our create graph function. We have not implemented our nodes yet, but this gives us an idea of the different nodes and the flow our graph is going to take. We're going to want to import our StateGraph and CompiledGraph; the latter is what we're going to use as a type hint, and the StateGraph is what we're going to use for LangGraph. As you can see, it's pretty simple. There are two nodes: invoke model, which is this node, and invoke tools, which is here. You'll notice we don't have a node for the plain-text response, because this conditional edge, which is this part, essentially says: if the model used a tool, call the invoke tools node; if it didn't use a tool, just end the graph and send the result back to the client.
Our entry point is going to be invoke model, and our finish point is going to be invoke tools, or the END variable, which this conditional edge will return if no tools were called. Then we're going to compile the graph and return it, and inside of our LangServe server file, when we import this, it becomes the runnable which LangServe can call.
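Here's a hedged sketch of roughly what the state and create_graph might look like, pieced together from the description above; treat it as an approximation rather than a verbatim copy of the repo's chain.py (the node functions come in later sketches, the exact field types are approximations, and the CompiledGraph import path can differ between langgraph versions).

```python
from typing import List, Optional, TypedDict, Union

from langchain_core.messages import AIMessage, HumanMessage, SystemMessage
from langgraph.graph import StateGraph
from langgraph.graph.graph import CompiledGraph  # import path may vary by langgraph version


class GenerativeUIState(TypedDict, total=False):
    # The user's input (plus chat history) sent from the client as messages.
    input: List[Union[HumanMessage, AIMessage, SystemMessage]]
    # Plain-text response, set only when no tool was called.
    result: Optional[str]
    # Parsed tool calls, set only when the LLM selected a tool.
    tool_calls: Optional[List[dict]]
    # The result of invoking the selected tool, used by the client for chat history.
    tool_result: Optional[dict]


def create_graph() -> CompiledGraph:
    workflow = StateGraph(GenerativeUIState)

    # Node functions are sketched further below.
    workflow.add_node("invoke_model", invoke_model)
    workflow.add_node("invoke_tools", invoke_tools)
    workflow.add_conditional_edges("invoke_model", invoke_tools_or_return)
    workflow.set_entry_point("invoke_model")
    workflow.set_finish_point("invoke_tools")

    return workflow.compile()
```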
Now that we've defined our graph structure, we can define our first node, invoke model. It's going to take in two inputs: one for state, which is the full GenerativeUIState that we defined. Since this is the first node that's called, it will only have the input populated; nodes that are called after it will have the other state values populated, depending on whether the model called a tool or returned a string.
Then we have a config object which we'll pass to the LLM when we invoke it, and finally it's going to return an instance of GenerativeUIState. As you can see, we have total=False on the class, so we don't have to return all of the different values. Now that we've defined the structure, we can go ahead and define the first part of our invoke model node. We're going to have a tools parser, which is a JsonOutputToolsParser from the OpenAI tools output parsers, and then a prompt.
This prompt is going to be pretty simple: you're a helpful assistant, you have some tools, and you need to determine whether a tool can handle the user's input or whether to respond in plain text. Then we have a messages placeholder where the input and chat history will go.
After defining our tools parser and our prompt, we can go and define our model and all of the tools we will bind to it. So we can paste that in. As you can see, we imported our GitHub repo tool, our invoice tool, and our weather data tool; we will implement these in a second. We've also imported our ChatOpenAI class. We define our model: ChatOpenAI, gpt-4o, temperature 0, and streaming set to true.
We then define our list of tools, which is the GitHub repo tool, the invoice parser tool, and the weather data tool. Next, we bind the tools to the model: we define a new variable, model with tools, and bind these tools to the model.
And finally, we use the LangChain Expression Language to pipe the initial prompt into the model with tools, and then invoke it, passing in our input and our config. We get a result which will either contain tool calls or just a plain-text response. Now we can implement our parsing logic. First we make sure the result is an instance of AIMessage. It should always be, but we have this check here so we get this type down here; in theory it should never throw. Then we check whether result.tool_calls is a list and whether its length is greater than zero, i.e. whether there is a tool call there. If a tool call does exist, then we parse it, passing in our result from chain.invoke and the config, and return tool_calls with the parsed tools, which populates this field. If no tools were called, then we just return the content as a string in the result field, which populates this.
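Putting that together, a hedged sketch of the invoke_model node could look like this; the prompt wording and the tool names (github_repo, invoice_parser, weather_data) are placeholders for whatever the repo defines, and the state type comes from the earlier sketch.

```python
from langchain_core.messages import AIMessage
from langchain_core.output_parsers.openai_tools import JsonOutputToolsParser
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables import RunnableConfig
from langchain_openai import ChatOpenAI


def invoke_model(state: GenerativeUIState, config: RunnableConfig) -> GenerativeUIState:
    tools_parser = JsonOutputToolsParser()
    initial_prompt = ChatPromptTemplate.from_messages(
        [
            (
                "system",
                "You are a helpful assistant. You have some tools available; "
                "determine whether a tool can handle the user's input, "
                "otherwise respond with plain text.",
            ),
            MessagesPlaceholder("input"),
        ]
    )

    model = ChatOpenAI(model="gpt-4o", temperature=0, streaming=True)
    tools = [github_repo, invoice_parser, weather_data]  # assumed tool names
    model_with_tools = model.bind_tools(tools)
    chain = initial_prompt | model_with_tools
    result = chain.invoke({"input": state["input"]}, config)

    if not isinstance(result, AIMessage):
        raise ValueError("Expected the model to return an AIMessage.")

    if isinstance(result.tool_calls, list) and len(result.tool_calls) > 0:
        # The model picked a tool: parse the calls and store them in state.
        return {"tool_calls": tools_parser.invoke(result, config)}
    # No tool was used: return the plain-text response.
    return {"result": str(result.content)}
```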
Now we can implement our conditional edge, which checks whether result or tool calls are defined, and if tool calls are defined, routes to our invoke tools node; we'll implement it right after adding the conditional edge to the graph. Our invoke tools or return method takes in the state and returns a string. If result is in the state and it is an instance of string, which means it was set because we returned it, then return END.
This END variable is a special constant from LangGraph which tells LangGraph to finish and not call any more nodes. It's essentially like calling setFinishPoint, except you can do it dynamically: if LangGraph sees END returned from a conditional edge, it just ends. If result is not defined but tool calls are defined and they are an instance of list, then we return the name of the invoke tools node; LangGraph reads this and then calls the invoke tools node.
In theory, the final branch will never be hit, because we should always either return a string via result or have tool calls, but we add it to keep the type checker happy in case there's somehow a weird edge case where that happens.
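A hedged sketch of that conditional edge, using the state from the earlier sketch:

```python
from langgraph.graph import END


def invoke_tools_or_return(state: GenerativeUIState) -> str:
    # Plain-text answer: end the graph and send the result back to the client.
    if "result" in state and isinstance(state["result"], str):
        return END
    # A tool was selected: route to the invoke_tools node.
    if "tool_calls" in state and isinstance(state["tool_calls"], list):
        return "invoke_tools"
    # Should be unreachable, but keeps the type checker (and us) honest.
    raise ValueError("Expected either 'result' or 'tool_calls' to be set.")
```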
Now that we've implemented our conditional edge, we can implement the invoke tools function, which handles invoking these tools and sending the data back to the client, where we can process it and send the UI components over to the UI. So this is the invoke tools function. It's somewhat similar to what we saw in the server.tsx file with its tool map: we have a map from tool names to the tools themselves, and we use the state to find the tool that was requested so we can invoke it. After this, we say: if tool calls is not None, which means tool calls were returned here and our conditional edge routed to invoke tools. They should never be None, because invoke tools shouldn't be called unless they're already an instance of a list, but once again it's a linting issue and we keep the type checker happy by confirming they are defined. We then extract the tool from the state's tool calls, taking just the zeroth item. You could update this to process multiple tools returned by your language model;
for this demo, we're only going to handle the single tool the language model selects. Then, via our tools map — tool.type is always going to be the name of the tool — we can find the proper tool. Now we have our selected tool, and we return tool_result set to the selected tool's .invoke called with the args the language model supplied. That populates this field, and since invoke tools is our finish point, the LangGraph graph will end.
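And a hedged sketch of the invoke_tools node; the keys in the tool map are assumed names that would have to match whatever tool names the LLM returns.

```python
def invoke_tools(state: GenerativeUIState) -> GenerativeUIState:
    # Map tool names (the "type" the parser returns) to the tools themselves.
    tools_map = {
        "github-repo": github_repo,
        "invoice-parser": invoice_parser,
        "weather-data": weather_data,
    }

    tool_calls = state.get("tool_calls")
    if tool_calls is None:
        # Should not happen: the conditional edge only routes here when tools were called.
        raise ValueError("No tool calls found in state.")

    # Only the first tool call is handled; you could loop here to support several.
    tool_call = tool_calls[0]
    selected_tool = tools_map[tool_call["type"]]
    return {"tool_result": selected_tool.invoke(tool_call["args"])}
```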
Now we can implement our GitHub repo tool, and then I'll walk you through how the invoice and weather data tools are implemented; they're pretty similar to the GitHub repo tool, so we'll only implement that one. In your backend you should navigate to tools/github.py, and the first thing we're going to want to do is define the input schema that the language model gets passed, so it knows what fields to provide to this tool if it does want to select it. You need to make sure to import BaseModel and Field from LangChain's pydantic_v1 module, not from pydantic directly, and then we can define our GithubRepoInput with two fields: owner and repo.
The owner will be the name of the repository owner, and the repo is the name of the repository, like langchain-ai/langgraph. These are the fields the GitHub API requires in order to fetch data about a given repo.
Next, we're going to define the actual tool for our GitHub tool. We're going to import tool from langchain_core.tools — so, from langchain_core.tools import tool — and add this decorator on top of our GitHub repo method. We're setting the name to github repo, which we also have here, obviously, so we can map it properly.
Then we set the schema for this tool and return_direct to true. Our GitHub repo tool takes in the same inputs as here, owner and repo, and it returns an object or a string, so let's add those imports. Now we can implement the core logic, which is going to hit the GitHub API. If it returns an error, we'll return a string,
and if it does not return an error, we're going to return the data the API gave us. First things first, we'll add our docstring and then import os to get the GitHub token from your environment. There's a README in this repo if you want to use the tools we've provided pre-built: you're going to need a GitHub token, and for the weather tool you're going to want a geocode API key. They're all free to get, and I've added instructions in the repo on how to get them.
You should set them in your environment. Inside this tool, we're going to confirm that the token is set before calling the GitHub API. Then we define our headers with our environment token and the API version, and the URL for the GitHub API, passing in the owner and repo, because this is an f-string.
Now we can use requests to actually hit this URL and hopefully get back the data for the repo, assuming the user and the LLM provided the proper owner and repo. We'll wrap our request in a try/except, so if an error is thrown we can return a string and just log the error instead of killing the whole thing. This is going to try to make a GET request to this URL with these headers, raise for status, get the data back, and then return the owner, repo, description, stars, and language.
That's the owner of the repo, the name of the repo, the description if one is set, how many stars are on that repo, and then the primary language, like Python. This is the end of the GitHub repo tool, and now we can quickly go and look at the invoice and weather tools. As we can see, they're pretty much the same. The invoice tool is much more complex in its schema, because it's going to extract these fields from any invoice image you upload and then use our pre-built invoice component on the frontend to fill out fields like the line items, total price, and shipping address, and then it just returns those fields. The weather tool is going to hit three APIs in order to get the weather for your city, state, and country, and today's forecast, which is the temperature. Its schema is also simple: city, state, and an optional country that defaults to USA.
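For reference, here's a hedged sketch of a GitHub tool along the lines described above; the tool name, environment variable, and returned fields are assumptions, error handling is kept minimal, and the pydantic_v1 import path depends on your LangChain version.

```python
import os
from typing import Union

import requests
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.tools import tool


class GithubRepoInput(BaseModel):
    owner: str = Field(..., description="The name of the repository owner.")
    repo: str = Field(..., description="The name of the repository.")


@tool("github-repo", args_schema=GithubRepoInput, return_direct=True)
def github_repo(owner: str, repo: str) -> Union[dict, str]:
    """Get information about a GitHub repository."""
    if not os.environ.get("GITHUB_TOKEN"):
        raise ValueError("Missing GITHUB_TOKEN environment variable.")

    headers = {
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
        "X-GitHub-Api-Version": "2022-11-28",
    }
    url = f"https://api.github.com/repos/{owner}/{repo}"
    try:
        response = requests.get(url, headers=headers)
        response.raise_for_status()
        data = response.json()
        return {
            "owner": owner,
            "repo": repo,
            "description": data.get("description", ""),
            "stars": data.get("stargazers_count", 0),
            "language": data.get("language", ""),
        }
    except requests.exceptions.RequestException as err:
        # Log the error and return a string instead of crashing the graph.
        print(err)
        return "There was an error fetching the repository."
```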
Now that we've defined our tools, we can define our LangServe endpoint, which is the backend server endpoint our frontend will actually connect to. For the LangServe server, you want to go to your gen_ui_backend folder and then the server.py file. The first thing we're going to do here is load any environment variables using the dotenv dependency; this will load the variables from your .env file, like your OpenAI API key, your GitHub token, yada yada yada. Now we implement our FastAPI app for our LangServe endpoint. If you've ever worked with LangServe, this should be pretty familiar. We're going to have this start function (it should be named start; start client doesn't make much sense), and then we define a new instance of FastAPI as this app, give it a title of Gen UI Backend, and the rest is just the default for LangServe.
Since our backend API is going to be hosted locally on localhost:8000 and our frontend is on localhost:3000, we need to add some code for CORS so that it can accept our requests; we're going to add this import as well. Once we've added CORS, we can go and add our route, which is going to contain our runnable, the create graph function we defined inside of our chain.py file.
So we create a new graph and add types so LangServe knows what the input and output types are. We're going to add a route at /chat, make it a chat type, and pass in our runnable and our app. This runnable is what gets called when you hit this endpoint. Then we finally start the server here on port 8000. As you can see, we have this ChatInputType here, which is going to define the input type for our chat.
So we're going to want to go to backend/types and define this type. It's fairly simple: our ChatInputType contains a single input, which is a list of HumanMessage, AIMessage, or SystemMessage.
These are going to be our input and chat history, which we compile on the client and send over the API to the backend. Once this is done, your server is finished and you can go to your console and run poetry run start. This should start your... oh that's right, we renamed that function, so we need to update the pyproject.toml: instead of pointing at the old function name, it should just call start. Now if we go back and run poetry run start, our LangServe server has started. Then we can go to our browser, go to localhost:8000/docs, and see all of the automatically generated Swagger docs for our API endpoint.
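Putting the server pieces together, here's a hedged sketch of roughly what server.py and the ChatInputType might look like; module paths, titles, and options are approximations of the repo layout rather than its exact code.

```python
from typing import List, Union

import uvicorn
from dotenv import load_dotenv
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage
from langchain_core.pydantic_v1 import BaseModel
from langserve import add_routes

from gen_ui_backend.chain import create_graph  # assumed module path


class ChatInputType(BaseModel):
    # The user's input plus chat history, compiled on the client.
    input: List[Union[HumanMessage, AIMessage, SystemMessage]]


def start() -> None:
    load_dotenv()  # loads OPENAI_API_KEY, GITHUB_TOKEN, etc. from .env

    app = FastAPI(title="Gen UI Backend")
    # Allow the Next.js frontend on localhost:3000 to call this server.
    app.add_middleware(
        CORSMiddleware,
        allow_origins=["*"],
        allow_credentials=True,
        allow_methods=["*"],
        allow_headers=["*"],
    )

    graph = create_graph()
    runnable = graph.with_types(input_type=ChatInputType, output_type=dict)
    add_routes(app, runnable, path="/chat", playground_type="chat")

    uvicorn.run(app, host="0.0.0.0", port=8000)
```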
The stream events endpoint is the one we're going to be using. Now that we've done this, we have one thing left to do, which is add the remote runnable to our client, so we can connect to this endpoint and use our UI chatbox, which this repo already has pre-built; you just clone the repo
and you can use it. Then we can actually start making API requests and check out the demo. For our remote runnable, you want to go back to the frontend directory, app/agent.tsx. We're going to import "server-only" because this should only run on the server, and then add our API URL; obviously in production this should not be localhost:8000, but for this demo it is, plus /chat, which is the chat endpoint we defined here. Once we've done that, we can define our agent function, which takes in some inputs — your input, your chat history, and any images that were uploaded — and designate it as a server function. These are the inputs we saw here. Then we create a remote runnable: const remoteRunnable = new RemoteRunnable, imported from @langchain/core/runnables/remote, passing in the URL as the API URL. This gives us a runnable that can connect to our LangServe API on the backend, and since it's a runnable, we can use all the nice LangChain types and the invoke and stream events methods that we've used in our stream runnable UI function here.
So this remote runnable is what we'll pass to that function and then call stream events on. Now we can import streamRunnableUI from utils/server, and then return streamRunnableUI with the remote runnable and the inputs. But we also need to update these inputs to match the type the backend is expecting, so we iterate over our chat history, creating a new object with the role as the type and the content as the content, and finally the input from the user gets type human and content inputs.input.
Once this is done, we'll be able to use this agent function on the client. But first, we need to expose our context so it can be used: export const EndpointsContext = exposeEndpoints, passing in our agent. This uses the same function we defined in our server.tsx file, which adds this agent function to the React context.
So now in our chat.tsx file, which you should use from the repo and not really update at all, we have our useActions hook taking in our EndpointsContext, which we defined here, and since we're using React's createContext it knows it can call agent. It's then going to push these elements to a new array with the UI that was returned from the stream, and finally parse our invoke model or invoke tools output into the chat history so the LLM has the proper chat history. This is obviously implementation-specific, so if you're adapting this for your own app with your own LangGraph backend, you should update it to match your nodes and how you want to update your chat history. Finally, we clean up the inputs, resetting our input text box and any files that were uploaded, and then there's just the JSX, which we'll render in our chatbot.
Go to the frontend utils/server.tsx file; this is where we will implement all of the code around streaming UI components from the server back to the client and calling the server's runnable via stream events. The first thing to do in this file is import "server-only", which tells the bundler (say, on Vercel) that this file should only be run on the server.
Next, we're going to implement this withResolvers function. Essentially, it has a resolve and a reject function; those are assigned to the resolve and reject of a new promise, and then it's all returned.
We have to ts-ignore this because TypeScript thinks resolve is being used before it's assigned. Technically, in the context of just this function, that's correct; however, we know we will not use the resolve and reject functions before we use this promise.
So in practice, that's not an issue. Next, we're going to implement this exposeEndpoints function. It takes in a generic type, which is then assigned to actions. This action in practice will be our LangGraph agent — or rather the RemoteRunnable which calls that LangGraph agent on the server — and it returns a JSX element.
This JSX element is a function called AI, which takes in children of type ReactNode — so any React node children — and passes the actions variable here as a prop to the AI provider, which we'll look at in a second, along with any children. This AI provider essentially uses React's createContext to give context to our children, which are the elements we pass back to the client, and to any actions we want to use on the client, which will be our agent action that calls the server. If we look inside our app/layout.tsx file, we see we are also wrapping the page in this EndpointsContext variable, which we will implement in just a minute. Now that these two are implemented, we can go and implement the function which handles actually calling the server, calling stream events on that, and then processing each of the events. This function is going to be called streamRunnableUI. We'll add our imports.
Import Runnable from @langchain/core/runnables, and also import CompiledStateGraph from @langchain/langgraph (autocomplete isn't picking it up).
Our runnable will be our RemoteRunnable, which we'll use to hit our server endpoint. We're going to call streamEvents on this remote runnable so we get all the events our server streams back. And then we have a set of inputs.
These inputs are things like the user input and chat history, which we pass to the runnable when we invoke it. The first thing we want to do in this function is create a new streamable UI, importing the createStreamableUI function from the AI SDK. createStreamableUI is what we use to actually stream these components from a React server component back to the client.
Then we use the withResolvers function we defined to get our lastEvent promise and its resolve, which we will resolve and await a little later. Next, we implement this async function, which we call immediately. Let's add our imports. It has a last event value, which we assign at the end of each stream event we iterate over, so it always contains the last event. We'll use this a little later, after we resolve our promise on the client, so we know what the last event was, because this async function will resolve early.
This async function resolves before the actual API call is finished, so we need to assign each event to that variable so the last event ends up in it; then, when we await our last event promise, we can access the last event on the client even though the async function has already resolved. We also have this callbacks object, which maps a string to either a createStreamableUI or createStreamableValue stream.
It's an object which tracks which streamed events we've processed already: the string is the ID of that stream event, and the value is the UI stream being sent back to the client which corresponds to that event. It could be a tool call, or it could be just a plain-text LLM response.
After this, we need to go up above this function and define two types and one object map. Let's add our imports first. Why is that deprecated?
That's because we imported from the wrong place; we need to add this here as well. These are some pre-built components, rendering things like a GitHub repo card: we have a loading component for it, and then the actual component, which takes in props.
These are all just normal React components. Even though we're using them in React server components on the server, they're normal React components that get streamed back to the client. So you can essentially stream back any component you would build in React; they can have state, they can connect to APIs. That's part of what makes this so powerful: you can use actual React components that have their own life inside of them.
You can stream one back to the client, the user sees a new UI component, and that component can be very dynamic and stateful and whatnot. But those are pre-built, and we have this tool component map here.
We will use this as our tool component map here, so when we get an event back which matches the name of one of our tools, we can map it to the loading component and the final component. There's a different check, which we'll implement in a second, that decides whether the loading component or the final component gets streamed back, and then you can pass any props to these components.
Now we're going to define two variables, selectedToolComponent and selectedToolUI. These keep track of the individual component and the UI stream which streams the components back to the client. That's because after this we're going to be iterating over stream events, and we need these variables to live outside of each event so we have access to them in all subsequent events after they've been assigned. Now we can implement the stream events call.
That's just going to call runnable.streamEvents with the v1 version, passing in any inputs. This runnable is the same runnable that gets passed in here; we'll implement it in a second, but it's essentially going to be a RemoteRunnable that calls our LangServe Python server.
Now we can iterate over all of the stream events and extract the different events we care about, then either update our UI or update these variables and callbacks. Really quick, we extract the output and the chunk from streamEvent.data, and the type of event, which we'll use a little later. Now we implement our handleInvokeModelEvent. This handles the invoke model event by checking for tool calls in the output.
If a tool call is found and no tool component is selected yet, it selects the tool component based on the tool type and appends the loading state to the UI. We call this when the streamed event comes from the invoke model node: one of the nodes in our LangGraph graph is invoke model, and this is the function which processes events streamed from that node. In the body of this function, we first check whether tool_calls is in the output and output.tool_calls has a length greater than zero; if there is a tool call, then we extract it. This is invoke model, so it's going to be this first step.
The conditional edge will either route on a tool call or a string. If the model returned a tool call, it should get caught here: we extract that tool, and then, if these two variables have not been assigned yet, we find the component in the component map and create a new streamable UI, passing in the loading component for that tool as the initial value.
We then pass the new streamableUI.value into our outer createStreamableUI, which gets sent back to the client, so the loading component is what shows up for the first event. The next event we want to process is the invokeTools event, where we update the selectedToolUI with the final state.
That is, with the final state and tool result data coming from this node; it takes an input and is called handleInvokeToolsEvent. It's pretty similar to the previous one: we take the event from the tool node and update the UI, but using the already-defined variables. If selectedToolUI and selectedToolComponent are truthy — which they should always be, because the invoke tools node should never be called before the invoke model node, as we'll see when we run our Python server — then we get the data from the output via the tool result, and call toolUI.done with the final version of the selected component we assigned earlier, passing in any props.
For example, say we have our weather tool: it will use the UI stream for the weather tool, find the final version of that component, which is the current weather component,
pass any props to it, update that stream, and call done to end the stream, updating the weather component that's already being rendered on the UI. The last function we want to implement is handleChatModelStreamEvent. If the language model doesn't pick a tool and only streams back text, it will stream back all those text chunks, and we want to extract them and stream them on to our UI.
handleChatModelStreamEvent works by creating a new text stream for the AI message, if one does not already exist for the current run ID, and then appending each chunk's content to the corresponding text stream. The body of this function looks like this; we're going to use our callbacks object here after we add our import.
We say: if the run ID for the stream event does not exist in our callbacks object, then create a new text stream. We want a text stream because it bypasses some of the batching that createStreamableUI does, since we're only streaming back text. So we create our text stream, then append to our createStreamableUI an AI message — which will look like our AI message text bubble — whose value is the text stream, and set the callbacks object entry for this run ID to the text stream.
Then, whether we just set it or it was already set, we check that it exists and append any content from the stream. Each chunk the LLM streams will be chunk.content, and we append that to our text stream value, which streams each piece of text and updates the UI message as the chunks come in. Now that we've implemented these functions, we want to implement our if/else statements on the different stream events, so we can catch the proper events and call the functions required for them.
The first one is if the event type is "end" — meaning the chain has ended — and the type of the output is an object. We then check streamEvent.name: if it was invoke model, we handle the invoke model event, passing in the output; if it was invoke tools, we call the invoke tools handler, passing in the output. The last event we need an if statement for is the chat model stream.
Those aren't tool nodes; they're on_chat_model_stream events. So we say: if the event is on_chat_model_stream, the chunk is truthy, and the type of the chunk is an object, then handle the chat model stream. And then finally, at the end of our stream event iteration (let me collapse these), we assign the last event value to the stream event.
This is so that value always holds the last event once the stream exits. Finally, we clean all this up: using the resolve function returned from withResolvers, we pass in data.output from the last event.
This is going to be the last value from our stream: if it was text, it's going to be text; if it was the result of a tool, it's going to be the tool result. We set that data when we implement our Python backend. We then iterate over all of our callbacks and call done on each of them, which calls done on each of those streams.
That's just so each createStreamableValue stream finishes. Then we call ui.done for the createStreamableUI, which ends the stream of UI components going back to the client. Finally, outside of this async function, we return the value of our UI stream — this is the JSX element we'll render on the client —
along with the lastEvent promise right here, which we can resolve once our stream events have finished and then read to get the value of the last event. Now that everything is finished, we can go back to our terminal and run yarn dev.
This will start up a server at localhost:3000. We can go to our UI, reload the page, and we should see the generative UI application we just built. We say something like "what's the weather in SF?" and send that over.
Boom, we get back our loading component, and it recognized that it was San Francisco, California. As we saw, it selected the tool, sent that back to the client, that was mapped to our loading component which was rendered here, and then once the weather API had resolved, it sent that data back again and updated this component with the proper data. We can also say something like "what's the info on langchain-ai/langgraph?" We send that over, it should select our GitHub tool — we saw it loading for a second — and now we have our GitHub repo component here, which has the description and the language and all the stars.
This is, you know, a React component, so it's interactive. We can click on the star button and it takes us to the LangGraph repo, and we see that the description and stars all match. Before we finish, the last thing I want to do is show you the LangSmith trace.
As we see, this is the LangServe /chat endpoint. It takes in the input, the tool calls, and the most recent input. As we can see, the output contains the tool calls and the tool result, which we use to update our chat message history. It calls invoke model as the first node in LangGraph; obviously there are no inputs for these yet because they haven't been called, but it does contain the messages input field, and that then calls our chat model.
Our chat model is provided with some tools. It selected the GitHub repo tool, which is what we want because we asked about a GitHub repo, and returned the values for that.
That then got passed to our output parser, and then to our invoke tools or return conditional edge, which obviously chose invoke tools. So it then called the invoke tools node, which invoked our tool.
While it was invoking our tool, it was streaming back the name of the tool, which we used to send the loading component to the client. Then, after it hit the GitHub API, it streamed back the final result of our tool, as we can see here, and on our client that was used to update the component with the final data.
And since invoke tools was the last node, it finished. That is it for this demo on building generative UI with a Python backend and a React frontend. If you're interested in the TypeScript video, which is the same demo as this but with a full TypeScript app, that will be linked in the description. I hope you all have a better understanding of how to build generative UI applications with LangChain now.