Transcript for:
Generative UI App Development & Deployment

what's up everyone, it's Brace, and this video is a continuation of our generative UI series. In this video we're going to be building a whole new generative UI app, which is going to be deployed on LangGraph Cloud, our new cloud offering for hosting LangGraph applications. Because we're using LangGraph Cloud, we're going to build the backend which powers this generative UI app in LangGraph Python, and the frontend, same as in the other videos, is written in Next.js with TypeScript.

The reason this video is different from the other videos, and the way it's more customizable, is that here we're using this entire page as our canvas for generating and placing UI components. In the previous videos they were all chatbots, so it was pretty predictable where the next UI component would get placed, and they weren't very dynamic: it was usually some preset component that would get some simple props passed to it and be stuck in as the last message. Here we're using pretty much the entire UI to generate and place components on, and the backend is also a little more complex.

As you see right here, we have this magic filter input. It's going to take a user's input, which is just natural language describing the filter and maybe the way they want the data visualized. Our LLM is going to perform some query analysis on that and extract different things, like the filters that should apply to our orders, the chart type, and the way it should render the data on that chart type. Then it's going to actually perform those filters, send it all back to the client, and we're going to render it in real time as each node completes.

Up here we have this filters and display types popup. If we click on the filters, we can see all of the different filters the LLM can select from, so when you're writing your magic filter input you can use this to reference the different filters you want to apply. Say we want to filter by state: we could say "filter by California", or we could say "filter by orders in California that have a discount". We can use really any combination of these filters, and our LLM is going to perform query analysis and extract them.

We also have some preset display types. These map to what you're seeing right here: they all link to either a bar chart, a line chart, or a pie chart, and each is a preset way the data will be displayed on that chart, so some unique pairing of data points for the y-axis and the x-axis. You can also reference these when you're writing your magic filter, and the LLM should be able to take what you described and map it to the proper display type to render your data.

So we can do something like "orders by status". Here we see our input, and after it loads we should hopefully get a pie chart. Yep: it tells us the name of the display type it's rendering and gives us a short description of what it's showing. We can see we have 51 orders processing, 37 pending, 43 shipped, and so on. We don't have any filters here, and that's because this input doesn't really need any filters applied, given the filters available to it; it's just saying "give me all the orders and group them by status". So in this case it skipped over the filters, didn't apply any, because we want all the orders, and instead it looked at the chart types and chose the proper chart to render the data in our request.

We can also say something like "orders after 2023-10-01". This should use the after-date filter. There we go: after date 2023-10-01, and it decided to give us a line chart showing each day and the total dollar amount of orders from that day. We get the description, and we get our chart.
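To make that query-analysis step concrete, here is a small, dependency-free sketch of what an extracted filter object might look like and how it could be applied to orders. The field names (`state`, `min_discount`) and data shapes are illustrative assumptions, not the app's actual schema:

```python
def apply_filters(orders, selected_filters):
    """Keep only the orders matching every provided (non-empty) filter."""
    filtered = []
    for order in orders:
        if selected_filters.get("state") and order["state"] != selected_filters["state"]:
            continue  # wrong state
        min_disc = selected_filters.get("min_discount")
        if min_disc is not None and order.get("discount", 0) < min_disc:
            continue  # not discounted enough
        filtered.append(order)
    return filtered

orders = [
    {"id": 1, "state": "CA", "discount": 10},
    {"id": 2, "state": "CA", "discount": 0},
    {"id": 3, "state": "NY", "discount": 25},
]

# Roughly what the LLM's query analysis might extract from
# "filter by orders in California that have a discount":
selected_filters = {"state": "CA", "min_discount": 1}

print([o["id"] for o in apply_filters(orders, selected_filters)])  # -> [1]
```

The key idea is that every filter field is optional, so an empty extraction (no filters) simply returns all orders.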
So now let's quickly look at LangGraph deploy to see what this trace looks like on the backend, and then we can jump into the code and see how it's all written. Over in our LangGraph deploy app, we see this one is named "genui video". In a little bit, after we finish walking through the code, I'll show you exactly how you can deploy this yourself, connect it to the frontend, and get it running. But let's take a quick look at what this dashboard shows. Here we have all of our revisions; these are like deployments. When I make a commit, I can add a new revision and it'll redeploy my server with that latest code. We can see some charts (with a lot more charts we can access), and then we see all of our runs. If I open one up, and if you're familiar with LangSmith, this should look pretty familiar, because it's just a LangSmith run. It shows all the nodes: our generate filters, generate chart type, generate display format, and filter data nodes. We can inspect each of these runs and look at the actual API call or prompt template call that happened.

Okay, now that we have a high-level idea of what LangGraph deploy is and what the UI looks like, let's jump into the code and implement our LangGraph Python backend. If you want to follow along, I'll add a link to this GitHub repo in the description. If you've already watched our other generative UI videos, it's the same repo, with a new directory, charts, so this should all look pretty familiar; just pull fresh from main and you'll get the code. Speaking of the other videos, if you haven't seen them, you should definitely go watch at least the intro video on what generative UI is; it covers some use cases where it performs better than traditional methods, plus some other high-level ideas. And in the chatbot video, where I implement that app, I go into a lot more detail on exactly how we're rendering and generating these UI components and on the actual code, whereas in this video we'll go over the code but mainly focus on the LangGraph deploy aspect of it.

If you're following along, you'll want to open backend/gen_ui_backend/charts/chain.py. As you can see, this is where our entire LangGraph chain lives. Right here we have a diagram showing a high-level architecture view of our application. You can see two distinct sections: the Python backend, which is hosted on LangGraph Cloud, and the frontend, which runs on the edge or the client and is written in Next.js.

Before we get into the code, take a quick look at this. Our backend takes in a user input, which is that magic filter input we saw in the demo, and passes it through to some nodes. The first node is generate filters. This node has an LLM with a tool bound to it, and that tool carries all the filters we saw in the demo that can be applied. Based on your user input and some other prompting, it selects some filters to apply to the orders, passes that through a tool output parser, and then we have our final generated filters object, which is streamed back to the client. The dotted arrow represents a streaming event. We're able to extract all these events because LangGraph deploy and LangChain expose an endpoint called stream events, which surfaces every single event in your app: an on_chain_start when a node starts, anything that happens inside, like an LLM call, and then an on_chain_end when the node ends. That last one is what we care about: when the node ends, we stream the result of that node back to the client, and from there we're able to map it to different components.
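The filtering described above, keeping only node-completion events out of the full event stream, can be sketched without any dependencies. The event dictionaries here are simplified stand-ins for what the real stream events API emits:

```python
# Minimal sketch: the stream emits many event types, but we only forward a
# node's result to the client when that node finishes ("on_chain_end").

def results_from_stream(events):
    """Yield (node_name, output) pairs for completed nodes only."""
    for event in events:
        if event["event"] != "on_chain_end":
            continue  # skip on_chain_start, intermediate LLM events, etc.
        yield event["name"], event["data"]["output"]

fake_stream = [
    {"event": "on_chain_start", "name": "generate_filters", "data": {}},
    {"event": "on_chain_end", "name": "generate_filters",
     "data": {"output": {"selected_filters": {"state": "CA"}}}},
    {"event": "on_chain_start", "name": "generate_chart_type", "data": {}},
    {"event": "on_chain_end", "name": "generate_chart_type",
     "data": {"output": {"chart_type": "pie"}}},
]

for name, output in results_from_stream(fake_stream):
    print(name, output)
```

Each yielded pair is what the frontend would map to a UI component, as the video describes next.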
Looking at the demo again, this would be one of those filter components we got from the first step, rendered in the UI. We stream these events back to the client to get a much quicker time to first interaction, and we continuously update the UI while our backend is working so the user knows something's going on. If we didn't do that, it might take three, four, or five seconds to reach the final node and render the data; this way we keep the user engaged while the backend works.

The state then gets passed to the next node. As you can see, most of these nodes (three out of four) are pretty similar: an LLM with a tool, a tool output parser, and a result we update the state with. This next node, generate chart type, binds a tool listing all the different charts it can select; in our case bar chart, line chart, or pie chart. It uses context like the user's input, the generated filters, and the different ways it can display data on those charts, and uses all of that to decide which chart to render. We get the result, stream it back to the client, and render a loading component once we know exactly which chart it is. So if it's a line chart, we might render a loading line chart component in the UI. Before we've even performed the filters or have access to the final data, we can show the user: hey, we have these filters, we're going to use this chart, and we're working on it.

Then we go to the next node, generate data display format, which selects the format that best renders your data based on the filters, the chart type, and your input. We update the state and pass it to the final node, which actually performs the filters on the data, using your generated filters and the orders we passed from the client when the backend was first invoked. That node just contains some if/else statements; it updates our orders state with the filtered orders and streams that back to the client, where we render the final chart.

Now let's jump into the code. At the top of this file we have our agent executor state. This contains the state that will be available to all of our nodes. You can see the first three fields are not optional; they're required, because these are the fields we pass in to our backend right when we invoke it: the user input, which is that magic filter input; display formats, which are all the ways it can display data; and orders, which are all of our fake generated orders that we created on the frontend and send to the backend. The rest are optional because each will be generated inside a node. Selected filters are only generated in the first node, so technically the field is optional because the graph won't have access to it until the first node produces it; chart type is generated in the second node; display format in the third; and the fourth node just updates the original orders field, replacing it with the filtered orders.

If we scroll down we see this create graph function, which returns a compiled graph (that's its type once we compile it). Then we have our nodes: generate filters, generate chart type, generate data display format, and filter data, all matching the diagram. We're not doing anything complex with this LangGraph graph; we're just passing each node to the next. We'll always start at generate filters and always finish at filter data: filters get passed to chart type, chart type to data display format, and data display format to filter data. Once that's all set, we compile our graph and assign it to a variable named graph.
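The linear four-node flow just described can be sketched without LangGraph installed. This is a dependency-free stand-in for what StateGraph does here (the real library also handles streaming, persistence, and more); the node bodies are stubs, and the merge step mirrors how each node returns a partial state that gets folded into the full state:

```python
def generate_filters(state):
    return {"selected_filters": {"after_date": "2023-10-01"}}

def generate_chart_type(state):
    return {"chart_type": "line"}

def generate_display_format(state):
    return {"display_format": "totals_by_day"}

def filter_data(state):
    # Overwrites "orders" with the filtered subset (ISO date strings
    # compare correctly as plain strings).
    after = state["selected_filters"]["after_date"]
    return {"orders": [o for o in state["orders"] if o["date"] >= after]}

NODES = [generate_filters, generate_chart_type, generate_display_format, filter_data]

def run_graph(state):
    for node in NODES:
        state = {**state, **node(state)}  # merge each node's partial update
    return state

final = run_graph({
    "input": "orders after 2023-10-01",
    "display_formats": [],
    "orders": [{"id": 1, "date": "2023-09-30"}, {"id": 2, "date": "2023-10-02"}],
})
print(final["orders"])  # -> [{'id': 2, 'date': '2023-10-02'}]
```

Note how the required input fields (`input`, `display_formats`, `orders`) are present from the start, while the optional fields appear only after their node runs, matching the state class described above.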
We assign the compiled graph to this graph variable because we're using LangGraph deploy, and LangGraph deploy needs to know exactly where to find the compiled graph it will serve on the cloud. That's what the langgraph.json file is for; it's where we define our LangGraph configuration. We list our dependencies, in our case our gen_ui_backend directory, which has a requirements.txt file that lets LangGraph know which dependencies to install on the server. Then we have this graphs dictionary, which contains all of the graphs we want on our server. In our case we only have one, but you could have as many as you'd like; they just need to map to a compiled graph. Ours is named genui_graph, and its value contains a file path (gen_ui_backend/charts/chain) plus the name of the variable the compiled graph is assigned to. Finally, for local testing and development, we specify where our environment variable file is, but we're not running locally here, so we don't need to worry about that.

Okay, now we can get into the actual code behind the nodes, starting with our first node, generate filters. It takes in the agent state and returns the agent executor state. Here we're only returning one field, because LangGraph lets us return a single field and will append or update that field on the full state, then pass the full state, with that addition or update, to the next node. So we get our state and define our prompt. The prompt is pretty simple: "You are a helpful assistant. Your task is to determine the proper filters to apply given a user input. The user input is in response to a 'magic filter' prompt; they expect their natural-language description of the filters to be converted to a structured query."

We then get all of our product names: using our state's orders field, we extract the product names, convert them to lowercase, and pass them to our filter schema function. This function takes in the product names and returns a new Pydantic class, FilterSchema. You'll recognize these filters; they all correspond to the filters we saw in the frontend dialog, because these are the filters the LLM can select. They're all optional, since the LLM doesn't necessarily need to select a filter, but if it wants to, it can select something like product names to filter by specific products, statuses to filter by one or multiple statuses, minimum discount percentage, and so on.

We get our schema and instantiate a new ChatOpenAI class. You can use any chat model which supports tool calling; in our case we're using an OpenAI Turbo model because it's quick. We then call with_structured_output with our schema, which binds that tool to our model and forces the model to invoke it. Using the LangChain Expression Language, we pipe our prompt into our model and invoke our chain. If you're not familiar with LangChain Expression Language, real quick: this invoke call runs our prompt, passing in the input field that holds our human input. It takes whatever content is there (in our case the magic filter input), inserts it into that field, converts it to an OpenAI-compatible format, and passes it to our model, which hits the OpenAI API and gives us a response. Since we're using with_structured_output, it already contains a tool parser, so the result comes back already parsed into just the tool call, and we can update our selected filters state field with the selected filters, if any were chosen.

The next few nodes we're going to move through quickly, because as we saw in the diagram they're all pretty similar to the first: an LLM call with a unique prompt, an output parser, a schema, and a returned state update.
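Here is a sketch of the generate filters node contract: full state in, one updated field out. The real node binds a Pydantic schema to the model via with_structured_output; here the LLM call is stubbed out (and the schema written as a plain dataclass) so the flow is visible without API keys. All names are illustrative, not the repo's exact code:

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class FilterSchema:
    # All fields optional: the model is free to select no filter at all.
    product_names: Optional[list] = None
    min_discount_percentage: Optional[float] = None
    state: Optional[str] = None

def fake_structured_llm(user_input: str) -> FilterSchema:
    """Stand-in for (prompt | model.with_structured_output(FilterSchema))."""
    if "discount" in user_input:
        return FilterSchema(state="CA", min_discount_percentage=1.0)
    return FilterSchema()

def generate_filters(state: dict) -> dict:
    result = fake_structured_llm(state["input"])
    # Drop unset fields, mirroring "update the state if any were chosen".
    selected = {k: v for k, v in asdict(result).items() if v is not None}
    return {"selected_filters": selected}

print(generate_filters({"input": "orders in California with a discount"}))
```

The returned dict contains only the one field the node is responsible for; LangGraph merges it into the full state before calling the next node.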
Generate chart type takes in our state. This prompt is a little more complex: "You are an expert data analyst. Your task is to determine the best type of chart to display the data based on the filters and user input. You are provided with three chart types: bar, line, and pie. The data being filtered is a set of orders from an online store. The user has submitted an input describing the filters they'd like to apply to the data. Keep in mind each chart type has set formats to display the data." We saw this earlier with our data display formats, so we tell the LLM it should consider the best display format when selecting its chart type, and we pass in those display formats. Each contains the key it can use later to reference the format, the title of the format, and the description. The LLM uses this, plus the bar, line, and pie options, to select the best chart for whatever format it wants to render the data in. We also pass the selected filters, which were generated earlier, and the user's magic filter input as extra context. Then we have our schema, which contains the chart type, just one of those three; the LLM picks one, we pipe it to our model, invoke our prompt, and update the chart type state field.

Next, generate data display format. Now we know what the chart type is, so we say "you are an expert data analyst" and basically tell it to pick the proper data display format based on the chart type. We pass in the extra context, and then we define our display key, which carries a description saying "the key of the format to display the data in; must be one of", followed by all the keys from the display formats matching the chart type. This display format schema contains the title (the chart type: bar, line, or pie), the description, and also a key, some unique key we can use later to identify which data display format was selected. Back in our node, we pass all that through, the LLM selects the data display format it deems the best way to render the data for the user, we invoke our model, and we update the data display format state field in our graph with whatever it selected.

Finally we have our filter data node, which just takes the selected filters from the state and the orders we passed in originally, then runs some if/else statements that filter the data. If an order makes it past all the filters, we append it to a filtered orders list, and then we return that via the orders field. That overwrites the orders field: originally it held all of the orders, but now it holds only the filtered ones. That's it for our backend.

Now we can look at exactly how to deploy this to a fresh LangGraph Cloud app, connect it to the frontend, run it, and look at some more demos. You're going to want to find your LangSmith app; if you're not already on LangSmith, I'll include some links down below where you can sign up. Then you want to click New Deployment. Your screen will probably look a little different than mine if you don't have any deployments already, but the New Deployment button should be in pretty much the same place. You click New Deployment, connect your GitHub account (I've already connected mine), and select the repo; I want to deploy our gen UI Python app. I do that, add a name, test deployment, and then specify the path to where our langgraph.json file lives.
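For reference, a langgraph.json for this setup would look roughly like the following. The directory name, graph name, and module path here are reconstructed from the spoken walkthrough, so treat them as assumptions and check the repo for the exact values:

```json
{
  "dependencies": ["./gen_ui_backend"],
  "graphs": {
    "genui_graph": "./gen_ui_backend/charts/chain.py:graph"
  },
  "env": ".env"
}
```

The graph entry maps a name to "path-to-file:variable", pointing at the module-level variable the compiled graph was assigned to, and the env entry is only used for local development.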
The langgraph.json file lives inside the backend directory in this repo, so I enter backend. Then there's the git reference. I'm specifying a custom branch because I have some changes which aren't merged to main yet, but this could be any git branch you have; add your git reference, in my case it's brace/improved-viz. Then there's the deployment type. I'm just using development, so I select Development, but if you're going to production you'll want to select Production. Oh, I closed the form; let me open it back up. It's all pretty much saved, though I have to add the test deployment name back in. Then there are environment variables. This is where you add any environment variables you might need for your app, so I'm going to add my OpenAI secrets and some other things, and then we'll come back.

So I've added my OpenAI API key, and then I add a tracing project; this is where all the runs from this app will get sent. I don't have a test deployment tracing project yet, so we'll just use the default name it selects, and hit Submit. Now that I've submitted my deployment, it's going to load and then deploy my app. This takes a few minutes, so I'm going to use the deployment I already have, and we can see how to connect it to the frontend and actually make API requests to it.

Okay, now we're at my deployment. We can see it's currently deployed, and we have this API docs link right here we can click on. While you're here you can look at any API endpoints you might want to hit, but in our case we want this base URL; this is the URL we're going to use to connect to it. Below it we see the API key. This is a normal LangSmith API key (you can generate those in your LangSmith settings), and we're going to pass it under a custom header, X-API-Key. Once you have your API key and your URL, you can set them however you like; in my case I wanted them as environment variables, so I've set my API URL under a LangGraph Cloud API URL variable and my API key under a LangGraph Cloud API key variable.

Next, you'll want to instantiate a new instance of the Client from the LangGraph SDK. You can see I've imported Client from the LangGraph SDK and passed in my API URL and my default headers containing my API key. This is the SDK we're going to use to connect to our backend and make API requests. The first thing I do is search for my assistant. LangGraph deploy offers a whole lot of features around stateful threads, where you can revisit certain steps and persist the data that was in them; in this case we don't really need any of our previous threads or runs, so I just grab the first assistant. Then I have this RunnableLambda. The reason is that the streamRunnableUI function we use to stream UI components to the client requires a LangChain runnable to invoke, and the LangGraph SDK does not expose a stream events method, so inside the RunnableLambda I call client.runs.
stream, which hits my LangGraph API and streams back events the same way. I yield each of those events as they come in, and because I'm yielding them inside a runnable, I can pass it to my streamRunnableUI function, call stream events on that runnable, and extract all of the events the same way I would if I were calling stream events directly. It's also important to set this run name, because of one small implementation detail inside streamRunnableUI we don't need to worry about: the name needs to line up with whatever name you've assigned to the runnable that wraps your LangGraph SDK stream call.

Then, as you can see right here, we have streamRunnableUI. If you watched our previous videos you're familiar with this function; we went into detail there on exactly how it works, so we won't repeat that here, but you should go check out those videos. We're passing in our runnable; the inputs we got from our frontend, which contain the content plus the orders and display formats; and our event handlers. These event handlers are called each time an event is yielded in our stream events call. In our case we have just one handler. Inside it we get our stream event, which is each event that's yielded, and then some fields: a UI field and callbacks. We only care about the UI field here. It comes from the AI SDK, and it's what allows us to update the UI using React; it does some heavy lifting under the hood that we don't need to worry about.

We iterate over all of the events, and we only care about an event if it's on_chain_end. That's because LangGraph and LangChain stream events emit events for everything that happens in this graph, so we get an on_chain_start, some events inside the chain, and then an on_chain_end. We only care about on_chain_end because we only want the result of each node, so if the event does not equal on_chain_end, we return.

Next we check the name. These names all correspond to the names we gave our nodes in the backend, so if the name is generate filters, we grab the filters that were applied and call this handle selected filters function. It maps over all of our filters and assigns each filter's value to a filter button, which is what we see on the frontend right there. When it's done it calls ui.update, which actually updates the UI with these filters so the user can see them right away, as soon as the filters come in. All of these handler functions do something similar: they take the result of a node's output and update the UI with some data.

We can jump right to the last one, where we branch on our chart type; say it's a bar chart. This display data object contains a props function where we pass in our filtered orders, and it formats those orders in a way that can be passed to the bar chart, pie chart, or line chart. We then assign our chart variable to either the bar chart, pie chart, or line chart component, and we update our UI with, as we saw, the title of the display format, the description, and the chart itself. The earlier elements are already there, and we don't need to update or append them again, because when you call ui.update, all it does is replace the last JSX element in the UI, while ui.append adds a new JSX element, after which we can no longer update the earlier elements. So when we added our first buttons right here, we called ui.update, and the next time we added a UI element we called ui.append, which set the buttons in stone so we can't touch them again. At the end, we make one final call to ui.update.
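The update/append semantics just described can be modeled in a few lines. This is a toy stand-in (the real behavior lives inside the AI SDK's streaming UI helpers, and the frontend is TypeScript; this is just the logic, sketched in Python): append adds a new slot and freezes everything before it, while update only ever replaces the latest slot.

```python
class StreamedUI:
    def __init__(self):
        self.elements = []

    def append(self, element):
        self.elements.append(element)  # earlier elements can no longer change

    def update(self, element):
        if not self.elements:
            self.elements.append(element)  # first paint
        else:
            self.elements[-1] = element  # replace only the most recent element

ui = StreamedUI()
ui.update("<FilterButtons/>")       # filters render first
ui.append("<LoadingLineChart/>")    # filters are now frozen in place
ui.update("<LineChart data=.../>")  # loading state swapped for the real chart
print(ui.elements)
```

This is why the loading chart can be swapped out for the final chart while the filter buttons above it stay untouched.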
That final ui.update adds our bar chart, replacing whatever was there before; in our case that's the loading component, which gets replaced by this update call. Then it finishes and gets rendered to the client.

If we go back, we can look at a couple more examples, and then we can look at the LangGraph Cloud playground, which is how you can test this and iterate on things in development. Okay, we're back in the UI, and we can open our display types modal, which shows all the different ways we can render this data. Let's say we want to see our weekly order volume: the x-axis is the date and the y-axis is the number of orders. This chart will show you the trend of order volume over time and let you identify peak ordering weeks. That seems interesting, so let's say "orders by week after 2023-10-01 and with an order price of more than $200". We hit submit and wait for it to load. We see our after date, 2023-10-01, and our minimum amount to filter by, 200, and there we go, we have our bar chart. As we saw here, we wanted a bar chart where the x-axis is the dates; let's see if that lines up. Yep, the x-axis is dates and the y-axis is the number of orders, which makes sense: for each week, this date is the start of the week, shown with the number of orders.

Now let's go request another type of chart and see if we can get it. We open up our display types modal and scroll down. Let's say we want a pie chart; how about quarterly order distribution, which groups orders by quarter using the ordered-at field. So let's say "show me orders by yearly quarter", hit submit, and wait for it to load. I don't think we should get any filters, and yeah, we have no filters right here, and it selected our pie chart just like we wanted, showing all the orders in each quarter.

Now that you've seen how that works, let's take a look at the LangGraph Cloud playground and see how you can test this in a development mode and iterate on your LangGraph graph a little more. We go to LangSmith, go to our deployment page, and click Playground, which opens in a new tab. Here we see all the different nodes. These nodes are drag-and-droppable; we can click here to add interrupts, marking a node so that execution pauses when it's reached. We can look at all of our previous threads, create new threads, and actually invoke our chain. I'm going to paste these inputs in because they're kind of long, and we'll come back once they're in so we can submit.

Okay, we're going to place an input which says orders by week after this date with a price of more than this price. I've added my display formats, because these are also what's passed to the chain we first invoke from the client, and also our orders; I just copy-pasted them all in. Once those are in, we hit submit, and we should see our graph executing in real time right here. It loads, and then, boom, it's flying through everything. We can see over here exactly which nodes it's hitting, and when it's done it exits out there. We can go look at all the different nodes and the outputs from each one: filter data returned orders, generate data display format returned the display format, chart type returned the chart type, and so on. We can see exactly which filters were applied, and it gives us a much better idea of what's going on inside our chain. We can do things like edit those fields, run from specific nodes with different inputs, and see how our graph reacts based on different inputs. It gives us a much quicker iteration cycle, because we don't need to hit the entire API at once or go modify the code; we can just edit the inputs and outputs and rerun from right there.

That's the end of this video. I hope you all have a much better idea of how to build a generative UI application like we saw here, and a good idea of what LangGraph Cloud is. Go try it out on your own: it's a lot of fun to build with LangGraph, and now LangGraph Cloud makes it so much easier to deploy to production. I hope to see lots of interesting and fun gen UI apps running on LangGraph Cloud in the future. See you in the next one.