Transcript for:
Advancements in Knowledge Graphs for RAG

All right, so let's go ahead and get started. My name is Jonathan Larson, and I'm excited to be here today to talk to you about GraphRAG. GraphRAG is an LLM-derived knowledge graph for RAG, and just to give you a small teaser, what you see over here on the right-hand side is actually a picture of Kevin Scott's Behind the Tech podcast as an LLM-derived knowledge graph, an LLM memory representation. We'll be covering this in about the middle of the presentation today, but I wanted to give you a teaser of what's to come.

So what is GraphRAG? GraphRAG is really a two-step process. First, it is an indexing process that runs over top of private data to create LLM-derived knowledge graphs. These knowledge graphs serve as a form of LLM memory representation, which can then be used by subsequent steps to do better retrieval. That leads us to the second part: it's an LLM orchestration mechanism that utilizes those pre-built indices, and those indices can be used to construct much better, more empowered RAG operations.

That leads us to the key differentiators of what GraphRAG allows us to do. First, it helps enhance search relevancy, because it has a holistic view of the semantics across the entire data set. Second, it helps us enable new scenarios that would today require a very large context window, for example doing holistic data-set analysis for trends, summarization, aggregation, things like that. If you haven't had a chance to read them yet in the pre-read, I encourage you to take a look at the blog post and the arXiv paper; they have a lot more technical details, measurements, and evaluations in this space. But the one thing I want to ask of you today, as we go through this presentation, is: how should we best drive impact with this technology?

Okay, so let's go ahead and explore how GraphRAG actually works. The way we explain GraphRAG is first to explain, of course, how baseline RAG works. In baseline RAG, you take a private data set, you chunk it up, you compute embeddings, and you store them in a vector database. Then you perform a nearest-neighbor search, and you can use those nearest-neighbor results to augment the context window.
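To make that concrete, here is a minimal sketch of that baseline pipeline. The `embed` and `complete` helpers are hypothetical stand-ins for whatever embedding model and LLM you actually call; none of these function names come from GraphRAG itself.

```python
# A minimal baseline-RAG sketch. `embed` and `complete` are hypothetical
# stand-ins for an embedding model and an LLM completion call.
from typing import Callable
import numpy as np

def chunk(text: str, size: int = 600, overlap: int = 100) -> list[str]:
    """Split a document into overlapping character windows."""
    return [text[i:i + size] for i in range(0, len(text), size - overlap)]

def build_index(docs: list[str], embed: Callable[[str], np.ndarray]):
    """Embed every chunk of every document into an in-memory 'vector store'."""
    chunks = [c for d in docs for c in chunk(d)]
    vectors = np.stack([embed(c) for c in chunks])
    return chunks, vectors

def baseline_rag(question: str, chunks, vectors, embed, complete, k: int = 5) -> str:
    """Nearest-neighbor retrieval, then answer with the retrieved context."""
    q = embed(question)
    sims = vectors @ q / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(q))
    top = [chunks[i] for i in np.argsort(-sims)[:k]]
    prompt = "Answer using only this context:\n" + "\n---\n".join(top) + f"\n\nQuestion: {question}"
    return complete(prompt)
```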
GraphRAG is a parallel process to the way that baseline RAG works. What we do is take the same text chunks, take the sentences being extracted, and ask the LLM to perform reasoning operations over top of each sentence in a single pass through all of the data. So let's take this example sentence: here we have the PO leader Sylvia Mar, who took the stage with Luo Jack, founder of Save Our Wildlands. You can see there's some named entity recognition done over top of the sentence, and that's pretty typical for this type of text analysis. The major differentiation here, however, is that we're not just looking for the named entities; we're looking for the relationships between those entities and the strength of those relationships, and this is where GPT-4 really comes in to play a very strong leading role in the capability of this technology. For example, we can see that Sylvia Mar is very strongly related to the PO, because she is its leader. We can see that she is perhaps weakly related to Save Our Wildlands, because she is apparently taking the stage with its founder, but she's not the leader of that organization. That's the major differentiation: GPT-4 can understand the semantics of these relationships, and that allows us to create weighted graphs from those relationships that are far richer than plain co-occurrence networks, which is where traditional NER would typically take this type of problem.

Once we create these knowledge graphs, say we took all of these sentences across the data set and built a knowledge graph, what you get is a series of nodes that are connected to each other via these relationships. But that's not all we can do. Once we have the graph, we can utilize graph machine learning to do semantic aggregation and hierarchical community detection over top of those structures. If you take this graph right here that we just created, which has no labeling on it, no colors if you will, we can create a labeling at one level and then hierarchically create subpartitions of subpartitions until you get down to individual nodes. This effectively gives us a granular filter that allows us to ask questions at any level of granularity across the data set for a semantic topic. Once that is built, we can take it into a variety of different end use cases; just to list a couple, we could do data-set question generation, we could do summarized Q&A, and there are a variety of other methods we can talk about later today.
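Before the demos, here is a rough sketch of the extraction step just described: for each text chunk, ask the LLM for entities plus weighted relationships and fold everything into one graph. The prompt wording and JSON shape are illustrative, not the actual GraphRAG prompts, and `complete` is the same hypothetical LLM-call stand-in as before.

```python
# Sketch of LLM-driven entity/relationship extraction into a weighted graph.
import json
import networkx as nx

EXTRACTION_PROMPT = """From the text below, list the named entities and the
relationships between them. Rate each relationship's strength from 1 (weak,
e.g. merely appearing together) to 10 (strong, e.g. 'is the leader of').
Return JSON: {{"entities": [...], "relationships":
[{{"source": ..., "target": ..., "description": ..., "strength": ...}}]}}

Text: {chunk}"""

def extract_graph(chunks: list[str], complete) -> nx.Graph:
    graph = nx.Graph()
    for chunk in chunks:
        result = json.loads(complete(EXTRACTION_PROMPT.format(chunk=chunk)))
        for entity in result["entities"]:
            graph.add_node(entity)
        for rel in result["relationships"]:
            # If the same pair shows up in several chunks, keep the strongest
            # rating and remember every supporting description for provenance.
            u, v = rel["source"], rel["target"]
            if graph.has_edge(u, v):
                graph[u][v]["weight"] = max(graph[u][v]["weight"], rel["strength"])
                graph[u][v]["descriptions"].append(rel["description"])
            else:
                graph.add_edge(u, v, weight=rel["strength"],
                               descriptions=[rel["description"]])
    return graph
```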
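And here is a rough illustration of the hierarchical partitioning step on top of that graph. GraphRAG has its own community-detection pipeline; Louvain from networkx is used here purely as a stand-in, applied recursively so each community splits into sub-communities down toward individual nodes.

```python
# Recursive community detection as a stand-in for GraphRAG's hierarchy building.
import networkx as nx

def hierarchical_communities(graph: nx.Graph, max_size: int = 20, level: int = 0):
    """Yield (level, set_of_nodes) pairs, from coarse topics down to fine ones."""
    communities = nx.community.louvain_communities(graph, weight="weight", seed=42)
    for nodes in communities:
        yield level, nodes
        if len(nodes) > max_size and len(communities) > 1:
            # Recurse on the induced subgraph to get the next level down.
            yield from hierarchical_communities(graph.subgraph(nodes),
                                                max_size, level + 1)

# Usage: for level, topic_nodes in hierarchical_communities(graph): ...
```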
So let's go ahead and jump into some demonstrations and show you this technology in action. As I bring up the screen, what you're going to see are three different columns, and I also need to explain the data set behind each of the columns we're going to be analyzing. We have three different RAG systems implemented over top of one data set. The data set is about 3,000 articles in total, articles that originate from both the Russian and the Ukrainian side of the conflict, and we're going to ask all three RAG systems a question: what is Novorossiya, and what are its targets? That's actually two questions. If you're not familiar with it, Novorossiya refers to the Russian-occupied portions of Ukraine; it's also a political movement. The targets part of the question is looking for what Novorossiya might be looking to destroy. We know there's some information inside this data set, so let's see how we can use RAG-based retrieval to find those particular targets.

Using traditional LangChain-based semantic search, which is just baseline RAG, right here we can see it actually fails to answer either of the two questions. Sometimes it can answer the first part of the question okay, but today it decided it couldn't come up with anything. The second column is a much improved version of RAG. As people who have been using RAG know, it oftentimes requires a lot of tuning, prompt engineering, and other improvements to make RAG work more effectively, and we can see that here. So the left-hand column is baseline RAG, and the middle column is a much improved, supercharged version of RAG. If you read through this text, and I'm just going to scroll through it real quick, you'll see it does an okay job of talking about the first part of the question, what is Novorossiya, but it fails to mention anything about the specific targets that Novorossiya is looking to destroy.

In comparison, over here on the right-hand side, I'm going to highlight the first paragraph. This first paragraph addresses the "what is Novorossiya" part of the question, which is good, but the second paragraph gives us a list of very highly specific targets that Novorossiya was looking to destroy, for example the national television company of Ukraine, a radio station, a cannery, what looks like PrivatBank, and Roshen properties, and they're also planning terror attacks in the city of Odessa. This is exactly what we're looking for; these are the specifics we were seeing recall failures on in the baseline RAG operations in the left two columns.

Now, when we perform GraphRAG, one of the nice features is that it also allows us to look at the underlying provenance. You'll see inside the text here that it refers to relationships, and we can actually take a look at those relationships. I'm going to move to a second tool, which is a purely GraphRAG tool. I have two columns here: one is doing a local search, one is doing a community-based search; we'll get into the details of each of these later. Both of them answer the question correctly, but I'm going to focus on the claim that Novorossiya has targeted several entities for destruction. Let's go ahead and click into those relationships, and if we open up the raw text we can get the English translation, because I can't read Russian or Ukrainian, and get to the originating text chunk that was used to make the specific claim made above. This can really help in understanding whether there are hallucinations being made by the system and in detecting them, and it also provides grounding and evidence, which is critical for analysts trying to use this data for their production purposes.

The other thing we can do in here, of course, is use a second agent to help reduce hallucinations by providing a verification score. In this process, we take the information that was provided in the context and the answer that was produced, and use an independent agent to evaluate the two of them together, asking whether anything was hallucinated in those results. That score can really help provide an after-the-fact analysis as to whether the information was correctly grounded or not.
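As a sketch of what that verification step could look like: hand the retrieved context and the generated answer to a second, independent LLM call and ask it to score how well the answer is grounded. The prompt wording and the 0-10 scale here are illustrative; `complete` is the same hypothetical LLM-call stand-in as in the earlier sketches.

```python
# Sketch of a second-agent verification score for hallucination checking.
import json

def verification_score(context: str, answer: str, complete) -> dict:
    """Ask an independent LLM call whether the answer is grounded in the context."""
    prompt = (
        "You are checking another model's work.\n\n"
        f"CONTEXT:\n{context}\n\nANSWER:\n{answer}\n\n"
        "List any claims in the answer that are not supported by the context, "
        "then give a grounding score from 0 (fully hallucinated) to 10 (fully grounded). "
        'Reply as JSON: {"unsupported_claims": [...], "score": <n>}'
    )
    return json.loads(complete(prompt))
```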
Now let's go on to some of the capabilities where GraphRAG opens up new opportunity spaces where regular RAG struggles. I'm going to ask the question: what are the top five themes in the data? When you ask this question of a typical baseline RAG system, it's going to take the phrase, vectorize it using something like an Ada embedding, and look for the nearest neighbors. The problem is that there's nothing in this query that would indicate any specific filtering over top of that data set, unless someone inside the data set already wrote "what are the top five themes of the data," which is highly unlikely, because each of these articles is independent of the others and they don't have any knowledge of one another.

So what does baseline RAG give us in this case? It gives us the state of the Russian economy; it did, in this case, also come up with one example acknowledging that there seems to be a war; then the national rating, the investment climate, improving the quality of life, and a meeting of Vladimir Putin. I want to take a step back for a second and emphasize that this is a data set that's primarily about a conflict. About 80% of the articles are about the conflict, and yet about 80% of the bullet points here don't have anything to do with the fact that there's a war going on.

In contrast, because we can use the semantic, thematic, and aggregative approaches that we've built over top of the graph machine learning parts of GraphRAG, we actually have a holistic understanding of what's happening in the data set. On the GraphRAG side of things, you can see that front and center the first thing it's talking about is the conflict and the military activity, and how that plays through each of the major themes it then gives you in context. I do want to draw your attention to one other thing here: baseline RAG used about 5,000 tokens of context and returned in about 8 seconds, while GraphRAG is a lot more expensive; it took about 50,000 tokens of context and about 71 seconds to respond. The important piece, though, is that while GraphRAG is using a lot more resources, it's providing a much richer and correct answer, which we've seen for our customers is the crucial piece and what they really care about; they're willing to pay the extra cost to get these much better answers.

Next, let's take a look at this data set from a network-map perspective. This is a visualization of the entire graph network of this data set, which is called the VIINA data set, about the Russian-Ukrainian war. Again, about 80% of it is on the war, so the main core of this is going to be about war topics. On the periphery, however, we'll find lots of normal topics, and one of the ones I like to look at in particular is this group over on the left: it turns out a lot of these entities are about soccer. You can see that they're semantically grouped next to each other in the embedded space, which is great, because it starts showing us that there are groups of entities that are correlated with one another. These colors serve as a way for us to look at the communities that define the semantic boundaries. For example, we can choose one of these colors; I'm going to choose the color for community number 450. I can extract the subgraph for it, and we can also extract pre-generated reports that the LLM has already generated for each of these community structures. In this case, we can see clearly that this is the Novorossiya community that was used to answer some of those queries I made earlier in the demonstration.
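Those pre-generated community reports are what the global, community-based search path draws on. A minimal map-reduce-style sketch of that path, under the assumption that you already have one report per community, might look like this; the prompts and helper names are illustrative rather than the actual GraphRAG orchestration.

```python
# Sketch of a global (community-summary) query: map over reports, then reduce.
def global_search(question: str, community_reports: list[str], complete) -> str:
    # Map: ask each community report what it can contribute to the question.
    partials = []
    for report in community_reports:
        partial = complete(
            f"Community report:\n{report}\n\n"
            f"What does this report contribute to answering: {question}? "
            "Reply 'NOTHING' if it is not relevant."
        )
        if partial.strip().upper() != "NOTHING":
            partials.append(partial)
    # Reduce: merge the partial answers into a single, holistic response.
    return complete(
        "Combine these partial answers into one comprehensive answer to "
        f"'{question}':\n\n" + "\n---\n".join(partials)
    )
```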
All right, let's switch context a little bit to a different data set. This is a data set where we took in all of the transcripts from the Behind the Tech podcast, which is of course hosted by Kevin Scott. We can use the same type of methods here with GraphRAG and ask holistic, thematic, and trend-type questions over top of this data set, where again regular RAG would tend to fail. The first question I'm going to ask is: what are the top 10 technology trends in the podcast? You can see here that it has some good breadth and diversity; we'll dive more into this later, so I'll ask you to maybe comment on some of these later, because we'll actually be able to see it side by side with Gemini. The next question, and this is an important one because we have a lot of other demonstrations tied to it, is: what are the most odd conversations discussed? Again, this is a question where, if you imagine baseline RAG being run over the top of this, it would perform very poorly. It would effectively pull back random chunks, and if those chunks had randomly picked up something with some odd conversations it would comment on them, but it's not going to have the comprehensiveness and diversity that the GraphRAG approach will in this case.

The last example I'm going to show you here is a side-by-side. As I mentioned before, we have orchestration that runs over top of the indices, and we have two of those methods that we're highlighting here today. One does a local search of the knowledge graph, so it looks for, in this case, nodes discussing artificial intelligence and looks at the nearest neighbors; the other is a global search, which is a more expensive operation that looks at the community summaries. You can see the comparison of the depth and breadth between these two: very clearly, the global summary is providing a much more comprehensive and diverse view than the local search.

So let's take a look at Kevin's podcast knowledge graph in an interactive graph visualization tool. Each of the nodes here represents an entity that the LLM extracted, and again, I just want to take you back for a second: this knowledge graph did not exist before the LLM was exposed to all the transcripts. It read through all those transcripts and created this from nothing, and that's pretty cool. If we take a look at the colors, the colors are semantic partitions that represent a high-level topic. One thing I should mention, because I can only show one level of the hierarchy at a time, is that we're looking at the root level of this graph's hierarchy right now. If we zoom into this node right here, we can see it represents Kevin Scott. Zooming out and looking up a little, we can see that this node, Microsoft, is another very highly connected one, and another one right here, which is a major landmark, is Christina Warren, who helps out on the podcast as well.

Now, if we zoom into one of these colors in particular, let's take a look at this green section over here. One of the things we're going to notice is that we can see all the entities being pulled out, things like RNA virus, spike glycoprotein, SARS coronavirus 2, and things like synthetic biology and computational biology. What's interesting here is that it has actually grouped two episodes together into the same semantic topic: we have one with Drew Endy in episode 22, and one in episode 33 with David Baker. I looked these two people up, and it looks like they're both in the biology field, so it totally makes sense. It goes to show that GraphRAG is actually working, in the sense that it pulled out all the entities and then grouped them semantically together, which is exactly what we want, because that will help us answer those questions. If we get biology-type questions on the orchestration side of the house, we can then come to this portion of the graph to help better answer and augment those questions.
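For contrast with the global path sketched earlier, here is a rough sketch of what the local-search side could look like over the extracted graph: find entity nodes relevant to the question, walk out to their neighbors, and use the stored relationship descriptions as context. The substring matching and helper names are purely illustrative assumptions; a real system would match on entity embeddings rather than literal text.

```python
# Sketch of a local search: seed entities -> neighboring relationships -> answer.
import networkx as nx

def local_search(question: str, graph: nx.Graph, complete, max_facts: int = 30) -> str:
    # Naive seed selection: entities whose names appear in the question.
    seeds = [n for n in graph.nodes if str(n).lower() in question.lower()]
    facts = []
    for seed in seeds:
        for neighbour in graph.neighbors(seed):
            # Relationship descriptions were stored during extraction (see above).
            for desc in graph[seed][neighbour].get("descriptions", []):
                facts.append(f"{seed} -- {neighbour}: {desc}")
    context = "\n".join(facts[:max_facts])
    return complete(f"Context:\n{context}\n\nQuestion: {question}")
```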