Transcript for:
Graph Neural Networks in Wireless Networks

[Music] Hey everyone, welcome back. Today we're taking a deep dive into something pretty amazing: graph neural networks. GNN? Yeah, or GNNs for short, and we're going to be focusing specifically on how they're being used in wireless communication and networking. Now, we know you listeners out there are already familiar with machine learning, but GNNs, that might be a bit of a new frontier for some of you. Yeah, so think of this deep dive as your crash course, your express lane into the world of GNNs and how they're really changing the game in wireless communication. Absolutely. So to keep things organized, we're going to break this down into three main areas. We'll start off by really understanding what GNNs are, what makes them tick, and why they're such a natural fit for the challenges we face in wireless networks. Okay. Then we'll move on to looking at how GNNs are being used to tackle real-world problems at the physical layer. Right, so we're talking about things like optimal power allocation and beamforming, making sure those signals are strong and clear where they need to be. And then finally, we'll zoom out a little bit and go up to the networking layer. Okay. We'll see how GNNs are being used for routing, scheduling, even creating these super fast network simulators that can really speed up research and development. Now, I don't know about you, but I am already geeking out just thinking about all this. It's really exciting stuff. It is, so let's bring in our expert to guide us through this fascinating world of graphs. All right, happy to be here. So let's kick things off by talking about a fundamental shift that's happening in machine learning these days. Okay. So traditionally we've been very focused on individual data points, right? You know, treating them as kind of isolated entities. Right. But now the real power is in understanding the relationships between those data points. Ah, okay. And what better way to represent relationships than
with a graph. Right, a graph, I could see that. So think about your wireless network: you've got your devices, your base stations, your users. All of these can be nodes in our graph. Okay. And those communication links between them, those are the edges in our graph. Ah, okay. And we can even weight those edges based on things like signal strength, interference levels, you name it. Oh, I see, I see. So it becomes like a visual representation of the entire network and how everything is connected. Exactly, it captures the whole structure and those crucial interactions. That's cool. Now, there were some earlier attempts to learn from graphs, things like DeepWalk and node2vec. Okay. But they kind of hit a wall. They weren't very scalable, you couldn't easily add new nodes to the network, and they completely ignored those valuable node features that we have in wireless networks. Right, right, each device has its own unique characteristics. Exactly, and those characteristics can be really important for making good decisions. Okay, so what was the breakthrough? What came after that? That's where GNNs step in, graph neural networks. Ah, GNNs. They're designed to overcome those limitations. Okay. So first off, they share parameters across the entire graph, which makes them incredibly scalable. Ah, so it doesn't matter how big the network gets. Exactly. And then new nodes? Yeah, no problem, they can adapt on the fly. Okay. And most importantly, they can actually use those node features to gain deeper insights. Ah, so they're not just looking at the connections, they're looking at what's at each node. Precisely, they're taking everything into account. Wow, okay, so I'm intrigued now, but how do they actually work? I mean, what's the magic under the hood? All right, so one of the key concepts here is what's called a message passing neural network. Message passing? Or MPNN for short. Picture each node in the graph having a little chat with its neighbors, sending messages back and forth, sharing information. Or like
gossiping about the network? Kind of. And each layer of the GNN aggregates information from those immediate neighbors. Okay. So as we stack more layers, we're essentially expanding each node's sphere of influence. So it's like a ripple effect spreading through the network. Exactly, it starts locally and then expands outward. Oh, I see, I see. Mathematically, each layer can be described by equations that update a node's information based on what it receives from its neighbors. That makes sense, that makes sense. Now, this MPNN framework has actually given birth to a whole family of different GNN architectures. Oh. You've probably heard of graph convolutional networks, or GCNs. GCNs, yeah. They are kind of the rock star in this world, very powerful and widely used. Right. All right, so we've got these powerful GNNs that can understand relationships and handle complex networks, but let's bring it back down to earth a little bit. Why are they so well suited for the specific challenges that we face in wireless communication? Right, good question. So it really comes down to a few key properties of GNNs that align perfectly with the nature of wireless networks. So first off, scalability. Scalability, okay. Real-world networks are constantly growing and changing, right? GNNs are built to handle that. They can be trained and tested on networks of all shapes and sizes. So they can grow with a network. Exactly, no matter how big it gets. That's important. And then second, distributed implementation. Okay. In a decentralized wireless network, you don't want all the processing happening in one central location. Right, that creates bottlenecks, it's not very efficient. Yeah, yeah. But GNNs, they lend themselves naturally to distributed computation, because that message passing we talked about, it can happen locally between neighbors. Ah, so each node is kind of making its own decisions based on what's happening around it. Exactly, no need for a central brain calling all the shots. I like that, I like that. And you know, in
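The layer-by-layer aggregation described above can be sketched in a few lines. This is a minimal sum-aggregation layer for illustration, not any specific published architecture; the weights here are random placeholders that would normally be learned:

```python
import numpy as np

# A minimal message-passing layer (a sketch, not a specific published
# architecture): each node sums its neighbors' feature vectors, combines
# that "message" with its own features, and applies shared weight
# matrices plus a ReLU. Stacking layers widens each node's neighborhood.
def mp_layer(A, H, W_self, W_neigh):
    """A: (n, n) adjacency matrix; H: (n, d_in) node features."""
    msgs = A @ H                          # sum of 1-hop neighbor features
    return np.maximum(H @ W_self + msgs @ W_neigh, 0.0)

rng = np.random.default_rng(0)
# A 5-node ring network: node i only hears nodes i-1 and i+1.
A = np.zeros((5, 5))
for i in range(5):
    A[i, (i - 1) % 5] = A[i, (i + 1) % 5] = 1.0
H = rng.normal(size=(5, 3))               # 3 features per node

W1s, W1n = rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
W2s, W2n = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))

H1 = mp_layer(A, H, W1s, W1n)             # after 1 layer: 1-hop info
H2 = mp_layer(A, H1, W2s, W2n)            # after 2 layers: 2-hop info
print(H2.shape)                           # (5, 4)
```

Note that the same weight matrices are shared by every node, which is exactly the parameter sharing mentioned earlier: the layer works unchanged on a network of any size.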
wireless networks, optimal decisions often depend on local information: interference levels, channel conditions, nearby devices, all that stuff. GNNs capture that local awareness beautifully. It's like they have their finger on the pulse of what's actually happening on the ground. Exactly. But here's something even cooler: GNNs are inherently permutation equivariant, or invariant. Permutation equivariant? Okay, now you're just using big words to try and impress me. Fair enough, but it's a really important concept. What it means is that if you shuffle the nodes around in a network, the underlying communication principles stay the same. Okay. GNNs understand this inherent symmetry. Like rearranging the furniture in a room. Exactly, the room is still the same, just the layout has changed. You got it. And this symmetry awareness gives GNNs a huge advantage in wireless networks. It's different from how, say, convolutional neural networks, or CNNs, work in image recognition. Yeah, CNNs, those are for images, right? CNNs exploit translation invariance. They understand that an object in an image is the same whether it's in the center or off to the side. Okay. But GNNs, they're all about the connections, regardless of where things are physically located. So CNNs are about location in space, while GNNs are about relationships, connections. Precisely. And this leads to a really exciting approach where we can blend those classic wireless algorithms, the ones that we've been using for years, with these data-driven GNN modules. So we're not throwing out the old stuff, we're just giving it a boost. Exactly, we get the best of both worlds: the insights of those established models combined with the flexibility and learning power of GNNs. Okay, I like that, I like that. All right, so we've laid the groundwork, we kind of understand the basics of GNNs now. Yeah. Now I'm really curious to see how this all plays out in practice. Are you ready to jump into some real-world examples? Absolutely, let's start with the
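The permutation property just described can be checked numerically. A minimal sketch, assuming a sum-aggregation layer with random weights: relabeling the nodes permutes the layer's outputs in exactly the same way, so the network's conclusions don't depend on how the nodes happen to be numbered.

```python
import numpy as np

def gnn_layer(A, H, W):
    # Minimal sum-aggregation layer; any per-node shared-weight layer
    # of this form is permutation equivariant.
    return np.maximum((A @ H) @ W, 0.0)

rng = np.random.default_rng(1)
n, d = 4, 3
A = rng.integers(0, 2, size=(n, n)).astype(float)
A = np.triu(A, 1); A = A + A.T           # symmetric, no self-loops
H = rng.normal(size=(n, d))
W = rng.normal(size=(d, d))

perm = rng.permutation(n)
P = np.eye(n)[perm]                      # permutation matrix

out = gnn_layer(A, H, W)
out_shuffled = gnn_layer(P @ A @ P.T, P @ H, W)  # same graph, relabeled

# Relabeling the nodes just relabels the outputs the same way:
print(np.allclose(out_shuffled, P @ out))  # True
```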
physical layer, that's where GNNs are already making a big impact. All right, lead the way. Okay, so we're diving down to the physical layer. What's happening down there? What are GNNs doing at this level? So down at the physical layer, it's all about making sure those wireless signals are getting where they need to go. Right. And one of the biggest challenges we always face is interference. Yeah, right, too much interference and your network's going to slow to a crawl. It's like trying to have a conversation in a crowded room: everyone's talking over each other, nobody can understand anything. Exactly. So one of the main weapons we have to fight interference is power allocation. Power allocation. And that basically means dynamically adjusting the transmit power of different devices. Okay. To try to squeeze out as much network capacity as possible. It's like finding the right volume for everyone in that crowded room, so that everyone can be heard without shouting each other down. Perfect analogy. But you know, it gets really tricky when you have tons of devices all trying to talk at the same time. Yeah, real-world networks are messy. They are. Channel conditions are changing constantly, you need solutions that can adapt quickly. That's where GNNs can really shine. Ah, okay, so how are they being used for this power allocation problem? Well, one powerful approach we're seeing is this idea of domain-inspired learning. Domain-inspired learning, okay. And that basically means we're combining the best of both worlds. We're taking those classic wireless algorithms. Right, the tried-and-true stuff. Exactly, and we're giving them a GNN boost, we're using GNN modules to enhance them and make them even better. Okay, so we're not throwing out those classic algorithms, we're just making them smarter. Exactly. So let me give you a good example. Okay. Let's take the classic WMMSE algorithm, a well-known algorithm for power allocation. Okay. It works, but it can be a bit slow, sometimes it gets stuck in suboptimal solutions, and it
doesn't always take advantage of all the extra information we have about the network. Right, right, it's like it's working with one hand tied behind its back. Yeah, kind of. So that's where the GNNs come in. Okay. Imagine you have a network with multiple transmitter-receiver pairs, and you want to find the optimal power for each transmitter. Right. So traditionally, you would use the CSI, the channel state information. CSI. Which basically tells you how well the signals are propagating between devices. Okay, like a map of signal strength. Right, but GNNs, they can go a step further. Oh, okay. They can incorporate not only the CSI, which we represent as a weighted adjacency matrix of our network graph, but also node-specific information. Node-specific? Like, well, things like queue length. Okay. Or user priority. So each node has its own, you know, unique characteristics. Right, and we can feed all that into the GNN. So it's getting a much richer picture of the network, not just those raw signal strengths. Precisely, it's understanding the whole context. And that allows GNNs to learn a direct mapping from all this combined information to the best power settings. It's almost like we're teaching the GNN to be a power allocation expert. Ah, okay, it's learning from all this data and figuring out the optimal way to adjust the power. Exactly, it's figuring out those complex interactions that influence power allocation. And there are some interesting GNN architectures out there that are doing this really well, like REGNN and IGCNet. REGNN and IGCNet, okay. So REGNN uses a standard kind of layered approach, each layer gathers information from a wider neighborhood in the network. Okay. And IGCNet, that one focuses on actually computing the pairwise interference between neighbors. So, two very different approaches. Right, right, both using GNNs. Yeah, and both achieving good results. Okay, now there's another really cool approach we should talk about. It's called algorithm unrolling. Algorithm unrolling, okay, that sounds
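To make those inputs concrete, here is a hedged sketch, not REGNN or IGCNet themselves, of how a CSI matrix and node features might be combined and mapped to per-transmitter power levels. The feature names (queue length, user priority) come from the discussion, but the one-layer policy and all weights are illustrative assumptions; in practice the weights would be trained against a rate objective.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative sketch: combine CSI (weighted adjacency matrix) with
# per-node features and read out a transmit power in [0, p_max] for
# each transmitter. One message-passing round, then a squashed readout.
def power_policy(H_csi, node_feats, W_self, W_neigh, w_out, p_max=1.0):
    hidden = np.maximum(node_feats @ W_self + H_csi @ node_feats @ W_neigh, 0.0)
    return (p_max * sigmoid(hidden @ w_out)).ravel()   # shape (n,)

rng = np.random.default_rng(2)
n = 6                                         # transmitter-receiver pairs
H_csi = np.abs(rng.normal(size=(n, n)))       # |h_ij|: cross-link gains (toy CSI)
feats = np.stack([rng.uniform(0, 10, n),      # queue length  (assumed feature)
                  rng.uniform(0, 1, n)], 1)   # user priority (assumed feature)
W_self = rng.normal(size=(2, 8))              # placeholder weights,
W_neigh = rng.normal(size=(2, 8))             # would be learned in practice
w_out = rng.normal(size=(8, 1))

p = power_policy(H_csi, feats, W_self, W_neigh, w_out)
print(p.shape)                                # (6,)
```

The sigmoid readout is one simple way to keep every output inside the feasible power range without an explicit constraint.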
interesting. It is. So what we're doing here is we're actually taking the steps of an existing algorithm, like that WMMSE algorithm we talked about. We're taking those steps and turning them into layers of a neural network, and then we use GNNs to refine those steps. Ah, okay, so we're using the algorithm as a starting point, but then letting the GNN kind of learn and improve upon it. Exactly. And one specific method that does this really well is called UWMMSE. UWMMSE? Unrolled WMMSE. Okay. And it's achieving some really impressive results: it's getting near-optimal sum rates, it's running faster than the original WMMSE algorithm, and it can adapt to changes in the network. Wow, that's amazing. So we've tackled power allocation, but you know, things get even more complex when we start talking about MIMO systems. You're right, multiple antennas. Yeah, yeah, beamforming. Right. So with MIMO, we're not just controlling the power, we're also controlling the direction of those signals. Right. It's like having a spotlight: you can adjust how bright it is, but also where it's pointing. That's a great way to think about it. So we're trying to focus the signal towards the intended receiver and minimize interference with other devices. Right, more precision, but also more complexity. Definitely. But the good news is we can extend that UWMMSE method we talked about to handle MIMO as well. Okay, so the same basic idea, but adapted for a more complex scenario. Right, and in this case the GNNs are actually learning to optimize both the power allocation and those beamforming vectors. The beamforming vectors, right, those determine the direction of the signals. Exactly. So UWMMSE is a bit of a multi-tool then. It is, it handles both those simple single-antenna cases and the more complex MIMO cases, all with the help of GNNs. Yep, and the results are really impressive: UWMMSE is outperforming other state-of-the-art methods in terms of both achieving high data rates and doing it quickly. That's awesome. So we've covered power allocation in
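The unrolling idea can be shown in miniature. To be clear about the assumption: this is not the actual UWMMSE update. It unrolls plain projected gradient ascent on the sum rate instead, with one "layer" per iteration; the per-layer step sizes stand in for the parameters that would be learned end to end.

```python
import numpy as np

# Sum rate of an n-pair interference channel with power gains G and
# transmit powers p: sum_i log2(1 + G_ii p_i / (noise + sum_{j!=i} G_ij p_j)).
def sum_rate(p, G, noise=1.0):
    signal = np.diag(G) * p
    interf = G @ p - signal              # off-diagonal interference
    return np.log2(1.0 + signal / (noise + interf)).sum()

# "Unrolled" solver: a fixed number of iterations, each with its own
# parameter (here just a step size), exactly the structure a GNN-refined
# unrolled network would train end to end.
def unrolled_power_control(G, p0, step_sizes, p_max=1.0, eps=1e-4):
    p = p0.copy()
    for alpha in step_sizes:             # one layer per step size
        grad = np.zeros_like(p)          # numerical gradient of sum rate
        for i in range(len(p)):
            d = np.zeros_like(p); d[i] = eps
            grad[i] = (sum_rate(p + d, G) - sum_rate(p - d, G)) / (2 * eps)
        p = np.clip(p + alpha * grad, 0.0, p_max)   # project to feasible set
    return p

rng = np.random.default_rng(3)
n = 4
G = np.abs(rng.normal(size=(n, n))) ** 2     # channel power gains
G += np.eye(n) * 2.0                         # strengthen direct links
p0 = np.full(n, 0.5)
steps = [0.1, 0.05, 0.02]                    # would be learned in practice

p_star = unrolled_power_control(G, p0, steps)
print(np.round(p_star, 3))
```

The point of the structure is that each layer stays interpretable as one iteration of the base algorithm, which is what lets unrolled methods inherit the classic algorithm's behavior while learning to run in far fewer steps.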
different forms. Mhm. But I know there's another area where GNNs are making a splash, and that's federated learning. Right, right. So how do they fit into that world? So in federated learning, you have multiple devices that are all working together to train a machine learning model, but they don't share their raw data. Right, right, they just share their model updates. Exactly. And power allocation is crucial here, because those devices need to upload their local model updates to a central server. Okay. And we want to do that as efficiently as possible. We have to consider delay, energy consumption. Right, we don't want devices draining their batteries too quickly. Exactly. And we also have to consider that different devices might have very different data. Right, some might have lots of data to upload, others might have very little. So it's a pretty tricky problem. It sounds like it. So how are GNNs helping with this? Well, researchers have developed a really interesting GNN called PDG. PDG? It stands for primal-dual graph convolutional power network. Wow, that's a mouthful. It is, but basically it's a GNN that's specifically designed to handle the challenges of power allocation in federated learning. Okay. And it works in two stages. Okay. First, it learns a power allocation policy based on the network structure and the goals of the federated learning task. Right. And then, during each iteration of the learning process, it applies that policy to determine the optimal power level for each device. It's like it's giving each device a personalized power plan. Exactly, to make sure those model updates are sent efficiently and contribute effectively to that global model. That's really clever. Yeah, and the results are promising: PDG is leading to lower transmission error rates, and it's significantly improving the overall performance of federated learning. That's awesome. So we've seen how GNNs are optimizing the physical layer. We've looked at power allocation in different forms, beamforming, even how they're being used in
federated learning. I'm ready to move up the stack now. What can GNNs do at the networking layer? Okay, so we've explored the physical layer, we've seen how GNNs are making things more efficient down there, but now I want to zoom out a bit and look at the networking layer. Okay, yeah. It's like we're moving from, you know, fine-tuning the engine to managing traffic flow on a massive highway. Right, right. What kind of challenges are we dealing with at this level? So at the networking layer, it's all about making sure that data gets where it needs to go: smoothly, efficiently, reliably. Okay. We're talking about things like routing, finding the best paths for data packets to travel through the network, and link scheduling, making sure those paths are being used effectively. Ah, okay, okay. So it's like air traffic control, but for data packets. Exactly, making sure everything flows smoothly, no collisions, no delays. I imagine these tasks get pretty complicated. Oh, absolutely. We're often dealing with these incredibly complex combinatorial optimization problems. Combinatorial optimization, okay, now you're just using big words to scare me. No, no, it basically means you have a ton of different options. Okay. And you're trying to find the absolute best one. Okay. It's like trying to find the shortest route through a city with thousands of streets, and the traffic patterns are changing every minute. Yeah, that sounds like a nightmare. It can be, especially in a wireless network. Nodes can move around, channel conditions are constantly fluctuating. Right, right, it's a very dynamic environment. It is. And to make things even more challenging, we often need solutions that can be implemented in a distributed manner. Distributed, okay, so each node has to kind of make its own decisions. Right, based on its local information. You don't want everything relying on a central controller. That makes sense, that makes sense. So where do GNNs fit into all of this? Well, as we've seen, GNNs excel at learning from structure. Mhm. But they
can't always handle the hard constraints that come with these networking problems. Hard constraints, what do you mean? Well, for example, in link scheduling, you can't have two devices transmitting on the same channel at the same time if they're within range of each other. Right, right, that would cause interference. Exactly, it's a fundamental rule, you can't break it. Okay, so how do we combine the flexibility of GNNs with those hard-and-fast rules? That's where a really clever framework called GDPG-Twin comes in. GDPG-Twin? Twin, yeah. It stands for graph-based deterministic policy gradient twin. All right, I'm going to try and remember that, but tell me, what's the basic idea? So think of it as a team effort. Okay. You've got the actor, that's a GNN that proposes actions, like which links to schedule, which routes to choose. Okay. Then you have the critic, another GNN that evaluates those actions and provides feedback on how well they're working. Okay. And then, to make sure the actor doesn't suggest anything crazy, anything that breaks the rules, we have a traditional algorithm acting as a safety net. Ah, okay, so it's like a system of checks and balances. Exactly: the actor proposes, the critic evaluates, and the algorithm makes sure everything stays within the lines. I like that, I like that. So let's see how this GDPG-Twin thing actually works in practice. All right, so one type of problem where it's really effective is what we call independent repetitive combinatorial optimization problems. Wow, that's a mouthful. It is, we just call them independent RCOPs for short. Independent RCOPs, okay. Think of it as solving a series of related puzzles. Okay. Each puzzle represents a network optimization problem, like, for example, scheduling links to maximize throughput. Okay. And the independent part means that we can solve each puzzle individually, without worrying too much about how our solution will affect future puzzles. So each puzzle is self-contained, it has its own optimal solution. Exactly. And in this
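The actor-plus-safety-net division of labor can be sketched like this. As assumptions: the "actor" below is a stand-in that scores links from random weights (in GDPG-Twin it would be a trained GNN), and the safety net is a simple greedy routine that turns the scores into a conflict-free schedule, an independent set of the conflict graph, so the interference constraint is never violated no matter what the actor proposes.

```python
import numpy as np

def actor_scores(features, w):
    # Stand-in for the actor GNN: a per-link desirability score.
    return features @ w

def greedy_safe_schedule(scores, conflict):
    """Safety net: take links in descending score order, skipping any
    link that conflicts with one already scheduled. The result is an
    independent set of the conflict graph, so the hard constraint
    (no two interfering links active at once) holds by construction."""
    chosen = []
    for i in np.argsort(-scores):
        if all(not conflict[i, j] for j in chosen):
            chosen.append(int(i))
    return sorted(chosen)

rng = np.random.default_rng(4)
n_links = 6
feats = rng.normal(size=(n_links, 3))   # per-link features (illustrative)
w = rng.normal(size=3)                  # actor weights (would be trained)

# conflict[i, j] = True if links i and j interfere and can't co-transmit.
conflict = np.zeros((n_links, n_links), dtype=bool)
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]:  # a chain of conflicts
    conflict[i, j] = conflict[j, i] = True

schedule = greedy_safe_schedule(actor_scores(feats, w), conflict)
ok = all(not conflict[a, b] for a in schedule for b in schedule if a != b)
print(schedule, ok)
```

In the full framework, the critic would score the resulting schedule and that feedback would train the actor, but the feasibility guarantee always comes from the deterministic repair step, not from the learned parts.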
setting, the GNNs in GDPG-Twin learn to recognize patterns in the network structure and use that knowledge to make better decisions. Okay. So one concrete example is using GDPG-Twin for the maximum weighted independent set problem. Maximum weighted independent set? Yeah, it's a classic problem in link scheduling. Basically, you're trying to find the largest-weight set of links that can be activated simultaneously without causing interference. Okay, so finding that balance between maximizing throughput and avoiding those conflicts. Exactly, and GDPG-Twin has shown really significant performance gains over traditional methods for this problem. Wow, okay, that's impressive. But what about those cases where decisions do have consequences down the line? Right, good point. So, you know, in many scenarios, what you do now can affect what happens in the future. It's like a game of chess, you have to think several moves ahead. Exactly. So in these cases, the GNNs in GDPG-Twin have to step up their game. Okay. They not only need to encode the current network state, they also need to learn to predict those future rewards based on the action taken. So it's a more strategic approach. It is. And one example where this is really important is delay-oriented link scheduling. Okay. Where the goal is to minimize the time it takes for data to travel through the network. You're not just trying to avoid interference, you're trying to get those packets to their destination as quickly as possible. So latency becomes a key factor. It does. And what's amazing is that GDPG-Twin can achieve performance comparable to much more complex reinforcement learning methods. Really? But in a more efficient way. Exactly, exactly, it's much more computationally efficient. That's great. So we've seen how GDPG-Twin can be used for those independent RCOPs and those scenarios where decisions have long-term consequences. Are there any other applications for this framework? Oh, absolutely, it's incredibly versatile. You can apply it to things like back
pressure routing in wireless ad hoc networks. Backpressure routing? Yeah, the goal there is to avoid congestion by making smart routing decisions. Or you can use it for congestion-aware distributed task offloading, where you're trying to decide which devices should share the workload to minimize overall congestion. So it's like a master traffic controller for the whole network. That's a good way to think about it. All right, all right, so I'm really impressed with this GDPG-Twin thing. Yeah, it's a powerful tool. But there's one more area I want to touch on before we wrap up, and that's those super fast network simulators that you mentioned earlier. Right, right, the digital twins. Yes. So traditional network simulators can be really slow. Right. It takes a lot of time to run those simulations. It does, they're very computationally intensive. Yeah, so how are GNNs changing that? Well, we can actually use GNNs to create these digital twins of the simulators. Okay. So essentially, we train a GNN to predict the key performance indicators of a network, you know, things like delay, jitter, throughput. Okay. Packet drops, all those important metrics. And it can do that much faster than running a full-blown simulation. So it's like a shortcut. It is, it's a way to get those results without having to wait for the simulation to crunch all the numbers. And that could be a huge timesaver. It can, and it opens up a lot of exciting possibilities. Like what? Well, we can evaluate network designs much faster. Okay. We can try out different configurations, we can even use these GNN-based simulators to train other machine learning models for network optimization. Wow, so it's like we're using machine learning to turbocharge our network simulations, and then using those turbocharged simulations to design even better networks. It's a really cool feedback loop. It is, it's a really exciting area of research. And one specific GNN architecture that's doing this really well is called PLAN-Net. PLAN-Net? Which stands for path,
link, and node network. Okay. And it's achieving incredible prediction accuracy while being orders of magnitude faster than traditional simulators, we're talking about 1,000x speedups or more. Wow, that's amazing, it's like having a crystal ball that shows you how your network will perform. It really is. And you know, this is just the beginning. GNNs are a relatively young field, but they have immense potential in wireless communications and networking. Well, I have to say, I'm really excited about the future of GNNs in this space. Me too, it's a fascinating field. We've seen how they can optimize the physical layer, manage traffic at the networking layer, even speed up the way we design and evaluate networks. It's been an amazing journey. It has. So for our listeners out there, the key takeaway is that GNNs are incredibly powerful tools for learning from the complex relationships in wireless networks. Yeah, they can be used on their own or in combination with those traditional methods that we've been relying on for so long. And they're being applied to a wide range of problems, from power allocation and beamforming at the physical layer all the way up to routing, scheduling, and even simulating entire networks. GNNs are really transforming the landscape of wireless communication. They are, and as the field continues to evolve, the possibilities are truly limitless. Absolutely. So that's it for our deep dive into GNNs. Thanks for joining us, we hope you found this episode insightful, and we hope you're as excited about the future of this technology as we are. Until next time: keep exploring, keep learning, and keep pushing the boundaries of what's possible in wireless communication.