Transcript for:
AWS Networking Foundations Overview

Welcome to NTA 307, AWS Networking Foundations. I'm Mike Kornstubble. I'm joined by my colleagues, Anoop Taluri and Nayeon Kamuri, who you'll meet here in a little bit. We all know that every application runs on top of a network. So I am so enthused to see so many of you here in the age of AI, here at a networking talk, right? So thank you so much for being here with us at Mandalay Bay. Hello to everyone in the simulcast over at Caesars, and a big shout-out to everyone streaming live from home. Most importantly, we know that networks run our applications, and networks operate deliberately, so we have to make deliberate decisions about the networks we build, because those decisions drive our customer experiences and our operational experiences going forward. My colleagues and I work in a space called Greenfield, where we help people that are new to Amazon build the best way forward so they have the best foundation and the best path to work from. And that's what we'll teach you here, to take you from VLAN to VPC. So let's think about our agenda. We'll talk a little bit about our global infrastructure, just to give you a review of what Amazon is and how we run our global infra. Your network stands on top of ours, so it's good to know what we're doing ourselves. We'll talk through getting started in a single VPC. Everyone's journey begins with a first step, right? Everybody's journey here starts with a single VPC. Then we'll talk about expanding beyond that. We'll think about hybrid connectivity to on-premises environments, and, most importantly, we'll also introduce you to partners that can help you along your journey. So first and foremost, AWS Global Infrastructure. What is it? AWS is comprised of 32 regions, which are places around the world where we put infrastructure. This infrastructure looks like this.
Our infrastructure in those regions is a collection of data centers, data centers that we put in constructs we call availability zones: redundant, highly available, highly scalable, highly fault-tolerant constructs that we then put in regions. Regions contain multiple availability zones, and those are connected with transit centers. An availability zone, by design, is a fully isolated infrastructure of one or more data centers. So when you see the term AZ or availability zone, think of multiple data centers connected together redundantly that are isolated, that are a meaningful distance apart, and that comprise that availability zone. They each have unique power infrastructure. They have hundreds of thousands of servers at scale. And most importantly, whenever you're running a workload in AWS in a region, it will be in an availability zone. These AZs are connected with redundant and isolated metro fiber into what we call a transit center. There are at least two transit centers per region. These transit centers connect our availability zones to each other and to our global network. Our global network is comprised of private capacity and leased capacity globally that connects us to multiple carriers and to ourselves, and takes your traffic from region to region and availability zone to availability zone. So now that you know the infrastructure that we have globally, which allows you to turn up anything you want globally in minutes, let's think about your first time getting started. Everyone gets started in a single VPC. The journey usually looks like this. We'll break it down quickly and then dive deep into what these pieces are. An individual's journey in a single VPC usually starts with picking a region, defining your availability zones, building a VPC, putting in subnets, putting in route tables and internet connectivity, and finally launching instances that are either private or public with public IP addresses.
So let's break this down in detail, piece by piece, and walk you through what it looks like to build exactly this. First and foremost, what's a VPC? A VPC is a virtual network that closely resembles a network that you would have in your own data center. Think about when you have a data center: you get a large network space, then you divide that into VLANs across your racks, right? A VPC is the same way. You have a large space that we then subdivide into subnets across our availability zones, which we'll talk about in the next slide. A VPC supports IPv4 and IPv6. The length of our IPv6 CIDR is fixed at /56, and our IPv4 CIDR can be between /16 and /28. And the reason why I say this is we'll plan our subnetting based on the size of the CIDR that you give to your VPC. A cool thing we have now, too: contiguous IPv6 CIDR blocks are available from Amazon. We're big, big, big IPv6 proponents. If you get an IPv6 block from Amazon and you have multiple VPCs, you can have contiguous blocks, which allows for really easy route summarization in v6. So think about that going forward. And speaking of IP address considerations, think VPC planning before creating it. Just like we said, you have one large CIDR block for that VPC. Think about using multiple VPCs in the future. Think about future region expansion. And what I mean by that is, if you're given one large IP space to delegate for your entire cloud operation, don't assign it all to one VPC. Think about what it's like to have multiple VPCs and multiple regions, and delegate that later. Because remember, VPC CIDRs can't be changed once they're made. And also, I would put at the bottom, because everyone's been there before: overlapping IP spaces equal a future headache, right? So we think about VPC planning and your network ranges so that you have a range that you can use in your cloud, that you can expand with, and that you can natively route without conflicts later. So let's think about subnetting.
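To make the planning advice concrete, here is a minimal sketch, using Python's standard ipaddress module, of carving a VPC CIDR into per-AZ subnets while leaving room to spare. The CIDR, AZ names, and subnet sizes are illustrative choices, not values from the talk:

```python
import ipaddress

# Hypothetical VPC CIDR: a /16 gives us plenty of room to carve
# out per-AZ subnets now and keep spares for future growth.
vpc_cidr = ipaddress.ip_network("10.0.0.0/16")

# Carve the VPC into /20 subnets: 16 subnets of 4,096 addresses each.
subnets = list(vpc_cidr.subnets(new_prefix=20))

# Allocate one public and one private subnet per AZ; the rest stay spare.
azs = ["us-east-1a", "us-east-1b"]
plan = {}
for i, az in enumerate(azs):
    plan[az] = {"public": subnets[i], "private": subnets[len(azs) + i]}

for az, tiers in plan.items():
    print(az, tiers["public"], tiers["private"])
```

Because every allocation comes from one parent block, none of the subnets can overlap, which is exactly the "plan first, avoid the future headache" point above.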
Subnets are parts of your VPC. Your CIDR is your VPC; you subnet from there. VPCs span a region. Subnets are allocated as a subset of that VPC in either IPv4 or IPv6 CIDR ranges. A subnet that we call public is considered public when it has a route to an internet gateway. We'll say that a lot, and we'll also explain what an internet gateway is, but it's that definitional term: whenever you see "public subnet" in an Amazon diagram, it means it has a route in its route table to an internet gateway. So in this case, we have a public subnet that is both IPv4 and IPv6. We can have two of them, one in each availability zone. Subnets do not span multiple availability zones. One subnet is in one AZ. You can have multiple subnets in an AZ, but one subnet will not span more than one availability zone. One subnet, one availability zone. You can have up to 200 subnets per VPC. Our private subnet here is also described, and we have a private subnet that's also IPv6. So notice our subnets can be either dual stack, v4 and v6, or v4 only or v6 only. Network access controls. We talked about subnets; let's think about our way of securing a subnet. This is not the only AWS security construct that's out there, but if we're thinking about security at the subnet level, a network ACL is what secures things at the subnet level. It's the thing that you've probably used in a bunch of different contexts a ton of times. It's IP-based, supports UDP and TCP ports, and has allow and deny rules. So let's say, in this scenario, we have a public subnet that has an EC2 instance, and we have a private subnet that has a MySQL DB in it. We want the public subnet to be able to access the database subnet, no more and no less. So in this case, we have our subnet range, and we're looking for port 3306 because we're talking MySQL. We can put in a network ACL that permits that traffic from that source to that port and denies everything else.
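A toy Python model of how that NACL evaluation behaves may help: rules are checked in rule-number order, the first match wins, and a catch-all deny sits at the end. The CIDR, rule numbers, and ports here are illustrative stand-ins, not values from the talk:

```python
import ipaddress

# Toy model of network ACL evaluation: rules are checked in rule-number
# order and the first match wins; the final catch-all rule denies everything.
rules = [
    # (rule number, source CIDR, protocol, destination port, action)
    (100, "10.0.0.0/24", "tcp", 3306, "allow"),   # app subnet -> MySQL
    (32767, "0.0.0.0/0", "any", None, "deny"),    # catch-all deny
]

def evaluate(src_ip, proto, port):
    src = ipaddress.ip_address(src_ip)
    for _, cidr, rule_proto, rule_port, action in sorted(rules):
        proto_match = rule_proto in ("any", proto)
        port_match = rule_port is None or rule_port == port
        if src in ipaddress.ip_network(cidr) and proto_match and port_match:
            return action
    return "deny"

print(evaluate("10.0.0.15", "tcp", 3306))  # allow: app subnet to MySQL
print(evaluate("10.0.0.15", "tcp", 22))    # deny: port not permitted
```

This captures the "permit that source to that port, deny everything else" posture described above; real NACLs are stateless and evaluated per direction, which this sketch does not model.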
Each subnet that you create comes with a network ACL that by default permits all traffic in both directions. Now let's think about route tables. Route tables contain a set of rules that allow us to route traffic. Route tables point to either gateways, peers, or endpoints. And if we think more about route tables, each subnet has an associated route table. A route table can be associated with multiple subnets, but a subnet cannot have multiple route tables. So in this case, we have two public subnets. They have a thing called route table one associated with them. Of course, because they're public subnets, they have a route to the internet gateway. So notice we have a ::/0 route for v6 and a quad-zero (0.0.0.0/0) route for v4 going towards an internet gateway, making those subnets public. We can then have two other private subnets that are associated with a different route table. That route table does not have a route to an internet gateway, and therefore those are not public subnets. Our public subnets above do have a route to an internet gateway. And I've said the term internet gateway a lot; this is what that is. An internet gateway is a horizontally scaled, redundant, and highly available component that is associated with your VPC and allows communication between your VPC and the internet. You have one internet gateway per VPC, right? So you have one internet gateway, it's associated with your VPC, and you then point routes to it to be able to use it. That's it. It supports IPv4 and v6, and it connects your subnets to the internet. So in this case, we have an internet gateway associated with the VPC. We then have a route table associated with that public subnet that has our quad-zero and ::/0 routes going to that internet gateway, and that allows internet connectivity from that public IP on that EC2 instance. Ooh, but I said that word, public IP, right?
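The lookups in those route tables follow longest-prefix match: the most specific matching route wins, which is why the local VPC route always beats the quad-zero default. A small Python sketch, with hypothetical route targets:

```python
import ipaddress

# Toy route table lookup using longest-prefix match, the way VPC routing
# behaves: the most specific matching route wins. Targets are illustrative.
route_table = {
    "10.0.0.0/16": "local",       # every VPC route table has the local route
    "0.0.0.0/0": "igw-12345",     # quad-zero route -> internet gateway
}

def lookup(dest_ip):
    dest = ipaddress.ip_address(dest_ip)
    matches = [
        (net.prefixlen, target)
        for cidr, target in route_table.items()
        for net in [ipaddress.ip_network(cidr)]
        if dest in net
    ]
    # Longest prefix (most specific route) wins.
    return max(matches)[1]

print(lookup("10.0.4.20"))      # local: stays inside the VPC
print(lookup("93.184.216.34"))  # igw-12345: heads out the internet gateway
```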
What about public IP addressing? If we have internet connectivity, we have to have addressing that the internet understands. IP addressing. Elastic IP. An Elastic IP is a static IPv4 address designed for dynamic cloud computing. It is dynamically allocated to your account in that region, and it can be associated with a network interface and, most importantly, re-associated with a network interface. So let's assume we have two instances. We have our instance on the left ending in 1.16, and our instance on the right ending in 2.48. Let's say we have an Elastic IP. We allocate it to our account. We then associate it with this instance. Our instance here, 1.16, has the Elastic IP, 4578, and can talk to the internet. Cool. Now let's say we have to do maintenance on this instance, so we want to move this IP, or move the service, for some other reason. Okay. Well, in previous times, we might have had to procure a different IP, do DNS changes, and tell our vendors upstream to change their allow lists. Not with Elastic IP. With Elastic IP, we can disassociate that Elastic IP, re-associate it with the instance on the right-hand side, our 2.48, and everything's fine. We can move our IP amongst instances, eliminating the need to re-number or redo DNS addressing. Now let's think about outbound traffic. That was all native IP traffic, meaning inbound and outbound. Let's think outbound only. There are multiple situations where we have things that only need to talk outbound. They don't need inbound connectivity, or maybe we don't want inbound connectivity from a security perspective. Think going out to grab software updates or pushing updates upstream. So in this case, we have two subnets, public and private. We know our public subnet has access to the internet through its route to an internet gateway. What we can do is use something called a NAT gateway.
A NAT gateway is a network address translation service that enables outbound connectivity to the internet, with up to 45 gig of aggregate bandwidth per gateway. It lives in an availability zone in a specific subnet, and it is capable of being assigned an Elastic IP. This NAT gateway can be used by having your private subnet route its default traffic to your NAT gateway. If your private subnet has a route to your NAT gateway, and your NAT gateway lives in a subnet with a route to an internet gateway, your NAT gateway can perform the translation service: getting communication from your private subnet, doing the translation, and then pushing communication out through your public subnet and your internet gateway. NAT gateways also support private NAT. So if you end up in a situation where you do have overlapping IP space, or you need some sort of security boundary, we can also NAT between private subnets. Now let's think VPC endpoints. What if we want to talk to something without using the internet at all? A VPC endpoint enables us to privately connect to AWS services or vendor-supported PrivateLink services. So in this case, we can have a VPC endpoint that lands in our VPC and allows us to talk to any of the aforementioned services, like, say, S3. Remember how route tables point to gateways, peers, or endpoints? This is that endpoint they point to. So let's think about this in terms of accessing AWS services. S3 is a really common service that people access. S3 is API-driven, and it lives outside your VPC, so you have to talk to it through IP communication. Without an endpoint, if you're in a private subnet, you would talk to your NAT gateway, then to your internet gateway, and then to the service itself, going across the internet. With an endpoint, we talk through the interface endpoint from the subnet directly to the AWS service, powered by PrivateLink. So in this case, we don't talk over the internet; we talk directly to S3. Lastly, Amazon-provided DNS for the VPC.
DNS resolution. We provide a DNS resolver that sits at the plus-two address of your VPC's CIDR range. It helps you resolve DNS names: private DNS names, resource-based private DNS names, or public DNS names. This is enabled in your VPC by enabling DNS hostnames and DNS resolution. We also support Route 53 private hosted zones through that resolver. So in our DNS service, Route 53, you can make a private zone, in this case, for example, example.aws. You can place your private DNS records, your A records and quad-A records, there. In this case, we have a record for our database, which is our database instance on the right-hand side. And if our application instance wants to talk to this instance here, it will first resolve DNS using that built-in resolver, resolve the DNS name to its IP address, and then talk to the DB instance itself. And if we bring all these concepts together, your single-VPC journey looks like: choose a region. Pick your availability zones. Your VPC will span your availability zones. You'll build your public subnets and your private subnets. You'll add a route table. You'll add your internet gateway. Make sure your route table is pointed towards your internet gateway, making those subnets private, sorry, public. Then we'll have our private subnets. We'll make another route table; this is our private route table. Our private route table does not have connectivity to the internet natively through an internet gateway. And then we can finally place our resources. That's it. But what about PrivateLink, right? We still want to talk to our AWS resources. Let's make sure we add in our endpoints for PrivateLink services too. But what about going beyond that? Expanding to multiple VPCs and regions? That's just one. Everybody's journey starts in a single VPC. Let's think about multiple regions and multiple VPCs.
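The plus-two resolver address mentioned above is easy to compute for any VPC CIDR; a small sketch with hypothetical VPC ranges:

```python
import ipaddress

# The Amazon-provided resolver sits at the base of the VPC CIDR plus two.
def resolver_address(vpc_cidr):
    net = ipaddress.ip_network(vpc_cidr)
    return net.network_address + 2

print(resolver_address("10.0.0.0/16"))    # 10.0.0.2
print(resolver_address("172.31.0.0/16"))  # 172.31.0.2
```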
And for that, my colleague Nayeon will tell you all about it. Thanks, Mike. Thanks, Mike. So now we have an understanding of what a single VPC looks like and the different components that form the ecosystem of a VPC. We discussed subnets, route tables, internet gateways, NAT gateways. Before touching on multiple VPCs, I would like to give some day-to-day use cases where there might be a need for VPC peering. For instance, let's say we have a multi-tier application. We would love to have our app tier sitting in one VPC, and database and storage in a different VPC, for reasons of security, isolation, and scalability. Let's think of another scenario where we want our application deployed in two different VPCs sitting in two different regions for globally distributing my application, right? And some use cases like file sharing, right? I want apps sitting in different VPCs to use a common file-sharing system, so that users are accessing the same storage data. So how do we make two VPCs communicate with each other? Here comes the concept of VPC peering. VPC peering is basically creating a network connection between two VPCs. You have a source VPC and a destination VPC. You create a network connection over private IP space. AWS here utilizes the AWS backbone infrastructure for creating this network connectivity. For instance, let's say you have a VPC on the left-hand side, call it the source VPC, and we have one more VPC on the right-hand side, the destination VPC. For instances in the source to communicate with instances sitting in the destination VPC, we need private IP connectivity. This is where VPC peering comes into the picture. The source VPC sends a request for the destination VPC to connect through private IP space. The owner of the destination VPC accepts the invite, and the peering connection is up and running. This is not the end of the story, right?
We did discuss route rules in the route table; that's how we control the traffic between two VPCs, right? So the next immediate step would be: I'm going to create a route rule in the route table to move the traffic to and fro between these two VPCs. The best part about VPC peering is that it's highly available and scalable, but there are some limitations when I talk about scalability, which I'm going to cover in the next slide. VPC peering is supported between two AWS accounts; you need not be the owner of both AWS accounts to create the peering connection. And it's also supported across AWS regions. That being said, a VPC sitting in US East can communicate with a VPC sitting in another region, Asia Pacific, for example. And it's bidirectional traffic: whenever a peering connection is up and running, the traffic will flow in both directions, to and fro. And remote security groups can be referenced. In the previous slides, Mike pretty much covered how you allow traffic flowing into your instances. So, for example, say there is an EC2 instance sitting in a public or private subnet of your source VPC; you can reference a security group of the destination instance that's sitting in your destination VPC. This is basically creating a communication channel between two instances running in different VPCs. The other thing: overlapping IP addresses. We don't want any overlapping addresses between two VPCs, because if two CIDR blocks have an overlapping IP, there will be an addressing conflict. That's the reason, when you're setting up VPC peering, we want to make sure there is no overlap. And no transitive routing. So if VPC A is connected to VPC B and B is connected to C, it does not follow that A can communicate with C. So if you're planning to create a full mesh of communication channels, you have to create peering connections between all your VPCs to complete all the communications.
So with VPC peering, you can have fine-grained security rules at the subnet level. For example, say VPC A is connecting to VPC B; it's not mandatory that you allow all the subnets of instances in VPC A to communicate with B. You can segregate and have security policies in place: only a set of production subnets can communicate with the production subnets of the destination VPC. So you can have limitations at the subnet level as well. So let's take an example of VPC peering. On the left-hand side, we have a source VPC. On the right-hand side, we have a destination, and different instances are running in both VPCs. Now, for me to create a communication channel, I have to create a VPC peering connection: the source sends a request, the destination owner accepts it, it's all good, and the private IP space of the AWS backbone takes care of the network connectivity. Then we need the route rule: whenever there is traffic destined for the destination VPC, we are telling it, through the route table, to go through the peering connection. And the same thing is applicable in the destination VPC. We have to add one route rule pointing back at the source VPC, with the target being the VPC peering connection. So what happens here? We did discuss VPCs, for example, a single VPC, multiple VPCs, or VPCs sitting in different regions; you keep on adding those peering connections. Sometimes this ends up creating a full mesh. You can still create a hub-and-spoke model, but if you want all these attachments to communicate with each other, it easily falls into a full-mesh model. So what happens in a full-mesh model? Say, for instance, you have 10 VPCs. The full mesh would be n times (n minus 1) divided by 2, so 10 times 9 divided by 2, which brings us to 45 VPC peering connections.
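The quadratic growth of that full mesh is worth seeing in numbers; a one-liner makes the point:

```python
# Full-mesh peering grows quadratically: n VPCs need n*(n-1)/2 connections.
def full_mesh_peerings(n):
    return n * (n - 1) // 2

for n in (3, 10, 20):
    print(n, "VPCs ->", full_mesh_peerings(n), "peering connections")
# 10 VPCs already need 45 connections, and 20 need 190 -- the scaling
# pain that motivates the hub-and-spoke model discussed next.
```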
So assume in production environments you're scaling, you have some verticals, you're segregating your organizations, and this can easily double and multiply in no time. And, as I was describing before, this is not a transitive approach, right? So there is a big call for a system that can smartly create a centralized hub for us, with a hub-and-spoke model, and, at the same time, like icing on the cake, we need some transitive features. Here comes the AWS Transit Gateway. You can think of AWS Transit Gateway as a region-level virtual router. It's a transitive hub for our networking topology. You can attach multiple VPCs to a Transit Gateway, and all the attachments can communicate with each other with the help of the Transit Gateway. The best part about Transit Gateway is that it's not limited just to VPCs; you can also scale it up to your on-premises network connectivity. In the coming slides, my colleague Anoop is going to give you an overview of how to create hybrid connectivity, what VPN connectivity is, and how Direct Connect will help you communicate from instances sitting in your VPC to on-premises data centers. So the Transit Gateway's scope is not just limited to single or multiple VPCs; you can also scale it across your VPN connections and data center connections as well. It is fully managed and highly available, and it can scale from a single VPC to 1,000 VPCs. The best part of the Transit Gateway is you can start from 1 gigabit per second and scale all the way up to 100 gigabits per second, based upon the workload and the network bandwidth that's required. And peering Transit Gateways is possible. The scope of the Transit Gateway is at a region level, so if you want multiple regions to communicate with each other, just like VPC peering, you can peer two Transit Gateways together. Flexible segmentation is possible.
In the next slide, I'm going to show you an example of how routing rules would look and how the traffic can be segmented with the help of Transit Gateway. Multicast routing is supported with Transit Gateway. So subnets that are connected with each other between two VPCs with the help of Transit Gateway do support multicasting. And it gives simplified management and network visibility, because it's transitive in nature, it's a hub-and-spoke model, and it takes away the complexity of the full mesh for us. Let's take an example here. We have a VPC A, a VPC B, and two more VPCs sitting in a single region. Any Transit Gateway that you create has a default route table, and this is the place where you define your routing policies between the VPCs. Whenever you bring a new attachment to the Transit Gateway, route propagation happens by default to this default route table. So, for example, say I'm bringing three VPCs, two VPN connections, and two Direct Connect connections. All attachments can communicate with the other attachments, because route propagation happens to this default route table. And if you want to have some limitations, for example, say dev instances in one VPC should only communicate with dev instances in another VPC, you can customize your route tables and have one more route table apart from the default one. That's the reason we have two route tables here, A and B, connecting different VPCs and segmenting instances. It's a hub-and-spoke model, as discussed, and it can go up to 100 gigabits per second based upon the use case and the workload. Let's see what multi-region connectivity would look like. On the left-hand side, you have a prod and a dev account sitting in us-east-1, and on the right-hand side, we have a different region. For these instances to communicate with each other, we need to have static routes, just like we did for VPC peering, right?
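A toy Python model of that segmentation idea: each attachment is associated with exactly one TGW route table, and a route table only "sees" the CIDRs propagated into it. The attachment names, route-table names, and CIDRs are all illustrative:

```python
# Toy model of Transit Gateway segmentation. Each attachment looks up
# routes in the one route table it is associated with; propagation
# decides which CIDRs appear in which table.
associations = {   # attachment -> TGW route table it uses for lookups
    "vpc-prod-a": "rt-prod", "vpc-prod-b": "rt-prod",
    "vpc-dev-a": "rt-dev", "vpc-dev-b": "rt-dev",
}
propagations = {   # TGW route table -> CIDRs propagated into it
    "rt-prod": {"10.1.0.0/16": "vpc-prod-a", "10.2.0.0/16": "vpc-prod-b"},
    "rt-dev": {"10.3.0.0/16": "vpc-dev-a", "10.4.0.0/16": "vpc-dev-b"},
}

def can_reach(src_attachment, dest_cidr):
    table = propagations[associations[src_attachment]]
    return dest_cidr in table

print(can_reach("vpc-prod-a", "10.2.0.0/16"))  # True: prod talks to prod
print(can_reach("vpc-prod-a", "10.3.0.0/16"))  # False: dev is segmented away
```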
So the route tables in the Transit Gateway can handle both dynamic and static routes. With the help of static routes, we created Transit Gateway peering between two regions, and instances can communicate with each other. So, that being said: we started our journey with a single VPC. We expanded to subnets, route rules, and how to communicate with the outside world through an internet gateway, private space, and a NAT gateway. Then we discussed multiple VPCs, the hub-and-spoke model, and the complex full mesh, and then the Transit Gateway came into place for transitive routing behavior. Now we're going to switch our gaze, take one step forward, and discuss how my single VPC, or multiple VPCs sitting in a region, can communicate with the on-premises environment with the help of hybrid connectivity. So I'll pass it on to Anoop. Hello all, thanks for being here. So let's start with the different connectivity options for hybrid connectivity between your on-premises environment and the AWS Cloud. We'll start with AWS VPN technology. It supports IPsec VPN. In order to interface with a single VPC, you can use something called a virtual private gateway. It is a fully managed VPN endpoint, and it supports redundant IPsec VPN tunnels in different availability zones, providing you high availability. AWS VPN supports both dynamic routing using BGP and static routing. And on the virtual private gateway, which interfaces with a single VPC, you can have up to 10 site-to-site VPN connections, and you can increase the limit based on your usage. So where would this be a good fit? Most of our customers who are new to AWS start with a single VPC. They want to test some workloads. For this use case, you usually have a single VPC; you can interface a virtual private gateway with the VPC, and you build a VPN connection on top of that. And each VPN connection, by default on the AWS side, will have two tunnels.
Both of these tunnels are pre-configured on the AWS side, and it is up to you, the customer, to keep either one tunnel up or both tunnels up on your side. When it comes to routing, as we noted, both static and dynamic routing using BGP are supported on AWS VPN. With static routing, the AWS side randomly picks one tunnel for sending the traffic from the cloud to your on-premises environment. So in this case, if you're keeping both tunnels up, we recommend that your customer gateway supports asymmetric routing over the VPN tunnels. Another point I would like to note is that each tunnel is capable of supporting up to 1.25 gigabits per second. And with the virtual private gateway, you would not be able to do any ECMP, but there is a different technology and a different design pattern for that; we're going to discuss that in the next slide. So let's say you started with a single VPC, and now you're expanding to multiple VPCs, be it within the same account or different accounts, and you also have multi-region connectivity, and you would like to set up this hybrid connection, right? You can still use VPN, the difference being you can have a transit gateway, associate the transit gateway with multiple VPCs across accounts, or across regions using TGW peering, and then you can deploy an AWS VPN connection on the transit gateway. And as noted, you can use BGP routing, and for influencing the traffic in this case, BGP path attributes are supported, like AS path and also the multi-exit discriminator, MED. Both of these values are supported for influencing the traffic from the AWS side to your on-premises environment. So how can we scale the throughput? As I noted earlier, the throughput we discussed is 1.25 gigabits per second per tunnel. So what if you have a requirement where you want to increase the throughput using the same transit gateway?
You can keep the existing setup on your AWS side, build multiple VPN connections on a single transit gateway, and use a transit gateway feature called ECMP to do equal-cost multipath routing across the VPN tunnels. This effectively multiplies your throughput. Let's say you have three VPN connections, two tunnels per connection; that's six tunnels, at 1.25 gigabits per second each. And you can use the same setup if you have different locations geographically distributed and you want to connect to your AWS cloud: you can have a single transit gateway connect to these customer gateways, you can have the VPN connections, and you can also do ECMP among the different sites. So we have discussed the IPsec VPN solution. Now let's dive into Direct Connect, which is a dedicated network connection. Direct Connect provides you the shortest path to your AWS resources. Instead of traffic traversing the internet, you can traverse a shorter path via a cross-connect to your AWS cloud. While in transit, your network traffic remains on the AWS global network and never touches the public internet. This reduces the chance of hitting bottlenecks or any unexpected increases in latency. When deploying Direct Connect, you have multiple options. You can go for a dedicated connection, where you own the connection, and we support all the way from 1 gigabit per second to 100 gigabits per second of throughput. We also have a partner-hosted connection option, where the partner manages the underlying connection and allocates certain throughput for your requirement; this supports all the way from 50 megabits per second to 10 gigabits per second. Another advantage of using Direct Connect is reduced data egress charges. Let's say traffic is flowing from your AWS cloud to on-premises.
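The ECMP throughput arithmetic above is simple enough to sketch; the per-tunnel figure comes from the talk, and the rest is multiplication:

```python
# Back-of-the-envelope VPN throughput with Transit Gateway ECMP:
# each tunnel tops out around 1.25 Gbps, so aggregate capacity scales
# with the number of tunnels flows can be spread across.
TUNNEL_GBPS = 1.25
TUNNELS_PER_CONNECTION = 2

def aggregate_gbps(vpn_connections):
    return vpn_connections * TUNNELS_PER_CONNECTION * TUNNEL_GBPS

print(aggregate_gbps(1))  # 2.5 Gbps with one connection's two tunnels
print(aggregate_gbps(3))  # 7.5 Gbps with three connections, six tunnels

# Note: ECMP hashes per flow, so any single flow is still capped at
# roughly the one-tunnel rate; the multiplier applies to aggregate traffic.
```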
Instead of taking the internet, if you were to traverse the Direct Connect, you benefit from reduced egress data charges. And Direct Connect is supported at more than 100 locations globally, so you can take advantage of that and have a location closer to your data center to interface with AWS. Let's dive deeper into the Direct Connect architecture. As we can see, on your right you have your corporate data center, on the extreme left we have the AWS global network, and we have a Direct Connect co-location facility in between. Once you're ready, you order a Direct Connect via your AWS account, you download a letter of authorization, and you place a request with the co-location facility to complete the cross-connect for you. There might be cases where customers have their own equipment within the Direct Connect co-location facility; in that case, you can just create the circuit using the cross-connect. If that's not the case and you have your equipment within your on-prem data center, then for the last-mile connectivity, we have AWS partners who can help you achieve that last-mile connectivity. With this architecture, you can connect to your virtual private clouds using a private virtual interface on a Direct Connect or a transit virtual interface on a Direct Connect. You can also connect to AWS public resources, like database services such as Amazon DynamoDB, or storage services like S3, using a public virtual interface. So what if I have a single Direct Connect circuit and I want to connect globally, right? We discussed previously regarding VPN how we can use it to connect to a transit gateway and connect to multiple sites or multiple VPCs across regions.
In a similar fashion, you can take a Direct Connect set up at a particular location and, using a Direct Connect gateway with a transit virtual interface and a transit gateway, connect to multiple accounts, multiple VPCs, or even span multiple regions. This gives you a hub-and-spoke model spanning multiple regions. So how can we combine the performance advantages of Direct Connect with the security you get from an IPsec VPN? How can we effectively use both of them together? That's where private VPN over Direct Connect comes into the picture. For the underlay circuit, you deploy a Direct Connect with a transit virtual interface that connects to a transit gateway, and you build an IPsec VPN on top of that underlay circuit. This way you get the consistent network performance of Direct Connect along with the security provided by IPsec VPN, or AWS VPN. In addition, if you're looking for layer 2 encryption, MACsec is also supported on 10 gigabit and 100 gigabit dedicated Direct Connect connections. So what about backup options, failover, and redundancy? You can use Direct Connect as your primary connectivity option and have a VPN as a backup, connecting both of them to a transit gateway. If your Direct Connect circuit were to go down, traffic from the transit gateway takes your VPN connection to reach your on-premises network. That's your backup. For traffic routing priority from the transit gateway, Direct Connect propagated routes are preferred first, and only if those go away are the VPN routes preferred. There are also redundancy options at the Direct Connect level.
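The primary/backup behavior above boils down to a fixed preference order: Direct Connect propagated routes win whenever they exist, VPN routes only when they don't. A toy model of that selection, with made-up priority values rather than actual transit gateway internals:

```python
# Toy model of the failover behavior described above: a transit gateway prefers
# Direct Connect propagated routes and falls back to VPN routes only when the
# DX path disappears. The priority numbers are illustrative, not AWS internals.
PREFERENCE = {"direct-connect": 1, "vpn": 2}  # lower value = more preferred

def pick_route(available_paths: list[str]) -> str:
    """Return the most-preferred path among those currently available."""
    return min(available_paths, key=PREFERENCE.__getitem__)

print(pick_route(["direct-connect", "vpn"]))  # direct-connect (primary)
print(pick_route(["vpn"]))                    # vpn (backup after a DX failure)
```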
DX can be deployed with single or multiple circuits and multiple providers, and you can build redundancy across different Direct Connect locations and different customer data centers. To influence the failover behavior, you can leverage BGP path attributes: AS path prepending is supported, and so are local preference BGP community tags. Let's dive deeper into failover scenarios: how failover happens between Direct Connect circuits within a region and across multiple regions. Here we have a single region with a Direct Connect location, two circuits in that region, and a VPC in that region. To influence active-passive failover for outbound traffic, you can use a BGP path attribute such as local preference. To influence inbound traffic, you can do AS path prepending; the shorter AS path is preferred. In this case, as we can see on your right, the active circuit has the shorter AS path, so it's preferred. As we expand into multiple regions, on the AWS side there is regional affinity. That is, if I have a VPC in region A and a DX circuit in that same region, the local region is preferred. This is due to community tags we apply internally; the medium-preference community tag, 7224:7200, is the default. There is a lower-preference community tag, 7224:7100, and a higher-preference community tag, 7224:7300. Now say you want to load balance, or do ECMP, across multiple regions: you have circuits in different regions and you want to spread the traffic. From your on-premises side, you advertise the same community tags on those circuits, and that causes the traffic to be load-balanced.
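The selection logic described here can be sketched as a simplified best-path comparison: local preference (which the 7224:7100/7200/7300 community tags set) is compared first, with higher winning, and AS-path length is only a tie-breaker, with shorter winning. Real BGP has many more tie-breakers; this sketch keeps just these two:

```python
# Simplified BGP best-path selection mirroring the discussion above:
# higher local preference wins first; AS-path length (shorter wins) only
# breaks ties. Real BGP evaluates many more attributes; this is a sketch.
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    local_pref: int   # e.g. influenced by the 7224:7100/7200/7300 community tags
    as_path_len: int

def best_path(routes):
    return max(routes, key=lambda r: (r.local_pref, -r.as_path_len))

routes = [
    Route("remote-region-dx", local_pref=300, as_path_len=4),  # 7224:7300 advertised
    Route("local-region-dx",  local_pref=200, as_path_len=1),  # default 7224:7200
]
# Higher local preference overrides the shorter AS path.
print(best_path(routes).name)  # remote-region-dx
```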
These local preference community tags have higher preference than AS path, so they will override any AS path preference. And if you want to make a circuit in a different region your active circuit, you advertise the higher-preference community tag for it, and that becomes your active circuit toward on-prem. Let's discuss some of the resiliency models for AWS and how to achieve high resiliency. We'll start with the high resiliency model, where we have two locations, and we recommend one circuit per location, with presence in at least two locations. That way, even if a location were to go down, or you have a circuit or device failure, you still have a circuit at another location. This is for production workloads, or workloads that are important to you. For critical workloads in your AWS environment, we recommend the maximum resiliency model: two circuits per location, again with presence in at least two locations. For less critical or non-prod workloads, you can have one or two circuits in a single location. So we have discussed the different connectivity options between the AWS cloud and your on-premises environment. Now let's dive a bit into DNS. Say you have an application spanning on-premises and the cloud, and you have a private domain, either in AWS or on-premises, that you want to resolve. Those domains are not available over the internet, but your application needs to resolve them. Conventionally, customers stand up a server with conditional DNS forwarding rules, and that can be a single point of failure. For such cases, we have Route 53 Resolver.
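Returning to the resiliency models above, a toy availability calculation shows why more circuits across more locations helps. This assumes each circuit fails independently with a made-up probability; real-world failures are correlated (shared locations, devices, providers), so treat this purely as intuition:

```python
# Toy availability estimate for the Direct Connect resiliency models above.
# Assumes each circuit fails independently with the same (made-up) probability;
# correlated failures in real networks make actual numbers worse than this.
def connectivity_availability(total_circuits: int, p_circuit_down: float = 0.01) -> float:
    """Probability that at least one circuit is up."""
    return 1 - p_circuit_down ** total_circuits

print(round(connectivity_availability(1), 6))  # 0.99     single circuit
print(round(connectivity_availability(2), 6))  # 0.9999   high resiliency: 2 locations x 1 circuit
print(round(connectivity_availability(4), 6))  # 1.0      maximum resiliency: 2 locations x 2 circuits
```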
Using Route 53 Resolver, you can have outbound and inbound endpoints and create rules. These are fully managed endpoints; on the AWS side, you don't manage any of the underlying infrastructure. You just create rules based on your requirement. Say you want to resolve a private domain within your on-prem, and your application is sitting in AWS: you create a rule that forwards the query to your on-premises DNS server through an outbound endpoint. And vice versa: if you have an application server sitting on-prem and you want to resolve a private domain in the AWS cloud, you create an inbound endpoint and the corresponding rules, and you'll be able to resolve the private domain on the AWS side. Now, you might already be running a wide area network that you want to integrate with your AWS environment. What are the options for that? That's where AWS Cloud WAN comes into the picture. It's one of our newer services. AWS Cloud WAN simplifies multi-region and multi-account connectivity and gives you the ability to integrate your AWS cloud environment with your existing wide area network. You create a Cloud WAN core network and deploy core network edges in each region where you'd like to have an AWS presence, and you can create segments spanning those regions. The core network edges you deploy dynamically exchange routes between the regions, so you don't have to manage all the complex peering connections and transit gateways; that's all taken care of by the underlying core network edge routing. This simplifies the overall connectivity, and say you have different workloads, such as production, development, and so on.
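The forwarding-rule idea described for Route 53 Resolver can be sketched as most-specific-domain matching: the rule whose domain most specifically matches the query decides where it is forwarded. The domain names and target IPs below are hypothetical, and the matching is simplified for illustration:

```python
# Sketch of conditional-forwarding rule matching in the style of Route 53
# Resolver rules: the most specific matching domain wins. The rule domains
# and target IPs are hypothetical examples, not real infrastructure.
RULES = {
    "corp.example":     ["10.0.0.2"],   # forward to on-prem DNS via an outbound endpoint
    "dev.corp.example": ["10.0.1.2"],   # more specific rule for the dev subdomain
}

def resolve_target(query_name: str):
    """Return forwarding targets for the most specific matching rule, or None."""
    matches = [d for d in RULES if query_name == d or query_name.endswith("." + d)]
    if not matches:
        return None  # no rule: fall through to the VPC's default resolution
    return RULES[max(matches, key=len)]  # longest (most specific) domain wins

print(resolve_target("app.dev.corp.example"))  # ['10.0.1.2']
print(resolve_target("hr.corp.example"))       # ['10.0.0.2']
```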
You can segregate those by creating segments and associating the corresponding VPCs, spanning multiple regions, into those segments; routes are exchanged within a segment so its members can talk to each other. Segments are isolated by default, and if you want one segment to talk to another, you can do segment route sharing, and those segments will be able to talk to each other. So it greatly simplifies the overall network connectivity across accounts and regions. If you're interested, we highly encourage you to talk to your account team, or feel free to reach out to one of us later on. So we've discussed different connectivity options and strategies for designing them. We also work with different partners who can help you set up Direct Connect circuits. Say you want a hosted connection where you don't want to manage the underlying link: we have partners to help you with that. Say you want to set up the last-mile connectivity between your data center and the co-location facility: partners can help with that too. We also have partners who are trained in deployments on the AWS side, and for setting up network monitoring or complex networking topologies, they can help you with that as well. If you're interested, please feel free to check the URL you see on the screen. To summarize: we started our journey with a single VPC; we explored options for connecting multiple VPCs, hybrid connectivity options, and DNS resolution across on-premises and the AWS cloud; and we discussed how to integrate your WAN. No matter where you are in your cloud journey, we hope some of these networking design patterns are helpful for you. And with that, if any of you are playing the mobile treasure hunt game, the keyword is foundations. Thank you all for being here. Really appreciate it.
And we kindly request you to complete the session survey in the mobile app. Thank you so much.