All right, we're handing over to Siva. — Yeah, hi everyone, good evening. How many of you are attending for the first time? Okay, mostly first-timers — thanks for joining us today. We are K8SUG, the most active Kubernetes and AI meetup community: we have 14,000-plus members in the Meetup group and 9,000-plus followers on LinkedIn. We are very passionate about Kubernetes technologies, and our mission is to accelerate Kubernetes adoption while learning from each other. If anybody is interested in speaking about Kubernetes technologies, you are welcome. We have a global footprint: within Australia in Melbourne and Sydney, plus Canada, the UK and Singapore, and we now have a new chapter in the Philippines as well. Today's topic is scaling with Karpenter on EKS, on AWS cloud, and Gladwin Neo will take care of the session — sorry, Gladwin, there was a typo in the name on the slide. Our main sponsors for today's event are KodeKloud and AWS; thanks to AWS for providing the space and arranging the snacks. Here is our main team for Singapore: Kang, Fairites, myself — Siva Ram — Fatima and Jes. Here are some of the upcoming events: 15th August in Australia, 29th August in the UK, and 4th September again in Melbourne, Australia. For those who haven't joined yet: if anybody is willing to join as a volunteer, take a session as a speaker, or come on board as a sponsor, you are all welcome — please do register, and please follow us on LinkedIn, Twitter and Telegram if you are not yet a member of those groups. Moving on, here is another programme, the Kubestronaut programme, which is all about completing five certifications with the Linux Foundation: CKA, CKAD, CKS, KCNA and KCSA. We have exclusive coupons for those five certifications, and there is currently a promo code running if anybody is interested in enrolling: a code for the bundle of all five together, around 25% off for some of the individual ones, and I think 38-plus percent off if you'd like to go with a single certification. We already have quite a few Kubestronauts in Singapore — when they get certified they receive vouchers, and they are donating 50% of those vouchers back to the community, so thank you to everyone who is donating. Here are the promo codes I was referring to — please use them if you are interested in the certifications: 48.5% off for the bundle of all five certifications, and I think there is another one at 47%. There is also a KodeKloud promotion if anybody is looking for learning-platform access — KodeKloud provides all the Kubernetes and DevOps learning tools. And there is another recently introduced programme, the AWS Astronauts programme, which is a similar combination of the available certifications within AWS — and here is our founder, who has already completed it.
For those who want more updates, please follow us on LinkedIn. There is also a new landing page introduced by K8SUG — you can follow the link on the slide, scan the QR code, and leave your feedback. I will now invite Mr Gladwin Neo to take over the session. Thank you.

I'm not going to use the mic, but can everyone hear me? Okay. I'm going to ask a different question: previously someone asked if it's your first time attending, so I want to ask if anyone is here for the second or third time. The first time an event was held here was, I think, two months back, when we celebrated the 10th anniversary of Kubernetes — there was a huge cake. Who attended that event? Okay, maybe half. And who knows what Karpenter is? Okay, less than half. That's actually a trick question, because my colleague briefly introduced Karpenter during that event — so if you still don't know what it is, you probably weren't listening to the presentation. But it's okay, it was only briefly mentioned, and I'm going to go slightly deeper today. The room is not fully filled today, so if there's anything you want to ask, feel free to stop me at any time; there are only so many of you here and we have quite a lot of time, so I can take any questions.

So today we're going to talk about scaling with Karpenter on Amazon EKS. What exactly is Karpenter? Karpenter, in a phrase, is just Cluster Autoscaler on steroids. How many of you know what Cluster Autoscaler is? Okay, not a lot, which is a bit surprising and a bit concerning — it suggests you are not yet scaling your applications at scale; maybe you are running demo or testing environments. But that's fine, everyone is here to learn. If you don't know what Cluster Autoscaler is: it's basically an open-source tool that came along with Kubernetes back when Google developed it, for scaling your worker nodes — Cluster Autoscaler is essentially your worker-node scaler.

There are only four basic things I'm going to cover today: first, what Karpenter is; second, how Karpenter works and how it differs slightly from Cluster Autoscaler; third, some of the synergy Karpenter brings around flexible compute, letting you mix and match different CPU architectures and instance types; and lastly, some of the best practices you can apply on Amazon EKS together with Karpenter.

In terms of scaling your application, typically what happens is that your HPA, the Horizontal Pod Autoscaler, starts scaling first. This is usually driven by CPU utilization or memory — very basic. You set a specific threshold such that at 70 or 80% your pods can't handle any more requests; if you look at the extreme right of the chart, there's a threshold, and the moment utilization passes it you start scaling up. Note that this is not scaling your actual nodes — it is only scaling your pods, your container applications. So typically the first step is to scale your pods before you scale your nodes. And in the event that your nodes cannot hold any more pods — say a node can only hold ten pods, because those ten pods request a certain amount of CPU and memory — then the eleventh pod becomes unschedulable, because the node is really maxed out.
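A minimal sketch of the kind of setup being described here — a Deployment whose pods request a fixed amount of CPU and memory, plus an HPA that scales it on CPU utilization. The deployment name loosely follows the one used in the demo later; the image, replica counts and 70% target are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment-1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: nginx:1.25          # placeholder image
          resources:
            requests:
              cpu: 500m              # these requests are what the node scaler later has to fit
              memory: 512Mi
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-deployment-1
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app-deployment-1
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70     # the 70-80% threshold mentioned above
```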
It's just like trying to fit people into a car: if it's maxed out you cannot fit anyone else, and you need another car so the additional person can get in. Same concept. In this case Cluster Autoscaler scales up additional nodes to house this pod, so that the kube-scheduler can schedule the pods onto the worker nodes. In this example, look at the top — take it that one square corresponds to one vCPU — so in total you're requesting 7 vCPU, but the nodes here each run 4 vCPU and 16 GiB of RAM, and there is no way they can house a total of seven vCPUs because there are no more nodes. What Cluster Autoscaler does is spin up two additional new nodes, which together provide 8 vCPU, and the moment those new nodes are up the kube-scheduler schedules the two application workloads onto them. That's typically how Cluster Autoscaler works in general.

So why are we here? Why can't we just keep using Cluster Autoscaler — why do we even need Karpenter? If you notice, when you're using Cluster Autoscaler you can only scale as far as the node groups you have. Say you have diverse workloads that require different instance families, different vCPU counts, different amounts of memory — how many node groups are you going to create? Managing Kubernetes is already difficult, and managing that many node groups becomes complex as well. The whole idea of why Amazon came out with Karpenter is to remove the concept of node groups — we call it "groupless", not node groups. Node groups are what you see when you go into the console and click "create node group" at the bottom. When you're using Karpenter you don't have to create multiple node groups: all you need is a base node group to house the Karpenter pods, and subsequently, when Karpenter scales, you won't be creating any more node groups. It doesn't go through an Auto Scaling group in AWS; it hits the EC2 Fleet API directly to provision new nodes, and that makes it much more flexible. Say you have tens or hundreds of workloads to scale, each with specific CPU, memory or instance-family needs — you want M, you want C, you want G, you want x86, you want arm64, you want on-demand, you want Spot instances. With Cluster Autoscaler all of this has to be expressed as node groups, whereas with Karpenter everything is done through a custom resource definition — the YAML file I'll show you later. It gives you a lot of flexibility and control: you can mix all the instance types, the different vCPU and memory requirements, CPU architectures, Spot or on-demand — everything in one single YAML file. Removing node groups provides a lot more flexibility than Cluster Autoscaler, which still relies on managed node groups.
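For contrast, this is roughly what the node-group approach being described looks like — a hedged eksctl sketch (cluster name, region, sizes and instance types are assumptions), where every distinct instance shape or capacity type needs its own managed node group for Cluster Autoscaler to work with:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster            # assumed cluster name
  region: ap-southeast-1        # assumed region
managedNodeGroups:
  - name: general-on-demand
    instanceType: m5.large
    minSize: 2
    maxSize: 10
  - name: compute-spot
    instanceTypes: ["c5.large", "c5.xlarge"]
    spot: true
    minSize: 0
    maxSize: 10
  # ...and one more node group for every other instance family / capacity type you need
```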
Karpenter also has a very interesting feature called consolidation. When you're scaling up and down and new nodes are being spun up, what if there's a chance to reduce the overall cost of your cluster by scaling down, or by packing everything onto fewer nodes? That assumes your workloads can move around — some of the customer workloads we work with are slightly more strict; they like the workload to stay put on its node and don't like it jumping around. But assuming you are okay with your pods moving between nodes, the consolidation feature essentially repacks your pods onto a new, more optimized EC2 instance, and it brings down the cost of your cluster: instead of running, say, two large machines, you might be able to run three small ones, two medium machines, or maybe just one medium machine. That's how consolidation works, and it's all done automatically — you don't have to predefine anything, it's done intelligently. And of course, if you have strict requirements, you can simply disable the consolidation feature.
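The consolidation behaviour described here is toggled on the NodePool itself. A minimal fragment (only the disruption block is shown), assuming Karpenter's v1beta1 API — field names differ slightly across Karpenter versions:

```yaml
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  disruption:
    # Let Karpenter repack pods onto fewer/cheaper nodes when they are underutilized.
    consolidationPolicy: WhenUnderutilized
    # For workloads that must not move around, switch to WhenEmpty so that
    # only completely empty nodes get removed:
    # consolidationPolicy: WhenEmpty
    # consolidateAfter: 30s
```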
Likewise, going back to the example, look at the top and the bottom. At the top we said we need an instance type of 4 vCPU and 16 GiB of memory. With Karpenter, if you don't have any strict requirements you can just leave it open, leave everything as default, and every instance type and every CPU architecture will be considered. If you don't specify anything, Karpenter automatically picks the cheapest suitable instance for you. That means, say, if everything is running on m5.large, it would pick m5.large as well — and as a Spot instance, because Spot costs less than on-demand. So by default Karpenter already has the cost optimization of your cluster in mind and just picks the cheapest option. In this example, coming back to what we discussed earlier, you are requesting 7 vCPU but there are no free nodes, so Karpenter spins up one new node — one instead of two, a single node with 8 vCPU. Why does it behave differently from Cluster Autoscaler? Because Karpenter batches, or aggregates, all the pending resource requests and spins up one new EC2 instance that can house the workloads together. I should be able to show this in the demo later: instead of having two m5.large nodes to house the two workloads, Karpenter spins up one larger instance — a c6i of some xlarge size, say — and houses all the workloads on it, which brings down the overall cost of your cluster because it actively selects the cheapest option and bin-packs everything into one bigger EC2 instance. Once the new node is up, the kube-scheduler schedules all of these pods onto it and your container applications are running.

This is just a pictorial view of how Karpenter works. It works much like Cluster Autoscaler, the only difference being that it batches everything together: instead of creating two new nodes it creates one, because it sees these two container workloads requesting seven vCPUs in total. Cluster Autoscaler tends to treat them separately, while Karpenter batches them together, provisions one new EC2 worker node, and then the kube-scheduler schedules all of them onto that machine. Fundamentally this is not much different from how you scale your nodes today — the difference is simply the batching.

The next reason Karpenter is better is that, simply put, it works a lot faster. Okay, I wouldn't say a lot — it depends on your workload — but essentially Karpenter works faster than Cluster Autoscaler. Why? Look at the traditional way of scaling worker nodes: when your pods go above the threshold, HPA kicks in and you get pending pods; the kube-scheduler has nowhere to put them; Cluster Autoscaler looks at the pending pods and then sends a request to the ASG — the Amazon EC2 Auto Scaling group, the thing that scales your EC2 instances. So Cluster Autoscaler tells the ASG, "I don't have enough nodes to house all my pods, please create new ones"; the ASG then hits the EC2 API and updates its desired size from, say, two to three; and when that node is successfully spun up, the kube-scheduler schedules the pods onto it.

How does Karpenter work? Again, back to pod scaling: if utilization goes past the threshold you spin up new pods, and if there are pending pods the scheduler has nowhere to put them. Karpenter then comes in, looks at the pending pods, and hits the EC2 Fleet API directly — so you can see we are bypassing the ASG entirely. When you're using Karpenter, if something goes wrong — if it's not scaling as expected — there's no point going to the EC2 Auto Scaling console to see what's happening; you won't see any ASG there, because Karpenter doesn't work with ASGs. This is why it's called groupless: we don't work with ASGs, we hit the EC2 Fleet API directly. So if you're trying to debug, don't go to the ASG wondering "I'm supposed to have four nodes, why isn't my ASG scaling" — you won't see anything there, because that concept has been removed. And because we're skipping steps, Karpenter works faster than Cluster Autoscaler.

Karpenter also does bin packing: if everything can be squeezed into one EC2 node, it will be — but that assumes you don't have strict requirements. Let's say, for one reason or another, you want to achieve HA, so you don't want everything packed into one EC2 node in one availability zone.
Karpenter does that packing for you by default, but if you want to customize it you can use a combination of node affinity, taints and tolerations to make sure everything doesn't end up on a single node — so that if one node fails, at least the other node carrying your container workload can still work as expected.

What about scaling in? Again, it's faster because we bypass the ASG. We don't tell the ASG, "hey, my workloads are underutilized, change my desired size from three to two because I don't need three worker nodes, only two — scale down, move the pods over and kill off one node." Karpenter works a lot faster because it doesn't talk to the ASG at all: if it sees underutilized nodes, it shifts the pods onto an existing node where it can, or if it sees a chance to bring down the overall cost of the cluster it spins up one smaller or medium-sized EC2 instance, shifts everything over, and then kills off the other nodes.

With Karpenter, everything is done through the Karpenter CRDs — everything is a YAML file. You want on-demand, you want Spot, x86, arm64 — all of it can be expressed, even the different instance types you want: M5, C5, G. The only thing you can't put in is bare metal; I don't think that works for now, but it's a rare use case anyway. Essentially all the instance types more commonly used amongst customers can be included. This is what the custom resource definition looks like — we call it a NodePool; it has gone through some changes, so what was previously called a Provisioner is now called a NodePool. You can have one single NodePool — one provisioner to rule them all — so that every one of your container workloads runs on nodes with these characteristics: you can choose m5, c5 or r4, it can be in availability zone a or b, it can be Spot or on-demand, or different CPU architectures. Every workload — whether you're running distributed systems or microservices, login, authentication, cart, checkout — would then be based on this single NodePool CRD. Of course, there's also the flexibility to mix and match different Provisioners or NodePools to fit different workloads. Say your checkout microservice handles more requests than authentication — a very generic example — you probably want a bigger instance size, or on-demand because it needs to be up all the time, while your authentication or sign-up service can run on Spot because there's no requirement for it to be up and running all the time. You can mix and match, so it's not uncommon to see multiple NodePools in a single cluster. If it gets too complicated you can always default to a single NodePool and it will work, but if you want more flexibility and control over what your apps can and cannot do, or what they require, then by all means spin up multiple NodePools or Provisioners to suit your workloads. As you can see, we have flexibility over instance types: if you don't put anything here, every family from M and T all the way to G will be considered. Karpenter looks at what your pods — your container workloads — are requesting and then spins up a cost-optimized EC2 instance for you.
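A sketch of the NodePool being described, again assuming the v1beta1 API; the zones, instance categories, CPU sizes and limits are illustrative assumptions rather than values from the talk:

```yaml
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      nodeClassRef:
        name: default                          # the EC2NodeClass shown a bit later
      requirements:
        - key: karpenter.k8s.aws/instance-category
          operator: In
          values: ["c", "m", "r"]              # leave this out to consider every family
        - key: karpenter.k8s.aws/instance-cpu
          operator: In
          values: ["4", "8", "16", "32"]
        - key: topology.kubernetes.io/zone
          operator: In
          values: ["ap-southeast-1a", "ap-southeast-1b"]   # assumed zones
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]
        - key: kubernetes.io/arch
          operator: In
          values: ["arm64", "amd64"]
  # Optional cap on the total compute this pool may provision.
  limits:
    cpu: "1000"
  # Node expiry (the ~30-day AMI-rotation pattern mentioned later) also lives on the
  # NodePool; its exact location varies by Karpenter version, e.g.:
  # disruption:
  #   expireAfter: 720h
```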
You can also dictate which availability zones these new worker nodes will be launched in, and whether you want Spot or on-demand to save cost. If you don't mind your workload running on Spot, then by all means put both Spot and on-demand — and because Karpenter chooses the cheapest EC2 instance for you, by default everything will come up as Spot. The only time you'll have put Spot and on-demand and Spot isn't successfully spun up is when there is no Spot capacity. Say m5.large on Spot is the most popular instance out in the market: if Karpenter cannot find your m5 on Spot, it will pick the next cheapest alternative — c5.large on Spot, for example, if that's the next cheapest. But if you don't see any Spot instances being spun up at all, it probably means there is no more Spot capacity and it can only fall back to on-demand. That's why it's a good strategy to include as many instance types as you can: you don't want a situation where you're trying to save cost but keep spinning up on-demand, and then come back asking "how come you promised to bring down the overall cost of my cluster, but every time it spins up on-demand instead of Spot?" — because the instance types you selected can only run on-demand, as there is no Spot supply left for them. As for CPU architecture, you can have arm64 and x86; by default arm64 will be chosen because it's typically — in most if not all cases — cheaper than x86 instances.

The second type of CRD, or custom resource definition, that you have to take care of is called the EC2NodeClass. Essentially you specify your security groups, where your subnets are, and, if you're running stateful workloads, what kind of block device mappings you need — EBS, EFS, all of it can be configured. And just as with Cluster Autoscaler, when you scale up you can express placement requirements — both pods on the same node, or one here and the other on a different node — all of it can be configured: taints and tolerations, node affinity, node selectors.
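A sketch of that EC2NodeClass — subnet and security-group discovery by tag, plus a block device mapping for stateful workloads. The discovery tag value, IAM role name and volume sizes are assumptions:

```yaml
apiVersion: karpenter.k8s.aws/v1beta1
kind: EC2NodeClass
metadata:
  name: default
spec:
  amiFamily: AL2                               # default EKS-optimized Amazon Linux AMIs
  role: KarpenterNodeRole-demo-cluster         # assumed IAM role for the nodes
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: demo-cluster   # how new nodes find the right subnets
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: demo-cluster
  blockDeviceMappings:
    - deviceName: /dev/xvda
      ebs:
        volumeSize: 100Gi
        volumeType: gp3
        encrypted: true
```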
So it's safe to say that whatever Cluster Autoscaler can do, Karpenter can definitely do, and there's practically no reason why you shouldn't be using Karpenter — unless... well, there is one thing Karpenter can't do yet, which is around GPUs. It has been raised in GitHub issues and I think the service team is currently working on it: if you're running ML workloads and want to be a bit more cost optimized, and you don't want to use the full GPU but only a quarter or half of it, Karpenter isn't smart enough to do that yet, so keep that in mind.

Coming to Spot: if you're running Spot instances with Karpenter, whatever we recommend for EC2 in general stays the same. You want the price-capacity-optimized allocation strategy, which balances price against capacity — if you go all-in on price you might not get capacity, and if you go all-in on capacity you might not get the cost savings you really want. And "diversify and don't constrain" refers to the fact that it's generally advisable to state more than one EC2 instance type. In the previous example we had quite a lot, but if you only put m5.large and c5.large with Spot and on-demand, and you're unlucky and neither M5 nor C5 is available on Spot, it will definitely fall back to on-demand — the expensive alternative — and your cluster won't be as cost optimized as you hoped, because you only specified two instance types. Whereas if you specify, say, ten instance types with both Spot and on-demand, there might be two of them with no Spot capacity but the rest can still provide it, and that ultimately brings the cost of the entire cluster down. Likewise, interruptions are pretty standard: exactly as with plain EC2, when there's a Spot interruption you get the two-minute notice — nothing is different there.

In terms of best practices, here are some with regard to Karpenter. We generally do not run the Karpenter pods on nodes managed by Karpenter itself. Karpenter is typically deployed as pods — if you have two instances and two Karpenter replicas, the two pods will sit on different nodes to maintain HA. So what does that mean? Say you start a cluster with two worker nodes, each holding one Karpenter pod, and for some reason you have pending pods, HPA — sorry, Karpenter — kicks in, a new node is spun up and the workloads shift onto it. It's not advisable to think, "I'll utilize this new node more than the two basic nodes I provisioned at the start; let me shift one of my Karpenter pods onto the node Karpenter just provisioned." It's not recommended. I haven't yet had a customer who did that, but what I imagine is that it gets messy: the new node Karpenter spun up isn't there for life, it will change. What if Karpenter decides, "I don't need such a big node, I want to consolidate into a smaller one"? Then your Karpenter pod gets evicted, Karpenter spins up a new node, and the pod shifts again. It gets wonky, because your Karpenter pods are supposed to be relatively stable, placed on nodes with little to no change at all; if you place them on nodes spun up by Karpenter, those nodes will keep changing — being consolidated, moved around — and you might hit a situation where your Karpenter pods can't react to incoming requests in time and your scaling fails, because the controller pod is still being consolidated, still unschedulable, still looking for a new home. Your scaling might eventually fail. When you're using the EKS Blueprints there's a choice to deploy Karpenter either on Fargate or on EC2 instances: if you want the more managed way of doing things, deploy Karpenter on EKS Fargate; if you still want to retain some control — because you have specific requirements, say a golden AMI you want to use — then you can deploy it on EC2. It's totally up to your preference.
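One way to express the "don't run Karpenter on Karpenter-managed nodes" advice, if you're not using Fargate, is to pin the controller to the small bootstrap managed node group. A hedged sketch of the affinity you could pass to the Karpenter deployment (for example through its Helm chart values, assuming your chart version exposes them); the node-group name is an assumption:

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            # Never schedule onto nodes that Karpenter itself provisioned...
            - key: karpenter.sh/nodepool
              operator: DoesNotExist
            # ...and stick to the bootstrap managed node group.
            - key: eks.amazonaws.com/nodegroup
              operator: In
              values: ["karpenter-bootstrap-ng"]
```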
In terms of customizing the NodePool custom resource definition: ideally, if your workloads are able to jump around and you don't need them to be that stable, you should enable the consolidation feature to bring down the overall cost of your cluster. You can also use node expiration — a TTL, time to live — to rotate the nodes. A very common use case is new AMIs: say you're using the default EKS AMI, and when the cluster gets upgraded there's a new AMI version. If you want that rollout to happen automatically, you can set it so that after 30 days the old nodes are killed off and new nodes are spun up — so instead of upgrading manually ("oh, I need to move to a new AMI") you don't have to do anything, because after 30 days the nodes refresh themselves: a new node with the latest AMI is created and all the pods move over to it. That's a common pattern customers use. And of course, use a diverse set of instance types — going back to Spot versus on-demand, if you only have a few instance types for Spot and on-demand and you wish to optimize cost to the maximum, use a more diverse set: instead of just two, have maybe eight or ten, so that in the rare scenario where AWS can't find Spot capacity for a given type you don't simply fall back to on-demand and lose the cost savings you were after. Lastly, in terms of scheduling pods, this is pretty standard and not Karpenter specific, it's more EKS specific: you want to set limit ranges and avoid noisy neighbours — you don't want one container consuming all the resources on your node. All of these are standard practices, not Karpenter specific.
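For the noisy-neighbour point, the standard Kubernetes guardrail is default requests and limits per namespace. A minimal sketch — the namespace and numbers are assumptions:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: container-defaults
  namespace: my-app              # assumed namespace
spec:
  limits:
    - type: Container
      defaultRequest:            # applied when a container declares no requests
        cpu: 250m
        memory: 256Mi
      default:                   # applied when a container declares no limits
        cpu: 500m
        memory: 512Mi
      max:                       # hard ceiling so one container can't hog the node
        cpu: "2"
        memory: 4Gi
```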
I do have a simple demo that I want to show you so you can see how Karpenter works. At the top is a cluster running Karpenter as its scaling mechanism, and at the bottom is one running Cluster Autoscaler, and you can see that at the start both are running on m5.large. Now I'll attempt to scale — the demo workload is pretty simple. Currently both of these workloads, my-app-deployment-1, are at zero replicas, and I'm going to scale them to 20. On the right-hand side you can see there are 20 pending pods that can't be scheduled onto the two nodes we have, because there's no more space for them; likewise at the top there are about 17 pending pods, because three of them could still fit on the existing node. You can see that Karpenter is already spinning up a new node while Cluster Autoscaler is only just beginning to. The next most obvious difference is that with Cluster Autoscaler you're spinning up multiple m5.large EC2 instances, but with Karpenter you're spinning up a single c5.4xlarge, which brings down the overall cost of your cluster: the top will only cost you about $140 per month, while the bottom will cost about $630 per month. This goes back to the earlier example — if Karpenter sees there's a larger instance it can provision to bin-pack every container workload you have, it will provision that EC2 instance for your pods, whereas with Cluster Autoscaler you'd need different node groups, and there might not even be a node group that can provision an instance big enough to house all the pods. For this particular cluster I only have a single m5.large node group, so you'll only ever see m5.large spun up; for Karpenter I left it rather open — it's running on one single NodePool custom resource definition — and, looking at the pod requests, it decided that a c5.4xlarge is the best and cheapest instance it can use to house all my unschedulable pods, so it spins that up and all the unschedulable pods land on the new node Karpenter provisioned.

We've seen how it scales out — what about scaling in? Currently at 20, I'll scale it back down to zero, so obviously all the pods will be removed from the nodes, because previously I requested 20 and now I want none of them running. In this particular example — for some reason I'm not sure why — the difference in behaviour is even more obvious when scaling in than when scaling out; what I mean is that the scale-in happens a lot faster. You can see that all of the new nodes provisioned by both Karpenter and Cluster Autoscaler are now underutilized — there are no pods running any business logic inside them — and Karpenter has already killed off the node it spun up, whereas the Cluster Autoscaler one is still in the Ready state. That means if you scaled your pods out again they could still land on that EC2 instance; it's still running, and every minute or hour an EC2 instance runs you're getting charged for it — while Karpenter, the moment a node is underutilized, starts killing it off so that no new pods can enter that instance. You can see the Cluster Autoscaler node is now
starting to get cordoned off, but it's still running — being cordoned doesn't mean the node is dying or shutting down, it just means it can't accept new pods coming in. It's still running, and you're still getting charged for it, whereas with Karpenter you don't see that: you started with two nodes, went up to three, and you're already back to two, the base state. Cluster Autoscaler is still slowly spinning down — it's cordoned, not completely shut down or terminated. So, for reasons I'm not sure about, the scale-in difference is even bigger than the scale-out one, but the point I'm trying to make is that in terms of behaviour and performance, Karpenter will — all else being equal — definitely perform a lot faster than Cluster Autoscaler. And finally, after all this talking, Cluster Autoscaler has successfully spun everything down to its base state.

So I hope that was insightful, and I hope everyone can see the value proposition of using Karpenter over Cluster Autoscaler — and that everyone now actually knows what Karpenter is. If you have any questions you can connect with me on LinkedIn and we can talk about it, or if you have any questions now, let's just address them all here — whatever questions you have, they're not stupid questions; people might have the same question but be afraid to ask, so just break the ice.

[Audience question, partly inaudible] Come again? Sorry, I'm still not getting your question — ah, which one picked the cheapest instance, Cluster Autoscaler or Karpenter? For Karpenter — let me bring up the Terraform file that I have. You can see these are the requirements I stated: I only put C, M and R. In that particular scenario Karpenter determined that c5.4xlarge would be the cheapest of them all. You can see I have C, M and R, I have Spot, I have on-demand, and for instance vCPU anything from 4 all the way to 64 — and in that scenario Karpenter determined that the c5.4xlarge it provisioned is the most cost-effective and the most suitable for the resources the workloads are requesting.

[Audience question about AMI updates and upgrades] The only thing Karpenter detects is whether the EKS control plane and the data plane are on different versions. If you want to achieve what you mentioned, you can specify the TTL, the time to live, so that, say, every 30 days the instances are refreshed. For example, if you update your AMI every 20 days, you can configure Karpenter so that on the 21st day — one day after — it spins up nodes with the new AMI and cordons off the nodes running the older AMI.

[Audience question: if a pod is pending with particular tolerations, does Karpenter detect that?] Yes, that's doable as well; it's configurable. This is what I meant when I said Karpenter gives you a lot of flexibility to play with: taints, tolerations, node affinity, and the instance requirements can all be configured according to your desired state. Of course, what I'm showing you is just a base case, it's very simplistic, but if you want we can get into a deeper discussion on how Karpenter can help you achieve what your workloads need.
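As an example of that combination, a dedicated pool is usually carved out with a taint on the NodePool and a matching toleration (plus a resource request) on the workload — the GPU case that comes up next is a typical use. A sketch under the same v1beta1 assumption; the instance family, image and taint key are illustrative, and GPU pods additionally need the NVIDIA device plugin installed:

```yaml
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: gpu
spec:
  template:
    spec:
      nodeClassRef:
        name: default
      taints:
        - key: nvidia.com/gpu            # keeps general workloads off the GPU nodes
          effect: NoSchedule
      requirements:
        - key: karpenter.k8s.aws/instance-family
          operator: In
          values: ["g5"]
---
apiVersion: v1
kind: Pod
metadata:
  name: gpu-job
spec:
  tolerations:
    - key: nvidia.com/gpu
      operator: Exists
      effect: NoSchedule
  containers:
    - name: trainer
      image: my-training-image:latest    # placeholder image
      resources:
        limits:
          nvidia.com/gpu: 1              # whole GPUs only; slicing isn't supported, as noted below
```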
[Audience question about dedicated node pools, for example for GPU workloads] If you look at this example, I only have one single NodePool, and ideally you wouldn't want every service to run on just one pool. Say I need a GPU instance: ideally I wouldn't want Karpenter to spin up C, M and R for it, but if I leave the NodePool as it is, then for a workload that requires one GPU or a G instance type Karpenter will just say, "okay, I need the cheapest EC2 instance for this," and spin up a c5 on Spot, for example. So from that perspective, how do you say "only use this"? You can do it in combination with a Pod Disruption Budget as well, so that at any given point in time 40% or 60% of the pods are still running while the rest can be moved around — "I don't really care, as long as it eventually brings down the overall cost of my cluster."

[Audience question: when a machine spins up, can I install an agent or run some checks before it's used?] Typically that's handled at the infra level or the app level; at the infra level, I believe you can run scripts before these instances are launched as well — cloud-init scripts, user data.

[Audience question: how does a new node know which cluster to join?] Karpenter is currently only available if you're using AWS EKS or, with its own provider, Microsoft's AKS — it doesn't work on-prem yet. The joining is handled at the back end: let me go back to the slide — if you look here, it's discovering the cluster name as well as the subnets, so there's some discovery being done at the tag level, and once a new node is spun up it knows it belongs to this particular cluster. Sorry, beyond that I'm not too sure, I might need to check on it.

[Audience question about GPUs] Yes, GPUs are supported — it's just that you can't do slicing; GPU slicing is not supported.

[Audience question about on-premises support] That I'm not too sure about either. Karpenter is open source now, so if the open-source community decides to release it for on-premise Kubernetes, that's possible — it's no longer owned by us, it has been donated to the CNCF — so anyone can technically take it, plug it in and play with it, though you might need to reconfigure the whole logic.

Any final questions before we head off? Everyone's an expert in EKS now, everyone knows how to use Karpenter — that's what I'm hoping to see. All right.