Transcript for:
Home Lab Tour: Services and Configurations

Hey everybody, and welcome back to Jim's Garage. In this video we're continuing the home lab tour, and this time we're focusing on the services. In the last video, which you should go and check out, I spoke about all of the hardware I have. Here we're going to take a look at the actual network configuration, i.e. things like VLANs, my firewall, my WAF and my proxy, and then we'll get on to most of the applications. I'm running those in a sort of dual setup: I've got a Docker host and a Kubernetes cluster. Let's dive right into it.

Just to orient yourselves, from a high-level overview this is pretty much my network. I spoke about this more in my previous video, so go and check that out, but we're going to take a bottom-up approach: we'll look at the internet and the networking first, then the actual devices themselves and what I'm running on them.

The first thing to talk about is the firewall. For my firewall I'm currently running OPNsense inside a virtual machine; we'll have a look at that in a moment. I've previously used Sophos XG, which I think is in the video from the last time I did this, and I've also used UniFi Dream Machines in the past, which is something I'm going to be revisiting in the near future.

So what does this look like? We'll have to dash back and forward between Proxmox and here; I'll do a proper Proxmox breakdown later in the video. If we head over to my Proxmox installation, you'll see that I've got this "OPNsense HA" VM. That's high availability, because it can move between any of these three machines. If you go to the machines themselves, you'll notice that even on the networking tab every single one has basically the same layout. That's why, if you look at the hardware of the OPNsense HA VM, it's just linked to the same ports on each device, and all of those are wired up exactly as I showed in the previous video: the cables go into exactly the same port on each machine, so the VM doesn't really care where it is. And if we go to the data centre view in Proxmox, where the HA is configured, you'll see that VM 100 is able to move between any of the nodes.

Within OPNsense I've got a number of interfaces, I think about ten. These are all different VLANs that I'm using to segment my network into different zones: some things go on, say, a DMZ, and things that are not trusted go elsewhere; for example, my NAS sits on its own VLAN. I try to strike a delicate balance between micro-segmentation and creating too much hassle to actually administrate. On top of that there's not much else apart from WireGuard. I put all of my management stuff on VLAN 200, so that's things like Proxmox itself, my Docker host, my Kubernetes machines, etc. That's my most trusted network and it has very restrictive access internally. I follow the same principle for my NAS, because the NAS is my backup and holds all of my sensitive data, all of the stuff I care about, photos, etc., so again I want to be very prescriptive about the levels of access I give to it. Each of these zones has corresponding firewall rules that limit access down as far as possible.

Beyond that there's not much else, other than a few port forwards for people I want to give access to. Those are restricted to WireGuard and to the IP address of the individual, tied back to a dynamic DNS entry; in OPNsense you can tie people's firewall rules down to their dynamic DNS, which updates automatically. So that's pretty much it from a firewall perspective. I'm going to go now over to the Proxmox setup, because in the last video I
showed you the physical setup of those three MS-01s; this is what it looks like once it's configured. As I discussed in the previous video, I've got this set up in two ways. There's a traditional network running over the ethernet ports, the SFP+ and the 2.5-gig ports; those are routed through into the core switch, so you've got the SFPs in the aggregation switch, and the RJ45s going into the 2.5-gig ports for the 2-gig fibre. Then, crucially, there's this separate network here, the Thunderbolt ring. That Thunderbolt ring is what my Ceph runs over, and Ceph is provisioned for my VMs. For things like my Kubernetes cluster, the master nodes run with their data sat on that Ceph network. You can see it here: this hard disk, the VM disks storage, is actually on Ceph, this one here, and if we look at the actual disks you can see all of these machines are available on Ceph. The benefit of that is resilience: I want my Kubernetes cluster to be highly available (it is naturally), but distributing the storage across the other systems adds further resilience.

To show the benefit of this approach, let me keep pinging Cloudflare and, in the background, do a migration. Here's OPNsense HA; let me hit migrate and move it over to Dorn, the second machine down here. Let's pull up the command prompt just to make sure the ping is still going. You can see it's now migrating; there will probably be one tick where a ping gets missed, but other than that it should be fine. And there it is: it's just migrated, the request timed out once, and then we're back up and running, with the VM now on my second node. So even when one of the machines fails, it doesn't bring my network down, which is awesome. Remember, only the SFP+ ports and the RJ45 ports are routed through OPNsense; I've still got that backhaul Ceph network, which is how this is all able to work and fail over. Even if I turn off my main switch, this stuff should fail over.

If you look further into my Proxmox setup, what have I actually got running in there? You'll notice I don't have any LXCs at the moment; these are all virtual machines. That's just a point in time. I do spin up LXCs now and again for testing, but typically I prefer a virtual machine, just because I find it easier to move across different systems: you've got the isolated kernel, there's little dependency on the underlying host, and it pretty much just works. Yes, there are overheads with virtual machines versus LXCs, but I think the benefits outweigh the negatives, and they're still pretty lightweight, to be honest.

So what have we got here? There's this test Docker VM, which is pretty much what it says on the tin; as someone who's constantly creating videos and spinning new things up, I often want a dedicated machine just for testing. Moving down, we get to my Kubernetes cluster. In this cluster I've got a number of masters and a number of worker nodes, distributed evenly across my three machines, so each one should have an agent, which is the worker node, and a server, i.e. a master node; the masters are in charge of all of the management of the cluster. You'll see some old VMs as well, from previous lab videos, and you'll also see my Talos Kubernetes cluster, which is there just for testing. There are loads of test machines here too: test VMs 1 and 2, going all the way up to 5, distributed across all three nodes.
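For reference, the HA behaviour shown with the OPNsense VM is declared in Proxmox's `/etc/pve/ha/resources.cfg`; an entry for VM 100 might look roughly like this (the group name is hypothetical, not my actual config):

```
vm: 100
	comment OPNsense firewall
	group all-nodes
	state started
	max_restart 1
	max_relocate 1
```

With a resource like that defined, the HA manager restarts or relocates the VM onto another node in the group when its current node fails.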
Again, those are usually for doing more Kubernetes and clustering work, things like spinning up Docker Swarm, and having them ready lets me do that quite quickly. All of these machines have their MAC addresses saved within the firewall, so that every time I boot them up I know exactly where they are, and they've all got snapshots associated with them, so during testing I can roll back to the start and deploy over and over. That's really good for when I do videos on things like Ansible, or shell scripts, or any kind of automation: being able to roll back to a static environment is really handy.

Other than that, it's these cloud images, which I've covered in previous videos and which make my life so much easier. Having cloud images means I can basically right-click, click clone, give it a name, set it to a full clone, and a new virtual machine is created almost instantly; it's automatically up and running and it automatically pulls all of the latest updates. For someone who's constantly spinning stuff up, that's really, really helpful.

Other things of note: I've got Proxmox Backup Server, which backs up straight to my NAS. As you can see, that's a little bit full; I need to go through and clear some stuff out, and think about expanding my NAS. Proxmox Backup Server sits on this second node here and is responsible for backing up a number of my virtual machines.

Then, outside of the cluster and those virtual machines, I have my Docker VM. As I mentioned at the start of this video, I have a dual setup. I put some things in Docker for testing, obviously for my videos, but also if I'm trying out a new application I'll typically stick it in Docker first, just because it's much simpler to get up and running and much easier to debug and problem-solve. Once I'm happy with it, once I realise it's something I want to keep running in my home lab, I'll transition it over into Kubernetes, and that gives me more resilience in terms of uptime.

Now that we've discussed the overlying firewall and the Proxmox setup that puts all this together, let's look at the actual network layout, because I'm running this through UniFi. If I go to my UniFi setup, you can see the physical devices I showed in my past video: the aggregation switch, the 48-port PoE switch, and my two access points. I've also got those two cameras for my CCTV, which we'll look at later, but I'm not running those through UniFi; I'm using Frigate for them, which we'll also come on to later.

The actual topology of the network is pretty straightforward (sorry, it's quite small). Effectively you've got all of these devices here, which are wireless clients; those go into their respective access points, and the access points, plus the physical devices, all plug into the switch. On the other side you've got the upstream aggregation switch, so the 48-port plugs into the aggregation switch, and into that you've also got all of those MS-01s. On this one here you can see that both of its SFP+ ports go into the aggregation switch, and the remaining connection is for my NAS, which sits off to the side; that plugs into the aggregation switch too. The aggregation switch then goes up to the firewall, but the firewall itself is actually running on the MS-01s, so it sits on top: you've got OPNsense, then downstream is the aggregation switch, and everything piggybacks off the back of that. Hopefully that makes sense in terms of how this is all cobbled together. Really, nothing too
exciting. If you have a look at the port manager, you'll see that I'm massively underutilising this switch; that's largely because everything now goes onto those MS-01s, but I try to break the ports up as required. It's a PoE switch as well, which is pretty nice and cuts down on the number of wires: you can see those are my cameras here and my access points over here, plus a number of these ports are breakouts throughout the house that aren't currently being used. As I mentioned in the past video, I'm really looking forward to getting more into the UniFi ecosystem, especially with version 9 being released, and I'm going to be replacing OPNsense, at least temporarily, with the UDM Pro Max; I'm keen to see how I get on with that.

Following on from that outside-in perspective, having spoken about the firewall, how it's cobbled together, the networks, and how you get into my services, the next thing is Kubernetes. But there's one final thing before that: my proxy and the security I have on it; then we'll talk about the services.

If we log into Rancher, which is what I use to manage my cluster, you can see I'm running this on k3s. I have dabbled in the past with RKE2, and I do have a kind of test cluster set up with it. RKE2 is good from a security perspective, which is obviously what it's designed for, but for a home lab I've fallen back to k3s. It seems to be much more widely adopted within the home lab space, it tends to be more popular in general, and it's easier to work with because it isn't as locked down as RKE2. If you need security, though, RKE2 is definitely the way to go, with those hardened images and so on.

Now, the way this works is that once a connection comes in through the firewall, it goes to my reverse proxy, which, as you know very well by now, is Traefik, and if we look down here we can see Traefik running in Kubernetes. Clicking on Traefik, you can see it's actually running on agent 3 at the moment, which is one of my physical nodes. I could scale this up if I wanted to, but I just keep it as one pod and let it fail over between nodes; it's always trying to keep one pod alive. On top of Traefik I run CrowdSec. Here you can see CrowdSec itself, and if we look at the pods, CrowdSec is running three agents, one on each of the nodes, so on agent 03, 02 and 01, and then the local API here, running on agent 01. The reason I have that is because, whilst I do have a firewall, it isn't able to inspect the HTTPS traffic. The way I get around that is by having Traefik handle the SSL, the offloading of HTTPS, and CrowdSec is baked into the proxy itself, so it can see all of the data in plain text. It's a bit like what I always advise people about understanding what's happening with a Cloudflare tunnel: I'm not saying they're bad, just understand that it's doing all the SSL offloading for you, so Cloudflare does get to see all of your traffic unless you specifically set it up to use HTTPS end to end. I'll probably do a video on that in the future, because if you're going to go down the Cloudflare tunnel route, (a) you should be aware of this, and (b) you should use HTTPS anyway.

That's kind of it from an outside-in perspective, in terms of how anyone gets access into here. As I've said, everybody connects in through WireGuard, the list of people who can get in is restricted by IP address, and they can only access the services through the proxy; hopefully CrowdSec is good enough to pick out anything too nasty. I also have some separate rules within my cluster, through things like network policies, that further restrict the level of access people have, and in the past I've set up separate instances of some of the services I use as a kind of DMZ for people outside of my network, i.e. untrusted.
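To illustrate the kind of network policy I mean, here's a minimal sketch that only admits traffic to an app's pods from the namespace the ingress proxy runs in; the namespace, labels and port are hypothetical, not my actual rules:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-traefik-only
  namespace: my-app                # hypothetical app namespace
spec:
  podSelector:
    matchLabels:
      app: my-app                  # the pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: traefik   # only the proxy's namespace
      ports:
        - protocol: TCP
          port: 8080               # the app's container port
```

Everything not explicitly allowed by a matching policy is then dropped for those pods, which is how you keep less-trusted users boxed into just the services they're meant to reach.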
So let's have a quick look through what I'm running in Kubernetes, and then we'll also look at what I'm running in Docker. Frigate I mentioned before: Frigate is for my CCTV. If we load up Frigate here, you'd see the two cameras at the front and the back of my house; you can't, because I've blurred them out. In the past I was running Frigate with one of these, a Coral TPU (this one is actually a dual TPU), but, as I spoke about in my last video, when I moved from my Dell R730 to the MS-01, the MS-01's slot isn't wired for it, so I couldn't get both Coral TPUs working. Fortunately, I decided to transition it over anyway, so it now uses the integrated GPU on my CPU rather than a discrete TPU card. There are some pros and cons to that, largely down to latency, but for what I need it for, I don't really care.

If we go back to my Proxmox instance, you'll see this for the agents. Let's have a look at Abaddon: if we look at the hardware here, you can see I've passed the iGPU through to the machine, and it's exactly the same for Dorn; you can see the GPU is passed through there too. For Sanguinius, though, the GPU isn't passed through. That's because I was previously passing it through to my Docker machine, so I could test Frigate on the iGPU and check it was working. The benefit of now having Frigate running in Kubernetes is that I get high availability on my CCTV, which is obviously beneficial: you want to make sure that's always up and running. Frigate itself I think is really good: a really easy user interface, dead easy to click through and review footage, it still has the AI object detection, and you can also export footage pretty easily.
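For reference, moving Frigate from a Coral to the Intel iGPU is mostly a config change; a sketch of the relevant fragment of Frigate's `config.yml`, assuming the OpenVINO detector (camera sections omitted):

```yaml
# run object detection on the Intel iGPU via OpenVINO instead of a Coral TPU
detectors:
  ov:
    type: openvino
    device: GPU

# use VAAPI hardware acceleration for video decoding on the same iGPU
ffmpeg:
  hwaccel_args: preset-vaapi
```

The latency trade-off mentioned above is exactly this: OpenVINO inference on the iGPU is typically a little slower per frame than a Coral, but for a couple of cameras it's well within budget.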
Now, one thing I'm really excited about is getting back into the UniFi system, because when I previously used it, it was on the UDM Pro, which only had a single drive, and there was no easy way to export the footage; it felt a bit like vendor lock-in. That's all changed now, and I'm really excited to dive back in. I'd probably say the user experience and the user interface, with everything on that single pane of glass, does look nicer, albeit I do still love Frigate: it's fully functional and I've used it for years without any issues. So I'm keen to see where that's heading.

Heading back into Kubernetes, you can see I've got Gotify set up; Gotify is a notification server. I'm going to skip ahead quickly and talk about Homepage, just because Homepage is my dashboard and it gives you a nice visual overview of everything I have running. On Homepage, you can see it's running in Kubernetes, and it's a one-stop shop for most of the things I have running, at least the things that have an integration. At the top you can see all of my cluster details, telling me about the resources (as you can see, I'm barely scratching the surface on these devices); it also shows the space available on each of those devices, and it plugs into my UniFi gear.

Hopping back into Rancher, the next thing was Gotify. Here you can see I've got a shortcut to Gotify, and if we click that, you can see I've got notifications set up down here for different applications, different users, etc. This is set up so that, (a), you get notifications here, and (b), I've got the Gotify application on my mobile phone, so whenever I'm at home I'll receive them, and when I'm out and about I'll usually have my split-tunnel VPN enabled. I've also got SMTP configured, so it can send me emails as well if for whatever reason that doesn't work.

Next up we've got Home Assistant, which I could speak about for hours, but effectively Home Assistant is what controls my house. My setup is pretty rudimentary: I love the product, I just don't have masses of IoT devices around the house; it's predominantly lights, switches and a few sensors. That might be something I look into more in the future, but to be honest I just can't think what I'd need all of those devices for. Home Assistant is plugged into my CCTV, so there's a connector into Frigate that brings the cameras into the dashboards here, and I've also got some other metrics on here for when switches and events occur. I've got all of my temperatures and things here; this one is a particulate sensor running on a Raspberry Pi Pico. It tends to crash every week or so and I have to reset it, and I'm not sure what happens there, but it measures particulates, and I have it because it's where the fire is, for the kids; it measures the particles in the room and sends some alerts. After that we've got the lights and the different light events around the house. As you can see, my wife hates these kinds of gadgets, so it's largely just the garage and the hallway, so that when I walk in here the lights turn on, and when I come out at night everything automatically lights up. I've got some integrations with various media players, largely just in the garage and for testing, but I don't actually use them that much, because the TV is just over there and I find the remote much more convenient. If I wanted to, though, I could fall back and control it from here; it might be quite nice if I'm on the computer and the kids come in and I want to turn off the TV remotely. As I covered in the previous video, I do have HACS set up, the community store, and there are a few custom integrations, sort of non-official third-party ones, that I have in there.
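An automation like those hallway lights is only a few lines of Home Assistant YAML; the entity IDs below are invented for illustration, not my actual devices:

```yaml
# configuration.yaml fragment: turn the hallway light on when motion is detected
automation:
  - alias: "Hallway light on motion"
    trigger:
      - platform: state
        entity_id: binary_sensor.hallway_motion   # hypothetical motion sensor
        to: "on"
    action:
      - service: light.turn_on
        target:
          entity_id: light.hallway                # hypothetical light
```

Pair it with a second automation (or a `for:` duration on the trigger) to turn the light back off after the motion clears.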
Moving on down the list of applications, I do have Jellyfin installed, though I tend not to use Jellyfin too much anymore (I can see the horrified faces). I find that whilst it works well on most devices, it doesn't work well on all of them: some devices simply don't have a client to view it, and sometimes I get weird transcoding bugs, and that leads me to fall back to Plex. I know Plex is third party, it's not open source, and all of this, that and the other, but I do find that if you just want a better user experience, it's pretty much always guaranteed to work.

Another application I use is Memos. Memos is just a very simple, Google Keep-like way of keeping notes. I often use it to jot down ideas for videos, people's requests, etc.; I'll just spin up Memos (I've got a video on that) and drop a note in there, which is pretty handy. We'll skip over MetalLB, because that's just a bare-metal load balancer for Kubernetes that gives services an IP address so you can route to them like you would a standard Docker service; I've covered that in my k3s video previously. I've got the ubiquitous Minecraft server, obviously, who doesn't, and that's just spun up within Kubernetes; it's mainly an academic exercise, but it's also for playing with the kids. I've got MiroTalk, which I think is a great product: an open-source, browser-based service for calls and messaging, plus things like whiteboarding and streaming. It's dead simple to get up and running: you're straight in with a couple of clicks, and then you can just copy the URL and send it to whoever you want to give access to. A really cool little service. Again, it's only available to people who can access my network, and it lets me do voice calls and so on with people I trust. After that we've got Mosquitto, which is an MQTT broker, largely for my lights and my Zigbee devices.
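Backing up a step to MetalLB: its whole job is handing LoadBalancer services an address from a pool you declare. A minimal sketch, with a made-up address range, might be:

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.200.240-192.168.200.250   # placeholder range on the LAN
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
```

Any service of `type: LoadBalancer` then gets the next free address from that pool and is announced on the LAN via layer-2 ARP, which is what makes a cluster service reachable like a plain Docker container would be.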
I've got the ConBee 2 stick, which I spoke about in the last video; that's registered to physically listen on the Zigbee mesh network for events, it pushes them to MQTT, and Home Assistant listens to that, as I mentioned in that video a while back. That's still pretty old school, though: you can actually put everything into Home Assistant directly, and, thanks to everyone who mentioned it after the last video, I have purchased a new PoE Zigbee coordinator, and that's going to revolutionise the way I do all this. I'm basically going to cut out Mosquitto and plug it straight into Home Assistant; it doesn't make sense to have additional services spun up when I can do it all in one place.

The next one, node feature discovery, sounds pretty boring, but it's actually related to the GPUs on these agent nodes. It deploys within the cluster and works out what each node can do, and it discovered that they have a GPU; alongside it, the Intel GPU plugin is deployed for the cluster, and what that means is that various containers, or pods in this case, can have access to the GPU. If we have a quick look on here at things like Plex and Jellyfin and Frigate, you'll note that I only have two iGPUs available, because only two are passed through, but all three services can access them, which is pretty cool. I didn't realise when I first deployed it that you can have one iGPU split between hundreds, probably even thousands, of containers, a bit like how, with LXCs on a single Proxmox host, you can share the GPU between as many LXCs as you want.

After that I've got Pi-hole; I've actually got two Pi-holes up and running, and those are set as my primary and secondary DNS resolvers respectively. So if we go into the firewall, for
example, under the DHCP settings, you can see that the .21 and .222 addresses are configured as the DNS servers, and those are the IP addresses that are load-balanced to the respective Pi-hole services. So basically every device on my network has ad blocking enabled. My actual cluster has Pi-hole set as its primary, but it also has Cloudflare's 1.1.1.1 set as its secondary, and the reason I've done that is because, in the past, when my cluster was taken down, or I was doing something, or something had just broken, if Pi-hole wasn't up the cluster couldn't pull new containers; it couldn't even get Pi-hole up and running, because it couldn't query the container registry to pull down Pi-hole. That's a lot of Pi-hole. So now it will fail over to Cloudflare if it needs to, and then it can pull those containers.

The next one that's interesting is rclone, and I use rclone to back up my home lab; I've mentioned this in a previous video. What typically happens is a few things. If we go back to my Proxmox cluster, you'll know I've got Proxmox Backup Server running, configured to back up a number of virtual machines, and Proxmox Backup Server itself saves everything to my NAS, which is on here. So, long story short, all of my VMs, everything on Proxmox, gets backed up to my NAS, which is great, but that doesn't give us that 3-2-1 backup; that's where rclone comes in, with the use of a cloud backup. rclone still has a bit of a rough-and-ready interface, but it's a great way to make sure everything is running, through a nice GUI. What it's actually doing, if we go to the explorer here (this is all driven through configs, and I've done a video on it previously), is using this Google Drive NAS crypt remote, which maps to a folder on my Google Drive. I've got an enterprise Google Drive set up, and this replicates certain folders and files from my physical NAS up into the cloud.
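On the Pi-hole failover point above: on the cluster nodes that's just the resolver list, for example as a netplan snippet on an Ubuntu cloud image (interface name and IPs are placeholders; note that most resolvers treat this as an ordered list to try, not a guaranteed strict primary/secondary):

```yaml
# /etc/netplan/01-dns.yaml (sketch)
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
      nameservers:
        addresses:
          - 192.168.1.21   # Pi-hole first, so normal lookups get ad blocking
          - 1.1.1.1        # Cloudflare fallback if Pi-hole is unreachable
```

That's what lets the cluster still resolve the container registry, and therefore pull the Pi-hole image itself, even when Pi-hole is the thing that's down.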
It does that on a regular basis, roughly once a week, and backs everything up into the cloud. It also covers my CCTV; that's a synced folder, so whenever Frigate records something, as soon as it's stored to the NAS it's automatically uploaded, i.e. synced, up to my Google Drive. That's pretty good in case of a smash-and-grab: if someone took my NAS, hopefully that footage would be up in the cloud and I'd be able to get access to it if it were ever needed. This is an invaluable service as well, because I can dial into my NAS from here. It's all encrypted: if you look at my Google Drive itself, all you'll actually see are these long encrypted file names, so there's no access to any of this unless you've got rclone running, you've mounted the remote, and you're using those keys to decrypt it. That's a pretty cool way of using a major cloud provider while still having security and privacy through encryption.

After rclone we've got Reflector, which just enables certificates to be reflected between namespaces; that's needed for Traefik and the SSL certificates. After that I've got Trilium, which I use for more substantial note-taking. I believe it's actually deprecated now, which is a shame, but Trilium is a good way for me to write up notes for certain videos: I'll use Memos for topics, very short prompts, shopping lists, etc., and I'll use Trilium to actually write things up, flesh them out, and put down a few bullet points about what I want to cover in what order. The UniFi controller, well, we know what that is: it's this thing over here, and it's running in Kubernetes, which is really handy because it means I get the high-availability benefit of having Kubernetes there. Uptime Kuma is also running within Kubernetes, and that's on the Homepage as well, which you can see here, and if we have a quick look on there,
you can see that I've got this tied to actual services, so for example things like the Minecraft server, Mosquitto and Pi-hole, but I've also got it tied to the physical nodes as well, my Kubernetes nodes, so that if a node goes down it will tell me about it. This is then tied in the back end to my Gotify, so it will send me notifications (you can see Uptime Kuma on here), and it will also send me emails as well.

Next up we've got Vaultwarden, which is an open-source implementation of the Bitwarden server, and I use that to store all of my passwords. Up here in my browser, for example, I get all of my passwords, access to my credit cards, all of those sorts of things. I like to have that self-hosted, because then I don't have a dependency on a third party seeing all my passwords. There's obviously a trade-off: if this were to get breached I'd be in trouble, and the reliability of my home lab is probably not as good as something like Google or Bitwarden, but I think for now it's absolutely fine; I've used this for a number of years and it's been great. Also, your passwords are synced onto your phone, so you've got that cached copy if you ever need it when you're out and about.

Lastly, I've got WireGuard Easy up and running on my Kubernetes cluster, and that's basically there for doing a bit of testing. I actually have WireGuard configured over on my OPNsense; you can see that if I go to VPN and then WireGuard down here. The reason I've got it in Kubernetes is because I'm doing a bit of testing: I find that on my OPNsense I'm not able to get more than about 100 megabits per second through WireGuard, and I don't really know why. I think it's to do with the MTU and MSS, the packet sizes being sent, but I can't really find a solution, so if anyone knows, please do get in touch. I suspect it's probably complicated by having this virtualised setup with tagged VLANs, etc.
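For anyone wanting to help debug that throughput issue, the knobs in question are the tunnel MTU and TCP MSS. In a wg-quick style config the MTU side looks roughly like this (the keys are placeholders and the value is illustrative, not a known fix):

```ini
# /etc/wireguard/wg0.conf (sketch)
[Interface]
PrivateKey = <server-private-key>
Address = 10.10.10.1/24
ListenPort = 51820
MTU = 1380          # lowered to leave headroom for WireGuard + VLAN overhead

[Peer]
PublicKey = <peer-public-key>
AllowedIPs = 10.10.10.2/32
```

The other half is MSS clamping on the firewall interface so TCP inside the tunnel doesn't try to send segments that then fragment; testing a few MTU values with iperf3 through the tunnel is the usual way to find the sweet spot.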
I've probably not quite dialled in the sizes; I've done ping tests and all that, but if people can help, please do. So that pretty much concludes what's going on in Kubernetes. The one other thing that complements Kubernetes is Longhorn. If you click on Longhorn you'll see I've got these three nodes here. Good job I just checked, actually; I need to look into this, because there's obviously something taking up too much storage space. But as I've shown in previous videos, this is basically three storage nodes, and you can see it's the three agents, these ones here, so the node where all the containers are sitting with the GPU is also a Longhorn node. What that means is that all of the data for the containers is replicated across these three machines, so if we go back to Proxmox, agent 01, agent 02 and agent 03 down here will all have copies of the data, and all of that data is then backed up through Proxmox Backup Server onto my NAS, and then backed up into the cloud. So in terms of me losing this data, I think the odds are pretty low.

Next up is the Docker machine itself, which is pretty much my own Docker host and what I sometimes use for my videos, alongside that Docker test VM. If we fire up Portainer, you'll see what that looks like; again, it's synced into my Homepage, and if we click through, you'll see I've got a few test applications running. I've got Authentik running on here, which I've been meaning to put into Kubernetes for a while now; I just haven't got around to it. I've also got CrowdSec and Traefik set up, just for most of my videos and testing, and that's pretty much it. You can see a few apps from old videos: IT-Tools, and Posters, which I've been testing out since I released a video on it a few weeks ago and really like, so I'm probably going to move that over into Kubernetes, and the same with IT-Tools.
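That Docker-first workflow usually starts as a throwaway compose file on the test VM before an app earns a Kubernetes manifest; a typical trial stack is just a sketch like this (the image, ports and paths are only an example):

```yaml
# docker-compose.yml — quick trial of a new app before it graduates to Kubernetes
services:
  newapp:
    image: nginx:alpine          # stand-in for whatever app is being trialled
    container_name: newapp-test
    ports:
      - "8080:80"                # exposed directly; no proxy needed for a quick test
    volumes:
      - ./data:/usr/share/nginx/html
    restart: unless-stopped
```

Because it's one file and one `docker compose up -d`, it's much faster to iterate and debug than a deployment, service and ingress; only once the app proves itself does it get the full Kubernetes treatment.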
I just need to get around to actually putting it into Kubernetes. But that's pretty much it; that's all I use Docker for. When I did this video originally I was pretty much all within Docker, but now I'm pretty much all within Kubernetes, so that's been my transition over the past 18 months or so.

Thanks for watching, everybody. I know that was a little bit dry, but hopefully now you've got a sense of what I have in my home lab, the services I run, and how I've got them configured. You can obviously go and consult my channel for videos on pretty much every single thing I've discussed. Anyway, if you liked this video, give it a thumbs up, hit that subscribe button, and I'll see you in the next one. Take care, everybody.