Transcript for:
Google Kubernetes Engine Lecture Notes

In this final topic, you'll learn how to leverage Google Kubernetes Engine. You've already discovered the spectrum between infrastructure as a service and platform as a service, and you've learned about Compute Engine, which is the infrastructure-as-a-service offering of GCP, with access to servers, file systems, and networking. Now you'll see an introduction to containers and GKE, which is a hybrid that conceptually sits between the two: it offers the managed infrastructure of infrastructure as a service with the developer orientation of platform as a service.

GKE is ideal for those who have been challenged when deploying or maintaining a fleet of VMs and have determined that containers are the solution. It's also ideal when organizations have containerized their workloads and need a system on which to run and manage them, and don't have dependencies on kernel changes or on a specific non-Linux operating system. With GKE, there's no need to ever touch a server or infrastructure.

So how does containerization work? Infrastructure as a service allows you to share compute resources with other developers by virtualizing the hardware using virtual machines. Each developer can deploy their own operating system, access the hardware, and build their applications in a self-contained environment with access to their own runtimes and libraries, as well as their own partitions of RAM, file systems, networking interfaces, and so on. You have your tools of choice on your own configurable system, so you can install your favorite runtime, web server, database, or middleware, configure the underlying system resources such as disk space, disk I/O, or networking, and build as you like.

This flexibility comes with a cost. The smallest unit of compute is an app with its VM; the guest OS may be large, even gigabytes in size, and take minutes to boot. As demand for your application increases, you have to copy an entire VM and boot the guest OS for each instance of your app, which can be slow and costly.

A platform as a service provides hosted services and an environment that can scale workloads independently. All you do is write your code in self-contained workloads that use these services and include any dependent libraries. Workloads do not need to represent entire applications; they are easier to decouple because they are not tied to the underlying hardware, operating system, or much of the software stack that you used to manage. As demand for your app increases, the platform scales your apps seamlessly and independently by workload and infrastructure. This scales rapidly and encourages you to build your applications as decoupled microservices that run more efficiently, but you won't be able to fine-tune the underlying architecture to save cost.

That's where containers come in. The idea of a container is to give you the independent scalability of workloads in a platform as a service and an abstraction layer of the operating system and hardware in an infrastructure as a service. A container only requires a few system calls to create, and it starts as quickly as a process. All you need on each host is an OS kernel that supports containers and a container runtime. In a sense, you're virtualizing the operating system. It scales like platform as a service but gives you nearly the same flexibility as infrastructure as a service. Containers provide an abstraction layer over the hardware and operating system: an invisible box with configurable access to isolated partitions of the file system, RAM, and networking, as well as fast startup with only a few system calls.
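To make that "starts as quickly as a process" point concrete, here is a minimal sketch using the Docker SDK for Python (the docker package). It assumes a local Docker daemon is running; the nginx image and the port mapping are just illustrative placeholders, not anything prescribed by the lecture.

```python
# Minimal sketch: start a container and time how quickly it comes up.
# Assumes a local Docker daemon and `pip install docker`; the image and
# port mapping below are illustrative placeholders.
import time
import docker

client = docker.from_env()          # connect to the local Docker daemon

start = time.time()
container = client.containers.run(
    "nginx:alpine",                 # small web-server image (placeholder)
    detach=True,                    # return immediately, like starting a process
    ports={"80/tcp": 8080},         # map container port 80 to host port 8080
)
print(f"container {container.short_id} started in {time.time() - start:.2f}s")

container.stop()
container.remove()
```

Compare that to copying and booting an entire VM image for each new instance: the container shares the host kernel, so startup is closer to launching a process than to booting an operating system.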
Using a common host configuration, you can deploy hundreds of containers on a group of servers. If you want to scale, for example, a web server, you can do so in seconds and deploy any number of containers, depending on the size of your workload, on a single host or a group of hosts. You'll likely want to build your applications using lots of containers, each performing their own function, like microservices. If you build them this way and connect them with network connections, you can make them modular, deploy them easily, and scale them independently across a group of hosts. The hosts can scale up and down and start and stop containers as demand for your app changes or as hosts fail. With a cluster, you can connect containers using network connections, build code modularly, deploy easily, and scale containers and hosts independently for maximum efficiency and savings.

Kubernetes is an open source container orchestration tool you can use to simplify the management of containerized environments. You can install Kubernetes on a group of your own managed servers, or run it as a hosted service in GCP on a cluster of managed Compute Engine instances called Google Kubernetes Engine. Kubernetes makes it easy to orchestrate many containers on many hosts, scale them as microservices, and easily deploy rollouts and rollbacks. Kubernetes was built by Google to run applications at scale. Kubernetes lets you install the system on local servers or in the cloud, manage container networking and storage, deploy rollouts and rollbacks, and monitor and manage container and host health.

Just like shipping containers, the software container makes it easier for teams to package, manage, and ship their code. They write software applications that run in a container; the container provides the operating system needed to run their application, and the container will run on any container platform. This can save a lot of time and cost compared to running servers or virtual machines. Like a virtual machine imitates a computer, a container imitates an operating system. Everything at Google runs on containers: Gmail, Web Search, Maps, MapReduce, batch processing, Google File System, Colossus, even Cloud Functions are VMs in containers. Google launches over 2 billion containers per week.

Docker is the tool that puts the application and everything it needs in the container. Once the application is in a container, it can be moved anywhere that will run Docker containers: any laptop, server, or cloud provider. This portability makes code easier to produce, manage, troubleshoot, and update. For service providers, containers make it easy to develop code that can be ported to the customer and back. Kubernetes is an open source container orchestration tool for managing a cluster of Docker Linux containers as a single system. It can be run in cloud and on-premises environments, and it's inspired and informed by Google's experiences and internal systems.

GKE is a managed environment for deploying containerized apps. It brings Google's latest innovations in developer productivity, resource efficiency, automated operations, and open source flexibility to accelerate time to market. GKE is a powerful cluster manager and orchestration system for running Docker containers in Google Cloud. GKE manages containers automatically based on specifications such as CPU and memory, and it's built on the open source Kubernetes system, making it easy for users to orchestrate container clusters or groups of containers.
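To illustrate the "scale a web server in seconds" and "specifications such as CPU and memory" ideas, here is a minimal sketch using the official Kubernetes Python client (the kubernetes package). It assumes a kubeconfig already pointing at a cluster (for GKE, typically obtained with gcloud container clusters get-credentials); the deployment name, image, and resource values are illustrative placeholders.

```python
# Minimal sketch: declare a small web-server Deployment with CPU/memory
# requests, then scale it out. Assumes `pip install kubernetes` and a
# kubeconfig that already points at a cluster; all names and values
# below are illustrative placeholders.
from kubernetes import client, config

config.load_kube_config()            # use local kubeconfig credentials
apps = client.AppsV1Api()

web_container = client.V1Container(
    name="web",
    image="nginx:1.25",              # placeholder web-server image
    ports=[client.V1ContainerPort(container_port=80)],
    resources=client.V1ResourceRequirements(
        requests={"cpu": "250m", "memory": "128Mi"},   # what the scheduler reserves
        limits={"cpu": "500m", "memory": "256Mi"},     # hard ceiling per container
    ),
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,                                    # start with three containers
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[web_container]),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)

# Scaling is a single declarative change: ask for ten replicas and the
# cluster starts or stops containers to match.
apps.patch_namespaced_deployment_scale(
    name="web",
    namespace="default",
    body={"spec": {"replicas": 10}},
)
```

The CPU and memory requests are exactly the kind of specification GKE uses to decide where containers run and how the cluster schedules them.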
Because it's built on the open source Kubernetes system, GKE provides customers the flexibility to take advantage of on-premises, hybrid, or public cloud infrastructure.
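For completeness, here is a hedged sketch of creating a GKE cluster programmatically with the google-cloud-container Python client library; the project ID, zone, and cluster name are placeholders, and the same thing is often done with the gcloud CLI or the Cloud Console instead.

```python
# Minimal sketch: create a small GKE cluster with the google-cloud-container
# client (`pip install google-cloud-container`). Project, location, and
# cluster name are placeholders; authentication comes from Application
# Default Credentials.
from google.cloud import container_v1

gke = container_v1.ClusterManagerClient()

cluster = container_v1.Cluster(
    name="demo-cluster",            # placeholder cluster name
    initial_node_count=3,           # three Compute Engine nodes to start
)

operation = gke.create_cluster(
    parent="projects/my-project/locations/us-central1-a",  # placeholder project/zone
    cluster=cluster,
)
print("cluster creation started:", operation.name)
```

Once the cluster is up, the Kubernetes sketch shown earlier works against it unchanged, which is the point of GKE being built on open source Kubernetes.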