class: title, self-paced Kubernetes Mastery<br/> .nav[*Self-paced version*] .debug[ ``` ``` These slides have been built from commit: b26c1ef [shared/title.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/shared/title.md)] --- class: title, in-person Kubernetes Mastery<br/><br/></br> .footnote[ **Course: http://www.kubernetesmastery.com** **Slides: https://slides.kubernetesmastery.com** ] .debug[[shared/title.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/shared/title.md)] --- name: toc-chapter-1 ## Chapter 1 - [A brief introduction](#toc-a-brief-introduction) - [Pre-requirements](#toc-pre-requirements) .debug[(auto-generated TOC)] --- name: toc-chapter-2 ## Chapter 2 - [What and why of orchestration](#toc-what-and-why-of-orchestration) - [Kubernetes concepts](#toc-kubernetes-concepts) .debug[(auto-generated TOC)] --- name: toc-chapter-3 ## Chapter 3 - [Getting a Kubernetes cluster for learning](#toc-getting-a-kubernetes-cluster-for-learning) - [Docker Desktop (Windows 10/macOS)](#toc-docker-desktop-windows-macos) - [minikube (Windows 10 Home)](#toc-minikube-windows--home) - [MicroK8s (Linux)](#toc-microks-linux) - [Web-based options](#toc-web-based-options) - [`shpod`: For a consistent Kubernetes experience ...](#toc-shpod-for-a-consistent-kubernetes-experience-) .debug[(auto-generated TOC)] --- name: toc-chapter-4 ## Chapter 4 - [First contact with `kubectl`](#toc-first-contact-with-kubectl) - [Running our first containers on Kubernetes](#toc-running-our-first-containers-on-kubernetes) - [Accessing logs from the CLI](#toc-accessing-logs-from-the-cli) - [Assignment 1: first steps](#toc-assignment--first-steps) .debug[(auto-generated TOC)] --- name: toc-chapter-5 ## Chapter 5 - [Exposing containers](#toc-exposing-containers) - [Kubernetes network model](#toc-kubernetes-network-model) - [Assignment 2: more about deployments](#toc-assignment--more-about-deployments) .debug[(auto-generated TOC)] --- name: toc-chapter-6 
## Chapter 6 - [Our sample application](#toc-our-sample-application) - [Shipping images with a registry](#toc-shipping-images-with-a-registry) - [Running DockerCoins on Kubernetes](#toc-running-dockercoins-on-kubernetes) - [Assignment 3: deploy wordsmith](#toc-assignment--deploy-wordsmith) .debug[(auto-generated TOC)] --- name: toc-chapter-7 ## Chapter 7 - [Scaling our demo app](#toc-scaling-our-demo-app) - [Deploying with YAML](#toc-deploying-with-yaml) - [The Kubernetes Dashboard](#toc-the-kubernetes-dashboard) - [Security implications of `kubectl apply`](#toc-security-implications-of-kubectl-apply) - [Daemon sets](#toc-daemon-sets) - [Labels and selectors](#toc-labels-and-selectors) - [Assignment 4: custom load balancing](#toc-assignment--custom-load-balancing) .debug[(auto-generated TOC)] --- name: toc-chapter-8 ## Chapter 8 - [Authoring YAML](#toc-authoring-yaml) - [Using server-dry-run and diff](#toc-using-server-dry-run-and-diff) - [Rolling updates](#toc-rolling-updates) - [Healthchecks](#toc-healthchecks) .debug[(auto-generated TOC)] --- name: toc-chapter-9 ## Chapter 9 - [Managing configuration](#toc-managing-configuration) .debug[(auto-generated TOC)] --- name: toc-chapter-10 ## Chapter 10 - [Exposing HTTP services with Ingress resources](#toc-exposing-http-services-with-ingress-resources) - [Ingress in action: NGINX](#toc-ingress-in-action-nginx) - [Swapping NGINX for Traefik](#toc-swapping-nginx-for-traefik) .debug[(auto-generated TOC)] --- name: toc-chapter-11 ## Chapter 11 - [Volumes](#toc-volumes) - [Stateful sets](#toc-stateful-sets) - [Running a Consul cluster](#toc-running-a-consul-cluster) - [Persistent Volumes Claims](#toc-persistent-volumes-claims) - [Local Persistent Volumes](#toc-local-persistent-volumes) .debug[(auto-generated TOC)] --- name: toc-chapter-12 ## Chapter 12 - [Kustomize](#toc-kustomize) - [Managing stacks with Helm](#toc-managing-stacks-with-helm) - [Helm chart format](#toc-helm-chart-format) - [Creating a basic 
chart](#toc-creating-a-basic-chart) - [Creating better Helm charts](#toc-creating-better-helm-charts) - [Helm secrets](#toc-helm-secrets) .debug[(auto-generated TOC)] --- name: toc-chapter-13 ## Chapter 13 - [Extending the Kubernetes API](#toc-extending-the-kubernetes-api) - [Operators](#toc-operators) - [Owners and dependents](#toc-owners-and-dependents) .debug[(auto-generated TOC)] --- name: toc-chapter-14 ## Chapter 14 - [Centralized logging](#toc-centralized-logging) - [Collecting metrics with Prometheus](#toc-collecting-metrics-with-prometheus) .debug[(auto-generated TOC)] --- name: toc-chapter-15 ## Chapter 15 - [Resource Limits](#toc-resource-limits) - [Defining min, max, and default resources](#toc-defining-min-max-and-default-resources) - [Namespace quotas](#toc-namespace-quotas) - [Limiting resources in practice](#toc-limiting-resources-in-practice) - [Checking pod and node resource usage](#toc-checking-pod-and-node-resource-usage) .debug[(auto-generated TOC)] --- name: toc-chapter-16 ## Chapter 16 - [Cluster sizing](#toc-cluster-sizing) - [The Horizontal Pod Autoscaler](#toc-the-horizontal-pod-autoscaler) .debug[(auto-generated TOC)] --- name: toc-chapter-17 ## Chapter 17 - [Declarative vs imperative](#toc-declarative-vs-imperative) - [Kubernetes Management Approaches](#toc-kubernetes-management-approaches) - [Recording deployment actions](#toc-recording-deployment-actions) - [Git-based workflows](#toc-git-based-workflows) .debug[(auto-generated TOC)] --- name: toc-chapter-18 ## Chapter 18 - [Building images with the Docker Engine](#toc-building-images-with-the-docker-engine) - [Building images with Kaniko](#toc-building-images-with-kaniko) .debug[(auto-generated TOC)] --- name: toc-chapter-19 ## Chapter 19 - [Building our own cluster](#toc-building-our-own-cluster) - [Adding nodes to the cluster](#toc-adding-nodes-to-the-cluster) - [API server availability](#toc-api-server-availability) - [Static pods](#toc-static-pods) .debug[(auto-generated TOC)] --- 
name: toc-chapter-20 ## Chapter 20 - [Owners and dependents](#toc-owners-and-dependents) - [Exposing HTTP services with Ingress resources](#toc-exposing-http-services-with-ingress-resources) - [Upgrading clusters](#toc-upgrading-clusters) - [Backing up clusters](#toc-backing-up-clusters) - [The Cloud Controller Manager](#toc-the-cloud-controller-manager) .debug[(auto-generated TOC)] --- name: toc-chapter-21 ## Chapter 21 - [Namespaces](#toc-namespaces) - [Controlling a Kubernetes cluster remotely](#toc-controlling-a-kubernetes-cluster-remotely) - [Accessing internal services](#toc-accessing-internal-services) - [Accessing the API with `kubectl proxy`](#toc-accessing-the-api-with-kubectl-proxy) .debug[(auto-generated TOC)] --- name: toc-chapter-22 ## Chapter 22 - [The Container Network Interface](#toc-the-container-network-interface) - [Interconnecting clusters](#toc-interconnecting-clusters) .debug[(auto-generated TOC)] --- name: toc-chapter-23 ## Chapter 23 - [Network policies](#toc-network-policies) - [Authentication and authorization](#toc-authentication-and-authorization) - [Pod Security Policies](#toc-pod-security-policies) - [The CSR API](#toc-the-csr-api) - [OpenID Connect](#toc-openid-connect) - [Securing the control plane](#toc-securing-the-control-plane) .debug[(auto-generated TOC)] --- name: toc-chapter-24 ## Chapter 24 - [Next steps](#toc-next-steps) - [Links and resources](#toc-links-and-resources) .debug[(auto-generated TOC)] .debug[[shared/toc.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/shared/toc.md)] --- class: pic .interstitial[] --- name: toc-a-brief-introduction class: title A brief introduction .nav[ [Previous section](#toc-) | [Back to table of contents](#toc-chapter-1) | [Next section](#toc-pre-requirements) ] .debug[(automatically generated title slide)] --- # A brief introduction - This was initially written by [Jérôme Petazzoni](https://twitter.com/jpetazzo) to support in-person, instructor-led workshops and 
tutorials - Credit is also due to [multiple contributors](https://github.com/BretFisher/kubernetes-mastery/graphs/contributors) — thank you! - I recommend using the Slack Chat to help you ... - ... And be comfortable spending some time reading the Kubernetes [documentation](https://kubernetes.io/docs/) ... - ... And looking for answers on [StackOverflow](http://stackoverflow.com/questions/tagged/kubernetes) and other outlets .debug[[k8smastery/intro.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/intro.md)] --- ## Hands on, you shall practice - Nobody ever became a Jedi by spending their lives reading Wookieepedia - Likewise, it will take more than merely *reading* these slides to make you an expert - These slides include *tons* of exercises and examples - They assume that you have access to a Kubernetes cluster .debug[[k8smastery/intro.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/intro.md)] --- class: pic .interstitial[] --- name: toc-pre-requirements class: title Pre-requirements .nav[ [Previous section](#toc-a-brief-introduction) | [Back to table of contents](#toc-chapter-1) | [Next section](#toc-what-and-why-of-orchestration) ] .debug[(automatically generated title slide)] --- # Pre-requirements - Be comfortable with the UNIX command line - navigating directories - editing files - a little bit of bash-fu (environment variables, loops) - Some Docker knowledge - `docker run`, `docker ps`, `docker build` - ideally, you know how to write a Dockerfile and build it <br/> (even if it's a `FROM` line and a couple of `RUN` commands) - It's totally OK if you are not a Docker expert!
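The "bash-fu" expected here is modest. A minimal refresher covering the two constructs mentioned above, environment variables and loops (the image names are made up for illustration):

```bash
# Set an environment variable; export makes it visible to child processes
export TAG=v1.3
echo "using tag $TAG"

# A small loop over a list of (hypothetical) service names
for svc in api webfront; do
  echo "would build image example/$svc:$TAG"
done
```

If you can read and tweak a snippet like this, you have all the shell skills these slides assume.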
.debug[[k8smastery/prereqs.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/prereqs.md)] --- class: title *Tell me and I forget.* <br/> *Teach me and I remember.* <br/> *Involve me and I learn.* Misattributed to Benjamin Franklin [(Probably inspired by Chinese Confucian philosopher Xunzi)](https://www.barrypopik.com/index.php/new_york_city/entry/tell_me_and_i_forget_teach_me_and_i_may_remember_involve_me_and_i_will_lear/) .debug[[k8smastery/prereqs.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/prereqs.md)] --- ## Hands-on exercises - The whole workshop is hands-on, with "exercises" - You are invited to reproduce these exercises with me - All exercises are identified with a dashed box *plus* keyboard icon .exercise[ - This is the stuff you're supposed to do! - Go to https://slides.kubernetesmastery.com to view these slides - Join the chat room: [Slack](https://chat.bretfisher.com/) <!-- ```open https://slides.kubernetesmastery.com``` --> ] .debug[[k8smastery/prereqs.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/prereqs.md)] --- class: pic .interstitial[] --- name: toc-what-and-why-of-orchestration class: title What and why of orchestration .nav[ [Previous section](#toc-pre-requirements) | [Back to table of contents](#toc-chapter-2) | [Next section](#toc-kubernetes-concepts) ] .debug[(automatically generated title slide)] --- # What and why of orchestration - There are many computing orchestrators - They make decisions about when and where to "do work" -- - We've done this since the dawn of computing: Mainframe schedulers, Puppet, Terraform, AWS, Mesos, Hadoop, etc. -- - Since 2014 we've had a resurgence of new orchestration projects because: -- 1. Popularity of distributed computing -- 2.
Docker containers as an app package and isolated runtime -- - We needed "many servers to act like one, and run many containers" -- - And the Container Orchestrator was born .debug[[k8smastery/orchestration.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/orchestration.md)] --- ## Container orchestrator - Many open source projects have been created in the last 5 years to: - Schedule running of containers on servers -- - Dispatch them across many nodes -- - Monitor and react to container and server health -- - Provide storage, networking, proxy, security, and logging features -- - Do all this in a declarative way, rather than imperative -- - Provide APIs to allow extensibility and management .debug[[k8smastery/orchestration.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/orchestration.md)] --- ## Major container orchestration projects - Kubernetes, aka K8s - Docker Swarm (and Swarm classic) - Apache Mesos/Marathon - Cloud Foundry - Amazon ECS (not OSS, AWS-only) - HashiCorp Nomad -- - **Many of these tools run on top of Docker Engine** -- - **Kubernetes is the *one* orchestrator with many _distributions_** .debug[[k8smastery/orchestration.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/orchestration.md)] --- ## Kubernetes distributions - Kubernetes "vanilla upstream" (not a distribution) -- - Cloud-Managed distros: AKS, GKE, EKS, DOK... -- - Self-Managed distros: RedHat OpenShift, Docker Enterprise, Rancher, Canonical Charmed, openSUSE Kubic... -- - Vanilla installers: kubeadm, kops, kubicorn...
-- - Local dev/test: Docker Desktop, minikube, microK8s -- - CI testing: kind -- - Special builds: Rancher k3s -- - And [Many, many more...](https://kubernetes.io/partners/#conformance) (86 as of June 2019) .debug[[k8smastery/orchestration.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/orchestration.md)] --- class: pic .interstitial[] --- name: toc-kubernetes-concepts class: title Kubernetes concepts .nav[ [Previous section](#toc-what-and-why-of-orchestration) | [Back to table of contents](#toc-chapter-2) | [Next section](#toc-getting-a-kubernetes-cluster-for-learning) ] .debug[(automatically generated title slide)] --- # Kubernetes concepts - Kubernetes is a container management system - It runs and manages containerized applications on a cluster (one or more servers) - Often this is simply called "container orchestration" - Sometimes shortened to Kube or K8s ("Kay-eights" or "Kates") .debug[[k8smastery/concepts-k8s.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/concepts-k8s.md)] --- ## Basic things we can ask Kubernetes to do -- - Start 5 containers using image `atseashop/api:v1.3` -- - Place an internal load balancer in front of these containers -- - Start 10 containers using image `atseashop/webfront:v1.3` -- - Place a public load balancer in front of these containers -- - It's Black Friday (or Christmas), traffic spikes: grow our cluster and add containers -- - New release!
Replace my containers with the new image `atseashop/webfront:v1.4` -- - Keep processing requests during the upgrade; update my containers one at a time .debug[[k8smastery/concepts-k8s.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/concepts-k8s.md)] --- ## Other things that Kubernetes can do for us - Basic autoscaling - Blue/green deployment, canary deployment - Long running services, but also batch (one-off) and CRON-like jobs - Overcommit our cluster and *evict* low-priority jobs - Run services with *stateful* data (databases etc.) - Fine-grained access control defining *what* can be done by *whom* on *which* resources - Integrating third party services (*service catalog*) - Automating complex tasks (*operators*) .debug[[k8smastery/concepts-k8s.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/concepts-k8s.md)] --- ## Kubernetes architecture .debug[[k8smastery/concepts-k8s.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/concepts-k8s.md)] --- class: pic  .debug[[k8smastery/concepts-k8s.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/concepts-k8s.md)] --- ## Kubernetes architecture - Ha ha ha ha - OK, I was trying to scare you, it's much simpler than that ❤️ .debug[[k8smastery/concepts-k8s.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/concepts-k8s.md)] --- class: pic  .debug[[k8smastery/concepts-k8s.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/concepts-k8s.md)] --- ## Credits - The first schema is a Kubernetes cluster with storage backed by multi-path iSCSI (Courtesy of [Yongbok Kim](https://www.yongbok.net/blog/)) - The second one is a simplified representation of a Kubernetes cluster (Courtesy of [Imesh Gunaratne](https://medium.com/containermind/a-reference-architecture-for-deploying-wso2-middleware-on-kubernetes-d4dee7601e8e)) 
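The requests listed earlier ("start N containers from an image, put a load balancer in front, roll out a new image") map to `kubectl` commands roughly like this. This is a hedged sketch with hypothetical object names; it needs a live cluster, and every step is covered properly later in the course:

```bash
# Run containers from an image (a Deployment manages the replicas)
kubectl create deployment webfront --image=atseashop/webfront:v1.3
kubectl scale deployment webfront --replicas=10

# Place a load balancer in front of them
kubectl expose deployment webfront --port=80 --type=LoadBalancer

# New release: swap the image; pods are replaced a batch at a time
kubectl set image deployment webfront webfront=atseashop/webfront:v1.4
```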
.debug[[k8smastery/concepts-k8s.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/concepts-k8s.md)] --- ## Kubernetes architecture: the nodes - The nodes executing our containers run a collection of services: - a container Engine (typically Docker) - kubelet (the "node agent") - kube-proxy (a necessary but not sufficient network component) - Nodes were formerly called "minions" (You might see that word in older articles or documentation) .debug[[k8smastery/concepts-k8s.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/concepts-k8s.md)] --- ## Kubernetes architecture: the control plane - The Kubernetes logic (its "brains") is a collection of services: - the API server (our point of entry to everything!) - core services like the scheduler and controller manager - `etcd` (a highly available key/value store; the "database" of Kubernetes) - Together, these services form the control plane of our cluster - The control plane is also called the "master" .debug[[k8smastery/concepts-k8s.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/concepts-k8s.md)] --- class: pic  .debug[[k8smastery/concepts-k8s.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/concepts-k8s.md)] --- class: extra-details ## Running the control plane on special nodes - It is common to reserve a dedicated node for the control plane (Except for single-node development clusters, like when using minikube) - This node is then called a "master" (Yes, this is ambiguous: is the "master" a node, or the whole control plane?) 
- Normal applications are restricted from running on this node (By using a mechanism called ["taints"](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/)) - When high availability is required, each service of the control plane must be resilient - The control plane is then replicated on multiple nodes (This is sometimes called a "multi-master" setup) .debug[[k8smastery/concepts-k8s.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/concepts-k8s.md)] --- class: extra-details ## Running the control plane outside containers - The services of the control plane can run in or out of containers - For instance: since `etcd` is a critical service, some people deploy it directly on a dedicated cluster (without containers) (This is illustrated on the first "super complicated" schema) - In some hosted Kubernetes offerings (e.g. AKS, GKE, EKS), the control plane is invisible (We only "see" a Kubernetes API endpoint) - In that case, there is no "master node" *For this reason, it is more accurate to say "control plane" rather than "master."* .debug[[k8smastery/concepts-k8s.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/concepts-k8s.md)] --- class: extra-details ## Do we need to run Docker at all? No! 
-- - By default, Kubernetes uses the Docker Engine to run containers -- - Or leverage other pluggable runtimes through the *Container Runtime Interface* -- - <del>We could also use `rkt` ("Rocket") from CoreOS</del> (deprecated) -- - [containerd](https://github.com/containerd/containerd/blob/master/README.md): maintained by Docker, IBM, and community - Used by Docker Engine, microK8s, k3s, GKE, and standalone; has `ctr` CLI -- - [CRI-O](https://github.com/cri-o/cri-o/blob/master/README.md): maintained by Red Hat, SUSE, and community; based on containerd - Used by OpenShift and Kubic, version matched to Kubernetes -- - [And more](https://kubernetes.io/docs/setup/production-environment/container-runtimes/) .debug[[k8smastery/concepts-k8s.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/concepts-k8s.md)] --- class: extra-details ## Do we need to run Docker at all? Yes! -- - In this course, we'll run our apps on a single node first - We may need to build images and ship them around - We can do these things without Docker <br/> (and get diagnosed with NIH¹ syndrome) - Docker is still the most stable container engine today <br/> (but other options are maturing very quickly) .footnote[¹[Not Invented Here](https://en.wikipedia.org/wiki/Not_invented_here)] .debug[[k8smastery/concepts-k8s.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/concepts-k8s.md)] --- class: extra-details ## Do we need to run Docker at all? - On our development environments, CI pipelines ... 
: *Yes, almost certainly* - On our production servers: *Yes (today)* *Probably not (in the future)* .footnote[More information about CRI [on the Kubernetes blog](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes)] .debug[[k8smastery/concepts-k8s.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/concepts-k8s.md)] --- ## Interacting with Kubernetes - We will interact with our Kubernetes cluster through the Kubernetes API - The Kubernetes API is (mostly) RESTful - It allows us to create, read, update, delete *resources* - A few common resource types are: - node (a machine — physical or virtual — in our cluster) - pod (group of containers running together on a node) - service (stable network endpoint to connect to one or multiple containers) .debug[[k8smastery/concepts-k8s.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/concepts-k8s.md)] --- class: pic  .debug[[k8smastery/concepts-k8s.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/concepts-k8s.md)] --- ## Pods - Pods are a new abstraction! 
-- - A *pod* can have multiple containers working together - (But you usually only have one container per pod) -- - Pod is our smallest deployable unit; Kubernetes can't manage containers directly -- - IP addresses are associated with *pods*, not with individual containers - Containers in a pod share `localhost`, and can share volumes -- - Multiple containers in a pod are deployed together - In reality, Docker doesn't know about pods, only containers/namespaces/volumes .debug[[k8smastery/concepts-k8s.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/concepts-k8s.md)] --- ## Credits - The first diagram is courtesy of Lucas Käldström, in [this presentation](https://speakerdeck.com/luxas/kubeadm-cluster-creation-internals-from-self-hosting-to-upgradability-and-ha) - it's one of the best Kubernetes architecture diagrams available! - The second diagram is courtesy of Weaveworks - a *pod* can have multiple containers working together - IP addresses are associated with *pods*, not with individual containers Both diagrams used with permission.
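The pod properties above (smallest deployable unit, one IP per pod, usually a single container) show up directly in a pod manifest. A minimal, hypothetical example (we'll write real manifests later):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello            # the pod, not the container, gets the IP address
spec:
  containers:            # a list, but usually with a single entry
  - name: web
    image: nginx:alpine
```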
.debug[[k8smastery/concepts-k8s.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/concepts-k8s.md)] --- class: pic .interstitial[] --- name: toc-getting-a-kubernetes-cluster-for-learning class: title Getting a Kubernetes cluster for learning .nav[ [Previous section](#toc-kubernetes-concepts) | [Back to table of contents](#toc-chapter-3) | [Next section](#toc-docker-desktop-windows-macos) ] .debug[(automatically generated title slide)] --- name: install # Getting a Kubernetes cluster for learning - Best: Get an environment locally - Docker Desktop (Win/macOS/Linux), Rancher Desktop (Win/macOS/Linux), or microk8s (Linux) - Small setup effort; free; flexible environments - Requires 2GB+ of memory -- - Good: Set up a cloud Linux host to run microk8s - Great if you don't have the local resources to run Kubernetes - Small setup effort; only free for a while - My $50 DigitalOcean coupon lets you run Kubernetes free for a month -- - Last choice: Use a browser-based solution - Low setup effort; but host is short-lived and has limited resources - Not all hands-on examples will work in the browser sandbox -- - For all environments, we'll use the `shpod` container for tools .debug[[k8smastery/install-summary.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/install-summary.md)] --- class: pic .interstitial[] --- name: toc-docker-desktop-windows-macos class: title Docker Desktop (Windows 10/macOS) .nav[ [Previous section](#toc-getting-a-kubernetes-cluster-for-learning) | [Back to table of contents](#toc-chapter-3) | [Next section](#toc-minikube-windows--home) ] .debug[(automatically generated title slide)] --- name: dd # Docker Desktop (Windows 10/macOS) - Docker Desktop (DD) is great for a local dev/test setup -- - Requires modern macOS or Windows 10 Pro/Ent/Edu (no Home) - Requires Hyper-V, and disables VirtualBox -- .exercise[ - [Download
Windows](https://download.docker.com/win/stable/Docker%20Desktop%20Installer.exe) or [macOS](https://download.docker.com/mac/stable/Docker.dmg) versions and install - For Windows, ensure you pick "Linux Containers" mode - Once running, enable Kubernetes in Settings/Preferences ] .debug[[k8smastery/install-docker-desktop.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/install-docker-desktop.md)] --- class: pic ## Docker Desktop for Windows  .debug[[k8smastery/install-docker-desktop.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/install-docker-desktop.md)] --- class: pic ## Enable Kubernetes in settings  .debug[[k8smastery/install-docker-desktop.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/install-docker-desktop.md)] --- class: pic ## No Kubernetes option? Switch to Linux mode  .debug[[k8smastery/install-docker-desktop.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/install-docker-desktop.md)] --- class: pic ## Check your connection in a terminal  .debug[[k8smastery/install-docker-desktop.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/install-docker-desktop.md)] --- class: pic ## Docker Desktop for macOS  .debug[[k8smastery/install-docker-desktop.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/install-docker-desktop.md)] --- class: pic ## Enable Kubernetes in preferences  .debug[[k8smastery/install-docker-desktop.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/install-docker-desktop.md)] --- class: pic ## Check your connection in a terminal  .debug[[k8smastery/install-docker-desktop.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/install-docker-desktop.md)] --- class: pic .interstitial[] --- name: toc-minikube-windows--home class: title minikube (Windows 10 Home) .nav[ [Previous
section](#toc-docker-desktop-windows-macos) | [Back to table of contents](#toc-chapter-3) | [Next section](#toc-microks-linux) ] .debug[(automatically generated title slide)] --- name: minikube # minikube (Windows 10 Home) - A good local install option if you can't run Docker Desktop -- - Inspired by Docker Toolbox - Will create a local VM and configure the latest Kubernetes - Has lots of other features with its `minikube` CLI -- - But, requires separate install of VirtualBox and kubectl - May not work with older Windows versions (YMMV) -- .exercise[ - [Download and install VirtualBox](https://www.virtualbox.org) - [Download kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-on-windows), and add to $PATH - [Download and install minikube](https://minikube.sigs.k8s.io/) - Run `minikube start` to create and run a Kubernetes VM - Run `minikube stop` when you're done ] -- .warning[.small[.footnote[ If you get an error about "This computer doesn't have VT-X/AMD-v enabled", you need to enable virtualization in your computer BIOS. ]]] .debug[[k8smastery/install-minikube.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/install-minikube.md)] --- class: pic .interstitial[] --- name: toc-microks-linux class: title MicroK8s (Linux) .nav[ [Previous section](#toc-minikube-windows--home) | [Back to table of contents](#toc-chapter-3) | [Next section](#toc-web-based-options) ] .debug[(automatically generated title slide)] --- name: microk8s # MicroK8s (Linux) - [Easy install](https://microk8s.io/) and management of local Kubernetes -- - Made by Canonical (Ubuntu). Installs using `snap`.
Works nearly everywhere - Has lots of other features with its `microk8s` CLI -- - But, requires you to [install `snap`](https://snapcraft.io/docs/installing-snapd) if not on Ubuntu - Runs on containerd rather than Docker, no biggie - Needs alias setup for `microk8s kubectl` -- .exercise[ - Install `microk8s`, change group permissions, then set an alias in bashrc ``` bash sudo snap install microk8s --classic sudo usermod -a -G microk8s <username> echo "alias kubectl='microk8s kubectl'" >> ~/.bashrc # log out and back in if using a non-root user ``` ] .debug[[k8smastery/install-microk8s.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/install-microk8s.md)] --- ## MicroK8s Additional Info - We'll need these later (these are done for us in Docker Desktop and minikube): .exercise[ - Create kubectl config file ``` bash microk8s kubectl config view --raw > $HOME/.kube/config ``` - Install CoreDNS in Kubernetes ``` bash sudo microk8s enable dns ``` ] - You can also install other plugins this way like `microk8s enable dashboard` or `microk8s enable ingress` .debug[[k8smastery/install-microk8s.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/install-microk8s.md)] --- ## MicroK8s Troubleshooting - Run a check for any config problems .exercise[ - Test MicroK8s config for any potential problems ``` bash sudo microk8s inspect ``` ] - If you also have Docker installed, you can ignore warnings about iptables and registries - See [troubleshooting site](https://microk8s.io/docs/troubleshooting) if you have issues .debug[[k8smastery/install-microk8s.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/install-microk8s.md)] --- class: pic .interstitial[] --- name: toc-web-based-options class: title Web-based options .nav[ [Previous section](#toc-microks-linux) | [Back to table of contents](#toc-chapter-3) | [Next section](#toc-shpod-for-a-consistent-kubernetes-experience-) ]
.debug[(automatically generated title slide)] --- name: pwk # Web-based options Last choice: Use a browser-based solution -- - Low setup effort; but host is short-lived and has limited resources -- - These services don't always work reliably, and may not be up to date -- - Not all hands-on examples will work in the browser sandbox .exercise[ - Use a prebuilt Kubernetes server at [Katacoda](https://www.katacoda.com/courses/kubernetes/playground) - Or set up a Kubernetes node at [play-with-k8s.com](https://labs.play-with-k8s.com/) - Maybe try the latest OpenShift at [learn.openshift.com](https://learn.openshift.com/playgrounds/) - See if instruqt works for [a Kubernetes playground](https://instruqt.com/public/tracks/play-with-kubernetes) ] .debug[[k8smastery/install-pwk.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/install-pwk.md)] --- class: pic .interstitial[] --- name: toc-shpod-for-a-consistent-kubernetes-experience- class: title `shpod`: For a consistent Kubernetes experience ... .nav[ [Previous section](#toc-web-based-options) | [Back to table of contents](#toc-chapter-3) | [Next section](#toc-first-contact-with-kubectl) ] .debug[(automatically generated title slide)] --- name: shpod # `shpod`: For a consistent Kubernetes experience ... - You can use [shpod](https://github.com/bretfisher/shpod) for examples - `shpod` provides a shell running in a pod on the cluster - It comes with many tools pre-installed (helm, stern, curl, jq...)
- These tools are used in many exercises in these slides - `shpod` also gives you shell completion and a fancy prompt - Create it with `kubectl apply -f https://k8smastery.com/shpod.yaml` - Attach to its shell with `kubectl attach --namespace=shpod -ti shpod` - After finishing the course, clean up with `kubectl delete -f https://k8smastery.com/shpod.yaml` .debug[[k8smastery/install-shpod.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/install-shpod.md)] --- class: pic .interstitial[] --- name: toc-first-contact-with-kubectl class: title First contact with `kubectl` .nav[ [Previous section](#toc-shpod-for-a-consistent-kubernetes-experience-) | [Back to table of contents](#toc-chapter-4) | [Next section](#toc-running-our-first-containers-on-kubernetes) ] .debug[(automatically generated title slide)] --- # First contact with `kubectl` - `kubectl` is (almost) the only tool we'll need to talk to Kubernetes - It is a rich CLI tool around the Kubernetes API (Everything you can do with `kubectl`, you can do directly with the API) -- - On our machines, there is a `~/.kube/config` file with: - the Kubernetes API address - the path to our TLS certificates used to authenticate - You can also use the `--kubeconfig` flag to pass a config file - Or directly `--server`, `--user`, etc. -- - `kubectl` can be pronounced "Cube C T L", "Cube cuttle", "Cube cuddle"... - I'll be using the official name "Cube Control" 😎 .debug[[k8s/kubectlget.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlget.md)] --- class: extra-details ## `kubectl` is the new SSH - We often start managing servers with SSH (installing packages, troubleshooting ...) - At scale, it becomes tedious, repetitive, error-prone - Instead, we use config management, central logging, etc. - In many cases, we still need SSH: - as the underlying access method (e.g.
Ansible) - to debug tricky scenarios - to inspect and poke at things .debug[[k8s/kubectlget.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlget.md)] --- class: extra-details ## The parallel with `kubectl` - We often start managing Kubernetes clusters with `kubectl` (deploying applications, troubleshooting ...) - At scale (with many applications or clusters), it becomes tedious, repetitive, error-prone - Instead, we use automated pipelines, observability tooling, etc. - In many cases, we still need `kubectl`: - to debug tricky scenarios - to inspect and poke at things - The Kubernetes API is always the underlying access method .debug[[k8s/kubectlget.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlget.md)] --- ## `kubectl get` - Let's look at our `Node` resources with `kubectl get`! .exercise[ - Look at the composition of our cluster: ```bash kubectl get node ``` - These commands are equivalent: ```bash kubectl get no kubectl get node kubectl get nodes ``` ] .debug[[k8s/kubectlget.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlget.md)] --- ## Obtaining machine-readable output - `kubectl get` can output JSON, YAML, or be directly formatted .exercise[ - Give us more info about the nodes: ```bash kubectl get nodes -o wide ``` - Let's have some YAML: ```bash kubectl get no -o yaml ``` See that `kind: List` at the end? It's the type of our result! 
] .debug[[k8s/kubectlget.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlget.md)] --- ## (Ab)using `kubectl` and `jq` - It's super easy to build custom reports .exercise[ - Show the capacity of all our nodes as a stream of JSON objects: ```bash kubectl get nodes -o json | jq ".items[] | {name:.metadata.name} + .status.capacity" ``` ] .debug[[k8s/kubectlget.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlget.md)] --- ## Viewing details - We can use `kubectl get -o yaml` to see all available details - However, YAML output is often simultaneously too much and not enough - For instance, `kubectl get node node1 -o yaml` is: - too much information (e.g.: list of images available on this node) - not enough information (e.g.: doesn't show pods running on this node) - difficult to read for a human operator - For a comprehensive overview, we can use `kubectl describe` instead .debug[[k8s/kubectlget.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlget.md)] --- ## `kubectl describe` - `kubectl describe` needs a resource type and (optionally) a resource name - It is possible to provide a resource name *prefix* (all matching objects will be displayed) - `kubectl describe` will retrieve some extra information about the resource .exercise[ - Look at the information available for *your node name* with one of the following: ```bash kubectl describe node/<node> kubectl describe node <node> ``` ] (We should notice a bunch of control plane pods.) 
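If we only want the list of pods running on a given node (without the rest of the `describe` output), a *field selector* can do that; a sketch, assuming a node named `node1` (substitute a name from `kubectl get nodes`):

```bash
# List pods from all namespaces scheduled on a specific node
kubectl get pods --all-namespaces --field-selector spec.nodeName=node1
```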
.debug[[k8s/kubectlget.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlget.md)] --- class: extra-details ## Exploring types and definitions - We can list all available resource types by running `kubectl api-resources` <br/> (In Kubernetes 1.10 and prior, this command used to be `kubectl get`) - We can view the definition for a resource type with: ```bash kubectl explain type ``` - We can view the definition of a field in a resource, for instance: ```bash kubectl explain node.spec ``` - Or get the list of all fields and sub-fields: ```bash kubectl explain node --recursive ``` .debug[[k8s/kubectlget.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlget.md)] --- class: extra-details ## Introspection vs. documentation - We can access the same information by reading the [API documentation](https://kubernetes.io/docs/reference/#api-reference) - The API documentation is usually easier to read, but: - it won't show custom types (like Custom Resource Definitions) - we need to make sure that we look at the correct version - `kubectl api-resources` and `kubectl explain` perform *introspection* (they communicate with the API server and obtain the exact type definitions) .debug[[k8s/kubectlget.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlget.md)] --- ## Type names - The most common resource names have three forms: - singular (e.g. `node`, `service`, `deployment`) - plural (e.g. `nodes`, `services`, `deployments`) - short (e.g. 
`no`, `svc`, `deploy`) - Some resources do not have a short name - `Endpoints` only have a plural form (because even a single `Endpoints` resource is actually a list of endpoints) .debug[[k8s/kubectlget.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlget.md)] --- ## More `get` commands: Services - A *service* is a stable endpoint to connect to "something" (In the initial proposal, they were called "portals") .exercise[ - List the services on our cluster with one of these commands: ```bash kubectl get services kubectl get svc ``` ] -- There is already one service on our cluster: the Kubernetes API itself. .debug[[k8s/kubectlget.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlget.md)] --- class: not-mastery ## ClusterIP services - A `ClusterIP` service is internal, available from the cluster only - This is useful for introspection from within containers .exercise[ - Try to connect to the API: ```bash curl -k https://`10.96.0.1` ``` - `-k` is used to skip certificate verification - Make sure to replace 10.96.0.1 with the CLUSTER-IP shown by `kubectl get svc` ] The error that we see is expected: the Kubernetes API requires authentication. 
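Rather than copying the CLUSTER-IP by hand, we can extract it with `-o jsonpath`; a sketch (the API service is always named `kubernetes` in the `default` namespace):

```bash
# Grab the ClusterIP of the kubernetes Service...
API_IP=$(kubectl get svc kubernetes -o jsonpath='{.spec.clusterIP}')
# ...then connect to it (-k skips certificate verification)
curl -k https://$API_IP
```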
.debug[[k8s/kubectlget.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlget.md)] --- ## More `get` commands: Listing running containers - Containers are manipulated through *pods* - A pod is a group of containers: - running together (on the same node) - sharing resources (RAM, CPU; but also network, volumes) .exercise[ - List pods on our cluster: ```bash kubectl get pods ``` ] -- *Where are the pods that we saw just a moment earlier?!?* .debug[[k8s/kubectlget.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlget.md)] --- ## Namespaces - Namespaces allow us to segregate resources .exercise[ - List the namespaces on our cluster with one of these commands: ```bash kubectl get namespaces kubectl get namespace kubectl get ns ``` ] -- *You know what ... This `kube-system` thing looks suspicious.* *In fact, I'm pretty sure it showed up earlier, when we did:* `kubectl describe node <node-name>` .debug[[k8s/kubectlget.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlget.md)] --- ## Accessing namespaces - By default, `kubectl` uses the `default` namespace - We can see resources in all namespaces with `--all-namespaces` .exercise[ - List the pods in all namespaces: ```bash kubectl get pods --all-namespaces ``` - Since Kubernetes 1.14, we can also use `-A` as a shorter version: ```bash kubectl get pods -A ``` ] *Here are our system pods!* .debug[[k8s/kubectlget.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlget.md)] --- ## What are all these control plane pods? 
- `etcd` is our etcd server - `kube-apiserver` is the API server - `kube-controller-manager` and `kube-scheduler` are other control plane components - `coredns` provides DNS-based service discovery ([replacing kube-dns as of 1.11](https://kubernetes.io/blog/2018/07/10/coredns-ga-for-kubernetes-cluster-dns/)) - `kube-proxy` is the (per-node) component managing port mappings and such - `<net name>` is the optional (per-node) component managing the network overlay - the `READY` column indicates the number of containers in each pod - Note: this only shows containers; you won't see host services (e.g. MicroK8s) - Note also: you may see different namespaces depending on your setup .debug[[k8s/kubectlget.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlget.md)] --- ## Scoping another namespace - We can also look at a different namespace (other than `default`) .exercise[ - List only the pods in the `kube-system` namespace: ```bash kubectl get pods --namespace=kube-system kubectl get pods -n kube-system ``` ] .debug[[k8s/kubectlget.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlget.md)] --- ## Namespaces and other `kubectl` commands - We can use `-n`/`--namespace` with almost every `kubectl` command - Example: - `kubectl create --namespace=X` to create something in namespace X - We can use `-A`/`--all-namespaces` with most commands that manipulate multiple objects - Examples: - `kubectl delete` can delete resources across multiple namespaces - `kubectl label` can add/remove/update labels across multiple namespaces .debug[[k8s/kubectlget.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlget.md)] --- class: extra-details ## What about `kube-public`? .exercise[ - List the pods in the `kube-public` namespace: ```bash kubectl -n kube-public get pods ``` ] Nothing!
`kube-public` is created by our installer & [used for security bootstrapping](https://kubernetes.io/blog/2017/01/stronger-foundation-for-creating-and-managing-kubernetes-clusters). .debug[[k8s/kubectlget.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlget.md)] --- class: extra-details ## Exploring `kube-public` - The only interesting object in `kube-public` is a ConfigMap named `cluster-info` .exercise[ - List ConfigMap objects: ```bash kubectl -n kube-public get configmaps ``` - Inspect `cluster-info`: ```bash kubectl -n kube-public get configmap cluster-info -o yaml ``` ] Note the `selfLink` URI: `/api/v1/namespaces/kube-public/configmaps/cluster-info` We can use that (later in `kubectl context` lectures)! .debug[[k8s/kubectlget.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlget.md)] --- class: extra-details, not-mastery ## Accessing `cluster-info` - Earlier, when trying to access the API server, we got a `Forbidden` message - But `cluster-info` is readable by everyone (even without authentication) .exercise[ - Retrieve `cluster-info`: ```bash curl -k https://10.96.0.1/api/v1/namespaces/kube-public/configmaps/cluster-info ``` ] - We were able to access `cluster-info` (without auth) - It contains a `kubeconfig` file .debug[[k8s/kubectlget.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlget.md)] --- class: extra-details, not-mastery ## Retrieving `kubeconfig` - We can easily extract the `kubeconfig` file from this ConfigMap .exercise[ - Display the content of `kubeconfig`: ```bash curl -sk https://10.96.0.1/api/v1/namespaces/kube-public/configmaps/cluster-info \ | jq -r .data.kubeconfig ``` ] - This file holds the canonical address of the API server, and the public key of the CA - This file *does not* hold client keys or tokens - This is not sensitive information, but allows us to establish trust 
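To go one step further, we could save that extracted file and point `kubectl` at it; a sketch, assuming the same 10.96.0.1 ClusterIP as above:

```bash
# Extract the kubeconfig into a temporary file...
curl -sk https://10.96.0.1/api/v1/namespaces/kube-public/configmaps/cluster-info \
  | jq -r .data.kubeconfig > /tmp/cluster-info.kubeconfig
# ...then inspect it (it holds the cluster address and CA, but no credentials)
kubectl --kubeconfig /tmp/cluster-info.kubeconfig config view
```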
.debug[[k8s/kubectlget.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlget.md)] --- class: extra-details ## What about `kube-node-lease`? - Starting with Kubernetes 1.14, there is a `kube-node-lease` namespace (or in Kubernetes 1.13 if the NodeLease feature gate is enabled) - That namespace contains one Lease object per node - *Node leases* are a new way to implement node heartbeats (i.e. node regularly pinging the control plane to say "I'm alive!") - For more details, see [KEP-0009] or the [node controller documentation] [KEP-0009]: https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/0009-node-heartbeat.md [node controller documentation]: https://kubernetes.io/docs/concepts/architecture/nodes/#node-controller .debug[[k8s/kubectlget.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlget.md)] --- ## Services - A *service* is a stable endpoint to connect to "something" (In the initial proposal, they were called "portals") .exercise[ - List the services on our cluster with one of these commands: ```bash kubectl get services kubectl get svc ``` ] -- There is already one service on our cluster: the Kubernetes API itself. .debug[[k8s/kubectlget.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlget.md)] --- ## ClusterIP services - A `ClusterIP` service is internal, available from the cluster only - This is useful for introspection from within containers .exercise[ - Try to connect to the API: ```bash curl -k https://`10.96.0.1` ``` - `-k` is used to skip certificate verification - Make sure to replace 10.96.0.1 with the CLUSTER-IP shown by `kubectl get svc` ] The command above should either time out, or show an authentication error. Why? 
.debug[[k8s/kubectlget.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlget.md)] --- ## Time out - Connections to ClusterIP services only work *from within the cluster* - If we are outside the cluster, the `curl` command will probably time out (Because the IP address, e.g. 10.96.0.1, isn't routed properly outside the cluster) - This is the case with most "real" Kubernetes clusters - To try the connection from within the cluster, we can use [shpod](https://github.com/jpetazzo/shpod) .debug[[k8s/kubectlget.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlget.md)] --- ## Authentication error This is what we should see when connecting from within the cluster: ```json $ curl -k https://10.96.0.1 { "kind": "Status", "apiVersion": "v1", "metadata": { }, "status": "Failure", "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"", "reason": "Forbidden", "details": { }, "code": 403 } ``` .debug[[k8s/kubectlget.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlget.md)] --- ## Explanations - We can see `kind`, `apiVersion`, `metadata` - These are typical of a Kubernetes API reply - Because we *are* talking to the Kubernetes API - The Kubernetes API tells us "Forbidden" (because it requires authentication) - The Kubernetes API is reachable from within the cluster (many apps integrating with Kubernetes will use this) .debug[[k8s/kubectlget.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlget.md)] --- ## DNS integration - Each service also gets a DNS record - The Kubernetes DNS resolver is available *from within pods* (and sometimes, from within nodes, depending on configuration) - Code running in pods can connect to services using their name (e.g. https://kubernetes/...) 
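A sketch of what that looks like from inside a pod (e.g. from `shpod`); fully qualified service names follow the `<service>.<namespace>.svc.<cluster domain>` pattern:

```bash
SVC=kubernetes
NS=default
# Fully qualified name (cluster.local is the default cluster domain)
FQDN=$SVC.$NS.svc.cluster.local
curl -k https://$FQDN
# The short name resolves too, thanks to the pod's DNS search list
curl -k https://$SVC
```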
.debug[[k8s/kubectlget.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlget.md)] --- class: pic .interstitial[] --- name: toc-running-our-first-containers-on-kubernetes class: title Running our first containers on Kubernetes .nav[ [Previous section](#toc-first-contact-with-kubectl) | [Back to table of contents](#toc-chapter-4) | [Next section](#toc-accessing-logs-from-the-cli) ] .debug[(automatically generated title slide)] --- # Running our first containers on Kubernetes - First things first: we cannot run a container -- - We are going to run a pod, and in that pod there will be a single container -- - In that container in the pod, we are going to run a simple `ping` command - Then we are going to start additional copies of the pod .debug[[k8s/kubectlrun.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlrun.md)] --- ## Starting a simple pod with `kubectl run` - We need to specify at least a *name* and the image we want to use .exercise[ - Let's ping the address of `localhost`, the loopback interface: ```bash kubectl run pingpong --image alpine ping 127.0.0.1 ``` <!-- ```hide kubectl wait deploy/pingpong --for condition=available``` --> ] -- (Starting with Kubernetes 1.12, we get a message telling us that `kubectl run` is deprecated. Let's ignore it for now.) .debug[[k8s/kubectlrun.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlrun.md)] --- ## Behind the scenes of `kubectl run` - Let's look at the resources that were created by `kubectl run` .exercise[ - List most resource types: ```bash kubectl get all ``` ] -- We should see the following things: - `deployment.apps/pingpong` (the *deployment* that we just created) - `replicaset.apps/pingpong-xxxxxxxxxx` (a *replica set* created by the deployment) - `pod/pingpong-xxxxxxxxxx-yyyyy` (a *pod* created by the replica set) Note: as of 1.10.1, resource types are displayed in more detail. 
.debug[[k8s/kubectlrun.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlrun.md)] --- ## What are these different things? - A *deployment* is a high-level construct - allows scaling, rolling updates, rollbacks - multiple deployments can be used together to implement a [canary deployment](https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/#canary-deployments) - delegates pods management to *replica sets* - A *replica set* is a low-level construct - makes sure that a given number of identical pods are running - allows scaling - rarely used directly - Note: A *replication controller* is the deprecated predecessor of a replica set .debug[[k8s/kubectlrun.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlrun.md)] --- ## Our `pingpong` deployment - `kubectl run` created a *deployment*, `deployment.apps/pingpong` ``` NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deployment.apps/pingpong 1 1 1 1 10m ``` - That deployment created a *replica set*, `replicaset.apps/pingpong-xxxxxxxxxx` ``` NAME DESIRED CURRENT READY AGE replicaset.apps/pingpong-7c8bbcd9bc 1 1 1 10m ``` - That replica set created a *pod*, `pod/pingpong-xxxxxxxxxx-yyyyy` ``` NAME READY STATUS RESTARTS AGE pod/pingpong-7c8bbcd9bc-6c9qz 1/1 Running 0 10m ``` - We'll see later how these folks play together for: - scaling, high availability, rolling updates .debug[[k8s/kubectlrun.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlrun.md)] --- ## Viewing container output - Let's use the `kubectl logs` command - We will pass either a *pod name*, or a *type/name* (E.g. if we specify a deployment or replica set, it will get the first pod in it) - Unless specified otherwise, it will only show logs of the first container in the pod (Good thing there's only one in ours!) 
.exercise[ - View the result of our `ping` command: ```bash kubectl logs deploy/pingpong ``` ] .debug[[k8s/kubectlrun.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlrun.md)] --- ## Streaming logs in real time - Just like `docker logs`, `kubectl logs` supports convenient options: - `-f`/`--follow` to stream logs in real time (à la `tail -f`) - `--tail` to indicate how many lines you want to see (from the end) - `--since` to get logs only after a given timestamp .exercise[ - View the latest logs of our `ping` command: ```bash kubectl logs deploy/pingpong --tail 1 --follow ``` - Leave that command running, so that we can keep an eye on these logs <!-- ```wait seq=3``` ```tmux split-pane -h``` --> ] .debug[[k8s/kubectlrun.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlrun.md)] --- ## Scaling our application - We can create additional copies of our container (I mean, our pod) with `kubectl scale` .exercise[ - Scale our `pingpong` deployment: ```bash kubectl scale deploy/pingpong --replicas 3 ``` - Note that this command does exactly the same thing: ```bash kubectl scale deployment pingpong --replicas 3 ``` ] Note: what if we tried to scale `replicaset.apps/pingpong-xxxxxxxxxx`? We could! But the *deployment* would notice it right away, and scale back to the initial level. .debug[[k8s/kubectlrun.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlrun.md)] --- ## Log streaming - Let's look again at the output of `kubectl logs` (the one we started before scaling up) - `kubectl logs` shows us one line per second - We could expect 3 lines per second (since we should now have 3 pods running `ping`) - Let's try to figure out what's happening! .debug[[k8s/kubectlrun.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlrun.md)] --- ## Streaming logs of multiple pods - What happens if we restart `kubectl logs`? 
.exercise[ - Interrupt `kubectl logs` (with Ctrl-C) <!-- ```tmux last-pane``` ```key ^C``` --> - Restart it: ```bash kubectl logs deploy/pingpong --tail 1 --follow ``` <!-- ```wait using pod/pingpong-``` ```tmux last-pane``` --> ] `kubectl logs` will warn us that multiple pods were found, and that it's showing us only one of them. Let's leave `kubectl logs` running while we keep exploring. .debug[[k8s/kubectlrun.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlrun.md)] --- ## Resilience - The *deployment* `pingpong` watches its *replica set* - The *replica set* ensures that the right number of *pods* are running - What happens if pods disappear? .exercise[ - In a separate window, watch the list of pods: ```bash watch kubectl get pods ``` <!-- ```wait Every 2.0s``` ```tmux split-pane -v``` --> - Destroy the pod currently shown by `kubectl logs`: ``` kubectl delete pod pingpong-xxxxxxxxxx-yyyyy ``` <!-- ```tmux select-pane -t 0``` ```copy pingpong-[^-]*-.....``` ```tmux last-pane``` ```keys kubectl delete pod ``` ```paste``` ```key ^J``` ```check``` ```key ^D``` ```tmux select-pane -t 1``` ```key ^C``` ```key ^D``` --> ] .debug[[k8s/kubectlrun.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlrun.md)] --- ## What happened? - `kubectl delete pod` terminates the pod gracefully (sending it the TERM signal and waiting for it to shutdown) - As soon as the pod is in "Terminating" state, the Replica Set replaces it - But we can still see the output of the "Terminating" pod in `kubectl logs` - Until 30 seconds later, when the grace period expires - The pod is then killed, and `kubectl logs` exits .debug[[k8s/kubectlrun.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlrun.md)] --- ## What if we wanted something different? - What if we wanted to start a "one-shot" container that *doesn't* get restarted? 
- We could use `kubectl run --restart=OnFailure` or `kubectl run --restart=Never` - These commands would create *jobs* or *pods* instead of *deployments* - Under the hood, `kubectl run` invokes "generators" to create resource descriptions - We could also write these resource descriptions ourselves (typically in YAML), <br/>and create them on the cluster with `kubectl apply -f` (discussed later) - With `kubectl run --schedule=...`, we can also create *cronjobs* .debug[[k8s/kubectlrun.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlrun.md)] --- ## Scheduling periodic background work - A Cron Job is a job that will be executed at specific intervals (the name comes from the traditional cronjobs executed by the UNIX crond) - It requires a *schedule*, represented as five space-separated fields: - minute [0,59] - hour [0,23] - day of the month [1,31] - month of the year [1,12] - day of the week ([0,6] with 0=Sunday) - `*` means "all valid values"; `/N` means "every N" - Example: `*/3 * * * *` means "every three minutes" .debug[[k8s/kubectlrun.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlrun.md)] --- ## Creating a Cron Job - Let's create a simple job to be executed every three minutes - Cron Jobs need to terminate, otherwise they'd run forever .exercise[ - Create the Cron Job: ```bash kubectl run every3mins --schedule="*/3 * * * *" --restart=OnFailure \ --image=alpine sleep 10 ``` - Check the resource that was created: ```bash kubectl get cronjobs ``` ] .debug[[k8s/kubectlrun.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlrun.md)] --- ## Cron Jobs in action - At the specified schedule, the Cron Job will create a Job - The Job will create a Pod - The Job will make sure that the Pod completes (re-creating another one if it fails, for instance if its node fails) .exercise[ - Check the Jobs that are created: ```bash kubectl get jobs ``` ] (It will take a few minutes 
before the first job is scheduled.) .debug[[k8s/kubectlrun.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlrun.md)] --- ## What about that deprecation warning? - As we can see from the previous slide, `kubectl run` can do many things - The exact type of resource created is not obvious - To make things more explicit, it is better to use `kubectl create`: - `kubectl create deployment` to create a deployment - `kubectl create job` to create a job - `kubectl create cronjob` to run a job periodically <br/>(since Kubernetes 1.14) - Eventually, `kubectl run` will be used only to start one-shot pods (see https://github.com/kubernetes/kubernetes/pull/68132) .debug[[k8s/kubectlrun.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlrun.md)] --- ## Various ways of creating resources - `kubectl run` - easy way to get started - versatile - `kubectl create <resource>` - explicit, but lacks some features - can't create a CronJob before Kubernetes 1.14 - can't pass command-line arguments to deployments - `kubectl create -f foo.yaml` or `kubectl apply -f foo.yaml` - all features are available - requires writing YAML .debug[[k8s/kubectlrun.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlrun.md)] --- ## Viewing logs of multiple pods - When we specify a deployment name, only one single pod's logs are shown - We can view the logs of multiple pods by specifying a *selector* - A selector is a logic expression using *labels* - Conveniently, when you `kubectl run somename`, the associated objects have a `run=somename` label .exercise[ - View the last line of log from all pods with the `run=pingpong` label: ```bash kubectl logs -l run=pingpong --tail 1 ``` ] .debug[[k8s/kubectlrun.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlrun.md)] --- ### Streaming logs of multiple pods - Can we stream the logs of all our `pingpong` pods? 
.exercise[ - Combine `-l` and `-f` flags: ```bash kubectl logs -l run=pingpong --tail 1 -f ``` <!-- ```wait seq=``` ```key ^C``` --> ] *Note: combining `-l` and `-f` is only possible since Kubernetes 1.14!* *Let's try to understand why ...* .debug[[k8s/kubectlrun.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlrun.md)] --- class: extra-details ### Streaming logs of many pods - Let's see what happens if we try to stream the logs for more than 5 pods .exercise[ - Scale up our deployment: ```bash kubectl scale deployment pingpong --replicas=8 ``` - Stream the logs: ```bash kubectl logs -l run=pingpong --tail 1 -f ``` <!-- ```wait error:``` --> ] We see a message like the following one: ``` error: you are attempting to follow 8 log streams, but maximum allowed concurency is 5, use --max-log-requests to increase the limit ``` .debug[[k8s/kubectlrun.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlrun.md)] --- class: extra-details ## Why can't we stream the logs of many pods? 
- `kubectl` opens one connection to the API server per pod - For each pod, the API server opens one extra connection to the corresponding kubelet - If there are 1000 pods in our deployment, that's 1000 inbound + 1000 outbound connections on the API server - This could easily put a lot of stress on the API server - Prior to Kubernetes 1.14, it was decided to *not* allow multiple connections - From Kubernetes 1.14, it is allowed, but limited to 5 connections (this can be changed with `--max-log-requests`) - For more details about the rationale, see [PR #67573](https://github.com/kubernetes/kubernetes/pull/67573) .debug[[k8s/kubectlrun.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlrun.md)] --- ## Shortcomings of `kubectl logs` - We don't see which pod sent which log line - If pods are restarted / replaced, the log stream stops - If new pods are added, we don't see their logs - To stream the logs of multiple pods, we need to write a selector - There are external tools to address these shortcomings (e.g.: [Stern](https://github.com/stern/stern)) .debug[[k8s/kubectlrun.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlrun.md)] --- class: extra-details ## `kubectl logs -l ...
--tail N` - If we run this with Kubernetes 1.12, the last command shows multiple lines - This is a regression when `--tail` is used together with `-l`/`--selector` - It always shows the last 10 lines of output for each container (instead of the number of lines specified on the command line) - The problem was fixed in Kubernetes 1.13 *See [#70554](https://github.com/kubernetes/kubernetes/issues/70554) for details.* .debug[[k8s/kubectlrun.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlrun.md)] --- class: extra-details ## Party tricks involving IP addresses - It is possible to specify an IP address with less than 4 bytes (example: `127.1`) - Zeroes are then inserted in the middle - As a result, `127.1` expands to `127.0.0.1` - So we can `ping 127.1` to ping `localhost`! (See [this blog post](https://ma.ttias.be/theres-more-than-one-way-to-write-an-ip-address/ ) for more details.) .debug[[k8s/kubectlrun.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlrun.md)] --- class: extra-details ## More party tricks with IP addresses - We can also ping `1.1` - `1.1` will expand to `1.0.0.1` - This is one of the addresses of Cloudflare's [public DNS resolver](https://blog.cloudflare.com/announcing-1111/) - This is a quick way to check connectivity (if we can reach 1.1, we probably have internet access) .debug[[k8s/kubectlrun.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubectlrun.md)] --- class: pic .interstitial[] --- name: toc-accessing-logs-from-the-cli class: title Accessing logs from the CLI .nav[ [Previous section](#toc-running-our-first-containers-on-kubernetes) | [Back to table of contents](#toc-chapter-4) | [Next section](#toc-assignment--first-steps) ] .debug[(automatically generated title slide)] --- # Accessing logs from the CLI - The `kubectl logs` command has limitations: - it cannot stream logs from multiple pods at a time - when showing logs from multiple pods, 
it mixes them all together - We are going to see how to do it better .debug[[k8s/logs-cli.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/logs-cli.md)] --- ## Doing it manually - We *could* (if we were so inclined) write a program or script that would: - take a selector as an argument - enumerate all pods matching that selector (with `kubectl get -l ...`) - fork one `kubectl logs --follow ...` command per container - annotate the logs (the output of each `kubectl logs ...` process) with their origin - preserve ordering by using `kubectl logs --timestamps ...` and merge the output -- - We *could* do it, but thankfully, others did it for us already! .debug[[k8s/logs-cli.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/logs-cli.md)] --- ## Stern [Stern](https://github.com/stern/stern) is an open source project originally by [Wercker](http://www.wercker.com/). From the README: *Stern allows you to tail multiple pods on Kubernetes and multiple containers within the pod. Each result is color coded for quicker debugging.* *The query is a regular expression so the pod name can easily be filtered and you don't need to specify the exact id (for instance omitting the deployment id). If a pod is deleted it gets removed from tail and if a new pod is added it automatically gets tailed.* Exactly what we need! 
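The manual recipe outlined above fits in a few lines of shell. This is a rough sketch rather than a robust tool — it assumes a live cluster, and the `app=pingpong` selector is just an example:

```shell
# Sketch of the "do it manually" approach (assumes a working cluster;
# the app=pingpong selector is an example, adjust it to your pods)
for pod in $(kubectl get pods -l app=pingpong -o name); do
  # Prefix each log line with its pod of origin; --timestamps makes it
  # possible to merge and sort the streams afterwards
  kubectl logs --follow --timestamps "$pod" | sed "s|^|$pod |" &
done
wait   # stream until interrupted with Ctrl-C
```

This covers the enumeration and annotation steps, but not pods coming and going — which is exactly the gap Stern fills.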
.debug[[k8s/logs-cli.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/logs-cli.md)] --- ## Checking if Stern is installed - Run `stern` (without arguments) to check if it's installed: ``` $ stern Tail multiple pods and containers from Kubernetes Usage: stern pod-query [flags] ``` - If it's missing, let's see how to install it .debug[[k8s/logs-cli.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/logs-cli.md)] --- ## Installing Stern - Stern is written in Go - Go programs are usually very easy to install (no dependencies, extra libraries to install, etc) - Binary releases are available [on GitHub][stern-releases] - Stern is also available through most package managers (e.g. on macOS, we can `brew install stern` or `sudo port install stern`) [stern-releases]: https://github.com/stern/stern/releases .debug[[k8s/logs-cli.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/logs-cli.md)] --- ## Using Stern - There are two ways to specify the pods whose logs we want to see: - `-l` followed by a selector expression (like with many `kubectl` commands) - with a "pod query," i.e. 
a regex used to match pod names - These two ways can be combined if necessary .lab[ - View the logs for all the pingpong containers: ```bash stern pingpong ``` <!-- ```wait seq=``` ```key ^C``` --> ] .debug[[k8s/logs-cli.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/logs-cli.md)] --- ## Convenient Stern options - The `--tail N` flag shows the last `N` lines for each container (Instead of showing the logs since the creation of the container) - The `-t` / `--timestamps` flag shows timestamps - The `--all-namespaces` flag is self-explanatory .lab[ - View what's up with the `weave` system containers: ```bash stern --tail 1 --timestamps --all-namespaces weave ``` <!-- ```wait weave-npc``` ```key ^C``` --> ] .debug[[k8s/logs-cli.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/logs-cli.md)] --- ## Using Stern with a selector - When specifying a selector, we can omit the value for a label - This will match all objects having that label (regardless of the value) - Everything created with `kubectl run` has a label `run` - Everything created with `kubectl create deployment` has a label `app` - We can use that property to view the logs of all the pods created with `kubectl create deployment` .lab[ - View the logs for all the things started with `kubectl create deployment`: ```bash stern -l app ``` <!-- ```wait seq=``` ```key ^C``` --> ] ??? :EN:- Viewing pod logs from the CLI :FR:- Consulter les logs des pods depuis la CLI .debug[[k8s/logs-cli.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/logs-cli.md)] --- class: cleanup ## Cleanup Let's clean up before we start the next lecture! 
.exercise[ - remove our deployment and cronjob: ```bash kubectl delete deployment/pingpong cronjob/sleep ``` ] .debug[[k8smastery/cleanup-pingpong-sleep.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/cleanup-pingpong-sleep.md)] --- class: pic .interstitial[] --- name: toc-assignment--first-steps class: title Assignment 1: first steps .nav[ [Previous section](#toc-accessing-logs-from-the-cli) | [Back to table of contents](#toc-chapter-4) | [Next section](#toc-exposing-containers) ] .debug[(automatically generated title slide)] --- name: assignment1 # Assignment 1: first steps Answer these questions with the `kubectl` command you'd use to get the answer: Cluster inventory 1.1. How many nodes does your cluster have? 1.2. What kernel version and what container engine is each node running? (answers on next slide) .debug[[assignments/01kubectlrun.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/assignments/01kubectlrun.md)] --- class: answers ## Answers 1.1. We can get a list of nodes with `kubectl get nodes`. 1.2. `kubectl get nodes -o wide` will list extra information for each node. This will include kernel version and container engine. .debug[[assignments/01kubectlrun.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/assignments/01kubectlrun.md)] --- ## Assignment 1: first steps Control plane examination 2.1. List *only* the pods in the `kube-system` namespace. 2.2. Explain the role of some of these pods. 2.3. If there are few or no pods in `kube-system`, why could that be? (answers on next slide) .debug[[assignments/01kubectlrun.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/assignments/01kubectlrun.md)] --- class: answers ## Answers 2.1. `kubectl get pods --namespace=kube-system` 2.2. This depends on how our cluster was set up. On some clusters, we might see pods named `etcd-XXX`, `kube-apiserver-XXX`: these correspond to control plane components. 
It's also common to see `kubedns-XXX` or `coredns-XXX`: these implement the DNS service that lets us resolve service names into their ClusterIP address. 2.3. On some clusters, the control plane is located *outside* the cluster itself. In that case, the control plane won't show up in `kube-system`, but you may find its processes on the host with `ps aux | grep kube`. .debug[[assignments/01kubectlrun.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/assignments/01kubectlrun.md)] --- ## Assignment 1: first steps Running containers 3.1. Create a deployment using `kubectl create` that runs the image `bretfisher/clock` and name it `ticktock`. 3.2. Start 2 more containers of that image in the `ticktock` deployment. 3.3. Use a selector to output only the last line of logs of each container. (answers on next slide) .debug[[assignments/01kubectlrun.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/assignments/01kubectlrun.md)] --- class: answers ## Answers 3.1. `kubectl create deployment ticktock --image=bretfisher/clock` By default, it will have one replica, translating to one container. 3.2. `kubectl scale deployment ticktock --replicas=3` This will scale the deployment to three replicas (two more containers). 3.3. `kubectl logs --selector=app=ticktock --tail=1` All the resources created with `kubectl create deployment xxx` will have the label `app=xxx`. If you need a pod selector, you can find the labels in the resource that created the pods. In this case that's the ReplicaSet, so `kubectl describe replicaset ticktock-xxxxx` would help. Therefore, we use the selector `app=ticktock` here to match all the pods belonging to this deployment. .debug[[assignments/01kubectlrun.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/assignments/01kubectlrun.md)] --- ## 19,000 words They say, "a picture is worth one thousand words." 
The following 19 slides show what really happens when we run: ```bash kubectl run web --image=nginx --replicas=3 ``` .debug[[k8s/deploymentslideshow.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/deploymentslideshow.md)] --- class: pic *(deployment slideshow images not included)* .debug[[k8s/deploymentslideshow.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/deploymentslideshow.md)] --- class: pic .interstitial[] --- name: toc-exposing-containers class: title Exposing containers .nav[ [Previous section](#toc-assignment--first-steps) | [Back to table of contents](#toc-chapter-5) | [Next section](#toc-kubernetes-network-model) ] .debug[(automatically generated title slide)] --- # Exposing containers - We can connect to our pods using their IP address - Then we need to figure out a lot of things: - how do we look up the IP address of the pod(s)? - how do we connect from outside the cluster? - how do we load balance traffic? - what if a pod fails? - Kubernetes has a resource type named *Service* - Services address all these questions! 
.debug[[k8smastery/kubectlexpose.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/kubectlexpose.md)] --- ## Services in a nutshell - Services give us a *stable endpoint* to connect to a pod or a group of pods - An easy way to create a service is to use `kubectl expose` - If we have a deployment named `my-little-deploy`, we can run: `kubectl expose deployment my-little-deploy --port=80` ... and this will create a service with the same name (`my-little-deploy`) - Services are automatically added to an internal DNS zone (in the example above, our code can now connect to http://my-little-deploy/) .debug[[k8smastery/kubectlexpose.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/kubectlexpose.md)] --- ## Advantages of services - We don't need to look up the IP address of the pod(s) (we resolve the IP address of the service using DNS) - There are multiple service types; some of them allow external traffic (e.g. `LoadBalancer` and `NodePort`) - Services provide load balancing (for both internal and external traffic) - Service addresses are independent from pods' addresses (when a pod fails, the service seamlessly sends traffic to its replacement) .debug[[k8smastery/kubectlexpose.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/kubectlexpose.md)] --- ## Many kinds and flavors of service - There are different types of services: `ClusterIP`, `NodePort`, `LoadBalancer`, `ExternalName` - There are also *headless services* - Services can also have optional *external IPs* - There is also another resource type called *Ingress* (specifically for HTTP services) - Wow, that's a lot! Let's start with the basics ... 
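For reference, here is a sketch of the Service object that the `kubectl expose deployment my-little-deploy --port=80` command shown earlier generates (assuming the deployment was created with `kubectl create deployment`, which labels its pods `app=my-little-deploy`; this is hand-written YAML, not the exact server-side output):

```shell
# Roughly what `kubectl expose deployment my-little-deploy --port=80` creates
kubectl apply -f- <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-little-deploy
spec:
  type: ClusterIP          # the default type: internal virtual IP only
  selector:
    app: my-little-deploy  # copied from the deployment's pod labels
  ports:
  - port: 80               # service port; targetPort defaults to the same
EOF
```

Writing the manifest by hand makes explicit what `kubectl expose` infers: the selector comes from the deployment, and everything else is defaults.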
.debug[[k8smastery/kubectlexpose.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/kubectlexpose.md)] --- ## `ClusterIP` - It's the default service type - A virtual IP address is allocated for the service (in an internal, private range; e.g. 10.96.0.0/12) - This IP address is reachable only from within the cluster (nodes and pods) - Our code can connect to the service using the original port number - Perfect for internal communication, within the cluster .debug[[k8smastery/kubectlexpose.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/kubectlexpose.md)] --- ## `LoadBalancer` - An external load balancer is allocated for the service (typically a cloud load balancer, e.g. ELB on AWS, GLB on GCE ...) - This is available only when the underlying infrastructure provides some kind of "load balancer as a service" - Each service of that type will typically cost a little bit of money (e.g. a few cents per hour on AWS or GCE) - Ideally, traffic would flow directly from the load balancer to the pods - In practice, it will often flow through a `NodePort` first .debug[[k8smastery/kubectlexpose.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/kubectlexpose.md)] --- ## `NodePort` - A port number is allocated for the service (by default, in the 30000-32767 range) - That port is made available *on all our nodes* and anybody can connect to it (we can connect to any node on that port to reach the service) - Our code needs to be changed to connect to that new port number - Under the hood: `kube-proxy` sets up a bunch of `iptables` rules on our nodes - Sometimes, it's the only available option for external traffic (e.g. 
most clusters deployed with kubeadm or on-premises) .debug[[k8smastery/kubectlexpose.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/kubectlexpose.md)] --- ## Running containers with open ports - Since `ping` doesn't have anything to connect to, we'll have to run something else - We could use the `nginx` official image, but ... ... we wouldn't be able to tell the backends from each other! - We are going to use `bretfisher/httpenv`, a tiny HTTP server written in Go - `bretfisher/httpenv` listens on port 8888 - It serves its environment variables in JSON format - The environment variables will include `HOSTNAME`, which will be the pod name (and therefore, will be different on each backend) .debug[[k8smastery/kubectlexpose.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/kubectlexpose.md)] --- ## Creating a deployment for our HTTP server - We *could* do `kubectl run httpenv --image=bretfisher/httpenv` ... - But since `kubectl run` is changing, let's see how to use `kubectl create` instead .exercise[ - In another window, watch the pods (to see when they are created): ```bash kubectl get pods -w ``` <!-- ```wait NAME``` ```tmux split-pane -h``` --> - Create a deployment for this very lightweight HTTP server: ```bash kubectl create deployment httpenv --image=bretfisher/httpenv ``` - Scale it to 10 replicas: ```bash kubectl scale deployment httpenv --replicas=10 ``` ] .debug[[k8smastery/kubectlexpose.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/kubectlexpose.md)] --- ## Exposing our deployment - We'll create a default `ClusterIP` service .exercise[ - Expose the HTTP port of our server: ```bash kubectl expose deployment httpenv --port 8888 ``` - Look up which IP address was allocated: ```bash kubectl get service ``` ] .debug[[k8smastery/kubectlexpose.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/kubectlexpose.md)] --- ## 
Services are layer 4 constructs - You can assign IP addresses to services, but they are still *layer 4* (i.e. a service is not an IP address; it's an IP address + protocol + port) - This is caused by the current implementation of `kube-proxy` (it relies on mechanisms that don't support layer 3) - As a result: you *have to* indicate the port number for your service (with some exceptions, like `ExternalName` or headless services, covered later) .debug[[k8smastery/kubectlexpose.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/kubectlexpose.md)] --- ## Testing our service - We will now send a few HTTP requests to our pods .exercise[ - Run [`shpod`](#shpod) if not on Linux host so we can access internal ClusterIP ```bash kubectl attach --namespace=shpod -ti shpod ``` - Let's obtain the IP address that was allocated for our service, *programmatically:* ```bash IP=$(kubectl get svc httpenv -o go-template --template '{{ .spec.clusterIP }}') ``` <!-- ```hide kubectl wait deploy httpenv --for condition=available``` ```key ^D``` ```key ^C``` --> - Send a few requests: ```bash curl http://$IP:8888/ ``` - Too much output? 
Filter it with `jq`: ```bash curl -s http://$IP:8888/ | jq .HOSTNAME ``` ] .debug[[k8smastery/kubectlexpose.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/kubectlexpose.md)] --- class: extra-details ## `ExternalName` - Services of type `ExternalName` are quite different - No load balancer (internal or external) is created - Only a DNS entry gets added to the DNS managed by Kubernetes - That DNS entry will just be a `CNAME` to a provided record Example: ```bash kubectl create service externalname k8s --external-name kubernetes.io ``` *Creates a CNAME `k8s` pointing to `kubernetes.io`* .debug[[k8smastery/kubectlexpose.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/kubectlexpose.md)] --- class: extra-details ## External IPs - We can add an External IP to a service, e.g.: ```bash kubectl expose deploy my-little-deploy --port=80 --external-ip=1.2.3.4 ``` - `1.2.3.4` should be the address of one of our nodes (it could also be a virtual address, service address, or VIP, shared by multiple nodes) - Connections to `1.2.3.4:80` will be sent to our service - External IPs will also show up on services of type `LoadBalancer` (they will be added automatically by the process provisioning the load balancer) .debug[[k8smastery/kubectlexpose.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/kubectlexpose.md)] --- class: extra-details ## Headless services - Sometimes, we want to access our scaled services directly: - if we want to save a tiny little bit of latency (typically less than 1ms) - if we need to connect over arbitrary ports (instead of a few fixed ones) - if we need to communicate over a protocol other than UDP or TCP - if we want to decide how to balance the requests client-side - ... 
- In that case, we can use a "headless service" .debug[[k8smastery/kubectlexpose.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/kubectlexpose.md)] --- class: extra-details ## Creating a headless service - A headless service is obtained by setting the `clusterIP` field to `None` (either with the `--clusterip=None` flag of `kubectl create service clusterip`, or by providing a custom YAML) - As a result, the service doesn't have a virtual IP address - Since there is no virtual IP address, there is no load balancer either - CoreDNS will return the pods' IP addresses as multiple `A` records - This gives us an easy way to discover all the replicas for a deployment .debug[[k8smastery/kubectlexpose.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/kubectlexpose.md)] --- class: extra-details ## Services and endpoints - A service has a number of "endpoints" - Each endpoint is a host + port where the service is available - The endpoints are maintained and updated automatically by Kubernetes .exercise[ - Check the endpoints that Kubernetes has associated with our `httpenv` service: ```bash kubectl describe service httpenv ``` ] In the output, there will be a line starting with `Endpoints:`. That line will list a bunch of addresses in `host:port` format. 
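The headless service described a few slides back can be created from a small manifest. A sketch (the `httpenv-headless` name is made up for illustration, and it assumes the `httpenv` pods from our earlier deployment):

```shell
# Sketch of a headless service for the httpenv pods
kubectl apply -f- <<EOF
apiVersion: v1
kind: Service
metadata:
  name: httpenv-headless   # hypothetical name
spec:
  clusterIP: None          # this is what makes the service "headless"
  selector:
    app: httpenv
  ports:
  - port: 8888
EOF
# DNS should now return one A record per pod, e.g.:
# host httpenv-headless.default.svc.cluster.local
```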
.debug[[k8smastery/kubectlexpose.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/kubectlexpose.md)] --- class: extra-details ## Viewing endpoint details - When we have many endpoints, our display commands truncate the list ```bash kubectl get endpoints ``` - If we want to see the full list, we can use a different output: ```bash kubectl get endpoints httpenv -o yaml ``` - These IP addresses should match the addresses of the corresponding pods: ```bash kubectl get pods -l app=httpenv -o wide ``` .debug[[k8smastery/kubectlexpose.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/kubectlexpose.md)] --- class: extra-details ## `endpoints` not `endpoint` - `endpoints` is the only resource that cannot be singular ```bash $ kubectl get endpoint error: the server doesn't have a resource type "endpoint" ``` - This is because the type itself is plural (unlike every other resource) - There is no `endpoint` object: `type Endpoints struct` - The type doesn't represent a single endpoint, but a list of endpoints .debug[[k8smastery/kubectlexpose.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/kubectlexpose.md)] --- class: extra-details ## The DNS zone - In the `kube-system` namespace, there should be a service named `kube-dns` - This is the internal DNS server that can resolve service names - The default domain name for the service we created is `default.svc.cluster.local` .exercise[ - Get the IP address of the internal DNS server: ```bash IP=$(kubectl -n kube-system get svc kube-dns -o jsonpath={.spec.clusterIP}) ``` - Resolve the cluster IP for the `httpenv` service: ```bash host httpenv.default.svc.cluster.local $IP ``` ] .debug[[k8smastery/kubectlexpose.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/kubectlexpose.md)] --- class: extra-details ## `Ingress` - Ingresses are another type (kind) of resource - They are specifically for HTTP 
services (not TCP or UDP) - They can also handle TLS certificates, URL rewriting ... - They require an *Ingress Controller* to function .debug[[k8smastery/kubectlexpose.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/kubectlexpose.md)] --- class: cleanup ## Cleanup Let's clean up before we start the next lecture! .exercise[ - remove our httpenv resources: ```bash kubectl delete deployment/httpenv service/httpenv ``` ] .debug[[k8smastery/cleanup-httpenv.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/cleanup-httpenv.md)] --- class: pic .interstitial[] --- name: toc-kubernetes-network-model class: title Kubernetes network model .nav[ [Previous section](#toc-exposing-containers) | [Back to table of contents](#toc-chapter-5) | [Next section](#toc-assignment--more-about-deployments) ] .debug[(automatically generated title slide)] --- # Kubernetes network model - TL;DR: *Our cluster (nodes and pods) is one big flat IP network.* -- - In detail: - all nodes must be able to reach each other, without NAT - all pods must be able to reach each other, without NAT - pods and nodes must be able to reach each other, without NAT - each pod is aware of its IP address (no NAT) - pod IP addresses are assigned by the network implementation - Kubernetes doesn't mandate any particular implementation .debug[[k8s/kubenet.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubenet.md)] --- ## Kubernetes network model: the good - Everything can reach everything - No address translation - No port translation - No new protocol - The network implementation can decide how to allocate addresses - IP addresses don't have to be "portable" from one node to another (for example, we can use a subnet per node and use a simple routed topology) - The specification is simple enough to allow many different implementations 
.debug[[k8s/kubenet.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubenet.md)] --- ## Kubernetes network model: the less good - Everything can reach everything - if you want security, you need to add network policies - the network implementation you use needs to support them - There are literally dozens of implementations out there (15 are listed in the Kubernetes documentation) - Pods have layer 3 (IP) connectivity, but *services* are layer 4 (TCP or UDP) (Services map to a single UDP or TCP port; no port ranges or arbitrary IP packets) - `kube-proxy` is on the data path when connecting to a pod or container, <br/>and it's not particularly fast (relies on userland proxying or iptables) .debug[[k8s/kubenet.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubenet.md)] --- ## Kubernetes network model: in practice - The nodes we are using have been set up to use kubenet, Calico, or something else - Don't worry about the warning about `kube-proxy` performance - Unless you: - routinely saturate 10G network interfaces - count packet rates in millions per second - run high-traffic VOIP or gaming platforms - do weird things that involve millions of simultaneous connections <br/>(in which case you're already familiar with kernel tuning) - If necessary, there are alternatives to `kube-proxy`; e.g. 
[`kube-router`](https://www.kube-router.io) .debug[[k8s/kubenet.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubenet.md)] --- class: extra-details ## The Container Network Interface (CNI) - Most Kubernetes clusters use CNI "plugins" to implement networking - When a pod is created, Kubernetes delegates the network setup to these plugins (it can be a single plugin, or a combination of plugins, each doing one task) - Typically, CNI plugins will: - allocate an IP address (by calling an IPAM plugin) - add a network interface into the pod's network namespace - configure the interface as well as required routes, etc. .debug[[k8s/kubenet.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubenet.md)] --- class: extra-details ## Multiple moving parts - The "pod-to-pod network" or "pod network": - provides communication between pods and nodes - is generally implemented with CNI plugins - The "pod-to-service network": - provides internal communication and load balancing - is generally implemented with kube-proxy (or maybe kube-router) - Network policies: - provide firewalling and isolation - can be bundled with the "pod network" or provided by another component .debug[[k8s/kubenet.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubenet.md)] --- class: extra-details ## Even more moving parts - Inbound traffic can be handled by multiple components: - something like kube-proxy or kube-router (for NodePort services) - load balancers (ideally, connected to the pod network) - It is possible to use multiple pod networks in parallel (with "meta-plugins" like CNI-Genie or Multus) - Some solutions can fill multiple roles (e.g. 
kube-router can be set up to provide the pod network and/or network policies and/or replace kube-proxy) .debug[[k8s/kubenet.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/kubenet.md)] --- class: pic .interstitial[] --- name: toc-assignment--more-about-deployments class: title Assignment 2: more about deployments .nav[ [Previous section](#toc-kubernetes-network-model) | [Back to table of contents](#toc-chapter-5) | [Next section](#toc-our-sample-application) ] .debug[(automatically generated title slide)] --- name: assignment2 # Assignment 2: more about deployments 1. Create a deployment called `littletomcat` using the `tomcat` image. 2. What command will help you get the IP address of that Tomcat server? 3. What steps would you take to ping it from another container? (Use the `shpod` environment if necessary.) 4. What command would delete the running pod inside that deployment? 5. What happens if we delete the pod that holds Tomcat, while the ping is running? (answers on next two slides) .debug[[assignments/02kubectlexpose.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/assignments/02kubectlexpose.md)] --- class: answers ## Answers 1. `kubectl create deployment littletomcat --image=tomcat` 2. List all pods with label `app=littletomcat`, with extra details including IP address: `kubectl get pods --selector=app=littletomcat -o wide`. You could also describe the pod: `kubectl describe pod littletomcat-XXX-XXX` 3. Start a shell *inside* the cluster. One way: `kubectl apply -f https://k8smastery.com/shpod.yaml` then `kubectl attach --namespace=shpod -ti shpod` - An easier way is to use a special domain we created: `curl https://shpod.sh | sh` - Then the IP address of the pod should ping correctly. You could also start a deployment or pod temporarily (like nginx), then exec in, install ping, and ping the IP. 
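The steps in the answer to question 3 can be strung together like this (a sketch; it assumes the `littletomcat` deployment from question 1, and the pod IP at the end is hypothetical — use the one your cluster assigned):

```shell
# Find the pod IP of the littletomcat deployment
kubectl get pods --selector=app=littletomcat -o wide

# Start the shpod helper environment and attach to it
kubectl apply -f https://k8smastery.com/shpod.yaml
kubectl attach --namespace=shpod -ti shpod

# Then, from inside shpod, ping the IP found above, e.g.:
# ping 10.1.0.42   # hypothetical pod IP; substitute yours
```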
.debug[[assignments/02kubectlexpose.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/assignments/02kubectlexpose.md)] --- class: answers ## Answers 4. We can delete the pod with: `kubectl delete pods --selector=app=littletomcat` or copy/paste the exact pod name and delete it. 5. If we delete the pod, the following things will happen: - the pod will be gracefully terminated, - the ping command that we left running will fail, - the replica set will notice that it doesn't have the right number of pods and create a replacement pod, - that new pod will have a different IP address (so the `ping` command won't recover). .debug[[assignments/02kubectlexpose.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/assignments/02kubectlexpose.md)] --- ## Assignment 2: first service 1. What command can give our Tomcat server a stable DNS name and IP address? (An address that doesn't change when something bad happens to the container.) 2. What commands would you run to curl Tomcat with that DNS address? (Use the `shpod` environment if necessary.) 3. If we delete the pod that holds Tomcat, does the IP address still work? (answers on next slide) .debug[[assignments/02kubectlexpose.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/assignments/02kubectlexpose.md)] --- class: answers ## Answers 1. We need to create a *Service* for our deployment, which will have a *ClusterIP* that is usable from within the cluster. One way is with `kubectl expose deployment littletomcat --port=8080` (The Tomcat image is listening on port 8080 according to Docker Hub). Another way is with `kubectl create service clusterip littletomcat --tcp 8080` 2. 
In the `shpod` environment that we started earlier: ```bash # Install curl apk add curl # Make a request to the littletomcat service (in a different namespace) curl http://littletomcat.default:8080 ``` Note that shpod runs in the `shpod` namespace, so to resolve a service that lives in a different namespace of the same cluster, use the `<service>.<namespace>` syntax. That was a little advanced, so A+ if you got it on the first try! 3. Yes. If we delete the pod, another will be created to replace it. The *ClusterIP* will still work. (Except during a short period while the replacement container is being started.) .debug[[assignments/02kubectlexpose.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/assignments/02kubectlexpose.md)] --- class: pic .interstitial[] --- name: toc-our-sample-application class: title Our sample application .nav[ [Previous section](#toc-assignment--more-about-deployments) | [Back to table of contents](#toc-chapter-6) | [Next section](#toc-shipping-images-with-a-registry) ] .debug[(automatically generated title slide)] --- class: not-mastery # Our sample application - We will clone the GitHub repository onto our `node1` - The repository also contains scripts and tools that we will use through the workshop .exercise[ <!-- ```bash cd ~ if [ -d container.training ]; then mv container.training container.training.$RANDOM fi ``` --> - Clone the repository on `node1`: ```bash git clone https://github.com/BretFisher/kubernetes-mastery ``` ] (You can also fork the repository on GitHub and clone your fork if you prefer that.) .debug[[shared/sampleapp.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/shared/sampleapp.md)] --- class: not-mastery ## Downloading and running the application Let's start this before we look around, as downloading will take a little time...
.exercise[ - Go to the `dockercoins` directory, in the cloned repo: ```bash cd ~/container.training/dockercoins ``` - Use Compose to build and run all containers: ```bash docker-compose up ``` <!-- ```longwait units of work done``` --> ] Compose tells Docker to build all container images (pulling the corresponding base images), then starts all containers, and displays aggregated logs. .debug[[shared/sampleapp.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/shared/sampleapp.md)] --- ## What's this application? -- - It is a DockerCoin miner! .emoji[💰🐳📦🚢] -- - No, you can't buy coffee with DockerCoins -- - How DockerCoins works: - generate a few random bytes - hash these bytes - increment a counter (to keep track of speed) - repeat forever! -- - DockerCoins is *not* a cryptocurrency (the only common points are "randomness," "hashing," and "coins" in the name) .debug[[shared/sampleapp.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/shared/sampleapp.md)] --- ## DockerCoins in the microservices era - DockerCoins is made of 5 services: - `rng` = web service generating random bytes - `hasher` = web service computing hash of POSTed data - `worker` = background process calling `rng` and `hasher` - `webui` = web interface to watch progress - `redis` = data store (holds a counter updated by `worker`) - These 5 services are visible in the application's Compose file, [dockercoins-compose.yml]( https://github.com/BretFisher/kubernetes-mastery/blob/mastery/k8s/dockercoins-compose.yml) .debug[[shared/sampleapp.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/shared/sampleapp.md)] --- ## How DockerCoins works - `worker` invokes web service `rng` to generate random bytes - `worker` invokes web service `hasher` to hash these bytes - `worker` does this in an infinite loop - Every second, `worker` updates `redis` to indicate how many loops were done - `webui` queries `redis`, and computes and exposes "hashing 
speed" in our browser *(See diagram on next slide!)* .debug[[shared/sampleapp.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/shared/sampleapp.md)] --- class: pic  .debug[[shared/sampleapp.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/shared/sampleapp.md)] --- ## Service discovery in container-land How does each service find out the address of the other ones? -- - We do not hard-code IP addresses in the code - We do not hard-code FQDNs in the code, either - We just connect to a service name, and container-magic does the rest (And by container-magic, we mean "a crafty, dynamic, embedded DNS server") .debug[[shared/sampleapp.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/shared/sampleapp.md)] --- ## Example in `worker/worker.py` ```python redis = Redis("`redis`") def get_random_bytes(): r = requests.get("http://`rng`/32") return r.content def hash_bytes(data): r = requests.post("http://`hasher`/", data=data, headers={"Content-Type": "application/octet-stream"}) ``` (Full source code available [here]( https://github.com/BretFisher/kubernetes-mastery/blob/8279a3bce9398f7c1a53bdd95187c53eda4e6435/dockercoins/worker/worker.py#L17 )) .debug[[shared/sampleapp.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/shared/sampleapp.md)] --- class: extra-details, not-mastery ## Links, naming, and service discovery - Containers can have network aliases (resolvable through DNS) - Compose file version 2+ makes each container reachable through its service name - Compose file version 1 required "links" sections to accomplish this - Network aliases are automatically namespaced - you can have multiple apps declaring and using a service named `database` - containers in the blue app will resolve `database` to the IP of the blue database - containers in the green app will resolve `database` to the IP of the green database 
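The mechanics above can be sketched with a minimal Compose file (the images and service names here are just examples, not part of the course app): `app` reaches `database` purely by its service name, resolved by the embedded DNS server.

```yaml
# Hypothetical two-service app: no links, no hard-coded IPs.
# Starting this file under two different project names gives each
# project its own, isolated "database" alias.
version: "2"
services:
  app:
    image: alpine
    command: ping database   # resolves via Compose's embedded DNS
  database:
    image: redis
```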
.debug[[shared/sampleapp.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/shared/sampleapp.md)] --- class: not-mastery ## Show me the code! - You can check the GitHub repository with all the materials of this workshop: <br/>https://github.com/BretFisher/kubernetes-mastery - The application is in the [dockercoins]( https://github.com/BretFisher/kubernetes-mastery/tree/master/dockercoins) subdirectory - The Compose file ([docker-compose.yml]( https://github.com/BretFisher/kubernetes-mastery/blob/master/dockercoins/docker-compose.yml)) lists all 5 services - `redis` is using an official image from the Docker Hub - `hasher`, `rng`, `worker`, `webui` are each built from a Dockerfile - Each service's Dockerfile and source code is in its own directory (`hasher` is in the [hasher](https://github.com/BretFisher/kubernetes-mastery/blob/master/dockercoins/hasher/) directory, `rng` is in the [rng](https://github.com/BretFisher/kubernetes-mastery/blob/master/dockercoins/rng/) directory, etc.) 
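Each of those Dockerfiles follows the usual pattern; as a rough, hypothetical sketch (not the actual file, which you can read in the repo):

```dockerfile
# Hypothetical Dockerfile for a small Python web service like rng
FROM python:alpine
RUN pip install Flask
COPY service.py /
CMD ["python", "service.py"]
```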
.debug[[shared/sampleapp.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/shared/sampleapp.md)] --- class: extra-details, not-mastery ## Compose file format version *This is relevant only if you have used Compose before 2016...* - Compose 1.6 introduced support for a new Compose file format (aka "v2") - Services are no longer at the top level, but under a `services` section - There has to be a `version` key at the top level, with value `"2"` (as a string, not an integer) - Containers are placed on a dedicated network, making links unnecessary - There are other minor differences, but upgrade is easy and straightforward .debug[[shared/sampleapp.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/shared/sampleapp.md)] --- class: not-mastery ## Our application at work - On the left-hand side, the "rainbow strip" shows the container names - On the right-hand side, we see the output of our containers - We can see the `worker` service making requests to `rng` and `hasher` - For `rng` and `hasher`, we see HTTP access logs .debug[[shared/sampleapp.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/shared/sampleapp.md)] --- ## DockerCoins at work - `worker` will log HTTP requests to `rng` and `hasher` - `rng` and `hasher` will log incoming HTTP requests - `webui` will give us a graph on coins mined per second <img style="max-height:400px" src="k8smastery/dockercoins-webui.png"> .debug[[shared/sampleapp.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/shared/sampleapp.md)] --- ## Check out the app in Docker Compose - Compose is (still) great for local development - You can test this app if you have Docker and Compose installed - If not, remember [play-with-docker.com](https://play-with-docker.com) .exercise[ - Download the compose file somewhere and run it ```bash curl -o docker-compose.yml https://k8smastery.com/dockercoins-compose.yml docker-compose up ``` ] - View the `webui` on 
`localhost:8000` or click the `8000` link in PWD .debug[[shared/sampleapp.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/shared/sampleapp.md)] --- class: not-mastery ## Connecting to the web UI - "Logs are exciting and fun!" (No one, ever) - The `webui` container exposes a web dashboard; let's view it .exercise[ - With a web browser, connect to `node1` on port 8000 - Remember: the `nodeX` aliases are valid only on the nodes themselves - In your browser, you need to enter the IP address of your node <!-- ```open http://node1:8000``` --> ] A drawing area should show up, and after a few seconds, a blue graph will appear. .debug[[shared/sampleapp.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/shared/sampleapp.md)] --- class: self-paced, extra-details ## If the graph doesn't load If you just see a `Page not found` error, it might be because your Docker Engine is running on a different machine. This can be the case if: - you are using the Docker Toolbox - you are using a VM (local or remote) created with Docker Machine - you are controlling a remote Docker Engine When you run DockerCoins in development mode, the web UI static files are mapped to the container using a volume. Alas, volumes can only work on a local environment, or when using Docker Desktop for Mac or Windows. How to fix this? Stop the app with `^C`, edit `dockercoins.yml`, comment out the `volumes` section, and try again. .debug[[shared/sampleapp.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/shared/sampleapp.md)] --- class: extra-details ## Why does the speed seem irregular? - It *looks like* the speed is approximately 4 hashes/second - Or more precisely: 4 hashes/second, with regular dips down to zero - Why? -- class: extra-details - The app actually has a constant, steady speed: 3.33 hashes/second <br/> (which corresponds to 1 hash every 0.3 seconds, for *reasons*) - Yes, and?
.debug[[shared/sampleapp.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/shared/sampleapp.md)] --- class: extra-details ## The reason why this graph is *not awesome* - The worker doesn't update the counter after every loop, but up to once per second - The speed is computed by the browser, checking the counter about once per second - Between two consecutive updates, the counter will increase either by 4, or by 0 - The perceived speed will therefore be 4 - 4 - 4 - 0 - 4 - 4 - 0 etc. - What can we conclude from this? -- class: extra-details - "I'm clearly incapable of writing good frontend code!" 😀 — Jérôme .debug[[shared/sampleapp.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/shared/sampleapp.md)] --- ## Stopping the application - If we interrupt Compose (with `^C`), it will politely ask the Docker Engine to stop the app - The Docker Engine will send a `TERM` signal to the containers - If the containers do not exit in a timely manner, the Engine sends a `KILL` signal .exercise[ - Stop the application by hitting `^C` <!-- ```key ^C``` --> ] -- Some containers exit immediately, others take longer. The containers that do not handle `SIGTERM` end up being killed after a 10s timeout. If we are very impatient, we can hit `^C` a second time! 
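The graceful-termination dance can be sketched outside of Docker (this is our own illustration, not course code): a process that traps `TERM` gets the chance to exit cleanly, while one that ignores the signal would eventually be `KILL`ed after the timeout.

```bash
# Write a tiny "worker" that handles SIGTERM gracefully
cat > /tmp/worker.sh <<'EOF'
trap 'echo "SIGTERM received, exiting cleanly"; exit 0' TERM
echo "worker started"
while true; do sleep 1; done
EOF

sh /tmp/worker.sh > /tmp/worker.log &
sleep 1
kill -TERM $!     # this is what the Engine sends first on "stop"
wait $!           # without the trap, a KILL would follow the timeout
cat /tmp/worker.log
```

The trap handler runs as soon as the current `sleep` finishes, which is why well-behaved containers shut down within a second or two instead of eating the full 10s grace period.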
.debug[[shared/sampleapp.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/shared/sampleapp.md)] --- class: cleanup ## Clean up - Before moving on, let's remove those containers - Or, if using PWD for Compose, just hit the "close session" button .exercise[ - Tell Compose to remove everything: ```bash docker-compose down ``` ] .debug[[shared/composedown.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/shared/composedown.md)] --- class: pic .interstitial[] --- name: toc-shipping-images-with-a-registry class: title Shipping images with a registry .nav[ [Previous section](#toc-our-sample-application) | [Back to table of contents](#toc-chapter-6) | [Next section](#toc-running-dockercoins-on-kubernetes) ] .debug[(automatically generated title slide)] --- # Shipping images with a registry - When developing with Docker, we get *build*, *ship*, and *run* features - Now that we want to run on a cluster, things are different - Kubernetes doesn't have a *build* feature built-in - The way to ship (pull) images to Kubernetes is to use a registry .debug[[k8s/shippingimages.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/shippingimages.md)] --- ## How Docker registries work (a reminder) - What happens when we execute `docker run alpine` ? - If the Engine needs to pull the `alpine` image, it expands it into `library/alpine` - `library/alpine` is expanded into `index.docker.io/library/alpine` - The Engine communicates with `index.docker.io` to retrieve `library/alpine:latest` - To use something other than `index.docker.io`, we specify it in the image name - Examples: ```bash docker pull gcr.io/google-containers/alpine-with-bash:1.0 docker build -t registry.mycompany.io:5000/myimage:awesome .
docker push registry.mycompany.io:5000/myimage:awesome ``` .debug[[k8s/shippingimages.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/shippingimages.md)] --- ## Building and shipping images - There are *many* options! - Manually: - build locally (with `docker build` or otherwise) - push to the registry - Automatically: - build and test locally - when ready, commit and push to a code repository - the code repository notifies an automated build system - that system gets the code, builds it, pushes the image to the registry .debug[[k8s/shippingimages.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/shippingimages.md)] --- ## Which registry do we want to use? - There are SaaS products like Docker Hub, Quay, GitLab ... - Each major cloud provider has an option as well (ACR on Azure, ECR on AWS, GCR on Google Cloud...) -- - There are also commercial products to run our own registry (Docker Enterprise DTR, Quay, GitLab, JFrog Artifactory...) -- - And open source options, too! (Quay, Portus, OpenShift OCR, GitLab, Harbor, Kraken...)
(I don't mention Docker Distribution here because it's too basic) -- - When picking a registry, pay attention to: - Its build system - Multi-user auth and mgmt (RBAC) - Storage features (replication, caching, garbage collection) .debug[[k8s/shippingimages.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/shippingimages.md)] --- ## Running DockerCoins on Kubernetes - Create one deployment for each component (hasher, redis, rng, webui, worker) - Expose deployments that need to accept connections (hasher, redis, rng, webui) - For redis, we can use the official redis image - For the 4 others, we need to build images and push them to some registry .debug[[k8s/shippingimages.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/shippingimages.md)] --- ## Using images from the Docker Hub - For everyone's convenience, we took care of building DockerCoins images - We pushed these images to the DockerHub, under the [dockercoins](https://hub.docker.com/u/dockercoins) user - These images are *tagged* with a version number, `v0.1` - The full image names are therefore: - `dockercoins/hasher:v0.1` - `dockercoins/rng:v0.1` - `dockercoins/webui:v0.1` - `dockercoins/worker:v0.1` .debug[[k8s/buildshiprun-dockerhub.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/buildshiprun-dockerhub.md)] --- class: pic .interstitial[] --- name: toc-running-dockercoins-on-kubernetes class: title Running DockerCoins on Kubernetes .nav[ [Previous section](#toc-shipping-images-with-a-registry) | [Back to table of contents](#toc-chapter-6) | [Next section](#toc-assignment--deploy-wordsmith) ] .debug[(automatically generated title slide)] --- # Running DockerCoins on Kubernetes - We can now deploy our code (as well as a redis instance) .exercise[ - Deploy `redis`: ```bash kubectl create deployment redis --image=redis ``` - Deploy everything else: ```bash kubectl create deployment hasher --image=dockercoins/hasher:v0.1 kubectl 
create deployment rng --image=dockercoins/rng:v0.1 kubectl create deployment webui --image=dockercoins/webui:v0.1 kubectl create deployment worker --image=dockercoins/worker:v0.1 ``` ] .debug[[k8s/ourapponkube.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/ourapponkube.md)] --- class: extra-details, not-mastery ## Deploying other images - If we wanted to deploy images from another registry ... - ... Or with a different tag ... - ... We could use the following snippet: ```bash REGISTRY=dockercoins TAG=v0.1 for SERVICE in hasher rng webui worker; do kubectl create deployment $SERVICE --image=$REGISTRY/$SERVICE:$TAG done ``` .debug[[k8s/ourapponkube.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/ourapponkube.md)] --- ## Is this working? - After waiting for the deployment to complete, let's look at the logs! (Hint: use `kubectl get deploy -w` to watch deployment events) .exercise[ <!-- ```hide kubectl wait deploy/rng --for condition=available kubectl wait deploy/worker --for condition=available ``` --> - Look at some logs: ```bash kubectl logs deploy/rng kubectl logs deploy/worker ``` ] -- 🤔 `rng` is fine ... But not `worker`. -- 💡 Oh right! We forgot to `expose`. .debug[[k8s/ourapponkube.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/ourapponkube.md)] --- ## Connecting containers together - Three deployments need to be reachable by others: `hasher`, `redis`, `rng` - `worker` doesn't need to be exposed - `webui` will be dealt with later .exercise[ - Expose each deployment, specifying the right port: ```bash kubectl expose deployment redis --port 6379 kubectl expose deployment rng --port 80 kubectl expose deployment hasher --port 80 ``` ] .debug[[k8s/ourapponkube.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/ourapponkube.md)] --- ## Is this working yet? 
- The `worker` has an infinite loop that retries 10 seconds after an error .exercise[ - Stream the worker's logs: ```bash kubectl logs deploy/worker --follow ``` (Give it about 10 seconds to recover) <!-- ```wait units of work done, updating hash counter``` ```key ^C``` --> ] -- We should now see the `worker`, well, working happily. .debug[[k8s/ourapponkube.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/ourapponkube.md)] --- ## Exposing services for external access - Now we would like to access the Web UI - We will expose it with a `NodePort` .exercise[ - Create a `NodePort` service for the Web UI: ```bash kubectl expose deploy/webui --type=NodePort --port=80 ``` - Check the port that was allocated: ```bash kubectl get svc ``` ] .debug[[k8s/ourapponkube.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/ourapponkube.md)] --- ## Accessing the web UI - We can now connect to *any node*, on the allocated node port, to view the web UI .exercise[ - Open the web UI in your browser (http://localhost:3xxxx/) <!-- ```open http://node1:3xxxx/``` --> ] -- Yes, this may take a little while to update.
*(Narrator: it was DNS.)* -- *Alright, we're back to where we started, when we were running on a single node!* .debug[[k8s/ourapponkube.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/ourapponkube.md)] --- class: pic .interstitial[] --- name: toc-assignment--deploy-wordsmith class: title Assignment 3: deploy wordsmith .nav[ [Previous section](#toc-running-dockercoins-on-kubernetes) | [Back to table of contents](#toc-chapter-6) | [Next section](#toc-scaling-our-demo-app) ] .debug[(automatically generated title slide)] --- name: assignment3 # Assignment 3: deploy wordsmith - Let's deploy another application called *wordsmith* - Wordsmith has 3 components: - a web frontend: `bretfisher/wordsmith-web` - an API backend: `bretfisher/wordsmith-words` (NOTE: won't run on Raspberry Pi (arm/v7) yet [GH Issue](https://github.com/carlossg/docker-maven/issues/213)) - a postgres database: `bretfisher/wordsmith-db` - We have built images for these components, and pushed them to the Docker Hub - We want to deploy all 3 components on Kubernetes - We want to be able to connect to the web frontend with our browser .debug[[assignments/03deploywordsmith.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/assignments/03deploywordsmith.md)] --- ## Wordsmith details - Here are all the network flows in the app: - the web frontend listens on port 80 - the web frontend connects to the API at the address http://words:8080 - the API backend listens on port 8080 - the API connects to the database with the connection string pgsql://db:5432 - the database listens on port 5432 .debug[[assignments/03deploywordsmith.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/assignments/03deploywordsmith.md)] --- ## Winning conditions - After deploying and connecting everything together, open the web frontend - This is what we should see:  (You will probably see a different sentence, though.)
- If you see empty LEGO bricks, something's wrong ... .debug[[assignments/03deploywordsmith.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/assignments/03deploywordsmith.md)] --- ## Scaling things up - If we reload that page, we get the same sentence - And that sentence repeats the same adjective and noun anyway - Can we do better? - Yes, if we scale up the API backend! - Try to scale up the API backend and see what happens .footnote[Wondering what this app is all about? <br/> It was a demo app showcased at DockerCon] .debug[[assignments/03deploywordsmith.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/assignments/03deploywordsmith.md)] --- class: answers ## Answers First, we need to create deployments for all three components: ```bash kubectl create deployment db --image=bretfisher/wordsmith-db kubectl create deployment web --image=bretfisher/wordsmith-web kubectl create deployment words --image=bretfisher/wordsmith-words ``` Note: we need to use these exact names, because they will be used for the *services* that we will create (and for their DNS entries as well). To put it differently: if our code connects to `words` then the service should be named `words` and the deployment should also be named `words` (unless we want to write our own service YAML manifest by hand; but we won't do that yet).
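If we ever did write that Service manifest by hand, a minimal sketch could look like this (`words` is the name the code resolves; the `app=words` label is the one `kubectl create deployment words` sets automatically):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: words          # must match the hostname used in the code
spec:
  selector:
    app: words         # matches the pods of the "words" deployment
  ports:
    - port: 8080
```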
.debug[[assignments/03deploywordsmith.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/assignments/03deploywordsmith.md)] --- class: answers ## Answers Then, we need to create the services for these deployments: ```bash kubectl expose deployment db --port=5432 kubectl expose deployment web --port=80 --type=NodePort kubectl expose deployment words --port=8080 ``` or ```bash kubectl create service clusterip db --tcp=5432 kubectl create service nodeport web --tcp=80 kubectl create service clusterip words --tcp=8080 ``` Find out the node port allocated to `web`: `kubectl get service web` Open it in your browser. If you hit "reload", you always see the same sentence. .debug[[assignments/03deploywordsmith.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/assignments/03deploywordsmith.md)] --- class: answers ## Answers Finally, scale up the API for more words on refresh: ```bash kubectl scale deployment words --replicas=5 ``` If you hit "reload", you should now see different sentences each time. .debug[[assignments/03deploywordsmith.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/assignments/03deploywordsmith.md)] --- class: pic .interstitial[] --- name: toc-scaling-our-demo-app class: title Scaling our demo app .nav[ [Previous section](#toc-assignment--deploy-wordsmith) | [Back to table of contents](#toc-chapter-7) | [Next section](#toc-deploying-with-yaml) ] .debug[(automatically generated title slide)] --- # Scaling our demo app - Our ultimate goal is to get more DockerCoins (i.e. increase the number of loops per second shown on the web UI) - Let's look at the [architecture](images/dockercoins-diagram.svg) again:  -- - We're at 4 hashes a second. Let's ramp this up! - The loop is done in the worker; perhaps we could try adding more workers? 
.debug[[k8s/scalingdockercoins.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/scalingdockercoins.md)] --- ## Adding another worker - All we have to do is scale the `worker` Deployment .exercise[ - Open a new terminal to keep an eye on our pods: ```bash kubectl get pods -w ``` <!-- ```wait RESTARTS``` ```tmux split-pane -h``` --> - Now, create more `worker` replicas: ```bash kubectl scale deployment worker --replicas=2 ``` ] -- After a few seconds, the graph in the web UI should go up. .debug[[k8s/scalingdockercoins.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/scalingdockercoins.md)] --- ## Adding more workers - If 2 workers give us 2x speed, what about 3 workers? .exercise[ - Scale the `worker` Deployment further: ```bash kubectl scale deployment worker --replicas=3 ``` ] -- The graph in the web UI should go up again. (This is looking great! We're gonna be RICH!) .debug[[k8s/scalingdockercoins.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/scalingdockercoins.md)] --- ## Adding even more workers - Let's see if 10 workers give us 10x speed! .exercise[ - Scale the `worker` Deployment to a bigger number: ```bash kubectl scale deployment worker --replicas=10 ``` <!-- ```key ^D``` ```key ^C``` --> ] -- The graph will peak at 10-12 hashes/second. (We can add as many workers as we want: we will never go past 10-12 hashes/second.) .debug[[k8s/scalingdockercoins.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/scalingdockercoins.md)] --- class: extra-details ## Didn't we briefly exceed 10 hashes/second?
- It may *look like it*, because the web UI shows instant speed - The instant speed can briefly exceed 10 hashes/second - The average speed cannot - The instant speed can be biased because of how it's computed .debug[[k8s/scalingdockercoins.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/scalingdockercoins.md)] --- class: extra-details, not-mastery ## Why instant speed is misleading - The instant speed is computed client-side by the web UI - The web UI checks the hash counter once per second <br/> (and does a classic (h2-h1)/(t2-t1) speed computation) - The counter is updated once per second by the workers - These timings are not exact <br/> (e.g. the web UI check interval is client-side JavaScript) - Sometimes, between two web UI counter measurements, <br/> the workers are able to update the counter *twice* - During that cycle, the instant speed will appear to be much bigger <br/> (but it will be compensated by lower instant speed before and after) .debug[[k8s/scalingdockercoins.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/scalingdockercoins.md)] --- ## Why are we stuck at 10-12 hashes per second? - If this was high-quality, production code, we would have instrumentation (Datadog, Honeycomb, New Relic, statsd, Sumologic, ...) - It's not! - Perhaps we could benchmark our web services? 
(with tools like `ab`, or even simpler, `httping`) .debug[[k8s/scalingdockercoins.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/scalingdockercoins.md)] --- ## Benchmarking our web services - We want to check `hasher` and `rng` - We are going to use `httping` - It's just like `ping`, but using HTTP `GET` requests (it measures how long it takes to perform one `GET` request) - It's used like this: ``` httping [-c count] http://host:port/path ``` - Or even simpler: ``` httping ip.ad.dr.ess ``` - We will use `httping` on the ClusterIP addresses of our services .debug[[k8s/scalingdockercoins.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/scalingdockercoins.md)] --- ## Obtaining ClusterIP addresses - We can simply check the output of `kubectl get services` - Or do it programmatically, as in the example below .exercise[ - Retrieve the IP addresses: ```bash HASHER=$(kubectl get svc hasher -o go-template={{.spec.clusterIP}}) RNG=$(kubectl get svc rng -o go-template={{.spec.clusterIP}}) ``` ] Now we can access the IP addresses of our services through `$HASHER` and `$RNG`. .debug[[k8s/scalingdockercoins.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/scalingdockercoins.md)] --- ## Checking `hasher` and `rng` response times .exercise[ - Remember to use [`shpod`](#shpod) on macOS and Windows: ```bash kubectl attach --namespace=shpod -ti shpod ``` - Check the response times for both services: ```bash httping -c 3 $HASHER httping -c 3 $RNG ``` ] -- - `hasher` is fine (it should take a few milliseconds to reply) - `rng` is not (it should take about 700 milliseconds if there are 10 workers) - Something is wrong with `rng`, but ... what? 
.debug[[k8s/scalingdockercoins.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/scalingdockercoins.md)] --- ## Let's draw hasty conclusions - The bottleneck seems to be `rng` - *What if* we don't have enough entropy and can't generate enough random numbers? - We need to scale out the `rng` service on multiple machines! Note: this is a fiction! We have enough entropy. But we need a pretext to scale out. .footnote[ (In fact, the code of `rng` uses `/dev/urandom`, which never runs out of entropy... <br/> ...and is [just as good as `/dev/random`](http://www.slideshare.net/PacSecJP/filippo-plain-simple-reality-of-entropy).) ] -- - **Oops** we only have one node for learning. 🤔 -- - Let's pretend and I'll explain along the way .debug[[shared/hastyconclusions.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/shared/hastyconclusions.md)] --- class: pic .interstitial[] --- name: toc-deploying-with-yaml class: title Deploying with YAML .nav[ [Previous section](#toc-scaling-our-demo-app) | [Back to table of contents](#toc-chapter-7) | [Next section](#toc-the-kubernetes-dashboard) ] .debug[(automatically generated title slide)] --- # Deploying with YAML - So far, we created resources with the following commands: - `kubectl run` - `kubectl create deployment` - `kubectl expose` - We can also create resources directly with YAML manifests .debug[[k8s/yamldeploy.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/yamldeploy.md)] --- ## `kubectl apply` vs `create` - `kubectl create -f whatever.yaml` - creates resources if they don't exist - if resources already exist, don't alter them <br/>(and display error message) - `kubectl apply -f whatever.yaml` - creates resources if they don't exist - if resources already exist, update them <br/>(to match the definition provided by the YAML file) - stores the manifest as an *annotation* in the resource 
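That stored annotation can be inspected directly; for example (assuming a deployment named `webui` that was created with `kubectl apply`):

```bash
kubectl get deployment webui -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io/last-applied-configuration}'
```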
.debug[[k8s/yamldeploy.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/yamldeploy.md)] --- ## Creating multiple resources - The manifest can contain multiple resources separated by `---` ```yaml kind: ... apiVersion: ... metadata: name: ... ... spec: ... --- kind: ... apiVersion: ... metadata: name: ... ... spec: ... ``` .debug[[k8s/yamldeploy.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/yamldeploy.md)] --- ## Creating multiple resources - The manifest can also contain a list of resources ```yaml apiVersion: v1 kind: List items: - kind: ... apiVersion: ... ... - kind: ... apiVersion: ... ... ``` .debug[[k8s/yamldeploy.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/yamldeploy.md)] --- name: dockercoins ## Deploying DockerCoins with YAML - Here's a YAML manifest with all the resources for DockerCoins (Deployments and Services) - We can use it if we need to deploy or redeploy DockerCoins - Yes, `kubectl` commands that take YAML files can use URLs!
.exercise[

- Deploy or redeploy DockerCoins:
  ```bash
  kubectl apply -f https://k8smastery.com/dockercoins.yaml
  ```

]

.debug[[k8s/yamldeploy.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/yamldeploy.md)]

---

## `apply` warnings for resources made with `create` or `run`

- Note the warnings if you already had the resources created

- This is because we didn't use `apply` before

- This is OK while we're learning, so ignore the warnings

- Generally, in production you want to pick one method and stick with it

.debug[[k8s/yamldeploy.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/yamldeploy.md)]

---

## Deleting resources

- We can also use a YAML file to *delete* resources

- `kubectl delete -f ...` will delete all the resources mentioned in a YAML file

  (useful to clean up everything that was created by `kubectl apply -f ...`)

- The definitions of the resources don't matter

  (just their `kind`, `apiVersion`, and `name`)

.debug[[k8s/yamldeploy.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/yamldeploy.md)]

---

## Pruning¹ resources

- We can also tell `kubectl` to remove old resources

- This is done with `kubectl apply -f ... --prune`

- It will remove resources that don't exist in the YAML file(s)

- But only if they were created with `kubectl apply` in the first place

  (technically, if they have an annotation `kubectl.kubernetes.io/last-applied-configuration`)

.footnote[¹If English is not your first language: *to prune* means to remove dead or overgrown branches in a tree, to help it to grow.]

.debug[[k8s/yamldeploy.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/yamldeploy.md)]

---

## YAML as source of truth

- Imagine the following workflow:

  - do not use `kubectl run`, `kubectl create deployment`, `kubectl expose` ...

  - define everything with YAML

  - `kubectl apply -f ...
--prune --all` that YAML

  - keep that YAML under version control

  - enforce all changes to go through that YAML (e.g. with pull requests)

- Our version control system now has a full history of what we deploy

- Comparable to "Infrastructure-as-Code", but for app deployments

.debug[[k8s/yamldeploy.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/yamldeploy.md)]

---

class: extra-details

## Specifying the namespace

- When creating resources from YAML manifests, the namespace is optional

- If we specify a namespace:

  - resources are created in the specified namespace

  - this is typical for things deployed only once per cluster

  - example: system components, cluster add-ons ...

- If we don't specify a namespace:

  - resources are created in the current namespace

  - this is typical for things that may be deployed multiple times

  - example: applications (production, staging, feature branches ...)

.debug[[k8s/yamldeploy.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/yamldeploy.md)]

---

class: pic

.interstitial[]

---

name: toc-the-kubernetes-dashboard
class: title

The Kubernetes Dashboard

.nav[
[Previous section](#toc-deploying-with-yaml)
|
[Back to table of contents](#toc-chapter-7)
|
[Next section](#toc-security-implications-of-kubectl-apply)
]

.debug[(automatically generated title slide)]

---

# The Kubernetes Dashboard

- Kubernetes resources can also be viewed with an official web UI

- That dashboard is usually exposed over HTTPS

  (this requires obtaining a proper TLS certificate)

- Dashboard users need to authenticate

- We are going to take a *dangerous* shortcut

.debug[[k8s/dashboard.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/dashboard.md)]

---

## The insecure method

- We could (and should) use [Let's Encrypt](https://letsencrypt.org/) ...

- ... but we don't want to deal with TLS certificates

- We could (and should) learn how authentication and authorization work ...

- ...
but we will use a guest account with admin access instead

.footnote[.warning[Yes, this will open our cluster to all kinds of shenanigans. Don't do this at home.]]

.debug[[k8s/dashboard.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/dashboard.md)]

---

## Running a very insecure dashboard

- We are going to deploy that dashboard with *one single command*

- This command will create all the necessary resources

  (the dashboard itself, the HTTP wrapper, the admin/guest account)

- All these resources are defined in a YAML file

- All we have to do is load that YAML file with `kubectl apply -f`

.exercise[

- Create all the dashboard resources, with the following command:
  ```bash
  kubectl apply -f https://k8smastery.com/insecure-dashboard.yaml
  ```

]

.debug[[k8s/dashboard.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/dashboard.md)]

---

## Connecting to the dashboard

.exercise[

- Check which port the dashboard is on:
  ```bash
  kubectl get svc dashboard
  ```

]

You'll want the `3xxxx` port.

.exercise[

- Connect to http://localhost:3xxxx/

<!-- ```open http://node1:3xxxx/``` -->

]

The dashboard will then ask you which authentication you want to use.

.debug[[k8s/dashboard.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/dashboard.md)]

---

## Dashboard authentication

- We have three authentication options at this point:

  - token (associated with a role that has appropriate permissions)

  - kubeconfig (e.g. using the `~/.kube/config` file)

  - "skip" (use the dashboard "service account")

- Let's use "skip": we're logged in!

--

.warning[By the way, we just added a backdoor to our Kubernetes cluster!]
.debug[[k8s/dashboard.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/dashboard.md)]

---

## Running the Kubernetes Dashboard securely

- The steps that we just showed you are *for educational purposes only!*

- If you do that on your production cluster, people [can and will abuse it](https://redlock.io/blog/cryptojacking-tesla)

- For an in-depth discussion about securing the dashboard, <br/> check [this excellent post on Heptio's blog](https://blog.heptio.com/on-securing-the-kubernetes-dashboard-16b09b1b7aca)

--

- On Minikube and MicroK8s, a dashboard can be enabled with a single command: `minikube dashboard` or `microk8s enable dashboard`

.debug[[k8s/dashboard.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/dashboard.md)]

---

## Other dashboards

- [Kube Web View](https://codeberg.org/hjacobs/kube-web-view)

  - read-only dashboard

  - optimized for "troubleshooting and incident response"

  - see [vision and goals](https://kube-web-view.readthedocs.io/en/latest/vision.html#vision) for details

- [Kube Ops View](https://github.com/hjacobs/kube-ops-view)

  - "provides a common operational picture for multiple Kubernetes clusters"

--

- Your Kubernetes distro may come with one!

--

- Cloud-provided control planes often don't come with one

.debug[[k8s/dashboard.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/dashboard.md)]

---

class: pic

.interstitial[]

---

name: toc-security-implications-of-kubectl-apply
class: title

Security implications of `kubectl apply`

.nav[
[Previous section](#toc-the-kubernetes-dashboard)
|
[Back to table of contents](#toc-chapter-7)
|
[Next section](#toc-daemon-sets)
]

.debug[(automatically generated title slide)]

---

# Security implications of `kubectl apply`

- When we do `kubectl apply -f <URL>`, we create arbitrary resources

- Resources can be evil; imagine a `deployment` that ...
-- - starts bitcoin miners on the whole cluster -- - hides in a non-default namespace -- - bind-mounts our nodes' filesystem -- - inserts SSH keys in the root account (on the node) -- - encrypts our data and ransoms it -- - ☠️☠️☠️ .debug[[k8s/dashboard.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/dashboard.md)] --- ## `kubectl apply` is the new `curl | sh` - `curl | sh` is convenient - It's safe if you use HTTPS URLs from trusted sources -- - `kubectl apply -f` is convenient - It's safe if you use HTTPS URLs from trusted sources - Example: the official setup instructions for most pod networks -- - It introduces new failure modes (for instance, if you try to apply YAML from a link that's no longer valid) .debug[[k8s/dashboard.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/dashboard.md)] --- class: pic .interstitial[] --- name: toc-daemon-sets class: title Daemon sets .nav[ [Previous section](#toc-security-implications-of-kubectl-apply) | [Back to table of contents](#toc-chapter-7) | [Next section](#toc-labels-and-selectors) ] .debug[(automatically generated title slide)] --- # Daemon sets - We want to scale `rng` in a way that is different from how we scaled `worker` - We want one (and exactly one) instance of `rng` per node - We *do not want* two instances of `rng` on the same node - We will do that with a *daemon set* .debug[[k8s/daemonset.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/daemonset.md)] --- ## Why not a deployment? - Can't we just do `kubectl scale deployment rng --replicas=...`? 
--

- Nothing guarantees that the `rng` containers will be distributed evenly

--

- If we add nodes later, they will not automatically run a copy of `rng`

--

- If we remove (or reboot) a node, one `rng` container will restart elsewhere

  (and we will end up with two instances of `rng` on the same node)

--

- By contrast, a daemon set will start one pod per node and keep it that way

  (as nodes are added or removed)

.debug[[k8s/daemonset.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/daemonset.md)]

---

## Daemon sets in practice

- Daemon sets are great for cluster-wide, per-node processes:

  - `kube-proxy`

  - CNI network plugins

  - monitoring agents

  - hardware management tools (e.g. SCSI/FC HBA agents)

  - etc.

- They can also be restricted to run [only on some nodes](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#running-pods-on-only-some-nodes)

.debug[[k8s/daemonset.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/daemonset.md)]

---

## Creating a daemon set

<!-- ##VERSION## -->

- Unfortunately, as of Kubernetes 1.17, the CLI cannot create daemon sets

--

- More precisely: it doesn't have a subcommand to create a daemon set

--

- But any kind of resource can always be created by providing a YAML description:
  ```bash
  kubectl apply -f foo.yaml
  ```

--

- How do we create the YAML file for our daemon set?
-- - option 1: [read the docs](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#create-a-daemonset) -- - option 2: `vi` our way out of it .debug[[k8s/daemonset.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/daemonset.md)] --- ## Creating the YAML file for our daemon set - Let's start with the YAML file for the current `rng` resource .exercise[ - Dump the `rng` resource in YAML: ```bash kubectl get deploy/rng -o yaml >rng.yml ``` - Edit `rng.yml` ] .debug[[k8s/daemonset.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/daemonset.md)] --- ## "Casting" a resource to another - What if we just changed the `kind` field? (It can't be that easy, right?) .exercise[ - Change `kind: Deployment` to `kind: DaemonSet` <!-- ```bash vim rng.yml``` ```wait kind: Deployment``` ```keys /Deployment``` ```key ^J``` ```keys cwDaemonSet``` ```key ^[``` ] ```keys :wq``` ```key ^J``` --> - Save, quit - Try to create our new resource: ```bash kubectl apply -f rng.yml ``` <!-- ```wait error:``` --> ] -- We all knew this couldn't be that easy, right! .debug[[k8s/daemonset.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/daemonset.md)] --- ## Understanding the problem - The core of the error is: ``` error validating data: [ValidationError(DaemonSet.spec): unknown field "replicas" in io.k8s.api.extensions.v1beta1.DaemonSetSpec, ... ``` -- - *Obviously,* it doesn't make sense to specify a number of replicas for a daemon set -- - Workaround: fix the YAML - remove the `replicas` field - remove the `strategy` field (which defines the rollout mechanism for a deployment) - remove the `progressDeadlineSeconds` field (also used by the rollout mechanism) - remove the `status: {}` line at the end -- - Or, we could also ... 
.debug[[k8s/daemonset.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/daemonset.md)] --- ## Use the `--force`, Luke - We could also tell Kubernetes to ignore these errors and try anyway - The `--force` flag's actual name is `--validate=false` .exercise[ - Try to load our YAML file and ignore errors: ```bash kubectl apply -f rng.yml --validate=false ``` ] -- 🎩✨🐇 -- Wait ... Now, can it be *that* easy? .debug[[k8s/daemonset.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/daemonset.md)] --- ## Checking what we've done - Did we transform our `deployment` into a `daemonset`? .exercise[ - Look at the resources that we have now: ```bash kubectl get all ``` ] -- We have two resources called `rng`: - the *deployment* that was existing before - the *daemon set* that we just created We also have one too many pods. <br/> (The pod corresponding to the *deployment* still exists.) .debug[[k8s/daemonset.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/daemonset.md)] --- ## `deploy/rng` and `ds/rng` - You can have different resource types with the same name (i.e. a *deployment* and a *daemon set* both named `rng`) - We still have the old `rng` *deployment* ``` NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deployment.apps/rng 1 1 1 1 18m ``` - But now we have the new `rng` *daemon set* as well ``` NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/rng 2 2 2 2 2 <none> 9s ``` .debug[[k8s/daemonset.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/daemonset.md)] --- ## Too many pods - If we check with `kubectl get pods`, we see: - *one pod* for the deployment (named `rng-xxxxxxxxxx-yyyyy`) - *one pod per node* for the daemon set (named `rng-zzzzz`) ``` NAME READY STATUS RESTARTS AGE rng-54f57d4d49-7pt82 1/1 Running 0 11m rng-b85tm 1/1 Running 0 25s rng-hfbrr 1/1 Running 0 25s [...] ``` -- The daemon set created one pod per node. 
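To see the "one pod per node" placement for ourselves, `-o wide` adds a `NODE` column to the pod listing:

```bash
# Each daemon set pod should land on a different node
# (the deployment's pod shares a node with one of them)
kubectl get pods -l app=rng -o wide
```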
In a multi-node setup, masters usually have [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) preventing pods from running there. (To schedule a pod on this node anyway, the pod will require appropriate [tolerations](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/).) .debug[[k8s/daemonset.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/daemonset.md)] --- ## Is this working? - Look at the web UI -- - The graph should now go above 10 hashes per second! -- - It looks like the newly created pods are serving traffic correctly - How and why did this happen? (We didn't do anything special to add them to the `rng` service load balancer!) .debug[[k8s/daemonset.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/daemonset.md)] --- class: pic .interstitial[] --- name: toc-labels-and-selectors class: title Labels and selectors .nav[ [Previous section](#toc-daemon-sets) | [Back to table of contents](#toc-chapter-7) | [Next section](#toc-assignment--custom-load-balancing) ] .debug[(automatically generated title slide)] --- # Labels and selectors - The `rng` *service* is load balancing requests to a set of pods - That set of pods is defined by the *selector* of the `rng` service .exercise[ - Check the *selector* in the `rng` service definition: ```bash kubectl describe service rng ``` ] - The selector is `app=rng` - It means "all the pods having the label `app=rng`" (They can have additional labels as well, that's OK!) .debug[[k8s/daemonset.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/daemonset.md)] --- ## Selector evaluation - We can use selectors with many `kubectl` commands - For instance, with `kubectl get`, `kubectl logs`, `kubectl delete` ... and more .exercise[ - Get the list of pods matching selector `app=rng`: ```bash kubectl get pods -l app=rng kubectl get pods --selector app=rng ``` ] But ... 
why do these pods (in particular, the *new* ones) have this `app=rng` label? .debug[[k8s/daemonset.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/daemonset.md)] --- ## Where do labels come from? - When we create a deployment with `kubectl create deployment rng`, <br/>this deployment gets the label `app=rng` - The replica sets created by this deployment also get the label `app=rng` - The pods created by these replica sets also get the label `app=rng` - When we created the daemon set from the deployment, we re-used the same spec - Therefore, the pods created by the daemon set get the same labels - When we use `kubectl run stuff`, the label is `run=stuff` instead .debug[[k8s/daemonset.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/daemonset.md)] --- ## Updating load balancer configuration - We would like to remove a pod from the load balancer - What would happen if we removed that pod, with `kubectl delete pod ...`? -- It would be re-created immediately (by the replica set or the daemon set) -- - What would happen if we removed the `app=rng` label from that pod? -- It would *also* be re-created immediately -- Why?!? .debug[[k8s/daemonset.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/daemonset.md)] --- ## Selectors for replica sets and daemon sets - The "mission" of a replica set is: "Make sure that there is the right number of pods matching this spec!" - The "mission" of a daemon set is: "Make sure that there is a pod matching this spec on each node!" -- - *In fact,* replica sets and daemon sets do not check pod specifications - They merely have a *selector*, and they look for pods matching that selector - Yes, we can fool them by manually creating pods with the "right" labels - Bottom line: if we remove our `app=rng` label ... ... 
The pod "disappears" for its parent, which re-creates another pod to replace it .debug[[k8s/daemonset.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/daemonset.md)] --- class: extra-details ## Isolation of replica sets and daemon sets - Since both the `rng` daemon set and the `rng` replica set use `app=rng` ... ... Why don't they "find" each other's pods? -- - *Replica sets* have a more specific selector, visible with `kubectl describe` (It looks like `app=rng,pod-template-hash=abcd1234`) - *Daemon sets* also have a more specific selector, but it's invisible (It looks like `app=rng,controller-revision-hash=abcd1234`) - As a result, each controller only "sees" the pods it manages .debug[[k8s/daemonset.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/daemonset.md)] --- ## Removing a pod from the load balancer - Currently, the `rng` service is defined by the `app=rng` selector - The only way to remove a pod is to remove or change the `app` label - ... But that will cause another pod to be created instead! - What's the solution? -- - We need to change the selector of the `rng` service! - Let's add another label to that selector (e.g. `active=yes`) .debug[[k8s/daemonset.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/daemonset.md)] --- ## Complex selectors - If a selector specifies multiple labels, they are understood as a logical *AND* (In other words: the pods must match all the labels) - Kubernetes has support for advanced, set-based selectors (But these cannot be used with services, at least not yet!) .debug[[k8s/daemonset.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/daemonset.md)] --- ## The plan 1. Add the label `active=yes` to all our `rng` pods 2. Update the selector for the `rng` service to also include `active=yes` 3. Toggle traffic to a pod by manually adding/removing the `active` label 4. Profit! 
*Note: if we swap steps 1 and 2, it will cause a short service disruption, because there will be a period of time during which the service selector won't match any pod. During that time, requests to the service will time out. By doing things in the order above, we guarantee that there won't be any interruption.* .debug[[k8s/daemonset.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/daemonset.md)] --- ## Adding labels to pods - We want to add the label `active=yes` to all pods that have `app=rng` - We could edit each pod one by one with `kubectl edit` ... - ... Or we could use `kubectl label` to label them all - `kubectl label` can use selectors itself .exercise[ - Add `active=yes` to all pods that have `app=rng`: ```bash kubectl label pods -l app=rng active=yes ``` ] .debug[[k8s/daemonset.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/daemonset.md)] --- ## Updating the service selector - We need to edit the service specification - Reminder: in the service definition, we will see `app: rng` in two places - the label of the service itself (we don't need to touch that one) - the selector of the service (that's the one we want to change) .exercise[ - Update the service to add `active: yes` to its selector: ```bash kubectl edit service rng ``` <!-- ```wait Please edit the object below``` ```keys /app: rng``` ```key ^J``` ```keys noactive: yes``` ```key ^[``` ] ```keys :wq``` ```key ^J``` --> ] -- ... And then we get *the weirdest error ever.* Why? .debug[[k8s/daemonset.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/daemonset.md)] --- ## When the YAML parser is being too smart - YAML parsers try to help us: - `xyz` is the string `"xyz"` - `42` is the integer `42` - `yes` is the boolean value `true` - If we want the string `"42"` or the string `"yes"`, we have to quote them - So we have to use `active: "yes"` .footnote[For a good laugh: if we had used "ja", "oui", "si" ... 
as the value, it would have worked!] .debug[[k8s/daemonset.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/daemonset.md)] --- ## Updating the service selector, take 2 .exercise[ - Update the YAML manifest of the service - Add `active: "yes"` to its selector <!-- ```wait Please edit the object below``` ```keys /yes``` ```key ^J``` ```keys cw"yes"``` ```key ^[``` ] ```keys :wq``` ```key ^J``` --> ] This time it should work! If we did everything correctly, the web UI shouldn't show any change. .debug[[k8s/daemonset.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/daemonset.md)] --- ## Updating labels - We want to disable the pod that was created by the deployment - All we have to do, is remove the `active` label from that pod - To identify that pod, we can use its name - ... Or rely on the fact that it's the only one with a `pod-template-hash` label - Good to know: - `kubectl label ... foo=` doesn't remove a label (it sets it to an empty string) - to remove label `foo`, use `kubectl label ... foo-` - to change an existing label, we would need to add `--overwrite` .debug[[k8s/daemonset.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/daemonset.md)] --- ## Removing a pod from the load balancer .exercise[ - In one window, check the logs of that pod: ```bash POD=$(kubectl get pod -l app=rng,pod-template-hash -o name) kubectl logs --tail 1 --follow $POD ``` (We should see a steady stream of HTTP logs) <!-- ```wait HTTP/1.1``` ```tmux split-pane -v``` --> - In another window, remove the label from the pod: ```bash kubectl label pod -l app=rng,pod-template-hash active- ``` (The stream of HTTP logs should stop immediately) <!-- ```key ^D``` ```key ^C``` --> ] There might be a slight change in the web UI (since we removed a bit of capacity from the `rng` service). If we remove more pods, the effect should be more visible. 
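If interactive editing feels fragile, the same selector change can also be sketched non-interactively with `kubectl patch` (the JSON quoting keeps `"yes"` a string, sidestepping the boolean pitfall):

```bash
kubectl patch service rng --type merge \
  -p '{"spec":{"selector":{"app":"rng","active":"yes"}}}'
```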
.debug[[k8s/daemonset.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/daemonset.md)] --- class: extra-details ## Updating the daemon set - If we scale up our cluster by adding new nodes, the daemon set will create more pods - These pods won't have the `active=yes` label - If we want these pods to have that label, we need to edit the daemon set spec - We can do that with e.g. `kubectl edit daemonset rng` .debug[[k8s/daemonset.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/daemonset.md)] --- class: extra-details ## We've put resources in your resources - Reminder: a daemon set is a resource that creates more resources! - There is a difference between: - the label(s) of a resource (in the `metadata` block in the beginning) - the selector of a resource (in the `spec` block) - the label(s) of the resource(s) created by the first resource (in the `template` block) - We would need to update the selector and the template (metadata labels are not mandatory) - The template must match the selector (i.e. 
the resource will refuse to create resources that it will not select) .debug[[k8s/daemonset.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/daemonset.md)] --- ## Labels and debugging - When a pod is misbehaving, we can delete it: another one will be recreated - But we can also change its labels - It will be removed from the load balancer (it won't receive traffic anymore) - Another pod will be recreated immediately - But the problematic pod is still here, and we can inspect and debug it - We can even re-add it to the rotation if necessary (Very useful to troubleshoot intermittent and elusive bugs) .debug[[k8s/daemonset.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/daemonset.md)] --- ## Labels and advanced rollout control - Conversely, we can add pods matching a service's selector - These pods will then receive requests and serve traffic - Examples: - one-shot pod with all debug flags enabled, to collect logs - pods created automatically, but added to rotation in a second step <br/> (by setting their label accordingly) - This gives us building blocks for canary and blue/green deployments .debug[[k8s/daemonset.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/daemonset.md)] --- class: cleanup ## Cleanup Let's cleanup before we start the next lecture! 
.exercise[

- Remove our DockerCoins resources (for now):
  ```bash
  kubectl delete -f https://k8smastery.com/dockercoins.yaml
  kubectl delete daemonset/rng
  ```

]

.debug[[k8smastery/cleanup-dockercoins-daemonset.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/cleanup-dockercoins-daemonset.md)]

---

class: pic

.interstitial[]

---

name: toc-assignment--custom-load-balancing
class: title

Assignment 4: custom load balancing

.nav[
[Previous section](#toc-labels-and-selectors)
|
[Back to table of contents](#toc-chapter-7)
|
[Next section](#toc-authoring-yaml)
]

.debug[(automatically generated title slide)]

---

name: assignment4

# Assignment 4: custom load balancing

Our goal here will be to create a service that load balances connections to two different deployments. You might use this as a simplistic way to run two versions of your apps in parallel.

In the real world, you'll likely use a third-party load balancer to provide advanced blue/green or canary-style deployments, but this assignment will help further your understanding of how service selectors are used to find pods to use as service endpoints.

For simplicity, version 1 of our application will be using the NGINX image, and version 2 of our application will be using the Apache image. They both listen on port 80 by default.

When we connect to the service, we expect to see some requests being served by NGINX, and some requests being served by Apache.

.debug[[assignments/04customlb.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/assignments/04customlb.md)]

---

## Hints

We need to create two deployments: one for v1 (NGINX), another for v2 (Apache).

--

They will be exposed through a single service.

--

The *selector* of that service will need to match the pods created by *both* deployments.

--

For that, we will need to change the deployment specification to add an extra label, to be used solely by the service.
-- That label should be different from the pre-existing labels of our deployments, otherwise our deployments will step on each other's toes. -- We're not at the point of writing our own YAML from scratch, so you'll need to use the `kubectl edit` command to modify existing resources. .debug[[assignments/04customlb.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/assignments/04customlb.md)] --- ## Deploying version 1 1.1. Create a deployment running one pod using the official NGINX image. 1.2. Expose that deployment. 1.3. Check that you can successfully connect to the exposed service. .debug[[assignments/04customlb.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/assignments/04customlb.md)] --- ## Setting up the service 2.1. Use a custom label/value to be used by the service. How about `myapp: web`. 2.2. Change (edit) the service definition to use that label/value. 2.3. Check that you *cannot* connect to the exposed service anymore. 2.4. Change (edit) the deployment definition to add that label/value to the pods. 2.5. Check that you *can* connect to the exposed service again. .debug[[assignments/04customlb.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/assignments/04customlb.md)] --- ## Deploying version 2 3.1. Create a deployment running one pod using the official Apache image. 3.2. Change (edit) the deployment definition to add the label/value picked previously. 3.3. Connect to the exposed service again. (It should now yield responses from both Apache and NGINX.) .debug[[assignments/04customlb.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/assignments/04customlb.md)] --- class: answers ## Answers 1.1. `kubectl create deployment v1-nginx --image=nginx` 1.2. 
`kubectl expose deployment v1-nginx --port=80`

or

`kubectl create service clusterip v1-nginx --tcp=80`

1.3.A If you are using `shpod`, or if you are running directly on the cluster:

```bash
### Obtain the ClusterIP that was allocated to the service
kubectl get svc v1-nginx
### Then use that ClusterIP (in place of A.B.C.D below)
curl http://A.B.C.D
```

1.3.B You can also run a program like `curl` in a container:

```bash
kubectl run --restart=Never --image=alpine -ti --rm testcontainer
### Then, once you get a prompt, install curl
apk add curl
### Then, connect to the service
curl v1-nginx
```

.debug[[assignments/04customlb.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/assignments/04customlb.md)]

---

class: answers

## Answers

2.1 and 2.2. Edit the YAML manifest of the service with `kubectl edit service v1-nginx`.

Look for the `selector:` section, and change `app: v1-nginx` to `myapp: web`. Make sure to change the `selector:` section, not the `labels:` section! After making the change, save and quit.

2.3. The `curl` command (see previous slide) should now time out.

2.4. Edit the YAML manifest of the deployment with `kubectl edit deployment v1-nginx`.

Look for the `labels:` section **within the `template:` section**, as we want to change the labels of the pods created by the deployment, not of the deployment itself. Make sure to change the `labels:` section, not the `matchLabels:` one.

Add `myapp: web` just below `app: v1-nginx`, with the same indentation level. After making the change, save and quit.

We need both labels here, unlike the service selector. The `app` label keeps the pod "linked" to the deployment/replica set, and the new one will cause the service to match this pod.

2.5. The `curl` command should now work again.

(It might need a minute, since changing the label will trigger a rolling update and create a new pod.)

.debug[[assignments/04customlb.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/assignments/04customlb.md)]

---

class: answers

## Answers

3.1.
`kubectl create deployment v2-apache --image=httpd` 3.2. Same as previously: `kubectl edit deployment v2-apache`, then add the label `myapp: web` below `app: v2-apache`. Again, make sure to change the labels in the pod template, not those of the deployment itself. 3.3. The `curl` command should now yield responses from NGINX and Apache. (Note: you won't see a perfect round-robin, i.e. NGINX/Apache/NGINX/Apache etc., but on average, Apache and NGINX should serve approximately 50% of the requests each.) .debug[[assignments/04customlb.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/assignments/04customlb.md)] --- class: pic .interstitial[] --- name: toc-authoring-yaml class: title Authoring YAML .nav[ [Previous section](#toc-assignment--custom-load-balancing) | [Back to table of contents](#toc-chapter-8) | [Next section](#toc-using-server-dry-run-and-diff) ] .debug[(automatically generated title slide)] --- # Authoring YAML - To use Kubernetes is to "live in YAML"! - It's more important to learn the foundations than to memorize all YAML keys (hundreds+) -- - There are various ways to *generate* YAML with Kubernetes, e.g.: - `kubectl run` - `kubectl create deployment` (and a few other `kubectl create` variants) - `kubectl expose` -- - These commands use "generators" because the API only accepts YAML (actually JSON) -- - Pro: They are easy to use - Con: They have limits -- - When and why do we need to write our own YAML? - How do we write YAML from scratch? - And maybe, what is YAML? .debug[[k8smastery/authoringyaml.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/authoringyaml.md)] --- ## YAML Basics (just in case you need a refresher) - It's technically a superset of JSON, designed for humans - JSON was good for machines, but not for humans - Spaces set the structure. One space off and game over - Remember: spaces, not tabs. Ever!
- Two spaces is standard, but four spaces works too - You don't have to learn all YAML features, but key concepts you need: - Key/Value Pairs - Arrays/Lists - Dictionaries/Maps - Good online tutorials exist [here](https://www.tutorialspoint.com/yaml/index.htm), [here](https://developer.ibm.com/tutorials/yaml-basics-and-usage-in-kubernetes/), [here](https://docs.ansible.com/ansible/latest/reference_appendices/YAMLSyntax.html), and [YouTube here](https://www.youtube.com/watch?v=cdLNKUoMc6c) .debug[[k8smastery/authoringyaml.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/authoringyaml.md)] --- ## Basic parts of any Kubernetes resource manifest - Can be in YAML or JSON, but YAML is 💯 -- - Each file contains one or more manifests -- - Each manifest describes an API object (deployment, service, etc.) -- - Each manifest needs four parts (root key:values in the file) ```yaml apiVersion: kind: metadata: spec: ``` .debug[[k8smastery/authoringyaml.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/authoringyaml.md)] --- ## A simple Pod in YAML - This is a single manifest that creates one Pod ```yaml apiVersion: v1 kind: Pod metadata: name: nginx spec: containers: - name: nginx image: nginx:1.17.3 ``` .debug[[k8smastery/authoringyaml.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/authoringyaml.md)] --- ## Deployment and Service manifests in one YAML file .small[ ```yaml apiVersion: v1 kind: Service metadata: name: mynginx spec: type: NodePort ports: - port: 80 selector: app: mynginx --- apiVersion: apps/v1 kind: Deployment metadata: name: mynginx spec: replicas: 3 selector: matchLabels: app: mynginx template: metadata: labels: app: mynginx spec: containers: - name: nginx image: nginx:1.17.3 ``` ] .debug[[k8smastery/authoringyaml.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/authoringyaml.md)] --- ## The limits of generated YAML - Advanced
(and even not-so-advanced) features require us to write YAML: - pods with multiple containers - resource limits - healthchecks - many other resource options -- - Other resource types don't have their own commands! - DaemonSets - StatefulSets - and more! - How do we access these features? .debug[[k8smastery/authoringyaml.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/authoringyaml.md)] --- ## We don't have to start from scratch - Output YAML from existing resources - Create a resource (e.g. Deployment) - Dump its YAML with `kubectl get -o yaml ...` - Edit the YAML - Use `kubectl apply -f ...` with the YAML file to: - update the resource (if it's the same kind) - create a new resource (if it's a different kind) -- - Or... we have the docs, with good starter YAML - [StatefulSet](https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#creating-a-statefulset), [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#create-a-daemonset), [ConfigMap](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#create-a-configmap), [and a ton more on GitHub](https://github.com/kubernetes/website/tree/master/content/en/examples) -- - Or... 
we can use `-o yaml --dry-run` .debug[[k8smastery/authoringyaml.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/authoringyaml.md)] --- ## Generating YAML without creating resources - We can use the `-o yaml --dry-run` option combo with `run` and `create` .exercise[ - Generate the YAML for a Deployment without creating it: ```bash kubectl create deployment web --image nginx -o yaml --dry-run ``` - Generate the YAML for a Namespace without creating it: ```bash kubectl create namespace awesome-app -o yaml --dry-run ``` ] - We can clean up the YAML even more if we want (for instance, we can remove the `creationTimestamp` and empty dicts) .debug[[k8smastery/authoringyaml.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/authoringyaml.md)] --- ## Try `-o yaml --dry-run` with other create commands ```bash clusterrole # Create a ClusterRole. clusterrolebinding # Create a ClusterRoleBinding for a particular ClusterRole. configmap # Create a configmap from a local file, directory or literal. cronjob # Create a cronjob with the specified name. deployment # Create a deployment with the specified name. job # Create a job with the specified name. namespace # Create a namespace with the specified name. poddisruptionbudget # Create a pod disruption budget with the specified name. priorityclass # Create a priorityclass with the specified name. quota # Create a quota with the specified name. role # Create a role with single rule. rolebinding # Create a RoleBinding for a particular Role or ClusterRole. secret # Create a secret using specified subcommand. service # Create a service using specified subcommand. serviceaccount # Create a service account with the specified name. 
``` - Be sure to pass each `create` subcommand its required options .debug[[k8smastery/authoringyaml.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/authoringyaml.md)] --- ## Writing YAML from scratch, "YAML The Hard Way" - Paying homage to Kelsey Hightower's "[Kubernetes The Hard Way](https://github.com/kelseyhightower/kubernetes-the-hard-way)" -- - A reminder about manifests: - Each file contains one or more manifests - Each manifest describes an API object (deployment, service, etc.) - Each manifest needs four parts (root key:values in the file) ```yaml apiVersion: # find with "kubectl api-versions" kind: # find with "kubectl api-resources" metadata: spec: # find with "kubectl describe pod" ``` -- - Those three `kubectl` commands, plus the API docs, are all we'll need .debug[[k8smastery/authoringyaml.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/authoringyaml.md)] --- ## General workflow of YAML from scratch - Find the resource `kind` you want to create (`api-resources`) -- - Find the latest `apiVersion` your cluster supports for `kind` (`api-versions`) -- - Give it a `name` in metadata (minimum) -- - Dive into the `spec` of that `kind` - `kubectl explain <kind>.spec` - `kubectl explain <kind> --recursive` -- - Browse the [API Reference](https://kubernetes.io/docs/reference/) docs for your cluster version to supplement -- - Use `--dry-run` and `--server-dry-run` for testing - `kubectl create` and `delete` until you get it right <!--TODO: create example of YAML from scratch --> .debug[[k8smastery/authoringyaml.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/authoringyaml.md)] --- ## Advantage of YAML - Using YAML (instead of `kubectl run`/`create`/etc.) allows us to be *declarative* - The YAML describes the desired state of our cluster and applications - YAML can be stored, versioned, archived (e.g.
in git repositories) - To change resources, change the YAML files (instead of using `kubectl edit`/`scale`/`label`/etc.) - Changes can be reviewed before being applied (with code reviews, pull requests ...) - This workflow is sometimes called "GitOps" (there are tools like Weave Flux or GitKube to facilitate it) .debug[[k8smastery/authoringyaml.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/authoringyaml.md)] --- ## YAML in practice - Get started with `kubectl run`/`create`/`expose`/etc. - Dump the YAML with `kubectl get -o yaml` - Tweak that YAML and `kubectl apply` it back - Store that YAML for reference (for further deployments) - Feel free to clean up the YAML: - remove fields you don't know - check that it still works! - That YAML will be useful later when using e.g. Kustomize or Helm .debug[[k8smastery/authoringyaml.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/authoringyaml.md)] --- ## YAML linting and validation - Use generic linters to check proper YAML formatting - [yamllint.com](http://www.yamllint.com) - [codebeautify.org/yaml-validator](https://codebeautify.org/yaml-validator) - For humans without kubectl, use a web Kubernetes YAML validator: [kubeyaml.com](https://kubeyaml.com/) - In CI, you might use CLI tools - YAML linter: `pip install yamllint` [github.com/adrienverge/yamllint](https://github.com/adrienverge/yamllint) - Kubernetes validator: `kubeval` [github.com/instrumenta/kubeval](https://github.com/instrumenta/kubeval) - We'll learn about Kubernetes cluster-specific validation with kubectl later .debug[[k8smastery/authoringyaml.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/authoringyaml.md)] --- class: pic .interstitial[] --- name: toc-using-server-dry-run-and-diff class: title Using server-dry-run and diff .nav[ [Previous section](#toc-authoring-yaml) | [Back to table of contents](#toc-chapter-8) | [Next
section](#toc-rolling-updates) ] .debug[(automatically generated title slide)] --- # Using server-dry-run and diff - We already talked about using `--dry-run` for building YAML - Let's talk more about options for testing YAML - Including testing against the live cluster API! .debug[[k8smastery/dryrun.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/dryrun.md)] --- ## Using `--dry-run` with `kubectl apply` - The `--dry-run` option can also be used with `kubectl apply` - However, it can be misleading (it doesn't do a "real" dry run) - Let's see what happens in the following scenario: - generate the YAML for a Deployment - tweak the YAML to transform it into a DaemonSet - apply that YAML to see what would actually be created .debug[[k8smastery/dryrun.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/dryrun.md)] --- ## The limits of `kubectl apply --dry-run` .exercise[ - Generate the YAML for a deployment: ```bash kubectl create deployment web --image=nginx -o yaml > web.yaml ``` - Change the `kind` in the YAML to make it a `DaemonSet` - Ask `kubectl` what would be applied: ```bash kubectl apply -f web.yaml --dry-run --validate=false -o yaml ``` ] The resulting YAML doesn't represent a valid DaemonSet. .debug[[k8smastery/dryrun.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/dryrun.md)] --- ## Server-side dry run - Since Kubernetes 1.13, we can use [server-side dry run and diffs](https://kubernetes.io/blog/2019/01/14/apiserver-dry-run-and-kubectl-diff/) - Server-side dry run will do all the work, but *not* persist to etcd (all validation and mutation hooks will be executed) .exercise[ - Try the same YAML file as earlier, with server-side dry run: ```bash kubectl apply -f web.yaml --server-dry-run --validate=false -o yaml ``` ] The resulting YAML doesn't have the `replicas` field anymore. Instead, it has the fields expected in a DaemonSet. 
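For reference, the server-side dry run output should now include DaemonSet-specific defaults. The excerpt below is illustrative (it assumes the Deployment was named `web`, as in the earlier exercise); the exact fields and defaults depend on your cluster version:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: web
spec:
  revisionHistoryLimit: 10
  updateStrategy:          # DaemonSets get an updateStrategy instead of replicas
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
  # (selector, template, status, etc. omitted for brevity)
```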
.debug[[k8smastery/dryrun.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/dryrun.md)] --- ## Advantages of server-side dry run - The YAML is verified much more extensively - The only step that is skipped is "write to etcd" - YAML that passes server-side dry run *should* apply successfully (unless the cluster state changes by the time the YAML is actually applied) - Validating or mutating hooks that have side effects can also be an issue .debug[[k8smastery/dryrun.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/dryrun.md)] --- ## `kubectl diff` - Kubernetes 1.13 also introduced `kubectl diff` - `kubectl diff` does a server-side dry run, *and* shows differences .exercise[ - Try `kubectl diff` on a simple Pod YAML: ```bash curl -O https://k8smastery.com/just-a-pod.yaml kubectl apply -f just-a-pod.yaml # edit the image tag to :1.17 kubectl diff -f just-a-pod.yaml ``` ] Note: we don't need to specify `--validate=false` here. .debug[[k8smastery/dryrun.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/dryrun.md)] --- class: cleanup ## Cleanup Let's clean up before we start the next lecture! .exercise[ - remove our "hello" pod: ```bash kubectl delete -f just-a-pod.yaml ``` ] .debug[[k8smastery/cleanup-hello.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/cleanup-hello.md)] --- ## Re-deploying DockerCoins with YAML - OK back to DockerCoins!
Let's deploy all the resources: .exercise[ - Deploy or redeploy DockerCoins: ```bash kubectl apply -f https://k8smastery.com/dockercoins.yaml ``` ] .debug[[k8smastery/dockercoins-apply.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8smastery/dockercoins-apply.md)] --- class: pic .interstitial[] --- name: toc-rolling-updates class: title Rolling updates .nav[ [Previous section](#toc-using-server-dry-run-and-diff) | [Back to table of contents](#toc-chapter-8) | [Next section](#toc-healthchecks) ] .debug[(automatically generated title slide)] --- # Rolling updates - By default (without rolling updates), when a scaled resource is updated: - new pods are created - old pods are terminated - ... all at the same time - if something goes wrong, ¯\\\_(ツ)\_/¯ .debug[[k8s/rollout.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/rollout.md)] --- ## Rolling updates - With rolling updates, when a Deployment is updated, it happens progressively - The Deployment controls multiple ReplicaSets -- - Each ReplicaSet is a group of identical Pods (with the same image, arguments, parameters ...) -- - During the rolling update, we have at least two ReplicaSets: - the "new" set (corresponding to the "target" version) - at least one "old" set -- - We can have multiple "old" sets (if we start another update before the first one is done) .debug[[k8s/rollout.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/rollout.md)] --- ## Update strategy - Two parameters determine the pace of the rollout: `maxUnavailable` and `maxSurge` -- - They can be specified in absolute number of pods, or percentage of the `replicas` count -- - At any given time ... 
- there will always be at least `replicas`-`maxUnavailable` pods available - there will never be more than `replicas`+`maxSurge` pods in total - there will therefore be up to `maxUnavailable`+`maxSurge` pods being updated -- - We have the possibility of rolling back to the previous version <br/>(if the update fails or is unsatisfactory in any way) .debug[[k8s/rollout.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/rollout.md)] --- ## Checking current rollout parameters - Recall how we build custom reports with `kubectl` and `jq`: .exercise[ - Show the rollout plan for our deployments: ```bash kubectl get deploy -o json | jq ".items[] | {name:.metadata.name} + .spec.strategy.rollingUpdate" ``` ] .debug[[k8s/rollout.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/rollout.md)] --- ## Rolling updates in practice - As of Kubernetes 1.8, we can do rolling updates with: `deployments`, `daemonsets`, `statefulsets` - Editing one of these resources will automatically result in a rolling update - Rolling updates can be monitored with the `kubectl rollout` subcommand .debug[[k8s/rollout.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/rollout.md)] --- ## Rolling out the new `worker` service .exercise[ - Let's monitor what's going on by opening a few terminals, and run: ```bash kubectl get pods -w kubectl get replicasets -w kubectl get deployments -w ``` <!-- ```wait NAME``` ```key ^C``` --> - Update `worker` either with `kubectl edit`, or by running: ```bash kubectl set image deploy worker worker=dockercoins/worker:v0.2 ``` ] -- That rollout should be pretty quick. What shows in the web UI? 
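To make the earlier `maxUnavailable`/`maxSurge` parameters concrete, here is what they might look like in a Deployment spec. The 25% values shown are the Kubernetes defaults; the pod counts in the comments assume `replicas: 10` (an illustrative value, not one from our DockerCoins deployments):

```yaml
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%  # 25% of 10 = 2.5, rounded down to 2: at least 8 pods stay available
      maxSurge: 25%        # 25% of 10 = 2.5, rounded up to 3: at most 13 pods exist at once
```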
.debug[[k8s/rollout.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/rollout.md)] --- ## Give it some time - At first, it looks like nothing is happening (the graph remains at the same level) - According to `kubectl get deploy -w`, the `deployment` was updated really quickly - But `kubectl get pods -w` tells a different story - The old `pods` are still here, and they stay in `Terminating` state for a while - Eventually, they are terminated; and then the graph decreases significantly - This delay is due to the fact that our worker doesn't handle signals - Kubernetes sends a "polite" shutdown request to the worker, which ignores it - After a grace period, Kubernetes gets impatient and kills the container (The grace period is 30 seconds, but [can be changed](https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods) if needed) .debug[[k8s/rollout.md](https://github.com/BretFisher/kubernetes-mastery/tree/mastery/slides/k8s/rollout.md)] --- ## Rolling out something invalid - What happens if we make a mistake? .exercise[ - Update `worker` by specifying a non-existent image: ```bash kubectl set image deploy worker worker=dockercoins/worker:v0.3 ``` - Check what's going on: ```bash kubectl rollout status deploy worker ``` <!-- ```wait Waiting for deployment``` ```key ^C``` --> ] -- Our rollout is stuck. However, the app is not dead. (After a minute, it will stabilize to be 20-25% slower.) .debug[[k8s/rollout.md](https://github.com/BretFisher/kubernetes-mastery/tree/ma